HyperSwap
This chapter walks you through the process of setting up and configuring HyperSwap. It demonstrates how to set up and operate HyperSwap with three sites.
HyperSwap configuration is currently only available through command-line interface (CLI) commands. This chapter describes the needed commands.
Details about the CLI are described in the FlashSystem IBM Knowledge Center in the Reference chapter, Command-line interface section:
HyperSwap capability enables each volume to be presented by two I/O groups. The configuration tolerates combinations of node and site failures, with a flexible choice of interoperable host multipathing drivers. In this usage, both the V9000 control enclosure and the flash memory enclosure identify and carry a site attribute. The site attribute is set on the enclosure during initial cluster formation, when the human operator designates the site that the equipment is in. It is used later, when performing provisioning operations, to easily automate the creation of a HyperSwap volume that has multi-site protection.
This chapter includes the following topics:
In this chapter, the term vdisk is used for an individual object created with the mkvdisk command. The term HyperSwap volume is used for the whole LUN mapped to hosts that HyperSwap provides high availability for. The term single-copy volume is used for a LUN mapped to hosts that consists of one vdisk. This nonstandard terminology is used to distinguish between these objects.
 
Notes:
HyperSwap volumes are configured using CLI commands. The CLI commands will be enhanced for HyperSwap, with GUI support, in a future release. With FlashSystem V9000 software V7.5, the GUI is not yet adapted to HyperSwap requirements. GUI information in this chapter is provided only as a brief overview; the CLI is the only way to get consistent information on HyperSwap volumes.
At the time of publishing, the two I/O groups in a HyperSwap configuration must exist within a single FlashSystem V9000 cluster.
 
11.1 Overview
HyperSwap is the high availability (HA) solution for IBM FlashSystem V9000. HyperSwap provides business continuity if hardware failure, power failure, connectivity failure, or disasters occur. HyperSwap is also available on other IBM Spectrum Virtualize products, such as IBM SAN Volume Controller, or IBM Storwize V7000. The following list includes HA requirements:
Two independent main sites
Independent infrastructure for power, fire protection, and so on
Independent servers on each site
Two independent data copies, one in each site
Latency-optimized intersite traffic to keep both sites’ data copies in sync
Local high availability in each site
Application site transparency
Figure 11-1 shows a two-site HA environment.
Figure 11-1 High availability environment
The HyperSwap function provides highly available volumes accessible through two sites at a distance up to 300 km apart. A fully independent copy of the data is maintained at each site. When data is written by hosts at either site, both copies are synchronously updated before the write operation is completed. The HyperSwap function automatically optimizes itself to minimize data transmitted between sites and to minimize host read and write latency.
If the nodes or storage at either site go offline, leaving an online and accessible up-to-date copy, the HyperSwap function will automatically fail over access to the online copy. The HyperSwap function also automatically resynchronizes the two copies when possible.
11.1.1 HyperSwap Implementations
The decision for a HyperSwap failover can be managed by the host or by the storage system. IBM currently has two main solutions:
Host-based HyperSwap. The host handles storage failures.
Storage-based HyperSwap. The storage system handles storage failures.
The next two sections describe these two solutions.
Host-based HyperSwap
A HyperSwap function is available when using the IBM DS8000 family of products together with IBM PowerHA System Mirror for AIX or IBM Geographically Dispersed Parallel Sysplex™ (IBM GDPS®) for IBM z/OS®. The HyperSwap functions on those environments use specific software on that host system. All decisions in split scenarios are made by the host.
Figure 11-2 shows a host-based HyperSwap example of an IBM AIX PowerHA and IBM System Storage DS8000 HyperSwap setup.
Figure 11-2 Host-based HyperSwap example
Storage-based HyperSwap
IBM Spectrum Virtualize provides the HyperSwap feature in the virtualization layer. It uses technologies from:
Metro Mirror
Global Mirror with Change Volumes
Non-disruptive Volume Move
One volume is presented to the host from two different sites: two I/O groups present the same volume to the host. All decisions in split scenarios are made by the Spectrum Virtualize software running on IBM FlashSystem V9000.
The host must detect, accept, and handle HyperSwap changes, and manage the application failover. All FlashSystem V9000 failover decisions apply to all hosts or host clusters attached to the FlashSystem V9000 cluster.
Figure 11-3 shows a Spectrum Virtualize-based HyperSwap example.
Figure 11-3 Spectrum Virtualize-based HyperSwap example
The HyperSwap function in the FlashSystem V9000 software works with the standard multipathing drivers that are available on a wide variety of host types, with no additional host support required to access the highly available volume. Where multipathing drivers support Asymmetric Logical Unit Access (ALUA), the storage system tells the multipathing driver which nodes are closest to it and should be used to minimize I/O latency. You must assign a site value to the host, to the FlashSystem controller nodes, and to the storage enclosures. The ALUA-capable multipathing driver then configures the host pathing optimally.
Figure 11-3 shows that four vdisks are needed to present one HyperSwap volume to the host. Details about the configuration are described in section 11.5, “Configuration” on page 424.
11.2 HyperSwap design
This section provides high-level information about HyperSwap. Details are described throughout the whole chapter.
The FlashSystem HyperSwap function is an active-active mirror based on Metro Mirror technology. The configuration cannot be stopped: the relationship of a HyperSwap volume is never in “stopped mode”, except during disaster recovery scenarios. During normal operation, it always attempts to keep the copies in sync, and has one of the following statuses:
consistent_copying
inconsistent_copying
consistent_synchronized
The LUN ID of the master vdisk is presented to the host. The auxiliary vdisk is always seen as offline in the GUI and CLI, yet it is presented to the host with the same LUN ID as the master vdisk: HyperSwap simulates the master LUN ID for the auxiliary vdisk. The LUN ID of the auxiliary vdisk itself is never visible to the host.
In Figure 11-4, the host can access the HyperSwap volume using the master and the auxiliary (aux) vdisk. In the CLI and GUI, the aux vdisk is shown as offline, but the host can use it through the LUN ID of the master vdisk.
Figure 11-4 HyperSwap LUN ID Simulation
The host can access the HyperSwap volume using the I/O group on site 1, or the I/O group on site 2, or both. The multipath driver of the host is responsible for selecting the optimal paths.
The example in Figure 11-4 shows a host on site 1 accessing the HyperSwap volume using I/O group 1 and the master vdisk on site 1. Data is replicated to the auxiliary vdisk on site 2. When the connection from the host to the master vdisk is broken, for example the Fibre Channel (FC) connection between host and I/O group 1 is broken, then the host accesses the data using I/O group 2.
The master vdisk is still the primary vdisk, so reads are serviced by the master vdisk and writes are forwarded to the master vdisk and then replicated to the auxiliary, the secondary vdisk. If this scenario is running for more than 20 minutes, the auxiliary vdisk becomes the primary vdisk, servicing reads and writes. The master vdisk becomes the secondary vdisk. I/O arrives in I/O group 2 and is handled by the auxiliary (now primary) vdisk and replicated to the master (now secondary) vdisk.
A HyperSwap volume can be accessed concurrently for read and write I/O from any host in any site. All I/O is forwarded to one I/O group in the site with the primary vdisk. Using the site with the non-primary vdisk increases the long-distance traffic significantly.
The HyperSwap cluster monitors the workload and switches the copy direction if most of the workload arrives at the other site, optimizing performance.
Applications that drive an equal workload pattern to the same HyperSwap volume through both I/O groups, for example Oracle RAC, are currently not optimal for HyperSwap.
Tip: If you are running a VMware environment on top of HyperSwap, it is good practice to keep the VMs for each HyperSwap volume on hosts in one site. For example, with VMware Distributed Resource Scheduler (DRS), use VM-host affinity rules.
A host accessing a HyperSwap volume uses two I/O groups. Therefore, the host multipathing must handle twice as many paths compared to a normal volume. A maximum of eight paths per host is supported. When changing from the standard to the HyperSwap topology, the host zoning has to be reviewed. If only HyperSwap volumes are configured on FlashSystem V9000, meaning each host accesses every volume through two I/O groups, the maximum number of host objects and the maximum number of volume mappings per host object are cut in half.
HyperSwap volumes use FlashCopy technology to provide consistency protection when synchronizing the master and auxiliary vdisk after a loss of sync, for example when the link between these vdisks was broken. One change volume per HyperSwap volume must be prepared on each site. Therefore, a HyperSwap volume requires the configuration of four internal vdisks. Two FlashCopy mappings to each change volume are required (one in each direction), so four FlashCopy maps are required per HyperSwap volume.
A host accessing a HyperSwap volume using iSCSI cannot take advantage of the high availability function. Due to the requirement for multiple access I/O groups, iSCSI-attached host types are currently not supported with HyperSwap volumes.
A maximum of 1024 HyperSwap volumes can be configured in a cluster. The maximum HyperSwap capacity is 1024 TiB per I/O group, and 2048 TiB per cluster.
Each I/O group needs up to 1024 MiB of FlashCopy bitmap space and 512 MiB of remote copy bitmap space for HyperSwap. Master and auxiliary vdisks need to be accounted for in each of their I/O groups.
The size of a HyperSwap volume cannot be changed by using the expandvdisksize and shrinkvdisksize commands.
HyperSwap is currently configured using the CLI.
The lsvdisk command shows all vdisks of a HyperSwap volume. The auxiliary vdisk is offline, as shown in Example 11-1.
Example 11-1 lsvdisk command to display all vdisks of a HyperSwap volume
lsvdisk
ID name IO_group_id IO_group_name status mdisk_grp_name capacity
0 HS_Vol_1_Mas 0 io_grp0 online mdiskgrp_west 34.00GB
1 HS_Vol_1_Mas_CV 0 io_grp0 online mdiskgrp_west 34.00GB
2 HS_Vol_1_Aux 1 io_grp1 offline mdiskgrp_east 34.00GB
3 HS_Vol_1_Aux_CV 1 io_grp1 online mdiskgrp_east 34.00GB
Figure 11-5 shows the same HyperSwap volume using the GUI.
Figure 11-5 HyperSwap volume
For the master vdisk of a HyperSwap volume, the status attribute of the lsvdisk command shows whether hosts are able to access data, that is, whether the HyperSwap volume has access to up-to-date data, not whether the master vdisk itself is actually online. The status value for the auxiliary vdisk is always offline (Example 11-2). Use the status attribute of the lsrcrelationship command to determine whether a vdisk itself is online or offline. Possible values are online, primary_offline, secondary_offline, io_channel_offline, and change_volumes_needed, as described in 11.13.3, “The lsrcrelationship or lsrcconsistgrp commands” on page 460.
Example 11-2 HyperSwap volume lsvdisk status information
lsvdisk HS_Vol_1_Mas
...
status online
...
lsvdisk HS_Vol_1_Aux
...
status offline
...
The lsrcrelationship command shows the four vdisks of a HyperSwap volume, as shown in Example 11-3 (some lines are omitted for better readability).
Example 11-3 The lsrcrelationship command
lsrcrelationship HS_Vol_1_rel
name HS_Vol_1_rel
master_vdisk_name HS_Vol_1_Mas
aux_vdisk_name HS_Vol_1_Aux
primary master
state consistent_synchronized
copy_type activeactive
master_change_vdisk_name HS_Vol_1_Mas_CV
aux_change_vdisk_name HS_Vol_1_Aux_CV
Figure 11-6 shows the active-active relationship using the GUI.
Figure 11-6 HyperSwap Volume active-active Relationship
The lsfcmap command shows the four FlashCopy mappings of a HyperSwap volume, as shown in Example 11-4.
Example 11-4 The lsfcmap command
lsfcmap
ID name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name
0 fcmap0 0 HS_Vol_1_Mas 1 HS_Vol_1_Mas_CV
1 fcmap1 1 HS_Vol_1_Mas_CV 0 HS_Vol_1_Mas
2 fcmap2 2 HS_Vol_1_Aux 3 HS_Vol_1_Aux_CV
3 fcmap3 3 HS_Vol_1_Aux_CV 2 HS_Vol_1_Aux
Figure 11-7 shows the FlashCopy mappings using the GUI.
Figure 11-7 HyperSwap Volume and its FlashCopy Mappings
11.3 Comparison with Enhanced Stretched Cluster
Many of the aspects described so far are the same as those of the existing Spectrum Virtualize Enhanced Stretched Cluster function, introduced in version 7.2 of the software. Table 11-1 provides a list of key differences between the Enhanced Stretched Cluster and HyperSwap functions.
 
Note: Enhanced Stretched Cluster is not supported with FlashSystem V9000.
Table 11-1 Enhanced Stretched Cluster and HyperSwap comparison
 
Product availability
- Enhanced Stretched Cluster: SAN Volume Controller only
- HyperSwap: FlashSystem V9000 with 2 or more I/O groups

Configuration
- Enhanced Stretched Cluster: CLI or GUI
- HyperSwap: CLI based; GUI and CLI enhancements to come in future releases

Sites
- Enhanced Stretched Cluster: Two for data, third for quorum disk
- HyperSwap: Two for data, third for quorum disk

Distance between sites
- Enhanced Stretched Cluster: Up to 300 km
- HyperSwap: Up to 300 km

Independent copies of data maintained
- Enhanced Stretched Cluster: Two
- HyperSwap: Two (four if additionally using Volume Mirroring to two pools in each site)

Host requirements
- Enhanced Stretched Cluster: Standard host multipathing driver
- HyperSwap: Standard host multipathing driver

Cache retained if only one site online?
- Enhanced Stretched Cluster: No
- HyperSwap: Yes

Synchronization and resynchronization of copies
- Enhanced Stretched Cluster: Automatic
- HyperSwap: Automatic

Stale consistent data retained during resynchronization for disaster recovery?
- Enhanced Stretched Cluster: No
- HyperSwap: Yes

Scope of failure and resynchronization
- Enhanced Stretched Cluster: Single volume
- HyperSwap: One or more volumes, user configurable

Ability to use FlashCopy together with the High Availability solution
- Enhanced Stretched Cluster: Yes (though no awareness of site locality of data)
- HyperSwap: Limited: can use FlashCopy maps with a HyperSwap volume as source, which avoids sending data across the link between sites

Ability to use Metro Mirror, Global Mirror, or Global Mirror with Change Volumes together with the High Availability solution
- Enhanced Stretched Cluster: One remote copy
- HyperSwap: No

Maximum highly available volume count
- Enhanced Stretched Cluster: 4096
- HyperSwap: 1024

Licensing
- Enhanced Stretched Cluster: Included in the base product
- HyperSwap: Requires the Remote Mirroring license
The Enhanced Stretched Cluster function and the HyperSwap function both spread the nodes of the system across two sites, with additional storage at a third site acting as a tie-breaking quorum device.
The topologies differ in how the nodes are distributed across the sites:
Enhanced Stretched Cluster
For each I/O group in the system, the Enhanced Stretched Cluster topology has one node on one site, and one node on the other site. The topology works with any number (1 - 4) of I/O groups, but because the I/O group is split into two locations, this is only available with SAN Volume Controller, not FlashSystem V9000.
HyperSwap
The HyperSwap topology locates both nodes of an I/O group in the same site, making this possible to use with either FlashSystem V9000 or SAN Volume Controller products. Therefore, to get a volume resiliently stored on both sites, at least two I/O groups are required.
The Enhanced Stretched Cluster topology uses fewer system resources, enabling a greater number of highly available volumes to be configured. However, during a disaster that makes one site unavailable, the SAN Volume Controller system cache on the nodes of the surviving site is disabled.
 
Requirement: Using HyperSwap requires the Remote Mirroring license.
11.3.1 Disaster Recovery
The HyperSwap function automatically controls synchronization and resynchronization of vdisk copies. Just before resynchronizing data to a vdisk copy, that copy usually contains crash-consistent but stale (out-of-date) data. The storage system automatically retains that consistent data during the resynchronization process using Change Volume technology.
What this means is that if a problem occurs at the site with the online copy before resynchronization completes, taking that copy offline, you have the opportunity to manually enable read and write access to the consistent, older copy of data, allowing the use of this data for disaster recovery. This option would typically be taken if you know that the offline copy will remain offline for an extended period, and the consistent but older data is useful enough to keep your business running.
As is normal with disaster recovery solutions that support business continuity with older data, after the problem is resolved and access to the offline copy is restored, you can choose either to revert to that now-online copy, which held the latest data before the disaster, or to continue working on the stale data used during the disaster. With either choice, the other copy is resynchronized to match the chosen copy.
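The manual step referred to here uses the -access parameter of the stoprcrelationship command (or stoprcconsistgrp for a consistency group), covered in detail in 11.8, “Disaster Recovery with HyperSwap” on page 449. As a sketch, using the relationship name from the examples later in this chapter:
stoprcrelationship -access hsVol0Rel
This enables read and write access to the stale but consistent copy of the HyperSwap volume until the relationship is restarted.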
11.3.2 Consistency Groups
One major advantage of the HyperSwap function, compared to Enhanced Stretched Cluster, is that it is possible to group multiple HyperSwap volumes together for high availability. This is important where an application spans many volumes, and requires the data to be consistent across all those volumes.
The following scenario is an example where the data would not be consistent across all those volumes, which would affect availability:
1. Site 2 goes offline.
2. Application continues to write to its volumes, changes only applied to site 1.
3. Site 2 comes online again.
4. Volumes are resynchronized back to site 2.
5. Site 1 goes offline during the resynchronization, leaving some volumes already resynchronized and some volumes unresynchronized.
Site 1 is the only site that has usable data. Site 2 might have usable data on some vdisks but not others. If this process is continued, it is possible that neither site will have a complete copy of data, making a failure on either site affect production I/O.
Without consistency groups, site 2’s data would have been made inconsistent on several of the vdisks by the attempted resynchronization, which did not complete. The unresynchronized vdisks contain consistent but old data, as described in the previous section 11.3.1, “Disaster Recovery” on page 421. Site 2 now has some vdisks with old data and some vdisks with resynchronized data. If the site 1 data cannot be recovered, some other solution is needed to recover business operations.
Using consistency groups to control the synchronization and failover across many HyperSwap volumes in an application ensures that all vdisk copies on a site have data from the same point in time, enabling disaster recovery using that site’s vdisk copies. It also ensures that at least one site has an up-to-date copy of every HyperSwap volume in the consistency group, and that if the other site does not have an up-to-date copy of every vdisk, it has a consistent copy of every vdisk for some out-of-date point in time.
11.4 Planning
There are two steps required to configure FlashSystem V9000 for HyperSwap. The first step is to configure the components of the system correctly for the HyperSwap topology. The second step is to create HyperSwap volumes that use that topology.
The first step includes these high-level tasks:
1. Define the sites.
2. Configure the nodes.
3. Configure the FlashSystem V9000 storage enclosures.
4. Configure the external storage controllers and quorum disk.
5. Configure the hosts.
6. Configure the HyperSwap topology.
7. Configure synchronization rates.
You should plan to complete all steps to configure sites for nodes, controllers, hosts, and the system topology in one session. Do not leave a system in production if only some of these steps have been performed.
The FlashSystem V9000 storage enclosures are the flash memory of FlashSystem V9000. The external storage controllers are the additional storage systems, such as an IBM Storwize V7000, which are attached to FlashSystem V9000. One more storage system is needed for the third quorum disk. Figure 11-11 on page 432 shows the additional storage system in a HyperSwap setup.
The following page has details about the configuration limits and restrictions specific to V7.5:
11.4.1 Cabling and quorum disk considerations
The requirements for the HyperSwap topology are similar to those of the Enhanced Stretched Cluster topology. The difference is that in the HyperSwap topology, the two nodes of an I/O group must be on the same site.
The Redbooks publication IBM SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211 contains many details about the requirements of the Enhanced Stretched Cluster topology, including the quorum storage (particularly sections 3.3.4, 3.3.6, and 4.9 - 4.11), and the connectivity between sites (section 3.4). Replacing Node1 and Node2 in the document with a FlashSystem V9000 I/O group would give a valid HyperSwap topology.
If there are ISLs or active SAN components, such as active conventional wavelength-division multiplexing/dense wavelength-division multiplexing (CWDM/DWDM) used for inter-site links, a private SAN for node-to-node communication is required. Figure 11-8 shows a setup with a private SAN.
Figure 11-8 Private SAN Example
Only a single hop between sites is supported in a HyperSwap cluster.
The needed bandwidth between the two I/O groups for a HyperSwap volume depends on the peak write throughput from all hosts. Additional bandwidth is required in the following cases:
HyperSwap volumes are accessed concurrently by hosts in different sites.
During the initial synchronization and resynchronization of HyperSwap volumes.
A host loses access to the local IO group and accesses an IO group in the remote site.
FlashSystem V9000 uses the spare disks of the FlashSystem enclosures as quorum disks. In a HyperSwap setup, a quorum disk at the third site is needed. The quorum disk on the third site is used as a tiebreaker and must be the active quorum disk. Only the active quorum disk acts as a tiebreaker. 11.5.4, “Configuring the external storage controllers” on page 426 shows how to configure the external storage for the third quorum disk.
11.5 Configuration
Several system objects must be configured before selecting the HyperSwap system topology, including sites, nodes, enclosures, controllers, and hosts.
 
Note: In this section, the term vdisk is used for an individual object created with the mkvdisk command. The term HyperSwap volume is used for the whole LUN mapped to hosts that HyperSwap provides high availability for. This is nonstandard terminology, being used in this document to distinguish between these. A future release of FlashSystem V9000 software will remove the need to create or view the individual objects, and so the need to distinguish between them in this way will not be needed anymore.
11.5.1 Defining the Sites
The site corresponds to a physical location that houses the physical objects of the system. In a client installation, it can correspond to a separate office, an isolated fire zone, for example a separate office with a unique firewall address, a different data center building, or simply different rooms or racked areas of a single data center that has been planned to have internal redundancy.
Parameters that specify a site are used in many of the commands described later. The user does not have to create sites. Table 11-2 shows the four sites statically defined in the system.
Table 11-2 Site information
Site ID: none
- Default site name: Has no name, cannot be renamed
- Objects that can be in site: Hosts, nodes, controllers
- Purpose: The default site for objects when they are not assigned to a specific site. The HyperSwap topology requires objects to be in a site.

Site ID: 1
- Default site name: site1
- Objects that can be in site: Hosts, nodes, controllers
- Purpose: The first of two sites to perform high availability between. Has no implied preference compared to site 2.

Site ID: 2
- Default site name: site2
- Objects that can be in site: Hosts, nodes, controllers
- Purpose: The second of two sites to perform high availability between. Has no implied preference compared to site 1.

Site ID: 3
- Default site name: site3
- Objects that can be in site: Controllers
- Purpose: A third site providing quorum abilities to act as a tie-break between sites 1 and 2 when connectivity is lost.
Sites 1, 2, and 3 can be renamed from the default name using the chsite command and listed using the lssite command shown in Example 11-5.
Example 11-5 The chsite and lssite commands to rename and list sites
chsite -name datacenter_west 1
chsite -name datacenter_east 2
chsite -name quorum_site 3
lssite
ID site_name
1 datacenter_west
2 datacenter_east
3 quorum_site
In Example 11-5 on page 424, the chsite command is used to rename site 1 as datacenter_west, site 2 as datacenter_east, and site 3 as quorum_site; the lssite command lists the result. This can help you describe the location of objects in a more meaningful way. This document uses the default names site1, site2, and site3 for sites.
11.5.2 Nodes
With a HyperSwap system topology, all nodes in an I/O group must belong to the same site. You should assign the nodes of at least one I/O group to each of sites 1 and 2.
To configure HyperSwap volumes on FlashSystem V9000, you need at least four controller nodes (two building blocks).
Before the HyperSwap system topology can be selected, the site of every node must be set using the chnode command shown in Example 11-6.
Example 11-6 The chnode command to modify existing node to a different site
chnode -site 1 node1
or
chnode -site site1 node1
This command moves the existing node node1 from its current site to site 1. The command must be repeated for all nodes.
 
Note: Every node of an I/O group has to be in the same site. A node can never be assigned to site 3.
When the cluster topology is set to HyperSwap and an I/O group contains copies of HyperSwap volumes, the I/O group must stay in the same site even if all nodes have been deleted from that I/O group. New nodes must be added with the same site attribute as the deleted nodes. The only way to move an I/O group from one site to the other is to remove all HyperSwap volumes using that I/O group, delete the nodes from that I/O group, then re-add them to the I/O group but with the new site attribute.
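The following sketch shows the shape of that sequence, assuming the nodes of io_grp1 are node3 and node4 and are moving to site 2 (the panel names are hypothetical; lsnodecandidate shows the real values). All HyperSwap volumes using io_grp1 must be removed first:
rmnode node3
rmnode node4
lsnodecandidate
addnode -panelname 78AV610 -iogrp io_grp1 -site site2
addnode -panelname 78AV611 -iogrp io_grp1 -site site2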
Typically, a HyperSwap configuration might contain two or four building blocks, either with one I/O group on site 1 and one I/O group on site 2, or with two I/O groups on each of sites 1 and 2. It’s possible to configure the system with more I/O groups on one site than the other, but the site with fewer nodes might become a bottleneck.
Each I/O group should have sufficient bitmap capacity defined using the chiogrp command for the HyperSwap volumes in addition to the bitmap capacity requirements of other FlashCopy and Global Mirror or Metro Mirror objects needed.
For each HyperSwap volume, for every 8 GB logical capacity (not physical space), rounded up to the next greatest 8 GB, you need 4 KB remote bitmap memory and 8 KB flash bitmap memory defined in both I/O groups of the HyperSwap volume.
For a two-I/O-group system with the maximum of 1024 HyperSwap volumes, each being 100 GB, you need to run the following commands to configure sufficient bitmap space, as shown in Example 11-7.
Example 11-7 The chiogrp command to configure bitmap space
chiogrp -feature flash -size 104 0
chiogrp -feature flash -size 104 1
chiogrp -feature remote -size 52 0
chiogrp -feature remote -size 52 1
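These values follow from the rule above: 100 GB rounds up to 13 units of 8 GB, so each HyperSwap volume needs 13 x 8 KB = 104 KB of flash bitmap memory and 13 x 4 KB = 52 KB of remote bitmap memory in each of its two I/O groups, which for 1024 volumes is 104 MiB and 52 MiB per I/O group. You can check the configured values in the detailed lsiogrp view (a check sketch; confirm the field names at your software level):
lsiogrp 0
...
flash_copy_total_memory 104.0MB
remote_copy_total_memory 52.0MB
...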
11.5.3 Configuring the FlashSystem V9000 storage enclosures
You must assign the site attribute to all FlashSystem V9000 storage enclosures using the following command:
chenclosure -site <site id> <enclosure id>
For example, a two building-block setup has two storage enclosures. Use the commands shown in Example 11-8.
Example 11-8 The chenclosure command to assign the site attribute
chenclosure -site 1 1
chenclosure -site 2 2
The lsenclosure command shows the results of the site change, as shown in Example 11-9.
Example 11-9 The lsenclosure command to view site attribute details
lsenclosure -delim :
1:online:expansion:yes:0::9846-AE2:1371055:2:2:2:2:12:0:0:1:site1
2:online:expansion:yes:0::9846-AE2:1330015:2:2:2:2:12:0:0:2:site2
FlashSystem V9000 storage enclosures should be assigned to site 1 or site 2. There is no checking that you have set the sites on all enclosures when you change the system to use the HyperSwap topology, but the system does not let you create HyperSwap volumes unless all the mdisks have a site attribute. You cannot assign a site attribute to FlashSystem enclosures with managed mdisks after setting the topology to HyperSwap.
 
Important: Always assign a site attribute to FlashSystem enclosures before changing to HyperSwap topology.
11.5.4 Configuring the external storage controllers
For virtualized external storage, you must assign the site attribute to all controllers using the following command:
chcontroller -site <site id> <controller id>
Controllers must be assigned to site 1 or site 2 (or site 3 when used for quorum) if they have any managed mdisks and the system is set to use the HyperSwap topology. Mdisks can only be assigned to storage pools if they are allocated from a storage controller with a well-defined site that matches that of the storage pool.
There’s no checking that you have set the sites on all controllers when you change the system to use the HyperSwap topology, but the system does not let you create HyperSwap volumes unless all of the mdisks in each storage pool have the site set up correctly.
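Because of this, it is worth confirming the site assignments yourself before changing the topology. A quick check (the site columns appear in these views on software levels with site support):
lscontroller
lsmdisk -delim :
Every controller should show a site_id and site_name, and every managed mdisk should inherit the site of its controller.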
Configuring the quorum disk on a third site
Quorum storage must be available in site 3 so that connectivity from all nodes to that storage can be verified correctly. FlashSystem V9000 automatically uses the spare disk of the FlashSystem storage enclosure as quorum disk. If you have more than two FlashSystem V9000 storage enclosures, you must check if every site has a quorum disk.
Always check that the site 3 quorum disk is active.
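The lsquorum command shows this at any time; the entry on site 3 should show active yes and object_type mdisk, similar to this abbreviated output:
lsquorum
quorum_index status id name   active object_type site_id site_name
0            online 4  mdisk4 yes    mdisk       3       site3
...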
To qualify as quorum disk candidate, an mdisk must be:
In managed mode
Visible to all nodes
Presented by a storage system that is supported for Extended Quorum
The system automatically attempts to use storage on sites 1, 2, and 3 for the three quorum disks, selecting the one on site 3 to be the active quorum disk. If you override the choice of quorum disks, you must still select one on each site.
 
Note: Always check the site location of the quorum disks and check if site 3 quorum disk is active and configured correctly.
The HyperSwap function requires quorum storage in site 3 to function correctly. Therefore, FlashSystem V9000 needs external storage. Even if there is otherwise no need for external storage, this is necessary because only external storage can be configured to site 3.
HyperSwap requires a quorum disk on each site, with the tiebreaker quorum on the third site. To override the automatic quorum selection, which only selects quorum disks from the FlashSystem storage enclosures, use the -override parameter with the chquorum command. Also use the -active parameter to be sure that the quorum disk on the tiebreaker site is active. Use the chquorum command to set the tiebreaker quorum disk:
chquorum -active -mdisk <mdisk of site 3> -override yes <quorum id>
The controller providing the quorum storage must be listed with “extended quorum” support in the SAN Volume Controller supported controller list for the installed software release.
11.5.5 Configuring the hosts
Host objects have a site parameter. This can be configured on existing host objects as follows:
chhost -site 1 Host_AIX
This command defines the ports of host Host_AIX as being on site 1.
 
Important: The system dynamically configures host multipathing so that hosts in site 1 preferentially send I/O to nodes in site 1, and similarly for site 2. So for optimum performance, all of the WWPNs associated with this host object should be on that site. For clustered host systems attached to both sites, you should define a host object per site to optimize the I/O for each physical server in the clustered host system.
New hosts can be added with a defined site by using the mkhost command:
mkhost -fcwwpn <WWPN:WWPN> -site <site id>
When HyperSwap volumes are mapped to a host using the mkvdiskhostmap command, the host must be assigned to either site 1 or site 2.
By default, host objects are associated with all I/O groups. If you use the -iogrp parameter for the mkhost command to override this, you need to make sure that hosts accessing HyperSwap volumes are associated with at least the I/O groups that the master and auxiliary vdisk of the HyperSwap volumes are cached in. Missing an association between the host and such an I/O group prevents the host from being able to access HyperSwap volumes through both sites.
 
Note: A host must be able to access all I/O groups of a HyperSwap volume.
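For example, to create a host on site 1 that is associated with both I/O groups of its HyperSwap volumes (the host name and WWPNs here are hypothetical):
mkhost -name Host_AIX_2 -fcwwpn 10000000C9123456:10000000C9123457 -iogrp 0:1 -site 1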
11.5.6 Configuring the HyperSwap topology
All nodes, storage enclosures, and storage controllers can be set to site 1 or site 2 (or site 3 for controllers) while the system is set to the standard system topology. Use the lssystem command to check the current topology, as shown in Example 11-10.
Example 11-10 The lssystem command to check current topology
lssystem
...
topology standard
...
Figure 11-9 shows the FlashSystem V9000 GUI in standard topology.
Figure 11-9 FlashSystem V9000 GUI with standard topology
Before the system can be set to the HyperSwap system topology, every node must have a site configured correctly, and it is advisable to set the site of every enclosure, controller, and host too for existing systems. For a new system, you can choose the HyperSwap topology early in your initial configuration, which helps ensure that objects have their sites set correctly.
When all of the sites have been set, the system can be set to use the HyperSwap topology using the chsystem command:
chsystem -topology hyperswap
Figure 11-10 shows the FlashSystem V9000 GUI in HyperSwap topology.
Figure 11-10 FlashSystem V9000 GUI with HyperSwap topology
The site attributes of the nodes and storage enclosures were set before HyperSwap was enabled.
 
Note: You won’t be able to change the topology back to the standard topology if there are any HyperSwap volumes defined.
11.5.7 Configuring synchronization rates
There are two primary factors that affect the synchronization rate, which are similar to those for the existing Metro Mirror and Global Mirror replication technologies:
Partnership bandwidth
The total bandwidth between sites 1 and 2. This comprises foreground traffic, such as transferring new host writes to the second site, and background traffic, such as synchronization of new HyperSwap volumes or resynchronization.
You can limit the background traffic of HyperSwap volumes. Limiting the amount of background traffic guarantees a minimum amount of bandwidth for foreground traffic.
Relationship bandwidth
This is the background traffic limitation per vdisk.
Partnership bandwidth
The primary attribute to configure is the partnership bandwidth. Before the introduction of HyperSwap volumes, this could not be configured for intra-cluster relationships (that is, with both copies in the same system), such as the active-active relationships used for HyperSwap replication. With HyperSwap-capable systems, the local partnership bandwidth can be configured, and represents the amount of physical bandwidth between sites used for synchronization.
For compatibility with earlier versions, this defaults to 25 MBps (200 megabits per second) dedicated to synchronization, which can be appropriate for a small environment. For larger systems, or systems with more bandwidth available between sites, you might want to increase this by using the following command:
chpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 20 <localCluster>
In this command, you can specify the bandwidth between sites, and how much can be used for synchronization. <localCluster> should be replaced by the name of the local system.
The -linkbandwidthmbits parameter specifies the aggregate bandwidth of the link between two sites in megabits per second (Mbps). It is a numeric value 15 - 100000. The default is 200, specified in megabits per second (Mbps). This parameter can be specified without stopping the partnership.
The -backgroundcopyrate parameter specifies the maximum percentage of aggregate link bandwidth that can be used for background copy operations. It is a numeric value 0 - 100, and the default value is 100, which means that a maximum of 100% of the aggregate link bandwidth can be used for background copy operations.
As with other types of partnership configuration, the system does not yet use the total amount of bandwidth available in any performance tuning, and only uses the resulting background copy bandwidth to determine HyperSwap synchronization rate. So the previous command could also be expressed as the following command:
chpartnership -linkbandwidthmbits 800 -backgroundcopyrate 100 <localCluster>
This command has the same effect on background traffic, but the earlier command reserves 3200 Mbps for foreground vdisk traffic.
The system will attempt to synchronize at the specified rate for background traffic where possible if there are any active-active relationships that require synchronization (including resynchronization after a copy has been offline for some time). This is true no matter how much new host write data is being submitted requiring replication between sites, so be careful not to configure the synchronization rate so high that this synchronization bandwidth consumption affects the amount needed for host writes.
Relationship bandwidth
The other control on how fast a relationship can synchronize is the system relationshipbandwidthlimit setting. This configures the maximum rate at which synchronization I/O is generated for a HyperSwap volume. It can be seen with the lssystem command, as shown in Example 11-11.
Example 11-11 Using lssystem, maximum rate synchronization I/O is generated for a volume
lssystem
...
relationship_bandwidth_limit 25
...
By default, this is 25 MBps; note that this is megabytes per second, not the megabits per second of the partnership configuration. This means that no matter how few relationships are synchronizing, the most synchronization I/O that is generated per HyperSwap volume is 25 MBps (that is, 25 MBps of reads on the up-to-date copy and 25 MBps of writes on the other copy).
If your system has storage that cannot handle the additional 25 MBps of I/O, you can configure this to a lower value using the chsystem command:
chsystem -relationshipbandwidthlimit 10
If you want to accelerate synchronization when there aren’t many HyperSwap volumes synchronizing, you might want to increase it to a higher value:
chsystem -relationshipbandwidthlimit 200
The -relationshipbandwidthlimit parameter specifies the new background copy bandwidth in megabytes per second (MBps), 1 - 1000. The default is 25 MBps. This parameter operates system-wide and defines the maximum background copy bandwidth that any relationship can adopt. The existing background copy bandwidth settings that are defined on a partnership continue to operate, with the lower of the partnership and volume rates attempted.
 
Note: Do not set this value higher than the default without establishing that the higher bandwidth can be sustained.
11.5.8 Creating HyperSwap volumes
HyperSwap capability enables each HyperSwap volume to be presented by two I/O groups. One vdisk in an I/O group at each site stores the data. Each of these two vdisks uses a vdisk on the same site as a change volume. When the relationship between these four vdisks is defined, one vdisk is the master vdisk, the other is the auxiliary vdisk, and each of these two has an associated change volume. The two vdisks are kept synchronized by the Spectrum Virtualize HyperSwap functions. The host sees only one HyperSwap volume, which has the LUN ID of the master vdisk.
Figure 11-11 on page 432 shows the four vdisks and the HyperSwap volume presented to the host. The host always sees a HyperSwap volume with ID 1. The vdisk with ID 2 is synchronized with the vdisk with ID 1. If the host detects a HyperSwap volume on both I/O groups, both vdisks show ID 1 to the host. The host’s multipathing driver detects and uses the preferred node for I/O.
In case of a failover, for example when I/O group 1 is offline, the host accesses site 2 and uses vdisk 2, which presents ID 1 to the host. Even though the internal IDs differ, the host always sees the master ID 1. Therefore, the multipathing driver of the host can switch seamlessly to site 2.
Figure 11-11 shows four vdisks and the HyperSwap volume.
Figure 11-11 The HyperSwap volume build out of four vdisks
There are five key steps to create a HyperSwap volume, gathered into a single sketch after this list:
1. Optionally, use mkrcconsistgrp to enable multiple HyperSwap volumes to copy consistently together.
2. Use mkvdisk to create the different vdisk objects required.
3. Use addvdiskaccess to enable the HyperSwap volume to be accessed on either site.
4. Use mkrcrelationship to create an active-active relationship to coordinate replication.
5. Use chrcrelationship to associate change volumes with the HyperSwap volume.
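Gathered into one place, the sequence looks like the following minimal sketch, using the names and the 1 GB size from the examples in the sections that follow (system name testCluster, pools mdiskgrp_site1 and mdiskgrp_site2):
mkrcconsistgrp -name hsConsGrp0
mkvdisk -name hsVol0Mas -size 1 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_site1 -accessiogrp 0:1
mkvdisk -name hsVol0Aux -size 1 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_site2
mkvdisk -name hsVol0MasCV -size 1 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_site1 -rsize 0% -autoexpand
mkvdisk -name hsVol0AuxCV -size 1 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_site2 -rsize 0% -autoexpand
mkrcrelationship -master hsVol0Mas -aux hsVol0Aux -cluster testCluster -activeactive -name hsVol0Rel -consistgrp hsConsGrp0
chrcrelationship -masterchange hsVol0MasCV hsVol0Rel
chrcrelationship -auxchange hsVol0AuxCV hsVol0Rel
The addvdiskaccess step is only needed when reusing an existing master vdisk; for a new master vdisk, the -accessiogrp parameter of mkvdisk covers it. Each step is explained in detail in the following sections.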
11.5.9 Creating a consistency group
The advantages of a consistency group are described in 11.3.2, “Consistency Groups” on page 421. Creating a consistency group enables all HyperSwap volumes for a specific application to fail over together, ensuring that at least one site has an up-to-date copy of every HyperSwap volume for the application. Use the mkrcconsistgrp command to create a consistency group:
mkrcconsistgrp -name hsConsGrp0
If all HyperSwap volumes are stand-alone, fully independent, and able to fail over individually, this step can be omitted.
11.5.10 Creating the vdisks
Four vdisks must be created for a HyperSwap volume: the master vdisk, the auxiliary vdisk, and the two associated change volumes.
New master vdisk
Each HyperSwap volume needs a master vdisk. This vdisk can be created when required to hold new application data. It is also possible to use an existing vdisk, to add HyperSwap function to an existing application without affecting that application. To create a new vdisk, use the mkvdisk command:
mkvdisk -name hsVol0Mas -size 1 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_site1 -accessiogrp 0:1
This will be the vdisk that holds the initial copy of the data, which is then replicated to the other site. For a completely new HyperSwap volume, it doesn’t matter which site this vdisk is created on.
 
Note: The site of the mdisk group, defined by the site of the enclosure, must match the site of the caching I/O group given by the -iogrp parameter. Both sites must be able to access this HyperSwap volume. Therefore, the -accessiogrp parameter must contain an I/O group from both sites.
The master vdisk should be mapped to all hosts on both sites that need access to the HyperSwap volume.
In this example mkvdisk command, most parameters are as normal and can be configured according to your needs. If you need the HyperSwap volume to be compressed or have particular EasyTier characteristics, specify that here. The -accessiogrp parameter is important, because it enables the HyperSwap volume to be accessed on both sites. Specify the caching I/O groups that you will use for the auxiliary vdisk, in addition to that which you have specified for the master vdisk.
FlashSystem V9000 vdisks are formatted by default with version 7.5. The speed of the formatting is controlled by the -syncrate parameter.
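To map the master vdisk to the hosts, as mentioned above, use the standard mkvdiskhostmap command, repeated for each host that needs access (for example, with the host object configured in 11.5.5, “Configuring the hosts”):
mkvdiskhostmap -host Host_AIX hsVol0Mas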
Using an Existing Master vdisk
If you are using an existing master vdisk, it normally only has access through its own I/O group. To enable access to the HyperSwap volume through both sites, you need to add access to the HyperSwap volume through the auxiliary vdisk’s I/O group too. Use the addvdiskaccess command:
addvdiskaccess -iogrp 1 hsVol0Mas
This part of the process is not verified, but must be completed in order for the HyperSwap volume to provide high availability through nodes on both sites. This step is only performed for the master vdisk.
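Because the system does not verify this, you can confirm the result with the lsvdiskaccess command, which lists the I/O groups through which a volume is accessible (a check sketch; confirm the view is available at your software level):
lsvdiskaccess hsVol0Mas
Both io_grp0 and io_grp1 should appear in the output.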
Auxiliary vdisk
A HyperSwap volume needs the master vdisk and an auxiliary vdisk. This must be the same size as the master vdisk, but using storage from the other site, and in an I/O group on the other site. To create a new auxiliary vdisk, use the mkvdisk command:
mkvdisk -name hsVol0Aux -size 1 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_site2
Do not map the auxiliary vdisk to any hosts.
Normally, the master and auxiliary vdisks should be on similarly performing storage. If this is not possible, write performance is dictated by the slower of the two, and read performance is that of the vdisk currently acting as the primary of the HyperSwap volume.
It is possible to use an auxiliary vdisk of a different provisioning type from the master vdisk, for example mixing a fully allocated vdisk with a thin-provisioned vdisk, or compressed and non-compressed thin-provisioned vdisks. This is not a recommended configuration: you get the performance penalty of the slower vdisk allocation type, yet do not get all of the space benefits you could get by making both the same type. However, mixed types do work correctly, and there are conditions where it’s unavoidable, for example, when starting with a compressed master vdisk and adding an auxiliary vdisk in an I/O group without hardware compression.
Change volumes
Two thin-provisioned vdisks are required to act as change volumes for this HyperSwap volume. These must be the same logical size as the master vdisk. To create the change volumes use the mkvdisk command (Example 11-12).
Example 11-12 Use the mkvdisk command to create change volumes
mkvdisk -name hsVol0MasCV -size 1 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_site1 -rsize 0% -autoexpand
mkvdisk -name hsVol0AuxCV -size 1 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_site2 -rsize 0% -autoexpand
One change volume is created with the same I/O group as the master vdisk, and in a storage pool in the same site (not necessarily the same storage pool as the master vdisk, but using the same storage pool assures the same performance and availability characteristics). The other change volume is created in the auxiliary vdisk’s I/O group, and in a storage pool in the same site. The system does not enforce whether the same pool is used for a change volume as for the master or auxiliary vdisk, but future versions might control this.
 
Note: Do not map the change volumes to any hosts.
The change volumes are used to store differences between the copies while resynchronizing the HyperSwap volume copies, and normally only require enough storage performance to satisfy the resynchronization rate. If access is enabled to the stale copy during resynchronization, as outlined in 11.8, “Disaster Recovery with HyperSwap” on page 449, a portion of host reads and writes is serviced by the change volume storage, but this decreases toward zero within a short period of time.
The change volumes normally consume capacity equal to the initially specified rsize. During resynchronization, the change volume at the stale copy grows as it retains the data needed to revert to the stale image. It grows to use the same amount of storage as the quantity of changes between the two copies.
Therefore, a stale copy that needs 20% of its data changed to be synchronized with the up-to-date copy has its change volume grow to use 20% of its logical size. After resynchronization, the change volume will automatically shrink back to the initially specified rsize.
Create the active-active relationship
This is the main step in creating a HyperSwap volume. This step adds the HyperSwap volume’s master and auxiliary vdisks to a new active-active relationship.
Active-active relationships are a special type of relationship that can only be used in HyperSwap volumes. Currently, they cannot be configured through the GUI; you cannot manually start or stop them (other than in the disaster recovery scenarios outlined in section 11.8, “Disaster Recovery with HyperSwap” on page 449), and you cannot convert them into Metro Mirror or Global Mirror relationships.
If the master disk already contains application data, use the following command to create the relationship. The system name in this example is testCluster:
mkrcrelationship -master hsVol0Mas -aux hsVol0Aux -cluster testCluster -activeactive -name hsVol0Rel
At this point, the auxiliary vdisk goes offline, because from now on it is only accessed internally by the HyperSwap function. The master vdisk remains online.
If the master vdisk has not been written to yet, use the -sync parameter to avoid the initial synchronization process:
mkrcrelationship -master hsVol0Mas -aux hsVol0Aux -cluster testCluster -activeactive -name hsVol0Rel -sync
Do not use the -nofmtdisk parameter of the mkvdisk command to disable the quick initialization of fully allocated vdisk data for HyperSwap volumes. The -nofmtdisk parameter means that the two copies start out different, so the -sync parameter of the mkrcrelationship command cannot be used to make HA instantly available. If you must use -nofmtdisk, ensure that the -sync parameter of the mkrcrelationship command is omitted so that the system fully synchronizes the two copies, even if neither has been written to.
The HyperSwap function internally joins the master and auxiliary vdisks together so that they can both be accessed through the master vdisk’s LUN ID, using whichever of the vdisks has an up-to-date copy of the data.
If you are creating many HyperSwap volumes that you want to be part of a new consistency group, you should add the active-active relationships to the group as you create them. A relationship created with a -consistgrp parameter is added into the specified consistency group when that consistency group has a state value of inconsistent_stopped (if the -sync flag was omitted), or consistent_stopped (if the -sync flag was provided).
All of the other relationships in that group must have been similarly created, and have not had change volumes configured yet. The following command is an example of using the -consistgrp parameter:
mkrcrelationship -master hsVol0Mas -aux hsVol0Aux -cluster testCluster -activeactive -name hsVol0Rel -consistgrp hsConsGrp0
A relationship created with a -sync flag and with a -consistgrp parameter is added into the specified consistency group if that consistency group has a state value of consistent_stopped, essentially meaning that all other relationships in that group have been similarly created, and have not had change volumes configured yet.
The following command is an example of using the -consistgrp parameter with -sync:
mkrcrelationship -master hsVol0Mas -aux hsVol0Aux -cluster testCluster -activeactive -name hsVol0Rel -consistgrp hsConsGrp0 -sync
See “Adding to a consistency group” on page 436 for details about adding or removing an active-active relationship to or from a consistency group after the relationship has been created.
Adding the change volumes
You must add the two change volumes to the relationship using the chrcrelationship command (Example 11-13).
Example 11-13 The chrcrelationship command to add two volumes to the relationship
chrcrelationship -masterchange hsVol0MasCV hsVol0Rel
chrcrelationship -auxchange hsVol0AuxCV hsVol0Rel
At this point, the active-active relationship starts replicating automatically. If the relationship was created without the -sync flag, the relationship synchronizes the existing data from the master vdisk to the auxiliary vdisk. This initial sync process does not use the change volumes.
The change volume will be used to enable automatic resynchronization after a link outage or other fault causes replication to stop.
Adding to a consistency group
Active-active relationships are added to consistency groups in just the same way as Metro Mirror and Global Mirror relationships. This can either be done when the relationship is created, as mentioned previously, or at a point after the relationship has been created using the chrcrelationship command:
chrcrelationship -consistgrp hsConsGrp0 hsVol0Rel
You cannot mix and match relationship types in a consistency group. When adding an active-active relationship to a consistency group, the group must either be empty or only contain active-active relationships.
You also cannot mix relationships with different states. For HyperSwap, that means that you can only add a relationship to a consistency group if the relationship has a copy in each site of the same state as the consistency group:
The active-active relationship’s state attribute must match the active-active consistency group’s state attribute.
If the state is not consistent_synchronized, the site of the vdisk acting as the primary copy of the active-active relationship must be the same as the site of the vdisks acting as the primary copies of the relationships in the active-active consistency group. Further details about active-active relationship replication direction are given in section 11.7, “Operations” on page 443.
If the state is consistent_synchronized, and the site of the primary vdisk of the active-active relationship is not the same as the primary site of the consistency group, the relationship has its direction switched as it is added so that the primary site matches.
If the site of the master vdisk of the active-active relationship does not match the site of the master vdisks of the relationships in the consistency group, the roles of the master and auxiliary vdisks in the active-active relationship are swapped. Host access continues to be provided through the same vdisk ID and host maps, even though that vdisk is now the auxiliary vdisk of the relationship.
The relationship ID is retained even though this now matches the auxiliary vdisk ID.
The master and auxiliary roles are restored if the relationship is removed from the consistency group.
If you need to remove that HyperSwap volume from the consistency group, so it can fail over independently, you use the chrcrelationship command:
chrcrelationship -noconsistgrp hsVol0Rel
11.6 HyperSwap setup walk-through
This section shows examples of configuring FlashSystem V9000 HyperSwap for HyperSwap setup and HyperSwap volume setup, and includes the following tasks:
1. HyperSwap setup:
a. Define the sites.
b. Configure the nodes.
c. Configure the FlashSystem V9000 storage enclosures.
d. Configure the storage controllers and quorum disk.
e. Configure the hosts.
f. Configure the HyperSwap topology.
g. Configure synchronization rates.
2. HyperSwap volume setup:
a. Set up the master vdisk.
b. Set up the auxiliary vdisk.
c. Set up the relationship and change volumes.
d. Add to a consistency group.
e. Map HyperSwap volumes to the host.
Example 11-14 shows an example of configuring FlashSystem V9000 HyperSwap. The command output is shortened for better readability.
Example 11-14 HyperSwap Setup Example
# Define the sites
IBM_FlashSystem:TestCluster:superuser>lssite
ID site_name
1 site1
2 site2
3 site3
IBM_FlashSystem:TestCluster:superuser>chsite -?
chsite
Syntax
>>-chsite-- -- -name--new_site_name-- ------------------------->
>--+-site_id------------+--------------------------------------><
'-existing_site_name-'
IBM_FlashSystem:TestCluster:superuser>chsite -name datacenter_west 1
IBM_FlashSystem:TestCluster:superuser>chsite -name datacenter_east 2
IBM_FlashSystem:TestCluster:superuser>chsite -name quorum_site 3
IBM_FlashSystem:TestCluster:superuser>lssite
ID site_name
1 datacenter_west
2 datacenter_east
3 quorum_site
 
# Configure the nodes
IBM_FlashSystem:TestCluster:superuser>lsnode
ID name           site_id site_name
1 node1
2 node_78AV610
3 node_75AD820
4 node_75AD830
IBM_FlashSystem:TestCluster:superuser>chnode -?
chnode
Syntax
>>- chnode -- | chnodecanister -- ------------------------------>
>--+--------------------------+-- --+- object_id ---+----------><
+- -site --+- site_id ---+-+ '- object_name -'
| '- site_name -' |
'- -nosite ----------------'
IBM_FlashSystem:TestCluster:superuser>chnode -site datacenter_west node1
IBM_FlashSystem:TestCluster:superuser>chnode -site datacenter_west node_78AV610
IBM_FlashSystem:TestCluster:superuser>chnode -site datacenter_east node_75AD820
IBM_FlashSystem:TestCluster:superuser>chnode -site datacenter_east node_75AD830
IBM_FlashSystem:TestCluster:superuser>lsnode
id name           site_id site_name
1  node1          1 datacenter_west
2  node_78AV610   1 datacenter_west
3  node_75AD820   2 datacenter_east
4  node_75AD830   2 datacenter_east
 
# Configure the FlashSystem V9000 storage enclosures
IBM_FlashSystem:TestCluster:superuser>lsenclosure
id status type       site_id site_name
1 online expansion
2 online expansion
IBM_FlashSystem:TestCluster:superuser>chenclosure -?
chenclosure
Syntax
>>- chenclosure --+- -identify --yes | no-+-- -- -- ------------>
+- -managed --+-yes-+---+
'-no--'
>--+--------------------------+-- -id --enclosure_id-----------><
+- -site --+- site_name -+-+
| '- site_id ---' |
'- -nosite ----------------'
IBM_FlashSystem:TestCluster:superuser>chenclosure -site datacenter_west 1
IBM_FlashSystem:TestCluster:superuser>chenclosure -site datacenter_east 2
IBM_FlashSystem:TestCluster:superuser>lsenclosure
ID status type       site_id site_name
1 online expansion  1 datacenter_west
2 online expansion  2 datacenter_east
 
# Configure the storage controllers
IBM_FlashSystem:TestCluster:superuser>lscontroller
ID controller_name site_id site_name
4 controller0
5 controller1
IBM_FlashSystem:TestCluster:superuser>chcontroller -?
chcontroller
Syntax
>>- chcontroller -- --+---------------------+-- ---------------->
'- -name -- new_name -'
>--+-------------------------+--+--------------------------+---->
'- -allowquorum --+-yes-+-' +- -site --+- site_name -+-+
'-no--' | '- site_id ---' |
'- -nosite ----------------'
>--+- controller_id ---+---------------------------------------><
'- controller_name -'
IBM_FlashSystem:TestCluster:superuser>chcontroller -site quorum_site 4
IBM_FlashSystem:TestCluster:superuser>chcontroller -site quorum_site 5
IBM_FlashSystem:TestCluster:superuser>lscontroller
id controller_name site_id site_name
4 controller0 3 quorum_site
5 controller1 3 quorum_site
 
# Configure quorum disk
#One quorum disk on each site, active quorum disk on third site
IBM_FlashSystem:TestCluster:superuser>lsquorum
quorum_index status id name controller_id active object_type site_id site_name
0 online 2                       no     mdisk       3 quorum_site
1 online 23                      no     drive       2 datacenter_east
2 online 11                      yes    drive       1 datacenter_west
IBM_FlashSystem:TestCluster:superuser>chquorum -?
chquorum
Syntax
>>- chquorum -- --+-----------+-- ------------------------------>
'- -active -'
>--+--------------------------+-- --+---------------------+----->
+- -mdisk --+-mdisk_id---+-+ '- -override --yes|no-'
| '-mdisk_name-' |
'- -drive -- drive_id -----'
>-- -- quorum_id ----------------------------------------------><
IBM_FlashSystem:TestCluster:superuser>chquorum -active -mdisk mdisk4 -override yes 0
IBM_FlashSystem:TestCluster:superuser>lsquorum
quorum_index status ID name controller_id active object_type site_id site_name
0 online 4 mdisk4 4              yes    mdisk       3 quorum_site
1 online 23                      no     drive       2 datacenter_east
2 online 11                      no     drive       1 datacenter_west
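# (The -override yes parameter disables dynamic quorum selection for this
# index, so the active quorum disk stays on the third site.)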
 
# Configure the hosts
IBM_FlashSystem:TestCluster:superuser>lshost
ID name port_count iogrp_count status site_id site_name
0 Windows_1 4 4 online
IBM_FlashSystem:TestCluster:superuser>chhost -?
chhost
Syntax
>>- chhost -- --+----------------------------+-- ---------------->
'- -name -- new_name --------'
>--+--------------------------+--+- host_name -+---------------><
+- -site --+- site_name -+-+ '- host_id ---'
| '- site_id ---' |
'- -nosite ----------------'
IBM_FlashSystem:TestCluster:superuser>chhost -site datacenter_west 0
IBM_FlashSystem:TestCluster:superuser>lshost
ID name port_count iogrp_count status site_id site_name
0 Windows_1 4 4 online 1 datacenter_west
 
# Configure the HyperSwap topology
IBM_FlashSystem:TestCluster:superuser>lssystem
. . .
topology standard
topology_status
. . .
IBM_FlashSystem:TestCluster:superuser>chsystem -?
chsystem
Syntax
 
>>- chsystem -- -- --+------------------------+-- -------------->
'- -name -- system_name -'
>--+----------------------------+-- ---------------------------->
'- -topology --+-standard--+-'
'-hyperswap-'
IBM_FlashSystem:TestCluster:superuser>chsystem -topology hyperswap
IBM_FlashSystem:TestCluster:superuser>lssystem | grep topo
. . .
topology hyperswap
topology_status dual_site
. . .
# Configure synchronization rates
# 8000 Mbps total (writes and synchronization), maximum 50% for synchronization
IBM_FlashSystem:TestCluster:superuser>lspartnership
ID name location partnership type cluster_ip event_log_sequence
000002032060460E TestCluster local
IBM_FlashSystem:TestCluster:superuser>lspartnership 000002032060460E
. . .
link_bandwidth_mbits 200
background_copy_rate 100
IBM_FlashSystem:TestCluster:superuser>chpartnership -?
chpartnership
Syntax
>>- chpartnership -- --+- -start -+-- -------------------------->
'- -stop --'
>-- --+-------------------------------------+-- -- ------------->
'- -backgroundcopyrate -- percentage -'
>--+-------------------------------------------------+-- ------->
'- -linkbandwidthmbits -- link_bandwidth_in_mbps -'
>--+- remote_cluster_id ---+-----------------------------------><
'- remote_cluster_name -'
IBM_FlashSystem:TestCluster:superuser>chpartnership -linkbandwidthmbits 8000 -backgroundcopyrate 50 TestCluster
IBM_FlashSystem:TestCluster:superuser>lspartnership 000002032060460E
link_bandwidth_mbits 8000
background_copy_rate 50
 
IBM_FlashSystem:TestCluster:superuser>lssystem
relationship_bandwidth_limit 25
IBM_FlashSystem:TestCluster:superuser>chsystem -?
chsystem
Syntax
>>- chsystem -- -- --+------------------------+-- -------------->
'- -name -- system_name -'
>--+----------------------------------------------------+-- ---->
'- -relationshipbandwidthlimit -- bandwidth_in_mBps -'
 
# Maximum 200 MBps relationship bandwidth per vdisk
IBM_FlashSystem:TestCluster:superuser>chsystem -relationshipbandwidthlimit 200
IBM_FlashSystem:TestCluster:superuser>lssystem
relationship_bandwidth_limit 200
IBM_FlashSystem:TestCluster:superuser>
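To put these settings together: with link_bandwidth_mbits set to 8000 and background_copy_rate set to 50, synchronization traffic is limited to 50% of 8000 Mbps, that is, 4000 Mbps or roughly 500 MBps, across the inter-site link. The relationship_bandwidth_limit of 200 MBps then caps how much of that budget a single relationship can use, so at least three relationships must be resynchronizing at once to consume the full synchronization budget. These figures are illustrative; actual throughput depends on the link and the workload.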
Example 11-15 shows an example of configuring FlashSystem V9000 HyperSwap volumes. Two HyperSwap volumes are created. In total, eight vdisks and two active-active relationships are used. The command output is shortened for better readability.
Example 11-15 HyperSwap Volume example
# Check the sites of the pools
IBM_FlashSystem:TestCluster:superuser>lsmdiskgrp
id name status site_id site_name
0  mdiskgrp_west   online 1 datacenter_west
1  mdiskgrp_east   online 2 datacenter_east
2  mdiskgrp_quorum online 3 quorum_site
 
# Master Vdisk and associated Change Volume
mkvdisk -name HS_Vol_1_Mas -size 34 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_west -accessiogrp 0:1 -syncrate 100
mkvdisk -name HS_Vol_1_Mas_CV -size 34 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_west -rsize 0% -autoexpand
 
# Auxiliary Vdisk and associated Change Volume
mkvdisk -name HS_Vol_1_Aux -size 34 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_east -syncrate 100
mkvdisk -name HS_Vol_1_Aux_CV -size 34 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_east -rsize 0% -autoexpand
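# The -accessiogrp 0:1 parameter on the master vdisk lets hosts access the
# HyperSwap volume through both I/O groups, and therefore through both sites.
# If it was omitted at creation time, access can be added later; a sketch:
# addvdiskaccess -iogrp 1 HS_Vol_1_Mas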
 
# Relationship master, auxiliary, and change volumes
mkrcrelationship -master HS_Vol_1_Mas -aux HS_Vol_1_Aux -cluster TestCluster -activeactive -name HS_Vol_1_rel
chrcrelationship -masterchange HS_Vol_1_Mas_CV HS_Vol_1_rel
chrcrelationship -auxchange HS_Vol_1_Aux_CV HS_Vol_1_rel
 
# Check the HyperSwap volume, all four vdisks and the relationship are created
IBM_FlashSystem:TestCluster:superuser>lsvdisk
id name            IO_group_name status  mdisk_grp_name capacity RC_name
0 HS_Vol_1_Mas    io_grp0       online  mdiskgrp_west  34.00GB HS_Vol_1_rel
1 HS_Vol_1_Mas_CV io_grp0       online  mdiskgrp_west  34.00GB HS_Vol_1_rel
2 HS_Vol_1_Aux    io_grp1       offline mdiskgrp_east  34.00GB HS_Vol_1_rel
3 HS_Vol_1_Aux_CV io_grp1       online  mdiskgrp_east  34.00GB HS_Vol_1_rel
 
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_1_rel
id 0
name HS_Vol_1_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 0
master_vdisk_name HS_Vol_1_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 2
aux_vdisk_name HS_Vol_1_Aux
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 1
master_change_vdisk_name HS_Vol_1_Mas_CV
aux_change_vdisk_id 3
aux_change_vdisk_name HS_Vol_1_Aux_CV
 
# second HyperSwap volume
mkvdisk -name HS_Vol_2_Mas -size 35 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_west -accessiogrp 0:1 -syncrate 100
mkvdisk -name HS_Vol_2_Mas_CV -size 35 -unit gb -iogrp 0 -mdiskgrp mdiskgrp_west -rsize 0% -autoexpand
mkvdisk -name HS_Vol_2_Aux -size 35 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_east -syncrate 100
mkvdisk -name HS_Vol_2_Aux_CV -size 35 -unit gb -iogrp 1 -mdiskgrp mdiskgrp_east -rsize 0% -autoexpand
mkrcrelationship -master HS_Vol_2_Mas -aux HS_Vol_2_Aux -cluster TestCluster -activeactive -name HS_Vol_2_rel
chrcrelationship -masterchange HS_Vol_2_Mas_CV HS_Vol_2_rel
chrcrelationship -auxchange HS_Vol_2_Aux_CV HS_Vol_2_rel
 
IBM_FlashSystem:TestCluster:superuser>lsvdisk
id name            IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    RC_id RC_name
0 HS_Vol_1_Mas    io_grp0       online 0 mdiskgrp_west 34.00GB striped 0 HS_Vol_1_rel
1 HS_Vol_1_Mas_CV io_grp0       online 0 mdiskgrp_west 34.00GB striped 0 HS_Vol_1_rel
2 HS_Vol_1_Aux    io_grp1       offline 1 mdiskgrp_east 34.00GB striped 0 HS_Vol_1_rel
3 HS_Vol_1_Aux_CV io_grp1       online 1 mdiskgrp_east 34.00GB striped 0 HS_Vol_1_rel
6 HS_Vol_2_Mas    io_grp0       online 0 mdiskgrp_west 35.00GB striped 6 HS_Vol_2_rel
7 HS_Vol_2_Mas_CV io_grp0       online 0 mdiskgrp_west 35.00GB striped 6 HS_Vol_2_rel
8 HS_Vol_2_Aux    io_grp1       offline 1 mdiskgrp_east 35.00GB striped 6 HS_Vol_2_rel
9 HS_Vol_2_Aux_CV io_grp1       online 1 mdiskgrp_east 35.00GB striped 6 HS_Vol_2_rel
 
# Add to a consistency group
# Create the consistency group
IBM_FlashSystem:TestCluster:superuser>mkrcconsistgrp -name HS_ConsGrp_0
RC Consistency Group, id [0], successfully created
 
# Add the HyperSwap volumes' active-active relationships to the consistency group
IBM_FlashSystem:TestCluster:superuser>chrcrelationship -consistgrp HS_ConsGrp_0 HS_Vol_1_rel
IBM_FlashSystem:TestCluster:superuser>chrcrelationship -consistgrp HS_ConsGrp_0 HS_Vol_2_rel
 
IBM_FlashSystem:TestCluster:superuser>lsrcconsistgrp HS_ConsGrp_0
id 0
name HS_ConsGrp_0
master_cluster_id 000002032060460E
master_cluster_name TestCluster
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
RC_rel_id 0
RC_rel_name HS_Vol_1_rel
RC_rel_id 6
RC_rel_name HS_Vol_2_rel
 
# Map HyperSwap volumes to host
IBM_FlashSystem:TestCluster:superuser>mkvdiskhostmap -host Windows_1 HS_Vol_1_Mas
Virtual Disk to Host map, id [0], successfully created
IBM_FlashSystem:TestCluster:superuser>mkvdiskhostmap -host Windows_1 HS_Vol_2_Mas
Virtual Disk to Host map, id [1], successfully created
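# Verify that the host can reach the HyperSwap volumes through both
# I/O groups (a sketch of the expected lsvdiskaccess output, shortened):
IBM_FlashSystem:TestCluster:superuser>lsvdiskaccess HS_Vol_1_Mas
IO_group_id IO_group_name
0 io_grp0
1 io_grp1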
The FlashSystem V9000 HyperSwap configuration is now complete, and two HyperSwap volumes are mapped to a host. The next step is to configure the host. See Chapter 7, “Host configuration” on page 213 for examples of configuring hosts on various supported operating systems.
11.7 Operations
The active-active relationship has a primary attribute, like regular Metro Mirror and Global Mirror relationships, set to either master or aux. With an active-active relationship, the vdisk in one I/O group acts as the primary, supplying data for reads and serializing writes. All reads and writes must initially be processed by that I/O group. This is how writes are applied consistently to both copies of the HyperSwap volume.
The HyperSwap function optimizes the I/O traffic. HyperSwap monitors which I/O group receives most of the host I/O; the vdisk whose site matches that I/O group's site should act as the primary.
For a newly created HyperSwap volume, the master vdisk initially acts as the primary. If, for 10 minutes, the majority of I/O is submitted to the auxiliary vdisk's site, the system switches the direction of the relationship, improving read and write performance for the hosts. The active-active relationship for that HyperSwap volume then has a primary attribute of aux.
After the initial configuration of a HyperSwap volume, there might be a period of up to 10 minutes of increased traffic between the sites, until this initial training period resolves the optimal direction.
For a HyperSwap volume that has had ongoing I/O to the primary vdisk's site for some time, the majority of I/O must be submitted to the secondary vdisk's site for a period of 20 minutes before the active-active relationship switches direction.
HyperSwap volumes in consistency groups all switch direction together. So the direction in which a set of active-active relationships in a consistency group replicates depends on which of the two sites receives most of the host I/O across all HyperSwap volumes.
During normal (not swapped) operation, the primary attribute shown by the lsrcrelationship and lsrcconsistgrp commands is master:
lsrcconsistgrp HS_ConsGrp_0
primary master
Figure 11-12 shows that the master vdisk is the primary vdisk.
Figure 11-12 HyperSwap normal copy direction
During reversed (swapped) operation, the primary attribute of the lsrcrelationship and lsrcconsistgrp commands is aux:
lsrcconsistgrp HS_ConsGrp_0
primary aux
Figure 11-13 shows that the auxiliary vdisk is now the primary vdisk. You can see the changed direction of the arrow, together with the freeze time of the master vdisks. The master vdisks in this consistency group were all stopped at the same point in time, because one of them went offline and the system preserves a consistent state across all master vdisks in the consistency group.
Next to the state, an icon indicates that the master and auxiliary vdisks are now switched; hovering over this icon displays a message to that effect. When the Primary Volume column is added to the view, you see the name of the current primary vdisk and notice that the auxiliary vdisk acts as the primary.
Figure 11-13 HyperSwap reversed copy direction, indicated by extra reverse sign and reversed arrow
 
Note: The majority of I/O is currently determined by comparing the number of sectors written rather than by counting I/Os. A 75% majority is required before the direction switches, to prevent frequent alternating of direction.
VMware systems can share data stores between multiple virtualized hosts using a single HyperSwap volume. To minimize cross-site I/O traffic, make sure that a data store is only used for virtual machines primarily running on a single site, as this enables HyperSwap to orient the replication optimally.
11.7.1 Site failure
Normally, the storage and nodes on both sites are online, and both copies of every HyperSwap volume, the master and auxiliary vdisk of every active-active relationship, contain up-to-date data. If a site fails such that the FlashSystem V9000 nodes, the storage, the Fibre Channel connectivity, or a combination of these is unavailable (through hardware failure, power failure, or site inaccessibility), HyperSwap preserves access to vdisks through the remaining site.
A fully synchronized HyperSwap volume has the active-active relationship with the state consistent_synchronized. If the storage or nodes for the vdisk on one site of a fully synchronized HyperSwap volume goes offline, the following changes occur:
The state of the active-active relationship becomes consistent_copying.
Host I/O pauses for less than a second in a normal case (this can extend to multiple seconds in some cases, particularly with larger consistency groups).
If the offline vdisk was the primary copy, the direction of the relationship switches to make the online copy the primary.
The progress value of the active-active relationship counts down from 100% as the copies become more different (for example, if 10% of the HyperSwap volume was modified while one copy was offline, the progress value shows 90).
The master vdisk remains online, and the auxiliary vdisk remains offline when viewed through the lsvdisk command, regardless of which copy is no longer accessible.
Example 11-16 shows the differences between a running relationship and a failover to the remote site after the master vdisk, which was acting as the primary, went offline. The differences in the output are highlighted.
Example 11-16 Example of relationship changes after a fail over to the remote site
# relationship before fail over to the remote site
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
id 6
name HS_Vol_2_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 6
master_vdisk_name HS_Vol_2_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 8
aux_vdisk_name HS_Vol_2_Aux
primary master
consistency_group_id 0
consistency_group_name HS_ConsGrp_0
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 7
master_change_vdisk_name HS_Vol_2_Mas_CV
aux_change_vdisk_id 9
aux_change_vdisk_name HS_Vol_2_Aux_CV
 
# relationship after fail over to the remote site
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
id 6
name HS_Vol_2_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 6
master_vdisk_name HS_Vol_2_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 8
aux_vdisk_name HS_Vol_2_Aux
primary aux
consistency_group_id 0
consistency_group_name HS_ConsGrp_0
state consistent_copying
bg_copy_priority 50
progress 99
freeze_time 2015/07/28/11/15/25
status secondary_offline
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 7
master_change_vdisk_name HS_Vol_2_Mas_CV
aux_change_vdisk_id 9
aux_change_vdisk_name HS_Vol_2_Aux_CV
Table 11-3 shows the differences between the two lsrcrelationship outputs. You can see that the master vdisk is offline by looking at the primary and status fields: the status is secondary_offline, and because the auxiliary vdisk is now the primary, the offline secondary is the master vdisk.
Table 11-3 The lsrcrelationship changes
Field         Before                     After
primary       master                     aux
state         consistent_synchronized    consistent_copying
progress      <null>                     99
freeze_time   <null>                     2015/07/28/11/15/25
status        online                     secondary_offline
When that offline copy is restored, the progress value counts back up to 100 as the HyperSwap volume is resynchronized. When it has been resynchronized, the state of the active-active relationship becomes consistent_synchronized again. No manual actions are required to make this process occur.
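While the resynchronization is still running, querying the relationship shows the direction still reversed and the progress value climbing. The following is a hypothetical intermediate snapshot (only the fields of interest are shown):
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
primary aux
state consistent_copying
progress 95
freeze_time 2015/07/28/11/15/25
status online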
Example 11-17 shows the relationship status after the master vdisk is online again. After the resynchronization completes (depending on the amount of data to be replicated, and new data coming in), the status is identical to the status before the master vdisk went offline. Only the lines different from the offline master are shown.
Example 11-17 Relationship status after volume is online and synchronized again
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
primary master
state consistent_synchronized
progress
freeze_time
status online
For HyperSwap volumes in consistency groups, a single HyperSwap volume with an inaccessible copy causes this error recovery procedure to take place on every HyperSwap volume in the consistency group. Each active-active relationship becomes consistent_copying. This ensures the consistency of the vdisks on the secondary site. If one HyperSwap volume in a group already has one copy offline, and then a vdisk of a different HyperSwap volume in the same group goes offline, there are two different scenarios.
HS_Vol_1_rel and HS_Vol_2_rel are in the same consistency group. The current state of the HS_Vol_1_rel HyperSwap volume is as follows: the master vdisk went offline, and the auxiliary vdisks then became the primaries for both HyperSwap volumes (Example 11-18).
Example 11-18 The lsrcrelationship command to see current state of volumes
lsrcrelationship HS_Vol_1_rel
primary aux
state consistent_copying
status secondary_offline
lsrcrelationship HS_Vol_2_rel
primary aux
state consistent_copying
status online
Now one vdisk of the HS_Vol_2_rel HyperSwap volume goes offline:
1. The offline vdisk is on the secondary site, the same site as the already-offline vdisk. In Example 11-19, the master vdisk, which is on the current secondary site, went offline. The HyperSwap volume is still online and accessible from the host.
Example 11-19 Master vdisk offline
lsrcrelationship HS_Vol_1_rel
primary aux
state consistent_copying
status secondary_offline
lsrcrelationship HS_Vol_2_rel
primary aux
state consistent_copying
status secondary_offline
2. The offline vdisk is on the primary site, the site currently used for I/O.
In Example 11-20, the auxiliary vdisk, which is on the current primary site, went offline. The HyperSwap volume is now offline and not accessible from the host.
Example 11-20 Auxiliary vdisk offline
lsrcrelationship HS_Vol_1_rel
primary aux
state consistent_copying
status secondary_offline
lsrcrelationship HS_Vol_2_rel
primary aux
state consistent_copying
status primary_offline
HyperSwap was not able to mask the offline vdisk on the primary site of the HS_Vol_2_rel relationship, so that HyperSwap volume went offline.
11.7.2 Deleting HyperSwap volumes
To delete a HyperSwap volume containing data that is no longer required, simply delete each vdisk created for the HyperSwap volume, using the -force option to rmvdisk. Alternatively, each created object can be individually deleted or unconfigured in the reverse order to how they were created and configured, which avoids the need for the -force option to rmvdisk.
If the data will still be required after unconfiguring the HyperSwap object, use the procedures described in section 11.12, “Unconfiguring HyperSwap” on page 457.
After every HyperSwap volume has been removed or converted to a single-copy volume, the topology of the system can be reverted to the standard topology using the chsystem command:
chsystem -topology standard
11.7.3 FlashCopy with HyperSwap
FlashCopy can be used to take point-in-time copies of HyperSwap volumes.
A FlashCopy map with a HyperSwap volume as its source cannot cross sites. Therefore, a FlashCopy mapping where the target vdisk is on site 1 must use the vdisk of the HyperSwap volume on site 1 as its source, and likewise for site 2. It is not possible for a FlashCopy map with a HyperSwap volume as its source to copy data between sites.
For example, if a HyperSwap volume has vdisk 10 providing data on site 1, and vdisk 11 on site 2, FlashCopy maps can be created as follows using the mkfcmap command (Example 11-21).
Example 11-21 The mkfcmap command to create FlashCopy maps
mkfcmap -source 10 -target 12 ...
mkfcmap -source 11 -target 13 ...
In Example 11-21, vdisk 12 is a single-copy volume already created on site 1, and vdisk 13 on site 2. These two FlashCopy maps can both be used independently to take point-in-time copies of the HyperSwap volume on the two sites. The system provides no coordination of these maps.
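As a concrete sketch using the HyperSwap volume created earlier, where FC_Tgt_West and FC_Tgt_East are hypothetical single-copy target vdisks created beforehand in the pool of the matching site, the two maps could be created as follows:
mkfcmap -source HS_Vol_1_Mas -target FC_Tgt_West -name fcmap_west
mkfcmap -source HS_Vol_1_Aux -target FC_Tgt_East -name fcmap_east
Each map is then prepared and triggered independently, for example with startfcmap -prep fcmap_west, subject to the conditions described next.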
When triggering the FlashCopy map, the copy of the HyperSwap volume on the same site as the FlashCopy target vdisk must be either of the following options:
A primary copy of an active-active relationship in any state
A secondary copy of an active-active relationship in a consistent_synchronized state
If access has been enabled to an old but consistent copy of the HyperSwap volume, a FlashCopy map can only be triggered on the site that contains that copy.
A FlashCopy map cannot be created with a HyperSwap volume as its target. If necessary, delete the active-active relationship to convert the HyperSwap volume to a single-copy volume before creating and triggering the FlashCopy map.
 
Note: A FlashCopy can only be taken from the vdisks of a HyperSwap volume, not from the HyperSwap volume itself. A FlashCopy cannot be restored on a vdisk of a HyperSwap volume. FlashCopy Manager is currently not supported with HyperSwap Volumes.
11.8 Disaster Recovery with HyperSwap
The HyperSwap function automatically uses both copies to provide continuous host access to data, provided that both copies are up-to-date. If one copy is up-to-date and the other is stale, and the up-to-date copy goes offline, the system cannot automatically use the remaining copy to provide high availability to the HyperSwap volume.
However, the user can choose to enable access to that stale copy. Doing so tells the system to rewind the state of that HyperSwap volume to the point in time of that stale copy.
This rewind to the point in time of the stale copy consists of manual steps, which must be done carefully. Before starting this process, you must make sure that the hosts have not cached data or status of the HyperSwap volumes. Ideally, shut down the host systems that use the HyperSwap volume before taking these steps. Running these commands without these precautions might crash your applications and corrupt the stale copy.
To demonstrate a stale copy with an up-to-date copy going offline, check the active-active relationship of a HyperSwap volume using the lsrcrelationship command while the HyperSwap volume is still resynchronizing, after the master went offline and came online again (Example 11-22).
Example 11-22 HyperSwap volume resynchronizing
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
id 6
name HS_Vol_2_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 6
master_vdisk_name HS_Vol_2_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 8
aux_vdisk_name HS_Vol_2_Aux
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 85
freeze_time 2015/07/29/12/08/31
status online
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 7
master_change_vdisk_name HS_Vol_2_Mas_CV
aux_change_vdisk_id 9
aux_change_vdisk_name HS_Vol_2_Aux_CV
Here, the site of the master copy had previously been offline, had returned online, and the HyperSwap volume is resynchronizing. The consistent_copying state of the HyperSwap volume shows a resynchronization where the master copy contains a stale image, and the value contained in the freeze_time field shows when that image dates from. The progress value is increasing toward 100 as the resynchronization process continues.
Now, the site of the auxiliary copy goes offline.
Check the active-active relationship using the lsrcrelationship command. Only the changes to the previous output are shown in Example 11-23. The HyperSwap volume is offline because the primary vdisk went offline during resynchronization.
Example 11-23 The lsrcrelationship command after the site of the auxiliary copy has gone offline
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
...
state consistent_copying
bg_copy_priority 50
progress 87
freeze_time 2015/07/29/12/08/31
status primary_offline
...
With the only up-to-date copy of the HyperSwap volume offline, the active-active relationship cannot switch direction to keep the HyperSwap volume online, so the master vdisk is now offline. You can see the offline master and auxiliary vdisks using the lsvdisk command (Example 11-24).
Example 11-24 The lsvdisk command shows the master vdisk offline
IBM_FlashSystem:TestCluster:superuser>lsvdisk
ID name IO_group_name status mdisk_grp_id mdisk_grp_name capacity
6 HS_Vol_2_Mas io_grp0 offline 0 mdiskgrp_west 35.00GB
7 HS_Vol_2_Mas_CV io_grp0 online 0 mdiskgrp_west 35.00GB
8 HS_Vol_2_Aux io_grp1 offline 1 mdiskgrp_east 35.00GB
9 HS_Vol_2_Aux_CV io_grp1 online 1 mdiskgrp_east 35.00GB
At this point, you look at the freeze_time value. If data from that date is not useful, for example it is from too long ago, or before a recent vital update, it might be best to wait until the offline up-to-date copy of the HyperSwap volume can be brought back online.
However, if the stale data is useful, and it is likely that the up-to-date copy of the HyperSwap volume will remain offline for an extended period of time (or will never come online again, for example after a fatal site failure), you can choose to enable access to the stale copy of the HyperSwap volume. Before running this command, make sure that no data or state from this HyperSwap volume is cached on host systems. Stop the active-active relationship using the stoprcrelationship command:
stoprcrelationship -access <relationship>
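In our example, the command is as follows:
stoprcrelationship -access HS_Vol_2_rel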
Check the active-active relationship using the lsrcrelationship command. Only the changes to the previous output are shown in Example 11-25. Stopping the relationship will take the HyperSwap volume online using the stale copy. The state of the relationship is idling.
Example 11-25 The lsrcrelationship command results after stopping the active-active relationship
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
...
primary master
state idling
bg_copy_priority 50
progress
freeze_time
status
...
At this point, the data presented to hosts from this HyperSwap volume immediately changes to that stored on the stale copy. One way to think of this is that the HyperSwap volume has been consistently rolled back to the point in time denoted by the freeze_time value. The HyperSwap volume continues to be readable and writable at this point. You can start your business applications again, and continue from this stale image.
Replication is paused, even if the up-to-date copy comes back online. This is because the previously stale image, which is now being accessed by hosts, and the previously up-to-date copy, which contains some changes not present on the previously stale image, are now divergent copies. The two copies were the same at the freeze_time point in time, but each then had different writes applied. Either copy might be the one that the user wants to keep in the long term.
So the system allows the user to choose which copy is more useful to them. This choice is based on how much data was missing on the stale copy compared to the up-to-date copy, and how much progress has been made on the stale copy since access was enabled to it.
The first step is determining which copy is the stale copy that hosts are currently accessing. This is either the master or the auxiliary copy, and is visible under the primary attribute of the active-active relationship. You can choose between two scenarios:
1. Keep using the copy that hosts are currently accessing, and discard the old up-to-date copy.
The other, previously up-to-date copy is online again. You decide not to use it, but to keep using the previously stale copy the host is currently accessing. This scenario is described in section 11.8.1, “Using the vdisk that the hosts are currently accessing” on page 452.
2. Go back to the up-to-date copy and discard the stale copy used for disaster recovery.
The other, old up-to-date copy is online again, and you have decided to discard the changes on this copy and go back to the up-to-date copy. This is described in section 11.8.2, “Going back to the up-to-date copy” on page 453.
11.8.1 Using the vdisk that the hosts are currently accessing
This section describes using the stale copy and discarding the old up-to-date copy to start the active-active relationship of the HyperSwap volume.
The disaster recovery using the stale copy was successful, and the host is now using that copy. You have decided to keep using this stale copy and to discard the up-to-date copy. Use the lsrcrelationship command to detect the current primary vdisk (Example 11-26).
Example 11-26 The lsrcrelationship command to detect current primary
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship <relationship>
primary master
or
primary aux
Use the startrcrelationship command to start the relationship:
startrcrelationship -primary <current_primary> -force <relationship>
In this example, <current_primary> is the current primary value of the active-active relationship, and is master or aux. The -force flag is required because this decision discards the ability to use the copy that is not the primary; it tells the system that you are aware that this action cannot be reverted. In our example, the command is as follows:
startrcrelationship -primary master -force HS_Vol_2_rel
The host is not affected by this command. There is no need to quiesce host I/O or take any further action. This command resumes HyperSwap replication, and copies across any regions that differ between the two copies to resynchronize as fast as possible. Both copies keep a bitmap of vdisk regions at a 256 KB granularity, used to record writes to that copy that have not yet been replicated to the other copy.
During this resynchronization, the system uses both sets of information to undo writes that were only applied to the old up-to-date copy, and also to copy across additional writes made to the stale copy during the disaster recovery. Because the disaster recovery only happened because the copies were resynchronizing before the up-to-date copy went offline, all differences from that interrupted resynchronization process are reverted on the old up-to-date copy as well.
The active-active relationship goes into an inconsistent_copying state, and as copying continues, the progress increases toward 100. At that point, the relationship goes into a consistent_synchronized state, showing that both copies are up-to-date, and high-availability is restored.
Use the lsrcrelationship command to check the status, as shown in Example 11-27.
Example 11-27 HyperSwap volume relationship using stale copy, discarding the up-to-date copy
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
id 6
name HS_Vol_2_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 6
master_vdisk_name HS_Vol_2_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 8
aux_vdisk_name HS_Vol_2_Aux
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 7
master_change_vdisk_name HS_Vol_2_Mas_CV
aux_change_vdisk_id 9
aux_change_vdisk_name HS_Vol_2_Aux_CV
11.8.2 Going back to the up-to-date copy
This section describes going back to the up-to-date copy and discarding the stale copy to start the active-active relationship of the HyperSwap volume.
The previous section described the steps for keeping the stale copy that was used for disaster recovery, and showed how to resynchronize from it to the other copy.
This section describes the scenario where you decide to discard the writes made to the stale copy and go back to the up-to-date copy, the one that held the latest data before the disaster recovery.
This scenario is different from the previous one, because the image visible to hosts is going to change again. Just as enabling access to the stale copy required hosts to have no cached data from the HyperSwap volume (ideally, they should be fully shut down), the same is true of reverting to the up-to-date copy.
Before going further, make sure that no running hosts are going to be affected by the data changing, and that they hold no stale data with which they might corrupt the up-to-date copy. When you run the startrcrelationship command, the data visible to hosts instantly reverts to the up-to-date copy. Use the startrcrelationship command to start the relationship:
startrcrelationship -primary <current_secondary> -force <relationship>
In this example, <current_secondary> is the copy other than the current primary value of the active-active relationship, and is master or aux. In other words, if the primary field says master, use aux here, and vice versa. You cannot get back to the other set of data after you have run this command, and the -force flag is there to acknowledge this. In our example, the command is as follows:
startrcrelationship -primary aux -force HS_Vol_2_rel
The image visible to hosts reverts to the up-to-date copy as soon as you run this command. You can then bring the hosts back online and start using this HyperSwap volume again.
As with the other scenario, the active-active relationship is in an inconsistent_copying state while resynchronizing, and again this resynchronization uses the bitmaps of writes to each copy to accelerate this resynchronization process. When the copies are fully synchronized, the relationship goes back to a consistent_synchronized state as high availability is restored for the HyperSwap volume.
Use the lsrcrelationship command to check the status. The primary vdisk in this example is the auxiliary vdisk. While resynchronizing, the state is inconsistent_copying until the HyperSwap volume is synchronized, as shown in Example 11-28.
Example 11-28 HyperSwap volume relationship using old up-to-date copy and discarding the used stale copy
IBM_FlashSystem:TestCluster:superuser>lsrcrelationship HS_Vol_2_rel
id 6
name HS_Vol_2_rel
master_cluster_id 000002032060460E
master_cluster_name TestCluster
master_vdisk_id 6
master_vdisk_name HS_Vol_2_Mas
aux_cluster_id 000002032060460E
aux_cluster_name TestCluster
aux_vdisk_id 8
aux_vdisk_name HS_Vol_2_Aux
primary aux
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 56
freeze_time
status online
sync
copy_type activeactive
cycling_mode
cycle_period_seconds 300
master_change_vdisk_id 7
master_change_vdisk_name HS_Vol_2_Mas_CV
aux_change_vdisk_id 9
aux_change_vdisk_name HS_Vol_2_Aux_CV
11.9 Disaster Recovery with Consistency Groups
All the descriptions in 11.8, “Disaster Recovery with HyperSwap” on page 449 (and after) about enabling access to a stale copy of a HyperSwap volume also apply to HyperSwap consistency groups, that is, multiple HyperSwap volumes whose active-active relationships are contained in a single consistency group.
During resynchronization, if any up-to-date copy of a HyperSwap volume in a consistency group is offline or unavailable (in a disaster, typically all of them would be offline), you can choose to enable access to the stale copy of every HyperSwap volume in the consistency group. Because the HyperSwap function links replication and failover across the HyperSwap volumes in a consistency group, it is assured that during resynchronization all copies on one site have a stale but consistent copy of data, captured at an identical point in time, which is ideal for disaster recovery.
The startrcconsistgrp and stoprcconsistgrp commands are the consistency group versions of the startrcrelationship and stoprcrelationship commands used in section 11.8, “Disaster Recovery with HyperSwap” on page 449.
The stoprcconsistgrp command is used to gain access to the stale copies:
stoprcconsistgrp -access <consistency_group>
When restarting the consistency group, you can either retain access to the stale copies or revert to the previous up-to-date copies. Use the lsrcconsistgrp command to detect the current primary (Example 11-29).
Example 11-29 The lsrcconsistgrp command to detect current primary
lsrcconsistgrp <consistency_group>
primary master
or
primary aux
To retain the stale disaster recovery copies currently visible to hosts, and resume HyperSwap replication while discarding the data of the previous up-to-date copies, use the following command:
startrcconsistgrp -primary <current_primary> -force <consistency_group>
To revert to the previous up-to-date copy and discard the changed data on the stale copies, the following command should be used while the host has no access to the HyperSwap volume, as described in 11.8.2, “Going back to the up-to-date copy” on page 453:
startrcconsistgrp -primary <current_secondary> -force <consistency_group>
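Using the consistency group created earlier, the whole sequence could look like the following sketch. It assumes, as in the single-relationship example, that the stale copies being accessed are the master vdisks, so the primary attribute shows master after the group is stopped:
stoprcconsistgrp -access HS_ConsGrp_0
startrcconsistgrp -primary master -force HS_ConsGrp_0
To revert to the previous up-to-date copies instead, the second command would use -primary aux.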
11.10 The overridequorum command
FlashSystem V9000 provides the overridequorum command, which is used to override the tie-breaking performed by the system quorum if that tie-breaking left the system in an unusable state.
One scenario where this could be useful is a rolling disaster that first breaks the link between the two sites, resulting in the quorum deciding which site's nodes are allowed to continue. Next, the rolling disaster affects the chosen site's nodes, taking them offline. The entire system is unusable at this point because of how the tiebreak was resolved.
Use the overridequorum command on a node displaying an error code of 551 or 921 on the site you want to start manually:
satask overridequorum -force
When the overridequorum command is issued on a node displaying a 551 or 921 error, that site’s nodes use their cluster state to form a new cluster, with a new cluster ID, based on the system state at the point that the tiebreak stopped that site’s nodes from taking part in the cluster.
Other than the new cluster ID, this gives the appearance of reverting the system state to the point in time of that tiebreak, and because the restored nodes have system cache and local vdisk copies matching that point in time, the vdisk state is reverted to that point in time as well.
There is no specific interaction between HyperSwap volumes and the overridequorum command. If a HyperSwap volume copy local to the site brought online by the overridequorum command was up-to-date at the time of the lost tiebreak, it immediately comes online after the overridequorum command is run. Alternatively, if the copy was stale at the time of the lost tiebreak, access needs to be enabled to it with the stoprcrelationship command.
This command also removes the nodes in the other site. This means that HyperSwap volume copies on that site need to be deleted and re-created. For this initial release of the HyperSwap function, this is done by deleting the active-active relationship, and whichever of the vdisks are in the I/O groups that now have no nodes. Section 11.12.1, “Removing HyperSwap volumes completely” on page 457 has more details about unconfiguring HyperSwap.
When the HyperSwap volumes are converted to single-copy volumes on the online site, and the nodes in the other site have been readded to the system, you can then convert the single-copy volumes back to HyperSwap volumes as in section 11.5.8, “Creating HyperSwap volumes” on page 431.
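As a sketch of that recovery, assuming site 2 (I/O group 1) was the site removed and using the names from the earlier examples, the active-active relationship and the now-inaccessible vdisks are deleted:
rmrcrelationship HS_Vol_1_rel
rmvdisk -force HS_Vol_1_Aux
rmvdisk -force HS_Vol_1_Aux_CV
The -force flags are shown on the assumption that the offline state of these vdisks blocks a plain deletion; they might not be required in every case.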
11.11 HyperSwap Failure scenarios
Table 11-4 shows different failure scenarios and their effect on HyperSwap, hosts, and applications.
Table 11-4 HyperSwap failure scenarios

Failure scenario: Single switch failure.
HyperSwap system behavior: The system continues to operate by using an alternative path in the same fabric.
Server and application effect: None.

Failure scenario: Slow read or write performance to a copy (greater than 30 seconds response time).
HyperSwap system behavior: The system temporarily stops replicating to the slow copy, and resynchronizes after 5 minutes.
Server and application effect: None.

Failure scenario: Single data storage failure.
HyperSwap system behavior: The system continues to operate by using the other data copy.
Server and application effect: None.

Failure scenario: Single quorum storage failure on site 3.
HyperSwap system behavior: The system continues to operate using alternative storage at site 3.
Server and application effect: None.

Failure scenario: Failure of either site 1 or site 2.
HyperSwap system behavior: The system continues to operate on the remaining site.
Server and application effect: Servers without high availability (HA) functions in the failed site stop. Servers in the other site continue to operate. Servers with HA software functions are restarted by the HA software. The same disks are seen with the same UIDs in the surviving site, and continue to offer similar read and write performance as before the disaster.

Failure scenario: Failure of site 3, containing the active quorum disk.
HyperSwap system behavior: The system continues to operate on both sites 1 and 2, selecting a quorum disk from sites 1 and 2 to enable I/O processing to continue.
Server and application effect: None.

Failure scenario: Access loss between sites 1 and 2.
HyperSwap system behavior: The system continues to operate on the site that wins the quorum race. The cluster continues operation, while the nodes in the other site stop, waiting for connectivity between sites 1 and 2 to be restored.
Server and application effect: Servers without HA functions in the failed site stop. Servers in the other site continue to operate. Servers with HA software functions are restarted by the HA software. The same disks are seen with the same UIDs in the surviving site, and continue to offer similar read and write performance as before the disaster.

Failure scenario: Access loss between sites 1 and 2 because of a rolling disaster: one site is down and the other is still working; later, the working site also goes down because of the rolling disaster.
HyperSwap system behavior: The system continues to operate on the site that wins the quorum race, until that site also goes down. Even if the first site to go down comes back up, the whole system is considered offline until the site that won the quorum race comes back up. The system can be restarted using just the site that initially lost the quorum race, by using the overridequorum command. The HyperSwap volumes revert to the state they were in when that site lost the quorum race.
Server and application effect: Servers must be stopped before issuing this command, and restarted with the reverted state. Full read and write performance is given.
11.12 Unconfiguring HyperSwap
This section describes how to unconfigure HyperSwap, including removing the data, or keeping the data from either the primary or the auxiliary site.
11.12.1 Removing HyperSwap volumes completely
If you do not need any data on a HyperSwap volume, and want to delete all objects related to it, simply delete all four vdisks associated with the HyperSwap volume: the master vdisk, the auxiliary vdisk, and the two change volumes. As the vdisks are deleted, the active-active relationship and any host maps are deleted automatically.
Use the rmvdisk command. The -force parameter is needed for the master and auxiliary vdisk, as shown in example Example 11-30.
Example 11-30 The rmvdisk command to delete all four disks associated with a HyperSwap volume
rmvdisk -force hsVol0Mas
rmvdisk -force hsVol0Aux
rmvdisk hsVol0MasCV
rmvdisk hsVol0AuxCV
11.12.2 Converting to single-copy volumes, while retaining access through the master vdisk
If you want to go back to using single-copy volumes, you need to decide which copy to retain. If the master vdisk holds the copy to be retained, this conversion process is simple: delete the auxiliary vdisk and the two change volumes. The active-active relationship is automatically deleted, and a single-copy volume remains. From outside the system, there is no visible change, because hosts still have access to their data. Use the rmvdisk command. The -force parameter is needed because the active-active relationship is being deleted by the action (Example 11-31).
Example 11-31 The rmvdisk command to delete the auxiliary vdisk and two change volumes
rmvdisk -force hsVol0Aux
rmvdisk hsVol0MasCV
rmvdisk hsVol0AuxCV
If the converted HyperSwap volume was part of a consistency group, it is removed from that group. This means that you would not normally do this conversion for a subset of the HyperSwap volumes supporting a specific application. The host accesses the data on the master vdisk, so this conversion should normally only be done if the master copy is up-to-date. This can be seen by the active-active relationship either having a primary value of master, or a state value of consistent_synchronized.
11.12.3 Converting to single-copy volumes, while retaining access through the auxiliary vdisk
If you need to retain the auxiliary vdisk, there are two possible procedures:
Deleting the master disk using the rmvdisk command
Deleting the master disk using the rmvdisk -keepaux command and parameter
Deleting the master disk using rmvdisk
First, you must quiesce any host using this HyperSwap volume. When you delete the master vdisk and the two change volumes, the active-active relationship is automatically deleted, and only the auxiliary vdisk is left. Host maps are deleted as the master vdisk is deleted, and new host maps must be created for the remaining auxiliary vdisk. Use the rmvdisk command. The -force parameter is needed for the master vdisk (Example 11-32).
Example 11-32 rmvdisk command to delete the master vdisk and two change volumes
rmvdisk -force hsVol0Mas
rmvdisk hsVol0MasCV
rmvdisk hsVol0AuxCV
The host will access the data on the auxiliary vdisk, so this should normally only be done if the auxiliary copy is up-to-date. This can be seen by the active-active relationship either having a primary value of aux, or a state value of consistent_synchronized.
You must remap these auxiliary vdisks to the host systems, because the existing HyperSwap volume host maps were deleted with the master vdisks, so the vdisks are seen with new unique LUN IDs. Finally, you can redetect the volumes on the host systems, reconfigure them to use the auxiliary vdisks for I/O, and then resume host I/O.
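A sketch of the remapping, using the auxiliary vdisk from Example 11-32 and the host name from the earlier examples:
mkvdiskhostmap -host Windows_1 hsVol0Aux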
Deleting the master disk using rmvdisk -keepaux
An alternative procedure is to delete the master vdisk with the following command:
rmvdisk -force -keepaux <mastervdisk>
This command can be run while host I/O is running. It deletes the master vdisk's storage and replaces it with the auxiliary vdisk's storage, preserving the master vdisk ID, the master vdisk host maps, and the auxiliary vdisk storage. It also deletes the active-active relationship. Finally, delete the change volumes, which are not deleted as part of the previous step (Example 11-33).
Example 11-33 The rmvdisk command to delete change volumes
rmvdisk hsVol0MasCV
rmvdisk hsVol0AuxCV
This enables a clean-up of failed master storage without affecting host I/O access, potentially as part of replacing the master vdisk’s storage.
11.12.4 Converting to system topology standard
When all active-active relationships have been deleted, the FlashSystem V9000 topology can be changed to standard. Aspects of the system locked down in the HyperSwap system topology, for example node and controller sites, can then be changed. The topology of the system can be reverted to the standard topology using the chsystem command:
chsystem -topology standard
You can check the status with the lssystem command shown in Example 11-34.
Example 11-34 The lssystem command to check system topology
lssystem
...
topology standard
...
11.13 Summary of interesting object states for HyperSwap
This section describes the state values of different commands.
11.13.1 The lsvdisk command
The status attribute shown for a HyperSwap volume's master vdisk in lsvdisk shows whether hosts are able to access data, that is, whether the HyperSwap volume has access to up-to-date data, not whether the master vdisk itself is actually online. The status value for the auxiliary vdisk is always offline.
Running lsvdisk on a specific vdisk to get detailed information also shows the vdisk copy status value. The RC_id and RC_name attributes for both master and auxiliary vdisks show the active-active relationship supporting the HyperSwap volume (Example 11-35).
Example 11-35 The lsvdisk command on a specific disk to see vdisk copy status
lsvdisk HS_Vol_1_Mas
...
RC_id 0
RC_name HS_Vol_1_rel
RC_change no
 
...
11.13.2 The lsvdiskcopy command
The status attribute for vdisk copies supporting HyperSwap volumes shows whether the underlying storage is online (Example 11-36).
Example 11-36 The lsvdiskcopy command shows if underlying storage is online
lsvdiskcopy HS_Vol_1_Mas
vdisk_id vdisk_name copy_id status sync primary mdisk_grp_id mdisk_grp_name
0 HS_Vol_1_Mas 0 online yes yes 0 mdiskgrp_west
11.13.3 The lsrcrelationship or lsrcconsistgrp commands
These values are for stand-alone relationships seen with lsrcrelationship. Relationships in a consistency group must all share the same state, primary, and freeze_time field values, so they change value based on the condition of all the relationships in that consistency group. The consistency group itself shows the same values when queried using lsrcconsistgrp. The status attribute is the key attribute that tells you the HyperSwap volume copying status:
inconsistent_stopped
This HyperSwap volume only has useful data on the master vdisk, and the relationship’s change volumes are not both configured yet.
consistent_stopped
This HyperSwap volume only has useful data on the master vdisk, and the relationship's change volumes are not both configured yet, but the relationship was created with -sync, limiting the needed copying to only the data written to the master vdisk since the active-active relationship was created.
inconsistent_copying
This HyperSwap volume only has useful data on the master vdisk, but it is correctly configured and is performing initial synchronization.
consistent_synchronized
This HyperSwap volume is correctly configured, has up-to-date data on both vdisks, and is highly available to hosts, if addvdiskaccess has been run correctly.
consistent_copying
This HyperSwap volume is correctly configured, has had or is currently having a period of inaccessibility of one vdisk, leaving that vdisk consistent but stale (the freeze_time attribute will show when that stale data dates from). Access to that data can be provided in a disaster with the stoprcrelationship -access command, and resynchronization automatically will take place when possible.
idling
This HyperSwap volume is correctly configured, and has had access enabled to a stale but consistent copy by running stoprcrelationship -access when the active-active relationship was in a state of consistent_copying. Synchronization is paused, and can be resumed by running startrcrelationship -primary (master | aux) according to the direction that the relationship should resynchronize in.
The primary attribute will be master or aux, and tells you which copy is acting as the primary at the moment, and therefore which I/O group primarily processes I/O for this HyperSwap volume.
The status attribute shows online if all needed vdisks are online and are able to synchronize. Otherwise, it shows the reason why synchronization is not possible:
primary_offline
secondary_offline
primary_change_offline
secondary_change_offline
It shows one of the previous statuses if one of the vdisks of the HyperSwap volume is offline, or change_volumes_needed if the HyperSwap volume does not have both change volumes configured.
The progress attribute shows how similar the two copies are as a percentage, rounded down to the nearest percent. During resynchronization, this counts up to 100 as the HyperSwap volume nears being synchronized.
The freeze_time attribute shows at what point the data is frozen on a stale but consistent copy when the relationship has a state of consistent_copying. This enables the user to decide if there is value in using the data (with the stoprcrelationship -access command) if the up-to-date copy goes offline.
11.13.4 The lsfcmap command
The status attribute for FlashCopy mappings used by a HyperSwap volume shows if a FlashCopy mapping is currently used, for example during a resynchronization of vdisks of a HyperSwap volume after a vdisk failure.
Example 11-37 shows a FlashCopy mapping currently used during resynchronization.
Example 11-37 FlashCopy mapping currently used during resynchronization
lsfcmap fcmap1
...
name fcmap1
source_vdisk_name HS_Vol_1_Mas
target_vdisk_name HS_Vol_1_Mas_CV
status copying
progress 42
start_time 150804045444
...
 