Advanced software functions
In this chapter, we describe the advanced software functions that are available for the IBM FlashSystem V9000. These functions give FlashSystem V9000 rich functionality and create a full Tier 1 storage solution with enterprise-class storage mirroring plus a comprehensive set of storage virtualization and migration features.
The advanced features for storage efficiency increase the effective capacity of the FlashSystem V9000 beyond its physical capacity. This improves the economics of flash and can provide flash capacity for less than the cost of disk capacity. In addition, FlashSystem V9000 can speed up existing storage by automatically placing frequently accessed data on flash memory.
The data migration feature enables easy migration into the virtualized storage environment of the FlashSystem V9000 and supports easy movement of volumes between storage systems and tiers.
With the remote copy features, you can address the need for high availability (HA) and disaster recovery (DR) solutions. IBM FlashCopy enables faster backups and data duplication for testing.
Data encryption protects data on discarded or stolen flash modules and prevents access to the data without the access key.
This chapter includes the following topics:
3.1 Introduction
IBM FlashSystem V9000 offers advanced software functions for storage efficiency, data migration, high availability, and disaster recovery. This chapter provides an overview of these features, how they work, and how to use them.
 
Note: FlashSystem V9000 is based on IBM Spectrum Virtualize - IBM SAN Volume Controller technology. You can find more details about the advanced features in Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
3.2 Advanced features for storage efficiency
In modern and complex application environments, the increasing and often unpredictable demands for storage capacity and performance lead to issues of planning and optimization of storage resources.
Consider the following typical storage management issues:
Typically when a storage system is implemented, only a portion of the configurable physical capacity is deployed. When the storage system runs out of the installed capacity and more capacity is requested, a hardware upgrade is implemented to add physical resources to the storage system.
It is difficult to configure this new physical capacity to maintain an even spread of the overall storage resources. Typically, the new capacity is allocated to fulfill only new storage requests. The existing storage allocations do not benefit from the new physical resources. Similarly, the new storage requests do not benefit from the existing resources; only new resources are used.
In a complex production environment, optimizing storage allocation for performance is not always possible. The unpredictable rate of storage growth and the fluctuations in throughput requirements, measured in I/O operations per second (IOPS), often lead to inadequate performance.
Furthermore, the tendency to use even larger volumes to simplify storage management works against the granularity of storage allocation, and a cost-efficient storage tiering solution becomes difficult to achieve. With the introduction of high performing technologies, such as solid-state drives (SSD) or all flash arrays, this challenge becomes even more important.
The move to increasingly larger physical disk drive capacities means that previous access densities that were achieved with low-capacity drives can no longer be sustained.
All businesses have some applications that are more critical than others, and there is a need for specific application optimization. Therefore, a need exists to be able to relocate specific application data to faster storage media.
All of these issues deal with data placement and relocation capabilities or data volume reduction. Most of these challenges can be managed by keeping spare resources available and moving data, by using data mobility tools, or by using operating system features (such as host-level mirroring) to optimize storage configurations.
However, all of these corrective actions are expensive in terms of hardware resources, labor, and service availability. The ability to relocate data among physical storage resources dynamically, or to reduce the amount of data effectively, in a way that is transparent to the attached host systems, is becoming increasingly important.
3.2.1 IBM Easy Tier
IBM Easy Tier is a solution that combines functionality that can add value to other storage systems with the key strengths that IBM FlashSystem V9000 offers, such as IBM MicroLatency and maximum performance.
The great advantage of the tiering approach is the capability to automatically move the most frequently accessed data to the highest performing storage system. In this case, FlashSystem V9000 is the highest performing storage, and the less frequently accessed data can be moved to slower external storage, which can be SSD-based storage or disk-based storage.
FlashSystem V9000 combines the lowest latency with the highest functionality, and it can provide that low latency for clients that use traditional disk array storage and need to increase the performance of their critical applications.
Use of the IBM Easy Tier solution is indicated when there is a need to accelerate general workloads. FlashSystem V9000 maintains a map of the hot data (more frequently accessed) and the cold data (less frequently accessed), and it moves the hot data to internal storage and the cold data to external storage.
When data that was previously hot becomes cold, it moves from the FlashSystem V9000 to external storage. The inverse process occurs when cold data becomes hot (or more frequently accessed) and is moved from an external storage system to the FlashSystem V9000 internal storage.
Figure 3-1 shows the principles of Easy Tiering, where hot extents are placed on flash, warm extents are placed on serial-attached Small Computer System Interface (SAS) hard disk drives (HDDs), and cold extents are placed on nearline SAS (NL-SAS) disks.
Figure 3-1 Principles of Easy Tier
This solution focuses on accelerating and consolidating the storage infrastructure. It might not reach the same low latency that an all-flash FlashSystem V9000 solution offers, but it is used to improve the overall performance of the infrastructure.
IBM Easy Tier is a function that responds to the presence of flash storage in a storage pool that also contains hard disk drives (HDDs). The system automatically and nondisruptively moves frequently accessed data from HDD managed disks (MDisks) to flash mdisks, therefore placing such data in a faster tier of storage.
Easy Tier eliminates manual intervention when you assign highly active data on volumes to faster responding storage. In this dynamically tiered environment, data movement is seamless to the host application regardless of the storage tier in which the data belongs. Manual controls exist so that you can change the default behavior, for example, by turning off Easy Tier on pools that contain any combination of the three types of mdisks.
The FlashSystem V9000 supports these tiers:
Flash tier: The flash tier exists when flash mdisks are in the pool. The flash mdisks provide greater performance than enterprise or nearline mdisks.
Enterprise tier: The enterprise tier exists when enterprise-class mdisks are in the pool, such as those built from serial-attached SCSI (SAS) drives.
Nearline tier: The nearline tier exists when nearline-class mdisks are used in the pool, such as those built from nearline SAS drives.
If a pool contains a single type of mdisk and Easy Tier is set to Auto (default) or On, balancing mode is active. When the pool contains multiple types of mdisks and Easy Tier is set to Auto or On, then automatic placement mode is added in addition to balancing mode. If Easy Tier is set to Off, balancing mode is not active. All external mdisks are put into the Enterprise tier by default. You must manually identify external mdisks and change their tiers.
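The following Python sketch (not product code) restates this rule as a small helper function; the setting names and return values are simplifications chosen for the example.

```python
def easy_tier_functions(easy_tier_setting: str, tiers_in_pool: int) -> set:
    """Return which Easy Tier functions are active for a storage pool.

    Models the rule described above: with Auto or On, balancing is always
    active, and automatic data placement is added when the pool contains
    more than one tier of mdisks.  With Off, neither function is active.
    """
    setting = easy_tier_setting.lower()
    if setting == "off":
        return set()
    if setting in ("auto", "on"):
        active = {"balancing"}
        if tiers_in_pool > 1:
            active.add("automatic data placement")
        return active
    raise ValueError(f"unknown Easy Tier setting: {easy_tier_setting}")

print(easy_tier_functions("Auto", tiers_in_pool=1))  # default single-tier pool: balancing only
print(easy_tier_functions("On", tiers_in_pool=2))    # multitier pool: balancing plus placement
```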
 
Note: Easy Tier works with compressed volumes, but it migrates data based only on read activity.
Balancing mode
This feature assesses the performance disparity between mdisks in a pool and balances extents in the pool to correct that disparity. Balancing works across all the mdisks of a single-tier pool, or within each tier of a multitier storage pool. Balancing does not move data between tiers; it balances data within each tier based on performance. Moving data between tiers is the task of the Easy Tier automatic data placement function.
The process automatically balances existing data when new mdisks are added to an existing pool, even if the pool contains only a single type of storage. This does not mean that it migrates extents from existing mdisks to achieve an even extent distribution among all old and new mdisks in the storage pool; Easy Tier balancing is based on the performance, not the capacity, of the underlying mdisks.
Automatic data placement mode
When IBM Easy Tier on FlashSystem V9000 automatic data placement is active, Easy Tier measures the host access activity to the data on each storage extent. It also provides a mapping that identifies high activity extents, and then moves the high-activity data according to its relocation plan algorithms.
To automatically relocate the data, Easy Tier initiates the following processes:
Monitors volumes for host access to collect average usage statistics for each extent over a randomly generated period, averaging every 17 - 24 hours.
Analyzes the amount of input/output (I/O) activity for each extent, relative to all other extents in the pool to determine if the extent is a candidate for promotion or demotion.
Develops an extent relocation plan for each storage pool to determine exact data relocations within the storage pool. Easy Tier then automatically relocates the data according to the plan.
While relocating volume extents, Easy Tier follows these actions:
Attempts to migrate the most active volume extents first.
Refreshes the task list as the plan changes. The previous plan and any queued extents that are not yet relocated are abandoned.
Automatic data placement is enabled, by default, for storage pools with more than one tier of storage. When automatic data placement is enabled, by default all striped volumes are candidates for automatic data placement. Image mode and sequential volumes are never candidates for automatic data placement. When automatic data placement is enabled, I/O monitoring is done for all volumes, whether or not the volume is a candidate for automatic data placement.
After automatic data placement is enabled, and if there is sufficient activity to warrant relocation, extents begin to be relocated within a day after enablement. You can control whether Easy Tier automatic data placement and I/O activity monitoring is enabled or disabled by using the settings for each storage pool and each volume.
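As a conceptual illustration of this relocation cycle, the following Python sketch ranks extents by measured activity and builds a simple promote and demote plan. It is not the actual Easy Tier algorithm; the data structures, the idea of counting free flash extents, and the demotion rule are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    tier: str         # "flash", "enterprise", or "nearline"
    avg_iops: float   # activity gathered by the I/O monitoring process

def relocation_plan(extents, free_flash_extents):
    """Build a rough promote/demote plan from relative extent activity.

    The most active extents that are not already on flash are promoted while
    free flash extents remain; idle extents that sit on flash are demoted to
    the enterprise tier.  Real Easy Tier uses far more elaborate statistics.
    """
    plan = []
    for ext in sorted(extents, key=lambda e: e.avg_iops, reverse=True):
        if ext.tier != "flash" and free_flash_extents > 0:
            plan.append((ext.extent_id, ext.tier, "flash"))        # promote hot extent
            free_flash_extents -= 1
        elif ext.tier == "flash" and ext.avg_iops == 0:
            plan.append((ext.extent_id, "flash", "enterprise"))    # demote idle extent
            free_flash_extents += 1
    return plan

extents = [Extent(1, "enterprise", 900.0),
           Extent(2, "flash", 0.0),
           Extent(3, "nearline", 15.0)]
print(relocation_plan(extents, free_flash_extents=1))
```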
Evaluation mode
When IBM Easy Tier evaluation mode is enabled for a storage pool, Easy Tier collects usage statistics for all the volumes in the pool and monitors the storage use at the volume extent level. Easy Tier constantly gathers and analyzes monitoring statistics to derive moving averages for the past 24 hours.
Volumes are not monitored, and balancing is disabled when the Easy Tier attribute of a storage pool is set to off. Volumes are monitored when the Easy Tier attribute of a storage pool is set to measured.
For more details about Easy Tier, see these resources:
IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229
Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933
3.2.2 Thin provisioning
In a shared storage environment, thin provisioning is a method for optimizing the usage of available storage. It relies on allocation of blocks of data on demand versus the traditional method of allocating all of the blocks up front. This methodology eliminates almost all white space, helping to avoid the poor usage rates that occur in the traditional storage allocation method where large pools of storage capacity are allocated to individual servers but remain unused (not written to).
Thin provisioning can present more storage space to the hosts or servers that are connected to the storage system than is available in the storage pool.
An example of thin provisioning is when a storage system contains 5000 gigabytes (GB) of usable storage capacity, but the storage administrator maps volumes of 500 GB each to 15 hosts. In this example, the storage administrator makes 7500 GB of storage space visible to the hosts, even though the storage system has only 5000 GB of usable space, as shown in Figure 3-2 on page 81. In this case, not all 15 hosts can immediately use the full 500 GB that is provisioned to them. The storage administrator must monitor the system and add storage as needed.
Figure 3-2 Concept of thin provisioning
You can think of thin provisioning as being similar to airlines selling more tickets for a flight than there are physical seats, assuming that some passengers do not appear at check-in. They do not assign actual seats at the time of sale, which avoids each client having a claim on a specific seat number. The same concept applies to thin provisioning (the airline), IBM FlashSystem V9000 (the plane), and its volumes (the seats). The storage administrator (the airline ticketing system) must closely monitor the allocation process and set proper thresholds.
Configuring a thin-provisioned volume
Volumes can be configured as thin-provisioned or fully allocated. Thin-provisioned volumes are created with real and virtual capacities. You can still create volumes by using a striped, sequential, or image mode virtualization policy, as you can with any other volume.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the capacity of the volume that is reported to other IBM FlashSystem V9000 components (such as FlashCopy or remote copy) and to the hosts. For example, you can create a volume with a real capacity of only 100 GB but a virtual capacity of 1 terabyte (TB). The actual space that is used by the volume on FlashSystem V9000 is 100 GB, but hosts see a 1 TB volume. The default for the real capacity is 2% of the virtual capacity.
A directory maps the virtual address space to the real address space. The directory and the user data share the real capacity.
Thin-provisioned volumes are available in two operating modes:
The autoexpand mode
The non-autoexpand mode
You can switch the mode at any time. If you select the autoexpand feature, FlashSystem V9000 automatically adds a fixed amount of additional real capacity to the thin volume as required. Therefore, the autoexpand feature attempts to maintain a fixed amount of unused real capacity for the volume. This amount is known as the contingency capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and real capacity. Contingency capacity is used only when the pool is full. The default is autoexpand mode for volumes created with the graphical user interface (GUI).
A volume that is created without the autoexpand feature, and therefore has a zero contingency capacity, goes offline when its real capacity is used up and it must expand. Therefore, the preference is to use the autoexpand feature.
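The capacity terms used here can be summarized in a small Python model. This is an illustration only: the 2% default and the contingency behavior follow the description in this section, while the class and method names are invented for the example.

```python
class ThinVolume:
    """Toy model of a thin-provisioned volume with the autoexpand feature.

    Virtual capacity is what the host sees, real capacity is what is taken
    from the storage pool, and the contingency capacity is the amount of
    unused real capacity that autoexpand tries to keep available.
    """

    def __init__(self, virtual_gb, real_pct=2.0, autoexpand=True):
        self.virtual_gb = virtual_gb
        self.real_gb = virtual_gb * real_pct / 100.0   # default real capacity: 2% of virtual
        self.contingency_gb = self.real_gb             # initially equal to the real capacity
        self.used_gb = 0.0
        self.autoexpand = autoexpand

    def write(self, gb):
        self.used_gb += gb
        if self.autoexpand:
            # Grow the real capacity so that the contingency amount stays unused.
            needed = self.used_gb + self.contingency_gb
            self.real_gb = min(max(self.real_gb, needed), self.virtual_gb)
        elif self.used_gb > self.real_gb:
            raise RuntimeError("volume offline: real capacity exhausted")

vol = ThinVolume(virtual_gb=1024)   # the host sees a 1 TB volume
print(vol.real_gb)                   # 20.48 GB of real capacity allocated up front
vol.write(100)
print(vol.real_gb)                   # real capacity grew to keep the contingency free
```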
 
Warning threshold: Enable the warning threshold (by using email with Simple Mail Transfer Protocol (SMTP), or by using a Simple Network Management Protocol (SNMP) trap) when you are working with thin-provisioned volumes. You can enable it on the volume and on the storage pool side. When you do not use the autoexpand mode, set a warning threshold on the volume. Otherwise, the thin volume goes offline if it runs out of space. Use a warning threshold on the storage pool when working with thin-provisioned volumes.
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is recalculated.
A thin-provisioned volume can be converted non-disruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized.
The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so that grains that contain all zeros do not cause any real capacity to be used. If the volume has been in use for some time, it is possible that there is deleted space with data on it. Because FlashSystem V9000 uses a zero detection algorithm to free disk space in the transformation process, you might need to use a utility to zero the deleted space on the volume before starting the fully allocated to thin migration.
 
Tip: Consider the use of thin-provisioned volumes as targets in the FlashCopy relationships.
Thin-provisioned volumes save capacity only if the host server does not write to the whole volume. Whether a thin-provisioned volume works well partly depends on how the file system allocates space.
FlashSystem V9000 has a zero-detection feature for host writes that detects when servers write zeros to thin volumes, and ignores them rather than consuming space on the thin volume. This keeps the thin volume from growing to full size when, for example, a full format is issued before the disk is used.
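The following Python sketch illustrates the zero-detection idea at grain granularity. It is a conceptual model only; the grain size and function names are assumptions, not the product implementation.

```python
GRAIN_SIZE = 32 * 1024   # assumed grain size, for illustration only

def write_to_thin_volume(allocated_grains, offset, data):
    """Allocate real capacity only for grains that contain non-zero data.

    Grains that are entirely zero (for example, from a full format) are
    detected and skipped, so they never consume real capacity.
    """
    for i in range(0, len(data), GRAIN_SIZE):
        grain = data[i:i + GRAIN_SIZE]
        if grain.count(0) == len(grain):        # all-zero grain: nothing to store
            continue
        allocated_grains[(offset + i) // GRAIN_SIZE] = grain

allocated = {}
write_to_thin_volume(allocated, 0, bytes(GRAIN_SIZE * 4))   # a full format writes only zeros
print(len(allocated))                                        # 0 grains allocated
write_to_thin_volume(allocated, 0, b"application data".ljust(GRAIN_SIZE, b"\0"))
print(len(allocated))                                        # 1 grain allocated
```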
 
Important: Do not run defragmentation on thin-provisioned or flash volumes. The defragmentation process can write data to different areas of a volume, which can cause a thin-provisioned volume to grow up to its virtual size.
3.2.3 IBM Real-time Compression Software
The IBM Real-time Compression Software that is embedded in IBM FlashSystem V9000 addresses the requirements of primary storage data reduction, including performance. It does so by using a purpose-built technology that is called Real-time Compression, which uses the Random Access Compression Engine (RACE).
It offers the following benefits:
Compression for active primary data
IBM Real-time Compression can be used with active primary data. Therefore, it supports workloads that are not candidates for compression in other solutions. The solution supports online compression of existing data. Storage administrators can regain free disk space in an existing storage system without requiring administrators and users to clean up or archive data. This configuration significantly enhances the value of existing storage assets, and the benefits to the business are immediate. The capital expense of upgrading or expanding the storage system is delayed.
Compression for replicated or mirrored data
Remote volume copies can be compressed in addition to the volumes at the primary storage tier. This process reduces storage requirements in Metro Mirror and Global Mirror destination volumes also.
No changes to the existing environment are required
IBM Real-time Compression is part of FlashSystem V9000. It was designed with transparency in mind so that it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems. The solution is not apparent to hosts, so users and applications continue to work as-is. Compression occurs within the FlashSystem V9000 system.
Overall savings in operational expenses
More data is stored in a rack space, so fewer storage expansion enclosures are required to store a data set. This reduced rack space has the following benefits:
 – Reduced power and cooling requirements. More data is stored in a system, which requires less power and cooling per gigabyte of used capacity.
 – Reduced software licensing for external storage. More data that is stored per external storage reduces the overall spending on licensing.
 – Reduced price per capacity because of an increased effective capacity. With this, you can realize flash capacity for less than the cost of disk.
Disk space savings are immediate
 
Tip: Implementing compression in FlashSystem V9000 provides the same benefits to internal storage and externally virtualized storage systems.
The space reduction occurs when the host writes the data. This process is unlike other compression solutions in which some or all of the reduction is realized only after a post-process compression batch job is run.
The license for compression is included in the FlashSystem V9000 base license for internal storage. To use compression on external storage, licensing is required. With FlashSystem V9000, Real-time Compression is licensed by capacity, per terabyte of virtual data.
Common use cases
This section addresses the most common use cases for implementing compression:
General-purpose volumes
Databases
Virtualized infrastructures
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home directories, computer aided design/computer aided manufacturing (CAD/CAM), oil and gas geoseismic data, and log data. Storing such types of data in compressed volumes provides immediate capacity reduction to the overall consumed space. More space can be provided to users without any change to the environment.
Many file types can be stored in general-purpose servers. The estimated compression ratios listed here are based on actual field experience. Expected compression ratios are 50 - 60%.
File systems that contain audio, video files, and compressed files are not good candidates for compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. Observing high compression ratios in database volumes is common. Examples of databases that can greatly benefit from Real-time Compression are IBM DB2®, Oracle, and Microsoft SQL Server. Expected compression ratios are 50 - 80%.
 
Important: Some databases offer optional built-in compression. When compressing already compressed database files, the compression ratio is much lower. Therefore, do not compress already-compressed data.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage space, with more virtual server images and backups kept online. The use of compression reduces the storage requirements at the source.
Examples of virtualization solutions that can greatly benefit from real-time compression are VMware, Microsoft Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios are 45 - 75%.
 
Tip: Virtual machines with file systems that contain compressed files are not good candidates for compression, as described in “General-purpose volumes”.
Real-time Compression concepts
The Random Access Compression Engine (RACE) technology is based on over 50 patents that are not primarily about compression. Instead, they define how to make industry-standard Lempel-Ziv (LZ) and Huffman compression of primary storage operate in real-time and allow random access. The primary intellectual property behind this is the RACE engine.
At a high level, the IBM RACE component compresses data that is written into the storage system dynamically. This compression occurs transparently, so connected hosts are not aware of the compression. RACE is an inline compression technology, meaning that each host write is compressed as it passes through the FlashSystem V9000 to the disks. This has a clear benefit over other compression technologies that are based on post-processing. These technologies do not provide immediate capacity savings; therefore, they are not a good fit for primary storage workloads, such as databases and active data set applications.
RACE is based on the LZ and Huffman lossless data compression algorithms and operates in a real-time method. When a host sends a write request, it is acknowledged by the write cache of the system, and then staged to the storage pool. As part of its staging, it passes through the compression engine and is then stored in compressed format onto the storage pool. Therefore, writes are acknowledged immediately after they are received by the write cache, with compression occurring as part of the staging to internal or external physical storage.
Capacity is saved when the data is written by the host, because the host writes are smaller when they are written to the storage pool.
IBM Real-time Compression is a self-tuning solution. It adapts to the workload that runs on the system at any particular moment.
Random Access Compression Engine
The IBM patented RACE implements an inverted approach when compared to traditional approaches to compression. RACE uses variable-size chunks for the input, and produces fixed-size chunks for the output.
This method enables an efficient and consistent method to index the compressed data, because it is stored in fixed-size containers.
Figure 3-3 shows random access compression.
Figure 3-3 Random access compression
Location-based compression
Both compression utilities and traditional storage systems compress data by finding repetitions of bytes within the chunk that is being compressed. The compression ratio of this chunk depends on how many repetitions can be detected within the chunk. The number of repetitions is affected by how much the bytes that are stored in the chunk are related to each other. The relation between bytes is driven by the format of the object. For example, an office document might contain textual information and an embedded drawing (like this page).
Because the chunking of the file is arbitrary, the file has no notion of how the data is laid out within the document. Therefore, a compressed chunk can be a mixture of the textual information and part of the drawing. This process yields a lower compression ratio, because the different data types mixed together cause a suboptimal dictionary of repetitions. That is, fewer repetitions can be detected because a repetition of bytes in a text object is unlikely to be found in a drawing.
This traditional approach to data compression is also called location-based compression. The data repetition detection is based on the location of data within the same chunk.
This challenge was also addressed with the predecide mechanism.
Predecide mechanism
Some data chunks have a higher compression ratio than others. Compressing some of the chunks saves little space, but still requires resources, such as processor (CPU) and memory. To avoid spending resources on uncompressible data, and to provide the ability to use a different, more effective (in this particular case) compression algorithm, IBM invented a predecide mechanism.
Chunks that fall below a given compression ratio are skipped by the compression engine, which saves CPU time and memory. Chunks that are not compressed with the main compression algorithm, but that can still be compressed well with the alternative algorithm, are marked and processed accordingly.
For blocks with a very high compression rate, the Huffman algorithm is used. The LZ algorithm is used for blocks with a normal compression rate, and the block is not compressed if it has a very low compression rate. This improves the write performance and the compression rate of the disk. The result might vary because the predecide mechanism does not check the entire block, only a sample of it.
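As a rough illustration of the predecide idea (not the actual implementation), the following Python sketch examines a sample of each block, estimates its compressibility by using zlib as a stand-in, and chooses a handling path. The thresholds and the sample size are invented for the example.

```python
import os
import zlib

def choose_compression(block: bytes, sample_size: int = 4096) -> str:
    """Decide how to handle a block from a small compressibility sample.

    Only a sample of the block is examined, so the decision can occasionally
    differ from what compressing the whole block would suggest.
    """
    sample = block[:sample_size]
    ratio = len(zlib.compress(sample)) / max(len(sample), 1)
    if ratio > 0.95:       # barely compressible: store the block uncompressed
        return "store uncompressed"
    if ratio < 0.30:       # very highly compressible: a lightweight pass is enough
        return "huffman only"
    return "lz"            # normal compression rate: use the main algorithm

print(choose_compression(b"A" * 65536))       # highly repetitive data
print(choose_compression(os.urandom(65536)))  # random, incompressible data
```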
Temporal compression
RACE offers a technology leap beyond location-based compression: Temporal compression.
When host writes arrive at RACE, they are compressed and fill up fixed-size chunks, also called compressed blocks. Multiple compressed writes can be aggregated into a single compressed block. A dictionary of the detected repetitions is stored within the compressed block. When applications write new data or update existing data, it is typically sent from the host to the storage system as a series of writes. Because these writes are likely to originate from the same application and be of the same data type, more repetitions are usually detected by the compression algorithm.
This type of data compression is called temporal compression because the data repetition detection is based on the time the data was written into the same compressed block. Temporal compression adds the time dimension that is not available to other compression algorithms. It offers a higher compression ratio because the compressed data in a block represents a more homogeneous set of input data.
Figure 3-4 shows (in the upper part) how three writes that are sent one after the other by a host end up in different chunks. They are compressed in different chunks because their locations in the volume are not adjacent. This yields a lower compression ratio because the same data must be compressed by using three separate dictionaries. When the same three writes are sent through RACE (in the lower part of the figure), the writes are compressed together by using a single dictionary. This yields a higher compression ratio than location-based compression.
Figure 3-4 Location-based versus temporal compression
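The effect can be demonstrated with a small Python experiment that uses zlib as a stand-in for the RACE compression algorithms. The example writes are invented; only the principle carries over: one shared dictionary for writes that arrive together, versus a separate dictionary for each write.

```python
import zlib

# Three related writes from the same application, for example log records.
writes = [f"2024-01-15 10:0{i} host42 app=payroll status=OK ".encode() * 100
          for i in range(3)]

# Location-based: each write lands in a different chunk and is compressed
# with its own dictionary of repetitions.
separate = sum(len(zlib.compress(w)) for w in writes)

# Temporal: writes that arrive close together share one compressed block,
# so repetitions across all three writes use a single dictionary.
together = len(zlib.compress(b"".join(writes)))

print(separate, together)   # the combined block is noticeably smaller
```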
For more details about Real-time Compression, see these Redbooks publications:
IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229
Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933
Accelerate with IBM FlashSystem V840 Compression, REDP-5147
Compression of existing data
In addition to compressing data in real time, it is also possible to compress existing data sets. To do that, you add a compressed mirrored copy of an existing volume. After the copies are fully synchronized, you can delete the original, non-compressed copy. This process is nondisruptive, so the data remains online and accessible by applications and users.
With this capability, you can regain space from storage pools, which can then be reused for other applications. With virtualization of external storage systems, the ability to compress already-stored data significantly enhances and accelerates the benefit to users.
Compression hardware
FlashSystem V9000 uses a secondary 8-core CPU and 32 GB memory for use with Real-time Compression. This additional, compression-dedicated hardware enables improved system performance when using compression. FlashSystem V9000 offers the option to include two Intel Quick Assist compression acceleration cards based on the Coletto Creek chipset. These compression acceleration cards are configured by default and are required for Real-time Compression.
 
Requirement: To use the Real-time Compression on FlashSystem V9000, the two Quick Assist compression acceleration cards are required.
3.3 Data migration
By using the IBM FlashSystem V9000, you can change the mapping of volume extents to managed disk (mdisk) extents, without interrupting host access to the volume. This functionality is used when volume migrations are performed. It also applies to any volume that is defined on the FlashSystem V9000.
This functionality can be used for the following tasks:
Migrating data from older external storage to FlashSystem V9000 managed storage.
You can redistribute volumes to accomplish the following tasks:
 – Moving workload onto newly installed storage
 – Moving workload off old or failing storage, ahead of decommissioning it
 – Moving workload to rebalance a changed workload
Migrating data from one FlashSystem V9000 clustered system to another FlashSystem V9000 system.
Moving volume I/O caching between FlashSystem V9000 building blocks to redistribute workload across a FlashSystem V9000 system.
3.3.1 Migration operations
You can perform migration at the volume or the extent level, depending on the purpose of the migration. The following migration tasks are supported:
Migrating extents within a storage pool and redistributing the extents of a volume on the mdisks within the same storage pool.
Migrating extents off an mdisk (which is removed from the storage pool) to other mdisks in the same storage pool.
Migrating a volume from one storage pool to another storage pool.
Migrating a volume to change the virtualization type of the volume to image.
Moving a volume between building blocks nondisruptively.
3.3.2 Migrating data from an image mode volume
This section describes migrating data from an image mode volume to a fully managed volume. This type of migration is used to take an existing host logical unit number (LUN) and move it into the virtualization environment as provided by the FlashSystem V9000 system.
To perform any type of migration activity on an image mode volume, the image mode disk first must be converted into a managed mode disk.
The following mdisk modes are available:
Unmanaged mdisk
An mdisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged mdisk is not associated with any volumes and has no metadata that is stored on it. The FlashSystem V9000 does not write to an mdisk that is in unmanaged mode except when it attempts to change the mode of the mdisk to one of the other modes.
Image mode mdisk
Image mode provides a direct block-for-block translation from the mdisk to the volume. At this point, the back-end storage is partially virtualized. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode mdisk is associated with exactly one volume.
Managed mode mdisk
Managed mode mdisks contribute extents to the pool of available extents in the storage pool. One or more volumes might use these extents.
The image mode volume can then be changed into a managed mode volume and is treated in the same way as any other managed mode volume.
Example for an image mode migration
A typical image mode migration consists of the following steps:
1. Connecting IBM FlashSystem V9000 to your storage area network (SAN) fabric:
a. Create storage zones.
b. Create host zones.
2. Getting your disk serial numbers.
3. Preparing your FlashSystem V9000 to virtualize external disks:
a. Create a storage pool.
b. Create the host definition.
c. Verify that you can see your storage subsystem.
d. Get your disk serial numbers.
4. Moving the LUNs to the FlashSystem V9000:
a. Shut down hosts.
b. Unmap and unmask the disks from the server.
c. Remap and remask the disks to the FlashSystem V9000.
d. From the FlashSystem V9000, discover the new disks by using Detect mdisk.
e. Prepare the server with the recommended multipathing driver and firmware.
f. Create image mode volume and map to server (an import wizard can automate this).
g. Restart server and applications.
5. Migrating the image mode volumes (virtual disks, known as VDisks).
6. Removing the storage system from FlashSystem V9000.
3.4 Advanced copy services
This section describes the advanced copy service features of the FlashSystem V9000.
With these features, you can address the need for high availability and disaster recovery solutions, while FlashCopy enables faster backups and data duplication for testing.
3.4.1 FlashCopy
By using the FlashCopy function of the FlashSystem V9000 you can perform a point-in-time copy of one or more volumes. In this section, we describe the usage scenarios of FlashCopy and provide an overview of its configuration and use.
You can use FlashCopy to help you solve critical and challenging business needs that require duplication of data of your source volume. Volumes can remain online and active while you create crash consistent copies of the data sets. Because the copy is performed at the block level, it operates below the host operating system and cache and, therefore, is not apparent to the host.
 
Important: Because FlashCopy operates at the block level, below the host operating system and cache, those levels must be flushed to produce application-consistent FlashCopy copies. Failing to flush the host operating system and application cache produces what is referred to as a crash-consistent copy.
While the FlashCopy operation is performed, the source volume is frozen briefly to initialize the FlashCopy bitmap and then I/O can resume. Although several FlashCopy options require the data to be copied from the source to the target in the background, which can take some time to complete, the resulting data on the target volume is presented so that the copy appears to complete immediately. This process is done by using a bitmap (or bit array), which tracks changes to the data after the FlashCopy is started and an indirection layer, which enables data to be read from the source volume transparently.
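The following Python sketch is a minimal model of the bitmap and the indirection layer. It is conceptual only: it models a copy-on-write (no background copy) mapping, and the grain-level structures and names are simplifications for the example.

```python
class FlashCopyMapping:
    """Toy copy-on-write point-in-time copy at grain granularity.

    The bitmap records which grains have already been copied to the target.
    Reads of uncopied target grains are redirected to the source (the
    indirection layer); a write to the source first preserves the old grain.
    """

    def __init__(self, source, num_grains):
        self.source = source
        self.target = [None] * num_grains
        self.copied = [False] * num_grains      # the FlashCopy bitmap

    def read_target(self, grain):
        # Indirection: uncopied grains are served transparently from the source.
        return self.target[grain] if self.copied[grain] else self.source[grain]

    def write_source(self, grain, data):
        # Copy-on-write: preserve the point-in-time data before overwriting it.
        if not self.copied[grain]:
            self.target[grain] = self.source[grain]
            self.copied[grain] = True
        self.source[grain] = data

source = ["A", "B", "C", "D"]
fc = FlashCopyMapping(source, num_grains=4)
fc.write_source(1, "B-updated")    # production data keeps changing
print(fc.read_target(1))            # 'B'  -- the point-in-time image is preserved
print(fc.read_target(2))            # 'C'  -- still read transparently from the source
```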
Use cases for FlashCopy
FlashCopy can address a wide range of technical needs. Common use cases for FlashCopy include, but are not limited to the following examples:
Backup improvements with FlashCopy
FlashCopy can be used to minimize and, under certain conditions, eliminate application downtime that is associated with performing backups or transfer the resource usage of performing intensive backups from production systems.
Restore with FlashCopy
FlashCopy can perform a restore from any existing FlashCopy mapping. Therefore, you can restore (or copy) from the target to the source of your regular FlashCopy relationships. This approach can be used for various applications, such as recovering your production database application after an errant batch process that caused extensive damage.
Moving and migrating data with FlashCopy
FlashCopy can be used to facilitate the movement or migration of data between hosts while minimizing downtime for applications. By using FlashCopy, application data can be copied from source volumes to new target volumes while applications remain online. After the volumes are fully copied and synchronized, the application can be brought down and then immediately brought back up on the new server that is accessing the new FlashCopy target volumes.
Application testing with FlashCopy
Often, an important step is to test a new version of an application or operating system that is using actual production data. FlashCopy makes this type of testing easy to accomplish without putting the production data at risk or requiring downtime to create a constant copy. You create a FlashCopy of your source and use that for your testing. This copy is a duplicate of your production data down to the block level so that even physical disk identifiers are copied. Therefore, distinguishing the difference is impossible for your applications.
FlashCopy attributes
The FlashCopy function in FlashSystem V9000 features the following attributes:
The target is the time-zero copy of the source, which is known as the FlashCopy mapping target.
FlashCopy produces an exact copy of the source volume, including any metadata that was written by the host operating system, logical volume manager, and applications.
The target volume is available for read/write (almost) immediately after the FlashCopy operation.
The source and target volumes must be the same “virtual” size.
The source and target volumes must be on the same FlashSystem V9000 system.
The source and target volumes do not need to be in the same building block or storage pool.
The storage pool extent sizes can differ between the source and target.
The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
The target volumes can be the source volumes for other FlashCopy relationships (cascaded FlashCopy).
Consistency groups are supported to enable FlashCopy across multiple volumes at the same time.
Up to 255 FlashCopy consistency groups are supported per system.
Up to 512 FlashCopy mappings can be placed in one consistency group.
The target volume can be updated independently of the source volume.
Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both nodes of the FlashSystem V9000 building block to prevent a single point of failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the completion of the background copy.
Space-efficient FlashCopy (called Snapshot in the GUI) uses disk space only when updates are made to the source or target data, and not for the entire capacity of a volume copy.
FlashCopy licensing is included for internal storage and based on the virtual capacity of the source volumes for external storage.
Incremental FlashCopy copies all of the data when you first start FlashCopy and then only the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy can substantially reduce the time that is required to re-create an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.
The maximum number of supported FlashCopy mappings is 4096 per FlashSystem V9000 system.
The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.
Configuring FlashCopy
The FlashSystem V9000 GUI provides three FlashCopy presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations.
These presets meet most FlashCopy requirements, but they do not provide support for all possible FlashCopy options. More specialized options are available by using Advanced FlashCopy. Using the Create Snapshot, Create Clone, or Create Backup options automatically creates a target volume of the same type as the source and puts that FlashCopy in the same pool. If different options are needed, use Advanced FlashCopy.
Figure 3-5 shows the FlashCopy GUI. In the FlashCopy panel, you can select the presets by right-clicking a volume. Select multiple volumes to create FlashCopies of multiple volumes as a consistency group.
Figure 3-5 FlashCopy GUI
Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an independent copy. Instead, it is used to maintain a view of the production data at the time that the snapshot is created. Therefore, the snapshot holds only the data from regions of the production volume that changed since the snapshot was created. Because the snapshot preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot uses the following preset parameters:
Background copy: None
Incremental: No
Delete after completion: No
Cleaning rate: No
Primary copy source pool: Target pool
Use case: Snapshot
The user wants to produce a copy of a volume without affecting the availability of the volume. The user does not anticipate many changes to be made to the source or target volume; a significant proportion of the volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk space that is required for the copy is reduced. Therefore, many snapshot copies can be used in the environment.
Snapshots are useful for providing protection against some corruptions or similar issues with the validity/content of the data, but they do not provide protection from problems affecting access to or data loss in the source volume. Snapshots can also provide a vehicle for performing repeatable testing (including “what-if” modeling that is based on production data) without requiring a full copy of the data to be provisioned.
Clone
The clone preset creates a replica of the volume, which can be changed without affecting the original volume. After the copy completes, the mapping that was created by the preset is automatically deleted.
Clone uses the following preset parameters:
Background copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
Primary copy source pool: Target pool
The Background copy rate can be changed at any time during the FlashCopy process on a clone or incremental FlashCopy.
Table 3-1 lists possible speeds for the background copy rate.
Table 3-1 Background copy rate
Background copy rate     Data copied per second
  1 - 10                 128 kilobytes (KB)
 11 - 20                 256 KB
 21 - 30                 512 KB
 31 - 40                   1 megabyte (MB)
 41 - 50                   2 MB
 51 - 60                   4 MB
 61 - 70                   8 MB
 71 - 80                  16 MB
 81 - 90                  32 MB
 91 - 100                 64 MB
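The pattern in Table 3-1 (128 KB per second for rates 1 - 10, doubling with every further band of 10) can be restated as a small Python helper. This is only a restatement of the table, not product code.

```python
def background_copy_kib_per_sec(copy_rate: int) -> int:
    """Map a background copy rate (1 - 100) to KiB copied per second.

    Rates 1 - 10 give 128 KiB per second, and every further band of 10
    doubles the throughput, up to 64 MiB per second for rates 91 - 100.
    """
    if not 1 <= copy_rate <= 100:
        raise ValueError("copy rate must be between 1 and 100")
    band = (copy_rate - 1) // 10        # 0 for 1-10, 1 for 11-20, and so on
    return 128 * 2 ** band

print(background_copy_kib_per_sec(50))    # 2048 KiB/s = 2 MB/s (clone preset default)
print(background_copy_kib_per_sec(100))   # 65536 KiB/s = 64 MB/s
```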
Use case: Clone
Users want a copy of the volume that they can modify without affecting the original volume. After the clone is established, there is no expectation that it is refreshed or that there is any further need to reference the original production data. If the source is thin-provisioned, the auto-created target is also thin-provisioned.
Backup
The backup preset creates a point-in-time replica of the production data. After the copy completes, the backup view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume.
Backup uses the following preset parameters:
Background copy rate: 50
Incremental: Yes
Delete after completion: No
Cleaning rate: 50
Primary copy source pool: Target pool
Use case: Backup
The user wants to create a copy of the volume that can be used as a backup if the source becomes unavailable, as in the case of the loss of the underlying physical controller. The user plans to periodically update the secondary copy and does not want to suffer the resource cost of creating a new copy each time (and incremental FlashCopy times are faster than a full copy, which helps to reduce the window where the new backup is not yet fully effective). If the source is thin-provisioned, the auto-created target is also thin-provisioned with this option.
Another use case is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without affecting the source volume’s performance.
FlashCopy consistency groups can be used to create consistent copies spanning multiple volumes. FlashCopy consistency groups are managed in the Consistency Groups panel. Figure 3-6 shows the Consistency Groups panel. Remote Copy Consistency Groups are managed from the Remote Copy panel. Here you can create, start, stop, delete, and rename FlashCopy consistency groups. In addition, you can move FlashCopy mappings to or remove them from consistency groups, look at related volumes, and start, stop, delete, rename, and edit them.
Figure 3-6 FlashCopy Consistency Groups
FlashCopy Mappings are managed from the FlashCopy Mappings panel shown in Figure 3-7. Use the Actions menu for tasks to create, start, stop, delete, rename, and edit FlashCopy mappings, look at related volumes, and move them to or remove them from consistency groups.
Figure 3-7 FlashCopy Mappings
IBM Spectrum Protect Snapshot (Tivoli Storage FlashCopy Manager)
IBM Spectrum Protect Snapshot (formerly IBM Tivoli® Storage FlashCopy Manager) provides fast application-aware backups and restores using advanced point-in-time image technologies in the FlashSystem V9000. In addition, it provides an optional integration with IBM Spectrum Protect (formerly IBM Tivoli Storage Manager) for the long-term storage of snapshots. IBM Spectrum Protect Snapshot is part of Virtual Storage Center (VSC), which is part of the IBM Spectrum Control family.
With IBM Spectrum Protect Snapshot (Tivoli Storage FlashCopy Manager), you can coordinate and automate host preparation steps before you issue FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush file system cache before starting the FlashCopy.
Tivoli Storage FlashCopy Manager also enables easier management of on-disk backups that use FlashCopy, and provides a simple interface to perform the “reverse” operation.
IBM Tivoli Storage FlashCopy Manager V4.1.2 supports the following applications:
VMware vSphere 6 environments with Site Recovery Manager (SRM) integration
Instant restore for Virtual Machine File System (VMFS) data stores
Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability Groups
IBM DB2
Oracle database
DB2 pureScale
Other applications can be supported through custom scripts
IBM Spectrum Protect Snapshot (Tivoli Storage FlashCopy Manager) can create FlashCopy backups from remote copy target volumes. This means that the backup does not have to be copied from the primary site to the secondary site, because it is already copied there through Metro Mirror (MM) or Global Mirror (GM). An application running in the primary site can have its backup taken in the secondary site, where the source of this backup is the remote copy target volumes.
For more information about Spectrum Protect Snapshot, see the following website:
3.4.2 Volume mirroring and migration options
Volume mirroring is a simple Redundant Array of Independent Disks 1 (RAID 1)-type function that enables a volume to remain online even when one of the storage pools backing it becomes inaccessible. Volume mirroring is designed to protect the volume from storage infrastructure failures by seamlessly mirroring between storage pools.
Volume mirroring is provided by a specific volume mirroring function in the I/O stack; it cannot be manipulated like a FlashCopy or other types of copy volumes. However, this feature provides migration functionality, which can be obtained by splitting the mirrored copy from the source or by using the “migrate to” function.
With volume mirroring you can move data to different mdisks within the same storage pool, or move data between different storage pools. The benefit of using volume mirroring over volume migration is that with volume mirroring storage pools do not need the same extent size as is the case with volume migration.
 
Note: Volume mirroring does not create a second volume before you split copies. Volume mirroring adds a second copy of the data under the same volume, so you end up with one volume presented to the host that has two copies of data connected to it. Only splitting copies creates another volume, and then each volume has only one copy of the data.
You can create a mirrored copy by right-clicking a volume and selecting Volume Copy Actions → Add Mirrored Copy (Figure 3-8).
Figure 3-8 Volume Mirror
By right-clicking a copy of a volume, you can split the copy into a new volume, validate the volume copy, and make that copy the primary copy for read I/O.
3.4.3 Remote Copy
In this section, we describe the Remote Copy services: synchronous remote copy, called Metro Mirror (MM); asynchronous remote copy, called Global Mirror (GM); and Global Mirror with Change Volumes.
The general application of remote copy services is to maintain two real-time synchronized copies of a disk. If the master copy fails, you can enable an auxiliary copy for I/O operation.
A typical application of this function is to set up a dual-site solution that uses two FlashSystem V9000 systems, but FlashSystem V9000 supports remote copy relationships to FlashSystem V840, IBM SAN Volume Controller, and Storwize V7000 systems.
The first site is considered the primary or production site, and the second site is considered the backup site or failover site, which is activated when a failure at the first site is detected.
Each FlashSystem V9000 can maintain up to three partner system relationships, which enables as many as four systems to be directly associated with each other. This FlashSystem V9000 partnership capability enables the implementation of disaster recovery (DR) solutions.
IP Partnerships
FlashSystem V9000 already supports remote copy over Fibre Channel (FC). Remote copy over Internet Protocol (IP) communication is supported on IBM FlashSystem V9000 systems by using Ethernet communication links. Remote copy over native IP provides a less expensive alternative to using Fibre Channel configurations.
With native IP partnership, the following Copy Services features are supported:
Metro Mirror
Referred to as synchronous replication, Metro Mirror provides a consistent copy of a source virtual disk on a target virtual disk. Data is written to the target virtual disk synchronously after it is written to the source virtual disk so that the copy is continuously updated.
Global Mirror and Global Mirror Change Volumes
Referred to as asynchronous replication, Global Mirror provides a consistent copy of a source virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously so that the copy is continuously updated. However, the copy might not contain the last few updates if a disaster recovery operation is performed. An added extension to Global Mirror is Global Mirror with Change Volumes. Global Mirror with Change Volumes is the preferred method for use with native IP replication.
 
Note: For IP partnerships, the suggested method of copying is Global Mirror with Change Volumes.
Intersite link planning
If you use IP partnership, you must meet the following requirements:
Transmission Control Protocol (TCP) ports 3260 and 3265 are used by systems for IP partnership communications. Therefore, these ports need to be open.
The maximum supported round-trip time between systems is 80 milliseconds (ms) for a 1 gigabit per second (Gbps) link.
The maximum supported round-trip time between systems is 10 ms for a 10 Gbps link.
For IP partnerships, the suggested method of copying is Global Mirror with Change Volumes.
This method is suggested because of the performance benefits. Also, Global Mirror and Metro Mirror might be more susceptible to the loss of synchronization.
The amount of intersite heartbeat traffic is 1 megabit per second (Mbps) per link.
The minimum bandwidth requirement for the intersite link is 10 Mbps. This, however, scales up with the amount of host I/O that you choose to do.
Consistency Groups
A Remote Copy Consistency Group can contain an arbitrary number of relationships up to the maximum number of MM/GM relationships that is supported by the FlashSystem V9000 system. MM/GM commands can be issued to a Remote Copy Consistency Group. Therefore, these commands can be issued simultaneously for all MM/GM relationships that are defined within that Consistency Group, or to a single MM/GM relationship that is not part of a Remote Copy Consistency Group.
For more details about advanced copy services, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
Figure 3-9 shows the concept of Metro Mirror Consistency Groups. The same applies to Global Mirror and FlashCopy Consistency Groups.
Figure 3-9 Metro Mirror Consistency Group
Because MM_Relationship 1 and 2 are part of the Consistency Group, they can be handled as one entity. Stand-alone MM_Relationship 3 is handled separately.
Metro Mirror
Metro Mirror establishes a synchronous relationship between two volumes of equal size. The volumes in a Metro Mirror relationship are referred to as the master (primary) volume and the auxiliary (secondary) volume. Traditional FC Metro Mirror is primarily used in a metropolitan area or geographical area, up to a maximum distance of 300 km (186.4 miles), to provide synchronous replication of data.
With synchronous copies, host applications write to the master volume, but they do not receive confirmation that the write operation completed until the data is written to the auxiliary volume. This action ensures that both volumes have identical data when the copy completes. After the initial copy completes, the Metro Mirror function always maintains a fully synchronized copy of the source data at the target site.
Metro Mirror has the following characteristics:
Zero recovery point objective (RPO)
Synchronous
Production application performance is affected by round-trip latency
Increased distance directly affects host I/O performance because the writes are synchronous. Use the requirements for application performance when you are selecting your Metro Mirror auxiliary location.
Consistency Groups can be used to maintain data integrity for dependent writes, which is similar to FlashCopy and Global Mirror Consistency Groups.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the master volume and the auxiliary volume to fail. In that case, Metro Mirror suspends writes to the auxiliary volume and allows I/O to the master volume to continue, to avoid affecting the operation of the master volumes.
Figure 3-10 shows how a write to the master volume is mirrored to the cache of the auxiliary volume before an acknowledgment of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time if it is needed in a failover situation.
Figure 3-10 Write on volume in Metro Mirror relationship
However, this process also means that the application is exposed to the latency and bandwidth limitations (if any) of the communication link between the master and auxiliary volumes. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, the use of traditional Fibre Channel Metro Mirror has distance limitations that are based on your performance requirements. FlashSystem V9000 does support up to 300 km (186.4 miles), but this greatly increases the latency, especially with flash memory.
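The synchronous write path can be sketched in Python as follows. This is illustrative only; the function names and the fixed link delay are assumptions that show where the intersite round trip enters the host write latency.

```python
import time

INTERSITE_RTT_SECONDS = 0.005   # assumed 5 ms round trip to the auxiliary site

def auxiliary_write_to_cache(volume, offset, data):
    """Stand-in for the mirrored write to the auxiliary system's cache."""
    time.sleep(INTERSITE_RTT_SECONDS)   # the link latency is paid on every write
    volume[offset] = data

def metro_mirror_write(master, auxiliary, offset, data):
    """Synchronous write: acknowledge the host only after both copies are updated."""
    master[offset] = data                              # write into the master's cache
    auxiliary_write_to_cache(auxiliary, offset, data)  # mirror before acknowledging
    return "ack to host"                               # the host sees the added round trip

master, auxiliary = {}, {}
start = time.perf_counter()
metro_mirror_write(master, auxiliary, 0, b"payload")
print(f"host write latency: {(time.perf_counter() - start) * 1000:.1f} ms")
```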
Metro Mirror features
FlashSystem V9000 Metro Mirror supports the following features:
Synchronous remote copy of volumes that are dispersed over metropolitan distances.
FlashSystem V9000 implements Metro Mirror relationships between volume pairs, with each volume in a pair managed by a FlashSystem V9000 system.
Intracluster Metro Mirror, where both volumes belong to the same system and building block.
Intercluster Metro Mirror, where each volume belongs to a separate FlashSystem V9000 system. All intercluster Metro Mirror processing occurs between two FlashSystem V9000 systems that are configured in a partnership.
Intercluster and intracluster Metro Mirror can be used concurrently.
For intercluster Metro Mirror, FlashSystem V9000 maintains a control link between two systems. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same Fibre Channel (FC) fabric connection that the FlashSystem V9000 uses for Metro Mirror I/O.
FlashSystem V9000 implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
The FlashSystem V9000 allows the resynchronization of changed data so that write failures that occur on the master or auxiliary volumes do not require a complete resynchronization of the relationship.
Metro Mirror attributes
The Metro Mirror function in FlashSystem V9000 possesses the following attributes:
A FlashSystem V9000 system partnership can be created between a FlashSystem V9000 system and another FlashSystem V9000, a FlashSystem V840, a SAN Volume Controller system, or an IBM Storwize V7000 operating in the replication layer (for intercluster Metro Mirror).
A Metro Mirror relationship is created between two volumes of the same size.
To manage multiple Metro Mirror relationships as one entity, relationships can be made part of a Metro Mirror Consistency Group, which ensures data consistency across multiple Metro Mirror relationships and provides ease of management.
After a Metro Mirror relationship is started and the background copy completes, the relationship becomes consistent and synchronized.
After the relationship is synchronized, the auxiliary volume holds a copy of the production data at the primary, which can be used for DR.
The auxiliary volume is in read-only mode when the relationship is active.
To access the auxiliary volume, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is allowed to the auxiliary. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.
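As a hypothetical illustration (the relationship name is assumed, and the syntax is subject to the firmware level), enabling write access to the auxiliary volume at the DR site from the CLI might look similar to the following:
stoprcrelationship -access MM_Relationship_1
lsrcrelationship MM_Relationship_1
After the relationship is stopped with access enabled, the remote host that is mapped to the auxiliary volume can issue write I/O to it, and lsrcrelationship can be used to confirm the relationship state.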
Global Mirror
The Global Mirror copy service is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source volume to a target volume.
Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The volumes in a Global Mirror relationship are referred to as the master (source) volume and the auxiliary (target) volume, which is the same as Metro Mirror.
Consistency Groups can be used to maintain data integrity for dependent writes, which is similar to FlashCopy and Metro Mirror Consistency Groups.
Global Mirror writes data to the auxiliary volume asynchronously, which means that host writes to the master volume receive confirmation that the write is complete before the I/O completes on the auxiliary volume.
Global Mirror has the following characteristics:
Near-zero RPO
Asynchronous
Production application performance is affected by I/O sequencing preparation time.
Asynchronous remote copy
Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, the write operations are completed on the primary site and the write acknowledgment is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform remote copy over distances that exceed the limitations of synchronous remote copy.
The Global Mirror function provides the same function as Metro Mirror remote copy, but over long-distance links with higher latency without requiring the hosts to wait for the full round-trip delay of the long-distance link. Figure 3-11 shows that a write operation to the master volume is acknowledged back to the host that is issuing the write before the write operation is mirrored to the cache for the auxiliary volume.
Figure 3-11 Global Mirror write sequence
The Global Mirror algorithms always maintain a consistent image on the auxiliary. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of write ordering and read stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary system. Therefore, it is not subject to the latency of the long-distance link. These two elements of the protocol ensure that the throughput of the total system can be grown by increasing system size while maintaining consistency across a growing data set.
In a failover scenario where the secondary site must become the master source of data, certain updates might be missing at the secondary site. Therefore, any applications that use this data must have an external mechanism for recovering the missing updates and reapplying them, for example, a transaction log replay.
Global Mirror is supported over FC, Fibre Channel over IP (FCIP), Fibre Channel over Ethernet (FCoE), and native IP connections.
FlashSystem V9000 Global Mirror features
FlashSystem V9000 Global Mirror supports the following features:
Asynchronous remote copy of volumes that are dispersed over global scale distances (up to 25,000 km or 250 ms latency).
FlashSystem V9000 implements the Global Mirror relationship between a volume pair, with each volume in the pair being managed by a FlashSystem V9000, FlashSystem V840, SAN Volume Controller, or Storwize V7000 system.
Intracluster Global Mirror where both volumes belong to the same system and building block.
Intercluster Global Mirror, in which each volume belongs to a separate FlashSystem V9000 system. A FlashSystem V9000 system can be configured for partnership with between one and three other systems.
Intercluster and intracluster Global Mirror can be used concurrently but not for the same volume.
FlashSystem V9000 does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the FlashSystem V9000 maintains a control link between the two systems. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the FlashSystem V9000 uses for Global Mirror I/O.
FlashSystem V9000 implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
FlashSystem V9000 implements flexible resynchronization support, enabling it to resynchronize volume pairs that experienced write I/Os to both disks and to resynchronize only those regions that changed.
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. It is useful in intracluster scenarios for testing purposes.
Global Mirror source and target volumes can be associated with Change Volumes.
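As a sketch only, with hypothetical volume and system names and syntax that can vary by firmware level, creating a Global Mirror relationship on the CLI differs from Metro Mirror primarily in the -global flag:
mkrcrelationship -master vol_master1 -aux vol_aux1 -cluster remote_V9000 -global -name GM_Relationship_1
startrcrelationship GM_Relationship_1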
Using Global Mirror with Change Volumes
Global Mirror is designed to achieve a recovery point objective (RPO) that is as low as possible so that data is as up-to-date as possible. This design places several strict requirements on your infrastructure. In certain situations, such as a low-quality network link or congested or overloaded hosts, you might be affected by multiple 1920 congestion errors.
Congestion errors happen in the following primary situations:
Congestion at the source site through the host or network
Congestion in the network link or network path
Congestion at the target site through the host, storage, or network
To address these issues, Change Volumes are an option for Global Mirror relationships. Change Volumes use the FlashCopy functionality, but they cannot be manipulated as FlashCopy volumes because they are for this special purpose only. Change Volumes replicate point-in-time images on a cycling period (the default is 300 seconds). Therefore, only the condition of the data at the point in time that the image was taken must be replicated, rather than all the updates that occurred during the period. The use of this function can provide significant reductions in replication volume.
Global Mirror with Change Volumes has the following characteristics:
Larger RPO
Point-in-time copies
Asynchronous
Possible system performance reduction, because point-in-time copies are created locally
Figure 3-12 shows a simple Global Mirror relationship without Change Volumes.
Figure 3-12 Global Mirror without Change Volumes
With Change Volumes, a FlashCopy mapping exists between the primary volume and the primary Change Volume. The mapping is updated on the cycling period (60 seconds to one day). The primary Change Volume is then replicated to the secondary Global Mirror volume at the target site, which is then captured in another Change Volume on the target site. This approach provides an always consistent image at the target site, and protects your data from being inconsistent during resynchronization.
If a copy does not complete in the cycle period, the next cycle does not start until the prior cycle completes. For this reason, the use of Change Volumes gives you the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, your RPO is twice the completion time. The next cycling period starts immediately after the prior cycling period is finished.
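For example, with the default 300-second cycling period, a replication cycle that completes within the period yields a worst-case RPO of about 600 seconds, whereas a cycle that takes 45 minutes to complete yields a worst-case RPO of about 90 minutes.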
Carefully consider your business requirements versus the performance of Global Mirror with Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for more frequent cycling periods. Therefore, selecting the shortest cycle period possible is not always the answer. In most cases, the default cycling period meets requirements and performs well.
 
Important: When you create your Global Mirror volumes with Change Volumes, make sure that you remember to select the Change Volume on the auxiliary (target) site. Failure to do so leaves you exposed during a resynchronization operation.
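As an illustration only, with hypothetical object names and syntax that can vary by firmware level, assigning Change Volumes to an existing Global Mirror relationship and enabling cycling mode might involve commands similar to the following (the relationship typically must be stopped before the cycling mode is changed, and the auxiliary Change Volume is assigned on the auxiliary system):
chrcrelationship -masterchange vol_master1_cv GM_Relationship_1
chrcrelationship -auxchange vol_aux1_cv GM_Relationship_1
chrcrelationship -cycleperiodseconds 300 GM_Relationship_1
chrcrelationship -cyclingmode multi GM_Relationship_1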
Configuring Remote Copy
Remote Copy relationships and consistency groups can be managed from the Remote Copy pane. There, you create, start, stop, rename, switch, and delete remote copy mappings and consistency groups. In addition, you can add mappings to and remove them from consistency groups. This is shown in Figure 3-13.
Figure 3-13 Remote Copy
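The same operations are also available from the CLI. As a hypothetical sketch (object names are assumed, and syntax is subject to the firmware level), stoprcconsistgrp -access stops a Consistency Group and enables write access to its auxiliary volumes, and switchrcconsistgrp -primary aux reverses the copy direction of a synchronized group:
stoprcconsistgrp -access CG1
switchrcconsistgrp -primary aux CG1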
3.5 Data encryption
The IBM FlashSystem V9000 provides optional encryption of data at rest, which protects against the potential exposure of sensitive user data and user metadata that are stored on discarded or stolen flash modules. Encryption of system data and metadata is not required, so system data and metadata are not encrypted.
Encryption logic is still implemented by the IBM FlashSystem V9000 while in the encryption-disabled state, but it uses a default, or well-known, key. Therefore, in terms of security, encryption-disabled is effectively the same as not encrypting at all.
In a system that is encryption-enabled, an access key must be provided to unlock the IBM FlashSystem V9000 so that it can transparently perform all required encryption-related functionality, such as encrypt on write and decrypt on read.
At system start (power on), the encryption key must be provided by an outside source so that the IBM FlashSystem V9000 can be accessed. The encryption key is provided by inserting one of the USB flash drives that were created during system initialization into an AC2 control enclosure in the solution.
Key encryption is protected by an Advanced Encryption Standard (XTS-AES) algorithm key wrap using the 256-bit symmetric option in XTS mode, as defined in the Institute of Electrical and Electronics Engineers (IEEE) 1619-2007 standard. An HMAC-SHA256 algorithm is used to create a hash message authentication code (HMAC) for corruption detection, and it is additionally protected by a system-generated cyclic redundancy check (CRC).
3.6 IBM HyperSwap
HyperSwap is new as of FlashSystem V9000 firmware 7.5. The HyperSwap capability enables each volume to be presented by two FlashSystem V9000 I/O groups. The configuration tolerates combinations of node and site failures, using the standard host multipathing drivers that are available for the IBM FlashSystem V9000.
The use of FlashCopy helps maintain a golden image during automatic resynchronization. Because remote mirroring is used to support the HyperSwap capability, Remote Mirroring licensing is a requirement for using HyperSwap on FlashSystem V9000.
 
Golden Image: The following notes describe how a golden image is created and used to resynchronize a broken HyperSwap relationship:
A HyperSwap relationship can become out of sync when, for example, one site goes offline. When the HyperSwap relationship is re-established, the two copies are out of sync.
Before the sync process starts, a FlashCopy is taken on the “not in sync” site. The FlashCopy uses the change volume that was assigned to that site during the HyperSwap setup.
This FlashCopy is now a golden image, so if the other site crashes or the sync process breaks, this FlashCopy contains the data before the sync process was started.
The golden image only exists during the resync of a broken and reestablished HyperSwap relationship.
3.6.1 Overview of HyperSwap
The HyperSwap high availability function in the FlashSystem V9000 software provides business continuity if hardware failure, power failure, connectivity failure, or disasters, such as fire or flooding, occur. HyperSwap is available on the IBM SAN Volume Controller, IBM Storwize V7000, Storwize V7000 Unified, Storwize V5000, and FlashSystem V9000 products.
The HyperSwap function provides highly available volumes accessible through two sites at up to 300 km apart. A fully independent copy of the data is maintained at each site. When data is written by hosts at either site, both copies are synchronously updated before the write operation is completed. The HyperSwap function automatically optimizes itself to minimize data transmitted between sites and to minimize host read and write latency.
If the nodes or storage at either site go offline, leaving an online and accessible up-to-date copy, the HyperSwap function will automatically fail over access to the online copy. The HyperSwap function also automatically resynchronizes the two copies when possible.
The HyperSwap function in the FlashSystem V9000 software works with the standard multipathing drivers that are available on a wide variety of host types, with no additional host support required to access the highly available volume. Where multipathing drivers support Asymmetric Logical Unit Access (ALUA), the storage system tells the multipathing driver which nodes are closest to it and should be used to minimize I/O latency. You need to tell the storage system which site a host is connected to, and it configures host pathing optimally.
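As a hedged illustration only (the host names are hypothetical, and the exact commands and procedure depend on the firmware level), assigning the system topology and the host sites from the CLI can look similar to the following:
chsystem -topology hyperswap
chhost -site 1 ESX_Host_A
chhost -site 2 ESX_Host_B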
For more information about how to use HyperSwap, see Chapter 11, “HyperSwap” on page 411.
3.7 IBM Spectrum Control (IBM Tivoli Storage Productivity Center)
IBM Tivoli Storage Productivity Center is data and storage management software for managing heterogeneous storage infrastructures. It helps to improve visibility, control, and automation for data and storage infrastructures. Organizations with multiple storage systems can simplify storage provisioning, performance management, and data replication.
Tivoli Storage Productivity Center simplifies the following data and storage management processes:
A single console for managing all types of data on disk, flash, file, and object storage systems.
Simplified visual administration tools, including an advanced web-based user interface, a VMware vCenter plug-in, and IBM Cognos® Business Intelligence with predesigned reports.
Storage and device management that gives you fast deployment with agentless device management, while intelligent presets improve provisioning consistency and control.
Integrated performance management that features end-to-end views, including devices, SAN fabrics, and storage systems. The server-centric view of the storage infrastructure enables fast troubleshooting.
Data replication management that provides Remote Mirror, snapshot, and copy management, and that supports Windows, Linux, UNIX, and IBM System z® data.
IBM Spectrum Protect Snapshot (formerly Tivoli Storage FlashCopy Manager), Tivoli Storage Productivity Center, and parts of the Virtual Storage Center (VSC) offering are all included in the IBM Spectrum Control family.
For more information about Tivoli Storage Productivity Center, see the following website:
For more information about IBM Data Management and Storage Management, see the following website:
 