System overview
This chapter provides an overview of IBM Spectrum Virtualize software and IBM Storwize V7000 architecture and components.
The first part of the chapter covers the IBM Storwize V7000 hardware components, and the software elements that build the IBM Storwize V7000 platform.
The second part of this chapter provides an overview of the management and support tools that help you maintain and operate the IBM Storwize V7000.
This chapter covers the following topics:
IBM Spectrum Virtualize
All of the topics included in this chapter are described in more detail later in this book.
2.1 IBM Spectrum Virtualize
IBM Spectrum Virtualize is a software-enabled storage virtualization engine that provides a single point of control for storage resources within the data center. IBM Spectrum Virtualize is the core software engine of well-established and industry-proven IBM storage virtualization solutions. These solutions include IBM SAN Volume Controller and all members of the IBM Storwize family of products (IBM Storwize V3700, IBM Storwize V5000, IBM Storwize V7000, and IBM FlashSystem V9000).
 
Naming: With the introduction of the IBM Spectrum Storage™ family, the software that runs on IBM SAN Volume Controller and IBM Storwize family products is called IBM Spectrum Virtualize. The name of the underlying hardware platform remains intact.
The objectives of IBM Spectrum Virtualize are to manage storage resources in your IT infrastructure, and to ensure that they are used to the advantage of your business. These processes take place quickly, efficiently, and in real time, while avoiding increases in administrative costs.
Although IBM Spectrum Virtualize is the core software engine of the whole family of IBM Storwize products (see Figure 2-1), the information in this book intentionally focuses on the deployment considerations of IBM Storwize V7000 Gen2 and Gen2+.
Throughout this book, the term IBM Storwize V7000 is used to refer to both models of the IBM Storwize V7000 when the text applies similarly to both.
Figure 2-1 IBM Spectrum Virtualize software
2.2 Storage virtualization
Storage virtualization, like server virtualization, is one of the foundations of building a flexible and reliable infrastructure solution, and it enables companies to better align IT resources with business needs. Storage virtualization enables an organization to achieve affordability and manageability by implementing storage pools across several physically separate disk systems (which might be from different vendors).
Storage can then be deployed from these pools, and can be migrated transparently between pools without interruption to the attached host systems and their applications. Storage virtualization provides a single set of tools for advanced functions, such as instant copy and remote mirroring solutions, which enables faster and seamless deployment of storage regardless of the underlying hardware.
Because the storage virtualization that IBM Spectrum Virtualize provides is a software-enabled function, it offers features that are typically not available on a purely hardware-based storage subsystem. These include, but are not limited to, the following features:
Data compression
Software and hardware encryption
IBM Easy Tier for workload balancing
Thin provisioning
Mirroring and copy services
Interface to Cloud Service Providers
Figure 2-2 shows these features at a glance.
Figure 2-2 IBM Spectrum Storage virtualization
The top reasons to choose a software-defined storage virtualization infrastructure and benefit from IBM Spectrum Virtualize are as follows:
Lower license cost. Avoid purchasing licenses from multiple storage vendors for advanced features (replication, tiering, snapshots, compression, and so on) of each external storage subsystem. Manage them centrally from IBM Spectrum Virtualize.
Fit more data on existing physical disks. External SAN disk arrays have physical boundaries: although one subsystem might run out of space, another has free capacity. Virtualization removes these boundaries.
Choose lower-cost disk arrays. Less-demanding applications can easily run on cheaper disks with lower performance. IBM Spectrum Virtualize automatically and transparently moves data up and down between low-performance and high-performance disk arrays (tiering).
Cloud FlashCopy. IBM Spectrum Virtualize provides an interface to external cloud service providers. Cloud FlashCopy supports full and incremental backup and restore from cloud snapshots.
Quick adoption of new technologies. IBM Spectrum Virtualize seamlessly integrates innovations in storage technology, such as new array types, new disk vendors, and so on.
Extended high availability (HA). Cross-site virtualization, workload migration, and copy services enhance options for deployment of high availability scenarios or disaster recovery (DR) solutions (IBM SAN Volume Controller Enhanced Stretched Cluster, IBM HyperSwap).
2.3 IBM Storwize V7000 overview
The IBM Storwize V7000 solution is built on IBM Spectrum Virtualize software, and provides a modular storage system that includes the capability to virtualize its internal storage and external SAN-attached storage.
The IBM Storwize V7000 system provides several configuration options that are aimed at simplifying the implementation process. These configuration options conform to different implementation scenarios regarding the size of your data center and your SAN and local area network (LAN) topology. The IBM Storwize V7000 system is a clustered, scalable, midrange storage system that is easy to deploy and easy to use.
Figure 2-3 shows a high-level overview of IBM Storwize V7000.
Figure 2-3 IBM Storwize V7000 overview
The IBM Spectrum Virtualize software that runs on IBM Storwize V7000 provides a GUI that enables storage to be deployed quickly and efficiently. The GUI is served by the IBM Spectrum Virtualize code itself, so no separate management console is needed.
The management GUI contains a series of preestablished configuration options, called presets, that use common settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy mappings, and for setting up a RAID configuration, including traditional RAID and the newer distributed RAID (DRAID) feature.
An IBM Storwize V7000 solution provides a choice of up to 760 disk drives per system or 1024 disk drives per clustered system (using dense drawers). The solution uses SAS cables and connectors to attach to the optional expansion enclosures.
The IBM Storwize V7000 system supports a range of external disk systems similar to what IBM SAN Volume Controller supports today. See Figure 2-4 for a view of an IBM Storwize V7000 control enclosure.
Figure 2-4 Top-front view of a Storwize V7000 control enclosure
The IBM Storwize V7000 solution consists of 1 - 4 control enclosures and optionally, up to 36 expansion enclosures. It supports the intermixing of the different expansion enclosures. Each enclosure contains two canisters. Control enclosures contain two node canisters, and expansion enclosures contain two expansion canisters.
2.3.1 IBM Storwize V7000 models
The IBM Storwize V7000 consists of enclosures and drives. An enclosure contains two canisters that are seen as part of the enclosure, although they can be replaced independently.
 
Additional information: For the most up-to-date information about features, benefits, and specifications of IBM Storwize V7000 models, see:
The information in this book is valid at the time of writing and covers IBM Spectrum Virtualize V8.1, but as IBM Storwize V7000 matures, expect to see new features and enhanced specifications.
The IBM Storwize V7000 models are described in Table 2-1.
Table 2-1 IBM Storwize V7000 models
2076-AF1 (with two node canisters, Gen2+)
   Cache: 64, 128, or 256 gigabytes (GB)
   FC / iSCSI / SAS ports: 16 x 16 gigabit (Gb) / 6 x 1 Gb + 8 x 10 Gb / 4 x 12 Gb
   Drive slots: 24 x 2.5-inch (all flash)
   Power supply: Integrated dual power supplies with battery

2076-624 (with two node canisters, Gen2+)
   Cache: 64, 128, or 256 GB
   FC / iSCSI / SAS ports: 16 x 16 Gb / 6 x 1 Gb + 8 x 10 Gb / 4 x 12 Gb
   Drive slots: 24 x 2.5-inch
   Power supply: Integrated dual power supplies with battery

2076-524 (with two node canisters, Gen2)
   Cache: 32 or 64 GB
   FC / iSCSI / SAS ports: 4 x 16 Gb / 4 x 1 Gb + 4 x 10 Gb / 4 x 12 Gb
   Drive slots: 24 x 2.5-inch
   Power supply: Integrated dual power supplies with battery

2076-212 (with two expansion canisters)
   Cache: Not applicable (N/A)
   FC / iSCSI / SAS ports: -- / -- / 4 x 12 Gb
   Drive slots: 12 x 3.5-inch
   Power supply: Integrated dual power supplies

2076-224 (with two expansion canisters)
   Cache: N/A
   FC / iSCSI / SAS ports: -- / -- / 4 x 12 Gb
   Drive slots: 24 x 2.5-inch
   Power supply: Integrated dual power supplies

2076-12F (with two expansion canisters, Gen2)
   Cache: N/A
   FC / iSCSI / SAS ports: -- / -- / 4 x 12 Gb
   Drive slots: 12 x 3.5-inch
   Power supply: Integrated dual power supplies (attaches to 2076-524 and 2076-624 only)

2076-24F (with two expansion canisters, Gen2)
   Cache: N/A
   FC / iSCSI / SAS ports: -- / -- / 4 x 12 Gb
   Drive slots: 24 x 2.5-inch
   Power supply: Integrated dual power supplies (attaches to 2076-524 and 2076-624 only)

2076-92F (with two expansion canisters, Gen2)
   Cache: N/A
   FC / iSCSI / SAS ports: -- / -- / 4 x 12 Gb
   Drive slots: 92 x 3.5-inch
   Power supply: Integrated dual power supplies (attaches to 2076-524 and 2076-624 only)
Note: The first generation of control enclosures (2076 models 112, 124, 312, and 324) has been withdrawn from marketing. However, expansion enclosures 2076-212 and 2076-224 can still be ordered (see Table 2-1) because they attach only to those first-generation control enclosures. Intermixing control enclosures with expansion enclosures of a different generation is not a supported combination, and is refused by the IBM Spectrum Virtualize software.
The first generation of IBM Storwize V7000 hardware is not supported by IBM Spectrum Virtualize V8.1, and any attempt to upgrade to V8.1 is rejected by the software. The last supported version for the first-generation Storwize V7000 is V7.8.
2.3.2 IBM Storage Utility Offerings
The IBM 2076 Model U7A is the IBM Storwize V7000 with a three-year warranty, to be used in the Storage Utility Offering space. These models are physically and functionally identical to the Storwize V7000 model 624, except for target configurations and variable capacity billing. Variable capacity billing uses IBM Spectrum Control™ Storage Insights to monitor system usage, allowing allocated storage usage above a base subscription rate to be billed per TB, per month. Allocated storage is identified as storage that is allocated to a specific host (and is unusable to other hosts), whether data is written or not. For thin provisioning, only the data that is actually written is considered used. For thick provisioning, the total allocated volume space is considered used.
IBM Storage Utility Offerings include the IBM FlashSystem 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage. It is used to report on capacity used beyond the base subscription capacity, referred to as variable usage. The variable capacity usage is billed on a quarterly basis. This feature enables customers to grow or shrink their usage, and only pay for configured capacity.
IBM Storage Utility Offering models are provided for customers who can benefit from a variable capacity system, where billing is based on actually provisioned space above the base. The base subscription is covered by a three-year lease that entitles the customer to use the base capacity at no additional cost. If storage needs increase beyond the base capacity, usage is billed based on the average daily provisioned capacity per terabyte, per month, on a quarterly basis.
Example
A customer has a Storwize V5030 utility model with 2 TB nearline disks, for a total system capacity of 48 TB. The base subscription for such a system is 16.8 TB. During the months where the average daily usage is below 16.8 TB, there is no additional billing.
The system monitors daily provisioned capacity and averages those daily usage rates over the month term. The result is the average daily usage for the month.
If a customer uses 25 TB, 42.6 TB, and 22.2 TB in three consecutive months, Storage Insights calculates the overage (rounding to the nearest terabyte) as shown in Table 2-2.
Table 2-2 Overage calculation
Average daily usage (TB)    Base (TB)    Overage (TB)    To be billed (TB)
25                          16.8          8.2             8
42.6                        16.8         25.8            26
22.2                        16.8          5.4             5
The capacity billed at the end of the quarter is a total of 39 TB-months in this example.
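The overage arithmetic described above can be sketched in a few lines of Python. This is an illustration of the calculation only (billing itself is performed by IBM from Storage Insights data); the function names are hypothetical:

```python
# Illustrative sketch of the utility-model overage calculation described above:
# overage = average daily usage minus the base subscription, rounded to the
# nearest terabyte, summed over the three months of the quarter.

BASE_TB = 16.8  # base subscription for the 48 TB Storwize V5030 example

def monthly_overage_tb(average_daily_tb: float, base_tb: float = BASE_TB) -> int:
    """Overage for one month, rounded to the nearest terabyte (never negative)."""
    return max(0, round(average_daily_tb - base_tb))

def quarterly_billed_tb(monthly_averages_tb: list, base_tb: float = BASE_TB) -> int:
    """Total TB-months billed at the end of the quarter."""
    return sum(monthly_overage_tb(m, base_tb) for m in monthly_averages_tb)

overages = [monthly_overage_tb(m) for m in (25, 42.6, 22.2)]
print(overages)                               # [8, 26, 5]
print(quarterly_billed_tb([25, 42.6, 22.2]))  # 39
```

Running this against the three months in Table 2-2 reproduces the 8, 26, and 5 TB overages and the 39 TB-months quarterly total.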
Disk expansions (2076-24F for the Storwize V7000 and 2078-24F for the Storwize V5030) can be ordered with the system at the initial purchase, but cannot be added later through a miscellaneous equipment specification (MES). The expansions must have like-type and like-capacity drives, and must be fully populated. For example, on a Storwize V7000 utility model with twenty-four 7.68 TB flash drives in the controller, a 2076-24F with twenty-four 7.68 TB drives can be configured with the initial system. Expansion drawers do not apply to the FlashSystem 900 (9843-UF3). Storwize V5030 and Storwize V7000 utility model systems support up to 760 drives in the system.
The usage data collected by Storage Insights is used by IBM to determine the actual physical data provisioned in the system. This data is compared to the base system capacity subscription, and any provisioned capacity beyond that base subscription is billed per terabyte, per month, on a quarterly basis. The calculated usage is based on the average use over a specific month.
In a highly variable environment, such as managed or cloud service providers, this feature enables the system to be used only as much as is necessary during each month. Usage might increase or decrease, and will be billed accordingly. Provisioned capacity is considered capacity that is reserved by the system.
In thick-provisioned environments (available on FlashSystem 900 and Storwize), this is the capacity that is allocated to a host whether it has data written or not. For thin-provisioned environments (available on the Storwize), this is the data that is actually written and used. This difference is because of the different ways in which thick and thin provisioning use disk space.
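The difference between the two accounting models can be shown with a minimal sketch. The class, the 1 GiB grain granularity, and the method names are hypothetical simplifications, not the product's internals:

```python
# Minimal sketch of thick vs. thin capacity accounting as described above.
# The 1 GiB "grain" granularity and all names here are illustrative only.

class Volume:
    def __init__(self, size_gib: int, thin: bool):
        self.size_gib = size_gib
        self.thin = thin
        self.written = set()  # indices of written 1 GiB grains

    def write(self, grain: int):
        if not 0 <= grain < self.size_gib:
            raise ValueError("write beyond volume size")
        self.written.add(grain)

    def used_gib(self) -> int:
        # Thick: the whole allocation counts as used, written or not.
        # Thin: only the data that was actually written counts.
        return len(self.written) if self.thin else self.size_gib

thick = Volume(100, thin=False)
thin = Volume(100, thin=True)
for g in range(10):        # write 10 GiB to each 100 GiB volume
    thick.write(g)
    thin.write(g)
print(thick.used_gib())    # 100
print(thin.used_gib())     # 10
```

With identical 10 GiB writes, the thick volume is billed for its full 100 GiB allocation while the thin volume is billed only for the 10 GiB actually written.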
For more information, download this PDF:
These systems are available worldwide, but there are specific client and program differences by location. Consult with your IBM Business Partner or sales person for specifics.
2.3.3 IBM Storwize V7000 functions
The following functions are available with the current release of IBM Spectrum Virtualize:
Thin provisioning
Traditional fully allocated volumes allocate real physical disk capacity for an entire volume even if that capacity is never used. Thin-provisioned volumes allocate real physical disk capacity only when data is written to the logical volume.
Volume mirroring
Provides a single volume image to the attached host systems while maintaining pointers to two copies of data in separate storage pools. Copies can be on separate disk storage systems that are being virtualized. If one copy is failing, IBM Storwize V7000 provides continuous data access by redirecting input/output (I/O) to the remaining copy. When the copy becomes available, automatic resynchronization occurs.
FlashCopy
Provides a volume level point-in-time copy function for any storage being virtualized by IBM Spectrum Virtualize. This function creates copies for backup, parallel processing, testing, and development, and has the copies available almost immediately.
IBM Storwize V7000 includes the following IBM FlashCopy functions enabled by IBM Spectrum Virtualize V8.1:
 – Full or incremental copy
This function copies only the changes from either the source or target data since the last FlashCopy operation, and enables completion of point-in-time online backups much more quickly than using traditional FlashCopy.
 – Multitarget FlashCopy
IBM Storwize V7000 supports copying of up to 256 target volumes from a single source volume. Each copy is managed by a unique mapping and in general, each mapping acts independently and is not affected by other mappings sharing the source volume.
 – Cascaded FlashCopy
This function is used to create copies of copies, and supports full, incremental, or nocopy operations.
 – Reverse FlashCopy
This function enables data from an earlier point-in-time copy to be restored with minimal disruption to the host.
 – FlashCopy nocopy with thin provisioning
This function provides a combination of using thin-provisioned volumes and FlashCopy together to help reduce disk space requirements when making copies.
This option has two variations:
 • Space-efficient source and target with background copy
Copies only the allocated space.
 • Space-efficient target with no background copy
Copies only the space used for changes between the source and target, and is generally referred to as a snapshot.
This function can be used with multitarget, cascaded, and incremental FlashCopy.
 – Consistency groups
Consistency groups address the issue where application data is on multiple volumes. By placing the FlashCopy relationships into a consistency group, commands can be issued against all of the volumes in the group. This action provides a consistent point-in-time copy of all of the data, even if it is on a physically separate volume.
FlashCopy mappings can be members of a consistency group, or they can be operated in a stand-alone manner, not as part of a consistency group. FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group.
Metro Mirror
Provides a synchronous remote mirroring function up to approximately 300 kilometers (km) between sites. Because the host I/O only completes after the data is cached at both locations, performance requirements might limit the practical distance. Metro Mirror provides fully synchronized copies at both sites with zero data loss after the initial copy is completed. Metro Mirror can operate between multiple IBM Storwize systems and IBM SAN Volume Controllers.
Global Mirror
Provides a long-distance asynchronous remote mirroring function up to approximately 8,000 km between sites. With Global Mirror, the host I/O completes locally and the changed data is sent to the remote site later. This feature maintains a consistent, recoverable copy of data at the remote site, which lags behind the local site. Global Mirror can operate between multiple IBM Storwize systems and IBM SAN Volume Controllers.
External virtualization
IBM Storwize V7000 provides a data migration function that can be used to import external storage systems into the IBM Storwize V7000 system. The following tasks can be accomplished:
 – Move volumes nondisruptively onto a newly installed storage system.
 – Move volumes to rebalance a changed workload.
 – Migrate data from other back-end storage to IBM Storwize V7000-managed storage.
Software Encryption
IBM Storwize V7000 Gen2 and IBM Storwize V7000 Gen2+ provide optional encryption of data at rest, which protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption can be enabled and configured only on enclosures that support encryption.
Starting with IBM Spectrum Virtualize V7.6, IBM Spectrum Virtualize offers software-enabled encryption (hardware encryption was introduced in V7.4), which includes encryption of internal or external storage. Encryption keys can be stored on USB flash drives, on an IBM Security Key Lifecycle Manager server, or both.
IBM Easy Tier
Provides a mechanism to seamlessly migrate hot spots to the most appropriate tier within the IBM Storwize V7000 solution. This migration can be to internal drives within the IBM Storwize V7000, or to external storage systems that are virtualized by IBM Storwize V7000. Independently of Easy Tier, IBM Storwize V7000 provides automatic storage pool balancing, which is enabled by default and requires no license.
IBM Real-time Compression
IBM Real-time Compression technology is based on the Random Access Compression Engine (RACE). RACE has been an integral part of the IBM Spectrum Virtualize software stack since V6.4. This integration does not alter the behavior of the system, so previously existing features are supported for compressed volumes. Starting with IBM Spectrum Virtualize V7.6, compression runs on two software-enabled RACE engines that share hardware resources.
2.3.4 IBM Storwize V7000 licensing
With the broad range of technical features and capabilities of the IBM Storwize V7000 Gen2 and Gen2+, including Copy Services, External Virtualization, Easy Tier, and Real-time Compression, IBM simplified the licensing model to include these new features. The IBM Storwize V7000 offers two ways of license procurement:
Fully flexible
Bundled (license packages)
The license model is based on a license-per-enclosure concept familiar from the first generation of IBM Storwize V7000. However, the second generation offers more flexibility to match your specific needs.
 
Upgrade: Installing or upgrading the code on the first generation of IBM Storwize V7000 to the V8.1 does not change your existing license model or license needs.
The conceptual model of the licensing in IBM Storwize V7000 Gen2 and Gen2+ is depicted in Figure 2-5.
Figure 2-5 Licensing model on IBM Storwize V7000 Gen2 and Gen2+
The base module is represented by IBM Spectrum Virtualize family and is mandatory for every controller, enclosure, or externally managed controller unit. Additional licensed features can be purchased on-demand, either as a full software bundle or each feature separately.
Any additional licenses must be procured for every enclosure where they are planned to be used. For example, if you plan to build a disk array with compression enabled in one enclosure only (whether a control or an expansion enclosure), you need to purchase just one compression license.
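The per-enclosure counting rule can be sketched as follows. This is a simplified illustration of the concept (one base license per enclosure, feature licenses only where a feature is used), not IBM's ordering tool:

```python
# Simplified illustration of per-enclosure license counting as described above:
# the base license is mandatory for every enclosure, and each optional feature
# is licensed only for the enclosures where it is actually used.
from collections import Counter

def licenses_needed(enclosure_features: list) -> Counter:
    """enclosure_features[i] is the set of optional features used in enclosure i."""
    needed = Counter(base=len(enclosure_features))  # one base license per enclosure
    for features in enclosure_features:
        needed.update(features)
    return needed

# Three enclosures; compression is used in one of them only.
print(licenses_needed([{"compression"}, set(), set()]))
# Counter({'base': 3, 'compression': 1})
```

For the example in the text, three enclosures with compression enabled in one of them need three base licenses but only one compression license.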
2.4 IBM Storwize V7000 hardware
Along with the previous release of IBM Spectrum Virtualize, V7.3, IBM introduced a hardware refresh for the IBM Storwize V7000 platform (code name Gen2), with another update in V7.8 (Gen2+). These improvements are further enhanced with the most recent version of the IBM Spectrum Virtualize software, V8.1. This section introduces the hardware changes and software improvements associated with these updates. They include the following changes and improvements:
New internal component layout, such as canister and ports
Integrated battery pack within node canisters
Enhanced scalability and flexibility with 16 gigabits per second (Gbps) I/O adapters
Improved Real-time Compression engine with hardware assistance
Extended disk drive support
Cache upgrade option to 128 GB per canister
To meet these objectives, the base hardware configuration of the IBM Storwize V7000 Gen2 and Gen2+ was substantially improved to support more advanced processors, more memory, and faster data interfaces.
2.5 IBM Storwize V7000 components
The IBM Storwize V7000 is a modular, midrange virtualization RAID storage subsystem that employs the IBM Spectrum Virtualize software engine. It has the following benefits:
Brings enterprise technology to midrange storage
Specialty administrators are not required
Client setup and service are simplified
The system can grow incrementally as storage capacity and performance needs change
Multiple storage tiers are in a single system with nondisruptive migration between them
Simple integration can be done into the server environment
The IBM Storwize V7000 consists of a set of drive enclosures. Control enclosures contain disk drives and two nodes (an I/O group), which are attached to the SAN fabric or to 10-gigabit Ethernet (GbE). Expansion enclosures contain drives and are attached to control enclosures.
The simplest use of the IBM Storwize V7000 is as a traditional RAID subsystem. The internal drives are configured into RAID arrays, and virtual disks are created from those arrays. With IBM Spectrum Virtualize software, this usage is extended by the deployment of distributed RAID arrays, which shrink the reconstruction time of failed drives.
IBM Storwize V7000 supports spinning disks and flash drives. When different tiers are employed, the IBM Storwize V7000 uses IBM Easy Tier to automatically place volume hot spots on better-performing storage. Even without Easy Tier enabled, storage pool balancing is available and enabled by default to distribute workload equally across all MDisks in a tier.
2.5.1 Hosts
A host system is a server that is connected to IBM Storwize V7000 through a Fibre Channel connection, Fibre Channel over Ethernet (FCoE), or through an Internet SCSI (iSCSI) connection.
Hosts are defined to IBM Spectrum Virtualize by identifying their worldwide port names (WWPNs) for Fibre Channel hosts. For iSCSI hosts, they are identified by using their iSCSI names. The iSCSI names can either be iSCSI qualified names (IQNs) or extended unique identifiers (EUIs).
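The two host identifier styles can be sketched with simplified pattern checks. The regular expressions here are illustrative simplifications, not full validators for the Fibre Channel or iSCSI standards:

```python
# Simplified sketch of the two host identifier styles mentioned above.
# The patterns are illustrative, not complete validators of the standards.
import re

# A Fibre Channel WWPN is a 64-bit name, commonly written as 16 hex digits
# in colon-separated pairs.
WWPN = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$", re.IGNORECASE)

# An iSCSI qualified name (IQN) starts with "iqn.", a year-month, and a
# reversed domain name, optionally followed by ":" and a local identifier.
IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$", re.IGNORECASE)

print(bool(WWPN.match("50:05:07:68:01:40:b1:3f")))       # True
print(bool(IQN.match("iqn.1994-05.com.redhat:client1")))  # True
print(bool(IQN.match("not-an-iqn")))                      # False
```

The example WWPN and IQN values are hypothetical identifiers in the standard formats, not values tied to any particular system.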
2.5.2 Host cluster
A host cluster is a host object in the IBM Storwize V7000. A host cluster is a combination of two or more servers that are connected to IBM Storwize V7000 through a Fibre Channel, FCoE, or iSCSI connection. All hosts in a host cluster object can see the same set of volumes.
2.5.3 Nodes or canisters
An IBM Storwize V7000 can have 2 - 8 hardware components, called nodes, node canisters, or just canisters. They perform the virtualization of internal and external volumes, and the cache and copy services (remote copy) functions. A clustered system consists of 1 - 4 node pairs.
One of the canisters within the system is known as the configuration node. This is the node that manages configuration activity for the clustered system. If this node fails, the system automatically nominates another node to become the configuration node.
2.5.4 I/O groups
Within IBM Storwize V7000, there are 1 - 4 pairs of node canisters known as I/O groups. The IBM Storwize V7000 with installed IBM Spectrum Virtualize supports eight node canisters in the clustered system, which provides four I/O groups. When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are directed to the I/O group. Also, under normal conditions, the I/Os for that specific volume are always processed by the same node within the I/O group.
One node of the I/O group acts as the preferred node for its own specific subset of the total number of volumes that the I/O group presents to the host servers (a maximum of 2048 volumes per I/O group). However, each node also acts as a failover node for its partner node within the I/O group, so a node takes over the I/O workload from its partner node, if required, with no effect on the server's applications.
In an IBM Storwize V7000 environment, which uses an active/active architecture, the I/O handling for a volume can be managed by both nodes of the I/O group. Therefore, servers that are connected through Fibre Channel must use multipath device drivers to handle this capability.
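The preferred-node and failover behavior described above can be sketched as follows. The assignment policy (alternating by volume ID) and all names are simplifications for illustration, not the product's actual algorithm:

```python
# Illustrative sketch of preferred-node assignment and failover within an
# I/O group. Alternating volumes between nodes by volume ID is a
# simplification, not the product's real placement algorithm.

def preferred_node(volume_id: int, nodes=("node1", "node2")) -> str:
    """Spread volumes across the two nodes of the I/O group."""
    return nodes[volume_id % 2]

def serving_node(volume_id: int, online=frozenset({"node1", "node2"})) -> str:
    """The preferred node normally; the partner takes over if it is offline."""
    pref = preferred_node(volume_id)
    if pref in online:
        return pref
    partner = "node2" if pref == "node1" else "node1"
    if partner in online:
        return partner
    raise RuntimeError("I/O group offline")

print(serving_node(7))                               # node2 (preferred)
print(serving_node(7, online=frozenset({"node1"})))  # node1 (failover)
```

Because either node can serve a volume's I/O, hosts see the takeover as a path change, which is why multipath device drivers are required.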
The IBM Storwize V7000 I/O groups are connected to the SAN so that all application servers accessing volumes from the I/O group have access to them. Up to 2048 host server objects can be defined in four I/O groups.
 
Important: The active/active architecture provides availability to process I/Os for both controller nodes. It enables the application to continue running smoothly, even if the server has only one access route or path to the storage controller. This type of architecture eliminates the path and LUN thrashing typical of an active/passive architecture.
2.5.5 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive experience seek time and latency time at the drive level, which can result in 1 ms - 10 ms of response time (for an enterprise-class disk).
Cache is allocated in 4 KiB segments. A segment holds part of one track. A track is the unit of locking and destaging granularity in the cache. The cache virtual track size is 32 KiB (eight segments). A track might be only partially populated with valid pages. The IBM Storwize V7000 combines writes up to a 256 KiB track size before destage if the writes are in the same tracks: for example, if 4 KiB is written into a track and another 4 KiB is written to another location in the same track, they can be destaged together.
Therefore, the blocks that are written from the IBM Storwize V7000 to the disk subsystem can be any size between 512 bytes and 256 KiB. The large cache and advanced cache management algorithms improve the performance of many types of underlying disk technologies. The system's capability to manage, in the background, the destaging operations that are incurred by writes (while still ensuring full data integrity) helps the system achieve good database performance.
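The segment and track arithmetic above can be sketched in a few lines. The sizes are taken from the text; the offset-to-track mapping function is an illustration, not the product's internal data structure:

```python
# Sketch of the cache granularity described above: 4 KiB segments, a 32 KiB
# virtual track (eight segments), and combined destages of up to 256 KiB.
# The offset-to-track mapping below is illustrative only.

KIB = 1024
SEGMENT = 4 * KIB                        # cache allocation unit
TRACK = 32 * KIB                         # unit of locking and destage granularity
SEGMENTS_PER_TRACK = TRACK // SEGMENT    # 8
MAX_DESTAGE = 256 * KIB                  # largest combined write to the back end

def locate(offset_bytes: int):
    """(track index, segment index within the track) for a byte offset."""
    track = offset_bytes // TRACK
    segment = (offset_bytes % TRACK) // SEGMENT
    return track, segment

print(SEGMENTS_PER_TRACK)  # 8
print(locate(40 * KIB))    # (1, 2)
```

Two 4 KiB writes that land in the same track (same first tuple element) are candidates for being destaged together.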
The cache is separated into two layers: An upper cache and a lower cache.
Figure 2-6 shows the separation of the upper and lower cache.
Figure 2-6 Separation of upper and lower cache
The upper cache delivers the following functionality, which enables the system to streamline data write performance:
Provides fast write response times to the host by being as high up in the I/O stack as possible
Provides partitioning
The lower cache delivers the following additional functions:
Ensures that the write cache between two nodes is in sync
Partitions the cache to ensure that a slow back end cannot consume the entire cache
Uses a destage algorithm that adapts to the amount of data and the back-end performance
Provides read caching and prefetching
Combined, the two levels of cache also deliver the following functions:
Pins data when the LUN goes offline
Provides enhanced statistics for IBM Tivoli® Storage Productivity Center, and maintains compatibility with an earlier version
Provides trace for debugging
Reports medium errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Supports two-way operation
Supports none, read-only, and read/write as user-exposed caching policies
Supports flush-when-idle
Supports expanding cache as more memory becomes available to the platform
Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the two nodes of the I/O Group
Enables switching of the preferred node without needing to move volumes between I/O Groups
Depending on the size, age, and technology level of the disk storage system, the total available cache in the system can be larger, smaller, or about the same as the cache that is associated with the disk storage. Because hits to the cache can occur in either the IBM Storwize V7000 or at the disk controller level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever the cache is located. Therefore, if the storage controller level of the cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the IBM Storwize V7000 cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role in enabling sequentially organized data to flow smoothly through the system. The IBM Storwize V7000 cannot increase the throughput of the underlying disks in all cases because this increase depends on both the underlying storage technology and the degree to which the workload exhibits hotspots or sensitivity to cache size or cache algorithms.
IBM Spectrum Virtualize V7.3 introduced a major upgrade to the cache code. In association with IBM Storwize V7000 Gen2 hardware, it provided an additional cache capacity upgrade.
Cache limits were raised with IBM Storwize V7000 Gen2+ and V8.1. Before this release, the SVC memory manager (PLMM) could address only 64 GB of memory. In V8.1, the underlying PLMM was rewritten and the structure size increased. The cache can now be upgraded to as much as 128 GB per canister, and the entire memory can be used. However, the write cache is still limited to a maximum of 12 GB and the compression cache to a maximum of 34 GB. The remaining installed cache is used as read cache (including allocation for features such as FlashCopy, Global or Metro Mirror, and so on).
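The V8.1 allocation caps described above can be illustrated with a short sketch. This is illustrative arithmetic only: the real allocation is managed by the firmware, and the compression cache is only allocated when Real-time Compression is in use.

```python
def cache_allocation_gb(installed_gb):
    """Illustrative V8.1 per-canister cache split: write cache is capped
    at 12 GB, compression cache at 34 GB, and the remainder serves as
    read cache (caps taken from the text above)."""
    write = min(installed_gb, 12)
    compression = min(max(installed_gb - write, 0), 34)
    read = installed_gb - write - compression
    return {"write": write, "compression": compression, "read": read}

# A fully upgraded Gen2+ canister with 128 GB installed:
print(cache_allocation_gb(128))  # {'write': 12, 'compression': 34, 'read': 82}
```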
 
Note: When installing additional cache in your IBM Storwize V7000 Gen2+ and upgrading to V8.1, the error message “1199 Detected hardware needs activation” appears in the GUI event log, with an associated error code 0x841 in the CLI. Run the command chnodehw <node_id> -force for each affected canister in sequence, or perform the fix procedure by using the GUI.
2.5.6 Clustered system
A clustered system consists of one to four pairs of node canisters. All configuration, monitoring, and service tasks are performed at the system level, and the configuration settings are replicated across all node canisters in the clustered system. To facilitate these tasks, one or two management IP addresses are set for the system.
A process is provided to back up the system configuration data to disk so that the clustered system can be restored if there is a disaster. This method does not back up application data, only the IBM Storwize V7000 system configuration information.
 
System configuration backup: After backing up the system configuration, save the backup data outside of the SAN. If you cannot access the IBM Storwize V7000, you also cannot access the backup data if it is kept on the SAN.
For the purposes of remote data mirroring, two or more clustered systems (IBM Spectrum Virtualize systems starting from software V7.1) must form a partnership before creating relationships between mirrored volumes.
 
Important: IBM Storwize V7000 V6.3 introduced the layer parameter. It can be changed by running chsystem using the command-line interface (CLI). The default is the storage layer. You must change it to replication if you need to set up a copy services relationship between the IBM Storwize family and IBM SAN Volume Controller.
One node is designated as the configuration node canister. It is the only node that activates the system IP address. If the configuration node canister fails, the system chooses a new configuration node and the new configuration node takes over the system IP addresses.
The system can be configured by using either the IBM Spectrum Virtualize GUI-based management software, the CLI, or through an application, such as IBM Spectrum Control, that uses the IBM Storwize V7000 Common Information Model object manager (CIMOM).
2.5.7 HyperSwap
IBM HyperSwap function is an HA feature that provides dual-site, active-active access to a volume. Active-active volumes have a copy in one site and a copy at another site. Data that is written to the volume is automatically sent to both copies so that either site can provide access to the volume if the other site becomes unavailable. Active-active relationships are made between the copies at each site. These relationships automatically run and switch direction according to which copy or copies are online and up to date.
Relationships can be grouped into consistency groups, just like Metro Mirror and Global Mirror relationships. The consistency groups fail over consistently as a group, based on the state of all copies in the group. An image that can be used in a disaster is maintained at each site. When the system topology is set to HyperSwap, each IBM Storwize V7000 node, controller, and host object in the system configuration must have a site attribute set to 1 or 2.
The site of each managed disk must match the site of the controller that provides it to that I/O group. When managed disks are added to storage pools, their site attributes must match. This requirement ensures that each copy in the active-active relationship is fully independent and is at a distinct site.
2.5.8 Dense expansion drawers
Dense expansion drawers, or just dense drawers, are optional disk expansion enclosures that are 5U rack-mounted. Each chassis features two expansion canisters, two power supplies, two expander modules, and a total of four fan modules.
Each dense drawer can hold up to 92 drives, positioned in four rows of 14 and an additional three rows of 12 drive assemblies. Two Secondary Expander Modules (SEMs) are centrally located in the chassis. One SEM addresses 54 drive ports, and the other addresses 38 drive ports.
The drive slots are numbered 1 - 14, starting from the left rear slot and working from left to right, back to front.
Each canister in the dense drawer chassis features two SAS ports, numbered 1 and 2. The use of SAS port 1 is mandatory because the expansion enclosure must be attached to an IBM Storwize V7000 node or another expansion enclosure. SAS port 2 is optional, and is used to attach more expansion enclosures.
Each IBM Storwize V7000 can support up to four dense drawers per SAS chain.
Figure 2-7 shows a dense expansion drawer.
Figure 2-7 IBM dense expansion drawer
2.5.9 Expansion Enclosures
There are two types of available Expansion Enclosures:
IBM Storwize V7000 large form factor (LFF) Expansion Enclosure Model 12F
IBM Storwize V7000 small form factor (SFF) Expansion Enclosure Model 24F
IBM Storwize V7000 Gen2 LFF 12F includes the following components:
Two expansion canisters
12 Gb SAS ports for control enclosure and Expansion Enclosure attachment
Twelve slots for 3.5-inch SAS drives
2U, 19-inch rack mount enclosure with AC power supplies
IBM Storwize V7000 SFF Expansion Enclosure Model 24F includes the following components:
Two expansion canisters
12 Gb SAS ports for control enclosure and expansion enclosure attachment
Twenty-four slots for 2.5-inch SAS drives
2U, 19-inch rack mount enclosure with AC power supplies
The Expansion Enclosure is a 2U enclosure, containing the following components:
Twenty-four 2.5-inch drives (HDDs or SSDs)
Two Storage Bridge Bay (SBB)-compliant enclosure services manager (ESM) canisters
Two fan assemblies, which mount between the drive midplane and the node canisters. Each fan module is removable when the node canister is removed.
Two power supplies
RS232 port on the back panel (3.5 mm stereo jack), which is used for configuration during manufacturing.
The front of an Expansion Enclosure is shown in Figure 2-8.
Figure 2-8 Front of IBM Storwize V7000 Expansion Enclosure
Figure 2-9 shows a rear view of an expansion enclosure.
Figure 2-9 Rear of IBM Storwize V7000 expansion enclosure
SAS chain limitations
When attaching expansion enclosures to the control enclosure, you are not limited by the type of enclosure (as long as it meets all generation-level restrictions). The only limitation for each SAS chain is its chain weight. Each enclosure type has a defined chain weight:
Enclosures 12F and 24F have a chain weight of 1
Enclosure 92F has a chain weight of 2.5
The maximum chain weight is 10.
For example, you can combine seven 24F and one 92F expansions (7x1 + 1x2.5 = 9.5 chain weight). Or two 92F enclosures, one 12F, and four 24F (2x2.5 + 1x1 + 4x1 = 10 chain weight).
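The chain-weight rule is easy to check mechanically. The following sketch encodes the weights stated above, using the model names as simple keys:

```python
CHAIN_WEIGHT = {"12F": 1.0, "24F": 1.0, "92F": 2.5}
MAX_CHAIN_WEIGHT = 10.0

def chain_weight(enclosures):
    """Total chain weight for a list of enclosure models on one SAS chain."""
    return sum(CHAIN_WEIGHT[model] for model in enclosures)

def is_valid_chain(enclosures):
    """True if the combination stays within the maximum chain weight."""
    return chain_weight(enclosures) <= MAX_CHAIN_WEIGHT

# The two examples from the text:
print(chain_weight(["24F"] * 7 + ["92F"]))                  # 9.5
print(is_valid_chain(["92F", "92F", "12F"] + ["24F"] * 4))  # True (weight 10.0)
```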
An example of chain weight 4.5 with one 24F, one 12F, and one 92F enclosures, all correctly cabled, is shown in Figure 2-10.
Figure 2-10 Connecting SAS cables while complying with maximum chain weight
2.5.10 RAID
The IBM Storwize V7000 setup contains several internal drive objects, but these drives cannot be directly added to storage pools. The drives must first be included in a RAID array to provide protection against the failure of individual drives.
These drives are referred to as members of the array. Each array has a RAID level. RAID levels provide various degrees of redundancy and performance, and have various restrictions regarding the number of members in the array.
Apart from traditional disk arrays, IBM Spectrum Virtualize V7.6 introduced distributed RAID. Distributed RAID improves the recovery time of failed disk drives in an array by distributing spare capacity across the member drives, rather than dedicating a whole drive as a spare.
IBM Storwize V7000 supports hot spare drives. When an array member drive fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members.
Each array has a set of goals that describe the location and performance of each array. A sequence of drive failures and hot spare takeovers can leave an array unbalanced, with members that do not match these goals. The system automatically rebalances such arrays when the appropriate drives are available.
2.5.11 Read Intensive Flash Drives
Generally, there are two types of SSDs for enterprise storage: multi-level cell (MLC) and single-level cell (SLC).
The most common SSD technology is MLC. MLC SSDs are found in consumer products such as portable electronic devices, but they are also strongly present in some enterprise storage products. Enterprise-class SSDs are built on mid-to-high-endurance multi-level cell flash technology, known as mainstream endurance SSDs.
MLC SSDs store multiple bits per cell and use wear leveling, a method that evenly spreads writes across all memory cells on the SSD. Wear leveling helps to eliminate potential hotspots caused by repetitive write-erase cycles. SLC SSDs store one bit of data per cell, which makes them generally faster.
To support particular business demands, IBM Spectrum Virtualize has qualified the use of Read Intensive (RI) Flash Drives (based on SSD technology) with applications where the read operations are significantly high.
RI Flash Drives are available as an optional purchase product to IBM SAN Volume Controller and IBM Storwize Family.
For more information about Read Intensive Flash Drives, the certified models, and the IBM Spectrum Virtualize code level to support RI Flash Drives, see this website:
2.5.12 Managed disks
A managed disk (MDisk) is the unit of storage that IBM Spectrum Virtualize virtualizes. This unit could be a logical volume on an external storage array presented to the IBM Storwize V7000, or a RAID consisting of internal drives. IBM Spectrum Virtualize can then allocate these MDisks into various storage pools. An MDisk is not visible to a host system on the storage area network because it is internal or zoned only to the IBM Storwize V7000 system.
An MDisk has the following modes:
Array
Array mode MDisks are constructed from drives using the RAID function. Array MDisks are always associated with storage pools.
Unmanaged
Unmanaged MDisks are not being used by the system. This situation might occur when an MDisk is first imported into the system, for example.
Managed
Managed MDisks are assigned to a storage pool and provide extents that volumes can use.
Image
Image MDisks are assigned directly to a volume with a one-to-one mapping of extents between the MDisk and the volume. This situation is normally used when importing logical volumes into the clustered system that already have data on them, which ensures that the data is preserved as it is imported into the clustered system.
2.5.13 Quorum disks
A quorum disk is an MDisk that contains a reserved area for use exclusively by the system. In IBM Storwize V7000, internal drives can be considered as quorum candidates. The clustered system uses quorum disks to break a tie when exactly half the nodes in the system remain after a SAN failure.
The clustered system automatically forms the quorum disk by taking a small amount of space from an MDisk. It allocates space from up to three different MDisks for redundancy, although only one quorum disk is active.
If the environment has multiple storage systems, allocate the quorum disks on different storage systems to avoid losing all of the quorum disks because of a failure of a single storage system. It is possible to manage the quorum disks by using the CLI.
2.5.14 IP Quorum
In a HyperSwap configuration, a third, independent site must house quorum devices. To use a quorum disk as the quorum device, this third site must have Fibre Channel connectivity and an external storage system. Sometimes, Fibre Channel connectivity is not possible.
To use an IP-based quorum application as the quorum device for the third site, no Fibre Channel connectivity is needed: no extra hardware or networking, such as Fibre Channel or SAS-attached storage, is required beyond what is normally provisioned within a system. Java applications run on hosts at the third site. However, there are strict requirements on the IP network, and some disadvantages with using IP quorum applications.
Unlike quorum disks, all IP quorum applications must be reconfigured and redeployed to hosts when certain aspects of the system configuration change. These aspects include adding or removing a node from the system, or when node service IP addresses are changed.
For stable quorum resolutions, an IP network must provide the following requirements:
Connectivity from the hosts to the service IP addresses of all nodes. If IP quorum is configured incorrectly, the network must also deal with possible security implications of exposing the service IP addresses because this connectivity can also be used to access the service GUI.
Port 1260 is used by IP quorum applications to communicate from the hosts to all nodes.
The maximum round-trip delay must not exceed 80 ms, which means 40 ms in each direction.
A minimum bandwidth of 2 MBps must be guaranteed for node-to-quorum traffic.
Even with IP quorum applications at the third site, quorum disks at site one and site two are required because they are used to store metadata. To provide quorum resolution, use the mkquorumapp command to generate a Java application that is copied from the system and run on a host at a third site. The maximum number of applications that can be deployed is five. Currently, supported Java runtime environments (JREs) are IBM Java 7.1 and IBM Java 8.
2.5.15 Storage pool
A storage pool is a collection of MDisks that are grouped to provide capacity for volumes. All MDisks in the pool are split into extents with the same size. Volumes are then allocated out of the storage pool and are mapped to a host system.
 
IBM Storwize V7000 object names: The names must begin with a letter, which cannot be numeric. The name can be a maximum of 63 characters. Valid characters are uppercase letters (A-Z), lowercase letters (a-z), digits (0 - 9), underscore (_), period (.), hyphen (-), and space. The names must not begin or end with a space.
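The naming rules in the note above can be expressed as a small validator. This is a sketch of the stated rules only, not the product's actual validation code:

```python
import re

# Must start with a letter; up to 63 characters total; allowed characters are
# letters, digits, underscore, period, hyphen, and space; must not end with a space.
_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_.\- ]{0,62}$")

def is_valid_object_name(name):
    return bool(_NAME_RE.match(name)) and not name.endswith(" ")

print(is_valid_object_name("Pool_1"))  # True
print(is_valid_object_name("1pool"))   # False (starts with a digit)
print(is_valid_object_name("pool "))   # False (ends with a space)
print(is_valid_object_name("a" * 64))  # False (longer than 63 characters)
```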
MDisks can be added to a storage pool at any time to increase the capacity of the storage pool. An MDisk can belong to only one storage pool, and only MDisks in unmanaged mode can be added to a storage pool. When an MDisk is added to a storage pool, its mode changes from unmanaged to managed (and back to unmanaged when it is removed).
Each MDisk in the storage pool is divided into extents. The size of the extent is selected by the administrator when the storage pool is created, and cannot be changed later. The extent size ranges from 16 megabytes (MB) to 8 gigabytes (GB). The default extent size in V8.1 is 1024 MB.
The extent size has a direct effect on the maximum volume size and storage capacity of the clustered system. A system can manage 4 million (4 x 1024 x 1024) extents. For example, a system with a 16 MB extent size can manage up to 4 million x 16 MB = 64 terabytes (TB) of storage.
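The relationship between extent size and maximum manageable capacity follows directly from the fixed 4-million-extent limit:

```python
MAX_EXTENTS = 4 * 1024 * 1024  # 4 million extents per clustered system

def max_capacity_tb(extent_size_mb):
    """Maximum manageable capacity (in TB) for a given extent size (in MB)."""
    return MAX_EXTENTS * extent_size_mb // (1024 * 1024)  # MB -> TB

print(max_capacity_tb(16))    # 64 TB with the smallest 16 MB extents
print(max_capacity_tb(1024))  # 4096 TB (4 PB) with the 1024 MB default
print(max_capacity_tb(8192))  # 32768 TB (32 PB) with the largest 8 GB extents
```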
Use the same extent size for all storage pools in a clustered system, which is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring to copy volumes between storage pools.
 
Default extent size: The IBM Storwize V7000 GUI has a default extent size value of 1024 MB when you define a new storage pool. This setting can only be changed by using the CLI.
Figure 2-11 shows the storage pool components.
Figure 2-11 IBM Spectrum Virtualize virtualization components
2.5.16 Volumes
A volume is a logical disk that is presented to a host system by the clustered system. In our virtualized environment, the host system has a volume mapped to it by IBM Storwize V7000. IBM Spectrum Virtualize translates this volume into several extents, which are allocated across MDisks. The advantage of storage virtualization is that the host is “decoupled” from the underlying storage, so the virtualization appliance can move the extents without affecting the host system.
The host system cannot directly access the underlying MDisks in the same manner as it can access RAID arrays in a traditional storage environment.
There are three types of volumes:
Striped
A striped volume is allocated one extent in turn from each MDisk in the storage pool. This process continues until the space required for the volume has been satisfied.
It is also possible to supply a list of MDisks to use.
Figure 2-12 shows how a striped volume is allocated, assuming that 10 extents are required.
Figure 2-12 Striped volume
Sequential
A sequential volume is where the extents are allocated one after the other, from one MDisk to the next MDisk (Figure 2-13).
Figure 2-13 Sequential volume
Image mode
Image mode volumes are special volumes that have a direct relationship with one MDisk. They are used to migrate existing data into and out of the clustered system.
When the image mode volume is created, a direct mapping is made between extents that are on the MDisk and the extents that are on the volume. The LBA x on the MDisk is the same as the LBA x on the volume, which ensures that the data on the MDisk is preserved as it is brought into the clustered system (Figure 2-14).
Figure 2-14 Image mode volume
Some virtualization functions are not available for image mode volumes, so it is often useful to migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a managed MDisk.
If you add an MDisk containing data to a storage pool, any data on the MDisk is lost. Ensure that you create image mode volumes from MDisks that contain data before adding MDisks to the storage pools.
2.5.17 Easy Tier
IBM Easy Tier is a performance optimization function that automatically migrates or moves the extents of a volume across two or more tiers. Easy Tier monitors the host I/O activity and latency on the extent of all volumes with the Easy Tier function turned on, in a multitiered storage pool, over a 24-hour period.
After this period, the IBM Easy Tier creates an extent migration plan based on this activity, and then dynamically moves high activity (or hot) extents to a higher disk tier within the storage pool. It also moves extent activity that has dropped off (or cooled) from the high-tiered MDisk back to a lower-tiered MDisk.
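As a toy illustration only (not IBM's actual algorithm, and the threshold values are invented for the example), the hot/cold classification idea behind the migration plan can be sketched as:

```python
def plan_migrations(extent_io_counts, hot_threshold, cold_threshold):
    """Classify extents by activity over a monitoring window: busy extents
    become promotion candidates, idle extents become demotion candidates."""
    promote = [e for e, n in extent_io_counts.items() if n >= hot_threshold]
    demote = [e for e, n in extent_io_counts.items() if n <= cold_threshold]
    return promote, demote

# I/O counts per extent collected over a 24-hour window (made-up numbers):
io_counts = {"ext0": 900, "ext1": 120, "ext2": 450, "ext3": 3}
promote, demote = plan_migrations(io_counts, hot_threshold=500, cold_threshold=10)
print(promote)  # ['ext0']  -> move to a higher tier
print(demote)   # ['ext3']  -> move to a lower tier
```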
The Easy Tier function can be turned on or off at the storage pool and volume level. Automatic storage pool balancing is an integrated part of the IBM Easy Tier engine and is enabled by default on all pools. Although storage pool balancing uses the same principles and mechanisms as Easy Tier, it is not considered an Easy Tier function, and neither feature requires an additional license within IBM Spectrum Virtualize running on IBM Storwize V7000.
It is possible to demonstrate the potential benefit of IBM Easy Tier in your environment before installing different drive tiers. When IBM Easy Tier is turned on, the algorithm produces a statistics file that can be offloaded from the IBM Storwize V7000.
The IBM Storage Tier Advisor Tool (STAT) can then be used to create a summary report from the statistics file offloaded from the IBM Storwize V7000.
The STAT tool can be found in the following website:
2.5.18 Encryption
IBM Storwize V7000 provides optional encryption of data at rest, which protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption of system data and system metadata is not required, so system data and metadata are not encrypted.
Planning for encryption involves purchasing a licensed function and then activating and enabling the function on the system. A license is required to encrypt data that is stored on drives. When encryption is activated and enabled on the system, valid encryption keys must be present on the system when the system unlocks the drives or the user generates a new key.
Hardware encryption was introduced in IBM Spectrum Virtualize V7.4, and software encryption in V7.6. Encryption keys can be either managed by IBM Security Key Lifecycle Manager (SKLM) or stored on USB flash drives attached to at least one of the nodes. V8.1 allows a combination of SKLM and USB key repositories.
IBM Security Key Lifecycle Manager is an IBM solution that provides the infrastructure and processes to locally create, distribute, back up, and manage the lifecycle of encryption keys and certificates. Before activating and enabling encryption, you must determine the method of accessing key information during times when the system requires an encryption key to be present.
When Security Key Lifecycle Manager is used as a key manager for encryption, a deadlock situation can arise if the key servers are hosted on encrypted storage provided by the same IBM Storwize V7000, that is, if the storage used by the SKLM servers is allocated from an encrypted pool on that Storwize V7000. To avoid this deadlock, ensure that the IBM Storwize V7000 node canisters can communicate with an encryption key server to get the unlock key after a power-on or restart.
Data encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses a 256-bit symmetric encryption key in XTS mode, as defined in the Institute of Electrical and Electronics Engineers (IEEE) 1619-2007 standard as XTS-AES-256. That data encryption key is itself protected by a 256-bit AES key wrap when stored in non-volatile form.
2.5.19 iSCSI
iSCSI is an alternative means of attaching hosts and external storage controllers to the IBM Storwize V7000. Within IBM Spectrum Virtualize release 7.7, IBM introduced software capabilities to allow the underlying virtualized storage to attach to IBM Storwize V7000 by using the iSCSI protocol.
In the simplest terms, iSCSI enables the transport of SCSI commands and data over an Internet Protocol network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into Transmission Control Protocol/Internet Protocol (TCP/IP) packets and uses an existing IP network, rather than requiring expensive FC host bus adapters (HBAs) and a SAN fabric infrastructure.
The major functions of iSCSI include encapsulation and the reliable delivery of SCSI Command Descriptor Block (CDB) transactions between initiators and targets through the Internet Protocol network, especially over a potentially unreliable IP network.
Every iSCSI node in the network must have an iSCSI name and address as follows:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name that stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.
2.5.20 Real-time Compression
IBM Real-time Compression is an attractive solution to address increasing data storage requirements for power, cooling, and floor space. IBM Real-time Compression can significantly reduce storage space requirements because more data is stored in the same rack space, so fewer storage enclosures are required to store a data set. IBM Real-time Compression provides the following benefits:
Compression for active primary data. IBM Real-time Compression can be used with active primary data.
Compression for replicated/mirrored data. Remote volume copies can be compressed in addition to the volumes at the primary storage tier. This process also reduces storage requirements in Metro Mirror and Global Mirror destination volumes.
No changes to the existing environment are required. IBM Real-time Compression is part of the storage system.
Overall savings in operational expenses. More data is stored in the same rack space, so fewer storage expansion enclosures are required to store a data set. This reduced rack space has the following benefits:
 – Reduced power and cooling requirements. More data is stored in a system, therefore requiring less power and cooling per gigabyte or used capacity.
 – Reduced software licensing for additional functions in the system. More data stored per enclosure reduces the overall spending on licensing.
Disk space savings are immediate. The space reduction occurs when the host writes the data. This process is unlike other compression solutions, in which some or all of the reduction is realized only after a post-process compression batch job is run.
2.5.21 IP replication
IP replication was introduced in V7.2 and allows data replication between IBM Spectrum Virtualize family members. IP replication uses IP-based ports of the cluster node canisters. This function is transparent to servers and applications in the same way that traditional FC-based mirroring is. All remote mirroring modes (Metro Mirror, Global Mirror, and Global Mirror with changed volumes) are supported.
The configuration of the system is straightforward, and IBM Storwize family systems normally find each other in the network and can be selected from the GUI.
IP replication includes Bridgeworks SANSlide network optimization technology and is available at no additional charge. Remember, remote mirror is a chargeable option, but the price does not change with IP replication. Existing remote mirror users have access to the function at no additional charge.
IP connections that are used for replication can have long latency (the time to transmit a signal from one end to the other). This latency can be caused by distance or by many “hops” between switches and other appliances in the network. Traditional replication solutions transmit data, wait for a response, and then transmit more data, which can result in network utilization as low as 20% (based on IBM measurements).
Bridgeworks SANSlide technology, which is integrated with the IBM Storwize family, requires no separate appliances and so requires no additional cost and no configuration steps. It uses artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization up to 3x. Therefore, customers can deploy a less costly network infrastructure, or take advantage of faster data transfer to speed replication cycles, improve remote data currency, and enjoy faster recovery.
2.5.22 IBM Storwize V7000 copy services
IBM Spectrum Virtualize supports the following copy services:
Synchronous remote copy
Asynchronous remote copy
FlashCopy
Transparent Cloud Tiering
Starting with V6.3 (now IBM Spectrum Virtualize), copy services functions are implemented within a single IBM Storwize V7000 or between multiple members of the IBM Spectrum Virtualize family. The Copy Services layer sits above and operates independently of the function or characteristics of the underlying disk subsystems used to provide storage resources to an IBM Storwize V7000.
2.5.23 Synchronous or asynchronous remote copy
The general application of remote copy seeks to maintain two copies of data. Often, the two copies are separated by distance, but not always. The remote copy can be maintained in either synchronous or asynchronous modes.
With IBM Spectrum Virtualize, Metro Mirror and Global Mirror are the IBM branded terms for the functions that are synchronous remote copy and asynchronous remote copy.
Synchronous remote copy ensures that updates are committed at both the primary and the secondary volumes before the application considers the updates complete. Therefore, the secondary volume is fully up to date if it is needed in a failover.
However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary volume. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.
Special configuration guidelines exist for SAN fabrics and IP networks that are used for data replication. Consider the distance and available bandwidth of the intersite links when planning for synchronous remote copy.
A function of Global Mirror designed for low bandwidth was introduced in IBM Spectrum Virtualize. It uses change volumes that are associated with the primary and secondary volumes. These change volumes record changes to the remote copy volumes: a FlashCopy relationship exists between the primary volume and its change volume, and between the secondary volume and its change volume.
This function is called Global Mirror cycling mode. Figure 2-15 shows an example of this function where you can see the relationship between volumes and change volumes.
Figure 2-15 Global Mirror with change volumes
In asynchronous remote copy, the write is acknowledged to the application as complete before the write is committed at the secondary volume. Therefore, on a failover, certain updates (data) might be missing at the secondary volume. The application must have an external mechanism for recovering the missing updates, if possible. This mechanism can involve user intervention. Recovery on the secondary site involves starting the application on this recent backup, and then rolling forward or backward to the most recent commit point.
2.5.24 FlashCopy and Transparent Cloud Tiering
FlashCopy and Cloud Backup are used to make a copy of a source volume on a target volume. After the copy operation has started, the original content of the target volume is lost. The target volume has the contents of the source volume as they existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously.
FlashCopy
FlashCopy is sometimes described as an instance of a time-zero (T0) copy or a point-in-time (PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes.
With IBM Spectrum Virtualize, multiple target volumes can undergo FlashCopy from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship, and without waiting for the original copy operation to complete. IBM Spectrum Virtualize supports multiple targets and, therefore, multiple rollback points.
Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery of their applications and databases. An IBM solution to this is provided by IBM Spectrum Protect™, which is described on the following website:
Transparent Cloud Tiering
Transparent Cloud Tiering is a new function introduced in IBM Spectrum Virtualize V7.8. Transparent Cloud Tiering is an alternative solution for data protection, backup, and restore that interfaces with cloud service providers, such as IBM SoftLayer, Amazon S3, and OpenStack Swift. Transparent Cloud Tiering is priced as additional licensed software per IBM Storwize V7000 controller.
The Transparent Cloud Tiering function helps organizations reduce costs related to power and cooling when off-site data protection is required, by sending sensitive data out of the main site to the cloud.
Transparent Cloud Tiering uses IBM FlashCopy techniques to provide full and incremental snapshots of one or more volumes. Snapshots are encrypted and compressed before being uploaded to the cloud. Reverse (restore) operations are also supported by the function. When a set of data is transferred out to the cloud, the volume snapshot is stored as objects in object storage.
Cloud object storage is an innovative and cost-effective approach to storing large amounts of unstructured data, and it delivers mechanisms that provide security services, high availability, and reliability.
The management GUI provides an easy-to-use initial setup, advanced security settings, and audit logs that record all backup and restore operations to the cloud.
For more information about cloud object storage, see this website:
2.6 Business continuity
Business continuity and continuous application availability are among the top requirements for many organizations. Advances in virtualization, storage, and networking have made enhanced business continuity possible.
Information technology solutions can now manage both planned and unplanned outages, and provide the flexibility and cost efficiencies that are available from cloud-computing models.
2.6.1 Business Continuity with HyperSwap
The HyperSwap high availability feature in the IBM Spectrum Virtualize software allows business continuity in the event of hardware failure, power failure, connectivity failure, or disasters such as fire or flooding. The HyperSwap feature is available on the IBM SAN Volume Controller, IBM Storwize V7000, IBM Storwize V7000 Unified, and IBM Storwize V5000 products.
The HyperSwap feature provides highly available volumes that are accessible through two sites up to 300 km apart. A fully independent copy of the data is maintained at each site. When data is written by hosts at either site, both copies are synchronously updated before the write operation is completed. The HyperSwap feature automatically optimizes itself to minimize the data that is transmitted between sites, and to minimize host read and write latency.
The following are some key features of HyperSwap:
Works with IBM SAN Volume Controller and IBM Storwize V7000, V5000, and V7000 Unified hardware.
Uses intra-cluster synchronous remote copy (Metro Mirror) capabilities along with existing change volume and access I/O group technologies.
Makes a host’s volumes accessible across two IBM Storwize V7000/V5000 or SVC I/O groups in a clustered system by using a Metro Mirror relationship in the background. The volumes appear as a single volume to the host.
Works with the standard multipathing drivers that are available on a wide variety of host types, with no additional host support required to access the highly available volume.
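As a minimal sketch of how these pieces come together, the CLI sequence below enables the HyperSwap topology and creates a highly available volume with a copy in a pool at each site. The pool names are placeholders, and the exact syntax should be verified against the documentation for your code level:

```
# After assigning nodes and hosts to sites, enable the HyperSwap system topology
svctask chsystem -topology hyperswap

# Create a HyperSwap volume with one copy in a pool at each site;
# the underlying Metro Mirror relationship and change volumes
# are created and managed automatically by the system
svctask mkvolume -pool site1pool:site2pool -size 100 -unit gb -name havol0
```

The host sees havol0 as a single volume, and the system keeps both site copies synchronized behind the scenes.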
2.7 Management and support tools
The IBM Spectrum Virtualize system can be managed through the included management software that runs on the IBM Storwize V7000 hardware.
2.7.1 IBM Assist On-site and Remote Support Assistance
The IBM Assist On-site tool is a remote desktop-sharing solution that is offered through the IBM website. With it, the IBM service representative can remotely view your system to troubleshoot a problem.
You can maintain a chat session with the IBM service representative so that you can monitor this activity and either understand how to fix the problem yourself or allow the representative to fix it for you.
To use the IBM Assist On-site tool, the master console must be able to access the Internet. The following website provides further information about this tool:
When you access the website, you sign in and enter a code that the IBM service representative provides to you. This code is unique to each IBM Assist On-site session. A plug-in is downloaded onto your master console to connect you and your IBM service representative to the remote service session. The IBM Assist On-site tool contains several layers of security to protect your applications and your computers. The plug-in is removed after the next reboot.
You can also use security features to restrict access by the IBM service representative. Your IBM service representative can provide you with more detailed instructions for using the tool.
IBM Spectrum Virtualize V8.1 includes an embedded software toolset called Remote Support Client. It establishes a network connection over a secured channel with a Remote Support Server in the IBM network. The Remote Support Server provides predictive analysis of system status and assists administrators with troubleshooting and fix activities. Remote Support Assistance is available at no extra fee, and no additional license is needed.
2.7.2 Event notifications
IBM Storwize V7000 can use Simple Network Management Protocol (SNMP) traps, syslog messages, and a Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.
Each event that IBM Storwize V7000 detects is assigned a notification type of Error, Warning, or Information. You can configure the IBM Storwize V7000 to send each type of notification to specific recipients.
2.7.3 Simple Network Management Protocol traps
SNMP is a standard protocol for managing networks and exchanging messages. The IBM Spectrum Virtualize can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that IBM Spectrum Virtualize sends. You can use the management GUI or the IBM Storwize V7000 CLI to configure and modify your SNMP settings.
You can use the Management Information Base (MIB) file for SNMP to configure a network management program to receive SNMP messages that are sent by the IBM Spectrum Virtualize.
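As a hedged example of the CLI approach, an SNMP server that receives error and warning notifications can be defined as shown below. The IP address and community name are placeholders, and parameter names should be checked against the CLI reference for your code level:

```
# Define an SNMP server to receive traps; forward error and warning
# events, but not informational events (IP and community are examples)
svctask mksnmpserver -ip 192.0.2.10 -community public -error on -warning on -info off

# List the configured SNMP servers to verify the settings
svcinfo lssnmpserver
```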
2.7.4 Syslog messages
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6.
IBM Storwize V7000 can send syslog messages that notify personnel about an event. The event messages can be sent in either expanded or concise format. You can use a syslog manager to view the syslog messages that IBM Storwize V7000 sends.
IBM Spectrum Virtualize uses the User Datagram Protocol (UDP) to transmit the syslog message. You can use the management GUI or the IBM Storwize V7000 CLI to configure and modify your syslog settings.
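As an illustrative CLI sketch, a syslog server can be defined as follows. The IP address is a placeholder, and the facility value and other parameters should be verified against the CLI reference for your code level:

```
# Forward event messages to a syslog server (UDP, default port 514);
# -facility selects the syslog facility code used in the messages
svctask mksyslogserver -ip 192.0.2.20 -facility 0 -error on -warning on -info on

# List the configured syslog servers to verify the settings
svcinfo lssyslogserver
```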
2.7.5 Call Home email
The Call Home feature transmits operational and error-related data to you and IBM through an SMTP server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues. You can use the Call Home function if you have a maintenance contract with IBM or if the IBM Storwize V7000 is within the warranty period.
To send email, at least one SMTP server must be configured. The system supports as many as five additional SMTP servers for backup purposes. The SMTP server must accept the relaying of email from the IBM Storwize V7000 clustered system IP address.
Use the management GUI or the IBM Storwize V7000 CLI to configure the email settings, including contact information and email recipients. Set the reply address to a valid email address.
Send a test email to check that all connections and infrastructure are set up correctly. The Call Home function can be disabled at any time by using the management GUI or the IBM Storwize V7000 CLI.
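The steps above can be sketched from the CLI as follows. The IP address, contact details, and recipient address are placeholders, and the exact parameters should be verified against the CLI reference for your code level:

```
# Define the SMTP server that relays the notification email (example IP)
svctask mkemailserver -ip 192.0.2.30 -port 25

# Set contact information and a valid reply address
svctask chemail -reply storage.admin@example.com -contact "Jane Doe" \
    -primary 555-0100 -location "Datacenter A"

# Add a notification recipient that receives error events,
# then enable the email service and send a test email
svctask mkemailuser -address recipient@example.com -usertype local -error on
svctask startemail
svctask testemail 0
```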
2.8 Useful IBM Storwize V7000 websites
See the following IBM Storwize V7000 web pages for more information:
IBM Support page:
IBM Storwize V7000 Unified and IBM Storwize V7000 Disk Systems:
List of supported hardware:
Configuration limits and restrictions:
Direct attachment of IBM Storwize V7000
IBM Knowledge Center: