FlashSystem V9000 architecture
This chapter describes the IBM FlashSystem V9000 architecture, detailing the components, capabilities, and features that make up this product. An introduction to the FlashSystem V9000, product features, a comparison to the IBM FlashSystem V840, and an overview of the architecture and hardware are included.
This chapter includes the following topics:
For more details about the IBM FlashSystem architecture, see the IBM FlashSystem V9000 web page in the IBM Knowledge Center:
2.1 Introduction to IBM FlashSystem V9000
IBM FlashSystem V9000 is an all-flash storage array that provides extreme performance and large capacity while also delivering enterprise-class reliability and meeting “green” data center power and cooling requirements. The IBM FlashSystem V9000 building block holds up to twelve 5.7 terabyte (TB) IBM MicroLatency modules in only 6U of rack space, making it an extremely dense all-flash storage array solution.
FlashSystem V9000 uses a fully featured and scalable all-flash architecture that performs at up to 2.5 million input/output operations per second (IOPS) with IBM MicroLatency, is scalable up to 19.2 gigabytes per second (GBps), and delivers up to 2.28 petabytes (PB) effective capacity. Using its flash-optimized design, FlashSystem V9000 can provide response times of 200 microseconds. This high capacity, extreme performance, and enterprise reliability are powered by the patented IBM FlashCore Technology.
Advanced data services that are provided include copy services, mirroring, replication, external virtualization, IBM HyperSwap, Microsoft Offloaded Data Transfer (ODX) and VMware vSphere Storage APIs - Array Integration (VAAI) support. Host interface support includes 8 gigabit (Gb) and 16 Gb FC, and 10 Gb Fibre Channel over Ethernet (FCoE) and Internet Small Computer System Interface (iSCSI). Advanced Encryption Standard (AES) 256 hardware-based encryption adds to the rich feature set.
IBM FlashSystem V9000 is made up of two control enclosures, referred to as AC2s, and one storage enclosure, referred to as the AE2. The IBM FlashSystem V9000 core attributes are described next. Figure 2-1 shows the front view of the IBM FlashSystem V9000.
Figure 2-1 IBM FlashSystem V9000
2.1.1 Capacity
IBM FlashSystem V9000 supports a maximum of four building blocks and four additional storage enclosures. Each building block or storage enclosure can accommodate up to twelve 5.7 TB IBM MicroLatency modules, which provide a capacity of 57 TB (RAID 5). The FlashSystem V9000 therefore supports a maximum physical capacity of 456 TB. Using the optional IBM Real-time Compression and other design elements, the FlashSystem V9000 provides up to 57 TB usable capacity and up to 285 TB effective capacity in only 6U. This scales to 456 TB usable capacity and up to 2.28 PB effective capacity in only 36U of rack space.
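These usable and effective capacity figures follow from simple arithmetic. The following Python sketch reproduces them; the 5:1 data reduction ratio is an assumption implied by the quoted numbers (57 TB usable versus 285 TB effective), and actual Real-time Compression savings depend on the data.

# Capacity arithmetic for FlashSystem V9000 (illustrative sketch).
# Assumption: the quoted effective capacities imply a 5:1 data reduction
# ratio with Real-time Compression; real savings vary with the data.

MODULE_TB = 5.7                 # largest IBM MicroLatency module
DATA_MODULES = 10               # 12 modules in RAID 5 as 10D + 1P + 1S
REDUCTION_RATIO = 5             # implied by 57 TB usable -> 285 TB effective

usable_per_enclosure_tb = MODULE_TB * DATA_MODULES               # 57.0 TB
effective_per_bb_tb = usable_per_enclosure_tb * REDUCTION_RATIO  # 285 TB

# Maximum configuration: 4 building blocks plus 4 additional AE2
# storage enclosures = 8 storage enclosures in total.
enclosures = 4 + 4
usable_max_tb = usable_per_enclosure_tb * enclosures             # 456 TB
effective_max_pb = usable_max_tb * REDUCTION_RATIO / 1000        # 2.28 PB

print(f"Per enclosure: {usable_per_enclosure_tb} TB usable, "
      f"{effective_per_bb_tb} TB effective")
print(f"Maximum:       {usable_max_tb} TB usable, {effective_max_pb} PB effective")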
Each IBM FlashSystem V9000 building block can be ordered with 4, 6, 8, 10, or 12 MicroLatency modules. The MicroLatency modules are available in 1.2 TB, 2.9 TB, or 5.7 TB capacities.
 
Important: 1.2 TB, 2.9 TB, and 5.7 TB IBM MicroLatency modules cannot be intermixed in the same IBM FlashSystem V9000 storage enclosure.
IBM FlashSystem V9000 supports RAID 5 configurations.
 
Note: The maximum usable capacity of IBM FlashSystem V9000 in RAID 5 mode is 51.8 tebibytes (TiB) per building block.
IBM FlashSystem V9000 supports the creation of up to 2,048 logical unit numbers (LUNs) per building block. LUNs can be 1 MiB - 51.8 TiB in size (not to exceed the total system capacity). The IBM FlashSystem V9000 supports up to 2,048 host connections and up to 256 host connections for each interface port. The IBM FlashSystem V9000 supports the mapping of multiple LUNs to each host for Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI protocols.
IBM FlashSystem V9000 supports up to 256 host connections for the iSCSI protocol.
Table 2-1 lists all the combinations of storage capacities for various configurations of the IBM FlashSystem V9000 building block.
Table 2-1 IBM FlashSystem V9000 capacity in TB and TiB for RAID 5
IBM FlashSystem V9000 configuration    RAID 5 TB    RAID 5 TiB
Four 1.2 TB flash modules                  2.2           2.0
Six 1.2 TB flash modules                   4.5           4.1
Eight 1.2 TB flash modules                 6.8           6.2
Ten 1.2 TB flash modules                   9.1           8.3
Twelve 1.2 TB flash modules               11.4          10.4
Six 2.9 TB flash modules                  11.4          10.3
Eight 2.9 TB flash modules                17.1          15.5
Ten 2.9 TB flash modules                  22.8          20.7
Twelve 2.9 TB flash modules               28.5          25.9
Six 5.7 TB flash modules                  22.8          20.7
Eight 5.7 TB flash modules                34.2          31.0
Ten 5.7 TB flash modules                  45.6          41.4
Twelve 5.7 TB flash modules               57.0          51.8
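The values in Table 2-1 can be approximated from the RAID 5 layout: one module's worth of capacity is used for parity and one for a hot spare, and the remaining modules hold data. The sketch below assumes the per-module data capacities implied by the table (roughly 1.14 TB, 2.85 TB, and 5.7 TB for the 1.2 TB, 2.9 TB, and 5.7 TB module types) and converts decimal TB to binary TiB; published values can differ slightly because of rounding.

# Approximate the RAID 5 capacities in Table 2-1 (illustrative sketch).
# Assumption: of N installed modules, N-2 hold data (1 parity + 1 spare),
# with usable data capacity per module as inferred from the table.

TIB = 2**40          # bytes in a tebibyte
TB = 10**12          # bytes in a decimal terabyte

module_tb = {"1.2 TB": 1.14, "2.9 TB": 2.85, "5.7 TB": 5.7}

for module, data_tb in module_tb.items():
    for installed in (4, 6, 8, 10, 12):
        if installed == 4 and module != "1.2 TB":
            continue                      # 4-module RAID 5 only for 1.2 TB modules
        usable_tb = (installed - 2) * data_tb
        usable_tib = usable_tb * TB / TIB
        print(f"{installed:2d} x {module}: {usable_tb:5.1f} TB = {usable_tib:5.1f} TiB")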
2.1.2 Performance and latency
IBM FlashSystem V9000 uses all hardware field-programmable gate array (FPGA) components in the AE2 storage enclosure data path, which enables fast I/O rates and low latency. IBM FlashSystem V9000 provides extreme performance of up to 2.5 million IOPS and up to 19.2 GBps in bandwidth. The IBM FlashSystem V9000 provides response times as low as 200 μs.
2.1.3 IBM FlashCore technology
IBM FlashSystem V9000 provides enterprise class reliability and serviceability that are unique for all-flash storage arrays. FlashSystem V9000 uses the patented IBM FlashCore Technology to provide data protection and maximum system uptime:
IBM Advanced Flash Management improves flash endurance 9x over standard implementations:
 – Proprietary garbage collection, relocation, and block-picking algorithms that were invented by IBM.
 – Flash wear leveling includes the following functions:
 • ECC algorithms that correct very high bit error rates.
 • Variable voltage and read-level shifting to maximize flash endurance.
 • Health binning and heat segregation continually monitor the health of flash blocks and perform asymmetrical wear leveling and sub-chip tiering.
 • Hot-data placement provides up to 57% improvement in endurance. Heat-level grouping provides up to 45% reduction in write amplification.
IBM Variable Stripe RAID is a patented IBM technology that provides an intra-module RAID stripe on each flash module.
With two-dimensional (2D) Flash RAID, system-wide RAID 5 along with Variable Stripe RAID helps reduce downtime and maintain performance, and enables the provisioning of an entire flash module as a spare to be used in another flash module failure.
Terminology
The following terms are mentioned in this book:
Wear leveling: An algorithm that ensures even usage of all blocks.
Garbage collection: Erasing blocks that are no longer in use so that they can be rewritten.
Relocation: Moving a block to another location.
Block picking: The first step of the garbage collection process. Using proprietary algorithms, the best block is picked for garbage collection.
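As an illustration of the health binning and block picking described above, the following sketch pairs the hottest data with the healthiest flash blocks and picks the block with the most invalid pages as the garbage-collection candidate. This is a simplified model with assumed fields and scores, not the IBM FlashCore implementation.

# Simplified sketch of health binning and block picking (not the IBM
# FlashCore implementation; block fields and scores are assumptions).
from dataclasses import dataclass

@dataclass
class FlashBlock:
    block_id: int
    erase_count: int       # lower erase count = healthier block
    invalid_pages: int     # stale pages that garbage collection can reclaim

blocks = [FlashBlock(0, 120, 40), FlashBlock(1, 30, 5),
          FlashBlock(2, 75, 60), FlashBlock(3, 10, 2)]

# Health binning: place the hottest (most frequently rewritten) data on the
# healthiest blocks so that wear evens out over time.
data_by_heat = ["journal", "database", "archive", "backup"]   # hottest first
healthiest_first = sorted(blocks, key=lambda b: b.erase_count)
placement = {d: b.block_id for d, b in zip(data_by_heat, healthiest_first)}
print("placement:", placement)

# Block picking: choose the block with the most invalid pages, so that
# garbage collection relocates the fewest still-valid pages.
victim = max(blocks, key=lambda b: b.invalid_pages)
print("garbage collection victim:", victim.block_id)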
More reliability and serviceability features of IBM FlashSystem V9000
In addition to the standard features, IBM FlashSystem V9000 includes the following reliability and serviceability features:
Hot-swappable IBM MicroLatency modules with tool-less front panel access
If a MicroLatency module failure occurs, critical client applications can remain online while the defective module is replaced.
Because client application downtime does not need to be scheduled, you can typically perform this service immediately versus waiting for days for a service window. The Directed Maintenance Procedure (DMP), accessible from the GUI, can be used to prepare the IBM FlashSystem V9000 for a MicroLatency module replacement. You can easily remove the MicroLatency modules from the front of the IBM FlashSystem V9000 unit without needing to remove the top access panels or extend cabling.
 
Concurrent code loads
IBM FlashSystem V9000 supports concurrent code load, enabling client applications to remain online during firmware upgrades to all components, including the flash modules.
Redundant hot-swappable components
The RAID controllers (called canisters), the management modules and interface cards (contained in the canisters), and the batteries, fans, and power supplies are all redundant and hot-swappable. All components are easily accessible through the front or rear of the unit, so the IBM FlashSystem V9000 does not need to be moved in the rack, and top access panels or cables do not need to be extended. This makes servicing the unit easy.
 
Tip: Concurrent code loads require that all connected hosts have at least two connections, at least one to each control enclosure, to the FlashSystem V9000.
2.1.4 Overview of IBM Variable Stripe RAID and 2D Flash RAID
Storage systems of any kind are typically designed to perform two main functions: to store and protect data. The IBM FlashSystem V9000 includes the following features for data protection:
RAID data protection:
 – IBM Variable Stripe RAID
 – Two-dimensional (2D) Flash RAID
Flash memory protection methods
Optimized RAID rebuild times
Variable Stripe RAID
Variable Stripe RAID is a unique IBM technology that provides data protection at the page, block, or chip level. It eliminates the necessity to replace a whole flash module when a single chip or plane fails. This, in turn, expands the life and endurance of flash modules and considerably reduces maintenance events throughout the life of the system.
Variable Stripe RAID provides high redundancy across chips within a flash module. RAID is implemented at multiple addressable segments within chips, in a 15+1 or 12+1 RAID 5 fashion, and it is controlled at the flash controller level (up to four in each flash module). Due to the massive parallelism of direct memory access (DMA) operations controlled by each FPGA and parallel access to chip sets, dies, planes, blocks, and pages, the implementation of Variable Stripe RAID has minimal effect on performance.
The following information describes some of the most important aspects of Variable Stripe RAID implementation:
Variable Stripe RAID is managed and controlled by each of the up to four flash controllers within a single module.
Each flash controller manages 13 or 16 flash chips (depending on the IBM MicroLatency module capacity).
Data is written on flash pages of 8 kilobytes (KB) and erased in 1 megabyte (MB) flash blocks.
Variable Stripe RAID is implemented and managed at flash chip plane levels.
There are 16 planes per chip.
A plane is declared failed only after at least 256 flash blocks within that plane are deemed failed.
A plane can also fail in its entirety.
Up to 64 planes can fail before a whole module is considered failed.
Up to four chips can fail before a whole module is considered failed.
When a flash module is considered failed, 2D Flash RAID takes control of data protection and recovery.
When a plane or a chip fails, Variable Stripe RAID activates to protect data while maintaining system-level performance and capacity.
How Variable Stripe RAID works
Variable Stripe RAID is an IBM patented technology. It includes, but is more advanced than, a simple RAID of flash chips. Variable Stripe RAID introduces two key concepts:
The RAID stripe is not solely across chips; it actually spans across flash layers.
The RAID stripe can automatically vary based on observed flash plane failures within a flash module. For example, stripes are not fixed at n+1 RAID 5 stripe members, but they can go down to 15+1, 14+1, or even 13+1 based on plane failures.
This ability to protect the data at variable stripes effectively maximizes flash capacity even after flash component failures. Figure 2-2 shows an overview of the IBM FlashSystem Variable Stripe RAID.
Figure 2-2 IBM FlashSystem Variable Stripe RAID (VSR)
Figure 2-3 shows the benefits of IBM Variable Stripe RAID.
Figure 2-3 The value of the IBM FlashSystem Variable Stripe RAID
An important aspect to emphasize is that Variable Stripe RAID has an effect at only the plane level. Therefore, only the stripes affected by a plane failure are converted to N-1. Variable Stripe RAID maintains the current stripe member count (N+1) throughout the remainder of the planes and chips that are not involved in the plane failure.
To illustrate how Variable Stripe RAID functions, assume that a plane fails within a flash chip and is no longer available to store data. This might occur as a result of a physical failure within the chip, or some damage is inflicted on the address or power lines to the chip. The plane failure is detected and the system changes the format of the page stripes that are used.
For example, data that was previously stored in physical locations across chips in 10 lanes by using a page stripe format with 10 pages is now stored across chips in only nine lanes by using a page stripe format with nine pages. Therefore, no data stored in the memory system was lost, and the memory system can self-adapt to the failure and continue to perform and operate by processing read and write requests from host devices.
This ability of the system to automatically self-adapt, when needed, to chip and intra-chip failures makes the FlashSystem flash module extremely rugged and robust, and capable of operating despite the failure of one or more chips or intra-chip regions. It also makes the system easier to use because the failure of one, two, or even more individual memory chips or devices does not require the removal and potential disposal of previously used memory storage components.
The reconfiguration or reformatting of the data to change the page stripe formatting to account for chip or intra-chip failures might reduce the amount of physical memory space that the system holds in reserve for background operations. In all but the most extreme circumstances (in which case the system creates alerts), this does not affect performance. Even in extreme circumstances, the usable capacity is not affected, because the system fails the module first.
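The re-striping behavior described above can be modeled with a small sketch. It assumes a simplified XOR-parity page stripe (RAID 5 style): when a lane fails, the lost page is rebuilt from the surviving pages, and the same data is then re-striped one lane narrower. This is a conceptual model only, not the FlashCore FPGA implementation.

# Conceptual sketch of a variable-width RAID 5 page stripe (an assumed
# model, not the FlashCore FPGA implementation). Parity is the XOR of
# the data pages in each stripe.
from functools import reduce

def xor_pages(pages):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*pages))

def make_stripes(data_pages, lanes):
    """Group data pages into stripes of (lanes - 1) data pages + 1 XOR parity page."""
    width = lanes - 1
    stripes = []
    for i in range(0, len(data_pages), width):
        chunk = data_pages[i:i + width]
        stripes.append(chunk + [xor_pages(chunk)])
    return stripes

def recover_page(stripe, failed_lane):
    """Rebuild a missing page from the surviving pages of one stripe."""
    survivors = [p for i, p in enumerate(stripe) if i != failed_lane]
    return xor_pages(survivors)

# Ten data pages striped across ten lanes (9 data pages + 1 parity page).
data = [bytes([i]) * 8 for i in range(10)]
stripes = make_stripes(data, lanes=10)

# Lane 3 fails: the lost page is recovered from the surviving pages ...
lost = stripes[0][3]
assert recover_page(stripes[0], failed_lane=3) == lost

# ... and the same data is re-striped one lane narrower (nine lanes).
narrower = make_stripes(data, lanes=9)
print(f"stripe width {len(stripes[0])} -> {len(narrower[0])}")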
Reliability, availability, and serviceability
The previous explanation highlights the increased reliability, availability, and serviceability (RAS) levels of the IBM FlashSystem over other technologies.
In summary, Variable Stripe RAID has these capabilities:
Patented Variable Stripe RAID allows RAID stripe sizes to vary.
If one plane fails in a chip stripe, only the failed plane is bypassed, and then data is restriped across the remaining chips. No system rebuild is needed.
Variable Stripe RAID reduces maintenance intervals caused by flash failures.
Two-dimensional (2D) Flash RAID
Two-dimensional (2D) Flash RAID refers to the combination of Variable Stripe RAID (at the flash module level) and system-level RAID 5.
The second dimension of data protection is implemented across flash modules as RAID 5 protection. This system-level RAID 5 is striped across the appropriate number of flash modules in the system based on the selected configuration. System-level RAID 5 can stripe across four (2D+1P+1S - IBM MicroLatency 1.2 TB module only), six (4D+1P+1S), eight (6D+1P+1S), ten (8D+1P+1S), or twelve flash modules (10D+1P+1S).
The architecture enables you to designate a dynamic flash module hot spare.
Figure 2-4 shows the IBM FlashSystem V9000 2D RAID.
Figure 2-4 IBM FlashSystem 2D RAID
Two-dimensional (2D) Flash RAID technology within the IBM FlashSystem V9000 provides two independent layers of RAID 5 data protection within each system:
The module-level Variable Stripe RAID technology
An additional system-level RAID 5 across flash modules
The system-level RAID 5 complements the Variable Stripe RAID technology implemented within each flash module, and it provides protection against data loss and data unavailability resulting from flash module failures. It also enables data to be rebuilt onto a hot-spare flash module, so that flash modules can be replaced without data disruption.
Other reliability features
In addition to 2D Flash RAID and Variable Stripe RAID data protection, the IBM FlashSystem family storage systems incorporate other reliability features:
Error-correcting codes to provide bit-level reconstruction of data from flash chips.
Checksums and data integrity fields designed to protect all internal data transfers within the system.
Overprovisioning to enhance write endurance and decrease write amplification.
Wear-leveling algorithms balance the number of writes among flash chips throughout the system.
Sweeper algorithms help ensure that all data within the system is read periodically to avoid data fade issues.
Understanding 2D Flash RAID enables you to visualize the advantage over other flash memory solutions. Both Variable Stripe RAID and 2D Flash RAID are implemented and controlled at FPGA hardware-based levels. Two-dimensional flash RAID eliminates single points of failure and provides enhanced system-level reliability.
2.1.5 Scalability
The IBM FlashSystem V9000 supports the ability to grow both the storage capacity and performance after deployment, which we refer to as scale up and scale out. IBM FlashSystem V9000 scale up or scale out is achieved by using scalable building blocks and additional storage enclosures. IBM FlashSystem V9000 supports a maximum configuration of twelve 1.2 TB, 2.9 TB, or 5.7 TB IBM MicroLatency modules per scalable building block. The IBM FlashSystem V9000 can be purchased with 4, 6, 8, 10, or 12 modules of 1.2 TB, 2.9 TB, or 5.7 TB sizes.
IBM FlashSystem V9000 offers these upgrade options:
Systems that are purchased with 4 MicroLatency modules can be expanded to 6, 8, 10, or 12 of the same capacity MicroLatency modules.
Systems that are purchased with 6 MicroLatency modules can be expanded to 8, 10, or 12 of the same capacity MicroLatency modules.
Systems that are purchased with 8 MicroLatency modules can be expanded to 10 or 12 of the same capacity MicroLatency modules.
Systems that are purchased with 10 MicroLatency modules can be expanded to 12 of the same capacity MicroLatency modules.
 
Note: Adding MicroLatency modules is a disruptive activity for the FlashSystem V9000 building blocks and the entire solution.
FlashSystem V9000 delivers up to 57 TB per building block, scales to four building blocks, and offers up to four more 57 TB V9000 storage enclosure expansion units for large-scale enterprise storage system capability. Building blocks can be either fixed or scalable. You can combine scalable building blocks to create larger clustered systems in such a way that operations are not disrupted.
A scalable building block can be scaled up by adding IBM FlashSystem V9000 AE2 storage enclosures for increased storage capacity. You can add a maximum of four extra storage enclosures, one extra storage enclosure per building block, to any scaled solution. A scalable building block can be scaled out by combining up to four building blocks to provide higher IOPS and bandwidth needs for increased performance as shown in Figure 2-5.
With IBM Real-time Compression technology, FlashSystem V9000 further extends the economic value of all-flash systems. FlashSystem V9000 provides up to two times better Real-time Compression performance than the model it replaces, by using dedicated Compression Acceleration Cards. Using the optional Real-time Compression and other design elements, the V9000 provides up to 57 TB usable capacity and up to 285 TB effective capacity in only 6U. This scales to 456 TB usable capacity and up to 2.28 PB effective capacity in only 36U.
Figure 2-5 FlashSystem V9000 Scalability options
A fixed building block contains one FlashSystem V9000. The AE2 storage enclosure is cabled directly to each AC2 control enclosure using 8 Gb links, and each AC2 control enclosure is connected to switches or to a host. The AC2 control enclosures are directly connected without the use of switches or a SAN fabric, to form the cluster links. A fixed building block can be upgraded to a scalable building block, but the upgrade process is disruptive to operations.
Figure 2-6 on page 29 shows the relationship between the fixed building block and the scalable building blocks.
Figure 2-6 V9000 Fixed versus Scalable Building Blocks
For more details about cabling for a fixed building block see 6.4.1, “Connecting the components in a fixed building block” on page 190.
Scalable building blocks can contain multiple AC2 control enclosure pairs and multiple AE2 storage enclosures. The building block components are connected to each other through an Ethernet switch, using the management ports on the enclosures, to create a private management local area network (LAN). In a scalable building block, AC2 control enclosures are not cabled to each other. This infrastructure means that you can add building blocks or storage enclosures nondisruptively. Fibre Channel switches are used to create a private storage fabric.
The Fibre Channel switch fabric is dedicated, and is not shared with hosts or server-side storage area networks (SANs). After connecting the components in a scalable building block, no physical cable connects any host to any switch in the internal Fibre Channel switch fabric. This private fabric is therefore not affected by traditional host-side SAN traffic, saturation issues, or accidental or intentional zoning issues, therefore providing maximum availability and maximum cluster performance.
For more details about cabling for a scalable building block, see 6.4.2, “Connecting the components in a scalable building block” on page 191.
For a comparison and the configuration guidelines of the following two suggested methods for port utilization in a V9000 scalable environment, see Appendix A, “Guidelines: Port utilization in an IBM FlashSystem V9000 scalable environment” on page 557:
V9000 port utilization for infrastructure savings
This method reduces the number of required customer Fibre Channel ports attached to the customer fabrics. This method provides high performance and low latency but performance might be port-limited for certain configurations. Intra-cluster communication and AE2 storage traffic occur over the internal switches.
V9000 port utilization for performance
This method uses more customer switch ports to improve performance for certain configurations. Only ports designated for intra-cluster communication are attached to private internal switches. The private internal switches are optional and all ports can be attached to customer switches.
 
Note: The Fibre Channel internal connection switches are ordered together with the first FlashSystem V9000 scalable building block. IBM also supports the use of customer-supplied Fibre Channel switches and cables, provided they are supported by IBM. See the list of supported Fibre Channel switches:
Remember these important considerations:
Mixing different capacity MicroLatency modules (1.2 TB, 2.9 TB, or 5.7 TB) in any configuration on the storage enclosure AE2 is not supported.
If an IBM FlashSystem V9000 is purchased with 1.2 TB MicroLatency modules, all system expansions must be with 1.2 TB MicroLatency modules within a building block (BB). Different BBs can have different module sizes.
If an IBM FlashSystem V9000 is purchased with 2.9 TB MicroLatency modules, all system expansions must be with 2.9 TB MicroLatency modules.
If an IBM FlashSystem V9000 is purchased with 5.7 TB MicroLatency modules, all system expansions must be with 5.7 TB MicroLatency modules.
Expanding an IBM FlashSystem V9000 unit with 2, 4, 6, or 8 extra MicroLatency modules requires that the system is reconfigured. A backup of the system configuration and data migration, if needed, must be planned before the expansion.
The V9000 fixed building block interconnect can use only 8 gigabits per second (Gbps) links.
2.1.6 Protocol support
IBM FlashSystem V9000 supports the following interface protocols:
8 Gbps Fibre Channel
16 Gbps Fibre Channel
10 Gbps Fibre Channel over Ethernet (FCoE)
10 Gbps iSCSI
 
Fixed building block protocols and connections
This section illustrates the interface protocols and maximum number of connections supported for a FlashSystem V9000 fixed building block for the following configurations:
8 Gbps FC fixed building block
16 Gbps FC or 10 Gbps (FCoE/iSCSI) fixed building block
Figure 2-7 illustrates the interface protocol and maximum number of connections supported for a FlashSystem V9000 fixed building block for a six host, 8 Gbps Fibre Channel configuration. Only 8 Gbps connections are supported for the internal links between the AC2s and AE2 in a fixed building block.
Figure 2-7 Fixed building block protocols and connections for the 8 Gbps FC configuration
Figure 2-8 illustrates the interface protocols and maximum number of connections supported for a FlashSystem V9000 fixed building block for a four-host, 16 Gbps FC or 10 Gbps (FCoE/iSCSI) configuration. Only 8 Gbps connections are supported for the internal links between the AC2s and AE2 in a fixed building block.
Figure 2-8 Fixed building block protocols and connections for 16 Gbps or 10 Gbps FC and FCoE/iSCSI hosts configurations
Scalable building block protocols and connections with 8 Gb interconnect fabric
The following topic describes interface protocols and the maximum number of connections that are supported for scalable building blocks, which use an 8 Gbps interconnect fabric for the following designs:
8 Gbps FC configuration
16 Gbps FC and 10 Gbps FCoE/iSCSI configuration
Figure 2-9 illustrates the 8 Gbps FC interface protocol and maximum number of connections that are supported for a scalable building block. This configuration uses an 8 Gbps internal interconnect fabric with six hosts and six connections per node to internal switches (with a maximum of three 4-port 8 Gbps cards per AC2).
Figure 2-9 Scalable building block protocols and connections: 8 Gbps interconnect
Figure 2-10 illustrates the interface protocols and maximum number of connections that are supported for a scalable building block in a 16 Gbps FC or 10 Gbps FCoE/iSCSI environment. This configuration uses an 8 Gbps FC internal interconnect fabric, with four hosts and eight connections per node to the internal switches (for an existing 16 Gbps scalable building block, this would be two 4-port 8 Gbps cards and two 2-port 16 Gbps cards).
Figure 2-10 Scalable building block protocols and connections, 16 Gbps FC, and FCoE/iSCSI interconnect
Scalable building block protocols and connections with 16 Gb interconnect fabric
Figure 2-11 illustrates the interface protocols and maximum number of connections that are supported for scalable building blocks, which use a 16 Gbps internal interconnect fabric.
Figure 2-11 Scalable building block protocols and connections: 16 Gbps interconnect
Scalable building block for Fibre Channel, infrastructure savings method
Figure 2-12 is another view of Figure 2-11; it depicts the port mappings of eight 16 Gbps Fibre Channel ports per AC2 in a scalable building block configured for infrastructure savings. There are a total of 8 customer links and a total of 16 internal links. For more details about the infrastructure savings method, see Appendix A.3, “Guidelines: The infrastructure savings method” on page 564.
Figure 2-12 Scalable building block Infrastructure savings - 8 X 16 Gbps per AC2
Scalable building block for Fibre Channel, performance method
Figure 2-13 depicts the port mappings of Fibre Channel 8 X 16 Gbps for a scalable building block configuration for performance.
When you compare the port utilization for infrastructure savings in Figure 2-12 on page 34 with the port utilization for performance in Figure 2-13, you can see that the port connection mapping and customer port usage is much higher in the performance method. This method provides shared use ports that use the full bidirectional capabilities in Fibre Channel, resulting in higher performance. For more details, see Appendix A.2, “Guidelines: The performance method” on page 559.
Figure 2-13 Scalable building block - Performance solution 8 X 16 Gbps per AC2
 
 
Note: The FlashSystem V9000 code release V7.5 supports 16 Gbps direct host connections without a switch (except IBM AIX-based hosts).
 
2.1.7 Encryption support
IBM FlashSystem V9000 provides optional encryption of data at rest, which protects against the potential exposure of sensitive user data and user metadata that are stored on discarded or stolen flash modules. Encryption of system data and metadata is not required, so system data and metadata are not encrypted. The FlashSystem V9000 encryption is supported on only the internal AE2 storage enclosure.
 
Note: Some IBM products, which implement encryption of data at rest stored on a fixed block storage device, implement encryption using self-encrypting disk drives (SEDs). The IBM FlashSystem V9000 flash module chips do not use SEDs. The IBM FlashSystem V9000 data encryption and decryption are performed by the IBM MicroLatency modules, which can be thought of as the functional equivalent of Self-Encrypting Flash Controller (SEFC) cards.
 
The general encryption concepts and terms for the IBM FlashSystem V9000 are as follows:
Encryption-capable is the ability of IBM FlashSystem V9000 to optionally encrypt user data and metadata by using a secret key.
Encryption-disabled is a system where no secret key is configured. The secret key is not required or used to encrypt or decrypt user data. Encryption logic is actually still implemented by the IBM FlashSystem V9000 while in the encryption-disabled state, but uses a default, or well-known, key. Therefore, in terms of security, encryption-disabled is effectively the same as not encrypting at all.
Encryption-enabled is a system where a secret key is configured and used. This does not necessarily mean that any access control was configured to ensure that the system is operating securely. Encryption-enabled only means that the system is encrypting user data and metadata using the secret key.
Access-control-enabled describes an encryption-enabled system that is configured so that an access key must be provided to authenticate with an encrypted entity, such as a secret key or flash module, to unlock and operate that entity. The IBM FlashSystem V9000 permits access control enablement only when it is encryption-enabled. A system that is encryption-enabled can optionally also be access-control-enabled to provide functional security.
Protection-enabled describes a system that is both encryption-enabled and access-control-enabled. An access key must be provided to unlock the IBM FlashSystem V9000 so that it can transparently perform all required encryption-related functionality, such as encrypt on write and decrypt on read.
Protection Enablement Process (PEP) transitions the IBM FlashSystem V9000 from a state that is not protection-enabled to a state that is protection-enabled. PEP requires that the client provide a secret key to access the system, and the secret key must be resiliently stored and backed up externally to the system, for example, on a Universal Serial Bus (USB) flash drive.
PEP is not merely activating a feature through the graphical user interface (GUI) or command-line interface (CLI). To avoid the loss of data that was written to the system before the PEP occurs, the client must move all of the data to be retained off the system before the PEP is initiated, and then must move the data back onto the system after the PEP completes. PEP is performed during the system initialization process, if encryption is activated.
Application-transparent encryption is an attribute of the IBM FlashSystem V9000 encryption architecture, referring to the fact that applications are not aware that encryption and protection are occurring. This is in contrast to application-managed encryption (AME), where an application must serve keys to a storage device.
Hot key activation is the process of changing an encryption-disabled FlashSystem V9000 to encryption-enabled while the system is running.
Nondisruptive rekey is the process of creating a new encryption key that supersedes the existing key on a running FlashSystem V9000.
Encryption has no performance effect.
 
Note: IBM FlashSystem V9000 requires a license for encryption. If encryption is required, validate with IBM Sales or your IBM Business Partner that the license is ordered with the equipment.
Configuring encryption
You can activate encryption with the easy setup wizard during initialization, or with the hot key activation process after the FlashSystem V9000 is already initialized, when an encryption feature code is purchased. If encryption is activated, an encryption key is generated by the system to be used for access to the system. Either process starts a wizard that guides the user through the process of copying the encryption key to multiple USB keys. For details about setting up encryption, see 9.4, “Security menu” on page 360.
IBM FlashSystem V9000 supports Encryption Rekey to create new encryption keys that supersede the existing encryption keys.
 
Note: If you plan to implement either hot key activation or encryption rekey, be sure to inform IBM Support so that it can monitor the operation. IBM Support personnel will guide you through this process.
Accessing an encrypted system
At system start (power on) or to access an encrypted system, the encryption key must be provided by an outside source so that the IBM FlashSystem V9000 can be accessed. The encryption key is provided by inserting the USB flash drives that were created during system initialization into one of the AC2 control enclosures in the solution.
Encryption technology
Key encryption is protected by an Advanced Encryption Standard (XTS-AES) algorithm key wrap using the 256-bit symmetric option in XTS mode, as defined in the IEEE 1619-2007 standard. An HMAC-SHA256 algorithm is used to create a hash message authentication code (HMAC) for corruption detection, and it is additionally protected by a system-generated cyclic redundancy check (CRC).
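The building blocks named above can be demonstrated with a short, generic example: AES-256 in XTS mode to encrypt a buffer, HMAC-SHA256 for corruption detection, and a CRC as an additional check. This is a conceptual illustration using the third-party Python cryptography package applied to a data buffer; it is not the FlashSystem V9000 key-wrap implementation, and the key handling shown is deliberately simplistic.

# Generic illustration of XTS-AES-256, HMAC-SHA256, and CRC (conceptual
# only; this is not the FlashSystem V9000 key-wrap implementation).
# Requires the third-party "cryptography" package.
import hashlib, hmac, os, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data_key = os.urandom(64)           # XTS-AES-256 uses two 256-bit keys
mac_key = os.urandom(32)
tweak = (0).to_bytes(16, "little")  # per-sector tweak value (sector 0 here)

plaintext = b"sixteen-byte-aligned user data..".ljust(512, b"\0")

# Encrypt the sector with AES-256 in XTS mode.
encryptor = Cipher(algorithms.AES(data_key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# Corruption detection: HMAC-SHA256 plus a CRC over the ciphertext.
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
crc = zlib.crc32(ciphertext)

# Decrypt and verify.
decryptor = Cipher(algorithms.AES(data_key), modes.XTS(tweak)).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()
assert recovered == plaintext
assert hmac.compare_digest(tag, hmac.new(mac_key, ciphertext, hashlib.sha256).digest())
assert crc == zlib.crc32(ciphertext)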
2.1.8 Comparison of IBM FlashSystem models V840 and V9000
Table 2-2 lists differences between features of FlashSystem V840 and FlashSystem V9000.
Table 2-2 Comparison of IBM FlashSystem V840 and V9000 building block
Feature to compare                         IBM FlashSystem V840                          IBM FlashSystem V9000
Storage capacity options                   Up to 320 TB                                  Up to 456 TB
Form factor                                6U                                            6U
Performance (IOPS)                         Up to 2,500,000                               Up to 2,500,000
Bandwidth                                  19.2 GBps                                     19.2 GBps
Latency                                    200 µs                                        200 µs
Available interfaces                       16, 8, and 4 Gbps FC                          16, 8, and 4 Gbps FC
                                           10 Gbps FCoE                                  10 Gbps FCoE
                                           10 Gbps iSCSI                                 10 Gbps iSCSI
Chip type                                  eMLC                                          IBM Enhanced MLC
Chip RAID                                  Yes                                           Yes
System RAID                                Yes                                           Yes
Power consumption (steady-state RAID 5)    2400 W                                        2400 W
LUN masking                                Yes                                           Yes
Management                                 IBM SAN Volume Controller                     FlashSystem V9000 Storage GUI
                                           CLI                                           CLI
                                           IBM Storage GUI                               SNMP
                                           Simple Network Management Protocol (SNMP)     Email alerts
                                           Email alerts                                  Syslog redirect
                                           Syslog redirect
Table 2-3 further describes architectural differences between FlashSystem V840 and FlashSystem V9000.
Table 2-3 IBM FlashSystem V840 and V9000 architectural comparison
Cluster management
  IBM FlashSystem V840:  Two independent systems, each with its own cluster and its own management.
  IBM FlashSystem V9000: One system; the cluster is only in the controller nodes, with a single GUI and a single CLI.
System setup
  IBM FlashSystem V840:  Each system is set up independently: Tech port or USB port on the AC2 control enclosure, then GUI or CLI; USB setup (inittool) on the storage enclosure (AE1).
  IBM FlashSystem V9000: Single setup: Tech port or USB port on the AC2 control enclosure, then GUI or CLI; no USB setup (inittool) needed on the storage enclosure (AE2).
Cabling
  IBM FlashSystem V840:  Same
  IBM FlashSystem V9000: Same, but Ethernet not required
Event logging
  IBM FlashSystem V840:  Each system has its own event log.
  IBM FlashSystem V9000: Single integrated event log.
Code upgrade
  IBM FlashSystem V840:  Each system is upgraded independently; there is no requirement that code levels are the same, and the code packages are not the same.
  IBM FlashSystem V9000: One system, and all parts run the same level of code: a single upgrade process updates all system components (storage enclosure and controllers).
Encryption
  IBM FlashSystem V840:  Encryption is all managed from the storage enclosure (AE1).
  IBM FlashSystem V9000: Encryption is managed from the control enclosure (AC2).
2.1.9 Management
IBM FlashSystem V9000 includes a single state-of-the-art IBM storage management interface. The IBM FlashSystem V9000 single graphical user interface (GUI) and command-line interface (CLI) are updated from previous versions of the IBM FlashSystem products to include the IBM SAN Volume Controller CLI and GUI, which resembles the popular IBM XIV GUI.
IBM FlashSystem V9000 also supports Simple Network Management Protocol (SNMP), email notification (Simple Mail Transfer Protocol (SMTP)), and syslog redirection.
Figure 2-14 on page 39 shows the IBM FlashSystem V9000 GUI for a fixed building block system and Figure 2-15 on page 39 shows the GUI for a fully configured Scale up and scale out building block system.
For more details about the use of the FlashSystem V9000 GUI and CLI, see 2.5.1, “System management” on page 60.
Figure 2-14 IBM FlashSystem V9000 GUI for a Fixed Building Block System
Figure 2-15 IBM FlashSystem V9000 GUI for a Scale up and scale out scalable Building Block System
The IBM Storage Mobile Dashboard, version 1.5.4, also supports FlashSystem V9000. IBM Storage Mobile Dashboard is a no-cost application that provides basic monitoring capabilities for IBM storage systems. Storage administrators can securely check the health and performance status of their IBM storage systems by viewing events and real-time performance metrics. You can download this application for your Apple iPhone from this page:
Figure 2-16 shows examples of the IBM Storage Mobile Dashboard.
Figure 2-16 IBM Storage Mobile Dashboard
2.2 Architecture of IBM FlashSystem V9000
The IBM FlashSystem V9000 architecture is explained in the following section together with key product design characteristics, performance, and serviceability. Hardware components are also described.
2.2.1 Overview of architecture
The FlashSystem V9000 AC2 control enclosure combines software and hardware into a comprehensive, modular appliance that uses symmetric virtualization. Single virtualization engines, which are known as AC2 control enclosures, are combined to create clusters. In a scalable solution, each cluster can contain between two and eight control enclosures.
Symmetric virtualization is achieved by creating a pool of managed disks (MDisks) from the attached storage systems or enclosures. Those storage systems or enclosures are then mapped to a set of volumes for use by attached host systems. System administrators can view and access a common pool of storage on the storage area network (SAN). This functionality helps administrators to use storage resources more efficiently and provides a common base for advanced functions.
The design goals for the IBM FlashSystem V9000 are to provide the client with the fastest and most reliable all-flash storage array on the market, while making it simple to service and support with no downtime. The IBM FlashSystem V9000 uses many FPGA components and as little software as possible, keeping I/O latency to a minimum and I/O performance to a maximum.
Figure 2-17 illustrates the IBM FlashSystem V9000 AE2 storage enclosure design. At the core of the system are the two high-speed non-blocking crossbar buses. The crossbar buses provide two high-speed paths, which carry the data traffic, and they can be used by any host entry path into the system. There is also a slower speed bus for management traffic.
Connected to the crossbar buses are high-speed non-blocking RAID modules and IBM MicroLatency modules. There is also a passive main system board (midplane) to which both the RAID canisters and all the flash modules connect, and also connections to battery modules, fan modules, and power supply units.
The two RAID canisters contain crossbar controllers, management modules, interface controllers and interface adapters, and fan modules. The two RAID canisters form a logical cluster, and there is no single point of failure in the design (assuming that all host connections have at least one path to each canister).
Figure 2-17 AE2 storage enclosure architecture
Software
The FlashSystem V9000 software provides these functions for the host systems that attach to FlashSystem V9000:
Creates a single pool of storage
If there is more than one AE2, a separate pool can be created for each AE2 or one storage pool can be created that spans all AE2 storage enclosures in the solution
Provides logical unit virtualization
Manages logical volumes
FlashSystem V9000 system also provides these advanced functions:
Large scalable cache
Copy services:
 – IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
 – Metro Mirror (synchronous copy)
 – Global Mirror (asynchronous copy)
Data migration
Space management
IBM Easy Tier function to automatically migrate the most frequently used data to higher-performance storage
Thin-provisioned logical volumes
Compressed volumes to consolidate storage
The new release of FlashSystem V9000 V7.5 code provides the following enhancements:
 – HyperSwap, which enables each volume to be presented by two I/O groups
 – Microsoft Offloaded Data Transfer (ODX)
 – VMware and vSphere 6.0 support
 – Increased FlashCopy bitmap space
For more information about the FlashSystem V9000 advanced software functions, see Chapter 3, “Advanced software functions” on page 75.
MDisks
The AC2 control enclosures view the storage that is presented to them by the AE2 storage enclosures as several disks or LUNs, which are known as managed disks or MDisks. Every AE2 storage enclosure is presented as a single MDisk. If there is more than one AE2, a separate pool can be created for each AE2 or one storage pool can be created that spans all AE2 storage enclosures in the solution. The host servers do not see the MDisks. Rather, they see several logical disks, which are known as virtual disks or volumes, which are presented by the FlashSystem V9000 through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.
The mdisks are placed into storage pools where they are divided into several extents, which are 16 - 8192 MB, as defined by the FlashSystem V9000 administrator. For more information about the total storage capacity that is manageable per system regarding the selection of extents, see the following web page:
A volume is host-accessible storage that was provisioned from one storage pool. Or, if it is a mirrored volume, it was provisioned from two storage pools. The maximum size of an mdisk is 1 PB. One FlashSystem V9000 supports up to 4096 mdisks.
MDisk considerations for FlashSystem V9000
With IBM Spectrum Virtualize (2145 SAN Volume Controller Model DH8) and with IBM FlashSystem V840, the leading practices previously stated that the back-end storage (on SAN Volume Controller) or internal storage (on FlashSystem V840) should be divided into 16 mdisks for the best performance.
On the FlashSystem V9000, the leading practices now state that one mdisk per AE2 array should be created, rather than the 16 mdisks previously used on older products.
The reason for this change lies in the relationship between the I/O throughput of the machine and the number of cores and threads in the control node architecture.
The control nodes assign workloads to different cores, depending on the object that is associated with the workload. There are three categories as shown here:
1. Interface Channel (I/O) ports
2. Vdisk
3. Mdisks
When an I/O comes in, it gets assigned to the core associated with an interface channel port. It moves to the vdisk thread and then to the mdisk thread and finally back to an interface channel thread, for de-staging back out of the system.
The vdisk (item 2 in the preceding list) has the most work associated with it.
The redesign for FlashSystem V9000 enables the interface ports to use all eight threads, whereas vdisks are restricted to seven threads and mdisks use the one thread that the vdisks do not use. Testing showed that the vdisk work is approximately seven times greater than the mdisk work, so this assignment loads all of the threads that the FlashSystem V9000 7.5 software has in a FlashSystem V9000 node.
 
Note: Interface I/O is actually handled on all eight threads. If the work is not assigned in this way, core 1 runs at only about 70% utilization.
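A back-of-the-envelope load model makes the reasoning behind this split visible: spreading the interface work across all eight threads, the vdisk work across seven threads, and the mdisk work on the remaining thread yields a roughly even load. This is an illustrative model only, not the actual Spectrum Virtualize scheduler, and the workload weights are assumptions.

# Illustrative load model for the thread assignment described above (an
# assumption for the example, not the actual Spectrum Virtualize scheduler).
THREADS = 8
vdisk_work = 7.0       # vdisk work is roughly seven times the mdisk work
mdisk_work = 1.0
interface_work = 2.0   # arbitrary weight; interface I/O runs on all threads

load = [interface_work / THREADS] * THREADS
for t in range(7):                  # vdisk work spread across threads 0-6
    load[t] += vdisk_work / 7
load[7] += mdisk_work               # mdisk work on the remaining thread

print([round(l, 2) for l in load])  # roughly even load on all eight threads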
Figure 2-18 shows the mdisk assignment transition from FlashSystem V840 with AC0 control nodes, through the AC1 nodes and to the current AC2 nodes used on the FlashSystem V9000 system.
Figure 2-18 Mdisk assignments with the three versions of FlashSystem control enclosures
Storage pool
A storage pool is a collection of up to 128 mdisks that provides the pool of storage from which volumes are provisioned. A single system can manage up to 128 storage pools. The size of these pools can be changed (expanded or shrunk) at run time by adding or removing mdisks, without taking the storage pool or the volumes offline.
At any point in time, an mdisk can be a member in one storage pool only, except for image mode volumes. Image mode provides a direct block-for-block translation from the mdisk to the volume by using virtualization. Image mode enables the virtualization of mdisks that already contain data that was written directly and not through a FlashSystem V9000; rather, it was created by a direct-connected host.
Each mdisk in the storage pool is divided into several extents. The size of the extent is selected by the administrator when the storage pool is created and cannot be changed later. The size of the extent is 16 - 8192 MB.
 
Tip: A preferred practice is to use the same extent size for all storage pools in a system. This approach is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring to copy volumes between pools.
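The storage pool and extent mechanics described above can be sketched as a simple allocator: each MDisk is divided into fixed-size extents, and a volume is provisioned by taking extents from the pool. This is a conceptual model with invented class and method names, not the actual Spectrum Virtualize allocator.

# Conceptual extent allocator for a storage pool (assumed model, not the
# actual Spectrum Virtualize implementation).
class StoragePool:
    def __init__(self, extent_mb):
        self.extent_mb = extent_mb            # fixed at pool creation (16 - 8192 MB)
        self.free_extents = []                # (mdisk_name, extent_index)

    def add_mdisk(self, name, capacity_mb):
        count = capacity_mb // self.extent_mb # MDisk is divided into extents
        self.free_extents += [(name, i) for i in range(count)]

    def create_volume(self, size_mb):
        needed = -(-size_mb // self.extent_mb)   # round up to whole extents
        if needed > len(self.free_extents):
            raise ValueError("not enough free extents in the pool")
        allocation, self.free_extents = (self.free_extents[:needed],
                                         self.free_extents[needed:])
        return allocation                        # extent map for the new volume

pool = StoragePool(extent_mb=1024)               # 1024 MB default extent size
pool.add_mdisk("mdisk0", capacity_mb=57 * 10**6) # one AE2 presented as one MDisk
volume = pool.create_volume(size_mb=100 * 1024)  # 100 GiB volume
print(len(volume), "extents of", pool.extent_mb, "MB allocated")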
FlashSystem V9000 limits the number of extents in a system to 2^22 (approximately 4 million). Because the number of addressable extents is limited, the total capacity of a FlashSystem V9000 system depends on the extent size that is chosen by the administrator.
The capacity numbers that are specified in Table 2-4 for a FlashSystem V9000 assume that all defined storage pools were created with the same extent size.
Table 2-4 Extent size-to-addressability matrix
Extent size    Maximum non thin-provisioned    Maximum thin-provisioned    Maximum MDisk            Total storage capacity
(MB)           volume capacity in GB           volume capacity in GB       capacity in GB           manageable per system
16             2,048 (2 TB)                    2,000                       2,048 (2 TB)             64 TB
32             4,096 (4 TB)                    4,000                       4,096 (4 TB)             128 TB
64             8,192 (8 TB)                    8,000                       8,192 (8 TB)             256 TB
128            16,384 (16 TB)                  16,000                      16,384 (16 TB)           512 TB
256            32,768 (32 TB)                  32,000                      32,768 (32 TB)           1 PB
512            65,536 (64 TB)                  65,000                      65,536 (64 TB)           2 PB
1024           131,072 (128 TB)                130,000                     131,072 (128 TB)         4 PB
2048           262,144 (256 TB)                260,000                     262,144 (256 TB)         8 PB
4096           262,144 (256 TB)                262,144                     524,288 (512 TB)         16 PB
8192           262,144 (256 TB)                262,144                     1,048,576 (1,024 TB)     32 PB
 
Notes:
The total capacity values assume that all of the storage pools in the system use the same extent size.
For most systems, a capacity of 1 - 2 PB is sufficient. A preferred practice is to use 256 MB for larger clustered systems. The default extent size is 1024 MB.
For more information, see IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
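The last column of Table 2-4 follows directly from the 2^22 extent limit. The quick check below treats the extents as MiB-sized and the published TB/PB figures as the corresponding binary values, which is how the numbers line up.

# Reproduce the "Total storage capacity manageable per system" column of
# Table 2-4 from the 2^22 extent limit (extents sized in MiB).
MAX_EXTENTS = 2**22

for extent_mib in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
    total_mib = MAX_EXTENTS * extent_mib
    total_tib = total_mib / 2**20          # MiB -> TiB
    if total_tib < 1024:
        print(f"{extent_mib:5d} MB extent -> {total_tib:.0f} TB")
    else:
        print(f"{extent_mib:5d} MB extent -> {total_tib / 1024:.0f} PB")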
Volumes
A system of FlashSystem V9000 control enclosures presents volumes to the hosts. Most of the advanced functions that FlashSystem V9000 provides are defined on volumes. These volumes are created from pools of managed disks (MDisks) that are presented by the RAID storage systems or by RAID arrays that are provided by the storage enclosures. All data transfer occurs through the FlashSystem V9000 control enclosures, which is described as symmetric virtualization.
The control enclosures in a system are arranged into pairs that are known as I/O groups. A single pair is responsible for serving I/O on a volume. Because a volume is served by two control enclosures, no loss of availability occurs if one control enclosure fails or is taken offline.
System management
The FlashSystem V9000 AC2 control enclosures in a clustered system operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the AC2 control enclosures in the system, which is called the configuration node. The configuration AC2 control enclosure runs a web server and provides a CLI.
The configuration node is a role that any AC2 control enclosure can take. If the current configuration node fails, a new configuration node is automatically selected from the remaining control enclosures. Each node also provides a CLI and web interface for initiating hardware service actions.
2.2.2 Hardware components
Each FlashSystem V9000 AC2 control enclosure is an individual server in a FlashSystem V9000 clustered system (I/O group) on which the FlashSystem V9000 software runs.
An I/O group takes the storage that is presented to it by the AE2 storage enclosures as MDisks, adds these MDisks to pools, and translates the storage into logical disks (volumes) that are used by applications on the hosts. An AC2 control enclosure is in only one I/O group and provides access to the volumes in that I/O group.
These are the core IBM FlashSystem V9000 components:
Canisters
Interface cards
IBM MicroLatency modules
Battery modules
Power supply units
Fan modules
Figure 2-19 shows the IBM FlashSystem V9000 front view. The 12 IBM MicroLatency modules are in the middle of the unit.
Figure 2-19 IBM FlashSystem V9000 front view
Figure 2-20 shows the IBM FlashSystem V9000 rear view. The two AC2 control enclosures are at the top and bottom, with the AE2 storage enclosure in the middle. All power supply units are to the right (small units).
Figure 2-20 IBM FlashSystem V9000 rear view
2.2.3 Power requirements
IBM FlashSystem V9000 is green data center friendly. The IBM FlashSystem V9000 building block uses only 3100 W of power under maximum load and uses six standard single-phase (100 V - 240 V) electrical outlets: two per AC2 control enclosure and two for the AE2 storage enclosure. Plan to attach the two power supplies in each enclosure to separate main power supply lines.
The FlashSystem V9000 maximum configuration, with four scalable building blocks and four additional AE2 storage enclosures, consumes 17900 W of power under maximum load.
 
AE2 storage enclosure: The 1300 W power supply for high-line voltage provides the AE2 storage enclosure with high power to run at maximum performance for longer durations during power supply servicing, resulting in more predictable performance under unexpected failure conditions. Optimal operation is achieved when operating between 200V - 240V (nominal). The maximum and minimum voltage ranges (Vrms) and associated high line AC ranges are as follows:
Minimum: 180V
Nominal: 200 - 240V
Maximum: 265V
Using two power sources provides power redundancy. The suggestion is to place the two power supplies on different circuits.
Important: The power cord is the main power disconnect. Ensure that the socket outlets are located near the equipment and are easily accessible.
2.2.4 Physical specifications
The IBM FlashSystem V9000 installs in a standard 19-inch equipment rack. The IBM FlashSystem V9000 building block is 6U high and 19 inches wide. A standard 42U 19-inch data center rack can be populated with the maximum FlashSystem V9000 configuration, which uses up to 36U.
The IBM FlashSystem V9000 has the following physical dimensions:
FlashSystem V9000 control enclosure (AC2) each:
 – Width: 445 mm (17.5 in) (19-inch Rack Standard)
 – Depth: 746 mm (29.4 in)
 – Height: 86 mm (3.4 in)
 – Weight: 22.0 kg (48.4 lb)
 – Airflow path: Cool air flows into the front of unit (intake) to rear of unit (exhaust)
 – Heat dissipation: 3480.24 BTU per hour
FlashSystem V9000 storage enclosure (AE2):
 – Width: 445 mm (17.6 in); 19-inch rack standard
 – Depth: 761 mm (29.96 in)
 – Height: 86.2 mm (3.39 in)
 – Weight (maximum configuration is 12 flash modules): 34 kg (75 lb)
 – Airflow path: Cool air flows into the front of unit (intake) to rear of unit (exhaust)
 – Heat dissipation: 1194 BTU (maximum configuration RAID 5)
2.3 Control enclosure (AC2) of the FlashSystem V9000
The IBM FlashSystem V9000 AC2 control enclosures are based on IBM System x server technology. Each AC2 control enclosure has the following key hardware features:
Two Intel Xeon E5 v2 Series eight-core processors with 64 GB memory
16 Gb FC, 8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet I/O ports for FC, iSCSI, and FCoE connectivity
Hardware-assisted compression acceleration (optional feature)
Two integrated battery units
2U, 19-inch rack mount enclosure with ac power supplies
The AC2 control enclosure includes three 1 Gb Ethernet ports standard for iSCSI connectivity. It can be configured with up to four I/O adapter features providing up to eight 16 Gb FC ports, up to twelve 8 Gb FC ports, or up to four 10 Gb Ethernet (iSCSI/Fibre Channel over Ethernet (FCoE)) ports. See the optional feature section in the IBM Knowledge Center:
Real-time Compression workloads can benefit from the FlashSystem V9000 with two 8-core processors with 64 GB of memory (total system memory). Compression workloads can also benefit from the hardware-assisted acceleration offered by the addition of two Compression Acceleration Cards.
The front panel unit contains a dual-battery pack with its backplane acting as an uninterruptible power supply to the AC2. The batteries are fully redundant and hot-swappable. An AC2 can operate with only one healthy battery; however, the condition is logged and the control enclosure is flagged as degraded, although it continues I/O operations.
The AC2 control enclosure hardware layout is presented in Figure 2-21.
Figure 2-21 AC2 control enclosure
Figure 2-22 illustrates the back view of the AC2 control enclosure.
Figure 2-22 Rear view of AC2 control enclosure
2.3.1 I/O connectivity
IBM FlashSystem V9000 offers various options of I/O cards for installation and configuration. The 2U rack-mount form factor of the AC2 enables the control enclosure to accommodate up to six PCIe Gen3 cards for I/O connectivity or compression support. The rear view of the AC2 is shown in Figure 2-23.
Figure 2-23 Rear view of AC2 control enclosure
Slots 4 - 6 are internally attached to processor 2 and are available when both processors and 64 GB of total memory are installed in the AC2 control enclosure node.
The I/O card installation options for a FlashSystem V9000 fixed building block are outlined in Table 2-5.
Table 2-5 Layout for FlashSystem V9000 fixed building block
Top of node
Processor 1 attachment:
  Slot 1: I/O card (8 Gbps)
  Slot 2: I/O card (8 Gbps)
  Slot 3: Not used
Processor 2 attachment:
  Slot 4: Compression Acceleration Card (optional)
  Slot 5: I/O card (8 Gbps)
  Slot 6: Compression Acceleration Card (optional)
The I/O card installation options for a FlashSystem V9000 16 Gbps scalable building block are outlined in Table 2-6.
Table 2-6 Layout for FlashSystem V9000 16 Gbps scalable building block
Top of node
Processor 1 attachment:
  Slot 1: I/O card (16 Gbps)
  Slot 2: Adapter for host connections only. Options include a four-port 8 Gbps FC adapter, a two-port 16 Gbps FC adapter, or a four-port 10 Gbps Ethernet adapter (FC over Ethernet or iSCSI).
  Slot 3: I/O card (16 Gbps)
Processor 2 attachment:
  Slot 4: Compression Acceleration Card (optional)
  Slot 5: Adapter for host connections only. Options include a four-port 8 Gbps FC adapter, a two-port 16 Gbps FC adapter, or a four-port 10 Gbps Ethernet adapter (FC over Ethernet or iSCSI).
  Slot 6: Compression Acceleration Card (optional)
 
Important: For FlashSystem V9000 V7.5 release of code, the AC2 supports direct FC adapter connection to the hosts at 16 Gbps without a switch. At the time of the writing of this book, the only restriction is that AIX hosts are not supported with this direct connect.
2.3.2 Compression Acceleration Card
Compressed volumes are a special type of volume where data is compressed as it is written to disk, saving more space. The AC2 control enclosures must have two Compression Acceleration Cards installed to use compression. Enabling compression on the FlashSystem V9000 does not affect non-compressed host to disk I/O performance. Figure 2-24 shows the Compression Acceleration Card and its possible placement in the AC2 control enclosures.
Figure 2-24 Placement of Compression Acceleration Cards
 
Remember: To use compression, two Compression Acceleration Cards are compulsory for each AC2 control enclosure in a FlashSystem V9000.
For a FlashSystem V9000 with no Compression Acceleration Cards, an attempt to create a compressed volume fails.
A fully equipped FlashSystem V9000 building block with four compression accelerators supports up to 512 compressed volumes.
2.3.3 Technician port
The FlashSystem V9000 Technician port simplifies the initial basic configuration of the system by the local administrator or by service personnel. The Technician port is marked with the letter “T” (Ethernet port 4), as depicted in Figure 2-25.
Figure 2-25 Location of Technician port
To initialize a new system, you must connect a personal computer to the Technician port on the rear of one of the AC2 control enclosures in the solution and run the initialization wizard. This port runs a Dynamic Host Configuration Protocol (DHCP) server to facilitate service and maintenance, and is ready to use in lieu of a front panel. Be sure that DHCP is enabled on your computer; otherwise, manually provide these settings (a quick subnet check is sketched after this list):
IP address set to 192.168.0.2
Network mask set to 255.255.255.0
Gateway set to 192.168.0.1
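The following minimal sketch (plain Python; the addresses are simply the manual settings listed above, and the /24 prefix follows from the 255.255.255.0 mask) verifies that a manually configured workstation address is on the same subnet as the gateway before you open the initialization wizard.

import ipaddress

# Manual workstation settings from the list above
workstation = ipaddress.ip_interface("192.168.0.2/24")   # IP address 192.168.0.2, mask 255.255.255.0
gateway = ipaddress.ip_address("192.168.0.1")            # gateway setting from the list above

# The initialization wizard is reachable only if both addresses share the same subnet
if gateway in workstation.network:
    print("Workstation is on the Technician port subnet; open the initialization wizard in a browser.")
else:
    print("Check the IP address and network mask settings before connecting to the Technician port.")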
 
Attention: Never connect the Technician port to a switch. If a switch is detected, the Technician port connection might shut down, causing an AC2 error 746 in the log.
If the node has Candidate status when you open the web browser, the initialization wizard is displayed. Otherwise, the service assistant interface is displayed. The procedure of the initial configuration is described in Chapter 6, “Installation and configuration” on page 187.
2.3.4 Battery backup
The AC2 control enclosure has two hot-swappable batteries in the front of the enclosure, with the battery backplane behind the battery drawers. See Figure 2-26 for details.
Figure 2-26 Position of batteries in AC2 control enclosure
The AC2 battery units provide the following characteristics:
Dual batteries per AC2.
Hot-swappable batteries.
Redundancy within an AC2 control enclosure.
A built-in test load capability.
A dedicated fault LED indicator for each battery.
The AC2 control enclosure is designed for two batteries, but continues to operate on a single battery. To achieve maximum redundancy and to get the full life rating of the cells, the system needs to run with both batteries. Running with a single battery results in almost a full discharge and places a higher discharge current on the cells, which leads to a reduced capacity after several cycles. Running with just one battery is a degraded state and a node error event is logged to ensure the missing or failed battery is replaced.
An AC2 control enclosure is able to continue operation with one failed battery, although after an AC power failure, the node might have to wait for the battery to charge before resuming host I/O operation.
The operational status of the batteries and their Vital Product Data are available from the IBM FlashSystem V9000 CLI by using the sainfo lsservicestatus command (Example 2-1).
Example 2-1 Checking the battery status from CLI
IBM_2145:ITSO_SVC2:superuser>sainfo lsservicestatus
Battery_count 2
Battery_id 1
Battery_status active
Battery_FRU_part 00AR260
Battery_part_identity 11S00AR056YM30BG43J0C8
Battery_fault_led off
Battery_charging_status idle
Battery_cycle_count 3
Battery_power_on_hours 298
Battery_last_recondition 140512140814
Battery_id 2
Battery_status active
Battery_FRU_part 00AR260
Battery_part_identity 11S00AR056YM30BG43J0BA
Battery_fault_led off
Battery_charging_status idle
Battery_cycle_count 2
Battery_power_on_hours 298
Battery_last_recondition
2.4 Storage enclosure (AE2) of the FlashSystem V9000
The AE2 storage enclosure components include flash modules, battery modules, and power supplies.
Each IBM FlashSystem AE2 storage enclosure contains two fully redundant canisters. The fan modules are at the bottom and the interface cards are at the top. Each canister contains a RAID controller, two interface cards, and a management controller with an associated 1 Gbps Ethernet port. Each canister also has a USB port and two hot-swappable fan modules.
Figure 2-27 shows the components of the AE2 storage enclosure. One of the two canisters has been removed so that the interface cards and fan modules are visible. The power supply unit to the right of the fans provides redundant power to the system. All components are concurrently maintainable except the midplane and the power interposer, which have no active components. All external connections are made from the rear of the system.
Figure 2-27 AE2 storage enclosure components
To maintain redundancy, the canisters are hot swappable. If any of the components (except the fans) within a canister fail, the entire canister is replaced as a unit. Both fan modules in each canister are hot-swappable.
 
Notes: If either interface card in a canister fails, the entire canister (minus the fans) must be replaced as a unit. When replacing hardware in the AE2 storage enclosure, follow the directed maintenance procedure (DMP) that is accessible through the GUI.
For more details about the IBM FlashSystem canisters, including canister state LEDs, see the IBM FlashSystem V9000 web page in the IBM Knowledge Center:
2.4.1 Interface cards
The AE2 storage enclosure supports the following interface cards:
Fibre Channel 8 Gbps (mandatory for FlashSystem V9000 fixed building blocks)
Fibre Channel 16 Gbps
 
Figure 2-28 shows a four-port FC interface card, which is used for 16 Gbps FC (only two ports used), and 8 Gbps (four ports used).
Figure 2-28 AE2 storage enclosure FC interface card
Support of 16 Gbps Fibre Channel
The AE2 storage controller supports the new 16 Gbps FC connection speed through the standard FC interface card. The following rules apply to supporting 16 Gbps FC on the AE2:
If 16 Gbps FC is used, only two of the four ports on each FC interface card can be used. The two leftmost ports (1 and 2) on each interface card provide the 16 Gbps support; the two rightmost ports (3 and 4) are disabled when 16 Gbps is sensed on any port in the AE2.
The interface is configured as either 16 Gbps FC (only two ports active) or 8 Gbps FC (all four ports active). This is configured at the factory and cannot be changed by the client.
2.4.2 MicroLatency modules
The IBM FlashSystem AE2 storage enclosure supports up to 12 IBM MicroLatency modules, accessible from the enclosure front panel. Each IBM MicroLatency module has a usable capacity of either 1.06 TiB (1.2 TB), 2.62 TiB (2.9 TB), or 5.24 TiB (5.7 TB) of flash memory. IBM MicroLatency modules without the daughterboard are either half-populated with 1.06 TiB (1.2 TB) or fully populated with 2.62 TiB (2.9 TB). The optional daughterboard adds another 2.62 TiB (2.9 TB) for a total of 5.24 TiB (5.7 TB).
Figure 2-29 illustrates an AE2 storage enclosure MicroLatency module (base unit and optional daughterboard).
Figure 2-29 AE2 storage enclosure MicroLatency module
Note: All MicroLatency modules in the AE2 storage enclosure must be ordered as 1.2 TB, 2.9 TB, or 5.7 TB. IBM MicroLatency module types cannot be mixed. The daughterboard cannot be added after deployment.
The maximum storage capacity of the IBM FlashSystem V9000 is based on the following factors:
In a RAID 5 configuration, one IBM MicroLatency module is reserved as an active spare, and capacity equivalent to one module is used to implement a distributed parity algorithm. Therefore, the maximum usable capacity of a RAID 5 configuration is 57 TB (51.8 TiB), which is 10 MicroLatency modules x 5.7 TB (5.184 TiB) each; this arithmetic is reproduced in the short sketch that follows.
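The following short sketch (plain Python, no product tooling assumed; the module counts and sizes are the ones stated above) reproduces that capacity arithmetic: one module is held as a spare and one module's worth of capacity is consumed by distributed parity.

def raid5_usable_tb(installed_modules, module_tb):
    # One module is reserved as an active spare and one module's worth of
    # capacity implements the distributed parity, leaving the rest for data.
    return (installed_modules - 2) * module_tb

TB_PER_TIB = 1024**4 / 1000**4          # one binary TiB expressed in decimal TB (about 1.0995)

usable_tb = raid5_usable_tb(12, 5.7)    # 12 x 5.7 TB modules -> 57.0 TB usable
usable_tib = usable_tb / TB_PER_TIB     # about 51.8 TiB, matching the value stated above
print(f"{usable_tb:.1f} TB usable ({usable_tib:.1f} TiB)")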
IBM MicroLatency modules are installed in the AE2 storage enclosure based on the following configuration guidelines:
A minimum of four MicroLatency modules must be installed in the system. RAID 5 is the only supported configuration of the IBM FlashSystem V9000. RAID 10 is not supported on the FlashSystem V9000.
The system supports configurations of 4, 6, 8, 10, and 12 MicroLatency modules in RAID 5.
All MicroLatency modules that are installed in the enclosure must be identical in capacity and type.
For optimal airflow and cooling, if fewer than 12 MicroLatency modules are installed in the enclosure, populate the module bays beginning in the center of the slots and adding on either side until all 12 slots are populated.
See Table 2-7 for suggestions to populate MicroLatency module bays.
Table 2-7 Supported MicroLatency module configurations
No. of installed flash modules¹    Populated flash module slots
Four                               Slots 5 - 8
Six                                Slots 4 - 9
Eight                              Slots 3 - 10
Ten                                Slots 2 - 11
Twelve                             Slots 1 - 12

¹ RAID 5 is supported with configurations of 4, 6, 8, 10, and 12 MicroLatency modules.
 
Notes:
If fewer than 12 modules are installed, module blanks must be installed in the empty bays to maintain cooling airflow in the system enclosure.
During system setup, the system automatically configures RAID settings based on the number of flash modules in the system.
All MicroLatency modules installed in the enclosure must be identical in capacity and type.
Important:
MicroLatency modules are hot swappable. However, to replace a module, you must power down the MicroLatency module by using the management GUI before you remove and replace the module. This service action does not affect the active logical unit numbers (LUNs), and I/O to the connected hosts can continue while the MicroLatency module is replaced. Be sure to follow the DMP from the IBM FlashSystem V9000 GUI before any hardware replacement.
The suggestion is for the AE2 storage enclosure to remain powered on, or be powered on periodically, to retain array consistency. The AE2 storage enclosure can be safely powered down for up to 90 days, in temperatures up to 40 degrees C. Although the MicroLatency modules retain data if the enclosure is temporarily disconnected from power, if the system is powered off for a period of time exceeding 90 days, data might be lost.
FlashSystem V840 MicroLatency modules are not supported in the FlashSystem V9000 AE2 storage enclosure and installation should not be attempted.
2.4.3 Battery modules
The AE2 storage enclosure contains two hot-swappable battery modules. Their function is to ensure that the system shuts down gracefully (with the write buffer fully flushed and synchronized) when AC power to the unit is lost. Figure 2-30 on page 58 shows Battery Module 1, which is at the leftmost front position of the AE2 storage enclosure. A battery module can be hot-swapped without software intervention; however, be sure to follow the DMP from the IBM FlashSystem V9000 GUI before any hardware replacement.
Battery reconditioning
The AE2 storage enclosure has a battery reconditioning feature that calibrates the gauge that reports the amount of charge on the batteries. On systems that have been installed for 10 months or more, or systems that have experienced several power outages, the recommendation to run “battery reconditioning” is displayed in the Event Log shortly after upgrading. For more information, see the FlashSystem V9000 web page in the IBM Knowledge Center:
Figure 2-30 AE2 storage enclosure battery module 1
Power supply units
The AE2 contains two hot-swappable power supply units. The system can remain fully online if one of the power supply units fails. The power supply units are accessible from the rear of the unit and are fully hot swappable.
Figure 2-31 on page 59 shows the two hot-swappable power supply units. The IBM FlashSystem V9000 GUI and alerting systems (SNMP and so on) will report a power supply fault. The power supply can be hot-swapped without software intervention; however, be sure to follow the DMP from the IBM FlashSystem V9000 GUI before any hardware replacement.
Figure 2-31 AE2 storage enclosure hot swappable power supply units
Fan modules
The AE2 contains four hot-swappable fan modules, two in each canister. Each fan module contains two fans. The system can remain fully online if one of the fan modules fails. The fan modules are accessible from the rear of the unit (in each canister) and are fully hot-swappable.
Figure 2-32 shows a hot-swappable fan module. The IBM FlashSystem V9000 GUI and alerting systems (SNMP and so on) will report a fan module fault. The fan module can be hot-swapped without software intervention; however, be sure to follow the DMP from the IBM FlashSystem V9000 GUI before any hardware replacement.
Figure 2-32 AE2 storage enclosure fan module
2.5 Administration and maintenance of FlashSystem V9000
The IBM FlashSystem V9000 storage system capabilities for administration, maintenance, and serviceability are described in this section.
2.5.1 System management
FlashSystem V9000 uses the popular IBM SAN Volume Controller CLI and GUI, which deliver the functions of IBM Spectrum Virtualize, part of the IBM Spectrum Storage family. The IBM FlashSystem V9000 supports SNMP, email forwarding (SMTP), and syslog redirection for complete enterprise management access.
Graphical user interface (GUI)
IBM FlashSystem V9000 includes the standard IBM Spectrum Virtualize GUI.
The IBM FlashSystem V9000 GUI is started from a supported Internet browser when you enter the system's management IP address. The login window then opens (Figure 2-33).
 
Figure 2-33 IBM FlashSystem V9000 GUI login window
Enter a valid user name and password. A system overview window opens (Figure 2-34 for Fixed Building Block or Figure 2-35 for Scalable Building Block configurations). The middle of the window displays a real-time graphic of the IBM FlashSystem V9000.
Figure 2-34 System overview window (Fixed Building Block)
Figure 2-35 System overview window for a Scale up and scale out environment
At the bottom of the window are three dashboard icons:
Capacity
Performance
System status
The left side of the window displays seven function icons: Monitoring, Pools, Volumes, Hosts, Copy Services, Access, and Settings.
These functions are briefly described next. Also, see the following chapters:
Details about the Settings function are in Chapter 9, “Configuring settings” on page 339.
Monitoring function
Figure 2-36 shows the Monitoring icon and the associated branch-out menu. Click the Monitoring icon if you want to select any of these actions:
System: Monitor the system health of the IBM FlashSystem V9000 hardware.
Events: View the events log of the IBM FlashSystem V9000.
Performance: Start the system I/O performance graphs.
Figure 2-36 IBM FlashSystem V9000 GUI: Monitoring icon and branch-out menu
Pools function
Figure 2-37 on page 63 shows the Pools icon and the associated branch-out menu. Click the Pools icon if you want to select any of these actions:
Volumes by Pool: View a list of volumes (LUNs) that are associated with pools, create new associations, or delete associations.
Internal Storage: View all internal storage associated with the FlashSystem V9000.
MDisks by Pools: View a list of MDisks that are associated with pools, create new associations, or delete associations.
System Migration: Perform storage migration actions for data from externally virtualized storage.
Figure 2-37 IBM FlashSystem V9000 GUI: Pools icon and branch-out menu
Volumes function
Figure 2-38 shows the Volumes icon and the associated branch-out menu. Click the Volumes icon if you want to do any of these actions:
Volumes: View a list of all system storage volumes (LUNs), create new volumes, edit existing volumes, and delete volumes.
Volumes by Pools: View a list of volumes that are associated with pools, create new associations, or delete associations.
Volumes by Host: View a list of volumes that are associated with hosts, create new associations, or delete associations.
Figure 2-38 IBM FlashSystem V9000 GUI: Volumes icon and branch-out menu
Hosts function
Figure 2-39 shows the Hosts icon and the associated branch-out menu. Click the Hosts icon if you want to select any of these actions:
Hosts: View a list of all hosts, create new hosts, edit existing hosts, and delete hosts.
Port by Host: View a list of ports that are associated with a host, create new hosts, edit existing hosts, and delete hosts.
Host Mappings: View mappings per host regarding volumes.
Volumes by Host: View a list of volumes that are associated with hosts, create new associations, or delete associations.
Figure 2-39 IBM FlashSystem V9000 GUI: Hosts icon and branch-out menu
Copy Services function
Figure 2-40 on page 65 shows the Copy Services icon and associated branch-out menu. Click the Copy Services icon if you want to select any of these actions:
FlashCopy: View a list of all volumes and their associated flash copies, create new FlashCopy relationships, and edit or delete existing relationships.
Consistency Groups: View the consistency groups created for remote copy partnerships, create new groups, edit existing groups, and delete groups.
FlashCopy Mappings: View a list of current FlashCopy mappings and their status, create new mappings, edit existing mappings, and delete mappings.
Remote Copy: View the consistency groups created for remote copy partnerships, create new groups, edit existing groups, and delete groups.
Partnerships: View the system partnerships with secondary systems, create a new partnership, edit a partnership, and delete a partnership.
Figure 2-40 IBM FlashSystem V9000 GUI: Copy Services icon and branch-out menu
Access function
Figure 2-41 shows the Access icon and associated branch-out menu. Click the Access icon if you want to select any of these actions:
Users: View a list of current users, create new users, edit existing users, and delete users.
Audit Log: View the system access log and view actions by individual users.
Figure 2-41 IBM FlashSystem V9000 GUI: Access icon and branch-out menu
Settings function
Figure 2-42 shows the Settings icon and associated branch-out menu. Click the Settings icon if you want to configure system parameters, including alerting, open access, GUI settings, and other system-wide configuration.
Figure 2-42 IBM FlashSystem V9000 GUI: Settings icon and branch-out menu
Command-line interface (CLI)
IBM FlashSystem V9000 uses the standard IBM Spectrum Virtualize storage CLI. This CLI is common to several IBM storage products, including IBM SAN Volume Controller and the IBM Storwize family: the V7000, V5000, V3700, and V3500 disk systems. The IBM Spectrum Virtualize CLI is easy to use, with built-in help and hint menus.
To access the IBM FlashSystem V9000 Spectrum Virtualize CLI, a Secure Shell (SSH) session to the management IP address must be established. You are then prompted for a user name and password.
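As an illustration only (not taken from the product documentation), the following Python sketch uses the widely available paramiko SSH library to open such a session and run the sainfo lsservicestatus command shown in Example 2-1; the management IP address and credentials are placeholders that you must replace with your own.

import paramiko

MGMT_IP = "192.168.70.121"   # placeholder management IP address
USERNAME = "superuser"       # placeholder credentials; use your own
PASSWORD = "passw0rd"

# Establish the SSH session to the management IP address, as described above
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(MGMT_IP, username=USERNAME, password=PASSWORD)

# Run a read-only command (see Example 2-1) and print its output
stdin, stdout, stderr = client.exec_command("sainfo lsservicestatus")
print(stdout.read().decode())
client.close()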
Call home email SMTP support
IBM FlashSystem V9000 supports setting up a Simple Mail Transfer Protocol (SMTP) mail server for alerting the IBM Support Center of system incidents that might require a service event. These emails can also be sent within the client’s enterprise to other email accounts that are specified. After it is set up, system events that might require service are emailed automatically to an IBM Service account specified in the IBM FlashSystem V9000 code.
The email alerting can be set up as part of the system initialization process, or added or edited at any time through the IBM FlashSystem V9000 GUI. Also, a test email can be generated at any time to test the connections. Figure 2-43 on page 67 shows the IBM FlashSystem V9000 Email setup window.
 
Tip: Be sure to set up Call Home. For details see 9.2.1, “Email and call home” on page 340.
Figure 2-43 IBM FlashSystem V9000 Email alerting setup window
SNMP support
IBM FlashSystem V9000 supports SNMP versions 1 and 2. The GUI is used to set up SNMP support on the IBM FlashSystem V9000.
To set up SNMP support on the IBM FlashSystem V9000, click the Settings icon at the left side of the window, click the Notifications tab and click the SNMP tab to enter the SNMP trap receiver IP address and community access information. Figure 2-44 shows the IBM FlashSystem V9000 SNMP setup window.
Figure 2-44 IBM FlashSystem V9000 SNMP setup window
Note: The IBM FlashSystem V9000 CLI can also be used to program the SNMP settings.
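As a simple way to confirm that traps from the system reach the configured receiver host, the following Python sketch can be run on that host. It is an illustration only, not an SNMP manager: it listens on the standard trap port 162 (binding to it usually requires administrator privileges) and merely reports that UDP packets arrive, without decoding the SNMP payload.

import socket

TRAP_PORT = 162   # standard SNMP trap port; binding usually requires elevated privileges

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", TRAP_PORT))
print(f"Waiting for SNMP traps on UDP port {TRAP_PORT} ...")

while True:
    # Report every datagram that arrives; a real trap receiver would decode the SNMP payload
    data, (sender_ip, sender_port) = sock.recvfrom(4096)
    print(f"Received {len(data)} bytes from {sender_ip}:{sender_port}")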
Redirection of syslog
You can redirect syslog messages to another host for system monitoring. Use the GUI to set up syslog redirection on the IBM FlashSystem V9000. To set up syslog redirection, click the Settings icon on the lower left of the window, click the Notifications tab, and then click the Syslog tab to enter the remote host trap IP address and directory information. Figure 2-45 shows the Syslog redirection setup window.
Figure 2-45 IBM FlashSystem V9000 Syslog redirection setup window
Note: The IBM FlashSystem V9000 CLI can also be used to set up syslog redirection.
2.5.2 Software and licensing
FlashSystem V9000 uses the advanced software features of SAN Volume Controller, which delivers the functionality of IBM Spectrum Virtualize. FlashSystem V9000 data services are provided through FlashSystem V9000 software. FlashSystem V9000 has both base and optional software licenses.
For more information about FlashSystem V9000 advanced software functionality, see these resources:
IBM FlashSystem V9000 Product Guide, TIPS1281:
Base licensed features
The following functions are provided with the FlashSystem V9000 base software license:
Virtualization of FlashSystem V9000 storage enclosures
Enables rapid and flexible provisioning and simple configuration changes.
Thin provisioning
Helps improve efficiency by allocating disk storage space in a flexible manner among multiple users, based on the minimum space that is required by each user at any given time.
Data migration
Enables easy and nondisruptive moves of volumes from another storage system onto the FlashSystem V9000 system by using Fibre Channel connectivity. Dynamic migration helps speed data migrations from weeks or months to days, eliminating the cost of add-on migration tools and providing continuous availability of applications by eliminating downtime.
Simple GUI
Simplified management with the intuitive GUI enables storage to be quickly deployed and efficiently managed. The GUI runs on the FlashSystem V9000 system, so there is no need for a separate console. All you need to do is point your web browser to the system.
IBM Easy Tier technology
This feature provides a mechanism to seamlessly migrate data to the most appropriate tier within the FlashSystem V9000. This migration can be to the internal flash memory within the FlashSystem V9000 storage enclosure, or to external storage systems that are virtualized by the FlashSystem V9000 control enclosure. Easy Tier technology adds a blended economy of capacity and is useful for cost-effective expansion and use of your existing storage capacity investment.
Easy Tier supports up to three tiers of storage. For example, you can set up a storage pool intended for Easy Tier volumes where the pool consists of the FlashSystem V9000 storage enclosures, 15,000 RPM Fibre Channel disk drives, and SAS disk drives.
Automatic restriping of data across storage pools
When growing a storage pool by adding more storage to it, FlashSystem V9000 software can restripe your data on pools of storage so you do not need to implement any manual or scripting steps. This feature helps grow storage environments with greater ease while retaining the performance benefits that come from striping the data across the disk systems in a storage pool.
The following functions are provided with the FlashSystem V9000 base software license for internal storage only:
FlashCopy provides a volume level point-in-time copy function for any storage that is virtualized by FlashSystem V9000. FlashCopy and snapshot functions enable you to create copies of data for backup, parallel processing, testing, and development, and have the copies available almost immediately.
Real-time Compression helps improve efficiency by compressing data by as much as 80%, enabling storage of up to 5x as much data in the same physical space. Unlike other approaches to compression, Real-time Compression is designed to be used with active primary data such as production databases and email systems, dramatically expanding the range of candidate data that can benefit from compression.
Microsoft Windows Offloaded Data Transfer (ODX) is supported with FlashSystem V9000 Software V7.5. This functionality in Windows improves efficiencies by intelligently managing FlashSystem V9000 systems to directly transfer data within or between systems, bypassing the Windows host system.
VMware vSphere 6.0 support. FlashSystem V9000 Software V7.5 supports vCenter Site Recovery Manager (SRM) and the vCenter Web Client (IBM Spectrum Control Base 2.1 functionality). Also supported are vStorage application programming interfaces (APIs) for storage awareness.
Remote Mirroring provides storage system-based data replication by using either synchronous or asynchronous data transfers over Fibre Channel communication links:
 – Metro Mirror maintains a fully synchronized copy at metropolitan distances (up to 300 km).
 – Global Mirror operates asynchronously and maintains a copy at much greater distances (up to 8000 km).
Both functions support VMware Site Recovery Manager to help speed disaster recovery.
FlashSystem V9000 remote mirroring interoperates with other FlashSystem V9000, V840, SAN Volume Controller, and V7000 storage systems.
FlashSystem Software is installable only on FlashSystem V9000 control enclosures (AC2) and storage enclosures (AE2).
Optional licensed features
The following optional licensed features are offered with the FlashSystem V9000 Software for external storage:
External storage virtualization
Enables FlashSystem V9000 to manage capacity in other Fibre Channel SAN storage systems. When FlashSystem V9000 virtualizes a storage system, its capacity becomes part of the FlashSystem V9000 system and it is managed in the same way as capacity on internal flash modules within FlashSystem V9000. Capacity in external storage systems inherits all the functional richness of the FlashSystem V9000.
Real-time Compression
Helps improve efficiency by compressing data by as much as 80%, enabling storage of up to 5x as much data in the same physical space. Unlike other approaches to compression, Real-time Compression is designed to be used with active primary data such as production databases and email systems, dramatically expanding the range of candidate data that can benefit from compression.
 
Note: With FlashSystem V9000 V7.5 software, there are revised licensing rules for SAN Volume Controller Real-time Compression Software (5641-CP7).
For externally virtualized storage (not the internal flash capacity), the measure for determining how many terabytes of SAN Volume Controller Real-time Compression 5641-CP7 to license is no longer the volume size, as previously announced. Effective immediately for all licensed SAN Volume Controller Real-time Compression users, the measured terabyte capacity is the actual managed disk capacity consumed by the compressed volumes.
For example, suppose that you want to store 500 TB of data where 300 TB of that data cannot be compressed (so it is not configured on compressed volumes), but 200 TB of that data can be compressed and is configured on compressed volumes.
Rather than needing to license 200 TB of Real-time Compression, the compression ratio can be applied to determine how much storage that 200 TB of volumes actually uses. The compression ratio can be obtained in advance using the IBM Comprestimator tool, or it can be shown in the system later as the actual amount of managed disk space used by those compressed volumes.
If, for example, the compression ratio is 3:1 for that 200 TB of data, meaning that only 1 TB of managed storage is consumed for every 3 TB of data, you license only one third of the 200 TB, or 67 TB, of the 5641-CP7 license (this arithmetic is reproduced in the short sketch after this note). The 5641-CP7 license continues to be licensed not to a specific SAN Volume Controller hardware device, but to the customer within a country, in the same way that the SAN Volume Controller standard (5639-VC7) software is licensed today.
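The following calculation (plain Python, using only the numbers from the example above; the 3:1 ratio is illustrative) reproduces that licensing arithmetic.

def compression_license_tb(compressed_volume_tb, compression_ratio):
    # Managed disk capacity to license: the compressed volumes' size divided by the compression ratio
    return compressed_volume_tb / compression_ratio

licensed_tb = compression_license_tb(200, 3.0)                      # 200 TB of compressed volumes at a 3:1 ratio
print(f"License approximately {licensed_tb:.0f} TB of 5641-CP7")    # about 67 TB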
FlashCopy
FlashCopy provides a volume level point-in-time copy function for any storage that is virtualized by FlashSystem V9000. FlashCopy and snapshot functions enable you to create copies of data for backup, parallel processing, testing, and development, and have the copies available almost immediately.
Remote Mirroring
Provides storage system-based data replication by using either synchronous or asynchronous data transfers over Fibre Channel communication links.
Metro Mirror
Maintains a fully synchronized copy at metropolitan distances (up to 300 km).
Global Mirror
Operates asynchronously and maintains a copy at much greater distances (up to 8000 km).
Global Mirror with Change Volumes
Provides asynchronous remote copy that uses locally and remotely created FlashCopy images (change volumes).
All functions support VMware Site Recovery Manager to help speed disaster recovery.
FlashSystem V9000 remote mirroring interoperates with other FlashSystem V9000, V840, SAN Volume Controller and V7000 storage systems.
 
Note: The 5641-VC7 (External Virtualization, FlashCopy, and Remote Mirroring Features) and 5641-CP7 FC 0708 (Compression) licenses are licensed per enterprise within one country and are the same licenses as for SAN Volume Controller. So, existing SAN Volume Controller licenses can be used for the FlashSystem V9000 for these features.
HyperSwap
HyperSwap capability enables each volume to be presented by two I/O groups. The configuration tolerates combinations of node and site failures, using a flexible choice of host multipathing driver interoperability. In this usage, both the FlashSystem V9000 control enclosure and the flash enclosure identify and carry a site attribute.
The site attribute is set on the enclosure during initial cluster formation, when the operator designates the site that the equipment is in. The attribute is then used during later provisioning operations to easily automate the creation of a volume (VDisk) that has multi-site protection.
The HyperSwap function uses the following capabilities:
 – Spreads the nodes of the system across two sites, with storage at a third site acting as a tie-breaking quorum device.
 – Locates both nodes of an I/O group in the same site. Therefore, to get a volume resiliently stored on both sites, at least two I/O groups are required.
 – Uses additional system resources to support a full independent cache on each site, enabling full performance even if one site is lost. In some environments, a “HyperSwap” topology provides better performance than a “stretched” topology.
The HyperSwap function can initially be configured through the system CLI and will be fully configurable through the GUI and a simplified set of CLI commands in a future version of the FlashSystem V9000 software.
Hosts, FlashSystem V9000 control enclosures and FlashSystem V9000 storage enclosures are in one of two failure domains/sites, and volumes are visible as a single object across both sites (I/O groups).
Figure 2-46 shows the HyperSwap Overview and how the function works.
Figure 2-46 HyperSwap Overview
The HyperSwap Overview in Figure 2-46 shows the following components:
 – Each primary volume (denoted by the “p” in the volume name) has a secondary volume (denoted by the “s” in the volume name) on the opposite I/O group.
 – The secondary volumes are not mapped to the hosts.
 – The dual write to the secondary volumes is handled by the FlashSystem V9000 HyperSwap function, and is transparent to the hosts.
HyperSwap Golden image
The use of FlashCopy (no FlashCopy licensing required for the use of HyperSwap) helps maintain a golden image during automatic resynchronization. Because remote mirroring is used to support the HyperSwap capability, Remote Mirroring licensing is a requirement for using HyperSwap.
A HyperSwap relationship becomes out-of-sync when, for example, one site goes offline. When the failed site returns and the HyperSwap relationship is re-established, the two copies are out-of-sync and must be resynchronized.
Resynchronization is started automatically when possible, with the failed site acting as a secondary for that resynchronization. Before the resynchronization process starts, a FlashCopy is taken from the secondary copy, the copy which was previously offline. The FlashCopy uses the change volume that was assigned to the site during the HyperSwap setup.
This FlashCopy is now a golden image, so if the other site crashes or the sync process breaks, the FlashCopy contains the data before the sync process was started.
 
 
Remember: The golden image only exists during the resync of a broken and reestablished HyperSwap relationship.
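As a purely conceptual sketch (plain Python; a toy model of the flow described above, not the product's implementation, and no product CLI is assumed), the following code mimics the golden image lifecycle: before resynchronization starts, a point-in-time image of the stale copy is kept, and it is discarded once the copies are back in sync.

class HyperSwapVolumeModel:
    """Toy model of a HyperSwap volume: one copy per site plus a temporary golden image."""

    def __init__(self, data):
        self.site1_copy = data        # copy at site 1
        self.site2_copy = data        # copy at site 2
        self.golden_image = None      # exists only while a resynchronization is in progress

    def write_while_site2_offline(self, data):
        # Site 2 is unreachable, so only the surviving copy receives the write
        self.site1_copy = data

    def begin_resynchronization(self):
        # Take a FlashCopy-like point-in-time image of the stale copy before overwriting it
        self.golden_image = self.site2_copy

    def complete_resynchronization(self):
        # Copies are back in sync; the golden image is no longer needed
        self.site2_copy = self.site1_copy
        self.golden_image = None

vol = HyperSwapVolumeModel("version 1")
vol.write_while_site2_offline("version 2")   # site 2 copy is now stale
vol.begin_resynchronization()                # golden image preserves "version 1"
vol.complete_resynchronization()             # both copies now hold "version 2"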
For additional information and examples of the HyperSwap function, see Chapter 11, “HyperSwap” on page 411.
For additional information and examples of how licensing is done, see IBM FlashSystem V9000 Product Guide, TIPS1281:
2.5.3 Serviceability and software enhancements
IBM FlashSystem V9000 includes several design enhancements for the administration, management, connectivity, and serviceability of the system:
Concurrent code load
IBM FlashSystem V9000 supports upgrading the system firmware on the AC2 control and AE2 storage enclosures (RAID controllers, management modules, and interface cards) and flash modules without affecting the connected hosts or their applications.
Easily accessible hot swappable modules with no single point of failure
IBM FlashSystem V9000 design enables the easy replacement of any hardware module through the front or rear of the unit. The IBM FlashSystem V9000 does not require the top panel to be removed nor does it need to be moved in the rack to replace any component.
Standard IBM CLI and GUI
IBM FlashSystem V9000 uses the latest Spectrum Virtualize CLI and GUI for simple and familiar management of the unit.
Encryption support
IBM FlashSystem V9000 supports hardware encryption of the flash modules to meet the audit requirements of enterprise, financial, and government clients.
Sixteen Gbps FC support
IBM FlashSystem V9000 supports 16 Gbps FC, enabling clients to take advantage of the latest high-speed networking equipment while increasing performance.
2.6 Support matrix for the FlashSystem V9000
The IBM FlashSystem V9000 supports a wide range of operating systems (Windows Server 2008 and 2012, Linux, and IBM AIX), hardware platforms (IBM System x, IBM Power Systems™, and x86 servers not from IBM), host bus adapters (HBAs), and SAN fabrics.
For specific information, see the IBM System Storage Interoperation Center (SSIC):
Contact your IBM sales representative or IBM Business Partner for assistance or questions about the IBM FlashSystem V9000 interoperability.
2.7 Warranty information
FlashSystem V9000 includes a one-year or a three-year warranty.
Technical Advisor support is provided during the warranty period. This support enhances end-to-end support for the client’s complex IT solutions. The Technical Advisor uses an integrated approach for proactive, coordinated cross-team support to enable customers to maximize IT availability.
Technical Advisor support for FlashSystem V9000 is delivered remotely and includes a documented support plan, coordinated problem and crisis management, reporting on the client's hardware inventory and software levels, and consultation regarding FlashSystem software updates. The Technical Advisor conducts a Welcome Call with the client and provides a statement of work for this support.