Planning
This chapter describes the steps that are required to plan the installation of the IBM FlashSystem V9000 in your environment. It considers the implications for your storage network, from both the host attachment side and the virtualized storage expansion side, and describes the environmental requirements that you must consider.
This planning guide is based on the IBM FlashSystem V9000 AC3 control enclosure and the AE3 storage enclosure. Details about the AE3 storage enclosure and about IBM Spectrum Virtualize 8.1 are described in these publications:
Implementing IBM FlashSystem 900 Model AE3, SG24-8414
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1, SG24-7933
For any configuration based on the AC2 or AC3 control enclosures and the AE2 storage enclosure, see the previous editions of these Redbooks publications:
Introducing and Implementing IBM FlashSystem V9000, SG24-8273
IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317
 
2.1 General planning introduction
The new IBM FlashSystem V9000 AC3 / AE3 combination is fundamentally different from the previous AC2 / AE2 configuration. The IBM FlashSystem V9000 AE3 storage enclosure is now a virtualized enclosure. In the older combinations of AC2 or AC3 control enclosures with the AE2, the AE2 storage enclosure was managed by the AC2 or AC3 control enclosures. The reasons for this change are described in IBM FlashSystem V9000 Model AE3 Product Guide, REDP-5468.
Figure 2-1 shows the relationship between the IBM FlashSystem V9000 AC3 control enclosures and the new virtualized AE3 storage enclosure.
Figure 2-1 IBM FlashSystem V9000 AC3 control enclosures, and virtualized AE3 storage enclosure
Figure 2-2 shows the relationship between the V9000 AC3 control nodes, the managed AE2 storage enclosure, and the new virtualized AE3 storage enclosure in a mixed storage enclosure environment.
Figure 2-2 IBM FlashSystem V9000 AC3 / AE2 / AE3 enclosure combinations
To achieve the most benefit from the IBM FlashSystem V9000, preinstallation planning must include several important steps. These steps can ensure that the IBM FlashSystem V9000 provides the best possible performance, reliability, and ease of management to meet the needs of your solution. Proper planning and configuration also helps minimize future downtime by avoiding the need for changes to the IBM FlashSystem V9000 and the storage area network (SAN) environment to meet future growth needs.
Important steps include planning the IBM FlashSystem V9000 configuration and completing the planning tasks and worksheets before system installation.
An IBM FlashSystem V9000 solution is sold in what is referred to as a building block, as shown in Figure 2-3. A single building block consists of two AC3 control enclosures and one AE3 storage enclosure. Each building block is an I/O Group in the solution.
 
Note: In this chapter, the FlashSystem V9000 AE3 storage enclosure is called external storage because it is not managed through the FlashSystem V9000 controller GUI, but through its own FlashSystem V9000 AE3 storage enclosure GUI.
Figure 2-3 IBM FlashSystem V9000 base building block
IBM FlashSystem V9000 can grow in two directions, depending on the needs of the environment. This is known as the scale-up, scale-out capability:
All of its capabilities can be increased by adding up to four building blocks in total to the solution. This increases capacity and performance alike.
If only capacity is needed, it can be increased by adding up to four AE3 storage enclosures in total, beyond the single AE3 contained within each building block.
A fully configured IBM FlashSystem V9000 consists of eight AC3 control enclosures and eight AE3 storage enclosures, sometimes referred to as an eight by eight configuration.
This chapter covers planning for the installation of a single IBM FlashSystem V9000 solution, consisting of a single building block (two AC3 control enclosures and one AE3 storage enclosure). When you plan for larger IBM FlashSystem V9000 configurations, consider the required SAN and networking connections for the appropriate number of building blocks and scale-up expansion AE3 storage enclosures.
 
Note: If you have an existing V9000 that consists of two AC2 or AC3 control enclosures and one or more AE2 storage enclosures, and you are adding a new AE3 enclosure, see the section in this chapter about mixed storage enclosure types.
For details about scalability and multiple building blocks, see Chapter 3, “Scalability” on page 61.
 
Requirement: A pre-sale Technical Delivery Assessment (TDA) must be conducted to ensure that the configuration is correct and the solution being planned for is valid. A pre-install TDA must be conducted shortly after the order is placed and before the equipment arrives at the customer’s location to ensure that the site is ready for the delivery and that roles and responsibilities are documented regarding all the parties who will be engaged during the installation and implementation.
Before the system is installed and configured, you must complete all the planning worksheets. When the planning worksheets are completed, you submit them to the IBM service support representative (SSR).
Follow these steps when you plan for an IBM FlashSystem V9000 solution:
1. Collect and document the number of hosts (application servers) to attach to the IBM FlashSystem V9000, the traffic profile activity (read or write, sequential or random), and the performance expectations for each user group: input/output (I/O) operations per second (IOPS) and throughput in megabytes per second (MBps).
2. Collect and document the storage requirements and capacities:
 – Total external storage that will be attached to the IBM FlashSystem V9000
 – Required storage capacity for local mirror copy (Volume mirroring)
 – Required storage capacity for point-in-time copy (IBM FlashCopy)
 – Required storage capacity for remote copy (Metro Mirror and Global Mirror)
 – Required storage capacity for use of the IBM HyperSwap function
 – Required storage capacity for compressed volumes
 – Per host: storage capacity, logical unit number (LUN) quantity, and LUN sizes
 – Required virtual storage capacity that is used as fully managed volumes and as thin-provisioned volumes
3. Define the local and remote IBM FlashSystem V9000 SAN fabrics to be used for both the internal connections and the host and external storage. Also plan for the remote copy or the secondary disaster recovery site as needed.
4. Define the number of building blocks and additional expansion AE3 storage enclosures required for the site solution. Each building block makes up an I/O Group, which is the container for volumes. The number of necessary I/O Groups depends on the overall performance requirements.
5. If applicable, also consider any 12F, 24F, or 92F expansion enclosure requirements and the type of drives needed in each enclosure. See 2.2.4, “SAS expansion enclosures” on page 39 for more details.
6. Design the host side of the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth that is needed between the host and the IBM FlashSystem V9000, and between the IBM FlashSystem V9000 and the external storage subsystems.
7. Design the internal side of the SAN according to the cabling specifications for the building blocks being installed. This SAN network is used for the IBM FlashSystem V9000 control nodes and the expansion storage data transfers. Connecting this network across inter-switch links (ISLs) is not supported.
 
Important: Check and carefully count the required ports for the desired configuration. Equally important, consider future expansion when planning an initial installation to ensure ease of growth.
8. If your solution uses Internet Small Computer System Interface (iSCSI), design the iSCSI network according to the requirements for high availability (HA) and best performance. Consider the total number of ports and bandwidth that is needed between the host and the IBM FlashSystem V9000.
9. Determine the IBM FlashSystem V9000 cluster management and service Internet Protocol (IP) addresses needed. The V9000 system requires the following addresses:
a. Cluster IP address for the V9000 System
b. Service IP addresses for each AC3 control enclosure
c. Cluster IP address for each virtualized AE3 storage enclosure
d. Service IP addresses for each of the AE3 storage enclosure canisters
10. Determine the IP addresses for the IBM FlashSystem V9000 system and for the hosts that connect through the iSCSI network.
11. Define a naming convention for the IBM FlashSystem V9000 AC3 control enclosures, host, and any external storage subsystem planned. For example, ITSO_V9000-1 shows that the IBM FlashSystem V9000 is mainly used by the International Technical Support Organization (ITSO) Redbooks team, and is the first IBM FlashSystem V9000 in the department.
12. Define the managed disks (MDisks) from external storage subsystems.
 
Note: IBM FlashSystem V9000 AE3 storage enclosures must have eight volumes assigned by the SSR. These then become the MDisks assigned, per enclosure, to the V9000. Assignment of these volumes is an IBM SSR installation task.
13. Define storage pools. The use of storage pools depends on the workload, any external storage subsystems connected, additional expansions or building blocks being added, and the focus for their use. There might also be a need to define pools for data migration requirements or for EasyTier. EasyTier is discussed in detail in 2.3.6, “EasyTier” on page 52.
14. Plan the logical configuration of the volumes within the I/O Groups and the storage pools to optimize the I/O load between the hosts and the IBM FlashSystem V9000.
15. Plan for the physical location of the equipment in the rack. IBM FlashSystem V9000 planning can be categorized into two types:
 – Physical planning
 – Logical planning
The following sections describe these planning types in more detail.
 
Note: IBM FlashSystem V9000 V8.1.0 provides GUI management of the HyperSwap function. HyperSwap enables each volume to be presented by two I/O groups. If you plan to use this function, you must consider the I/O Group assignments in the planning for the IBM FlashSystem V9000.
 
2.2 Physical planning
Use the information in this section as guidance when you are planning the physical layout and connections to use for installing your IBM FlashSystem V9000 in a rack and connecting to your environment.
Industry standard racks are defined by the Electronic Industries Alliance (EIA) as 19-inch wide by 1.75-inch tall rack spaces or units, each of which is commonly referred to as 1U of the rack. Each IBM FlashSystem V9000 building block requires 6U of contiguous space in a standard rack. Additionally, each add-on expansion enclosure requires another 2U of space.
When growing the IBM FlashSystem V9000 solution, by adding building blocks and expansions, the best approach is to plan for all of the members to be installed in the same rack for ease of cabling the internal dedicated SAN fabric connections. One 42U rack can house an entire maximum configuration of an IBM FlashSystem V9000 solution, and also its SAN switches and an Ethernet switch for management connections.
Figure 2-4 shows a fully configured solution of four building blocks plus four additional scale out storage enclosures in a 42U rack. This is known as an 8 x 8 configuration because it contains a total of eight control enclosures and eight storage enclosures.
Figure 2-4 Maximum full configuration of IBM FlashSystemV9000 fully scaled-out and scaled-up
The AC3 control enclosures
Each AC3 control enclosure can support up to eight PCIe expansion I/O cards, as identified in Table 2-1, to provide a range of connectivity and capacity expansion options. However, at the time of writing, only seven slots are used (slot 1 is not supported for use).
Table 2-1 Layout of expansion card options for AC3 control enclosures
PCIe slot    Adapter type
1            Not supported for use
2            SAS
3            Fibre Channel or Ethernet
4            Fibre Channel or Ethernet
5            SAS or Compression accelerator
6            Fibre Channel or Ethernet
7            Fibre Channel or Ethernet
8            Compression accelerator
Five I/O adapter options can be ordered:
Feature code AH10: Four-port 8 gigabits per second (Gbps) FC Card:
 – Includes one four-port 8 Gbps FC Card with four Shortwave Transceivers
 – Maximum feature quantity is three
Feature code AH11: Two-port 16 Gbps FC Card:
 – Includes one two-port 16 Gbps FC Card with two Shortwave Transceivers
 – Maximum feature quantity is four
Feature code AH12: 4-port 10 Gbps Ethernet (iSCSI/FCoE):
 – Includes one four-port 10 GbE Card with four small form-factor pluggable plus (SFP+) transceivers
 – Maximum feature quantity is one
Feature code AH13: 4-port 12 Gbps SAS
Feature code AF44: 4-port 16 Gbps Fibre Channel
There is also an option for ordering the compression accelerator feature, which is included by default with IBM Real-time Compression software:
Feature code AH1A: Compression Acceleration Card:
 – Includes one Compression Acceleration Card
 – Maximum feature quantity is two
Note the following information about the AC3 control enclosure PCIe adapters and slots:
A maximum of four 4-port 16 Gbps Fibre Channel adapters can be installed in each control enclosure
A maximum of one 4-port 10 Gbps Ethernet (iSCSI/FCoE) adapter can be installed in each control enclosure
The 4-port SAS adapter can connect to V9000 standard or high-density expansion enclosures only. Only ports 1 and 3 can be used to provide the connections to each of the expansion enclosures.
The compression accelerator adapter has no external ports. Compression adapters can be installed in PCIe slots 5 and 8 only. Two adapters can be installed, offering improved I/O performance when using compressed volumes.
For more IBM FlashSystem product details, see IBM FlashSystem V9000 Model AE3 Product Guide, REDP-5468.
Figure 2-5 shows the rear view of an AC3 control enclosure with the eight available (only seven operational) PCIe adapter slot locations identified.
Figure 2-5 AC3 control enclosure rear view
The AE3 storage enclosure
The AE3 storage enclosure is a flash memory enclosure that can house up to 12 flash modules of 3.6 TB, 8.5 TB, or 18 TB capacity. The enclosure is equipped with four FC adapters (two per canister), each configured with either four 8 Gbps ports or two 16 Gbps ports, for a total of sixteen or eight ports. The AE3 storage enclosure also has two redundant 1300 W power supplies.
Figure 2-6 shows locations of these components.
Figure 2-6 AE3 rear view
For more detailed information about the AE3 storage enclosure, see Implementing IBM FlashSystem 900 Model AE3, SG24-8414.
2.2.1 Racking considerations
IBM FlashSystem V9000 must be installed in a minimum configuration of one building block. Each building block is designed with the AE3 storage enclosure in the middle, between the two AC3 control enclosures. These enclosures must be installed contiguously and in the proper order for the system bezel to be attached to the front of the system. A total of 6U is needed for a single building block. Ensure that space for the entire system is available.
Location of IBM FlashSystem V9000 in the rack
Because the IBM FlashSystem V9000 AC3 control enclosures and AE3 storage enclosure must be racked together behind their front bezel, and all members of the IBM FlashSystem V9000 must be interconnected, the location where you rack the AC3 and AE3 enclosures is important.
Use Table 2-2 to help plan the rack locations that you use for up to a 42U rack. Complete the table for the hardware locations of the IBM FlashSystem V9000 system and other devices.
Table 2-2 Hardware location planning of the IBM FlashSystem V9000 in the rack
Rack unit: EIA 42 through EIA 1 (one row per rack unit, EIA 42 at the top of the rack)
Component: left blank; record the device to be installed in each rack unit
Figure 2-7 shows a single base building block IBM FlashSystem V9000 rack installation with an additional AE3 storage enclosure plus space for future growth.
Figure 2-7 Sample racking of an IBM FlashSystemV9000 single building block with an add-on expansion for capacity
2.2.2 Power requirements
Each AC3 and AE3 enclosure requires two IEC-C13 power cable connections for its 750 W or 1300 W power supplies. Country-specific power cables are available for ordering to ensure that proper cabling is provided for the specific region. A total of six power cords are required to connect each IBM FlashSystem V9000 building block to power.
Figure 2-8 shows an example of a base building block with the two AC3 control enclosures, each with two 750 W power supplies, and the AE3 with two 1300 W power supplies. There are six connections that require power for the IBM FlashSystem V9000 system.
Figure 2-8 IBM FlashSystemV9000 fixed building block power cable connections
Upstream redundancy of the power to your cabinet (power circuit panels and on-floor Power Distribution Units (PDUs)), within cabinet power redundancy (dual power strips or in-cabinet PDUs), and upstream high availability structures (uninterruptible power supply (UPS), generators, and so on) influence your power cabling decisions.
If you are designing an initial layout with future growth plans, allow for the additional building blocks to be co-located in the same rack as your initial system, for ease of planning the additional interconnects required. A maximum configuration of the IBM FlashSystem V9000 with dedicated internal switches for SAN and local area network (LAN) can almost fill a 42U 19-inch rack.
Figure 2-7 on page 33 shows a single 42U rack cabinet implementation of a base building block IBM FlashSystem V9000, plus one optional IBM FlashSystem V9000 AE3 expansion add-on, racked with SAN and LAN switches. With 16 Gb SAN switches, this layout can handle additional future scale-out and scale-up additions.
 
Tip: When cabling the power, connect one power cable from each AC3 control enclosure and AE3 storage enclosure to the left-side internal PDU, and the other power supply cable to the right-side internal PDU. This enables the cabinet to be split between two independent power sources for greater availability. When adding more IBM FlashSystem V9000 building blocks to the solution, continue the same power cabling scheme for each additional enclosure.
You must consider the maximum power rating of the rack: do not exceed it. For more power requirement information, see IBM FlashSystem V9000 at IBM Knowledge Center.
2.2.3 Network cable connections
Figure 2-9 identifies the FC ports for this example (an 8 Gbps fixed building block) for all of the internal (back-end) fiber connections.
Figure 2-9 IBM FlashSystemV9000 fixed building block 8 Gbps FC cable connections
Create a cable connection table or similar documentation to track all of the connections that are required for the setup of these items:
AC3 control enclosures
AE3 storage enclosures
Ethernet
FC ports: Host and internal
iSCSI and Fibre Channel over Ethernet (FCoE) connections: Host Only
Figure 2-10 shows the back of the AC3 control enclosure with PCIe slots information.
Figure 2-10 AC3 control enclosure rear view with PCIe slots information
Slot numbers and adapter types are listed in Table 2-3.
Table 2-3 AC3 control enclosure PCIe slot numbers and adapter type
PCIe slot    Adapter type
1            Not supported for use
2            SAS
3            Fibre Channel or Ethernet
4            Fibre Channel or Ethernet
5            SAS or Compression accelerator
6            Fibre Channel or Ethernet
7            Fibre Channel or Ethernet
8            Compression accelerator
You can download a sample cable connection table from the IBM FlashSystem V9000 page of IBM Knowledge Center by using the following steps:
1. Go to the IBM FlashSystem V9000 page in IBM Knowledge Center.
2. Click Planning on the right side panel.
3. In the list of results, select Planning for the hardware installation (customer task).
4. Here you can select either option for download:
 – Planning worksheets for fixed building blocks
 – Planning worksheets for scalable building blocks
Use Table 2-4 to document the management and service IP address settings for the V9000 building block (both AC3 and AE3) in your environment.
Table 2-4 Management IP addresses for the IBM FlashSystem V9000 building block.
Cluster name: ____________
IBM FlashSystem V9000 AC3 control enclosures:
  Cluster IP address:            IP: ________  Subnet mask: ________  Gateway: ________
  Node 1 service IP address:     IP: ________  Subnet mask: ________  Gateway: ________
  Node 2 service IP address:     IP: ________  Subnet mask: ________  Gateway: ________
IBM FlashSystem V9000 AE3 storage enclosure:
  Management IP address:         IP: ________  Subnet mask: ________  Gateway: ________
  Canister 1 service IP address: IP: ________  Subnet mask: ________  Gateway: ________
  Canister 2 service IP address: IP: ________  Subnet mask: ________  Gateway: ________
 
 
Note: If you have more than one AE3 storage enclosure to configure, you need an extra management IP address and two extra service IP addresses per additional storage enclosure.
Use Table 2-5 to document FC port connections for a single building block in your environment.
Table 2-5 Fibre Channel (FC) port connections
Location          Item
AC3, node 1       Fibre Channel card 1
AC3, node 1       Fibre Channel card 2
AC3, node 1       Fibre Channel card 3
AC3, node 1       Fibre Channel card 4 (16 Gbps only)
AE3, canister 1   Fibre Channel card 1 (left)
AE3, canister 1   Fibre Channel card 2 (right)
AE3, canister 2   Fibre Channel card 1 (left)
AE3, canister 2   Fibre Channel card 2 (right)
AC3, node 2       Fibre Channel card 1
AC3, node 2       Fibre Channel card 2
AC3, node 2       Fibre Channel card 3
AC3, node 2       Fibre Channel card 4 (16 Gbps only)
For each Fibre Channel port of each card (ports 1 and 2; ports 3 and 4 exist on 8 Gb FC cards only), record the attached AE3 or AC3 enclosure, switch, or host, the port used, and the speed.
A complete suggested cabling guide is in the installation section of the IBM FlashSystem V9000 documentation in IBM Knowledge Center.
2.2.4 SAS expansion enclosures
Three models of SAS expansion enclosures are offered:
9846/9848-12F
9846/9848-24F
9846/9848-92F
Expansion enclosure models 12F and 24F
To support a flash-optimized tiered storage configuration for mixed workloads, up to 20 9846/9848-12F or 9846/9848-24F SAS expansion enclosures can be connected to each building block in the system.
Maximum expansion enclosure capacity:
A 9846/9848-12F SAS expansion enclosure holds up to twelve 3.5-inch nearline SAS drives, for up to 9.6 PB of raw capacity using nearline SAS drives
A 9846/9848-24F SAS expansion enclosure holds up to twenty-four 2.5-inch high-capacity SSDs, for up to 29.4 PB of raw capacity
Each building block supports up to 480 drives with expansion enclosure Model 24F (SFF) and up to 240 drives with expansion enclosure Model 12F (LFF)
Expansion enclosure model 92F
IBM FlashSystem V9000 High-Density (HD) Expansion Enclosure Model 92F (9846/9848-92F) delivers increased storage density and capacity for IBM FlashSystem V9000 in a cost-efficient way, while maintaining its highly flexible and intuitive characteristics.
Expansion enclosure Model 92F offers the following features:
 – 5U, 19-inch rack mount enclosure with slide rail and cable management assembly
 – Support for up to ninety-two 3.5-inch large-form factor (LFF) 12 Gbps SAS top-loading drives
 – High-performance disk drives, high-capacity nearline disk drives, and flash drive support
 – High-capacity, archival-class nearline disk drives in 8 TB and 10 TB 7,200 rpm
 – High capacity SSDs in 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB
 – Redundant 200 - 240 V AC power supplies (new PDU power cord required)
 – Up to 8 HD expansion enclosures are supported per IBM FlashSystem V9000 building block, providing up to 368 drives with expansion Model 92F for up to 7.36 PB of raw SAS HDD or 11.3 PB SSD capacity in each building block (up to a maximum of 32 PB total)
 – With four building blocks, a maximum of 32 HD expansion enclosures can be attached, giving a maximum of 29.4 PB of raw SAS capacity; up to 32 PB of raw SSD capacity is supported
All drives within an enclosure must be the same model, but a variety of drive models are supported for use in the IBM FlashSystem expansion enclosures, including SAS flash drives and SAS hard disk drives. These drives are hot-swappable and have a modular design for easy replacement.
 
Note: To support SAS expansion enclosures, an AH13 SAS Enclosure Attach adapter card must be installed in expansion slot 2 of each AC3 control enclosure in the building block. This applies to version 8.1 or higher only.
Expansion enclosure worksheet
If the system includes optional SAS expansion enclosures, you must record the configuration values that will be used by the IBM SSR during the installation process.
Complete Table 2-6 based on your particular system and provide this worksheet to the IBM SSR prior to system installation.
Table 2-6 Configuration values: SAS enclosure x, building block x, and SAS enclosure n, building block n
Configuration setting and description                                                              Value    Usage in CLI
MDisk group name                                                                                   xxxx     mkmdiskgrp -name mdisk_group_name -ext extent_size
MDisk extent size in MB                                                                            xxxx
RAID level (RAID5 or RAID6)                                                                        xxxx     mkdistributedarray -level raid_level -driveclass driveclass_id -drivecount x -stripewidth x -rebuildareas x mdiskgrp_id | mdiskgrp_name
driveclass_id: the class that is being used to create the array, which must be a numeric value     xxxx
drivecount: the number of drives to use for the array (minimum 4 for RAID5, 6 for RAID6)           xxxx
stripewidth: the width of a single unit of redundancy within a distributed set of drives (3 - 16 for RAID5, 5 - 16 for RAID6)    xxxx
rebuildareas: the reserved capacity that is distributed across all drives available to an array (valid values for RAID5 and RAID6 are 1, 2, 3, and 4)    xxxx
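As an illustration only, the following hedged CLI sketch shows how the worksheet values might be applied; the pool name, extent size, drive class, drive count, stripe width, and number of rebuild areas are hypothetical placeholders that must be replaced with the values recorded in Table 2-6:
# Create the storage pool (MDisk group) with a 1024 MB extent size
mkmdiskgrp -name expansion_pool0 -ext 1024
# Create a distributed RAID 6 array from 24 drives of drive class 0,
# with a stripe width of 12 and two rebuild areas, in the new pool
mkdistributedarray -level raid6 -driveclass 0 -drivecount 24 -stripewidth 12 -rebuildareas 2 expansion_pool0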
2.3 Logical planning
Each IBM FlashSystem V9000 building block creates an I/O Group for the IBM FlashSystem V9000 system. IBM FlashSystem V9000 can contain up to four I/O Groups, with a total of eight AC3 control enclosures in four building blocks.
2.3.1 Management IP addressing plan
To manage the IBM FlashSystem V9000 system, you access the management GUI of the system by directing a web browser to the cluster’s management IP address.
In addition, the IBM FlashSystem V9000 Model AE3 storage enclosure attached to the V9000 AC3 control enclosures uses its own management GUI, accessed through its own management IP address.
The IBM FlashSystem V9000 AC3 uses a technician port feature. Ethernet port 4 of each AC3 control enclosure is allocated as the technician service port (and marked with the letter “T”). All initial configuration of the IBM FlashSystem V9000 AC3 control enclosures is performed through the technician port. The port broadcasts a Dynamic Host Configuration Protocol (DHCP) service so that any notebook or computer with DHCP enabled is automatically assigned an IP address on connection to the port.
The IBM FlashSystem V9000 AE3 storage enclosure uses a USB key initialization tool. All initial configuration of the IBM FlashSystem V9000 AE3 is performed through this USB key process.
 
Note: The hardware installation process for the V9000 is completed by the IBM SSR. If the V9000 is a scalable solution, the SSR works with the IBM Lab Services team to complete the installation.
See the installation chapter of Implementing IBM FlashSystem 900 Model AE3, SG24-8414, for further details about how to use this configuration process.
After the initial cluster configuration is completed, the technician port automatically routes a connected user directly to the service GUI for that specific AC3 control enclosure.
 
Note: The default IP address for the technician port on a V9000 AC3 node is 192.168.0.1. If the technician port is connected to a switch, it is disabled and an error is logged.
Each IBM FlashSystem V9000 AC3 control enclosure requires one Ethernet cable connection to an Ethernet switch or hub. The cable must be connected to port 1. For each cable, a 10/100/1000 Mb Ethernet connection is required. Both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are supported.
 
Note: For increased redundancy, an optional second Ethernet connection is supported for each AC3 control enclosure. This cable can be connected to Ethernet port 2.
To ensure system failover operations, Ethernet port 1 on all AC3 control enclosures, and on the AE3 storage enclosures, must be connected to the common set of subnets. If used for increased redundancy, Ethernet port 2 on all AC3 enclosures must also be connected to a common set of subnets. However, the subnet for Ethernet port 1 does not have to be the same as the subnet for Ethernet port 2.
Each IBM FlashSystem V9000 cluster must have a cluster management IP address, and also a service IP address for each of the AC3 control enclosures in the cluster. Similarly, each AE3 storage enclosure must have a management IP address and two service IP addresses assigned, and cables connected, per enclosure.
Example 2-1 shows details of the AC3 addresses.
Example 2-1 AC3 Management IP address example (two building block configuration)
Management IP address       10.11.12.120
Node 1 service IP address   10.11.12.121
Node 2 service IP address   10.11.12.122
Node 3 service IP address   10.11.12.123
Node 4 service IP address   10.11.12.124
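The following minimal CLI sketch, based on the hypothetical addresses in Example 2-1 and an assumed 24-bit subnet mask, shows how such addresses can be set; verify the exact command syntax for your code level in IBM Knowledge Center:
# Set the cluster management IP address on Ethernet port 1
chsystemip -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0 -port 1
# Set the service IP address of node 1 (repeat for each node with its own address)
chserviceip -serviceip 10.11.12.121 -gw 10.11.12.1 -mask 255.255.255.0 1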
 
 
Requirement: Each AC3 control enclosure in an IBM FlashSystem V9000 clustered system must have at least one Ethernet connection.
Support for iSCSI on the IBM FlashSystem V9000 is available only from the optional 10 GbE adapters, and requires extra IPv4 or IPv6 addresses for each of the 10 GbE ports used on each of the AC3 nodes. These IP addresses are independent of the IBM FlashSystem V9000 clustered system configuration IP addresses on Ethernet port 1 and port 2 of the AC3 control enclosures.
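For example, a hedged sketch of assigning an iSCSI address to a 10 GbE port with the cfgportip command (the node, port ID, and addresses shown are hypothetical):
# Assign an iSCSI IPv4 address to Ethernet port 3 of node 1
cfgportip -node 1 -ip 10.11.13.121 -mask 255.255.255.0 -gw 10.11.13.1 3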
When accessing the IBM FlashSystem V9000 through the GUI or Secure Shell (SSH), choose one of the available management or service IP addresses to connect to. In this case, no automatic failover capability is available. If one network is down, use an IP address on the alternative network.
 
Note: The Service Assistant tool, described in the SAN Volume Controller and FlashSystem 900 Redbooks publications referenced in this chapter, is a web-based GUI that is used to service individual nodes and canisters, primarily when a node has a fault and is in a service state. This GUI is usually used only with guidance from IBM remote support. On a V9000 with AE3 storage enclosures, the service ports in the canisters of the AE3 enclosures should be assigned IP addresses and connected to the network. In prior versions of the V9000, with an AE2 storage enclosure, it was possible to manage the canisters of the AE2 by logging in to one of the nodes of the control enclosures. With the AE3, a dedicated Ethernet connection to each canister of each AE3 storage enclosure is required to use the Service Assistant interface.
2.3.2 SAN zoning and SAN connections
IBM FlashSystem V9000 can connect to 8 Gbps or 16 Gbps Fibre Channel (FC) switches for SAN attachments. From a performance perspective, connecting the IBM FlashSystem V9000 to 16 Gbps switches is better. For the internal SAN attachments, 16 Gbps switches are both better-performing and more cost-effective, because the 8 Gbps solution requires four switch fabrics, compared to only two for 16 Gbps.
 
Note: In the internal (back-end) fabric, ISLs are not allowed in the data path.
Both 8 Gbps and 16 Gbps SAN connections require correct zoning or VSAN configurations on the SAN switch or directors to bring security and performance together. Implement a dual-host bus adapter (HBA) approach at the host to access the IBM FlashSystem V9000. This example shows the 16 Gbps connections; details about the 8 Gbps connections are at IBM Knowledge Center.
 
Note: The IBM FlashSystem V9000 V8.1 supports 16 Gbps direct host connections without a switch.
Port configuration
With the IBM FlashSystem V9000, up to sixteen 16 Gbps Fibre Channel (FC) ports per building block are used for the AE3 (eight ports) and internal AC3 communication (four per AC3) back-end traffic. There are also two adapters which, if of FC type, can be divided between the Advanced Mirroring features, host, and external virtualized storage (front-end) traffic.
If you want to achieve the lowest-latency storage environment, the “scaled building block” solution provides the most ports per node for intercluster and inter-I/O Group traffic, with all the back-end ports zoned together. When creating a scale-out solution, the same port usage model is repeated for all building blocks. When creating a scale-up solution, add the new AE3 ports to the zone configurations equally, so that the traffic load and redundancy remain balanced.
 
Note: Connecting the AC3 control enclosures FC ports and the AE3 FC ports in an IBM FlashSystem V9000 scalable environment is an IBM lab-based services task. For details, see the IBM FlashSystem V9000 web page at IBM Knowledge Center.
Customer provided switches and zoning
This topic applies to anyone using customer-provided switches or directors.
External virtualized storage systems are attached, along with the hosts, on the front-end FC ports for access by the AC3 control enclosures of the IBM FlashSystem V9000. Carefully create zoning plans for each additional storage system so that these systems are properly configured for use and for best performance between the storage systems and the IBM FlashSystem V9000. Configure all external storage systems with all IBM FlashSystem V9000 AC3 control enclosures; arrange them for a balanced spread across the system.
All IBM FlashSystem V9000 AC3 control enclosures in the IBM FlashSystem V9000 system must be connected to the same SANs, so that they all can present volumes to the hosts. These volumes are created from storage pools that are composed of the virtualized AE3 storage enclosure MDisks, and if licensed, the external storage systems MDisks that are managed by the IBM FlashSystem V9000.
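As an illustration only, the following sketch shows a single-initiator host zone in Brocade Fabric OS syntax; the alias names and worldwide port names (WWPNs) are hypothetical, and other switch vendors use different commands:
# Create aliases for one V9000 front-end port and one host HBA port
alicreate "V9000_node1_p1", "50:05:07:68:0b:21:00:01"
alicreate "host1_hba1", "10:00:00:90:fa:00:00:01"
# Zone the host HBA with the V9000 port, then activate the configuration
zonecreate "host1_V9000_z1", "host1_hba1; V9000_node1_p1"
cfgadd "prod_cfg", "host1_V9000_z1"
cfgenable "prod_cfg"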
2.3.3 Call home option
IBM FlashSystem V9000 supports setting up a Simple Mail Transfer Protocol (SMTP) mail server for alerting the IBM Support Center of system incidents that might require a service event. This is the call home option. You can enable this option during the setup.
 
Tip: Setting up call home involves providing a contact that is available 24 x 7 if a serious call home issue occurs. IBM support strives to report any issues to clients in a timely manner; having a valid contact is important to achieving service level agreements (SLAs). For more detail about properly configuring call home, see section Notifications menu in the IBM Redbooks publication Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1, SG24-7933.
Table 2-7 lists the necessary items for the AC3.
Table 2-7 Call home options for AC3 control enclosures
Configuration item                           Value
Primary Domain Name System (DNS) server
SMTP gateway address
SMTP gateway name
SMTP “From” address                          Example: V9000_name@customer_domain.com
Optional: Customer email alert group name    Example: group_name@customer_domain.com
Network Time Protocol (NTP) manager
Time zone
 
In addition to the IBM FlashSystem V9000 AC3 call home setup, you must also perform a similar setup on the IBM FlashSystem V9000 AE3 storage enclosure, because it performs its own call home functions.
Table 2-8 lists the necessary items for the AE3.
Table 2-8 Call home options for AE3 storage enclosure
Configuration item                           Value
Primary Domain Name System (DNS) server
SMTP gateway address
SMTP gateway name
SMTP “From” address                          Example: FS900_name@customer_domain.com
Optional: Customer email alert group name    Example: group_name@customer_domain.com
Network Time Protocol (NTP) manager
Time zone
 
See the installation chapter in Implementing IBM FlashSystem 900 Model AE3, SG24-8414, for information about setting up the IBM FlashSystem V9000 AE3 storage enclosure call home function.
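The AC3 call home email settings can also be configured from the CLI. The following minimal sketch uses hypothetical addresses matching the examples in the tables above; the commands exist in the IBM Spectrum Virtualize CLI, but verify the exact parameters for your code level in IBM Knowledge Center:
# Define the SMTP gateway that forwards event notifications to IBM
mkemailserver -ip 10.11.12.5 -port 25
# Set the contact and location information included in call home emails
chemail -reply storage_admin@customer_domain.com -contact "Storage admin" -primary 5550100 -location "Data center 1"
# Add an optional local recipient, such as the customer email alert group
mkemailuser -address group_name@customer_domain.com -usertype local
# Activate email notifications
startemail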
2.3.4 Remote Support Assistance
The IBM FlashSystem V9000 AC3 control enclosure and the IBM FlashSystem V9000 AE3 storage enclosure both support the new remote support assistance (RSA) feature. This function is enabled and available at code level 8.1.x and higher for the AC3, and at code level 1.5.x and higher for the AE3.
By using Remote Support Assistance (RSA), the customer can initiate a secure connection from the FlashSystem V9000 to IBM when problems arise. An IBM remote support specialist can then connect to the system to collect system logs, analyze a problem, run repair actions remotely if possible, or assist the client or an IBM SSR who is on site.
 
Important: IBM encourages all customers to use the high-speed remote support solution that is enabled by RSA. Problem analysis and repair actions without a remote connection can get more complicated and time-consuming.
RSA uses a high-speed internet connection and gives the customer the ability to initiate an outbound Secure Shell (SSH) call to a secure IBM server. Firewall rules might need to be configured at the customer’s firewall to allow the FlashSystem V9000 cluster and service IPs to establish a connection to the IBM Remote Support Center via SSH. This applies to both the V9000 AC3 control enclosure and the IBM FlashSystem V9000 AE3 storage enclosure.
 
Note: The type of access that is required for a remote support connection is outbound port TCP/22 (SSH) from the FlashSystem V9000 Cluster and Service IPs (both AC3 and AE3).
The RSA consists of FlashSystem V9000 internal functions with a set of globally deployed supporting servers. Together, they provide secure remote access to the FlashSystem V9000 when necessary and when authorized by the customer’s personnel.
Figure 2-11 shows the overview of the V9000 RSA set-up, which has three major components.
Figure 2-11 Overview of the V9000 RSA set-up without proxy
Remote Support Client (machine internal)
The Remote Support Client is a software component inside FlashSystem V9000 that handles remote support connectivity. It resides on all nodes of the V9000 AC3 control enclosures and also on the canisters of the IBM FlashSystem V9000 AE3 storage enclosure. The software component relies on only a single outgoing Transmission Control Protocol (TCP) connection, and it cannot receive inbound connections of any kind. The Remote Support Client is controlled either by using the CLI or the GUI.
Remote Support Center Front Server (Internet)
Front Servers are on an IBM Demilitarized Zone (DMZ) of the internet and receive connections from the Remote Support Client and the IBM Remote Support Center Back Server. Front Servers are security-hardened machines that provide a minimal set of services, such as maintaining connectivity to connected Clients and to the Back Server.
They are strictly inbound and never initiate anything of their own accord. No sensitive information is ever stored on the Front Server, and all data that passes through the Front Server from the client to the Back Server is encrypted, so the Front Server cannot access this data.
 
Note: When activating Remote Support Assistance, the following four Front Servers are used via port TCP/22 (SSH):
204.146.30.139
129.33.206.139
204.146.30.157 (by default for V9000 AE3 storage enclosures only)
129.33.207.37 (by default for V9000 AE3 storage enclosures only)
Remote Support Center Back Server (IBM Intranet)
The Back Server manages most of the logic of the Remote Support Assistance system. It is located within the IBM intranet. The Back Server maintains connections to all Front Servers and is access-controlled. Only IBM employees who are authorized to perform remote support of FlashSystem V9000 are allowed to use it. The Back Server is in charge of authenticating a support person.
It provides the support person with a user interface (UI) through which to choose a system to support, based on the support person’s permissions. It also provides the list of systems that are currently connected to the Front Servers, and it manages the remote support session as it progresses (logging it, allowing additional support persons to join the session, and so on).
In addition, the V9000 remote support solution can take advantage of the following two IBM internet support environments.
IBM Enhanced Customer Data Repository (ECuRep)
If a remote connection exists, the IBM remote support specialists can offload the required support logs themselves. For additional information about ECuRep, see the ECuRep overview support web page.
IBM Fix Central
Fix Central provides fixes and updates for IBM systems software, hardware, and operating systems. The V9000 AC3 control enclosure allows an IBM remote support specialist to perform software updates remotely. During this process, the V9000 control enclosure automatically downloads the required software packages from IBM.
 
Note: To download software update packages, the following six IP addresses are used via outbound port TCP/22 (SSH) from the V9000 AC3 control enclosure to Fix Central:
170.225.15.105
170.225.15.104
170.225.15.107
129.35.224.105
129.35.224.104
129.35.224.107
Firewall rules might need to be configured. Furthermore, a DNS server must be configured for the download function to work.
Remote Support Proxy
Optionally, an application called Remote Support Proxy can be used when one or more FlashSystem V9000 systems do not have direct access to the internet (for example, because of firewall restrictions). The Remote Support Client within the FlashSystem then connects through this optional proxy server to the Remote Support Center Front Servers. The Remote Support Proxy runs as a service on a Linux system that has internet connectivity to the Remote Support Center and local network connectivity to the FlashSystem V9000.
Figure 2-12 illustrates the connection through the Remote Support Proxy.
Figure 2-12 Remote Support Proxy set-up
The communication between the Remote Support Proxy and the Remote Support Center is encrypted with an additional layer of Secure Sockets Layer (SSL).
 
Note: The host that is running the Remote Support Proxy must have TCP/443 (SSL) outbound access to Remote Support Front Servers.
Remote Support Proxy software
The Remote Support Proxy is a small program that is supported on certain Linux versions. The software is also used for other IBM storage systems, such as IBM XIV and IBM FlashSystem A9000. The installation files and documentation are available at the storage portal website.
“Setting Up Remote Support” in the Redbooks publication Implementing IBM FlashSystem 900 Model AE3, SG24-8414 shows how to set up the Remote Support Proxy.
 
Note: At the time of writing, the Remote Support Proxy does not support a connection to IBM Enhanced Customer Data Repository (ECuRep) for automatically uploading logs. Also, software download from Fix Central is not supported through the optional Remote Support Proxy.
For the AC3 setup and RSA configuration, see “Remote Support Assistance” in Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1, SG24-7933.
For the AE3 setup and RSA configuration, see Implementing IBM FlashSystem 900 Model AE3, SG24-8414.
2.3.5 IBM FlashSystem V9000 system configuration
To ensure proper performance and high availability in the IBM FlashSystem V9000 installations, consider the following guidelines when you design a SAN to support the IBM FlashSystem V9000:
All nodes in a clustered system must be on the same LAN segment, because any node in the clustered system must be able to assume the clustered system management IP address. Make sure that the network configuration allows any of the nodes to use these IP addresses. If you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
To maintain application uptime in the unlikely event of an individual AC3 control enclosure failing, IBM FlashSystem V9000 control enclosures are always deployed in pairs (I/O Groups). If a control enclosure fails or is removed from the configuration, the remaining control enclosure operates in a degraded mode, but the configuration is still valid for the I/O Group.
 
Important: The IBM FlashSystem V9000 V8.1 release includes the HyperSwap function, which allows each volume to be presented by two I/O groups. If you plan to use this function, you must consider the I/O Group assignments in the planning for the IBM FlashSystem V9000.
The FC SAN connections between the AC3 control enclosures and the switches are optical fiber. These connections can run at either 8 or 16 Gbps depending on your switch hardware.
The AC3 control enclosures ports can be configured to connect either by 8 Gbps direct connect, known as the fixed building block configuration, or by 16 Gbps to an FC switch fabric, known as a scalable building block.
Direct connections between the AC3 control enclosures and hosts are supported with some exceptions.
Direct connection of the AC3 control enclosures to external storage subsystems is not supported.
 
Exception: The IBM FlashSystem V9000 AE3 storage enclosure can be directly attached to the AC3 control enclosures in a fixed building block configuration.
Two IBM FlashSystem V9000 clustered systems cannot have access to the same external virtualized storage LUNs within a disk subsystem.
 
Attention: Configuring zoning so that two IBM FlashSystem V9000 clustered systems have access to the same external LUNs (MDisks) can result in data corruption.
The IBM FlashSystem V9000 enclosures within a building block must be co-located (within the same set of racks) and in a contiguous 6U section.
The IBM FlashSystem V9000 uses three MDisks as quorum disks for the clustered system. A preferred practice for redundancy is to have each quorum disk in a separate storage subsystem, where possible. The current locations of the quorum disks can be displayed using the lsquorum command and relocated using the chquorum command.
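For example, the quorum assignments can be reviewed and relocated as follows (the MDisk ID and quorum index shown are hypothetical):
# List the current quorum disk assignments
lsquorum
# Move quorum index 2 to MDisk 5, located on a different storage subsystem
chquorum -mdisk 5 2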
The storage pool and MDisk
The storage pool is at the center of the relationship between the MDisks and the volumes (VDisks). It acts as a container to which MDisks contribute chunks of physical capacity known as extents, and from which VDisks are created.
The internal MDisks in the IBM FlashSystem V9000 are created on the basis of eight MDisks per managed expansion AE3 enclosure attached to the IBM FlashSystem V9000 clustered system. These AE3 storage enclosures can be part of a building block, or an add-on expansion in a scale-up configuration.
Additionally, an MDisk is created for each externally attached storage LUN that is assigned to the IBM FlashSystem V9000, either as a managed MDisk or as an unmanaged MDisk for migrating data. A managed MDisk is an MDisk that is assigned as a member of a storage pool:
A storage pool is a collection of MDisks. An MDisk can only be contained within a single storage pool
IBM FlashSystem V9000 can support up to 1,024 storage pools
The limit on the number of volumes that can be allocated per system is 10,000
Volumes are associated with a single storage pool, except in cases where a volume is being migrated or mirrored between storage pools
 
Information: For more information about the MDisk assignments and an explanation of why eight MDisks per AE3 storage enclosure are used, see “MDisks” on page 13.
For the most up-to-date SAN Volume Controller configuration limits, search for the “Configuration Limits and Restrictions” topic for the latest SAN Volume Controller version.
Extent size
Each MDisk is divided into chunks of equal size called extents. Extents are a unit of mapping that provides the logical connection between MDisks and volume copies.
The extent size is a property of the storage pool and is set when the storage pool is created. All MDisks in the storage pool have the same extent size, and all volumes that are allocated from the storage pool have the same extent size. The extent size of a storage pool cannot be changed. If you want another extent size, the storage pool must be deleted and a new storage pool configured.
The IBM FlashSystem V9000 supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 MB. By default, the MDisks created for the internal expansions of flash memory in the IBM FlashSystem V9000 building block are created with an extent size of 1024 MB. Using a value that differs from the default requires CLI commands to delete and re-create the storage pool with different settings. For information about the use of the CLI commands, search for CLI commands in IBM Knowledge Center.
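A minimal CLI sketch of that delete and re-create sequence follows, assuming a hypothetical pool named Pool2 that no longer contains volumes or MDisks in use:
# Delete the existing storage pool
rmmdiskgrp Pool2
# Re-create the pool with a 512 MB extent size instead of the 1024 MB default
mkmdiskgrp -name Pool2 -ext 512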
Table 2-9 lists all of the extent sizes that are available in an IBM FlashSystem V9000.
Table 2-9 Extent size and maximum clustered system capacities
Extent size    Maximum clustered system capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 petabyte (PB)
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
4,096 MB       16 PB
8,192 MB       32 PB
Consider the following information about storage pools:
Maximum clustered system capacity is related to the extent size:
 – A 16 MB extent = 64 TB, and the capacity doubles for each increment in extent size; for example, 32 MB = 128 TB. For the internal expansion enclosure MDisks, the default extent size is 1024 MB.
 – You cannot migrate volumes between storage pools with separate extent sizes. However, you can use volume mirroring to create copies between storage pools with separate extent sizes.
Storage pools for performance and capacity:
 – Before deciding whether to create a single or multiple storage pools, carefully evaluate which option best fits the solution needs, considering data availability and recovery management. Storage pool design affects the extents that make up a volume. The extents are the mapping to the disk storage that affects performance of the volume.
Reliability, availability, and serviceability (RAS):
 – With an external storage license, it might make sense to create multiple storage pools in circumstances where a host gets its volumes built from only one of the storage pools. If the storage pool goes offline, it affects only a subset of all the hosts using the IBM FlashSystem V9000.
 – If you do not isolate hosts to storage pools, create one large storage pool. Creating one large storage pool assumes that the MDisk members are all of the same type, size, speed, and RAID level.
 – The storage pool goes offline if any of its MDisks are not available, even if the MDisk has no data on it. Therefore, do not put MDisks into a storage pool until they are needed.
 – If needed, create at least one separate storage pool for all the image mode volumes.
 – Make sure that the LUNs that are given to the IBM FlashSystem V9000 have all host-persistent reserves removed.
2.3.6 EasyTier
IBM EasyTier is a function that automatically and nondisruptively moves frequently accessed data from HDD MDisks to flash drive MDisks, thus placing such data in a faster tier of storage. With version 7.8 and higher, EasyTier supports four tiers of storage.
The IBM FlashSystem V9000 supports these tiers:
Tier 0 flash: Specifies a tier0_flash IBM FlashSystem MicroLatency module or an external MDisk for the newly discovered or external volume.
Tier 1 flash: Specifies a tier1_flash (or flash SSD drive) for the newly discovered or external volume.
Enterprise tier: Enterprise tier exists when the pool contains enterprise-class MDisks, which are disk drives that are optimized for performance.
Nearline tier: Nearline tier exists when the pool contains nearline-class MDisks, which are disk drives that are optimized for capacity.
All MDisks belong to one of the tiers, which includes MDisks that are not yet part of a pool.
If the AE3 storage enclosure is used in an EasyTier pool and encryption is enabled on the pool in the V9000 AC3, the AC3 nodes send encrypted, incompressible data to the AE3. The Spectrum Virtualize software detects whether an MDisk is encrypted by the FlashSystem. Therefore, if an AE3 flash enclosure will be part of an encrypted EasyTier pool, encryption must be enabled on the AE3 before it is enabled on the EasyTier pool.
IBM Spectrum Virtualize does not attempt to encrypt data in an array that is already encrypted. This allows the hardware compression of the AE3 to be effective. However, there are cases in which using software compression in the AC3 control nodes is preferred, such as highly compressible data (for example, 3:1 or higher). In these cases, both encryption and compression can be done by the AC3 control nodes.
For more information about EasyTier, see the IBM Redbooks publication Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1, SG24-7933.
Storage pools have an EasyTier setting that controls how EasyTier operates. The setting can be viewed through the management GUI but can only be changed by the CLI.
By default, the storage pool setting for EasyTier is Auto (Active). In this state, a storage pool in which all managed disks are of a single tier has an EasyTier status of Balanced.
If a storage pool has managed disks of multiple tiers, the EasyTier status changes to Active. The chmdiskgrp -easytier off 1 command sets the EasyTier status for storage pool 1 to Inactive. The chmdiskgrp -easytier measure 2 command sets the EasyTier status for storage pool 2 to Measured.
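For example (the pool IDs are hypothetical), these settings can be applied and then verified from the CLI:
# Disable EasyTier on storage pool 1
chmdiskgrp -easytier off 1
# Put storage pool 2 into measured (evaluation) mode
chmdiskgrp -easytier measure 2
# Verify the easy_tier and easy_tier_status fields of a pool
lsmdiskgrp 2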
Figure 2-13 shows four possible EasyTier states.
Figure 2-13 EasyTier status for CLI and GUI
EasyTier evaluation mode
EasyTier evaluation mode is enabled for a storage pool with a single tier of storage when the status is changed to Measured by using the command line. In this state, EasyTier collects usage statistics for all the volumes in the pool. These statistics are collected over a 24-hour operational cycle, so you must wait several days to have multiple files to analyze. The statistics are copied from the control enclosures and viewed with the IBM Storage Tier Advisor Tool.
Instructions for downloading and using the tool are available in the “Extracting and viewing performance data with the IBM Storage Tier Advisor Tool” topic at IBM Knowledge Center.
This tool is intended to supplement and support, but not replace, detailed preinstallation sizing and planning analysis.
EasyTier considerations
When a volume is created in a pool that has EasyTier active, the volume extents are initially allocated only from the Enterprise tier. If that tier is not present or all of its extents have been used, the volume is assigned extents from other tiers.
To ensure optimal performance, all MDisks in a storage pool tier must have the same technology and performance characteristics.
EasyTier functions best for workloads that have hot spots of data. Synthetic random workloads across an entire tier are not a good fit for this function. Also, do not allocate all the space in the storage pool to volumes. Leave some capacity free on the fastest tier for EasyTier to use for migration.
EasyTier volume planning
FlashSystem V9000 Flash Enclosure Model AE3 has the capability to alter the default quantity and size of managed disks. By default, the AE3 storage enclosure is configured with eight equally sized volumes, referred to as managed disks (MDisks). For example, a 200 TiB maximum effective capacity AE3 has eight 25 TiB volumes.
This differs from the prior generation AE2 storage enclosure, which presented one large volume comprising the entire capacity of the AE2 flash enclosure (for example, 57 TB). The capability to use individual MDisks (AE3 storage enclosure volumes) is important because it enables V9000 configurations that are intended for EasyTier use.
The following example assumes that only two storage tiers are used and the cold to hot capacity ratio is 5:1.
Setup 1: Two pools
AE3 storage enclosure with 200 TiB effective capacity, consisting of eight 25 TiB volumes, comprising the pool Pool0.
Pool0 (200 TiB) has the following MDisks:
mdisk0 (25 TiB), mdisk1 (25 TiB), mdisk2 (25 TiB), mdisk3 (25 TiB), mdisk4 (25 TiB), mdisk5 (25 TiB), mdisk6 (25 TiB), mdisk7 (25 TiB)
A frequent practice with EasyTier is to maintain a particular capacity ratio between the slower lower tier and the high-performance flash tier. If the goal is a 5 to 1 (5:1) ratio and you have 100 TiB of spinning nearline HDD, adding about 20 TiB of AE3 flash satisfies that ratio.
To illustrate this configuration, assume the externally virtualized HDD capacity is 100 TiB, known as mdisk10. The pool Pool1 is formed with this MDisk.
Pool1 (100 TiB) has the following MDisk:
mdisk10 (100 TiB)
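As a CLI sketch, the two pools of Setup 1 could be created as follows (the 1024 MB extent size is an assumption, not a requirement):
mkmdiskgrp -name Pool0 -ext 1024 -mdisk mdisk0:mdisk1:mdisk2:mdisk3:mdisk4:mdisk5:mdisk6:mdisk7
mkmdiskgrp -name Pool1 -ext 1024 -mdisk mdisk10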
Setup 2: Two pools, one pool with an EasyTier configuration
This setup uses the AE3 storage enclosure with 200 TiB effective capacity, consisting of eight 25 TiB volumes, comprising the pool Pool0, and mdisk10 as a virtualized HDD. The externally virtualized HDD capacity is one MDisk of 100 TiB, comprising the pool Pool1. The corresponding MDisks are listed in Setup 1.
With the default AE3 configuration of eight equally sized MDisks, simply remove one of the MDisks from Pool0 and add it to Pool1 (a CLI sketch follows the pool listing). The resulting configuration looks like this:
Pool0 (175 TiB) has the following seven (7) MDisks:
mdisk0 (25 TiB), mdisk1 (25 TiB), mdisk2 (25 TiB), mdisk3 (25 TiB), mdisk4 (25 TiB), mdisk5 (25 TiB), mdisk6 (25 TiB)
Pool1 (125 TiB) has the following MDisks:
mdisk10 (100 TiB), mdisk7 (25 TiB)
The changed pool Pool1 has a 4:1 ratio.
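A minimal CLI sketch of this move follows. Without the -force flag, rmmdisk first migrates any extents off mdisk7 to the remaining MDisks, so ensure that Pool0 has sufficient free capacity:
rmmdisk -mdisk mdisk7 Pool0
addmdisk -mdisk mdisk7 Pool1
If the tier of the MDisk is not detected automatically, a command such as chmdisk -tier tier0_flash mdisk7 can mark it as the flash tier for EasyTier purposes.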
Setup 3: Two pools, one pool with an optimized EasyTier configuration
To get a more precise ratio, the AE3 storage enclosure GUI must be used to alter MDisk sizes. The goal in this example is to reach a 5:1 ratio exactly, which means that mdisk7 at 25 TiB is too big. To solve this, go to the AE3 storage enclosure GUI and delete mdisk7, after unconfiguring it from any V9000 control enclosure pool.
Then, create a new mdisk7 as 20 TiB and, to avoid wasting capacity, create a 5 TiB mdisk8. From the V9000 cluster GUI, detect these MDisk changes, then add mdisk7 (20 TiB) to Pool1 and mdisk8 (5 TiB) to Pool0 (a CLI sketch follows the pool listing). The resulting configuration looks like this:
Pool0 (180 TiB) has the following eight (8) MDisks:
mdisk0 (25 TiB), mdisk1 (25 TiB), mdisk2 (25 TiB), mdisk3 (25 TiB), mdisk4 (25 TiB), mdisk5 (25 TiB), mdisk6 (25 TiB), mdisk8 (5 TiB)
Pool1 (120 TiB) has the following MDisks:
mdisk10 (100 TiB), mdisk7 (20 TiB)
Here, Pool1 has exactly the desired 5:1 ratio, and all remaining AE3 capacity of 180 TiB is in Pool0.
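The detection and pool assignment steps of Setup 3 can also be done from the CLI, as a sketch:
detectmdisk
addmdisk -mdisk mdisk7 Pool1
addmdisk -mdisk mdisk8 Pool0
The detectmdisk command rescans the SAN so that the resized AE3 volumes appear as unmanaged MDisk candidates before they are added to the pools.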
2.3.7 Volume configuration
An individual volume is a member of one storage pool and one I/O Group:
The storage pool defines which MDisks provided by the disk subsystem make up the volume.
The I/O Group (two nodes make an I/O Group) defines which IBM FlashSystem V9000 nodes provide I/O access to the volume.
 
Important: No fixed relationship exists between I/O Groups and storage pools.
Perform volume allocation based on the following considerations:
Optimize performance between the hosts and the IBM FlashSystem V9000 by attempting to distribute volumes evenly across available I/O Groups and nodes in the clustered system.
Reach the level of performance, reliability, and capacity that you require by choosing the storage pool that fulfills those demands for your volumes (you can access any storage pool from any node).
I/O Group considerations:
 – With the IBM FlashSystem V9000, each building block that is connected into the cluster is an additional I/O Group for that clustered V9000 system.
 – When you create a volume, it is associated with one node of an I/O Group. By default, every time that you create a new volume, it is associated with the next node using a round-robin algorithm. You can specify a preferred access node, which is the node through which you send I/O to the volume rather than using the round-robin algorithm. A volume is defined for an I/O Group.
 – Even if you have eight paths for each volume, all I/O traffic flows toward only one node (the preferred node). Therefore, only four paths are used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when a concurrent code upgrade is running.
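For example, the preferred node can be set explicitly when the volume is created; this sketch uses illustrative names and sizes:
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -node 1 -size 100 -unit gb -name app_vol01
Omitting the -node parameter lets the system assign the preferred node by using the round-robin algorithm described above.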
Thin-provisioned volume considerations:
 – When creating a thin-provisioned volume, be sure to understand the utilization patterns of the applications or groups of users accessing this volume. You must consider items such as the actual size of the data, the rate of creation of new data, and the rate of modifying or deleting existing data.
 – Two operating modes for thin-provisioned volumes are available:
 • Autoexpand volumes allocate storage from a storage pool on demand with minimal required user intervention. However, a misbehaving application can cause a volume to expand until it has consumed all of the storage in a storage pool.
 • Non-autoexpand volumes have a fixed amount of assigned storage. In this case, the user must monitor the volume and assign additional capacity when required. A misbehaving application can only cause the volume that it uses to fill up.
 – Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand or because a volume that is marked as non-autoexpand was not expanded in time, a danger exists of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the IBM FlashSystem V9000 cache as a backup storage mechanism.
 
Important:
Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity.
Warnings must not be ignored by an administrator.
Use the autoexpand feature of the thin-provisioned volumes.
 – When you create a thin-provisioned volume, you can choose the grain size for allocating space in 32 kilobytes (KB), 64 KB, 128 KB, or 256 KB chunks. The grain size that you select affects the maximum virtual capacity for the thin-provisioned volume. The default grain size is 256 KB, and is the preferred option. If you select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The grain size cannot be changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which could adversely affect performance. If you will not be using the thin-provisioned volume as a FlashCopy source or target volume, use 256 KB to maximize performance. If you will be using the thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function.
 – Thin-provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a thin-provisioned volume requires approximately one directory I/O for every user I/O.
 – The directory is two-way write-back-cached (just like the IBM FlashSystem V9000 fast write cache), so certain applications perform better.
 – Thin-provisioned volumes require more processor processing, so the performance per I/O Group can also be reduced.
 – A thin-provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a thin-provisioned volume using volume mirroring.
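A minimal CLI sketch of a thin-provisioned volume that follows these guidelines (names and sizes are illustrative):
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 1 -unit tb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol01
Here, -rsize sets the initial real capacity to 2% of the virtual size, -autoexpand allocates additional real capacity on demand, -grainsize 256 selects the preferred default grain size, and -warning 80% raises an event when the used capacity reaches 80% of the virtual size.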
Volume mirroring guidelines:
 – With the IBM FlashSystem V9000 in a high-performance environment, volume mirroring is only possible with a scale-up or scale-out solution, because the single storage enclosure of the first building block provides MDisks for only one storage pool. If you are considering volume mirroring for data redundancy, a second enclosure with its own storage pool is needed to hold the mirror copy.
 – Create or identify two separate storage pools to allocate space for your mirrored volume.
 – If performance is of concern, use a storage pool with MDisks that share the same characteristics. Otherwise, the mirrored pair can be on external virtualized storage with lesser-performing MDisks.
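As a sketch with illustrative names, a mirrored volume can be created across the two pools in one step, or a second copy can be added to an existing volume:
mkvdisk -mdiskgrp Pool0:Pool1 -iogrp io_grp0 -size 500 -unit gb -copies 2 -name mirrored_vol01
addvdiskcopy -mdiskgrp Pool1 existing_vol01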
2.3.8 SAN boot support
The IBM FlashSystem V9000 supports SAN boot or startup for IBM AIX®, Microsoft Windows Server, and other operating systems. SAN boot support can change, so check the IBM System Storage Interoperation Center (SSIC) web page regularly.
2.4 License features
All FlashSystem V9000 model AE3 systems have the FlashSystem V9000 software pre-installed. One 5639-RB8 license is required for each model AE3, 12F, and 24F storage enclosure, with four 5639-RB8 licenses being required for each model 92F storage enclosure. These models are the FlashSystem virtualized storage and expansion enclosures.
The system requires Storage Capacity Unit (SCU) licenses for any external systems that are being virtualized. For more information about IBM Spectrum Virtualize licensing, see the sales manual.
With the AE3 storage enclosure, there is also a licensed feature code for hardware assisted encryption:
Feature code AF14 - Encryption Enablement Pack.
For more information about licensing, see IBM FlashSystem V9000 Model AE3 Product Guide, REDP-5468.
2.4.1 Encryption feature
Encryption is offered with the IBM FlashSystem V9000 under the following feature:
Feature code AF14 - Encryption Enablement Pack:
 – Includes three USB keys on which to store the encryption key
 – Maximum feature quantity is eight (for a full scale up and scale out solution)
 – Enables data encryption at rest on the AE3 storage enclosure assigned MDisks
There are two ways to install the encryption feature on the IBM FlashSystem V9000:
USB Keys on each of the AE3 storage enclosures
IBM Security Key Lifecycle Manager (SKLM)
You can use one or both methods to install encryption. Using the USB and SKLM methods together gives the most flexible availability of encryption enablement.
 
Note: To invoke either method requires the purchase of the Feature code AF14 - Encryption Enablement Pack.
USB Keys
This feature supplies three USB keys on which to store the encryption key when the feature is enabled and installed. If necessary, a rekey operation can also be performed. When the USB key encryption feature is installed, the IBM FlashSystem V9000 AE3 GUI is used for each AE3 that will have the encryption feature. The USB keys must be installed in the USB ports in the rear of the AE3 storage enclosure.
Figure 2-14 (rear view) shows the location of USB ports on the AE3 storage enclosure.
Figure 2-14 Location of USB ports on the AE3 storage enclosure
IBM Security Key Lifecycle Manager (SKLM)
IBM FlashSystem V9000 Software V8.1 adds improved security with support for encryption key management software that complies with the Key Management Interoperability Protocol (KMIP) standard, such as IBM Security Key Lifecycle Manager (SKLM). This support helps centralize, simplify, and automate the encryption key management process.
Before IBM FlashSystem V9000 Software V8.1, you could enable encryption by using USB flash drives to copy the encryption key to the system.
 
Note: If you are creating a new cluster with V8.1, you can use USB encryption, key server encryption, or both; the USB flash drive method and the key server method can be used in parallel on the same system. Existing clients that currently use USB encryption can move to key server encryption. Migration of a local (USB) key to a centrally managed key (SKLM key server) is also available.
Encryption summary
Encryption can take place at the hardware or software level.
Encryption at the AE3 level (hardware)
An IBM FlashSystem V9000 with an AE3 enclosure supports hot encryption activation when enabling encryption in the storage enclosure. With hot encryption activation, you can enable encryption on an existing flash array without having to remove the data. Enabling encryption this way is a nondestructive process.
Hardware encryption is the preferred method for AE3 storage enclosures because this method works with the hardware compression that is built in to the Flash Modules of the AE3 flash storage enclosure.
 
Note: When an AE3 storage enclosure is configured with 18 TB modules, compression using RtC may be more effective for data with compression ratios greater than 1.2:1. However, using RtC may not deliver the lowest FlashSystem 900 AE3 latency.
Encryption at the AC3 level (software)
With highly compressible data (greater than a 2.5:1 ratio), compression using the RtC engine in the IBM FlashSystem V9000 control nodes makes more effective use of space on the flash enclosure. If this compression option is used, either software or hardware encryption is acceptable.
Software encryption should be used with other storage that does not support its own hardware encryption. For more information about encryption technologies supported by other IBM storage devices, see IBM DS8880 Data-at-rest Encryption, REDP-4500.
2.4.2 Compression
There are two ways to compress data on the V9000, depending on the type of storage attached to the system:
Real-time Compression (RtC)
AE3 In-line Hardware Compression
FlashSystem V9000 AE3 storage enclosure in-line hardware compression is always on. The ratio of usable to maximum effective capacity depends on the MicroLatency module capacity. Workloads that do not demand the lowest latency and that compress well can be candidates for RtC. Also see “Physical and effective capacity based on compression rates” in the IBM Redbooks publication Implementing IBM FlashSystem 900 Model AE3, SG24-8414.
Real-time Compression
The IBM FlashSystem V9000 Real-time Compression (RtC) feature uses additional hardware that is dedicated to the improvement of the Real-time Compression functionality. When ordered, the feature includes two Compression Acceleration Cards per control enclosure for the I/O Group to support compressed volumes.
The compression accelerator feature is ordered, by default, with the RtC software as feature code AH1A - Compression Acceleration Card. The quantity of Compression Acceleration Cards per controller is either zero (0) or two (2).
With two Compression Acceleration cards in each node (a total of four cards per I/O group), the total number of managed compressed volumes is up to 512 per I/O group. RtC type compression would be used on the previous generation IBM FlashSystem V9000 AC2 or AC3 and AE2 combinations or for externally virtualized storage systems that do not support their own compression function.
AE3 In-line Hardware Compression
The IBM FlashSystem V9000 AE3 storage enclosure has in-line hardware compression as part of its architecture. This type of compression is “always on” and cannot be switched off. For further details of the AE3 compression, its architecture, and operation, see the Architecture topic in Implementing IBM FlashSystem 900 Model AE3, SG24-8414.
2.5 IBM FlashSystem V9000 configuration backup procedure
Configuration backup is the process of extracting configuration settings from a clustered system and writing them to disk. The configuration restore process uses backup configuration data files for the system to restore a specific system configuration. Restoring the system configuration is an important part of a complete backup and disaster recovery solution.
Only the data that describes the system configuration is backed up. You must back up your application data by using the appropriate backup methods.
To enable routine maintenance, the configuration settings for each system are stored on each node. If power fails on a system or if a node in a system is replaced, the system configuration settings are automatically restored when the repaired node is added to the system. To restore the system configuration in a disaster (if all nodes in a system are lost simultaneously), plan to back up the system configuration settings to tertiary storage. You can use the configuration backup functions to back up the system configuration. The preferred practice is to implement an automatic configuration backup by applying the configuration backup command.
The virtualization map is stored on the quorum disks of external MDisks, and is accessible to every IBM FlashSystem V9000 control enclosure.
For complete disaster recovery, regularly back up the business data that is stored on volumes at the application server level or the host level.
Before making major changes to the IBM FlashSystem V9000 configuration, be sure to save the configuration of the system. By saving the current configuration, you create a backup of the licenses that are installed on the system, which can assist you in restoring the system configuration. You can save the configuration by using the “svcconfig backup” CLI command.
The next two steps show how to create a backup of the V9000 AC3 configuration file and to copy the file to another system:
1. Log in to the cluster IP using an SSH client and back up the IBM FlashSystem V9000 configuration. Example 2-2 shows the output of the “svcconfig backup” CLI command.
Example 2-2 Output of the “svcconfig backup” CLI command
superuser> svcconfig backup
...............................................................
CMMVC6155I SVCCONFIG processing completed successfully
2. Copy the configuration backup file from the system. Using secure copy, copy the following file from the system and store it:
/tmp/svc.config.backup.xml
For example, use pscp.exe, which is part of the PuTTY commands family. Example 2-3 shows the output of the pscp.exe CLI command.
Example 2-3 Using pscp.exe
pscp.exe superuser@<cluster_ip>:/tmp/svc.config.backup.xml .
superuser@<cluster_ip> password:
svc.config.backup.xml | 163 kB | 163.1 kB/s | ETA: 00:00:00 | 100%
This process must also be completed on each AE3 storage enclosure in the IBM FlashSystem V9000 cluster. Log in to each of the AE3 cluster IP addresses using an SSH client and run the “svcconfig backup” command on each of the attached FlashSystem AE3 storage enclosures.
 
Note: This process saves only the configuration of the V9000 system. User data must be backed up by using normal system backup processes.
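To implement the automatic configuration backup mentioned earlier, both steps can be scripted from an administration workstation and run on a schedule. This sketch uses the PuTTY command-line tools; the target directory is illustrative:
plink.exe superuser@<cluster_ip> svcconfig backup
pscp.exe superuser@<cluster_ip>:/tmp/svc.config.backup.xml C:\v9000-backups\
The same pair of commands, pointed at each AE3 cluster IP address, covers the attached storage enclosures as well.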