Planning
This chapter describes the steps that are required to plan the installation of the IBM FlashSystem 9100 in your environment. It considers the implications for your storage network from both the host attachment side and the virtualized storage expansion side.
This chapter also describes all the environmental requirements that you must consider and includes the following topics:
 
 
Note: This planning guide is based on the IBM FlashSystem 9100 models AF7 and AF8. It also covers the SAS expansion enclosure models A9F and AFF.
 
4.1 FlashSystem 9100
The IBM FlashSystem 9100 storage system has the node canisters and the NVMe drives in one 2U high enclosure. In the previous generation (the IBM FlashSystem V9000 AC2 or AC3 control enclosures combined with the AE2 or AE3 storage enclosures), the storage enclosures were separate 2U units that were managed by the AC2 or AC3 control enclosures. That configuration made a V9000 cluster 6U high or more.
 
Note: The IBM FlashSystem 9100 node canisters are also sometimes referred to as controllers or nodes. These terms are used interchangeably.
Figure 4-1 shows the relation of the previous IBM FlashSystem V9000 AC3 control enclosures, the managed AE2 enclosure, and the virtualized AE3 storage enclosure.
Figure 4-1 IBM FlashSystem V9000 AC2/AC3/AE2/AE3 enclosure combinations
Figure 4-2 shows the relation of the IBM FlashSystem 9100 node canisters and the NVMe storage array. The complete system is contained in a 2U high enclosure, which reduces the amount of rack space that is needed per system.
Figure 4-2 IBM FlashSystem 9100 node canisters and the NVMe storage array
4.2 General planning introduction
To achieve the most benefit from the IBM FlashSystem 9100, pre-installation planning must include several important steps. These steps can ensure that the IBM FlashSystem 9100 provides the best possible performance, reliability, and ease of management to meet the needs of your solution. Proper planning and configuration also helps minimize future downtime by avoiding the need for changes to the IBM FlashSystem 9100 and the storage area network (SAN) environment to meet future growth needs.
Important steps include planning the IBM FlashSystem 9100 configuration and completing the planning tasks and worksheets before system installation.
Figure 4-3 shows the IBM FlashSystem 9100 front view with one of the NVMe Flash Core Module drives partially removed.
Figure 4-3 IBM FlashSystem 9100 front view
IBM FlashSystem 9100 can be grown in two directions, depending on the needs of the environment. This feature is known as the scale-up, scale-out capability. Consider the following points:
If extra capacity is needed (that is, scale-up), it can be increased by adding up to 24 NVMe drives per control enclosure.
The IBM FlashSystem 9100 can have its capabilities increased (that is, scale-out), by adding up to four control enclosures in total to the solution to form a cluster. This addition increases the capacity and the performance alike.
The total capacity can be further extended by the addition of SAS all flash expansion enclosures. This change is part of the scale-up strategy.
A fully configured IBM FlashSystem 9100 cluster consists of four control enclosures, each with 24 NVMe drives per enclosure.
This chapter covers planning for the installation of a single IBM FlashSystem 9100 solution, which consists of a single control enclosure. When you plan for larger IBM FlashSystem 9100 configurations, consider the required SAN and networking connections for the appropriate number of control enclosures and scale-up expansion of the SAS external enclosures.
For more information about scalability and multiple control enclosures, see Chapter 5, “Scalability” on page 113.
 
Requirement: A pre-sale Technical Delivery Assessment (TDA) must be conducted to ensure that the configuration is correct and the solution that is planned for is valid. A preinstall TDA must be conducted shortly after the order is placed and before the equipment arrives at the customer’s location to ensure that the site is ready for the delivery and that roles and responsibilities are documented regarding all the parties who will be engaged during the installation and implementation.
Before the system is installed and configured, you must complete all the planning worksheets. When the planning worksheets are completed, you submit them to the IBM service support representative (SSR).
Complete the following steps when you plan for an IBM FlashSystem 9100 solution:
1. Collect and document the number of hosts (application servers) to attach to the IBM FlashSystem 9100, the traffic profile activity (read or write, sequential, or random), and the performance expectations for each user group; that is, input/output (I/O) operations per second (IOPS) and throughput in megabytes per second (MBps).
2. Collect and document the following storage requirements and capacities:
 – Total external storage that is attached to the IBM FlashSystem 9100
 – Required storage capacity for local mirror copy (Volume mirroring)
 – Required storage capacity for point-in-time copy (IBM FlashCopy)
 – Required storage capacity for remote copy (Metro Mirror and Global Mirror)
 – Required storage capacity for use of the IBM HyperSwap function
 – Required storage capacity for compressed volumes
 – Per host: the required storage capacity, the logical unit number (LUN) quantity, and the LUN sizes
 – Required virtual storage capacity that is used as a fully managed volume and used as a thin-provisioned volume
3. Define the local and remote IBM FlashSystem 9100 SAN fabrics to be used for the internal connections (if this system is multi-enclosure), for the hosts, and for any external storage. Also, plan for the remote copy or secondary disaster recovery site as needed.
4. Define the number of IBM FlashSystem 9100 control enclosures and any additional expansion enclosures or external storage controllers that are required for the solution. Each IBM FlashSystem 9100 control enclosure forms an I/O Group, which is the container for its volumes. The number of I/O Groups that is needed depends on the overall performance requirements.
5. If applicable, also consider any IBM FlashSystem 9100 AFF or A9F expansion enclosure requirements and the type of drives that are needed in each expansion enclosure. For more information about planning for the expansion enclosures, see 4.3.5, “SAS expansion enclosures” on page 74.
6. Design the host side of the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth that is needed between the host and the IBM FlashSystem 9100, and the IBM FlashSystem 9100 and the external storage subsystems.
7. Design the internal side of the SAN according to the requirements as outlined in the cabling specifications for the number of IBM FlashSystem 9100 control enclosures being installed. This SAN network is used for the IBM FlashSystem 9100 control enclosures and any external storage, if installed, and data transfers. Connecting this network across inter-switch links (ISL) is not supported.
 
Important: Check and carefully count the required ports for the wanted configuration. Equally important, consider future expansion when planning an initial installation to ensure ease of growth.
8. If your solution uses internet Small Computer System Interface (iSCSI), design the iSCSI network according to the requirements for high availability (HA) and best performance. Consider the total number of ports and bandwidth that is needed between the host and the IBM FlashSystem 9100.
9. Determine the IBM FlashSystem 9100 cluster management and service Internet Protocol (IP) addresses needed. The IBM FlashSystem 9100 system requires the following addresses:
 – One cluster IP address for the IBM FlashSystem 9100 system as a whole
 – Two service IP addresses, one for each node canister within the control enclosures
For example, an IBM FlashSystem 9100 cluster that is composed of three control enclosures needs one management IP address and six service IP addresses assigned.
10. Determine the IP addresses for the IBM FlashSystem 9100 system and for the hosts that connect through the iSCSI network.
11. Define a naming convention for the IBM FlashSystem 9100 control enclosures, host, and any external storage subsystem planned. For example, ITSO_FS9100-1 shows that the IBM FlashSystem 9100 is used by the International Technical Support Organization (ITSO) Redbooks publication team, and is the first IBM FlashSystem 9100 in the department.
12. Define the managed disks (MDisks) from any external storage subsystems.
13. Define storage pools. The use of storage pools depends on the workload, any external storage subsystems that are connected, any additional expansion or control enclosures that are added, and the intended use of each pool. You might also need to define pools for data migration requirements or for Easy Tier. For more information about Easy Tier, see 4.6.2, “IBM Easy Tier” on page 102.
14. Plan the logical configuration of the volumes within the I/O Groups and the storage pools to optimize the I/O load between the hosts and the IBM FlashSystem 9100 (a volume creation sketch follows this list).
15. Plan for the physical location of the equipment in the rack. IBM FlashSystem 9100 planning can be categorized into the following types:
 – Physical planning
 – Logical planning
The following sections describe these planning types.
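The following sketch illustrates the volume planning in step 14. It is an example only; the pool name, volume names, sizes, and I/O Group assignments are placeholders and must be replaced with values from your own plan:
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 500 -unit gb -name app1_vol01
mkvdisk -mdiskgrp Pool0 -iogrp io_grp1 -size 500 -unit gb -name app1_vol02
Alternating the -iogrp value in this way spreads volume ownership, and therefore host I/O, across the control enclosures in a multi-enclosure cluster.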
 
Note: IBM FlashSystem 9100 V8.2.0 provides GUI management of the HyperSwap function. HyperSwap enables each volume to be presented by two I/O groups. If you plan to use this function, you must consider the I/O Group assignments in the planning for the IBM FlashSystem 9100.
4.3 Physical planning
Use the information in this section as guidance when you are planning the physical layout and connections to use for installing your IBM FlashSystem 9100 in a rack and connecting to your environment.
Industry standard racks are defined by the Electronic Industries Alliance (EIA) as 19-inch wide by 1.75-inch tall rack spaces or units, each of which is commonly referred to as 1U of the rack. Each IBM FlashSystem 9100 control enclosure requires 2U of space in a standard rack. Additionally, each add-on SAS expansion enclosure requires another 2U of space for the AFF or 5U of space for the A9F.
 
Important: IBM FlashSystem 9100 is approximately 850 mm (33.46 inches) deep and requires a rack of these dimensions to house it. The rack must also have the required service clearance at the rear to allow for concurrent maintenance of the node canisters.
For more information, see IBM Knowledge Center.
For non-IBM racks, the service clearance at the rear must be at least 915 mm (36 inches) to allow for installation and concurrent maintenance of the node canisters.
When the IBM FlashSystem 9100 solution is developed by adding control enclosures and expansions, the best approach is to plan for all of the members to be installed in the same rack for ease of cabling the internal dedicated SAN fabric connections. One 42U rack can house an entire maximum configuration of an IBM FlashSystem 9100 solution, its SAN switches, and an Ethernet switch for management connections. Depending on the number of extra expansion enclosures, you might need to plan for extra racks.
Figure 4-4 shows a partially configured solution of two IBM FlashSystem 9100 control enclosures plus two other scale-out A9F expansion enclosures and two AFF expansion enclosures in a 42U rack.
Figure 4-4 Control enclosures plus more scale-out expansion enclosures
4.3.1 IBM FlashSystem 9100 control enclosures
Each IBM FlashSystem 9100 control enclosure can support up to six PCIe expansion I/O cards, as listed in Table 4-1, to provide a range of connectivity and capacity expansion options.
Table 4-1 IBM FlashSystem 9100 control enclosure adapter card options
Number of cards | Ports | Protocol | Possible slots | Comments
0 - 3 | 4 | 16 Gb Fibre Channel | 1, 2, 3 |
0 - 3 | 2 | 25 Gb Ethernet (iWARP) | 1, 2, 3 |
0 - 3 | 2 | 25 Gb Ethernet (RoCE) | 1, 2, 3 |
0 - 1 | 2 (see comment) | 12 Gb SAS Expansion | 1, 2, 3 | Card is 4-port with only two ports active (ports 1 and 3)
The following types of I/O adapter options can be ordered:
Feature Code AHB3 - 16 Gb FC 4 Port Adapter Cards (Pair)
 – This feature provides two I/O adapter cards, each with four 16 Gb FC ports and shortwave SFP transceivers. It is used to add 16 Gb FC connectivity to the IBM FlashSystem 9100 control enclosure.
 – This card also supports longwave transceivers that can be intermixed on the card with shortwave transceivers in any combination. Longwave transceivers are ordered by using feature ACHU.
 – Minimum required: None.
 – Maximum allowed:
 • None when the total quantity of features AHB6, AHB7, and AHBA is three
 • One when the total quantity of features AHB6, AHB7, and AHBA is two
 • Two when the total quantity of features AHB6, AHB7, and AHBA is one
 • Three when the total quantity of features AHB6, AHB7, and AHBA is zero
Feature Code AHB6 - 25 GbE (RoCE) Adapter Cards (Pair)
 – This feature provides two I/O adapter cards, each with two 25 Gb Ethernet ports and SFP28 transceivers. It is used to add 25 Gb Ethernet connectivity to the IBM FlashSystem 9100 control enclosure, and the cards are designed to support RDMA with RoCE v2.
 
Note: This adapter does not support FCoE connectivity. When two of these adapters are installed, clustering with other IBM FlashSystem 9100 systems is not possible.
 – Minimum required: None.
 – Maximum allowed:
 • None when the total quantity of features AHB3, AHB7, and AHBA is three
 • One when the total quantity of features AHB3, AHB7, and AHBA is two
 • Two when the total quantity of features AHB3, AHB7, and AHBA is one
 • Three when the total quantity of features AHB3, AHB7, and AHBA is zero
Feature Code AHB7 - 25 GbE (iWARP) Adapter Cards (Pair)
 – This feature provides two I/O adapter cards, each with two 25 Gb Ethernet ports and SFP28 transceivers. It is used to add 25 Gb Ethernet connectivity to the IBM FlashSystem 9100 control enclosure, and the cards are designed to support RDMA with iWARP.
 
Note: This adapter does not support FCoE connectivity. When two of these adapters are installed, clustering with other FlashSystem 9100 systems is not possible.
 – Minimum required: None.
 – Maximum allowed:
 • None when the total quantity of features AHB3, AHB6, and AHBA is three
 • One when the total quantity of features AHB3, AHB6, and AHBA is two
 • Two when the total quantity of features AHB3, AHB6, and AHBA is one
 • Three when the total quantity of features AHB3, AHB6, and AHBA is zero
Feature Code AHBA - SAS Expansion Enclosure Attach Card (Pair)
 – This feature provides two 4-port 12 Gb SAS expansion enclosure attachment cards.
 – This feature is used to attach up to 20 expansion enclosures to an IBM FlashSystem 9100 control enclosure.
 – Minimum required: None.
 – Maximum allowed:
 • None when the total quantity of features AHB3, AHB6, and AHB7 is three
 • One when the total quantity of features AHB3, AHB6, and AHB7 is two or less
 
Note: Only two of the four SAS ports on the SAS expansion enclosure attachment card are used for expansion enclosure attachment. Only ports 1 and 3 are used; the other two SAS ports are inactive.
Figure 4-5 shows the IBM FlashSystem 9100 PCIe slot locations.
Figure 4-5 IBM FlashSystem 9100 PCIe slot locations
 
Attention: The upper controller PCIe slot positions are counted right to left because the node canister hardware is mounted upside down in the enclosure.
4.3.2 Racking considerations
IBM FlashSystem 9100 is installed as a minimum of a one control enclosure configuration. Each control enclosure contains two node canisters and up to 24 NVMe drives, and is 2U high. Ensure that the space for the entire system is available if more than one IBM FlashSystem 9100 control enclosure or additional expansion enclosures are to be installed.
Location of IBM FlashSystem 9100 in the rack
Use Table 4-2 on page 66 to help plan the rack locations that you use for up to a 42U rack. Complete the table for the hardware locations of the IBM FlashSystem 9100 system and other devices.
Table 4-2 Hardware location planning of the IBM FlashSystem 9100 in the rack
Rack unit | Component
EIA 42 |
EIA 41 |
EIA 40 |
EIA 39 |
EIA 38 |
EIA 37 |
EIA 36 |
EIA 35 |
EIA 34 |
EIA 33 |
EIA 32 |
EIA 31 |
EIA 30 |
EIA 29 |
EIA 28 |
EIA 27 |
EIA 26 |
EIA 25 |
EIA 24 |
EIA 23 |
EIA 22 |
EIA 21 |
EIA 20 |
EIA 19 |
EIA 18 |
EIA 17 |
EIA 16 |
EIA 15 |
EIA 14 |
EIA 13 |
EIA 12 |
EIA 11 |
EIA 10 |
EIA 9 |
EIA 8 |
EIA 7 |
EIA 6 |
EIA 5 |
EIA 4 |
EIA 3 |
EIA 2 |
EIA 1 |
 
4.3.3 Power requirements
Each IBM FlashSystem 9100 control enclosure requires two IEC-C13 power cable connections to connect to their 2000 W (2 kW) power supplies. Country-specific power cables are available for ordering to ensure that proper cabling is used. A total of two power cords are required to connect each IBM FlashSystem 9100 control enclosure to the rack power.
Figure 4-6 shows an example of a FlashSystem 9100 control enclosure with the two 2000-W power supplies and the connection points for the power cables in each node canister.
Figure 4-6 IBM FlashSystem 9100 control enclosure power connections
Each IBM FlashSystem 9100 Model AFF SAS expansion enclosure requires two IEC-C13 power cable connections to connect to their 764 W power supplies. Country-specific power cables are available for ordering to ensure that proper cabling is used. A total of two power cords are required to connect each IBM FlashSystem 9100 AFF expansion enclosure to the rack power.
Figure 4-7 on page 68 shows an example of an IBM FlashSystem 9100 AFF expansion enclosure with the two 764 W power supplies and the connection points for the power cables in each expansion canister.
Figure 4-7 IBM FlashSystem 9100 AFF expansion enclosure power connections
Each IBM FlashSystem 9100 Model A9F SAS expansion enclosure requires two IEC-C19 power cable connections to connect to their 2400 W (2.4 kW) power supplies. Country-specific power cables are available for ordering to ensure that proper cabling is used. A total of two power cords are required to connect each IBM FlashSystem 9100 A9F expansion enclosure to the rack power.
Figure 4-8 shows an example of an IBM FlashSystem 9100 A9F expansion enclosure with the two 2400 W power supplies and the connection points for the power cables in each controller.
Figure 4-8 IBM FlashSystem 9100 A9F expansion enclosure power connections
Upstream redundancy of the power to your cabinet (power circuit panels and on-floor Power Distribution Units [PDUs]), within cabinet power redundancy (dual power strips or in-cabinet PDUs), and upstream high availability structures (uninterruptible power supply [UPS], generators, and so on) influence your power cabling decisions.
If you are designing an initial layout that includes future growth plans, plan to allow for the extra control enclosures to be colocated in the same rack with your initial system for ease of planning for the extra interconnects that are required. A maximum configuration of the IBM FlashSystem 9100, with dedicated internal switches for SAN and local area network (LAN) and extra expansion enclosures, can almost fill a 42U 19-inch rack.
 
Tip: When cabling the power, connect one power cable from each enclosure to the left side internal PDU and the other power supply power cable to the right side internal PDU. These connections enable the cabinet to be split between two independent power sources for greater availability. When adding IBM FlashSystem 9100 control or expansion enclosures to the solution, continue the same power cabling scheme for each extra enclosure.
You must consider the maximum power rating of the rack; do not exceed it. For more information about requirements, see IBM Knowledge Center.
4.3.4 Network cable connections
Various checklists and tables are available that you can use to plan for all the various types of network connections (for example, FC, Ethernet, iSCSI, and SAS) on the IBM FlashSystem 9100.
You can download the latest cable connection tables from the IBM FlashSystem 9100 web page of IBM Knowledge Center by completing the following steps:
1. Go to the IBM FlashSystem 9100 page in IBM Knowledge Center.
2. Go to the Table of Contents.
3. Click Planning on the left side panel.
4. In the list of results, select Planning worksheets (customer task).
5. Here, you can select from the following download options:
 – Planning worksheets for system connections
 – Planning worksheets for network connections
 – Planning for management and service IP addresses
 – Planning for SAS Expansion enclosures (if installed)
We also included some sample worksheets here to give an overview of what information is required.
PCIe adapters and connections
Figure 4-9 shows the FC port locations, which are identified for all of the possible fiber connections across the two IBM FlashSystem 9100 node canisters.
Figure 4-9 IBM FlashSystem 9100 FC port locations
 
Note: The upper node canister FC card PCIe slot positions are counted right to left because the node canister hardware is installed upside down in the IBM FlashSystem 9100 control enclosure.
Create a cable connection table or similar documentation to track all of the connections that are required for setting up the following items:
Ethernet
FC ports: Host and internal
iSCSI (iWARP or RoCE)
Slot numbers and adapter types are listed in Table 4-3.
Table 4-3 IBM FlashSystem 9100 node canister PCIe slot numbers and adapter type
PCIe slot | Adapter types
1 | Fibre Channel, Ethernet, or SAS
2 | Fibre Channel, Ethernet, or SAS
3 | Fibre Channel, Ethernet, or SAS
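After the adapters are installed, the port inventory can be verified from the CLI and compared against the cable connection tables. The following commands are a sketch only; they assume an initialized system and no output is shown here:
lsportfc
lsportip
The lsportfc output lists the FC ports by node, adapter location, and port ID, and the lsportip output lists the Ethernet ports and any IP addresses that are assigned to them.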
In the following sections, we provide sample charts for the various network connections.
Fibre Channel ports
Use Table 4-4 to document FC port connections for a single control enclosure.
Table 4-4 Fibre Channel port connections
Location | Item | Fibre Channel port 1 | Fibre Channel port 2 | Fibre Channel port 3 | Fibre Channel port 4
Node canister 1, Fibre Channel card 1 | Switch/host | | | |
Node canister 1, Fibre Channel card 1 | Port | | | |
Node canister 1, Fibre Channel card 1 | Speed | | | |
Node canister 1, Fibre Channel card 2 | Switch/host | | | |
Node canister 1, Fibre Channel card 2 | Port | | | |
Node canister 1, Fibre Channel card 2 | Speed | | | |
Node canister 1, Fibre Channel card 3 | Switch/host | | | |
Node canister 1, Fibre Channel card 3 | Port | | | |
Node canister 1, Fibre Channel card 3 | Speed | | | |
Node canister 2, Fibre Channel card 1 | Switch/host | | | |
Node canister 2, Fibre Channel card 1 | Port | | | |
Node canister 2, Fibre Channel card 1 | Speed | | | |
Node canister 2, Fibre Channel card 2 | Switch/host | | | |
Node canister 2, Fibre Channel card 2 | Port | | | |
Node canister 2, Fibre Channel card 2 | Speed | | | |
Node canister 2, Fibre Channel card 3 | Switch/host | | | |
Node canister 2, Fibre Channel card 3 | Port | | | |
Node canister 2, Fibre Channel card 3 | Speed | | | |
Ethernet port connections
Support for Ethernet connections is by way of the on-board ports or by adding Ethernet PCIe adapters.
Table 4-5 lists the layout of the on-board Ethernet connections.
Table 4-5 Node canister on-board Ethernet port connections
Component | Ethernet port 1 | Ethernet port 2 | Ethernet port 3 | Ethernet port 4 | Technician port
Node canister 1 (upper), Switch | | | | | None
Node canister 1 (upper), Port | | | | | None
Node canister 1 (upper), Speed | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 1 Gbps only
Node canister 2 (lower), Switch | | | | | None
Node canister 2 (lower), Port | | | | | None
Node canister 2 (lower), Speed | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 10 Gbps or 1 Gbps | 1 Gbps only
Each node canister also supports up to three optional 2-port 25 Gbps internet Wide-area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE) Ethernet adapters.
The following guidelines must be followed if 25 Gbps Ethernet adapters are installed:
iWARP and RoCE Ethernet adapters cannot be mixed within a node canister.
Fibre Channel adapters are installed before Ethernet adapters, beginning with slot 1, then slot 2 and slot 3.
Ethernet adapters are installed beginning with the first available slot.
If a SAS adapter is required to connect to expansion enclosures, it must be installed in slot 3.
Table 4-6 lists the 25 Gbps Ethernet adapter port connections, speeds, and switch port assignments.
Table 4-6 25 Gbps Ethernet adapter port connections
Component | Adapter 1 Ethernet port 1 | Adapter 1 Ethernet port 2 | Adapter 2 Ethernet port 1 | Adapter 2 Ethernet port 2 | Adapter 3 Ethernet port 1 | Adapter 3 Ethernet port 2
Node canister 1 (upper), Switch | | | | | |
Node canister 1 (upper), Port | | | | | |
Node canister 1 (upper), Speed | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps
Node canister 2 (lower), Switch | | | | | |
Node canister 2 (lower), Port | | | | | |
Node canister 2 (lower), Speed | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps | 25 or 10 Gbps
Management and service IP addresses
Figure 4-10 shows the locations of the on-board Ethernet ports and the technician port. The technician port is used by the SSR when the IBM FlashSystem 9100 is installed.
Figure 4-10 IBM FlashSystem 9100 technician and Ethernet port locations
Use Table 4-7 to document the management and service IP address settings for the IBM FlashSystem 9100 control enclosure in your environment.
 
Important: The upper node canister Ethernet port positions are counted right to left because the upper node canister hardware is installed upside down in the IBM FlashSystem 9100 control enclosure.
Table 4-7 IP addresses for the IBM FlashSystem 9100 control enclosure
Cluster name: ______________________________
IBM FlashSystem 9100 control enclosure | Setting | Value
Management IP address | IP |
Management IP address | Subnet mask |
Management IP address | Gateway |
Node canister #1 service IP address | IP |
Node canister #1 service IP address | Subnet mask |
Node canister #1 service IP address | Gateway |
Node canister #2 service IP address | IP |
Node canister #2 service IP address | Subnet mask |
Node canister #2 service IP address | Gateway |
 
 
Note: If you have more than one IBM FlashSystem 9100 control enclosure to configure, two extra service IP addresses are needed per extra IBM FlashSystem 9100 control enclosure. Only one management IP address is required per IBM FlashSystem 9100 cluster.
For more information about the assignments of the extra Ethernet ports that can be used for host I/O, see 4.4.1, “Management IP addressing plan” on page 76.
4.3.5 SAS expansion enclosures
The following models of SAS expansion enclosures are offered:
9846/9848-A9F
9846/9848-AFF
The following maximum individual expansion enclosure capacities are available:
A 9846/9848-AFF SAS expansion enclosure contains up to 24 2.5-inch high capacity SSDs, and up to 368.6 TB raw capacity.
A 9846/9848-A9F SAS expansion enclosure contains up to 92 2.5-inch high capacity SSDs (in 3.5-inch carriers), and up to 1,413 TB raw capacity.
To support a flash-optimized tiered storage configuration for mixed workloads, up to 20 9846/9848-AFF SAS expansion enclosures can be connected to each IBM FlashSystem 9100 control enclosure in the system. A maximum of eight A9F expansion enclosures can be attached per control enclosure.
For more information about the rules for mixing the AFF and A9F expansion enclosures that are attached to each IBM FlashSystem 9100 control enclosure, see IBM Knowledge Center.
A single FlashSystem 9100 control enclosure can support up to 20 IBM FlashSystem 9100 SFF expansion enclosures with a maximum of 504 drives per system or up to eight IBM FlashSystem 9100 LFF HD expansion enclosures with a maximum of 760 drives per system. Intermixing of expansion enclosures in a system is supported. Expansion enclosures are dynamically added with virtually no downtime, which helps to quickly and seamlessly respond to growing capacity demands.
With four-way system clustering, the size of the system can be increased to a maximum of 3,040 drives. IBM FlashSystem 9100 systems can be added into IBM FlashSystem 9100 clustered systems.
Further scalability can be achieved with virtualization of external storage. When IBM FlashSystem 9100 virtualizes an external storage system, capacity in the external system inherits the functional richness and ease of use of IBM FlashSystem 9100.
Expansion enclosure model AFF
The IBM FlashSystem 9100 SFF Expansion Enclosure Model AFF includes the following features:
Two expansion canisters
12 Gb SAS ports for control enclosure and expansion enclosure attachment
Support for up to 24 2.5-inch SAS SSD flash drives
2U, 19-inch rack mount enclosure with AC power supplies
Expansion enclosure model A9F
The IBM FlashSystem 9100 High-Density (HD) Expansion Enclosure Model A9F delivers increased storage density and capacity for IBM FlashSystem 9100 with cost-efficiency while maintaining its highly flexible and intuitive characteristics. It includes the following features:
5U, 19-inch rack mount enclosure with slide rail and cable management assembly
Support for up to 92 3.5-inch large-form factor (LFF) 12 Gbps SAS top-loading SSDs
Redundant 200 - 240 V AC power supplies (new PDU power cord required)
Up to eight HD expansion enclosures are supported per IBM FlashSystem 9100 control enclosure, which provides up to 736 drives and 11.3 PB of SSD capacity per control enclosure (up to a maximum of 32 PB total)
With four control enclosures, a maximum of 32 HD expansion enclosures can be attached, which gives a maximum of 32 PB of supported raw SSD capacity
All drives within an expansion enclosure must be the SSD type; however, various drive models are supported for use in the IBM FlashSystem 9100 expansion enclosures. These drives are hot swappable and have a modular design for easy replacement.
The following 12 Gb SAS industry-standard drives are supported in IBM FlashSystem 9100 AFF and A9F expansion enclosures:
1.92 TB 12 Gb SAS flash drive (2.5-inch and 3.5-inch form factor features)
3.84 TB 12 Gb SAS flash drive (2.5-inch and 3.5-inch form factor features)
7.68 TB 12 Gb SAS flash drive (2.5-inch and 3.5-inch form factor features)
15.36 TB 12 Gb SAS flash drive (2.5-inch and 3.5-inch form factor features)
 
Note: To support SAS expansion enclosures, an AHBA - SAS Enclosure Attach adapter card must be installed in each node canister of the IBM FlashSystem 9100 control enclosure.
SAS expansion enclosure worksheet
If the system includes optional SAS expansion enclosures, you must record the configuration values that are used by the IBM SSR during the installation process.
Complete Table 4-8 based on your particular system and provide this worksheet to the IBM SSR before the system is installed (replace the “xxxx” entries with your values).
Table 4-8 Configuration values: SAS enclosure x, controller block x, and SAS enclosure n, controller block n
Configuration setting | Value | Usage in CLI
MDisk group name | xxxx | mkmdiskgrp -name mdisk_group_name -ext extent_size
MDisk extent size in MB | xxxx |
RAID level (DRAID5 or DRAID6) | xxxx | mkdistributedarray -level raid_level -driveclass driveclass_id -drivecount x -stripewidth x -rebuildareas x mdiskgrp_id or mdiskgrp_name
driveclass_id: the class that is being used to create the array, which must be a numeric value | xxxx |
drivecount: the number of drives to use for the array (the minimum drive count for DRAID5 is 4; the minimum drive count for DRAID6 is 6) | xxxx |
stripewidth: the width of a single unit of redundancy within a distributed set of drives (3 - 16 for DRAID5; 5 - 16 for DRAID6) | xxxx |
rebuildareas: the reserved capacity that is distributed across all drives available to an array (valid values for DRAID5 and DRAID6 are 1, 2, 3, and 4) | xxxx |
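For reference, the following sketch shows how the worksheet values might be used on the CLI. The pool name, extent size, drive class, and drive counts are placeholder values for illustration only; use the values that are recorded in Table 4-8:
mkmdiskgrp -name Pool0 -ext 1024
mkdistributedarray -level raid6 -driveclass 0 -drivecount 16 -stripewidth 14 -rebuildareas 1 Pool0
The first command creates the storage pool with a 1024 MB extent size. The second command creates a DRAID6 array from 16 drives of drive class 0 with one rebuild area and adds it to that pool.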
For more information about intermixing SAS expansion enclosure models, see 5.3, “Scale up for capacity” on page 117.
External storage systems
You can attach and virtualize many types of external storage systems to the IBM FlashSystem 9100 (IBM and non-IBM varieties). For more information, see IBM Knowledge Center.
4.4 Logical planning
Each IBM FlashSystem 9100 control enclosure forms a single I/O group. A clustered system can contain up to four I/O groups, for a total of four control enclosures that are configured as one cluster.
4.4.1 Management IP addressing plan
To manage the IBM FlashSystem 9100 system, you access the management GUI of the system by directing a web browser to the cluster’s management IP address.
The IBM FlashSystem 9100 also uses a technician port. This port is defined on each node canister and is marked with the letter “T”. All initial configuration for the IBM FlashSystem 9100 is performed through the technician port. The port runs a Dynamic Host Configuration Protocol (DHCP) service so that any notebook or computer with DHCP enabled is automatically assigned an IP address when it is connected to the port.
 
Note: The hardware installation process for the IBM FlashSystem 9100 is completed by the IBM SSR. If the IBM FlashSystem 9100 is a scalable solution, the SSR works with the IBM Lab Services team to complete the installation.
After the initial cluster configuration is completed, the technician port automatically routes the connected user directly to the service GUI for the specific node canister.
Table 4-9 lists the on-board Ethernet ports, speed, and function.
Table 4-9 On-board Ethernet ports, speed, and function
Ethernet port | Speed | Function | Comments
1 | 10 Gbps | Management IP, Service IP, and Host I/O | Primary Management Port
2 | 10 Gbps | Management IP, Service IP, and Host I/O | Secondary Management Port
3 | 10 Gbps | Host I/O | Cannot be used for internal control enclosure communications
4 | 10 Gbps | Host I/O | Cannot be used for internal control enclosure communications
T | 1 Gbps | Technician Port - DHCP/DNS for direct attach service management | SSR Use Only
Each IBM FlashSystem 9100 node canister requires one Ethernet cable connection to an Ethernet switch or hub. The cable must be connected to port 1. For each cable, a 10/100/1000 Mb Ethernet connection is required. Both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are supported.
 
Note: For increased redundancy, an optional second Ethernet connection is supported for each node canister. This cable can be connected to Ethernet port 2.
To ensure system failover operations, Ethernet port 1 on all IBM FlashSystem 9100 node canisters must be connected to the common set of subnets. If used for increased redundancy, Ethernet port 2 on all IBM FlashSystem 9100 node canisters must also be connected to a common set of subnets. However, the subnet for Ethernet port 1 does not have to be the same as the subnet for Ethernet port 2.
Each IBM FlashSystem 9100 cluster must have a cluster management IP address and also a service IP address for each of the IBM FlashSystem 9100 node canisters in the cluster. The service IP address does not have its own unique Ethernet cable. It uses the same physical cable that the management IP address uses.
Example 4-1 shows the IBM FlashSystem 9100 Management IP addresses for one control enclosure.
Example 4-1 IBM FlashSystem 9100 Management IP address
management IP add. 10.11.12.120
node 1 service IP add. 10.11.12.121
node 2 service IP add. 10.11.12.122
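The following sketch shows how the cluster management IP address from Example 4-1 might be verified or changed from the CLI after the initial setup. The subnet mask and gateway are placeholder values, and the service IP addresses are normally set by the IBM SSR through the technician port or the Service Assistant:
chsystemip -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0 -port 1
lssystemip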
 
 
Requirement: Each IBM FlashSystem 9100 node canister in a clustered system must have at least one Ethernet connection.
Support for iSCSI on the IBM FlashSystem 9100 is also available from the on-board 10 GbE ports 3 and 4, which requires extra IPv4 or IPv6 addresses for each of those 10 GbE ports that are used on each of the IBM FlashSystem 9100 node canisters. These IP addresses are independent of the IBM FlashSystem 9100 cluster management IP addresses on the 10 GbE port 1 and port 2 of the node canisters within the control enclosures.
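As an illustration only, the following sketch shows how an iSCSI IP address might be assigned to on-board Ethernet port 3 of each node canister by using the cfgportip command. The node IDs, addresses, mask, and gateway are placeholders:
cfgportip -node 1 -ip 10.11.13.121 -mask 255.255.255.0 -gw 10.11.13.1 3
cfgportip -node 2 -ip 10.11.13.122 -mask 255.255.255.0 -gw 10.11.13.1 3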
More iSCSI support is available by ordering and installing PCIe adapters with Feature Code AHB6 - 25 GbE (RoCE) Adapter Cards (Pair) or Feature Code AHB7 - 25 GbE (iWARP) Adapter Cards (Pair).
For more information about these feature codes, see the IBM FlashSystem 9100 announcement materials on this IBM web page.
When accessing the IBM FlashSystem 9100 by using the GUI or Secure Shell (SSH), choose one of the available management or service IP addresses to connect to. In this case, no automatic failover capability is available. If one network is down, use an IP address on the alternative network.
 
Note: The Service Assistant tool that is described in this book is a web-based GUI that is used to service individual node canisters, primarily when a node has a fault and is in a service state. This GUI is often used only with guidance from IBM remote support. On the IBM FlashSystem 9100 control enclosures, the service ports in the node canisters are assigned IP addresses and connected to the network.
4.4.2 SAN zoning and SAN connections
IBM FlashSystem 9100 can connect to 8 Gbps or 16 Gbps Fibre Channel (FC) switches for SAN attachments. From a performance perspective, connecting the IBM FlashSystem 9100 to 16 Gbps switches is better. For the internal SAN attachments, 16 Gbps switches are better performing and more cost-effective.
Both 8 Gbps and 16 Gbps SAN connections require correct zoning or VSAN configurations on the SAN switch or directors to bring security and performance together. Implement a dual-host bus adapter (HBA) approach at the host to access the IBM FlashSystem 9100.
For more information about examples of the HBA connections, see IBM Knowledge Center.
 
Note: The IBM FlashSystem 9100 V8.2 supports 16 Gbps direct host connections without a switch.
Port configuration
With the IBM FlashSystem 9100, up to twenty-four 16 Gbps Fibre Channel (FC) ports are available per enclosure when Feature Code AHB3 is ordered. Some of these ports are used for internal communications when the IBM FlashSystem 9100 is running in a clustered solution with more than one control enclosure.
The iSCSI ports can also be used for clustering the IBM FlashSystem 9100 enclosures if required. For more information, see Chapter 5, “Scalability” on page 113.
The remaining ports are used for host connections and any externally virtualized storage, if installed. For more information about the zoning for inter-cluster, hosts, and external storage FC connections, see IBM Knowledge Center.
Consider the following points:
Configuring SAN communication between nodes in the same I/O group is optional. All internode communication between ports in the same I/O group must not cross ISLs.
Each node in the system must have at least two ports with paths to all other nodes that are in different enclosures in the same system.
A node cannot have more than 16 paths to another node in the same system.
FC connections between the system and the switch can vary based on fibre types and different SFPs (longwave and shortwave).
 
Note: New IBM FlashSystem 9100 systems that are installed with version 8.2.0 or later have N_Port ID Virtualization (NPIV) enabled as the default status. If a system is updated to version 8.2.0, it retains its NPIV status.
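When NPIV is enabled, hosts should be zoned to the virtualized host WWPNs rather than to the physical port WWPNs. As a hedged sketch, the following query can be used to collect the port information before the host zones are created; in the output, the ports that report host I/O as permitted are the NPIV host ports to use in the zones:
lstargetportfc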
Customer-provided SAN switches and zoning
External virtualized storage systems are attached along with the host on the front-end FC ports for access by the control enclosures of the IBM FlashSystem 9100. Carefully create zoning plans for each extra storage system so that these systems are properly configured for use and best performance between storage systems and the IBM FlashSystem 9100. Configure all external storage systems with all IBM FlashSystem 9100 control enclosures; arrange them for a balanced spread across the system.
All IBM FlashSystem 9100 control enclosures in the system must be connected to the same SANs so that they all can present volumes to the hosts. These volumes are created from storage pools that are composed of the virtualized control enclosure MDisks, and if licensed, the external storage systems MDisks that are managed by the IBM FlashSystem 9100.
For more information about suggested fabric zoning, see IBM Knowledge Center.
4.4.3 IP Replication and Mirroring
In this section, we describe IP Replication and mirroring.
iSCSI IP addressing plan
IBM FlashSystem 9100 supports host access through iSCSI (as an alternative to FC). IBM FlashSystem 9100 can use the built-in Ethernet ports for iSCSI traffic.
Two optional 2-port 25 Gbps Ethernet adapters are supported in each node canister after V8.2.0, for iSCSI communication with iSCSI-capable Ethernet ports in hosts by way of Ethernet switches.
However, the use of two 25 Gbps Ethernet adapters per node canister prevents adding this control enclosure to a system, or adding another control enclosure to a system that was created from this control enclosure (sometimes known as clustering), until a future software release adds support for clustering by way of the 25 Gbps Ethernet ports. These 2-port 25 Gbps Ethernet adapters do not support FCoE.
The following types of 25-Gbps Ethernet adapters are supported:
RDMA over Converged Ethernet (RoCE)
Internet Wide-area RDMA Protocol (iWARP)
Either of these adapters works for standard iSCSI communications; that is, communications that do not use Remote Direct Memory Access (RDMA). A future software release is expected to add RDMA-based links that use new protocols, such as NVMe over Ethernet.
IBM FlashSystem 9100 supports the Challenge Handshake Authentication Protocol (CHAP) authentication methods for iSCSI. iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.
iSCSI IP addresses can be configured for one or more nodes. iSCSI simple name server (iSNS) addresses can be configured in IBM FlashSystem 9100.
For more information about the iSCSI-qualified name (IQN) for a IBM FlashSystem 9100 node, see this website.
Because the IQN contains the clustered system name and the node name, it is important not to change these names after iSCSI is deployed. Each node can be given an iSCSI alias, as an alternative to the IQN.
The IQN of the host is added to an IBM FlashSystem 9100 host object in the same way that you add FC worldwide port names (WWPNs). Host objects can have both WWPNs and IQNs.
Standard iSCSI host connection procedures can be used to discover and configure IBM FlashSystem 9100 as an iSCSI target.
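The following sketch shows how an iSCSI host object might be defined and how a CHAP secret might be set for it. The host name, IQN, and secret are placeholders for illustration only:
mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:hostexample
chhost -chapsecret mysecret123 linuxhost01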
4.4.4 Native IP replication
IBM FlashSystem 9100 supports native IP replication, which enables the use of lower-cost Ethernet connections for remote mirroring. The capability is available as an option (Metro Mirror or Global Mirror) on all IBM FlashSystem 9100 systems. The function is transparent to servers and applications, in the same way that traditional FC-based mirroring is.
All remote mirroring modes (Metro Mirror, Global Mirror, and Global Mirror with Changed Volumes) are supported.
Configuration of the system is straightforward: IBM FlashSystem 9100 systems can normally find each other in the network and can be selected from the GUI. IP replication includes Bridgeworks SANSlide network optimization technology, and is available at no extra charge. Remote mirror is a chargeable option, but the price does not change with IP replication. Remote mirror users can access the new function at no extra charge.
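If the IP partnership is created from the CLI instead of the GUI, it might look similar to the following sketch. The remote cluster IP address, link bandwidth, and background copy rate are placeholder values, and the partnership must be configured on both systems:
mkippartnership -type ipv4 -clusterip 192.168.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50
lspartnership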
 
Note: For more information about how to set up and configure IP replication, see IBM SAN Volume Controller and Storwize Family Native IP Replication, REDP-5103.
4.4.5 Advanced Copy Services
The IBM FlashSystem 9100 offers the following Advanced Copy Services:
FlashCopy
Metro Mirror
Global Mirror
Apply the following guidelines when you use the IBM FlashSystem 9100 Advanced Copy Services.
FlashCopy guidelines
Adhere to the following guidelines for FlashCopy:
Identify each application that must have a FlashCopy function implemented for its volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate storage pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the IBM Spectrum Control or for cloning a particular environment.
Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned, or Incremental.
Define which FlashCopy rate best fits your requirement in terms of the performance and the amount of time to complete the FlashCopy.
Table 4-10 lists the relationship of the background copy rate value to the attempted number of grains to be split per second.
Table 4-10 Grain splits per second
User percentage | Data copied per second | 256 KB grains per second | 64 KB grains per second
1% - 10% | 128 KB | 0.5 | 2
11% - 20% | 256 KB | 1 | 4
21% - 30% | 512 KB | 2 | 8
31% - 40% | 1 MB | 4 | 16
41% - 50% | 2 MB | 8 | 32
51% - 60% | 4 MB | 16 | 64
61% - 70% | 8 MB | 32 | 128
71% - 80% | 16 MB | 64 | 256
81% - 90% | 32 MB | 128 | 512
91% - 100% | 64 MB | 256 | 1024
Define the grain size that you want to use. A grain is the unit of data that is represented by a single bit in Table 4-10. Larger grain sizes can cause a longer FlashCopy elapsed time, and a higher space usage in the FlashCopy target volume. Smaller grain sizes can have the opposite effect. The data structure and the source data location can modify those effects.
In your environment, check the results of your FlashCopy procedure in terms of the data that is copied at every run and the elapsed time, and compare them to the new IBM FlashSystem 9100 FlashCopy results. If necessary, adapt the grain size and the copy rate parameters to fit your environment’s requirements.
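As a hedged illustration of these choices, the following sketch creates and starts an incremental FlashCopy mapping with a 64 KB grain size and a background copy rate of 50 (2 MB per second in Table 4-10). The volume names and the mapping ID are placeholders:
mkfcmap -source vol_prod01 -target vol_prod01_copy -copyrate 50 -grainsize 64 -incremental
startfcmap -prep 0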
Metro Mirror and Global Mirror guidelines
IBM FlashSystem 9100 supports inter-cluster Metro Mirror and Global Mirror. Inter-cluster operation requires at least two clustered systems that are connected by several moderately high-bandwidth links.
Figure 4-11 shows a schematic of Metro Mirror connections.
Figure 4-11 Metro Mirror connections
Figure 4-11 also contains two redundant fabrics. Part of each fabric exists at the local clustered system and at the remote clustered system. No direct connection exists between the two fabrics.
Technologies for extending the distance between two IBM FlashSystem 9100 clustered systems can be broadly divided into two categories: FC extenders and SAN multiprotocol routers.
Because of the more complex interactions that are involved, IBM tests products of this class for interoperability with the IBM FlashSystem 9100. For more information about the supported SAN routers in the supported hardware list, see the IBM FlashSystem 9100 support website.
IBM tested several FC extenders and SAN router technologies with the IBM FlashSystem 9100. You must plan, install, and test FC extenders and SAN router technologies with the IBM FlashSystem 9100 so that the following requirements are met:
The round-trip latency between sites must not exceed 80 milliseconds (ms), 40 ms one way.
For Global Mirror, this limit enables a distance between the primary and secondary sites of up to 8000 km (4970.96 miles) by using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency.
The latency of long-distance links depends on the technology that is used to implement them. A point-to-point dark fiber-based link typically provides a round-trip latency of 1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies, which affects the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for IBM FlashSystem 9100 inter-cluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clustered systems.
The bandwidth between sites must be sized at least to meet the peak workload requirements, in addition to maintaining the maximum latency that was specified previously. You must evaluate the peak workload requirement by considering the average write workload over a period of 1 minute or less, plus the required synchronization copy bandwidth.
Determine the true bandwidth that is required for the link by considering the peak write bandwidth to volumes participating in Metro Mirror or Global Mirror relationships, and adding it to the peak synchronization copy bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single failures, you must size the link so that the bandwidth and latency statements allow the link to continue to function.
The configuration must be tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary site.
The configuration must be tested to confirm that any failover mechanisms in the inter-cluster links interoperate satisfactorily with the IBM FlashSystem 9100.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client. They are not part of the standard installation of the IBM FlashSystem 9100 by IBM. Make these measurements during installation, and record the measurements. Testing must be repeated after any significant changes to the equipment that provides the inter-cluster link.
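After the inter-cluster links are qualified, the remote copy configuration itself is straightforward. The following sketch, with placeholder system, volume, and relationship names and an assumed link bandwidth, shows how an FC-based partnership and a Global Mirror relationship might be created from the CLI (the partnership must be created on both systems):
mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 remote_FS9100
mkrcrelationship -master vol_prod01 -aux vol_dr01 -cluster remote_FS9100 -global -name gm_rel01
startrcrelationship gm_rel01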
Global Mirror guidelines
For Global Mirror, the following guidelines apply:
When IBM FlashSystem 9100 Global Mirror is used, all components in the SAN must sustain the workload that is generated by application hosts and the Global Mirror background copy workload. Otherwise, Global Mirror can automatically stop your relationships to protect your application hosts from increased response times. Therefore, it is important to configure each component correctly.
Use a SAN performance monitoring tool, such as IBM Spectrum Control, which enables you to continuously monitor the SAN components for error conditions and performance problems. This tool helps you detect potential issues before they affect your disaster recovery solution.
The long-distance link between the two clustered systems must be provisioned to provide for the peak application write workload to the Global Mirror source volumes, plus the client-defined level of background copy.
The peak application write workload ideally must be determined by analyzing the IBM FlashSystem 9100 performance statistics.
Statistics must be gathered over a typical application I/O workload cycle, which might be days, weeks, or months, depending on the environment on which the IBM FlashSystem 9100 is used. These statistics must be used to find the peak write workload that the link must support.
Characteristics of the link can change with use. For example, latency can increase as the link is used to carry an increased bandwidth. The user must be aware of the link’s behavior in such situations, and ensure that the link remains within the specified limits. If the characteristics are not known, testing must be performed to gain confidence of the link’s suitability.
Users of Global Mirror must consider how to optimize the performance of the long-distance link, which depends on the technology that is used to implement the link.
For example, when FC traffic is transmitted over an IP link, enabling jumbo frames might improve efficiency.
The use of Global Mirror and Metro Mirror between the same two clustered systems is supported.
The use of Global Mirror and Metro Mirror between the IBM FlashSystem 9100 clustered system and IBM Storwize systems with a minimum code level of 7.2 is supported. For more information about the code level matrix for support, see this IBM Support web page.
Although participating in a Global Mirror relationship is supported for cache-disabled volumes, it is not a preferred practice.
The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for most clients.
During SAN maintenance, the user must either reduce the application I/O workload for the duration of the maintenance (so that the degraded SAN components can manage the workload) or take one of the following actions:
 – Disable the gmlinktolerance feature.
 – Increase the gmlinktolerance value (meaning that application hosts might see extended response times from Global Mirror volumes).
 – Stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance that lasts n minutes, reset it to the normal value only n minutes after the end of the maintenance activity.
If gmlinktolerance is disabled during the maintenance, it must be reenabled after the maintenance is complete.
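The gmlinktolerance value is a system-wide setting. The following hedged sketch shows how it might be checked and temporarily raised for a maintenance window before it is returned to the default of 300 seconds:
lssystem
chsystem -gmlinktolerance 600
chsystem -gmlinktolerance 300
The lssystem output shows the current link tolerance value. The first chsystem command raises the value before the maintenance window; the second returns it to the default afterward, observing the delay that is described above. A value of 0 disables the feature.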
Global Mirror volumes must have their preferred nodes evenly distributed between the nodes of the clustered systems. Each volume within an I/O Group features a preferred node property that can be used to balance the I/O load between nodes in that group.
Figure 4-12 shows the correct relationship between volumes in a Metro Mirror or Global Mirror solution.
Figure 4-12 Correct volume relationship
The capabilities of the storage controllers at the secondary clustered system must be provisioned to provide for the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system to maximize the amount of I/O that applications can perform to Global Mirror volumes.
It is necessary to perform a complete review before Serial Advanced Technology Attachment (SATA) is used for Metro Mirror or Global Mirror secondary volumes. The use of a slower disk subsystem for the secondary volumes for high-performance primary volumes can mean that the IBM FlashSystem 9100 cache might not be able to buffer all of the writes, and flushing cache writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is required of them. Consider the following points:
 – Dedicate storage controllers to only Global Mirror volumes.
 – Configure the controller to ensure sufficient quality of service (QoS) for the disks that are used by Global Mirror.
 – Ensure that physical disks are not shared between Global Mirror volumes and other I/O (for example, by not splitting an individual RAID array).
MDisks in a Global Mirror storage pool must be similar in their characteristics; for example, RAID level, physical disk count, and disk speed. This requirement is true of all storage pools, but it is important to maintain performance when Global Mirror is used.
When a consistent relationship is stopped (for example, by a persistent I/O error on the intercluster link), the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship begins the process of synchronizing new data to the secondary disk.
While this synchronization is in progress, the relationship is in the inconsistent_copying state. Therefore, the Global Mirror secondary volume is not in a usable state until the copy completes and the relationship returns to a Consistent state.
For this reason, it is highly advisable to create a FlashCopy of the secondary volume before restarting the relationship. When started, the FlashCopy provides a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the Synchronized state (for example, if the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes.
If you plan to use a Fibre Channel over IP (FCIP) intercluster link, the pipe must be sized and designed correctly.
Example 4-2 shows a best-guess bandwidth sizing formula, assuming that the write and change rate is consistent.
Example 4-2 Bandwidth sizing estimate
Amount of write data within 24 hours, multiplied by 4 to allow for peaks
Translate into MBps to determine the WAN link that is needed
Example:
250 GB per day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86,400 secs
1,000,000,000,000 / 86,400 = approximately 12 MBps
Therefore, an OC3 link or higher is needed (155 Mbps or higher)
If compression is available on routers or wide area network (WAN) communication devices, smaller pipelines might be adequate. The workload likely is not evenly spread across 24 hours. If extended periods of high data change rates occur, consider suspending Global Mirror during that time.
If the network bandwidth is too small to handle the traffic, the application write I/O response times might be elongated. For the IBM FlashSystem 9100, Global Mirror must support short-term Peak Write bandwidth requirements.
You must also consider the initial sync and resync workload. The Global Mirror partnership’s background copy rate must be set to a value that is appropriate to the link and secondary back-end storage. The more bandwidth that you give to the sync and resync operation, the less workload can be delivered by the IBM FlashSystem 9100 for the regular data traffic.
Do not propose Global Mirror if the data change rate exceeds the communication bandwidth, or if the round-trip latency exceeds 80 - 120 ms. A greater than 80 ms round-trip latency requires Solution for Compliance in a Regulated Environment and request for price quotation (SCORE/RPQ) submission.
4.4.6 Call home option
IBM FlashSystem 9100 supports setting up a Simple Mail Transfer Protocol (SMTP) mail server for alerting the IBM Support Center of system incidents that might require a service event. This option is the call home option. You can enable this option during the setup.
 
Tip: Setting up call home involves providing a contact that is available 24 x 7 if a serious call home issue occurs. IBM support strives to report any issues to clients in a timely manner; having a valid contact is important to achieving service level agreements (SLAs).
Table 4-11 lists the required items for setting up the IBM FlashSystem 9100 call home function.
Table 4-11 IBM FlashSystem 9100 call home function settings
Configuration item                               Value
Primary Domain Name System (DNS) server
SMTP gateway address
SMTP gateway name
SMTP “From” address                              Example: FS9100_name@customer_domain.com
Optional: Customer email alert group name        Example: group_name@customer_domain.com
Network Time Protocol (NTP) manager
Time zone
Call Home and Remote Support Assistance information
Complete Table 4-12 for your facility so that the SSR can set up Call Home and Remote Support Assistance (RSA) contact information.
Table 4-12 Call Home and RSA contact information
Contact Information
  Contact name:
  Email address:
  Phone (Primary):
  Phone (Alternate):
  Machine location:
System location information
  Company name:
  Street address:
  City:
  State or province:
  Postal code:
  Country or region:
Proxy server IP addresses for remote support assistance
  IP address 1 / Port 1:
  IP address 2 / Port 2:
  IP address 3 / Port 3:
  IP address 4 / Port 4:
  IP address 5 / Port 5:
  IP address 6 / Port 6:
For more information about setting up the IBM FlashSystem 9100 control enclosure Call Home function, see Chapter 6, “Installing and configuring the IBM FlashSystem 9100 system” on page 165.
4.4.7 Remote Support Assistance
The IBM FlashSystem 9100 control enclosure supports the new RSA feature.
By using RSA, the customer can start a secure connection from the IBM FlashSystem 9100 to IBM, when problems occur. An IBM remote support specialist can then connect to the system to collect system logs, analyze the problem, and run repair actions remotely if possible, or assist the client or an IBM SSR who is on-site.
The RSA feature can also be used for remote code upgrades, in which the remote support specialist upgrades the code on the machine without the need to send an SSR on site.
 
Important: IBM encourages all customers to use the high-speed remote support solution that is enabled by RSA. Problem analysis and repair actions without a remote connection can become more complicated and time-consuming.
RSA uses a high-speed internet connection and gives the customer the ability to start an outbound Secure Shell (SSH) call to a secure IBM server. Firewall rules might need to be configured at the customer’s firewall to allow the IBM FlashSystem 9100 cluster and service IPs to establish a connection to the IBM Remote Support Center by way of SSH.
Note: The type of access that is required for a remote support connection is outbound port TCP/22 (SSH) from the IBM FlashSystem 9100 Cluster and Service IPs. For more information about the IBM IP addresses that are used for RSA, see the gray note box on page 89.
RSA consists of IBM FlashSystem 9100 internal functions with a set of globally deployed supporting servers. Together, they provide secure remote access to the IBM FlashSystem 9100 when necessary and when authorized by the customer’s personnel.
Figure 4-13 shows the overview of the RSA setup, which includes three major components.
Figure 4-13 IBM FlashSystem 9100 Remote Support Set Up
Remote Support Client
The Remote Support Client is a software component inside the IBM FlashSystem 9100 that handles remote support connectivity. It is on both node canisters of the IBM FlashSystem 9100 control enclosure. The software component relies on a single outgoing Transmission Control Protocol (TCP) connection only, and it cannot receive inbound connections of any kind.
The Remote Support Client is controlled by using the CLI or the GUI. The customer can control the connection by using the CLI to open or close it. They can also add a password that IBM must request before logging in by using the RSA link.
Remote Support Center Front Server
Front Servers are in an IBM Demilitarized Zone (DMZ) of the internet and receive connections from the Remote Support Client and the IBM Remote Support Center Back Server. Front Servers are security-hardened machines that provide a minimal set of services, such as maintaining connectivity to connected clients and to the Back Server.
They are strictly inbound and never start any process of their own accord. No sensitive information is ever stored on the Front Server, and all data that passes through the Front Server from the client to the Back Server is encrypted so that the Front Server cannot access this data.
 
Note: When activating Remote Support Assistant, the following four Front Servers are used by way of port TCP/22 (SSH):
204.146.30.139
129.33.206.139
204.146.30.157
129.33.207.37
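As a hedged illustration only, outbound reachability of these Front Servers on TCP port 22 can be spot-checked with netcat from a host in the same network zone as the cluster and service IPs. This check verifies only the firewall path from that host, not from the IBM FlashSystem 9100 itself:
#!/bin/sh
# Minimal sketch: confirm that outbound TCP/22 to the IBM Front Servers is not blocked
for ip in 204.146.30.139 129.33.206.139 204.146.30.157 129.33.207.37; do
    # -v verbose, -z scan only (no data sent), -w 5 five-second timeout
    nc -vz -w 5 "$ip" 22
done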
Remote Support Center Back Server
The Back Server manages most of the logic of the RSA system and is within the IBM intranet. The Back Server maintains connections to all Front Servers and is access-controlled. Only IBM employees who are authorized to perform remote support of the FlashSystem 9100 can use it. The Back Server is in charge of authenticating a support person.
It provides the support person with a user interface (UI) through which to choose a system to support based on the support person’s permissions. It also provides the list of systems that are currently connected to the Front Servers, and manages the remote support session as it progresses (logging it, allowing more support persons to join the session, and so on).
In addition, the IBM FlashSystem 9100 remote support solution can use the following two IBM internet support environments:
IBM Enhanced Customer Data Repository
If a remote connection exists, the IBM remote support specialists can offload the required support logs.
IBM Fix Central
IBM Fix Central provides fixes and updates for your system’s software, hardware, and operating system. The IBM FlashSystem 9100 control enclosure allows an IBM remote support specialist to perform software updates remotely. During this process, the IBM FlashSystem 9100 control enclosure automatically downloads the required software packages from IBM.
 
Note: To download software update packages, the following six IP addresses are used by way of outbound port TCP/22 (SSH) from the IBM FlashSystem 9100 control enclosure to Fix Central:
170.225.15.105
170.225.15.104
170.225.15.107
129.35.224.105
129.35.224.104
129.35.224.107
Firewall rules might need to be configured. Also, a DNS server must be configured to allow the download function to work.
For more information about manually downloading the code from IBM Fix Central, see this website.
 
Note: You must have available the machine type, origin, and serial number during the download process to validate the entitlement for software downloads.
Remote Support Proxy
Optionally, an application called Remote Support Proxy can be used when one or more IBM FlashSystem 9100 systems do not have direct access to the internet (for example, because of firewall restrictions). The Remote Support Client within the FlashSystem connects through this optional proxy server to the Remote Support Center Front Servers. The Remote Support Proxy runs as a service on a Linux system that has internet connectivity to the Remote Support Center and local network connectivity to the FlashSystem 9100.
Figure 4-14 shows the connection through the Remote Support Proxy.
Figure 4-14 Connections through the Remote Support Proxy.
The communication between the Remote Support Proxy and the Remote Support Center is encrypted with another layer of Secure Sockets Layer (SSL).
 
Note: The host that is running the Remote Support Proxy must have TCP/443 (SSL) outbound access to Remote Support Front Servers.
Remote Support Proxy software
The Remote Support Proxy is a small program that is supported on some Linux versions. The software is also used for other IBM Storage Systems, such as IBM XIV or FlashSystem A9000. The installation files and documentation are available at the storage portal website.
 
Note: At the time of this writing, Remote Support Proxy does not support a connection to IBM ECuRep for automatically uploading logs. Also, the software download from IBM Fix Central is not supported through the optional Remote Support proxy.
Before you configure remote support assistance, the proxy server must be installed and configured separately. During the setup process for support assistance, specify the IP address and the port number for the proxy server on the Remote Support Centers page.
For more information about the remote support proxy, see this IBM Support Fix Central web page.
Complete the following steps:
1. Open the main page of the IBM XIV software, from which the Remote Proxy code can be downloaded, as shown in Figure 4-15.
Figure 4-15 Locating the XIV Remote Support Proxy software
2. Select the IBM XIV Remote Support Proxy, as shown in Figure 4-15 on page 91.
3. Always use the latest Proxy Server package because it references the latest secure front-end servers at IBM, as shown in Figure 4-16.
Figure 4-16 Select the Latest Proxy Server package
The installation instructions are included in the User’s Guide.
 
Note: The common remote support proxy software package can be used for IBM FlashSystem 900, XIV, IBM FlashSystem A9000, IBM FlashSystem V9000, and IBM FlashSystem 9100.
4.5 IBM Storage Insights
IBM Storage Insights is an important part of monitoring and ensuring continued availability of the IBM FlashSystem 9100.
Available at no charge, cloud-based IBM Storage Insights provides a single dashboard that gives you a clear view of all your IBM block storage. You can make better decisions by seeing trends in performance and capacity.
Storage health information enables you to focus on areas that need attention. When IBM Support is needed, Storage Insights simplifies uploading logs, speeds resolution with online configuration data, and provides an overview of open tickets, all in one place.
IBM Storage Insights includes the following features:
A unified view of IBM systems:
 – Provides a single view of all of your systems’ characteristics.
 – Displays all of your IBM storage inventory.
 – Provides a live event feed so that you know up-to-the-second what is occurring with your storage and enables you to act quickly.
IBM Storage Insights collects telemetry data and call home data and provides up-to-the-second system reporting of capacity and performance.
Overall storage monitoring, which reviews the following components:
 – Overall health of the system
 – The configuration to see whether it meets known best practices
 – System resource management; that is, checks if the system is overly taxed and provides proactive recommendations to fix it
Storage Insights provides advanced customer service with an event filter that includes the following features:
 – The ability for you and IBM Support to view, open, and close support tickets, and track trends.
 – By using the auto-log collection capability, you can collect the logs and send them to IBM before IBM Support reviews the issue. This process can save as much as 50% of the time to resolve the case.
IBM Storage Insights Pro is available in addition to the free IBM Storage Insights. IBM Storage Insights Pro is a subscription service that provides longer historical views of data, more reporting and optimization options, and support for IBM file and block storage, and for EMC VNX and VMAX.
The IBM Storage Insights and IBM Storage Insights Pro products are compared in Figure 4-17.
Figure 4-17 IBM Storage Insights and IBM Storage Insights Pro comparison
4.5.1 Architecture, security, and data collection
Figure 4-18 shows the architecture of the IBM Storage Insights application, the products that are supported, and the three main teams that can benefit from the use of the tool.
Figure 4-18 IBM Storage Insights architecture
IBM Storage Insights provides a lightweight data collector that is deployed on a customer-supplied server. This server can be a Linux, Windows, or AIX server or a guest in a virtual machine (for example, a VMware guest).
The data collector streams performance, capacity, asset, and configuration metadata to your IBM Cloud instance.
The metadata flows in one direction: from your data center to IBM Cloud over HTTPS. In the IBM Cloud, your metadata is protected by physical, organizational, access, and security controls. IBM Storage Insights is ISO/IEC 27001 Information Security Management certified.
Figure 4-19 shows the data flow from systems to the IBM Storage Insights cloud.
Figure 4-19 Data flow from the storage systems to the IBM Storage Insights cloud
What metadata is collected
Metadata about the configuration and operations of storage resources is collected, including the following examples:
Name, model, firmware, and type of storage system.
Inventory and configuration metadata for the storage system’s resources, such as volumes, pools, disks, and ports.
Capacity values, such as capacity, unassigned space, used space, and the compression ratio.
Performance metrics, such as read and write data rates, I/O rates, and response times.
The application data that is stored on the storage systems cannot be accessed by the data collector.
Who can access the metadata
Access to the metadata that is collected is restricted to the following users:
The customer who owns the dashboard.
The administrators who are authorized to access the dashboard, such as the customer's operations team.
The IBM Cloud team that is responsible for the day-to-day operation and maintenance of IBM Cloud instances.
IBM Support for investigating and closing service tickets.
4.5.2 Customer dashboard
Figure 4-20 shows the IBM Storage Insights main dashboard and the systems that it is monitoring.
Figure 4-20 Storage Insights dashboard example
Other views and images of dashboard displays and drill downs can be found in the supporting documentation that is described next.
IBM Storage Insights information resources
For more information about IBM Storage Insights and to sign up and register for the free service, see the following resources:
Demonstration (login required)
Registration (login required)
4.6 IBM FlashSystem 9100 system configuration
To ensure proper performance and high availability in the IBM FlashSystem 9100 installations, consider the following guidelines when you design a SAN to support the IBM FlashSystem 9100:
All nodes in a clustered system must be on the same LAN segment, because any node in the clustered system must assume the clustered system management IP address. Ensure that the network configuration allows any of the nodes to use these IP addresses.
If you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
To maintain application uptime if an individual IBM FlashSystem 9100 node canister fails, the IBM FlashSystem 9100 control enclosure houses two node canisters to form one I/O Group. If a node canister fails or is removed from the configuration, the remaining node canister operates in a degraded mode, but the configuration is still valid for the I/O Group.
 
Important: IBM FlashSystem 9100 V8.2 release includes the HyperSwap function, which allows each volume to be presented by two I/O groups. If you plan to use this function, you must consider the I/O Group assignments in the planning for the IBM FlashSystem 9100.
The FC SAN connections between the IBM FlashSystem 9100 control enclosures are optical fiber. Although these connections can run at 8 or 16 Gbps (depending on your switch hardware), 16 Gbps is recommended to ensure best performance.
Direct connections between the IBM FlashSystem 9100 control enclosures and hosts are supported, with some exceptions.
Direct connection of IBM FlashSystem 9100 control enclosures and external storage subsystems are not supported.
 
Two IBM FlashSystem 9100 clustered systems cannot have access to the same external virtualized storage LUNs within a disk subsystem.
 
Attention: Configuring zoning so that two IBM FlashSystem 9100 clustered systems have access to the same external LUNs (MDisks) can result in data corruption.
Storage pool and MDisk
The storage pool, which is a managed disk group (mdiskgrp), is at the center of the relationship between the MDisks and the volumes (VDisks). It acts as a container into which MDisks contribute chunks of physical capacity, which are known as extents, and from which volumes are created.
With the IBM FlashSystem 9100, we now have a new type of storage pool: the Data Reduction Pool (DRP). The DRP allows the users to define volumes that use the new data reduction and de-duplication capabilities, which were introduced in Spectrum Virtualize software.
The complete complement of storage pool types is available:
Regular, non-thin-provisioned pools
Thin-provisioned pools
DRP thin-provisioned compressed pools
The DRPs also can support de-duplication. This support is enabled at a volume level when the volumes are created within a pool. For more information about DRP and deduplication, see Chapter 3, “Data reduction and tools” on page 23.
 
Important: On IBM FlashSystem 9100, you want to use fully allocated volumes, DRPs with compression and no deduplication, or DRPs with compression and deduplication.
After the system set-up process is complete, you must configure storage by creating pools and assigning storage to specific pools. Ensure that a pool or pools are created before assigning storage.
In the management GUI, select Pools → Actions → Add Storage. The Add Storage wizard automatically configures drives into arrays. Use the lsarrayrecommendation command to display the system recommendations for configuring an array.
For the greatest control and flexibility, you can use the mkarray command-line interface (CLI) command to configure a nondistributed array on your system. However, the recommended configuration for the IBM FlashSystem 9100 is a distributed array (DRAID6), which is created by using the mkdistributedarray command, as shown in the sketch that follows.
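The following CLI sketch shows how such an array might be created. The pool name (Pool0), drive class ID (0), and drive count (24) are illustrative values; verify the command parameters against the command reference for your code level:
# List the drive classes in the system to find the class ID of the NVMe drives
lsdriveclass
# Display the system recommendation for an array of 24 drives of class 0 in Pool0
lsarrayrecommendation -driveclass 0 -drivecount 24 Pool0
# Create a distributed RAID 6 array from those drives and add it to Pool0
mkdistributedarray -level raid6 -driveclass 0 -drivecount 24 Pool0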
 
Note: DRAID6 arrays give better performance and rebuild times if an FCM or NVMe drive fails because the spare capacity that is allocated for the rebuild is shared across all of the drives in the system. That capacity is not reserved to one physical drive as it is in the traditional RAID arrays.
MDisks are also created for each LUN on attached external storage that is assigned to the IBM FlashSystem 9100, either as a managed MDisk or as an unmanaged MDisk for migrating data. A managed MDisk is an MDisk that is assigned as a member of a storage pool. Consider the following points:
A storage pool is a collection of MDisks. An MDisk can be contained within a single storage pool only.
IBM FlashSystem 9100 can support up to 1,024 storage pools.
The limit on the number of volumes that can be allocated per system is 10,000.
Volumes are associated with a single storage pool, except in cases where a volume is being migrated or mirrored between storage pools.
 
Information: For more information about IBM FlashSystem 9100 configuration limits, see this IBM Support web page.
Extent size
Each MDisk is divided into chunks of equal size, which are called extents. Extents are a unit of mapping that provides the logical connection between MDisks and volume copies.
The extent size is a property of the storage pool and is set when the storage pool is created. All MDisks in the storage pool have the same extent size, and all volumes that are allocated from the storage pool have the same extent size. The extent size of a storage pool cannot be changed. If you want another extent size, the storage pool must be deleted and a new storage pool configured.
The IBM FlashSystem 9100 supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 MB. By default, the MDisks that are created for the internal flash storage in the IBM FlashSystem 9100 use an extent size of 1024 MB.
To use a value that differs from the default, you must use CLI commands to delete the storage pool and re-create it with the required extent size, as shown in the sketch that follows. For more information about the use of the CLI commands, see IBM Knowledge Center.
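As a hedged sketch (the pool name and the 2048 MB extent size are illustrative values, and the pool must be empty), re-creating a pool with a non-default extent size might look like the following commands:
# Remove the empty storage pool (the command fails if the pool still contains MDisks or volumes)
rmmdiskgrp Pool1
# Re-create the pool with a 2048 MB extent size instead of the 1024 MB default
mkmdiskgrp -name Pool1 -ext 2048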
Table 4-13 lists all of the extent sizes that are available in an IBM FlashSystem 9100.
Table 4-13 Extent size and maximum clustered system capacities
Extent size     Maximum clustered system capacity
16 MB           64 TB
32 MB           128 TB
64 MB           256 TB
128 MB          512 TB
256 MB          1 petabyte (PB)
512 MB          2 PB
1,024 MB        4 PB
2,048 MB        8 PB
4,096 MB        16 PB
8,192 MB        32 PB
 
Table 4-13 lists only the extent sizes versus the maximum clustered system capacity when up to four IBM FlashSystem 9100 systems are clustered together.
For more information about the extents in regular, thin, and compressed pools, and the latest information about the IBM FlashSystem 9100 configuration limits, see this IBM Support web page.
When planning storage pool layout, consider the following points:
Pool extent size:
 – Generally, use 1 GB or higher. Consider the following points:
 • If DRP is used, 4 GB must be used.
 • If DRP is not used, 1 GB can be used.
 
Note: 4 GB is the default now.
 – For more information about all of the values and rules for extents, see this IBM Support web page.
 – Choose the extent size and then use that size for all storage pools.
 – You cannot migrate volumes between storage pools with different extent sizes.
However, you can use volume mirroring to create copies between storage pools with different extent sizes.
Storage pool reliability, availability, and serviceability (RAS) considerations:
 – The number and size of storage pools affects system availability.
 – The use of a larger number of smaller pools reduces the failure domain if one of the pools goes offline. However, more storage pools introduce management overhead, affect storage space use efficiency, and are subject to the configuration maximum limit.
 – An alternative approach is to create a few large storage pools. All MDisks that constitute each of the pools must have the same performance characteristics.
 – The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it. Do not put MDisks into a storage pool until they are needed.
 – Put image mode volumes in a dedicated storage pool or pools.
Storage pool performance considerations:
 – It might make sense to create multiple storage pools if you are attempting to isolate workloads to separate disk drives.
 – Create storage pools out of MDisks with similar performance. This technique is the only way to ensure consistent performance characteristics of volumes that are created from the pool.
This rule does not apply when you intentionally place MDisks from different storage tiers in the pool with the intent to use Easy Tier to dynamically manage workload placement on drives with appropriate performance characteristics.
4.6.1 Volume considerations
An individual volume is a member of one storage pool and one I/O Group. Consider the following points:
The storage pool defines which MDisks that are provided by the disk subsystem make up the volume.
The I/O Group (two node canisters make up an I/O Group) defines which IBM FlashSystem 9100 nodes provide I/O access to the volume. In a single enclosure, the IBM FlashSystem 9100 features only one I/O Group.
 
Important: No fixed relationship exists between I/O Groups and storage pools.
Perform volume allocation based on the following considerations:
Optimize performance between the hosts and the IBM FlashSystem 9100 by attempting to distribute volumes evenly across available I/O Groups and nodes in the clustered system.
Reach the level of performance, reliability, and capacity that you require by using the storage pool that corresponds to your needs (you can access any storage pool from any node). Choose the storage pool that fulfills the demands for your volumes regarding performance, reliability, and capacity.
I/O Group considerations:
 – With the IBM FlashSystem 9100, each control enclosure that is connected into the cluster is another I/O Group for that clustered IBM FlashSystem 9100 system.
 – When you create a volume, it is associated with one node of an I/O Group. By default, whenever you create a volume, it is associated with the next node by using a round-robin algorithm. You can specify a preferred access node, which is the node through which you send I/O to the volume, rather than using the round-robin algorithm. A volume is defined for an I/O Group.
 – Even if you have eight paths for each volume, all I/O traffic flows toward only one node (the preferred node). Therefore, only four paths are used by the IBM Subsystem Device Driver (SDD). The other four paths are used only if the preferred node fails or when concurrent code upgrade is running.
Thin-provisioned volume considerations:
 – When creating the thin-provisioned volume, be sure to understand the use patterns of the applications or group users that are accessing this volume. You must consider items, such as the size of the data, rate of creation of data, and modifying or deleting existing data.
 – The following operating modes for thin-provisioned volumes are available:
 • Autoexpand volumes allocate storage from a storage pool on demand with minimal required user intervention. However, a misbehaving application can cause a volume to expand until it uses all of the storage in a storage pool.
 • Non-autoexpand volumes feature a fixed amount of assigned storage. In this case, the user must monitor the volume and assign more capacity when required. A misbehaving application can cause only the volume that it uses to fill up.
 – Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline (through a lack of available physical storage for autoexpand, or because a volume that is marked as non-expand was not expanded in time), a danger exists of data being left in the cache until storage is made available. Although this situation is not a data integrity or data loss issue, you must not rely on the IBM FlashSystem 9100 cache as a backup storage mechanism.
 
Important: Consider the following points:
Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity.
Warnings must not be ignored by an administrator.
Use the autoexpand feature of the thin-provisioned volumes.
 – When you create a thin-provisioned volume, you can choose the grain size for allocating space in 32 KB, 64 KB, 128 KB, or 256 KB chunks. The grain size that you select affects the maximum virtual capacity for the thin-provisioned volume. The default grain size is 256 KB, which is the preferred option. If you select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The grain size cannot be changed after the thin-provisioned volume is created. (A CLI sketch that shows these parameters follows this list.)
 – Generally, smaller grain sizes save space but require more metadata access, which can adversely affect performance. If you are not using the thin-provisioned volume as a FlashCopy source or target volume, use 256 KB to maximize performance. If you are using the thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function.
 – Thin-provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a thin-provisioned volume requires approximately one directory I/O for every user I/O.
 – The directory is two-way write-back-cached (as with the IBM FlashSystem V9000 fast write cache); therefore, certain applications perform better.
 – Thin-provisioned volumes require more processor processing; therefore, the performance per I/O Group can also be reduced.
 – A thin-provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when a fully allocated volume is converted to a thin-provisioned volume by using volume mirroring.
Volume mirroring guidelines:
 – With the IBM FlashSystem 9100 system in a high-performance environment, this capability is possible with a scale up or scale out solution only. If you are considering volume mirroring for data redundancy, a second control enclosure with its own storage pool is needed for the mirror to be on.
 – Create or identify two separate storage pools to allocate space for your mirrored volume.
 – If performance is a concern, use a storage pool with MDisks that share characteristics. Otherwise, the mirrored pair can be on external virtualized storage with lesser-performing MDisks.
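To illustrate the thin-provisioning options that are described in this list (grain size, autoexpand, real capacity, and warning level), the following hedged CLI sketch creates a thin-provisioned volume. The volume name, pool name, I/O group, and sizes are illustrative values only:
# Create a 100 GB thin-provisioned volume in Pool0, owned by I/O group 0, with
# 2% initial real capacity, autoexpand, a 256 KB grain size, and a warning at 80%
mkvdisk -name thin_vol01 -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb \
    -rsize 2% -autoexpand -grainsize 256 -warning 80%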
Data Reduction Pool volumes
When configuring DRP-based volumes, special considerations must be followed. For more information, see Chapter 3, “Data reduction and tools” on page 23.
4.6.2 IBM Easy Tier
IBM Easy Tier is a function that automatically and non-disruptively moves frequently accessed data from various types of MDisks to flash drive MDisks. This process places such data in a faster tier of storage. Easy Tier supports four tiers of storage.
The IBM FlashSystem 9100 supports the following tiers:
Tier 0 flash: Specifies a tier0_flash IBM Flash Core Module drive or an external MDisk for the newly discovered or external volume.
Tier 1 flash: Specifies a tier1_flash (flash SSD) drive for the newly discovered or external volume.
Enterprise tier: Enterprise tier exists when the pool contains enterprise-class MDisks, which are disk drives that are optimized for performance.
Nearline tier: Nearline tier exists when the pool contains nearline-class MDisks, which are disk drives that are optimized for capacity.
 
Note: In the IBM FlashSystem 9100, these Enterprise or Nearline drives are in external arrays. All managed arrays on the IBM FlashSystem 9100 system contain NVMe class drives in the IBM FlashSystem 9100 control enclosures or SSD class drives in the SAS expansion enclosures.
All MDisks belong to one of the tiers, which includes MDisks that are not yet part of a pool.
If the IBM FlashSystem 9100 control enclosure storage is used in an Easy Tier pool and encryption is enabled on the pool, the node canisters send encrypted, and therefore incompressible, data to the NVMe drives. IBM Spectrum Virtualize software detects whether an MDisk is encrypted by the FlashSystem 9100. Therefore, if an IBM FlashSystem 9100 control enclosure is part of an encrypted Easy Tier pool, encryption must be enabled on the IBM FlashSystem 9100 before it is enabled in the Easy Tier pool.
IBM Spectrum Virtualize does not attempt to encrypt data in an array that is encrypted. This feature allows the hardware compression of the IBM FlashSystem 9100 to be effective if the FCM type NVMe drives are used.
 
However, cases exist in which the use of IBM FlashSystem 9100 software compression is preferred, such as when highly compressible data exists (for example, 3:1 or higher). In these cases, encryption and compression can be done by the IBM FlashSystem 9100 node canisters.
For more information about Easy Tier, see Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1, SG24-7933, and IBM Knowledge Center.
Storage pools have an Easy Tier setting that controls how Easy Tier operates. The setting can be viewed through the management GUI, but can be changed by the CLI only.
By default, the storage pool setting for Easy Tier is set to Auto (Active). In this state, storage pools with all managed disks of a single tier have Easy Tier status of Balanced.
If a storage pool includes managed disks of multiple tiers, the Easy Tier status is changed to Active. The use of the chmdiskgrp -easytier off 1 command sets the Easy Tier status for storage pool 1 to Inactive. The use of the chmdiskgrp -easytier measure 2 command sets the Easy Tier status for storage pool 2 to Measured.
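The following hedged CLI sketch shows these commands together with a check of the resulting setting (the pool IDs 1 and 2 are the same illustrative values that are used in the text):
# Turn Easy Tier off for storage pool 1 (status becomes Inactive)
chmdiskgrp -easytier off 1
# Put storage pool 2 into measuring mode (status becomes Measured)
chmdiskgrp -easytier measure 2
# Verify the easy_tier setting and easy_tier_status of each pool
lsmdiskgrp 1
lsmdiskgrp 2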
Figure 4-21 shows the four possible Easy Tier states.
Figure 4-21 Easy Tier status for CLI and GUI
Easy Tier evaluation mode
Easy Tier evaluation mode is enabled for a storage pool with a single tier of storage when the status is changed by using the command line to Measured. In this state, Easy Tier collects usage statistics for all the volumes in the pool. These statistics are collected over a 24-hour operational cycle, so you must wait several days to have multiple files to analyze. The statistics are copied from the control enclosures and viewed with the IBM Storage Tier Advisor Tool.
For more information about downloading and the use of the tool, see this website.
This tool is intended to supplement and support (but not replace) detailed preinstallation sizing and planning analysis.
Easy Tier considerations
When a volume is created in a pool that has Easy Tier active, the volume extents are initially allocated only from the Enterprise tier. If that tier is not present or all the extents are used, the volume is assigned extents from other tiers.
To ensure optimal performance, all MDisks in a storage pool tier must have the same technology and performance characteristics.
Easy Tier functions best for workloads that have hot spots of data. Synthetic random workloads across an entire tier are not a good fit for this function. Also, do not allocate all the space in the storage pool to volumes. Leave some capacity free on the fastest tier for Easy Tier to use for migration.
For more information about the Easy Tier considerations and recommendations, see this website.
4.6.3 SAN boot support
The IBM FlashSystem 9100 supports SAN boot or startup for IBM AIX, Microsoft Windows Server, and other operating systems. Because SAN boot support can change, see this IBM SSIC web page.
4.7 Licensing and features
In this section, we describe base products licenses and feature licensing.
4.7.1 IBM FlashSystem 9100 products licenses
The following IBM FlashSystem 9100 base products licenses are available:
IBM FlashSystem 9110 Base Model AF7 - PID 5639-FA2
IBM FlashSystem 9150 Base Model AF8 - PID 5639-FA3
The following functions and features are included in the base IBM FlashSystem 9100 products licenses:
Enclosure Virtualization
Thin Provisioning
FlashCopy
Encryption
Easy Tier
DRP Compression
4.7.2 SAS Expansion Enclosures
Each SAS expansion enclosure requires the IBM FlashSystem 9100 Expansion Enclosure Base Model AFF - PID 5639-FA1 license.
 
Note: Each IBM FlashSystem 9100 Expansion Enclosure Base Model A9F requires a quantity of four licenses per enclosure. The IBM FlashSystem 9100 Expansion Enclosure Base Model AFF requires only a quantity of one license per enclosure.
4.7.3 Externally virtualized expansion enclosures or external arrays
Each externally virtualized expansion enclosure or storage array requires one of the following licenses:
Spectrum Virtualize for SAN Volume Controller - PID 5641-VC8
IBM Virtual Storage Center (VSC) - PID 5648-AE1
 
Note: IBM FlashSystem 9100 internal enclosures are licensed to the hardware serial number. IBM Spectrum Virtualize software and VSC packages are perpetual and not tied to any hardware serial number.
In addition to one of these licenses, the capacity of each enclosure or array has a storage capacity unit (SCU) value applied. An SCU is measured by the category of usable capacity that is virtualized and managed. The following categories are available:
Category 1 (Flash and SSD): 1 SCU = 1 TiB, or 1 TiB = 1.0 SCU
Category 2 (Serial Attached SCSI (SAS), Fibre Channel, and systems that use Category 3 drives with advanced architectures; for example, XIV or Infinidat): 1 SCU = 1.18 TiB, or 1 TiB = 0.847 SCU
Category 3 (NL-SAS and SATA): 1 SCU = 4 TiB, or 1 TiB = 0.25 SCU
 
Note: Calculations are rounded up to the nearest whole number in each category.
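As a hedged example of the arithmetic only (the capacity values are illustrative and this is not an IBM licensing tool), the following shell sketch converts usable TiB per category into SCUs, rounding each category up to the nearest whole number:
#!/bin/sh
# Minimal sketch: convert usable TiB per drive category into SCUs
FLASH_TIB=50     # Category 1 capacity in TiB (illustrative)
SAS_TIB=120      # Category 2 capacity in TiB (illustrative)
NLSAS_TIB=400    # Category 3 capacity in TiB (illustrative)
awk -v c1="$FLASH_TIB" -v c2="$SAS_TIB" -v c3="$NLSAS_TIB" '
function ceil(x) { return (x == int(x)) ? x : int(x) + 1 }
BEGIN {
    s1 = ceil(c1 / 1.0); s2 = ceil(c2 / 1.18); s3 = ceil(c3 / 4.0)
    printf "Category 1 (flash/SSD): %d SCU\n", s1
    printf "Category 2 (SAS/FC):    %d SCU\n", s2
    printf "Category 3 (NL-SAS):    %d SCU\n", s3
    printf "Total:                  %d SCU\n", s1 + s2 + s3
}'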
Next, we describe other license feature codes that might be required.
4.7.4 Encryption
The IBM FlashSystem 9100 Encryption feature is offered with the IBM FlashSystem 9100 under the following features:
Feature code ACE7 - Encryption Enablement Pack
This feature enables data encryption at rest on the MDisks that are assigned to the IBM FlashSystem 9100 control enclosure. USB flash drives (feature ACEA) or IBM Security Key Lifecycle Manager (SKLM) are required for encryption key management.
Only one of these features is needed per IBM FlashSystem 9100 cluster.
This feature enables the encryption function. A single instance of this feature enables the function on the entire IBM FlashSystem 9100 system (IBM FlashSystem 9100 control enclosure and all attached IBM FlashSystem 9100 expansion enclosures) and on externally virtualized storage subsystems.
Feature code ACEA - Encryption USB Flash Drives (Four Pack)
This feature provides four USB flash drives for storing the encryption master access key.
Unless IBM Security Key Lifecycle Manager (SKLM) is used for encryption key management, a total of three USB flash drives is required per IBM FlashSystem 9100 cluster when encryption is enabled in the cluster, regardless of the number of systems in the cluster. If encryption is used in a cluster, this feature is ordered on one IBM FlashSystem 9100 system, which results in a shipment of four USB flash drives.
You must have three USB keys available to store the master key when you enable encryption. These keys are plugged into active nodes in your cluster. To start the system, you must have one working USB key plugged into one working canister in the system. Therefore, you must have three copies of the encryption master key before you are allowed to use encryption.
The following methods can be used to install the encryption feature on the IBM FlashSystem 9100:
USB Keys on each of the control enclosures
IBM Security Key Lifecycle Manager (SKLM)
You can use one or both methods to install encryption. The use of the USB and SKLM methods together gives the most flexible availability of the encryption enablement.
 
Note: Either method requires, as a minimum, the purchase of Feature code ACE7 - Encryption Enablement Pack.
USB Keys
This feature supplies four USB keys to store the encryption key when the feature is enabled and installed. If necessary, a rekey operation can also be performed. When the USB key encryption feature is installed, the IBM FlashSystem 9100 GUI is used for each control enclosure that has the encryption feature installed. The USB keys must be installed in the USB ports in the rear of the node canisters.
Figure 4-22 is a rear view of the location of USB ports on the IBM FlashSystem 9100 node canisters.
Figure 4-22 Location of USB ports
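As a hedged illustration of the CLI side of this process (the chencryption and lsencryption commands exist in the IBM Spectrum Virtualize CLI, but the exact options should be verified against the command reference for your code level), enabling USB-based encryption and preparing a new master key might look like the following sketch:
# Check the current encryption state of the system
lsencryption
# Enable the USB flash drive encryption key provider
chencryption -usb enable
# Prepare a new master key on the installed USB flash drives, then commit it
chencryption -usb newkey -key prepare
chencryption -usb newkey -key commit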
IBM Security Key Lifecycle Manager
IBM FlashSystem 9100 Software V8.2 adds improved security with support for encryption key management software that complies with the Key Management Interoperability Protocol (KMIP) standards, such as IBM Security Key Lifecycle Manager (SKLM), to help centralize, simplify, and automate the encryption key management process.
Before IBM FlashSystem 9100 Software V8.2, you enabled encryption by using USB flash drives to copy the encryption key to the system.
 
Note: If you are creating a cluster with V8.2, you can use USB encryption, key server encryption, or both. The USB flash drive method and key server method can be used in parallel on the same system. Clients that use USB encryption can move to key server encryption. The migration of a local (USB) key to a centrally managed key (SKLM key server) is also available, and is a concurrent operation.
Encryption summary
Encryption can occur at the hardware or software level.
Hardware Encryption at the IBM FlashSystem 9100
IBM FlashSystem 9100 supports hot encryption activation when encryption is enabled in the control enclosure. With hot encryption activation, you can enable encryption on a flash array without having to remove the data. Enabling encryption in this way is a non-destructive process.
Hardware encryption is the preferred method for IBM FlashSystem 9100 enclosures because this method works with the hardware compression that is built in to the Flash Core Modules of the IBM FlashSystem 9100 storage enclosure.
 
Software Encryption at the IBM FlashSystem 9100
Software encryption is used with other storage that does not support its own hardware encryption. For more information about encryption technologies that are supported by other IBM storage devices, see the IBM DS8880 Data-at-rest Encryption, REDP-4500.
4.7.5 Compression
The following methods are available to compress data on the IBM FlashSystem 9100, depending on the type of storage that is installed in the control enclosure and attached to the system:
NVMe FCM In-line Hardware Compression
Data Reduction Pool (DRP) Compression
The IBM FlashSystem 9100 software does not support Real-time Compression (RtC) type compressed volumes. If users want to use these established volumes on the IBM FlashSystem 9100, they must migrate them to the new DRP model by using volume mirroring to clone the data to a new DRP. DRP pools no longer support the older migrate commands.
 
Important: On IBM FlashSystem 9100, you want to use fully allocated volumes, DRPs with compression and no deduplication, or DRPs with compression and deduplication.
Figure 4-23 shows the method that traditional volumes and those volumes that are compressed under the RtC process use to migrate to the new DRPs model.
Figure 4-23 RtC to DRP volume migration
IBM FlashSystem 9100 enclosures that use IBM Flash Core Modules (FCMs) have inline hardware compression that is always on. The best usable-to-maximum-effective capacity ratio depends on the FCM capacity.
Some workloads that do not demand the lowest latency and have a good compression rate can be candidates for the use of software-based compression in a DRP. For more information, see Chapter 3, “Data reduction and tools” on page 23.
IBM FlashSystem 9100 enclosures that use industry-standard NVMe drives do not include built-in hardware compression. Therefore, they must rely on the use of DRPs to provide a level of data reduction, if required.
The user can also opt for standard pools and fully allocated volumes and then use the FCM built-in hardware compression to provide a level of data reduction, depending on the data patterns that are stored.
DRP software compression
The IBM FlashSystem 9100 DRP software compression uses extra hardware that is dedicated to the improvement of the compression functionality. This hardware is built in on the node canister motherboard.
No separate PCIe-type compression cards are used, as was the case on previous products. These accelerators work with the DRP software within the control enclosure for the I/O Group to support compressed volumes.
 
Important: On IBM FlashSystem 9100, you want to use fully allocated volumes, DRPs with compression and no deduplication, or DRPs with compression and deduplication.
Inline hardware compression
The IBM FlashSystem 9100 FCM type drives, if installed, have inline hardware compression as part of their architecture. The industry-standard NVMe drives rely on software with hardware-assisted compression or the use of DRPs. This type of FCM compression is always on and cannot be switched off. For more information about compression, its architecture, and operation, see Chapter 2, “IBM FlashSystem 9100 architecture” on page 11.
Data reduction at two levels
Solutions can be created in which data reduction technologies are applied at both the storage and virtualization appliance levels.
It is important to understand which of these options makes the most sense to ensure that performance is not affected and space is used in the best way possible.
Table 4-14 lists known best practices when IBM FlashSystem 9100 and other external storage are used.
Table 4-14 IBM FlashSystem 9100 and external storage best practices
Front end: IBM FlashSystem 9100 - DRP above simple RAID
External storage: Storwize 5000 or any other fully allocated volumes
Recommendation: Yes. Consider the following points:
Use DRP at the top level to plan for deduplication and snapshot optimizations.
DRP at the top level provides the best application capacity reporting (volume written capacity).
Always use compression in DRP to get the best performance.
Bottlenecks in compression performance come from metadata overheads, not compression processing.
Front end: IBM FlashSystem 9100 - Fully Allocated
External storage: IBM FlashSystem A9000
Recommendation: Use with care. Consider the following points:
Track physical capacity use carefully to avoid out-of-space conditions.
SAN Volume Controller can report physical use, but does not manage to avoid out-of-space.
No visibility of each application’s use at the SAN Volume Controller layer.
If an actual out-of-space condition happens, limited ability is available to recover. Consider creating a sacrificial emergency space volume.
Front end: IBM FlashSystem 9100 - Fully Allocated above a multitier data-reducing back end
External storage: IBM FlashSystem A9000 and IBM Storwize 5000 with DRP
Recommendation: Use with great care. Consider the following points:
Easy Tier is unaware of the physical capacity in the tiers of a hybrid pool.
Easy Tier tends to fill the top tier with the hottest data.
Changes in the compressibility of data in the top tier can overcommit the storage, leading to out-of-space.
Front end: IBM FlashSystem 9100 - DRP above a data-reducing back end
External storage: IBM FlashSystem 900 AE3
Recommendation: Yes. Consider the following points:
Assume 1:1 compression in the back-end storage. Do not overcommit.
Small extra savings are realized from compressing metadata.
Front end: IBM FlashSystem 9100 - DRP and Fully Allocated above a data-reducing back end
External storage: IBM FlashSystem 900 AE3
Recommendation: Use with great care. Consider the following points:
It is difficult to measure the physical capacity use of the fully allocated volumes.
The temptation is to use the capacity savings, which might overcommit the back end.
DRP garbage collection acts as though fully allocated volumes are 100% used.
Front end: IBM FlashSystem 9100 - DRP
External storage: IBM V7000 or other DRP
Recommendation: No - Avoid. Consider the following points:
Creates two levels of I/O amplification on metadata.
Creates two levels of capacity overhead.
DRP at the bottom layer provides no benefit.
For more information about compressed volumes, see IBM Knowledge Center.
4.8 IBM FlashSystem 9100 configuration backup procedure
Configuration backup is the process of extracting configuration settings from a clustered system and writing them to disk. The configuration restore process uses backup configuration data files for the system to restore a specific system configuration. Restoring the system configuration is an important part of a complete backup and disaster recovery solution.
Only the data that describes the system configuration is backed up. You must back up your application data by using the appropriate backup methods.
To enable routine maintenance, the configuration settings for each system are stored on each node. If power fails on a system or if a node in a system is replaced, the system configuration settings are automatically restored when the repaired node is added to the system.
To restore the system configuration in a disaster (if all nodes in a system are lost simultaneously), plan to back up the system configuration settings to tertiary storage. You can use the configuration backup functions to back up the system configuration. The preferred practice is to implement an automatic configuration backup by applying the configuration backup command.
The virtualization map is stored on the quorum disks of external MDisks. The map is accessible to every IBM FlashSystem 9100 control enclosure.
For complete disaster recovery, regularly back up the business data that is stored on volumes at the application server level or the host level.
Before making major changes to the IBM FlashSystem 9100 configuration, be sure to save the configuration of the system. By saving the current configuration, you create a backup of the licenses that are installed on the system. This backup can assist you in restoring the system configuration. You can save the configuration by using the svcconfig backup command.
Complete the following steps to create a backup of the IBM FlashSystem 9100 configuration file and to copy the file to another system:
1. Log in to the cluster IP by using an SSH client and back up the IBM FlashSystem 9100 configuration. Example 4-3 shows the output of the svcconfig backup command.
Example 4-3 Output of the svcconfig backup command
superuser> svcconfig backup
...............................................................
CMMVC6155I SVCCONFIG processing completed successfully
2. Copy the configuration backup file from the system. By using secure copy, copy the following file from the system and store it, as shown in the following example:
/tmp/svc.config.backup.xml
For example, use pscp.exe, which is part of the PuTTY commands family. Example 4-4 shows the output of the pscp.exe command.
Example 4-4 Using pscp.exe
pscp.exe superuser@<cluster_ip>:/tmp/svc.config.backup.xml .
superuser@<cluster_ip> password:
svc.config.backup.xml | 163 kB | 163.1 kB/s | ETA: 00:00:00 | 100%
This process also must be completed on any external storage in the IBM FlashSystem 9100 cluster. If you have the IBM FlashSystem 900 AE3 as external storage, you must log in to each of the AE3 cluster IP addresses by using an SSH client and run the svcconfig backup command on each of the FlashSystem AE3 attached storage enclosures. The same process applies to any IBM Storwize system that is used as external storage on the cluster.
 
Note: This process saves only the configuration of the IBM FlashSystem 9100 system. User data must be backed up by using normal system backup processes.
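As a hedged example (the cluster IP, user ID, and target directory are illustrative values, and the scheduling itself runs on a management host, not on the system), the configuration backup can be automated with SSH and scp, as shown in the following sketch:
#!/bin/sh
# Minimal sketch: run a configuration backup on the cluster and copy the
# resulting file to a local, date-stamped copy on the management host.
CLUSTER_IP=192.0.2.10            # replace with your cluster management IP
BACKUP_DIR=/var/backups/fs9100   # replace with your local backup directory
ssh superuser@"$CLUSTER_IP" svcconfig backup
scp superuser@"$CLUSTER_IP":/tmp/svc.config.backup.xml \
    "$BACKUP_DIR/svc.config.backup.$(date +%Y%m%d).xml"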
4.9 Multi-Cloud offerings and solutions
The IBM FlashSystem 9100 includes software that can help you start to develop a multi-cloud strategy if your storage environment includes cloud services, whether public, private, or hybrid cloud.
The IBM FlashSystem 9100 offers a series of multi-cloud software options. A set of base software options is provided with the system purchase. You can explore the integration of the FlashSystem 9100 with the following cloud-based software offerings:
IBM Spectrum Protect Plus Multi-Cloud starter for FlashSystem 9100
IBM Spectrum Copy Data Management Multi-Cloud starter for FlashSystem 9100
IBM Spectrum Virtualize for Public Cloud Multi-Cloud starter for FlashSystem 9100
In addition, IBM offers a set of integrated software solutions that are associated with the IBM FlashSystem 9100. These multi-cloud solutions are provided as optional software packages that are available with the FlashSystem 9100. Each of the following software solutions includes all the software that is needed to construct the solution and an IBM-tested blueprint that describes how to construct the solution:
IBM FlashSystem 9100 Multi-Cloud Solution for Data Reuse, Protection, and Efficiency
IBM FlashSystem 9100 Multi-Cloud Solution for Business Continuity and Data
IBM FlashSystem 9100 Multi-Cloud Solution for Private Cloud Flexibility, and Data Protection
For more information about the software products that are included with the FlashSystem 9100 purchase, see IBM Knowledge Center.