Scalability
This chapter describes the scaling capabilities of IBM FlashSystem V9000:
Scale up for capacity
Scale out for performance
A single IBM FlashSystem V9000 storage building block consists of two IBM FlashSystem V9000 control enclosures (AC2 or AC3) and one IBM FlashSystem V9000 storage enclosure (AE2). Additionally, the AC3 control enclosures can be configured with SAS expansion enclosures for capacity expansion.
The examples of scaling in this chapter show how to add control enclosures, a storage enclosure, and an expansion enclosure, and how to configure scaled systems.
This chapter demonstrates scaling out with additional building blocks and adding one additional storage enclosure. This setup consists of two IBM FlashSystem V9000 building blocks configured as one IBM FlashSystem V9000 cluster.
This chapter includes the following topics:
5.1, “Overview”
5.2, “Building block for scaling”
5.3, “Scaling concepts”
5.4, “Adding an IBM FlashSystem V9000 storage enclosure (AE2)”
5.5, “Adding a second building block”
5.6, “Adding an IBM FlashSystem V9000 expansion enclosure”
5.7, “Planning”
5.8, “Installing”
5.9, “Operations”
5.10, “Concurrent code load in a scaled-out system”
5.1 Overview
IBM FlashSystem V9000 has a scalable architecture that enables flash capacity to be added (scaled up) to support multiple applications. The virtualized system can also be expanded (scaled out) to support higher IOPS and bandwidth, or the solution can be simultaneously scaled up and out to improve capacity, IOPS, and bandwidth while maintaining MicroLatency. As a result, your organization can gain a competitive advantage through MicroLatency response times and a more efficient storage environment. IBM FlashSystem V9000 has the following scalability features per building block:
Slots for up to 12 hot-swappable flash memory modules (1.2 TB, 2.9 TB, or 5.7 TB modules)
Configurable 2.4 - 57 TB of capacity for increased flexibility per storage enclosure
Up to 20 standard expansion enclosures per controller pair (up to 80 total) with up to 9.6 PB raw capacity using NL-SAS HDDs or 29.4 PB raw capacity using SSDs
Up to 8 high-density (HD) expansion enclosures per controller pair (up to 32 total) with up to 29.4 PB raw capacity using NL-SAS HDDs or 32 PB raw capacity using SSDs
IBM FlashSystem V9000 has the following flexible scalability configuration options:
 – Scale up: Add more flash capacity
 – Scale up: Add more SAS capacity
 – Scale out: Expand virtualized system
 – Scale up and out: Add more flash and SAS capacity and expand virtualized system
Two types of storage enclosures are discussed in this chapter:
IBM FlashSystem V9000 storage enclosure (AE2)
 – Native IBM FlashSystem V9000 storage
 – Fibre Channel attached
 – Based on MicroLatency Modules (flash modules)
IBM FlashSystem V9000 expansion enclosure (12F, 24F, or 92F)
 – SAS drive-based, with either SSD or nearline drives
 – SAS attached
 – Used for capacity expansion
 – Model 12F and 24F available from Version 7.7.1
 – Model 92F available from Version 7.8
5.2 Building block for scaling
A single IBM FlashSystem V9000 storage platform consists of two IBM FlashSystem V9000 control enclosures (AC2 or AC3) directly cabled to one IBM FlashSystem V9000 storage enclosure (AE2), representing a building block.
For a balanced increase of performance and scale, up to four IBM FlashSystem V9000 building blocks can be clustered into a single storage system, multiplying performance and capacity with each addition. The scalable building blocks require connectivity through Fibre Channel switches. The scalable building block configurations also support adding up to four individual IBM FlashSystem V9000 storage enclosures to the storage system.
If 228 TB from four building blocks is not enough capacity, up to four extra AE2 storage enclosures can be added. In total, an IBM FlashSystem V9000 storage system can contain a maximum of eight IBM FlashSystem V9000 storage enclosures, offering a potential storage capacity of 456 TB, and up to 2.2 PB of effective capacity at 80% compression. Real-time Compression is available as a software feature, assisted by hardware accelerator cards in the IBM FlashSystem V9000 control enclosures, so users can deploy compression where it is applicable.
From Version 7.7.1 of IBM FlashSystem V9000, SAS-attached expansion enclosures are also supported.
 
Note: The scalable building blocks require connectivity through Fibre Channel switches.
A fixed building block uses direct internal connections without any switches. Contact your IBM representative if you want to scale up or scale out from a fixed building block.
Figure 5-1 illustrates the scalable capacity of IBM FlashSystem V9000. It also shows that extra AE2 storage enclosures can be added to a single building block or to two, three, or four building blocks.
Figure 5-1 IBM FlashSystem V9000 scaling
5.3 Scaling concepts
IBM FlashSystem V9000 provides three scaling concepts:
Scale up: Add more flash capacity.
 – Add up to four extra IBM FlashSystem V9000 storage enclosures.
Scale up: Add more SAS capacity.
 – Add up to 80 IBM FlashSystem V9000 model 12F or 24F expansion enclosures.
 – Add up to 32 IBM FlashSystem V9000 model 92F expansion enclosures.
Scale out: Expand virtualized system.
 – Add up to three IBM FlashSystem V9000 building blocks for extra performance and capacity.
The first scalable IBM FlashSystem V9000 building block consists of two IBM FlashSystem V9000 control enclosures (AC2 or AC3) and one IBM FlashSystem V9000 storage enclosure (AE2), representing a building block, plus two Fibre Channel switches for the internal 16 Gbps FC cabling. A building block with switches is called a scalable building block.
 
Note: Internal FC speed for the AC2 control enclosures can be either 16 Gbps or 8 Gbps. For the AC3 control enclosures only 16 Gbps is supported.
IBM FlashSystem V9000 can have up to four extra storage enclosures and scale out to four building blocks as shown in Figure 5-1 on page 181. The maximum configuration has eight IBM FlashSystem V9000 control enclosures, and eight IBM FlashSystem V9000 storage enclosures.
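Before you scale up or scale out, the current configuration can be verified from the command-line interface (CLI). The following is a minimal sketch using standard IBM FlashSystem V9000 CLI commands; the exact output columns and command availability can vary with the software level.

   lssystem       # cluster-wide information, including code level and capacity
   lsiogrp        # I/O groups and the number of nodes in each (up to four I/O groups)
   lsnode         # control enclosure nodes (two per building block)
   lsenclosure    # storage and expansion enclosures known to the system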
5.3.1 Scale up for capacity
Scale up for capacity means adding an internal IBM FlashSystem V9000 storage enclosure to an existing building block. This internal storage enclosure is then managed by the same GUI or CLI as the existing storage enclosures. The existing IBM FlashSystem V9000 might be a single scalable building block or an already scaled IBM FlashSystem V9000. Adding other storage to an IBM FlashSystem V9000, such as IBM Storwize V7000 or IBM FlashSystem 900, is not considered IBM FlashSystem V9000 scale up, because that storage is not managed by the IBM FlashSystem V9000 GUI and is attached through the external fabric rather than the internal switches.
To add an extra IBM FlashSystem V9000 storage enclosure, see 5.4, “Adding an IBM FlashSystem V9000 storage enclosure (AE2)” on page 186.
To add an extra IBM FlashSystem V9000 expansion enclosure, see 5.6, “Adding an IBM FlashSystem V9000 expansion enclosure” on page 202.
5.3.2 Scale out for performance
Scaling out for performance is equivalent to adding a second, third, or fourth building block to a scalable building block. This additional building block is managed by the same GUI or CLI as the existing IBM FlashSystem V9000. This existing IBM FlashSystem V9000 might be a single scalable building block, so that the switches are already in place, or already be a scaled IBM FlashSystem V9000 of up to three building blocks.
Scale out always adds two controller nodes and one storage enclosure per building block to an existing IBM FlashSystem V9000.
To add another IBM FlashSystem V9000 building block, see 5.5, “Adding a second building block” on page 192.
5.3.3 IBM FlashSystem V9000 scaled configurations
Table 5-1 on page 183 summarizes the minimum and maximum capacity for scalable building blocks including the addition of AE2 storage enclosures.
Table 5-1 IBM FlashSystem V9000, scalable building blocks including additional storage enclosures
Scalable building blocks (BB)    Minimum capacity (TB)    Maximum capacity (TB)    Maximum effective capacity (TB) with Real-time Compression
1 BB                              2.2                       57                      285
1 BB + 1 AE2                      4.4                      114                      570
1 BB + 2 AE2                      6.6                      171                      855
1 BB + 3 AE2                      8.8                      228                     1140
1 BB + 4 AE2                     11.0                      285                     1425
2 BB                              4.4                      114                      570
2 BB + 1 AE2                      6.6                      171                      855
2 BB + 2 AE2                      8.8                      228                     1140
2 BB + 3 AE2                     11.0                      285                     1425
2 BB + 4 AE2                     13.2                      342                     1710
3 BB                              6.6                      171                      855
3 BB + 1 AE2                      8.8                      228                     1140
3 BB + 2 AE2                     11.0                      285                     1425
3 BB + 3 AE2                     13.2                      342                     1710
3 BB + 4 AE2                     15.4                      399                     1995
4 BB                              8.8                      228                     1140
4 BB + 1 AE2                     11.0                      285                     1425
4 BB + 2 AE2                     13.2                      342                     1710
4 BB + 3 AE2                     15.4                      399                     1995
4 BB + 4 AE2                     17.6                      456                     2280
PCIe expansion ports
Seven Peripheral Component Interconnect Express (PCIe) slots are available for port expansions in the IBM FlashSystem V9000 AC3 control enclosures.
Table 5-2 shows the host port count per building block configuration (1, 2, 3, or up to 4 building blocks).
Table 5-2 Host port count per building blocks
Building blocks    16 Gbps FC (host and storage)    10 Gbps iSCSI (host and storage)    10 Gbps FCoE (host)
1                   32                                8                                   8
2                   64                               16                                  16
3                   96                               24                                  24
4                  128                               32                                  32
Expansion enclosures
IBM FlashSystem V9000 Software V7.7.1 introduces support for expansion enclosures (Models 9846/8-12F and 9846/8-24F), also referred to as the tiered solution, which are available for the AC3 control enclosures.
The next generation IBM FlashSystem V9000 Software V7.8 offers an additional expansion enclosure, Model 9846/8-92F. The IBM FlashSystem V9000 High-density (HD) Large Form Factor (LFF) Expansion Enclosure Model 92F supports up to 92 drives per enclosure, with a mixture of rotating disks and SSD drives in various capacities.
 
Note: The focus of this book is on IBM FlashSystem V9000 Software V7.7.1 and therefore does not include examples of connecting expansion enclosures model 92F.
IBM FlashSystem V9000 Small Form Factor (SFF) expansion enclosure model 24F offers new tiering options with low cost solid-state drives (SSDs). Each SFF expansion enclosure supports up to 24 2.5-inch low cost SSD drives.
Up to 20 expansion enclosures model 12F or 24F are supported per IBM FlashSystem V9000 building block, providing up to 480 drives with expansion enclosure model 24F (SFF) and up to 240 drives with expansion model 12F (LFF) for up to 2.4 PB of raw NL-SAS capacity in each building block. With four building blocks 9.6 PB of raw NL-SAS capacity is supported.
The HD Expansion Enclosure Model 92F provides additional configuration options. Up to eight HD expansion enclosures model 92F are supported per IBM FlashSystem V9000 building block, providing up to 736 drives for up to 7.3 PB of raw NL-SAS capacity or 11.3 PB of raw SSD capacity in each building block. With four building blocks, a maximum of 32 HD expansion enclosures model 92F can be attached, giving a maximum of 29.4 PB of raw NL-SAS capacity or 32 PB of raw SSD capacity. For information about the allowed intermix of expansion enclosures, see 2.6.1, “SAS expansion enclosures intermix” on page 82.
 
Note: IBM FlashSystem V9000 Version 7.7.1 has a maximum manageable capacity of 32 PB. Managing 32 PB of storage requires an MDisk extent size of 8192 MB.
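When a pool is created from the CLI rather than the GUI, the extent size is set at creation time and cannot be changed later. The following is a minimal sketch; the pool name is hypothetical and the -ext value is specified in MB.

   # create a storage pool with 8192 MB extents, as required to manage up to 32 PB
   mkmdiskgrp -name mdiskgrp_large -ext 8192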
Figure 5-2 on page 185 shows the maximum possible configuration with a single building block using a combination of native IBM FlashSystem V9000 storage enclosures and expansion enclosures.
Figure 5-2 Maximum configuration with a single scalable building block using model 12F and 24F expansion enclosures
Table 5-3 IBM FlashSystem V9000 maximum capacities
 
Configuration                           Model 12F 10 TB NL-SAS    Model 24F 15.36 TB SSD    Model 92F 10 TB NL-SAS    Model 92F 15.36 TB SSD
1 building block (8 x 92F)              -                         -                         7.3 PB                    11.3 PB
4 building blocks (32 x 92F)            -                         -                         29.4 PB                   32 PB¹
1 building block (20 x 12F or 24F)      2.4 PB                    7.3 PB                    -                         -
4 building blocks (80 x 12F or 24F)     9.6 PB                    29.4 PB                   -                         -
1 IBM FlashSystem V9000 Version 7.7.1 has a maximum manageable capacity of 32 PB.
High-density (HD) solid-state drives (SSDs)
High-density SSDs allow applications to scale and achieve high performance while maintaining traditional reliability and endurance levels. 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB SAS 2.5-inch SSD options are available for IBM FlashSystem V9000 SFF expansion enclosure model 24F, providing up to 7.3 PB of raw SSD capacity in each building block and a maximum of 29.4 PB with four building blocks.
With expansion enclosure model 92F, 7.68 TB and 15.36 TB SSD drives are available, providing up to 11.3 PB of raw SSD capacity in each building block and a maximum of 32 PB with four building blocks.
 
Note: IBM FlashSystem V9000 Version 7.7.1 has a maximum manageable capacity of 32 PB.
High capacity nearline drives
High-capacity nearline drives enable high-value tiered storage, with hot data stored in flash and warm data on lower-cost NL-SAS HDDs, all managed by IBM Easy Tier. The 10 TB SAS 3.5-inch nearline drives are available for IBM FlashSystem V9000 LFF expansion enclosure Model 12F and for Model 92F. The maximum capacity with four building blocks is 9.6 PB of raw nearline capacity using expansion enclosure Model 12F and 29.4 PB using Model 92F.
RAID types
RAID 5 with a standby hot spare is the only available RAID option for the IBM FlashSystem V9000 native flash storage enclosures. However, the additional SAS-attached expansion enclosures can be configured with various RAID options. Distributed RAID (DRAID 5 and DRAID 6), which offers improved RAID rebuild times, is preferred for expansion enclosures.
 
Note: To support SAS attached expansion enclosures, an AH13 - SAS Enclosure Attach adapter card must be installed in expansion slot 2 of each AC3 control enclosure in the building block.
5.4 Adding an IBM FlashSystem V9000 storage enclosure (AE2)
This section gives an example of adding an extra IBM FlashSystem V9000 storage enclosure (AE2) to a single scalable building block. Before scaling a building block, be sure that the internal cabling is set up and zoning on the switches has been implemented.
 
Note: The Fibre Channel internal connection switches are ordered together with the first IBM FlashSystem V9000 scalable building block. You can also supply your own Fibre Channel switches and cables, if they are supported by IBM. See the list of supported Fibre Channel switches at the SSIC web page:
Figure 5-3 shows a scalable building block before adding an extra IBM FlashSystem V9000 storage enclosure.
Figure 5-3 Single scalable building block
 
Note: The GUI example shown in Figure 5-3 illustrates AC2 controllers. When using new model controllers the GUI changes to show the layout in the new models.
To add an IBM FlashSystem V9000 storage enclosure (AE2), complete the following steps:
1. After installing an additional storage enclosure into the IBM FlashSystem V9000 rack and cabling it to the internal switches, the IBM FlashSystem V9000 GUI shows the added storage enclosure (the display now differs from the display in Figure 5-3). Now the controller nodes are grouped on the left and the storage enclosures on the right. Hover the mouse over the existing storage enclosure to get its details (Figure 5-4).
Figure 5-4 First enclosure details
2. Hover over the empty storage enclosure frame; the Click to add additional storage message is displayed (Figure 5-5).
Figure 5-5 Single building block with unconfigured additional storage enclosure
3. Click to open the Welcome page of the Add Enclosures wizard (Figure 5-6).
Figure 5-6 Add Enclosure Wizard
4. Click Next to add the storage enclosure.
After some minutes, the enclosure is added to the IBM FlashSystem V9000 and you see the Task completed message (Figure 5-7).
Figure 5-7 Adding storage enclosure completed
5. Click Close at the Task Completed window. A summary is displayed showing capacity and flash modules to be added as shown in Figure 5-8.
Figure 5-8 Add enclosure summary
6. Click Finish. After some minutes, the array is initialized and the task finishes (Figure 5-9).
Figure 5-9 Task completed
7. Click Close at the Task Completed screen.
8. The Add Enclosure wizard finishes by advising you that MDisks must be added manually through the MDisks by Pools page (Figure 5-10).
Figure 5-10 Add storage completed
9. Click Close and navigate to the Pools → MDisks by Pools menu or simply click MDisks by Pools at the final window in the Add Enclosure wizard.
The process of adding MDisks is described in step 8 on page 197.
You must decide whether to add the new MDisks in an existing storage pool or in a new storage pool:
To maximize performance, place all storage in a single pool. This way, the VDisks have their extents striped across all storage enclosures, which gives the best performance and improves tiering efficiency.
To maximize availability, place each expansion enclosure in a separate pool for fault tolerance purposes.
In this example, the new MDisk is added to the existing MDisk pool. The result is shown in Figure 5-11.
Figure 5-11 New MDisk to the existing MDisk pool
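The same result can be achieved from the CLI. The following is a minimal sketch, assuming that the new AE2 array appears as an unmanaged MDisk named mdisk2 and that the existing pool is named mdiskgrp0 (both names are hypothetical):

   lsmdisk                            # identify the new MDisk; its mode shows unmanaged
   addmdisk -mdisk mdisk2 mdiskgrp0   # add the new MDisk to the existing storage pool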
 
Important: Before deciding whether to create a single or multiple storage pools, carefully evaluate which option best fits your solution needs, considering data availability and recovery management.
GUI after adding the IBM FlashSystem V9000 storage enclosure
From the IBM FlashSystem V9000 GUI home window, you now see the added enclosure (highlighted in Figure 5-12).
Figure 5-12 IBM FlashSystem V9000 with added storage enclosure
Hover over the newly added enclosure to get detailed information about it (Figure 5-13).
Figure 5-13 Enclosure 2 details
Click the added enclosure to review the installed components flash modules and batteries as shown in Figure 5-14.
Figure 5-14 Review the installed components
Click the arrow to spin the enclosure to see the rear-view components (Figure 5-15).
Figure 5-15 Review components from the rear side
5.5 Adding a second building block
This section provides an example of adding an extra IBM FlashSystem V9000 AC3-based building block to a single AC2-based scalable building block.
Figure 5-16 on page 193 shows the home window for an AC2-based scalable building block before an extra IBM FlashSystem V9000 AC3-based building block is added.
Figure 5-16 Single scalable building block with AC2 control enclosures
 
Note: IBM FlashSystem V9000 AC2 and AC3 control enclosures have different layout of internal disks and batteries. This difference is depicted in the GUI.
The process of adding an extra IBM FlashSystem V9000 building block includes rack mounting, cabling, powering up the new IBM FlashSystem V9000, and zoning the switches. Use the service assistance tool to check that the new devices are visible and available to the system, as shown in Figure 5-17.
Figure 5-17 Service assistant shows that the new devices are available
For more information about how to access the Service Assistant see Chapter 10, “Service Assistant Tool” on page 475.
After preparing the extra IBM FlashSystem V9000, the window now differs from Figure 5-16. Now the controller nodes are grouped on the left side and the storage enclosures on the right.
Complete the following steps:
1. Hover over the non-configured I/O group 1 (Figure 5-18) to get a hint about how to configure it.
Figure 5-18 IBM FlashSystem V9000 with unconfigured second building block
2. Click the empty I/O group or the empty storage enclosure to open the Welcome page of the Add Enclosures wizard (Figure 5-19) to configure the control and storage enclosure.
Figure 5-19 Add enclosure wizard
3. Click Next to add the storage enclosure.
After a few minutes, the enclosure is added to the IBM FlashSystem V9000 and the Task completed message is displayed (Figure 5-20).
Figure 5-20 Adding storage enclosures
4. Now only the storage enclosure is added. Click Close to select nodes to add to the new I/O group, which by default is numbered with the next higher number. The first I/O group was named io_grp0, and io_grp1 is being added. IBM FlashSystem V9000 supports up to four I/O groups.
5. Figure 5-21 shows the Add Enclosures wizard where the new nodes are automatically selected. Click Next.
Figure 5-21 Verify node names and add enclosures
6. A summary is displayed (Figure 5-22). It shows that one new I/O group with two nodes is being added. The capacity field shows zero bytes being added; capacity will be added later through the MDisks by Pools menu. Click Finish.
Figure 5-22 Summary for adding IO group and storage enclosure
7. After a few minutes the array initialization task finishes. Click Close in the Task Completed window (Figure 5-23).
Figure 5-23 Task completed after adding control and storage enclosures
When the task is completed, the Add Enclosure wizard finishes by indicating that the new MDisk must be added through the MDisks by Pools page (Figure 5-24).
Figure 5-24 Adding enclosure completed
8. Either click MDisks by Pools or click Close and then navigate to Pools → MDisks by Pools, which now shows that the new MDisk is unassigned (Figure 5-25).
Figure 5-25 New MDisk is unassigned
Before deciding whether to create a single or multiple storage pools, carefully evaluate which option best fits your solution needs, considering data availability and recovery management.
In the next example, an additional storage pool for the new MDisk is created. The new MDisk could also have been added to the existing storage pool.
9. Next, the Create Pool wizard opens (Figure 5-26). Select the extent size or accept the default of 1 GiB. Type a name for the new pool, which in this example is mdiskgrp1. Then, click Create.
Figure 5-26 Create Pool
The Create Pool wizard creates an empty storage pool. The Task completed message is displayed (Figure 5-27). Click Close.
Figure 5-27 The new pool is created
An empty storage pool is now available for the new MDisk (Figure 5-28).
Figure 5-28 New MDisk is unassigned
10. Either click Add Storage to the right of mdiskgrp1 or right-click the unassigned MDisk and click Assign.
11. The Assign Storage to Pool wizard starts (Figure 5-29). Click Internal Flash, select the available MDisk and then click Assign.
Figure 5-29 Assign storage to mdiskgrp1
12. Figure 5-30 shows that the wizard added the new MDisk to MDisk pool mdiskgrp1. Click Close to complete the wizard.
Figure 5-30 The new MDisk is added to the new pool
The new MDisk is now included in MDisk pool mdiskgrp1 (Figure 5-31). Adding new storage is completed.
Figure 5-31 Adding new storage has completed
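The pool creation and MDisk assignment from steps 9 through 12 can also be done from the CLI. This is a minimal sketch; the MDisk name mdisk3 is hypothetical, and the -ext value of 1024 corresponds to the 1 GiB default offered by the wizard.

   mkmdiskgrp -name mdiskgrp1 -ext 1024   # create the new, empty storage pool
   lsmdisk                                # identify the unmanaged MDisk from the new building block
   addmdisk -mdisk mdisk3 mdiskgrp1       # assign the MDisk to the new pool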
GUI after adding the extra building block
From the GUI home window you now see the added control enclosures and the added storage enclosure (Figure 5-32). The indicated capacity has increased by the value of the added storage capacity.
Figure 5-32 IBM FlashSystem V9000 scale out configuration
 
Note: You might have to refresh the web-browser to see the new capacity values.
Figure 5-33 shows details of the IBM FlashSystem V9000 when you hover over the components.
Figure 5-33 Hovering over the components
Click the added enclosure to review the installed disks (Figure 5-34).
Figure 5-34 AC3 control enclosure frontside
Click the arrow to spin the enclosure in order to view the components at the rear side (Figure 5-35).
Figure 5-35 AC3 control enclosure rear side
5.6 Adding an IBM FlashSystem V9000 expansion enclosure
This section gives an example of adding an IBM FlashSystem V9000 expansion enclosure (12F or 24F) to a scalable building block with two building blocks. The expansion enclosure is added to the second building block, which has AC3 controller nodes.
Figure 5-36 shows a scalable building block with two building blocks before adding an extra IBM FlashSystem V9000 expansion enclosure.
Figure 5-36 Two building blocks before adding an IBM FlashSystem V9000 expansion enclosure
To add an IBM FlashSystem V9000 expansion enclosure (12F or 24F), complete these steps.
1. Install the hardware:
 – SAS Enclosure Attach adapter for both AC3 control enclosures in the building block.
 – One or more expansion enclosures: Install the new enclosures in the rack.
 – SAS cables: Connect the SAS cables to both nodes at the AC3 control enclosures.
 – Power cables: Connect the expansion enclosures to power.
 
Note: To support the IBM FlashSystem V9000 expansion enclosures, an AH13 - SAS Enclosure Attach adapter card must be installed in expansion slot 2 of each AC3 control enclosure in the building block.
2. Power on the IBM FlashSystem V9000 expansion enclosure. Wait for the system LEDs to turn green. If any LEDs are yellow, troubleshoot the issue before proceeding.
3. Hover over the empty IBM FlashSystem V9000 expansion enclosure frame; the Click to add additional storage message is displayed (Figure 5-37 on page 203). Click the unassigned expansion enclosure and follow the instructions in the Add Enclosure wizard.
Figure 5-37 Click the unassigned expansion enclosure
Figure 5-38 shows the Add SAS Enclosures wizard, where the unassigned expansion enclosure is displayed with its model, type, and serial number. The new disk shelf is online and will be assigned an ID of 3. Click Add.
Figure 5-38 Unassigned expansion enclosures to be added
Figure 5-39 shows the Task Completed message and the command that was used to assign the new enclosure.
Figure 5-39 Command to assign enclosure is running
When the enclosure is assigned to the system, the device is managed. Disks in the new enclosure appear when you select Pools → Internal Storage (Figure 5-40). The disks are unused and ready to be used in the system.
Figure 5-40 Internal storage: new disks are unused
4. Navigate to Pools → MDisks by Pools.
The system recognizes that the new disks are unused and suggests including them in the configuration (Figure 5-41). Click Yes.
Figure 5-41 Include new disks
Figure 5-42 shows the CLI commands to include the new disks in the configuration. Disks now have the status of candidate.
Figure 5-42 CLI executes and drives are now candidate drives
5. In the MDisks by Pools menu, you must now decide whether to expand an existing MDisk pool or create a new pool. The next example takes advantage of the Easy Tier function, so the existing pool mdiskgrp0 is expanded.
Figure 5-43 shows that 18 drives are available for storage pools.
Figure 5-43 18 drives are available for storage pools
Because you are about to mix flash and SAS drives in a single storage pool, a good practice is to name the MDisks to reflect in which storage or expansion enclosure they are located. Right-click the MDisk and click Rename (Figure 5-44).
Figure 5-44 Rename the MDisks
Rename the existing MDisks to mdisk0-FLASH1 and mdisk1-FLASH2 to indicate that these MDisks come from flash storage. The final result is shown in Figure 5-45.
Figure 5-45 Existing MDisks renamed to reflect their type
6. Assign the new disks to the MDisk pool mdiskgrp0. Right-click the storage pool and click Add Storage (Figure 5-46).
Figure 5-46 Add storage to the MDisk pool
The Assign Storage to Pool wizard opens (Figure 5-47). You can choose disks from Internal or Internal Custom. Selecting Internal provides only a single default option for configuring RAID and spares. Selecting Internal Custom gives more choices: all available RAID and distributed RAID (DRAID) levels can be selected, and the stripe width and number of spares can also be configured.
Select Distributed RAID-6 with 2 spares and click Assign.
Figure 5-47 Assign disks with RAID-6 and two spares
The Task completed message is displayed (Figure 5-48).
Figure 5-48 Task completed: The new MDisk is created
Just as you renamed the MDisks coming from flash storage, you also rename the MDisk coming from SAS storage (Figure 5-49).
Figure 5-49 Rename the SAS MDisk
The new name should reflect the disk types within the storage pool. Enter the new name mdisk3-SAS10K-3 (Figure 5-50) and then click Rename to continue.
Figure 5-50 Enter new name
7. Check the tiering level. The tiering level is automatically determined by IBM Easy Tier and can be changed only by using CLI commands. Easy Tier operates with three levels, or tiers, with Version 7.7 of IBM FlashSystem V9000 software: flash, enterprise, and nearline.
Easy Tier can manage five types of drives in up to three tiers within a managed pool:
 – Tier 0 or flash tier: Flash cards and flash drives (SSD). This tier is the highest performance drive class that is currently available.
 – Tier 1 or enterprise tier: 10K RPM or 15K RPM SAS disk drives. This tier is the high-performance drive class.
 – Tier 2 or nearline tier: 7.2K RPM nearline disk drives. This tier is the low-cost, large capacity, storage drive class.
 
Note: Starting with IBM FlashSystem V9000 Version 7.8, an additional tier of flash storage is provided to differentiate tier 0 flash from tier 1 flash. See 3.2.1, “IBM Easy Tier” for more information.
To check the tier levels, right-click the blue bar at the top of the MDisks by Pools window and select Tier. The resulting MDisks and tier levels are shown in Figure 5-51.
Figure 5-51 MDisks and tiers
 
Note: Version 7.8 of IBM FlashSystem V9000 introduces an extra tier level supporting read-intensive solid-state drives (RI SSD). These are the new tiering levels:
Tier0 - Flash tier (MicroLatency flash modules)
Tier1 - SSD tier (new) (SSD drives)
Tier2 - HDD tier (SAS disk drives)
Tier3 - nearline tier (NL-SAS disk drives)
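The drive preparation, MDisk renaming, DRAID array creation, and tier check from steps 4 through 7 can also be performed from the CLI. The following is a minimal sketch; the drive class ID, drive count, stripe width, and MDisk names are hypothetical, and the valid tier names accepted by chmdisk -tier depend on the installed software level.

   lsdrive                                # confirm that the new drives show a use of candidate
   lsdriveclass                           # note the drive class ID of the new SAS drives
   chmdisk -name mdisk0-FLASH1 mdisk0     # rename the existing flash MDisks to reflect their type
   chmdisk -name mdisk1-FLASH2 mdisk1
   # create a distributed RAID 6 array from 18 drives with two rebuild areas and add it to mdiskgrp0
   mkdistributedarray -level raid6 -driveclass 1 -drivecount 18 -stripewidth 16 -rebuildareas 2 mdiskgrp0
   chmdisk -name mdisk3-SAS10K-3 mdisk3   # rename the new SAS MDisk
   lsmdisk -delim ,                       # review the tier column for each MDisk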
GUI after adding the expansion enclosure
The IBM FlashSystem V9000 home window (Figure 5-52) now shows the added enclosure.
Figure 5-52 The added expansion enclosure appears on the GUI
Click the added enclosure to review the installed disks (Figure 5-53).
Figure 5-53 SAS enclosure front
Click the arrow to spin the enclosure to view components at the rear side (Figure 5-54).
Figure 5-54 SAS enclosure rear side
5.7 Planning
See the following areas of this book:
Chapter 4, “Planning” on page 135 describes details for planning the set up of a scaled IBM FlashSystem V9000.
Appendix A, “Guidelines: Port utilization in an IBM FlashSystem V9000 scalable environment” on page 657 provides examples and guidelines for configuring port utilization and zoning to optimize performance and properly isolate the types of Fibre Channel traffic.
Guidelines are provided for two suggested methods of port utilization in an IBM FlashSystem V9000 scalable environment, depending on customer requirements:
 – IBM FlashSystem V9000 port utilization for infrastructure savings
This method reduces the number of required Fibre Channel ports attached to the customer’s fabrics. This method provides high performance and low latency, but performance might be port-limited for certain configurations. Intra-cluster communication and AE2 storage traffic occur over the internal switches.
 – IBM FlashSystem V9000 port utilization for performance
This method uses more customer switch ports to improve performance for certain configurations. Only ports that are designated for intra-cluster communication are attached to private internal switches. The private internal switches are optional and all ports can be attached to customer switches.
5.8 Installing
Chapter 6, “Installation and configuration” on page 231 includes details of how to install and configure IBM FlashSystem V9000. It describes the tasks that are done by the IBM Service Support Representative or IBM lab-based services to set up the system and the follow-on tasks done by the customer.
5.9 Operations
The IBM FlashSystem V9000 GUI is the focal point for operating the system. You need only one GUI to create volumes and hosts, and map the volumes to the host.
This section provides an example of host and volume creation for a scaled IBM FlashSystem V9000: an AIX host is added to I/O group 0 and I/O group 1, and a Red Hat Enterprise Linux host is added to I/O group 1 only.
For information about host and volume creation, see Chapter 8, “Using IBM FlashSystem V9000” on page 321.
Complete the following steps:
1. In the GUI, select Hosts and click Add Hosts.
2. The Add Host wizard opens (Figure 5-55). Click Fibre Channel to add the AIX host.
Figure 5-55 Add host wizard
3. The fields to set the new host are displayed (Figure 5-56). Provide a name for the host and select the host port WWPN. Use the default for the host type unless you have other requirements. Select io_grp0 and io_grp1. Click Add to create the host.
Figure 5-56 Add AIX host to two I/O groups
4. To add the Red Hat host, restart the wizard and enter the Red Hat host information. Figure 5-57 shows adding a host only for I/O group 1. Click Add to create the Red Hat host.
Figure 5-57 Add RedHat host to only I/O group 1
The hosts are now created, and the number of I/O groups each host can access is shown (Figure 5-58).
Figure 5-58 Hosts and number of I/O groups
5. The next step is to create volumes for the hosts. Click Create Volumes in the Volumes menu and click Custom. Provide a name for the volumes, and enter the capacity and number of volumes. Create four volumes for the AIX host (Figure 5-59).
Figure 5-59 Create 4 volumes
6. After you enter the volume detail data, click Volume Location.
7. Volumes are presented to the host by one I/O group. A cluster with two or more I/O groups will, by default, automatically balance the volumes over all I/O groups. The AIX host can access both I/O groups in this example setup. Therefore, the volumes for the AIX host can be auto-balanced over the two I/O groups. This is the default setup, as shown in Figure 5-60. Click Create to create the four volumes.
Figure 5-60 Default volume location settings
 
Note: By default, volume formatting is enabled in both custom and non-custom mode. By selecting custom mode in the Create Volumes wizard, you can deselect Format volume on the General tab. Volume formatting can take a long time and might not be needed.
8. The volume information shows four AIX volumes distributed over both I/O groups. The header in the next figure was altered to show preferred node ID and caching I/O group. The caching I/O group presents the volume to the host, as shown in Figure 5-61.
Figure 5-61 Volumes and caching I/O group
 
Note: The Preferred Node ID in the example in Figure 5-61 shows IDs 1,2,9 and 10. In an actual customer environment where a building block is being added, the node IDs are assigned the next higher number which, in this example, would be 3 and 4. The example shows numbers 9 and 10 due to several additions and removals of I/O groups in our test environment.
9. The next step creates volumes for the Red Hat host (RedHat). The Red Hat host is attached only to I/O group 1; therefore, on the Volume Location tab, the caching I/O group is set to io_grp1 (Figure 5-62). Click Create to create the volumes.
Figure 5-62 Limiting access to I/O group 1
10. The volume information shows that the four added Red Hat volumes are presented by I/O group 1 to the host (Figure 5-63).
Figure 5-63 Volumes and caching I/O group
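The host and volume creation shown in this section can also be scripted from the CLI. The following is a minimal sketch; the WWPNs, host names, volume names, sizes, and pool name are hypothetical.

   # AIX host with access to both I/O groups; Red Hat host limited to I/O group 1
   mkhost -name AIX_host -fcwwpn 10000000C9AABB01 -iogrp 0:1
   mkhost -name RedHat -fcwwpn 10000000C9AABB02 -iogrp 1
   # one of the four AIX volumes, cached by I/O group 0, and one Red Hat volume, cached by I/O group 1
   mkvdisk -name AIX_vol_01 -mdiskgrp mdiskgrp0 -size 100 -unit gb -iogrp io_grp0
   mkvdisk -name RedHat_vol_01 -mdiskgrp mdiskgrp0 -size 100 -unit gb -iogrp io_grp1
   # map the volumes to their hosts
   mkvdiskhostmap -host AIX_host AIX_vol_01
   mkvdiskhostmap -host RedHat RedHat_vol_01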
Move unmapped volumes to another I/O group
If there is a need to move a host and its volumes from one I/O group to another, this can be completed with volumes mapped to the host or with unmapped volumes. To move unmapped volumes, complete the following steps:
1. From the Hosts menu, right-click a host (Figure 5-64) and click Properties. Then, click Show Details.
Figure 5-64 Show hosts properties with detailed view
The host is enabled only for I/O group 1, and you must enable it for I/O group 0 also.
2. Click Edit and then select the I/O group (Figure 5-65) that the host and volumes are to be migrated to. The host is going to be available to I/O group 0 and I/O group 1. Click Save.
Figure 5-65 Edit details: select additional I/O group
3. From the Volumes menu, select the volumes to be moved to the other I/O group. Either right-click or click the Actions menu and select Modify I/O Group (Figure 5-66).
Figure 5-66 Modify I/O group
4. The Move Volumes to a New I/O Group wizard opens (Figure 5-67). Select the new I/O group and click Move.
Figure 5-67 Move I/O group
The volumes are moved to I/O group 0 (Figure 5-68).
Figure 5-68 Volumes now appear in the other I/O group
From the Hosts menu, the host can now be deselected from I/O group 1, or it can be left as is.
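A CLI equivalent of this unmapped-volume move is sketched below; the host and volume names are hypothetical. The addhostiogrp command enables the host for an additional I/O group, and movevdisk changes the caching I/O group of a volume.

   addhostiogrp -iogrp 0 RedHat             # allow the RedHat host to access I/O group 0 as well
   movevdisk -iogrp io_grp0 RedHat_vol_01   # move the volume's caching I/O group to io_grp0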
Move mapped volumes to another I/O group
Moving host-mapped volumes requires more attention, because the host has to discover the new paths and remove the old paths while access to its volumes remains uninterrupted.
The following example moves the four Red Hat volumes back to I/O group 1 from I/O group 0.
To move host-mapped volumes, complete the following steps:
1. From the Volumes menu select the volumes to be moved to another I/O group, right-click and select Modify I/O Group. The Move Volumes to New I/O Group wizard opens (Figure 5-69 on page 220).
Figure 5-69 Move Volumes to a New I/O Group wizard
2. In this example, you allow only the Red Hat host to access I/O group 0 and I/O group 1 so only I/O group 1 is a candidate. Select the new I/O group and click Apply and Next (Figure 5-70).
Figure 5-70 Select new I/O group
CLI commands execute and then the Task completed message is displayed (Figure 5-71).
Figure 5-71 Task completed
3. The system now maps the volumes to the new I/O group while, at the same time, keeping them in the first I/O group. The host now has to discover the new paths, so you must perform path discovery from the host side before continuing to make sure that the host has appropriate access to its volumes on the new paths. After new paths are detected, click Apply and Next (Figure 5-72).
Figure 5-72 Detect new paths
4. When you confirm that hosts discovered all new paths for the volumes, the Move Volumes to a New I/O Group wizard removes access to the first I/O group and thereby also removes all paths to it. Click Next to remove the old paths (Figure 5-73).
Figure 5-73 Old paths are removed
5. Review the summary and click Finish (Figure 5-74).
Figure 5-74 Finish Move Volumes to a new I/O Group wizard
The volumes are now moved back to I/O group 1 (Figure 5-75).
Figure 5-75 Volumes were moved to the new I/O group
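For mapped volumes, the same non-disruptive sequence can be followed from the CLI. This is a minimal sketch with a hypothetical volume name; perform host-side path discovery between adding and removing access, exactly as the wizard prompts.

   addvdiskaccess -iogrp io_grp1 RedHat_vol_01   # grant access through the new I/O group (new paths)
   movevdisk -iogrp io_grp1 RedHat_vol_01        # move the caching I/O group to io_grp1
   # rescan paths on the host and confirm that the new paths are online before continuing
   rmvdiskaccess -iogrp io_grp0 RedHat_vol_01    # remove access through the old I/O group (old paths)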
Creating volumes, adding hosts, and mapping volumes to hosts in an IBM FlashSystem V9000 scaled-out configuration requires careful planning of host connections with regard to the I/O groups. The example of the AIX host and the Red Hat host demonstrates these steps.
5.10 Concurrent code load in a scaled-out system
This section demonstrates the IBM FlashSystem V9000 software update. Before you start a system update, be sure that the system has no problems that might interfere with a successful update. When the system uses HyperSwap volumes, make sure that all HyperSwap relationships have a status of Online by running the lsrcrelationship command or by using the GUI. Hosts must be configured with multipathing between the nodes of the accessing I/O group or groups when using HyperSwap.
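These pre-update checks can also be made from the CLI. The following is a minimal sketch; review the output of each command and resolve any open problems before starting the update.

   lseventlog          # review unresolved events and alerts
   lsrcrelationship    # when HyperSwap is used, confirm that all relationships report Online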
 
Note: The software release notes contain the current information about the update.
The update process is described in 9.5.3, “Update software” on page 455. This section includes a brief description of the update on an IBM FlashSystem V9000 scaled out system.
Figure 5-76 on page 224 shows an IBM FlashSystem V9000 full scaled-out cluster using four building blocks and four additional IBM FlashSystem V9000 storage enclosures (in total, eight controller nodes and eight storage enclosures).
Figure 5-76 Full scaled-out and scaled-up IBM FlashSystem V9000
IBM FlashSystem V9000 update consists of three phases, with different steps for each phase:
1. Phase 1: IBM FlashSystem V9000 control enclosures:
a. Update of one controller node per I/O group, one controller at a time.
b. Pause for approximately 30 minutes for host path discovery; hosts have to reconnect to the updated controller nodes.
c. Update of the other controller node of an I/O group, one controller at a time.
2. Phase 2: IBM FlashSystem V9000 storage enclosures software:
a. Update of one canister per storage enclosure, all in parallel.
b. Update of the other canister of a storage enclosure, all in parallel.
3. Phase 3: IBM FlashSystem V9000 storage enclosures hardware:
a. Update batteries, flashcards, and so on.
The update takes about 2.5 hours for a cluster with one building block. You can add 10 - 15 minutes per additional node. Adding IBM FlashSystem V9000 storage enclosures does not increase the amount of time for the update because they are all updated in parallel.
Complete the following steps:
1. To start the concurrent code load (CCL), click Settings → System → Update System, and then click Update.
2. The Update System wizard opens (Figure 5-77). Provide a name for the test utility and the update package. Use the folder buttons to select the correct file names and click Update.
Figure 5-77 Update system file selection
3. You are prompted whether you want the update to be automatic or manual. Select Automatic update (Figure 5-78) and then click Finish.
Figure 5-78 Automatic or manual update
After you click Finish, these steps occur:
1. The files are uploaded to IBM FlashSystem V9000. A progress bar shows the status of the upload (Figure 5-79).
Figure 5-79 CCL upload progress bar
2. After uploading the files, the GUI shows the overall progress bar. It takes one to two minutes after uploading the files before the overall progress status is shown. Figure 5-80 shows the information for a cluster of eight nodes and four building blocks.
Figure 5-80 Overall Progress
At the starting point of the update, all eight nodes and the system part, which also includes the IBM FlashSystem V9000 enclosures, are in state Not updated. Then all eight nodes are updated, one by one, starting with the second node of each I/O group. After one node of every I/O group is updated, the update pauses for approximately 30 minutes for host path discovery. Then, the other nodes are updated.
3. Figure 5-81 shows the progress on the first node updating, which is node 2 in this example.
Figure 5-81 Updating first node, the second node of the first I/O group
4. Figure 5-82 shows the progress on the second node that is updating, which is node 4 in this example.
Figure 5-82 Second node updating (node 4)
5. Figure 5-83 shows the progress on the fourth node that is updating, which is node 8 in this example.
Figure 5-83 Fourth node updating, which is node 8
6. After updating half of all nodes, one node of every I/O group, the update pauses for approximately 30 minutes for host path discovery.
After the configuration node update starts, a node failover message is displayed (Figure 5-84).
Figure 5-84 Node failover due to update of the configuration node
7. After all of the IBM FlashSystem V9000 controller nodes are updated, the IBM FlashSystem V9000 storage enclosure software is updated. All IBM FlashSystem V9000 storage enclosures are updated in parallel, that is, at the same time. First, the storage enclosure software is updated, and then all other components, such as flash modules and batteries, are updated (Figure 5-85).
Figure 5-85 IBM FlashSystem storage enclosure canister update
8. When the update is complete, a message indicates that IBM FlashSystem V9000 is running the most current software (Figure 5-86).
Figure 5-86 IBM FlashSystem V9000 running up-to-date software
All IBM FlashSystem V9000 controller nodes and storage enclosures are now updated to the current software level. The updated GUI now has new icons for configuring IP Quorum and NPIV.
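The update can also be started and monitored from the CLI instead of the GUI. The following is a minimal sketch; the package file name is hypothetical, the update test utility must still be uploaded and run first, and the exact commands and output can vary by software level.

   applysoftware -file IBM_FlashSystem_V9000_INSTALL_7.7.1   # start the update with the uploaded package
   lsupdate                                                  # show the overall update state and progress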