Storage pools
This chapter describes how the IBM Storwize V5000 Gen2 manages physical storage resources. All storage resources under IBM Storwize V5000 Gen2 control are managed by using storage pools. Storage pools make it possible to dynamically allocate resources, maximize productivity, and reduce costs. Internal and external managed disks (MDisks), advanced internal storage, and storage pool management are covered in this chapter. External storage controllers are covered in Chapter 11, “External storage virtualization” on page 635.
Storage pools can be configured through the Initial Setup wizard when the system is first installed, as described in Chapter 2, “Initial configuration” on page 37. They can also be configured after the initial setup through the management GUI, which provides a set of presets to help you configure different Redundant Array of Independent Disks (RAID) types.
The recommended configuration presets configure all drives into RAID arrays based on drive class and protect them with the correct number of spare drives. Alternatively, you can configure the storage to your own requirements. Selections include the drive class, the number of drives to configure, whether to configure spare drives, and optimization for performance or capacity.
Specifically, this chapter provides information about the following topics:
4.1 Working with internal drives
This section describes how to configure the internal storage disk drives by using different RAID levels and optimization strategies. For more information about RAID settings, see 4.3.2, “RAID configuration” on page 179.
The IBM Storwize V5000 Gen2 storage system provides an Internal Storage window for managing all internal drives. The Internal Storage window can be accessed by opening the System window, clicking the Pools option and then clicking Internal Storage, as shown in Figure 4-1.
Figure 4-1 Path to Internal Storage window
4.1.1 Internal Storage window
The Internal Storage window (as shown in Figure 4-2 on page 145) provides an overview of the internal drives that are installed in the IBM Storwize V5000 Gen2 storage system. Selecting All Internal under the Drive Class Filter shows all the drives that are installed in the managed system, including attached expansion enclosures. Alternatively, you can filter the drives by their type or class. For example, you can choose to show only the Enterprise drive class (serial-attached SCSI (SAS)), Nearline SAS, or Flash drives.
Figure 4-2 Internal Storage window
The right side of the Internal Storage window lists the selected type of internal disk drives. By default, the following information is listed:
Logical drive ID
Drive capacity
Current type of use (unused, candidate, member, spare, or failed)
Status (online, offline, or degraded)
MDisk name that the drive is a member of
Enclosure ID that the drive is installed in
Slot ID of the enclosure in which the drive is installed
The default sort order is by enclosure ID. This default can be changed to any other column by left-clicking the column header. To toggle between ascending and descending sort order, left-click the column header again. Hovering over a header name, such as Drive ID, displays a brief description of the items within that column.
Additional columns can be included by right-clicking the gray header bar of the table, which opens the selection panel, as shown in Figure 4-3. To restore the default column options, select Restore Default View.
Figure 4-3 Additional column options for Internal Storage window
The overall internal storage capacity allocation indicator is shown in the upper-right corner. The Total Capacity shows the overall capacity of the internal storage that is installed in the IBM Storwize V5000 Gen2 storage system. The MDisk Capacity shows the internal storage capacity that is assigned to the MDisks. The Spare Capacity shows the internal storage capacity that is used for hot spare disks.
The percentage bar that is shown in Figure 4-4 indicates how much capacity is allocated.
Figure 4-4 Internal Storage allocation indicator
4.1.2 Actions on internal drives
You can perform several actions by right-clicking the internal drives or clicking the Actions drop-down menu, as shown in Figure 4-5. If you click Actions without selecting any drive, the only available option is Upgrade All.
Figure 4-5 Internal drive actions menu
Depending on the status of the selected drive, the following actions are available.
Take Offline
An internal drive can be taken offline if a problem is identified on it. A confirmation window opens, as shown in Figure 4-6. The default selection is to take a drive offline only if a spare drive is available, which is strongly recommended because it avoids redundancy loss in the array. Click OK to take the drive offline.
Figure 4-6 Warning before taking an internal drive offline
If the drive fails (as shown in Figure 4-7), the MDisk of which the failed drive is a member remains online, and a hot spare is automatically reassigned.
Figure 4-7 Internal drive taken offline
If sufficient spare drives are not available and a drive must be taken offline, select the second option, for no redundancy (Take the drive offline even if redundancy is lost on the array). This option results in a degraded storage pool due to the degraded MDisk, as shown in Figure 4-8.
Figure 4-8 Degraded MDisk
The IBM Storwize V5000 Gen2 storage system prevents the drive from being taken offline if doing so can result in data loss. A drive cannot be taken offline (as shown in Figure 4-9 on page 149) if no suitable spare drives are available and, based on the RAID level of the MDisk, insufficient redundancy would remain. Click Close to return to the Internal Storage panel.
Figure 4-9 Internal drive offline not allowed because of insufficient redundancy
Example 4-1 shows how to use the chdrive command-line interface (CLI) command to set the drive to failed.
Example 4-1 The use of the chdrive command to set the drive to failed
chdrive -use failed driveID
chdrive -use failed -allowdegraded driveID
Mark as
The internal drives in the IBM Storwize V5000 Gen2 storage system can be assigned to the following usage roles by right-clicking the drives and selecting the Mark as option, as shown in Figure 4-10:
Unused: The drive is not in use, and it cannot be used as a spare.
Candidate: The drive is available for use in an array.
Spare: The drive can be used as a hot spare, if required.
Figure 4-10 Selecting the internal drive “Mark as” action
Whether a new role can be assigned to a drive depends on its current usage role. These dependencies are shown in Figure 4-11.
Figure 4-11 Internal drive usage role table
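The same role changes can be made from the CLI with the chdrive command, as in the following sketch (the drive ID of 5 is illustrative):
chdrive -use unused 5
chdrive -use candidate 5
chdrive -use spare 5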
Identify
Use the Identify action to turn on the LED light so that you can easily identify a drive that must be replaced or that you want to physically troubleshoot. The panel that is shown in Figure 4-12 appears when the LED is on. Click Turn LED off when you are finished to turn the drive LED off and return to the Internal Storage panel.
Figure 4-12 Internal drive identification
Example 4-2 shows how to use the chenclosureslot command to turn on and turn off the drive LED.
Example 4-2 The use of the chenclosureslot command to turn on and turn off the drive LED
chenclosureslot -identify yes/no -slot slot enclosureID
Upgrade
From this option, you can easily upgrade the drive firmware. You can use the GUI to upgrade individual drives or to upgrade all drives for which updates are available. For more information about upgrading drive firmware, see 12.4.2, “Updating the drive firmware” on page 693.
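Drive firmware can also be updated from the CLI with the applydrivesoftware command. The following sketch assumes that the firmware package was already uploaded to the system; the file name and drive ID are illustrative:
applydrivesoftware -file IBM_DRIVE_FIRMWARE_PACKAGE -type firmware -drive 5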
Dependent Volumes
Clicking Dependent Volumes shows the volumes that depend on the selected drive. Volumes depend on a drive only when their underlying MDisks are in a degraded or inaccessible state and when the removal of more hardware causes the volume to go offline. This condition is true for any RAID 0 MDisk because it has no redundancy, or if the associated MDisk is already degraded.
Use the Dependent Volumes option before you perform any drive maintenance to determine which volumes are affected.
 
Important: A lack of listed dependent volumes does not imply that no volumes are using the drive.
Figure 4-13 shows an example in which no dependent volumes are detected for a specific drive. If dependent volumes are identified, they are listed within this panel. When volumes are listed as dependent, you can also check volume savings and throttling by selecting the volume and clicking Actions. Click Close to return to the Internal Storage panel.
Figure 4-13 Internal drive with no dependent volumes
Example 4-3 shows how to view dependent volumes for a specific drive by using the CLI.
Example 4-3 Command to view dependent virtual disks (VDisks) for a specific drive
lsdependentvdisks -drive driveID
Properties
Clicking Properties in the Actions menu or double-clicking the drive provides the vital product data (VPD) and the configuration information, as shown in Figure 4-14. The Show Details option in the bottom-left of the Properties panel was selected to show more information.
Figure 4-14 Detailed internal drive properties
If the Show Details option is not selected, the technical information section is reduced, as shown in Figure 4-15.
Figure 4-15 Internal drive properties without details
A Drive Slot tab is available in the Properties panel (as shown in Figure 4-16 on page 155) to obtain specific information about the slot of the selected drive. The Show Details option also applies to this tab; if you do not select it, the Fault LED information disappears from the panel. Click Close to return to the Internal Storage panel.
Figure 4-16 Internal drive properties slot
Example 4-4 shows how to use the lsdrive command to display the configuration information and drive VPD.
Example 4-4 The use of the lsdrive command to display configuration information and drive VPD
IBM_Storwize:ITSO V5000:superuser>lsdrive 1
id 1
status online
error_sequence_number
use member
UID 5000cca05b1d97b0
tech_type sas_hdd
capacity 278.9GB
block_size 512
vendor_id IBM-E050
product_id HUC156030CSS20
FRU_part_number 01AC594
FRU_identity 11S00D5385YXXX0TGJ8J4P
RPM 15000
firmware_level J2G9
FPGA_level
mdisk_id 0
mdisk_name MDisk_01
member_id 5
enclosure_id 1
slot_id 1
node_id
node_name
quorum_id
port_1_status online
port_2_status online
interface_speed 12Gb
protection_enabled yes
auto_manage inactive
drive_class_id 145
IBM_Storwize:ITSO V5000:superuser>
Customize Columns
Click Customize Columns in the Actions menu to add or remove several columns that are available in the Internal Storage window.
To restore the default column options, select Restore Default View, as shown in Figure 4-17.
Figure 4-17 Customizing columns on the Internal Storage window
4.2 Working with storage pools
Storage pools (or pools) act as containers for MDisks and provision the capacity to volumes. MDisks can be provisioned through internal or external storage. MDisks created from internal storage are created as RAID arrays.
Figure 4-18 on page 157 provides an overview of how storage pools, MDisks, and volumes are related. The numbers in the figure represent the following components:
Hosts (1)
Volumes (5)
Pools (4)
External MDisks (0)
Arrays (2)
This panel is available by browsing to Monitoring → System and clicking Overview in the upper-right corner of the panel. You can also identify the name of each resource by hovering over the elements on the Overview window.
Figure 4-18 System Overview panel
IBM Storwize V5000 Gen2 organizes storage into pools to ease storage management and make it more efficient. All MDisks in a pool are split into extents of the same size, and volumes are created from these available extents. The extent size is a property of the storage pool: when an MDisk is added to a pool, it is split into extents of the size defined for that pool.
Storage pools can be further divided into subcontainers called child pools. Child pools inherit the properties of the parent pool and can also be used to provision volumes.
Storage pools are managed either through the Pools panel or the MDisks by Pools panel. Both panels allow you to run the same actions; however, actions on child pools can be performed only through the Pools panel. To access the Pools panel, browse to Pools → Pools, as shown in Figure 4-19.
Figure 4-19 Pools panel
The panel lists all storage pools available in the system. If a storage pool has child pools, you can toggle the arrow sign to the left of the storage pool icon to show or hide the child pools.
4.2.1 Creating storage pools
If you are installing a brand new IBM Storwize V5000 Gen2, no pools exist when you first log in, so the system automatically suggests creating a pool and leads you directly to the Create Pool panel. You can access the Pools panel later through the Pools menu, as previously shown in Figure 4-19 on page 157.
To create a new storage pool, you can use one of the following alternatives:
Navigate to Pools → Pools and click Create, as shown in Figure 4-20.
Figure 4-20 Create button on Pools panel
Navigate to Pools → MDisks by Pools and click Create Pool, as shown in Figure 4-21.
Figure 4-21 Create Pool button on MDisks by Pools panel
Both of these alternatives open the dialog box shown in Figure 4-22.
Figure 4-22 Create Pool dialog box
 
Note: If encryption is enabled, you can additionally select whether the storage pool is encrypted. The encryption setting of a storage pool is selected at creation time and cannot be changed later. By default, if encryption is enabled, encryption is selected.
If advanced pool settings are enabled, you can additionally select an extent size at the time of the pool creation, as shown in Figure 4-23.
Figure 4-23 Creating pool with advanced pool settings enabled
 
Note: Every storage pool created through the GUI has a default extent size of 1 GB. The size of the extent is selected at creation time and cannot be changed later. If you want to specify a different extent size at the time of the pool creation, browse to Settings → GUI Preferences and select Advanced pool settings.
In the Create Pool dialog box, enter the pool name and click Create. The new pool is created and is included in the list of storage pools with zero bytes, as shown in Figure 4-24.
Figure 4-24 New pool with zero bytes included in the list
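The same pool can be created from the CLI with the mkmdiskgrp command. The pool name and extent size (in MB) in this sketch are illustrative:
mkmdiskgrp -name Pool0 -ext 1024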
4.2.2 Actions on storage pools
Several actions can be performed on storage pools. They can be accessed through the Pools panel or the MDisks by Pools panel. To select an action, select the storage pool and click Actions. Alternatively, right-click the storage pool.
Figure 4-25 shows the list of available actions for storage pools being accessed through the Pools panel.
Figure 4-25 Actions list for storage pools
Create child pool
Selecting Create Child Pool starts the wizard to create a child storage pool. For information about child storage pools and a detailed description of this wizard, see 4.2.3, “Child storage pools” on page 166.
Rename
Selecting Rename at any time allows you to modify the name of a storage pool, as shown in Figure 4-26. Enter the new name and click Rename.
Figure 4-26 Renaming pools
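The equivalent CLI action uses the chmdiskgrp command; the names in this sketch are illustrative:
chmdiskgrp -name Pool0_new Pool0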
Modify threshold
The storage pool threshold refers to the percentage of storage capacity that must be in use for a warning event to be generated. The threshold is especially useful when using thin-provisioned volumes that are configured to expand automatically.
The threshold can be modified by selecting Modify Threshold and entering the new value, as shown in Figure 4-27. The default threshold is 80%. Warnings can be disabled by setting the threshold to 0%.
Figure 4-27 Modifying pool threshold
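From the CLI, the threshold is set with the warning parameter of the chmdiskgrp command. This sketch sets an illustrative pool to the default 80% warning threshold:
chmdiskgrp -warning 80% Pool0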
Add storage
Selecting Add Storage starts the wizard to assign storage to the pool. For a detailed description of this wizard, see 4.3.1, “Assigning managed disks to storage pools” on page 172.
Edit Throttle
You can create, modify, and remove throttles for pools by using the management GUI or the command-line interface. Throttling is a mechanism to control the amount of resources that are used when the system is processing I/Os on a specific pool. If a throttle is defined, the system either processes the I/O, or delays the processing of the I/O to free resources for more critical I/O.
There are two parameters that can be defined through the Edit Throttle option:
Bandwidth limit defines the maximum amount of bandwidth the pool can process before the system delays I/O processing for this pool.
IOPS limit defines the maximum I/O operations per second the pool can process before the system delays I/O processing for this pool.
If the pool does not have throttle settings configured, selecting Edit Throttle displays a dialog box with blank fields as shown in Figure 4-28. Define the limits and click Create.
Figure 4-28 Edit throttle initial configuration
For a pool that already has defined throttle settings, selecting Edit Throttle displays a different dialog box, in which the current bandwidth and IOPS limits will be displayed, as shown in Figure 4-29. You can either change or remove the current bandwidth and IOPS limits by modifying the values and clicking Save or clicking Remove to disable a limitation.
Figure 4-29 Editing throttles
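Pool throttles can also be managed from the CLI with the mkthrottle, chthrottle, and rmthrottle commands, and listed with lsthrottle. The following sketch, with illustrative limits and pool name, creates a pool throttle; verify the exact parameters against your code level:
mkthrottle -type mdiskgrp -bandwidth 200 -iops 10000 -mdiskgrp Pool0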
View all throttles
Selecting View All Throttles opens a panel (shown in Figure 4-30) that displays the current throttle information, which includes the limits that were previously applied for bandwidth and IOPS.
Figure 4-30 View All Throttles panel
By default, when the View All Throttles panel is opened through the Pools window, it displays throttle information related to pools. From the same panel, you can also select different object categories, as shown in Figure 4-31. Selecting a different category displays the throttle information for that specific selection.
Figure 4-31 Selecting specific throttle information
Delete
Pools can be deleted through the GUI only if no volumes are assigned to the pool. If the pool contains any volumes, the option is not available. Selecting Delete deletes the pool immediately without additional confirmation.
Through the CLI, you can delete a pool and all of its contents by using the -force parameter. However, all volumes and host mappings are deleted, and you cannot recover them.
 
Important: After you delete the pool through the CLI, all data that is stored in the pool is lost except for the image mode MDisks. The image mode MDisk volume definition is deleted, but the data on the imported MDisk remains untouched.
After deleting a pool, all of the managed or image mode MDisks in the pool return to the unmanaged status.
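The CLI equivalent is the rmmdiskgrp command; as noted above, the force parameter also deletes all volumes in the pool. The pool name in this sketch is illustrative:
rmmdiskgrp Pool0
rmmdiskgrp -force Pool0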
Properties
Selecting Properties displays information about the storage pool, as shown in Figure 4-32.
Figure 4-32 Storage pool properties
Additional information is available by clicking View more details and by hovering over the elements on the window, as shown in Figure 4-33. Click Close to return to the Pools panel.
Figure 4-33 Additional details for storage pool properties
Customize columns
Selecting Customize Columns in the Actions menu allows you to include additional information fields in the Pools panel, as shown in Figure 4-34.
Figure 4-34 Customizing columns in the Pools panel
4.2.3 Child storage pools
A child storage pool is a storage pool created within a storage pool. The storage pool in which the child storage pool is created is called the parent storage pool.
Unlike a parent pool, a child pool does not contain MDisks; its capacity is provided exclusively by the parent pool in the form of extents. The capacity of a child pool is set at creation time, but can be nondisruptively modified later. The capacity must be a multiple of the parent pool extent size and must be smaller than the free capacity of the parent pool.
Child pools are useful when the capacity allocated to a specific set of volumes must be controlled.
Child pools inherit most properties from their parent pools, and these properties cannot be changed. The inherited properties include:
Extent size
Easy Tier setting
Encryption setting, but only if the parent pool is encrypted
Creating a child pool
To create a child pool, browse to either Pools → Pools → Actions or Pools → MDisks by Pools → Actions and select Create Child Pool. Alternatively, right-click the parent pool, as shown in Figure 4-35.
Figure 4-35 Selecting child menu creation
Enter the name and the capacity of the child pool and click Create, as shown in Figure 4-36.
Figure 4-36 Create Child Pool panel
 
Note: You cannot create an encrypted child pool from an unencrypted parent pool if the parent pool contains any unencrypted array or an MDisk that is not self-encrypting and there are nodes in the system that do not support software encryption (for example, nodes that do not have encryption license enabled).
An encrypted child pool created from an unencrypted parent pool reports as unencrypted if the parent pool contains any unencrypted arrays. Remove these arrays to ensure that the child pool is fully encrypted.
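A child pool can also be created from the CLI with the mkmdiskgrp command by specifying the parent pool and a capacity. The names and size in this sketch are illustrative:
mkmdiskgrp -parentmdiskgrp Pool0 -name Pool0_child0 -size 100 -unit gb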
After the child pool is created, it is listed in the Pools panel under its parent pool, as shown in Figure 4-37. Toggle the arrow sign to the left of the storage pool name to show or hide the child pools.
Creating a child pool within a child pool is not possible.
Figure 4-37 Child pool list
Actions on child storage pools
All actions supported for parent storage pools are supported for child storage pools, with the exception of Add Storage. Child pools additionally support the Resize action.
To select an action, right-click the child storage pool, as shown in Figure 4-38. Alternatively, select the storage pool and click Actions.
Figure 4-38 Child pools list of actions
Resize
Selecting Resize allows you to increase or decrease the capacity of the child storage pool, as shown in Figure 4-39. Enter the new pool capacity and click Resize.
 
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a child pool needs to be larger than the capacity used by its volumes.
Figure 4-39 Resizing child pools
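From the CLI, a child pool is resized with the size parameter of the chmdiskgrp command; the value and name in this sketch are illustrative:
chmdiskgrp -size 200 -unit gb Pool0_child0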
Delete
Deleting a child pool is similar to deleting a parent pool. As with a parent pool, the Delete action is disabled if the child pool contains volumes. After a child pool is deleted, the extents that it occupied return to the parent pool as free capacity.
 
Note: A volume in a child pool can only be migrated to another child pool within the same parent pool or to its own parent pool. In any other case use volume mirroring instead. During migration from a child pool to its parent pool, or vice versa, there is no real data move. There is only a reassignment of extents between the pools.
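Such a migration can be started from the CLI with the migratevdisk command; the volume and pool names in this sketch are illustrative:
migratevdisk -vdisk volume0 -mdiskgrp Pool0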
4.3 Working with managed disks
A storage pool is created as an empty container, with no storage assigned to it. Storage is then added in the form of MDisks. An MDisk can be either an array from internal storage or an LU from an external storage system. The same storage pool can include both internal and external MDisks.
Arrays are created from internal storage using RAID technology to provide redundancy and increased performance. The system supports two types of RAID: traditional RAID and distributed RAID. Arrays are assigned to storage pools at creation time and cannot be moved between storage pools. It is not possible to have an array that does not belong to any storage pool.
External MDisks can have one of the following modes:
Unmanaged
External MDisks are discovered by the system as unmanaged MDisks. An unmanaged MDisk is not a member of any storage pool, is not associated with any volumes, and has no metadata stored on it. The system does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
Managed
When unmanaged MDisks are added to storage pools, they become managed. Managed mode MDisks are always members of a storage pool and provide extents to the storage pool. This mode is the most common and normal mode for an MDisk.
Image
Image mode provides a direct block-for-block translation from the MDisk to a volume. This mode is provided to satisfy the following major usage scenarios:
 – Virtualization of external LUs that contain data not written through the IBM Storwize V5000 Gen2
 – Exporting MDisks from the IBM Storwize V5000 Gen2 after volume migrations to image mode MDisks.
MDisks are managed through the MDisks by Pools panel. To access the MDisks by Pools panel, browse to Pools → MDisks by Pools, as shown in Figure 4-40.
Figure 4-40 MDisks by Pools panel
The panel lists all the MDisks available in the system under the storage pool to which they belong.
4.3.1 Assigning managed disks to storage pools
MDisks can be assigned to a storage pool at any time to increase the number of extents available in the pool. The system automatically balances volume extents between the MDisks to provide the best performance to the volumes.
Arrays are created and assigned to a storage pool at the same time.
To assign MDisks to a storage pool navigate to Pools → MDisks by Pools and choose one of the following options:
Option 1: Select Add Storage on the right side of the storage pool, as shown in Figure 4-41. The Add Storage button is shown only when the pool has no capacity assigned or when the pool capacity usage is over the warning threshold.
Figure 4-41 Add storage: option 1
Option 2: Right-click the pool and select Add Storage, as shown in Figure 4-42. Alternatively, select a pool and click Actions.
Figure 4-42 Add storage: option 2
Option 3: Select Assign under a specific drive class or external storage controller, as shown in Figure 4-43.
Figure 4-43 Add storage: option 3
Options 1 and 2 start the configuration wizard shown in Figure 4-44.
Figure 4-44 Assigning storage to storage pool
Option 3 starts the quick internal wizard for the selected drive class only, as shown in Figure 4-45.
Figure 4-45 Assigning specific storage class
Quick internal configuration
Selecting Internal suggests a configuration for internal drives that is based on RAID configuration presets, considering the drive class and the number of drives available. It automatically chooses default values for parameters such as stripe width, number of spares (for traditional RAID), number of rebuild areas (for distributed RAID), and number of drives of each class. The number of drives is the only value that can be adjusted.
Figure 4-46 shows an example of a quick configuration.
Figure 4-46 Quick configuration wizard
This configuration combines two drive classes, belonging to two different storage tiers (Nearline and Enterprise). This is the default option and takes advantage of the Easy Tier functionality. However, it can be adjusted by setting the number of drives of a class to zero, as shown in Figure 4-47.
 
Note: If any drive class is not compatible with the drives being assigned, that drive class cannot be selected.
Figure 4-47 Quick configuration wizard with a zeroed storage class
If you are adding storage to a pool with storage already assigned, the existing storage is also taken into consideration, with some properties being inherited from existing arrays for a given drive class. Drive classes incompatible with the classes already in the pool are disabled as well.
When you are satisfied with the presented configuration, click Assign, as shown in Figure 4-48. The array MDisks are then created and initialized in the background.
Figure 4-48 Clicking assign on quick configuration wizard
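On the CLI, a traditional array is created and assigned to a pool in one step with the mkarray command. The RAID level, drive IDs, and pool name in this sketch are illustrative:
mkarray -level raid6 -drive 0:1:2:3:4:5 Pool0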
Advanced internal custom configuration
Selecting Internal Custom allows the user to customize the configuration for internal drives.
 
Tip: Use the advanced configuration only when the suggested quick configuration does not fit your business requirements.
The following values can be customized:
RAID level
Number of spares
Array width
Stripe width
Number of drives of each class
Figure 4-49 shows an example with six drives ready to be configured as RAID 6. Click Summary to see the list of array MDisks to be created. To return to the default settings, select the refresh icon next to the pool capacity. To create and assign the arrays, click Assign.
Figure 4-49 Advanced internal custom configuration
4.3.2 RAID configuration
In this topic, we describe the Redundant Array of Independent Disks (RAID) configuration and technology.
Introduction to RAID technology
RAID provides two key design goals:
Increased data reliability
Increased input/output (I/O) performance
When multiple physical disks are set up to use the RAID technology, they are in a RAID array. The IBM Storwize V5000 Gen2 provides multiple, traditional RAID levels:
RAID 0
RAID 1
RAID 5
RAID 6
RAID 10
RAID technology can provide better performance for data access, high availability for the data, or a combination. RAID levels define a trade-off between high availability, performance, and cost.
The RAID concept must also be considered in terms of disk rebuild time, which grows as physical disk capacities increase.
When a disk fails, traditional RAID rebuilds the data onto a single spare drive. With increasing capacity, the rebuild time also increases, and a second failure during the rebuild process becomes more likely. In addition, spare drives are idle when they are not being used, which wastes resources.
Distributed RAID (DRAID) addresses those points and it is available for the IBM Storwize V5000 Gen2 in two types:
Distributed RAID 5 (DRAID 5)
Distributed RAID 6 (DRAID 6)
Distributed RAID reduces the recovery time and the probability of a second failure during rebuild. Just like traditional RAID, a distributed RAID 5 array can lose one physical drive and survive. If another drive fails in the same array before the failed drive is recovered, the MDisk and the storage pool go offline, as expected. So, distributed RAID does not change the general RAID behavior.
 
Note: Although Traditional RAID is still supported and is the default choice in the GUI, the suggestion is to use distributed RAID 6 whenever possible.
4.3.3 Distributed RAID
In distributed RAID, all drives are active, which improves performance. Spare capacity is used instead of the idle spare drives of traditional RAID. Because no drives are set aside as spares, all drives contribute to performance. The spare capacity is rotated across the disk drives, so the write rebuild load is distributed across multiple drives and the bottleneck of one drive is removed.
Figure 4-50 on page 181 shows an example of a distributed RAID with 10 disks. The physical disk drives are divided into multiple packs. The reserved spare capacity (which is marked in yellow) is equivalent to two spare drives, but the capacity is distributed across all of the physical disk drives.
The data is distributed across a single row. For simplification, not all packs are shown in Figure 4-50.
Figure 4-50 Distributed RAID 6
Figure 4-51 on page 182 shows a single drive failure in the distributed RAID 6 (DRAID 6) environment. Physical disk 3 failed and the RAID 6 algorithm is using the spare capacity for a single spare drive in each pack for rebuild (which is marked in green). All disk drives are involved in the rebuild process, which significantly reduces the rebuild time.
For simplification, not all packs are shown in Figure 4-51.
Figure 4-51 Single drive failure with DRAID 6
The use of multiple drives improves the rebuild process, which can be up to 10 times faster than in traditional RAID. This speed is even more important when you use large drives.
The conversion from traditional RAID to distributed RAID is possible by using volume mirroring or volume migration. Mixing traditional RAID and distributed RAID in the same storage pool is also possible.
Example
The same number of disks can be configured by using traditional or distributed RAID. In our example, we use 6 disk drives and assign those disks as RAID 6 to a single pool.
Figure 4-52 shows the setup for a traditional RAID 6 environment. The pool consists of one MDisk, with 5 disk drives. The spare drive is not listed in this summary.
Figure 4-52 Array configuration for a traditional RAID 6 with 6 disks
Figure 4-53 shows the setup for a distributed RAID 6 environment. The pool consists of a single MDisk with 6 disk drives. The spare drives are included in this summary.
Figure 4-53 Array configuration for a distributed RAID 6 with 6 disks
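A similar distributed array can be created from the CLI with the mkdistributedarray command. In this sketch, the drive class, drive count, stripe width, number of rebuild areas, and pool name are illustrative; defaults apply to any omitted parameters:
mkdistributedarray -level raid6 -driveclass 0 -drivecount 6 -stripewidth 5 -rebuildareas 1 Pool0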
4.3.4 RAID configuration presets
RAID configuration presets are used to configure internal drives. They are based on the advised values for the RAID level and drive class. Each preset has a specific goal for the number of drives per array and the number of spare drives to maintain redundancy.
For the best performance with solid-state drives (SSDs), arrays with the same number of drives are recommended, following the same design as traditional RAID arrays.
Table 4-1 describes the presets that are used for Flash drives for the IBM Storwize V5000 Gen2 storage system.
Table 4-1 Flash RAID presets
Each preset is described in the following format: preset name, purpose, RAID level, drives per array goal, drive count (min - max), and spare drive goal.
Flash RAID 5: Protects against a single drive failure. Data and one stripe of parity are striped across all array members. RAID level: 5. Drives per array goal: 8. Drive count: 3 - 16. Spare drive goal: 1.
Flash Distributed RAID 5: Protects against a single drive failure. Data and one stripe of parity are striped across all array members. RAID level: 5. Drives per array goal: 8. Drive count: 3 - 16. Spare drive goal: 1.
Flash RAID 6: Protects against two drive failures. Data and two stripes of parity are striped across all array members. RAID level: 6. Drives per array goal: 12. Drive count: 5 - 16. Spare drive goal: 1.
Flash Distributed RAID 6: Protects against two drive failures. Data and two stripes of parity are striped across all array members. RAID level: 6. Drives per array goal: 12. Drive count: 5 - 16. Spare drive goal: 1.
Flash RAID 10: Protects against at least one drive failure. All data is mirrored on two array members. RAID level: 10. Drives per array goal: 8. Drive count: 4 - 16 (even number of drives). Spare drive goal: 1.
Flash RAID 1: Protects against at least one drive failure. All data is mirrored on two array members. RAID level: 1. Drives per array goal: 2. Drive count: 2. Spare drive goal: 1.
Flash RAID 0: Provides no protection against drive failures. RAID level: 0. Drives per array goal: 8. Drive count: 1 - 8. Spare drive goal: 0.
Flash Easy Tier: Mirrors data to protect against drive failure. The mirrored pairs are spread between storage pools to use for the Easy Tier function. RAID level: 10. Drives per array goal: 2. Drive count: 4 - 16 (even number of drives). Spare drive goal: 1.
 
Flash RAID instances: In all Flash RAID instances, drives in the array are balanced across enclosure chains, if possible.
Table 4-2 describes the RAID presets that are used for Enterprise SAS and Nearline SAS drives for the IBM Storwize V5000 Gen2 storage system.
Table 4-2 Hard disk drive (HDD) RAID presets
Each preset is described in the following format: preset name, purpose, RAID level, drives per array goal, drive count (min - max), spare goal, and chain balance.
Basic RAID 5: Protects against a single drive failure. Data and one stripe of parity are striped across all array members. RAID level: 5. Drives per array goal: 8. Drive count: 3 - 16. Spare goal: 1. Chain balance: all drives in the array are from the same chain wherever possible.
Distributed RAID 5: Protects against a single drive failure. Data and one stripe of parity are striped across all array members. RAID level: 5. Drives per array goal: 48 - 60. Drive count: 4 - 128. Spare goal: 1 (0 - 36 drives), 2 (37 - 72 drives), 3 (73 - 100 drives), 4 (101 - 128 drives). Chain balance: all drives in the array are from the same chain wherever possible.
Basic RAID 6: Protects against two drive failures. Data and two stripes of parity are striped across all array members. RAID level: 6. Drives per array goal: 12. Drive count: 5 - 16. Spare goal: 1. Chain balance: all drives in the array are from the same chain wherever possible.
Distributed RAID 6: Protects against two drive failures. Data and two stripes of parity are striped across all array members. RAID level: 6. Drives per array goal: 48 - 60. Drive count: 6 - 128. Spare goal: 1 (0 - 36 drives), 2 (37 - 72 drives), 3 (73 - 100 drives), 4 (101 - 128 drives). Chain balance: all drives in the array are from the same chain wherever possible.
Basic RAID 10: Protects against at least one drive failure. All data is mirrored on two array members. RAID level: 10. Drives per array goal: 8. Drive count: 4 - 16 (must be an even number of drives). Spare goal: 1. Chain balance: all drives in the array are from the same chain wherever possible.
Balanced RAID 10: Protects against at least one drive or enclosure failure. All data is mirrored on two array members. The mirrors are balanced across the two enclosure chains. RAID level: 10. Drives per array goal: 8. Drive count: 4 - 16 (even number of drives). Spare goal: 1. Chain balance: exactly half of the drives are from each chain.
RAID 0: Provides no protection against drive failures. RAID level: 0. Drives per array goal: 8. Drive count: 1 - 8. Spare goal: 0. Chain balance: all drives in the array are from the same chain wherever possible.
4.3.5 Actions on arrays
MDisks created from internal storage are RAID arrays and support specific actions that are not supported on external MDisks. Some actions supported on traditional RAID arrays are not supported on distributed RAID arrays and vice versa.
To choose an action select the array and click Actions, as shown in Figure 4-54. Alternatively, right-click the array.
Figure 4-54 Available actions on arrays
Swap drive
Selecting Swap Drive allows the user to replace a drive in an array with another drive. The other drive needs to have a use of Candidate or Spare. This action can be used to replace a drive that is expected to fail soon.
Figure 4-55 shows the dialog box that opens. Select the member drive to be replaced and the replacement drive.
Figure 4-55 Swap Drive panel
After defining the disk to be removed and the disk to be added, click Swap as shown in Figure 4-56 on page 186.
Figure 4-56 Swap button on Swap Drive panel
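The CLI equivalent is the charraymember command, which exchanges a member of the array for a candidate or spare drive. The member index, drive ID, and array MDisk name in this sketch are illustrative:
charraymember -member 5 -newdrive 12 mdisk2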
Set Spare Goal
This action is available only for traditional RAID arrays. Selecting Set Spare Goal allows you to set the number of spare drives required to protect the array from drive failures. If the number of spare drives available does not meet the configured goal, an error is logged in the event log. This error can be fixed by adding more drives of a compatible drive class as spares.
Figure 4-57 shows the dialog box that opens when you select Set Spare Goal. Define the number of required spares and click Save.
Figure 4-57 Spare Goal panel
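From the CLI, the spare goal is set with the charray command; the goal and array name in this sketch are illustrative:
charray -sparegoal 2 mdisk2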
Set rebuild areas goal
This action is available only for distributed RAID arrays. Selecting Set Rebuild Areas Goal enables you to set the number of rebuild areas required to protect the array from drive failures. If the number of rebuild areas available does not meet the configured goal, an error is logged in the event log. This error can be fixed by replacing the failed drives in the array with new drives of a compatible drive class.
Figure 4-58 shows the dialog box that opens when you select Set Rebuild Areas Goal. Define the number of required rebuild areas and click Save.
Figure 4-58 Rebuild Areas Goal panel
Delete
Selecting Delete removes the array from the storage pool and deletes it.
 
Remember: An array does not exist outside of a storage pool. Therefore an array cannot be removed from the pool without being deleted.
If no volumes use extents from the array, the deletion command runs immediately without additional confirmation. If volumes use extents from the array, you are prompted to confirm the action, as shown in Figure 4-59. Click Yes to migrate the volumes or No to cancel the deletion process.
Figure 4-59 MDisk deletion confirmation panel
Confirming the action starts the migration of the volumes to extents from other MDisks that remain in the pool; after the action completes the array is removed from the storage pool and deleted.
 
Note: Ensure that enough available capacity remains in the storage pool to allocate the data that is migrated from the removed array; otherwise, the command fails.
Drives
Selecting Drives shows information about the drives that are included in the array, as shown in Figure 4-60.
Figure 4-60 Panel showing the drives that are members of an MDisk
4.3.6 Actions on external MDisks
External MDisks support specific actions that are not supported on arrays. Some actions are supported only on unmanaged external MDisks and some are supported only on managed external MDisks.
To choose an action right-click the external MDisk, as shown in Figure 4-61. Alternatively, select the external MDisk and click Actions.
Figure 4-61 Available actions for external MDisks
Assign
This action is available only for unmanaged MDisks. Selecting Assign opens the dialog box shown in Figure 4-62. This action acts only on the selected MDisk or MDisks.
Figure 4-62 Assigning external MDisks to a pool
 
Important: If you need to preserve existing data on an unmanaged MDisk do not assign it to a storage pool because this action deletes the data on the MDisk. Use Import instead.
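From the CLI, unmanaged MDisks are added to a pool with the addmdisk command, which carries the same risk to existing data. The MDisk and pool names in this sketch are illustrative:
addmdisk -mdisk mdisk5 Pool0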
Modify tier
Selecting Modify Tier allows the user to modify the tier to which the external MDisk is assigned, as shown in Figure 4-63. This setting is adjustable because the system cannot detect the tiers associated with external storage automatically. Enterprise Disk (Tier 2) is the option selected by default.
Figure 4-63 Modifying external MDisk tier
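The CLI equivalent uses the tier parameter of the chmdisk command. The tier value and MDisk name in this sketch are illustrative, and the valid tier names depend on your code level:
chmdisk -tier tier_enterprise mdisk5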
Modify encryption
This option is available only when encryption is enabled. Selecting Modify Encryption allows the user to modify the encryption setting for the MDisk, as shown in Figure 4-64 on page 191.
For example, if the external MDisk is already encrypted by the external storage system, change the encryption state of the MDisk to Externally encrypted. This stops the system from encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
Figure 4-64 Modifying external MDisk encryption
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk allows the user to preserve the data on the MDisk, either by migrating the data to a new volume or by keeping the data on the external system.
Selecting Import allows you to choose one of the following migration methods:
Import to temporary pool as image-mode volume does not migrate data from the source MDisk. It creates an image-mode volume that has a direct block-for-block translation of the MDisk. The existing data is preserved on the external storage system, but it is also accessible from the IBM Storwize V5000 Gen2 system.
If this method is selected the image-mode volume is created in a temporary migration pool and presented through the IBM Storwize V5000 Gen2. Choose the extent size of the temporary pool and click Import, as shown in Figure 4-65.
Figure 4-65 Importing an external MDisk as an image-mode volume
The MDisk is imported and listed as an image-mode MDisk in the temporary migration pool, as shown in Figure 4-66.
Figure 4-66 Image-mode MDisk
A corresponding image-mode volume is now available in the same migration pool, as shown in Figure 4-67.
Figure 4-67 Corresponding image-mode volume
The image-mode volume can then be mapped to the original host. The data is still physically present on the disks of the original external storage controller, and no automatic migration process is running. If needed, the image-mode volume can be migrated manually to another storage pool by using volume migration or volume mirroring later.
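On the CLI, an image-mode volume is created with the mkvdisk command by setting the vtype parameter to image. The pool, I/O group, MDisk, and volume names in this sketch are illustrative:
mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -mdisk mdisk5 -vtype image -name imported_vol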
Migrate to an existing pool starts by creating an image-mode volume, as in the first method. However, it then migrates the data from the image-mode volume onto another volume in the selected storage pool. After the migration process completes, the image-mode volume and the temporary migration pool are deleted.
If this method is selected, choose the storage pool to hold the new volume and click Import, as shown in Figure 4-68. Only pools with sufficient free extent capacity are listed.
Figure 4-68 Importing an external MDisk to an existing pool
The data migration begins automatically after the MDisk is imported successfully as an image-mode volume. You can check the migration progress by navigating to Pools → System Migration, as shown in Figure 4-69.
Figure 4-69 Importing external MDisk progress
After the migration completes, the volume is available in the chosen destination pool, as shown in Figure 4-70. This volume is no longer an image-mode volume; it is a normal striped volume.
Figure 4-70 Striped volume after migration
At this point all data has been migrated from the source MDisk and the MDisk is no longer in image mode, as shown in Figure 4-71. The MDisk can be removed from the temporary pool and used as a regular MDisk to host volumes.
Figure 4-71 Volume shows in Managed mode
Alternatively, import and migration of external MDisks to another pool can be done by selecting Pools → System Migration. Migration and the system migration wizard are described in more detail in Chapter 7, “Storage migration” on page 347.
Include
The system can exclude an MDisk with multiple I/O failures or persistent connection errors from its storage pool to ensure these errors do not interfere with data access. If an MDisk has been automatically excluded, run the fix procedures to resolve any connection and I/O failure errors. Drives used by the excluded MDisk with multiple errors might require replacing or reseating.
After the problems have been fixed, select Include to add the excluded MDisk back into the storage pool.
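The CLI equivalent is the includemdisk command; the MDisk name in this sketch is illustrative:
includemdisk mdisk5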
Remove
In some cases, you might want to remove external MDisks from storage pools to reorganize your storage allocation. Selecting Remove removes the MDisk from the storage pool. After the MDisk is removed, it returns to unmanaged mode. If there are no volumes in the storage pool to which this MDisk is allocated, the command runs immediately without additional confirmation. If there are volumes in the pool, you are prompted to confirm the action, as shown in Figure 4-72. Click Yes to migrate the volumes or No to cancel the removal process.
Figure 4-72 Removing an external MDisk
Confirming the action starts the migration of the volumes to extents from other MDisks that remain in the pool. When the action completes, the MDisk is removed from the storage pool and returns to unmanaged.
 
Note: Ensure that you have enough available capacity remaining in the storage pool to allocate the data being migrated from the removed MDisk or else the command fails.
Important: The MDisk being removed must remain accessible to the system while all data is copied to other MDisks in the same storage pool. If the MDisk is unmapped before the migration finishes all volumes in the storage pool go offline and remain in this state until the removed MDisk is connected again.
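The CLI equivalent is the rmmdisk command; the force parameter starts the migration of the used extents to the remaining MDisks in the pool. The names in this sketch are illustrative:
rmmdisk -mdisk mdisk5 -force Pool0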
4.3.7 More actions on MDisks
There are a few additional actions supported both on arrays and external MDisks.
Discover storage
The Discover storage option in the upper left of the MDisks by Pools window is useful if external storage controllers are in your environment. (For more information, see Chapter 11, “External storage virtualization” on page 635). The Discover storage action starts a rescan of the Fibre Channel network. It discovers any new MDisks that were mapped to the IBM Storwize V5000 Gen2 storage system and rebalances MDisk access across the available controller device ports.
This action also detects any loss of controller port availability and updates the IBM Storwize V5000 Gen2 configuration to reflect any changes.
When external storage controllers are added to the IBM Storwize V5000 Gen2 environment, the IBM Storwize V5000 Gen2 automatically discovers the controllers, and the logical unit numbers (LUNs) that are presented by those controllers are listed as unmanaged MDisks.
However, if you attached new storage and the IBM Storwize V5000 Gen2 did not detect it, you might need to use the Discover storage option before the system detects the new LUNs. If the configuration of the external controllers is modified afterward, the IBM Storwize V5000 Gen2 might be unaware of these configuration changes. Use Discover storage to rescan the Fibre Channel network and update the list of unmanaged MDisks.
Figure 4-73 shows the Discover storage option.
Figure 4-73 Discover storage action
 
Note: The Discover storage action is asynchronous. Although the task appears to be finished, it might still be running in the background.
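The CLI equivalent is the detectmdisk command, which is also asynchronous. Newly discovered LUNs can then be listed by filtering for unmanaged MDisks:
detectmdisk
lsmdisk -filtervalue mode=unmanaged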
Rename
MDisks can be renamed by selecting the MDisk and clicking Rename from the Actions menu. Enter the new name of your MDisk (as shown in Figure 4-74) and click Rename.
Figure 4-74 Rename MDisk
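The CLI equivalent uses the name parameter of the chmdisk command; the names in this sketch are illustrative:
chmdisk -name mdisk5_new mdisk5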
Show Dependent Volumes
Figure 4-75 shows the volumes that depend on an MDisk. The volumes can be displayed by selecting the MDisk and clicking Show Dependent Volumes from the Actions menu. The volumes are listed with general information.
Figure 4-75 Show MDisk dependent volumes
Properties
The Properties action for an MDisk shows the information that you need to identify it. In the MDisks by Pools window, select the MDisk and click Properties from the Actions menu. Alternatively, right-click the MDisk and select Properties. For additional information related to the selected MDisk, click View more details as shown in Figure 4-76.
Figure 4-76 MDisk properties
4.4 Working with external storage controllers
After the internal storage configuration is complete, you can find the MDisks that were created by using the internal drives in the MDisks by Pools window. When you use this window, you can manage all MDisks that are made up of internal and external storage.
Logical unit numbers (LUNs) that are presented by external storage systems to IBM Storwize V5000 Gen2 are discovered as unmanaged MDisks. Initially, the MDisk is not a member of any storage pools, which means that it is not used by the IBM Storwize V5000 Gen2 storage system.
To learn more about external storage, see Chapter 11, “External storage virtualization” on page 635.