Storage pools
This chapter describes how IBM SAN Volume Controller (SVC) manages physical storage resources. All storage resources under system control are managed by using storage pools. Storage pools aggregate internal and external capacity and provide the containers in which volumes can be created. Storage pools make it easier to dynamically allocate resources, maximize productivity, and reduce costs.
This chapter includes the following topics:
6.1, “Working with storage pools”
6.2, “Working with managed disks”
6.3, “Working with internal drives”
6.4, “Working with external storage controllers”
6.1 Working with storage pools
A managed disk (MDisk) is a logical unit (LU) of physical storage. MDisks are either arrays (RAID) from internal storage or LUs that are exported from external storage systems. Storage pools act as a container for MDisks by dividing the MDisks into extents. Storage pools provision the available capacity from the extents to volumes.
Figure 6-1 provides an overview of how storage pools, MDisks, and volumes are related. This pane is available by browsing to Monitoring → System and clicking the Overview button in the upper-right corner of the pane. In the example in Figure 6-1, the system has four LUs from internal disk arrays, no LUs from external storage, four storage pools, and 93 defined volumes mapped to four hosts.
Figure 6-1 Relationship between MDisks, Storage Pools, and Volumes
SVC organizes storage into pools to ease storage management and make it more efficient. All MDisks in a pool are split into extents of the same size and volumes are created out of the available extents. The extent size is a property of the storage pool and cannot be changed after the pool is created. It is possible to add MDisks to an existing pool to provide additional extents.
Storage pools can be further divided into subcontainers that are called child pools. Child pools inherit the properties of the parent pool (extent size, throttle, reduction feature) and can also be used to provision volumes.
Storage pools are managed by using either the Pools pane or the MDisks by Pools pane. Both panes allow you to run the same actions on parent pools. However, actions on child pools can be performed only through the Pools pane. To access the Pools pane, click Pools → Pools, as shown in Figure 6-2.
Figure 6-2 Accessing the Storage Pool pane
The pane lists all storage pools available in the system. If a storage pool has child pools, you can toggle the sign to the left of the storage pool icon to either show or hide the child pools.
6.1.1 Creating storage pools
To create a storage pool, you can use one of the following alternatives:
Navigate to Pools → Pools and click Create, as shown in Figure 6-3.
Figure 6-3 Option to create a storage pool in the Pools pane
Navigate to Pools → MDisks by Pools and click Create Pool, as shown in Figure 6-4.
Figure 6-4 Option to create a storage pool in the MDisks by Pools pane
Both alternatives open the dialog box that is shown in Figure 6-5.
Figure 6-5 Create Pool dialog box
Every storage pool that is created by using the GUI has a default extent size of 1 GB. The extent size is selected at creation time and cannot be changed later. If you want to specify a different extent size, browse to Settings → GUI Preferences and select Advanced pool settings, as shown in Figure 6-6.
Figure 6-6 Advanced pool settings
When advanced pool settings are enabled, you can additionally select an extent size at creation time, as shown in Figure 6-7.
Figure 6-7 Creating a pool with advanced settings selected
If encryption is enabled, you can additionally select whether the storage pool is encrypted, as shown in Figure 6-8.
 
Note: The encryption setting of a storage pool is selected at creation time and cannot be changed later. By default, if encryption is enabled, encryption is selected. For more information about encryption and encrypted storage pools, see Chapter 12, “Encryption” on page 633.
Figure 6-8 Creating a pool with Encryption enabled
Enter the name for the pool and click Create. The new pool is created and is included in the list of storage pools with zero bytes, as shown in Figure 6-9.
Figure 6-9 Newly created empty pool
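Pools can also be created with the CLI. As a minimal sketch (the pool name Pool0 is an example), the following commands create a pool with the default 1 GB extent size, which the mkmdiskgrp command expects in MB, and then display it:
mkmdiskgrp -name Pool0 -ext 1024
lsmdiskgrp Pool0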
6.1.2 Actions on storage pools
A number of actions can be performed on storage pools. To select an action, select the storage pool and click Actions, as shown in Figure 6-10. Alternatively, right-click the storage pool.
Figure 6-10 Pools actions menu
Create child pool
Selecting Create Child Pool starts the wizard to create a child storage pool. For information about child storage pools and a detailed description of this wizard, see 6.1.3, “Child storage pools” on page 208. It is not possible to create a child pool from an empty pool.
Rename
Selecting Rename allows you to modify the name of a storage pool, as shown in Figure 6-11. Enter the new name and click Rename.
Figure 6-11 Renaming a pool
 
Naming rules: When you choose a name for a pool, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a volume named ABC and an MDisk called ABC, but you cannot have two volumes called ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
Modify Threshold
The storage pool threshold refers to the percentage of storage capacity that must be in use for a warning event to be generated. When you use thin-provisioned volumes that auto-expand (that is, automatically use available extents from the pool), monitor the capacity usage so that you are warned before the pool runs out of free extents and can add storage in time. If a thin-provisioned volume does not have sufficient extents to expand, it goes offline and error 1865 is generated.
The threshold can be modified by selecting Modify Threshold and entering the new value, as shown in Figure 6-12. The default threshold is 80%. Warnings can be disabled by setting the threshold to 0%.
Figure 6-12 Modifying a pool’s threshold
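The threshold can also be changed with the CLI. For example (the pool name Pool0 is an example), the following command sets the warning threshold to 80% of the pool capacity; setting it to 0% disables the warning:
chmdiskgrp -warning 80% Pool0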
The threshold is visible in the pool properties and is indicated with a red bar, as shown in Figure 6-13.
Figure 6-13 Pool properties
Add storage
Selecting Add Storage starts the wizard to assign storage to the pool. For a detailed description of this wizard, see 6.2.1, “Assigning managed disks to storage pools” on page 214.
Edit Throttle
Clicking this option opens a new window in which you can set the pool’s throttle.
Throttles can be defined for storage pools to control I/O operations on storage systems. Storage pool throttles can be used to avoid overwhelming the back-end storage (either external or internal) and can be used with virtual volumes. Because virtual volumes use child pools, a throttle limit on the child pool can control the I/O operations of the corresponding virtual volumes. Parent and child pool throttles are independent of each other, so a child pool can have higher throttle limits than its parent pool. See 6.1.3, “Child storage pools” on page 208 for information about child pools.
You can define a throttle for IOPS, bandwidth, or both as shown in Figure 6-14.
The IOPS limit indicates the maximum number of IOPS allowed. The value is a numeric string 0 - 33554432. If no limit is specified, the value is blank.
The bandwidth limit indicates the maximum bandwidth, in megabytes per second (MBps). The value is a numeric string 0 - 268435456. If no limit is specified, the value is blank.
Figure 6-14 Editing throttle for a pool
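A pool throttle can also be created with the CLI. As a sketch (the pool name and limits are examples), the following command limits the pool to 100 MBps and 10000 IOPS:
mkthrottle -type mdiskgrp -bandwidth 100 -iops 10000 -mdiskgrp Pool0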
If more than one throttle applies to an I/O operation, the lowest and most stringent throttle is used. For example, if a throttle of 200 MBps is defined on a pool and a throttle of 100 MBps is defined on a volume of that pool, the I/O operations are limited to 100 MBps.
 
Note: The storage pool throttle objects for a child pool and a parent pool work independently of each other.
View all Throttles
It is possible to display defined throttles from the Pools pane. Right-click a pool and select View all Throttles to display the list of pool throttles. If you want to view the throttles of other elements (such as volumes or hosts), you can select All Throttles in the drop-down list, as shown in Figure 6-15.
Figure 6-15 Viewing all throttles
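The CLI equivalent is the lsthrottle command, which lists all throttle objects that are defined on the system:
lsthrottle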
A child pool throttle is independent of its parent pool throttle. However, volumes in a child pool inherit the throttle from the pool they are in. In the example in Figure 6-15, “T3_SASNL_child” has a throttle of 200 MBps defined, its parent pool “T3_SASNL” has a throttle of 100 MBps, and volume “TEST_ITSO” has a throttle of 1000 IOPS. If the workload applied to the volume is greater than 200 MBps, it is capped by the “T3_SASNL_child” throttle.
Delete
A storage pool can be deleted using the GUI only if no volumes are associated with it. Selecting Delete deletes the pool immediately without any additional confirmation.
 
Note: If there are volumes in the pool, Delete cannot be selected. If that is the case, either delete the volumes or move them to another storage pool before proceeding. To move a volume, you can either migrate it or use volume mirroring. For information about volume migration and volume mirroring, see Chapter 7, “Volumes” on page 251.
After you delete a pool, the following actions occur:
All the managed or image mode MDisks in the pool return to a status of unmanaged.
All the array mode MDisks in the pool are deleted.
All member drives return to a status of candidate.
Properties
Selecting Properties displays information about the storage pool. Additional information is available by clicking View more details and by hovering over the elements on the window, as shown in Figure 6-16.
Figure 6-16 Pool properties and details
6.1.3 Child storage pools
A child storage pool is a storage pool that is created within another storage pool. The storage pool in which the child storage pool is created is called the parent storage pool.
Unlike a parent pool, a child pool does not contain MDisks. Its capacity is provided exclusively by the parent pool in the form of extents. The capacity of a child pool is set at creation time, but can be modified later nondisruptively. The capacity must be a multiple of the parent pool extent size and must be smaller than the free capacity of the parent pool.
Child pools are useful when the capacity allocated to a specific set of volumes must be controlled. For example, child pools can be used with VMware vSphere Virtual Volumes (VVols). Storage administrators can restrict the access of VMware administrators to only a part of the storage pool and prevent volume creation from affecting the rest of the parent storage pool.
Child pools can also be useful when strict control over thin-provisioned volume expansion is needed. You could, for example, create a child pool with no volumes in it to act as an emergency set of extents. That way, if the parent pool ever runs out of free extents, you can use the extents from the child pool.
Child pools can also be used when a different encryption key is needed for different sets of volumes.
Child pools inherit most properties from their parent pools, and these properties cannot be changed. The inherited properties include the following:
Extent size
Easy Tier setting
Encryption setting, but only if the parent pool is encrypted
 
Note: For information about encryption and encrypted child storage pools, see Chapter 12, “Encryption” on page 633.
Creating a child storage pool
To create a child pool, browse to Pools → Pools, right-click the parent pool that you want to create a child pool from, and select Create Child Pool, as shown in Figure 6-17.
Figure 6-17 Creating a child pool
When the dialog window opens, enter the name and capacity of the child pool and click Create, as shown in Figure 6-18.
Figure 6-18 Defining a child pool
After the child pool is created, it is listed in the Pools pane under its parent pool, as shown in Figure 6-19. Toggle the sign to the left of the storage pool icon to either show or hide the child pools.
Figure 6-19 Listing parent and child pools
Creation of a child pool within a child pool is not possible.
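A child pool can also be created with the CLI. As a sketch (the names and the 100 GiB capacity are examples), the following command creates a child pool that takes its capacity from the parent pool Pool0:
mkmdiskgrp -parentmdiskgrp Pool0 -name Pool0_child0 -size 100 -unit gb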
Actions on child storage pools
All actions supported for parent storage pools are supported for child storage pools, with the exception of Add Storage. Child pools additionally support the Resize action.
To select an action, right-click the child storage pool, as shown in Figure 6-20. Alternatively, select the storage pool and click Actions.
Figure 6-20 Actions on child pools
Resize
Selecting Resize allows you to increase or decrease the capacity of the child storage pool, as shown in Figure 6-21. Enter the new pool capacity and click Resize.
 
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a child pool needs to be larger than the capacity used by its volumes.
Figure 6-21 Resizing a child pool
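A child pool can also be resized with the chmdiskgrp CLI command. As an assumed sketch (the names and size are examples), the following command sets the child pool capacity to 200 GiB:
chmdiskgrp -size 200 -unit gb Pool0_child0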
When the child pool is shrunk, the system resets the warning threshold and issues a warning if the threshold is reached.
Delete
Deleting a child pool is similar to deleting a parent pool. As with a parent pool, the Delete action is disabled if the child pool contains volumes, as shown in Figure 6-22.
Figure 6-22 Deleting a child pool
After deleting a child pool, the extents that it occupied return to the parent pool as free capacity.
Volume migration
To move a volume to another pool, you can use migration or volume mirroring in the same way you use them for parent pools. For information about volume migration and volume mirroring, see Chapter 7, “Volumes” on page 251.
Volume migration between a child storage pool and its parent storage pool can be performed in the Volumes menu, on the Volumes by Pool page. Right-clicking a volume allows you to migrate it into a suitable pool.
In the example in Figure 6-23, volume TEST_ITSO was created in child pool T3_SASNL_child. Note that child pools appear exactly like parent pools in the Volumes by Pool pane.
Figure 6-23 Actions menu in Volumes by pool
A volume in a child pool can be migrated only to its parent pool or to another child pool of the same parent pool. As shown in Figure 6-24, the volume TEST_ITSO can be migrated only to its parent pool (T3_SASNL) or to another child pool of the same parent (T3_SASNL_child0). This migration limitation does not apply to volumes belonging to parent pools.
During a volume migration within a parent pool (between a child pool and its parent or between child pools with the same parent), there is no data movement, only extent reassignment.
Figure 6-24 Migrating a volume within a parent pool
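On the CLI, such a migration is performed with the migratevdisk command. Using the example objects above, the following command moves the volume back to its parent pool:
migratevdisk -vdisk TEST_ITSO -mdiskgrp T3_SASNL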
6.1.4 Encrypted storage pools
SVC supports two types of encryption: Hardware encryption and software encryption.
Hardware encryption is implemented at an array level, whereas software encryption is implemented at a storage pool level. For information about encryption and encrypted storage pools, see Chapter 12, “Encryption” on page 633.
6.2 Working with managed disks
A storage pool is created as an empty container, with no storage assigned to it. Storage is then added in the form of MDisks. An MDisk can be either an array from internal storage or an LU from an external storage system. The same storage pool can include both internal and external MDisks.
Arrays are created from internal storage using RAID technology to provide redundancy and increased performance. The system supports two types of RAID: Traditional RAID and distributed RAID. Arrays are assigned to storage pools at creation time and cannot be moved between storage pools. You cannot have an array that does not belong to any storage pool.
External MDisks can have one of the following modes:
Unmanaged
External MDisks are discovered by the system as unmanaged MDisks. An unmanaged MDisk is not a member of any storage pool. It is not associated with any volumes, and has no metadata stored on it. The system does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
Managed
When unmanaged MDisks are added to storage pools, they become managed. Managed mode MDisks are always members of a storage pool, and their extents contribute to the storage pool. This mode is the most common and normal mode for an MDisk.
Image
Image mode provides a direct block-for-block translation from the MDisk to a volume. This mode is provided to satisfy the following major usage scenarios:
 – Virtualization of external LUs that contain data that is not written through the SVC.
 – Exporting MDisks from the SVC after migration of volumes to image mode MDisks.
MDisks are managed by using the MDisks by Pools pane. To access the MDisks by Pools pane, browse to Pools → MDisks by Pools, as shown in Figure 6-25.
Figure 6-25 MDisks by pool
The pane lists all the MDisks available in the system under the storage pool to which they belong. Both arrays and external MDisks are listed.
Additionally, external MDisks can be managed through the External Storage pane. To access the External Storage pane, browse to Pools → External Storage.
6.2.1 Assigning managed disks to storage pools
MDisks can be assigned to a storage pool at any time to increase the number of extents available in the pool. Unless Easy Tier is turned off for the storage pool you are adding MDisks to, the system automatically balances volume extents between the MDisks to provide the best performance to the volumes.
For more information about IBM Easy Tier feature, see Chapter 10, “Advanced features for storage efficiency” on page 407.
 
Note: When Easy Tier is turned on for a pool, movement of extents between tiers of storage (inter-tier) or between MDisks within a single tier (intra-tier) is based on the activity that is monitored. Therefore, when adding an MDisk to a pool, extent migration will not be performed immediately. No migration of extents will occur until there is sufficient activity to trigger it.
If balancing of extents within the pool is needed immediately after the MDisks are added, manual extent placement is required. Because this manual process can be quite complex, IBM provides a script for this purpose.
This script provides a solution to the problem of rebalancing the extents in a pool after a new MDisk has been added. The script uses available free space to shuffle extents until the number of extents from each volume on each MDisk is directly proportional to the size of the MDisk.
To assign MDisks to a storage pool, navigate to Pools → MDisks by Pools and choose one of the following options:
Option 1: Select Add Storage on the right side of the storage pool, as shown in Figure 6-26. The Add Storage button is shown only when the pool has no capacity assigned or when the pool capacity usage is over the warning threshold.
Figure 6-26 Adding storage to a pool, option 1
Option 2: Right-click the pool and select Add Storage, as shown in Figure 6-27.
Figure 6-27 Adding storage to a pool, option 2
Option 3: On the MDisks by Pools pane, select Assign under a specific drive class or external storage controller, as shown in Figure 6-28.
Figure 6-28 Adding storage to a pool, option 3
Options 1 and 2 start the configuration wizard shown in Figure 6-29. If no external storage is attached, the External option is not shown. If Internal is chosen, the system guides you through MDisk creation. If External is selected, the MDisks already exist and the system guides you through the selection of external storage. Option 3 allows you to select the pool to which you want to add the new MDisks.
Figure 6-29 Assigning storage to a pool
Quick internal configuration
Because SVC pools use IBM Spectrum Virtualize to manage internal storage, you cannot create MDisks out of internal drives without assigning them to a pool. Only MDisks (RAID arrays of drives) can be added to pools; it is not possible to add a JBOD or a single drive. Therefore, when assigning storage to a pool, the system first needs to create one or more MDisks.
The Quick internal configuration option of assigning storage to a pool guides the user through the steps of creating one or more MDisks and assigning them to the selected pool. Because multiple MDisks can be assigned at the same time during this process, or because the pool might already contain a configured set of MDisks, the system performs compatibility checks when it creates the new MDisks.
For example, if you have a set of 10K RPM drives and another set of 15K RPM drives available, you cannot place an MDisk made of 10K RPM drives and an MDisk made of 15K RPM drives in the same pool. You need to create two separate pools.
Selecting Quick internal automatically sets default values for parameters such as stripe width, number of spares (for traditional RAID), number of rebuild areas (for distributed RAID), and number of drives of each class. The number of drives is the only value that can be adjusted when creating the array. Depending on the number of drives selected for the new array, the RAID level adjusts automatically.
For example, if you select only two drives, the system automatically creates a RAID10 array with no spare drive. For more control over the array creation steps, select the Internal Custom option. For more information, see “Advanced internal configuration” on page 218.
By default, if there are enough candidate drives, the system recommends traditional arrays for most new MDisk configurations. However, use distributed RAID when possible, with the Internal Custom option. For information about traditional and distributed RAID, see 6.2.2, “Traditional and distributed RAID” on page 220. Figure 6-30 shows an example of a Quick internal configuration.
Figure 6-30 Quick internal configuration: Pool with a single class of drives
 
Note: Whenever possible and appropriate, the preferred configuration is Distributed RAID6.
If the system has multiple drive classes (for example, Flash and Enterprise disks), the default option is to create multiple arrays of different tiers and assign them to the pool to take advantage of the Easy Tier function. However, this configuration can be adjusted by setting the number of drives of different classes to zero. For information about Easy Tier, see Chapter 10, “Advanced features for storage efficiency” on page 407.
If you are adding storage to a pool with storage already assigned, the existing storage is taken into consideration, with some properties being inherited from existing arrays for a specific drive class. Drive classes incompatible with the classes already in the pool are disabled.
When you are satisfied with the configuration presented, click Assign. The RAID arrays, or MDisks, are then created and start initializing in the background. The progress of the initialization process can be monitored by selecting the correct task under Running Tasks in the upper-right corner, as shown in Figure 6-31. The array is available for I/O during this process.
Figure 6-31 Array Initialization task
By clicking View in the Running tasks list, you can see the initialization progress and the time remaining, as shown in Figure 6-32. The initialization time depends on the type of drives the array is made of. For example, initializing an array of Flash drives is much quicker than initializing an array of NL-SAS drives.
Figure 6-32 Array initialization task progress information
Advanced internal configuration
Selecting Internal Custom allows the user to customize the configuration of MDisks made out of internal drives.
The following values can be customized:
RAID level
Number of spares
Array width
Stripe width
Number of drives of each class
Figure 6-33 shows an example with nine drives ready to be configured as DRAID 6, with the equivalent of one drive capacity of spare (distributed over the nine disks).
Figure 6-33 Adding internal storage to a pool using the Advanced option
To return to the default settings, click the Refresh button next to the pool capacity. To create and assign the arrays, click Assign.
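On the CLI, this configuration corresponds to the mkdistributedarray command. As a sketch matching the example in Figure 6-33 (the pool name and drive class ID are assumptions), the following command creates a DRAID6 array from nine drives of drive class 0 with one rebuild area:
mkdistributedarray -level raid6 -driveclass 0 -drivecount 9 -rebuildareas 1 Pool0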
Quick external configuration
Selecting External allows the user to assign external MDisks to storage pools. Unlike array MDisks, the tier associated with an external MDisk cannot be determined automatically and so must be set manually. Select the external storage controller, the MDisks you want to assign, and the tier to which they belong, as shown in Figure 6-34 on page 220. The tier of an external MDisk can be modified after creation.
 
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it to a storage pool because this action deletes the data on the MDisk. Use Import instead. See “Import” on page 230 for information about this action.
Figure 6-34 Adding external storage to a pool
6.2.2 Traditional and distributed RAID
IBM Spectrum Virtualize V7.6 introduced a new way of managing physical drives, as an alternative to the traditional Redundant Array of Independent Disks (RAID). It is called distributed RAID.
 
Note: Use Distributed RAID whenever possible. The distributed configuration dramatically reduces rebuild times and decreases the exposure volumes have to the extra load of recovering redundancy.
SVC supports the following traditional and distributed RAID levels:
RAID0 (only with command line)
RAID1
RAID5
RAID6
RAID10
Distributed RAID5 (DRAID5)
Distributed RAID6 (DRAID6)
Traditional RAID
In a traditional RAID approach, whether it is RAID10, RAID5, or RAID6, data is spread among the drives in an array. However, the spare capacity is provided by dedicated spare drives, which are global and sit outside of the array. When one of the drives within the array fails, all data is read from the mirrored copy (for RAID10), or is calculated from the remaining data stripes and parity (for RAID5 or RAID6), and written to a single spare drive.
Figure 6-35 shows a traditional RAID6 array with two global spare drives, and data and parity striped among five drives.
Figure 6-35 Traditional RAID6 array
If a drive fails, data is calculated from the remaining strips in a stripe and written to the spare strip in the same stripe on a spare drive, as shown in Figure 6-36.
Figure 6-36 Failed drive
This model has the following main disadvantages:
In case of a drive failure, data is read from many drives but written to only one. This process can affect performance of foreground operations and means that the rebuild process can take a long time, depending on the size of the drives.
The spare drives are idle and do not perform any operations until one of the array drives fails.
Distributed RAID
In distributed RAID, there are no dedicated spare drives; the spare capacity is included in the array as spare space. All drives in the array have spare space reserved that is used if one of the drives in the same array fails. This configuration means that there are no idling spare drives and all drives in the array take part in data processing. In case of a drive failure, the data is written to several drives, which reduces recovery time and consequently the probability of a second drive failure occurring during rebuild.
Distributed RAID also has the ability to distribute data and parity strips among more drives than traditional RAID. This feature means more drives can be used to create one array, improving performance of a single managed disk.
Figure 6-37 shows a distributed RAID6 array with a stripe width of five distributed among 10 physical drives. The reserved spare space is marked in yellow and is equivalent to two spare drives. Both distributed RAID5 and distributed RAID6 divide the physical drives into rows and packs. A row has the size of the array width and includes only one strip from each drive in the array. A pack is a group of several contiguous rows, and its size depends on the number of strips in a stripe.
Figure 6-37 Distributed RAID6
In case of a drive failure, all data is calculated using the remaining data stripes and parities and written to a spare space within each row, as shown in Figure 6-38.
Figure 6-38 Distributed RAID6 array with failed drive
This model addresses the following main disadvantages of traditional RAID:
In case of a drive failure, data is read from many drives and written to many drives. This process minimizes the effect on performance during the rebuild process and significantly reduces rebuild time (depending on the distributed array configuration and drive sizes the rebuild process can be up to 10 times faster).
Spare space is distributed throughout the array, so there are more drives processing I/O and no idle spare drives.
The model has the following additional advantages:
In case of a drive failure, only the actual data is rebuilt. Space that is not allocated to volumes is not re-created to the spare regions of the array.
Arrays can be much larger than before, spanning over many more drives and therefore improving the performance of the array.
 
Note: Distributed RAID does not change the number of failed drives an array can endure. Just like in traditional RAID, a distributed RAID5 array can only lose one physical drive and survive. If another drive fails in the same array before the array finishes rebuilding, both the managed disk and storage pool go offline.
The following are the minimum number of drives needed to build a Distributed Array:
Six drives for a Distributed RAID6 array
Four drives for a Distributed RAID5 array
The maximum number of drives a Distributed Array can contain is 128.
6.2.3 Actions on arrays
MDisks created from internal storage are RAID arrays and support specific actions that are not supported on external MDisks. Some actions supported on traditional RAID arrays are not supported on distributed RAID arrays and vice versa.
To choose an action, select the array (MDisk) and click Actions, as shown in Figure 6-39. Alternatively, right-click the array.
Figure 6-39 Actions on arrays
Swap drive
Selecting Swap Drive allows the user to replace a drive in the array with another drive. The replacement drive must have a use of Candidate or Spare. This action can be used to replace a drive that is expected to fail soon, for example, as indicated by an error message in the event log.
Figure 6-40 shows the dialog box that opens. Select the member drive to be replaced and the replacement drive, and click Swap.
Figure 6-40 Swapping drive 0 with another candidate or spare drive
The exchange of the drives starts running in the background. The volumes on the affected MDisk remain accessible during the process.
Set spare goal
This action is available only for traditional RAID arrays. Selecting Set Spare Goal allows you to set the number of spare drives that are required to protect the array from drive failures.
If the number of spare drives available does not meet the configured goal, an error is logged in the event log as shown in Figure 6-41. This error can be fixed by adding more drives of a compatible drive class as spares in the array.
Figure 6-41 If there are insufficient spare drives available, an error 1690 is logged
Set rebuild areas goal
This action is available only for distributed RAID arrays. Selecting Set Rebuild Areas Goal allows you to set the number of rebuild areas required to protect the array from drive failures. If the number of rebuild areas available does not meet the configured goal, an error is logged in the event log. This error can be fixed by replacing the failed drives in the array with new drives of a compatible drive class.
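Both goals can also be set with the charray CLI command. As an assumed sketch (the goal values and MDisk names are examples; verify the parameters against the command reference for your code level), the first command sets a spare goal of two drives for a traditional array, and the second sets a rebuild areas goal of one for a distributed array:
charray -sparegoal 2 mdisk1
charray -rebuildareasgoal 1 mdisk2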
Delete
Selecting Delete removes the array from the storage pool and deletes it.
 
Remember: An array or an MDisk does not exist outside of a storage pool. Therefore, an array cannot be removed from the pool without being deleted.
If there are no volumes using extents from this array, the command runs immediately without additional confirmation. If there are volumes using extents from this array, you are prompted to confirm the action, as shown in Figure 6-42.
Figure 6-42 Deleting an array from a non-empty storage pool
Confirming the action starts the migration of the volumes to extents from other MDisks that remain in the pool. After the action completes, the array is removed from the storage pool and deleted. When an MDisk is deleted from a storage pool, extents in use are migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.
 
Note: Ensure that you have enough available capacity remaining in the storage pool to allocate the data being migrated from the removed array, or else the command will fail.
Dependent Volumes
Volumes are entities made of extents from a storage pool. The extents of the storage pool come from various MDisks. A volume can therefore be spread over multiple MDisks, and an MDisk can serve multiple volumes. Clicking the Dependent Volumes action of an MDisk lists the volumes that depend on that MDisk, as shown in Figure 6-43.
Figure 6-43 Dependent volumes for MDisk10
Drives
Selecting Drives shows information about the drives that are included in the array as shown in Figure 6-44.
Figure 6-44 List of drives in an array
6.2.4 Actions on external MDisks
External MDisks support specific actions that are not supported on arrays. Some actions are supported only on unmanaged external MDisks, and some are supported only on managed external MDisks.
To choose an action, right-click the external MDisk, as shown in Figure 6-45. Alternatively, select the external MDisk and click Actions.
Figure 6-45 Actions on MDisks
Assign
This action is available only for unmanaged MDisks. Selecting Assign opens the dialog box that is shown in Figure 6-46. This action is equivalent to the wizard described in Quick external configuration, but acts only on the selected MDisk or MDisks.
Figure 6-46 Assigning an MDisk to a pool
 
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it to a storage pool because this action deletes the data on the MDisk. Use Import instead.
Modify tier
Selecting Modify Tier allows the user to modify the tier to which the external MDisk is assigned, as shown in Figure 6-47. This setting is adjustable because the system cannot detect the tiers that are associated with external storage automatically, unlike with internal arrays.
Figure 6-47 Modifying an external MDisk tier
For information about storage tiers and their importance, see Chapter 10, “Advanced features for storage efficiency” on page 407.
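The tier of an external MDisk can also be changed with the CLI. As a sketch (the MDisk name and tier value are examples), the following command assigns an MDisk to the Enterprise tier:
chmdisk -tier tier_enterprise mdisk2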
Modify encryption
This option is available only when encryption is enabled. Selecting Modify Encryption allows the user to modify the encryption setting for the MDisk, as shown in Figure 6-48.
For example, if the external MDisk is already encrypted by the external storage system, change the encryption state of the MDisk to Externally encrypted. This setting stops the system from encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
For information about encryption, encrypted storage pools, and self-encrypting MDisks, see Chapter 12, “Encryption” on page 633.
Figure 6-48 Modifying encryption setting for an MDisk
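This setting corresponds to the -encrypt parameter of the chmdisk command. As an assumed example (the MDisk name is illustrative), the following command declares an MDisk as already encrypted by the external system so that the system does not encrypt it again:
chmdisk -encrypt yes mdisk5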
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk allows the user to preserve the data on the MDisk, either by migrating the data to a new volume or by keeping the data on the external system.
 
Note: This is the preferred method to migrate data from legacy storage to the SVC. When an MDisk is imported, the data on the original disks is not modified. The system acts as a pass-through, and the extents of the imported MDisk do not contribute to storage pools.
Selecting Import allows you to choose one of the following migration methods:
Import to temporary pool as image-mode volume does not migrate data from the source MDisk. It creates an image-mode volume that has a direct block-for-block translation of the MDisk. The existing data is preserved on the external storage system, but it is also accessible from the SVC system.
If this method is selected, the image-mode volume is created in a temporary migration pool and presented through the SVC. Choose the extent size of the temporary pool and click Import, as shown in Figure 6-49.
Figure 6-49 Importing an unmanaged MDisk
The MDisk is imported and listed as an image mode MDisk in the temporary migration pool, as shown in Figure 6-50.
Figure 6-50 Image-mode imported MDisk
A corresponding image-mode volume is now available in the same migration pool, as shown in Figure 6-51.
Figure 6-51 Image-mode Volume
The image-mode volume can then be mapped to the original host. The data is still physically present on the disks of the original external storage system and no automatic migration process is running. The original host sees no difference and the applications can continue to run. The image-mode volume can now be handled by Spectrum Virtualize. If needed, it can be migrated manually to another storage pool by using volume migration or volume mirroring later.
 
Migrate to an existing pool starts by creating an image-mode volume, as in the first method. However, it then migrates the data from the image-mode volume onto another volume in the selected storage pool. After the migration process completes, the image-mode volume and the temporary migration pool are deleted.
If this method is selected, choose the storage pool to hold the new volume and click Import, as shown in Figure 6-52. When migrating the MDisk, the system creates a temporary copy of the disk in a volume. It then copies the extents from that temporary volume to a new volume in the selected pool. Therefore, free extents must be available in the selected pool so that data can be copied there.
Figure 6-52 Migrating an MDisk to an existing pool
The data migration begins automatically after the MDisk is imported successfully as an image-mode volume. You can check the migration progress by clicking the task under Running Tasks, as shown in Figure 6-53.
Figure 6-53 MDisk migration in the tasks pane
After the migration completes, the volume is available in the chosen destination pool, as shown in Figure 6-54. This volume is no longer an image-mode volume. Instead, it is a normal striped volume.
Figure 6-54 The migrated MDisk is now a Volume in the selected pool
At this point, all data has been migrated off the source MDisk and the MDisk is no longer in image mode, as shown in Figure 6-55. The MDisk can be removed from the temporary pool. It returns to the list of external MDisks and can be used as a regular MDisk to host volumes, or the legacy storage system can be decommissioned.
Figure 6-55 Imported MDisks appear as “Managed”
Alternatively, import and migration of external MDisks to another pool can be done by selecting Pools → System Migration. Migration and the system migration wizard are described in more detail in Chapter 9, “Storage migration” on page 391.
Include
The system can exclude an MDisk with multiple I/O failures or persistent connection errors from its storage pool to ensure that these errors do not interfere with data access. If an MDisk has been automatically excluded, run the fix procedures to resolve any connection and I/O failure errors. Drives with multiple errors that are used by the excluded MDisk might need to be replaced or reseated.
After the problems have been fixed, select Include to add the excluded MDisk back into the storage pool.
Remove
In some cases, you might want to remove external MDisks from storage pools to reorganize your storage allocation. Selecting Remove removes the MDisk from the storage pool. After the MDisk is removed, it goes back to unmanaged. If there are no volumes in the storage pool to which this MDisk is allocated, the command runs immediately without additional confirmation. If there are volumes in the pool, you are prompted to confirm the action, as shown in Figure 6-56.
Figure 6-56 Removing an MDisk from a pool
Confirming the action starts the migration of the volumes to extents from other MDisks that remain in the pool. When the action completes, the MDisk is removed from the storage pool and returns to unmanaged. When an MDisk is removed from a storage pool, extents in use are migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.
Ensure that you have enough available capacity remaining in the storage pool to allocate the data being migrated from the removed MDisk or else the command fails.
 
Important: The MDisk being removed must remain accessible to the system while all data is copied to other MDisks in the same storage pool. If the MDisk is unmapped before the migration finishes, all volumes in the storage pool go offline and remain in this state until the removed MDisk is connected again.
6.3 Working with internal drives
The SVC system provides an Internal Storage pane for managing all internal drives. To access the Internal Storage pane, browse to Pools → Internal Storage, as shown in Figure 6-57.
Figure 6-57 Internal storage pane
The pane gives an overview of the internal drives in the SVC system. Selecting All Internal in the drive class filter displays all drives that are managed in the system, including all I/O groups and expansion enclosures. Selecting a drive class on the left side of the pane filters the list and displays only the drives of the selected class.
You can find information regarding the capacity allocation of each drive class in the upper right corner, as shown in Figure 6-58:
Total Capacity shows the overall capacity of the selected drive class.
MDisk Capacity shows the storage capacity of the selected drive class that is assigned to MDisks.
Spare Capacity shows the storage capacity of the selected drive class that is used for spare drives.
If All Internal is selected under the drive class filter, the values shown refer to the entire internal storage.
Figure 6-58 Capacity usage of internal drives
The percentage bar indicates how much of the total capacity is allocated to MDisks and spare drives, with MDisk capacity being represented by dark blue and spare capacity by light blue.
6.3.1 Actions on internal drives
A number of actions can be performed on internal drives. To perform any action, select the drives and right-click the selection, as shown in Figure 6-59. Alternatively, select the drives and click Actions.
Figure 6-59 Actions on internal storage
The actions available depend on the status of the drive or drives selected. Some actions can only be run for individual drives.
Fix error
This action is only available if the drive selected is in an error condition. Selecting Fix Error starts the Directed Maintenance Procedure (DMP) for the defective drive. For more information about DMPs, see Chapter 13, “RAS, monitoring, and troubleshooting” on page 689.
Take offline
Selecting Take Offline allows the user to take a drive offline. Select this action only if there is a problem with the drive and a spare drive is available. When you select this action, you are prompted to confirm it, as shown in Figure 6-60.
Figure 6-60 Taking a drive offline
If a spare drive is available and the drive is taken offline, the MDisk of which the failed drive is a member remains Online and the spare is automatically reassigned. If no spare drive is available and the drive is taken offline, the array of which the failed drive is a member becomes Degraded. Consequently, the storage pool to which the MDisk belongs becomes Degraded as well, as shown in Figure 6-61.
Figure 6-61 Degraded Pool and MDisk in case there is no more spare in the array
The system prevents you from taking the drive offline if one of the following conditions is true:
The first option of the confirmation dialog was selected and no suitable spare drives are available, as shown in Figure 6-62.
Figure 6-62 Taking a drive offline fails in case there is no spare in the array
Losing another drive in the MDisk results in data loss. Figure 6-63 shows the error in this case.
Figure 6-63 Taking a drive offline fails if there is a risk of losing data
A drive that is taken offline is considered Failed, as shown in Figure 6-64.
Figure 6-64 An offline drive is marked as failed
Mark as
Selecting Mark as allows you to change the usage assigned to the drive. The following use options are available as shown in Figure 6-65:
Unused: The drive is not in use and cannot be used as a spare.
Candidate: The drive is available to be used in an MDisk.
Spare: The drive can be used as a hot spare, if required.
Figure 6-65 A drive can be marked as Unused, Candidate, or Spare
The use that can be assigned depends on the current drive use. These dependencies are shown in Figure 6-66.
Figure 6-66 Allowed usage changes for internal drives
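The drive use can also be changed with the CLI. For example (the drive ID is an example), the following command marks drive 1 as a spare:
chdrive -use spare 1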
Identify
Selecting Identify turns on the LED light so you can easily identify a drive that must be replaced or that you want to troubleshoot. Selecting this action opens a dialog box like the one shown in Figure 6-67.
Figure 6-67 Identifying an internal drive
Click Turn LED Off when you are finished.
Upgrade
Selecting Upgrade allows the user to update the drive firmware as shown in Figure 6-68. You can choose to update individual drives or all the drives that have available updates.
Figure 6-68 Upgrading a drive or a set of drives
For information about updating drive firmware, see Chapter 13, “RAS, monitoring, and troubleshooting” on page 689.
Show dependent volumes
Selecting Show Dependent Volumes lists the volumes that are dependent on the selected drive. A volume is dependent on a drive or a set of drives when removal or failure of that drive or set of drives causes the volume to become unavailable. Use this option before performing maintenance operations to determine which volumes will be affected.
Figure 6-69 shows the list of volumes that depend on a set of three drives that belong to the same MDisk. This means that all listed volumes go offline if all three selected drives go offline. If only one drive goes offline, there is no volume dependency.
Figure 6-69 List of volumes dependent on disks 10, 11, and 12
 
Note: A lack of dependent volumes does not imply that no volumes are using the drive. Volume dependency shows the list of volumes that would become unavailable if the selected drive or group of drives became unavailable.
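The same check can be run from the CLI. For the example in Figure 6-69, the following command lists the volumes that depend on drives 10, 11, and 12:
lsdependentvdisks -drive 10:11:12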
Properties
Selecting Properties provides more information about the drive, as shown in Figure 6-70.
Figure 6-70 Drive properties
Checking Show Details in the left corner of the window shows more details, including the vendor ID, product ID, and part number. You can also display drive slot details by selecting Drive Slot.
6.4 Working with external storage controllers
IBM Spectrum Virtualize supports external storage controllers attached through iSCSI and through Fibre Channel.
External storage controllers with both types of attachment can be managed through the External Storage pane. To access the External Storage pane, browse to Pools → External Storage, as shown in Figure 6-71.
Figure 6-71 External Storage pane
The pane lists the external controllers that are connected to the SVC system and all the external MDisks detected by the system. The MDisks are organized by the external storage system that presents them. You can toggle the sign to the left of the controller icon to either show or hide the MDisks associated with the controller.
 
Note: A controller connected through Fibre Channel is detected automatically by the system, provided the cabling, the zoning, and the system layer are configured correctly. A controller connected through iSCSI must be added to the system manually.
If you have configured logical unit names on your external storage systems, it is not possible for the system to determine this name because it is local to the external storage system. However, you can use the external storage system WWNNs and the LU number to identify each device.
6.4.1 Fibre Channel external storage controllers
A controller connected through Fibre Channel is detected automatically by the system, provided the cabling, the zoning, and the system layer are configured correctly.
If the external controller is not detected, ensure that the SVC is cabled and zoned into the same storage area network (SAN) as the external storage system. If you are using Fibre Channel, connect the Fibre Channel cables to the Fibre Channel ports of the canisters in your system, and then to the Fibre Channel network. If you are using Fibre Channel over Ethernet, connect Ethernet cables to the 10 Gbps Ethernet ports.
 
Attention: If the external controller is a Storwize system, the SVC must be configured at the replication layer and the external controller must be configured at the storage layer. The default layer for a Storwize system is storage. Make sure that the layers are correct before zoning the two systems together. Changing the system layer is not available in the GUI. You need to use the command-line interface (CLI).
Ensure that the layer of both systems is correct by entering the following command:
svcinfo lssystem
If needed, change the layer of the SVC to replication by entering the following command:
chsystem -layer replication
If needed, change the layer of the Storwize controller to storage by entering the following command:
chsystem -layer storage
For more information about layers and how to change them, see Chapter 11, “Advanced Copy Services” on page 461.
6.4.2 iSCSI external storage controllers
Unlike Fibre Channel connections, you must manually configure iSCSI connections between the SVC and the external storage controller. Until then, the controller is not listed in the External Storage pane.
Before adding an iSCSI-attached controller, ensure that the following prerequisites are fulfilled:
The SVC and the external storage system are connected through one or more Ethernet switches. Symmetric ports on all nodes of the SVC are connected to the same switch and configured on the same subnet. Optionally, you can use a virtual local area network (VLAN) to define network traffic for the system ports.
Direct attachment between this system and the external controller is not supported. To avoid a single point of failure, use a dual switch configuration. For full redundancy, a minimum of two paths between each initiator node and target node must be configured with each path on a separate switch.
Figure 6-72 shows an example of a fully redundant iSCSI connection between a Storwize system and an SVC cluster. In this example, the SVC is composed of two I/O groups. Each node has a maximum of four initiator ports, with two ports configured, through two switches, to the target ports on the Storwize system. The first ports (orange) on each initiator and target node are connected through Ethernet switch 1. The second ports (blue) on each initiator and target node are connected through Ethernet switch 2. Each target node on the storage system has one iSCSI qualified name (IQN) that represents all the LUs on that node.
Figure 6-72 Fully redundant iSCSI connection between a Storwize system and SVC
 
IBM Spectrum Accelerate and Dell EqualLogic:
For an example of how to cable the IBM Spectrum Accelerate to the SVC, see IBM Knowledge Center:
For an example of how to cable the Dell EqualLogic to the SVC, see IBM Knowledge Center at:
The ports used for iSCSI attachment must be enabled for external storage connections. By default, Ethernet ports are disabled for external storage connections. You can verify the setting of your Ethernet ports by navigating to Settings → Network and selecting Ethernet Ports, as shown in Figure 6-73.
Figure 6-73 Ethernet ports settings
To enable the port for external storage connections, select the port, click Actions and select Modify Storage Ports, as shown in Figure 6-74.
Figure 6-74 Modifying Ethernet port settings
Set the port as Enabled for either IPv4 or IPv6, depending on the protocol version configured for the connection, as shown in Figure 6-75.
Figure 6-75 Enabling a Storage port
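The same configuration can be done with the cfgportip CLI command. As an assumed sketch (the node name, IP settings, and port ID are examples; verify against the command reference for your code level), the following command configures port 1 of node1 and enables it for external storage connections:
cfgportip -node node1 -ip 192.168.1.10 -mask 255.255.255.0 -gw 192.168.1.1 -storage yes 1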
When all prerequisites are fulfilled, you are ready to add the iSCSI controller. To do so, navigate to Pools → External Storage and click Add External iSCSI Storage, as shown in Figure 6-76.
Figure 6-76 Adding external iSCSI storage
 
Attention: Unlike Fibre Channel connections, iSCSI connections require the SVC to be configured at the replication layer for every type of external controller. However, as with Fibre Channel, if the external controller is a Storwize system, the controller must be configured at the storage layer. The default layer for a Storwize system is storage.
If the SVC is not configured at the replication layer when Add External iSCSI Storage is selected, you are prompted to do so, as shown in Figure 6-77 on page 247.
If the Storwize controller is not configured at the storage layer, this must be changed by using the CLI.
Ensure that the layer of the Storwize controller is correct by entering the following command:
svcinfo lssystem
If needed, change the layer of the Storwize controller to storage by entering the following command:
chsystem -layer storage
For more information about layers and how to change them see Chapter 11, “Advanced Copy Services” on page 461.
Figure 6-77 Converting the system layer to replication to add iSCSI external storage
Select Convert the system to the replication layer and click Next.
Select the type of external storage, as shown in Figure 6-78. For this example, the IBM Storwize type is chosen. Click Next.
Figure 6-78 Adding an external iSCSI controller: controller type
Enter the iSCSI connection details, as shown in Figure 6-79.
Figure 6-79 Adding an external iSCSI controller: Connection details
Complete the following fields as described:
CHAP secret: If the Challenge Handshake Authentication Protocol (CHAP) is used to secure iSCSI connections on the system, enter the current CHAP secret. This field is not required if you do not use CHAP.
Source port 1 connections
 – Select source port 1: Select one of the ports to be used as initiator for the iSCSI connection between the node and the external storage system.
 – Target port on remote storage 1: Enter the IP address for one of the ports on the external storage system targeted by this source port.
 – Target port on remote storage 2: Enter the IP address for the other port on the external storage system targeted by this source port.
Source port 2 connections
 – Select source port 2: Select the other port to be used as initiator for the iSCSI connection between the node and the external storage system.
 – Target port on remote storage 1: Enter the IP address for one of the ports on the external storage system targeted by this source port.
 – Target port on remote storage 2: Enter the IP address for the other port on the external storage system targeted by this source port.
The fields available vary depending on the configuration of your system and the external controller type. However, the meaning of each field remains the same. The following fields can also be available:
Site: Enter the site associated with the external storage system. This field is shown only for configurations by using HyperSwap.
User name: Enter the user name associated with this connection. If the target storage system uses CHAP to authenticate connections, you must enter a user name. If you specify a user name, you must specify a CHAP secret. This field is not required if you do not use CHAP. This field is shown only for IBM Spectrum Accelerate and Dell EqualLogic controllers.
Click Finish. The system attempts to discover the target ports and establish iSCSI sessions between source and target. If the attempt is successful, the controller is added. Otherwise, the action fails.
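The underlying CLI commands are detectiscsistorageportcandidate and addiscsistorageport. As an assumed sketch (the source port ID, target IP, and candidate ID are examples; verify against your code level), the following sequence discovers the target ports reachable from source port 1 and then establishes iSCSI sessions with the first discovered candidate:
detectiscsistorageportcandidate -srcportid 1 -targetip 192.168.1.20
lsiscsistorageportcandidate
addiscsistorageport 0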
6.4.3 Actions on external storage controllers
A number of actions can be performed on external storage controllers. Some actions are available for external iSCSI controllers only.
To select any action, right-click the controller, as shown in Figure 6-80. Alternatively, select the controller and click Actions.
Figure 6-80 Actions on external storage
Discover storage
When you create or remove LUs on an external storage system, the change is not always detected automatically. If that is the case, select Discover Storage to make the system rescan the Fibre Channel or iSCSI network. The rescan process discovers any new MDisks that were added to the system and rebalances MDisk access across the available ports. It also detects any loss of availability of the controller ports.
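This action corresponds to the detectmdisk CLI command:
detectmdisk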
Rename
Selecting Rename allows the user to modify the name of an external controller, as shown in Figure 6-81. Enter the new name and click Rename.
Figure 6-81 Renaming an external storage controller
Naming rules: When you choose a name for a controller, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a volume named ABC and an MDisk called ABC, but you cannot have two volumes called ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
Remove iSCSI sessions
This action is available only for external controllers attached with iSCSI. Right-click the session and select Remove to remove the iSCSI session established between the source and target port.
Modify site
This action is available only for systems that use HyperSwap. Selecting Modify Site allows the user to modify the site with which the external controller is associated, as shown in Figure 6-82.
Figure 6-82 Modifying the site of an external controller