Volumes
This chapter explains how to create, manage, and migrate volumes (formerly known as vdisks) across I/O groups. It also explains how to use IBM FlashCopy.
This chapter includes the following sections:
4.1, “Overview of volumes”
4.2, “Creating volumes”
4.3, “Volume migration”
4.4, “VMware Virtual Volumes”
4.5, “Preferred paths to a volume”
4.6, “Cache mode and cache-disabled volumes”
4.7, “Using IBM Spectrum Virtualize or Storwize with FlashSystem”
4.8, “FlashCopy services”
4.9, “Configuration Backup”
4.1 Overview of volumes
Three types of volumes are possible: Striped, sequential, and image. These types are determined by how the extents are allocated from the storage pool.
A striped-mode volume has extents that are allocated from each managed disk (MDisk) in the storage pool in a round-robin fashion.
With a sequential-mode volume, extents are allocated sequentially from an MDisk.
An image-mode volume is a one-to-one mapped extent mode volume.
4.1.1 Striping compared to sequential type
With a few exceptions, you must always configure volumes by using striping. One exception is for an environment in which you have a 100% sequential workload and disk loading across all volumes is guaranteed to be balanced by the nature of the application. An example of this exception is specialized video streaming applications.
Another exception to configuration by using volume striping is an environment with a high dependency on many flash copies. In this case, FlashCopy loads the volumes evenly, and the sequential I/O, which is generated by the flash copies, has a higher throughput potential than what is possible with striping. This situation is rare considering that you rarely need to optimize for FlashCopy as opposed to an online workload.
The general rule now is to always go with striped volumes. The main consideration is the MDisk queue depth of 60, which for HDD MDisks typically means aiming for eight spindles per MDisk. Previous cache algorithms meant that, for sequential workloads and to avoid stripe-on-stripe issues, you had to keep the layout simpler by configuring a single pool or a single volume per MDisk on the underlying storage. This approach is no longer necessary with the cache algorithms of today (for reference, the new cache was added with version 7.3).
Therefore, the rule is: for the number of drives that you have on the back end in a pool, say 64 drives, create 64/8 = 8 volumes on the back-end storage and present them to IBM Spectrum Virtualize or Storwize as 8 MDisks. With the MDisk queue depth of 60, this layout gives roughly a queue depth of 8 per drive, which keeps a spinning disk well used. It also gives better concurrency across the ports of the back-end controller.
There can be other scenarios, such as when IBM Spectrum Virtualize or Storwize acts as back-end storage for an IBM TS7650G ProtecTIER® Gateway. In this scenario, generally still use sequential volumes, especially when using very large disk drives (such as 2 TB or 3 TB) for the user data repository. The reason is that those large disk drives result in very large arrays, MDisks, and LUNs. If ProtecTIER handles such a large LUN by itself, it can optimize its file system structure and workload without overcommitting or congesting the single array, which works better than striping the LUNs over an entire multi-array storage pool.
4.1.2 Thin-provisioned volumes
Volumes can be configured as Thin-provisioned or fully allocated. Thin-provisioned volumes are created with real and virtual capacities. You can still create volumes by using a striped, sequential, or image mode virtualization policy as you can with any other volume.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the capacity of the volume that is reported to other IBM Spectrum Virtualize or Storwize components (such as FlashCopy or remote copy) and to the hosts.
A directory maps the virtual address space to the real address space. The directory and the user data share the real capacity.
Thin-provisioned volumes are available in two operating modes: Autoexpand and nonautoexpand. You can switch the mode at any time. If you select the autoexpand feature, IBM Spectrum Virtualize or Storwize automatically adds a fixed amount of extra real capacity to the thin volume as required. Therefore, the autoexpand feature attempts to maintain a fixed amount of unused real capacity for the volume. This amount is known as the contingency capacity. The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and thus has a zero contingency capacity, goes offline when its real capacity is fully used and it needs to expand.
 
Warning threshold: When you work with Thin-provisioned volumes, enable the warning threshold (by using email or an SNMP trap) both on the volume and on the storage pool, especially when you do not use the autoexpand mode. Otherwise, the thin volume goes offline if it runs out of space.
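As an illustration only, the following hedged sketch shows how a Thin-provisioned volume with autoexpand and a warning threshold might be created from the CLI. The pool name (Pool0), volume name (THIN_VOL1), sizes, and percentages are hypothetical values; adjust them to your environment.
IBM_2145:SVC_ESC:superuser>svctask mkvdisk -name THIN_VOL1 -iogrp 0 -mdiskgrp Pool0 -size 500 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256
Virtual Disk, id [10], successfully created
In this sketch, -rsize sets the initial real (and contingency) capacity, -autoexpand enables automatic expansion, and -warning sets the threshold at which a warning event is raised.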
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is recalculated.
A Thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a Thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized.
The fully allocated to Thin-provisioned migration procedure uses a zero-detection algorithm so that grains that contain all zeros do not cause any real capacity to be used.
 
Tip: Consider the use of Thin-provisioned volumes as targets in FlashCopy relationships.
4.1.3 Space allocation
When a Thin-provisioned volume is created, a small amount of the real capacity is used for initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to) cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the grains (that were previously written to) update the grain where data was previously written.
 
Grain definition: The grain is defined when the volume is created and can be 8 KB, 32 KB, 64 KB, 128 KB, or 256 KB (the default). Note that 16 KB is not supported.
Smaller granularities can save more space, but they have larger directories. When you use Thin provisioning with FlashCopy, specify the same grain size for the Thin-provisioned volume and FlashCopy.
The 8 KB grain size is an optimization for clients who use flash storage and want to maximize space usage, to better protect their investment in valuable flash capacity.
4.1.4 Compressed volumes
A compressed volume is, first of all, a Thin-provisioned volume. The compression technology is implemented into the IBM Spectrum Virtualize or Storwize Thin provisioning layer and is an organic part of the stack.
You can create, delete, migrate, mirror, map (assign), and unmap (unassign) a compressed volume as though it were a fully allocated volume. This compression method provides nondisruptive conversion between compressed and uncompressed volumes. This conversion provides a uniform user experience and eliminates the need for special procedures to deal with compressed volumes.
For more information about compression technology, see IBM Real-time Compression in IBM SAN Volume Controller and IBM Storwize V7000, REDP-4859.
When using Real-time Compression (RtC), always use IBM Spectrum Virtualize nodes or Storwize hardware with dedicated RtC CPU and RtC accelerator cards installed where available.
Consult your IBM SSR or representative before implementing RtC in production so that they can first perform a space and performance assessment.
Use the RtC estimator tool that is available on your IBM Spectrum Virtualize and Storwize CLI starting with version 7.6 and with the GUI starting from version 7.7 to identify the best volume candidates to be compressed.
When using the CLI, use the commands shown in Example 4-1 to run volume analysis on a single volume.
Example 4-1 An analyzevdisk command example
IBM_2145:SVC_ESC:superuser>svctask analyzevdisk -h
.
analyzevdisk
.
Syntax
.
>>- analyzevdisk -- --+----------+-- --+- vdisk_id ---+--------><
                      '- -cancel-'     '- vdisk_name -'
.
For more details type 'help analyzevdisk'.
.
IBM_2145:SVC_ESC:superuser>svctask analyzevdisk fcs2
IBM_2145:SVC_ESC:superuser>
When using the CLI, use the commands shown in Example 4-2 to run Volume analysis for an entire subsystem.
Example 4-2 An analyzevdiskbysystem command example
IBM_2145:SVC_ESC:superuser>svctask analyzevdiskbysystem -h
.
analyzevdiskbysystem
.
Syntax
.
>>- analyzevdiskbysystem -- --+----------+-- ------------------><
                              '- -cancel-'
.
For more details type 'help analyzevdiskbysystem'.
.
IBM_2145:SVC_ESC:superuser>svctask analyzevdiskbysystem
IBM_2145:SVC_ESC:superuser>
 
Note: The analyzevdisk and analyzevdiskbysystem commands return to the prompt immediately; the analysis runs in the background.
To see the result of the analysis and its progress, run the CLI command as shown in Example 4-3.
Example 4-3 A lsvdiskanalysis command example
IBM_2145:SVC_ESC:superuser>svcinfo lsvdiskanalysis
id name state analysis_time capacity thin_size thin_savings thin_savings_ratio compressed_size compression_savings compression_savings_ratio total_savings total_savings_ratio margin_of_error
0 fcs0 sparse 161011155433 5.00GB 0.00MB 0.00MB 0 0.00MB 0.00MB 0 0.00MB 0 0
.
lines omitted for brevity
.
8 tgtrm sparse 161011155438 5.00GB 0.00MB 0.00MB 0 0.00MB 0.00MB 0 0.00MB 0 0
IBM_2145:SVC_ESC:superuser>svcinfo lsvdiskanalysisprogress
vdisk_count pending_analysis estimated_completion_time
9 0
When using the GUI, go to the menu as shown in Figure 4-1 to run volume analysis by single volume or by multiple volumes. Select all of the volumes that you need to be analyzed.
From the same menu shown in Figure 4-1, you can download the savings report in .csv format.
Figure 4-1 Use of Estimate Compression Saving with GUI
If you are planning to virtualize volumes that are currently presented to your hosts directly from other storage subsystems, and you want to know what space savings you can achieve by using RtC on those volumes, run the Comprestimator utility, which is available at:
Comprestimator is a command-line, host-based utility that can be used to estimate the expected compression rate for block devices. The above link provides all the needed instructions.
The following are the preferred practices:
After you run Comprestimator, consider applying RtC only on those volumes that show a capacity saving of not less than 40%. For other volumes, the tradeoff between space saving and hardware resource consumption to compress your data might not make sense.
After you compress your selected volumes, check which volumes gain most of their space savings from Thin provisioning rather than from RtC. Consider moving those volumes to Thin provisioning only. This configuration requires some effort, but it frees hardware resources that are then available to give better performance to the volumes that benefit more from RtC than from Thin provisioning.
The GUI can help you by going to the Volumes menu and selecting the fields shown in Figure 4-2. Customize the Volume view to get all the metrics you might need to help make your decision.
Figure 4-2 Customized view
4.1.5 Thin-provisioned volume
Thin provisioning is a well-understood technology in the storage industry, and it saves capacity only if the host server does not write to whole volumes. How well a Thin-provisioned volume works partly depends on how the file system allocates the space.
A volume that is Thin-provisioned by SVC or Storwize is a volume where large chunks of binary zeros are not stored in the storage pool. So, if you have not written to the volume yet, you do not need to use valuable resources storing data that does not exist yet in the form of zeros.
It is important to note that there are some file systems that are more Thin Provisioning friendly than others. Figure 4-3 shows some examples. This is not an official reference, but it is information that is based on experience and observation.
Figure 4-3 Friendly file systems
There are a number of different properties of Thin-provisioned volumes that are useful to understand for the rest of the chapter:
The size of the volume presented to the host. This does not really have a name, but we refer to this concept as volume capacity.
The amount of user data that has been written to the storage pool. This is called the used capacity.
The capacity that has been removed from the storage pool and has been dedicated to this volume. This is called the real capacity. The real capacity must always be greater than the used capacity.
There is also a warning threshold.
For a Compressed Volume only (because Compressed Volumes are based on Thin-provisioned volumes), there is the amount of uncompressed user data that has been written into the volume. This is called the uncompressed used capacity. It is used to calculate the compression ratio:
compression ratio = (uncompressed used capacity - used capacity) / uncompressed used capacity * 100
For example, if 100 GB of uncompressed user data occupies 40 GB of used capacity, the compression ratio is (100 - 40) / 100 * 100 = 60%. Because there is more than one way of calculating compression ratios, it is useful to remember that with this formula bigger is better: a 90% compression ratio is better than a 50% compression ratio.
As stated, Thin provisioning means “don’t store the zeros,” so what does overallocation mean? Simply put, a storage pool is only overallocated after the sum of all volume capacities exceeds the size of the storage pool.
One of the things that worries administrators the most is the question “what if I run out of space?”
The first thing to remember is that if you already have enough capacity on disk to store fully allocated volumes, then if you convert to Thin provisioning, you will have enough space to store everything even if the server writes to every byte of virtual capacity. However, this is not going to be a problem for the short term. You will have time to monitor your system and understand how your capacity grows, but you must monitor it.
Even if you are creating a storage pool, it is likely that you will not start over provisioning for a few weeks after you start writing to that pool. You do not actually need to overallocate until you feel comfortable that you have a handle on Thin provisioning.
How do I monitor Thin provisioning?
The basics of capacity planning for Thin provisioning or compressed volumes are no different than capacity planning for fully allocated. The capacity planner needs to monitor the amount of capacity being used versus the capacity of the storage pool. Make sure that you purchase more capacity before you run out.
The main difference is that in a fully allocated world, the used capacity normally increases only during working hours, because the increase is caused by an administrator creating more volumes. In a Thin provisioning world, the used capacity can increase at any time as the file systems grow. Thus, you need to approach capacity planning carefully.
To avoid unpleasant situations where some volumes can go offline due to lack of space, the storage administrator needs to monitor the real capacity rather than the volume capacity. And that is the main difference. Of course, they need to monitor it regularly because the real capacity can increase at any time of day for any reason.
Tools like IBM Spectrum Control can capture the real capacity of a storage pool and enable you to graph the real capacity so you can see how it is growing over time. Having a tool to show how the real capacity is growing over time is an important requirement to be able to predict when the space will run out.
IBM Spectrum Virtualize or Storwize also alert you by putting an event into the event log when the storage pool breaches a configurable threshold, called the warning level. The GUI sets this threshold to 80% of the capacity of the storage pool by default, although you can change it.
Turn on event notifications so that someone receives an email or a pop-up on your monitoring system when this error is added to the event log. Note that this event does not call home to IBM; you must respond to the notification yourself.
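For example, the pool warning level can be adjusted and the real capacity usage checked from the CLI, as in the following hedged sketch (the pool name Pool0 and the 80% threshold are hypothetical):
IBM_2145:SVC_ESC:superuser>svctask chmdiskgrp -warning 80% Pool0
IBM_2145:SVC_ESC:superuser>svcinfo lsmdiskgrp Pool0
In the detailed lsmdiskgrp output, compare the virtual_capacity, real_capacity, used_capacity, and free_capacity fields over time to estimate how fast the pool is filling.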
What to do if you run out of space
There are numerous options here. You can use just one of these options, or a combination of as many as you like.
Consider what happens if one server writes to all of the space that you allocated to it and uses up all of the free space in the storage pool. If the system does not have any more capacity to store the host writes, the volume goes offline. But it is not only that one volume that goes offline: all the volumes in the storage pool are now at risk of going offline.
The following mechanisms and processes can help you deal with this situation:
Automatic out of space protection provided by the product
Buy more storage
If the storage pool runs out of space, each volume now has its own emergency capacity. That emergency capacity is normally sizable (2% is the default). The emergency capacity that is dedicated to a volume could allow that volume to stay online for anywhere between minutes to days depending on the change rate of that volume. This feature means that when you run out of space, you do have some time to repair things before everything starts going offline.
So you might implement a policy of 10% emergency capacity per volume if you wanted to be safer. Also, remember that you do not need to have the same contingency capacity for every volume.
 
Note: This automatic protection will probably solve most immediate problems, but remember that after you are informed that you have run out of space, you have a limited amount of time to react. You need a plan about what to do next.
Have unallocated storage on standby
You can always have spare drives or managed disks ready to be added to whichever storage pool runs out of space within only a few minutes. This capacity gives you some breathing room while you take other actions. The more managed disks or drives that you have available, the more time you have to solve the problem.
Move or delete volumes
After you run out of space, you can migrate volumes to other pools to free up space. This technique is useful. However, data migration on IBM Spectrum Virtualize and Storwize is designed to go slowly to avoid causing performance problems. Therefore, it might be impossible to complete this migration before your applications go offline.
A rapid but extreme solution is to delete one or more volumes to make space. This technique is not generally recommended, but it can be an option if you share the storage pool between production and development: you might choose to sacrifice less important volumes to preserve the critical volumes.
Policy-based solutions
No policy is going to solve the problem if you run out of space, but you can use policies to reduce the likelihood of that ever happening to the point where you feel comfortable doing less of the other options.
You can use these types of policies for Thin provisioning:
 
Note: The policies below use arbitrary numbers. These arbitrary numbers are designed to make the suggested policies more readable. We do not give any recommended numbers to insert into these policies because they are determined by business risk, and this consideration is different for every client.
 – Manage free space such that there is always enough free capacity for your 10 biggest volumes to reach 100% full without running out of free space.
 – Never overallocate more than 200%. In other words, if you have 100 TB of capacity in the storage pool, then the sum of the volume capacities in the same pool must not exceed 200 TB.
 – Always start the process of buying more capacity when the storage pool reaches 60% full.
 – If you keep your FlashCopy backups and your production data in the same pool, you might choose to not overallocate the production data. If you run out of space, you can delete backups to free up space.
Child Pools
Version 7.4 introduced a feature called child pools that allows you to create a storage pool that takes its capacity from a parent storage pool rather than from managed disks. Child pools have a couple of possible use cases for Thin provisioning:
 – You could separate different applications into different child pools. This technique prevents any problems with a server in child pool A affecting a server in child pool B. If Child Pool A runs out of space, and the parent pool still has space, then you can easily grow the child pool.
 – You can create a child pool that is called something descriptive like “DO NOT USE” and allocate (for example) 10% of the storage pool capacity to that child pool. Then, if the parent pool ever runs out of space, you have emergency capacity that can be given back to the parent pool. With this technique, you still must figure out which server was consuming all the space and stop whatever it was doing. A CLI sketch of this technique follows this list.
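A hedged sketch of creating such an emergency child pool from the CLI follows; the parent pool name (Pool0), the child pool size, and its name are hypothetical:
IBM_2145:SVC_ESC:superuser>svctask mkmdiskgrp -parentmdiskgrp Pool0 -size 1000 -unit gb -name DO_NOT_USE
MDisk Group, id [5], successfully created
If the parent pool later runs short of space, shrinking or deleting this child pool returns its capacity to the parent.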
For more information about Thin Provisioning usage and best practices, see:
4.1.6 Limits on virtual capacity of Thin-provisioned volumes
The extent and grain size factors limit the virtual capacity of Thin-provisioned volumes beyond the factors that limit the capacity of regular volumes. Table 4-1 shows the maximum Thin-provisioned volume virtual capacities for an extent size.
Table 4-1 Maximum thin volume virtual capacities for an extent size
Extent size in MB    Maximum volume real capacity in GB    Maximum thin virtual capacity in GB
16                   2,048                                 2,000
32                   4,096                                 4,000
64                   8,192                                 8,000
128                  16,384                                16,000
256                  32,768                                32,000
512                  65,536                                65,000
1024                 131,072                               130,000
2048                 262,144                               260,000
4096                 524,288                               520,000
8192                 1,048,576                             1,040,000
Table 4-2 shows the maximum Thin-provisioned volume virtual capacities for a grain size.
Table 4-2 Maximum thin volume virtual capacities for a grain size
Grain size in KB    Maximum thin virtual capacity in GB
32                  260,000
64                  520,000
128                 1,040,000
256                 2,080,000
4.2 Creating volumes
To create volumes, follow the procedure that is described in Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6, SG24-7933.
When you are creating volumes, adhere to the following guidelines:
Decide on your naming convention before you begin. It is much easier to assign the correct names when the volume is created than to modify them afterward.
Each volume has an I/O group and preferred node that balances the load between nodes in the I/O group. Therefore, balance the volumes across the I/O groups in the cluster to balance the load across the cluster.
In configurations with many attached hosts where it is not possible to zone a host to multiple I/O groups, you might not be able to choose to which I/O group to attach the volumes. The volume must be created in the I/O group to which its host belongs.
 
Tip: Migrating volumes across I/O groups can be a disruptive action. Therefore, specify the correct I/O group at the time the volume is created.
By default, the preferred node, which owns a volume within an I/O group, is selected on a load balancing basis. At the time that the volume is created, the workload to be placed on the volume might be unknown. However, you must distribute the workload evenly on each node within an I/O group. If you must change the preferred node, see 4.2.1, “Changing the preferred node within an I/O group or cross I/O group” on page 107.
In Stretched Cluster environments, it is best to configure the preferred node based on site awareness.
At the time of writing, the maximum number of volumes is 2048 per I/O group and 8192 per cluster for versions up to 7.7.x. These limits can change with newer versions. Always confirm the limits for your specific version, as shown at the following link for version 7.7.x:
For version 7.8.x, see this link:
The smaller the extent size that you select, the finer the granularity of the space that a volume occupies on the underlying storage controller. A volume occupies an integer number of extents, but its length does not need to be an integer multiple of the extent size. The length does need to be an integer multiple of the block size. Any space left over between the last logical block in the volume and the end of the last extent in the volume is unused. A small extent size is used to minimize this unused space.
The counter to this view is that the smaller the extent size, the smaller the total storage capacity that IBM Spectrum Virtualize or Storwize can address. The extent size does not affect performance. For most clients, extent sizes of 128 MB or 256 MB give a reasonable balance between volume granularity and cluster capacity. The extent size is set when the storage pool is created.
 
Important: You can migrate volumes by using the migratevdisk command only between storage pools that have the same extent size. The exception is mirrored volumes, whose copies can allocate space from storage pools with different extent sizes.
As described in 4.1, “Overview of volumes” on page 96, a volume can be created as Thin-provisioned or fully allocated, in one mode (striped, sequential, or image), and with one or two copies (volume mirroring). With a few rare exceptions, you must always configure volumes by using striping mode.
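As a hedged illustration of these guidelines, the following sketch creates a storage pool with a 256 MB extent size and then a striped, fully allocated volume with an explicit I/O group and preferred node. The pool, MDisk, and volume names, the node ID, and the size are hypothetical.
IBM_2145:SVC_ESC:superuser>svctask mkmdiskgrp -name Pool_256 -ext 256 -mdisk mdisk0:mdisk1:mdisk2:mdisk3
MDisk Group, id [2], successfully created
IBM_2145:SVC_ESC:superuser>svctask mkvdisk -name ESX01_DATA01 -iogrp io_grp0 -node 1 -mdiskgrp Pool_256 -size 100 -unit gb -vtype striped
Virtual Disk, id [12], successfully created
If -node is omitted, the preferred node is assigned automatically on a load balancing basis, as described earlier.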
4.2.1 Changing the preferred node within an I/O group or cross I/O group
Currently, a nondisruptive method is available to change the preferred node both within an I/O group and across I/O groups. The method is to migrate the volume to a recovery group and then migrate it back while specifying the new preferred node.
Changing the preferred node within an I/O group is nondisruptive. However, it can cause a temporary delay in performance, and on some specific operating systems or applications it could affect specific timeouts.
Changing the preferred node within an I/O group can always be done by using the CLI, and with the GUI only if you have at least two I/O groups.
To change the preferred node across I/O groups, there are some limitations, mostly in a Host Cluster environment. See the Supported Hardware List, Device Driver, Firmware and Recommended Software Levels for Spectrum Virtualize and Storwize for your specific version, which is available at:
Also, see the Configuration Limits and Restrictions for IBM System Storage SAN Volume Controller for your specific version available at:
The function that is used to change preferred node across I/O groups is named Non-Disruptive Volume Move (NDVM).
 
Attention: These migration tasks can be nondisruptive if performed correctly and the hosts that are mapped to the volume support NDVM. The cached data that is held within the system must first be written to disk before the allocation of the volume can be changed.
Modifying the I/O group that services the volume can be done concurrently with I/O operations if the host supports NDVM. This process also requires a rescan at the host level to ensure that the multipathing driver is notified that the allocation of the preferred node changed and the ports by which the volume is accessed changed. This can be done when one pair of nodes becomes over-used.
If there are any host mappings for the volume, the hosts must be members of the target I/O group or the migration will fail.
Ensure that you create paths to I/O groups on the host system. After the system successfully added the new I/O group to the volume’s access set and you moved the selected volumes to another I/O group, detect the new paths to the volumes on the host. The commands and actions on the host vary depending on the type of host and the connection method that is used. These steps must be completed on all hosts to which the selected volumes are currently mapped.
To move a volume between I/O groups by using the CLI, complete the steps listed in the IBM Knowledge Center for IBM Spectrum Virtualize that is available at:
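As a hedged outline only, the typical CLI sequence resembles the following sketch; the volume name, I/O group names, and node ID are hypothetical, and a host-level path rescan is required between the steps:
IBM_2145:SVC_ESC:superuser>svctask addvdiskaccess -iogrp io_grp1 VOL_APP01
(rescan paths on the host so that it discovers the ports of the new I/O group)
IBM_2145:SVC_ESC:superuser>svctask movevdisk -iogrp io_grp1 -node 3 VOL_APP01
(rescan paths on the host again and verify that they are online)
IBM_2145:SVC_ESC:superuser>svctask rmvdiskaccess -iogrp io_grp0 VOL_APP01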
4.3 Volume migration
A volume can be migrated from one storage pool to another storage pool regardless of the virtualization type (image, striped, or sequential). The command varies depending on the type of migration, as shown in Table 4-3.
Table 4-3 Migration types and associated commands
Storage pool-to-storage pool type           Command
Managed-to-managed or image-to-managed      migratevdisk
Managed-to-image or image-to-image          migratetoimage
Migrating a volume from one storage pool to another is nondisruptive to the host application using the volume. Depending on the workload of IBM Spectrum Virtualize or Storwize, there might be a slight performance impact. For this reason, migrate a volume from one storage pool to another when the SAN Volume Controller has a relatively low load.
 
Migrating a volume from one storage pool to another storage pool: For the migration to be acceptable, the source and destination storage pool must have the same extent size. Volume mirroring can also be used to migrate a volume between storage pools. You can use this method if the extent sizes of the two pools are not the same.
This section provides guidance for migrating volumes.
4.3.1 Image-type to striped-type migration
When you are migrating existing storage into the IBM Spectrum Virtualize cluster, the existing storage is brought in as image-type volumes, which means that the volume is based on a single MDisk. The CLI command that can be used is migratevdisk.
Example 4-4 shows the migratevdisk command, which can be used to migrate an image-type volume to a striped-type volume, and also to migrate a striped-type volume to another striped-type volume.
Example 4-4 The migratevdisk command
IBM_2145:svccg8:admin>svctask migratevdisk -mdiskgrp MDG1DS4K -threads 4 -vdisk Migrate_sample
This command migrates the volume, Migrate_sample, to the storage pool, MDG1DS4K, and uses four threads when migrating. Instead of using the volume name, you can use its ID number. For more information about this process, see Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6, SG24-7933.
You can monitor the migration process by using the svcinfo lsmigrate command, as shown in Example 4-5.
Example 4-5 Monitoring the migration process
IBM_2145:svccg8:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 3
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:svccg8:admin>
4.3.2 Migrating to image-type volume
An image-type volume is a direct, “straight-through” mapping to one image mode MDisk. If a volume is migrated to another MDisk, the volume is represented as being in managed mode during the migration (because it is striped on two MDisks). It is only represented as an image-type volume after it reaches the state where it is a straight-through mapping.
Image-type disks are used to migrate existing data to an IBM Spectrum Virtualize or Storwize and to migrate data out of virtualization. Image-type volumes cannot be expanded.
Often the reason for migrating a volume to an image type volume is to move the data on the disk to a nonvirtualized environment.
If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes.
The migratetoimage command migrates the data of a user-specified volume by consolidating its extents (which might be on one or more MDisks) onto the extents of the target MDisk that you specify. After migration is complete, the volume is classified as an image type volume, and the corresponding MDisk is classified as an image mode MDisk.
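As a hedged sketch only (the volume name, target MDisk, and target pool are hypothetical), the command takes the following form:
IBM_2145:svccg8:admin>svctask migratetoimage -vdisk Migrate_sample -mdisk mdisk11 -mdiskgrp MDG_IMAGE -threads 4
The progress of this migration can be monitored with the svcinfo lsmigrate command, as shown in Example 4-5.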
 
Remember: This command cannot be used if the source volume copy is in a child pool or if the MDisk group that is specified is a child pool.
This command does not work if the volume is fast formatting.
The managed disk that is specified as the target must be in an unmanaged state at the time that the command is run. Running this command results in the inclusion of the MDisk into the user-specified storage pool.
The migratetoimage command fails if the target or source volume is offline. Correct the offline condition before attempting to migrate the volume.
 
Remember: This command cannot be used on a volume that is owned by a file system or if the source MDisk is an SAS MDisk (which works in image mode only).
If the volume (or volume copy) is a target of a FlashCopy mapping with a source volume in an active-active relationship, the new managed disk group must be in the same site as the source volume.
If the volume is in an active-active relationship, the new managed disk group must be located in the same site as the source volume. Additionally, the site information for the MDisk being added must be well-defined and match the site information for other MDisks in the storage pool.
 
Note: You cannot migrate data from a volume if the target volume’s formatting attribute value is yes.
An encryption key cannot be used when migrating an image mode MDisk. To use encryption (when the MDisk has an encryption key), the MDisk must be self-encrypting.
IBM Spectrum Virtualize and Storwize migratetoimage command is useful when you want to use your system as a data mover. To better understand all requirements and specification for that command, see IBM Knowledge Center at:
4.3.3 Migrating with volume mirroring
Volume mirroring offers the ability to migrate volumes between storage pools with different extent sizes. Complete the following steps to migrate volumes between storage pools:
1. Add a copy to the target storage pool.
2. Wait until the synchronization is complete.
3. Remove the copy in the source storage pool.
To migrate from a Thin-provisioned volume to a fully allocated volume, the following steps are similar:
1. Add a target fully allocated copy.
2. Wait for synchronization to complete.
3. Remove the source Thin-provisioned copy.
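A hedged CLI sketch of the pool-to-pool variant follows; the volume name, target pool, and copy ID are hypothetical, so verify the copy ID with lsvdiskcopy before removing a copy:
IBM_2145:svccg8:admin>svctask addvdiskcopy -mdiskgrp TARGET_POOL VOL_APP01
Vdisk [3] copy [1] successfully created
IBM_2145:svccg8:admin>svcinfo lsvdisksyncprogress VOL_APP01
IBM_2145:svccg8:admin>svctask rmvdiskcopy -copy 0 VOL_APP01
For the Thin-provisioned conversion case, add the appropriate -rsize and -autoexpand parameters to the addvdiskcopy command, and wait until lsvdisksyncprogress reports 100 percent before removing the original copy.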
The preferred practice is to avoid overloading the system with a high syncrate or with too many simultaneous migrations.
The syncrate parameter specifies the copy synchronization rate. A value of zero (0) prevents synchronization. The default value is 50. See Figure 4-4 for the supported -syncrate values and their corresponding rates. Use this parameter to alter the rate at which fully allocated or mirrored volume copies format and synchronize.
Figure 4-4 Sample syncrate values
For more information, see IBM Knowledge Center at:
4.4 VMware Virtual Volumes
IBM Spectrum Virtualize and VMware’s Virtual Volumes (VVols) are paving the way towards a truly Software Defined Environment. IBM Spectrum Virtualize is at the very core of Software Defined Storage. The addition of Virtual Volumes enables a fundamentally more efficient operational model for storage in virtualized environments, centering it around the virtual machine (VM) rather than the physical infrastructure.
Before the arrival of Virtual Volumes, a virtual machine disk (VMDK) was presented to a VM in the form of a file. This file represents a disk to the VM, which the guest operating system then accesses in the same way that a physical disk is accessed on a physical server. This VMDK is stored on a VMware Virtual Machine File System (VMFS) formatted data store.
The VMFS data store is hosted by a single volume on a storage system such as IBM Spectrum Virtualize or Storwize. A single VMFS data store, sometimes referred to as the VMFS blender, can have hundreds or even thousands of VMDKs.
Virtual Volumes provides a one-to-one mapping between the VM’s disks and the volumes (VVols) hosted by the storage system. This VVol is wholly owned by the VM. Exposing the VVol at the storage level enables storage-system-based operations at the granular VM level.
For example, capabilities such as compression and encryption can be applied to an individual VM. Similarly, IBM FlashCopy can be used at the VVol level when performing snapshot and clone operations.
For more information about VVols prerequisites, implementation, and configuration in IBM Spectrum Virtualize or Storwize environments, see Configuring VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize, SG24-8328, and Quick-start Guide to Configuring VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize, REDP-5321.
4.5 Preferred paths to a volume
For I/O purposes, IBM Spectrum Virtualize and Storwize nodes within the cluster are grouped into pairs, which are called I/O groups (sometimes cache I/O groups). A single pair is responsible for serving I/O on a specific volume. One node within the I/O group represents the preferred path for I/O to a specific volume. The other node represents the nonpreferred path. This preference alternates between nodes as each volume is created within an I/O group to balance the workload evenly between the two nodes.
IBM Spectrum Virtualize and Storwize implements the concept of each volume having a preferred owner node, which improves cache efficiency and cache usage. The cache component read/write algorithms depend on one node that owns all the blocks for a specific track. The preferred node is set at the time of volume creation manually by the user or automatically by IBM Spectrum Virtualize and Storwize.
Because read-miss performance is better when the host issues a read request to the owning node, you want the host to know which node owns a track. The SCSI command set provides a mechanism for determining a preferred path to a specific volume. Because a track is part of a volume, the cache component distributes ownership by volume. The preferred paths are then all the paths through the owning node. Therefore, a preferred path is any port on a preferred controller, assuming that the SAN zoning is correct.
 
Tip: Performance can be better if the access is made on the preferred node. The data can still be accessed by the partner node in the I/O group if a failure occurs.
By default, IBM Spectrum Virtualize and Storwize assign ownership of even-numbered volumes to one node of a caching pair and the ownership of odd-numbered volumes to the other node. It is possible for the ownership distribution in a caching pair to become unbalanced if volume sizes are different between the nodes or if the volume numbers that are assigned to the caching pair are predominantly even or odd.
To provide flexibility in making plans to avoid this problem, the ownership for a specific volume can be explicitly assigned to a specific node when the volume is created. A node that is explicitly assigned as an owner of a volume is known as the preferred node. Because it is expected that hosts access volumes through the preferred nodes, those nodes can become overloaded. When a node becomes overloaded, volumes can be moved to other I/O groups because the ownership of a volume cannot be changed after the volume is created.
Multipathing Software, SDDPCM, or SDDDSM (SDD for brevity) is aware of the preferred paths that IBM Spectrum Virtualize or Storwize sets per volume. SDD uses a load balancing and optimizing algorithm when failing over paths. That is, it tries the next known preferred path. If this effort fails and all preferred paths were tried, it load balances on the nonpreferred paths until it finds an available path. If all paths are unavailable, the volume goes offline. Therefore, it can take time to perform path failover when multiple paths go offline. SDD also performs load balancing across the preferred paths where appropriate.
Sometimes when debugging performance problems, it can be useful to look at the Non-Preferred Node Usage Percentage metric in IBM Spectrum Control. I/O to the non-preferred node might cause performance problems for the I/O group. This metric identifies any usage of non-preferred nodes to the user.
For more information about this metric and more, see IBM Spectrum Control™ in IBM Knowledge Center at:
4.5.1 Governing of volumes
I/O governing effectively throttles the number of I/O operations per second (IOPS) or MBps that can be achieved to and from a specific volume. You might want to use I/O governing if you have a volume that has an access pattern that adversely affects the performance of other volumes on the same set of MDisks. An example is a volume that uses most of the available bandwidth.
If this application is highly important, you might want to migrate the volume to another set of MDisks. However, in some cases, it is an issue with the I/O profile of the application rather than a measure of its use or importance.
Base the choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications often issue large amounts of I/O, but they transfer only a relatively small amount of data. In this case, setting an I/O governing throttle that is based on MBps does not achieve much throttling. It is better to use an IOPS throttle.
Conversely, a streaming video application often issues a small number of I/Os, but it transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle that is based on IOPS does not achieve much throttling. For a streaming video application, it is better to use an MBps throttle.
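As a hedged illustration only (the volume names and limits are hypothetical), the two styles of throttle can be applied with the chvdisk command, which is described in more detail in the sections that follow:
IBM_2145:SVC_ESC:superuser>svctask chvdisk -rate 2000 DB_VOL01
IBM_2145:SVC_ESC:superuser>svctask chvdisk -rate 200 -unitmb VIDEO_VOL01
The first command caps a database-style volume at 2000 IOPS; the second caps a streaming-style volume at 200 MBps.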
Quality of Services Enhancement
As already stated, I/O throttling is a mechanism to limit the volume of I/O that is processed by a storage system. Throttling primarily limits the IOPS or bandwidth that is available to a volume or a host; the I/O rate is limited by queuing I/O requests if the preset limits are exceeded.
With previous versions, you could limit IOPS per volume or MBps per volume, but not both at the same time. Starting with IBM Spectrum Virtualize and Storwize V7.7, you have two kinds of throttling. The new I/O throttling works at a finer granularity interval. Short bursts are allowed to avoid delays in workload I/O bursts, and to provide fairness across throttled and incoming I/Os.
Figure 4-5 shows how the throttle algorithm interacts with I/Os.
Figure 4-5 Throttle activity
Volume throttles
With volume throttles, a per-volume throttle can be configured with IOPS limits, bandwidth limits, or both. V7.7 also introduced node-level throttle enforcement, and standard SCSI read and write operations are throttled.
Throttling at a volume level can be set by using the chvdisk command with the -rate throttle_rate and -unitmb parameters. This command specifies the I/O governing rate for the volume, which caps the amount of I/O that is accepted. The default throttle_rate units are I/Os. By default, the throttle_rate parameter is disabled.
To change the throttle_rate units to megabytes per second (MBps), specify the -unitmb parameter. The governing rate for a volume can be specified by I/Os or by MBps, but not both. However, you can set the rate to I/Os for some volumes and to MBps for others.
When the IOPS limit that is configured on a volume is smaller than 100 IOPS, the throttling logic rounds it up to 100 IOPS. Even if the throttle is set to a value smaller than 100 IOPS, the actual throttling occurs at 100 IOPS.
 
Note: To disable the throttling on a specific volume, set the throttle_rate value to zero.
To set a volume throttle, use the chvdisk command; a throttle object is then created. You can list the created throttle objects by using the lsthrottle command and change their parameters with the chthrottle command. Example 4-6 shows some command examples.
Example 4-6 Throttle command example
IBM_2145:SVC_ESC:superuser>chvdisk -rate 100 -unitmb fcs0
 
IBM_2145:SVC_ESC:superuser>lsthrottle
throttle_id throttle_name object_id object_name throttle_type IOPs_limit bandwidth_limit_MB
0 throttle0 0 fcs0 vdisk 100
 
IBM_2145:SVC_ESC:superuser>chthrottle -iops 1000 fcs0
 
IBM_2145:SVC_ESC:superuser>lsthrottle
throttle_id throttle_name object_id object_name throttle_type IOPs_limit bandwidth_limit_MB
0 throttle0 0 fcs0 vdisk 1000 100
 
IBM_2145:SVC_ESC:superuser>lsthrottle fcs0
id 0
throttle_name throttle0
object_id 0
object_name fcs0
throttle_type vdisk
IOPs_limit 1000
bandwidth_limit_MB 100
Offload throttles
Starting with IBM Spectrum Virtualize and Storwize version 7.7, an offload I/O throttle is supported. Offload throttles are applied to the XCOPY and WRITE SAME primitive SCSI commands that are used in the VMware environment with VAAI.
Throttle objects can have IOPs limits, bandwidth limits, or both, and can be created per volume. I/Os are queued if I/O flow exceeds configured limits and queuing has microsecond granularity.
Figure 4-6 shows the throttles flow.
Figure 4-6 Throttles flow
To configure offload throttles, use the mkthrottle, chthrottle and lsthrottle commands, as shown in Example 4-7.
Example 4-7 A throttle command example
IBM_2145:SVC_ESC:superuser>mkthrottle -type offload -bandwidth 200 -iops 2000 -name OffThrottle
Throttle, id [1], successfully created.
 
IBM_2145:SVC_ESC:superuser>lsthrottle
throttle_id throttle_name object_id object_name throttle_type IOPs_limit bandwidth_limit_MB
0 throttle0 0 fcs0 vdisk 1000 100
1 OffThrottle offload 2000 200
 
IBM_2145:SVC_ESC:superuser>chthrottle -iops 5000 OffThrottle
IBM_2145:SVC_ESC:superuser>lsthrottle OffThrottle
id 1
throttle_name OffThrottle
object_id
object_name
throttle_type offload
IOPs_limit 5000
bandwidth_limit_MB 200
Benefits of throttling
Throttling has these benefits:
Manage performance impact of offloaded I/Os:
 – Offloaded I/O for VM management:
 • VMware uses XCOPY and WriteSame (EagerZeroedThick and Storage VMotion).
 • Microsoft HyperV uses ODX.
 – Offloaded I/O commands have a small footprint, but they can generate huge controller activity that can severely impact regular SCSI I/O performance. Using offload throttles, you can accomplish these objectives:
 • Limit bandwidth used by offloaded I/Os.
 • Reduce performance impact on regular SCSI I/Os.
Figure 4-7 shows an offloaded I/Os example.
Figure 4-7 Offloaded I/O example
Bandwidth consumed by secondary applications:
 – Secondary applications, such as backup and data mining jobs, generate bandwidth-intensive workloads, which can adversely impact production bandwidth and latency.
 – Applying a throttle to the volume copy might improve the performance of the production jobs that use the primary (source) volume.
Fairness among large number of volumes.
Smoothing of I/O bursts.
Bandwidth and IOPs distribution among different applications.
Protection against rogue applications overloading the storage controller.
Bandwidth distribution among large numbers of virtual machines.
Figure 4-8 shows the benefits of throttling.
Figure 4-8 Benefits of throttling
4.6 Cache mode and cache-disabled volumes
Cache in IBM Spectrum Virtualize and Storwize can be set at single-volume granularity. For each volume, the cache can be readwrite, readonly, or none. The meaning of each parameter is self-explanatory. By default, when a volume is created, the cache is set to readwrite.
You use cache-disabled (none) volumes primarily when you are virtualizing an existing storage infrastructure and you want to retain the existing storage system copy services. You might want to use cache-disabled volumes where intellectual capital is in existing copy services automation scripts. Keep the use of cache-disabled volumes to a minimum for normal workloads.
You can also use cache-disabled volumes to control the allocation of cache resources. By disabling the cache for certain volumes, more cache resources are available to cache I/Os to other volumes in the same I/O group. This technique of using cache-disabled volumes is effective where an I/O group serves volumes that benefit from cache and other volumes, where the benefits of caching are small or nonexistent.
4.6.1 Underlying controller remote copy with IBM Spectrum Virtualize and Storwize cache-disabled volumes
When synchronous or asynchronous remote copy is used in the underlying storage controller, you must map the controller logical unit numbers (LUNs) at the source and destination through IBM Spectrum Virtualize and Storwize as image mode disks. IBM Spectrum Virtualize and Storwize cache must be disabled.
You can access the source or the target of the remote copy from a host directly, rather than through IBM Spectrum Virtualize and Storwize. You can use IBM Spectrum Virtualize and Storwize copy services with the image mode volume that represents the primary site of the controller remote copy relationship.
Do not use IBM Spectrum Virtualize and Storwize copy services with the volume at the secondary site because IBM Spectrum Virtualize and Storwize does not detect the data that is flowing to this LUN through the controller.
Figure 4-9 shows the relationships between IBM Spectrum Virtualize and Storwize, the volume, and the underlying storage controller for a cache-disabled volume.
Figure 4-9 Cache-disabled volume in a remote copy relationship
4.6.2 Using underlying controller FlashCopy with IBM Spectrum Virtualize and Storwize cache disabled volumes
When FlashCopy is used in the underlying storage controller, you must map the controller LUNs for the source and the target through IBM Spectrum Virtualize and Storwize as image mode disks, as shown in Figure 4-10. IBM Spectrum Virtualize and Storwize cache must be disabled. You can access the source or the target of the FlashCopy from a host directly rather than through IBM Spectrum Virtualize and Storwize.
Figure 4-10 FlashCopy with cache-disabled volumes
4.6.3 Changing the cache mode of a volume
The cache mode of a volume can be changed concurrently (with I/O) by using the svctask chvdisk command. This command does not fail I/O to the user and can be run on any volume. If used correctly without the -force flag, the command does not result in a corrupted volume. Therefore, the cache is flushed, and the cached data is then discarded, when the user disables the cache on a volume.
Example 4-8 shows an image volume VDISK_IMAGE_1 that changed the cache parameter after it was created.
Example 4-8 Changing the cache mode of a volume
IBM_2145:svccg8:admin>svctask mkvdisk -name VDISK_IMAGE_1 -iogrp 0 -mdiskgrp IMAGE_Test -vtype image -mdisk D8K_L3331_1108
Virtual Disk, id [9], successfully created
IBM_2145:svccg8:admin>svcinfo lsvdisk VDISK_IMAGE_1
id 9
.
lines removed for brevity
.
fast_write_state empty
cache readwrite
.
lines removed for brevity
 
IBM_2145:svccg8:admin>svctask chvdisk -cache none VDISK_IMAGE_1
IBM_2145:svccg8:admin>svcinfo lsvdisk VDISK_IMAGE_1
id 9
.
lines removed for brevity
.
cache none
.
lines removed for brevity
 
Tip: By default, the volumes are created with the cache mode enabled (read/write), but you can specify the cache mode when the volume is created by using the -cache option.
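For example (a hedged sketch; the volume and MDisk names are hypothetical), an image-mode volume can be created with its cache already disabled in a single step:
IBM_2145:svccg8:admin>svctask mkvdisk -name VDISK_IMAGE_2 -iogrp 0 -mdiskgrp IMAGE_Test -vtype image -mdisk D8K_L3331_1109 -cache none
Virtual Disk, id [10], successfully created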
4.7 Using IBM Spectrum Virtualize or Storwize with FlashSystem
There can be some specific scenarios where you want to virtualize an IBM or OEM all-flash array (AFA) because you want specific performance for specific workloads. The MDisks supplied by that AFA are placed in a dedicated storage pool where volumes are configured.
In this scenario, you can perform some optimization of the IBM Spectrum Virtualize or Storwize volume cache, depending on the infrastructure that you are building.
Figure 4-11 shows write operation behavior when volume cache is activated (readwrite).
Figure 4-11 Cache activated
Figure 4-12 shows a write operation behavior when volume cache is deactivated (none).
Figure 4-12 Cache deactivated
In an environment with Copy Services (FlashCopy, Metro Mirror, Global Mirror, and Volume Mirroring) and typical workloads, disabling the SVC cache is detrimental to overall performance.
In cases where no advanced functions are used and extremely high IOPS are required, disabling the cache might help.
 
Attention: Carefully evaluate the impact to the entire system with quantitative analysis before and after making this change.
4.8 FlashCopy services
This section provides a short list of rules to apply when you implement IBM Spectrum Virtualize or Storwize FlashCopy services.
4.8.1 FlashCopy rules summary
You must comply with the following rules for using FlashCopy:
FlashCopy services can be provided only inside a SAN Volume Controller cluster. If you want to use FlashCopy for remote storage, you must define the remote storage locally to the SAN Volume Controller cluster.
To maintain data integrity, ensure that all application I/Os and host I/Os are flushed from any application and operating system buffers.
You might need to stop your application for it to be restarted with a copy of the volume that you make. Check with your application vendor if you have any doubts.
Be careful if you want to map the target flash-copied volume to the same host that has the source volume mapped to it. Check that your operating system supports this configuration.
The target volume must be the same size as the source volume. However, the target volume can be a different type (image, striped, or sequential mode) or have different cache settings (cache-enabled or cache-disabled).
If you stop a FlashCopy mapping or a consistency group before it is completed, you lose access to the target volumes. If the target volumes are mapped to hosts, they will have I/O errors.
A volume can be the source for up to 256 targets.
You can create a FlashCopy mapping by using a target volume that is part of a remote copy relationship. This way, you can use the reverse feature with a disaster recovery implementation. You can also use fast failback from a consistent copy that is held on a FlashCopy target volume at the auxiliary cluster to the master copy.
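As a hedged illustration of these rules (the volume and mapping names are hypothetical, and the target volume was created with the same size as the source), a basic FlashCopy mapping can be created and started as follows:
IBM_2145:SVC_ESC:superuser>svctask mkfcmap -source VOL_PROD01 -target VOL_PROD01_FC -name FCMAP_PROD01 -copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:SVC_ESC:superuser>svctask startfcmap -prep FCMAP_PROD01
The -prep option flushes the cache for the source volume before the mapping is started; quiesce or flush the application first, as noted in the rules above.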
4.8.2 IBM Spectrum Protect Snapshot
The management of many large FlashCopy relationships and consistency groups is a complex task without a form of automation for assistance. IBM Spectrum Protect™ Snapshot provides integration between IBM Spectrum Virtualize or Storwize, and IBM Spectrum Protect for Advanced Copy Services. It provides application-aware backup and restore by using the IBM Spectrum Virtualize or Storwize FlashCopy features and function.
For more information about IBM Spectrum Protect Snapshot, see:
4.8.3 IBM System Storage Support for Microsoft Volume Shadow Copy Service
IBM Spectrum Virtualize and Storwize provide support for the Microsoft Volume Shadow Copy Service and Virtual Disk Service. The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume when the volume is mounted and files are in use.
The Microsoft Virtual Disk Service provides a single vendor and technology-neutral interface for managing block storage virtualization, whether done by operating system software, RAID storage hardware, or other storage virtualization engines.
The following components are used to support the service:
SAN Volume Controller
The cluster Common Information Model (CIM) server
The IBM System Storage hardware provider, which is known as the IBM System Storage Support, for Microsoft Volume Shadow Copy Service and Virtual Disk Service software
Microsoft Volume Shadow Copy Service
The VMware vSphere Web Services when it is in a VMware virtual platform
The IBM System Storage hardware provider is installed on the Windows host. To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host starts a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider that a copy is needed.
3. IBM Spectrum Virtualize and Storwize prepare the volumes for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for the copy.
5. IBM Spectrum Virtualize and Storwize create the shadow copy by using the FlashCopy Copy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.
The Volume Shadow Copy Service maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of volumes. These pools are implemented as virtual host systems on the SAN Volume Controller.
For more information about how to implement and work with IBM System Storage Support for Microsoft Volume Shadow Copy Service, see Third Party Host Software at:
4.9 Configuration Backup
To achieve the most benefit from an IBM Spectrum Virtualize or Storwize systems implementation, postinstallation planning must include several important steps. These steps ensure that your infrastructure can be recovered with either the same or a different configuration in one of the surviving sites with minimal impact to the client applications. Correct planning and configuration backup also help to minimize possible downtime.
Regardless of which failure scenario you face, apply the following guidelines.
To plan the IBM Spectrum Virtualize or Storwize configuration backup, complete these steps:
1. Collect a detailed IBM Spectrum Virtualize or Storwize configuration. To do so, run a daily configuration backup with the command-line interface (CLI) commands that are shown in Example 4-9. The configuration backup can be automated with your own script.
Example 4-9 Saving the Storwize V7000 configuration
IBM_Storwize:ITSO_V7K_HyperSwap:superuser>svcconfig backup
........................................................................................
CMMVC6155I SVCCONFIG processing completed successfully
IBM_Storwize:ITSO_V7K_HyperSwap:superuser>lsdumps
id filename
0 reinst.7836494-1.trc
1 svc.config.cron.bak_7836494-2
.
. lines removed for brevity
.
40 svc.config.backup.xml_7836494-1
2. Save the .xml file that is produced in a safe place, as shown in Example 4-10.
Example 4-10 Copying the configuration
C:\Program Files\PuTTY>pscp -load V7K_HyperSwap superuser@<cluster_ip>:/tmp/svc.config.backup.xml_7836494-1 c:\temp\configbackup.xml
configbackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%
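Steps 1 and 2 lend themselves to a simple scheduled script. The following sketch assumes a Unix-like management host with key-based SSH access to the cluster; the cluster address, user, and destination directory are placeholder assumptions, and the /tmp path mirrors Example 4-10.

#!/bin/sh
# Minimal daily configuration backup sketch (suitable for cron).
# CLUSTER, USER, and DEST are placeholder assumptions.
CLUSTER=ITSO_V7K_HyperSwap.example.com
USER=superuser
DEST=/backup/svcconfig/$(date +%Y%m%d)
mkdir -p "$DEST"

# Step 1: create a new configuration backup on the cluster
ssh "$USER@$CLUSTER" 'svcconfig backup'

# Step 2: copy each svc.config.backup file to a safe place
for f in $(ssh "$USER@$CLUSTER" 'lsdumps -nohdr' | awk '/svc.config.backup/ {print $2}'); do
    scp "$USER@$CLUSTER:/tmp/$f" "$DEST/"
done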
3. Save the output of the CLI commands that are shown in Example 4-11 in .txt format; a capture sketch follows the example.
Example 4-11 List of Storwize V7000 commands to issue
lssystem
lssite
lsnodecanister
lsnodecanister <node_name>
lsnodecanisterhw <node_name>
lsiogrp
lsiogrp <iogrp_name>
lscontroller
lscontroller <controller_name>
lsmdiskgrp
lsmdiskgrp <mdiskgrp_name>
lsmdisk
lsquorum
lsquorum <quorum_id>
lsvdisk
lshost
lshost <host_name>
lshostvdiskmap
lsrcrelationship
lsrcconsistgrp
 
Note: Example 4-11 shows Storwize system commands, but the same commands apply to an IBM Spectrum Virtualize system.
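A minimal sketch for capturing this output follows, assuming the same key-based SSH access as before; the cluster address, user, and output directory are placeholder assumptions. Only the parameter-less forms are looped here; the per-object variants (for example, lsnodecanister <node_name>) can be added in the same way.

#!/bin/sh
# Sketch: save the output of the Example 4-11 commands as .txt files.
# CLUSTER, USER, and OUTDIR are placeholder assumptions.
CLUSTER=ITSO_V7K_HyperSwap.example.com
USER=superuser
OUTDIR=/backup/svcconfig/txt
mkdir -p "$OUTDIR"

for cmd in lssystem lssite lsnodecanister lsiogrp lscontroller \
           lsmdiskgrp lsmdisk lsquorum lsvdisk lshost \
           lshostvdiskmap lsrcrelationship lsrcconsistgrp; do
    ssh "$USER@$CLUSTER" "$cmd" > "$OUTDIR/$cmd.txt"
done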
From the output of these commands and the .xml file, you have a complete picture of the Storwize V7000 HyperSwap infrastructure. Record the Storwize V7000 HyperSwap ports’ worldwide node names (WWNNs) so that you can reuse them during the recovery operation.
Example 4-12, an excerpt of the .xml file, shows the information that you need to re-create a Storwize V7000 environment after a critical event.
Example 4-12 XML configuration file example
<xml
label="Configuration Back-up"
version="750"
file_version="1.206.9.169"
timestamp="2015/08/12 13:20:30 PDT" >
 
<!-- cluster section -->
 
<object type="cluster" >
<property name="id" value="00000100216001E0" />
<property name="name" value="ITSO_V7K_HyperSwap" />
</object >
.
many lines omitted for brevity
.
<!-- controller section -->
 
<object type="controller" >
<property name="id" value="0" />
<property name="controller_name" value="ITSO_V7K_Q_N1" />
<property name="WWNN" value="50050768020000EF" />
<property name="mdisk_link_count" value="2" />
<property name="max_mdisk_link_count" value="2" />
<property name="degraded" value="no" />
<property name="vendor_id" value="IBM " />
<property name="product_id_low" value="2145 " />
<property name="product_id_high" value=" " />
<property name="product_revision" value="0000" />
<property name="ctrl_s/n" value="2076 " />
<property name="allow_quorum" value="yes" />
<property name="fabric_type" value="fc" />
<property name="site_id" value="3" />
<property name="site_name" value="ITSO_SITE_Q" />
<property name="WWPN" value="50050768021000EF" />
<property name="path_count" value="0" />
<property name="max_path_count" value="0" />
<property name="WWPN" value="50050768022000EF" />
<property name="path_count" value="0" />
<property name="max_path_count" value="0" />
</object >
<object type="controller" >
<property name="id" value="1" />
<property name="controller_name" value="ITSO_V7K_Q_N2" />
<property name="WWNN" value="50050768020000F0" />
<property name="mdisk_link_count" value="2" />
<property name="max_mdisk_link_count" value="2" />
<property name="degraded" value="no" />
<property name="vendor_id" value="IBM " />
<property name="product_id_low" value="2145 " />
<property name="product_id_high" value=" " />
<property name="product_revision" value="0000" />
<property name="ctrl_s/n" value="2076 " />
<property name="allow_quorum" value="yes" />
<property name="fabric_type" value="fc" />
<property name="site_id" value="3" />
<property name="site_name" value="ITSO_SITE_Q" />
<property name="WWPN" value="50050768021000F0" />
<property name="path_count" value="8" />
<property name="max_path_count" value="8" />
<property name="WWPN" value="50050768022000F0" />
<property name="path_count" value="8" />
<property name="max_path_count" value="8" />
</object >
many lines omitted for brevity
You can also get this information from the .txt command output that is shown in Example 4-13.
Example 4-13 Example lsnodecanister command output
IBM_Storwize:ITSO_V7K_HyperSwap:superuser>lsnodecanister ITSO_HS_SITE_A_N1
id 8
name ITSO_HS_SITE_A_N1
UPS_serial_number
WWNN 500507680B0021A8
status online
IO_group_id 0
IO_group_name io_grp0_SITE_A
partner_node_id 9
partner_node_name ITSO_HS_SITE_A_N2
config_node yes
UPS_unique_id
port_id 500507680B2121A8
port_status active
port_speed 4Gb
port_id 500507680B2221A8
port_status active
port_speed 4Gb
port_id 500507680B2321A8
port_status active
port_speed 2Gb
port_id 500507680B2421A8
port_status active
port_speed 2Gb
hardware 400
iscsi_name iqn.1986-03.com.ibm:2145.itsov7khyperswap.itsohssitean1
iscsi_alias
failover_active no
failover_name ITSO_HS_SITE_A_N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsov7khyperswap.itsohssitean2
failover_iscsi_alias
panel_name 01-1
enclosure_id 1
canister_id 1
enclosure_serial_number 7836494
service_IP_address 10.18.228.55
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
service_IP_mode static
service_IP_mode_6
site_id 1
site_name ITSO_SITE_A
identify_LED off
product_mtm 2076-424
code_level 7.5.0.2 (build 115.51.1507081154000)
For more information about backing up your configuration, see the Storwize V7000 and IBM Spectrum Virtualize sections at IBM Knowledge Center.
4. Create an up-to-date, high-level copy of your configuration that describes all elements and connections.
5. Create a standard labeling schema and naming convention for your Fibre Channel (FC) or Ethernet (ETH) cabling, and ensure that it is fully documented.
6. Back up your storage area network (SAN) zoning by using your FC switch CLI or graphical user interface (GUI).
The essential zoning configuration data (domain IDs, zoning, aliases, configurations, and zone sets) can be saved in a .txt file by using the output from the CLI commands. You can also use the appropriate utility to back up the entire configuration.
The following IBM b-type/Brocade FC switch or director commands are helpful for collecting the essential zoning configuration data (a capture sketch follows this step):
 – switchshow
 – fabricshow
 – cfgshow
During the implementation, use WWNN zoning. During the recovery phase after a critical event, reuse, if possible, the same domain IDs and port numbers that were used in the failing site. Zoning is propagated to each switch through the SAN extension over the inter-switch links (ISLs).
For more information about how to back up your FC switch or director zoning configuration, see your switch vendor’s documentation.
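The sketch that follows captures that zoning data from IBM b-type/Brocade switches over SSH; the switch names and the admin user are placeholder assumptions, and only the three commands listed in step 6 are run.

#!/bin/sh
# Sketch: save the essential zoning data from each fabric switch.
# SWITCHES, the admin user, and OUTDIR are placeholder assumptions.
SWITCHES="fabric_a_sw1 fabric_b_sw1"
OUTDIR=/backup/san-zoning/$(date +%Y%m%d)
mkdir -p "$OUTDIR"

for sw in $SWITCHES; do
    for cmd in switchshow fabricshow cfgshow; do
        ssh "admin@$sw" "$cmd" > "$OUTDIR/${sw}_${cmd}.txt"
    done
done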
7. Back up your back-end storage subsystems configuration.
An IBM Spectrum Virtualize or Storwize implementation can also virtualize external storage controllers. If you virtualized external storage controllers, back up their configuration as well. This way, if a critical event occurs, you can re-create the same environment when you reestablish your infrastructure in a different site with new storage subsystems.
Back up your storage subsystem in one of the following ways:
 – For the IBM DS8000 storage subsystem, save the output of the DS8000 CLI commands in .txt format, as shown in Example 4-14 (a capture sketch follows this list).
Example 4-14 DS8000 commands
lsarraysite -l
lsarray -l
lsrank -l
lsextpool -l
lsfbvol -l
lshostconnect -l
lsvolgrp -l
showvolgrp -lunmap <SVC vg_name>
 – For the IBM XIV Storage System, save the output of the XCLI commands in .txt format, as shown in Example 4-15.
Example 4-15 XIV subsystem commands
host_list
host_list_ports
mapping_list
vol_mapping_list
pool_list
vol_list
 – For IBM Storwize V7000, collect the configuration files and the output report as described previously.
 – For any other supported storage vendor’s products, see their documentation.
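For the DS8000 case, the DS CLI can run the Example 4-14 commands from a script file, which makes the capture easy to schedule. The following sketch assumes that a DS CLI connection profile is already configured; the profile name, volume group name, and output path are placeholder assumptions. A similar loop applies to the XIV XCLI commands in Example 4-15.

#!/bin/sh
# Sketch: capture the Example 4-14 DS8000 output in one .txt file.
# The DS CLI profile, volume group name, and output path are placeholder assumptions.
cat > /tmp/ds8k_inventory.dscli <<'EOF'
lsarraysite -l
lsarray -l
lsrank -l
lsextpool -l
lsfbvol -l
lshostconnect -l
lsvolgrp -l
showvolgrp -lunmap SVC_VG_NAME
EOF
dscli -cfg ds8000_profile.cfg -script /tmp/ds8k_inventory.dscli > ds8000_config_$(date +%Y%m%d).txt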