Performance data and statistics gathering
This appendix provides a brief overview of the performance analysis capabilities of the IBM System Storage SAN Volume Controller (SVC) and IBM Spectrum Virtualize V8.1. It also describes a method that you can use to collect and process IBM Spectrum Virtualize performance statistics.
It is beyond the intended scope of this book to provide an in-depth understanding of performance statistics or to explain how to interpret them. For more information about the performance of the SVC, see IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
This appendix describes the following topics:
SAN Volume Controller performance overview
Storage virtualization with the IBM Spectrum Virtualize provides many administrative benefits. In addition, it can provide an increase in performance for some workloads. The caching capability of the IBM Spectrum Virtualize and its ability to stripe volumes across multiple disk arrays can provide a performance improvement over what can otherwise be achieved when midrange disk subsystems are used.
To ensure that the performance levels of your system are maintained, monitor performance periodically to provide visibility to potential problems that exist or are developing so that they can be addressed in a timely manner.
Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing infrastructure, you must consider many factors in terms of their potential effect on performance. These factors include, but are not limited to, dissimilar workloads competing for the same resources, overloaded resources, insufficient available resources, poorly performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network (SAN) and IBM Spectrum Virtualize layout:
Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant input/output (I/O) load across ISLs. The recommendation is to maintain a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to lead to I/O bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on the edges and the SVC is the core.
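The oversubscription ratio is simply the number of active host-facing ports compared to the number of ISLs that carry their traffic, assuming equal port speeds. The following short Python sketch illustrates the check against the suggested 7-to-1 maximum; the port and ISL counts are hypothetical values that are chosen for illustration only.

# Hypothetical example: check host-to-SVC ISL oversubscription for one edge switch.
# The counts below are assumptions for illustration and assume equal port speeds.

def oversubscription_ratio(host_ports, isl_ports):
    """Return the host-port to ISL-port ratio for an edge switch."""
    return host_ports / isl_ports

host_ports = 28   # host ports in use on the edge switch (assumption)
isl_ports = 4     # ISLs from the edge switch to the core (assumption)

ratio = oversubscription_ratio(host_ports, isl_ports)
print("Oversubscription ratio: {0:.1f}-to-1".format(ratio))

if ratio > 7:
    print("Warning: exceeds the suggested 7-to-1 maximum; add ISLs.")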
Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In standard setups, this load can be ignored. Although this area is not entirely negligible, it does not contribute significantly to the ISL load. However, node-to-node ISL oversubscription is mentioned here in relation to the split-cluster capabilities that have been available since V6.3 (Stretched Cluster and HyperSwap).
When the system is running in this manner, the number of ISL links becomes more important. As with the storage-to-SVC ISL oversubscription, this load also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you determine the number of ISLs to implement. If you need assistance, contact your IBM representative and request technical assistance.
ISL trunking/port channeling
For the best performance and availability, use ISL trunking or port channeling. Independent ISL links can easily become overloaded and turn into performance bottlenecks. Bonded or trunked ISLs automatically share load and provide better redundancy in a failure.
Number of paths per host multipath device
The maximum supported number of paths per multipath device that is visible on the host is eight. Although the IBM Subsystem Device Driver Path Control Module (SDDPCM), related products, and most vendor multipathing software can support more paths, the SVC expects a maximum of eight paths. In general, using more than eight paths only degrades performance. Although the IBM Spectrum Virtualize can work with more than eight paths, this design is not supported.
Do not intermix dissimilar array types or sizes
Although the IBM Spectrum Virtualize supports an intermix of differing storage within storage pools, it is best to always use the same array model, Redundant Array of Independent Disks (RAID) mode, RAID size (RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speed.
Rules and guidelines are no substitution for monitoring performance. Monitoring performance can provide a validation that design expectations are met, and identify opportunities for improvement.
IBM Spectrum Virtualize performance perspectives
The software was developed by the IBM Research Group. It is designed to run on commodity hardware (mass-produced Intel-based processors (CPUs) with mass-produced expansion cards) and to provide distributed cache and a scalable cluster architecture. One of the main goals of this design was to be able to take advantage of hardware refreshes. Currently, the SVC cluster is scalable up to eight nodes, and these nodes can be swapped for newer hardware while online.
This capability provides great investment value because the nodes are relatively inexpensive and a node swap can be done online. This capability provides an instant performance boost with no license changes. Newer nodes, such as the 2145-SV1 models, with their dramatically increased cache of 32 - 64 gigabytes (GB) per node, provide an extra benefit on top of the typical refresh cycle.
To set the Fibre Channel port mapping for the 2145-SV1, you can use the following application link. This link supports only upgrades from the 2145-CF8, 2145-CG8, and 2145-DH8:
Performance scales nearly linearly as nodes are added to the cluster, until it eventually becomes limited by the attached components. Although virtualization provides significant flexibility in terms of the components that are used, it does not diminish the necessity of designing the system around the components so that it can deliver the level of performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning requirements, but the end goal is that you always want to maximize the bandwidth that is available to the SVC ports. The SVC is one of the few devices that can drive ports to their limits on average, so it is imperative that you put significant thought into planning the SAN layout.
Essentially, performance improvements are gained by spreading the workload across a greater number of back-end resources and by more caching. These capabilities are provided by the SVC cluster. However, the performance of individual resources eventually becomes the limiting factor.
Performance monitoring
This section highlights several performance monitoring techniques.
Collecting performance statistics
IBM Spectrum Virtualize is constantly collecting performance statistics. By default, the statistics files are created at 5-minute intervals. The collection interval can be changed by using the startstats command.
The statistics files (Volume, managed disk (MDisk), and Node) are saved at the end of the sampling interval. A maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion. This design then provides statistics for the most recent 80-minute period if the default 5-minute sampling interval is used. IBM Spectrum Virtualize supports user-defined sampling intervals of 1 - 60 minutes.
The maximum space that is required for a performance statistics file is around 1 MB (1,153,482 bytes). Up to 128 (16 per each of the three types across eight nodes) different files can exist across eight SVC nodes. This design makes the total space requirement a maximum of a bit more than 147 MB (147,645,694 bytes) for all performance statistics from all nodes in an 8-node SVC cluster.
 
Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all nodes in an SVC cluster when you are in time-critical situations. The required size is not otherwise important because SVC node hardware can map the space.
You can define the sampling interval by using the startstats -interval 2 command to collect statistics at, in this example, 2-minute intervals.
Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A file is created at the end of each sampling period. Separate files are created for MDisks, volumes, and node statistics.
Use the startstats command to start the collection of statistics, as shown in Example A-1.
Example A-1 The startstats command
IBM_2145:ITSO_SVC_DH8:superuser>startstats -interval 4
This command starts statistics collection and gathers data at 4-minute intervals.
To verify the statistics collection interval, display the system properties again, as shown in Example A-2.
Example A-2 Statistics collection status and frequency
IBM_2145:ITSO_SVC_DH8:superuser>lssystem
statistics_status on
statistics_frequency 4
-- The output has been shortened for easier reading. --
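Because startstats and lssystem are ordinary CLI commands, the interval can also be set and verified from a script. The following Python sketch is one possible approach and uses the paramiko SSH library; the cluster address, user, and key file are placeholders that you must replace with your own values.

import paramiko

# Placeholder connection details; replace with your own cluster address and credentials.
CLUSTER = "cluster.example.com"
USER = "superuser"
KEYFILE = "/path/to/ssh_private_key"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CLUSTER, username=USER, key_filename=KEYFILE)

# Set a 4-minute statistics collection interval.
client.exec_command("startstats -interval 4")

# Verify the setting by reading the system properties.
_, stdout, _ = client.exec_command("lssystem")
for line in stdout.read().decode().splitlines():
    if line.startswith(("statistics_status", "statistics_frequency")):
        print(line)

client.close()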
Starting with V8.1, it is not possible to stop statistics collection by using the stopstats command.
 
Collection intervals: Although more frequent collection intervals provide a more detailed view of what happens within IBM Spectrum Virtualize and the SVC, they shorten the amount of time that the historical data remains available on the IBM Spectrum Virtualize. For example, rather than the 80-minute period of data that the default 5-minute interval provides, a 2-minute interval provides only a 32-minute period.
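The trade-off is easy to quantify: because 16 files of each type are kept, the history that remains on the system is 16 times the sampling interval. A minimal sketch:

FILES_KEPT = 16   # statistics files that are kept per type before they are overlaid

def retention_minutes(interval_minutes):
    """Return how many minutes of history remain on the system."""
    return FILES_KEPT * interval_minutes

print(retention_minutes(5))   # 80 minutes with the default 5-minute interval
print(retention_minutes(2))   # 32 minutes with a 2-minute interval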
Statistics are collected per node. The sampling of the internal performance counters is coordinated across the cluster so that when a sample is taken, all nodes sample their internal counters at the same time. It is important to collect all files from all nodes for a complete analysis. Tools, such as IBM Spectrum Control, perform this intensive data collection for you.
Statistics file naming
The statistics files that are generated are written to the /dumps/iostats/ directory. The file names are in the following formats:
Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
Nv_stats_<node_frontpanel_id>_<date>_<time> for volume statistics
Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
Nd_stats_<node_frontpanel_id>_<date>_<time> for drive statistics
The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an MDisk statistics file name:
Nm_stats_113986_161024_151832
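If you process the files in a script, the naming convention makes it straightforward to recover the statistics type, node, and collection time from each file name. The following Python sketch is a minimal parser that assumes only the format described above:

import re
from datetime import datetime

# Map the file name prefix to the type of statistics that the file contains.
STAT_TYPES = {"Nm": "MDisk", "Nv": "volume", "Nn": "node", "Nd": "drive"}

NAME_RE = re.compile(r"^(N[mvnd])_stats_(\S+)_(\d{6})_(\d{6})$")

def parse_stats_filename(name):
    """Return (statistics type, node front panel ID, timestamp) for a file name."""
    match = NAME_RE.match(name)
    if not match:
        raise ValueError("Not a statistics file name: " + name)
    prefix, frontpanel_id, date, time = match.groups()
    timestamp = datetime.strptime(date + time, "%y%m%d%H%M%S")
    return STAT_TYPES[prefix], frontpanel_id, timestamp

print(parse_stats_filename("Nm_stats_113986_161024_151832"))
# ('MDisk', '113986', datetime.datetime(2016, 10, 24, 15, 18, 32))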
Example A-3 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-3 File names of per node statistics
IBM_2145:ITSO_SVC3:superuser>lsdumps -prefix /dumps/iostats
id filename
0 Nd_stats_75ACXP0_171004_144620
1 Nn_stats_75ACXP0_171004_144620
2 Nm_stats_75ACXP0_171004_150120
3 Nd_stats_75ACXP0_171004_150120
4 Nv_stats_75ACXP0_171004_150120
5 Nn_stats_75ACXP0_171004_150120
6 Nd_stats_75ACXP0_171004_151620
7 Nn_stats_75ACXP0_171004_151620
8 Nm_stats_75ACXP0_171004_151620
9 Nv_stats_75ACXP0_171004_151620
10 Nn_stats_75ACXP0_171004_152254
 
Tip: The performance statistics files can be copied from the SVC nodes to a local drive on your workstation by using pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_SVC3 [email protected]:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
Specify the -unsafe parameter when you use wildcards.
You can obtain PuTTY from the following website:
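Because a complete analysis requires the files from all nodes, the copy step is often scripted. The following Python sketch wraps the same pscp command in a loop; the session names and addresses are placeholders that you must replace with your own values, and pscp.exe must be in the command path.

import subprocess
from pathlib import Path

# Placeholder (PuTTY session, address) pairs; replace with your own values.
NODES = [
    ("ITSO_SVC3", "cluster_or_node_address"),
]
TARGET = Path(r"c:\statsfiles")
TARGET.mkdir(exist_ok=True)

for session, address in NODES:
    # -load uses the saved PuTTY session; -unsafe is required because of the wildcard.
    subprocess.run(
        ["pscp", "-unsafe", "-load", session,
         "superuser@" + address + ":/dumps/iostats/*", str(TARGET)],
        check=True,
    )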
Real-time performance monitoring
SVC supports real-time performance monitoring. Real-time performance statistics provide short-term status information for the SVC. The statistics are shown as graphs in the management GUI, or can be viewed from the CLI.
With system-level statistics, you can quickly view the CPU usage and the bandwidth of volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes per second (MBps) or I/O operations per second (IOPS), and a view of bandwidth over time.
Each node collects various performance statistics, mostly at 5-second intervals, and the statistics are available from the config node in a clustered environment. This information can help you determine the performance effect of a specific node. As with system statistics, node statistics help you evaluate whether the node is operating within normal performance metrics.
Real-time performance monitoring gathers the following system-level performance statistics:
CPU utilization
Port utilization and I/O rates
Volume and MDisk I/O rates
Bandwidth
Latency
Real-time statistics are not a configurable option and cannot be disabled.
Real-time performance monitoring with the CLI
The lsnodestats and lssystemstats commands are available for monitoring the statistics through the CLI.
The lsnodestats command provides performance statistics for the nodes that are part of a clustered system, as shown in Example A-4. This output is truncated and shows only part of the available statistics. You can also specify a node name in the command to limit the output for a specific node.
Example A-4 The lsnodestats command output
IBM_2145:ITSO_DH8:superuser>lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 node_75ACXP0 compression_cpu_pc 0 0 171004174607
1 node_75ACXP0 cpu_pc 4 4 171004174607
1 node_75ACXP0 fc_mb 0 13 171004174202
1 node_75ACXP0 fc_io 219 315 171004174202
1 node_75ACXP0 sas_mb 0 0 171004174607
1 node_75ACXP0 sas_io 0 0 171004174607
1 node_75ACXP0 iscsi_mb 0 0 171004174607
1 node_75ACXP0 iscsi_io 0 0 171004174607
1 node_75ACXP0 write_cache_pc 0 0 171004174607
1 node_75ACXP0 total_cache_pc 0 0 171004174607
1 node_75ACXP0 vdisk_mb 0 0 171004174607
1 node_75ACXP0 vdisk_io 0 0 171004174607
1 node_75ACXP0 vdisk_ms 0 0 171004174607
1 node_75ACXP0 mdisk_mb 0 13 171004174202
1 node_75ACXP0 mdisk_io 5 96 171004174202
1 node_75ACXP0 mdisk_ms 0 12 171004174202
1 node_75ACXP0 drive_mb 0 0 171004174607
1 node_75ACXP0 drive_io 0 0 171004174607
1 node_75ACXP0 drive_ms 0 0 171004174607
1 node_75ACXP0 vdisk_r_mb 0 0 171004174607
1 node_75ACXP0 vdisk_r_io 0 0 171004174607
1 node_75ACXP0 vdisk_r_ms 0 0 171004174607
1 node_75ACXP0 vdisk_w_mb 0 0 171004174607
...
2 node_75ACXF0 mdisk_w_ms 0 0 171004174607
2 node_75ACXF0 drive_r_mb 0 0 171004174607
2 node_75ACXF0 drive_r_io 0 0 171004174607
2 node_75ACXF0 drive_r_ms 0 0 171004174607
2 node_75ACXF0 drive_w_mb 0 0 171004174607
2 node_75ACXF0 drive_w_io 0 0 171004174607
2 node_75ACXF0 drive_w_ms 0 0 171004174607
2 node_75ACXF0 iplink_mb 0 0 171004174607
2 node_75ACXF0 iplink_io 0 0 171004174607
2 node_75ACXF0 iplink_comp_mb 0 0 171004174607
2 node_75ACXF0 cloud_up_mb 0 0 171004174607
2 node_75ACXF0 cloud_up_ms 0 0 171004174607
2 node_75ACXF0 cloud_down_mb 0 0 171004174607
2 node_75ACXF0 cloud_down_ms 0 0 171004174607
Example A-4 on page 758 shows statistics for the two node members of the cluster ITSO_DH8. For each node, the following columns are displayed:
stat_name: The name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
The lsnodestats command can also be used with a node name or ID as an argument. For example, you can enter the following command to display the statistics of node with ID 1 only:
lsnodestats 1
The lssystemstats command lists the same set of statistics that is listed with the lsnodestats command, but representing all nodes in the cluster. The values for these statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated by using data from the whole cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
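To illustrate the relationship, the following Python sketch aggregates a few per-node counters in the way that is described above (sums for bandwidth and IOPS, an average for the CPU percentage). The per-node values are invented for illustration, and latency is omitted because the cluster-wide latency is not a simple average of the node values.

# Hypothetical per-node counters (invented values, for illustration only).
node_stats = {
    "node_75ACXP0": {"fc_mb": 120, "fc_io": 2200, "cpu_pc": 12},
    "node_75ACXF0": {"fc_mb": 80, "fc_io": 1800, "cpu_pc": 8},
}

# Bandwidth and IOPS are summed across the nodes.
system_fc_mb = sum(stats["fc_mb"] for stats in node_stats.values())
system_fc_io = sum(stats["fc_io"] for stats in node_stats.values())

# The CPU percentage is averaged across the nodes.
system_cpu_pc = sum(stats["cpu_pc"] for stats in node_stats.values()) / len(node_stats)

print(system_fc_mb, system_fc_io, system_cpu_pc)   # 200 4000 10.0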
Example A-5 shows the resulting output of the lssystemstats command.
Example A-5 The lssystemstats command output
IBM_2145:ITSO_DH8:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 0 0 171005115305
cpu_pc 4 4 171005115305
fc_mb 0 13 171005115200
fc_io 426 536 171005115200
sas_mb 0 0 171005115305
sas_io 0 0 171005115305
iscsi_mb 0 0 171005115305
iscsi_io 0 0 171005115305
write_cache_pc 0 0 171005115305
total_cache_pc 0 0 171005115305
vdisk_mb 0 0 171005115305
vdisk_io 0 0 171005115305
vdisk_ms 0 0 171005115305
mdisk_mb 0 13 171005115200
mdisk_io 5 96 171005115200
mdisk_ms 0 5 171005115200
drive_mb 0 0 171005115305
drive_io 0 0 171005115305
drive_ms 0 0 171005115305
vdisk_r_mb 0 0 171005115305
vdisk_r_io 0 0 171005115305
vdisk_r_ms 0 0 171005115305
vdisk_w_mb 0 0 171005115305
vdisk_w_io 0 0 171005115305
vdisk_w_ms 0 0 171005115305
mdisk_r_mb 0 13 171005115200
mdisk_r_io 0 95 171005115200
mdisk_r_ms 0 6 171005115200
mdisk_w_mb 0 0 171005115305
mdisk_w_io 5 5 171005115305
mdisk_w_ms 0 0 171005115305
drive_r_mb 0 0 171005115305
drive_r_io 0 0 171005115305
drive_r_ms 0 0 171005115305
drive_w_mb 0 0 171005115305
drive_w_io 0 0 171005115305
drive_w_ms 0 0 171005115305
iplink_mb 0 0 171005115305
iplink_io 0 0 171005115305
iplink_comp_mb 0 0 171005115305
cloud_up_mb 0 0 171005115305
cloud_up_ms 0 0 171005115305
cloud_down_mb 0 0 171005115305
cloud_down_ms 0 0 171005115305
Table A-1 gives the description of the different counters that are presented by the lssystemstats and lsnodestats commands.
Table A-1 List of counters in lssystemstats and lsnodestats
Value
Description
compression_cpu_pc
Displays the percentage of allocated CPU capacity that is used for compression.
cpu_pc
Displays the percentage of allocated CPU capacity that is used for the system.
fc_mb
Displays the total number of megabytes transferred per second for Fibre Channel traffic on the system. This value includes host I/O and any bandwidth that is used for communication within the system.
fc_io
Displays the total I/O operations that are transferred per second for Fibre Channel traffic on the system. This value includes host I/O and any bandwidth that is used for communication within the system.
sas_mb
Displays the total number of megabytes transferred per second for serial-attached SCSI (SAS) traffic on the system. This value includes host I/O and bandwidth that is used for background RAID activity.
sas_io
Displays the total I/O operations that are transferred per second for SAS traffic on the system. This value includes host I/O and bandwidth that is used for background RAID activity.
iscsi_mb
Displays the total number of megabytes transferred per second for iSCSI traffic on the system.
iscsi_io
Displays the total I/O operations that are transferred per second for iSCSI traffic on the system.
write_cache_pc
Displays the percentage of the write cache usage for the node.
total_cache_pc
Displays the total percentage for both the write and read cache usage for the node.
vdisk_mb
Displays the average number of megabytes transferred per second for read and write operations to volumes during the sample period.
vdisk_io
Displays the average number of I/O operations that are transferred per second for read and write operations to volumes during the sample period.
vdisk_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to volumes over the sample period.
mdisk_mb
Displays the average number of megabytes transferred per second for read and write operations to MDisks during the sample period.
mdisk_io
Displays the average number of I/O operations that are transferred per second for read and write operations to MDisks during the sample period.
mdisk_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to MDisks over the sample period.
drive_mb
Displays the average number of megabytes transferred per second for read and write operations to drives during the sample period.
drive_io
Displays the average number of I/O operations that are transferred per second for read and write operations to drives during the sample period.
drive_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to drives over the sample period.
vdisk_w_mb
Displays the average number of megabytes transferred per second for write operations to volumes during the sample period.
vdisk_w_io
Displays the average number of I/O operations that are transferred per second for write operations to volumes during the sample period.
vdisk_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to volumes over the sample period.
mdisk_w_mb
Displays the average number of megabytes transferred per second for write operations to MDisks during the sample period.
mdisk_w_io
Displays the average number of I/O operations that are transferred per second for write operations to MDisks during the sample period.
mdisk_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to MDisks over the sample period.
drive_w_mb
Displays the average number of megabytes transferred per second for write operations to drives during the sample period.
drive_w_io
Displays the average number of I/O operations that are transferred per second for write operations to drives during the sample period.
drive_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to drives over the sample period.
vdisk_r_mb
Displays the average number of megabytes transferred per second for read operations to volumes during the sample period.
vdisk_r_io
Displays the average number of I/O operations that are transferred per second for read operations to volumes during the sample period.
vdisk_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to volumes over the sample period.
mdisk_r_mb
Displays the average number of megabytes transferred per second for read operations to MDisks during the sample period.
mdisk_r_io
Displays the average number of I/O operations that are transferred per second for read operations to MDisks during the sample period.
mdisk_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to MDisks over the sample period.
drive_r_mb
Displays the average number of megabytes transferred per second for read operations to drives during the sample period.
drive_r_io
Displays the average number of I/O operations that are transferred per second for read operations to drives during the sample period.
drive_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to drives over the sample period.
iplink_mb
The total number of megabytes transferred per second for Internet Protocol (IP) replication traffic on the system. This value does not include iSCSI host I/O operations.
iplink_comp_mb
Displays the average number of compressed MBps over the IP replication link during the sample period.
iplink_io
The total I/O operations that are transferred per second for IP partnership traffic on the system. This value does not include Internet Small Computer System Interface (iSCSI) host I/O operations.
cloud_up_mb
Displays the average number of megabytes transferred per second (MBps) for upload operations to a cloud account during the sample period.
cloud_up_ms
Displays the average amount of time (in milliseconds) it takes for the system to respond to upload requests to a cloud account during the sample period.
cloud_down_mb
Displays the average number of megabytes transferred per second (MBps) for download operations to a cloud account during the sample period.
cloud_down_ms
Displays the average amount of time (in milliseconds) that it takes for the system to respond to download requests to a cloud account during the sample period.
Real-time performance statistics monitoring with the GUI
The Spectrum Virtualize dashboard gives you performance at a glance by displaying some information about the system. You can see the performance of the entire cluster (the system) by selecting Bandwidth, Response Time, IOPS, or CPU utilization. You can also display a node comparison by selecting the same information as for the cluster and then switching the button, as shown in Figure A-1 and Figure A-2 on page 764.
Figure A-1 Spectrum Virtualize Dashboard displaying System performance overview
Figure A-2 shows the display after switching the button.
Figure A-2 Spectrum Virtualize Dashboard displaying Nodes performance overview
You can also use real-time statistics to monitor CPU utilization, volume, interface, and MDisk bandwidth of your system and nodes. Each graph represents five minutes of collected statistics and provides a means of assessing the overall performance of your system.
The real-time statistics are available from the IBM Spectrum Virtualize GUI. Click Monitoring → Performance (as shown in Figure A-3) to open the Performance Monitoring window.
Figure A-3 Selecting performance pane in the monitoring menu
As shown in Figure A-4 on page 765, the Performance monitoring pane is divided into sections that provide utilization views for the following resources:
CPU Utilization: The CPU utilization graph shows the current percentage of CPU usage and peaks in utilization. It can also display compression CPU usage for systems with compressed volumes.
Volumes: Shows four metrics on the overall volume utilization graphics:
 – Read
 – Write
 – Read latency
 – Write latency
Interfaces: The Interfaces graph displays data points for FC, iSCSI, serial-attached SCSI (SAS), and IP Remote Copy interfaces. You can use this information to help determine connectivity issues that might affect performance.
 – Fibre Channel
 – iSCSI
 – SAS
 – IP Remote Copy
MDisks: Also shows four metrics on the overall MDisks graphics:
 – Read
 – Write
 – Read latency
 – Write latency
You can use these metrics to help determine the overall performance health of the volumes and MDisks on your system. Consistent unexpected results can indicate errors in configuration, system faults, or connectivity issues.
The system’s performance is also always visible at the bottom of the Spectrum Virtualize window, as shown in Figure A-4.
 
Note: The values that are indicated in the graphics are averaged over a 1-second sample interval.
Figure A-4 Spectrum Virtualize performance window
You can also select to view performance statistics for each of the available nodes of the system, as shown in Figure A-5.
Figure A-5 View statistics per node or for the entire system
You can also change the metric between MBps and IOPS, as shown in Figure A-6.
Figure A-6 View performance metrics by MBps or IOps
On any of these views, you can select any point with your cursor to see the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values that were gathered, as shown in Figure A-7.
Figure A-7 Viewing performance with details
For each of the resources, various metrics are available, and you can view them by selecting each value. For example, as shown in Figure A-8, the four available fields are selected for the MDisks view: Read, Write, Read latency, and Write latency. In our example, Read and Write MBps are not selected.
Figure A-8 Displaying performance counters
Performance data collection and IBM Spectrum Control
Although you can obtain performance statistics as standard .xml files, parsing .xml files is a less practical and more complicated method of analyzing the IBM Spectrum Virtualize performance statistics. IBM Spectrum Control is the supported IBM tool to collect and analyze SVC performance statistics.
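If you do want to inspect the raw .xml statistics files (for example, for a quick ad hoc check), they can be read with any standard XML parser. The following Python sketch deliberately assumes nothing about the exact schema, which can change between releases: it only walks a downloaded file and prints each element tag and its attributes.

import xml.etree.ElementTree as ET

# Path to a statistics file that was copied from the cluster (placeholder name).
STATS_FILE = "Nv_stats_75ACXP0_171004_150120"

root = ET.parse(STATS_FILE).getroot()

# Print every element tag (with any XML namespace stripped) and its attributes.
for element in root.iter():
    tag = element.tag.split("}")[-1]
    if element.attrib:
        print(tag, element.attrib)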
IBM Spectrum Control is installed separately on a dedicated system, and is not part of the IBM Spectrum Virtualize bundle.
A Software as a Service (SaaS) version of IBM Spectrum Control, called IBM Spectrum Control Storage Insights, allows you to use the solution as a service (no installation) in minutes and offers a free trial for 30 days.
For more information about the use of IBM Spectrum Control to monitor your storage subsystem, see: