Performance data and statistics gathering
This appendix provides a brief overview of the performance analysis capabilities of the IBM SAN Volume Controller and IBM Spectrum Virtualize V8.4. It also describes a method that you can use to collect and process IBM Spectrum Virtualize performance statistics.
It is beyond the intended scope of this book to provide an in-depth understanding of performance statistics or to explain how to interpret them. For more information about the performance of the SAN Volume Controller, see IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and Performance Guidelines, SG24-7521.
This appendix includes the following topics:
IBM SAN Volume Controller performance overview
Performance monitoring
Performance data collection and IBM Spectrum Control
IBM SAN Volume Controller performance overview
Storage virtualization with IBM Spectrum Virtualize provides many administrative benefits. In addition, it can provide an increase in performance for some workloads. The caching capability of the IBM Spectrum Virtualize and its ability to stripe volumes across multiple disk arrays can provide a performance improvement over what can otherwise be achieved when midrange disk subsystems are used.
To ensure that the performance levels of your system are maintained, monitor performance periodically to provide visibility into potential problems that exist or are developing so that they can be addressed in a timely manner.
Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing infrastructure, you must consider many factors in terms of their potential effect on performance. These factors include dissimilar workloads that compete for the same resources, overloaded resources, insufficient available resources, poorly performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network (SAN) and IBM Spectrum Virtualize layout:
Host-to-SAN Volume Controller inter-switch link (ISL) oversubscription.
This area is the most significant input/output (I/O) load across ISLs. The recommendation is to maintain a maximum of 7-to-1 oversubscription (a small worked example follows this list). A higher ratio is possible, but it tends to lead to I/O bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on the edges and the IBM SAN Volume Controller is the core.
Storage-to-SAN Volume Controller ISL oversubscription.
This area is the second most significant I/O load across ISLs. The maximum oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription.
This area is the least significant load of the three possible oversubscription bottlenecks. Although this load is not entirely negligible, it does not contribute significantly to the ISL load, and in standard setups it can be ignored. However, node-to-node ISL oversubscription becomes relevant when the stretched cluster capability is used.
When the system is running in this manner, the number of ISLs becomes more important. As with the storage-to-SAN Volume Controller ISL oversubscription, this load also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you determine the number of ISLs to implement. If you need assistance, contact your IBM representative.
ISL trunking and port channeling.
For the best performance and availability, use ISL trunking or port channeling. Independent ISL links can easily become overloaded and turn into performance bottlenecks. Bonded or trunked ISLs automatically share load and provide better redundancy in a failure.
Number of paths per host multipath device.
The maximum supported number of paths per multipath device that is visible on the host is eight (with HyperSwap, you can have up to 16 active paths). Although most vendor multipathing software can support more paths, the IBM SAN Volume Controller expects a maximum of eight paths. In general, more than eight paths only degrades performance. Although the IBM Spectrum Virtualize system can work with more than eight paths, this design is technically unsupported.
Do not intermix dissimilar array types or sizes.
Although the IBM Spectrum Virtualize supports an intermix of differing storage within storage pools, it is best to always use the same array model, Redundant Array of Independent Disks (RAID) mode, RAID size (for example, RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speed.
Rules and guidelines are no substitute for monitoring performance. Monitoring performance can provide validation that design expectations are met and identify opportunities for improvement.
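To illustrate the 7-to-1 oversubscription guideline, the following Python sketch computes the ratio of host-facing bandwidth to ISL bandwidth for an edge switch. It is a hypothetical planning helper only; the port counts and speeds are example values, not recommendations.

# Hypothetical planning helper: estimate host-to-core ISL oversubscription.
# Oversubscription = aggregate host-facing bandwidth / aggregate ISL bandwidth.
def oversubscription(host_ports, host_port_gbps, isl_ports, isl_port_gbps):
    host_bw = host_ports * host_port_gbps   # total edge (host-facing) bandwidth in Gbps
    isl_bw = isl_ports * isl_port_gbps      # total ISL bandwidth toward the core in Gbps
    return host_bw / isl_bw

# Example values: 48 host ports at 16 Gbps feeding 8 ISLs at 16 Gbps gives 6:1,
# which is within the 7-to-1 guideline.
ratio = oversubscription(host_ports=48, host_port_gbps=16, isl_ports=8, isl_port_gbps=16)
print(f"Oversubscription ratio: {ratio:.0f}:1")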
IBM Spectrum Virtualize performance perspectives
The IBM Spectrum Virtualize software was developed by the IBM Research Group. It is designed to run on IBM SAN Volume Controller, IBM Storwize products, and commodity hardware (mass-produced Intel-based processors with mass-produced expansion cards). It is also designed to provide distributed cache and a scalable cluster architecture.
Currently, the IBM SAN Volume Controller cluster is scalable up to eight nodes and these nodes can be swapped for newer hardware while online. This capability provides a great investment value because the nodes are relatively inexpensive and a node swap can be done online.
This capability provides an instant performance boost with no license changes. The IBM SAN Volume Controller node model 2145-SV1, which has 64 GB of cache and can be upgraded to up to 256 GB per node, provides an extra benefit on top of the typical refresh cycle.
For more information about replacing nodes nondisruptively, see IBM Documentation.
For more information about setting up Fibre Channel (FC) port masking when upgrading from nodes 2145-CF8, 2145-CG8, or 2145-DH8 to 2145-SV1, see this web page.
The performance is near linear when nodes are added into the cluster until performance eventually becomes limited by the attached components. Although virtualization provides significant flexibility in terms of the components that are used, it does not diminish the necessity of designing the system around the components so that it can deliver the level of performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning requirements, but the goal is that you always want to maximize the bandwidth that is available to the IBM SAN Volume Controller ports. The IBM SAN Volume Controller is one of the few devices that can drive ports to their limits on average, so it is imperative that you put significant thought into planning the SAN layout.
Essentially, performance improvements are gained by spreading the workload across a greater number of back-end resources and by more caching. These capabilities are provided by the IBM SAN Volume Controller cluster. However, the performance of individual resources eventually becomes the limiting factor.
Performance monitoring
This section highlights several performance monitoring techniques.
Collecting performance statistics
IBM Spectrum Virtualize is constantly collecting performance statistics. By default, the statistics files are created at 15-minute intervals. The collection interval can be changed by running the startstats command.
The statistics files for volumes, managed disks (MDisks), nodes, and drives are saved at the end of the sampling interval. A maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion. This design provides statistics for the most recent 240-minute period if the default 15-minute sampling interval is used. IBM Spectrum Virtualize supports user-defined sampling intervals of 1 - 60 minutes. IBM Storage Insights requires and recommends an interval of 5 minutes.
For each type of object (volumes, MDisks, nodes, and drives), a separate file with statistic data is created at the end of each sampling period and stored in /dumps/iostats.
Run the startstats command to start the collection of statistics, as shown in Example A-1.
Example A-1 The startstats command
IBM_2145:IBM Redbook SVC:superuser>startstats -interval 5
This command starts statistics collection and gathers data at 5-minute intervals.
To verify the statistics status and collection interval, display the system properties, as shown in Example A-2.
Example A-2 Statistics collection status and frequency
IBM_2145:IBM Redbook SVC:superuser>lssystem
id 0000020320219A08
name IBM Redbook SVC
statistics_status on
statistics_frequency 5
-- The output has been shortened for easier reading. --
Starting with V8.1, statistics collection cannot be stopped because the stopstats command is no longer available.
 
Collection intervals: Although more frequent collection intervals provide a more detailed view of what happens within IBM Spectrum Virtualize and IBM SAN Volume Controller, they shorten the amount of time that the historical data is available on the IBM Spectrum Virtualize system. For example, rather than a 240-minute period of data with the default 15-minute interval, if you adjust to 2-minute intervals, you have a 32-minute period instead.
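The trade-off is simple arithmetic: with a maximum of 16 files per object type kept on the system, the available history equals 16 times the sampling interval. The following Python lines illustrate the calculation; they are an illustration of the arithmetic only, not an IBM utility.

# On-system statistics history = number of retained files (16) x sampling interval.
MAX_FILES = 16
for interval_minutes in (15, 5, 2, 1):
    history_minutes = MAX_FILES * interval_minutes
    print(f"{interval_minutes:>2}-minute interval -> {history_minutes} minutes of history")
# 15 -> 240, 5 -> 80, 2 -> 32, 1 -> 16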
Statistics are collected per node. The sampling of the internal performance counters is coordinated across the cluster so that when a sample is taken, all nodes sample their internal counters at the same time. It is important to collect all files from all nodes for a complete analysis. Tools, such as IBM Spectrum Control and IBM Storage Insights Pro, perform this intensive data collection for you.
Statistics file naming
The statistics files that are generated are written to the /dumps/iostats/ directory. The file names are in the following formats:
Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
Nv_stats_<node_frontpanel_id>_<date>_<time> for volume statistics
Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
Nd_stats_<node_frontpanel_id>_<date>_<time> for drive statistics
The node_frontpanel_id is the front panel name of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. The following example shows a volume statistics file name:
Nv_stats_78HYLD0_201026_112425
Example A-3 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-3 File names of per node statistics
IBM_2145:IBM Redbook SVC:superuser>lsdumps -prefix /dumps/iostats
id filename
0 Nm_stats_78HYLD0_201026_112425
1 Nv_stats_78HYLD0_201026_112425
2 Nd_stats_78HYLD0_201026_112425
3 Nn_stats_78HYLD0_201026_112425
4 Nm_stats_78HYLD0_201026_113927
5 Nd_stats_78HYLD0_201026_113927
6 Nm_stats_78HXWY0_201026_113927
7 Nv_stats_78HYLD0_201026_113927
8 Nv_stats_78HXWY0_201026_113927
9 Nd_stats_78HXWY0_201026_113927
...
126 Nn_stats_78HYLD0_201026_152209
127 Nd_stats_78HXWY0_201026_152209
128 Nn_stats_78HXWY0_201026_152209
129 Nd_stats_78HYLD0_201026_152209
130 Nm_stats_78HXWY0_201026_152209
131 Nm_stats_78HYLD0_201026_152209
IBM_2145:IBM Redbook SVC:superuser>
 
Note: For more information about the statistics file naming convention, see IBM Documentation.
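If you post-process the files with your own scripts, the fixed naming convention makes it easy to group them by object type, node, and timestamp. The following Python sketch is a hypothetical example (not an IBM-supplied tool) that parses a statistics file name into its parts:

import re
from datetime import datetime

# N<type>_stats_<node_frontpanel_id>_<yymmdd>_<hhmmss>, where <type> is
# m (MDisks), v (volumes), n (nodes), or d (drives).
PATTERN = re.compile(r"N([mvnd])_stats_(\w+)_(\d{6})_(\d{6})")

def parse_stats_name(filename):
    match = PATTERN.fullmatch(filename)
    if match is None:
        raise ValueError(f"Not a statistics file name: {filename}")
    obj_type, panel_id, date, time = match.groups()
    timestamp = datetime.strptime(date + time, "%y%m%d%H%M%S")
    return obj_type, panel_id, timestamp

print(parse_stats_name("Nv_stats_78HYLD0_201026_112425"))
# ('v', '78HYLD0', datetime.datetime(2020, 10, 26, 11, 24, 25))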
 
 
Tip: The performance statistics files can be copied from the IBM SAN Volume Controller nodes to a local drive on your workstation by using pscp.exe (included with PuTTY) from an MS-DOS command prompt, as shown in the following example:
C:\Program Files\PuTTY>pscp -unsafe -load "IBM Redbook SVC" superuser@<cluster_ip>:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
Specify the -unsafe parameter when you use wildcards.
PuTTY is available for download at this website.
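On workstations without PuTTY, the same files can be retrieved over SFTP. The following Python sketch uses the third-party paramiko library as an illustrative alternative; the cluster address and local directory are placeholders, and because statistics files are stored per node, repeat the copy for each node as needed.

import getpass
import os
import paramiko  # third-party library: pip install paramiko

CLUSTER = "cluster.example.com"   # placeholder: management address of the system or node
USER = "superuser"                # a user with access to /dumps/iostats
LOCAL_DIR = "statsfiles"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CLUSTER, username=USER, password=getpass.getpass("Password: "))

sftp = client.open_sftp()
os.makedirs(LOCAL_DIR, exist_ok=True)
for name in sftp.listdir("/dumps/iostats"):
    sftp.get(f"/dumps/iostats/{name}", os.path.join(LOCAL_DIR, name))
sftp.close()
client.close()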
Real-time performance monitoring
IBM SAN Volume Controller supports real-time performance monitoring. Real-time performance statistics provide short-term status information for the IBM SAN Volume Controller. The statistics are shown as graphs in the management GUI, or they can be viewed from the CLI.
With system-level statistics, you can quickly view the CPU usage and the bandwidth of volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes per second (MBps) or input/output operations per second (IOPS), and a view of bandwidth over time.
Each node collects various performance statistics, mostly at 5-second intervals, and the statistics are available from the config node in a clustered environment. This information can help you determine the performance effect of a specific node. As with system statistics, node statistics help you evaluate whether the node is operating within normal performance metrics.
Real-time performance monitoring gathers the following system-level performance statistics:
CPU usage
Port usage and I/O rates
Volume and MDisk I/O rates
Bandwidth
Latency
Real-time statistics are not a configurable option and cannot be disabled.
Real-time performance monitoring with the CLI
The lsnodestats and lssystemstats commands are available for monitoring the statistics through the CLI.
The lsnodestats command provides performance statistics for the nodes that are part of a clustered system, as shown in Example A-4. This output is truncated and shows only part of the available statistics. You can also specify a node name in the command to limit the output for a specific node.
Example A-4 The lsnodestats command output
IBM_2145:IBM Redbook SVC:superuser>lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 node 1 compression_cpu_pc 0 0 201026152448
1 node 1 cpu_pc 12 12 201026152448
1 node 1 fc_mb 0 0 201026152448
1 node 1 fc_io 1196 1220 201026152302
1 node 1 sas_mb 0 0 201026152448
1 node 1 sas_io 0 0 201026152448
1 node 1 iscsi_mb 0 0 201026152448
1 node 1 iscsi_io 0 0 201026152448
1 node 1 write_cache_pc 0 0 201026152448
1 node 1 total_cache_pc 1 1 201026152448
1 node 1 vdisk_mb 0 0 201026152448
1 node 1 vdisk_io 0 0 201026152448
1 node 1 vdisk_ms 0 0 201026152448
1 node 1 mdisk_mb 0 0 201026152448
1 node 1 mdisk_io 0 0 201026152448
1 node 1 mdisk_ms 0 0 201026152448
1 node 1 drive_mb 0 0 201026152448
1 node 1 drive_io 0 0 201026152448
1 node 1 drive_ms 0 0 201026152448
1 node 1 vdisk_r_mb 0 0 201026152448
1 node 1 vdisk_r_io 0 0 201026152448
1 node 1 vdisk_r_ms 0 0 201026152448
1 node 1 vdisk_w_mb 0 0 201026152448
1 node 1 vdisk_w_io 0 0 201026152448
1 node 1 vdisk_w_ms 0 0 201026152448
1 node 1 mdisk_r_mb 0 0 201026152448
1 node 1 mdisk_r_io 0 0 201026152448
1 node 1 mdisk_r_ms 0 0 201026152448
1 node 1 mdisk_w_mb 0 0 201026152448
1 node 1 mdisk_w_io 0 0 201026152448
1 node 1 mdisk_w_ms 0 0 201026152448
1 node 1 drive_r_mb 0 0 201026152448
1 node 1 drive_r_io 0 0 201026152448
1 node 1 drive_r_ms 0 0 201026152448
1 node 1 drive_w_mb 0 0 201026152448
1 node 1 drive_w_io 0 0 201026152448
1 node 1 drive_w_ms 0 0 201026152448
1 node 1 iplink_mb 0 0 201026152448
1 node 1 iplink_io 0 0 201026152448
1 node 1 iplink_comp_mb 0 0 201026152448
1 node 1 cloud_up_mb 0 0 201026152448
1 node 1 cloud_up_ms 0 0 201026152448
1 node 1 cloud_down_mb 0 0 201026152448
1 node 1 cloud_down_ms 0 0 201026152448
1 node 1 iser_mb 0 0 201026152448
1 node 1 iser_io 0 0 201026152448
2 node 2 compression_cpu_pc 0 0 201026152445
2 node 2 cpu_pc 16 16 201026152445
2 node 2 fc_mb 0 0 201026152445
2 node 2 fc_io 1147 1265 201026152300
2 node 2 sas_mb 0 0 201026152445
2 node 2 sas_io 0 0 201026152445
2 node 2 iscsi_mb 0 0 201026152445
2 node 2 iscsi_io 0 0 201026152445
2 node 2 write_cache_pc 0 0 201026152445
2 node 2 total_cache_pc 0 0 201026152445
2 node 2 vdisk_mb 0 0 201026152445
2 node 2 vdisk_io 0 0 201026152445
2 node 2 vdisk_ms 0 0 201026152445
2 node 2 mdisk_mb 0 0 201026152445
2 node 2 mdisk_io 0 0 201026152445
2 node 2 mdisk_ms 0 0 201026152445
2 node 2 drive_mb 0 0 201026152445
2 node 2 drive_io 0 0 201026152445
2 node 2 drive_ms 0 0 201026152445
2 node 2 vdisk_r_mb 0 0 201026152445
2 node 2 vdisk_r_io 0 0 201026152445
2 node 2 vdisk_r_ms 0 0 201026152445
2 node 2 vdisk_w_mb 0 0 201026152445
2 node 2 vdisk_w_io 0 0 201026152445
2 node 2 vdisk_w_ms 0 0 201026152445
2 node 2 mdisk_r_mb 0 0 201026152445
2 node 2 mdisk_r_io 0 0 201026152445
2 node 2 mdisk_r_ms 0 0 201026152445
2 node 2 mdisk_w_mb 0 0 201026152445
2 node 2 mdisk_w_io 0 0 201026152445
2 node 2 mdisk_w_ms 0 0 201026152445
2 node 2 drive_r_mb 0 0 201026152445
2 node 2 drive_r_io 0 0 201026152445
2 node 2 drive_r_ms 0 0 201026152445
2 node 2 drive_w_mb 0 0 201026152445
2 node 2 drive_w_io 0 0 201026152445
2 node 2 drive_w_ms 0 0 201026152445
2 node 2 iplink_mb 0 0 201026152445
2 node 2 iplink_io 0 0 201026152445
2 node 2 iplink_comp_mb 0 0 201026152445
2 node 2 cloud_up_mb 0 0 201026152445
2 node 2 cloud_up_ms 0 0 201026152445
2 node 2 cloud_down_mb 0 0 201026152445
2 node 2 cloud_down_ms 0 0 201026152445
2 node 2 iser_mb 0 0 201026152445
2 node 2 iser_io 0 0 201026152445
IBM_2145:IBM Redbook SVC:superuser>
Example A-4 on page 865 shows statistics for the two nodes that are members of the IBM Redbook SVC cluster. For each node, the following columns are displayed:
stat_name: The name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
The lsnodestats command can also be used with a node name or ID as an argument. For example, run the lsnodestats node1 command to display the statistics for only the node that is named node1.
The lssystemstats command lists the same set of statistics as the lsnodestats command, but the values represent all nodes in the cluster. The values for these statistics are calculated from the node statistics values in the following way (a short sketch follows this list):
Bandwidth: Sum of bandwidth of all nodes.
Latency: Average latency for the cluster, which is calculated by using data from the whole cluster, not an average of the single node values.
IOPS: Total IOPS of all nodes.
CPU percentage: Average CPU percentage of all nodes.
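The following Python sketch illustrates the relationship with invented sample numbers (not real command output): the additive counters are summed across nodes, CPU usage is averaged, and latency is not derived from the node values at all.

# Illustrative only: how lssystemstats values relate to lsnodestats values.
# The node figures below are invented sample numbers.
node_stats = [
    {"vdisk_mb": 120, "vdisk_io": 3500, "cpu_pc": 12},   # node 1
    {"vdisk_mb": 140, "vdisk_io": 4100, "cpu_pc": 16},   # node 2
]

system_vdisk_mb = sum(n["vdisk_mb"] for n in node_stats)                 # bandwidth: sum of all nodes
system_vdisk_io = sum(n["vdisk_io"] for n in node_stats)                 # IOPS: total of all nodes
system_cpu_pc = sum(n["cpu_pc"] for n in node_stats) / len(node_stats)   # CPU: average of all nodes

print(system_vdisk_mb, system_vdisk_io, system_cpu_pc)   # 260 7600 14.0
# Latency (for example, vdisk_ms) is calculated from cluster-wide data,
# not averaged from the per-node values.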
Example A-5 shows the resulting output of the lssystemstats command.
Example A-5 The lssystemstats command output
IBM_2145:IBM Redbook SVC:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 0 0 201026152509
cpu_pc 14 14 201026152509
fc_mb 0 0 201026152509
fc_io 2423 2457 201026152258
sas_mb 0 0 201026152509
sas_io 0 0 201026152509
iscsi_mb 0 0 201026152509
iscsi_io 0 0 201026152509
write_cache_pc 0 0 201026152509
total_cache_pc 1 1 201026152509
vdisk_mb 0 0 201026152509
vdisk_io 0 0 201026152509
vdisk_ms 0 0 201026152509
mdisk_mb 0 0 201026152509
mdisk_io 0 0 201026152509
mdisk_ms 0 0 201026152509
drive_mb 0 0 201026152509
drive_io 0 0 201026152509
drive_ms 0 0 201026152509
vdisk_r_mb 0 0 201026152509
vdisk_r_io 0 0 201026152509
vdisk_r_ms 0 0 201026152509
vdisk_w_mb 0 0 201026152509
vdisk_w_io 0 0 201026152509
vdisk_w_ms 0 0 201026152509
mdisk_r_mb 0 0 201026152509
mdisk_r_io 0 0 201026152509
mdisk_r_ms 0 0 201026152509
mdisk_w_mb 0 0 201026152509
mdisk_w_io 0 0 201026152509
mdisk_w_ms 0 0 201026152509
drive_r_mb 0 0 201026152509
drive_r_io 0 0 201026152509
drive_r_ms 0 0 201026152509
drive_w_mb 0 0 201026152509
drive_w_io 0 0 201026152509
drive_w_ms 0 0 201026152509
iplink_mb 0 0 201026152509
iplink_io 0 0 201026152509
iplink_comp_mb 0 0 201026152509
cloud_up_mb 0 0 201026152509
cloud_up_ms 0 0 201026152509
cloud_down_mb 0 0 201026152509
cloud_down_ms 0 0 201026152509
iser_mb 0 0 201026152509
iser_io 0 0 201026152509
IBM_2145:IBM Redbook SVC:superuser>
Table A-1 lists the different counters that are presented by the lssystemstats and lsnodestats commands.
Table A-1 Counters in lssystemstats and lsnodestats
Value
Description
compression_cpu_pc
Displays the percentage of allocated CPU capacity that is used for compression.
cpu_pc
Displays the percentage of allocated CPU capacity that is used for the system.
fc_mb
Displays the total number of MBps for FC traffic on the system. This value includes host I/O and any bandwidth that is used for communication within the system.
fc_io
Displays the total IOPS for FC traffic on the system. This value includes host I/O and any bandwidth that is used for communication within the system.
sas_mb
Displays the total number of MBps for serial-attached Small Computer System Interface (SCSI) (SAS) traffic on the system. This value includes host I/O and bandwidth that is used for background RAID activity.
sas_io
Displays the total IOPS for SAS traffic on the system. This value includes host I/O and bandwidth that is used for background RAID activity.
iscsi_mb
Displays the total number of MBps for internet Small Computer Systems Interface (iSCSI) traffic on the system.
iscsi_io
Displays the total IOPS for iSCSI traffic on the system.
write_cache_pc
Displays the percentage of the write cache usage for the node.
total_cache_pc
Displays the total percentage for both the write and read cache usage for the node.
vdisk_mb
Displays the average number of MBps for read and write operations to volumes during the sample period.
vdisk_io
Displays the average number of IOPS for read and write operations to volumes during the sample period.
vdisk_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to volumes over the sample period.
mdisk_mb
Displays the average number of MBps for read and write operations to MDisks during the sample period.
mdisk_io
Displays the average number of IOPS for read and write operations to MDisks during the sample period.
mdisk_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to MDisks over the sample period.
drive_mb
Displays the average number of MBps for read and write operations to drives during the sample period.
drive_io
Displays the average number of IOPS for read and write operations to drives during the sample period.
drive_ms
Displays the average amount of time in milliseconds that the system takes to respond to read and write requests to drives over the sample period.
vdisk_w_mb
Displays the average number of MBps for write operations to volumes during the sample period.
vdisk_w_io
Displays the average number of IOPS for write operations to volumes during the sample period.
vdisk_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to volumes over the sample period.
mdisk_w_mb
Displays the average number of MBps for write operations to MDisks during the sample period.
mdisk_w_io
Displays the average number of IOPS for write operations to MDisks during the sample period.
mdisk_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to MDisks over the sample period.
drive_w_mb
Displays the average number of MBps for write operations to drives during the sample period.
drive_w_io
Displays the average number of IOPS for write operations to drives during the sample period.
drive_w_ms
Displays the average amount of time in milliseconds that the system takes to respond to write requests to drives over the sample period.
vdisk_r_mb
Displays the average number of MBps for read operations to volumes during the sample period.
vdisk_r_io
Displays the average number of IOPS for read operations to volumes during the sample period.
vdisk_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to volumes over the sample period.
mdisk_r_mb
Displays the average number of MBps for read operations to MDisks during the sample period.
mdisk_r_io
Displays the average number of IOPS for read operations to MDisks during the sample period.
mdisk_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to MDisks over the sample period.
drive_r_mb
Displays the average number of MBps for read operations to drives during the sample period.
drive_r_io
Displays the average number of IOPS for read operations to drives during the sample period.
drive_r_ms
Displays the average amount of time in milliseconds that the system takes to respond to read requests to drives over the sample period.
iplink_mb
The total number of MBps for IP replication traffic on the system. This value does not include iSCSI host I/O operations.
iplink_comp_mb
Displays the average number of compressed MBps over the IP replication link during the sample period.
iplink_io
The total IOPS for IP partnership traffic on the system. This value does not include iSCSI host I/O operations.
cloud_up_mb
Displays the average number of MBps for upload operations to a cloud account during the sample period.
cloud_up_ms
Displays the average amount of time (in milliseconds) it takes for the system to respond to upload requests to a cloud account during the sample period.
cloud_down_mb
Displays the average number of MBps for download operations to a cloud account during the sample period.
cloud_down_ms
Displays the average amount of time (in milliseconds) that it takes for the system to respond to download requests to a cloud account during the sample period.
iser_mb
Displays the total number of MBps for iSCSI Extensions for RDMA (iSER) traffic on the system.
iser_io
Displays the total IOPS for iSER traffic on the system.
Real-time performance statistics monitoring by using the GUI
The IBM Spectrum Virtualize dashboard gives you performance at a glance by displaying important information about the system. You can view the performance of the entire cluster (the system) by selecting Bandwidth, Response Time, IOPS, or CPU Utilization. You can also display a node comparison for the same metrics by switching the button, as shown in Figure A-1.
Figure A-1 IBM Spectrum Virtualize Dashboard displaying System performance overview.
Figure A-2 shows the display after switching the button.
Figure A-2 IBM Spectrum Virtualize Dashboard displaying Nodes performance overview
You can also use real-time statistics to monitor CPU utilization, volume, interface, and MDisk bandwidth of your system and nodes. Each graph represents 5 minutes of collected statistics and provides a means of assessing the overall performance of your system.
The real-time statistics are available from the IBM Spectrum Virtualize GUI. Click Monitoring → Performance (as shown in Figure A-3) to open the Performance Monitoring window.
Figure A-3 Selecting Performance in the Monitoring menu
As shown in Figure A-4 on page 873, the Performance monitoring pane is divided into sections that provide usage views for the following resources:
CPU Utilization: The CPU utilization graph shows the current percentage of CPU usage and peaks in utilization. It can also display compression CPU usage for systems with compressed volumes.
Volumes: Shows four metrics for the overall volume utilization graphics:
 – Read
 – Write
 – Read latency
 – Write latency
Interfaces: The Interfaces graph displays data points for FC, iSCSI, SAS, and IP Remote Copy (RC) interfaces. You can use this information to help determine connectivity issues that might affect performance:
 – FC
 – iSCSI
 – SAS
 – IP Remote Copy
MDisks: Shows four metrics for the overall MDisks graphics:
 – Read
 – Write
 – Read latency
 – Write latency
You can use these metrics to help determine the overall performance health of the volumes and MDisks on your system. Consistent unexpected results can indicate errors in configuration, system faults, or connectivity issues.
The system’s performance is always visible at the bottom of the IBM Spectrum Virtualize window, as shown in Figure A-4 on page 873.
 
Note: The values that are indicated in the graphics are averaged over 5-second samples.
Figure A-4 IBM Spectrum Virtualize Performance window
You can also view the performance statistics for each of the available nodes of the system, as shown in Figure A-5.
Figure A-5 View statistics per node or for the entire system
You can also change the metric between MBps or IOPS, as shown in Figure A-6.
Figure A-6 View performance metrics by MBps or IOPS
On any of these views, you can select any point with your cursor to see the value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values gathered, as shown in Figure A-7.
Figure A-7 Viewing performance with details
For each of the resources, various metrics are available, and you can select which ones are shown. For example, as shown in Figure A-8, of the four available metrics for the MDisks view (Read, Write, Read latency, and Write latency), only the Write and Write latency metrics are selected.
Figure A-8 Displaying performance counters
Performance data collection and IBM Spectrum Control
Although you can obtain performance statistics in standard .xml files, working with the .xml files directly is a less practical and more complicated way to analyze the IBM Spectrum Virtualize performance statistics (a small example of reading one file follows). IBM Spectrum Control is the supported IBM tool to collect and analyze SAN Volume Controller performance statistics.
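For completeness, the statistics files are plain XML documents that can be read with standard tooling. The following Python sketch is an illustration only; because the element and attribute names vary by object type and release, it simply summarizes what it finds in one file rather than assuming a schema.

import xml.etree.ElementTree as ET
from collections import Counter

# A volume statistics file that was copied from /dumps/iostats on a node.
tree = ET.parse("Nv_stats_78HYLD0_201026_112425")
root = tree.getroot()

# Count the child element types and list the attribute names that the first
# element of each type carries, without assuming any particular schema.
counts = Counter(child.tag for child in root)
for tag, count in counts.items():
    first = next(child for child in root if child.tag == tag)
    print(f"{tag}: {count} elements, attributes: {sorted(first.attrib)}")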
IBM Spectrum Control is installed separately on a dedicated system, and is not part of the IBM Spectrum Virtualize bundle.
For more information about the use of IBM Spectrum Control to monitor your storage subsystem, see this web page.
As an alternative to IBM Spectrum Control, a cloud-based tool is available that is called IBM Storage Insights. This tool provides a single dashboard that gives you a clear view of all your IBM block storage and shows performance and capacity information. You do not have to install this tool in your environment because it is a cloud-based solution. Only an agent is required to collect the data of the storage devices.
For more information about IBM Storage Insights, see this web page.