Hints and tips
FlashSystem V9000 has exceptional capabilities for addressing data security, redundancy, and application integration, and is backed by industry-leading IBM support infrastructure and tools such as the IBM Comprestimator utility. This chapter provides helpful hints and tips for using these capabilities productively.
13.1 Performance data and statistics gathering
In this section, we provide a brief overview of the performance analysis capabilities of the IBM FlashSystem V9000. We also describe a method that you can use to collect and process V9000 performance statistics. For a more in-depth understanding of performance statistics and interpretation, see IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
FlashSystem V9000 differs from IBM SAN Volume Controller with FlashSystem 900 in that the V9000 is tightly integrated with its flash memory. FlashSystem V9000 is optimized to work with its AE2 storage as a single managed disk. FlashSystem V9000 high IOPS and low latency often require host tuning to realize its performance capabilities. See Chapter 7, “Host configuration” on page 213 for host attachment and configuration guidance.
IBM SAN Volume Controller with FlashSystem 900 requires extra steps to configure and tune for optimal performance. For more details regarding IBM FlashSystem 900 running with IBM SAN Volume Controller, see the chapter about product integration in Implementing IBM FlashSystem 900, SG24-8271.
13.1.1 FlashSystem V9000 controller performance overview
The caching capability of the V9000 controller and its ability to effectively manage multiple FlashSystem enclosures along with standard disk arrays can provide a significant performance improvement over what can otherwise be achieved when disk subsystems alone are used. To ensure that the wanted performance levels of your system are maintained, monitor performance periodically so that existing or developing problems are visible and can be addressed in a timely manner.
Performance considerations
When you are designing a FlashSystem V9000 storage infrastructure or maintaining an existing infrastructure, you must consider many factors in terms of their potential effect on performance. These factors include, but are not limited to, mixed workloads competing for the same resources, overloaded resources, insufficient resources available, poorly performing resources, and similar performance constraints, especially when using external storage.
Remember these high-level rules as you design your SAN and FlashSystem V9000 layout:
Host-to-V9000 controller inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The suggestion is to maintain a maximum ratio of 7:1 oversubscription. A higher ratio is possible, but it tends to lead to I/O bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on the edge and the FlashSystem V9000 Controller is on the core.
Storage-to-V9000 controller ISL oversubscription
For FlashSystem V9000 scale-up, scale-out configurations, dedicated SAN48B-5 FC switches are suggested to support the V9000 controllers and enclosures. Although any supported switch can be used, be careful to avoid bottlenecks.
Node-to-node ISL oversubscription
FlashSystem V9000 guidelines do not allow for node-to-node ISL oversubscription.
This area is the least significant load of the three possible oversubscription bottlenecks. In standard setups, this load does not contribute significantly to ISL load and can be ignored. However, it is mentioned here because of the split-cluster capability that was made available with SAN Volume Controller technology.
 
Note: FlashSystem V9000 does not currently support split-cluster capability.
ISL trunking or port channeling
For the best performance and availability, we strongly suggest that you use ISL trunking or port channeling. Independent ISL links can easily become overloaded and turn into performance bottlenecks. Bonded or trunked ISLs automatically share load and provide better redundancy in the case of a failure.
Number of paths per host multipath device
The maximum supported number of paths per multipath device that is visible on the host is eight (see the verification sketch after this list). Although the Subsystem Device Driver Path Control Module (SDDPCM), related products, and most vendor multipathing software can support more paths, the V9000 controller expects a maximum of eight paths. In general, more than eight paths yield only a negative effect on performance. Although the SAN Volume Controller can work with more than eight paths, that design is technically unsupported.
Do not intermix dissimilar array types or sizes
Although the V9000 controller supports an intermix of differing storage within storage pools, the best approach is to always use the same array model, RAID mode, RAID size (RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speeds. Mixing standard storage with FlashSystem volumes is not suggested unless the intent is to use Easy Tier.
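Where host-side verification of the path-count rule is useful, the count per multipath device can be checked directly on the host. The following is a minimal sketch for a Linux host that uses device-mapper multipath; output formats vary by distribution and multipath version, so treat it as illustrative only:
# count the active paths behind each dm-multipath device
multipath -ll | awk '/dm-[0-9]/ {dev=$1} /[0-9]+:[0-9]+:[0-9]+:[0-9]+/ {count[dev]++} END {for (d in count) print d, count[d]}'
A device that reports more than eight paths is a candidate for zoning or host-port changes.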
Rules and guidelines are no substitute for monitoring performance. Monitoring validates that design expectations are met and identifies opportunities for improvement.
FlashSystem V9000 performance perspectives
The FlashSystem V9000 controller consists of software and hardware. The software was developed by the IBM Research Group for IBM Spectrum Virtualize, which delivers the function of SAN Volume Controller and was designed to run on commodity hardware (mass-produced Intel-based CPUs with mass-produced expansion cards), while providing distributed cache and a scalable cluster architecture.
One of the main advantages of this design is the capability to easily refresh hardware. Currently, the V9000 controller is scalable up to four building blocks (eight controllers), and these nodes can be swapped for newer hardware while online. This capability provides great investment value because the nodes are relatively inexpensive, and an upgrade delivers an instant performance boost with no license changes. Newer nodes can also dramatically increase cache per node, providing an extra benefit on top of the typical refresh cycle.
For more information about the node replacement and swap and instructions about adding nodes, see the following website:
The performance is near linear when nodes are added into the cluster until performance eventually becomes limited by the attached components. This scalability is significantly enhanced using FlashCore technology included with the storage enclosures in each building block.
FlashCore technology is built on three core principles: hardware accelerated I/O, IBM MicroLatency module, and advanced flash management. Partnership with Micron and FlashSystem development teams help to ensure system reliability and optimization for flash. The design goals for IBM FlashSystem V9000 are to provide the customer with the fastest and most reliable all flash memory arrays on the market, while making it simple to service and support.
Virtualization with the V9000 controller building block design provides specific guidance in terms of the components that are used, so that it can deliver optimal performance. The key item for planning is your SAN layout. Switch vendors have slightly different planning requirements, but the goal is that you always want to maximize the bandwidth that is available to the V9000 controller ports. The V9000 controller is one of the few devices that can drive ports to their limits on average, so be sure that you put significant thought into planning the SAN layout.
Figure 13-1 shows the overall environment with two SAN fabrics:
Dedicated SAN Switch Fabric for building block communications
SAN Switch fabric with host zone and an optional storage zone for external storage.
 
Tip: In a real environment, a preferred practice is to use redundant SAN fabrics, which is not shown here.
Figure 13-1 V9000 scale up scale out
A dedicated SAN is suggested but not required; the objective is to not introduce external latency between V9000 controllers and their FlashSystem storage enclosures. This can be accomplished through other SAN administration techniques.
Essentially, V9000 controller performance improvements are gained by optimizing delivery of FlashCore technology storage enclosure resources and with advanced functionality that is provided by the V9000 controller cluster. However, the performance of individual resources to hosts on the SAN eventually becomes the limiting factor.
FlashSystem V9000 deployment options
IBM FlashSystem V9000 brings high capacity and fully integrated management to the enterprise data center. FlashSystem V9000 delivers up to 57 TB per building block, scales to four building blocks, and offers up to four additional 57 TB V9000 storage enclosure expansion units for large-scale enterprise storage system capability.
FlashSystem V9000 has the following flexible scalability configuration options:
Base configuration
Scale up: Add capacity
Scale out: Add controllers and capacity
In the following topics, we illustrate the performance benefits of two-dimensional scaling in various environments.
By offering two-dimensional scaling, the V9000 provides scalable performance that is difficult to surpass. Figure 13-2 shows examples of the maximum performance that can be achieved.
Figure 13-2 Two-dimensional scaling
The next series of figures illustrate several deployment options for the V9000 solution.
Figure 13-3 shows the Application Accelerator option. This example represents an environment that is scaled up for added capacity and shows an environment that includes an extra four storage enclosures.
Figure 13-3 Application Accelerator option
Figure 13-4 shows the Mixed Workload Accelerator option. Scale out is used for an expanded virtualized system with added building blocks, each containing one storage enclosure and two control enclosures. For balanced increase of performance and scale, up to four FlashSystem building blocks can be clustered into a single system, multiplying performance and capacity with each addition.
Figure 13-4 Mixed Workload Accelerator option
Figure 13-5 shows the Small Data Center option. In this scenario, a single fixed building block is used. A fixed FlashSystem V9000 platform consists of two FlashSystem V9000 control enclosures that are directly cabled to one FlashSystem storage enclosure.
Figure 13-5 Small Data Center option
Figure 13-6 shows the Public or Private Cloud option. This example is an environment where the virtualized system is scaled up and out to improve capacity, IOPS, and bandwidth while maintaining MicroLatency.
Figure 13-6 Public or Private Cloud option
Figure 13-7 shows the Virtualized Data Center option. The environment in this example has a single FlashSystem V9000, which is virtualizing external storage. External storage virtualization is an optional license feature that enables FlashSystem V9000 to manage capacity in other Fibre Channel SAN storage systems. When FlashSystem V9000 virtualizes a storage system, its capacity becomes part of the V9000 system. Capacity in external systems inherits all the functional richness of the FlashSystem V9000. Product number 5641-VC7, feature code 0663 is required. For more details, see the IBM FlashSystem V9000 Product Guide, TIPS1281:
Figure 13-7 Virtualized Data Center option
FlashSystem V9000 offers exceptional flexibility and performance with configurations that can be tailored to address your specific requirements.
13.1.2 Performance monitoring
In this section, we highlight several performance monitoring techniques.
 
Note: FlashSystem V9000 enclosure performance is not included in this section. For more details about enclosure performance for IBM SAN Volume Controller, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
Collecting performance statistics
The FlashSystem V9000 components constantly collect performance statistics. By default, the statistics files are created at 5-minute intervals, with a supported range of 1 - 60 minutes.
 
Tip: The collection interval can be changed by using the svctask startstats command.
The statistics files (VDisk, MDisk, and node) are saved at the end of the sampling interval, and a maximum of 16 files (each) are stored before they are overwritten in a rotating log fashion. This design provides statistics for the most recent 80-minute period if the default 5-minute sampling interval is used. The V9000 supports user-defined sampling intervals of 1 - 60 minutes.
The maximum space that is required for a performance statistics file is 1,153,482 bytes. There can be up to 128 files across the eight V9000 AC2 controller nodes of a fully scaled cluster. This design makes the total space requirement a maximum of 147,645,696 bytes (128 x 1,153,482) for all performance statistics from all nodes in a V9000 controller cluster.
Keep this maximum in mind for time-critical situations; otherwise, the required size is unimportant because the V9000 controller node hardware easily accommodates the space. You can define the sampling interval by using the startstats -interval 2 command to collect statistics at 2-minute intervals.
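For example, from a host with key-based SSH access, you can change the interval and confirm the new setting. This is a minimal sketch that uses the cluster alias ITSO_V9000 from the examples later in this chapter; the lssystem output is abbreviated to the relevant fields:
$ ssh superuser@ITSO_V9000 startstats -interval 2
$ ssh superuser@ITSO_V9000 lssystem | grep statistics
statistics_status on
statistics_frequency 2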
For more information, see the startstats command at the V9000 web page in the IBM Knowledge Center:
In the table of contents at the website, select IBM FlashSystem V9000 7.4.0 → Reference → Command line interface → Clustered system commands.
 
Collection intervals: Although more frequent collection intervals provide a more detailed view of what happens within the FlashSystem V9000, they shorten the amount of time that the historical data is available on the V9000 Controller. For example, rather than an 80-minute period of data with the default five-minute interval, if you adjust to 2-minute intervals, you have a 32-minute period instead.
The V9000 does not collect cluster-level statistics. Instead, you use the per node statistics that are collected. The sampling of the internal performance counters is coordinated across both nodes of the V9000 controller cluster so that when a sample is taken, all nodes sample their internal counters at the same time. An important step is to collect all files from all nodes for a complete analysis. Tools such as Tivoli Storage Productivity Center, a component of IBM Spectrum Control, can perform this intensive data collection for you.
 
Note: Starting with Tivoli Storage Productivity Center version 5.2.6, FlashSystem V9000 is supported. See the current support matrix for more detail:
Statistics file naming
The files that are generated are written to the /dumps/iostats/ directory. The file names are in the following formats:
For MDisk statistics:
Nm_stats_<node_serial_number>_<date>_<time>
For VDisk statistics:
Nv_stats_<node_serial_number>_<date>_<time>
For node statistics:
Nn_stats_<node_serial_number>_<date>_<time>
For disk drive statistics (not used for FlashSystem V9000 Controller):
Nd_stats_<node_serial_number>_<date>_<time>
The node_serial_number is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an MDisk statistics file name:
Nm_stats_75AM710_150323_075241
Example 13-1 shows the typical MDisk, volume, node, and disk drive statistics file names.
Example 13-1 File names of per node statistics
IBM_FlashSystem:ITSO_V9000:superuser>svcinfo lsiostatsdumps
id iostat_filename
0 Nv_stats_75AM710_150323_164316
1 Nm_stats_75AM710_150323_164316
2 Nd_stats_75AM710_150323_164316
3 Nn_stats_75AM710_150323_164316
4 Nm_stats_75AM730_150323_164316
5 Nv_stats_75AM730_150323_164316
6 Nd_stats_75AM730_150323_164316
7 Nn_stats_75AM730_150323_164316
 
Tip: The performance statistics files can be copied from the V9000 Controllers to a local drive on your workstation by using the pscp.exe (included with PuTTY) from an MS-DOS command prompt, as shown in this example:
C:\>pscp -unsafe -load ITSOadmin ITSOadmin@ITSO_V9000:/dumps/iostats/* c:\statsfiles
Specify the -unsafe parameter when you use wildcards.
Use the -load parameter to specify the session that is defined in PuTTY.
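On UNIX hosts, a similar copy can be made with scp. This is a minimal sketch, assuming the private key file and a local statsfiles directory shown here:
# scp -i privateKey superuser@ITSO_V9000:/dumps/iostats/* ./statsfiles/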
The qperf utility
qperf is an unofficial (no-cost and unsupported) collection of awk scripts that is available for download from IBM Techdocs. It provides a quick performance overview by using the CLI and a UNIX Korn shell (it can also be used with Cygwin on Windows platforms).
You can download qperf from the following address:
The performance statistics files are in .xml format. They can be manipulated by using various tools and techniques. Figure 13-8 shows the type of chart that you can produce by using the V9000 controller performance statistics.
Figure 13-8 Spreadsheet example
Real-time performance monitoring
FlashSystem V9000 controller supports real-time performance monitoring. Real-time performance statistics provide short-term status information for the V9000 controller. The statistics are shown as graphs in the management GUI or can be viewed from the CLI. With system-level statistics, you can quickly view the CPU usage and the bandwidth of volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes per second (MBps) or I/O per second (IOPS), and a view of bandwidth over time.
Each node collects various performance statistics, mostly at 5-second intervals, and the statistics are available from the config node in a clustered environment. This information can help you determine the performance effect of a specific node. As with system statistics, node statistics help you to evaluate whether the node is operating within normal performance metrics.
Real-time performance monitoring gathers the following system-level performance statistics:
CPU utilization
Port utilization and I/O rates
Volume and MDisk I/O rates
Bandwidth
Latency
 
Note: Real-time statistics are not a configurable option and cannot be disabled.
Real-time performance monitoring with the CLI
The lsnodestats and lssystemstats commands are available for monitoring the statistics through the CLI. Next, we show you examples of how to use them.
The lsnodestats command provides performance statistics for the nodes that are part of a clustered system, as shown in Example 13-2 (the output is truncated and shows only part of the available statistics). You can also specify a node name in the command to limit the output to a specific node; a polling sketch follows the column list below. See Table 13-1 on page 510 for statistics field name descriptions.
Example 13-2 The lsnodestats command output
$ ssh superuser@ITSO_V9000 lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 BB1ACN1sn75AM710 compression_cpu_pc 10 10 150326170835
1 BB1ACN1sn75AM710 cpu_pc 28 28 150326170835
1 BB1ACN1sn75AM710 fc_mb 351 351 150326170835
1 BB1ACN1sn75AM710 fc_io 109447 111531 150326170805
1 BB1ACN1sn75AM710 drive_io 0 5 150326170820
1 BB1ACN1sn75AM710 drive_ms 0 0 150326170835
2 BB1ACN2sn75AM730 write_cache_pc 34 35 150326170738
2 BB1ACN2sn75AM730 total_cache_pc 80 80 150326170838
2 BB1ACN2sn75AM730 vdisk_mb 212 213 150326170833
2 BB1ACN2sn75AM730 vdisk_io 16272 16389 150326170358
2 BB1ACN2sn75AM730 vdisk_ms 0 0 150326170838
2 BB1ACN2sn75AM730 mdisk_mb 25 27 150326170733
2 BB1ACN2sn75AM730 mdisk_io 1717 2101 150326170423
The example shows statistics for the two node members of cluster ITSO_V9000. For each node, the following columns are displayed:
stat_name. The name of the statistic field.
stat_current. The current value of the statistic field.
stat_peak. The peak value of the statistic field in the last 5 minutes.
stat_peak_time. The time that the peak occurred.
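Because you can pass a node name and pipe the output on the host side, a simple poll of one statistic is easy to script. The following is a minimal sketch that samples the current VDisk latency of one node every 30 seconds; the node name and cluster alias are taken from the examples in this chapter, and each loop iteration opens a transient SSH session that counts against the 10-session limit described later in this chapter:
while true
do
  date
  ssh superuser@ITSO_V9000 lsnodestats BB1ACN1sn75AM710 | grep vdisk_ms
  sleep 30
done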
The lssystemstats command lists the same set of statistics that is listed with the lsnodestats command, but represents all nodes in the cluster. The values for these statistics are calculated from the node statistics values in the following way:
Bandwidth. Sum of bandwidth of all nodes.
Latency. Average latency for the cluster, which is calculated by using data from the whole cluster, not an average of the single node values.
IOPS. Total IOPS of all nodes.
CPU percentage. Average CPU percentage of all nodes.
Example 13-3 shows the resulting output of the lssystemstats command.
Example 13-3 The lssystemstats command output
$ ssh superuser@ITSO_V9000 lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 9 10 150326172634
cpu_pc 28 28 150326172649
fc_mb 757 780 150326172629
fc_io 217243 219767 150326172454
sas_mb 0 0 150326172649
sas_io 0 0 150326172649
iscsi_mb 0 0 150326172649
iscsi_io 0 0 150326172649
write_cache_pc 34 35 150326172639
total_cache_pc 80 80 150326172649
vdisk_mb 392 414 150326172154
vdisk_io 31891 32894 150326172154
vdisk_ms 0 0 150326172649
mdisk_mb 99 116 150326172439
...
Table 13-1 has a brief description of each of the statistics that are presented by the lssystemstats and lsnodestats commands.
Table 13-1 The lssystemstats and lsnodestats statistics field name descriptions
cpu_pc (Percentage): Utilization of node CPUs
fc_mb (MBps): Fibre Channel bandwidth
fc_io (IOPS): Fibre Channel throughput
sas_mb (MBps): SAS bandwidth
sas_io (IOPS): SAS throughput
iscsi_mb (MBps): IP-based Small Computer System Interface (iSCSI) bandwidth
iscsi_io (IOPS): iSCSI throughput
write_cache_pc (Percentage): Write cache fullness; updated every 10 seconds
total_cache_pc (Percentage): Total cache fullness; updated every 10 seconds
vdisk_mb (MBps): Total VDisk bandwidth
vdisk_io (IOPS): Total VDisk throughput
vdisk_ms (Milliseconds): Average VDisk latency
mdisk_mb (MBps): MDisk (SAN and RAID) bandwidth
mdisk_io (IOPS): MDisk (SAN and RAID) throughput
mdisk_ms (Milliseconds): Average MDisk latency
drive_mb (MBps): Drive bandwidth
drive_io (IOPS): Drive throughput
drive_ms (Milliseconds): Average drive latency
vdisk_w_mb (MBps): VDisk write bandwidth
vdisk_w_io (IOPS): VDisk write throughput
vdisk_w_ms (Milliseconds): Average VDisk write latency
mdisk_w_mb (MBps): MDisk (SAN and RAID) write bandwidth
mdisk_w_io (IOPS): MDisk (SAN and RAID) write throughput
mdisk_w_ms (Milliseconds): Average MDisk write latency
drive_w_mb (MBps): Drive write bandwidth
drive_w_io (IOPS): Drive write throughput
drive_w_ms (Milliseconds): Average drive write latency
vdisk_r_mb (MBps): VDisk read bandwidth
vdisk_r_io (IOPS): VDisk read throughput
vdisk_r_ms (Milliseconds): Average VDisk read latency
mdisk_r_mb (MBps): MDisk (SAN and RAID) read bandwidth
mdisk_r_io (IOPS): MDisk (SAN and RAID) read throughput
mdisk_r_ms (Milliseconds): Average MDisk read latency
drive_r_mb (MBps): Drive read bandwidth
drive_r_io (IOPS): Drive read throughput
drive_r_ms (Milliseconds): Average drive read latency
Real-time performance monitoring with the GUI
Real-time statistics are also available from the V9000 controller GUI. Select Monitoring → Performance (Figure 13-9) to open the performance monitoring window.
Figure 13-9 V9000 Monitoring menu
As shown in Figure 13-10 on page 513, the Performance monitoring window is divided into sections that provide utilization views for the following resources:
CPU Utilization. Shows the overall CPU usage percentage.
Volumes. Shows the overall volume utilization with the following fields:
 – Read
 – Write
 – Read latency
 – Write latency
Interfaces. Shows the overall statistics for each of the available interfaces:
 – Fibre Channel
 – iSCSI
 – SAS
 – IP Remote Copy
MDisks. Shows the following overall statistics for the MDisks:
 – Read
 – Write
 – Read latency
 – Write latency
Figure 13-10 Performance monitoring window
You can also select to view performance statistics for each of the available nodes of the system, as shown in Figure 13-11.
Figure 13-11 Select controller node
You can also change the metric between MBps and IOPS, as shown in Figure 13-12.
Figure 13-12 Changing to MBps or IOPS
On any of these views, you can select any point with your cursor to see the exact value and the time at which it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values gathered, as shown in Figure 13-13.
Figure 13-13 Detailed resource use
For each of the resources, you can choose which values to display by selecting them. For example, as shown in Figure 13-14, the MDisks view offers four fields: Read, Write, Read latency, and Write latency. In our example, the latencies are not selected.
Figure 13-14 Detailed resource use
Performance data collection and IBM Spectrum Control
IBM Spectrum Control delivers the functionality of IBM Tivoli Storage Productivity Center and IBM Virtual Storage Center.
Spectrum Control provides efficient infrastructure management for virtualized, cloud, and software-defined storage to simplify and automate storage provisioning, capacity management, availability monitoring, and reporting.
IBM Spectrum Control is part of the IBM Data and Storage Management Solutions portfolio, which also includes IBM Storage Integration Server and other offerings.
For more information, see the IBM Data Management and Storage Management website:
 
Note: See the current support matrix for more detail:
Although you can obtain performance statistics in standard .xml files, analyzing those files directly is less practical than using a dedicated tool. IBM Spectrum Control, which delivers the functionality of IBM Tivoli Storage Productivity Center and IBM Virtual Storage Center, is the supported IBM tool for collecting and analyzing V9000 controller performance statistics.
For more information about the use of the Tivoli Storage Productivity Center component of IBM Spectrum Control to monitor your storage subsystem, see SAN Storage Performance Management Using Tivoli Storage Productivity Center, SG24-7364.
13.2 Estimating compression savings
Some common data types are good candidates for compression, and others are not. The best candidates for data compression are data types that are not compressed by nature. Viable candidates include data types that are involved in many workloads and applications, such as databases, character/ASCII based data, email systems, server virtualization, CAD/CAM, software development systems, and vector data.
IBM provides a tool called Comprestimator that can help determine whether your data is a candidate for compression, and to which degree data can be compressed. IBM Comprestimator can quickly scan existing volumes and provide an accurate estimation of expected compression ratio.
13.2.1 IBM Comprestimator utility
IBM Comprestimator is a command-line, host-based utility that can be used to estimate the compression rate for block devices. The utility uses advanced mathematical and statistical formulas to perform the sampling and analysis process in a short and efficient way. It also displays its accuracy level by showing the maximum error range of the results that are achieved based on the formulas it uses. The utility runs on a host that has access to the devices to be analyzed, and runs only read operations, so it has no effect on the data stored on the device.
The following section provides useful information about installing IBM Comprestimator on a host and using it to analyze devices on that host. Depending on the environment configuration, in many cases, IBM Comprestimator is used on more than one host to analyze additional data types.
At the time this Redbooks publication was written, IBM Comprestimator was supported on the following client operating system versions:
Windows 2008 R2 Server, Windows 2012, Windows 7, Windows 8
Red Hat Enterprise Linux Version 5.x, 6.x (x86 64 bit)
ESXi 4.1, 5.0
Sun Solaris 10, 11
AIX 6.1, 7
HPUX 11.31
SUSE SLES 11 (x86 64 bit)
Ubuntu 12 (x86 64 bit)
Comprestimator is available from IBM on the following website:
This link includes the installation instructions for the IBM Comprestimator tool.
13.2.2 Installing IBM Comprestimator
IBM Comprestimator must initially be installed on a supported Windows system. After the installation completes, the binary files for the other supported operating systems are available in the Windows installation folder.
By default, the files are copied to the following locations:
In Windows 64-bit: C:\Program Files (x86)\IBM\Comprestimator
In Windows 32-bit: C:\Program Files\IBM\Comprestimator
After transferring the operating system-dependent IBM Comprestimator tools to your system, follow the installation instructions that are provided on the Comprestimator download page. The program invocation is different on different operating systems, but the output is the same.
13.2.3 Using IBM Comprestimator
The following topic describes how to use the IBM Comprestimator utility.
IBM Comprestimator syntax
Example 13-4 provides an example of syntax for the IBM Comprestimator tool.
Example 13-4 IBM Comprestimator syntax
Comprestimator version : 1.5.2.2 (Build w0098)
Usage :
comprestimator <-s storage_type> [ -h | -l | -d device | -n disk_number] [-c filename] [-v] [-p number_of_threads] [-P] [-I] [--storageVer=version] [--config=task_file]
-n Disk number
-l List devices
-d device name Path of device to analyze
-p number Number of threads (default 10)
-c Export the results to a CSV file
-v Verbose output
-h Print this help message
-P Display results using a paragraph format
-s,--storageSys Storage system type. Supported values are: SAN Volume Controller, XIV
-I Allow larger scale of io-error threshold rate (up to 5%)
--config=file Configuration file that contains list of devices to analyze
--storageVer=version Target storage system version. Supported Storwize/SVC/Flex options: 6.4, 7.1, 7.2, 7.3; default: 7.3, XIV options: 11.6
 
IBM Comprestimator output
To list storage devices, use the comprestimator -l command, as shown in Example 13-5.
Example 13-5 Comprestimator: List devices (output shortened for clarity)
C:\Program Files (x86)\ibm\ComprestimatorWindows>comprestimator -l
Drive number [0] \\?\scsi#disk&ven_lsilogic&prod_logical_volume#5&138f362
Drive number [1] \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&363
Drive number [2] \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&363
Drive number [3] \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&363
Drive number [4] \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&363
Drive number [5] \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&363
We choose to analyze drive number 2, as shown in Example 13-6.
Example 13-6 Analyze Drive number 2 (output shortened for clarity)
C:\Program Files (x86)\ibm\ComprestimatorWindows>comprestimator -n 2 -s SAN Volume Controller -v
Version: 1.5.2.2 (Build w0098)
Start time: 15/07/2015 13:48:18.103676
Device name: \\?\mpio#disk&ven_ibm&prod_2145&rev_0000#1&7f6ac24&0&3630303530373
Device size: 100.0 GB
Number of processes: 10
 
Sample#| Device | Size(GB) | Compressed | Total | Total | Thin Provisioning | Compression | Compression
| Name | | Size(GB) | Savings(GB)| Savings(%) | Savings(%) | Savings(%) | Accuracy Range(%)
-------+--------+----------+------------+------------+------------+-------------------+-------------+------------------
32 |********| 100.0 | 6.2 | 93.8 | 93.8% | 75.6% | 74.5% | 51.3%
69 |********| 100.0 | 8.1 | 91.9 | 91.9% | 72.8% | 70.0% | 34.9%
103 |********| 100.0 | 9.1 | 90.9 | 90.9% | 71.3% | 68.2% | 28.6%
According to Example 13-6, enabling compression on the system that contains this FlashSystem V9000 volume would yield compression savings in the range of 68.2% to 74.5%.
 
Tip: For FlashSystem V840 and FlashSystem V9000 systems, select SAN Volume Controller as the Storage system type.
Explanation of compression output
Table 13-2 shows an explanation of the output from IBM Comprestimator.
Table 13-2 IBM Comprestimator output explanations
Sample#: The number of the current sample reported.
Device: The device name that is used in the scan.
Size (GB): The total size of the device as reported by the operating system, in gigabytes.
Compressed Size (GB): The estimated size of the device if it is compressed by using FlashSystem V9000 Real-time Compression, in gigabytes.
Total Savings (GB): The total estimated savings from thin provisioning and compression, in gigabytes.
Total Savings (%): The estimated savings from thin provisioning and compression, as a percentage of the size of the device. This value is calculated as follows: Total Savings (%) = 1 - (Compressed Size (GB) / Size (GB)).
Thin Provisioning Savings (%): The estimated savings from thin provisioning (areas with zeros are stored by using minimal capacity).
Compression Savings (%): The estimated savings from compression.
Compression Accuracy Range (%): The accuracy of the estimate that is provided by Comprestimator. The results are estimated based on samples from the device, and therefore might be lower or higher than the actual compression that would be achieved. The approximate accuracy of the results is represented as a percentage of the total size of the device. For example, if the estimated Compression Savings (%) is 67% and the Compression Accuracy Range (%) is 5%, the actual compression savings if this device is compressed on FlashSystem V9000 is 62% - 72%.
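On UNIX hosts, the same analysis can be run directly against a block device. The following is a minimal sketch; the device path is hypothetical, and quoting of the multi-word storage type is an assumption:
# ./comprestimator -d /dev/mapper/mpatha -s "SAN Volume Controller" -c results.csv -v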
13.3 Command-line hints
FlashSystem V9000 contains a robust command-line interface that is based on the IBM SAN Volume Controller and Storwize family of products. Command-line scripting techniques can be used to automate tasks in the following areas:
Running commands on the cluster
Creating connections
V9000 command-line scripting
Example commands
Backing up the configuration
Running the Software Upgrade Test Utility
Secure erase of data
13.3.1 Running commands on the FlashSystem V9000
To automate copy services processes, you must connect to the cluster. In normal operations, you connect to the cluster by using the GUI or command line. The GUI is not an appropriate interface for automating processes, so that alternative is not described here. All automation techniques are achieved through the FlashSystem V9000 command line or the Common Information Model Object Manager (CIMOM), which currently acts as a proxy to the command line.
This section uses the term user agent. The user agent can be the CIMOM, which connects to the cluster by using Secure Shell (SSH). Or the user agent can be a user connecting directly with an SSH client, either in an interactive mode or by using a script.
Running commands on the cluster follows this sequence of steps:
1. Connection
2. Authentication
3. Submission
4. Authorization
5. Running a command (execution)
Connection
Commands are submitted to the cluster during a connection session to the cluster. User agents make connections through the SSH protocol. FlashSystem has several security features that affect how often you can attempt connections. These security features are in place to prevent attacks (malicious or accidental) that can bring down a V9000 controller node. These features might initially seem restrictive, but they are relatively simple to work with to maintain a valid connection.
When creating automation by using the CLI, an important consideration is to be sure that scripts behave responsibly and do not attempt to breach the connection rules. At a minimum, an automation system must ensure that it can gracefully handle rejected connection attempts.
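For example, a small wrapper can retry a transient invocation when the session-limit error (CMMVC7017E, shown in Example 13-7) is returned, and give up on any other failure. This is a minimal sketch only:
for attempt in 1 2 3
do
  output=$(ssh superuser@ITSO_V9000 lssystemstats 2>&1) && break
  # retry only when the active-session limit was hit
  echo "$output" | grep -q CMMVC7017E || break
  sleep 30
done
echo "$output"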
Figure 13-15 shows how V9000 connection restrictions work.
Figure 13-15 FlashSystem V9000 SSH restrictions
Figure 13-15 shows that two queues are in action: Pending Connections and Active Connections. The connection process follows this sequence:
1. A connection request comes into the V9000. If the Pending Connections queue has a free position, the request is added to it; otherwise, the connection is explicitly rejected.
2. Pending Connections are handled in one of two ways:
a. If any of the following conditions are true, the connection request is rejected:
 • No key is provided, or the provided key is incorrect.
 • The provided user name is not admin or service.
 • The Active Connections queue is full. In this case, a warning is returned to the SSH client as shown in Example 13-7.
b. If none of the conditions listed in the previous step are true, the connection request is accepted and moved from the Pending Connections queue to the Active Connections queue.
3. Active Connections end after any of the following events:
 – The user logs off manually.
 – The SAN Volume Controller SSH daemon recognizes that the connection has grown idle.
 – The network connectivity fails.
 – The configuration node fails over.
In this case, both queues are cleared because the SSH daemon stops and restarts on a different node.
Example 13-7 shows a V9000 command-line warning about too many logins. Only 10 concurrent active SSH sessions are allowed.
Example 13-7 V9000 Command-line warning about too many logins
$ ssh ITSOadmin@ITSO_V9000
CMMVC7017E Login has failed because the maximum number of concurrent CLI sessions has been reached.
 
Connection to ITSO_V9000 closed.
If the limit of 10 concurrent active SSH sessions is reached, an entry is generated on the error log as shown in Figure 13-16.
Figure 13-16 GUI console warning the limit was reached
Double-click the status alert to see the event panel (Figure 13-17).
Figure 13-17 Error 2500 - SSH Session limit reached
To view the details, right-click the error event and select Properties. The event details are displayed (Figure 13-18).
Figure 13-18 Event details
To fix this error, run the fix procedure (which is detailed in “Directed Maintenance Procedure” on page 282, and shown in Figure 8-38 on page 283). A list of active SSH sessions is displayed (Figure 13-19). The quickest way to resolve the error is as follows:
1. Close all connections.
2. Click Next to continue.
Figure 13-19 SSH Session limit reached
3. Selecting Close all SSH sessions through this fix procedure closes the listed sessions, and the error is fixed. If you close the active sessions manually on the host side without choosing to close all of the sessions through the Run Maintenance Procedures, you must select The number of SSH sessions has been reduced, mark this event as fixed.
4. A warning that all CLI connections will be closed is shown in Figure 13-20. Click Next to determine whether the process is completed.
Figure 13-20 Warning about closing all SSH connections
Authentication
FlashSystem V9000 enables you to log in with a user name and password. The two types of users who can access the system are local users and remote users, based on how the users are authenticated to the system:
Local users must provide a password, an SSH key, or both, and are authenticated through the authentication methods that are configured on the SAN Volume Controller system. If the local user needs access to the management GUI, a password is needed for the user. If the user requires access to the CLI through SSH, either a password or a valid SSH key file is necessary.
Local users must be part of a user group that is defined on the system. User groups define roles that authorize the users within that group to a specific set of operations on the system.
Remote users are authenticated on a remote service with either Tivoli Integrated Portal or Lightweight Directory Access Protocol (LDAP v3) support, such as IBM Tivoli Storage Productivity Center, which delivers the functionality of IBM Spectrum Control, or IBM Security Directory Server. A remote user does not need local authentication methods.
With Tivoli Integrated Portal, both a password and SSH key are required to use the CLI. With LDAP, having a password and SSH key is not necessary, although SSH keys optionally can be configured. Remote users who need to access the system when the remote service is down also need to configure local credentials. Remote users have their groups defined by the remote authentication service.
See the following sections:
For details about using the management GUI to manage users and user groups on the system, see 8.8.1, “Users” on page 333.
To configure remote authentication with Tivoli Integrated Portal or Lightweight Directory Access Protocol, see 9.4.1, “Configure remote authentication” on page 361.
For information about the auditing of commands on the V9000 cluster, see 8.8.2, “Audit log” on page 337.
Submission
When connected to a cluster, the user agent can start submitting commands. First, the syntax is checked. If the syntax checking fails, an appropriate error message is returned. Any automation implementation must ensure that all submitted commands have the correct syntax. If they do not, they must be designed to handle syntax errors. Designing a solution that does not generate invalid syntax is easier than designing a solution to handle all potential syntax errors.
Authorization
Next, commands with valid syntax are checked to determine whether the user agent has the authority to submit the command. A role is associated with the key that was used to authenticate the connection. FlashSystem V9000 checks the submitted command against the authorization role. If the user agent is not authorized to run this command, the following error is returned:
CMMVC6253E The task has failed because the user's role is not authorized to submit the command.
If the user agent is authorized, the command is sent to be run.
See the following resources:
For information about authorization and roles, see 8.8.1, “Users” on page 333.
For more details, see Implementing the IBM System Storage SAN Volume Controller V7.4, SG24-7933.
Running a command
When a command is run, it can fail (one possible scenario) or succeed (four possible scenarios):
The command fails. An error message is written to STDERR.
The command succeeds. A warning is written to STDERR.
The command succeeds. A warning is written to STDERR; information is sent to STDOUT.
The command succeeds. Information is written to STDOUT.
The command succeeds. Nothing is written to STDOUT.
 
Note: Data that is written to STDOUT and STDERR by the V9000 is written to STDOUT and STDERR by your SSH client. However, you must manually verify that the data was written to STDOUT and STDERR by your SSH client.
13.3.2 Creating connections
Connecting to the V9000 cluster is the first step in running commands. Any automation solution requires a connection component. This component must be as robust as possible, because it forms the foundation of your solution.
There are two forms of connection solutions:
Transient: One command is submitted per connection, and the connection is closed after the command is completed.
Persistent: The connection is made and stays open. Multiple commands are submitted through this single connection, including interactive sessions and the CIMOM.
Transient connections
Transient connections are simple to create. The most common SSH clients enable the user to submit a command as part of the invocation. Example 13-8 shows a user submitting two commands in this way with ssh on an AIX server. By using operating system commands, the V9000 output can be processed further.
Example 13-8 Transient connection to V9000 from AIX Server
# ssh -i publickey -l ITSOadmin ITSO_V9000 lsenclosure -delim :
id:status:type:managed:IO_group_id:IO_group_name:product_MTM:serial_number:total_canisters:online_canisters:total_PSUs:online_PSUs:drive_slots:total_fan_modules:online_fan_modules
1:online:expansion:yes:0::9846-AE2:1371006:2:2:2:2:12:0:0
 
# ssh -i publickey -l ITSOadmin ITSO_V9000 lsenclosure -delim : | cut -f1,2,7,8 -d :
id:status:product_MTM:serial_number
1:online:9846-AE2:1371006
#
Example 13-9 shows a user submitting a command as part of the user’s invocation using the plink command on a Windows server.
Example 13-9 Transient connection to V9000 from Windows server
C:\Program Files\Putty>plink -i private.ppk -l superuser ITSO_V9000 lsenclosure -delim :
id:status:type:managed:IO_group_id:IO_group_name:product_MTM:serial_number:total_canisters:online_canisters:total_PSUs:online_PSUs:drive_slots:total_fan_modules:online_fan_modules
1:online:expansion:yes:0::9846-AE2:1371006:2:2:2:2:12:0:0
 
C:\Program Files\Putty>
These transient connections go through all five stages of running a command and return to the command line. You can redirect the two output streams (STDOUT and STDERR) using the operating system’s standard redirection operators to capture the responses.
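For example, on an AIX host the two streams can be captured separately by using standard redirection, in the same invocation style as Example 13-8:
# ssh -i publickey -l ITSOadmin ITSO_V9000 lsvdisk -delim : > vdisks.out 2> vdisks.err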
These lengthy invocations can be shortened in client-specific ways. User configuration files can be used with the AIX SSH client. The configuration file in Example 13-10 enables you to create a transient connection.
Example 13-10 Sample SSH configuration file saved as sampleCfg
# cat sampleCfg
Host ITSO
HostName ITSO_V9000
IdentityFile ./privateKey
User ITSOadmin
 
Host ITSOsu
HostName ITSO_V9000
IdentityFile .ssh/id_rsa
User superuser
The Transient connection is shown in Example 13-11.
Example 13-11 Transient connection to FlashSystem V9000 system using ssh and configuration file
# ssh -F sampleCfg ITSOsu sainfo lsservicenodes
panel_name cluster_id cluster_name node_id node_name relation node_status error_data
75AM710 00000203202035F4 ITSO_V9000 1 BB1ACN1sn75AM710 local Active
75AM730 00000203202035F4 ITSO_V9000 2 BB1ACN2sn75AM730 cluster Active
01-2 00000203202035F4 ITSO_V9000 expansion Service 690
01-1 00000203202035F4 ITSO_V9000 expansion Managed
Shortening the plink invocation requires the creation of a PuTTY session. First, open the PuTTY application and enter the following line in the Host Name (or IP address) field, as shown in Figure 13-21:
superuser@<Host Name or cluster IP address>
Figure 13-21 Add user name and system name to PuTTY session
Configure the private key for this session by making the selections, as shown in steps 1, 2, and 3 of Figure 13-22. Then click Browse (step 4) to locate the private key file.
Figure 13-22 Set private key for PuTTY SSH session
Complete saving the session (Figure 13-23) by returning to the Session Panel (1), providing a session name (2), and clicking Save (3).
Figure 13-23 Save PuTTY session for use with plink
After a session is saved, you can use it to make transient connections from the command line (Example 13-12).
Example 13-12 Transient connection to FlashSystem V9000 system using plink with PuTTY session
C:\Users\IBM_ADMIN>plink -load ITSOsu lsenclosurebattery
enclosure_id battery_id status charging_status recondition_needed percent_charged end_of_life_warning
1 1 online idle no 97 no
1 2 online idle no 91 no
 
C:\Users\IBM_ADMIN>
Persistent connections
A persistent connection is a connection that exists beyond the submission and execution of a single command. As outlined previously, the CIMOM provides a persistent connection, but it does not provide direct access to the command line. To provide a persistent connection to the command line, you must use multiple processes.
There are as many ways to provide a persistent connection to the command line as there are programming languages. Most methods involve creating a process that connects to the cluster, writing to its STDIN stream, and reading from its STDOUT and STDERR streams.
You can use persistent connections in several ways:
On a per-script basis
A script opens a connection that exists for the life of the script, enabling multiple commands to be submitted. The connection ends when the script ends.
As a stand-alone script
A connection is opened and other scripts communicate with this script to submit commands to the cluster. This approach enables the connection to be shared by multiple scripts. This in turn enables a greater number of independent scripts to access the cluster without using up all of the connection slots.
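One way to implement a persistent connection without writing a custom harness is OpenSSH connection multiplexing, where a master connection is opened once and later invocations reuse it. This is a hedged sketch that assumes OpenSSH on the host; whether multiplexed channels count as one or several sessions against the cluster limit should be verified in your environment, and the idle-connection timeout described earlier still applies:
# open a master connection in the background
ssh -M -S /tmp/v9k.sock -fN superuser@ITSO_V9000
# later invocations reuse the master connection
ssh -S /tmp/v9k.sock superuser@ITSO_V9000 lssystemstats
ssh -S /tmp/v9k.sock superuser@ITSO_V9000 lsvdisk -nohdr
# close the master connection when finished
ssh -S /tmp/v9k.sock -O exit superuser@ITSO_V9000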
See the following source for more information about transient and persistent connections:
IBM System Storage SAN Volume Controller and Storwize V7000 Replication Family Services, SG24-7574
 
13.3.3 FlashSystem V9000 command-line scripting
When connected to the cluster command line, you can use small amounts of automation for various purposes, including for the following tasks:
Repeatedly submitting a single command to a set of FlashSystem V9000 objects
Searching the configuration for objects conforming to certain criteria
The V9000 command line is a highly restricted Bash shell. You cannot access UNIX commands, such as cd or ls. The only commands that are available are built-in commands, such as echo or read. In addition, redirecting inputs and outputs is not supported, but you can pipe commands together.
 
Note: FlashSystem V9000 uses Spectrum Virtualize technology, built on the foundation of the SAN Volume Controller. The command lines function in the same secure way, which enables you to use existing scripting for automation and especially replication.
Example 13-13 shows a script that lists all volumes that are not online. This script complements the filtervalue parameter of the lsvdisk command. The filtervalue parameter provides matches only when a property matches a value. The command-line script in Example 13-13 provides matches according to other criteria.
Example 13-13 FlashSystem V9000 command-line script listing volumes that are not online
001. lsvdisk -nohdr | while read id name IOGid IOGname status rest
002. do
003. if [ "$status" != "online" ]
004. then
005. echo "Volume '$name' ($id) is $status"
006. fi
007. done
 
Note: An offline volume is an error condition. In normal operations, you do not find any volumes that are not online.
Line 001 submits the lsvdisk command and pipes the output to the read command, which is combined with a while command. This combination creates a loop that runs once per line of output from the lsvdisk command.
The read command is followed by a list of variables. A line is read from the lsvdisk command. The first word in that line is assigned to the first variable. The second word is assigned to the second variable, and so on, with any remaining words assigned to the final variable (with intervening spaces included).
In our case, we use the -nohdr parameter because we are not interested in the header information.
Lines 003 - 006 check the status variable. If it is not equal to online, the information is printed to STDOUT.
Submitting command-line scripts
You can submit command-line scripts from an interactive prompt, if required. However, you can also submit the scripts as batch files. Example 13-14 shows how to submit scripts as batch files with ssh.
Example 13-14 Submission of batch file to FlashSystem V9000 using ssh
ssh superuser@ITSO_V9000 -T < batchfile.sh
Host and WWPN info:
 
Host 0 (TA_Win2012) : WWPN is =10000000C9B83684
Host 0 (TA_Win2012) : WWPN is =10000000C9B83685
Example 13-15 shows how to submit scripts as batch files with plink.
Example 13-15 Submission of batch file to FlashSystem V9000 using plink
C:\>plink -load ITSOadmin -m batchfile.sh
 
Host and WWPN info:
 
Host 0 (RedHat) : WWPN is =2100000E1E302C73
Host 0 (RedHat) : WWPN is =2100000E1E302C72
Host 0 (RedHat) : WWPN is =2100000E1E302C51
Host 0 (RedHat) : WWPN is =2100000E1E302C50
Host 1 (AIX) : WWPN is =10000090FA13B915
Host 1 (AIX) : WWPN is =10000090FA13B914
Host 1 (AIX) : WWPN is =10000090FA0E5B95
Host 1 (AIX) : WWPN is =10000090FA0E5B94
Host 1 (AIX) : WWPN is =10000090FA02F630
Host 1 (AIX) : WWPN is =10000090FA02F62F
Host 1 (AIX) : WWPN is =10000090FA02F621
Host 1 (AIX) : WWPN is =10000090FA02F620
Host 2 (TA_Win2012) : WWPN is =10000000C9B83684
Host 2 (TA_Win2012) : WWPN is =10000000C9B83685
Both commands submit a simple batch file, as shown in Example 13-16. The batch file lists the WWPNs for each host that is defined in the FlashSystem V9000, producing the output that is shown in Example 13-15.
Example 13-16 Command-line batch file batchFile.sh used in the previous examples
echo "Host and WWPN info:"
echo " "
lshost -nohdr | while read name product_name WWPN
do
lshost $name| while read key value
do
if [ "$key" == "WWPN" ]
then
echo "Host $name ($product_name) : WWPN is =$value"
fi
done
done
Server-side scripting
Server-side scripting involves scripting where the majority of the programming logic is run on a server.
Part of server-side scripting is the generation and management of connections to the FlashSystem V9000 system. For an introduction of how to create and manage a persistent connection to a system and how to manage requests coming from multiple scripts, see “Persistent connections” on page 529.
The Perl module handles the connection aspect of any script. Because connection management is often the most complex part of any script, an advisable task is to investigate this module. Currently, this module uses transient connections to submit commands to a cluster, and it might not be the best approach if you plan to use multiple scripts submitting commands independently.
13.3.4 Sample commands of mirrored VDisks
This section contains sample commands that use the techniques demonstrated in 13.3.3, “FlashSystem V9000 command-line scripting” on page 529. These examples are based on a single building block configuration with sample data designed to support this publication.
 
Tip: Start with small examples to understand the behavior of the commands.
VDisk mirroring to a second enclosure
This example shows how to mirror all VDisks for redundancy or how to vacate a storage enclosure.
The sync rate
Example 13-17 shows mirroring the VDisks to a new managed disk group. In this example, the sync rate is low so that synchronization does not adversely affect the load on the system. You can check the progress of synchronization with the lsvdisksyncprogress command.
Example 13-17 Mirror all VDisks
lsvdisk -filtervalue copy_count=1 -nohdr |
while read id vdiskname rest
do
addvdiskcopy -mdiskgrp newgroupname -syncrate 30 $id
done
Vdisk [0] copy [1] successfully created
Vdisk [1] copy [1] successfully created
Vdisk [2] copy [1] successfully created
Vdisk [3] copy [1] successfully created
Vdisk [4] copy [1] successfully created
Vdisk [5] copy [1] successfully created
 
Raise the sync rate
Raise the sync rate to 80 for all the VDisks currently not synchronized (Example 13-18).
Example 13-18 Raise syncrate to 80
lsvdiskcopy -filtervalue sync=no -nohdr |
while read id vdisk copyid rest
do
echo "Processing $vdisk"
chvdisk -syncrate 80 $vdisk
done
 
Tip: Remember, raising the sync rate causes more I/O to be transferred, which can be an issue for a standard disk array.
Change primary in use to the new mdisk group
In Example 13-19 the primary is changed to the copy that was created in Example 13-17 on page 532.
 
Tip: Remember, all of these volumes must be in a sync state, as shown by the lsvdisk command output.
Example 13-19 Change vdisk mirror primary to copy in newgroupname
lsvdiskcopy -filtervalue mdisk_grp_name=newgroupname -nohdr |
while read id vdisk copyid rest
do
echo Processing $vdisk
chvdisk -primary $copyid $vdisk
done
 
Remove all copies that are not primary
Example 13-20 removes all VDisk copies in the previous MDisk group.
Example 13-20 Remove VDisk copies
lsvdiskcopy -filtervalue mdisk_grp_name=prevmdiskgroup -nohdr |
while read id vdisk copyid rest
do
echo "Processing rmvdiskcopy -copy $copyid $vdisk"
rmvdiskcopy -copy $copyid $vdisk
done
 
Tip: Use extreme care when removing a storage enclosure from service. For example, in the case of FlashSystem V9000, the AE2 enclosure should be unmanaged. The V840 equivalent state is to remove all the mdisk instances from prevmdiskgroup; these MDisks become unmanaged.
Create compressed mirrored copies of VDisks not currently mirrored
Example 13-21 looks for all VDisks that have a single copy and creates a mirrored compressed copy.
Example 13-21 Create compressed VDisk mirrors
lsvdisk -filtervalue copy_count=1 -nohdr |
while read id vdiskname rest
do
addvdiskcopy -mdiskgrp BB1mdiskgrp0 -autoexpand -rsize 50% -syncrate 30 -compressed $id
done
Vdisk [0] copy [1] successfully created
Vdisk [1] copy [1] successfully created
Vdisk [2] copy [1] successfully created
Vdisk [3] copy [1] successfully created
Vdisk [4] copy [1] successfully created
Vdisk [5] copy [1] successfully created
 
Tip: From the CLI, issue the help addvdiskcopy command or look in the IBM Knowledge Center for details of parameters for this command. All options that are available in the GUI can be issued from the CLI, which helps you more easily work with large numbers of volumes.
Turn on autoexpand for all offline volumes
During our testing, an out-of-space condition was encountered with multiple mirrored copies. The condition is the result of the autoexpand option not being used when the volumes were mirrored. See Example 13-22.
Example 13-22 Activate autoexpand for all offline vdisk copies
lsvdiskcopy -filtervalue status=offline -nohdr | while read vid name copyid rest
do
chvdisk -autoexpand on -copy $copyid $vid
done
Summary
This section presented simple scripts that can be used to automate tasks by using the V9000 command-line interface. These concepts can be applied to other commands, including backup processes such as creating FlashCopy copies of volumes; a minimal sketch follows.
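For example, a point-in-time copy of one volume can be scripted with the same techniques. The following is a minimal sketch; the pool, volume, and mapping names are examples only, and the target size must match the source:
# create a target volume of the same size as the source (100 GB assumed)
mkvdisk -mdiskgrp BB1mdiskgrp0 -iogrp 0 -size 100 -unit gb -name vol0_copy
# define a FlashCopy mapping and start it with background copy
mkfcmap -source vol0 -target vol0_copy -copyrate 50 -name vol0_map
startfcmap -prep vol0_map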
13.3.5 Backup FlashSystem V9000 configuration
Before making major changes to the FlashSystem V9000 configuration, be sure to save the configuration of the system. By saving the current configuration, you create a backup of the licenses that are installed on the system, which can assist you in restoring the system configuration. You can save the configuration by using the svcconfig backup command.
The next two steps show how to create a backup of the configuration file and to copy the file to another system:
1. Log in to the cluster IP using an SSH client and back up the FlashSystem configuration:
superuser> svcconfig backup
...............................................................
CMMVC6155I SVCCONFIG processing completed successfully
2. Copy the configuration backup file from the system. Using secure copy, copy the following file from the system and store it:
/tmp/svc.config.backup.xml
For example, use pscp.exe, which is part of the PuTTY commands family:
pscp.exe superuser@<cluster_ip>:/tmp/svc.config.backup.xml .
superuser@<cluster_ip>'s password:
svc.config.backup.xml | 163 kB | 163.1 kB/s | ETA: 00:00:00 | 100%
The use of the CLI is described in 13.3, “Command-line hints” on page 518.
 
Tip: This process saves only the configuration of the system. User data must be backed up by using normal system backup processes.
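Because /tmp/svc.config.backup.xml is overwritten each time a backup is run, many administrators script both steps from a management host. The following is a minimal sketch for a Linux or UNIX host, assuming that SSH key authentication is configured for the superuser account and that <cluster_ip> is replaced with your cluster IP address:
#!/bin/sh
# Create a fresh configuration backup on the cluster
ssh superuser@<cluster_ip> 'svcconfig backup'
# Copy it off the cluster with a date-stamped name
scp superuser@<cluster_ip>:/tmp/svc.config.backup.xml ./svc.config.backup.$(date +%Y%m%d).xml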
13.3.6 Using the V9000 Software Upgrade Test Utility
In preparation for upgrading firmware on a FlashSystem V9000, be sure to run the Software Upgrade Test Utility before any upgrade. This step ensures that your system configuration is supported and identifies potential issues before your upgrade change window.
Overview of Software Upgrade Test Utility
You can download this small utility from IBM Fix Central (see 13.5.4, “Downloading from IBM Fix Central” on page 551).
The utility is run on the FlashSystem V9000 before a firmware upgrade. The purpose of the utility is to check for known issues and system configurations that might present a problem for the upgrade and warn you of conditions that might need to be corrected before running the upgrade. It can be run as many times as needed. The utility is run automatically as part of the GUI upgrade process, or stand-alone, to assess the readiness of a system for upgrade as part of the upgrade planning process.
At the time of writing, the Software Upgrade Test Utility is run automatically before upgrading. It is a required step. In earlier Spectrum Virtualize firmware releases, it was optional, but strongly suggested.
When an upgrade is initiated using the FlashSystem web management GUI, you are prompted to download the utility and firmware from Fix Central. Most users prefer to do this in advance of the upgrade and select the files as shown in Figure 13-24 during the upgrade process. If you click Update, the utility runs. If successful, the firmware upgrade process runs.
Figure 13-24 Update System
 
Tip: During the V9000 GUI upgrade process, the upgrade proceeds immediately after the Software Upgrade Test Utility completes successfully.
Be sure that firmware upgrades are initiated by using the GUI so that the Upgrade Test Utility is uploaded and run during the upgrade process. The upgrade process is stopped if any issues are identified. If this happens, examine the output of the utility before proceeding. If the utility provides any warnings, correct them before proceeding with the upgrade.
Upgrades can also be run from the CLI by using the applysoftware command, as in the sketch that follows.
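This is a minimal sketch. The package file name is a placeholder for the upgrade package that you downloaded from Fix Central, and the lssoftwareupgradestatus monitoring command is an assumption based on the code levels described in this chapter; verify both at your release level:
pscp <upgrade_package_filename> superuser@<cluster_ip_address>:/upgrade
applysoftware -file <upgrade_package_filename>
lssoftwareupgradestatus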
Using the V9000 Software Upgrade Test Utility from the command line
The installation and use of this utility is nondisruptive and does not require any node to be restarted, so there is no interruption to host I/O. The utility will be installed only on the current configuration node. To install and run the utility, complete the following steps:
1. Copy the utility to the /upgrade directory on the FlashSystem V9000 using a secure copy utility such as Secure Copy Protocol (SCP) or pscp.exe:
pscp <test_utility_filename> superuser@<cluster_ip_address>:/upgrade
2. Install the utility:
applysoftware -file <test_utility_filename>
3. Run the test utility:
svcupgradetest -v 7.4.1.0
The output is displayed (Example 13-23).
Example 13-23 Output from test utility
svcupgradetest version 14.01
Please wait, the test may take several minutes to complete.
97
100
Results of running svcupgradetest:
==================================
The tool has found 0 errors and 0 warnings
The tool has not found any problems with the cluster.
4. You can rerun the utility (step 3) after it is installed. Installing it again is unnecessary. Installing a new version overwrites the old version.
13.3.7 Secure erase of data
Some clients, especially in the healthcare sector, are concerned about data confidentiality. FlashSystem V9000 uses encryption to secure data. If you have a license for FlashSystem V9000 encryption, you can prevent unauthorized access to FlashSystem data.
Secure erase of V9000 flash modules
 
Important: Deleting the FlashSystem V9000 encryption key prevents any access to the data on the FlashSystem V9000 when the encryption feature is enabled.
Flash Modules can be securely decommissioned by using the chdrive -task erase CLI command. IBM has certified this erasure process; contact your IBM representative or IBM Business Partner for documentation regarding this process.
Example erasure procedure
The following steps can be used to decommission an entire FlashSystem V9000 enclosure.
 
Attention: This procedure is designed to securely destroy data. There is no recovery, so be careful to identify the correct enclosure.
1. Start by confirming the enclosure ID to be erased by using the FlashSystem V9000 GUI. Select Pools → MDisks by Pools (Figure 13-25).
Figure 13-25 Select MDisks by Pools
2. Right-click an MDisk and select Properties (Figure 13-26).
Figure 13-26 Open the Properties menu
3. Click Member Drives (1). The Enclosure ID (2) to be erased is enclosure 1 (Figure 13-27).
Figure 13-27 Identify the enclosure
4. Remove the AE2 Enclosure Array from the storage pool by right-clicking mdisk0 (1) and selecting Unassign from Pool (2), as shown in Figure 13-28.
Figure 13-28 Unassign from Pool
 
5. Confirm the MDisk removal by verifying the number of disks (1) and selecting the check box (2) to indicate removing the data (Figure 13-29), and then click Delete.
Figure 13-29 Confirm the MDisk removal with Data
6. Click Yes (Figure 13-30) and wait for the status to become unmanaged.
Figure 13-30 Final warning
 
Attention: If the pool has other MDisks, the data is migrated to them; otherwise, it is lost.
7. Use the FlashSystem V9000 command line to securely erase the array, as shown in Example 13-24.
Example 13-24 Use the CLI to erase the array
IBM_FlashSystem:ITSO_V9000:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity controller_name tier encrypt enclosure_id
0 mdisk0 online unmanaged_array 6.2TB ssd yes 1
 
 
IBM_FlashSystem:ITSO_V9000:superuser>rmarray -enclosure 1 -force
 
IBM_FlashSystem:ITSO_V9000:superuser>lsdrive -nohdr
0 online candidate sas_ssd 1.0TB 1 3 good no no inactive
1 online candidate sas_ssd 1.0TB 1 4 good no no inactive
2 online candidate sas_ssd 1.0TB 1 5 good no no inactive
3 online candidate sas_ssd 1.0TB 1 6 good no no inactive
4 online candidate sas_ssd 1.0TB 1 7 good no no inactive
5 online candidate sas_ssd 1.0TB 1 8 good no no inactive
6 online candidate sas_ssd 1.0TB 1 9 good no no inactive
7 online candidate sas_ssd 1.0TB 1 10 good no no inactive
 
IBM_FlashSystem:ITSO_V9000:superuser>lsdrive -nohdr |
> while read drive_id extrastuff
> do
> chdrive -task erase -type quick $drive_id
> done
 
IBM_FlashSystem:ITSO_V9000:superuser>mkarray -name mdisk0 -enclosure 1
MDisk, id [0], successfully created
 
IBM_FlashSystem:ITSO_V9000:superuser>lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity raid_status raid_level redundancy strip_size tier encrypt enclosure_id
0 mdisk0 offline 6.2TB initting raid5 0 4 ssd yes 1
 
IBM_FlashSystem:ITSO_V9000:superuser>lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity raid_status raid_level redundancy strip_size tier encrypt enclosure_id
0 mdisk0 online 6.2TB online raid5 1 4 ssd yes 1
8. To return the array to the pool, right-click the MDisk and select Assign to Pool (Figure 13-31).
Figure 13-31 Assign the MDisk to the Pool
9. In the Assign MDisk to Pool window, click Add to Pool (Figure 13-32).
Figure 13-32 Select Add to Pool
10. Click Close to clear the command result window (Figure 13-33).
Figure 13-33 Close the window
Figure 13-34 shows that the MDisk is now part of the pool and ready for use.
Figure 13-34 Re-created array is now ready for use
 
Tip: All of these operations can be done from the command line. Erasing flash drives is not available in the GUI.
Erasing flash drives is not a normal operation. By using the encryption features of the V9000, erasure can often be avoided because the data cannot be read without the encryption key, even on a failed flash module.
13.4 Call home process
IBM encourages all clients to take advantage of the following settings to enable you and IBM to partner for your success. With the call home feature enabled, your system is effectively monitored 24 x 7 x 365. As an IBM client, you benefit from faster response times, faster problem determination, and reduced risk compared to an unmonitored system. In the future, IBM plans to use inventory report data to directly notify clients who are affected by known configuration or code issues.
While enabling call home reporting, IBM encourages clients to also enable inventory reporting to take advantage of this future offering. For a more detailed explanation, followed by steps to configure, see 9.2.1, “Email and call home” on page 340. The configuration setup is a simple process and takes several minutes to complete.
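As a rough CLI illustration of what that configuration does, the following sketch sets the contact information, defines an SMTP relay, adds a support email user, and starts the email service. The relay address, contact details, and destination address are placeholders, and the parameter names should be verified with the help command at your code level; follow 9.2.1 for the supported procedure:
chemail -reply storage.admin@example.com -contact "Jane Doe" -primary 5551234567 -location "Data center 1"
mkemailserver -ip 192.0.2.25 -port 25
mkemailuser -address <ibm_callhome_address> -usertype support -error on -inventory on
startemail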
13.4.1 Call home details
The call home function opens a service alert if a serious error occurs on the system, automatically sending details of the error and contact information to IBM Service personnel. If the system is entitled for support, a problem management record (PMR) is automatically created and assigned to the appropriate IBM Service personnel.
The information provided to IBM in this case might be an excerpt from the Event Log containing the details of the error and client contact information from the system. This enables IBM Service personnel to contact you and arrange service on the system, which can greatly improve the speed of resolution by removing the need for you to detect the error and raise a support call.
13.4.2 Email alert
Automatic email alerts can be generated and sent to an appropriate client system administrator or distribution list. This is effectively the same as call home, but you can additionally be notified about error, warning, and informational messages when they occur, and you can also receive inventory emails (see 13.4.3, “Inventory” on page 544).
You can view the IBM Knowledge Center documentation for your specific V9000 product to determine whether a particular event is classified as error, warning, or informational. Look at the notification type for each event to determine which ones you want to be notified about. Because notifications can be customized for each recipient, you have maximum flexibility.
13.4.3 Inventory
Rather than reporting a problem, an inventory email describes your system hardware and critical configuration information to IBM. Object names and other potentially sensitive information, such as IP addresses, are not sent.
IBM suggests that the system inventory be sent on a one-day or seven-day interval for maximum benefit.
13.5 Service support
Understanding how support issues are logged is important. In this section, we describe support for the FlashSystem V9000, including the IBM Technical Advisor role, support entitlement, registering components in the Service Request Tool, and calling IBM for support.
13.5.1 IBM Storage Technical Advisor
The IBM Storage Technical Advisor (TA) enhances end-to-end support for complex IT solutions. Each FlashSystem V9000 includes an IBM TA for the initial hardware warranty period. This section describes the IBM TA program in general with specifics on how customers can work with their TA.
The TA service is built around three value propositions:
Proactive approach to ensure high availability for vital IT services
Client Advocate that manages problem resolution through the entire support process
A trusted consultant for both storage hardware and software
Technical Advisors benefit customers by providing a consultant for questions on the FlashSystem V9000. Most customers meet their TA during a Technical Delivery Assessment (Solution Assurance Meeting) before the initial installation. After this initial meeting, the TA is the focal point for support-related activities, as follows:
Maintains a support plan that is specific to each client. This support plan contains an inventory of equipment including customer numbers and serial numbers.
Coordinates service activities, working with your support team in the background, and monitors the progress of open service requests, escalations, and expert consultation on problem avoidance.
Communicates issues with customers, IBM Business Partners, and IBM Sales teams.
Periodically reviews and provides reports of hardware inventories and service requests. This includes using call home information to provide customer reports on the state of the customer systems.
Oversight of IBM support activities helps companies anticipate and respond to new problems and challenges faster.
Proactive planning, advice, and guidance to improve availability and reliability.
The IBM Storage Technical Advisor is an effective way to improve total cost of ownership and free up customer resources. Customers have options to extend the Technical Advisor service beyond the initial hardware warranty using IBM Technical Support Services (TSS) offerings.
Contact your IBM Sales Team or IBM Business Partner for details.
13.5.2 How a FlashSystem V9000 is entitled for support
FlashSystem V9000 systems consist of various hardware and software components, each carrying its own unique requirements for proper entitlement. Each building block comprises at least three unique hardware machine types (models), and each building block carries its own serial number. Software that is entitled through warranty and maintenance requires the customer ID, product description, and storage enclosure serial number for proper entitlement.
 
Tip: Customer ID and customer number are the same. The customer number is included in the customer support plan you receive from your Technical Advisor.
Calling IBM Support
Consider this information when you call for support:
For problems known to be hardware-related, place calls against the affected 9846 or 9848-AC2 or AE2 machine type and serial number. Using the correct machine type, model, and serial number avoids service delays.
 
Note: Most hardware support tickets are opened by call home events. However, if there are effects because of component failure, you can open an appropriate severity support ticket independently.
For software problems, navigate through the Automated Voice Response for Software and provide customer ID, product description, and 9846 or 9848-AE2 storage enclosure, plus serial number.
If you are unsure whether the issue is hardware or software, call in for Storage Support (option 3 is US only). Provide customer ID, product description and 9846 or 9848-AE2 storage enclosure.
Scenario 1: Host connectivity issue
The customer is experiencing difficulty with attaching a host to the V9000 AC2 controller enclosure. The customer opens a software issue (bullet item 2 in “Calling IBM Support” on page 545) against the V9000 AE2 storage enclosure and describes the issue as host attachment to the controller enclosure.
Scenario 2: Performance
The customer reports that performance of the V9000 solution is not meeting expectations. The customer opens a Storage Support issue (bullet item 3 in “Calling IBM Support” on page 545) against the V9000 AE2 storage enclosure. The customer can save time by uploading snaps from both controller AC2 and storage enclosure AE2 after the PMR number is obtained.
Scenario 3: Hardware issue
The customer reports that email alerts indicate a failed hard disk in the AC2 node. The customer opens a hardware issue (bullet item 1 in “Calling IBM Support” on page 545) against the V9000 AC2 node serial number reporting the error. This is processed as a standard hardware failure.
13.5.3 Providing Logs to IBM ECuRep
IBM Enhanced Customer Data Repository (ECuRep) is a secure and fully supported data repository with problem determination tools and functions. It updates problem management records (PMR) and maintains full data lifecycle management.
This server-based solution is used to exchange data between IBM customers and IBM Technical Support. Do not place files on or download files from this server without prior authorization from an IBM representative. The representative is able to provide further instructions as needed.
To use ECuRep, you need a documented PMR number, either provided by the IBM support team with a call home or issued by using the IBM Service Request tool on the IBM support portal. IBM provides the Service Request (SR) problem submission tool to electronically submit and manage service requests on the web. This tool replaces the Electronic Service Request (ESR) tool.
To provide logs to IBM ECuRep, complete the following steps:
1. The ECuRep opening page is shown in Figure 13-35. This page provides information about the repository, instructions for preparing files for upload, and multiple alternatives for sending data. Click Send data.
Figure 13-35 Main page of ECuRep
 
Tip: This system is connected to the IBM Problem Management Record. Support tickets are automatically updated, with the files uploaded and queued for an IBM support representative response.
2. IBM provides multiple options for uploading data (Figure 13-36). The Java utility is the most efficient method to upload files. When you use FTP (1) or the Java utility (2), also select Prepare data (3) to see the details about file name conventions. The HTTPS (4) option eliminates the file naming requirement.
Figure 13-36 Options for sending data
3. The file naming requirement is shown in Figure 13-37. Using the PMR number on this form accurately logs the files uploaded to the correct PMR. Select Hardware from the menu for the V9000 and optionally provide your email address for a confirmation.
Click Continue.
Figure 13-37 Using the HTTP option
The file selection panel opens (Figure 13-38).
Figure 13-38 File Selection dialog
 
Tip: Most clients find this way the most effective method to upload logs. IBM suggests understanding the best method for your organization in advance and documenting the process to save precious time during a crisis.
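If you prefer the FTP option, a typical anonymous session looks like the following sketch. The target directory and the renamed file follow the conventions described on the Prepare data page; the PMR number (12345,678,000) and file name here are hypothetical:
ftp ftp.ecurep.ibm.com
(log in as anonymous, with your email address as the password)
cd toibm/hw
binary
put 12345.678.000.V9000.snap.tgz
quit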
13.5.4 Downloading from IBM Fix Central
Fix Central provides fixes and updates for your system’s software, hardware, and operating system.
If you are not looking for fixes or updates, go to IBM Passport Advantage® to download most purchased software products, or My Entitled Systems Support to download system software.
Go to the IBM Fix Central website.
In the following sections, we describe downloading code from IBM.
IBM ID
To use the IBM Fix Central website, you must have an IBM ID. Your IBM ID provides access to IBM applications, services, communities, support, online purchasing, and more. Additionally, your information is centralized so you can maintain it in a convenient and secure location. The benefits of having an IBM ID will increase over time as more applications migrate to IBM ID.
The login window is shown in Figure 13-39.
Figure 13-39 IBM ID login window
Fix Central
After signing in with your IBM ID, a page opens to the Find product tab (Figure 13-40):
1. The Find Product tab (1) is normally selected. In the Product selector field, start typing FlashSystem. Select FlashSystem V9000 from the list (2). Select Installed Version and Platform (3) and click Continue.
Figure 13-40 Fix Central Find product page
2. The appropriate code packages for your product are displayed (Figure 13-41 on page 553). Select the packages that you want to download (1), read the ReleaseNotes file (3), and then click Continue (2). You can read the release notes with your web browser because the file name is a URL address.
Figure 13-41 Select fixes panel
 
Note: Always read release notes. They often contain special instructions related to the upgrade that should be part of your planning.
3. The download options are listed (Figure 13-42). Select your preferred download option. We select Download using your browser (HTTPS) and click Continue to start the process.
Figure 13-42 Download panel
4. On the Fix Central page (Figure 13-43), select a Machine type (1) and provide a Machine Serial Number (2); both must be valid. Notice the option for Load from My Systems (3). Systems registered with your My Support account can be loaded using this option.
Figure 13-43 Fix Central Entitlement panel
5. Select each of the files to start the download (Figure 13-44).
 
Figure 13-44 Download page
6. For each of the files to be downloaded, save the file, as shown in Figure 13-45.
Figure 13-45 Save files
Summary
In this section, we showed an example of downloading system firmware for the FlashSystem V9000. The test and upgrade package is all that is needed to update all components of the system no matter how many building blocks are present.
 