Performance considerations for the IBM i system
This chapter describes the topics that are related to the DS8900 performance with an IBM i host. The performance of IBM i database applications and batch jobs is sensitive to the disk response time. Therefore, it is important to understand how to plan, implement, and analyze the DS8900 performance with the IBM i system.
This chapter includes the following topics:
9.1, “IBM i storage architecture”
9.2, “Fibre Channel adapters and Multipath”
9.3, “Performance guidelines for drives in a DS8000 storage system with IBM i”
9.4, “Analyzing performance data”
9.5, “Easy Tier with the IBM i system”
9.1 IBM i storage architecture
To understand the performance of the DS8900F storage system with the IBM i system, you need insight into IBM i storage architecture. This section explains this part of IBM i architecture and how it works with the DS8900F storage system.
The following IBM i specific features are important for the performance of external storage:
Single-level storage
Object-based architecture
Storage management
Types of storage pools
This section describes these features and explains how they relate to the performance of a connected DS8900F storage system.
9.1.1 Single-level storage
The IBM i system uses the same architectural component that is used by the iSeries and AS/400 platform: single-level storage. It treats the main memory and the flash unit space as one storage area. It uses the same set of 64-bit virtual addresses to cover both main memory and flash space. Paging in this virtual address space is performed in 4 KB memory pages.
9.1.2 Object-based architecture
One of the differences between the IBM i system and other operating systems is the concept of objects. For example, data files, programs, libraries, queues, user profiles, and device descriptions are all types of objects in the IBM i system. Every object on the IBM i system is packaged with the set of rules for how it can be used, enhancing integrity, security, and virus-resistance.
The IBM i system takes responsibility for managing the information in auxiliary disk pools. When you create an object, for example, a file, the system places the file in the location that ensures the best performance. It normally spreads the data in the file across multiple flash units. The advantages of this design include ease of use, self-management, and the automatic use of added flash units. IBM i object-based architecture is shown in Figure 9-1 on page 207.
Figure 9-1 IBM i object-based architecture
9.1.3 Storage management
Storage management is a part of the IBM i Licensed Internal Code that manages the I/O operations to store and place data on storage. Storage management handles the I/O operations in the following way.
When the application performs an I/O operation, the portion of the program that contains read or write instructions is first brought into LPAR main memory where the instructions are then run.
With the read request, the virtual addresses of the needed record are resolved, and for each needed page, storage management first checks whether it is in LPAR main memory. If the page is there, it is used to resolve the read request. However, if the corresponding page is not in LPAR main memory, a page fault is encountered and the page must be retrieved from the Auxiliary Storage Pool (ASP). When a page is retrieved, it replaces another page in LPAR main memory that was not recently used; the replaced page is paged out to the ASP, which resides on the DS8900F storage server.
Similarly, writing a new record or updating an existing record is done in LPAR main memory, and the affected pages are marked as changed. A changed page normally remains in LPAR main memory until it is written to the ASP as a result of a page fault. Pages are also written to the ASP when a file is closed or when write-to-storage is forced by a user through commands and parameters. The handling of I/O operations is shown in Figure 9-2 on page 208.
Figure 9-2 Storage management handling I/O operations
When resolving virtual addresses for I/O operations, storage management directories map the flash module and sector to a virtual address. For a read operation, a directory lookup is performed to get the needed mapping information. For a write operation, the information is retrieved from the page tables.
9.1.4 Disk pools in the IBM i system
The disk pools in the IBM i system are referred to as Auxiliary Storage Pools (ASPs). The following types of ASPs exist in the IBM i system:
System ASP
User ASP
Independent ASP (IASP)
System ASP
The system ASP is the basic flash pool for the IBM i system. This ASP contains the IBM i system boot flash (load source device), system libraries, indexes, user profiles, and other system objects. The system ASP is always present in the IBM i system and is needed for the IBM i system to operate. The IBM i system does not start if the system ASP is inaccessible.
User ASP
A user ASP separates the storage for different objects for easier management. For example, the libraries and database objects that belong to one application are in one user ASP, and the objects of another application are in a different user ASP. If user ASPs are defined in the IBM i system, they are needed for the IBM i system to start.
Independent ASP
The IASP is an independent flash pool that can be switched among two or more IBM i systems in a cluster. The IBM i system can start without accessing the IASP. Typically, the objects that belong to a particular application are in this flash pool. If the IBM i system that uses the IASP fails, the independent flash pool can be switched to another system in the cluster. If the IASP is on the DS8900F storage system, a copy of the IASP (FlashCopy, Metro Mirror, or Global Mirror copy) is made available to another IBM i system in the cluster, and the application continues to work from that IBM i system.
9.2 Fibre Channel adapters and Multipath
This section explains the ways in which Fibre Channel (FC) adapters can be used to connect the DS8900F storage system to the IBM i system. It describes the performance capabilities of the adapters and the performance enhancement of using Multipath.
The DS8900F storage system can connect to the IBM i system in one of the following ways:
Native: FC adapters in the IBM i system are connected through a Storage Area Network (SAN) to the Host Bus Adapters (HBAs) in the DS8900F storage system.
With Virtual I/O Server Node Port ID Virtualization (VIOS NPIV): FC adapters in the VIOS are connected through a SAN to the HBAs in the DS8900F storage system. The IBM i system is a client of the VIOS and uses virtual FC adapters; each virtual FC adapter is mapped to a port in an FC adapter in the VIOS.
For more information about connecting the DS8900F storage system to the IBM i system with VIOS_NPIV, see DS8000 Copy Services for IBM i with VIOS, REDP-4584, and IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887.
With VIOS: FC adapters in the VIOS are connected through a SAN to the HBAs in the DS8900F storage system. The IBM i system is a client of the VIOS, and virtual SCSI adapters in VIOS are connected to the virtual SCSI adapters in the IBM i system.
For more information about connecting storage systems to the IBM i system with the VIOS, see IBM i and Midrange External Storage, SG24-7668.
Most installations use the native connection of the DS8900F storage system to the IBM i system or the connection with VIOS_NPIV.
 
IBM i I/O processors: The information that is provided in this section refers to connection with IBM i I/O processor (IOP)-less adapters. For similar information about older IOP-based adapters, see IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i, SG24-7120.
9.2.1 FC adapters for native connection
The following FC adapters are used to connect the DS8900F storage system natively to an IBM i partition in a POWER server:
2-Port 16 Gb PCIe3 Generation-3 Fibre Channel Adapter, Feature Code EN0A (EN0B for Low Profile (LP) adapter)
4-Port 16 Gb PCIe3 Generation-3 Fibre Channel Adapter, Feature Code EN1C (EN1D for LP adapter)
2-Port 16 Gb PCIe3 Generation-3 Fibre Channel Adapter, Feature Code EN2A (EN2B for LP adapter)
2-Port 32 Gb PCIe3 Generation-3 Fibre Channel Adapter, Feature Code EN1A (EN1B for LP adapter)
 
 
Note: The supported FC adapters listed above require DS8900F R9.1 or higher microcode and the IBM i 7.4 TR4 operating system installed on IBM POWER8 or POWER9 servers.
For detailed specifications, see the IBM System Storage Interoperation Center (SSIC) at:
All listed adapters are IOP-less adapters. They do not require an I/O processor card to offload data management. Instead, the processor manages the I/O and communicates directly with the adapter. Thus, the IOP-less FC technology takes full advantage of the performance potential of the IBM i system.
Before the availability of IOP-less adapters, the DS8900F storage system connected to IOP-based FC adapters that required the I/O processor card.
IOP-less FC architecture enables two technology functions that are important for the performance of the DS8900F storage system with the IBM i system: Tagged Command Queuing and Header Strip Merge.
Tagged Command Queuing
Tagged Command Queuing allows the IBM i system to issue multiple commands to the DS8900F storage system on the same path to a logical volume (LV). In the past, the IBM i system sent only one command per LUN path. With Tagged Command Queuing, up to six I/O operations to the same LUN through one path are possible, so the queue depth on a LUN is 6 for a natively connected DS8900F storage system.
Header Strip Merge
Header Strip Merge allows the IBM i system to bundle data into 4 KB chunks. By merging the data together, it reduces the amount of storage that is required for the management of smaller data chunks.
9.2.2 FC adapters in VIOS
The following Fibre Channel (FC) adapters are used to connect the DS8900F storage system to VIOS in a POWER server to implement VIOS_NPIV connection for the IBM i system:
2-Port 16 Gb PCIe3 Generation-3 FC Adapter, Feature Code EN0A (EN0B for LP adapter)
2-Port 16 Gb PCIe3 Generation-3 FC Adapter, Feature Code EN2A (EN2B for LP adapter)
4-Port 16 Gb PCIe3 Generation-3 FC Adapter, Feature Code EN1E (EN1F for LP adapter)
4-Port 16 Gb PCIe3 Generation-3 FC Adapter, Feature Code EN1C (EN1D for LP adapter)
2-Port 32 Gb PCIe3 Generation-3 FC Adapter, Feature Code EN1A (EN1B for LP adapter)
2-Port 32 Gb PCIe4 Generation-4 FC Adapter, Feature Code EN1J (EN1K for LP adapter)
Queue depth and the number of command elements in the VIOS
When you connect the DS8900F storage system to the IBM i system through the VIOS, consider the following types of queue depths:
The queue depth per LUN: Tagged Command Queuing in the IBM i operating system enables up to 32 I/O operations to one LUN at the same time. This queue depth is valid for either a VIOS VSCSI connection or a VIOS_NPIV connection.
If the DS8900F storage system is connected with VIOS VSCSI, also consider the queue depth of 32 per physical disk (hdisk) in the VIOS. This queue depth indicates the maximum number of I/O requests that can be outstanding on a physical disk in the VIOS at one time.
The number of command elements per port in a physical adapter in the VIOS is 500 by default; you can increase it to 2048. This queue depth indicates the maximum number of I/O requests that can be outstanding on a port in a physical adapter in the VIOS at one time, either in a VIOS VSCSI connection or in VIOS_NPIV. The IBM i operating system has a fixed queue depth of 32, which is not changeable. However, the queue depth for a physical disk in the VIOS can be set up by a user. If needed, set the queue depth per physical disk in the VIOS to 32 by using the chdev -dev hdiskxx -attr queue_depth=32 command.
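As a minimal sketch, these values can be checked and adjusted from the VIOS command line. The device names hdisk10 and fcs0 are placeholders for your own devices, and the attribute changes should be verified against your VIOS level:

lsdev -dev hdisk10 -attr queue_depth            # display the current queue depth of the hdisk
chdev -dev hdisk10 -attr queue_depth=32         # set the queue depth of the hdisk to 32
lsdev -dev fcs0 -attr num_cmd_elems             # display the command elements of the FC port
chdev -dev fcs0 -attr num_cmd_elems=2048 -perm  # increase the command elements; applied at the next restart or device reconfiguration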
9.2.3 Multipath
The IBM i system allows multiple connections from different ports on a single IBM i partition to the same LVs in the DS8900F storage system. This multiple-connection support provides an extra level of availability and error recovery between the IBM i system and the DS8900F storage system. If one IBM i adapter fails or one connection to the DS8900F storage system is lost, the remaining connections continue communicating with the disk unit. The IBM i system supports up to eight active connections (paths) to a single LUN in the DS8900F storage system.
IBM i multi-pathing is built into the IBM i System Licensed Internal Code (SLIC) and does not require a separate driver package to be installed.
In addition to high availability, multiple paths to the same LUN provide load balancing. A Round-Robin algorithm is used to select the path for sending the I/O requests. This algorithm enhances the performance of the IBM i system with DS8900F connected LUNs.
When the DS8900F storage system connects to the IBM i system through the VIOS, Multipath in the IBM i system is implemented so that each path to a LUN uses a different VIOS. Therefore, at least two VIOSs are required to implement Multipath for an IBM i client. This way of multipathing provides additional resiliency if one VIOS fails. In addition to IBM i Multipath with two or more VIOS, the FC adapters in each VIOS can multipath to the connected DS8900F storage system to provide additional resiliency and enhance performance.
9.3 Performance guidelines for drives in a DS8000 storage system with IBM i
This section describes the guidelines to use when planning and implementing flash drives in a DS8900F storage system for an IBM i system to achieve the required performance.
9.3.1 RAID level
The DS8900F supports three RAID configurations: RAID 6, RAID 10, and, by Request for Price Quotation (RPQ), RAID 5.
The default and preferred RAID configuration for the DS8900F is now RAID 6. RAID 6 arrays provide better resiliency than RAID 5 because two parity flash modules in each array allow an array with two failed flash modules to be rebuilt.
Alternatively, RAID 10 provides better resiliency and, in some cases, better performance than RAID 6. The difference in performance is because of the lower RAID penalty of RAID 10 compared to RAID 6. Workloads with a low read/write ratio and with many random writes benefit the most from RAID 10.
Consider RAID 10 for IBM i systems especially for the following types of workloads:
Workloads with large I/O rates
Workloads with many write operations (low read/write ratio)
Workloads with many random writes
Workloads with low write-cache efficiency
9.3.2 Number of ranks
To better understand why the number of disk drives or the number of ranks is important for an IBM i workload, here is a short explanation of how an IBM i system spreads the I/O over flash modules.
When an IBM i page or a block of data is written to flash space, storage management spreads it over multiple flash modules. By spreading data over multiple flash modules, multiple flash arms work in parallel for any request to this piece of data, so writes and reads are faster.
When using external storage with the IBM i system, storage management sees a logical volume (LUN) in the DS8900F storage system as a “physical” flash module.
IBM i LUNs can be created with one of two Extent Allocation Methods (EAMs) by using the mkfbvol command (see the example after this list):
Rotate Volumes (rotatevols) EAM. This method occupies multiple stripes of a single rank.
Rotate Extents (rotateexts) EAM. This method is composed of multiple stripes of different ranks.
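As an illustration, the following DSCLI sketch creates a set of IBM i LUNs with the rotate extents EAM. The extent pool, volume IDs, volume name, and the -os400 model are placeholder values; check the mkfbvol documentation for the model that matches the wanted LUN size and protection attribute:

dscli> mkfbvol -extpool P0 -os400 A05 -eam rotateexts -name IBMI_PROD 1000-1007

Specifying -eam rotatevols instead places each volume on a single rank.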
Figure 9-3 on page 213 shows the use of the DS8900F disk with IBM i LUNs created with the rotateexts EAM.
Figure 9-3 Use of disk arms with LUNs created in the rotate extents method
Therefore, a LUN uses multiple DS8900F flash arms in parallel. The same DS8900F flash arms are used by multiple LUNs that belong to the same IBM i workload, or even to different IBM i workloads. To efficiently support this structure of I/O and data spreading across LUNs and flash modules, it is important to provide enough disk arms to an IBM i workload.
Use the new StorM tool when planning the number of ranks in the DS8900F storage system for an IBM i workload.
To provide a good starting point for the StorM modeler, consider the number of ranks that is needed to keep disk utilization under 60% for your IBM i workload.
Table 9-1 shows the maximum number of IBM i I/O operations per second for one rank that keeps the disk utilization under 60%, for workloads with read/write ratios of 70/30 and 50/50.
Table 9-1   Host I/O per second for a DS8950F flash array1

RAID array, flash module          Host I/O per second at 70% reads   Host I/O per second at 50% reads
RAID 6, 3.84 TB Flash Module      55,509                             39,670
RAID 10, 3.84 TB Flash Module     84,629                             68,157
RAID 6, 1.92 TB Flash Module      55,509                             39,670
Use the following steps to calculate the necessary number of ranks for your workload by using Table 9-1 (a worked example follows the steps):
1. Decide which read/write ratio (70/30 or 50/50) is appropriate for your workload.
2. Decide which RAID level to use for the workload.
3. Look for the corresponding number in Table 9-1.
4. Divide the I/O/sec of your workload by the number from the table and round up to get the number of ranks.
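As an illustration with assumed numbers: for a peak IBM i workload of 120,000 I/O per second with a read/write ratio of approximately 70/30 on RAID 6 arrays of 3.84 TB flash modules, 120,000 / 55,509 = 2.2, so plan for at least three ranks to keep disk utilization under 60%.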
9.3.3 Number and size of LUNs
For the performance of an IBM i system, ensure that the IBM i system uses many flash modules. The more flash modules that are available to an IBM i system, the more server tasks are available for the IBM i storage management to use for managing the I/O operations to the ASP space. The result is improved I/O operational performance.
With IBM i, internal disks can be physical disk units or flash modules. With a connected DS8900F storage system, usable capacity that is created from flash modules is presented as a LUN. Therefore, it is important to provide many LUNs to an IBM i system.
 
Number of disk drives in the DS8900F storage system: In addition to the suggestion for many LUNs, use a sufficient number of flash modules in the DS8900F storage system to achieve good IBM i performance, as described in 9.3.2, “Number of ranks” on page 212.
Since the introduction of the IBM i 7.2 TR7 and 7.3 TR3 operating systems, up to 127 LUNs are supported per IBM i 16 Gb (or higher) physical or 8 Gb (or higher) virtual Fibre Channel (FC) adapter.
For IBM i operating systems before IBM i 7.2 TR7 and 7.3 TR3, the limit on the number of LUNs was 64 per FC adapter port.
Another reason why you should define smaller LUNs for an IBM i system is the queue depth in Tagged Command Queuing. With a natively connected DS8900F storage system, an IBM i system manages the queue depth of six concurrent I/O operations to a LUN. With the DS8900F storage system connected through VIOS, the queue depth for a LUN is 32 concurrent I/O operations. Both of these queue depths are modest numbers compared to other operating systems. Therefore, you must define sufficiently small LUNs for an IBM i system to not exceed the queue depth with I/O operations.
Also, by considering the manageability and limitations of external storage and an IBM i system, define LUN sizes of about 70.5 GB - 141 GB.
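As an illustration with an assumed capacity, a 10 TB ASP that is built from 141 GB LUNs results in roughly 71 LUNs, which gives storage management many units to drive I/O in parallel and stays within the 127-LUN limit per FC adapter that is described above.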
 
Note: For optimal performance, do not create LUNs of different sizes within the same ASP or IASP. Choose a suitable LUN capacity from the outset and keep to it.
With the introduction of code bundle 87.10.xx.xx, two new IBM i volume data types were added to support variable volume sizes:
A50: an unprotected variable-size volume
A99: a protected variable-size volume
Using these volume data types provides greater flexibility in choosing an optimum LUN size for your requirements.
9.3.4 DS8900F Ranks
A rank is a logical representation of the physical array formatted for use as FB or CKD storage types. In the DS8900F, ranks are defined in a one-to-one relationship to arrays.
Once the rank is formatted for IBM i data, the formatting determines the size of the set of data that is contained on one flash module within a stripe on the array. The capacity of the rank is subdivided into partitions, which are called extents.
The extent size depends on the storage type and on the operating system that uses it; for IBM i, the storage type is FB. The extents are the capacity building blocks of the LUNs, and you can choose between large extents and small extents when creating the ranks during the initial DS8900F configuration.
A Fixed Block (FB) rank can have one of the following extent sizes:
Large extents: 1 GiB
Small extents: 16 MiB
Small or Large extents
Small extents provide a better capacity utilization. However, managing many small extents causes some small performance degradation during initial allocation. For example, a format write of 1 GB requires one storage allocation with large extents, but 64 storage allocations with small extents. Otherwise, host performance should not be adversely affected.
To achieve more effective capacity utilization within the DS8900F, the SCSI Unmap capability takes advantage of the small extents feature, and small extents are now the recommended extent type for IBM i storage implementations.
9.3.5 DS8900F extent pools for IBM i workloads
This section describes how to create extent pools and LUNs for an IBM i system and dedicate or share ranks for IBM i workloads.
Number of extent pools
Create two extent pools for an IBM i workload with each pool in one rank group.
Rotate volumes or rotate extents EAMs for defining IBM i LUNs
IBM i storage management spreads each block of data across multiple LUNs. Therefore, even if the LUNs are created with the rotate volumes EAM, performing an IBM i I/O operation uses the flash modules of multiple ranks.
You might think that the rotate volumes EAM for creating IBM i LUNs provides sufficient flash modules for I/O operations and that the use of the rotate extents EAM is “over-virtualizing”. However, based on the performance measurements and preferred practices, the rotate extents EAM of defining LUNs for an IBM i system still provides the preferred performance, so use it.
Dedicating or sharing the ranks for IBM i workloads
When multiple IBM i logical partitions (LPARs) use disk space on the DS8900F storage system, there is always a question of whether to dedicate ranks (extent pools) to each of them or to share ranks among the IBM i systems.
Sharing the ranks among the IBM i systems enables the efficient use of the DS8900F resources. However, the performance of each LPAR is influenced by the workloads in the other LPARs.
For example, two extent pools are shared among IBM i LPARs A, B, and C. LPAR A experiences a long peak with large block sizes that causes a high I/O load on the DS8900F ranks. During that time, the performance of B and the performance of C decrease. But, when the workload in A is low, B and C experience good response times because they can use most of the disk arms in the shared extent pool. In these periods, the response times in B and C are possibly better than if they use dedicated ranks.
You cannot predict when the peaks in each LPAR happen, so you cannot predict how the performance in the other LPARs is influenced.
Many IBM i data centers successfully share the ranks with little unpredictability in performance because the flash modules and cache in the DS8900F storage system are used more efficiently this way.
Other IBM i data centers prefer the stable and predictable performance of each system even at the cost of more DS8900F resources. These data centers dedicate extent pools to each of the IBM i LPARs.
Many IBM i installations have one or two LPARs with important workloads and several smaller, less important LPARs. These data centers dedicate ranks to the large systems and share the ranks among the smaller ones.
9.3.6 Number of ports in an IBM i system
An important aspect of DS8900F to IBM i SAN configuration is the number of paths to assign to a volume.
Take into account that an FC port should not exceed 70% utilization when running the peak workload. An IBM i client partition supports up to eight multipath connections to a single flash-module-based LUN. Recommendations about how many FC ports to use can be found in the following publication:
Limitations and Restrictions for IBM i Client Logical Partitions:
When implementing a multi-pathing design for the IBM i and DS8900F, plan to zone one fiber port in an IBM i system with one fiber port in the DS8900F storage system when running the workload on flash modules.
Using the Service Tools Function within the IBM i partition, the number of paths to a LUN can be seen:
1. On the IBM i LPAR command line, type STRSST.
2. Select Option 3 - Work with Disk Units.
3. Select Option 2 - Work with Disk Configuration.
4. Select Option 1 - Display Disk Configuration.
5. Select Option 9 - Display Disk Path Status.
The following is an extract of the Display Disk Path Status output from a typical IBM i LPAR.
Two disk units are shown (1 and 2), with two paths allocated to each. For a single-path disk, the Resource Name column would show DDxxx. Because each disk has more than one path available, the resource names begin with DMP (Disk Multi Pathing), for example, DMP001 and DMP003 for disk unit 1.
The IBM i multi-pathing driver supports up to eight active paths (plus eight standby paths for HyperSwap configurations) to the DS8000 storage systems.
Since the introduction of IBM i 7.1 TR2, the multi-pathing driver uses an advanced load-balancing algorithm that accounts for path usage by the amount of outstanding I/O per path.
From a performance perspective, the use of more than two or four active paths with DS8000 storage systems is typically not required.
9.4 Analyzing performance data
For performance issues with IBM i workloads that run on the DS8900F storage system, you must determine and use the most appropriate performance tools in an IBM i system and in the DS8900F storage system.
9.4.1 IBM i performance tools
This section presents the performance tools that are used for an IBM i system. It also indicates which of the tools are used for planning and implementing the DS8900F storage system with an IBM i system.
To help you better understand the tool functions, they are divided into two groups: performance data collectors (the tools that collect performance data) and performance data investigators (the tools to analyze the collected data).
The following tools are the IBM i performance data collectors:
Collection Services
IBM i Job Watcher
IBM i Disk Watcher
Performance Explorer (PEX)
Collectors can be managed by IBM Systems Director Navigator for i, IBM System i Navigator, or IBM i commands.
 
The following tools are or contain the IBM i performance data investigators:
IBM Performance Tools for i
IBM Systems Director Navigator for i
iDoctor
Most of these comprehensive planning tools address the entire spectrum of workload performance on System i, including processor, system memory, disks, and adapters. To plan or analyze performance for the DS8900F storage system with an IBM i system, use the parts of the tools or their reports that show the disk performance.
Collection Services
Collection Services for IBM i can collect system-level and job-level performance data. It can run all the time, sampling performance data with collection intervals as low as 15 seconds or as long as an hour. It provides data for performance health checks or for analysis of a sudden performance problem. For detailed documentation, refer to:
Collection Services can look at jobs, threads, processor, disk, and communications. It also has a set of specific statistics for the DS8900F storage system. For example, it shows which IBM i disk units are located on DS8900F LUNs, whether they are connected in a single path or Multipath, the disk service time, and the wait time.
The following tools can be used to manage the data collection and report creation of Collection Services:
IBM System i Navigator
IBM Systems Director Navigator for i
IBM Performance Tools for i
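In addition to these tools, Collection Services can be handled directly with CL commands. The following sketch shows the basic sequence; the collection object name is a placeholder, and the exact parameters depend on your IBM i release:

STRPFRCOL                                    /* Start Collection Services                                */
ENDPFRCOL                                    /* End the collection after the period of interest          */
CRTPFRDTA FRMMGTCOL(QPFRDATA/Q123456789)     /* Create the QAPM* database files from the *MGTCOL object  */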
The iDoctor Collection Services Investigator can be used to create graphs and reports based on Collection Services data. For more information about iDoctor, see the IBM i iDoctor online documentation at:
With IBM i level V7R4, the Collection Services tool offers additional data collection categories, including a category for external storage. This category supports the collection of nonstandard data that is associated with certain external storage subsystems that are attached to an IBM i partition. This data can be viewed within iDoctor, which is described in “iDoctor” on page 220.
Job Watcher
Job Watcher is an advanced tool for collecting and analyzing performance information to help you effectively monitor your system or analyze a performance issue. It is job-centric and thread-centric and can collect data at intervals of seconds. Using the IBM i Job Watcher functions and content requires the installation of IBM Performance Tools for i (5770-PT1) Option 3 - Job Watcher.
For more information about Job Watcher, refer to:
Disk Watcher
Disk Watcher is a function of an IBM i system that provides disk data to help identify the source of disk-related performance problems on the IBM i platform. Its functions require the installation of IBM Performance Tools for i (5770-PT1) Option 1 Manager Feature.
For documented command strings and file layouts, refer to:
Disk Watcher gathers detailed information that is associated with I/O operations to flash modules and provides data beyond what is available in other IBM i integrated tools, such as Work with Disk Status (WRKDSKSTS), Work with System Status (WRKSYSSTS), and Work with System Activity (WRKSYSACT).
Performance Explorer
Performance Explorer is a data collection tool that helps the user identify the causes of performance problems that cannot be identified by collecting data using Collection Services or by doing general trend analysis.
Two reasons to use Performance Explorer include:
Isolating performance problems to the system resource, application, program, procedure, or method that is causing the problem
Analyzing the performance of applications.
Find more details on the Performance Explorer here:
IBM Performance Tools for i
IBM Performance Tools for i is a licensed program product that includes features that provide further detailed analysis over the basic performance tools that are available in the V7.4 operating system. This software is described in more detail here:
Performance Tools helps you gain insight into IBM i performance features, such as dynamic tuning, expert cache, job priorities, activity levels, and pool sizes. You can also identify ways to use these services better. The tool also provides analysis of collected performance data and produces conclusions and recommendations to improve system performance.
The Job Watcher part of Performance Tools analyzes the Job Watcher data through the IBM Systems Director Navigator for i Performance Data Visualizer.
Collection Services reports about disk utilization and activity, which are created with IBM Performance Tools for i, are used for sizing and Disk Magic modeling of the DS8900F storage system for the IBM i system:
The Disk Utilization section of the System report
The Disk Utilization section of the Resource report
The Disk Activity section of the Component report
IBM Navigator for i
The IBM Navigator for i is a web-based console that provides a single, easy-to-use view of the IBM i system. IBM Navigator for i provides a strategic tool for managing a specific IBM i partition.
The Performance section of IBM Systems Director Navigator for i provides tasks to manage the collection of performance data and view the collections to investigate potential performance issues. Figure 9-4 on page 220 shows the menu of performance functions in the IBM Systems Director Navigator for i.
Figure 9-4 Performance tools of Navigator for i
iDoctor
iDoctor is a suite of tools that is used to manage the collection of data, investigate performance data, and analyze performance data on the IBM i system. The goals of iDoctor are to broaden the user base for performance investigation, simplify and automate processes of collecting and investigating the performance data, provide immediate access to collected data, and offer more analysis options.
The iDoctor tools are used to monitor the overall system health at a high level or to drill down to the performance details within jobs, flash modules and programs. Use iDoctor to analyze data that is collected during performance situations. iDoctor is frequently used by IBM, clients, and consultants to help solve complex performance issues quickly. Further information about these tools can be found at:
Figure 9-5 iDoctor - query of I/O read times by object
9.4.2 DS8900F performance tools
As a preferred practice, use the IBM Storage Insights web portal for analyzing the DS8900F performance data for an IBM i workload. For more information about this product, see Chapter 7, “Practical performance management” on page 139.
9.5 Easy Tier with the IBM i system
This section describes how to use Easy Tier with the IBM i system.
9.5.1 Hot data in an IBM i workload
An important feature of the IBM i system is its object-based architecture. Everything on the system that can be worked with is considered an object. An IBM i library is an object. A database table, an index file, a temporary space, a job queue, and a user profile are objects. The intensity of I/O differs by object. The I/O rates are high on busy objects, such as application database files and index files, and lower on objects such as user profiles.
IBM i Storage Manager spreads the IBM i data across the available flash modules (LUNs) contained within an extent pool so that each flash module is about equally occupied. The data is spread in extents that are 4 KB - 1 MB or even 16 MB. The extents of each object usually span as many LUNs as possible to provide many volumes to serve the particular object. Therefore, if an object experiences a high I/O rate, this rate is evenly split among the LUNs. The extents that belong to the particular object on each LUN are I/O-intense.
Many of the IBM i performance tools work on the object level; they show different types of read and write rates on each object and disk service times on the objects. For more information about the IBM i performance tools, see 9.4.1, “IBM i performance tools” on page 217. You can relocate hot data by objects by using the Media preference method, which is described in “IBM i Media preference” on page 222.
Also, the Easy Tier tool monitors and relocates data on the 1 GB extent level. IBM i ASP balancing, which is used to relocate data to Flash, works on the 1 MB extent level. Monitoring extents and relocating extents do not depend on the object to which the extents belong; they occur on the subobject level.
9.5.2 IBM i methods for hot-spot management
You can choose your tools for monitoring and moving data to faster drives in the DS8900F storage system. Because the IBM i system recognizes the LUNs on SSDs in a natively connected DS8900F storage system, you can use the IBM i tools for monitoring and relocating hot data. The following IBM i methods are available:
IBM i Media preference
ASP balancing
A process where you create a separate ASP with LUNs on SSDs and restore the application to run in the ASP
IBM i Media preference
This method provides monitoring capability and the relocation of the data on the object level. You are in control: you decide the criteria for which objects are hot, and you control which objects to relocate to flash modules. Some clients prefer Media preference to Easy Tier or ASP balancing. IBM i Media preference involves the following steps:
1. Use PEX to collect disk events
Carefully decide which disk events to trace. On certain occasions, you might collect flash read operations only, which might benefit the most from improved performance, or you might collect both flash read and write information for future decisions. Carefully select the peak period in which the PEX data is collected.
2. Examine the collected PEX data by using the PEX-Analyzer tool of iDoctor or by using the user-written queries to the PEX collection.
It is a good idea to examine the accumulated read service time and the read I/O rate on specific objects. The objects with the highest accumulated read service time and the highest read I/O rate can be selected for relocation to flash modules. It is also helpful to analyze the write operations on particular objects. You might decide to relocate the objects with high read I/O and rather modest write I/O to flash to benefit from lower service times and wait times. Sometimes, you must distinguish among the types of read and write operations on the objects. iDoctor queries provide the rates of asynchronous and synchronous database reads and page faults.
In certain cases, queries must be created to run on the PEX collection to provide specific information, for example, the query that provides information about which jobs and threads use the objects with the highest read service time. You might also need to run a query to provide the block sizes of the read operations because you expect that the reads with smaller block sizes profit the most from flash. If these queries are needed, contact IBM Lab Services to create them.
3. Based on the PEX analysis, decide which database objects to relocate to Flash Modules in the DS8900F storage system. Then, use IBM i commands such as Change Physical File (CHGPF) with the UNIT(*SSD) parameter, or use the SQL command ALTER TABLE UNIT SSD, which sets on the file a preferred media attribute that starts dynamic data movement. The preferred media attribute can be set on database tables and indexes, and on User-Defined File Systems (UDFS).
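A minimal sketch of setting the preferred media attribute follows; the library, file, and table names are examples only:

CHGPF FILE(MYLIB/ORDERS) UNIT(*SSD)          /* CL: set the preferred media attribute on a physical file */
ALTER TABLE MYLIB.ORDERS UNIT SSD            -- SQL: set the preferred media attribute on a table

After the attribute is set, dynamic data movement to the preferred media starts for that object.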
For more information about the UDFS, refer to:
 
ASP balancing
This IBM i method is similar to DS8900F Easy Tier because it is based on the data movement within an ASP by IBM i ASP balancing. The ASP balancing function is designed to improve IBM i system performance by balancing disk utilization across all of the disk units (or LUNs) in an ASP. It provides three ways to balance an ASP:
Hierarchical Storage Management (HSM) balancing
Media Preference Balancing
ASP Balancer Migration Priority
The HSM balancer function, which traditionally supports data migration between high-performance and low-performance internal disk drives, is extended for the support of data migration between Flash Modules and HDDs. The flash drives can be internal or on the DS8900F storage system. The data movement is based on the weighted read I/O count statistics for each 1 MB extent of an ASP. Data monitoring and relocation is achieved by the following two steps:
1. Run the ASP balancer tracing function during the important period by using the TRCASPBAL command. This function collects the relevant data statistics.
2. By using the STRASPBAL TYPE(*HSM) command, you move the data to Flash and HDD based on the statistics that you collected in the previous step.
The Media preference balancer function is the ASP balancing function that helps to correct any issues with Media preference-flagged database objects or UDFS files not on their preferred media type, which is either Flash or HDD, based on the specified subtype parameter.
The function is started by the STRASPBAL TYPE(*MP) command with the SUBTYPE parameter equal to either *CALC (for data migration to both Flash (SSD) and HDD), *SSD, or *HDD.
The ASP balancer migration priority is an option in the ASP balancer so that you can specify the migration priority for certain balancing operations, including *HSM or *MP in levels of either *LOW, *MEDIUM, or *HIGH, thus influencing the speed of data migration.
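The following sketch shows the command sequence; the ASP number and time limits are examples only:

TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)                  /* Start collecting balance statistics for ASP 1  */
TRCASPBAL SET(*OFF) ASP(1)                                /* End the trace after the peak period            */
STRASPBAL TYPE(*HSM) ASP(1) TIMLMT(*NOMAX)                /* Migrate data based on the collected statistics */
STRASPBAL TYPE(*MP) SUBTYPE(*CALC) ASP(1) TIMLMT(*NOMAX)  /* Correct media preference placement             */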
 
Location: For data relocation with Media preference or ASP balancing, the LUNs defined on Flash and on HDD must be in the same IBM i ASP. It is not necessary that they are in the same extent pool in the DS8900F storage system.
Additional information
For more information about the IBM i methods for hot-spot management, including the information about IBM i prerequisites, refer to:

1 The calculations for the values in Table 9-1 are based on the measurements of how many I/O operations one rank can handle in a certain RAID level, assuming 20% read cache hit and 30% write cache efficiency for the IBM i workload. Assume that half of the used ranks have a spare and half are without a spare.