Management and configuration
This chapter describes enhancements in z/OS that help simplify storage management so that the system can manage storage automatically and autonomically. Enhancements over the years, up to today’s z/OS Version 2, steadily refined the storage software within z/OS.
This chapter focuses on storage pool designs and specific enhancements that were jointly developed by the z/OS and DS8000 storage system development teams to achieve more synergy between the two platforms.
This chapter includes the following topics:
4.1, “Storage pool design considerations”
4.2, “Extended address volume enhancements”
4.3, “Dynamic volume expansion”
4.4, “Quick initialization”
4.5, “Volume formatting overwrite protection”
4.6, “Channel paths and a control-unit-initiated reconfiguration”
4.7, “CKD thin provisioning”
4.8, “DS CLI on z/OS”
4.9, “Lightweight Directory Access Protocol authentication”
4.10, “GUI performance reporting”
4.11, “Fibre Channel connectivity report”
4.12, “Encryption”
4.13, “Transparent Cloud Tiering multi-cloud support”
4.1 Storage pool design considerations
Storage pool design considerations are a long-standing source of debate. The discussion originated in the early days of disk storage, when customers discovered that the growing number of their disk volumes was becoming unmanageable.
 
Note: The information in this section is important for former DS8000 hybrid configurations. Although still valid, it is less relevant for all-flash DS8000 systems.
IBM responded to this challenge by introducing system-managed storage and its corresponding system storage software. The approach was to no longer focus on volume awareness, but instead turn to a pool concept. The pool was the container for many volumes, and disk storage was managed at a pool level.
Eventually, storage pool design considerations also evolved with the introduction of storage systems, such as the DS8000 storage system, which offered other possibilities.
This section covers the system-managed storage and DS8000 views and how both are combined to contribute to the synergy between the IBM Z server and DS8000 storage systems.
4.1.1 Storage pool design considerations within z/OS
System storage software, such as Data Facility Storage Management Subsystem (DFSMS), manages information by creating a file or data set, setting its initial placement in the storage hierarchy, and managing it through its entire lifecycle until the file is deleted. The z/OS Storage Management Subsystem (SMS) can automate storage management tasks and reduce related costs. This automation is achieved through policy-based data management, availability management, space management, and even performance management, which DFSMS provides autonomically.
Ultimately, DFSMS is complemented by the DS8000 storage system and its rich variety of functions that work well with DFSMS and its components, as described in 4.1.4, “Combining SMS Storage Groups and DS8000 extent pools” on page 48.
This storage management starts with the initial placement of a newly allocated file within a storage hierarchy. It includes consideration for storage tiers in the DS8000 storage system where the data is stored. SMS assigns policy-based attributes to each file. Those attributes might change over the lifecycle of that file.
In addition to logically grouping attributes, such as the SMS Data Class (DC), SMS Storage Class (SC), and SMS Management Class (MC) constructs, SMS uses the concept of Storage Group (SG).
Finally, files are assigned to an SG or set of SGs. Ideally, the criteria for placing the file are dictated solely by the SC attributes and controlled by the final Automatic Class Selection (ACS) routine. The chain of ACS routines begins with an optional DC ACS routine, followed by a mandatory SC ACS routine and an optional MC ACS routine. The chain concludes with the SG ACS routine, whose result is a list of candidate volumes on which the file can be placed.
Sophisticated logic lies behind the creation of this candidate volume list. Several z/OS components and measurements are involved in reviewing the list to identify the optimal volume for the file.
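To illustrate how the SG ACS routine concludes this chain, the following minimal sketch routes data sets to SGs by their assigned SC. All class and group names (SCFAST, SCSTD, SGFLASH, SGPOOL1, and SGPOOL2) are illustrative only and are not taken from any referenced configuration:
PROC STORGRP
  /* Minimal sketch: map the assigned storage class to   */
  /* one or more candidate storage groups.               */
  SELECT (&STORCLAS)
    WHEN ('SCFAST') SET &STORGRP = 'SGFLASH'
    WHEN ('SCSTD')  SET &STORGRP = 'SGPOOL1','SGPOOL2'
    OTHERWISE       SET &STORGRP = 'SGPOOL1'
  END
END
The SELECT on &STORCLAS reflects the principle that is stated above: placement is dictated by the SC attributes, and the SG ACS routine only maps them to a list of candidate SGs.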
Therefore, the key construct from a pooling viewpoint is the SMS SG. The preferred approach is to create enough SMS SGs and populate each SMS SG with as many volumes as needed. Consider the volume size because you can create volumes of model 27, model 54, or larger capacities in the storage system.
Plan bigger volumes for huge data sets and, as much as possible, do not mix large and small data sets on bigger volumes. “Small” and “large” are relative concepts that depend on each installation. This approach delegates to the system software the control over how each volume within the SMS SG is used and populated.
The system software includes all the information about the configuration and the capabilities of each storage technology within the SMS SG. Performance-related service-level requirements can be addressed by SC attributes.
For more information about SMS constructs and ACS routines, see z/OS DFSMSdfp Storage Administration, SC23-6860.
 
Note: In the newer DS8000 storage systems, the common practice for multitier extent pools is to enable Easy Tier. By doing so, the DS8000 microcode handles the data placement within the multiple storage tiers that belong to the extent pools for best performance results.
4.1.2 z/OS DFSMS class transition
Starting with z/OS 2.1, DFSMS (and specifically Data Facility Storage Management Subsystem Hierarchical Storage Manager [DFSMShsm]) is enhanced to support a potential class transition in an automatic fashion, which enables relocating a file within its L0 storage level from an SG into another SG. This relocation is performed during DFSMShsm automatic space management and is called class transition because ACS routines are exercised again during this transition process.
Based on a newly assigned SC, MC, or a combination of both, the SG ACS routine then assigns an SG, which might differ from the previous group. This policy-based data movement between SGs is a powerful function that is performed during DFSMShsm primary space management, on-demand migration, and interval migration.
 
Note: During initial file allocation, SMS selects an SG that is defined in the ACS routine. However, this file’s performance requirements might change over time, and a different SG might be more suitable.
Various migrate commands were enhanced to support class transitions at the data set, volume, and SG level, as shown in the sketch after the following list. For more information, see the following publications:
z/OS DFSMShsm Storage Administration, SC23-6871
z/OS DFSMS Using the New Functions, SC23-6857
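As an illustration of such an enhanced command, the following hedged sketch shows an on-demand class transition at the data set level. The data set name is an example and the syntax is paraphrased; see the publications that are listed above for the exact parameters:
HSEND MIGRATE DATASETNAME('PROD.PAYROLL.KSDS') TRANSITION
The TRANSITION keyword requests a class transition rather than a migration to a lower storage level, so the data set is moved to the SG that the ACS routines newly assign.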
4.1.3 DS8000 storage pools
The DS8000 architecture also uses the concept of pools or SGs as extent pools.
The concept of extent pools within the DS8000 storage system also evolved over time from homogeneous drive technologies within an extent pool to what today is referred to as a hybrid extent pool, which uses heterogeneous storage technology within the same extent pool. The use of a hybrid extent pool is made possible and efficient through the Easy Tier functions.
With Easy Tier, you can autonomically use up to three storage tiers in the most optimal fashion, based on workload characteristics. Although this goal is ambitious, Easy Tier for the DS8000 storage system evolved over the years.
In contrast to SMS, where the granularity or entity of management is a file or data set, the management entity within Easy Tier is a DS8000 extent. When large extents are used, the Count Key Data (CKD) extent size is the equivalent of a 3390-1 volume: 1113 cylinders, or approximately 0.946 GB.
When small extents are used, these extents are 21 cylinders (approximately 17.85 MB), but they are grouped into extent groups of 64 small extents for Easy Tier management. Automatic Easy Tier management within the DS8000 storage system migrates extents between storage tiers within the same extent pool and rebalances extents within extent pools with homogeneous storage technology.
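These extent sizes follow directly from the 3390 track geometry of 15 tracks per cylinder and 56,664 bytes per track:
1113 cylinders x 15 tracks x 56,664 bytes = 946,005,480 bytes (approximately 0.946 GB)
  21 cylinders x 15 tracks x 56,664 bytes =  17,849,160 bytes (approximately 17.85 MB)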
For more information, see IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-4667.
4.1.4 Combining SMS Storage Groups and DS8000 extent pools
When comparing SMS storage tiering and DS8000 storage tiering, each approach has its own strength. SMS-based tiering addresses application and availability needs through initial data set or file placement according to service levels and policy-based management. It also gives control to the user and application regarding where to place the data within the storage hierarchy.
With DFSMS class transition, files can automatically move between different SMS SGs. As of this writing, this capability requires that the file is not open to an active application. From this standpoint, Easy Tier can relocate DS8000 extents without affecting a file that might be in use by an application.
Easy Tier does not understand an application’s needs when the data set is created. Instead, it intends to use the available storage technology in an optimal fashion by placing on faster types of flash those parts of an application’s data that are heavily accessed and benefit most from better response times.
SMS storage tiering and DS8000 storage tiering are compared in Table 4-1.
Table 4-1 Comparing SMS tiering and DS8000 tiering
 
                    SMS-based                                   DS8000-based
Movement entity     Data set or file level                      Physical DS8000 extent
Management scope    Across DS8000 storage systems, but          Within a DS8000 storage system
                    within a Parallel Sysplex                   extent pool
Management level    Policy-based                                I/O activity-based
Access              Closed files only                           Open and closed files
Impact              File must be quiesced                       Transparent (no impact)
Costs               Host MIPS                                   No host MIPS
Combining both tiering approaches is possible, and it might lead to the best achievable results from a total systems perspective. This combination provides tight integration between IBM Z and the DS8000 storage system to achieve the best possible results economically, with highly automated and transparent functions serving IBM Z customers.
4.2 Extended address volume enhancements
Today’s large storage facilities tend to expand to larger CKD volume capacities. Some installations are nearing, or are beyond, the z/OS 64 K limit of addressable unit control blocks (UCBs) for disk storage. Because of this four-digit device-addressing limitation, larger CKD volumes must be defined by increasing the number of cylinders per volume.
As of this writing, an extended address volume (EAV) supports volumes with up to 1,182,006 cylinders (approximately 1 TB).
With the introduction of EAVs, the addressing changed from track to cylinder addressing. The partial change from track to cylinder addressing creates the following address areas on EAVs:
Track-Managed Space: The area on an EAV that is within the first 65,520 cylinders. The usage of the 16-bit cylinder addressing allows a theoretical maximum address of 65,535 cylinders. To allocate more cylinders, you must have a new format to address the area above 65,520 cylinders.
For 16-bit cylinder numbers, the track address format is CCCCHHHH, where:
 – HHHH: 16-bit track number
 – CCCC: 16-bit cylinder number
Cylinder-Managed Space: The area on an EAV that is above the first 65,520 cylinders. This space is allocated in multicylinder units (MCUs), which as of this writing are 21 cylinders. A new cylinder-track address format addresses the extended capacity of an EAV.
For 28-bit cylinder numbers, the format is CCCCcccH, where:
 – CCCC: The low order 16 bits of a 28-bit cylinder number
 – ccc: The high order 12 bits of a 28-bit cylinder number
 – H: A 4-bit track number (0 - 14)
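As an illustrative encoding, cylinder 100,000 (x'186A0') with track 5 yields CCCC = x'86A0' (the low-order 16 bits), ccc = x'001' (the high-order 12 bits), and H = x'5', for a track address of x'86A00015'.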
The following z/OS components and products now support 1,182,006 cylinders:
DS8000 storage system and z/OS V1.12 or later support CKD EAV volume sizes (3390 Model A: 1 - 1,182,006 cylinders [approximately 1 TB of addressable storage]).
Configuration granularity:
 – 1-cylinder boundary sizes: 1 - 65,520 cylinders
 – 1113-cylinder boundary sizes: 56,763 (51 x 1113) - 1,182,006 (1062 x 1113) cylinders
The size of a Mod 3/9/A volume can be increased to its maximum supported size by using dynamic volume expansion (DVE). For more information, see 4.3, “Dynamic volume expansion” on page 54.
The volume table of contents (VTOC) allocation method for an EAV volume was changed compared to the VTOC that is used for traditional smaller volumes. The size of an EAV VTOC index was increased four-fold, and it now has 8192 blocks instead of 2048 blocks.
Because no space remains inside the Format 1 data set control block (DSCB), new DSCB formats (Format 8 and Format 9) were created to protect programs from seeing unexpected track addresses. These DSCBs are known as extended attribute DSCBs (EADSCBs). Format 8 and 9 DSCBs are new for EAV. The Format 4 DSCB also was changed to point to the new Format 8 DSCB.
4.2.1 Data set type dependencies on an EAV
EAV includes several data set type dependencies. The following sequential and other non-VSAM data set types can be placed in the extended addressing space (EAS):
Sequential data sets (Extended, Basic, and Large format)
Basic direct-access method (BDAM)
Partitioned data set (PDS)
Partitioned data set extended (PDSE)
VSAM volume data set (VVDS)
Basic catalog structure (BCS)
This EAS is the cylinder-managed space of an EAV volume that is running on z/OS V1.12 and later.
EAS eligibility also extends to all VSAM data set types, including the following examples:
Key-sequenced data set (KSDS)
Relative record data set (RRDS)
Entry-sequenced data set (ESDS)
Linear data set (LDS)
IBM Db2
IBM Information Management System (IMS)
IBM CICS®
IBM z/OS File System (zFS) data sets
The VSAM data sets that are placed on an EAV volume can be SMS or non-SMS managed.
For an EAV volume, the following data sets might exist but are not eligible to have extents in the EAS (cylinder-managed space):
VSAM data sets with incompatible control area sizes
VTOC (it is still restricted to the first 64 K - 1 tracks)
VTOC index
Page data sets
VSAM data sets with IMBED or KEYRANGE attributes (these attributes are not supported)
Hierarchical file system (HFS) file system
SYS1.NUCLEUS
All other data sets can be placed on an EAV EAS.
You can expand all Mod 3/9/A volumes to a large EAV by using DVE. The VTOC is reformatted automatically if REFVTOC=ENABLE is set in the DEVSUPxx parmlib member.
The data set placement on EAV as supported on z/OS is shown in Figure 4-1.
Figure 4-1 Data set placement on EAV
 
4.2.2 EAV volumes on z/OS
Consider the following points about EAV volumes:
EAV volumes with 1 TB sizes are supported on z/OS V1.12 and later. A non-VSAM data set that is allocated with an EADSCB on z/OS V1.12 cannot be opened on versions earlier than z/OS V1.12.
After a large volume is upgraded to 3390 Model A volume (an EAV with up to 1,182,006 cylinders) and the system is granted permission, an automatic VTOC refresh and index rebuild are run. The permission is granted by REFVTOC=ENABLE in parmlib member DEVSUPxx. The trigger to the system is a state change interrupt (SCI) that occurs after the volume expansion, which is presented by the storage system to z/OS.
No other hardware configuration definition (HCD) considerations are available for the 3390 Model A definitions.
On parmlib member IGDSMSxx, the USEEAV(YES) parameter must be set to allow data set allocations on EAV volumes. The default value is NO and prevents allocating data sets to an EAV volume. Example 4-1 shows a message that you receive when you are trying to allocate a data set on an EAV volume, and USEEAV(NO) is set.
Example 4-1 Message IEF021I with USEEAV set to NO
IEF021I TEAM142 STEP1 DD1 EXTENDED ADDRESS VOLUME USE PREVENTED DUE TO SMS USEEAV(NO) SPECIFICATION.
The Break Point Value (BPV) parameter determines the size at which a data set is preferably allocated in the cylinder-managed area. The BPV can be set at the system level in the IGDSMSxx parmlib member and in the SG definition (the SG BPV overrides the system-level BPV). The BPV value can be 0 - 65520: 0 means that the cylinder-managed area is always preferred, and 65520 means that the track-managed area is always preferred.
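A minimal IGDSMSxx sketch that enables EAV allocations and sets a system-level BPV follows. The BPV of 21000 is only an illustrative value; choose one that fits your installation:
USEEAV(YES)
BREAKPOINTVALUE(21000)
With these settings, data sets of 21,000 cylinders or more are preferably allocated in the cylinder-managed area.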
4.2.3 Identifying an EAV volume
Any EAV has more than 65,520 cylinders. To address such a volume, the cylinder count in the Format 4 DSCB is set to x’FFFE’, and Format 8 and 9 DSCBs are used for data sets in the cylinder-managed space. Most eligible EAV data sets are identified by the software with EADSCB=YES.
An easy way to identify any EAV that is used is to list the VTOC summary in TSO/ISPF option 3.4. Figure 4-2 shows the VTOC summary of a 1 TB 3390 Model A CKD volume.
Figure 4-2 TSO/ISPF 3.4 panel for a 1 TB EAV volume: VTOC summary
 
Important: Before EAV volumes are implemented, apply the latest z/OS maintenance levels. For more information, see this IBM Support web page.
4.2.4 EAV migration considerations
When you are planning to migrate to EAV volumes, consider the following items:
Assistance
Migration assistance is provided by the z/OS Generic Tracker Facility.
Suggested actions:
 – Review your programs and look for calls for the following macros:
 • OBTAIN
 • REALLOC
 • CVAFDIR
 • CVAFSEQ
 • CVAFDSM
 • CVAFFILT
These macros were modified, and you must update your program to reflect those changes.
 – Look for programs that calculate volume or data set size by any means, including reading a VTOC or VTOC index directly with a basic sequential access method (BSAM) or EXCP DCB. This task is important because new values are now returned for the volume size.
 – Review your programs and look for the EXCP and STARTIO macros for direct access storage device (DASD) channel programs and other programs that examine DASD channel programs or track addresses. Now that a new addressing mode exists, programs must be updated.
 – Look for programs that examine any of the many operator messages that contain a DASD track, block address, data set, or volume size. The messages now show new values.
Migrating data:
 – Define new EAVs by creating them on the DS8000 storage system or expanding volumes by using DVE.
 – Add new EAV volumes to SGs and storage pools, and update ACS routines.
 – Copy data at the volume level:
 • IBM Transparent Data Migration Facility (IBM TDMF)
 • Data Facility Storage Management Subsystem Data Set Services (DFSMSdss)
 • IBM DS8000 Copy Services Metro Mirror (MM) (formerly known as Peer-to-Peer Remote Copy (PPRC))
 • Global Mirror (GM)
 • Global Copy
 • FlashCopy
 – Copy data at the data set level:
 • DS8000 FlashCopy
 • SMS class transition
 • IBM z/OS Dataset Migration Facility (IBM zDMF)
 • DFSMSdss
 • DFSMShsm
4.3 Dynamic volume expansion
DVE simplifies management by enabling easier online volume expansion for IBM Z to support application data growth. It also supports data center migration and consolidation to larger volumes to ease addressing constraints.
The size of a Mod 3/9/A volume can be increased to its maximum supported size by using DVE. The volume can be dynamically increased in size on a DS8000 storage system by using the GUI or DS command-line interface (DS CLI).
Example 4-2 shows how the volume can be increased by using the DS8000 DS CLI.
Example 4-2 Dynamically expanding a CKD volume
dscli> chckdvol -cap 262668 -captype cyl 9ab0
CMUC00022I chckdvol: CKD Volume 9AB0 successfully modified.
DVE can be done while the volume remains online to the z/OS host system. When a volume is dynamically expanded, the VTOC and VTOC index must be reformatted to map the extra space. With z/OS V1.11 and later, an increase in volume size is detected by the system, which then performs an automatic VTOC refresh and index rebuild.
The following options are available:
DEVSUPxx parmlib options.
The system is informed by SCIs, which are controlled by the following parameters:
 – REFVTOC=ENABLE
With this option, the device manager causes the volume VTOC to be automatically rebuilt when a volume expansion is detected.
 – REFVTOC=DISABLE
This parameter is the default. An IBM Device Support Facilities (ICKDSF) batch job must be submitted to rebuild the VTOC before the newly added space on the volume can be used. Start ICKDSF with REFORMAT/REFVTOC to update the VTOC and index to reflect the real device capacity (see the JCL sketch after this list).
The following message is issued when the volume expansion is detected:
IEA019I dev, volser, VOLUME CAPACITY CHANGE,OLD=xxxxxxxx,NEW=yyyyyyyy
Use the SET DEVSUP=xx command to enable or disable automatic VTOC and index reformatting without an IPL.
Use the F DEVMAN,ENABLE(REFVTOC) command to communicate with the device manager address space to rebuild the VTOC. However, update the DEVSUPxx parmlib member to ensure it remains enabled across subsequent IPLs.
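For the REFVTOC=DISABLE case that is described above, the following minimal ICKDSF job sketch rebuilds the VTOC and index. The JOB card details and the volume serial VOL001 are illustrative:
//REFVTOC  JOB (ACCT),'REBUILD VTOC',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD  SYSOUT=*
//VOLDD    DD  UNIT=3390,VOL=SER=VOL001,DISP=SHR
//SYSIN    DD  *
  REFORMAT DDNAME(VOLDD) VERIFY(VOL001) REFVTOC
/*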
 
Note: For the DVE function, volumes cannot be in Copy Services relationships (point-in-time copy or FlashCopy, MM, GM, Metro/Global Mirror [MGM], or IBM z/OS Global Mirror [IBM zGM]) during expansion. Copy Services relationships must be removed, and after the source and target volumes are expanded to their new capacity, the Copy Services pair can be reestablished.
4.4 Quick initialization
Whenever new volumes are assigned to a host, any new capacity that is allocated to them must be initialized. On a CKD logical volume, any CKD logical track that is read before it is written is formatted with a default record 0, which contains a count field with the physical cylinder and head of the logical track, record (R) = 0, key length (KL) = 0, and data length (DL) = 8. The data field contains 8 bytes of zeros.
A DS8000 storage system supports the quick volume initialization function (Quick Init) for
IBM Z environments. Quick Init makes the newly provisioned CKD volumes accessible to the host immediately after they are created and assigned to it. The Quick Init function is automatically started whenever the volume is created or the existing volume is expanded.
It dynamically initializes the newly allocated space, which allows logical volumes to be configured and placed online to host more quickly. Therefore, manually initializing a volume from the host side is not necessary.
If the volume is expanded by using the DS8000 DVE function, normal read and write access to the logical volume is allowed during the initialization process. Depending on the operation, the Quick Init function can be started for the entire logical volume or for an extent range on the logical volume.
Quick Init improves device initialization speeds, simplifies the host storage provisioning process, and allows a Copy Services relationship to be established soon after a device is created.
4.5 Volume formatting overwrite protection
ICKDSF is the main z/OS utility to manage disk volumes (for example, for the initialize and reformat actions). In a complex IBM Z environment with many logical partitions (LPARs), in which volumes are assigned and accessible to more than one z/OS system, it is easy to mistakenly erase or overwrite the contents of a volume that is used by another z/OS image.
The DS8900F addresses this exposure through the Query Host Access function, which is used to determine whether target devices for specific script verbs or commands are online to systems where they should not be online. Query Host Access provides more useful information to ICKDSF about every system (including various sysplexes, virtual machine (VM), Linux, and other LPARs) that has a path to the volume that you are about to alter by using the ICKDFS utility.
The ICKDSF VERIFYOFFLINE parameter was introduced for that purpose. It fails an INIT or REFORMAT job if the volume is being accessed by any system other than the one that is performing the INIT or REFORMAT operation (as shown in Figure 4-3). The check occurs when ICKDSF reads the volume label.
Figure 4-3 ICKDSF volume formatting overwrite protection
Messages that are generated soon after the ICKDSF REFORMAT starts and the volume is found to be online to some other system are shown in Example 4-3.
Example 4-3 ICKDSF REFORMAT volume
REFORMAT UNIT(8000) NVFY VOLID(DS8000) VERIFYOFFLINE
ICK00700I DEVICE INFORMATION FOR 8000 IS CURRENTLY AS FOLLOWS:
PHYSICAL DEVICE = 3390
STORAGE CONTROLLER = 2107
STORAGE CONTROL DESCRIPTOR = E8
DEVICE DESCRIPTOR = 0E
ADDITIONAL DEVICE INFORMATION = 4A00003C
TRKS/CYL = 15, # PRIMARY CYLS = 65520
ICK04000I DEVICE IS IN SIMPLEX STATE
ICK00091I 9042 NED=002107.900.IBM.75.0000000xxxxx
ICK31306I VERIFICATION FAILED: DEVICE FOUND TO BE GROUPED
ICK30003I FUNCTION TERMINATED. CONDITION CODE IS 12
If this condition is found, the Query Host Access command from ICKDSF (ANALYZE) or DEVSERV (with the QHA option) can be used to determine what other z/OS systems have the volume online.
Example 4-4 shows the result of DEVSERV (or DS for short) with the QHA option.
Example 4-4 DEVSERV with the QHA option
-DS QD,037DF,QHA
IEE459I 10.43.52 DEVSERV QDASD 027
UNIT VOLSER SCUTYPE DEVTYPE CYL SSID SCU-SERIAL DEV-SERIAL EFC
037DF XXY011 2107988 2107900 64554 80E0 0175-EYQ91 0175-EYQ91 *OK
QUERY HOST ACCESS TO VOLUME
PATH-GROUP-ID FL STATUS SYSPLEX SYSTEM MAX-CYLS
80044604D73906DA43E1DA 50 ON PLEXAA AAAA 1182006
80033509A73906DA219FA9 50 ON PLEXBB BBBB 1182006
8003351E273906DA216966 50 ON PLEXCC CCCC 1182006
80033503F73906DA2179BF 50 ON PLEXDD DDDD 1182006
80033504D73906DA215B8D* 50 ON PLEXEE EEEE 1182006
80044504D73906DA43FA7A 00 OFF PLEXFF FFFF 1182006
80044503F73906DA449F33 50 ON PLEXGG GGGG 1182006
8003361E273906DA2198AC 50 ON PLEXHH HHHH 1182006
**** 8 PATH GROUP ID(S) MET THE SELECTION CRITERIA
**** 1 DEVICE(S) MET THE SELECTION CRITERIA
**** 0 DEVICE(S) FAILED EXTENDED FUNCTION CHECKING
This synergy between a DS8000 storage system and ICKDSF prevents accidental data loss and some unpredictable results. In addition, it simplifies the storage management by reducing the need for manual control.
The DS8000 Query Host Access function is used by IBM Geographically Dispersed Parallel Sysplex (GDPS), as described in 2.5, “Geographically Dispersed Parallel Sysplex” on page 26.
4.6 Channel paths and a control-unit-initiated reconfiguration
In the IBM Z environment, the standard practice is to provide multiple paths from each host to a storage system. Typically, four or eight paths are installed. The channels in each host that can access each logical control unit (LCU) in the DS8000 storage system are defined in the HCD or I/O configuration data set (IOCDS) for that host.
Dynamic Path Selection (DPS) allows the channel subsystem to select any available (nonbusy) path to start an operation to the disk subsystem. Dynamic Path Reconnect (DPR) allows the DS8000 to select any available path to a host to reconnect and resume a disconnected operation; for example, to transfer data after disconnection because of a cache miss.
These functions are part of IBM z/Architecture® and are managed by the channel subsystem on the host and the DS8000 storage system.
A physical Fibre Channel connection (FICON) path is established when the DS8000 port sees light on the fiber; for example, a cable is plugged into a DS8000 host adapter, a processor or the DS8000 storage system is powered on, or a path is configured online by z/OS.
Next, logical paths are established through the port between the host and some or all of the LCUs in the DS8000 storage system, as controlled by the HCD definition for that host. This configuration occurs for each physical path between an IBM Z host and the DS8000 storage system.
A single CPC can contain multiple system images, and logical paths are established for each system image. The DS8000 storage system then knows which paths can be used to communicate between each LCU and each host.
Control-unit-initiated reconfiguration (CUIR) varies off a path or paths to all IBM Z hosts to allow service for an I/O enclosure or host adapter. Then, it varies on the paths to all host systems when the host adapter ports are available. This function automates channel path management in IBM Z environments in support of selected DS8000 service actions.
CUIR is available for the DS8000 when it operates in the z/OS and IBM z/VM environments. CUIR provides automatic channel path vary on and off actions to minimize manual operator intervention during selected DS8000 service actions.
CUIR also allows the DS8000 storage system to request that all attached system images set all paths that are required for a specific service action to the offline state. System images with the suitable level of software support respond to such requests by varying off the affected paths and notifying the DS8000 that the paths are offline or that it cannot take the paths offline.
CUIR reduces manual operator intervention and the possibility of human error during maintenance actions and reduces the time that is required for the maintenance. This function is useful in environments in which many z/OS or z/VM systems are attached to a DS8000 storage system.
4.7 CKD thin provisioning
DS8000 storage systems allow CKD volumes to be formatted as thin-provisioned extent space-efficient (ESE) volumes. These ESE volumes perform physical allocation only on writes, and only when a new extent is needed to hold the incoming write data.
The allocation granularity and the size of these extents is 1113 cylinders or 21 cylinders, depending on how the extent pool was formatted. The use of small extents makes more sense in the context of thin provisioning.
One scenario to use such thinly provisioned volumes is for FlashCopy target volumes or GM Journal volumes so that the volumes can be space efficient while maintaining standard (thick) volume sizes for the operational source volumes.
Another scenario is to create all volumes as ESE volumes. In PPRC relationships, this approach has the advantage that on initial replication, extents that are not yet allocated in a primary volume do not need to be replicated, which also saves bandwidth.
In general, thin provisioning requires tight control over the physically free capacity in the specific extent pool, especially when over-provisioning is performed. These controls are available, along with respective alert thresholds and alerts that can be set.
4.7.1 Advantages
Thin provisioning can make storage administration easier. You can provision large volumes when you configure a new DS8900F storage system, and you no longer have to manage a mix of volume sizes, such as 3390 Model 1, Model 9, Model 27, and so on. All volumes can be large and of the same size.
A volume or device address is sometimes required only to communicate with the control unit (CU), such as the utility device for Extended Remote Copy (XRC). Such a volume can have a minimal capacity requirement. With thin provisioning, you can still use a large volume: because little data is written to such a volume, its size in physical space remains small, and no storage capacity is wasted.
For many z/OS customers, migrating to larger volumes is a task they avoid because it involves substantial work. As a result, many customers have too many small 3390 Model 3 volumes. With thin provisioning, they can define large volumes and migrate data from other storage systems to a DS8900F storage system that is defined with thin-provisioned volumes and likely use even less space. Most migration methods facilitate copying small volumes to a larger volume. You refresh the VTOC of the volume to recognize the new size.
4.7.2 Advanced Volume Creation from GUI
The Management GUI now offers an easier way to create a single volume or multiple volume sets at the same time (of the same type). Thin-provisioned volumes can now be created without the use of the former Custom mode.
4.7.3 Space release
Space is released when either of the following conditions is met:
A volume is deleted.
A full FlashCopy Volume relationship is withdrawn and reestablished.
 
Note: This condition is not true when working with data-set- or extent-level FlashCopy with z/VM minidisks. Therefore, use caution when you are working with the DFSMSdss utility because it might use data-set-level FlashCopy, depending on the parameters that are used.
A space release is also done on the target of an MM or Global Copy if the source and target are thin provisioned and the relationship is established.
Introduced with DS8000 code releases R8.2 and APAR OA50675 is the ability for storage administrators to perform an extent-level space release by using the DFSMSdss SPACEREL command. This volume-level command is used to scan and release free extents from volumes back to the extent pool.
The SPACEREL command can be issued for volumes or SGs and uses the following format (a batch sketch follows):
SPACERel DDName(ddn) | DYNam(volser,unit) | STORGRP(groupname)
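As an illustration, the following minimal DFSMSdss batch sketch releases free extents from a single volume. The JOB card details and the volume serial VOL001 are examples:
//RELSPACE JOB (ACCT),'RELEASE SPACE',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  SPACEREL DYNAM(VOL001,3390)
/*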
An enhancement was provided with the R8.3 code to release space on the secondary volume when the SPACEREL command is issued to an MM duplex primary device.
If issued to an MM suspended primary device, the space first is released in the primary device only. When the PPRC relationship is reestablished, the extents that were freed on the primary device also are released on the secondary device. The pair remains in the DUPLEX PENDING state until the extents on the secondary device are freed; the sync process resumes later.
 
Note: At the time of this writing, Global Copy primary devices must be suspended to allow the use of the SPACEREL command. Global Copy duplex pending devices are not supported for the SPACEREL command, nor are devices in FlashCopy relationships.
For Multiple Target Peer-to-Peer Remote Copy (MT-PPRC) relationships, each relationship on the device must allow the SPACEREL command to run for the release to be allowed on the primary device (that is, the primary must be in a Suspended state for Global Copy or GM relationships, and in a Duplex or Suspended state in an MM relationship).
Suspended primary devices in a GM session are supported. Cascaded devices follow the same rules as noncascaded devices, although space is not released on the target because these devices are FlashCopy source devices.
4.7.4 Overprovisioning controls
Overprovisioning a storage system with thin provisioning brings the risk of running out of space in the storage system, which causes a loss of access to the data when applications cannot allocate space that was presented to the servers. To avoid this situation, clients typically use a policy regarding the amount of overprovisioning that they allow in an environment and monitor the growth of allocated space with predefined thresholds and warning alerts.
The current DS8000 code provides clients with an enhanced method of enforcing such policies: overprovisioning is capped at an overprovisioning ratio (see Figure 4-4) beyond which further space allocations in the system are not allowed.
Figure 4-4 Overprovisioning ratio formula
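In general terms, the ratio that is shown in Figure 4-4 is the total provisioned (virtual) capacity divided by the total real (physical) capacity of the pool. For example, the opratio value of 0.96 in Example 4-7 indicates that slightly less virtual capacity is provisioned than the real capacity that is installed.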
As part of the implementation project or permanently in a production environment, some clients might want to enforce an overprovisioning ratio of 100%, which means that no overprovisioning is allowed. (The use of this ratio does not risk affecting production by running out of space on the DS8000 storage system.) By doing so, the Easy Tier and replication benefits of thin provisioning can be realized without the risk of accidentally overprovisioning the underlying storage. The overprovisioning ratio can be changed dynamically later, if needed.
Implementing overprovisioning control results in changes to the standard behavior to prevent an extent pool from exceeding the overprovisioning ratio. As a result, the following actions are prevented when they would cause the ratio to be exceeded:
Volume creation, expansion, and migration
Rank depopulation
Pool merging
Turning on Easy Tier space reservation
Overprovisioning controls can be implemented at the extent pool level, as shown in Example 4-5.
Example 4-5 Creating an extent pool with a 350% overprovisioning ratio limit
dscli> mkextpool -rankgrp 0 -stgtype fb -opratiolimit 3.5 -encryptgrp 1 test_create_fb
CMUC00000I mkextpool: Extent pool P8 successfully created.
An extent pool’s overprovisioning ratio can be changed by running the chextpool DS CLI command, as shown in Example 4-6.
Example 4-6 Changing the overprovisioning ratio limit on P3 to 3.125
dscli> chextpool -opratiolimit 3.125 p3
CMUC00001I chextpool: Extent pool P3 successfully modified.
To display the overprovisioning ratio of an extent pool, run the showextpool DS CLI command, as shown in Example 4-7.
Example 4-7 Displaying the current overprovisioning ratio of extent pool P3 and the limit set
dscli> showextpool p3
 
...
%limit 100
%threshold 15
...
opratio 0.96
opratiolimit 3.13
%allocated(ese) 2
%allocated(rep) 0
%allocated(std) 77
%allocated(over) 0
%virallocated(ese) -
%virallocated(tse) -
%virallocated(init) -
...
4.8 DS CLI on z/OS
Another synergy item between IBM Z and a DS8000 storage system is that you can install the DS CLI along with IBM Copy Services Manager (CSM) 6.1.4 and later on a z/OS system. It is a regular SMP/E for z/OS installation.
The DS CLI runs under UNIX System Services for z/OS and includes a separate FMID HIWN61K. You can also install the DS CLI separately from CSM.
For more information, see the IBM DS CLI on z/OS Program Directory, GI13-3563.
After the installation is complete, access your UNIX System Services for z/OS, which can vary among installations. One common way to access these services is by using TSO option 6 (ISPF Command Shell) and the OMVS command. For more information, contact your z/OS System Programmer.
Access to DS CLI in z/OS is shown in Figure 4-5. It requests the same information that you supply when you are accessing DS CLI on other platforms.
$ cd /opt/IBM/CSMDSCLI
$ ./dscli
Enter the primary management console IP address: <enter-your-DS8K-machine-ip-address>
Enter the secondary management console IP address:
Enter your username: <enter-your-user-name-as-defined-on-the-machine>
Enter your password: <enter-your-user-password-to-access-the-machine>
dscli> ver -l
...
dscli>
Figure 4-5 Accessing DS CLI on z/OS
Some DS CLI commands that are run in z/OS are shown in Figure 4-6.
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
========================================================================================
IBM.2107-75ACA91 IBM.2107-75ACA91 IBM.2107-75ACA90 980 5005076303FFD13E Online Enabled
 
dscli> lsckdvol -lcu EF
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
===========================================================================================
ITSO_EF00 EF00 Online Normal Normal 3390-A CKD Base - P1 262668
ITSO_EF01 EF01 Online Normal Normal 3390-9 CKD Base - P1 10017
ITSO_EF02 EF02 Online Normal Normal 3390-3 CKD Base - P1 3339
dscli> rmckdvol EF02
CMUC00023W rmckdvol: The alias volumes that are associated with a CKD base volume are automatically deleted before deletion of the CKD base volume. Are you sure you want to delete CKD volume EF02? [y/n]: y
CMUC00024I rmckdvol: CKD volume EF02 successfully deleted.
dscli>
Figure 4-6 Common commands on DS CLI
With this synergy, you can use all z/OS capabilities to submit batch jobs and perform DS CLI functions, such as creating disks or LCUs.
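For example, the following hedged JCL sketch runs a DS CLI command in batch under UNIX System Services by using BPXBATCH. The HMC address, user ID, and password file path are illustrative:
//DSCLIJOB JOB (ACCT),'DSCLI BATCH',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=BPXBATCH
//STDOUT   DD  SYSOUT=*
//STDERR   DD  SYSOUT=*
//STDPARM  DD  *
SH /opt/IBM/CSMDSCLI/dscli
   -hmc1 10.10.10.10 -user admin -pwfile /u/admin/ds8k.pwfile
   lsckdvol -lcu EF
/*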
4.9 Lightweight Directory Access Protocol authentication
The DS8000 storage systems allow directory services-based user authentication.
By default, the DS8000 authentication is based on local user management. Maintaining local repositories of users and their permissions is convenient and straightforward when dealing with only a few users and only a few DS8000 servers or other systems.
However, as the number of users and interconnected systems grows, the benefits of a centralized user management approach can be substantial. Here, the DS8000 works together with external Lightweight Directory Access Protocol (LDAP) servers.
As shown in Figure 4-7, the DS8900F can connect to a wider variety of LDAP server types natively, starting with code release 9.1. These types include Resource Access Control Facility (IBM RACF®) and CA Top Secret for z/OS.
Figure 4-7 Connecting the DS8900F to LDAP servers
For more information about the DS8900F using LDAP, see LDAP Authentication for IBM DS8000 Systems: Updated for DS8000 Release 9.1, REDP-5460.
4.10 GUI performance reporting
The DS8000 Storage Management GUI allows storage performance reporting on the following levels:
Full-system
Pool level
Array
Port
LSS/LCU
Volume
Volume-level performance reporting provides a detailed per-volume view of performance metrics within the management GUI. This feature helps administrators simplify and optimize tasks and improves their ability to solve problems.
The newer DS8000 code makes it easy to access data that is related to I/Os per second (IOPS), latency, and bandwidth from the logical system and host perspectives. This capability enables faster, graphics-based troubleshooting for host applications and easy data analysis in the management GUI.
From the management GUI and DS CLI, it is possible to access report files that can be exported and uploaded into the IBM Storage Modeller tool for performance modeling and simulation by the IBM and IBM Business Partner storage technical sales and specialist teams.
Performance Graphs report
The Performance Graphs report is available on the Management GUI. It displays the performance metrics for one or more resources on the storage. It is possible to select the resource and metrics in the graph for the past 7 days. The graphs are refreshed automatically as new data becomes available.
Figure 4-8 shows the Performance Graphs report.
Figure 4-8 Performance Graphs report
This report includes the following features:
Legend
Resources, such as system, pool, and I/O port, and their metrics and measurements are displayed in the upper right of the graph, as shown in Figure 4-8. More than one resource or metric can be displayed in different colors to facilitate visualization.
Timeline
The timeline is the horizontal line at the bottom of the graph. You can see the performance history by dragging the timeline control. You also can zoom in and out by using the mouse wheel.
Split-screen view
The split-screen view allows two graphs to be shown simultaneously. By clicking the split-screen icon at the right, you can adjust the timelines independently. To return to a single graph mode, click the unlink icon at the right.
Preset graphs
The performance page includes the following preset graphs:
 – System IOPS: Displays read, write, and total requests in thousand I/O per second (KIOPS) averaged over 1 minute.
 – System Latency: Displays the response time in milliseconds (ms) for read/write operations averaged over 1 minute.
 
Note: The System Latency graph for the storage system that is connected to an
IBM Z host includes only partial end-to-end response times on a system level for CKD volumes. For detailed system level response times, see the Resource Measurement Facility (RMF) on the host.
 – System Bandwidth: Displays the number of megabytes per second (MBps) for read, write, and total bandwidth averaged over 1 minute.
 – System Cache: Displays the percentage of total read I/O operations that were fulfilled from the storage system cache and write I/O operations that were delayed because of write cache limitations. Both metrics are averaged over 1 minute.
For more information about volume-level and other levels of performance reporting, see this IBM Documentation web page.
4.11 Fibre Channel connectivity report
Since DS8000 Release 9.2, the management GUI can generate a report that covers the attached ports, all host and replication logins for selected ports in the standard configuration, and the security status of each IBM Fibre Channel Endpoint Security login.
The Fibre Channel Connectivity report is a CSV file that contains information about all Fibre Channel and FICON logins, including open systems host connections, IBM Z and LinuxONE servers, and other DS8000 storage systems. This report helps the system administrator optimize system resources, better understand the storage area network (SAN) configuration, and perform problem determination.
The CSV file contains the following information:
Local Port ID
Local Port Fibre Channel ID
Local Port WWPN
Local Port WWNN
Local Port Security Capability
Local Port Security Configuration
Local Port Logins
Local Port Security Capable Logins
Local Port Authentication Only Logins
Local Port Encrypted Logins
Attached Port WWPN
Attached Port WWNN
Attached Port Interface ID
Attached Port Type
Attached Port Model
Attached Port Manufacturer
Attached Port S/N
Remote Port WWPN
Remote Port WWNN
Remote Port Fibre Channel ID
Remote Port PRLI Complete
Remote Port Login Type
Remote Port Security State
Remote Port Security Config
Remote Port Interface ID
Remote Port Type
Remote Port Model
Remote Port S/N
Remote Port Manufacturer
Remote Port System Name
For more information about the Fibre Channel connectivity report, see this IBM Documentation web page.
4.12 Encryption
IBM Z features the concept of pervasive encryption, which is the idea of having encryption everywhere. Applied to storage, it covers encrypting data at rest and in flight.
By using self-encrypting drives (SEDs), the DS8000 has offered data-at-rest (DAR) encryption for many years. Traditionally, this encryption is combined with key manager software and servers for storing the encryption keys.
For data-in-flight encryption, the DS8000 supports the IBM Fibre Channel Endpoint Security between host and DS8000 ports. For more information, see 5.11, “IBM Fibre Channel Endpoint Security since IBM z15” on page 115.
For offloading data to tape or to a cloud tier (see 5.13, “Transparent Cloud Tiering” on page 116), encryption also can be used.
With the TCT Secure Data Transfer (SDT) option, encryption of data in flight (EDiF) can occur while the data is moved over the grid network to tape, and the data can be decrypted again when it arrives at the TS7700 cluster.
TCT SDT does not require an external key manager. Hardware acceleration is used in the POWER CECs of the DS8900F to offload CPU cycles to a crypto engine.
When TCT is used, an option is available for client-side encryption of all data; that is, data lands in the object store as encrypted and is decrypted only when it is recalled by DS8900F. This method requires an external key manager, such as IBM Security Guardium Key Lifecycle Manager.
4.12.1 Encryption without external key manager servers
For more information about external key managers, see IBM DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint Security (DS8000 Release 9.2), REDP-4500.
Since DS8000 Release 9.2, a new optional license supports local encryption. Running on the DS8000 Processor Complexes, a local key manager performs the enablement and key management for encryption at rest, as shown in Figure 4-9.
Figure 4-9 Internal key management
Note: An external key manager is still required for TCT data-at-rest encryption, or for IBM Fibre Channel Endpoint Security.
Local encryption on the DS8000 works by storing an Encrypted Group Key (EGK) and an Obfuscated Data Key (ODK). The DS8000 de-obfuscates the ODK by using a known key to recover the Data Key (DK), and it uses the DK to decrypt the Group Key (GK). The disk adapters use the GK to derive the Drive Access Key (DAK) that is unique to each drive. The DAK is used to decrypt the Drive Encryption Key (DEK) and read data from the drive.
With the internal key management, DS8000 generates a DK, obfuscates it, and stores it internally rather than storing the key in an external server.
4.12.2 Encryption with GKLM and EKMF Web Integration
Beginning with GKLM 4.1.1, GKLM can store the GKLM master key in ICSF by way of EKMF Web 2.1. The GKLM master key is generated by the HSM by using EKMF Web REST APIs, which also are used to wrap and unwrap the data keys that are stored in Db2 or Postgres.
With Release 9.3, the DS8900F supports GKLM integration with redundant EKMF Web instances, which protects the GKLM master key in ICSF and Crypto Express cards.
With enhanced connectivity to GKLM Containerized Edition (CE), the DS8900F is now ready for Fibre Channel Endpoint Security (FCES) and is prepared for the future full support that is planned with IBM Z for GKLM CE.
Deployed as a Docker container in a z/OS Container Extensions (zCX) instance, GKLM CE adds KMIP capabilities to z/OS. z/OS teams can continue to use IBM Security Key Lifecycle Manager for z/OS to manage existing storage devices and add GKLM CE to manage new storage devices or to use capabilities that require KMIP.
For more information, see the following resources:
IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z, SG24-8455
4.13 Transparent Cloud Tiering multi-cloud support
Since DS8000 Release 9.2, TCT can be configured to support up to eight clouds, which can be on-premises, off-premises, and on an IBM TS7700.
The definition of multiple SMS cloud network connection constructs must be done in DFSMShsm by specifying which types of data must be moved to the TS7700 object store and which other types of data must be moved to on-premises or off-premises clouds. This capability can also be used to separate test data, development data, and production data through service-level agreements to various cloud targets without reconfiguring the system.
The first cloud that is created is automatically activated. All clouds that are created after the first cloud must be manually activated by using the managecloudserver command.
Figure 4-10 shows the TCT multi-cloud support.
Figure 4-10 Transparent Cloud Tiering multi-cloud support
The following types of clouds are supported:
Swift: A DS8000 system can use cloud solutions that are based on OpenStack Swift to encrypt authentication credentials and connect to a storage target.
Swift-Keystone: You can use Swift-Keystone to encrypt authentication credentials and connect to a cloud storage target. Authentication is done by using root or system certificates with Secure Sockets Layer (SSL) or Transport Layer Security (TLS).
TS7700: Used for authentication and data stores on an IBM TS7700 system.
AWS-S3: A DS8000 system can authenticate and connect to S3 storage through the Amazon Simple Storage Service (Amazon S3) protocol.
IBM Cloud® Object Storage: You can use IBM Cloud Object Storage for data protection through backup and recovery.
S3: The DS8000 system can authenticate and store data on any other S3 Cloud that is a storage target.
For more information, see the following resources:
IBM DS8000 Transparent Cloud Tiering (DS8000 Release 9.2), SG24-8381