IBM TS7700 implementation for IBM z/VM, IBM z/VSE, and
IBM z/TPF environments
This appendix describes implementation and operation considerations for IBM z/VM, IBM z/VSE, and IBM z/Transaction Processing Facility (IBM z/TPF) environments.
For more information, see the following documentation:
z/VM: DFSMS/VM Removable Media Services, SC24-6185
IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation, SG24-6789
z/VSE V6R1.0 Administration, SC34-2627
This appendix includes the following sections:
Software requirements
Software implementation in z/VM and z/VSE
Software implementation in z/OS Transaction Processing Facility
Implementing Outboard Policy Management for non-z/OS hosts
Software requirements
The following software products are required:
IBM z/VM V6R4, or later
With z/VM, the TS7760, TS7740, and TS7720 are transparent to host software. z/VM V6R4, or later, is required for guest and native VM support, which provides the base CP functions.
IBM z/VSE V5.2, or later
With z/VSE, the TS7700 is transparent to host software. z/VSE supports the TS7760, TS7740, and TS7720 as a stand-alone system in transparency mode. z/VSE supports single-node and multi-cluster grid configurations, Copy Export, and Logical Write Once Read Many (LWORM).
IBM z/TPF V1.1, or later
With IBM z/TPF, the TS7760, TS7740, and TS7720 are supported in a single node and grid environment with the appropriate software maintenance. The category reserve and release functions are not supported by the TS7700.
Software implementation in z/VM and z/VSE
This section explains how to implement and run the TS7700 under z/VM and z/VSE. It covers the basics for software requirements, implementation, customization, and platform-specific considerations about operations and monitoring. For more information, see IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation, SG24-6789.
General support information
Not all IBM Tape Libraries and TS7700 solutions are supported in all operating systems. Table B-1 lists several supported tape solutions for non-z/OS environments.
Table B-1 Supported tape solutions for non-z/OS platforms in IBM Z environments
Platform/Tape system     IBM TS3500 tape library   TS7700   3592 drives
z/VM V6.4 native         Yes                       Yes1     Yes
z/VSE V5.2 native        Yes                       Yes      Yes
z/VSE V6.1 native        Yes                       Yes      Yes
z/VSE V5.2 under z/VM    Yes                       Yes1     Yes
z/VSE V6.1 under z/VM    Yes                       Yes1     Yes
zTPF V1R1                Yes                       Yes      Yes

1 With restrictions; for more information, see “Considerations in all TS7700 environments” on page 827.
Although z/VM and z/VSE can use the TS7700, you must consider certain items. For more information about support for z/TPF, see “Software implementation in z/OS Transaction Processing Facility” on page 833.
Note: An RPQ that enables a TS4500 to be used by z/VM is available.
Considerations in all TS7700 environments
z/VSE cannot provide SMS constructs to TS7700. However, clients might be able to take advantage of some of the Outboard policy management functions if they predefine the constructs to the logical volumes when they are entered through the MI. Another possibility is to use dedicated physical pools in a TS7700 environment. After the insert processing of virtual volumes completes, you can define a default construct to the volume range as described in “Implementing Outboard Policy Management for non-z/OS hosts” on page 837.
TS7700 multi-cluster grid environments
z/VSE V5.2 and later supports multi-cluster grid and Copy Export.
APAR VM65789 introduced the ability for the RMS component of DFSMS/VM to use the COPY EXPORT function of a TS7700. COPY EXPORT enables a copy of selected logical volumes that are written on the back-end physical tape that is attached to a TS7700 to be removed and taken offsite for disaster recovery purposes. For more information, review the memo that is bundled with VM65789 or see z/VM: DFSMS/VM Removable Media Services, SC24-6185.
For DR tests that involve a TS7700 grid that is connected to hosts running z/VM or z/VSE, Release 3.3 of the TS7700 microcode introduced a new keyword on the DRSETUP command called SELFLIVE. This keyword enables a DR host to access its self-created content that was moved into a write-protected category when flash is enabled. For more information, see Library Request Command, WP101091.
z/VM native support that uses DFSMS/VM
DFSMS/VM Function Level 221 (FL221) is the only way for a z/VM system to communicate with a TS7700. DFSMS/VM FL221 is part of z/VM. The removable media services (RMS) function of DFSMS/VM FL221 provides TS7700 support in z/VM, as described in DFSMS/VM Function Level 221 Removable Media Services, SC24-6185.
Tape management
Although the RMS functions themselves do not include tape management system (TMS) services, such as inventory management and label verification, RMS functions are designed to interface with a TMS that can perform these functions.
For more information about third-party TMSs that support the TS7700 in the z/VM environment, see IBM TotalStorage 3494 Tape Library: A Practical Guide to Tape Drives and Tape Automation, SG24-4632.
Figure B-1 shows the z/VM native support for the TS7700.
Figure B-1 TS7700 in a native z/VM environment using DFSMS/VM
When you use the TS7740, TS7720T, TS7760C, or TS7760T in a VM environment, consider that many VM applications or system utilities use specific mounts for scratch volumes. With specific mounts, when a mount request is sent from the host, the logical volume might need to be recalled from the stacked cartridge if it is not already in the Tape Volume Cache (TVC).
Instead, you might want to consider the use of a TS7760 or a TS7720 (cache-resident partition CP0) for your VM workload. This configuration keeps the data in the TVC for faster access. Also consider whether a TS7700 and its capability to replicate to remote sites provides what your VM backup needs, or whether physical tape is needed to move data offsite.
DFSMS/VM
After you define the new TS7700 tape library through HCD, you must define the TS7700 to DFSMS/VM if the VM system is to use the TS7700 directly. You define the TS7700 tape library through the DFSMS/VM DGTVCNTL DATA control file. Also, you define the available tape drives through the RMCONFIG DATA configuration file. For more information, see z/VM V6R2 DFSMS/VM Removable Media Services, SC24-6185.
You have access to RMS as a component of DFSMS/VM. To enable RMS to run automatic insert bulk processing, you must create the RMBnnnnn data file in the VMSYS:DFSMS CONTROL directory, where nnnnn is the five-character tape library sequence number that is assigned to the TS7700 during hardware installation.
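As a minimal sketch, assuming a hypothetical library sequence number of 12345, and assuming that the control directory is named VMSYS:DFSMS.CONTROL and is accessed as file mode B, the empty bulk insert file might be created from an authorized CMS session with commands similar to the following example. Verify the exact directory name and the availability of the CREATE FILE command at your CMS level before you use this approach:
ACCESS VMSYS:DFSMS.CONTROL B
CREATE FILE RMB12345 DATA B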
For more information about implementing DFSMS/VM and RMS, see DFSMS/VM Function Level 221 Removable Media Services User’s Guide and Reference, SC35-0141. If the TS7700 is shared by your VM system and other systems, more considerations apply. For more information, see Guide to Sharing and Partitioning IBM Tape Library Data, SG24-4409.
Native z/VSE
Native support for the stand-alone grid TS7700 configuration is provided in z/VSE Version 5.2 and later, which supports all IBM TS1150, TS1140, TS1130, TS1120, and 3592-J1A configurations without APARs in all automation offerings, including TS3500 Tape Library configurations.
z/VSE supports the TS3500 Tape Library/3953 natively through its Tape Library Support (TLS). In addition to the older Tape Library Support, a function was added that enables the tape library to be supported through IBM S/390® channel command interface commands. This function eliminates the XPCC/APPC communication protocol that is required by the older interface. The external interface (LIBSERV JCL and LIBSERV macro) remains unchanged.
Defining library support
First, define the type of support you are using by specifying the SYS ATL statement. You can define the following types:
TLS   Tape Library Support, which provides full VSE LPAR support.
VSE   LCDD, which does not support TS1150/TS1140/TS1130/TS1120/3592 (only IBM 3490E and 3590), and does not support the TS3500 Tape Library.
VM    VM Guest Support, which is used when z/VSE runs under z/VM and a TS7700 is used by both operating systems. In this case, the VSE Guest Server (VGS) and DFSMS/VM are needed (see “z/VSE as a z/VM guest using a VSE Guest Server” on page 831).
For native support under VSE, where the TS7700 is used only by z/VSE, select TLS. At least one tape drive must be permanently assigned to VSE.
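A minimal sketch of the statement that selects TLS follows, assuming that it is placed in the IPL procedure; the other SYS parameters of your IPL procedure remain unchanged:
SYS ATL=TLS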
Defining tape libraries
Next, define your tape library or libraries. This is done through a batch job as shown in Example B-1. Use skeleton member TLSDEF from ICCF Lib 59.
Example: B-1 Define tape libraries
* $$ JOB JNM=TLSDEF,CLASS=0,DISP=D
* $$ LST CLASS=A
// JOB TLSDEF
// EXEC LIBR,PARM='MSHP'
ACCESS S=IJSYSRS.SYSLIB
CATALOG TLSDEF.PROC REPLACE=YES
LIBRARY_ID TAPELIB1 SCRDEF=SCRATCH00 INSERT=SCRATCH00 --- default library
LIBRARY_ID TAPELIB2 * SECOND LIB DEF
DEVICE_LIST TAPELIB1 460:463 * DRIVES 460 TO 463
DEVICE_LIST TAPELIB2 580:582 * DRIVES 580 TO 582
QUERY_INV_LISTS LIB=TLSINV * MASTER INVENTORY FILES
MANAGE_INV_LISTS LIB=TLSMAN * MANAGE FROM MASTER
/+
/*
/&
* $$ EOJ
LIBSERV
The communication from the host to the TS7700 goes through the LIBSERV JCL or macro interface. Example B-2 shows a sample job that uses LIBSERV to mount volume 123456 for write on device address 480 and, in a second step, to release the drive again.
Example: B-2 Sample LIBSERV JCL
* $$ JOB JNM=BACKUP,CLASS=0,DISP=D
// JOB BACKUP
// ASSGN SYS005,480
// LIBSERV MOUNT,UNIT=480,VOL=123456/W
// EXEC LIBR
BACKUP S=IJSYSRS.SYSLIB TAPE=480
/*
// LIBSERV RELEASE,UNIT=480
/&
* $$ EOJ
LIBSERV provides the following functions:
Query all libraries for a volume LIBSERV AQUERY,VOL=123456
Mount from category LIBSERV CMOUNT,UNIT=480,SRCCAT=SCRATCH01
Mount a specific volume LIBSERV MOUNT,UNIT=480,VOL=123456
Dismount a volume LIBSERV RELEASE,UNIT=480
Query count of volumes LIBSERV CQUERY,LIB=TAPELIB1,SRCCAT=SCRATCH01
Query device LIBSERV DQUERY,UNIT=480
Query inventory of library LIBSERV IQUERY,LIB=TAPELIB1,SRCCAT=SCRATCH01
Query library LIBSERV LQUERY,LIB=TAPELIB1
Manage inventory LIBSERV MINVENT,MEMNAME=ALL,TGTCAT=SCRATCH01
Change category LIBSERV SETVCAT,VOL=123456,TGTCAT=SCRATCH01
Query library for a volume LIBSERV SQUERY,VOL=123456,LIB=TAPELIB1
Copy Export LIBSERV COPYEX,VOL=123456,LIB=TAPELIB1
For more information, see z/VSE System Administration Guide, SC34-2627, and z/VSE System Macros Reference, SC34-2638.
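The following sketch combines the listed functions with the structure of Example B-2 to mount a scratch volume from a category instead of mounting a specific volume. The device address 480, the category name SCRATCH01, and the backup step are assumptions that are carried over from the earlier examples:
* $$ JOB JNM=SCRBKUP,CLASS=0,DISP=D
// JOB SCRBKUP
* MOUNT ANY VOLUME FROM CATEGORY SCRATCH01 ON DRIVE 480
// ASSGN SYS005,480
// LIBSERV CMOUNT,UNIT=480,SRCCAT=SCRATCH01
// EXEC LIBR
BACKUP S=IJSYSRS.SYSLIB TAPE=480
/*
* RELEASE THE DRIVE WHEN THE BACKUP COMPLETES
// LIBSERV RELEASE,UNIT=480
/&
* $$ EOJ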
VM/ESA and z/VM guest support
This section describes two host environments that can use an IBM TS7700 while running as guest systems under z/VM.
 
Tip: When z/OS is installed as a z/VM guest on a virtual machine, you must specify the following statement in the virtual machine directory entry for the VM user ID under which the z/OS guest operating system is started for the first time:
STDEVOPT LIBRARY CTL
z/OS guests
The STDEVOPT statement specifies the optional storage device management functions available to a virtual machine. The LIBRARY operand with CTL tells the control program that the virtual machine is authorized to send tape library commands to an IBM Automated Tape Library Dataserver. If the CTL parameter is not explicitly coded, the default of NOCTL is used.
NOCTL specifies that the virtual machine is not authorized to send commands to a tape library, which results in an I/O error (command reject) when MVS tries to send a command to the library. For more information about the STDEVOPT statement, see z/VM V6.2 Resources.
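The following sketch shows where the statement is placed within a guest's directory entry. The user ID, password, storage sizes, and privilege class are placeholders for illustration only, and the other statements of a real directory entry are omitted:
* Directory entry for the z/OS guest virtual machine (placeholder values)
USER ZOSPROD XXXXXXXX 512M 1024M G
* Authorize this virtual machine to send commands to the tape library
  STDEVOPT LIBRARY CTL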
z/VSE guests
Some VSE TMSs require VGS support and also DFSMS/VM RMS for communication with the TS7700.
If the VGS is required, define the LIBCONFIG file and FSMRMVGC EXEC configuration file on the VGS service system’s A disk. This file cross-references the z/VSE guest’s tape library names with the names that DFSMS/VM uses. To enable z/VSE guest exploitation of inventory support functions through the LIBSERV-VGS interface, the LIBRCMS part must be installed on the VM system.
If VGS is to service inventory requests for multiple z/VSE guests, you must edit the LIBRCMS SRVNAMES cross-reference file. This file enables the inventory support server to access Librarian files on the correct VSE guest system. For more information, see 7.6, “VSE Guest Server Considerations” in Guide to Sharing and Partitioning IBM Tape Library Data, SG24-4409.
CA DYNAM/TM-VSE does not use the VGS system.
z/VSE as a z/VM guest using a VSE Guest Server
When a z/VSE guest system uses a tape drive in the TS7700, the virtual tape drive must be attached to that system, and the virtual tape volume must be mounted on the drive. Because z/VSE, as a virtual machine, cannot communicate with the TS7700 to request a tape mount, RMSMASTR (the DFSMS/VM RMS master virtual machine) must attach the tape drive and mount the volume. However, z/VSE cannot use RMSMASTR directly because RMS functions run only in CMS mode.
Therefore, some z/VSE guest scenarios use the CMS service system, called the VGS, to communicate with RMSMASTR. VGS uses the standard facilities of RMS to interact with the TS7700 and the virtual drives.
Figure B-2 shows the flow and connections of a TS7700 in a z/VSE environment under a VM.
Figure B-2 TS7700 in a z/VSE environment as a VM guest
Tape management systems
As with the IBM VM/ESA native environment, the TMS is responsible for keeping an inventory of volumes in the TS7700 that belong to z/VSE. Some vendor tape management support scenarios do not use VGS. Instead, they communicate directly with RMSMASTR through CSL calls.
Figure B-3 shows CA-DYNAM/T VSE.
Figure B-3 TS7700 in a z/VSE environment as a VM guest (no VGS)
VSE uses original equipment manufacturer (OEM) tape management products that support scratch mounts. Therefore, if you are running VSE under VM, you can benefit from the scratch (Fast Ready) attribute for the VSE library's scratch category.
For more information about z/VSE, see z/VSE V6R1.0 Administration, SC34-2627.
Software implementation in z/OS Transaction Processing Facility
This section describes the support for a TS7700 in a z/OS Transaction Processing Facility (z/TPF) environment with z/TPF V1.1. The z/TPF control program and several new and modified z/TPF E-type programs support the TS1150, TS7740, TS7720, and TS7760. The support is limited to a command-based interface.
Because z/TPF does not have a TMS or a tape catalog system, z/OS manages this function. In a z/TPF environment, most tape data is passed between the systems. In general, 90% of the tapes are created on z/TPF and read on z/OS, and the remaining 10% are created on z/OS and read on z/TPF.
Be sure to use the normal z/OS and TS7700 installation processes. For more information, see the white paper that describes leading practices for implementing the TS7700 with z/TPF.
Usage considerations for TS7700 with z/TPF
z/TPF uses virtual volumes from the z/OS scratch pools and shares the TS7700 scratch categories with z/OS. The z/OS host runs the insert processing for these virtual volumes and continues to manage them based on the input that is obtained from z/TPF. z/TPF has a set of commands (ZTPLF), which you use to load the volumes on z/TPF-allocated virtual drives.
After a volume is loaded into a z/TPF drive, have an automated solution in place that passes the volume serial number (VOLSER), the tape data set name, and the expiration date over to z/OS to process it automatically.
On z/OS, you must update the TMS’s catalog and the TCDB so that z/OS can process virtual volumes that are created by z/TPF. After the z/TPF-written volumes are added to the z/OS TMS catalog and the TCDB, normal expiration processing applies. When the data on a virtual volume expires and the volume is returned to scratch, the TS7700 internal database is updated to reflect the volume information maintained by z/OS.
Specifics for z/TPF and z/OS with a shared TS7700
From the virtual drive side, certain drive addresses must be allocated to z/TPF. This information depends on the tape functions that are needed on z/TPF, and can vary with your setup. Therefore, the TS7700 has tape addresses that are allocated to multiple z/TPF and z/OS systems, and can be shared by dedicating device addresses to the individual systems.
Tapes that are created on z/OS and read into z/TPF
Tapes that are created on z/OS and read into z/TPF use the same z/OS process for creating tapes. When z/TPF wants to read this z/OS-created tape, it does a specific mount of the tape volume serial number (VSN) on a z/TPF-allocated drive by using the z/TPF ZTPLF commands.
TS7700 performance for z/TPF
You can use the normal z/TPF Data Collection and Reduction reports that summarize read and write activity to the z/TPF-allocated drive. For TS7700 specific performance, use the normal TS7700 statistics that are offloaded to z/OS through the TS7700 Bulk Volume Information Retrieval (BVIR) function.
Support of large virtual volumes for z/TPF (2 GB and 4 GB)
z/TPF itself does not use functions, such as Data Class (DC), to control the logical volume size for specific mounts. User exits enable you to set construct names for a volume. If you are not using the user exits, you can set the default size through the TS7700 Management Interface (MI) during logical volume insertion, as described in “Implementing Outboard Policy Management for non-z/OS hosts” on page 837.
Consider the following information when you implement a TS7700 in a TPF environment:
Reserving a tape category does not prevent another host from using that category. You are responsible for monitoring the use of reserved categories.
Automatic insert processing is not provided in z/TPF.
Currently, no IBM TMS is available for z/TPF.
Advanced Policy Management is supported in z/TPF through a user exit. The exit is called any time that a volume is loaded into a drive. Then, the user can specify, through the z/TPF user exit, whether the volume will inherit the attributes of an existing volume by using the clone VOLSER attribute. Or, the code can elect to specifically set any or all of the Storage Group (SG), Management Class (MC), Storage Class (SC), or DC construct names. If the exit is not coded, the volume attributes remain unchanged when the volume is used by z/TPF.
For z/TPF V1.1, APAR PJ31394 is required for this support.
Library interface
z/TPF has only one operator interface with the TS7700, which is a z/TPF functional message called ZTPLF. The various ZTPLF functions enable the operator to manipulate the tapes in the library as operational procedures require. These functions include Reserve, Release, Move, Query, Load, Unload, and Fill. For more information, see IBM TotalStorage 3494 Tape Library: A Practical Guide to Tape Drives and Tape Automation, SG24-4632.
Control data sets
The z/TPF host does not keep a record of the volumes in the TS7700 tape library or manage the tape volumes in it. You can use the QUERY command to obtain information about the tape volumes that are held in the TS3500/3952 Tape Library.
Service information message and media information message presentation
Service information messages (SIMs) and media information messages (MIMs) report hardware-related problems to the operating system.
SIMs and MIMs are represented in z/TPF by EREP reports and the following messages:
CEFR0354
CEFR0355W
CEFR0356W
CEFR0357E
CEFR0347W
CDFR0348W
CDFR0349E
Performance considerations for TS7700 multi-cluster grids with z/TPF
When clusters are operating within a TS7700 grid, they share information about the status of volumes and devices. Certain operations that are initiated by z/TPF require all the clusters in the grid to communicate with one another. Under normal conditions, this communication occurs without delay and with no effect on z/TPF. In addition, if one cluster fails and the other clusters in the grid recognize that condition, the communication with that cluster is no longer needed.
The issue with z/TPF arises when the period that clusters wait before recognizing that another cluster in the grid failed exceeds the timeout values on z/TPF. This issue also means that during this recovery period, z/TPF cannot run any ZTPLF commands that change the status of a volume. This restriction includes loading tapes or changing the category of a volume through a ZTPLF command, or through the tape category user exit in segment CORU.
The recovery period when a response is still required from a failing cluster can be as long as 6 minutes. Attempting to send a tape library command to any device in the grid during this period can render that device inoperable until the recovery period has elapsed even if the device is on a cluster that is not failing.
To protect against timeouts during a cluster failure, z/TPF systems must be configured to avoid sending tape library commands to devices in a TS7700 grid along critical code paths within z/TPF. This task can be accomplished through the tape category change user exit in the segment CORU. To isolate z/TPF from timing issues, the category for a volume must not be changed if the exit is called for a tape switch. Be sure that the exit changes the category when a volume is first loaded by z/TPF, and then does not change it again.
To further protect z/TPF against periods in which a cluster is failing, z/TPF must keep enough volumes loaded on drives that are varied on to z/TPF so that the z/TPF system can operate without the need to load an extra volume on any drive in the grid until the cluster failure is recognized. z/TPF must have enough volumes that are loaded so that it can survive the 6-minute period where a failing cluster prevents other devices in that grid from loading any new volumes.
 
Important: Read and write operations to devices in a grid do not require communication between all clusters in the grid. Eliminating the tape library commands from the critical paths in z/TPF helps z/TPF tolerate the recovery times of the TS7700 and read or write data without problems if a failure of one cluster occurs within the grid.
Another configuration consideration relates to volume ownership. Each volume in a TS7700 grid is owned by one of the clusters in the grid. When a scratch volume is requested from a category for a specific device, a volume that is owned by the cluster to which that device belongs is selected, if possible. z/TPF systems must always be configured so that any scratch category is populated with volumes that are owned by each cluster in the grid.
In this manner, z/TPF has access to a scratch tape that is owned by the cluster that was given the request for a scratch volume. If all of the volumes in a grid are owned by one cluster, a failure on that cluster requires a cluster takeover (which can take tens of minutes) before volume ownership can be transferred to a surviving cluster.
Guidelines
When z/TPF applications use a TS7700 multi-cluster grid that is represented by the composite library, the following usage and configuration guidelines can help you meet the TPF response-time expectations on the storage subsystems:
The best configuration is to have the active and standby z/TPF devices and volumes on separate composite libraries (either single-cluster or multi-cluster grid). This configuration prevents a single event on a composite library from affecting both the primary and secondary devices.
If the active and standby z/TPF devices/volumes are configured on the same composite library in a grid configuration, be sure to use the following guidelines:
 – Change the category on a mounted volume only when it is first mounted through the ZTPLF LOAD command or as the result of a previous ZTPLF FILL command.
This change can be accomplished through the tape category change user exit in the segment CORU. To isolate z/TPF from timing issues, the category for a volume must never be changed if the exit is called for a tape switch. Be sure that the exit changes the category when a volume is first loaded by z/TPF, and then does not change it again.
 – z/TPF must keep enough volumes loaded on drives that are varied on to z/TPF so that the z/TPF system can operate without the need to load extra volumes on any drive in the grid until a cluster failure is recognized and the cluster isolated. z/TPF must have enough volumes that are loaded so that it can survive the 6-minute period when a failing cluster prevents other devices in that grid from loading any new volumes.
 – z/TPF systems must always be configured so that any scratch category is populated with volumes that are owned by each of the clusters in the grid. This method ensures that during a cluster failure, volumes that are owned by other clusters are available for use without ownership transfers.
Use the RUN Copy Consistency Point only for the cluster that is used as the I/O TVC. All other clusters must be configured with the Deferred consistency point to avoid timeouts on the close of the volume.
Implementing Outboard Policy Management for non-z/OS hosts
Outboard Policy Management and its constructs are used only in DFSMS host environments, where OAM can identify the construct names and dynamically assign and reset them. z/VM, z/VSE, z/TPF, and other hosts cannot identify the construct names and cannot change them. In addition, non-z/OS hosts use multiple Library Manager (LM) categories for scratch volumes, and can use multiple logical scratch pools on the Library Manager, as listed in Table B-2.
Table B-2 Scratch pools and Library Manager volume categories
Host software                        Library Manager              Number of        Library Manager
                                     scratch categories           scratch pools    private categories
VM (+ VM/VSE)                        X’0080’ - X’008F’            16               X’FFFF’
Basic Tape Library Support (BTLS)    X’0FF2’ - X’0FF8’, X’0FFF’   8                X’FFFF’
Native VSE                           X’00A0’ - X’00BF’            32               X’FFFF’
 
Clarification: In a z/TPF environment, manipulation of construct names for volumes can occur when they are moved from scratch through a user exit. The user exit enables the construct names and clone VOLSER to be altered. If the exit is not implemented, z/TPF does not alter the construct names.
z/TPF use of categories is flexible. z/TPF enables each drive to be assigned a scratch category. For private categories, each z/TPF system has its own category to which volumes are assigned when they are mounted.
For more information about this topic, see the z/TPF section of IBM Knowledge Center.
Because these hosts do not know about constructs, they ignore static construct assignments, and the assignment is kept even when the logical volume is returned to scratch. Static assignment means that construct names are assigned to logical volumes when the volumes are inserted. Construct names can also be assigned at any later time.
To implement Outboard Policy Management for non-z/OS hosts attached to a TS7700, complete the following steps:
1. Define your pools and constructs.
2. Insert your logical volumes into groups through the TS7700 MI, as described in 9.5.3, “TS7700 definitions” on page 552. You can assign the required static construct names during the insertion as shown at the bottom part of the window in Figure B-4.
3. On the left side of the MI, click Virtual → Virtual Volumes → Insert Virtual Volumes. The window that is shown in Figure B-4 opens. Use the window to insert virtual volumes. Select the Set Constructs option and enter the construct names.
Figure B-4 Insert logical volumes by assigning static construct names
4. If you want to modify VOLSER ranges and assign the required static construct names to the logical volume ranges through the change existing logical volume function, select Logical Volumes → Modify Logical Volumes.
Define groups of logical volumes with the same construct names assigned and, during insert processing, direct them to separate volume categories so that all volumes in one LM volume category have identical constructs assigned.
Host control is provided by using the appropriate scratch pool. When a scratch mount is requested from a specific scratch category, the actions that are defined for the constructs that are assigned to the logical volumes in that category are run at the Rewind Unload (RUN) of the logical volume.
 