Configuring and operating the IBM TS7700
This chapter provides information about how to configure and operate the IBM TS7700 by using the Management Interface (MI).
This chapter includes the following topics:
For general guidance regarding TS3500 or TS4500 tape libraries, see the following IBM Redbooks publications:
IBM TS3500 Tape Library with System z Attachment: A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation, SG24-6789
IBM TS4500 R5 Tape Library Guide, SG24-8235
9.1 User interfaces
To successfully operate the TS7700, you must understand its concepts and components. This chapter combines the components and functions of the TS7700 into two groups:
The logical view
The physical view
Each component and each function belongs to only one view.
The logical view is named the host view. From the host allocation point of view, there is only one library, called the composite library. The logical view includes virtual volumes and virtual tape drives.
With Release R4.2, a composite library can have up to 3968 virtual addresses for tape mounts, considering an eight-cluster grid with support for 496 virtual devices in each cluster (available with FC5275 and z/OS APAR OA44351). For more information, see Chapter 2, “Architecture, components, and functional characteristics” on page 15.
The host is aware of the existence of the underlying physical libraries only because they are defined through Interactive Storage Management Facility (ISMF) in a z/OS environment. The term distributed library is used to denote the physical libraries and TS7700 components that are part of one cluster of the multi-cluster grid configuration.
The physical view shows the hardware components of a stand-alone cluster or a multi-cluster grid configuration. In a TS7700 tape-attached model, it includes the currently configured physical tape library and tape drives:
The TS4500 tape library with the supported tape drive models TS1140 (3592 E07) and TS1150 (3592 E08)
The TS3500 tape library, which supports the 3592 J1A, TS1120, TS1130, TS1140, or TS1150 tape drive models
 
Note: TS7760-VEC tape attach configurations must use Cisco switches (16 Gbps).
With Release R4.2, the TS7700 supports tiering to cloud, which uses Transparent Cloud Tiering (TCT) to offload data to a public or private cloud. Both the physical tape tier and the cloud tier are policy-managed options. In R4.2, cloud and tape are mutually exclusive options for a TS7700 cluster.
Release 4.0 introduced support for the TS4500 tape library when attached to models TS7740-V07, TS7720T-VEB, and TS7760. The TS3500 tape library can still be attached to all TS7700 tape attach models.
Release 3.3 introduced support for TS1150 along with heterogeneous support for two different tape drive models at the same time, as described in 7.1.5, “TS7700 tape library attachments, drives, and media” on page 266.
The following operator interfaces for providing information about the TS7700 are available:
Object access method (OAM) commands are available at the host operator console. These commands provide information about the TS7700 in stand-alone and grid environments. This information represents the host view of the components within the TS7700. Other z/OS commands can be used against the virtual addresses, as shown in the hedged example after this list. This interface is described in Chapter 10, “Host Console operations” on page 585.
Web-based management functions are available through web-based user interfaces (UIs). The following browsers can be used to access the web interfaces:
 – Mozilla Firefox ESR 31.x, 38.x, and 45.x
 – Microsoft Internet Explorer 9.x, 10.x, and 11
 – Google Chrome 39.x and 42.x
 – Microsoft Edge 25.x
Enable cookies and disable the browser's pop-up blocking function for MI usage. Unsupported web browser versions might cause some MI windows to display incorrectly.
Considering the overall TS7700 implementation, two different web-based functions are available:
 – The tape library GUI, which enables management, configuration, and monitoring of the configured tape library in tape attach configurations. The TS4500 and TS3500 are the supported tape libraries for the TS7700 implementation.
 – The TS7700 MI, which is used to run all TS7700 configuration, setup, and monitoring actions.
 
Call Home Interface: This interface is activated on the TS3000 System Console (TSSC) and provides helpful information to IBM Service, Support Center, and Development personnel. It also provides a method to connect IBM storage systems with IBM remote support, also known as Electronic Customer Care (ECC). No user data or content is included in the call home information.
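As a hedged illustration of the host view that is available through the first interface in this list (COMPLIB is a placeholder for an SMS-defined composite library name, not a value from this chapter), commands of the following form can be issued at the host console:
D SMS,LIB(ALL),DETAIL (list the defined composite and distributed libraries)
D SMS,LIBRARY(COMPLIB),DETAIL (display the detailed status of one library)
LIBRARY DISPDRV,COMPLIB (display the status of the library's virtual tape drives)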
9.2 The tape library management GUI
The tape library management GUI web interface enables the user to monitor and configure most of the library functions from the web. The tape library GUI can be started from the tape library expanded page on TS7700 MI by clicking the tape library image. Starting with R4.0, the tape attach TS7700 can be configured with the TS3500 and TS4500 tape libraries.
Figure 9-1 shows the TS3500 tape library GUI initial window with the System Summary.
Figure 9-1 TS3500 tape library GUI initial window
Figure 9-2 shows the TS4500 Management GUI initial Summary window. Notice that the GUI's general appearance and the presentation of warning messages are similar to the MIs of the TS7700 and other IBM storage products.
Figure 9-2 The TS4500 tape library management GUI
The tape library management GUI windows are used during the hardware installation phase of the TS7700 tape attach models. For more information about installation tasks, see 9.5.1, “The tape library with the TS7700T cluster” on page 524.
9.3 TS7700 Management Interface
The TS7700 MI is the primary interface to monitor and manage the TS7700. The TS7700 GUI is accessed through TCP/IP, by entering the TS7700 IP address in your web browser. The following web browsers are currently supported:
Mozilla Firefox ESR 31.x, 38.x, 45.x
Microsoft Internet Explorer 9, 10, and 11
Google Chrome 39.x and 42.x
Microsoft Edge 25.x
The current TS7700 graphical user interface (GUI) implementation has a look and feel similar to the MIs adopted by other IBM Storage products.
 
9.3.1 Connecting to the Management Interface
To connect to the TS7700 MI, complete the following steps:
1. Ensure that the TS7700 is installed, configured, and online.
2. In the address field of a supported web browser, enter http://x.x.x.x
(where x.x.x.x is the virtual IP address that was assigned during installation). Press Enter or click Go in the web browser.
3. The virtual IP is one of three IP addresses that are provided during installation. To access a specific cluster, enter the cluster IP address as shown in Example 9-1, where Cluster 0 is accessed directly.
Example 9-1 IP address to connect to Cluster 0 in a grid
http://x.x.x.x/0/Console
4. If a local name server is used, where names are associated with the virtual IP address, then the cluster name rather than the hardcoded address can be used for reaching the MI.
5. The login window for the MI displays as shown in Figure 9-3. Enter the default login name as admin and the default password as admin.
Figure 9-3 TS7700 MI login
After logging in, the user is presented with the Grid Summary page, as shown in Figure 9-4.
After security policies are implemented locally at the TS7700 cluster or by using centralized role-based access control (RBAC), a unique user identifier and password can be assigned by the administrator. The user profile can be modified to provide only the functions that are applicable to the role of the user. Not all users have access to the same functions or views through the MI.
Figure 9-4 shows an example of the Grid Summary window of a TS7700 grid. It shows a four-cluster grid, its components, and the health status of those components. The composite library is depicted as a data center, with all members of the grid on the raised floor. Notice that the TS7760 (the two clusters on the left) has a distinct visual appearance when compared to the TS7740 and TS7720 on the right of the picture.
Figure 9-4 MI Grid summary
Each cluster is represented by an image of the TS7700 frame, displaying the cluster’s nickname and ID, and the composite library name and Library ID.
The health of the system is checked and updated automatically at times that are determined by the TS7700. Data that is displayed in the Grid Summary window is not updated in real time. The Last Refresh field, in the upper-right corner, reports the date and time that the displayed data was retrieved from the TS7700. To populate the summary with an updated health status, click the Refresh icon near the Last Refresh field in the upper-right corner of Figure 9-4.
The health status of each cluster is indicated by a status sign next to its icon. The legend explains the meaning of each status sign. To see more information about a specific cluster, click that component’s icon. In the example that is shown in Figure 9-4 on page 348, the TS7720T has a Warning or Degraded indication on it.
9.3.2 Using the TS7700 management interface
This section describes how to use the TS7700 management interface (MI).
Login window
Each cluster in a grid uses its own login window, which is the first window that opens when the cluster URL is entered in the browser address field. The login window shows the name and number of the cluster to be accessed. After logging in to a cluster, other clusters in the same grid can be accessed from the same web browser window.
Navigating between windows
Navigation between MI windows can be done by clicking active links on a window or on the banner, or by selecting a menu option or icon.
Banner
The banner is common to all windows of the MI. The banner elements can be used to navigate to other clusters in the grid, run some user tasks, and locate additional information about the MI. The banner is located across the top of the MI web page, and allows a secondary navigation scheme for the user.
Figure 9-5 shows an example of the TS7700 MI banner element.
Figure 9-5 Management Interface Banner
The left field on the banner shows the sequence of selections that the user made in the TS7700 MI website hierarchy: the breadcrumb trail. The user can navigate directly to a different page by hovering the mouse over that field and clicking to select a different page. The field at the right of the banner (showing admin in Figure 9-5) shows the user currently logged in to the MI. Hovering the mouse over it gives the choices of logging out, changing the user password, and turning low graphics mode on or off.
The last field at the right of the banner (the question mark symbol) provides information about the current MI window. In addition, you can start learning and tutorials, open the knowledge center, and check the level of the installed knowledge center by hovering the mouse over it and clicking the desired option.
Status and event indicators
Status and alert indicators appear at the bottom of each MI window. These indicators provide a quick status check for important cluster and grid properties. Grid indicators provide information for the entire grid. These indicators are displayed on the left and right corners of the window footer, and include tasks and events.
Figure 9-6 shows some examples of status and events that can be displayed from the Grid Summary window.
Figure 9-6 Status and Events indicators in the Grid Summary pane
All cluster indicators provide information for the accessing cluster only, and are displayed only on MI windows that have a cluster scope. MI also provides ways to filter, sort, and change the presentation of different tables in the MI. For example, the user can hide or display a specific column, modify its size, sort the table results, or download the table row data in a comma-separated value (CSV) file to a local directory.
For a complete description of tasks, the behavior of health and status icons, and a description of how to optimize the table presentations, see the Using the Management Interface topic in TS7700 R4.1 IBM Knowledge Center:
Library Request Command window
The LI REQ command panel in the MI expands the system administrator's interaction with the TS7700 subsystem. By using the LI REQ panel, the storage administrator can run a standard LI REQ command directly from the MI against a grid (also known as the composite library) or a specific cluster (also known as a distributed library), with no need to be logged in to the z/OS host system.
The LI REQ panel is minimized and docked at the bottom of the MI window. The user only needs to click it (at the lower right) to open the LI REQ command panel. Figure 9-7 shows the LI REQ command panel and its operation.
Figure 9-7 LI REQ Command window and usage
By default, the only user role that is allowed to run LI REQ commands is the Administrator. LI REQ commands are logged in the Tasks list.
 
Remember: The LI REQ option is shown at the bottom of the MI windows only for users with the Administrator role, and is not displayed on the host console.
Figure 9-8 shows an example of a library request command reported in the Tasks list, and shows how to get more information about the command by selecting Properties or clicking See details in the MI window.
Figure 9-8 LI REQ command log and information
 
 
Important: LI REQ commands that are issued from this window are not presented in the host console logs.
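As a hedged illustration only (COMPLIB and DISTLIB are placeholder composite and distributed library names), the commands that can be entered through this panel take the same form that WP101091 documents for the host console; in the MI panel, the target library is selected in the window, so only the keywords that follow the library name are typically entered:
LI REQ,COMPLIB,STATUS,GRID (display copy, reconciliation, and ownership takeover status for the grid)
LI REQ,DISTLIB,STATUS,GRIDLINK (display the health and performance of the grid Ethernet links)
LI REQ,DISTLIB,CACHE (display cache partition sizes and contents for one cluster)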
For a complete list of available LI REQ commands, their usage, and respective responses, see the current IBM TS7700 Series z/OS Host Command Line Request User’s Guide, WP101091:
Standard navigation elements
This section of the TS7700 MI provides functions to manage and monitor the health of the TS7700. Listed next are the expandable interface windows that are shown on the left side of the MI Summary window. The exception is the Systems window, which is displayed only when the cluster is part of a grid.
More items might also show, depending on the actual cluster configuration:
Systems icon This window shows the cluster members of the grid and grid-related functions.
Monitor icon This window gathers the events, tasks, and performance information about one cluster.
Light cartridge icon Information that is related to virtual volumes is available here.
Dark cartridge icon Information that is related to physical cartridges and the associated tape library are under this window.
Notepad icon This window contains the constructs settings.
Blue man icon Under the Access icon, all security-related settings are grouped.
Gear icon Cluster general settings, feature licenses, overrides, SNMP, write protect mode, and backup and restore settings are under the Gear icon.
Tool icon Ownership takeover mode, network diagnostics, data collection, and other repair/recovery-related activities are under this icon.
MI Navigation
Use this window (see Figure 9-9) for a visual summary of the TS7700 MI Navigation.
Figure 9-9 TS7700 MI Navigation
9.3.3 The Systems icon
The TS7700 MI windows that are gathered under the Systems icon can help to quickly identify cluster or grid properties, and to assess the cluster or grid “health” at a glance.
 
Tip: The Systems icon is only visible when the accessed TS7700 Cluster is part of a grid.
Grid Summary window
The Grid Summary window is the first window that opens in the web interface when the TS7700 is online, and the cluster that is currently being accessed by MI is part of a grid. This window can be used to quickly assess the health of all clusters in the grid, and as a starting point to investigate cluster or network issues.
 
Note: If the accessing cluster is a stand-alone cluster, the Cluster Summary window is shown upon login instead.
This window shows a summary view of the health of all clusters in the grid, including family associations, host throughput, and any incoming copy queue. Figure 9-10 shows an example of a Grid Summary window, including the pop-up windows.
Grid Summary window includes the following information:
Cluster throttling
Host throughput rate (sampled before compression by the host adapters within the cluster)
Copy queue size and type
Running tasks and events
Figure 9-10 Grid Summary and pop-up windows
There is a diskette icon on the right of the Actions button. Clicking the icon saves a CSV-formatted file with a summary of the grid component information.
Actions menu on the Grid Summary page
Use this menu to change the appearance of clusters on the Grid Summary window or the grid identification details. When the grid includes a disk-only cluster, this menu can also be used to change the removal threshold settings for it, or for the cache resident partition (CP0) of a TS7700T (tape attach) or TS7760C (cloud attach) cluster. The Actions menu window is shown in Figure 9-11.
Figure 9-11 Grid Summary window and Actions list
This menu features the following tasks:
Order by Cluster ID
Select this option to group clusters according to their cluster ID number. Ordered clusters are shown first from left to right, then front to back. Only one ordering option can be selected at a time.
 
Note: The number that is shown in parentheses in breadcrumb navigation and cluster labels is always the cluster ID.
Order by Families
Select this option to group clusters according to their family association.
Show Families
Select this option to show the defined families on the grid summary window. Cluster families are used to group clusters in the grid according to a common purpose.
Cluster Families
Select this option to add, modify, or delete cluster families used in the grid.
Modify Grid Identification
Use this option to change grid nickname or description.
Temporary Removal Thresholds
This option is used to temporarily change the removal thresholds of the disk-only clusters in the grid.
Vary Devices Online
Select this option to vary devices online for the selected cluster. A blue informational icon is shown on the lower left corner of the cluster image if logical devices of that cluster need to be varied online. Figure 9-12 shows an example of varying a cluster's devices online. Notice the information icon on the cluster that is reporting devices offline.
Figure 9-12 Vary cluster devices online
This menu option is available only if control unit initiated reconfiguration (CUIR) is enabled by a LI REQ command and the Automatic Vary Online (AONLINE) notification is disabled. For more information about the use of the LI REQ commands, see Chapter 10, “Host Console operations” on page 585.
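As a hedged sketch of that enablement (DISTLIB is a placeholder distributed library name; verify the exact keywords for your code level in WP101091), the CUIR settings are changed with commands of the following form:
LI REQ,DISTLIB,CUIR,SERVICE,ENABLE (enable CUIR service notifications for this cluster)
LI REQ,DISTLIB,CUIR,AONLINE,DISABLE (disable automatic vary online so that this menu option is usable)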
Fence Cluster, Unfence Cluster
Select Fence Cluster to place a selected cluster in a fence state. If a cluster is already in a fence state, the option Unfence Cluster is shown instead.
Select Unfence Cluster to unfence a selected cluster that is currently in a fence state. These functions are part of the grid resilience improvements package. The Fence Cluster option in the Actions menu allows the user (by default, the Administrator) to manually remove (fence) from the grid a cluster that has been determined to be sick or not functioning properly. Fencing a cluster isolates it from the rest of the grid. From this window, the administrator can fence the local cluster (the one being accessed by the MI) or a remote cluster in the grid.
 
Note: Remote cluster fence is enabled only when all clusters in a grid are at R4.1.2 (or later) code level.
The user can decide what action will be taken by the sick cluster after the fence cluster action is selected:
 – Options for the local cluster:
 • Forced offline
 • Reboot
 • Reboot and stay offline
 – Options for a remote cluster (from any other cluster in the grid besides the cluster under suspicion):
 • Send an alert
 • Force cluster offline
 • Reboot
 • Reboot and stay offline or isolate from the grid
Figure 9-13 shows the TS7700 MI sequence to manually fence a cluster.
Figure 9-13 Fence a cluster operation
For more information about cluster fence function and proper usage, see 2.3.37, “Grid resiliency functions” on page 99.
Figure 9-14 shows how to manually unfence a cluster using the TS7700 MI.
Figure 9-14 Unfence cluster sequence
 
Cluster Families window
To view information and run actions that are related to TS7700 cluster families, use the window that is shown in Figure 9-15.
Figure 9-15 MI Add Cluster Families: Assigning a cluster to a family
To view or modify cluster family settings, first verify that these permissions are granted to the assigned user role. If the current user role includes cluster family permissions, select Modify to run the following actions:
Add a family: Click Add to create a new cluster family. A new cluster family placeholder is created to the right of any existing cluster families. Enter the name of the new cluster family in the active Name text box. Cluster family names must be 1 - 8 characters in length and composed of Unicode characters. Each family name must be unique. Clusters are added to the new cluster family by relocating a cluster from the Unassigned Clusters area by using the method that is described next in “Move a cluster”.
Move a cluster: One or more clusters can be dragged between existing cluster families, from the Unassigned Clusters area to a new cluster family, or from an existing cluster family to the Unassigned Clusters area:
 – Select a cluster: A selected cluster is identified by its highlighted border. Select a cluster from its resident cluster family or the Unassigned Clusters area by using one of these methods:
 • Clicking the cluster with the mouse.
 • Using the Spacebar key on the keyboard.
 • Pressing and holding the Shift key while selecting clusters to select multiple clusters at one time.
 • Pressing the Tab key on the keyboard to switch between clusters before selecting one.
 – Move the selected cluster or clusters:
 • Click and hold the mouse on the cluster, and drag the selected cluster to the destination cluster family or the Unassigned Clusters area.
 • Use the arrow keys on the keyboard to move the selected cluster or clusters right or left.
 
Consideration: An existing cluster family cannot be moved within the Cluster Families window.
Delete a family: To delete an existing cluster family, click the X in the upper-right corner of the cluster family to delete it. If the cluster family to be deleted contains any clusters, a warning message is displayed. Click OK to delete the cluster family and return its clusters to the Unassigned Clusters area. Click Cancel to abandon the delete action and retain the selected cluster family.
Save changes: Click Save to save any changes that are made to the Cluster Families window and return it to read-only mode.
 
Remember: Each cluster family must contain at least one cluster. An attempt to save a cluster family that does not contain any clusters results in an error message. No changes are made, and the Cluster Families window remains in edit mode.
Grid Identification properties window
To view and alter identification properties for the TS7700 grid, use this option. In a multigrid environment, use this window to clearly identify a particular composite library, making it easier to distinguish, operate, and manage this TS7700 grid (avoiding operational mistakes due to ambiguous identification).
To change the grid identification properties, edit the available fields and click Modify. The following fields are available:
Grid nickname: The grid nickname must be 1 - 8 characters in length and composed of alphanumeric characters with no spaces. The characters at (@), period (.), dash (-), and plus sign (+) are also allowed.
Grid description: A short description of the grid. Up to 63 characters can be used.
Lower removal threshold
Select Actions → Temporary Removal Threshold in the Grid Summary view to lower the removal threshold for any disk-only cluster in the grid, or for the cache resident partition of a cloud attach or tape attach cluster that possesses a physical tape library or cloud tier.
Figure 9-16 shows the Temporary Removal Threshold window.
Figure 9-16 Setting the Temporary Removal Threshold
Grid health and details
In the Grid Summary view, a cluster is in a normal (healthy) state when no warning or degradation icon is displayed at the lower left side of the cluster's representation in the MI. Hovering the mouse pointer over the lower right corner of the cluster's picture in the Grid Summary window shows a message stating The health state of the [cluster number] [cluster name] is Normal, confirming that this cluster is in a normal state.
Exceptions in the cluster state are represented in the Grid Summary window by a little icon at the lower right side of the cluster’s picture. Additional information about the status can be viewed by hovering your cursor over the icon. See Figure 9-6 on page 351 for a visual reference of the icons and how they show up on the Grid Summary page.
Figure 9-17 shows the appearance of the degraded icon, and the possible reasons for degradation.
Figure 9-17 Warning or Degraded Icon
Figure 9-18 shows the icons for other possible statuses for a cluster that can be viewed on the TS7700 Cluster or Grid Summary windows.
Figure 9-18 Other cluster status icons
For a complete list of icons and meanings, see IBM Knowledge Center:
In the Grid Summary window, an alert icon indicates throttling activity on a cluster within the grid. Throttling can severely affect the overall performance of a cluster, and might result in job execution delays that affect the operation schedule. See Figure 9-19 for an example.
Figure 9-19 Clusters throttling in a two-cluster grid
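For a quick first check of the throttling values from the LI REQ panel or the host console, a hedged starting point (DISTLIB is a placeholder distributed library name) is to display the current cluster settings, which include the throttling controls:
LI REQ,DISTLIB,SETTING (report the current alert, cache, and throttle settings for the cluster)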
For practical considerations about this topic, what it means, and what can be done to avoid it, see Chapter 11, “Performance and monitoring” on page 623.
For more information about throttling in a TS7700 grid, see the IBM TS7700 Series Best Practices - Understanding, Monitoring, and Tuning the TS7700 Performance, WP101465:
Cluster Summary window
By clicking the icon of an individual cluster in the grid, or by selecting a specific cluster in the cluster navigation element in the banner, the Cluster Summary window can be accessed. In a stand-alone configuration, this is the first icon that is available in the MI.
Figure 9-20 shows an example of the Cluster Summary window.
Figure 9-20 Cluster Summary window with TS7760T and TS4500 tape library
The Cluster Information can be displayed by hovering the cursor over the components, as shown in Figure 9-20. In the resulting box, the following information is available:
Cluster components health status
Cluster Name
Family to which this cluster is assigned
Cluster model
Licensed Internal Code (LIC) level for this cluster
Description for this cluster
Disk encryption status
Cache size and occupancy (Cache Tube)
There is a diskette icon to the right of the Actions button. Clicking that icon downloads a CSV-formatted file with the meaningful information about that cluster.
Cluster Actions menu
By using the options under this menu, the user can change the state or settings of a cluster. Also, when the selected cluster is a tape attach TS7700 (a tape library is present), this menu can be used to change the Copy Export settings.
From the Actions menu, the cluster state can be changed to perform a specific task, such as preparing for a maintenance window, performing a disaster recovery drill, or moving machines to a different IT center. Depending on the current cluster state, different options are displayed.
Table 9-1 describes options available to change the state of a cluster.
Table 9-1 Options to change the cluster state
If the current state is Online, the following selections are available:
 – Service Prep: The following conditions must all be met first:
 • The cluster is online.
 • No other clusters in the grid are in service prep mode.
 • At least one other cluster must remain online.
 Caution: If only one other cluster remains online, a single point of failure exists when this cluster enters service prep mode.
 Select Service Prep to confirm this change.
 – Force Shutdown: Select Force Shutdown to confirm this change.
 Important: After a shutdown operation is initiated, it cannot be canceled.
If the current state is Service Pending, the following selections are available:
 – Force Service: Use this option when a stalled operation is preventing the cluster from completing service prep. Select Force Service to confirm this change.
 All but one cluster in a grid can be placed into service mode, but it is advised that only one cluster be in service mode at a time. If more than one cluster is in service mode, and service mode is canceled on one of them, that cluster does not return to normal operation until service mode is canceled on all clusters in the grid.
 – Return to Normal: Select this option to cancel a previous service prep change and return the cluster to the normal online state. Select Return to Normal to confirm this change.
 – Force Shutdown: Select Force Shutdown to confirm this change.
 Important: After a shutdown operation is initiated, it cannot be canceled.
If the current state is Shutdown (offline), the user interface is not available. After an offline cluster is powered on, it attempts to return to normal. If no other clusters in the grid are available, skipping hot token reconciliation can be tried.
If the current state is Online-Pending or Shutdown-Pending, the menu is disabled. No options to change the state are available while a cluster is in a pending state.
Going offline and coming online considerations
Whenever a member cluster of a grid goes offline or comes back online, it needs to exchange information with its peer members regarding the status of the logical volumes that are controlled by the grid. Each logical volume is represented by a token, which contains all of the pertinent information regarding that volume, such as its creation date, which cluster it belongs to, which cluster is supposed to have a copy of it, and what kind of copy it should be.
Each cluster in the grid keeps its own copy of the collection of tokens, representing all of the logical volumes that exist in the grid, and those copies are kept updated at the same level by the grid mechanism. When coming back online, a cluster needs to reconcile its own collection of tokens with the peer members of the grid, making sure that it matches the status of the grid inventory. This reconcile operation is also referred to as a token merge.
Here are some items to consider when going offline and coming online:
Pending token merge
A cluster in a grid configuration attempts to merge its token information with all of the other clusters in the grid as it goes online. When no other clusters are available for this merge operation, the cluster attempting to go online remains in the going online, or blocked, state indefinitely as it waits for the other clusters to become available for the merge operation. If a pending merge operation is preventing the cluster from coming online, there is an option to skip the merge step.
Click Skip Step to skip the merge operation. This button is only available if the cluster is in a blocked state while waiting to share pending updates with one or more unavailable clusters. When you click Skip Step, pending updates against the local cluster might remain undetected until the unavailable clusters become available.
Ownership takeover
If ownership takeover was set at any of the peers, the possibility exists that old data can surface to the host if the cluster is forced online. Therefore, before attempting to force this cluster online, it is important to know whether any peer clusters have ever enabled ownership takeover mode against this cluster while it was unavailable. In addition, if this cluster is in service, automatic ownership takeover from unavailable peers is also likely and must be considered before attempting to force this cluster online.
If multiple clusters were offline and must be forced back online, force them back online in the reverse order that they went down in (for example, the last cluster down is the first cluster up). This process ensures that the most current cluster is available first to educate the rest of the clusters forced online.
Autonomic Ownership Takeover Manager (AOTM)
If AOTM is installed and configured, it attempts to determine whether all unavailable peer clusters are in a failed state. If it determines that an unavailable cluster is not in a failed state, it blocks the attempt to force the cluster online, because the forced-online cluster might take ownership of volumes that it should not take ownership of. If AOTM discovers that all unavailable peers failed and network issues are not to blame, this cluster is then forced into an online state.
After it is online, AOTM can further enable ownership takeover against the unavailable clusters if the AOTM option is enabled. Additionally, manual ownership takeover can be enabled, if necessary.
Shutdown restrictions
To shut down a cluster, it is necessary to be logged in to this system. To shut down another cluster, log out of the current cluster and log in to the cluster to shut down. For more information, see “Cluster Shutdown window” on page 371.
 
Note: After a shutdown or force shutdown action, the targeted cluster (and associated cache) are powered off. A manual intervention is required on site where the cluster is physically located to power it up again.
A cluster shutdown operation that is started from the TS7700 MI also shuts down the cache. The cache must be restarted before any attempt is made to restart the TS7700 cluster.
Service mode window
Use the window that is shown in Figure 9-21 to put a TS7700 cluster into service mode, whenever required by a service action or any disruptive activity on a cluster that is a member of a grid. See Chapter 2, “Architecture, components, and functional characteristics” on page 15 for more information.
 
Remember: Service mode is only possible for clusters that are members of a grid.
Figure 9-21 Cluster Summary: Preparing for service
Service mode enables the subject cluster to leave the grid gracefully, surrendering ownership of its logical volumes to the peer clusters in the grid as required, while those clusters attend to the tasks being performed by the client. The user continues operating smoothly from the other members of the grid, provided that consistent copies of the volumes that reside in this cluster also exist elsewhere in the grid and that the host has access to those clusters.
Before changing a cluster state to service, the user needs to vary offline all logical devices that are associated with this cluster on the host side. No host access is available to a cluster that is in service mode.
 
Note: On the host side, vary logical drives online on the remaining clusters of the grid to ensure enough mount points for the system to continue operation before setting a cluster to service mode.
The R4.1.2 level of code introduces CUIR. CUIR helps to reduce client involvement and simplify the process that is necessary to start service preparation on a grid member.
Before code level R4.1.2, the user needed to vary offline all logical drives that are associated with the cluster going into service before changing the cluster state to service. This process had to be done across all LPARs and sysplexes attached to the cluster. Any long-running jobs that use these pending-offline devices continue to run to completion. Thus, the user should issue SWAP commands against such jobs, causing them to move to a different logical drive in a different cluster of the grid. After the cluster maintenance is completed and the IBM Service Support Representative (SSR) cancels service for the cluster, the user needed to vary all the devices online again across all LPARs and sysplexes.
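As a hedged illustration of that manual procedure (the device numbers 1100-110F and 2100 are hypothetical), the operator commands on each LPAR take the following form:
V 1100-110F,OFFLINE (vary the logical drives of the cluster entering service offline)
SWAP 1100,2100 (request a dynamic device reconfiguration swap from a drive on the service cluster to a drive on another cluster)
V 1100-110F,ONLINE (vary the drives online again after service is canceled)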
CUIR can automate this entire process when fully implemented. For more information about CUIR, see 10.9, “CUIR for tape” on page 616.
 
Important: Forcing service mode causes jobs that are currently mounted, or that use resources that are provided by the targeted cluster, to fail.
Whenever a cluster state is changed to service, the cluster first enters service preparation mode; when the preparation stage finishes, it goes automatically into service mode.
During the service preparation stage, the cluster monitors the status of current host mounts and sync copy mounts that target the local Tape Volume Cache (TVC), monitors and finishes the copies that are currently running, and makes sure that there are no remote mounts targeting the local TVC. When all running tasks have ended, and no more pending activities are detected, the cluster finishes the service preparation stage and enters service mode.
In a TS7700 grid, service preparation can occur on only one cluster at any one time. If service prep is attempted on a second cluster before the first cluster has entered service mode, the attempt fails. After service prep has completed for one cluster, and that cluster has entered service mode, another cluster can be placed in service prep. A cluster in service prep automatically cancels service prep if a peer in the grid experiences an unexpected outage while the service prep process is still active. Although all clusters except one can be in service mode at the same time within a grid, the preferred approach is to have only one cluster in service mode at a time.
Be aware that when multiple clusters are in service mode simultaneously, they need to be brought back to normal mode at the same time. Otherwise, a cluster leaving service mode does not reach the ONLINE state until the remaining clusters also leave service mode. Only then do those clusters merge their tokens and rejoin the grid as ONLINE members.
 
Remember: If more than one cluster is in Service mode, and service is canceled on one of them, that cluster does not return to an online state until Service mode is canceled on all other clusters in this grid.
For a disk-only TS7700 cluster or the CP0 partition in a grid, click Lower Threshold to lower in advance the threshold at which logical volumes are removed from cache. See “Temporary removal threshold” on page 167 for more information about the Temporary Removal Threshold. The following items are available when viewing the current operational mode of a cluster.
Cluster State can be any of the following states:
Normal: The cluster is in a normal operation state. Service prep can be initiated on this cluster.
Service Prep: The cluster is preparing to go into service mode. The cluster is completing operations (that is, copies owed to other clusters, ownership transfers, and lengthy tasks, such as inserts and token reconciliation) that require all clusters to be synchronized.
Service: The cluster is in service mode. The cluster is normally taken offline in this mode for service actions or to activate new code levels.
Depending on the mode that the cluster is in, a different action is presented by the button under the Cluster State display. This button can be used to place the TS7700 into service mode or back into normal mode:
Prepare for Service Mode: This option puts the cluster into service prep mode and enables the cluster to finish all current operations. If allowed to finish service prep, the cluster enters Service mode. This option is only available when the cluster is in normal mode. To cancel service prep mode, click Return to Normal Mode.
Return to Normal Mode: Returns the cluster to normal mode. This option is available if the cluster is in service prep or service mode. A cluster in service prep mode or Service mode returns to normal mode if Return to Normal Mode is selected.
A window opens to confirm the decision to change the cluster state. Click Service Prep or Normal Mode to change to the new cluster state, or Cancel to abandon the change operation.
Cluster Shutdown window
Use the window that is shown in Figure 9-22 to remotely shut down a TS7700 cluster for a planned power outage or in an emergency.
Figure 9-22 MI Cluster: Forcing a cluster to shutdown
This window is visible from the TS7700 MI whether the TS7700 is online or in service. If the cluster is offline, MI is not available, and the error HYDME0504E The cluster you selected is unavailable is presented.
 
Note: After a shutdown or force shutdown action, the targeted cluster (and associated cache) are powered off. A manual intervention is required on the site where the cluster is physically located to power it up again.
Only the cluster where a connection is established can be shut down by the user. To shut down another cluster, drop the current cluster connection and log in to the cluster that must be shut down.
Before the TS7700 can be shut down, decide whether the circumstances provide adequate time to perform a clean shutdown. A clean shutdown is not mandatory, but it is suggested for members of a TS7700 grid configuration. A clean shutdown requires putting the cluster in Service mode first. Make sure that no jobs or copies are targeting or being sourced from this cluster during shutdown.
Not only are jobs that use this specific cluster affected, but copies are also canceled. Eligible data that has not yet been copied to the remaining clusters cannot be processed during service and downtime. If the cluster cannot be placed in service mode, use the force shutdown option.
 
Attention: A forced shutdown can result in lost access to data and job failure.
A cluster shutdown operation that is started from the TS7700 MI also shuts down the cache. The cache must be restarted before any attempt is made to restart the TS7700.
If the Shutdown option is selected from the action menu for a cluster that is still online, as shown at the top of Figure 9-22 on page 371, a message alerts the user to put the cluster in service mode first before shutting down, as shown in Figure 9-23.
 
Note: In normal situations, setting the cluster into service mode before shutdown is always recommended.
Figure 9-23 Warning message and Cluster Status during forced shutdown
It is still possible to force a shutdown without going into service mode by entering the password and clicking the Force Shutdown button, if needed (for example, during a DR test that simulates a cluster failure, in which case placing the cluster in service does not apply).
In Figure 9-23, the Online State and Service State fields in the message show the operational status of the TS7700 and appear over the button that is used to force its shutdown. The lower-right corner of the picture shows the cluster status that is reported by the message.
The following options are available:
Cluster State. The following values are possible:
 – Normal. The cluster is in an online, operational state and is part of a TS7700 grid.
 – Service. The cluster is in service mode or is a stand-alone system.
 – Offline. The cluster is offline. It might be shutting down in preparation for service mode.
Shutdown. This button initiates a shutdown operation:
 – Clicking Shutdown in Normal mode.
If Shutdown is selected while in normal mode, a warning message suggesting that you set the cluster to Service mode before proceeding opens, as shown in Figure 9-23 on page 372.
To place the cluster in service mode, select Modify Service Mode. To continue with the force shutdown operation, enter the password and click Force Shutdown. To abandon the shutdown operation, click Cancel.
 – Clicking Shutdown in Service mode.
When Shutdown is selected while in Service mode, you are prompted for a confirmation. Click Shutdown to continue, or click Cancel to abandon the shutdown operation.
 
Important: After a shutdown operation is initiated, it cannot be canceled.
When a shutdown operation is in progress, the Shutdown button is disabled and the status of the operation is displayed in an information message. The shutdown sequence features the following steps:
1. Going offline.
2. Shutting down.
3. Powering off.
4. Shutdown completes.
Verify that power to the TS7700 and to the cache is shut down before attempting to restart the system.
A cluster shutdown operation that is started from the TS7700 MI also shuts down the cache. The cache must be restarted first and allowed to achieve an operational state before any attempt is made to restart the TS7700.
Cluster Identification Properties window
Select this option to view and alter cluster identification properties for the TS7700.
The following information that is related to cluster identification is displayed. To change the cluster identification properties, edit the available fields and click Modify. The following fields are available:
Cluster nickname: The cluster nickname must be 1 - 8 characters in length and composed of alphanumeric characters. Blank spaces and the characters at (@), period (.), dash (-), and plus sign (+) are also allowed. Blank spaces cannot be used in the first or last character position.
Cluster description: A short description of the cluster. Up to 63 characters can be used.
 
Note: Copy and paste might bring in invalid characters. Manual input is preferred.
Cluster health and detail
The health of the system is checked and updated automatically from time to time by the TS7700. The status information that is reflected on this window is not real time; it shows the status as of the last check. To repopulate the summary window with the updated health status, click the Refresh icon. This operation takes some minutes to complete. If this cluster is operating in Write Protect Mode, a lock icon is shown in the middle right part of the cluster image.
Figure 9-20 on page 365, the Cluster Summary page, depicts a TS7760T with a TS4500 tape library attached. Within the cluster front view page, the cluster badge (top of the picture) provides a general description of the cluster, such as model, name, family, Licensed Internal Code level, cluster description, and cache encryption status. Hovering the cursor over locations within the picture of the frame shows the health status of different components, such as the network gear (at the top), the TVC controller and expansion enclosures (bottom and halfway up), and the engine server along with the internal 3957-Vxx disks (the middle section). The summary of cluster health is shown in the lower-right status bar, and also in the badge health status (over the frame).
Figure 9-24 shows the back view of the cluster summary window and health details. The components that are depicted in the back view are the Ethernet ports and host Fibre Channel connection (FICON) adapters for this cluster. Under the Ethernet tab, the user can see the ports that are dedicated to the internal network (the TSSC network) and those that are dedicated to the external (client) network. The assigned IP addresses are displayed. Details about the ports are shown (IPv4, IPv6, and the health). In the grid Ethernet ports, information about links to the other clusters, data rates, and cyclic redundancy check (CRC) errors are displayed for each port in addition to the assigned IP address and Media Access Control (MAC) address.
The host FICON adapter information is displayed under the Fibre tab for a selected cluster, as shown in Figure 9-24. The available information includes the adapter position and general health for each port.
Figure 9-24 Back view of the cluster summary with health details
To display the different area health details, hover the cursor over the component in the picture.
Cache expansion frame
The expansion frame view displays details and health for a cache expansion frame that is attached to the TS7720 cluster. To open the expansion frame view, click the small image corresponding to a specific expansion frame below the Actions button.
 
Tip: The expansion frame icon is displayed only if the accessed cluster includes an expansion frame.
Figure 9-25 shows the Cache Expansion frame details and health view through the MI.
Figure 9-25 Cache expansion frame details and health
Physical library and tape drive health
Click the physical tape library icon, which is shown on a TS7700 tape-attached Cluster Summary window, to check the health of the tape library and tape drives. Figure 9-26 shows a TS4500 tape library that is attached to a TS7760 cluster.
Figure 9-26 TS4500 tape library expanded page and links
 
Consideration: If the cluster is not a tape-attached model, the tape library icon does not display on the TS7700 MI.
The library details and health are displayed as listed in Table 9-2.
Table 9-2 Library health details
Detail
Definition
Physical library type - virtual library name
The type of physical library (TS3500 or TS4500) accompanied by the name of the virtual library that is established on the physical library.
Tape Library Health
Fibre Switch Health
Tape Drive Health
The health states of the library and its main components. The following values are possible:
Normal
Degraded
Failed
Unknown
State
Whether the library is online or offline to the TS7700.
Operational Mode
The library operational mode. The following values are possible:
Auto
Paused
Frame Door
Whether a frame door is open or closed.
Virtual I/O Slots
Status of the I/O station that is used to move cartridges into and out of the library. The following values are possible:
Occupied
Full
Empty
Physical Cartridges
The number of physical cartridges assigned to the identified virtual library.
Tape Drives
The number of physical tape drives available, as a fraction of the total. Click this detail to open the Physical Tape Drives window.
The Physical Tape Drives window shows all the specific details about a physical tape drive, such as its serial number, its drive type, whether the drive has a cartridge mounted on it, and what it is mounted for. To see the same information, such as drive encryption and tape library location, about the other tape drives, select a specific drive and click Select Action → Details.
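A similar drive summary is available through the library request interface. As a hedged example (DISTLIB is a placeholder distributed library name), the following command reports the physical drives of a tape attach cluster:
LI REQ,DISTLIB,PDRIVE (list the physical drives, their types, and any mounted volumes)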
9.3.4 The Monitor icon
The collection of items under the Monitor icon in the MI provides means to monitor tasks, events, and performance statistics within the TS7700. Figure 9-27 shows the Monitor icon in the TS7700 MI.
Figure 9-27 The monitor Icon
Events
Use the window that is shown in Figure 9-28 on page 378 to view all meaningful events that occurred within the grid or a stand-alone TS7700 cluster. Events encompass every significant occurrence within the TS7700 grid or cluster, such as a malfunction alert, an operator intervention, a parameter change, a warning message, or a user-initiated action.
The R4.1.2 level of code improves the presentation and handling of cluster state and alerting mechanisms, providing new capabilities to the user.
The TS7700 MI Event panel has been remodeled, and now separates Operator Intervention Required, Tasks, and Events messages. Also, host notification messages are logged in the Events window, giving the user a historical perspective.
The user can now customize the characteristics of each of the CBR3750I messages, and can add custom text to each of them. This is accomplished through the new TS7700 MI Notification Settings window that is implemented by the R4.1.2 level of code.
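These host notifications surface as CBR3750I console messages. The following line sketches only the general shape of such a message; the library name and the bracketed text are placeholders, not output from a real system:
CBR3750I Message from library DISTLIB: [operational message text]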
Information is displayed on the Events table for 30 days after the operation stops or the event becomes inactive.
Figure 9-28 TS7700 MI Events window
 
Note: The Date & Time column reports the time of events in the local time of the computer where the MI was started. If the date/time in the TS7700 was modified from Coordinated Universal Time during installation, the event times are offset by the same difference in the Events display on the MI. Use Coordinated Universal Time in all TS7700 clusters whenever possible.
The Event window can be customized to meet your needs. Select which columns should show in the Events window. Figure 9-29 shows an example and the meaning of the icons used on the Events page.
Figure 9-29 Customizing the Events page and Icons
For more information about the Events page, see IBM Knowledge Center for TS7700, available locally by clicking the question mark symbol at the right of the banner on the TS7700 MI, or at this web page:
Table 9-3 lists the column names and descriptions of the fields, as shown in the Event window.
Table 9-3 Field name and description for the Events window
Column name
Description
Date & Time
Date and time the event occurred.
Source
Cluster where the event occurred.
Location
Specific location on the cluster where the event occurred.
Description
Description of the event.
ID
The unique number that identifies the instance of the event. This number consists of the following values:
A locally generated ID, for example: 923
The type of event: E (event) or T (task)
An event ID based on these examples appears as 923E.
Status
The status of an alert or task.
If the event is an alert, this value is a fix procedure to be performed or the status of a call home operation.
If the event is a task, this value is its progress or one of these final status categories:
Canceled
Canceling
Completed
Completed, with information
Completed, with warning
Failed
System Clearable
Whether the event can be cleared automatically by the system. The following values are possible:
Yes. The event is cleared automatically by the system when the condition that is causing the event is resolved.
No. The event requires user intervention to clear. The event needs to be cleared or deactivated manually after resolving the condition that is causing the event.
Table 9-4 lists the actions that can be run on the Events table.
Table 9-4 Actions that can be run on the Events table
To run this task
Action
Deactivate or clear one or more alerts
1. Select at least one but no more than 10 events.
2. Click Mark Inactive.
If a selected event is normally cleared by the system, confirm the selection. Other selected events are cleared immediately.
A running task can be cleared, but if the task later fails, it is displayed again as an active event.
Enable or disable host notification for alerts
Select Actions → [Enable/Disable] Host Notification. This change affects only the accessing cluster.
Tasks are not sent to the host.
View a fix procedure for an alert
Select Actions → View Fix Procedure.
A fix procedure can be shown for only one alert at a time. No fix procedures are shown for tasks.
Download a comma-separated value (CSV) file of the events list
Select Actions → Download all Events.
View more details for a selected event
1. Select an event.
2. Select Actions → Properties.
Hide or show columns on the table
1. Right-click the table header.
2. Click the check box next to a column heading to hide or show that column in the table. Column headings that are checked display on the table.
Filter the table data
Follow these steps to filter by using a string of text:
1. Click in the Filter field.
2. Enter a search string.
3. Press Enter.
To filter by column heading:
1. Click the down arrow next to the Filter field.
2. Select the column heading to filter by.
3. Refine the selection.
Reset the table to its default view
1. Right-click the table header.
2. Click Reset Table Preferences.
9.3.5 Performance
This section presents information for viewing IBM TS7700 grid and cluster performance and statistics.
All graphical views, except the Historical Summary, are from the last 15 minutes. The Historical Summary presents a customized graphical view of the different aspects of the cluster operation, in a 24-hour time frame. This 24-hour window can be slid back up to 90 days, which covers three months of operations.
For more information about the steps that you can take to achieve peak performance, see IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance, WP101465:
Historical Summary
Figure 9-30 shows the Throughput View for the Historical Summary under Monitor → Performance in a tape-attached cluster. The performance data for a specific TS7700 cluster can be viewed on this page.
Figure 9-30 Performance window operation, throughput view
The MI is enhanced to accommodate the functions that are introduced by new code levels. Figure 9-31 shows the Performance Historical Summary and the related chart selections that are available for this item.
Figure 9-31 Performance options and chart selections
Table 9-5 lists the chart elements that can be viewed or changed on this page.
Table 9-5 Elements of the Historical Summary chart
Element
Description
Y axes
The left vertical axis measures either throughput (MiBps) or copies (MiB), depending on the selected data sets. The right vertical axis measures the number of mounted virtual drives, reclaimed physical volumes, and data set size to copy. Possible measurements include MiB, GiB, milliseconds, seconds, hours, and percentage.
X axis
The horizontal axis measures, in hours, the time period for the data sets shown. Measurements are shown in 15-minute increments by default. Click the time span (located at the top center of the chart) to change the display increments. The following values are possible:
1 day
12 hours
1 hour
30 minutes
15 minutes
custom
Last 24 hours
Click the icon in the top right corner of the page to reset the time span that is shown on the chart to the past 24-hour period. A change to the time span does not alter the configuration of data sets displayed in the main area of the chart.
Data sets
Data sets displayed in the main area of the chart are shown as lines or stacked columns.
 
Data sets related to throughput and copy queues can be grouped to better show relationships between these sets. See Table 9-6 on page 385 for descriptions of all data sets.
Legend
The area below the X axis lists all data sets selected to display on the chart, along with their identifying colors or patterns. The legend displays a maximum of 10 data sets. Click any data set shown in this area to toggle its appearance on the chart.
 
Note: To remove a data set from the legend, you must clear it using the Select metrics option.
Time span
The time period from which displayed data is drawn. This range is shown at the top of the page.
 
Note: Dates and times that are displayed reflect the time zone in which your browser is located. If your local time is not available, these values are shown in Coordinated Universal Time (UTC).
 
Click the displayed time span to modify its start or end values. Time can be selected in 15-minute increments.
Start date and time: The default start value is 24 hours before the present date and time. You can select any start date and time within the 90 days that precede the present date and time.
End date and time: The default end value is 24 hours after the start value, or the last valid date and time within a 24-hour period. The end date and time cannot be later than the current date and time. You can select any end date and time that is between 15 minutes and 24 hours later than the start value.
Presets
Click one of the Preset buttons at the top of the vertical toolbar to populate the chart using one of three common configurations:
 
Throughput: Data sets in this configuration include the following:
Remote Read
Remote Write
Recall from Tape
Write to Tape
Link Copy Out
Link Copy In
Primary Cache Device Read
 
Throttling: Data sets in this configuration include the following:
Average Host Write Throttle
Average Copy Throttle
Average Deferred Copy Throttle
 
Copy queue: Data sets in this configuration include the following:
Copy Queue Size
 
The established time span is not changed when a preset configuration is applied.
 
Note: The preset options that are available depend on the configuration of the accessing cluster. See Table 9-6 on page 385 for existing restrictions.
Select metrics
Click the Select metrics button on the vertical toolbar to add or remove data sets displayed on the Historical Summary chart. See Table 9-6 on page 385 for descriptions of all data sets.
Download spreadsheet
Click the Download spreadsheet button on the vertical toolbar to download a comma-separated values (.csv) file to your web browser for the period shown on the graph. In the .csv file, time is shown in 15-minute intervals.
 
Note: The time reported in the CSV file is shown in UTC. You might find time differences if the system that you are using to access the Management Interface is configured for a different time zone.
Chart settings
Click the Chart settings button on the vertical toolbar to enable the low graphics mode for improved performance when many data points are displayed. Low graphics mode disables hover-over tool tips and improves chart performance in older browsers. If cookies are enabled on your browser, this setting is retained when you exit the browser.
 
Note: Low graphics mode is enabled by default when the browser is Internet Explorer, version 8 or earlier.
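Regarding the Download spreadsheet option above: as a minimal sketch of post-processing the downloaded file, the UTC timestamps can be converted to your local time zone as follows. The file name, the Time column header, and the timestamp format are assumptions for illustration; the actual headers depend on the metrics that were selected when the spreadsheet was downloaded.

import csv
from datetime import datetime, timezone

# Hypothetical file and column names; real headers depend on the metrics
# that were selected when the spreadsheet was downloaded from the MI.
with open("historical_summary.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Times in the file are UTC; convert each one to the local time zone.
        utc = datetime.strptime(row["Time"], "%Y-%m-%d %H:%M")
        local = utc.replace(tzinfo=timezone.utc).astimezone()
        print(local.isoformat(), row)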
Click the Select metrics button to open the Select metrics window to add or remove data sets displayed on the Historical Summary chart.
The Select metrics window organizes data sets by sections and categories.
The user can select up to 10 data sets, as listed in Table 9-6, to display on the Historical Summary chart.
Table 9-6 Data set descriptions
Metrics section
Metrics category
Data set
Description
Throughput
I/O
Channel R/W MiB/s
Transfer rate (MiBps) of host data on the FICON channel, which includes this information:
Host raw read: Rate that is read between the HBA and the host.
Host raw write: Rate that is written from the host to the HBA.
Throughput
I/O
Primary Cache Read
Data transfer rate (MiBps) read between the virtual drive and HBA for the primary cache repository.
Throughput
 
I/O
Primary Cache Write
 
Data transfer rate (MiBps) written to the primary cache repository from the host through the HBA.
Throughput
 
 
I/O
Remote Read
Data transfer rate (MiBps) to the cache of the accessing cluster from the cache of a remote cluster as part of a remote read operation.
 
This data set is only visible when the accessing cluster is part of a grid.
Throughput
I/O
Remote Write
Data transfer rate (MiBps) to the cache of a remote cluster from the cache of the accessing cluster as part of a remote write operation.
 
This data set is only visible if the accessing cluster is part of a grid.
Throughput
Copies
Link Copy Out
Data transfer rate (MiBps) for operations that copy data from the accessing cluster to one or more remote clusters. This is data that is transferred over the legacy TS7700 grid links.
 
This data set is only visible if the accessing cluster is part of a grid.
Throughput
 
Copies
 
Link Copy In
Data transfer rate (MiBps) for operations that copy data from one or more remote clusters to the accessing cluster. This is data that is transferred over the legacy TS7700 grid links. This data set is only visible if the accessing cluster is part of a grid.
Throughput
Copies
Copy Queue Size
The maximum size of the incoming copy queue for the accessing cluster, which is shown in MiB, GiB, or TiB. Incoming copy queue options include the following:
Immediate
Synchronous-deferred
Immediate-deferred
Deferred
Family deferred
Copy refresh
Time delayed
Total
This data set is only visible if the accessing cluster is part of a grid.
Throughput
 
Copies
Average Copy Life Span
The average age of virtual volumes to be copied to the distributed library for the accessing cluster. The following are the available options:
Immediate Mode Copy
Time Delayed Copy
All other deferred type copies
 
This data set is only visible if the accessing cluster is part of a grid.
Storage
Cache
Cache to Copy
The amount of data (GiB) that resides in the incoming copy queue of a remote cluster, but is destined for the accessing cluster. This value is the amount of data that is being held in cache until a copy can be made.
 
This data set is only visible if the accessing cluster is part of a grid.
Storage
Cache
Cache Hit
The number of completed mount requests where data is resident in the TVC.
 
If two distributed library access points are used to satisfy a mount with synchronous mode copy enabled, this count is advanced only when the data is resident in the TVC for both access points. For this reason, this data set is visible only if the accessing cluster is a TS7700 Tape Attach, or is part of a grid that contains a TS7700 Tape Attach.
Storage
Cache
Cache Miss
The number of completed mount requests where data is recalled from a physical stacked volume.
 
If two distributed library access points are used to satisfy a mount with synchronous mode copy enabled, this count is advanced when the data is not resident in the TVC for at least one of the two access points. For this reason, this data set is visible only if the accessing cluster is a TS7700 Tape Attach, or is part of a grid that contains a TS7700 Tape Attach.
Storage
Cache
Cache Hit Mount Time
The average time (ms) to complete Cache Hit mounts. This data set is visible only if the accessing cluster is attached to a tape library. If the cache is partitioned, this value is displayed according to partition.
Storage
Cache
Cache Miss Mount Time
The average time (ms) to complete Cache Miss mounts.
 
This data set is visible only if the accessing cluster is attached to a tape library. If the cache is partitioned, this value is displayed according to partition.
Storage
Cache
Partitions
If the accessing cluster is a TS7720 or TS7760 attached to a tape library, a numbered tab exists for each active partition. Each tab displays check boxes for these categories:
Cache Hit
Cache Miss
Mount Time Hit
Mount Time Miss
Data in Cache
Storage
Cache
Primary Used
The amount of used cache in a partitioned primary cache, shown according to partition.
 
This data set is only visible if the selected cluster is a TS7700T.
Storage
Cache
Data Waiting for Premigration
The amount of data in cache assigned to volumes waiting for premigration.
 
This data set is only visible if the selected cluster is a TS7700T.
Storage
Cache
Data Migrated
The amount of data in cache that has been migrated.
 
This data set is only visible if the selected cluster is a TS7700T.
Storage
Cache
Data Waiting for Delayed Premigration
The amount of data in cache assigned to volumes waiting for delayed premigration.
 
This data set is only visible if the selected cluster is a TS7700T.
Storage
Virtual Tape
Maximum Virtual Drives Mounted
The greatest number of mounted virtual drives. This value is a mount count.
Storage
Physical Tape
Write to Tape
Data transfer rate (MiBps) written to physical media from cache. This value typically represents premigration to tape.
 
This data set is not visible when the selected cluster is not attached to a library.
Storage
Physical Tape
Recall from Tape
Data transfer rate (MiBps) read from physical media to cache. This value is recalled data.
 
This data set is not visible when the selected cluster is not attached to a library.
Storage
Physical Tape
Reclaim Mounts
Number of physical mounts that are completed by the library for the physical volume reclaim operation. This value is a mount count.
 
This data set is not visible when the selected cluster is not attached to a library.
Storage
Physical Tape
Recall Mounts
Number of physical mounts that are completed by the library for the physical volume recall operation.
 
This data set is not visible when the selected cluster is not attached to a library.
Storage
Physical Tape
Premigration Mounts
Number of physical mounts that are completed by the library to satisfy premigration requests.
 
This data set is not visible when the selected cluster is not attached to a library.
Storage
Physical Tape
Physical Drives Mounted
The maximum, minimum, or average number of physical devices of all device types that are concurrently mounted. The average number is displayed only when you hover over a data point.
 
This data set is only visible when the selected cluster attaches to a library.
Storage
Physical Tape
Physical Mount Times
The maximum, minimum, or average number of seconds that are required to complete a mount request for a physical device. The average number is displayed only when you hover over a data point.
 
This data set is only visible when the selected cluster attaches to a library.
System
Throttling
Average Copy Throttle
The average time delay as a result of copy throttling, which is measured in milliseconds. This data set contains the averages of nonzero throttling values where copying is the predominant reason for throttling.
 
This data set is only visible if the selected cluster is part of a grid.
System
Throttling
Average Deferred Copy Throttle
The average time delay as a result of deferred copy throttling, which is measured in milliseconds. This data set contains the averages of 30-second intervals of the deferred copy throttle value.
 
This data set is only visible if the selected cluster is part of a grid.
System
Throttling
Average Host Write Throttle for Tape Attached Partitions
The average write overrun throttle delay for the tape attached partitions. This data set is the average of the non-zero throttling values where write overrun was the predominant reason for throttling.
 
This data set is only visible if the selected cluster is a TS7720 or TS7760 attached to a tape library.
System
 
Throttling
 
Average Copy Throttle for Tape Attached Partitions
The average copy throttle delay for the tape attached partitions. The value presented is the average of the non-zero throttling values where copy was the predominant reason for throttling.
 
This data set is only visible if the selected cluster is a TS7720 or TS7760 attached to a tape library.
System
 
Throttling
 
Average Deferred Copy Throttle for Tape Attached Partitions
The average deferred copy throttle delay for the tape attached partitions. This value is the average of 30-second intervals of the deferred copy throttle value during the historical record.
 
This data set is only visible if the selected cluster is part of a grid and is a TS7720 or TS7760 attached to a tape library.
System
Utilization
Maximum CPU Primary Server
The maximum percentage of processor use for the primary TS7700 server.
System
Utilization
Maximum Disk I/O Usage Primary Server
The maximum percentage of disk cache I/O usage, as reported by the primary server in a TS7700.
For more information about the values and what to expect in the resulting graphs, see Chapter 11, “Performance and monitoring” on page 623.
For more information about the window and available settings, see the TS7700 R4.2 IBM Knowledge Center, which is available locally on the TS7700 MI (by clicking the question mark icon at the upper right corner of the window) and on the IBM website.
For more information, see IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring, and Tuning the TS7700 Performance, WP101465. This white paper is an in-depth study of the inner workings of the TS7700 and the factors that can affect the overall performance of a stand-alone cluster or a TS7700 grid. In addition, it explains the throttling mechanisms and the available tuning options for the subsystem to achieve peak performance.
Virtual mounts
This page is used to view the virtual mount statistics for the TS7700 Grid. Virtual mount statistics are displayed for activity on each cluster during the previous 15 minutes. These statistics are presented in bar graphs and tables and are organized according to number of virtual mounts and average mount times.
Number of virtual mounts: This section provides statistics for the number of virtual mounts on a given cluster during the most recent 15-minute snapshot. Snapshots are taken at 15-minute intervals. Each numeric value represents the sum of values for all active partitions in the cluster. The displayed information includes the following:
 – Cluster: The cluster name.
 – Fast-Ready: The number of virtual mounts that were completed using the Fast-Ready method.
 – Cache Hits: The number of virtual mounts that were completed from cache.
 – Cache Misses: The number of mount requests that could not be fulfilled from cache.
 
Note: This field is visible only if the selected cluster possesses a physical library.
 – Total: Total number of virtual mounts.
Average mount times: This section provides statistics for the average mount times in milliseconds (ms) on a given cluster during the most recent 15-minute snapshot. Snapshots are taken at 15-minute intervals. Each numeric value represents the average of values for all active partitions in the cluster. The information that is displayed includes the following:
 – Cluster: The cluster name.
 – Fast-Ready: The average mount time for virtual mounts that were completed using the Fast-Ready method.
 – Cache Hits: The average mount time for virtual mounts that were completed from cache.
 – Cache Misses: The average mount time for mount requests that could not be fulfilled from cache.
 
Note: This field is visible only if the selected cluster possesses a physical library.
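The per-snapshot counters above can be combined into simple derived measures. The following is illustrative arithmetic only (not an MI function), using hypothetical sample values, to compute the total mounts and the cache hit ratio among mounts that required existing data:

def mount_stats(fast_ready: int, cache_hits: int, cache_misses: int) -> dict:
    """Derive totals and a hit ratio from one 15-minute snapshot."""
    total = fast_ready + cache_hits + cache_misses
    specific = cache_hits + cache_misses  # mounts that needed existing data
    hit_ratio = cache_hits / specific if specific else 1.0
    return {"total_mounts": total, "cache_hit_ratio": hit_ratio}

# Hypothetical snapshot: 120 fast-ready mounts, 70 cache hits, 10 cache misses.
print(mount_stats(120, 70, 10))  # {'total_mounts': 200, 'cache_hit_ratio': 0.875}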
For more information about how to achieve better performance, see IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance, WP101465.
Physical mounts
The physical mount statistics for the last 15 minutes of activity are displayed in bar graphs and table format per cluster: one for the number of mounts by category and one for the average mount time per cluster. This window is available and active when the selected TS7700 is attached to a physical tape library. When a grid possesses a physical library but the selected cluster does not, the MI displays the following message:
The cluster is not attached to a physical tape library.
This page is not visible on the TS7700 MI if the grid does not possess a physical library (no tape attached member).
The following information is available on this page:
Cluster: The cluster name
Pre-Migrate: The number of pre-migrate mounts
Reclaim: The number of reclaim mounts
Recall: The number of recall mounts
Secure Data Erase: The number of secure data erase mounts
Total: The total number of physical mounts
Mount Time: The average mount time for physical mounts
Host throughput
Use this page to view statistics for each cluster, vNode, host adapter, and host adapter port in the grid. At the top of the page is a collapsible tree that allows you to view statistics for a specific level of the grid and cluster:
Click the grid hyperlink to display information for each cluster.
Click the cluster hyperlink to display information for each vNode.
Click the vNode hyperlink to display information for each host adapter.
Click a host adapter link to display information for each of its ports.
The host throughput data is displayed in two bar graphs and one table. The bar graphs represent raw data coming from the host to the host bus adapter (HBA) and compressed data going from the HBA to the virtual drive on the vNode.
 
Note: See 1.6, “Data storage values” on page 12 for information related to the use of binary prefixes.
In the table, the letters in the heading correspond to letter steps in the diagram above the table. Data is available for a cluster, vNode, host adapter, or host adapter port. The letters referred to in the table can be seen on the diagram in Figure 9-32.
Figure 9-32 vNode data flow versus compression
The following types of data are available:
Cluster/vNode/Host Adapter/Host Adapter Port: Cluster or cluster component for which data is being displayed
Compressed Read (A): Amount of data read between the virtual drive and HBA
Raw Read (B): Amount of data read between the HBA and host
Read Compression Ratio: Ratio of raw data read to compressed data read
Compressed Write (D): Amount of data written from the HBA to the virtual drive
Raw Write (C): Amount of data written from the host to the HBA
Write Compression Ratio: Ratio of raw data written to compressed data written (see the sketch after this list)
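Both ratios are simple quotients of the table values. A minimal sketch, using made-up sample numbers in MiB:

def compression_ratios(raw_read: float, compressed_read: float,
                       raw_write: float, compressed_write: float) -> tuple:
    """Ratio of raw (host-side) data to compressed (drive-side) data."""
    return raw_read / compressed_read, raw_write / compressed_write

# Hypothetical values: 300 MiB raw read vs 100 MiB compressed read, and so on.
read_ratio, write_ratio = compression_ratios(300, 100, 250, 100)
print(f"read {read_ratio:.1f}:1, write {write_ratio:.1f}:1")  # read 3.0:1, write 2.5:1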
Cache throttling
This window shows the statistics of the throttling values that are applied to host write operations and RUN copy operations throughout the grid.
Throttling refers to the intentional slowing of data movement to balance or re-prioritize system resources in a busy TS7700. Throttling can be applied to host write and inbound copy operations. Throttling of host write and inbound copy operations limits the amount of data movement into a cluster. This is typically done for one of two reasons:
The amount of unused cache space is low.
The amount of data in cache that is queued for premigration has exceeded a threshold.
Host write operations can also be throttled when RUN copies are being used and it is determined that a throttle is needed to prevent pending RUN copies from changing to the immediate-deferred state. A throttle can be applied to a host write operation for these reasons:
The amount of unused cache space is low.
The amount of data in cache that needs to be premigrated is high.
For RUN copies, an excessive amount of time is needed to complete an immediate copy.
The Cache Throttling graph displays the throttling that is applied to host write operations and to inbound RUN copy operations. The delay represents the time delay, in milliseconds, per 32 KiB of transferred post-compressed data (see the sketch after this list). Each numeric value represents the average of values for all active partitions in the cluster. The information shown includes the following:
Cluster: The name of the cluster affected
Copy: The average delay, in milliseconds, applied to inbound copy activities
Write: The average delay, in milliseconds, applied to host write operations, both locally and from remote clusters
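Because the reported delay is added for every 32 KiB transferred, even a small per-block delay reduces the effective data rate sharply. The following rough model is an illustration under stated assumptions, not the microcode's actual algorithm:

BLOCK_MIB = 32 / 1024  # one 32 KiB block, expressed in MiB

def effective_rate(unthrottled_mibps: float, delay_ms: float) -> float:
    """Approximate post-compression throughput after a throttle delay.

    Each 32 KiB block takes its normal transfer time plus the reported
    per-block delay, so the achievable rate drops accordingly.
    """
    base_seconds = BLOCK_MIB / unthrottled_mibps  # time per block, unthrottled
    return BLOCK_MIB / (base_seconds + delay_ms / 1000.0)

# Example: a 200 MiBps stream with a 1 ms delay per 32 KiB drops to about 27 MiBps.
print(f"{effective_rate(200, 1.0):.1f} MiBps")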
For more information about the steps that you can take to achieve peak network performance, see IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance, WP101465.
Cache utilization
Cache utilization statistics are presented for clusters that have a single cache partition and for clusters whose cache has multiple partitions. TS7720 and TS7760 disk-only clusters and the TS7740 have only one partition (resident-only or tape), which accounts for the entire cache. For the TS7700T (tape attach) or TS7760C (cloud attach) cluster models, up to eight cache partitions (the CP0 cache-resident partition and up to seven more partitions) can be defined and represented in the cache utilization window.
Figure 9-33 shows an example of cache utilization (single partition), as displayed in a TS7720 disk-only or TS7740 cluster.
Figure 9-33 TS7700 Cache Utilization window
Cache Partition
The Cache Partition window presents the cache use statistics for the TS7700T or TS7760C models, in which the cache is made up of multiple partitions. Figure 9-34 shows a sample of the Cache Partition (multiple partitions) window. This window can be accessed by clicking the Monitor icon or the Virtual icon; both methods lead to the same window. In this window, the user can display the existing cache partitions, create or reconfigure a partition, or delete a partition as needed.
 
Tip: Consider limiting the MI user roles who are allowed to change the partition configurations through this window.
Figure 9-34 Cache Partitions window
For more information about the window, see IBM Knowledge Center, either locally on the TS7700 MI (by clicking the question mark icon) or on the IBM website.
Grid Network Throughput
This window is available only if the TS7700 cluster is a member of a Grid. The Grid Network Throughput window shows the last 15 minutes of cross-cluster data transfer rate statistics, which are shown in megabytes per second (MBps). Each cluster of the grid is represented both in the bar graph chart and in the tables. Information shown includes the following:
Cluster: The name of the cluster
Outbound Access: Data transfer rate for host operations that move data from the specified cluster into one or more remote clusters
Inbound Access: Data transfer rate for host operations that move data into the specified cluster from one or more remote clusters
Copy Outbound: Data transfer rate for copy operations that pull data out of the specified cluster into one or more remote clusters
Copy Inbound: Data transfer rate for copy operations that pull data into the specified cluster from one or more remote clusters
Total: Total data transfer rate for the cluster
For more information about this window, see the TS7700 R4.2 section in IBM Knowledge Center. For more information about data flow within the grid and how those numbers vary during operation, see Chapter 11, “Performance and monitoring” on page 623.
Pending Updates
The Pending Updates window is only available if the TS7700 cluster is a member of a grid. The window can be used to monitor the status of outstanding updates per cluster throughout the grid. Pending updates can be caused by one cluster being offline, in service preparation, or in service mode while other grid peers were busy with normal client production work.
A faulty grid link communication also might cause a RUN or SYNC copy to become Deferred Run or Deferred Sync. The Pending Updates window can be used to follow the progress of those copies.
The Download button at the top of the window saves a comma-separated values (.csv) file that lists all volumes or grid global locks that are targeted during an ownership takeover. The volume or global pending updates are listed, along with hot tokens and stolen volumes.
Tokens are internal data structures that are used to track changes to the ownership, data, or properties of each of the existing logical volumes in the grid. Hot tokens occur when a cluster attempts to merge its own token information with the other clusters, but the clusters are not available for the merge operation (tokens that cannot be merged become “hot”).
A stolen volume is a volume whose ownership was taken over, under an operator's direction or by using AOTM, while the owner cluster was in service mode or offline, or during an unexpected cluster outage.
For more information about copy mode and other concepts referred to in this section, see Chapter 2, “Architecture, components, and functional characteristics” on page 15. For more information about this MI function, see IBM Knowledge Center:
https://ibm.biz/Bd24iB
Tasks window
This window is used to monitor the status of tasks that are submitted to the TS7700. The information in this window refers to the entire grid operation if the accessing cluster is part of a grid, or only for this individual cluster if it is a stand-alone configuration. The table can be formatted by using filters, or the format can be reset to its default by using reset table preferences. Information is available in the task table for 30 days after the operation stops or the event or action becomes inactive.
Tasks are listed by starting date and time. Tasks that are still running are shown on the top of the table, and the completed tasks are listed at the bottom. Figure 9-35 shows an example of the Tasks window. Notice that the information on this page and the task status pods are of grid scope.
Figure 9-35 Tasks window
 
Note: The Start Time column reports the start time of a task, converted to the local time of the computer where the MI was started. If the date and time of the TS7700 were modified from Coordinated Universal Time during installation, the time that is shown in the Start Time field is offset by the same difference from the local time of the MI. Use Coordinated Universal Time in all TS7700 clusters whenever possible.
9.3.6 The Virtual icon
TS7700 MI windows that are under the Virtual icon can help you view or change settings that are related to virtual volumes and their queues, virtual drives, and scratch categories. Cache Partitions is also under the Virtual icon, which you can use to create, modify, or delete cache partitions.
Figure 9-36 shows the Virtual icon and the options available. The Cache Partitions option is available only for the TS7700T or TS7760C models; the Incoming Copy Queue option appears in grid configurations only.
Figure 9-36 The Virtual icon and options
The available items under the Virtual icon are described in the following topics.
Cache Partitions
In the Cache Partitions window in the MI, you can create a cache partition, or reconfigure or delete a cache partition for the TS7700T (tape-attached, FC5273 Tape Attach Enablement) or TS7760C (cloud-attached, FC5278 Cloud Enablement) models. Also, you can use this window to monitor the cache and partitions occupancy and usage.
Figure 9-37 on page 396 shows a sequence for creating a partition. There can be as many as eight partitions, from the Resident partition (partition 0) to Partition 7, if needed. The allocated size of a new partition is subtracted from the capacity of the resident partition (CP0), provided that CP0 has more than 2 TB of free space. For more information about the rules and allowed values that are in effect for this window, see the TS7700 R4.2 IBM Knowledge Center.
Release 4.2 of the microcode introduces TS7700 Transparent Cloud Tiering, which allows the TS7700 to offload data to public or private clouds. For more information about cache partitions, cloud tiering, and usage, see Chapter 2, “Architecture, components, and functional characteristics” on page 15, and IBM TS7760 R4.2 Cloud Storage Tier Guide, REDP-5514.
 
 
 
Considerations: No partition can be created if the Resident-Only partition (CP0) has 2 TB or less of free space. Creating partitions is also blocked while a FlashCopy for DR is in progress, or while one of the existing partitions is in an overcommitted state. A rough pre-check of these rules is sketched after this note.
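One plausible reading of these rules can be captured in a small pre-check. This is a sketch only; the MI performs its own validation, and the exact free-space arithmetic here is an assumption:

CP0_MIN_FREE_TB = 2.0  # CP0 must keep more than 2 TB of free space

def can_create_partition(cp0_free_tb: float, requested_tb: float,
                         flashcopy_for_dr_active: bool,
                         any_partition_overcommitted: bool) -> bool:
    """Rough pre-check that mirrors the creation rules described above."""
    if flashcopy_for_dr_active or any_partition_overcommitted:
        return False  # creation is blocked outright in these states
    # The new partition is carved out of CP0, which must retain > 2 TB free.
    return cp0_free_tb - requested_tb > CP0_MIN_FREE_TB

print(can_create_partition(10.0, 6.0, False, False))  # True: CP0 keeps 4 TB free
print(can_create_partition(10.0, 9.0, False, False))  # False: CP0 would keep 1 TB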
Figure 9-37 shows creating a cache partition.
Figure 9-37 Creating a cache partition
Figure 9-38 shows an example of successful creation in the upper half. The lower half shows an example where the user failed to observe the amount of free space available in CP0.
Figure 9-38 Example of a success and a failure to create a new partition
Notice that redefining the size of existing partitions in an operational TS7720T might create an unexpected load peak in the overall premigration queue, which causes host write throttling to be applied to the partitions.
For example, consider the following scenario, where a tape-attached partition is downsized and becomes instantly overcommitted. The TS7720T premigration queue is flooded by volumes that were dislodged when this cache partition became smaller. The partition readapts to the new size by migrating the excess volumes to physical tape.
Figure 9-39 shows the starting point of this scenario: a TS7720T with Partition 1 operating with 12 TB of cache.
Figure 9-39 Partition 1 operating with 12 TB cache
Figure 9-40 shows Partition 1 being downsized to 8 TB. Note the initial warning and the subsequent overcommit statement that appears when resizing the partition results in an overcommitted cache size.
Figure 9-40 Downsizing Partition 1, and the overcommit warning
Accepting the overcommit statement initiates the resizing action. If this is not a suitable time for the partition resizing (such as during a peak load period), the user can click No to decline the action, and then resume it at a more appropriate time. Figure 9-41 shows the final sequence of the operation.
Figure 9-41 Resizing message and resulting cache partitions window
Figure 9-40 on page 397 and Figure 9-41 on page 397 show an example of downsizing a partition in a TS7700T (tape attach), but the same behavior is expected for a TS7700C (cloud attach) partition. Partitions function the same for TS7700C and TS7700T configurations.
Note: Currently in R4.2, tape (FC 5273, Tape Attach enablement) and cloud (FC 5278, Enable Cloud Storage Tier) attachment functions are mutually exclusive within a cluster. Nonetheless, a tape attach and a cloud attach cluster can be part of the same grid.
In both cases, CP0 remains dedicated for resident logical volumes only and does not support direct movement of the resident volumes to or from an object store or tape storage.
 
Tip: Consider limiting the MI user roles that are allowed to change the partition configurations through this window.
Incoming Copy Queue
The Incoming Copy Queue window is used for a grid-member TS7700 cluster. Use this window to view the virtual volume incoming copy queue for a TS7700 cluster. The incoming copy queue represents the amount of data that is waiting to be copied to a cluster. Data that is written to a cluster in one location can be copied to other clusters in a grid to achieve uninterrupted data access.
Policies and settings specify which clusters (if any) receive copies, and how quickly copy operations should occur. Each cluster maintains its own list of copies to acquire, and then satisfies that list by requesting copies from other clusters in the grid according to queue priority, as pictured in the sketch after this paragraph.
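Conceptually, each cluster's incoming queue behaves like a priority queue of copy requests. The following is a toy model only; the priority values assigned to each copy type are assumptions for illustration, not the documented microcode ordering:

import heapq

# Assumed relative priorities, for illustration only.
PRIORITY = {"immediate": 0, "synchronous-deferred": 1,
            "immediate-deferred": 2, "deferred": 3, "time-delayed": 4}

queue = []  # entries are (priority, volser, copy_type)
for volser, copy_type in [("A00001", "deferred"), ("A00002", "immediate"),
                          ("A00003", "immediate-deferred")]:
    heapq.heappush(queue, (PRIORITY[copy_type], volser, copy_type))

while queue:
    _, volser, copy_type = heapq.heappop(queue)
    print(f"request copy of {volser} ({copy_type})")  # highest priority first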
Table 9-7 lists the values that are displayed in the copy queue table.
Table 9-7 Values in the copy queue table
Column type
Description
Copy Type
The type of copy that is in the queue. The following values are possible:
Immediate: Volumes can be in this queue if they are assigned to a Management Class (MC) that uses the Rewind Unload (RUN) copy mode.
Synchronous-deferred: Volumes can be in this queue if they are assigned to an MC that uses the Synchronous mode copy and some event (such as the secondary cluster going offline) prevented the secondary copy from occurring.
Immediate-deferred: Volumes can be in this queue if they are assigned to an MC that uses the RUN copy mode and some event (such as the secondary cluster going offline) prevented the immediate copy from occurring.
Deferred: Volumes can be in this queue if they are assigned to an MC that uses the Deferred copy mode.
Time Delayed: Volumes can be in this queue if they are eligible to be copied based on either their creation time or last access time.
Copy-refresh: Volumes can be in this queue if the MC assigned to the volumes changed and a LI REQ command was sent from the host to initiate a copy.
Family-deferred: Volumes can be in this queue if they are assigned to an MC that uses RUN or Deferred copy mode and cluster families are being used.
Last TVC Cluster
The name of the cluster where the copy was most recently in the TVC. Although this might not be the cluster from which the copy is received, most copies are typically obtained from the TVC cluster.
This column is only shown when View by Last TVC is selected.
Size
Total size of the queue, which is displayed in GiB.
See 1.6, “Data storage values” on page 12 for additional information about the use of binary prefixes.
When Copy Type is selected, this value is per copy type. When View by Last TVC is selected, this value is per cluster.
Quantity
The total number of copies in queue for each type.
Figure 9-42 shows the incoming copy queue window and other places in the Grid Summary and Cluster Summary that inform the user about the current copy queue for a specific cluster.
Figure 9-42 Incoming copy queue
Using the upper-left option, choose between View by Type and View by Last TVC Cluster. Click Actions to download the Incoming Queued Volumes list.
Recall queue
The Recall Queue window of the MI displays the list of virtual volumes in the recall queue. Use this window to promote a virtual volume or filter the contents of the table. The Recall Queue item is visible but disabled on the TS7700 MI if there is no physical tape attachment to the selected cluster, but at least one tape-attached cluster (a TS7740 or TS7720T that is connected to a TS3500 tape library) exists within the grid. Trying to access the Recall Queue link from a cluster with no tape attachment causes the following message to be displayed:
The cluster is not attached to a physical tape library.
 
Tip: This item is not visible on the TS7700 MI if there is no TS7700 tape-attached cluster in the grid.
A recall of a virtual volume retrieves the virtual volume from a physical cartridge and places it in the cache. A queue is used to process these requests. Virtual volumes in the queue are classified into three groups:
In Progress
Scheduled
Unscheduled
Figure 9-43 shows an example of the Recall Queue window.
Figure 9-43 Recall Queue window
In addition to changing the recall table’s appearance by hiding and showing some columns, the user can filter the data that is shown in the table by a string of text, or by the column heading. Possible selections are by Virtual Volume, Position, Physical Cartridge, or by Time in Queue. To reset the table to its original appearance, click Reset Table Preferences.
Another interaction that is available in the Recall Queue window is that the user can promote an unassigned volume recall to the first position in the unscheduled portion of the recall queue. This option is available by checking an unassigned volume in the table, and clicking Actions → Promote Volume.
Virtual tape drives
The Virtual Tape Drives window of the MI presents the status of all virtual tape drives in a cluster. Use this window to check the status of a virtual mount, to perform a stand-alone mount or unmount, or assign host device numbers to a specific virtual drive.
The Cache Mount Cluster field in the Virtual Tape Drives page identifies the cluster TVC where the volume is mounted. The user can recognize remote (crossed) or synchronous mounts simply by looking at this field. Remote mounts show another cluster that is being used by a mounted volume instead of the local cluster (the cluster to which the virtual tape drives on the displayed page belong), whereas synchronous mounts show both clusters that are used by the mounted volume.
The page contents can be customized by selecting specific items to display. Figure 9-44 shows the Virtual tape drives window and the available items under Actions menu.
Figure 9-44 Virtual Tape Drives window
The user can perform a stand-alone mount of a logical volume on a TS7700 logical drive for special purposes, such as to perform an initial program load (IPL) of a stand-alone services core image from a virtual tape drive. The MI also allows the user to manually unmount a logical volume that is mounted and in the idle state. The Unmount function is available not only for volumes that were manually mounted, but also for occasions when a logical volume was left mounted on a virtual drive by an incomplete operation or a test rehearsal, which creates the need to unmount it through MI operations.
To perform a stand-alone mount, complete the steps that are shown in Figure 9-45.
Figure 9-45 Stand-alone mount procedure
The user can mount only virtual volumes that are not already mounted, on a virtual drive that is online.
If there is a need to unmount a logical volume that is currently mounted to a virtual tape drive, follow the procedure that is shown in Figure 9-46.
Figure 9-46 Demounting a logical volume by using MI
The user can unmount only those virtual volumes that are mounted and have a status of Idle.
For more information about the topic, see the TS7700 section of IBM Knowledge Center, available locally or online.
Virtual volumes
The topics in this section present information about monitoring and manipulating virtual volumes in the TS7700 MI.
Virtual volume details
Use this window to obtain detailed information about the state of a virtual volume or a FlashCopy of a virtual volume in the TS7700 Grid. Figure 9-48 on page 404 and Figure 9-49 on page 405 show an example of the resulting windows for a virtual volume query. The entire window can be subdivided into three parts:
1. Virtual volume summary
2. Virtual volume details
3. Cluster-specific virtual volume properties
A tutorial about the virtual volume display and how to interpret the windows is available directly from the MI window. To watch it, click the View Tutorial link on the Virtual Volume Details window.
Figure 9-47 shows an example of the graphical summary for a virtual volume that was off-loaded to the cloud, at center left. Also shown is the aspect of a virtual volume in a two-way grid with one cloud-attached cluster and one tape-attached cluster. The first part of the Virtual Volume Details window in the MI shows a graphical summary of the status of the virtual volume that is being displayed.
Figure 9-47 Virtual Volume Details: Graphical summary
This graphical summary provides details about the present status of the virtual volume within the grid, plus the current operations that are taking place throughout the grid concerning that volume. The graphical summary helps you understand the dynamics of a logical mount, whether the volume is in the cache at the mounting cluster, or whether it is being recalled from tape in a remote location.
Hovering the mouse over the graphical representation of the virtual volume shows the meaning of the icon in the page, as shown in Figure 9-47 on page 403.
For more information about the virtual volume legend, see the TS7700 R4.2 IBM Knowledge Center, under Virtual volume details, or click Additional Legend Icons at the bottom of the display (see Figure 9-47 on page 403).
 
Note: The physical resources are shown in the virtual volume summary, virtual volume details table, or the cluster-specific virtual volume properties table for the TS7720T and TS7740 cluster models.
The Virtual Volume Details window shows all clusters where the selected virtual volume is located within the grid. The icon that represents each individual cluster is divided into three areas by broken lines:
The top area relates to the logical volume status.
The intermediate area shows actions that are currently in course or pending for the cluster.
The bottom area reports the status of the physical components that are related to that cluster.
The cluster that owns the logical volume being displayed is identified by the blue border around it. Figure 9-48 shows how the icons are distributed through the window, and where the pending actions are represented. The blue arrow icon over the cluster represents data that is being transferred from another cluster. The icon in the center of the cluster indicates data that is being transferred within the cluster.
Figure 9-48 Details of the graphical summary area
 
Figure 9-49 shows an example of the text section of the Virtual Volume Details window.
Figure 9-49 Virtual volume details: Text section
Virtual volume details: Text
The virtual volume details and status are displayed in the Virtual volume details table. These virtual volume properties are consistent across the grid. Table 9-8 shows the volumes details and the descriptions that are found on this page.
Table 9-8 Virtual Volume details and descriptions
Volume detail
Description
Volser
The VOLSER of the virtual volume. This value is a six-character identifier that uniquely represents the virtual volume in the virtual library.
Media Type
The media type of the virtual volume. The following are the possible values:
Cartridge System Tape
Enhanced Capacity Cartridge System Tape
Maximum Volume Capacity
The maximum size (MiB) of the virtual volume. This capacity is set upon insert by the Data Class of the volume.
 
Note: If an override is configured for the Data Class's maximum size, it is only applied when a volume is mounted and a load-point write (scratch or fast ready mount) occurs. During the volume close operation, the new override value is bound to the volume and cannot change until the volume is reused. Any further changes to a Data Class override are not inherited by a volume until it is written again during a fast ready mount and then closed.
 
Note: When the host mounts a scratch volume and unmounts it without completing any writes, the system might report that the virtual volume's current size is larger than its maximum size. This result can be disregarded.
Current Volume Size
Size of the data (MiB) for this unit type (channel or device).
 
Note: See 1.6, “Data storage values” on page 12 for additional information about the use of binary prefixes.
Current Owner
The name of the cluster that currently owns the latest version of the virtual volume.
Currently Mounted
Whether the virtual volume is currently mounted in a virtual drive. If this value is Yes, these qualifiers are also shown:
vNode: The name of the vNode that the virtual volume is mounted on.
Virtual Drive: The ID of the virtual drive that the virtual volume is mounted on.
Cache Copy Used for Mount
The name of the cluster that is associated with the TVC selected for mount and I/O operations to the virtual volume. This selection is based on consistency policy, volume validity, residency, performance, and cluster mode.
Mount State
The mount state of the logical volume. The following are the possible values:
Mounted: The volume is mounted.
Mount Pending: A mount request was received and is in progress.
Recall Queued/Requested: A mount request was received and a recall request was queued.
Recalling: A mount request was received and the virtual volume is being staged into tape volume cache from physical tape.
Cache Management Preference Group
The preference level for the Storage Group. This setting determines how soon volumes are removed from cache following their copy to tape. This information is only displayed if the owner cluster is a TS7740 or if the owner cluster is a TS7700T and the volume is assigned to a cache partition that is greater than 0. The following are the possible values:
0: Volumes in this group have preference to be removed from cache over other volumes.
1: Volumes in this group have preference to be retained in cache over other volumes. A “least recently used” algorithm is used to select volumes for removal from cache if there are no volumes to remove in preference group 0.
Unknown: The preference group cannot be determined.
Last Accessed by a Host
The most recent date and time a host accessed a live copy of the virtual volume. The recorded time reflects the time zone in which the user's browser is located.
Last Modified
The date and time the virtual volume was last modified by a host mount or demount. The recorded time reflects the time zone in which the user's browser is located.
Category
The number of the scratch category to which the virtual volume belongs. A scratch category groups virtual volumes for quick, non-specific use.
Storage Group
The name of the Storage Group that defines the primary pool for the premigration of the virtual volume.
Management Class
The name of the Management Class applied to the volume. This policy defines the copy process for volume redundancy.
Storage Class
The name of the Storage Class applied to the volume. This policy classifies virtual volumes to automate storage management.
Data Class
The name of the Data Class applied to the volume. This policy classifies virtual volumes to automate storage management.
Volume Data State
The state of the data on the virtual volume. The following are the possible values:
New: The virtual volume is in the insert category or a non-scratch category and data has never been written to it.
Active: The virtual volume is currently located within a private category and contains data.
Scratched: The virtual volume is currently located within a scratch category and its data is not scheduled to be automatically deleted.
Pending Deletion: The volume is currently located within a scratch category and its contents are a candidate for automatic deletion when the earliest deletion time has passed. Automatic deletion then occurs sometime thereafter.
 
Note: The volume can be accessed for mount or category change before the automatic deletion and therefore the deletion might be incomplete.
 
Pending Deletion with Hold: The volume is currently located within a scratch category configured with hold and the earliest deletion time has not yet passed. The volume is not accessible by any host operation until the volume has left the hold state. After the earliest deletion time has passed, the volume then becomes a candidate for deletion and moves to the Pending Deletion state. While in this state, the volume is accessible by all legal host operations.
Deleted: The volume is either currently within a scratch category or has previously been in a scratch category where it became a candidate for automatic deletion and was deleted. Any mount operation to this volume is treated as a scratch mount because no data is present.
FlashCopy
Details of any existing flash copies. The following are the possible values, among others:
Not Active: No FlashCopy is active. No FlashCopy was enabled at the host by an LI REQ operation.
Active: A FlashCopy that affects this volume was enabled at the host by an LI REQ operation. Volume properties have not changed since FlashCopy time zero.
Created: A FlashCopy that affects this volume was enabled at the host by an LI REQ operation. Volume properties between the live copy and the FlashCopy have changed. Click this value to open the Flash Copy Details page.
Earliest Deletion On
The date and time when the virtual volume will be deleted. Time that is recorded reflects the time zone in which your browser is located.
This value displays as “—” if no expiration date is set.
Logical WORM
Whether the virtual volume is formatted as a WORM volume.
Possible values are Yes and No.
Compression Method
The compression method applied to the volume.
Volume Format ID
The volume format ID that belongs to the volume. The following are the possible values:
-2: Data has not been written yet
5: Logical Volume old format
6: Logical Volume new format
3490 Counters Handling
The 3490 Counters Handling value that belongs to the volume.
Cluster-specific Virtual Volume Properties
The Cluster-specific Virtual Volume Properties table displays information about the requested virtual volume on each cluster. These properties are specific to each cluster. Figure 9-50 shows the Cluster-specific Virtual Volume Properties table that appears in the last part of the Virtual Volume Details window.
Figure 9-50 Cluster-specific Virtual Volume Properties
The virtual volume details and status that are displayed include the properties shown in Table 9-9.
Table 9-9 Cluster-specific virtual volume properties
Property
Description
Cluster
The cluster location of the virtual volume copy. Each cluster location occurs as a separate column header.
In Cache
Whether the virtual volume is in cache for this cluster.
Device Bytes Stored
The amount of data (MiB) that is used by each cluster to store the volume. Actual bytes can vary between clusters, based on settings and configuration.
Primary Physical Volume
The physical volume that contains the specified virtual volume. Click the VOLSER hyperlink to open the Physical Stacked Volume Details page for this physical volume. A value of None means that no primary physical copy is to be made. This column is only visible if a physical library is present in the grid. If there is at least one physical library in the grid, the value in this column is shown as “—” for those clusters that are not attached to a physical library.
Secondary Physical Volume
Secondary physical volume that contains the specified virtual volume. Click the VOLSER hyperlink to open the Physical Stacked Volume Details page for this physical volume. A value of None means that no secondary physical copy is to be made. This column is only visible if a physical library is present in the grid. If there is at least one physical library in the grid, the value in this column is shown as “—” for those clusters that are not attached to a physical library.
Copy Activity
Status information about the copy activity of the virtual volume copy. The following are the possible values:
Complete: This cluster location completed a consistent copy of the volume.
In Progress: A copy is required and currently in progress.
Required: A copy is required at this location but has not started or completed.
Not Required: A copy is not required at this location.
Reconcile: Pending updates exist against this location's volume. The copy activity updates after the pending updates are resolved.
Time Delayed Until [time]: A copy is delayed as a result of Time Delayed Copy mode. The value for [time] is the next earliest date and time that the volume is eligible for copies.
Queue Type
The type of queue as reported by the cluster. The following are the possible values:
Rewind Unload (RUN): The copy occurs before the rewind-unload operation completes at the host.
Deferred: The copy occurs some time after the rewind-unload operation completes at the host.
Sync Deferred: The copy was set to be synchronized, according to the synchronized mode copy settings, but the synchronized cluster could not be accessed. The copy is in the deferred state. For more information about synchronous mode copy settings and considerations, see Synchronous mode copy in the Related information section.
Immediate Deferred: A RUN copy that was moved to the deferred state due to copy timeouts or TS7700 Grid states.
Time Delayed: The copy will occur sometime after the delay period has been exceeded.
Copy Mode
The copy behavior of the virtual volume copy. The following are the possible values:
Rewind Unload (RUN): The copy occurs before the rewind-unload operation completes at the host.
Deferred: The copy occurs some time after the rewind-unload operation at the host.
No Copy: No copy is made.
Sync: The copy occurs on any synchronization operation. For more information about synchronous mode copy settings and considerations, see 3.3.5, “Synchronous mode copy” on page 124.
Time Delayed: The copy occurs sometime after the delay period has been exceeded.
Exist: A consistent copy exists at this location, although No Copy is intended. A consistent copy existed at this location at the time the virtual volume was mounted. After the volume is modified, the Copy Mode of this location changes to No Copy.
Deleted
The date and time when the virtual volume on the cluster was deleted. Time that is recorded reflects the time zone in which the user's browser is located. If the volume has not been deleted, this value displays as “—”.
Removal Residency
The automatic removal residency state of the virtual volume. In a TS7700 Tape Attach configuration, this field is displayed only when the volume is in the disk partition. This field is not displayed for TS7740 Clusters. The following are the possible values:
—: Removal Residency does not apply to the cluster. This value is displayed if the cluster attaches to a physical tape library, or inconsistent data exists on the volume.
Removed: The virtual volume has been removed from the cluster.
No Removal Attempted: The virtual volume is a candidate for removal, but the removal has not yet occurred.
Retained: An attempt to remove the virtual volume occurred, but the operation failed. The copy on this cluster cannot be removed based on the configured copy policy and the total number of configured clusters. Removal of this copy lowers the total number of consistent copies within the grid to a value below the required threshold. If a removal is expected at this location, verify that the copy policy is configured and that copies are being replicated to other peer clusters. This copy can be removed only after enough replicas exist on other peer clusters.
Deferred: An attempt to remove the virtual volume occurred, but the operation failed. This state can result from a cluster outage or any state within the grid that disables or prevents replication. The copy on this cluster cannot be removed based on the configured copy policy and the total number of available clusters capable of replication. Removal of this copy lowers the total number of consistent copies within the grid to a value below the required threshold. This copy can be removed only after enough replicas exist on other available peer clusters. A subsequent attempt to remove this volume occurs when no outage exists and replication is allowed to continue.
Pinned: The virtual volume is pinned by the virtual volume Storage Class. The copy on this cluster cannot be removed until it is unpinned. When this value is present, the Removal Time value is Never.
Held: The virtual volume is held in cache on the cluster at least until the Removal Time has passed. When the removal time has passed, the virtual volume copy is a candidate for removal. The Removal Residency value becomes No Removal Attempted if the volume is not accessed before the Removal Time passes. The copy on this cluster is moved to the Resident state if it is not accessed before the Removal Time passes. If the copy on this cluster is accessed after the Removal Time has passed, it is moved back to the Held state.
Removal Time
This field is displayed only if the grid contains a disk-only cluster. Values displayed in this field depend on values displayed in the Removal Residency field. Possible values include:
—: Removal Time does not apply to the cluster. This value is displayed if the cluster attaches to a physical tape library.
Removed: The date and time the virtual volume was removed from the cluster.
Held: The date and time the virtual volume becomes a candidate for removal.
Pinned: The virtual volume is never removed from the cluster.
No Removal Attempted, Retained, or Deferred: Removal Time is not applicable.
 
The time that is recorded reflects the time zone in which the user's browser is located.
Volume Copy Retention Group
The name of the group that defines the preferred auto removal policy applicable to the virtual volume. This field is displayed only for a TS7700 Cluster or for partition 0 (CP0) on a TS7700 Tape Attach Cluster, and only when the grid contains a disk-only cluster.
 
The Volume Copy Retention Group provides more options to remove data from a cluster as the active data reaches full capacity. Volumes become candidates for removal if an appropriate number of copies exist on peer clusters AND the volume copy retention time has elapsed.
 
The Retention Time parameter describes the number of hours a volume remains in cache before becoming a candidate for removal. Retention time is measured beginning at the volume's creation time (when a write operation was performed from the beginning of a tape) or at the time the volume was most recently accessed. Volumes in each group are removed in order, based on their least recently used access times.
If the virtual volume is in a scratch category and resides on a disk-only cluster, removal settings no longer apply to the volume and the volume is a candidate for removal. In this instance, the value that is displayed for the Volume Copy Retention Group is accompanied by a warning icon. Possible values are:
—: Volume Copy Retention Group does not apply to the cluster. This value is displayed if the cluster attaches to a physical tape library.
Prefer Remove: Removal candidates in this group are removed prior to removal candidates in the Prefer Keep group.
Prefer Keep: Removal candidates in this group are removed after removal candidates in the Prefer Remove group are removed.
Pinned: Copies of volumes in this group are never removed from the accessing cluster. The volume copy retention time does not apply to volumes in this group. Volumes in this group that are subsequently moved to scratch become priority candidates for removal.
 
Caution: Care must be taken when assigning volumes to this group to avoid cache overruns.
Retention Time
The length of time, in hours, that must elapse before the virtual volume named in Volume Copy Retention Group can be removed. Depending on the Volume Copy Retention Reference setting, this time period can be measured from the time when the virtual volume was created or when it was most recently accessed.
Volume Copy Retention Reference
The basis for calculating the time period defined in Retention Time. The following are the possible values:
Volume Creation: Calculate retention time starting with the time when the virtual volume was created. This value refers to the time a write operation was performed from the beginning of a tape. It can be either a scratch mount or a private mount where writing began at record 0.
Volume Last Accessed: Calculate retention time starting with the time when the virtual volume was most recently accessed.
TVC Preference
The group containing virtual volumes that have preference for premigration. This field is displayed only for a TS7740 Cluster or for partitions 1 through 7 (CP1 through CP7) in a TS7720/TS7760 Tape Attach cluster. The following are the possible values:
PG0: Volumes in this group have preference to be premigrated over other volumes.
PG1: Volumes in this group are premigrated only when no preference group 0 volumes are waiting. A “least recently used” algorithm is used to select volumes in this group for premigration.
Time-Delayed Premigration Delay
The length of time, in hours, that must elapse before a delayed premigration operation can begin for the virtual volumes designated by the TVC Preference parameter. Depending on the Time-Delayed Premigration Reference setting, this time period can be measured from the time when the virtual volume was created or when it was most recently accessed.
Time-Delayed Premigration Reference
The basis for calculating the time period defined in Time-Delayed Premigration Delay. The following are the possible values:
Volume Creation: Calculate premigration delay starting with the time when the virtual volume was created. This value refers to the time a write operation was performed from the beginning of a tape. It can be either a scratch mount or a private mount where writing began at record 0.
Volume Last Accessed: Calculate premigration delay starting with the time when the virtual volume was most recently accessed.
Storage Preference
The priority for removing a virtual volume when the cache reaches a predetermined capacity. The following are the possible values:
Prefer Remove: These virtual volumes are removed first when the cache reaches a predetermined capacity and no scratch-ready volumes exist.
Prefer Keep: These virtual volumes are the last to be removed when the cache reaches a predetermined capacity and no scratch-ready volumes exist.
Pinned: These virtual volumes are never removed from the TS7700 Cluster.
Partition Number
The partition number for a TS7700 Tape Attach Cluster. Possible values are CP0 through CP7. “Inserted” logical volumes are those with a -1 partition, meaning that no consistent data exists yet.
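The retention candidacy rules above reduce to a few timestamp and copy-count comparisons. The following is a minimal Python sketch, not TS7700 code: the function and parameter names are hypothetical, and the required number of peer copies in practice derives from the configured copy policy and the number of clusters in the grid.

from datetime import datetime, timedelta

def is_removal_candidate(created, last_accessed, retention_hours,
                         reference, peer_copies, required_copies,
                         pinned=False, now=None):
    """Hypothetical check for auto-removal candidacy."""
    if pinned:
        return False  # pinned copies are never removed from the cluster
    now = now or datetime.utcnow()
    # Volume Copy Retention Reference selects where the clock starts.
    start = created if reference == "Volume Creation" else last_accessed
    retention_elapsed = now >= start + timedelta(hours=retention_hours)
    # Without enough consistent peer copies, the copy is reported as
    # Retained or Deferred instead of being removed.
    return retention_elapsed and peer_copies >= required_copies

The Time-Delayed Premigration Delay and Time-Delayed Premigration Reference fields follow the same pattern, with the delay clock started at volume creation or last access.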
FlashCopy details
This section provides detailed information about the state of a virtual volume FlashCopy in the TS7700 grid.
This window is available only for volumes with a created FlashCopy. In this context, a created FlashCopy is an existing FlashCopy that differs from the live virtual volume because the live volume was modified after FlashCopy time zero. For volumes with an active FlashCopy (no difference between the FlashCopy and the live volume), as in Figure 9-49 on page 405, only the Virtual Volume details window is available.
Figure 9-51 shows a FlashCopy details page in the MI compared with the output of the LI REQ Lvol flash command.
Figure 9-51 FlashCopy details window
The virtual volume details and status are displayed in the Virtual volume details table:
Volser. The VOLSER of the virtual volume, which is a six-character value that uniquely represents the virtual volume in the virtual library.
Media type. The media type of the virtual volume. Possible values are:
 – Cartridge System Tape
 – Enhanced Capacity Cartridge System Tape
Maximum Volume Capacity. The maximum size in MiB of the virtual volume. This capacity is set upon insert, and is based on the media type of a virtual volume.
Current Volume Size. Size of the data in MiB for this virtual volume.
Current Owner. The name of the cluster that currently owns the latest version of the virtual volume.
Currently Mounted. Whether the virtual volume is mounted in a virtual drive. If this value is Yes, these qualifiers are also displayed:
 – vNode. The name of the vNode that the virtual volume is mounted on.
 – Virtual drive. The ID of the virtual drive that the virtual volume is mounted on.
Cache Copy Used for Mount. The name of the cluster that owns the cache chosen for I/O operations for the mount. This selection is based on consistency policy, volume validity, residency, performance, and cluster mode.
Mount State. The mount state of the logical volume. The following values are possible:
 – Mounted. The volume is mounted.
 – Mount Pending. A mount request has been received and is in progress.
Last Accessed by a Host. The date and time the virtual volume was last accessed by a host. The time that is recorded reflects the time zone in which the user’s browser is located.
Last Modified. The date and time the virtual volume was last modified by a host. The time that is recorded reflects the time zone in which the user’s browser is located.
Category. The category to which the volume FlashCopy belongs.
Storage Group. The name of the SG that defines the primary pool for the pre-migration of the virtual volume.
Management Class. The name of the MC applied to the volume. This policy defines the copy process for volume redundancy.
Storage Class. The name of the SC applied to the volume. This policy classifies virtual volumes to automate storage management.
Data Class. The name of the DC applied to the volume.
Volume Data State. The state of the data on the FlashCopy volume. The following values are possible:
 – Active. The virtual volume is located within a private category and contains data.
 – Scratched. The virtual volume is located within a scratch category and its data is not scheduled to be automatically deleted.
 – Pending Deletion. The volume is located within a scratch category and its contents are a candidate for automatic deletion when the earliest deletion time has passed. Automatic deletion then occurs sometime thereafter. This volume can be accessed for mount or category change before the automatic deletion, in which case the deletion can be postponed or canceled.
 – Pending Deletion with Hold. The volume is located within a scratch category that is configured with hold and the earliest deletion time has not yet passed. The volume is not accessible by any host operation until it leaves the hold state. After the earliest deletion time passes, the volume becomes a candidate for deletion and is moved to the Pending Deletion state. While in that state, the volume is accessible by all legal host operations.
Earliest Deletion On. Not applicable to FlashCopy copies (-).
Logical WORM. Not applicable to FlashCopy copies (-).
Cluster-specific FlashCopy volume properties
The Cluster-specific FlashCopy Properties window displays cluster-related information for the FlashCopy volume that is being displayed:
Cluster. The cluster location of the FlashCopy, shown in the column header. Only clusters that are part of a disaster recovery family are shown.
In Cache. Whether the virtual volume is in cache for this cluster.
Device Bytes Stored. The number of actual bytes (MiB) used by each cluster to store the volume. This amount can vary between clusters based on settings and configuration.
Copy Activity. Status information about the copy activity of the virtual volume copy:
 – Complete. This cluster location completed a consistent copy of the volume.
 – In Progress. A copy is required and currently in progress.
 – Required. A copy is required at this location but has not started or completed.
 – Not Required. A copy is not required at this location.
 – Reconcile. Pending updates exist against this location’s volume. The copy activity is updated after the pending updates are resolved.
 – Time Delayed Until [time]. A copy is delayed as a result of Time Delayed Copy mode. The value for [time] is the next earliest date and time that the volume is eligible for copies.
Queue Type. The type of queue as reported by the cluster. Possible values are:
 – RUN. The copy occurs before the rewind-unload operation completes at the host.
 – Deferred. The copy occurs some time after the rewind-unload operation completes at the host.
 – Sync Deferred. The copy was set to be synchronized, according to the synchronized mode copy settings, but the synchronized cluster could not be accessed. The copy is in the deferred state.
 – Immediate Deferred. A RUN copy that has been moved to the deferred state because of copy timeouts or TS7700 Grid states.
 – Time Delayed. The copy occurs some time after the delay period has been exceeded.
Copy Mode. The copy behavior of the virtual volume copy. Possible values are:
 – RUN. The copy occurs before the rewind-unload operation completes at the host.
 – Deferred. The copy occurs some time after the rewind-unload operation completes at the host.
 – No Copy. No copy is made.
 – Sync. The copy occurs upon any synchronization operation.
 – Time Delayed. The copy occurs some time after the delay period has been exceeded.
 – Exists. A consistent copy exists at this location although No Copy is intended. This situation happens when a consistent copy existed at this location at the time the virtual volume was mounted. After the volume is modified, the copy mode of this location changes to No Copy.
Deleted. The date and time when the virtual volume on the cluster was deleted. The time that is recorded reflects the time zone in which the user’s browser is located. If the volume has not been deleted, this value displays a dash.
Removal Residency. Not applicable to FlashCopy copies.
Removal Time. Not applicable to FlashCopy copies.
Volume Copy Retention Group. Not applicable to FlashCopy copies.
Insert Virtual Volumes window
Use this window to insert a range of virtual volumes in the TS7700 subsystem. Virtual volumes that are inserted in an individual cluster are available to all clusters within a grid configuration.
The Insert Virtual Volumes window is shown in Figure 9-52.
Figure 9-52 Insert Virtual Volumes window
The Insert Virtual Volumes window shows the Current availability across entire grid table. This table shows the total number of volumes already inserted, the maximum number of volumes allowed in the grid, and the available slots (the difference between the maximum allowed and the currently inserted numbers). Clicking Show/Hide under the table shows or hides the information box with the already inserted volume ranges, quantities, media type, and capacity. Figure 9-53 shows the inserted ranges box.
Figure 9-53 Show logical volume ranges
Insert a New Virtual Volume Range window
Use the following fields to insert a range of new virtual volumes:
Starting VOLSER. The first virtual volume to be inserted. The range for inserting virtual volumes begins with this VOLSER number.
Quantity. Select this option to insert a set number of virtual volumes beginning with the Starting VOLSER. Enter the quantity of virtual volumes to be inserted in the adjacent field. The user can insert up to 10,000 virtual volumes at one time.
Ending VOLSER. Select this option to insert a range of virtual volumes. Enter the ending VOLSER number in the adjacent field.
Initially owned by. The name of the cluster that will own the new virtual volumes. Select a cluster from the menu.
Media type. Media type of the virtual volumes. The following values are possible:
 – Cartridge System Tape (400 MiB)
 – Enhanced Capacity Cartridge System Tape (800 MiB)
See 1.6, “Data storage values” on page 12 for additional information about the use of binary prefixes.
Set Constructs. Select this check box to specify constructs for the new virtual volumes. Then, use the menu under each construct to select a predefined construct name.
Set constructs only for virtual volumes that are used by hosts that are not multiple virtual systems (MVS) hosts. MVS hosts automatically assign constructs for virtual volumes and overwrite any manually assigned constructs.
The user can specify any or all of the following constructs: SG, MC, SC, or DC.
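When scripting inserts, it can help to check how a Starting VOLSER and Quantity expand into a range before submitting them through the MI. The following Python sketch makes the simplifying assumption that only a trailing numeric portion of the VOLSER increments; confirm the library's actual VOLSER incrementing rules before relying on it.

def next_volser(volser):
    """Increment a six-character VOLSER (simplified: numeric suffix only)."""
    head = volser.rstrip("0123456789")
    digits = volser[len(head):]
    if not digits:
        raise ValueError("no numeric suffix to increment")
    value = int(digits) + 1
    if len(str(value)) > len(digits):
        raise ValueError("numeric suffix overflow")
    return head + str(value).zfill(len(digits))

def volser_range(start, quantity):
    """Expand a Starting VOLSER plus Quantity into the full range."""
    volumes = [start]
    for _ in range(quantity - 1):
        volumes.append(next_volser(volumes[-1]))
    return volumes

print(volser_range("VT0000", 5))  # ['VT0000', 'VT0001', 'VT0002', 'VT0003', 'VT0004']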
Modify Virtual Volumes window
Use the Modify Virtual Volumes window to modify the constructs that are associated with existing virtual volumes in the TS7700 composite library.
 
Note: You can use the Modify Virtual Volume function to manage virtual volumes that belong to a non-MVS host that is not aware of constructs.
MVS hosts automatically assign constructs for virtual volumes, and manual changes are not recommended. The Modify Virtual Volumes window acts on any logical volume belonging to the cluster or grid regardless of the host that owns the volume. The changes that are made on this window take effect only on the modified volume or range after a mount-demount sequence, or by using the LI REQ COPYRFSH command.
To display a range of existing virtual volumes, enter the starting and ending VOLSERs in the fields at the top of the window and click Show.
To modify constructs for a range of logical volumes, identify a Volume Range, and then, click the Constructs menu to select construct values and click Modify. The menus have these options:
Volume Range: The range of logical volumes to be modified:
 – From: The first VOLSER in the range.
 – To: The last VOLSER in the range.
Constructs: Use the following menus to change one or more constructs for the identified Volume Range. From each menu, the user can select a predefined construct to apply to the Volume Range, No Change to retain the current construct value, or dashes (--------) to restore the default construct value:
 – Storage Groups: Changes the SG for the identified Volume Range.
 – Storage Classes: Changes the SC for the identified Volume Range.
 – Data Classes: Changes the DC for the identified Volume Range.
 – Management Classes: Changes the MC for the identified Volume Range.
The user is asked to confirm the decision to modify logical volume constructs. To continue with the operation, click OK. To abandon the operation without modifying any logical volume constructs, click Cancel.
Delete Virtual Volumes window
Use the Delete Virtual Volumes window to delete unused virtual volumes from the TS7700 that are in the Insert Category (FF00).
 
Note: Only the unused logical volumes can be deleted through this window, meaning volumes in insert category FF00 that have never been mounted or have had their category, constructs, or attributes modified by a host. Otherwise, those logical volumes can be deleted only from the host.
The normal way to delete several virtual scratch volumes is by initiating the activities from the host. With the DFSMS removable media manager (DFSMSrmm) as the tape management system (TMS), this task is done by using RMM commands.
To delete unused virtual volumes, select one of the options described next, and click Delete Volumes. A confirmation window is displayed. Click OK to delete or Cancel to cancel. To view the current list of unused virtual volume ranges in the TS7700 grid, enter a virtual volume range at the bottom of the window and click Show. A virtual volume range deletion can be canceled while in progress at the Cluster Operation History window.
This window has the following options:
Delete ALL unused virtual volumes: Deletes all unused virtual volumes across all VOLSER ranges.
Delete specific range of unused virtual volumes: All unused virtual volumes in the entered VOLSER range are deleted. Enter the VOLSER range:
 – From: The start of the VOLSER range to be deleted if the Delete specific range of unused virtual volumes option is selected.
 – To: The end of the VOLSER range to be deleted if the Delete specific range of unused virtual volumes option is selected.
Move Virtual Volumes window
Use the Move Virtual Volumes window to move a range of virtual volumes that are used by the TS7740 or TS7720T from one physical volume or physical volume range to a new target pool, or to cancel a move request that is already in progress. If a move operation is already in progress, a warning message is displayed. The user can view move operations in progress from the Events window.
To cancel a move request, select the Cancel Move Requests link. The following options to cancel a move request are available:
Cancel All Moves: Cancels all move requests.
Cancel Priority Moves Only: Cancels only priority move requests.
Cancel Deferred Moves Only: Cancels only Deferred move requests.
Select a Pool: Cancels move requests from the designated source pool (1 - 32), or from all source pools.
To move virtual volumes, define a volume range or select an existing range, select a target pool, and identify a move type:
Physical Volume Range: The range of physical volumes from which the virtual volumes are to be moved. Use either this option or Existing Ranges to define the range of volumes to move, but not both.
 – From: VOLSER of the first physical volume in the range.
 – To: VOLSER of the last physical volume in the range.
Existing Ranges: The list of existing physical volume ranges. Use either this option or Physical Volume Range to define the range of volumes to move, but not both.
Media Type: The media type of the physical volumes in the range to move. If no available physical stacked volume of the media type is in the range that is specified, no virtual volume is moved.
Target Pool: The number (1 - 32) of the target pool to which virtual volumes are moved.
Move Type: Used to determine when the move operation occurs. The following values are possible:
 – Deferred: The move operation occurs in the future, as the internal schedule permits.
 – Priority: Move operation occurs as soon as possible.
 – Honor Inhibit Reclaim schedule: An option of the Priority Move Type, it specifies that the move schedule occurs with the Inhibit Reclaim schedule. If this option is selected, the move operation does not occur when Reclaim is inhibited.
After defining the move operation parameters and clicking Move, confirm the request to move the virtual volumes from the defined physical volumes. If Cancel is selected, you return to the Move Virtual Volumes window.
Virtual Volumes Search window
To search for virtual volumes in a specific TS7700 cluster by VOLSER, category, media type, expiration date, or inclusion in a group or class, use the window that is shown in Figure 9-54. With the TS7720T, a new search option is available to search by Partition Number. The user provides a name for each query so that it can be recalled as necessary.
A maximum of 10 search query results or 2 GB of search data can be stored at one time. If either limit is reached, the user must delete one or more stored queries from the Previous Virtual Volume Searches window before creating a new search.
Figure 9-54 MI Virtual Volume Search entry window
To view the results of a previous search query, select the Previous Searches hyperlink to see a table containing a list of previous queries. Click a query name to display a list of virtual volumes that match the search criteria.
To clear the list of saved queries, select the check box next to one or more queries to be removed, select Clear from the menu, and click Go. This operation does not clear a search query already in progress.
Confirm the decision to clear the query list. Select OK to clear the list of saved queries, or Cancel to retain the list of queries.
To create a new search query, enter a name for the new query. Enter a value for any of the fields and select Search to initiate a new virtual volume search. The query name, criteria, start time, and end time are saved along with the search results.
To search for a specific VOLSER, enter parameters in the New Search Name and Volser fields and then click Search.
When looking for the results of earlier searches, click Previous Searches on the Virtual Volume Search window, which is shown in Figure 9-54 on page 419.
Search Options
Use this table to define the parameters for a new search query. Only one search can be executed at a time. Define one or more of the following search parameters:
Volser (volume’s serial number). This field can be left blank. The following wildcard characters are valid in this field (a pattern-translation sketch follows this list of search parameters):
 – Percent sign (%): Represents zero or more characters.
 – Asterisk (*): Converted to % (percent). Represents zero or more characters.
 – Period (.): Converted to _ (single underscore). Represents one character.
 – A single underscore (_): Represents one character.
 – Question mark (?): Converted to _ (single underscore). Represents one character.
Category: The name of the category to which the virtual volume belongs. This value is a four-character hexadecimal string; for instance, 0002/0102 (scratch MEDIA2), 000E (error), 000F/001F (private), and FF00 (insert) are possible values for scratch and specific categories. The wildcard characters that are shown in the previous topic can also be used in this field. This field can be left blank.
Media Type: The type of media on which the volume exists. Use the menu to select from the available media types. This field can be left blank.
Current Owner: The cluster owner is the name of the cluster where the logical volume resides. Use the drop-down menu to select from a list of available clusters. This field is only available in a grid environment and can be left blank.
Expire Time: The amount of time in which virtual volume data expires. Enter a number. This field is qualified by the values Equal to, Less than, or Greater than in the preceding menu and defined by the succeeding menu under the heading Time Units. This field can be left blank.
Removal Residency: The automatic removal residency state of the virtual volume. This field is not displayed for TS7740 clusters. In a TS7720T (tape attach) configuration, this field is displayed only when the volume is in partition 0 (CP0). The following values are possible:
 – Blank (ignore): If this field is empty (blank), the search ignores any values in the Removal Residency field. This is the default selection.
 – Removed: The search includes only virtual volumes that have been removed.
 – Removed Before: The search includes only virtual volumes that are removed before a specific date and time. If this value is selected, the Removal Time field must also be completed.
 – Removed After: The search includes only virtual volumes that are removed after a specific date and time. If this value is selected, the Removal Time field must also be completed.
 – In Cache: The search includes only virtual volumes in the cache.
 – Retained: The search includes only virtual volumes that are classified as retained.
 – Deferred: The search includes only virtual volumes that are classified as deferred.
 – Held: The search includes only virtual volumes that are classified as held.
 – Pinned: The search includes only virtual volumes that are classified as pinned.
 – No Removal Attempted: The search includes only virtual volumes that have not previously been subject to a removal attempt.
 – Removable Before: The search includes only virtual volumes that are candidates for removal before a specific date and time. If this value is selected, the Removal Time field must also be completed.
 – Removable After: The search includes only virtual volumes that are candidates for removal after a specific date and time. If this value is selected, the Removal Time field must also be completed.
Removal Time: This field is not available for the TS7740. Values that are displayed in this field depend on the values that are shown in the Removal Residency field. These values reflect the time zone in which the browser is located:
 – Date: The calendar date according to month (M), day (D), and year (Y). It has the format MM/DD/YYYY. This field includes a date chooser calendar icon. The user can enter the month, day, and year manually, or can use the calendar chooser to select a specific date. The default for this field is blank.
 – Time: The Coordinated Universal Time (UTC) in hours (H), minutes (M), and seconds (S). The values in this field accept the form HH:MM:SS only. Possible values for this field include 00:00:00 - 23:59:59. This field includes a time chooser clock icon. The user can enter hours and minutes manually by using 24-hour time designations, or can use the time chooser to select a start time based on a 12-hour (AM/PM) clock. The default for this field is midnight (00:00:00).
Volume Copy Retention Group: The name of the Volume Copy Retention Group for the cluster.
The Volume Copy Retention Group provides more options to remove data from a disk-only TS7700 as the active data reaches full capacity. Volumes become candidates for removal if an appropriate number of copies exist on peer clusters and the volume copy retention time has elapsed since the volume was last accessed.
Volumes in each group are removed in order based on their least recently used access times. The volume copy retention time describes the number of hours a volume remains in cache before becoming a candidate for removal.
This field is only visible if the selected cluster does not attach to a physical library. The following values are valid:
 – Blank (ignore): If this field is empty (blank), the search ignores any values in the Volume Copy Retention Group field. This is the default selection.
 – Prefer Remove: Removal candidates in this group are removed before removal candidates in the Prefer Keep group.
 – Prefer Keep: Removal candidates in this group are removed after removal candidates in the Prefer Remove group.
 – Pinned: Copies of volumes in this group are never removed from the accessing cluster. The volume copy retention time does not apply to volumes in this group. Volumes in this group that are subsequently moved to scratch become priority candidates for removal.
 
Tip: To avoid cache overruns, plan ahead when assigning volumes to this group.
 – “-”: Volume Copy Retention does not apply to the TS7740 cluster or to a TS7720T (for volumes in CP1 through CP7). This value (a dash indicating an empty value) is displayed if the cluster attaches to a physical tape library.
Storage Group: The name of the SG in which the virtual volume is. The user can enter a name in the empty field, or select a name from the adjacent menu. This field can be left blank.
Management Class: The name of the MC to which the virtual volume belongs. The user can enter a name in the empty field, or select a name from the adjacent menu. This field can be left blank.
Storage Class: The name of the SC to which the virtual volume belongs. The user can enter a name in the empty field, or select a name from the adjacent menu. This field can be left blank.
Data Class: The name of the DC to which the virtual volume belongs. The user can enter a name in the empty field, or select a name from the adjacent menu. This field can be left blank.
Compression Method: The name of the compression method to which the virtual volume belongs. You can select a name from the adjacent drop-down menu, or the field can be left blank.
Mounted: Whether the virtual volume is mounted. The following values are possible:
 – Ignore: Ignores any values in the Mounted field. This is the default selection.
 – Yes: Includes only mounted virtual volumes.
 – No: Includes only unmounted virtual volumes.
Storage Preference: Removal policy for a virtual volume, which determines when the volume is removed from the cache of a TS7700 cluster in a grid configuration. The following values are possible:
 – Prefer Remove: These virtual volumes are removed first when cache reaches a predetermined capacity and no scratch-ready volumes exist.
 – Prefer Keep: These virtual volumes are the last to be removed when the cache reaches a predetermined capacity and no scratch-ready volumes exist.
 – Pinned: These virtual volumes are never removed from the TS7700 Cluster.
Logical WORM: Whether the logical volume is defined as Write Once Read Many (WORM). The following values are possible:
 – Ignore: Ignores any values in the Logical WORM field. This is the default selection.
 – Yes: Includes only WORM logical volumes.
 – No: Does not include any WORM logical volumes.
Partition Number: The partition number (0 to 7) for a TS7700 Tape Attach volume. Inserted logical volumes are those with a -1 partition, meaning that there is no consistent data yet.
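As an illustration of the wildcard conversions listed for the Volser field, the following Python sketch maps a user-entered pattern to the SQL-style wildcards that the search applies. The function name is illustrative only.

def to_search_pattern(user_pattern):
    """Apply the documented conversions: '*' becomes '%', while '.' and
    '?' become '_'; '%' and '_' pass through unchanged."""
    return user_pattern.translate(str.maketrans({"*": "%", ".": "_", "?": "_"}))

assert to_search_pattern("VT*") == "VT%"        # zero or more characters
assert to_search_pattern("VT000.") == "VT000_"  # exactly one character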
 
Remember: The user can print or download the results of a search query by using Print Report or Download Spreadsheet on the Volumes found table at the end of the Search Results window.
Search Results Option
Use this table to select the properties that are displayed on the Virtual volume search results window:
Click the down arrow that is adjacent to the Search Results Option, shown in Figure 9-54 on page 419, to open the Search Results Options table.
Select the check box that is adjacent for each property that should be included on the Virtual Volume Search Results window. The following properties can be selected for display:
 – Category
 – Current Owner (Grid only)
 – Media Type
 – Expire Time
 – Storage group
 – Management Class
 – Storage Class
 – Data Class
 – Compression Method
 – Mounted Tape Drive
 – Removal Residency
 – Removal Time
 – Volume Copy Retention Group
 – Storage Preference
 – Logical WORM
Click the Search button to start a new virtual volume search.
When the search is complete, the results are displayed in the Virtual Volume Search Results window. The query name, criteria, start time, and end time are saved along with the search results. A maximum of 10 search queries can be saved. The following subwindows are available:
Previous virtual volume searches: Use this window to view previous searches of virtual volumes on the cluster that the MI is currently accessing.
Virtual volume search results: Use this window to view a list of virtual volumes on this cluster that meet the criteria of an executed search query.
Categories
Use this page to add, modify, or delete a scratch category of virtual volumes.
This page can also be used to view the total number of logical volumes in each category, grouped under the Damaged, Scratch, and Private groups. Clicking the plus sign (+) adjacent to a category expands the information about that category, showing how many volumes in that category exist in each cluster of the grid (as shown in Figure 9-56 on page 424).
A category is a grouping of virtual volumes for a predefined use. A scratch category groups virtual volumes for non-specific use. This grouping enables faster mount times because the TS7700 can order category mounts without recalling data from a stacked volume (fast ready).
Figure 9-55 shows the Category window in the TS7700 MI.
Figure 9-55 Categories window
You can display the already defined categories, as shown in Figure 9-56.
Figure 9-56 Displaying existing categories
Table 9-10 lists the values that are displayed on the Categories table, as shown in Figure 9-56 on page 424.
Table 9-10 Category values
Column name
Description
Categories
The type of category that defines the virtual volume. The following values are valid:
Scratch: Categories within the user-defined private range 0x0001 through 0xEFFF that are defined as scratch. Click the plus sign (+) icon to expand this heading and reveal the list of categories that are defined by this type. Expire time and hold values are shown in parentheses next to the category number. See Table 9-10 for descriptions of these values.
Private: Custom categories that are established by a user, within the range of 0x0001 - 0xEFFF. Click the plus sign (+) icon to expand this heading and reveal the list of categories that are defined by this type.
Damaged: A system category that is identified by the number 0xFF20. Virtual volumes in this category are considered damaged.
Insert: A system category that is identified by the number 0xFF00. Inserted virtual volumes are held in this category until moved by the host into a scratch category.
If no defined categories exist for a certain type, that type is not displayed on the Categories table.
Owning Cluster
Names of all clusters in the grid. Expand a category type or number to display. This column is visible only when the accessing cluster is part of a grid.
Counts
The total number of virtual volumes according to category type, category, or owning cluster.
Scratch Expired
The total number of scratch volumes per owning cluster that are expired. The total of all scratch expired volumes is the number of ready scratch volumes.
The user can use the Categories table to add, modify, or delete a scratch category, or to change the way information is displayed.
 
Tip: The total number of volumes within a grid is not always equal to the sum of all category counts. Volumes can change category multiple times per second, which makes the snapshot count obsolete.
Table 9-11 lists the actions that can be performed on the Categories window.
Table 9-11 Available actions on the Categories window
Action
Steps to perform action
Add a scratch category
1. Select Add Scratch Category.
2. Define the following category properties:
 – Category: A four-digit hexadecimal number that identifies the category. The valid characters for this field are A - F and 0 - 9. Do not use category name 0000 or “FFxx”, where xx equals 0 - 9 or A - F: 0000 represents a null value, and “FFxx” is reserved for hardware. (A validation sketch appears at the end of this section.)
 – Expire: The amount of time after a virtual volume is returned to the scratch category before its data content is automatically delete-expired1.
Select an expiration time from the menu. If the user selects No Expiration, volume data never automatically delete-expires. If the user selects Custom, enter values for the following fields:
 • Time: Enter a number in the field according to these restrictions:
  1 - 2,147,483,647 if the unit is hours
  1 - 89,478,485 if the unit is days
  1 - 244,983 if the unit is years
 • Time Unit: Select a corresponding unit from the menu.
 – Set Expire Hold: Check this box to prevent the virtual volume from being mounted or having its category and attributes changed before the expire time has elapsed.2 Checking this field activates the hold state for any volumes currently in the scratch category and for which the expire time has not yet elapsed. Clearing this field removes the access restrictions on all volumes currently in the hold state within this scratch category.
Modify a scratch category
The user can modify a scratch category in two ways:
Select a category on the table, and then, select Actions → Modify Scratch Category.
Right-click a category on the table and select Modify Scratch Category from the menu.
The user can modify the following category values:
Expire
Set Expire Hold
The user can modify one category at a time.
Delete a scratch category
The user can delete a scratch category in two ways:
1. Select a category on the table, and then, select Actions → Delete Scratch Category.
2. Right-click a category on the table and select Delete Scratch Category from the menu.
The user can delete only one category at a time.
Hide or show columns on the table
1. Right-click the table header.
2. Click the check box next to a column heading to hide or show that column in the table. Column headings that are checked are displayed on the table.
Filter the table data
Follow these steps to filter by using a string of text:
1. Click in the Filter field.
2. Enter a search string.
3. Press Enter.
 
Follow these steps to filter by column heading:
1. Click the down arrow next to the Filter field.
2. Select the column heading to filter by.
3. Refine the selection:
 – Categories: Enter a whole or partial category number and press Enter.
 – Owning Cluster: Enter a cluster name or number and press Enter. Expand the category type or category to view highlighted results.
 – Counts: Enter a number and press Enter to search on that number string.
 – Scratch Expired: Enter a number and press Enter to search on that number string.
Reset the table to its default view
1. Right-click the table header.
2. Click Reset Table Preferences.

1 A volume becomes a candidate for delete-expire when all the following conditions are met:
The amount of time since the volume entered the scratch category is equal to or greater than the Expire Time.
The amount of time since the data recorded in that volume was created or last modified is greater than 12 hours.
At least 12 hours has passed since the volume was migrated out or recalled back into disk cache.
Up to 1000 delete-expire candidate volumes can be deleted per hour. The volumes that have been within the scratch category the longest are deleted first. If a volume is selected for a scratch mount before it has delete-expired, the previous data in that volume is deleted immediately at first write.
 
2 If EXPIRE HOLD is set, then the virtual volume cannot be mounted during the expire time duration and will be excluded from any scratch counts surfaced to the host. The volume category can be changed, but only to a private category, allowing accidental scratch occurrences to be recovered to private. If EXPIRE HOLD is not set, then the virtual volume can be mounted or have its category and attributes changed within the expire time duration. The volume is also included in scratch counts surfaced to the host.
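The delete-expire conditions in note 1 amount to three timestamp comparisons. The following Python sketch restates them; the argument names are illustrative, and the hourly cap of 1000 deletions is enforced by the subsystem and is not modeled here.

from datetime import datetime, timedelta

def delete_expire_candidate(entered_scratch, last_modified,
                            last_migrate_or_recall, expire_time, now=None):
    """True when all three note-1 conditions hold. All arguments except
    expire_time (a timedelta) are datetimes."""
    now = now or datetime.utcnow()
    return (now - entered_scratch >= expire_time
            and now - last_modified > timedelta(hours=12)
            and now - last_migrate_or_recall >= timedelta(hours=12))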
 
Note: There is no cross-check between defined categories in the z/OS systems and the definitions in the TS7700.
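Because no cross-check exists, a client-side check of the category naming rules from Table 9-11 (four hexadecimal digits, not 0000, and not in the reserved FFxx range) can catch typing errors before categories are defined. A minimal Python sketch:

import re

def valid_scratch_category(category):
    """Validate a user-defined scratch category number."""
    if not re.fullmatch(r"[0-9A-Fa-f]{4}", category):
        return False
    category = category.upper()
    return category != "0000" and not category.startswith("FF")

assert valid_scratch_category("0102")
assert not valid_scratch_category("FF00")  # reserved for hardware
assert not valid_scratch_category("0000")  # null value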
9.3.7 Physical icon
The topics in this section present information that is related to monitoring and manipulating physical volumes in the TS7740 and TS7720T clusters. To view or modify settings for physical volume pools to manage the physical volumes that are used by the tape-attached clusters, use the window that is shown in Figure 9-57.
Figure 9-57 Physical icon
Physical Volume Pools
The Physical Volume Pools properties table displays the media properties and encryption settings for every physical volume pool that is defined for a specific TS7700T cluster in the grid. This table contains these tabs:
Pool Properties
Encryption Settings
 
Tip: Pools 1 - 32 are preinstalled and initially set to default attributes. Pool 1 functions as the default pool and is used if no other pool is selected.
Figure 9-58 on page 429 shows an example of the Physical Volume Pools window. A link to a tutorial that shows how to modify pool encryption settings is available; click the link to see the tutorial material. This window is visible but disabled on the TS7700 MI if the grid possesses a physical library, but the selected cluster does not. In that case, this message is displayed:
The cluster is not attached to a physical tape library.
You can use the window that is shown in Figure 9-58 to view or modify settings for physical volume pools.
Figure 9-58 Physical Volume Pools Properties table
The Physical Volume Pool Properties table displays the encryption setting and media properties for every physical volume pool that is defined in a TS7740 and TS7720T. This table contains two tabs: Pool Properties and Physical Tape Encryption Settings. The information that is displayed in a tape-attached cluster depends on the current configuration and media availability.
The following information is displayed:
Under Pool Properties:
 – Pool: The pool number. This number is a whole number 1 - 32, inclusive.
 – Media Class: The supported media class of the storage pool. The valid value is 3592.
 – First Media (Primary): The primary media type that the pool can borrow from or return to the common scratch pool (Pool 0). The values that are displayed in this field are dependent upon the configuration of physical drives in the cluster. See Figure 4-9 on page 196 for First and Second Media values based on drive configuration.
The primary media type can have the following values:
Any 3592: Any media with a 3592 format.
None: Indicates that the pool cannot borrow from or return media to the common scratch pool. This option is valid only when the Borrow Indicator field value is No Borrow, Return or No Borrow, Keep.
JA: Enterprise Tape Cartridge (ETC).
JB: Extended Data Enterprise Tape Cartridge (EDETC).
JC: Advanced Type C Data (ATCD).
JD: Advanced Type D Data (ATDD).
JJ: Enterprise Economy Tape Cartridge (EETC).
JK: Advanced Type K Economy (ATKE).
JL: Advanced Type L Economy (ATLE).
 – Second Media (Secondary): The second choice of media type from which the pool can borrow. Options that are shown exclude the media type chosen for First Media. The following values are possible:
Any 3592: Any media with a 3592 format.
None: The only option available if the Primary Media type is Any 3592. This option is valid only when the Borrow Indicator field value is No Borrow, Return or No Borrow, Keep.
JA: ETC.
JB: EDETC.
JC: ATCD.
JD: ATDD.
JJ: EETC.
JK: ATKE.
JL: ATLE.
 – Borrow Indicator: Defines how the pool is populated with scratch cartridges. The following values are possible:
Borrow, Return: A cartridge is borrowed from the Common Scratch Pool (CSP) and returned to the CSP when emptied.
Borrow, Keep: A cartridge is borrowed from the CSP and retained by the actual pool, even after being emptied.
No Borrow, Return: A cartridge is not borrowed from the CSP, but an emptied cartridge is placed in the CSP. This setting is used for an empty pool.
No Borrow, Keep: A cartridge is not borrowed from the CSP, and an emptied cartridge is retained in the actual pool.
 – Reclaim Pool: The pool to which virtual volumes are assigned when reclamation occurs for the stacked volume on the selected pool.
 
Important: The reclaim pool that is designated for the Copy Export pool needs to be set to the same value as the Copy Export pool. If the reclaim pool is modified, Copy Export disaster recovery capabilities can be compromised.
If there is a need to modify the reclaim pool that is designated for the Copy Export pool, the reclaim pool cannot be set to the same value as the primary pool or the reclaim pool that is designated for the primary pool. If the reclaim pool for the Copy Export pool is the same as either of the other two pools that are mentioned, the primary and backup copies of a virtual volume might exist on the same physical media. If the reclaim pool for the Copy Export pool is modified, it is the user’s responsibility to Copy Export volumes from the reclaim pool.
 – Maximum Devices: The maximum number of physical tape drives that the pool can use for premigration.
 – Export Pool: The type of export that is supported if the pool is defined as an Export Pool (the pool from which physical volumes are exported). The following values are possible:
Not Defined: The pool is not defined as an Export pool.
Copy Export: The pool is defined as a Copy Export pool.
 – Export Format: The media format used when writing volumes for export. This function can be used when the physical library recovering the volumes supports a different media format than the physical library exporting the volumes. This field is only enabled if the value in the Export Pool field is Copy Export. The following values are valid for this field:
Default: The highest common format that is supported across all drives in the library. This is also the default value for the Export Format field.
E06: Format of a 3592-E06 Tape Drive.
E07: Format of a 3592-E07 Tape Drive.
E08: Format of a 3592-E08 Tape Drive.
 – Days Before Secure Data Erase: The number of days that a physical volume that is a candidate for Secure Data Erase can remain in the pool without being accessed. Each stacked physical volume possesses a timer for this purpose, which is reset when a virtual volume on the stacked physical volume is accessed. Secure Data Erase occurs later, based on an internal schedule, and renders all data on a physical stacked volume inaccessible. The valid range of possible values is 1 - 365. Clearing the check box deactivates this function.
 – Days Without Access: The number of days that can elapse without a physical stacked volume in the pool being accessed before the volume is set as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when a virtual volume on it is accessed. The reclamation occurs later, based on an internal schedule. The valid range of possible values is 1 - 365. Clearing the check box deactivates this function.
 – Age of Last Data Written: The number of days that can elapse without write access to a physical stacked volume in the pool before the volume is set as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when a virtual volume on it is accessed. The reclamation occurs later, based on an internal schedule. The valid range of possible values is 1 - 365. Clearing the check box deactivates this function.
 – Days Without Data Inactivation: The number of sequential days that the data ratio of the pool must remain higher than the Maximum Active Data value before a physical stacked volume is set as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when data inactivation occurs. The reclamation occurs later, based on an internal schedule. The valid range of possible values is 1 - 365. Clearing the check box deactivates this function. If deactivated, this field is not used as a criterion for reclaim.
 – Maximum Active Data: The ratio of active data to the total capacity of a physical stacked volume. This field is used with Days Without Data Inactivation. The valid range of possible values is 5 - 95%. This function is disabled if Days Without Data Inactivation is not checked.
 – Reclaim Threshold: The percentage that is used to determine when to perform reclamation of free storage on a stacked volume. When the amount of active data on a physical stacked volume drops below this percentage, a reclaim operation is performed on the stacked volume. The valid range of possible values is 0 - 95% and can be selected in 5% increments; 35% is the default value.
Sunset Media Reclaim Threshold: The percentage that is used to determine when to reclaim sunset media. This option was introduced in Release 3.3.
To modify pool properties, select the check box next to one or more pools on the Pool Properties tab, select Modify Pool Properties from the menu, and click Go.
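Several of the pool properties above act as independent triggers for reclamation. The following Python sketch combines the reclaim threshold with the two elapsed-time criteria; the record layout is hypothetical (not a TS7700 API), and the Days Without Data Inactivation rule is omitted for brevity.

from datetime import datetime, timedelta
from types import SimpleNamespace

def reclaim_candidate(vol, pool, now=None):
    """Evaluate the per-pool reclaim triggers described above."""
    now = now or datetime.utcnow()
    if vol.active_data_pct < pool.reclaim_threshold_pct:
        return True  # below the Reclaim Threshold
    if pool.days_without_access is not None and \
            now - vol.last_access > timedelta(days=pool.days_without_access):
        return True  # Days Without Access elapsed
    if pool.age_of_last_data_written is not None and \
            now - vol.last_write > timedelta(days=pool.age_of_last_data_written):
        return True  # Age of Last Data Written elapsed
    return False

pool = SimpleNamespace(reclaim_threshold_pct=35, days_without_access=90,
                       age_of_last_data_written=None)
vol = SimpleNamespace(active_data_pct=20,
                      last_access=datetime.utcnow() - timedelta(days=10),
                      last_write=datetime.utcnow() - timedelta(days=10))
print(reclaim_candidate(vol, pool))  # True: 20% is below the 35% threshold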
Physical Tape Encryption Settings: The Physical Tape Encryption Settings tab displays the encryption settings for physical volume pools. The following encryption information is displayed on this tab:
 – Pool: The pool number. This number is a whole number 1 - 32, inclusive.
 – Encryption: The encryption state of the pool. The following values are possible:
Enabled: Encryption is enabled on the pool.
Disabled: Encryption is not enabled on the pool. When this value is selected, key modes, key labels, and check boxes are disabled.
 – Key Mode 1: Encryption mode that is used with Key Label 1. The following values are available:
Clear Label: The data key is specified by the key label in clear text.
Hash Label: The data key is referenced by a computed value corresponding to its associated public key.
None: Key Label 1 is disabled.
“-”: The default key is in use.
 – Key Label 1: The current encryption key (EK) Label 1 for the pool. The label must consist of ASCII characters and cannot exceed 64 characters. Leading and trailing blanks are removed, but an internal space is allowed. Lowercase characters are internally converted to uppercase upon storage. Therefore, key labels are reported by using uppercase characters.
 
Note: You can use identical values in Key Label 1 and Key Label 2, but you must define each label for each key.
If the encryption state is Disabled, this field is blank. If the default key is used, the value in this field is default key.
 – Key Mode 2: Encryption mode that is used with Key Label 2. The following values are valid:
Clear Label: The data key is specified by the key label in clear text.
Hash Label: The data key is referenced by a computed value corresponding to its associated public key.
None: Key Label 2 is disabled.
“-”: The default key is in use.
 – Key Label 2: The current EK Label 2 for the pool. The label must consist of ASCII characters and cannot exceed 64 characters. Leading and trailing blanks are removed, but an internal space is allowed. Lowercase characters are internally converted to uppercase upon storage. Therefore, key labels are reported by using uppercase characters.
If the encryption state is Disabled, this field is blank. If the default key is used, the value in this field is default key.
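The key-label rules above (trimming, case folding, and the length limit) are easy to apply before entering labels in the MI. A minimal Python sketch, with an illustrative function name:

def normalize_key_label(label):
    """Trim leading and trailing blanks, keep internal spaces, fold to
    uppercase, and enforce the 64-character ASCII limit."""
    label = label.strip()
    if not label.isascii():
        raise ValueError("key label must contain only ASCII characters")
    if len(label) > 64:
        raise ValueError("key label exceeds 64 characters")
    return label.upper()

assert normalize_key_label("  my key label ") == "MY KEY LABEL"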
To modify encryption settings, complete these steps:
1. Select one or more pools that are shown on the Physical Tape Encryption Settings tab.
2. Select Modify Encryption Settings from the menu and click Go.
Modify pool properties
Use this page to view or modify settings for physical volume pools, which manage the physical volumes used by the IBM TS7700T cluster. To modify properties for one or more physical volume pools, complete these steps:
1. From the Physical Volume Pools window, click the Pool Properties tab.
2. Select the check box next to each pool to be modified.
3. Select Modify Pool Properties from the Physical volume pools drop-down menu.
4. Click Go to open the Modify Pool Properties window.
 
Note: Pools 1-32 are preinstalled. Pool 1 functions as the default pool and is used if no other pool is selected. All other pools must be defined before they can be selected.
5. You can modify values for any of the following fields:
 – Media Class: The supported media class of the storage pool. The following is the possible value:
 • 3592
 – First Media (Primary): The primary media type that the pool can borrow or return to the common scratch pool (Pool 0). The values displayed in this field are dependent upon the configuration of physical drives in the cluster. The following are the possible values:
 • Any 3592: Any media with a 3592 format.
 • None: Indicates that the pool cannot borrow or return any media to the common scratch pool. This option is valid only when the Borrow Indicator field value is no borrow, return or no borrow, keep.
 • JA: Enterprise Tape Cartridge (ETC).
 • JB: Extended Data Enterprise Tape Cartridge (EDETC).
 • JC: Enterprise Advanced Data Cartridge (EADC).
 • JD: Advanced Type D Data (ATDD).
 • JJ: Enterprise Economy Tape Cartridge (EETC).
 • JK: Enterprise Advanced Economy Tape Cartridge (EAETC).
 • JL: Advanced Type L Economy (ATLE).
 – Second Media (Secondary): The second choice of media type that the pool can borrow from. The options shown exclude the media type chosen for First Media. The following are the possible values:
 • Any 3592: Any media with a 3592 format.
 • None: The only option available if the Primary Media type is Any 3592. This option is valid only when the Borrow Indicator field value is no borrow, return or no borrow, keep.
 • JA: Enterprise Tape Cartridge (ETC).
 • JB: Extended Data Enterprise Tape Cartridge (EDETC).
 • JC: Enterprise Advanced Data Cartridge (EADC).
 • JD: Advanced Type D Data (ATDD).
 • JJ: Enterprise Economy Tape Cartridge (EETC).
 • JK: Enterprise Advanced Economy Tape Cartridge (EAETC).
 • JL: Advanced Type L Economy (ATLE).
 – Borrow Indicator: Defines how the pool is populated with scratch cartridges. The following are the possible values:
 • Borrow, Return: A cartridge is borrowed from the Common Scratch Pool (CSP) and returned when emptied.
 • Borrow, Keep: A cartridge is borrowed from the CSP and retained, even after being emptied.
 • No Borrow, Return: A cartridge is not borrowed from CSP, but an emptied cartridge is placed in CSP. This setting is used for an empty pool.
 • No Borrow, Keep: A cartridge is not borrowed from CSP, and an emptied cartridge is retained.
 – Reclaim Pool: The pool to which virtual volumes are assigned when reclamation occurs for the stacked volume on the selected pool.
 
Important: The reclaim pool designated for the copy export pool should be set to the same value as the copy export pool. If the reclaim pool is modified, copy export disaster recovery capabilities can be compromised.
If there is a need to modify the reclaim pool designated for the copy export pool, the reclaim pool cannot be set to the same value as the primary pool or the reclaim pool designated for the primary pool. If the reclaim pool for the copy export pool is the same as either of the other two pools mentioned, then primary and backup copies of a virtual volume might exist on the same physical media. If the reclaim pool for the copy export pool is modified, it is your responsibility to copy export volumes from the reclaim pool.
 – Maximum Devices: The maximum number of physical tape drives that the pool can use for premigration.
 – Export Pool: The type of export supported if the pool is defined as an Export Pool (the pool from which physical volumes are exported). The following are the possible values:
 • Not Defined: The pool is not defined as an Export pool.
 • Copy Export: The pool is defined as a Copy Export pool.
 – Export Format: The media format used when writing volumes for export. This function can be used when the physical library recovering the volumes supports a different media format than the physical library exporting the volumes. This field is only enabled if the value in the Export Pool field is Copy Export. The following are the possible values:
 • Default: The highest common format supported across all drives in the library. This is also the default value for the Export Format field.
 • E06: Format of a 3592 E06 Tape Drive.
 • E07: Format of a 3592 E07 Tape Drive.
 • E08: Format of a 3592 E08 Tape Drive.
 – Days Before Secure Data Erase: The number of days that a physical volume that is a candidate for Secure Data Erase can remain in the pool without access to a physical stacked volume. Each stacked physical volume possesses a timer for this purpose. This timer is reset when a virtual volume on the stacked physical volume is accessed. Secure Data Erase occurs at a later time, based on an internal schedule. Secure Data Erase renders all data on a physical stacked volume inaccessible. The valid range of possible values is 1-365. Clearing the check box deactivates this function.
 – Days Without Access: The number of days that a physical stacked volume in the pool can stay without being accessed for a recall. When a physical volume reaches that number of days without being accessed, it is set as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when a virtual volume is accessed. The reclamation occurs at a later time, based on an internal schedule. The valid range of possible values is 1-365. Clearing the check box deactivates this function.
 
Note: This control is applied to the reclamation of both sunset and R/W media.
 – Age of Last Data Written: The number of days the pool has persisted without write access to set a physical stacked volume as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when a virtual volume is accessed. The reclamation occurs at a later time, based on an internal schedule. The valid range of possible values is 1-365. Clearing the check box deactivates this function.
 
Note: This control is applied to the reclamation of both sunset and R/W media.
 – Days Without Data Inactivation: The number of sequential days that the data ratio of the pool must remain higher than the Maximum Active Data value before a physical stacked volume is set as a candidate for reclamation. Each physical stacked volume has a timer for this purpose, which is reset when data inactivation occurs. The reclamation occurs at a later time, based on an internal schedule. The valid range of possible values is 1-365. Clearing the check box deactivates this function. If deactivated, this field is not used as a criterion for reclaim.
 
Note: This control is applied to the reclamation of both sunset and R/W media.
 – Maximum Active Data: The ratio of active data to the entire capacity of a physical stacked volume. This field is used with Days Without Data Inactivation. The valid range of possible values is 5 - 95%. This function is disabled if Days Without Data Inactivation is not selected.
 – Reclaim Threshold: The percentage used to determine when to perform reclamation of free storage on a stacked volume. When the amount of active data on a physical stacked volume drops below this percentage, a reclaim operation is performed on the stacked volume.
Physical volumes hold between the threshold value and 100% of active data. For example, if the threshold value is 35% (the default), the average percentage of active data on the physical volumes is (35% + 100%)/2, or 67.5%. Setting the threshold too low results in more physical volumes being needed. Setting the threshold too high might affect the ability of the TS7700 Tape Attach to perform host workload because it is using its resources to perform reclamation. Experiment to find a threshold that matches your needs; a short sketch after these steps illustrates the arithmetic.
The valid range of possible values is 0 - 95% and can be entered in 1% increments. The default value is 35%. If the system is in a heterogeneous tape drive environment, this threshold applies to R/W media.
 – Sunset Media Reclaim Threshold: This field is always available, but it affects only sunset media. The percentage that is used to determine when to perform reclamation of free storage on a stacked volume. When the amount of active data on a physical stacked volume drops below this percentage, a reclaim operation is performed on the stacked volume. The same sizing considerations apply as for the Reclaim Threshold described previously.
The valid range of possible values is 0 - 95% and can be entered in 1% increments. The default value is 35%. If the system is in a heterogeneous tape drive environment, this threshold applies to sunset media.
 
Note: If the system contains TS1140 or TS1150 tape drives, the system requires at least 15 scratch physical volumes to run reclamation for sunset media.
6. To complete the operation, click OK. To abandon the operation and return to the Physical Volume Pools window, click Cancel.
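The interplay of the reclaim controls above can be easier to see in code. The following minimal Python sketch is purely illustrative; the names are not TS7700 identifiers, and the TS7700 evaluates these policies internally in its own microcode. It shows how a stacked volume might be judged a reclamation candidate under the threshold and timer criteria, together with the average-active-data arithmetic for the default 35% threshold (the Sunset Media Reclaim Threshold and Days Without Data Inactivation criteria follow the same pattern):

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class PoolPolicy:
    # Illustrative pool settings; these names are not TS7700 identifiers.
    reclaim_threshold_pct: float = 35.0        # MI default
    days_without_access: Optional[int] = None  # None = check box cleared
    age_of_last_data_written: Optional[int] = None

def is_reclaim_candidate(active_data_pct: float, last_access: datetime,
                         last_write: datetime, policy: PoolPolicy,
                         now: datetime) -> bool:
    # Any enabled criterion is sufficient to mark the volume for reclaim.
    if active_data_pct < policy.reclaim_threshold_pct:
        return True
    if (policy.days_without_access is not None and
            now - last_access > timedelta(days=policy.days_without_access)):
        return True
    if (policy.age_of_last_data_written is not None and
            now - last_write > timedelta(days=policy.age_of_last_data_written)):
        return True
    return False

# Full volumes carry between the threshold and 100% active data, so with
# the default 35% threshold the expected average is (35 + 100) / 2 = 67.5%.
print("Average active data at a 35% threshold:", (35 + 100) / 2, "%")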
Modify encryption settings
Use this page to modify encryption settings for the physical volume pools that manage the physical volumes used by the IBM TS7700T cluster.
To watch a tutorial that shows how to modify pool encryption settings, click View tutorial on the Physical Volume Pools page.
To modify encryption settings for one or more physical volume pools, complete the following steps:
1. From the Physical Volume Pools page, click the Encryption Settings tab.
2. Select each pool to be modified.
3. Click Select Action → Modify Encryption Settings.
4. Click Go to open the Modify Encryption Settings window.
5. Modify values for any of the following fields:
 – Encryption: The encryption state of the pool. The following values are available:
 • Enabled: Encryption is enabled on the pool.
 • Disabled: Encryption is not enabled on the pool. When this value is selected, key modes, key labels, and check boxes are disabled.
 – Use encryption key server default key: Select this option to populate the Key Label field by using a default key provided by the encryption key server.
 
Note: Your encryption key server software must support default keys to use this option.
This option is available before the Key Label 1 and Key Label 2 fields. You must select this option for each label to be defined by using the default key. If this option is selected, the following fields are disabled:
 • Key Mode 1
 • Key Label 1
 • Key Mode 2
 • Key Label 2
 – Key Mode 1: Encryption Mode used with Key Label 1. The following values are available:
 • Clear Label: The data key is specified by the key label in clear text.
 • Hash Label: The data key is referenced by a computed value corresponding to its associated public key.
 • None: Key Label 1 is disabled.
 • -: The default key is in use.
 – Key Label 1: The current encryption key Label 1 for the pool. The label must consist of ASCII characters and cannot exceed 64 characters. Leading and trailing blanks are removed, but an internal space is allowed. Lowercase characters are internally converted to uppercase upon storage. Therefore, key labels are reported by using uppercase characters.
 
Note: You can use identical values in Key Label 1 and Key Label 2, but you must define each label for each key.
 – Key Mode 2: Encryption Mode used with Key Label 2. The following are the possible values for this field:
 • Clear Label: The data key is specified by the key label in clear text.
 • Hash Label: The data key is referenced by a computed value corresponding to its associated public key.
 • None: Key Label 2 is disabled.
 • -: The default key is in use.
 – Key Label 2: The current encryption key Label 2 for the pool. The label must consist of ASCII characters and cannot exceed 64 characters. Leading and trailing blanks are removed, but an internal space is allowed. Lowercase characters are internally converted to uppercase upon storage. Therefore, key labels are reported using uppercase characters.
6. To complete the operation, click OK. To abandon the operation and return to the Physical Volume Pools window, click Cancel.
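As a quick illustration of the key label rules above, the following Python sketch normalizes a label the way the MI reports it. This is a hypothetical helper written for this discussion; it is not part of any TS7700 or key server API:

def normalize_key_label(label: str) -> str:
    # Strip leading/trailing blanks, keep internal spaces, uppercase,
    # and enforce the 64-character ASCII limit (illustrative check only).
    cleaned = label.strip().upper()
    if len(cleaned) > 64 or not cleaned.isascii():
        raise ValueError("key label must be ASCII and at most 64 characters")
    return cleaned

print(normalize_key_label("  my backup key  "))  # -> "MY BACKUP KEY"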
Physical volumes
The topics in this section present information that is related to monitoring and manipulating physical volumes in the TS7700T and TS7740. This window is visible but disabled on the TS7700 MI if the grid possesses a physical library, but the selected cluster does not.
The following message is displayed:
The cluster is not attached to a physical tape library.
 
Tip: This window is not visible on the TS7700 MI if the grid does not possess a physical library.
TS7700 MI windows that are under the Physical icon can help you view or change settings or actions that are related to the physical volumes and pools, physical drives, media inventory, TVC, and a physical library.
Figure 9-59 shows the navigation and the Physical Volumes window. The Physical Volumes page is available for all TS7700 tape-attached models.
Figure 9-59 Physical Volumes navigation and options
The following options are available under Physical Volumes:
Physical Volume Details window
Use this window to obtain detailed information about a physical stacked volume in the TS7740 and TS7700T clusters. You can download the list of virtual volumes in the physical stacked volume being displayed by clicking Download List of Virtual Volumes under the table.
The following information is displayed when details for a physical stacked volume are retrieved:
VOLSER: The six-character volume serial number of the physical stacked volume.
Type: The media type of the physical stacked volume. The following values are possible:
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
 
Note: JD (ATDD) and JL (ATLE) media types are only available if the highest common format (HCF) is set to E08 or later.
Recording Format: The format that is used to write the media. The following values are possible:
 – Undefined: The recording format that is used by the volume is not recognized as a supported format.
 – J1A
 – E05
 – E05E: E05 with encryption.
 – E06
 – E06E: E06 with encryption.
 – E07
 – E07E: E07 with encryption.
 – E08
 – E08E: E08 with encryption.
Volume State: The following values are possible:
 – Read-Only: The volume is in a read-only state.
 – Read/Write: The volume is in a read/write state.
 – Unavailable: The volume is in use by another task or is in a pending eject state.
 – Destroyed: The volume is damaged and unusable for mounting.
 – Copy Export Pending: The volume is in a pool that is being exported as part of an in-progress Copy Export.
 – Copy Exported: The volume was ejected from the library and removed to offsite storage.
 – Copy Export Reclaim: The host can send a Host Console Query request to reclaim a physical volume that is currently marked Copy Exported. The data mover then reclaims the virtual volumes from the primary copies.
 – Copy Export No Files Good: The physical volume was ejected from the library and removed to offsite storage. The virtual volumes on that physical volume are obsolete.
 – Misplaced: The library cannot locate the specified volume.
 – Inaccessible: The volume exists in the library inventory but is in a location that the cartridge accessor cannot access.
 – Manually Ejected: The volume was previously present in the library inventory, but cannot currently be located.
Capacity State: Possible values are Empty, Filling, and Full.
Key Label 1/Key Label 2: The EK label that is associated with a physical volume. Up to two key labels can be present. If no labels are present, the volume is not encrypted. If the EK used is the default key, the value in this field is default key.
Encrypted Time: The date when the physical volume was first encrypted by using the new EK. If the volume is not encrypted, the value in this field is “-”.
Home Pool: The pool number to which the physical volume was assigned when it was inserted into the library, or the pool to which it was moved through the library manager Move/Eject Stacked Volumes function.
Current Pool: The storage pool in which the physical volume currently resides.
Mount Count: The number of times that the physical volume was mounted since being inserted into the library.
Virtual Volumes Contained: The number of virtual volumes that are contained on this physical stacked volume.
Pending Actions: Whether a move or eject operation is pending. The following values are possible:
 – Pending Eject
 – Pending Priority Eject
 – Pending Deferred Eject
 – Pending Move to Pool # (where # represents the destination pool)
 – Pending Priority Move to Pool # (where # represents the destination pool)
 – Pending Deferred Move to Pool # (where # represents the destination pool)
Copy Export Recovery: Whether the database backup name is valid and can be used for recovery. Possible values are Yes and No.
Database Backup: The time stamp portion of the database backup name.
Move Physical Volumes window
To move a range or quantity of physical volumes that are used by the TS7720T or TS7740 to a target pool, or to cancel a previous move request, use this option.
The Select Move Action menu provides options for moving physical volumes to a target pool. The following options are available to move physical volumes to a target pool:
Move Range of Physical Volumes: Moves the physical volumes in the specified range to the target pool. This option requires you to select a Volume Range, Target Pool, and Move Type. The user can also select a Media Type.
Move Range of Scratch Only Volumes: Moves the scratch volumes in the specified range to the target pool. This option requires you to select a Volume Range and Target Pool. The user can also select a Media Type.
Move Quantity of Scratch Only Volumes: Moves a specified quantity of physical volumes from the source pool to the target pool. This option requires you to select a Number of Volumes, Source Pool, and Target Pool. The user can also select a Media Type.
Move Export Hold to Private: Moves all Copy Export volumes in a source pool back to a private category if the volumes are in the Export/Hold category but are not selected to be ejected from the library. This option requires you to select a Source Pool.
Cancel Move Requests: Cancels any previous move request.
 
Note: This option applies only to private media, not scratch tapes.
If the user selects Move Range of Physical Volumes or Move Range of Scratch Only Volumes from the Select Move Action menu, the user must define a volume range or select an existing range, select a target pool, and identify a move type. A media type can be selected as well.
If the user selects Move Quantity of Scratch Only Volumes from the Select Move Action menu, the user must define the number of volumes to be moved, identify a source pool, and identify a target pool. A media type can be selected as well.
If the user selects Move Export Hold to Private from the Select Move Action menu, the user must identify a source pool.
The following move operation parameters are available:
Volume Range: The range of physical volumes to move. The user can use either this option or the Existing Ranges option to define the range of volumes to move, but not both. Specify the range:
 – From: VOLSER of the first physical volume in the range to move.
 – To: VOLSER of the last physical volume in the range to move.
Existing Ranges: The list of existing physical volume ranges. The user can use either this option or the Volume Range option to define the range of volumes to move, but not both.
Source Pool: The number (0 - 32) of the source pool from which physical volumes are moved. If the user is selecting a source pool for a Move Export Hold to Private operation, the range of pools that is displayed is 1 - 32.
Target Pool: The number (0 - 32) of the target pool to which physical volumes are moved.
Move Type: Determines when the move operation occurs. The following values are possible:
 – Deferred Move: The move operation occurs based on the first Reclamation policy that is triggered for the applied source pool. This operation depends on the reclaim policies for the source pool and might take some time to complete.
 – Priority Move: The move operation occurs as soon as possible. Use this option to complete the operation sooner.
 – Honor Inhibit Reclaim schedule: An option of the Priority Move type, it specifies that the move operation honors the Inhibit Reclaim schedule. If this option is selected, the move operation does not occur while Reclaim is inhibited.
Number of Volumes: The number of physical volumes to be moved.
Media Type: Specifies the media type of the physical volumes in the range to be moved. The physical volumes in the range that is specified to be moved must be of the media type that is designated by this field, or the move operation fails.
After the user defines the move operation parameters and clicks Move, the user must confirm the request to move physical volumes. If the user selects Cancel, the user is returned to the Move Physical Volumes window. To cancel a previous move request, select Select Move Action → Cancel Move Requests. The following options are available to cancel a move request:
Cancel All Moves: Cancels all move requests.
Cancel Priority Moves Only: Cancels only priority move requests.
Cancel Deferred Moves Only: Cancels only deferred move requests.
Select a Pool: Cancels move requests from the designated source pool (0 - 32), or from all source pools.
Eject Physical Volumes window
To eject a range or quantity of physical volumes that are used by the TS7720T or TS7740, or to cancel a previous eject request, use this page.
The Select Eject Action menu provides options for ejecting physical volumes.
 
Note: Before a stacked volume with active virtual volumes can be ejected, all active virtual volumes on it are copied to a different stacked volume.
The following options are available to eject physical volumes:
Eject Range of Physical Volumes: Ejects the physical volumes in the specified range. This option requires you to select a volume range and an eject type. A media type can be selected as well.
Eject Range of Scratch Only Volumes: Ejects the scratch volumes in the specified range. This option requires you to select a volume range. A media type can be selected as well.
Eject Quantity of Scratch Only Volumes: Ejects a specified quantity of physical volumes. This option requires you to specify the number of volumes and a source pool. A media type can be selected as well.
Eject Export Hold Volumes: Ejects a subset of the volumes in the Export/Hold category.
Eject Empty Unsupported Media: Ejects physical volumes on unsupported media after the existing read-only data is migrated to new media.
Cancel Eject Requests: Cancels any previous eject request.
If the user selects Eject Range of Physical Volumes or Eject Range of Scratch Only Volumes from the Select Eject Action menu, the user must define a volume range or select an existing range and identify an eject type. A media type can be selected as well.
If the user selects Eject Quantity of Scratch Only Volumes from the Select Eject Action menu, the user must define the number of volumes to be ejected and identify a source pool. A media type can be selected as well.
If the user selects Eject Export Hold Volumes from the Select Eject Action menu, the user must select the VOLSERs of the volumes to be ejected. To select all VOLSERs in the Export Hold category, select Select All from the menu. The eject operation parameters include these parameters:
Volume Range: The range of physical volumes to eject. The user can use either this option or the Existing Ranges option to define the range of volumes to eject, but not both. Define the range:
 – From: VOLSER of the first physical volume in the range to eject.
 – To: VOLSER of the last physical volume in the range to eject.
Existing Ranges: The list of existing physical volume ranges. The user can use either this option or the Volume Range option to define the range of volumes to eject, but not both.
Eject Type: Determines when the eject operation occurs. The following values are possible:
 – Deferred Eject: The eject operation occurs based on the first Reclamation policy that is triggered for the applied source pool. This operation depends on the reclaim policies for the source pool and can take some time to complete.
 – Priority Eject: The eject operation occurs as soon as possible. Use this option to complete the operation sooner.
 • Honor Inhibit Reclaim schedule: An option of the Priority Eject type, it specifies that the eject operation honors the Inhibit Reclaim schedule. If this option is selected, the eject operation does not occur while Reclaim is inhibited.
Number of Volumes: The number of physical volumes to be ejected.
Source Pool: The number (0 - 32) of the source pool from which physical volumes are ejected.
Media Type: Specifies the media type of the physical volumes in the range to be ejected. The physical volumes in the range that is specified to be ejected must be of the media type that is designated by this field, or the eject operation fails.
After the user defines the eject operation parameters and clicks Eject, the user must confirm the request to eject physical volumes. If the user selects Cancel, the user returns to the Eject Physical Volumes window.
To cancel a previous eject request, select Select Eject Action → Cancel Eject Requests. The following options are available to cancel an eject request:
Cancel All Ejects: Cancels all eject requests.
Cancel Priority Ejects Only: Cancels only priority eject requests.
Cancel Deferred Ejects Only: Cancels only deferred eject requests.
Physical Volume Ranges window
To view physical volume ranges or unassigned physical volumes in a library that is attached to a TS7700T cluster, use this window. Figure 9-59 on page 438 summarizes the options that are available on this page.
If volumes that were recently added to the attached TS3500 tape library do not appear in the Physical Volume Ranges window, click Inventory Upload. This action requests that the physical inventory from the defined logical library in the tape library be uploaded to the TS7700T, repopulating the Physical Volume Ranges window.
 
 
Important: When a VOLSER that belongs to a defined tape-attach TS7700 range is inserted, it is presented and inventoried according to the setup in place. If the newly inserted VOLSER does not belong to any defined range in the TS7700T, an intervention-required message is generated, and the user must correct the assignment for this VOLSER.
If a physical volume range contains virtual volumes with active data, those virtual volumes must be moved or deleted before the physical volume range can be moved or deleted.
The following information is displayed in the Physical Volume Ranges table:
Start VOLSER: The first VOLSER in a defined range.
End VOLSER: The last VOLSER in a defined range.
Media Type: The media type for all volumes in a VOLSER range. The following values are possible:
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
 
Note: JA and JJ media are supported only for read-only operations with 3592 E07 tape drives. 3592-E08 does not support JA, JJ, or JB media.
Home Pool: The home pool to which the VOLSER range is assigned.
Use the menu on the Physical Volume Ranges table to add a VOLSER range, or to modify or delete a predefined range.
Unassigned Volumes: The Unassigned Volumes table displays the list of unassigned physical volumes that are pending ejection for a cluster. A VOLSER is removed from this table when a new range that contains the VOLSER is added. The following status information is displayed in the Unassigned Volumes table:
VOLSER: The VOLSER that is associated with a given physical volume.
Media Type: The media type for all volumes in a VOLSER range. The following values are possible:
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
 
Note: JA and JJ media are supported only for read-only operations with 3592 E07 tape drives. 3592-E08 does not support JA, JJ, or JB media.
Pending Eject: Whether the physical volume that is associated with the VOLSER is awaiting ejection.
Use the Unassigned Volumes table to eject one or more physical volumes from a library that is attached to a TS7720T or TS7740.
Physical Volume Search window
To search for physical volumes in a TS7720T or TS7740 cluster according to one or more identifying features, use this window. Figure 9-59 on page 438 summarizes the options that are available on this page. Click the Previous Searches hyperlink to view the results of a previous query on the Previous Physical Volumes Search window.
The following information can be seen and requested on the Physical Volume Search window:
New Search Name: Use this field to create a new search query:
 – Enter a name for the new query in the New Search Name field.
 – Enter values for any of the search parameters that are defined in the Search Options table.
Search Options: Use this table to define the parameters for a new search query. Click the down arrow next to Search Options to open the Search Options table.
 
Note: Only one search can be run at a time. If a search is in progress, an information message displays at the top of the Physical Volume Search window. The user can cancel a search in progress by clicking Cancel Search within this message.
Define one or more of the following search parameters:
VOLSER: The volume serial number. This field can be left blank. The following wildcard characters can be used in this field (a small pattern-conversion sketch follows this parameter list):
 – % (percent): Represents zero or more characters.
 – * (asterisk): Converted to % (percent). Represents zero or more characters.
 – . (period): Represents one character.
 – _ (single underscore): Converted to . (period). Represents one character.
 – ? (question mark): Converted to . (period). Represents one character.
Media Type: The type of media on which the volume exists. Use the menu to select from the available media types. This field can be left blank. The following other values are possible:
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
Recording Format: The format that is used to write the media. Use the menu to select from the available recording formats. This field can be left blank. The following other values are possible:
 – Undefined: The recording format that is used by the volume is not recognized as a supported format.
 – J1A
 – E05
 – E05E: E05 with encryption
 – E06
 – E06E: E06 with encryption
 – E07
 – E07E: E07 with encryption
 – E08
 – E08E: E08 with encryption
Capacity State: Whether any active data exists on the physical volume and the status of that data in relation to the volume’s capacity. This field can be left blank. The following other values are valid:
 – Empty: The volume contains no data and is available for use as a physical scratch volume.
 – Filling: The volume contains valid data, but is not yet full. It is available for more data.
 – Full: The volume contains valid data. At some point, it was marked as full and more data cannot be added to it. In some cases, a volume can be marked full and yet be short of the volume capacity limit.
Home Pool: The pool number (0 - 32) to which the physical volume was assigned when it was inserted into the library, or the pool to which it was moved through the library manager Move/Eject Stacked Volumes function. This field can be left blank.
Current Pool: The number of the storage pool (0 - 32) in which the physical volume currently exists. This field can be left blank.
Encryption Key: The EK label that was designated when the volume was encrypted. This is a text field. Enter a name that is identical to the first or second key label on a physical volume; any physical volume that was encrypted by using the designated key label is included in the search. Select the Search for the default key check box to search for all physical volumes that were encrypted by using the default key label.
Pending Eject: Whether to include physical volumes pending an eject in the search query. The following values are valid:
 – All Ejects: All physical volumes pending eject are included in the search.
 – Priority Ejects: Only physical volumes that are classified as priority eject are included in the search.
 – Deferred Ejects: Only physical volumes that are classified as deferred eject are included in the search.
Pending Move to Pool: Whether to include physical volumes pending a move in the search query. The following values are possible:
 – All Moves: All physical volumes pending a move are included in the search.
 – Priority Moves: Only physical volumes that are classified as priority move are included in the search.
 – Deferred Moves: Only physical volumes that are classified as deferred move are included in the search.
Use the adjacent menu to narrow any of the previous values to a specific pool set to receive physical volumes. The following values are possible:
 • All Pools: All pools are included in the search.
 • 0 - 32: The number of the pool to which the selected physical volumes are moved.
VOLSER flags: Whether to include, exclude, or ignore any of the following VOLSER flags in the search query. For each flag, select only one of Yes (include), No (exclude), or Ignore (disregard during the search). The following VOLSER flags are available:
 • Misplaced
 • Mounted
 • Inaccessible
 • Encrypted
 • Export Hold
 • Read Only Recovery
 • Unavailable
 • Pending Secure Data Erase
 • Copy Exported
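The wildcard rules for the VOLSER field above are easy to mirror on the host side, for example when pre-filtering a volume list before submitting a query. The following Python sketch is illustrative only (the MI performs this translation internally); it converts an MI-style pattern into a regular expression:

import re

def volser_pattern_to_regex(pattern: str) -> re.Pattern:
    # % and * match zero or more characters; . , _ and ? match exactly one.
    out = []
    for ch in pattern:
        if ch in "%*":
            out.append(".*")
        elif ch in "._?":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return re.compile("^" + "".join(out) + "$")

rx = volser_pattern_to_regex("A0%9")
print(bool(rx.match("A00009")))  # True: % matches zero or more characters
print(bool(rx.match("A0X9")))    # True
print(bool(rx.match("B0009")))   # False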
Search Results Options: Use this table to select the properties that are displayed on the Physical Volume Search Results window.
Click the down arrow next to Search Results Options to open the Search Results Options table. Select the check box next to each property that should display on the Physical Volume Search Results window.
Review the property definitions from the Search Options table section. The following properties can be displayed on the Physical Volume Search Results window:
Media Type
Recording Format
Home Pool
Current Pool
Pending Actions
Volume State
Mounted Tape Drive
Encryption Key Labels
Export Hold
Read Only Recovery
Copy Export Recovery
Database Backup
Click Search to initiate a new physical volume search. After the search is initiated but before it completes, the Physical Volume Search window displays the following information message:
The search is currently in progress. The user can check the progress of the search on the Previous Search Results window.
 
Note: The search-in-progress message is displayed on the Physical Volume Search window until the in-progress search completes or is canceled.
To check the progress of the search being run, click the Previous Search Results hyperlink in the information message. To cancel a search in progress, click Cancel Search. When the search completes, the results are displayed in the Physical Volume Search Results window. The query name, criteria, start time, and end time are saved along with the search results. A maximum of 10 search queries can be saved.
Active Data Distribution
To view the distribution of data on physical volumes that are marked full on a TS7700T cluster, use this window. The distribution can be used to select an appropriate reclaim threshold. The Active Data Distribution window displays the utilization percentages of physical volumes in increments of 10%.
Number of Full Volumes at Utilization Percentages window
The tables in this page show the number of physical volumes that are marked as full in each physical volume pool, according to the percentage of the volume used. The following fields are displayed:
Pool: The physical volume pool number. This number is a hyperlink; click it to display a graphical representation of the number of physical volumes per utilization increment in a pool. If the user clicks the pool number hyperlink, the Active Data Distribution subwindow opens.
This subwindow contains the following fields and information:
Pool: To view graphical information for another pool, select the target pool from this menu.
Current Reclaim Threshold: The percentage that is used to determine when to perform reclamation of free storage on a stacked volume. When the amount of active data on a physical stacked volume drops below this percentage, a reclaim operation is performed on the stacked volume. The valid range of possible values is 0 - 95% and can be selected in 5% increments; 35% is the default value.
 
Tip: This percentage is a hyperlink; click it to open the Modify Pool Properties window, where the user can modify the percentage that is used for this threshold.
Number of Volumes with Active Data: The number of physical volumes that contain active data.
Pool n Active Data Distribution: This graph displays the number of volumes that contain active data per volume utilization increment for the selected pool. On this graph, utilization increments (x axis) do not overlap.
Pool n Active Data Distribution (cumulative): This graph displays the cumulative number of volumes that contain active data per volume utilization increment for the selected pool. On this graph, utilization increments (x axis) overlap, accumulating as they increase.
The Active Data Distribution subwindow also displays utilization percentages for the selected pool, excerpted from the Number of Full Volumes at Utilization Percentages table.
Media Type: The type of cartridges that are contained in the physical volume pool. If more than one media type exists in the pool, each type is displayed, separated by commas. The following values are possible:
 – Any 3592: Any media with a 3592 format
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
Percentage of Volume Used (0+, 10+, 20+, and so on): Each of the last 10 columns in the table represents a 10% increment of total physical volume space used. For instance, the column heading 20+ represents the 20% - 29% range of a physical volume used. For each pool, the total number of physical volumes that occur in each range is listed.
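This distribution is what makes threshold selection concrete: the columns indicate how many full volumes a given reclaim threshold would immediately turn into reclamation candidates. The following Python sketch (an illustration for this discussion, not TS7700 code) buckets volume utilization the way the table does and counts the volumes that each candidate threshold would free:

from collections import Counter

def utilization_histogram(active_pcts):
    # Bucket full volumes into the MI's 10% increments (0+, 10+, ... 90+).
    buckets = Counter((int(p) // 10) * 10 for p in active_pcts)
    return {f"{b}+": buckets.get(b, 0) for b in range(0, 100, 10)}

def volumes_reclaimable(active_pcts, threshold_pct):
    # How many full volumes a given reclaim threshold would make candidates.
    return sum(1 for p in active_pcts if p < threshold_pct)

pool = [3, 8, 14, 22, 37, 41, 55, 63, 78, 91]  # hypothetical active-data %
print(utilization_histogram(pool))
for t in (10, 20, 35):
    print(f"threshold {t}%: {volumes_reclaimable(pool, t)} volumes eligible")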
Physical Tape Drives window
To view a summary of the state of all physical drives that are accessible to the TS7700T cluster, use this window.
This window is visible but disabled on the TS7700 MI if the grid possesses a physical library, but the selected cluster does not. The following message is displayed:
The cluster is not attached to a physical tape library.
 
Tip: This window is not visible on the TS7700 MI if the grid does not possess a physical library.
Figure 9-60 shows the Physical Tape Drives window.
Figure 9-60 Physical Tape Drives window
The Physical Tape Drives table displays status information for all physical drives accessible by the cluster, including the following information:
Serial Number: The serial number of the physical drive.
Drive Type: The machine type and model number of the drive. The following values are possible:
 – 3592J1A
 – 3592E05
 – 3592E05E: A 3592 E05 drive that is Encryption Capable.
 – 3592E06
 – 3592E07
 – 3592E08
Online: Whether the drive is online.
Health: The health of the physical drive. This value is obtained automatically at times that are determined by the TS7700. The following values are possible:
 – OK: The drive is fully functioning.
 – WARNING: The drive is functioning but reporting errors. Action needs to be taken to correct the errors.
 – DEGRADED: The drive is operational, but has lost some redundancy and performance.
 – FAILURE: The drive is not functioning and immediate action must be taken to correct it.
 – OFFLINE/TIMEOUT: The drive is out of service or cannot be reached within a certain time frame.
Role: The current role the drive is performing. The following values are possible:
 – IDLE: The drive is not in use.
 – MIGRATION: The drive is being used to copy a virtual volume from the TVC to the physical volume.
 – RECALL: The drive is being used to recall a virtual volume from a physical volume to the TVC.
 – RECLAIM SOURCE: The drive is being used as the source of a reclaim operation.
 – RECLAIM TARGET: The drive is being used as the target of a reclaim operation.
 – EXPORT: The drive is being used to export a volume.
 – SECURE ERASE: The drive is being used to erase expired volumes from the physical volume securely and permanently.
Mounted Physical Volume: VOLSER of the physical volume that is mounted by the drive.
Recording Format: The format in which the drive operates. The following values are possible:
 – J1A: The drive is operating with J1A data.
 – E05: The drive is operating with E05 data.
 – E05E: The drive is operating with E05E encrypted data.
 – E06: The drive is operating with E06 data.
 – E06E: The drive is operating with E06E encrypted data.
 – E07: The drive is operating with E07 data.
 – E07E: The drive is operating with E07E encrypted data.
 – E08: The drive is operating with E08 data.
 – E08E: The drive is operating with E08 encrypted data.
 – Not Available: The format is unable to be determined because there is no physical media in the drive or the media is being erased.
 – Unavailable: The format is unable to be determined because the Health and Monitoring checks have not yet completed. Refresh the current window to determine whether the format state has changed. If the Unknown state persists for 1 hour or longer, contact your IBM SSR.
Requested Physical Volume: The VOLSER of the physical volume that is requested for mount. If no physical volume is requested, this field is blank.
To view additional information for a specific drive, see the Physical Drives Details table on the Physical Tape Drives Details window:
1. Select the radio button next to the serial number of the physical drive in question.
2. Click Select Action → Details.
3. Click Go to open the Physical Tape Drives Details window.
The Physical Drives Details table displays detailed information for a specific physical tape drive:
 – Serial Number: The serial number of the physical drive.
 – Drive Type: The machine type and model number of the drive. The following values are possible:
 • 3592J1A
 • 3592E05
 • 3592E05E: A 3592 E05 drive that is Encryption Capable.
 • 3592E06
 • 3592E07
 • 3592E08
 – Online: Whether the drive is online.
 – Health: The health of the physical drive. This value is obtained automatically at times that are determined by the TS7740. The following values are possible:
 • OK: The drive is fully functioning.
 • WARNING: The drive is functioning but reporting errors. Action needs to be taken to correct the errors.
 • DEGRADED: The drive is functioning, but at lesser redundancy and performance.
 • FAILURE: The drive is not functioning and immediate action must be taken to correct it.
 • OFFLINE/TIMEOUT: The drive is out of service or cannot be reached within a certain time frame.
 – Role: The current role that the drive is performing. The following values are possible:
 • IDLE: The drive is not in use.
 • MIGRATION: The drive is being used to copy a virtual volume from the TVC to the physical volume.
 • RECALL: The drive is being used to recall a virtual volume from a physical volume to the TVC.
 • RECLAIM SOURCE: The drive is being used as the source of a reclaim operation.
 • RECLAIM TARGET: The drive is being used as the target of a reclaim operation.
 • EXPORT: The drive is being used to export a volume.
 • SECURE ERASE: The drive is being used to erase expired volumes from the physical volume securely and permanently.
 – Mounted Physical Volume: VOLSER of the physical volume mounted by the drive.
 – Recording Format: The format in which the drive operates. The following values are possible:
 • J1A: The drive is operating with J1A data.
 • E05: The drive is operating with E05 data.
 • E05E: The drive is operating with E05E encrypted data.
 • E06: The drive is operating with E06 data.
 • E06E: The drive is operating with E06E encrypted data.
 • E07: The drive is operating with E07 data.
 • E07E: The drive is operating with E07E encrypted data.
 • E08: The drive is operating with E08 data.
 • E08E: The drive is operating with E08 encrypted data.
 • Not Available: The format is unable to be determined because there is no physical media in the drive or the media is being erased.
 • Unavailable: The format is unable to be determined because the Health and Monitoring checks have not yet completed. Refresh the current window to determine whether the format state has changed. If the Unknown state persists for 1 hour or longer, contact your IBM SSR.
 – Requested Physical Volume: The VOLSER of the physical volume that is requested for mount. If no physical volume is requested, this field is blank.
 – WWNN: The worldwide node name that is used to locate the drive.
 – Frame: The frame in which the drive is located.
 – Row: The row in which the drive is located.
 – Encryption Enabled: Whether encryption is enabled on the drive.
 
Note: If you are monitoring this field while changing the encryption status of a drive, the new status is not displayed until you bring the TS7700 cluster offline and then back online.
 – Encryption Capable: Whether the drive is capable of encryption.
 – Physical Volume: VOLSER of the physical volume that is mounted by the drive.
 – Pool: The pool name of the physical volume that is mounted by the drive.
 – Virtual Volume: VOLSER of the virtual volume being processed by the drive.
4. Click Back to return to the Physical Tape Drives window.
Physical Media Inventory window
To view physical media counts for media types in storage pools in the TS7700, use this window.
This window is visible but disabled on the TS7700 MI if the grid possesses a physical library, but the selected cluster does not. The following message is displayed:
The cluster is not attached to a physical tape library.
 
Tip: This window is not visible on the TS7700 MI if the grid does not possess a physical library.
Figure 9-61 shows the Physical Media Inventory window.
Figure 9-61 Physical Media Inventory window
The following physical media counts are displayed for each media type in each storage pool:
Pool: The storage pool number.
Media Type: The media type that is defined for the pool. A storage pool can have multiple media types, and each media type is displayed separately. The following values are possible:
 – JA (ETC): Enterprise Tape Cartridge
 – JB (ETCL): Enterprise Extended-Length Tape Cartridge
 – JC (ATCD): Advanced Type C Data
 – JD (ATDD): Advanced Type D Data
 – JJ (EETC): Enterprise Economy Tape Cartridge
 – JK (ATKE): Advanced Type K Economy
 – JL (ATLE): Advanced Type L Economy
Empty: The count of physical volumes that are empty for the pool.
Filling: The count of physical volumes that are filling for the pool. This field is blank for pool 0.
Full: The count of physical volumes that are full for the pool. This field is blank for pool 0.
 
Tip: A value in the Full field is displayed as a hyperlink; click it to open the Active Data Distribution subwindow. The Active Data Distribution subwindow displays a graphical representation of the number of physical volumes per utilization increment in a pool. If no full volumes exist, the hyperlink is disabled.
Queued for Erase: The count of physical volumes that are reclaimed but need to be erased before they can become empty. This field is blank for pool 0.
ROR: The count of physical volumes in the Read Only Recovery (ROR) state that are damaged or corrupted.
Unavailable: The count of physical volumes that are in the unavailable or destroyed state.
Unsupported: The count of physical volumes of unsupported media types (for example, JA and JJ) that are present in the tape library and inserted for the TS7740 or TS7720T. Based on the drive configuration, the TS7700 cannot use one or more of the specified media types, which can result in an out-of-scratch condition.
9.3.8 Constructs icon
The topics in this section present information that is related to TS7700 storage constructs. Figure 9-62 shows the Constructs icon and the options that are available under it.
Figure 9-62 Constructs icon
Storage Groups window
Use the window that is shown in Figure 9-63 to add, modify, or delete an SG. The figure shows a TS7760C (cloud attach) example.
Figure 9-63 MI Storage Groups window with Cloud Tier
The SGs table displays all existing SGs available for a cluster.
The user can use the SGs table to create an SG, modify an existing SG, or delete an SG. Also, the user can copy selected SGs to the other clusters in this grid by using the Copy to Clusters action available in the menu.
The SGs table shows the following status information:
Name: The name of the SG. Each SG within a cluster must have a unique name. Valid characters for this field are A - Z, 0 - 9, $, @, *, #, and %.
Primary Pool: The primary pool for migration. Only validated physical primary pools can be selected. If the cluster does not possess a physical library, this column is not visible, and the MI categorizes newly created SGs by using pool 1.
Description: A description of the SG.
Use the menu in the SGs table to add an SG, or modify or delete an existing SG.
To add an SG, select Add from the menu. Complete the fields for information that will be displayed in the SGs table.
 
Consideration: If the cluster does not possess a physical library, the Primary Pool field is not available in the Add or Modify options.
To modify an existing SG, select the radio button from the Select column that appears next to the name of the SG that needs to be modified. Select Modify from the menu. Complete the fields for information that must be displayed in the SGs table.
To delete an existing SG, select the radio button from the Select column that appears next to the name of the SG to delete. Select Delete from the menu. Confirm the decision to delete an SG. If you select OK, the SG is deleted. If you select No, the request to delete is canceled.
Management Classes window
To define, modify, copy, or delete the MC that defines the TS7700 copy policy for volume redundancy, use this window (Figure 9-64). The table displays the copy policy that is in force for each component of the grid.
Figure 9-64 MI Management Classes window on a grid
The secondary copy pool column is shown only for a tape-attached TS7700 cluster. A secondary copy pool is a requirement for using the Copy Export function.
Figure 9-65 shows the MCs options, including the Time Delayed option.
Figure 9-65 Modify Management Classes options
The user can use the MCs table to create, modify, and delete MCs. The default MC can be modified, but cannot be deleted. The default MC uses dashes (--------) for the symbolic name.
The MCs table shows the following status information:
Name: The name of the MC. Valid characters for this field are A - Z, 0 - 9, $, @, *, #, and %. The first character of this field cannot be a number. This is the only field that cannot be modified after it is added.
Secondary Pool: The target pool for volume duplication. If the cluster does not possess a physical library, this column is not visible, and the MI categorizes newly created MCs by using pool 0.
Description: A description of the MC definition. The value in this field must be 1 - 70 characters in length.
Retain Copy Mode: Whether previous copy policy settings on private (non-scratch) logical volume mounts are retained.
Retain Copy mode prevents the copy modes of a logical volume from being refreshed by an accessing host device if the accessing cluster is not the same cluster that created the volume. When Retain Copy mode is enabled through the MI, previously assigned copy modes are retained and subsequent read or modify access does not refresh, update, or merge copy modes. This enables the original number of copies to be maintained.
Scratch Mount Candidate: Clusters that are listed under Scratch Mount Candidate are selected first for scratch mounts of the volumes that are associated with the MC. If no cluster is displayed, the scratch mount process selects among the available clusters at random.
To add an MC, complete the following steps:
1. Select Add from the Management Class menu that is shown in Figure 9-64 on page 457 and click Go.
2. Complete the fields for information that will be displayed in the MCs table. Up to 256 MCs can be created per TS7700 grid.
 
Remember: If the cluster does not possess a physical library, the Secondary Pool field is not available in the Add option.
You can use the Copy Action option to copy any existing MC to each cluster in the TS7700 Grid.
The following options are available in the MC:
No Copy: No volume duplication occurs if this action is selected.
RUN: Volume duplication occurs when the Rewind Unload command is received. The command returns only after the volume duplication completes successfully.
Deferred: Volume duplication occurs later, based on the internal schedule of the copy engine.
Synchronous Copy: Volume duplication is treated as host I/O and takes place before control is returned to the application that is issuing the I/O. Only two clusters in the grid can have Synchronous mode copy defined.
Time Delayed: Volume duplication occurs only after the user-specified delay time elapses (see the sketch after this list). This option is available only if all clusters in the grid are running an R3.1 or later level of code. Selecting Time Delayed mode for any cluster opens another option menu:
 – Delay Queueing Copy for [X] Hours: The number of hours by which copy queuing is delayed when Time Delayed mode is selected. Can be set to 1 - 65,535 hours.
 – Start Delay After:
 • Volume Create: The delay time is clocked from the volume creation.
 • Volume Last Accessed: The delay time is clocked from the last access. Every time a volume is accessed, the elapsed time is reset for that volume and the countdown starts again from the user-defined delay value.
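The effect of the Start Delay After setting is easiest to see with a small worked example. The following Python sketch is illustrative only; the function and parameter names are not part of any TS7700 interface. It computes the earliest time that a Time Delayed copy is queued under each reference:

from datetime import datetime, timedelta

def copy_queue_time(volume_created: datetime, last_accessed: datetime,
                    delay_hours: int, start_delay_after: str) -> datetime:
    # "create" clocks the delay from volume creation; "access" restarts
    # the countdown every time the volume is accessed.
    reference = (volume_created if start_delay_after == "create"
                 else last_accessed)
    return reference + timedelta(hours=delay_hours)

created = datetime(2024, 1, 1, 8, 0)
accessed = datetime(2024, 1, 3, 8, 0)
print(copy_queue_time(created, accessed, 24, "create"))  # 2024-01-02 08:00
print(copy_queue_time(created, accessed, 24, "access"))  # 2024-01-04 08:00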
To modify an MC, complete the following steps:
1. Select the check box from the Select column that appears in the same row as the name of the MC to modify.
The user can modify only one MC at a time.
2. Select Modify from the menu and click Go.
Of the fields that were listed in the MCs table, the user can change all of them except the MC name.
 
To delete one or more MCs, complete the following steps:
1. Select the check box from the Select column that appears in the same row as the name of the MC to delete.
2. Select multiple check boxes to delete multiple MCs.
3. Select Delete from the menu.
4. Click Go.
 
Note: The default MC cannot be deleted.
Storage Classes window
To define, modify, or delete an SC that is used by the TS7700 to automate storage management through classification of data sets and objects within a cluster, use the window that is shown in Figure 9-66. Also, you can use this window to copy an existing SC to the same cluster being accessed, or to another cluster in the grid.
You can view SCs from any TS7700 in the grid, but TVC preferences can be altered only from a tape-attached cluster. Figure 9-66 shows the window in a TS7720T model.
Figure 9-66 MI Storage Classes window on a TS7700T
The SCs table lists the defined SCs that are available to control data sets and objects within a cluster.
The Create Storage Class box is slightly different depending on the TS7700 cluster model being accessed by the MI. Figure 9-67 shows the appearance of the Create Storage Class box in different TS7700 models.
Figure 9-67 The Create Storage Class box
The default SC can be modified, but cannot be deleted. The default SC has dashes (--------) as the symbolic name.
The SCs table displays the following status information:
Name: The name of the SC. The value in this field must be 1 - 8 characters. Each SC within a cluster must have a unique name. Valid characters for this field are A - Z, 0 - 9, $, @, *, #, and %. The first character of this field cannot be a number. This is the only field that cannot be modified after it is added.
Description: An optional description of the SC. The value in this field must be 0 - 70 characters.
Partition: The name of the partition that is associated with the SC. A partition must be active before it can be selected as a value for this field. This field is displayed only if the cluster is a TS7720 that is attached to a physical library. A dash (-) indicates that the SC contains a partition that was deleted. Any volumes that are assigned to go to the deleted partition are redirected to the primary partition.
Tape Volume Cache Preference: The preference level for the SC. It determines how soon volumes are removed from cache after their copy to tape. This field is visible only if the TS7700 Cluster attaches to a physical library. If the selected cluster does not possess a physical library, volumes in that cluster’s cache display a Level 1 preference. The following values are possible:
 – Use IART: Volumes are removed according to the TS7700’s Initial Access Response Time (IART).
 – Level 0: Volumes are removed from the TVC as soon as they are copied to tape.
 – Level 1: Copied volumes remain in the TVC until more space is required, then the first volumes are removed to free space in the cache. This is the default preference level that is assigned to new preference groups.
Premigration Delay Time: The number of hours until premigration can begin for volumes in the SC, based on the volume time stamp that is designated by Premigration Delay Reference (see the timing sketch after this list). Possible values are 0 - 65535. If 0 is selected, premigration delay is disabled. This field is visible only if the TS7700 cluster attaches to a physical library.
Premigration Delay Reference: The volume operation that establishes the time stamp from which Premigration Delay Time is calculated. This field is visible only if the TS7700 cluster attaches to a physical library. The following values are possible:
 – Volume Creation: The time at which the volume was created by a scratch mount or write operation from beginning of tape.
 – Volume Last Accessed: The time at which the volume was last accessed.
Volume Copy Retention Group: The name of the group that defines the preferred auto removal policy applicable to the virtual volume. The Volume Copy Retention Group provides more options to remove data from a disk-only TS7700 or resident-only (CP0) partition in the TS7700T as the active data reaches full capacity. Volumes become candidates for removal if an appropriate number of copies exist on peer clusters and the volume copy retention time has elapsed since the volume was last accessed. Volumes in each group are removed in order based on their least recently used access times.
The volume copy retention time describes the number of hours a volume remains in cache before becoming a candidate for removal. This field is only displayed for disk-only clusters when they are part of a hybrid grid (one that combines TS7700 clusters that both do and do not attach to a physical library).
If the virtual volume is in a scratch category and is on a disk-only cluster, removal settings no longer apply to the volume, and it is a candidate for removal. In this instance, the value that is displayed for the Volume Copy Retention Group is accompanied by a warning icon:
 – Prefer Remove: Removal candidates in this group are removed before removal candidates in the Prefer Keep group.
 – Prefer Keep: Removal candidates in this group are removed after removal candidates in the Prefer Remove group.
 – Pinned: Copies of volumes in this group are never removed from the accessing cluster. The volume copy retention time does not apply to volumes in this group. Volumes in this group that are later moved to scratch become priority candidates for removal.
Volume Copy Retention Time: The minimum amount of time (in hours) that a volume remains temporarily pinned in cache (counted from the volume creation or last access time) before moving to either the Prefer Keep or Prefer Remove group. When the retention time elapses, the copy becomes a candidate for removal. Possible values include 0 - 65,536. The default is 0.
This field is only visible if the selected cluster is a TS7720 and all of the clusters in the grid operate at Licensed Internal Code level 8.7.0.xx or later. If the Volume Copy Retention Group displays a value of Pinned, this field is disabled.
Volume Copy Retention Reference: The volume operation that establishes the time stamp from which Volume Copy Retention Time is calculated. The following list describes the possible values:
 – Volume Creation: The time at which the volume was created by a scratch mount or write operation from beginning of tape.
 – Volume Last Accessed: The time at which the volume was last accessed.
This field is disabled if the Volume Copy Retention Group displays a value of Pinned.
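Both Premigration Delay Time and Volume Copy Retention Time follow the same pattern: a reference time stamp (volume creation or last access) plus a number of hours defines the earliest point at which the TS7700 acts on the volume. The following Python sketch is a conceptual model of the removal check only; the function and parameter names (such as is_removal_candidate and required_copies) are hypothetical and are not TS7700 code:
from datetime import datetime, timedelta

def is_removal_candidate(reference_time, retention_hours, copies_on_peers,
                         required_copies, pinned=False):
    # Conceptual model of the Volume Copy Retention check: a copy becomes
    # a removal candidate only when enough copies exist on peer clusters
    # and the retention time has elapsed since the reference time stamp
    # (volume creation or last access).
    if pinned:                                # Pinned copies are never removed
        return False
    if copies_on_peers < required_copies:     # Not enough peer copies yet
        return False
    return datetime.utcnow() >= reference_time + timedelta(hours=retention_hours)

# Hypothetical volume: last accessed 30 hours ago with a 24-hour retention
# time and two peer copies required and present -> candidate for removal.
last_access = datetime.utcnow() - timedelta(hours=30)
print(is_removal_candidate(last_access, 24, copies_on_peers=2, required_copies=2))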
Data Classes window
This page is used to define, modify, copy, or delete a TS7700 Data Class for volume sizes and LWORM policy assignment. Data classes are used to automate storage management through the classification of data sets (Figure 9-68). Code level R4.1.2 introduced compression optimization of the host data, allowing the user to choose between the traditional compression (performed by the FICON adapters) and one of the new compression algorithms. A new item was added to the Data Classes page to select the compression method.
Figure 9-68 MI Data Classes window
 
Important: Scratch categories and DCs work at the system level and are unique for all clusters in a grid. Therefore, if they are modified in one cluster, they are applied to all clusters in the grid.
The DC table (Figure 9-68) displays the list of DCs defined for each cluster of the grid.
The user can use the DCs table to create a DC or modify, copy, or delete an existing DC. The default DC can be modified, but cannot be deleted. The default DC has dashes (--------) as the symbolic name.
The DCs table lists the following status information:
Name
The name of the DC. Valid characters for this field are A - Z, 0 - 9, $, @, *, #, and %. The first character of this field cannot be a number. The value in this field must be 1 - 8 characters in length. This field is the only field that cannot be modified after it is added.
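This naming rule can be expressed as a simple pattern check. The following Python sketch is an illustrative validator for the rule as stated above; it is not taken from TS7700 code:
import re

# 1 - 8 characters from the allowed set; the first character must not be a digit.
DC_NAME = re.compile(r"^[A-Z$@*#%][A-Z0-9$@*#%]{0,7}$")

def is_valid_dc_name(name):
    return bool(DC_NAME.match(name))

print(is_valid_dc_name("DC25G"))   # True
print(is_valid_dc_name("9CLASS"))  # False: starts with a number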
Virtual Volume Size (MiB)
The logical volume size of the DC, which determines the maximum number of MiB for each logical volume in a defined class. One possible value is Insert Media Class, where the logical volume size is not defined (the DC is not defined by a maximum logical volume size). Other possible values are 1,000 MiB, 2,000 MiB, 4,000 MiB, 6,000 MiB, or 25,000 MiB. Regarding the 25,000 MiB logical volume size:
 – A maximum size of 25,000 MiB for logical volumes is allowed without any restriction if all clusters in a grid operate at R3.2 or higher level of Licensed Internal Code.
 – Otherwise, the 25,000 MiB size is not supported whenever one or more TS7740 clusters are present in the grid and at least one cluster in the grid operates at a Licensed Internal Code level earlier than R3.2.
 – In a grid that is formed exclusively by TS7720 clusters that are not attached to a physical library, Feature Code 0001 is required on every cluster operating at a LIC level earlier than R3.2.
 – When a grid contains a cluster that is attached to a tape library, the 25,000 MiB options are visible in the MI on the clusters that operate at code level R3.2 or later. The 25,000 MiB options are not visible in the MI on the clusters that operate at earlier code levels.
 – By default, the TS7700 grid allows up to 128 simultaneous jobs that run with 25,000 MiB virtual volumes. The number of 25 GiB jobs that are allowed in the grid can be adjusted by using a LI REQ command.
 – Depending on the grid network performance and the number of copy tasks for 25 GiB volumes, consider increasing the volume copy timeout time from the default value of 180 minutes to 240 minutes if the copies are timing out. This can be done using the TS7700 MI or using a LI REQ command.
Compression Method
The following methods are available:
 – FICON Compression: Compression that is performed by the FICON adapters.
 – LZ4 Compression: Improved compression that uses an LZ4 algorithm. This compression method is faster and uses fewer TS7700 processor cycles than the ZSTD method. On the other hand, the compression ratio is lower than with ZSTD.
 – ZSTD Compression: Improved compression that uses a Zstandard algorithm. This compression method gives a higher compression ratio than LZ4. However, the algorithm uses more processor cycles and is slower than the LZ4 method.
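To illustrate the trade-off that is described above, the following Python sketch compares the two algorithm families by using the open-source lz4 and zstandard packages (pip install lz4 zstandard). The TS7700 internal implementations and tunings are not exposed; this only demonstrates the general ratio-versus-speed relationship on sample data:
# Illustrates the general LZ4-versus-ZSTD trade-off with the open-source
# lz4 and zstandard packages. This is not the TS7700 implementation.
import time
import lz4.frame
import zstandard

data = (b"HOST RECORD " * 64 + b"\n") * 20000   # compressible sample data

for name, compress in [
    ("LZ4", lz4.frame.compress),
    ("ZSTD", zstandard.ZstdCompressor().compress),
]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = (time.perf_counter() - start) * 1000
    print("%s: ratio %.1f:1 in %.1f ms" % (name, len(data) / len(out), elapsed))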
3490 Counters Handling
The following options are available:
 – Surface EOT: Use this option to surface end of tape (EOT) when channel bytes written (before compression) reaches the maximum channel byte counter of 68 GB.
 – Wrap Supported: Set this option to allow channel bytes written to exceed maximum counter value (uncompressed channel data) and present the counter overflow unit attention, which will then collect and reset the counters.
This option might be useful to allow logical volumes to be filled to the maximum volume size (25 GB) when the data compression ratio is better than 2.72:1 (68 GB of channel data divided by 25 GB of volume capacity), which can be expected with the new compression methods introduced with R4.1.2.
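The following Python sketch is a simplified model of the two counter-handling options; the real counter semantics and unit-attention mechanics are device specific, so treat this only as an illustration of the decision described above:
MAX_CHANNEL_BYTES = 68 * 10**9   # maximum channel byte counter (68 GB)

def handle_counter(channel_bytes_written, mode):
    # Simplified model of the two 3490 counter-handling options.
    if channel_bytes_written < MAX_CHANNEL_BYTES:
        return "continue writing"
    if mode == "surface_eot":
        # EOT is surfaced even if the 25 GB volume itself is not yet full.
        return "surface EOT to the host"
    if mode == "wrap_supported":
        # Present the counter overflow unit attention, then collect and
        # reset the counters so that writing can continue.
        return "overflow unit attention; counters reset, writing continues"

# With compression better than 68/25 = 2.72:1, the channel byte counter
# reaches its limit before the volume is full.
print(handle_counter(68 * 10**9, "surface_eot"))
print(handle_counter(68 * 10**9, "wrap_supported"))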
Description
A description of the DC definition. The value in this field must be 0 - 70 characters.
Logical WORM
Whether LWORM is set for the DC. LWORM is the virtual equivalent of WORM tape media, achieved through software emulation. This setting is available only when all clusters in a grid operate at R1.6 and later. The following values are valid:
 – Yes: LWORM is set for the DC. Volumes belonging to the DC are defined as LWORM.
 – No: LWORM is not set. Volumes belonging to the DC are not defined as LWORM. This is the default value for a new DC.
Use the menu in the DCs table to add a DC, or modify or delete an existing DC.
 
Tip: The user can create up to 256 DCs per TS7700 grid.
9.3.9 The Access icon
The topics in this section present information that is related to managing user access in a TS7700 subsystem. Enhancements to user access management have been introduced gradually over successive releases. To continue this improvement, the content has been rearranged into separate pages to provide easier access to the features of the Security Settings page and to the new functions that will be incorporated.
 
The user can access the following options through the User Access (blue man icon) link, as depicted in Figure 9-69.
Figure 9-69 The Access Icon and options
The TS7700 management interface (MI) pages collected under the Access icon help the user to view or change security settings, roles and permissions, passwords, and certifications. The user can also update the Knowledge Center (Infocenter) files from this menu.
Security Settings: Use this window to view security settings for a TS7700 grid. From this window, the user can also access other pages to add, modify, assign, test, and delete security settings.
Roles and Permissions : Use this window to set and control user roles and permissions for a TS7700 grid.
SSL Certificates: Use this window to view, import, or delete Secure Sockets Layer (SSL) certificates to support connection to a Storage Authentication Service server from a TS7700 cluster.
InfoCenter Settings: Use this window to upload a new TS7700 IBM Knowledge Center to the cluster’s MI.
 
Note: Although the term InfoCenter is still used in the interface of the product, the term IBM Knowledge Center is the current correct term.
Security Settings window
Figure 9-70 shows the Security Settings window, which is the entry point to enabling security policies.
Figure 9-70 TS7700 Security Settings
This page allows you to change the following settings:
Session Timeout: This setting is defined from the Security Settings window. The user can specify the number of hours and minutes that the management interface can be idle before the current session expires and the user is redirected to the login page.
To modify the maximum idle time, select values from the Hours and Minutes menus and click Submit Changes. The following parameters are valid for Hours and Minutes:
 – Hours: The number of hours the MI can be idle before the current session expires. Possible values for this field are 00 - 23.
 – Minutes: The number of minutes the MI can be idle before the current session expires. Possible values for this field are 00 - 55, selected in 5-minute increments.
SSL settings: Use this section to set the SSL/TLS level. There are only two choices: TLS 1.0 (transition) and TLS 1.2 (strict). The default setting is TLS 1.0.
If the browser being used to access TS7700 MI does not support TLS 1.2 and HTTPS-only is enabled, then a warning message is displayed. The message states that access to MI might be lost if you proceed.
HTTP Settings: HTTP enablement can be changed in this section. The HTTP for web setting can be Enabled or Disabled from here. If the HTTP setting changes, the TS7700 MI restarts. All users that are logged on at the time lose their connection to the Management Interface and must log in again.
Usage Reporting: Use this section to enable or disable usage reporting. Usage reporting is enabled or disabled for all users on both the client and server sides. By default, usage reporting is disabled.
Authentication Policies
The user can add, modify, assign, test, and delete the authentication policies that determine how users are authenticated to the TS7700 Management Interface. Each cluster is assigned a single authentication policy. The user must be authorized to modify security settings before changing authentication policies.
There are two categories of authentication policies: Local, which replicates users and their assigned roles across a grid, and External, which stores user and group data on a separate server and maps relationships between users, groups, and authorization roles when a user logs in to a cluster. External policies include Storage Authentication Service policies and Direct LDAP (Lightweight Directory Access Protocol) policies.
 
Note: A restore of cluster settings (from a previously taken backup) will not restore or otherwise modify any user, role, or password settings defined by a security policy.
Policies can be assigned on a per cluster basis. One cluster can employ local authentication, while a different cluster within the same grid domain can employ an external policy. Additionally, each cluster in a grid can operate its own external policy. However, only one policy can be enabled on a cluster at a time.
The Authentication Policies table lists the following information:
Policy Name: The name of the policy that defines the authentication settings. The policy name is a unique value that is composed of 1 - 50 Unicode characters. Leading and trailing blank spaces are trimmed, although internal blank spaces are retained. After a new authentication policy is created, its policy name cannot be modified.
 
Tip: The Local Policy name is Local and cannot be modified.
Type: The policy type, which can be one of the following values:
 – Local: A policy that replicates authorization based on user accounts and assigned roles. It is the default authentication policy. When enabled, it is enforced for all clusters in the grid. If Storage Authentication Service is enabled, the Local policy is disabled. This policy can be modified to add, change, or delete individual accounts, but the policy itself cannot be deleted.
 – External: Policies that map user, group, and role relationships upon user login. External policies can be modified. However, they cannot be deleted if in use on any cluster. External policies include the following:
 • Storage Authentication Service: A centrally managed, role-based access control (RBAC) policy that authenticates and authorizes users using the System Storage Productivity Center to authenticate users to an LDAP server.
 • LDAP: An RBAC policy that authenticates and authorizes users through direct communication with an LDAP server.
Clusters: The clusters for which the authentication policy is in force. Cluster names are only displayed for policies that are enabled and assigned. Only one policy can be assigned to a cluster at a time.
Allow IBM Support: The type of access granted to IBM service representatives for service support. This access is most often used to reset the cluster authentication policy to Local during an LDAP authentication issue.
 
Note: If an IBM service representative resets a cluster authentication policy to Local, the Local authentication policy is enabled on all clusters in the grid, regardless of previous LDAP policy setting. Previously enabled LDAP policies are disabled and should be re-enabled following resolution of any LDAP authentication issue.
The following are possible values, among others:
 • Physical: IBM service representatives can log in physically without LDAP credentials to connect to the cluster. At least one IBM representative must have physical access to the cluster. An onsite IBM representative can grant temporary remote access to an off-site IBM representative. This is the recommended option.
 • Remote: IBM service representatives can log in remotely without LDAP credentials to connect to the cluster.
 
Important: If this field is blank for an enabled policy, then IBM service representatives must log in by using LDAP login credentials that are obtained from the system administrator. If the LDAP server is inaccessible, IBM service representatives cannot access the cluster.
Adding a user to the Local Authentication Policy
A Local Authentication Policy replicates authorization based on user accounts and assigned roles. It is the default authentication policy. This section looks at the various windows that are required to manage the Local Authentication Policy.
To add a user to the Local Authentication Policy for a TS7700 Grid, complete the following steps:
1. On the TS7700 MI, click Access → Security Settings from the left navigation window.
2. Click Select next to the Local policy name on the Authentication Policies table.
3. Select Modify from the Select Action menu and click Go.
4. On the Local Accounts table, select Add from the Select Action menu and click Go.
5. In the Add User window, enter values for the following required fields:
 – User name: The new user’s login name. This value must be 1 - 128 characters and composed of Unicode characters. Spaces and tabs are not allowed.
 – Role: The role that is assigned to the user account. The role can be a predefined role or a user-defined role. The following values are possible:
 • Operator: The operator has access to monitoring information, but is restricted from changing settings for performance, network configuration, feature licenses, user accounts, and custom roles. The operator is also restricted from inserting and deleting logical volumes.
 • Lead Operator: The lead operator has access to monitoring information and can perform actions for a volume operation. The lead operator has nearly identical permissions to the administrator, but cannot change network configuration, feature licenses, user accounts, or custom roles.
 • Administrator: The administrator has the highest level of authority, and can view all windows and perform any action, including the addition and removal of user accounts. The administrator has access to all service functions and TS7700 resources.
 • Manager: The manager has access to monitoring information, performance data, and functions, and can perform actions for users, including adding, modifying, and deleting user accounts. The manager is restricted from changing most other settings, including those for logical volume management, network configuration, feature licenses, and custom roles.
 • Custom roles: The administrator can name and define two custom roles by selecting the individual tasks that are permitted to each custom role. Tasks can be assigned to a custom role in the Roles and assigned permissions table in the Roles & Permissions Properties window.
 – Cluster Access: The clusters to which the user has access. A user can have access to multiple clusters.
6. To complete the operation, click OK. To abandon the operation and return to the Modify Local Accounts window, click Cancel.
Modifying the user or group of the Local Authentication Policy
To modify a user or group property for a TS7700 grid, use this window.
 
Tip: Passwords for the users are changed from this window also.
To modify a user account belonging to the Local Authentication Policy, complete these steps:
1. On the TS7700 MI, click Access (blue man icon) → Security Settings from the left navigation window.
2. Click Select next to the Local policy name on the Authentication Policies table.
3. Select Modify from the Select Action menu and click Go.
4. On the Local Accounts table, click Select next to the user name of the policy to modify.
5. Select Modify from the Select Action menu and click Go.
6. Modify the values for any of the following fields:
 – Role: The role that is assigned to the user account. The following values are possible:
 • Operator: The operator has access to monitoring information, but is restricted from changing settings for performance, network configuration, feature licenses, user accounts, and custom roles. The operator is also restricted from inserting and deleting logical volumes.
 • Lead Operator: The lead operator has access to monitoring information and can perform actions for volume operation. The lead operator has nearly identical permissions to the administrator, but cannot change network configuration, feature licenses, user accounts, and custom roles.
 • Administrator: The administrator has the highest level of authority, and can view all windows and perform any action, including the addition and removal of user accounts. The administrator has access to all service functions and TS7700 resources.
 • Manager: The manager has access to monitoring information and performance data and functions, and can perform actions for users, including adding, modifying, and deleting user accounts. The manager is restricted from changing most other settings, including those for logical volume management, network configuration, feature licenses, and custom roles.
 • Custom roles: The administrator can name and define two custom roles by selecting the individual tasks that are permitted for each custom role. Tasks can be assigned to a custom role in the Roles and assigned permissions table from the Roles & Permissions Properties window.
 – Cluster Access: The clusters to which the user has access. A user can have access to multiple clusters.
7. To complete the operation, click OK. To abandon the operation and return to the Modify Local Accounts window, click Cancel.
 
Note: The user cannot modify the user name or Group Name. Only the role and the clusters to which it is applied can be modified.
In the Cluster Access table, select the Select check box to toggle all the cluster check boxes on and off.
Adding a Storage Authentication Service policy
A Storage Authentication Service Policy maps user, group, and role relationships upon user login with the assistance of a System Storage Productivity Center (SSPC). This section highlights the various windows that are required to manage the Storage Authentication Service Policy.
 
Important: When a Storage Authentication Service policy is enabled for a cluster, service personnel are required to log in with the setup user or group. Before enabling storage authentication, create an account that can be used by service personnel.
To add a Storage Authentication Service Policy for a TS7700 Grid, complete the following steps:
1. On the TS7700 MI, click Access (blue man icon) → Security Settings from the left navigation window.
2. On the Authentication Policies table, select Add Storage Authentication Service Policy from the Select Action menu.
3. Click Go to open the Add Storage Authentication Service Policy window. The following fields are available for completion:
a. Policy Name: The name of the policy that defines the authentication settings. The policy name is a unique value that is composed of 1 - 50 Unicode characters. Heading and trailing blank spaces are trimmed, although internal blank spaces are retained. After a new authentication policy is created, its policy name cannot be modified.
b. Primary Server URL: The primary URL for the Storage Authentication Service. The value in this field consists of 1 - 256 Unicode characters and takes the following format:
https://<server_IP_address>:secure_port/TokenService/services/Trust
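For example, with a hypothetical server at address 9.11.22.33 that uses secure port 9443, the URL is:
https://9.11.22.33:9443/TokenService/services/Trust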
c. Alternative Server URL: The alternative URL for the Storage Authentication Service if the primary URL cannot be accessed. The value in this field consists of 1 - 256 Unicode characters and takes the following format:
https://<server_IP_address>:secure_port/TokenService/services/Trust
 
Remember: If the Primary or Alternative Server URL uses the HTTPS protocol, a certificate for that address must be defined on the SSL Certificates window.
d. Server Authentication: Values in the following fields are required if IBM WebSphere Application Server security is enabled on the WebSphere Application Server that is hosting the Authentication Service. If WebSphere Application Server security is disabled, the following fields are optional:
 • User ID: The user name that is used with HTTP basic authentication for authenticating to the Storage Authentication Service.
 • Password: The password that is used with HTTP basic authentication for authenticating to the Storage Authentication Service.
4. To complete the operation, click OK. To abandon the operation and return to the Security Settings window, click Cancel.
 
Note: Generally, select the Allow IBM Support options to grant the IBM service representative access to the TS7700.
Click OK to confirm the creation of the Storage Authentication Policy. In the Authentication Policies table, no clusters are assigned to the newly created policy, so the Local Authentication Policy is enforced. When the newly created policy is in this state, it can be deleted because it is not applied to any of the clusters.
Adding a user to a Storage Authentication Policy
To add a user to a Storage Authentication Service Policy for a TS7700 Grid, complete the following steps:
1. On the TS7700 MI, click Access → Security Settings from the left navigation window.
2. Select the policy to be modified.
3. In the Authentication Policies table, select Modify from the Select Action menu.
4. Click Go to open the Modify Storage Authentication Service Policy window.
5. In the Modify Storage Authentication Service Policy page, go to the Storage Authentication Service Users/Groups table at the bottom.
6. Select Add User from the Select Action menu.
7. Click Go to open the Add External Policy User window.
8. In the Add External Policy User window, enter values for the following required fields:
 – User name: The new user’s login name. This value must be 1 - 128 characters in length and composed of Unicode characters. Spaces and tabs are not allowed.
 – Role: The role that is assigned to the user account. The role can be a predefined role or a user-defined role. The following values are valid:
 • Operator: The operator has access to monitoring information, but is restricted from changing settings for performance, network configuration, feature licenses, user accounts, and custom roles. The operator is also restricted from inserting and deleting logical volumes.
 • Lead Operator: The lead operator has access to monitoring information and can perform actions for a volume operation. The lead operator has nearly identical permissions as the administrator, but cannot change network configuration, feature licenses, user accounts, and custom roles.
 • Administrator: The administrator has the highest level of authority, and can view all windows and perform any action, including the addition and removal of user accounts. The administrator has access to all service functions and TS7700 resources.
 • Manager: The manager has access to monitoring information and performance data and functions, and can perform actions for users, including adding, modifying, and deleting user accounts. The manager is restricted from changing most other settings, including those for logical volume management, network configuration, feature licenses, and custom roles.
 • Custom roles: The administrator can name and define two custom roles by selecting the individual tasks that are permitted for each custom role. Tasks can be assigned to a custom role in the Roles and assigned permissions table in the Roles & Permissions Properties window.
 – Cluster Access: The clusters (can be multiple) to which the user has access.
9. To complete the operation, click OK. To abandon the operation and return to the Modify Local Accounts window, click Cancel.
10. Click OK after the fields are complete.
Assigning clusters to a Storage Authentication Policy
Clusters participating in a multi-cluster grid can have unique Storage Authentication policies active. To assign an authentication policy to one or more clusters, you must have authorization to modify authentication privileges under the new policy. To verify that you have sufficient privileges under the new policy, you must enter a user name and password that are recognized by the new authentication policy.
To assign clusters to a Storage Authentication Service Policy for a TS7700 grid, complete the following steps:
1. On the TS7700 MI, click Access → Security Settings from the left navigation window.
2. In the Authentication Policies table, select Assign from the Select Action menu.
3. Click Go to open the Assign Authentication Policy window.
4. To apply the authentication policy to a cluster, select the check box next to the cluster’s name.
Enter values for the following fields:
 – User name: User name for the TS7700 MI.
 – Password: Password for this TS7700 MI user.
5. To complete the operation, click OK. To abandon the operation and return to the Security Settings window, click Cancel.
Deleting a Storage Authentication Policy
The user can delete a Storage Authentication Service policy if it is not in effect on any cluster. The Local policy cannot be deleted. Make sure no clusters are assigned to the policy, so it can be deleted. If clusters are assigned to the policy, use Modify from the Select Action menu to remove the assigned clusters.
To delete a Storage Authentication Service Policy from a TS7700 grid, complete the following steps:
1. On the TS7700 MI, click Access → Security Settings from the left navigation window.
2. From the Security Settings window, go to the Authentication Policies table and complete the following steps:
a. Select the radio button next to the policy that must be deleted.
b. Select Delete from the Select Action menu.
c. Click Go to open the Confirm Delete Storage Authentication Service policy window.
d. Click OK to delete the policy and return to the Security Settings window, or click Cancel to abandon the delete operation and return to the Security Settings window.
3. Confirm the policy deletion: Click OK to delete the policy.
Testing an Authentication Policy
Before a new Authentication Policy can be used, it must be tested. The test validates the login credentials (user ID and password) in all clusters for which this user ID and role are authorized. Also, access to the external resources that are needed by an external authentication policy, such as an SSPC or an LDAP server, is tested. For an external policy, the credentials that are entered in the test window (user ID and password) are authenticated and validated by the LDAP server.
 
Tip: The policy needs to be configured to an LDAP server before being added in the TS7700 MI. External users and groups to be mapped by the new policy are checked in LDAP before being added.
To test the security settings for the TS7700 grid, complete the following steps. Use these steps to test the roles that are assigned to the user name by an existing policy.
1. From the Security Settings window, go to the Authentication Policies table:
a. Select the radio button next to the policy to test.
b. Select Test from the Select Action menu.
c. Click Go to open the Test Authentication Policy window.
2. Check the check box next to the name of each cluster on which to conduct the policy test.
3. Enter values for the following fields:
 – User name: The user name for the TS7700 MI. This value consists of 1 - 16 Unicode characters.
 – Password: The password for the TS7700 MI. This value consists of 1 - 16 Unicode characters.
 
Note: If the user name entered belongs to a user who is not included in the policy, the test results show success, but the result comments show a null value for the role and access fields. Additionally, the user name that was entered cannot be used to log in to the MI.
4. Click OK to complete the operation. If you must abandon the operation, click Cancel to return to the Security Settings window.
When the authentication policy test completes, the Test Authentication Policy results window opens to display results for each selected cluster. The results include a statement indicating whether the test succeeded or failed, and if it failed, the reason for the failure. The Test Authentication Policy results window also displays the Policy Users table. Information that is shown on that table includes the following fields:
Username: The name of a user who is authorized by the selected authentication policy.
Role: The role that is assigned to the user under the selected authentication policy.
Cluster Access: A list of all the clusters in the grid for which the user and user role are authorized by the selected authentication policy.
To return to the Test Authentication Policy window, click Close Window. To return to the Security Settings window, click Back at the top of the Test Authentication Policy results window.
Adding a Direct LDAP policy
A Direct LDAP Policy is an external policy that maps user, group, and role relationships. Users are authenticated and authorized through a direct communication with an LDAP server. This section highlights the various windows that are required to manage a Direct LDAP policy.
 
Important: When a Direct LDAP policy is enabled for a cluster, service personnel are required to log in with the setup user or group. Before enabling LDAP authentication, create an account that can be used by service personnel. Also, the user can enable an IBM SSR to connect to the TS7700 through physical access or remotely by selecting those options in the Direct LDAP Policy window.
To add a Direct LDAP Policy for a TS7700 grid, complete the following steps:
1. On the TS7700 MI, click Access → Security Settings from the left navigation window.
2. From the menu, select Add Direct LDAP Policy and click GO.
3. Mark the check boxes to grant a local or remote access to the TS7700 to the IBM SSR so that the SSR can perform service support.
 
Note: LDAP external authentication policies are not available for backup or recovery through the backup or restore settings operations. Record the policy settings, keep them safe, and have them available for a manual recovery as dictated by your security standards.
The values in the following fields are required if secure authentication is used or anonymous connections are disabled on the LDAP server:
User Distinguished Name: The user distinguished name is used to authenticate to the LDAP authentication service. This field supports a maximum length of 254 Unicode characters, for example:
CN=Administrator,CN=users,DC=mycompany,DC=com
Password: The password is used to authenticate to the LDAP authentication service. This field supports a maximum length of 254 Unicode characters.
When modifying an LDAP Policy, the LDAP attribute fields can also be changed (an illustrative sketch of how these attributes are used follows this procedure):
Base Distinguished Name: The LDAP distinguished name (DN) that uniquely identifies a set of entries in a realm. This field is required but blank by default. The value in this field consists of 1 - 254 Unicode characters.
User Name Attribute: The attribute name that is used for the user name during authentication. This field is required and contains the value uid by default. The value in this field consists of 1 - 61 Unicode characters.
Password: The attribute name that is used for the password during authentication. This field is required and contains the value userPassword by default. The value in this field consists of 1 - 61 Unicode characters.
Group Member Attribute: The attribute name that is used to identify group members. This field is optional and contains the value member by default. This field can contain up to 61 Unicode characters.
Group Name Attribute: The attribute name that is used to identify the group during authorization. This field is optional and contains the value cn by default. This field can contain up to 61 Unicode characters.
User Name filter: Used to filter and verify the validity of an entered user name. This field is optional and contains the value (uid={0}) by default. This field can contain up to 254 Unicode characters.
Group Name filter: Used to filter and verify the validity of an entered group name. This field is optional and contains the value (cn={0}) by default. This field can contain up to 254 Unicode characters.
Click OK to complete the operation. Click Cancel to abandon the operation and return to the Security Settings window.
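As an illustration of how the fields above map onto a direct LDAP exchange, the following Python sketch uses the open-source ldap3 package (pip install ldap3) together with the default User Name filter value. The server address, Base Distinguished Name, and credentials are hypothetical; this is not the TS7700's internal LDAP client:
# Illustrative only: mirrors a Direct LDAP policy lookup with the
# open-source ldap3 package. The server, Base DN, and credentials below
# are hypothetical.
from ldap3 import ALL, Connection, Server

BASE_DN = "CN=users,DC=mycompany,DC=com"   # Base Distinguished Name
USER_FILTER = "(uid={0})"                  # default User Name filter

server = Server("ldap://ldap.mycompany.com:389", get_info=ALL)

# Bind with the policy's User Distinguished Name and Password.
conn = Connection(
    server,
    user="CN=Administrator,CN=users,DC=mycompany,DC=com",
    password="secret",
    auto_bind=True,
)

# Verify an entered login name the way the User Name filter does: the {0}
# placeholder is replaced by the user name before the search is issued.
entered_name = "operator1"
conn.search(BASE_DN, USER_FILTER.format(entered_name), attributes=["cn", "member"])
print(conn.entries)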
Creating a RACF based LDAP Policy
The process is similar to the previous item, “Adding a Direct LDAP policy” on page 474. Some required configurations on the host side regarding RACF, SDBM, and IBM Security Directory Server must be performed in advance before this capability can be made operational. See Chapter 10, “Host Console operations” on page 585 for the description of the parameters and configurations. When those configurations are ready, the RACF based LDAP Policy can be created and activated. See Figure 9-71 to add a RACF policy.
Figure 9-71 Adding a RACF based Direct LDAP Policy
As shown in Figure 9-71 on page 476, a new policy is created, which is called RACF_LDAP. The Primary Server URL is that of the IBM Security Directory Server, configured in the same way as any regular LDAP server.
The Base Distinguished Name matches the SDBM_SUFFIX.
As shown in Figure 9-71 on page 476, the Group Member Attribute was set to racfgroupuserids (it is shown truncated in the MI’s text box).
The User Distinguished Name should be specified with all of the following parameters:
racfid
profiletype
cn
When the previous setup is complete, more users can be added to the policy, or clusters can be assigned to it, as described in the topics to follow. There are no specific restrictions for these RACF/LDAP user IDs, and they can be used to secure the MI, or the IBM service login (for the IBM SSR) just as any other LDAP user ID.
See IBM Knowledge Center, which is available locally in the MI window by clicking the question mark in the upper right of the bar and selecting Help, or on the following website:
Adding users to a Direct LDAP Policy
For more information, see the process that is described in the “Adding a user to a Storage Authentication Policy” on page 471. The same steps apply when adding users to a Direct LDAP Policy.
Assigning a Direct LDAP Policy to a cluster or clusters
For more information, see the procedure that is described in “Assigning clusters to a Storage Authentication Policy” on page 472. The same steps apply when working with a Direct LDAP Policy.
Deleting a Direct LDAP Policy
For more information, see the procedure that is described in “Deleting a Storage Authentication Policy” on page 473. The same steps apply when deleting a Direct LDAP Policy.
Roles and Permissions window
You can use the window that is shown in Figure 9-72 to set and control user roles and permissions for a TS7700 Grid.
Figure 9-72 TS7700 MI Roles and Permissions window
Clicking the Manage users and assign user roles link opens the Security Settings window where the user can add, modify, assign, test, and delete the authentication policies that determine how users are authenticated to the TS7700 Management Interface.
Each role is described in the following list:
Operator: The operator has access to monitoring information, but is restricted from changing settings for performance, network configuration, feature licenses, user accounts, and custom roles. The operator is also restricted from inserting and deleting logical volumes.
Lead Operator: The lead operator has access to monitoring information and can perform actions for volume operation. The lead operator has nearly identical permissions to the administrator, but cannot change network configuration, feature licenses, user accounts, and custom roles.
Administrator: The administrator has the highest level of authority, and can view all windows and perform any action, including the addition and removal of user accounts. The administrator has access to all service functions and TS7700 resources.
Manager: The manager has access to monitoring information, performance data, and functions, and can perform actions for users. The manager is restricted from changing most settings, including those for logical volume management, network configuration, feature licenses, user accounts, and custom roles.
Read Only: The read only role can view all pages but cannot perform any actions. Any custom role can be made read-only by applying the Read Only role template to it.
Custom roles: The administrator can name and define 10 custom roles by selecting the individual tasks that are permitted to each custom role. Tasks can be assigned to a custom role in the Roles and Assigned Permissions window.
 
Note: Valid characters for the Name of Custom Role field are A-Z, 0-9, $, @, *, #, and %. The first character of this field cannot be a number.
Roles and Assigned Permissions table
The Roles and Assigned Permissions table is a dynamic table that displays the complete list of TS7700 grid tasks and the permissions that are assigned to selected user roles.
To view the Roles and Assigned Permissions table, complete the following steps:
1. Select the check box to the left of the role to be displayed. The user can select more than one role to display a comparison of permissions.
2. Click Select Action → Properties.
3. Click Go.
The first column of the Roles and Assigned Permissions table lists all the tasks available to users of the TS7700. Subsequent columns show the assigned permissions for selected role (or roles). A check mark denotes permitted tasks for a user role. A null dash (-) denotes prohibited tasks for a user role.
Permissions for predefined user roles cannot be modified. The user can name and define up to 10 different custom roles, if necessary. The user can modify permissions for custom roles in the Roles and Assigned Permissions table. The user can modify only one custom role at a time.
To modify a custom role, complete the following steps:
1. Enter a unique name for the custom role in the Name of Custom Role field.
 
Note: Valid characters for this field are A-Z, 0-9, $, @, *, #, and %. The first character of this field cannot be a number.
2. Modify the custom role to fit the requirements by selecting (permitting) or clearing (prohibiting) tasks. Selecting or clearing a parent task affects any child tasks. However, a child task can be selected or cleared independently of a parent task. The user can apply the permissions of a predefined role to a custom role by selecting a role from the Role Template menu and clicking Apply. The user can then customize the permissions by selecting or clearing tasks.
3. After all tasks for the custom role are selected, click Submit Changes to activate the new custom role.
 
Remember: The user can apply the permissions of a predefined role to a custom role by selecting a role from the Role Template menu and clicking Apply. The user can then customize the permissions by selecting or clearing tasks.
SSL Certificates window
Use the window that is shown in Figure 9-73 to view, import, or delete SSL certificates to support secure connections to a Storage Authentication Service server from a TS7700 cluster. Starting from R3.3, this page also allows the user to replace the MI HTTPS SSL certificate with a custom one.
Figure 9-73 SSL Certificates window
If a Primary or Alternative Server URL, which is defined by a Storage Authentication Service Policy, uses the HTTPS protocol, a certificate for that address must be defined in this window. The same is true for Direct LDAP policies if the primary or alternative server uses LDAPS. If the policy uses plain LDAP, a certificate is not required.
The Certificates table displays the following identifying information for SSL certificates on the cluster:
Alias: A unique name to identify the certificate on the system.
Issued To: The distinguished name of the entity requesting the certificate.
Fingerprint: A number that specifies the Secure Hash Algorithm (SHA) of the certificate. This number can be used to verify the hash for the certificate at another location, such as the client side of a connection.
Expiration: The expiration date of the signer certificate for validation purposes.
Issued By: The issuer of the certificate.
Type: The type of the SSL certificate. This can be a trusted certificate that is installed from a remote server, or HTTPS for a certificate that is used in HTTPS connections to the local MI.
To import a new SSL certificate, complete the following steps:
1. Click Select Action → Retrieve from port and then, click Go. The Retrieve from Port window opens.
2. Enter the host and port from which the certificate is retrieved, and a unique value for
the alias.
3. Click Retrieve Signer Information. To import the certificate, click OK. To abandon the operation and return to the SSL Certificates window, click Cancel.
Alternatively, to import a new SSL certificate by using the wizard, select New Certificate from the top of the table, which displays a wizard dialog.
To retrieve a certificate from the server, select Retrieve certificate from server and click Next. Enter the host and port from which the certificate is retrieved and click Next. The certificate information is retrieved. The user must set a unique alias in this window. To import the certificate, click Finish. To abandon the operation and close the window, click Cancel. The user can also go back to the Retrieve Signer Information window.
To upload a certificate, select Upload a certificate file and click Next. Click the Upload button, select a valid certificate file, and click Next. Verify that the certificate information (serial number, issued to, issued by, fingerprint, and expiration) is displayed on the wizard. Fill in the Alias field with valid characters. When the Finish button is enabled, click Finish. Verify that the trusted certificate was successfully added in the SSL Certificates table. To abandon the operation and close the window, click Cancel. The user can also go back to the Retrieve Signer Information window.
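Conceptually, the retrieve-from-port step fetches the server's certificate and computes the fingerprint that is shown in the Certificates table. The following Python sketch does the equivalent with the standard library; the host and port are hypothetical, and SHA-1 is used here only as an example fingerprint algorithm:
import hashlib
import ssl

# Hypothetical server; an LDAPS or Storage Authentication Service address
# would be used in practice.
host, port = "ldap.mycompany.com", 636

# Retrieve the peer certificate in PEM form, then compute a fingerprint
# over the DER encoding. Verification is off because the certificate is
# not trusted yet; importing it is the point of this procedure.
pem = ssl.get_server_certificate((host, port))
der = ssl.PEM_cert_to_DER_cert(pem)
digest = hashlib.sha1(der).hexdigest().upper()
print(":".join(digest[i:i + 2] for i in range(0, len(digest), 2)))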
To delete an existing SSL certificate, complete the following steps:
1. Select the radio button next to the certificate to delete, select Select Action → Delete, and click Go. The Confirm Delete SSL Certificate window opens and prompts to confirm the decision to delete the SSL certificate.
2. Click Yes to delete the certificate and return to the SSL Certificates window. Click Cancel to abandon the delete operation and return to the SSL Certificates window.
InfoCenter Settings window
To upload a new TS7700 IBM Knowledge Center to the cluster’s MI, use the window that is shown in Figure 9-74.
Figure 9-74 InfoCenter Settings window
This window has the following items:
Current Version section, where the following items can be identified or accessed:
 – Identify the version level and date of IBM Knowledge Center that is installed on
the cluster.
 – Access a product database to download a JAR file containing a newer version of IBM Knowledge Center.
 – Access an external site displaying the most recently published version of IBM Knowledge Center.
The TS7700 IBM Knowledge Center download site link.
Click this link to open the Fix Central product database so that the user can download a new version of the TS7700 IBM Knowledge Center as a .jar file (if available):
a. Select System Storage from the Product Group menu.
b. Select Tape Systems from the Product Family menu.
c. Select TS7700 from the Product menu.
d. Click Continue.
e. On the Select Fixes window, check the box next to the wanted InfoCenter Update file (if available).
f. Click Continue.
g. On the Download Options window, select Download using Download Director.
h. Select the check box next to Include prerequisites and co-requisite fixes.
i. Click Continue.
j. On the Download files using Download Director window, ensure that the check box next to the correct InfoCenter Update version is checked and click Download now. The Download Director applet opens. The downloaded file is saved at C:\DownloadDirector.
With the new .jar file that contains the updated IBM Knowledge Center (either from the Fix Central database or from an IBM SSR), save the .jar file to a local directory.
To upload and install the new IBM Knowledge Center, complete the following steps:
a. Click Browse to open the File Upload window.
b. Go to the folder that contains the new .jar file.
c. Highlight the new .jar file name and click Open.
d. Click Upload to install the new IBM Knowledge Center on the cluster’s MI.
9.3.10 The Settings icon
The TS7700 management interface pages that are accessible through the Settings icon help the user to view or change cluster network settings, install, or remove feature licenses, and configure SNMP and library port access groups (SDAC).
With R4.2, a new item, Cloud Tiering Settings, can appear under the Settings options, depending on the configuration of the cluster (FC 5278, Enable Cloud Storage Tier).
Figure 9-75 shows the Settings icon appearance with the new item, Cloud Tier Settings, and its options.
Figure 9-75 The Cloud Tier Settings and options
The Cloud Tier Settings page is used to set or modify the cloud tier settings on the TS7700 cluster. The following items can be found on this page:
Cloud Pools: Cloud Pool parameters
Each cloud pool contains the following items:
 – Nicknames
 – Cloud Data Format - Standard Format
Cloud Accounts: Cloud account parameters. R4.2 supports IBM COS S3 and Amazon S3 account types.
Each cloud account contains the following fields. The maximum number of cloud accounts is 256.
 – Nickname: Cloud account nickname, up to 8 characters in length. Nicknames can be changed without restriction to the Account Type.
 – Tenant Name: Not required for the account types supported in R4.1.2 (IBM COS S3 and Amazon S3).
 – Service ID: The Service ID is unique and cannot be modified. The value is auto-generated; no user input is needed.
 – Health Check: Periodic, Event, or Disabled. If Periodic is selected, specify the number of minutes between health checks.
 – Type: Amazon S3 or IBM COS S3. After a cloud account is created, its type cannot be modified.
 – Access Key ID: The user name to access data in the cloud, which is set when the cloud account is created. The Secret Access Key is also provided, and both settings can be modified later.
Cloud Container parameters: A container is owned by the account that created it. The TS7700 cloud supports up to 256 containers in each of the accounts. Container ownership is not transferable, but can be deleted if empty.
After a container is deleted, the name becomes available for reuse, but it can take some time for the name to become free. Also, another account can create a container with that name in the meantime; therefore, the name might not be available for reuse.
Tip: If you are planning to use the same container name, do not delete the original empty container.
An unlimited number of objects can be stored in a container and no difference exists in performance whether many or only a few containers are used. All objects can be stored in a single container, or they can be organized across several containers.
 
Note: A container cannot be created within another container.
Each container contains the following fields:
 – Container name: After creation, an S3 container’s name cannot be changed. The following rules apply to naming S3 containers in all AWS Regions (a validation sketch follows this list):
 • Container names can be repeated to support Cross-Region Replication on Amazon.
 • Container names must be 3 - 63 characters.
 • Container names must not contain uppercase characters or underscores.
 • Container names must start with a lowercase letter or number.
 • Container names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Container names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
 • The single period (.) is not allowed in AWS account container names but allowed in COS account container names.
 • Container names must not be formatted as an IP address (for example, 192.168.5.4).
 • When you use virtual hosted–style containers with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches containers that do not contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (“.”) in container names when using virtual hosted–style containers.
 – Cloud Account
 – Cloud Pool
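The naming rules above can be checked programmatically. The following Python sketch implements them as listed; it is an illustrative checker, not TS7700 or AWS code, and is_valid_container_name is a hypothetical helper name:
import ipaddress
import re

# Each label: lowercase letters, numbers, and hyphens; must start and end
# with a lowercase letter or a number.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_container_name(name, account_type="AWS"):
    if not 3 <= len(name) <= 63:
        return False
    try:                                  # must not be formatted as an IP address
        ipaddress.ip_address(name)
        return False
    except ValueError:
        pass
    labels = name.split(".")
    if account_type == "AWS" and len(labels) > 1:
        return False                      # periods disallowed for AWS accounts
    return all(LABEL.match(label) for label in labels)

print(is_valid_container_name("prod-backups"))          # True
print(is_valid_container_name("Prod_Backups"))          # False: uppercase/underscore
print(is_valid_container_name("my.container", "COS"))   # True: periods allowed on COS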
For more information, see the following resources:
IBM TS7760 R4.2 Cloud Storage Tier Guide, REDP-5514:
Amazon updated naming conventions:
Virtual Hosting of Buckets:
Amazon S3 Transfer Acceleration:
After an Amazon container is created, the cloud URLs must be added. For Amazon S3, select one of the endpoint URLs from the following website:
For more information about containers, see the following resources:
IBM COS (IBM Cloud Object Storage System™):
COS Account Creation:
Figure 9-76 shows the Cluster Settings icon and options.
Figure 9-76 The Cluster Settings options
Cluster network settings
Use this page to set or modify IP addresses for the selected IBM TS7700 cluster.
The user can back up these settings as part of the TS7700_cluster<cluster ID>.xmi file and restore them for later use or use with another cluster.
Customer IP addresses tab
Use this tab to set or modify the MI IP addresses for the selected cluster. Each cluster is associated with two routers or switches. Each router or switch is assigned an IP address and one virtual IP address is shared between routers or switches.
 
Note: Any modifications to IP addresses on the accessing cluster interrupt access to that cluster for all current users. If the accessing cluster IP addresses are modified, the current users are redirected to the new virtual address.
The following fields are displayed on this tab:
 – IPv4: Select this radio button if the cluster can be accessed by an IPv4 address. If this option is disabled, all incoming IPv4 traffic is blocked, although loop-back traffic is still permitted.
If this option is enabled, specify the following addresses:
 • <Cluster Name> IP address: An AIX virtual IPv4 address that receives traffic on both customer networks. This field cannot be blank if IPv4 is enabled.
 • Primary Address: The IPv4 address for the primary customer network. This field cannot be blank if IPv4 is enabled.
 • Secondary Address: The IPv4 address for the secondary customer network. This field cannot be blank if IPv4 is enabled.
 • Subnet Mask: The IPv4 subnet mask that is used to determine the addresses present on the local network. This field cannot be blank if IPv4 is enabled.
 • Gateway: The IPv4 address that is used to access systems outside the local network.
A valid IPv4 address is 32 bits long, consists of four decimal numbers, each 0 - 255, separated by periods, such as 98.104.120.12.
 – IPv6: Select this radio button if the cluster can be accessed by an IPv6 address. If this option is disabled, all incoming IPv6 traffic is blocked, although loop-back traffic is still permitted. If the user enables this option and does not designate any additional IPv6 information, the minimum required local addresses for each customer network interface will automatically be enabled and configured by using neighbor discovery. If this option is enabled, the user can specify the following addresses:
 • Primary Address: The IPv6 address for the primary network. This field cannot be blank if IPv6 is enabled.
 • Secondary Address: The IPv6 address for the secondary network. This field cannot be blank if IPv6 is enabled.
 • Prefix Length: The IPv6 prefix length that is used to determine the addresses present on the local network. The value in this field is an integer 1 - 128. This field cannot be blank if IPv6 is enabled.
 • Gateway: The IPv6 address that is used to access systems outside the local network.
A valid IPv6 address is a 128-bit long hexadecimal value that is separated into 16-bit fields by colons, such as 3afa:1910:2535:3:110:e8ef:ef41:91cf.
Leading zeros can be omitted in each field so that :0003: can be written as :3:. A double colon (::) can be used once per address to replace multiple fields of zeros. For example, 3afa:0:0:0:200:2535:e8ef:91cf can be written as 3afa::200:2535:e8ef:91cf.
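Python’s standard ipaddress module applies these same textual rules and can be used to check an address before entering it. This brief sketch uses the example addresses from the text:
import ipaddress

# The IPv4 example from the text: four decimal numbers, 0 - 255, dotted.
print(ipaddress.ip_address("98.104.120.12"))

# IPv6 zero handling: leading zeros drop, and one run of zero fields
# collapses to '::', exactly as described above.
addr = ipaddress.ip_address("3afa:0:0:0:200:2535:e8ef:91cf")
print(addr.compressed)   # 3afa::200:2535:e8ef:91cf
print(addr.exploded)     # 3afa:0000:0000:0000:0200:2535:e8ef:91cf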
 – DNS Server: The IP addresses of any domain name server (DNS), separated by commas. DNS addresses are needed only when specifying a symbolic domain name rather than a numeric IP address for one or more of the following types of information:
 • Primary Server URL on the Add External policy window
 • Encryption Key Server (EKS) address
 • SNMP server address
 • Security server address
If this field is left blank, the DNS server address is populated by Dynamic Host Configuration Protocol (DHCP).
The address values can be in IPv4 or IPv6 format. A maximum of three DNS servers can be added. Any spaces that are entered in this field are removed.
To submit changes, click Submit. If the changes apply to the accessing cluster, a warning message is displayed that indicates that the current user access will be interrupted. To accept changes to the accessing cluster, click OK. To reject changes to the accessing cluster and return to the IP addresses tab, click Cancel.
To reject the changes that are made to the IP addresses fields and reinstate the last submitted values, select Reset. The user can also refresh the window to reinstate the last submitted values for each field.
Use the Encrypt Grid Communication tab to encrypt grid communication between specific clusters.
 
Important: Enabling grid encryption significantly affects the performance of the TS7700. System performance can be reduced by 70% or more when grid encryption is enabled.
This tab includes the following fields:
Password
This password is used as an EK to protect grid communication. This value has a 255 ASCII character limit, and is required.
Cluster communication paths
Select the box next to each cluster communication path to be encrypted. The user can select a communication path between two clusters only if both clusters meet all the following conditions:
 – Are online
 – Operate at a Licensed Internal Code level of 8.30.0.x or higher
 – Operate by using IPv6-capable servers (3957 models V07, VEB, and VEC)
To submit changes, click Submit.
Feature licenses
The user can view information about feature licenses, or to activate or remove feature licenses from the TS7700 cluster from this TS7700 MI page.
The Feature Licenses window includes the following fields:
Cluster common resources: The Cluster common resources table displays a summary of resources that are affected by activated features. The following information is displayed:
 – Cluster-Wide Disk Cache Enabled: The amount of disk cache that is enabled for the entire cluster, in terabytes (TB). If the selected cluster does not possess a physical library, the value in this field displays the total amount of cache that is installed on the cluster. Access to cache by a cluster without a physical library is not controlled by feature codes.
 – Cross-Cluster Communication (Grid): Whether cross-cluster communication is enabled on the grid. If this option is enabled, multiple clusters can form a grid. The possible values are Enabled and Disabled.
 – Peak data throughput: The Peak data throughput table displays the peak data throughput, in megabytes per second (MBps), for each vNode. The following information is displayed:
 • vNode: Name of the vNode.
 • Peak data throughput: The upper limit of the data transfer speed between the vNode and the host, which is displayed in MBps.
Currently activated feature licenses: The Currently activated feature licenses table displays a summary of features that are installed on each cluster:
 – Feature Code: The feature code number of the installed feature.
 – Feature Description: A description of the feature that was installed by the feature license.
 – License Key: The 32-character license key for the feature.
 – Node: The name and type of the node on which the feature is installed.
 – Node Serial Number: The serial number of the node on which the feature is installed.
 – Activated: The date and time the feature license was activated.
 – Expires: The expiration status of the feature license. The following values are possible:
 • Day/Date: The day and date on which the feature license is set to expire.
 • Never: The feature is permanently active and never expires.
 • One-time use: The feature can be used once and has not yet been used.
 
Note: These settings can be backed up by using the Backup Settings function under the Cluster Settings tab and restored for later use. When the backup settings are restored, new settings are added but no settings are deleted. The user cannot restore feature license settings to a cluster that is different from the cluster that created the ts7700_cluster<cluster ID>.xmi backup file. After restoring feature license settings on a cluster, log out and then log in to refresh the system.
Use the menu on the Currently activated feature licenses table to activate or remove a feature license. The user can also use this menu to sort and filter feature license details.
Simple Network Management Protocol
Use this selection on the TS7700 MI to view or modify the SNMP settings that are configured on a TS7700 cluster.
Use this page to configure SNMP traps that log events, such as logins, configuration changes, status changes (vary on, vary off, or service prep), shutdown, and code updates. SNMP is a networking protocol that enables a TS7700 to automatically gather and transmit information about alerts and status to other entities in the network.
When adding or modifying SNMP destinations, follow this advice:
Use IPv4 or IPv6 addresses as destinations rather than a fully qualified domain name (FQDN).
Verify that any FQDN that is used resolves to the correct IP address.
Test only one destination at a time when testing SNMP configuration to ensure that FQDN destinations are working properly.
SNMP settings
Use this section to configure global settings that apply to SNMP traps on an entire cluster. The following settings are configurable:
SNMP Version: The SNMP version. It defines the protocol that is used in sending SNMP requests and is determined by the tool that is used to monitor SNMP traps. Different versions of SNMP traps work with different management applications. The following values are possible:
 – V1: The suggested trap version, which is compatible with the greatest number of management applications.
 – V2: An alternative trap version.
 – V3: An alternative trap version.
Enable SNMP Traps: A check box that enables or disables SNMP traps on a cluster. A checked box enables SNMP traps on the cluster; a cleared box disables SNMP traps on the cluster. The check box is cleared by default.
Trap Community Name: The name that identifies the trap community and is sent along with the trap to the management application. This value behaves as a password; the management application will not process an SNMP trap unless it is associated with the correct community. This value must be 1 - 15 characters in length and composed of Unicode characters. The default value for this field is public.
Send Test Trap: Select this button to send a test SNMP trap to all destinations listed in the Destination Settings table by using the current SNMP trap values. The Enable SNMP Traps check box does not need to be checked to send a test trap. If the SNMP test trap is received successfully and the information is correct, click Submit Changes.
Submit Changes: Select this button to submit changes to any of the global settings, including the fields SNMP Version, Enable SNMP Traps, and Trap Community Name.
Destination Settings: Use the Destination Settings table to add, modify, or delete a destination for SNMP trap logs. The user can add, modify, or delete a maximum of 16 destination settings at one time.
 
Note: A user with read-only permissions cannot modify the contents of the Destination Settings table.
IP address: The IP address of the SNMP server. This value can take any of the following formats: IPv4, IPv6, a host name that is resolved by the system (such as localhost), or an FQDN if a domain name server (DNS) is provided. A value in this field is required.
 
Tip: A valid IPv4 address is 32 bits long, consists of four decimal numbers, each 0 - 255, separated by periods, such as 98.104.120.12.
A valid IPv6 address is a 128-bit long hexadecimal value that is separated into 16-bit fields by colons, such as 3afa:1910:2535:3:110:e8ef:ef41:91cf. Leading zeros can be omitted in each field so that :0003: can be written as :3:. A double colon (::) can be used once per address to replace multiple fields of zeros.
For example, 3afa:0:0:0:200:2535:e8ef:91cf can be written as 3afa::200:2535:e8ef:91cf.
Port: The port to which the SNMP trap logs are sent. This value must be 0 - 65535. A value in this field is required.
Use the Select Action menu on the Destination Settings table to add, modify, or delete an SNMP trap destination. Destinations are changed in the vital product data (VPD) as soon as they are added, modified, or deleted. These updates do not depend on clicking Submit Changes.
Any change to SNMP settings is logged on the Tasks window.
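Independently of the MI's Send Test Trap button, a destination can be checked from any administrative host by emitting a trap of its own. The sketch below uses the open source pysnmp package (not part of the TS7700) and assumes its 4.x hlapi interface; the destination address, port 162, and the public community string are assumptions that must match the Destination Settings and Trap Community Name described above.

```python
# Requires: pip install pysnmp (sketch assumes the 4.x hlapi interface)
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    NotificationType, ObjectIdentity, sendNotification,
)

error_indication, error_status, error_index, var_binds = next(
    sendNotification(
        SnmpEngine(),
        CommunityData('public', mpModel=0),       # mpModel=0 selects SNMPv1 traps
        UdpTransportTarget(('192.0.2.10', 162)),  # illustrative trap destination
        ContextData(),
        'trap',
        NotificationType(ObjectIdentity('1.3.6.1.6.3.1.1.5.1')),  # generic coldStart
    )
)
print(error_indication or 'test trap sent')
```

If the receiving management application logs the coldStart trap, basic reachability and the community string are confirmed.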
Library Port Access Groups window
Use this selection to view information about library port access groups that are used by the TS7700. Library port access groups enable the user to segment resources and authorization by controlling access to library data ports.
 
Tip: This window is visible only if at least one instance of FC5271 (selective device access control (SDAC)) is installed on all clusters in the grid.
Access Groups table
This table displays information about existing library port access groups.
The user can use the Access Groups table to create a library port access group, and to modify or delete an existing access group.
The following status information is displayed in the Access Groups table:
Name: The identifying name of the access group. This name must be unique and cannot be modified after it is created. It must contain 1 - 8 characters, and the first character in this field cannot be a number. Valid characters for this field are A - Z, 0 - 9, $, @, *, #, and %. The default access group is identified by the name “- - - - - - - -”. This group can be modified but cannot be deleted.
Library Port IDs: A list of Library Port IDs that are accessible by using the defined access group. This field contains a maximum of 750 characters, or 31 Library Port IDs separated by commas or spaces. A range of Library Port IDs is signified by using a hyphen (-). This field can be left blank.
The default access group has a value of 0x01 - 0xFF in this field. Initially, all port IDs are included. After modification, this field can show only the IDs that correspond to the existing vNodes.
 
Important: VOLSERs that are not found in the SDAC VOLSER range table use this default group to determine access. The user can modify this group to remove any or all default Library Port IDs. However, if all default Library Port ID values are removed, no access is granted to any volumes not in a defined range.
Description: A description of the access group (a maximum of 70 characters).
Click the Select Action menu on the Access Groups table to add, modify, or delete a library port access group.
Access Groups Volume Ranges: The Access Groups Volume Ranges table displays VOLSER range information for existing library port access groups. The user can also use the Select Action menu on this table to add, modify, or delete a VOLSER range that is defined by a library port access group.
Start VOLSER: The first VOLSER in the range that is defined by an access group.
End VOLSER: The last VOLSER in the range that is defined by an access group.
Access Group: The identifying name of the access group, which is defined by the Name field in the Access Groups table.
Click the Select Action menu on the Access Group Volume Ranges table to add, modify, or delete a VOLSER range that is associated with a library port access group. To view the current list of virtual volume ranges in the TS7700 cluster, enter the start and end VOLSERs and click Show. A validation sketch for access group names and VOLSER ranges follows the note below.
 
Note: Access groups and access group ranges are backed up and restored together. For additional information, see “Backup settings” on page 500 and “Restore Settings window” on page 503.
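As referenced above, the naming rules and VOLSER ranges lend themselves to a quick client-side pre-check. This sketch is illustrative only; it assumes that VOLSERs are six-character strings that compare lexicographically, which matches how start and end VOLSERs are expressed in the table.

```python
import re

# Access group name: 1 - 8 characters, first not a digit, from A-Z, 0-9, $, @, *, #, %
ACCESS_GROUP_NAME = re.compile(r'^[A-Z$@*#%][A-Z0-9$@*#%]{0,7}$')

def valid_access_group_name(name):
    return bool(ACCESS_GROUP_NAME.match(name))

def volser_in_range(volser, start, end):
    """Assumes six-character VOLSERs that compare lexicographically."""
    return start <= volser.upper() <= end

print(valid_access_group_name('GRP#01'))              # True
print(valid_access_group_name('1GROUP'))              # False: starts with a digit
print(volser_in_range('AB0500', 'AB0000', 'AB0999'))  # True
```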
Cluster Settings
You can use the Cluster Settings to view or change settings that determine how a cluster runs copy policy overrides, applies Inhibit Reclaim schedules, uses an EKS, implements write protect mode, and runs backup and restore operations.
For an evaluation of different scenarios and examples where those overrides benefit the overall performance, see 4.2, “Planning for a grid operation” on page 160.
Copy Policy Override
Use this page to override local copy and I/O policies for a given TS7700 cluster.
 
Reminder: The items on this window can modify the cluster behavior regarding local copy and certain I/O operations. Some LI REQUEST commands can also modify this behavior.
For the selected cluster, the user can tailor copy policies to override certain copy or I/O operations. Select the check box next to one or more of the following settings to specify a policy override:
Prefer local cache for scratch mount requests
When this setting is selected, a scratch mount selects the local TVC under the following conditions (restated as code after this list):
 – The Copy Mode field that is defined by the MC for the mount has a value other than No Copy defined for the local cluster.
 – The Copy Mode field defined for the local cluster is not Deferred when one or more peer clusters are defined as Rewind Unload (RUN).
 – The local cluster is not in a degraded state. The following examples are degraded states:
Out of cache resources
Out of physical scratch
 
Note: This override can be enabled independently of the status of the copies in the cluster.
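Restated as code, the conditions above combine as follows. This is only an illustrative restatement of the documented rules under assumed names, not the TS7700 selection algorithm.

```python
def prefers_local_tvc_for_scratch(local_copy_mode, peer_copy_modes, degraded):
    """Restates the documented conditions for selecting the local TVC."""
    if local_copy_mode == 'No Copy':
        return False
    if local_copy_mode == 'Deferred' and 'RUN' in peer_copy_modes:
        return False
    return not degraded  # for example, out of cache resources or physical scratch

print(prefers_local_tvc_for_scratch('RUN', ['Deferred'], degraded=False))  # True
```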
Prefer local cache for private mount requests
This override causes the local cluster to satisfy the mount request if both of the following conditions are true:
 – The cluster is available.
 – The local cluster has a valid copy of the data, even if that data is only resident on physical tape.
If the local cluster does not have a valid copy of the data, the default cluster selection criteria applies.
Force volumes that are mounted on this cluster to be copied to the local cache
When this setting is selected for a private (non-scratch) mount, a copy operation is performed on the local cluster as part of the mount processing. When this setting is selected for a scratch mount, the Copy Consistency Point on the specified MC is overridden for the cluster with a value of Rewind Unload. This override does not change the definition of the MC, but influences the replication policy.
Enable fewer RUN consistent copies before reporting RUN command complete
When this setting is selected, the maximum number of RUN copies, including the source, is determined by the value that is entered at Number of required RUN consistent copies including the source copy. That number of copies must be consistent before the RUN operation completes. If this option is not selected, the MC definitions are used explicitly. Therefore, the number of RUN copies can be from one to the number of clusters in the grid configuration, or the total number of clusters that are configured with a RUN Copy Consistency Point.
Ignore cache preference groups for copy priority
If this option is selected, copy operations ignore the cache preference group when determining the priority of volumes that are copied to other clusters.
 
Note: These settings override the default TS7700 behavior and can be different for every cluster in a grid.
To change any of the settings on this window, complete the following steps:
 – Select or clear the box next to the setting that must be changed. If the user enables the Enable fewer RUN consistent copies before reporting RUN command complete option, the user can alter the value for the Number of required RUN consistent copies including the source copy.
 – Click Submit Changes.
Inhibit Reclaim Schedules window
To add, modify, or delete Inhibit Reclaim Schedules that are used to postpone tape reclamation in a TS7740 or TS7720T cluster, use this window.
This window is visible but disabled on the TS7700 MI if the grid possesses a physical library, but the selected cluster does not. The following message is displayed:
The cluster is not attached to a physical tape library.
 
Tip: This window is not visible on the TS7700 MI if the grid does not possess a physical library.
Reclamation can improve tape usage by consolidating data on some physical volumes, but it uses system resources and can affect host access performance. The Inhibit Reclaim schedules function can be used to disable reclamation in anticipation of increased host access to physical volumes.
This window includes the following fields:
Schedules: The Schedules table displays the list of Inhibit Reclaim schedules that are defined for each partition of the grid. It displays the day, time, and duration of any scheduled reclamation interruption. All inhibit reclaim dates and times are displayed first in Coordinated Universal Time (UTC) and then in local time. The following status information is displayed in the Schedules table:
 – Coordinated Universal Time (UTC) Day of Week: The UTC day of the week on which the reclamation is inhibited. The following values are possible: Every Day, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, or Saturday.
 – Coordinated Universal Time (UTC) Start Time: The UTC time in hours (H) and minutes (M) at which reclamation is inhibited. The values in this field must take the form HH:MM. Possible values for this field include 00:00 through 23:59.
The Start Time field includes a time chooser clock icon. The user can enter hours and minutes manually by using 24-hour time designations, or can use the time chooser to select a start time based on a 12-hour (AM/PM) clock.
 – Local Day of Week: The day of the week in local time on which the reclamation is inhibited. The day that is recorded reflects the time zone in which the browser is located. The following values are possible: Every Day, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, or Saturday.
 – Local Start Time: The local time in hours (H) and minutes (M) at which reclamation is inhibited. The values in this field must take the form HH:MM. The time that is recorded reflects the time zone in which the web browser is located. Possible values for this field include 00:00 - 23:59. The Start Time field includes a time chooser clock icon. The user can enter hours and minutes manually by using 24-hour time designations, or can use the time chooser to select a start time based on a 12-hour (AM/PM) clock.
 – Duration: The number of days (D), hours (H), and minutes (M) that the reclamation is inhibited. The values in this field must take the form DD days HH hours MM minutes. Possible values for this field range from 0 days 0 hours 1 minute through 1 day 0 hours 0 minutes if the day of the week is Every Day. Otherwise, possible values for this field range from 0 days 0 hours 1 minute through 7 days 0 hours 0 minutes.
 
Note: Inhibit Reclaim schedules cannot overlap.
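Because schedules cannot overlap, a proposed window can be checked against existing ones before it is added. The sketch below treats each schedule as a start minute within the week plus a duration in minutes, wrapping at the week boundary; this representation is an assumption for illustration, not the MI's internal format.

```python
WEEK_MINUTES = 7 * 24 * 60

def intervals(start, duration):
    """Expand a weekly window into non-wrapping [begin, end) intervals."""
    end = start + duration
    if end <= WEEK_MINUTES:
        return [(start, end)]
    return [(start, WEEK_MINUTES), (0, end - WEEK_MINUTES)]

def schedules_overlap(a, b):
    """a and b are (start_minute_of_week, duration_minutes) tuples."""
    return any(s1 < e2 and s2 < e1
               for s1, e1 in intervals(*a)
               for s2, e2 in intervals(*b))

# Sunday 23:00 for 2 hours vs. Monday 00:30 for 1 hour (minutes from Sunday 00:00)
print(schedules_overlap((23 * 60, 120), (24 * 60 + 30, 60)))  # True
```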
Use the menu on the Schedules table to add a new Inhibit Reclaim schedule or to modify or delete an existing schedule.
To modify an Inhibit Reclaim schedule, complete the following steps:
1. From the Inhibit Reclaim Schedules window, go to the Schedules table.
2. Select the radio button next to the Inhibit Reclaim schedule to be modified.
3. Select Modify from the Select Action menu.
4. Click Go to open the Modify Inhibit Reclaim Schedule window.
The values are the same as for the Add Inhibit Reclaim Schedule window.
To delete an Inhibit Reclaim schedule, complete the following steps:
1. From the Inhibit Reclaim Schedules window, go to the Schedules table.
2. Select the radio button next to the Inhibit Reclaim schedule that must be deleted.
3. Select Delete from the Select Action menu.
4. Click Go to open the Confirm Delete Inhibit Reclaim Schedule window.
5. Click OK to delete the Inhibit Reclaim schedule and return to the Inhibit Reclaim Schedules window, or click Cancel to abandon the delete operation and return to the Inhibit Reclaim Schedules window.
 
Note: Plan the Inhibit Reclaim schedules carefully. Running reclamation during peak times can affect production, and inhibiting it for too long increases media consumption.
Encryption Key Server Addresses window
Use this page to set the EKS addresses in the TS7700 cluster. This selection is only available on the TS7700 Management Interface in these circumstances:
A physical tape library and the tape encryption enablement feature (FC9900) are installed.
The Disk Encryption with External Key Management feature (FC5276) is installed.
In the TS7700 subsystem, user data can be encrypted on tape cartridges by the encryption-capable tape drives that are available to the TS7700 tape-attached clusters. Also, data can be encrypted by the full data encryption (FDE) DDMs in the TS7700 TVC cache.
With R4.0, the TVC cache encryption can be configured either for local or external encryption key management with 3956-CC9, 3956-CS9, and 3956-CSA cache types. Tape encryption uses an out-of-band external key management. For more information, see Chapter 2, “Architecture, components, and functional characteristics” on page 15.
 
Note: Only IBM Security Key Lifecycle Manager (SKLM) supports both external disk encryption and TS1140 and TS1150 tape drives. The settings for Encryption Server are shared for both tape and external disk encryption.
The IBM Security Key Lifecycle Manager for z/OS (ISKLM) external key manager supports TS7700 physical tape but does not support TS7700 disk encryption.
For both tape and disk encryption, an EKS is required in the network that is accessible by the TS7700 cluster. For more information, see Chapter 4, “Preinstallation planning and sizing” on page 135 and Chapter 7, “Hardware configurations and upgrade considerations” on page 245.
A tutorial that shows the properties of the EKS is available on the MI. To watch it, click the View tutorial link in the MI window. Figure 9-77 shows the Encryption Key Server Addresses setup window.
Figure 9-77 Encryption Key Server Addresses window
If the cluster has the feature code for disk or tape encryption enabled, this window is visible on the TS7700 MI.
 
Note: Some IP address ranges (for example, 10.x.x.x, 192.168.x.x, and 172.16.x.x) are commonly used in local networks (internal use) and are not propagated through the Internet. If the TS7700 and the key manager reside in different networks that are separated by the Internet, those subnets must be avoided.
The EKS assists encryption-enabled tape drives in generating, protecting, storing, and maintaining EKs that are used to encrypt information that is being written to, and to decrypt information that is being read from, tape media (tape and cartridge formats). When the External key management disk encryption feature is installed, the EKS also manages the EK for the cache disk subsystem, which removes the responsibility of managing the key from the 3957 V07 or VEB engine and from the disk subsystem controllers. To read more about data encryption with TS7700, see Chapter 2, “Architecture, components, and functional characteristics” on page 15.
The following settings are used to configure the TS7700 connection to an EKS:
Primary key server address
The key server name or IP address that is primarily used to access the EKS. This address can be a fully qualified host name or an IP address in IPv4 or IPv6 format. This field is not required if the user does not want to connect to an EKS.
 
Tip: A valid IPv4 address is 32 bits long, consists of four decimal numbers, each 0 - 255, separated by periods, such as 98.104.120.12.
A valid IPv6 address is a 128-bit long hexadecimal value separated into 16-bit fields by colons, such as 3afa:1910:2535:3:110:e8ef:ef41:91cf. Leading zeros can be omitted in each field so that :0003: can be written as :3:. A double colon (::) can be used once per address to replace multiple fields of zeros. For example, 3afa:0:0:0:200:2535:e8ef:91cf can be written as: 3afa::200:2535:e8ef:91cf.
A fully qualified host name is a domain name that uniquely and absolutely names a computer. It consists of the host name and the domain name. The domain name is one or more domain labels that place the computer in the DNS naming hierarchy. The host name and the domain name labels are separated by periods and the total length of the host name cannot exceed 255 characters.
Primary key server port
The port number of the primary key server. Valid values are any whole number 0 - 65535; the default value is 3801. This field is only required if a primary key address is used.
Secondary key server address
The key server name or IP address that is used to access the EKS when the primary key server is unavailable. This address can be a fully qualified host name or an IP address in IPv4 or IPv6 format. This field is not required if the user does not want to connect to an EKS. See the primary key server address description for IPv4, IPv6, and fully qualified host name value parameters.
Secondary key server port
The port number of the secondary key server. Valid values are any whole number 0 - 65535; the default value is 3801. This field is only required if a secondary key address is used.
Using the Ping Test
Use the Ping Test buttons to check the cluster network connection to a key server after changing a cluster’s address or port. If the user changes a key server address or port and does not submit the change before using the Ping Test button, the user receives the following message:
To perform a ping test you must first submit your address and/or port changes.
After the ping test starts, one of the following two messages is displayed:
 – The ping test against the address "<address>" on port "<port>" was successful.
 – The ping test against the address "<address>" on port "<port>" from "<cluster>" has failed. The error returned was: <error text>.
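The same reachability question can be answered from any administrative host before involving the MI, which helps separate general network problems from key server problems. The sketch below simply attempts a TCP connection; the address and the default port 3801 are illustrative.

```python
import socket

def reachable(address, port, timeout=5.0):
    """Return True if a TCP connection to address:port succeeds within timeout."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable('192.0.2.40', 3801))  # illustrative key server address and default port
```

A successful connection shows only that the port is open; the MI ping test remains the authoritative check.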
Click Submit Changes to save changes to any of these settings.
 
Tip: The user can back up these settings as part of the ts7700_cluster<cluster ID>.xmi file and restore them for later use or use with another cluster. If a key server address is empty at the time that the backup is run, when it is restored, the port settings are the same as the default values.
Write Protect Mode window
Use this page on the TS7700 Management Interface to view Write Protect Mode settings in a TS7700 cluster. This window is also displayed if Write Protect Mode is enabled because FlashCopy is enabled (the Current State field shows that the Write Protect for FlashCopy option is enabled).
With FlashCopy in progress, no modifications are allowed on the Write Protect Mode window until the FlashCopy testing is completed. When Write Protect Mode is enabled on a cluster, host commands fail if they are sent to virtual devices in that cluster and attempt to modify a volume’s data or attributes.
 
Note: FlashCopy is enabled by using the LI REQ (Library Request Host Console) command.
Meanwhile, host commands that are sent to virtual devices in peer clusters are allowed to continue with full read and write access to all volumes in the library. Write Protect Mode is used primarily for client-initiated disaster recovery testing. In this scenario, a recovery host that is connected to a non-production cluster must access and validate production data without any risk of modifying it.
A cluster can be placed into Write Protect Mode only if the cluster is online. After the mode is set, the mode is retained through intentional and unintentional outages and can be disabled only through the same MI window that is used to enable the function. When a cluster within a grid configuration has Write Protect Mode enabled, standard grid functions, such as virtual volume replication and virtual volume ownership transfer, are unaffected.
Virtual volume categories can be excluded from Write Protect Mode. Up to 32 categories can be identified and set to include or exclude from Write Protect Mode by using the Category Write Protect Properties table. Additionally, write-protected volumes in any scratch category can be mounted as private volumes if the Ignore Fast Ready characteristics of write-protected categories check box is selected.
The following settings are available:
Write Protect Mode settings
Write Protect Mode does not affect standard grid functions such as virtual volume replication or virtual volume ownership transfer. Table 9-12 shows settings available in Write Protect Mode page.
Table 9-12 Write Protect Mode settings for the active cluster
Setting
Description
Write Protect State
Displays the status of Write Protect Mode on the active cluster. The following values are possible:
Disabled: Write Protect mode is disabled. No Write Protect settings are in effect.
Enabled: Write Protect Mode is enabled. Any host command to modify volume data or attributes by using virtual devices in this cluster will fail, subject to any defined category exclusions.
Write protect for Flash Copy enabled: Write Protect Mode is enabled by the host. The Write Protect for Flash Copy function was enabled through the LI REQ z/OS command, and a DR test is likely in progress.
Important: Write Protect Mode cannot be modified while LI REQ-initiated write protection for Flash Copy is enabled. The user must disable it first by using the LI REQ command before trying to change any Write Protect Mode settings on the TS7700 MI. The following related fields are displayed:
 – DR family: The name of the disaster recovery (DR) family the Write Protect for Flash Copy was initiated against.
 – Flash time: The date and time at which the flash copy was enabled by the host. This mimics the time at which a real disaster occurs.
Disable Write Protect Mode
Select this to disable Write Protect Mode for the cluster.
Enable Write Protect Mode
Select this option to enable Write Protect Mode for devices that are associated with this cluster. When enabled, any host command fails if it attempts to modify volume data or volume attributes through logical devices that are associated with this cluster, subject to any defined category exclusions. After Write Protect Mode is enabled through the TS7700 Management Interface, it persists through any outage and can be disabled only through the TS7700 Management Interface.
Note: Write Protect Mode can only be enabled on a cluster if it is online and no Write Protect Flash copy is in progress.
Ignore fast ready characteristics of write protected categories
Check this box to permit write protected volumes that were returned to a scratch or a fast ready category to be recognized as private volumes.
 
When this box is checked, DR test hosts can mount production volumes as private volumes even if the production environment has already returned them to scratch. However, peer clusters, such as production clusters, continue to view these volumes as scratch volumes. This setting does not override the Fast Ready characteristics of excluded categories.
1. When the production environment returns volumes to scratch that are still needed for read operations on a DR test host, the user must ensure that the data is available to be read by the DR hosts. To meet this requirement, scratch categories in the production environment can use delete expire hold for a period of time that is long enough to cover the DR test. Otherwise, even if selective write protect is being used on the DR clusters, the data is no longer available to the DR hosts if the production volume is returned to scratch and deleted or reused. One alternative is to avoid running return to scratch processing from the production environment during the DR test. Another alternative in a DR environment that is composed only of TS7700s is to use Flash Copy for DR testing, which is immune to production changes, including any scratch processing, deletion, and reuse.
2. Additionally, when this box is checked, if the production host deletes a volume, that volume is also deleted on the DR cluster.
Category Write Protect Properties
Use the Category Write Protect Properties table to add, modify, or delete categories to be selectively excluded from Write Protect Mode. Disaster recovery test hosts or locally connected production partitions can continue to read and write to local volumes while their volume categories are excluded from write protect. These hosts must use a set of categories different from those primary production categories that are write protected.
 
Note: Categories configured and displayed on this table are not replicated to other clusters in the grid.
When Write Protect Mode is enabled, any categories added to this table must display a value of Yes in the Excluded from Write Protect field before the volumes in that category can be modified by an accessing host.
The following category fields are displayed in the Category Write Protect Properties table:
 – Category Number: The identifier for a defined category. This is an alphanumeric hexadecimal value between 0x0001 and 0xFEFF (0x0000 and 0xFFxx cannot be used). Values that are entered do not include the 0x prefix, although this prefix is displayed on the Cluster Summary window. Values that are entered are padded up to four places. Letters that are used in the category value must be capitalized.
 – Excluded from Write Protect: Whether the category is excluded from Write Protect Mode. The following values are possible:
 • Yes: The category is excluded from Write Protect Mode. When Write Protect is enabled, volumes in this category can be modified when accessed by a host.
 • No: The category is not excluded from Write Protect Mode. When Write Protect is enabled, volumes in this category cannot be modified when accessed by a host.
 – Description: A descriptive definition of the category and its purpose. This description must contain 0 - 63 Unicode characters.
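The category number rules above can be captured in a small normalization helper; this is an illustrative sketch, not MI code.

```python
def normalize_category(value):
    """Validate and normalize a category number per the rules described above."""
    number = int(value, 16)  # raises ValueError if the value is not hexadecimal
    if not 0x0001 <= number <= 0xFEFF:
        # excludes 0x0000 and all 0xFFxx values
        raise ValueError("category must be between 0x0001 and 0xFEFF")
    return format(number, '04X')  # pad to four places, uppercase letters

print(normalize_category('2f'))  # '002F'
```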
Use the menu on the Category Write Protect Properties table to add a category, or modify or delete an existing category. The user must click Submit Changes to save any changes that were made to the Write Protect Mode settings.
The user can add up to 32 categories per cluster when all clusters in the grid operate at code level R3.1 or later.
The following windows can be used to manage categories:
Add category: Use this window to add a new Write Protect Mode category in an IBM TS7700 cluster.
Modify category: Use this window to modify a Write Protect Mode category in an IBM TS7700 cluster.
Confirm Delete Category: Use this page to delete a Write Protect Mode category in an IBM TS7700 Cluster.
The following Knowledge Center tips regarding disaster recovery (DR) tests can be valuable when planning for them. Also, see Chapter 13, “Disaster recovery testing” on page 779.
Use the Write Protect Mode during a DR test to prevent any accidental DR host-initiated changes to your production content.
During a DR test, avoid housekeeping (return to scratch processing) within the DR test host configuration unless the process specifically targets volumes only within the DR host test range. Otherwise, even with the Selective Write Protect function enabled, the DR host can attempt to return production volumes to scratch. This problem can occur because the tape management system snapshot that is used for the DR test can interpret the volumes as expired and ready for processing.
Never assume the return to scratch process acts only on DR test volumes. If Write Protect Mode is enabled before DR testing and return to scratch processing is run on the DR host, then the Selective Write Protect function prevents the return to scratch from occurring on protected categories. Further, options in the tape management system can be used to limit which volumes within the DR host are returned to scratch.
For example, in the DFSMSrmm tape management system, the VOLUMES or VOLUMERANGES options can be used on the EXPROC command to limit volumes returned to scratch. When tape management and write protect safeguards are used, protection against data loss occurs both on the TS7700 and at the host.
It is not necessary to disable return to scratch processing within the production environment as long as the following category settings are used:
 – Delete expire hold option is set to prevent the reuse of production volumes before they are verified within the DR test application.
 – Expire hold is not required when the Flash Copy for DR testing mechanism is used and no TS7700 Tape Attach is present within the DR location because the flash snapshot is unchanged by any scratch and reuse of volumes in the production environment.
 – Ignore fast ready characteristics option is set to ensure scratched volumes are viewed as private when accessed through the DR test devices.
Write protection extends only to host commands issued to logical devices within a write-protected cluster. Volume ownership and data location are independent of write protection. For example, production access to non-write-protected devices within a production cluster can still alter content through a remote mount to a DR cluster in the Write Protect State.
You can leave a cluster in the Write Protect State indefinitely. Doing so can prevent unexpected host access in a DR location from modifying production content. However, write protect must be disabled in the event of a true failover to the secondary location.
Backup settings
Use this selection to back up the settings from a TS7700 cluster.
 
Important: Backup and restore functions are not supported between clusters operating at different code levels. Only clusters operating at the same code level as the accessing cluster (the one addressed by the web browser) can be selected for Backup or Restore. Clusters operating at different code levels are visible, but the options are disabled.
Table 9-13 lists the cluster settings that are available for backup (and restore) in a TS7700 cluster.
Table 9-13 Backup and restore settings reference
 – Storage Classes and Data Classes: Can be backed up from any TS7700 cluster and restored to any TS7700 cluster.
 – Partitions: Can be backed up from a TS7720T or TS7760T and restored to a TS7720T or TS7760T; can be backed up from a TS7760C and restored to a TS7760C.
 – Inhibit Reclaim Schedule, Physical Volume Pools, and Physical Volume Ranges: Can be backed up from, and restored to, a TS7700 cluster that is attached to a tape library.
 – Library Port Access Groups, Categories, Storage Groups, Management Classes, Session Timeout, Account Expirations, Account Lock, Roles and Permissions, Encryption Key Server Addresses, Cluster Network Settings, Feature License Settings, Copy Policy Override, SNMP, and Write Protect Mode Categories: Can be backed up from any TS7700 cluster and restored to any TS7700 cluster.
The Backup Settings table lists the cluster settings that are available for backup:
Constructs: Select this check box to select all of the following constructs for backup:
 – Storage Groups: Select this check box to back up defined Storage Groups.
 – Management Classes: Select this check box to back up defined Management Classes.
 – Storage Classes: Select this check box to back up defined Storage Classes.
 – Data Classes: Select this check box to back up defined Data Classes.
Partitions: Select this check box to back up defined partitions. Resident partitions are not considered.
Inhibit Reclaim Schedule: Select this check box to back up the Inhibit Reclaim Schedules that are used to postpone tape reclamation. If the cluster does not have an attached tape library, then this option will not be available.
Library Port Access Groups: Select this check box to back up defined library port access groups. Library port access groups and access group ranges are backed up together.
Categories: Select this check box to back up scratch categories that are used to group virtual volumes.
Physical Volume Ranges: Select this box to back up defined physical volume ranges. If the cluster does not have an attached tape library, then this option will not be available.
Physical Volume Pools: Select this check box to back up physical volume pool definitions. If the cluster does not have an attached tape library, then this option will not be available.
Security Settings: Select this check box to back up defined security settings:
 – Session Timeout
 – Account Expiration
 – Account Lock
Roles & Permissions: Select this check box to back up defined custom user roles.
 
Important: A restore operation after a backup of cluster settings does not restore or otherwise modify any user, role, or password settings defined by a security policy.
Encryption Key Server Addresses: Select this check box to back up defined encryption key server addresses, including the following:
 – Primary key server address: The key server name or IP address that is primarily used to access the encryption key server.
 – Primary key server port: The port number of the primary key server.
 – Secondary key server address: The key server name or IP address that is used to access the encryption server when the primary key server is unavailable.
 – Secondary key server port: The port number of the secondary key server.
Feature Licenses: Select this check box to back up the settings for currently activated feature licenses.
 
Note: The user can back up these settings as part of the ts7700_cluster<cluster ID>.xmi file and restore them for later use on the same cluster. However, the user cannot restore feature license settings to a cluster different from the cluster that created the ts7700_cluster<cluster ID>.xmi backup file.
The following feature license information is available for backup:
 – Feature Code: The feature code number of the installed feature.
 – Feature Description: A description of the feature that was installed by the feature license.
 – License Key: The 32-character license key for the feature.
 – Node: The name and type of the node on which the feature is installed.
 – Node Serial Number: The serial number of the node on which the feature is installed.
 – Activated: The date and time that the feature license was activated.
 – Expires: The expiration status of the feature license. The following values are possible:
 • Day/Date: The day and date on which the feature license is set to expire.
 • Never: The feature is permanently active and never expires.
 • One-time use: The feature can be used once and has not yet been used.
Copy Policy Override: Select this check box to back up the settings to override local copy and I/O policies.
SNMP: Select this check box to back up the settings for SNMP.
Write Protect Mode Categories: Select this check box to back up the settings for write protect mode categories.
To back up cluster settings, click a check box next to any of the previous settings and then click Download. A window opens to show that the backup is in progress.
 
Important: If the user navigates away from this window while the backup is in progress, the backup operation is stopped and the operation must be restarted.
When the backup operation is complete, the backup file ts7700_cluster<cluster ID>.xmi is created. This file is an XML Meta Interchange file. When prompted to open the backup file or save it to a directory, save the file without changing the .xmi file extension or the file contents.
Any changes to the file contents or extension can cause the restore operation to fail. The user can modify the file name before saving it if the user wants to retain this backup file after subsequent backup operations. If the user chooses to open the file, do not use Microsoft Excel to view or save it. Microsoft Excel changes the encoding of an XML Meta Interchange file, and the changed file is corrupted when used during a restore operation.
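Because an editor that re-encodes the file silently breaks it, a read-only well-formedness check is a safer way to inspect a saved backup than opening it in a spreadsheet. The sketch below uses Python's standard XML parser; the file name is illustrative.

```python
import xml.etree.ElementTree as ET

try:
    tree = ET.parse('ts7700_cluster0.xmi')  # illustrative file name
    print('well-formed XML, root element:', tree.getroot().tag)
except ET.ParseError as err:
    print('backup file appears damaged:', err)
```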
The following settings are not available for backup or recovery:
User accounts
Security policies
Grid identification policies
Cluster identification policies
Grid communication encryption (IPSec)
SSL certificates
Record these settings in a safe place and recover them manually if necessary.
Restore Settings window
To restore the settings from a TS7700 cluster to a recovered or new cluster, use this window.
 
Note: Backup and restore functions are not supported between clusters operating at different code levels. Only clusters operating at the same code level as the current cluster can be selected from the Current Cluster Selected graphic. Clusters operating at different code levels are visible, but not available, in the graphic.
See Table 9-13 on page 500 for a quick Backup and Restore settings reference. Follow these steps to restore cluster settings:
1. Use the banner breadcrumbs to navigate to the cluster where the restore operation is applied.
2. On the Restore Settings window, click Browse to open the File Upload window.
3. Go to the backup file used to restore the cluster settings. This file has an .xmi extension.
4. Add the file name to the File name field.
5. Click Open or press Enter from the keyboard.
6. Click Show file to review the cluster settings that are contained in the backup file.
The backup file can contain any of the following settings, but only those settings that are defined by the backup file are shown:
Categories: Select this check box to restore scratch categories that are used to group virtual volumes.
Physical Volume Pools: Select this check box to restore physical volume pool definitions.
 
Important: If the backup file was created by a cluster that did not possess a physical library, physical volume pool settings are reset to default.
Constructs: Select this check box to restore all of the displayed constructs. When these settings are restored, new settings are added and existing settings are modified, but no settings are deleted.
 – Storage Groups: Select this check box to restore defined SGs.
 – Management Classes: Select this check box to restore defined MCs.
MC settings are related to the number and order of clusters in a grid. Take special care when restoring this setting. If an MC is restored to a grid that has more clusters than the grid had when the backup was run, the copy policy for the new cluster or clusters are set as No Copy.
If an MC is restored to a grid that has fewer clusters than the grid had when the backup was run, the copy policy for the now-nonexistent clusters is changed to No Copy. The copy policy for the first cluster is changed to RUN to ensure that one copy exists in the cluster.
If cluster IDs in the grid differ from cluster IDs present in the restore file, MC copy policies on the cluster are overwritten with those from the restore file. MC copy policies can be modified after the restore operation completes.
If the backup file was created by a cluster that did not define one or more scratch mount candidates, the default scratch mount process is restored. The default scratch mount process is a random selection routine that includes all available clusters. MC scratch mount settings can be modified after the restore operation completes.
 – Storage Classes: Select this check box to restore defined SCs.
 – Data Classes: Select this check box to restore defined DCs.
If this setting is selected and the cluster does not support logical Write Once Read Many (LWORM), the Logical WORM setting is disabled for all DCs on the cluster.
 – Inhibit Reclaim Schedule: Select this check box to restore Inhibit Reclaim schedules that are used to postpone tape reclamation.
A current Inhibit Reclaim schedule is not overwritten by older settings. An earlier Inhibit Reclaim schedule is not restored if it conflicts with an Inhibit Reclaim schedule that currently exists. Media type volume sizes are restored based on the restrictions of the restoring cluster. The following volume sizes are supported:
 • 1000 MiB
 • 2000 MiB
 • 4000 MiB
 • 6000 MiB
 • 25000 MiB
 
Note: If the backup file was created by a cluster that did not possess a physical library, the Inhibit Reclaim schedules settings are reset to default.
Library Port Access Groups: Select this check box to restore defined library port access groups.
This setting is only available if all clusters in the grid are operating with Licensed Internal Code levels of 8.20.0.xx or higher.
Library port access groups and access group ranges are backed up and restored together.
Physical Volume Ranges: Select this check box to restore defined physical volume ranges.
If the backup file was created by a cluster that did not possess a physical library, physical volume range settings are reset to default.
Roles & Permissions: Select this check box to restore defined custom user roles.
A restore operation after a backup of cluster settings does not restore or otherwise modify any user, role, or password settings defined by a security policy.
Security Settings: Select this check box to restore defined security settings, for example:
 – Session Timeout
 – Account Expiration
 – Account Lock
Encryption Key Server Addresses: Select this check box to restore defined EKS addresses. If a key server address is empty at the time that the backup is performed, when restored, the port settings are the same as the default values. The following EKS address settings can be restored:
 – Primary key server address: The key server name or IP address that is primarily used to access the EKS.
 – Primary key server port: The port number of the primary key server.
 – Secondary key server address: The key server name or IP address that is used to access the EKS when the primary key server is unavailable.
 – Secondary key server port: The port number of the secondary key server.
Cluster Network Settings: Select this check box to restore the defined cluster network settings.
 
Important: Changes to network settings affect access to the TS7700 MI. When these settings are restored, routers that access the TS7700 MI are reset. No TS7700 grid communications or jobs are affected, but any current users are required to log back on to the TS7700 MI by using the new IP address.
Feature Licenses: Select this check box to restore the settings for currently activated feature licenses. When the backup settings are restored, new settings are added but no settings are deleted. After restoring feature license settings on a cluster, log out and then log in to refresh the system.
 
Note: The user cannot restore feature license settings to a cluster that is different from the cluster that created the ts7700_cluster<cluster ID>.xmi backup file.
The following feature license information is available for backup:
 – Feature Code: The feature code number of the installed feature.
 – Feature Description: A description of the feature that was installed by the feature license.
 – License Key: The 32-character license key for the feature.
 – Node: The name and type of the node on which the feature is installed.
 – Node Serial Number: The serial number of the node on which the feature is installed.
 – Activated: The date and time the feature license was activated.
 – Expires: The expiration status of the feature license. The following values are possible:
 • Day/Date: The day and date on which the feature license is set to expire.
 • Never: The feature is permanently active and never expires.
 • One-time use: The feature can be used once and has not yet been used.
Copy Policy Override: Select this check box to restore the settings to override local copy and I/O policies.
SNMP: Select this check box to restore the settings for Simple Network Management Protocol (SNMP). When these settings are restored, new settings are added and existing settings are modified, but no settings are deleted.
Write Protect Mode Categories: Select this check box to restore the settings for write protect mode categories. When these settings are restored, new settings are added and existing settings are modified, but no settings are deleted.
After selecting Show File, the name of the cluster from which the backup file was created is displayed at the top of the window, along with the date and time that the backup occurred.
Select the box next to each setting to be restored. Click Restore.
 
Note: The restore operation overwrites existing settings on the cluster.
A warning window opens and prompts you to confirm the decision to restore settings. Click OK to restore settings or Cancel to cancel the restore operation.
The Confirm Restore Settings window opens.
 
Important: If the user navigates away from this window while the restore is in progress, the restore operation is stopped and the operation must be restarted.
The restore cluster settings operation can take 5 minutes or longer. During this step, the MI is communicating the commands to update settings. If the user navigates away from this window, the restore settings operation is canceled.
Restoring to or from a cluster without a physical library: If either the cluster that created the backup file or the cluster that is performing the restore operation does not possess a physical library, all physical tape library settings are reset to default upon completion of the restore operation. One of the following warning messages is displayed on the confirmation page:
The file was backed up from a system with a physical tape library attached but this cluster does not have a physical tape library attached. If you restore the file to this cluster, all the settings for physical tape library will have default values.
The file was backed up from a cluster without a physical tape library attached but this cluster has a physical tape library attached. If you restore the file to this cluster, all the settings for physical tape library will have default values.
The following settings are affected:
 – Inhibit Reclaim Schedule
 – Physical Volume Pools
 – Physical Volume Ranges
Confirm restore settings: This page confirms that a restore settings operation is in progress on the IBM TS7700 cluster.
RSyslog
Use this page to set or modify RSyslog settings on the IBM TS7700 cluster. Remote System Log Processing (RSyslog) is supported by TS7700 clusters starting at the R4.1.2 code level. The user can add or modify settings of a remote target (the RSyslog server) to which the system logs are sent. All TS7700 cluster models running an R4.1 level of code or later can activate the RSyslog functionality. See Chapter 2, “Architecture, components, and functional characteristics” on page 15.
The following information is displayed on this page:
Target Number: The number of the remote target (server) to send the system logs to.
IP Address or Hostname: IP Address or Hostname of the remote target to receive the system logs.
 
Note: If this value is a Domain Name Server (DNS) address, then you must activate and configure a DNS on the Cluster Network Settings window.
Port: Port number of the remote target.
Status: Status of the remote target, which can be either Active or Inactive.
 
Note: To send logs to a syslog server, RSyslog must be enabled and the status of the remote target must be Active. Otherwise, no logs will be sent.
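Before pointing the cluster at a remote target, it can help to confirm that the target is listening by sending a test message from any host. The sketch below uses Python's standard logging module; the target address, UDP port 514, and the local0 facility (which Table 9-15 maps to MI events) are illustrative assumptions.

```python
import logging
from logging.handlers import SysLogHandler

# Illustrative RSyslog target; local0 corresponds to MI_EVENT in Table 9-15
handler = SysLogHandler(address=('192.0.2.30', 514),
                        facility=SysLogHandler.LOG_LOCAL0)
logger = logging.getLogger('rsyslog-test')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('test message from an administrative host')
```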
Table 9-14 lists the available actions that are on this page.
Table 9-14 Actions available from the RSyslog table
In order to...
Do this:
Enable or Disable RSyslog
To change the state, select either Enable RSyslog or Disable RSyslog.
Create a remote target
1. Select Actions → Create Remote Target.
2. Add the remote target settings:
IP Address or Hostname
Port
Status
The user can have a total of two remote targets.
3. Click OK to add the remote target, or Cancel to quit the operation.
Modify a remote target
1. Highlight a remote target.
2. Select Actions → Modify Remote Target.
3. Modify the remote target settings:
IP Address or Hostname
Port
Status
4. Click OK to modify the remote target, or Cancel to quit the operation.
Delete a remote target
1. Highlight a remote target.
2. Select Actions → Delete Remote Target.
3. Click OK to delete the remote target, or Cancel to quit the operation.
Change the order of the remote targets
Click the Target Number column heading.
Hide or show columns on the table
1. Right-click the table header.
2. Select the check box next to a column heading to hide or show that column in the table. Column headings that are selected display on the table.
Reset the table to its default view
1. Right-click the table header.
2. Click Reset Grid Preferences.
Table 9-15 lists the facilities that are used to collect the different log types that are sent using RSyslog. Items 0 - 15 are system defaults; items 16 - 18 are specific to TS7700.
Table 9-15 Log types that are collected
 – 0, kern: Kernel messages
 – 1, user: User-level messages
 – 3, daemon: System daemons
 – 4, auth: Security and authorization messages
 – 5, syslog: Messages generated internally by syslogd
 – 7, news: Network news subsystem
 – 9, cron: Clock daemon
 – 10, security: Security and authorization messages
 – 11, ftp: FTP daemon
 – 12, ntp: NTP subsystem
 – 13, logaudit: Log audit
 – 14, logalert: Log alert
 – 15, clock: Clock daemon
 – 16, local0 (MI_EVENT): MI events
 – 17, local1 (X11): Host messages
 – 18, local2 (MDE_CH): MDE Call Home
Copy Export Settings window
Use this window to change the maximum number of physical volumes that can be exported by the TS7700.
The Number of physical volumes to export is the maximum number of physical volumes that can be exported. This value is an integer 1 - 10,000. The default value is 2000. To change the number of physical volumes to export, enter an integer in the described field and click Submit.
Note: The user can modify this field even if a Copy Export operation is running, but the changed value does not take effect until the next Copy Export operation starts.
For more information about Copy Export, see Chapter 12, “Copy Export” on page 747.
Notification settings
Use this page to set or modify notification settings on the IBM TS7700 Cluster.
This page displays information for an entire grid if the accessing cluster is part of a grid, or for only the accessing cluster if that cluster is a stand-alone machine. Use this page to modify settings of the notifications that are generated by the system, such as Event, Host Message, and Call Home Microcode Detected Error (MDE).
There are three types of notifications that are generated by the system:
Events (OPxxxx messages in a CBR3750I message on the host)
Host Message (Gxxxx, ALxxxx, Exxxx, or Rxxxx in a CBR3750I message on the host)
Call Home MDE in SIM (in an IEA480E message on the host)
During normal and exception processing, intervention or action by an operator or storage administrator is sometimes required. As these conditions or events are encountered, message text is logged to the host console. Those messages can be viewed through the TS7700 MI by using the Events window (described previously in this chapter).
The Notification Settings page was introduced with the R4.1.2 level of code as part of the System Events Redesign package. The new window allows the user to adjust different characteristics of all CBR3750I messages. Also, the user can add personalized text to the messages, which makes it easier to implement or manage automation-based monitoring in IBM z/OS.
The Notification Settings window also works as a catalog that can be used to search for descriptions of the Event, Host Message, or Call Home MDE. From this window, the user can send the text messages to all hosts attached to the subsystem by enabling the Host Notification option. Figure 9-78 shows the Notification Settings window and options.
Figure 9-78 Notification Settings and options
Information that can be viewed on the Notification Settings page is listed in Table 9-16.
Table 9-16 Notification settings
ID
ID of the Event, Host Message, or Call Home MDE
Custom Severity
Custom severity of the notification. The following are the possible values:
Default
Informational
Warning
Impact
Serious/Error
Critical
Description
Description of the notification.
Type
The type of notification that is listed: Event, Host Message, or Call Home MDE.
Notification State
Whether the notification is active or inactive. If inactive, it will not be sent to any notification channel.
Notification Channels
How the notification is sent. The following selections are possible:
Host
Management Interface
SNMP
RSyslog
 
Note: Host Messages cannot be sent to the Management Interface.
Comments
Field available to add user comments. The comments are sent with the message, through the notification channels, when the message is triggered by the system.
 
Note: Comments for Call Home MDEs will not be sent to Host. Only the MDE code is sent to the Host.
Table 9-17 lists the actions available from the Notifications table.
Table 9-17 Notifications actions
To...
Do this:
Enable or disable host notifications for alerts
1. Select Actions → Modify Host Notification State by Cluster.
2. Select the cluster in the Modify Host Notification State by Cluster box.
Select Active, then OK to enable notifications.
Select Inactive, then OK to disable notifications.
Set notifications to Active or Inactive
1. Select at least one notification.
2. Select Actions → Modify Notifications Settings.
Select Active, then OK to enable notifications.
Select Inactive, then OK to disable notifications.
Set custom severity level for notifications
1. Select at least one notification.
2. Select Actions → Modify Notifications Settings.
3. From the Custom Severity drop-down box, select the severity level.
4. Verify that the Notification State is Active, then click OK.
A custom severity can be set only if all clusters in the grid are at microcode level 8.41.2 or later.
Modify notification state by severity
1. Select at least one notification.
2. Select Actions → Modify Notifications State by Severity.
Select Active, then OK to enable notifications.
Select Inactive, then OK to disable notifications.
This action works based on the current value of the severity, whether it is the default severity or a custom severity.
Download a CSV file of the notification settings
Select the File icon (export to CSV).
 
The time reported in the CSV file is shown in Coordinated Universal Time (UTC).
Hide or show columns on the table
1. Right-click the table header.
2. Click the check box next to a column heading to hide or show that column in the table. Column headings that are checked display on the table.
Filter the table data by using a string of text
1. Click the Filter field.
2. Enter a search string.
3. Press Enter.
Filter the table data by using a column heading
1. Click the down arrow next to the Filter field.
2. Select the column heading to filter by.
3. Refine the selection.
Reset the table to its default view
1. Right-click the table header.
2. Click Reset Grid Preferences.
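The exported CSV lends itself to offline review. The following Python sketch assumes the file carries the column names from Table 9-16 with one header row; verify the headers in an actual download before relying on them.

import csv

def inactive_notifications(path):
    """Yield (ID, Type) for rows whose Notification State is not Active (assumed column names)."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Notification State", "").strip().lower() != "active":
                yield row.get("ID"), row.get("Type")

# Hypothetical file name for a downloaded settings report.
for notif_id, notif_type in inactive_notifications("notification_settings.csv"):
    print(notif_id, notif_type)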
9.3.11 The Service icon
The following sections present information about running service operations and troubleshooting problems for the TS7700. Figure 9-79 on page 512 shows the Service icon options for both a stand-alone and a grid configuration of TS7700 clusters.
The Ownership Takeover Mode option shows only when a cluster is a member of a grid, whereas the Copy Export Recover and Copy Export Recovery Status options appear only for a stand-alone TS7700T configuration (one that is connected to a physical library).
 
Note: The Copy Export Recover and Copy Export Recovery Status options are available only in a stand-alone configuration for a tape-attached cluster.
Figure 9-79 Service Icon options
Ownership Takeover Mode
To enable or disable Ownership Takeover Mode for a failed cluster in a TS7700 grid, use the Ownership Takeover Mode page. Ownership takeover must be started from a surviving cluster in the grid when a cluster becomes inaccessible; enabling Ownership Takeover Mode from the failed cluster is not possible.
 
Note: Keep the IP addresses for the clusters in the configuration available for use during a cluster failure. That way, the MI can be accessed from a surviving cluster to initiate the ownership takeover actions.
When a cluster enters a failed state, enabling Ownership Takeover Mode enables other clusters in the grid to obtain ownership of logical volumes that are owned by the failed cluster. Normally, ownership is transferred from one cluster to another through communication between the clusters. When a cluster fails or the communication links between clusters fail, the normal means of transferring ownership is not available.
Read/write or read-only takeover should not be enabled when only the communication path between the clusters has failed, and the isolated cluster remains operational. The integrity of logical volumes in the grid can be compromised if a takeover mode is enabled for a cluster that is only isolated from the rest of the grid (not failed) and there is active host access to it.
A takeover decision should be made only for a cluster that is indeed no longer operational.
AOTM, when available and configured, verifies the real status of the non-responsive cluster by using an alternative communication path other than the usual connection between clusters. AOTM uses the TSSC associated with each cluster to determine whether the cluster is alive or failed, enabling the ownership takeover only in case the unresponsive cluster has indeed failed. If the cluster is still alive, AOTM does not initiate a takeover, and the decision is up to the human operator.
If one or more clusters become isolated from one or more peers, the volumes that are owned by the inaccessible peers cannot be mounted without first enabling an ownership takeover mode. Volumes that are owned by one of the accessible clusters can be successfully mounted and modified. For those mounts that cannot obtain ownership from the inaccessible peers, the operation fails. In z/OS, this failure is not treated as permanent, so the user can enable ownership takeover and retry the operation.
When Read Only takeover mode is enabled, those volumes requiring takeover are read-only, and fail any operation that attempts to modify the volume attributes or data. Read/write takeover enables full read/write access of attributes and data.
If an ownership takeover mode must be enabled when only a WAN/LAN failure is present, use read-only takeover mode. Read/write takeover can compromise the integrity of the volumes that are accessed by both isolated groups of clusters.
If full read/write access is required, one of the isolated groups should be taken offline to prevent any use case where both groups attempt to modify the same volume. Figure 9-80 shows the Ownership Takeover Mode window.
Figure 9-80 Ownership Takeover Mode
Figure 9-80 also shows the local cluster summary, the list of available clusters in the grid, and the connection state between the local (accessing) cluster and its peers. It also shows the current takeover state for the peer clusters (whether enabled or disabled by the accessing cluster) and the current takeover mode.
In the example that is shown in Figure 9-80 on page 513, cluster zero has a failed connection status. A mount request by the host for a volume that is owned by cluster zero that is issued to any of the peer clusters causes an operator intervention, reporting that the ownership request for that volume was not granted by cluster zero. The decision to take over the ownership must then be made (either by a human operator or by AOTM).
Complete the following steps to start an ownership takeover against a failed cluster:
1. Log on to the MI of a surviving cluster that reports the takeover intervention.
2. Go to the Ownership Takeover Mode page by clicking Service → Ownership Takeover Mode.
3. Select the failed cluster (the one to be taken over).
4. In the Select Action box, select the appropriate Ownership takeover mode (RW or RO).
5. Click Go, and retry the host operation that failed.
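The preceding guidance reduces to a small decision rule. The following Python sketch is purely schematic and is not TS7700 logic; the parameter names are illustrative, with AOTM or the operator supplying the confirmation that the cluster really failed.

def takeover_mode(cluster_failed_confirmed, links_only_failure, need_write_access):
    """Schematic decision rule for ownership takeover, per the guidance above."""
    if links_only_failure and not cluster_failed_confirmed:
        # Isolated but live cluster: read/write takeover risks divergent volume copies.
        return "read-only takeover (or take one isolated group offline for read/write)"
    if cluster_failed_confirmed:
        return "read/write takeover" if need_write_access else "read-only takeover"
    return "no takeover; investigate the cluster state first"

print(takeover_mode(cluster_failed_confirmed=False, links_only_failure=True,
                    need_write_access=False))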
Figure 9-80 on page 513 shows that AOTM was previously configured in this grid (for Read/Write, with a grace period of 405 minutes for Cluster 0). In this case, automatic ownership takeover takes place at the end of that period (6 hours and 45 minutes). A human operator can override that setting manually by taking the actions that are described previously, or by changing the AOTM settings to more suitable values. Click Configure AOTM to configure the values that are displayed in the AOTM Configuration table.
 
Important: An IBM SSR must configure the TSSC IP addresses for each cluster in the grid before AOTM can be enabled and configured for any cluster in the grid.
Table 9-18 compares the operation of read/write and read-only ownership takeover modes.
Table 9-18 Comparing read/write and read-only ownership takeover modes
Read/write ownership takeover mode:
Operational clusters in the grid can run these tasks:
 – Perform read and write operations on the virtual volumes that are owned by the failed cluster.
 – Change virtual volumes that are owned by the failed cluster to private or scratch status.
A consistent copy of the virtual volume must be available on the grid, or the virtual volume must exist in a scratch category. If no cluster failure occurred (grid links down) and the ownership takeover was started by mistake, the possibility exists for two sites to write data to the same virtual volume.

Read-only ownership takeover mode:
Operational clusters in the grid can run this task:
 – Perform read operations on the virtual volumes that are owned by the failed cluster.
Operational clusters in the grid cannot run these tasks:
 – Change the status of a volume to private or scratch.
 – Perform write operations on the virtual volumes that are owned by the failed cluster.
If no cluster failure occurred, it is possible that a virtual volume that is accessed by another cluster in read-only takeover mode contains older data than the version on the owning cluster. This situation can occur if the virtual volume was modified on the owning cluster while the communication path between the clusters was down. When the links are reestablished, those volumes are marked in error.
For more information, see the TS7700 section of IBM Knowledge Center, which is available locally by clicking the question mark icon in the upper-right corner of the MI window, or online at the following website:
Repair Virtual Volumes window
Damaged volumes typically occur due to a user intervention, such as enabling ownership takeover against a live cluster and ending up with two different versions of the same volume, or in a cluster removal scenario where the removed cluster had the only instance of a volume.
In these cases, the volume is moved to the FF20 (damaged) category by the TS7700 subsystem, and the host cannot access it. If access is attempted, messages like CBR4125I Valid copy of volume volser in library library-name inaccessible are displayed.
To repair virtual volumes in the damaged category for the TS7700 Grid, use the Repair virtual volumes page.
The user can print the table data by clicking Print report. A comma-separated value (.csv) file of the table data can be downloaded by clicking Download spreadsheet. The following information is displayed on this window:
Repair policy: The Repair policy section defines the repair policy criteria for damaged virtual volumes in a cluster. The following criteria are shown:
 – Cluster’s version to keep: The selected cluster obtains ownership of the virtual volume when the repair is complete. This version of the virtual volume is the basis for repair if the Move to insert category keeping all data option is selected.
 – Move to insert category keeping all data: This option is used if the data on the virtual volume is intact and still relevant. If data has been lost, do not use this option. If the cluster that is chosen in the repair policy has no data for the virtual volume to be repaired, choosing this option is the same as choosing Move to insert category deleting all data.
 – Move to insert category deleting all data: The repaired virtual volumes are moved to the insert category and all data is erased. Use this option if the volume is returned to scratch or if data loss has rendered the volume obsolete. If the volume has been returned to scratch, the data on the volume is no longer needed. If data loss has occurred on the volume, data integrity issues can occur if the data on the volume is not erased.
 – Damaged Virtual Volumes: The Damaged Virtual Volumes table displays all the damaged virtual volumes in a grid. The Virtual Volume information is shown, which is the VOLSER of the damaged virtual volume. This field is also a hyperlink that opens the Damaged Virtual Volumes Details window, where more information is available.
Damaged virtual volumes cannot be accessed; repair all damaged virtual volumes that appear on this table. The user can repair up to 10 virtual volumes at a time.
Complete the following steps to repair damaged virtual volumes:
1. Define the repair policy criteria in the Repair policy section.
2. Select a cluster name from the Cluster’s version to keep menu.
3. Select Move to insert category keeping all data or Move to insert category deleting all data.
4. In the Damaged Virtual Volumes table, select the check box next to one or more (up to 10) damaged virtual volumes to be repaired by using the repair policy criteria.
5. Click Select Action → Repair.
6. A confirmation message appears at the top of the window to confirm the repair operation. Click View Task History to open the Tasks window to monitor the repair progress. Click Close Message to close the confirmation message.
Network Diagnostics window
The Network Diagnostics window can be used to initiate ping or trace route commands to any IP address or host name from this TS7700 cluster. The user can use these commands to test the efficiency of grid links and the network system.
Figure 9-81 shows the navigation to the Network Diagnostics window and a ping test example.
Figure 9-81 Network Diagnostics window
The following information is shown on this window:
Network Test: The type of test to be run from the accessing cluster. The following values are available:
 – Ping: Select this option to initiate a ping test against the IP address or host name entered in the IP Address/Hostname field. This option tests the length of time that is required for a packet of data to travel from the computer to a specified host, and back again. This option can test whether a connection to the target IP address or host name is enabled, the speed of the connection, and the distance to the target.
 – Traceroute: Select this option to initiate a trace route test against the IP address or host name that is entered in the IP Address/Hostname field. This option traces the path that a packet follows from the accessing cluster to a target address and displays the number of times packets are rebroadcasted by other servers before reaching their destination.
 
Important: The Traceroute command is intended for network testing, measurement, and management. It imposes a heavy load on the network and should not be used during normal operations.
 – IP Address/Hostname: The target IP address or host name for the selected network test. The value in this field can be an IP address in IPv4 or IPv6 format or a fully qualified host name.
 – Number of Pings: Use this field to select the number of pings that are sent by the Ping command. The range of available pings is 1 - 100. The default value is 4. This field is only displayed if the value in the Network Test field is Ping.
Start: Click this button to begin the selected network test. This button is disabled if required information is not yet entered on the window or if the network test is in progress.
Cancel: Click this button to cancel a network test in progress. This button is disabled unless a network test is in progress.
Output: This field displays the progress output that results from the network test command. Information that is retrieved by the web interface is displayed in this field as it is received. The user can scroll within this field to view output that exceeds the space provided.
The status of the network command is displayed in line with the Output field label and right-aligned over the Output field. The format for the information that is displayed is shown:
Pinging 98.104.120.12...
Ping complete for 98.104.120.12
Tracing route to 98.104.120.12...
Trace complete to 98.104.120.12
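The reachability check that the MI runs can be approximated from any host with access to the same network. The following Python sketch is illustrative only (it shells out to the operating system ping command, which is not how the MI performs the test); it bounds the count to 1 - 100 with a default of 4, as on the MI page, and mimics the MI status lines.

import subprocess

def mi_style_ping(target, count=4):
    """Approximate the MI Ping test: 1-100 echo requests, default 4."""
    count = max(1, min(count, 100))
    print(f"Pinging {target}...")
    # '-c' is the packet-count flag on Linux/AIX; Windows uses '-n' instead.
    result = subprocess.run(["ping", "-c", str(count), target],
                            capture_output=True, text=True)
    print(result.stdout)
    print(f"Ping complete for {target}")
    return result.returncode == 0

mi_style_ping("98.104.120.12")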
Data Collection window
To collect a snapshot of data or a detailed log to help check system performance or troubleshoot a problem during the operation of the TS7700, use this window.
If the user is experiencing a performance issue on a TS7700, the user has two options to collect system data for later troubleshooting. The first option, System Snapshot, collects a summary of system data that includes the performance state. This option is useful for intermittently checking the system performance. This file is built in approximately 5 minutes.
The second option, TS7700 Log Collection, enables you to collect historical system information for a time period up to the past 12 hours. This option is useful for collecting data during or soon after experiencing a problem. Based on the number of specified hours, this file can become large and require over an hour to build.
The following information is shown on the Data Collection window:
System Snapshot: Select this box to collect a summary of system health and performance from the preceding 15-minute period. The user can collect and store up to 24 System Snapshot files at the same time.
TS7700 Log Collection: Check this box to collect and package all logs from the time period that is designated by the value in the Hours of Logs field. The user can collect and store up to two TS7700 Log Collection files at the same time.
Hours of Logs: Use this menu to select the number of preceding hours from which system logs are collected. Possible values are 1 - 12, with a default of 2 hours. The time stamp next to the hours field displays the earliest time from which logs are collected. This time stamp is automatically calculated based on the number that is displayed in the hours field.
 
Note: Periods that are covered by TS7700 Log Collection files cannot overlap. If the user attempts to generate a log file that includes a period that is covered by an existing log file, a message prompts the user to select a different value for the hours field.
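The earliest-collection time stamp and the no-overlap rule can be expressed directly. The following Python sketch works under the stated constraints (1 - 12 hours, default of 2, non-overlapping periods, times in UTC); the function and the window representation are hypothetical.

from datetime import datetime, timedelta, timezone

def log_window(hours=2, existing=()):
    """Return (start, end) for a log collection request, enforcing the 1-12 hour
    range and rejecting overlap with existing (start, end) windows."""
    if not 1 <= hours <= 12:
        raise ValueError("Hours of Logs must be 1-12")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    for s, e in existing:
        if start < e and s < end:   # the two intervals intersect
            raise ValueError("Select a different value: period overlaps an existing log file")
    return start, end

print(log_window(hours=2))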
Continue: Click this button to initiate the data collection operation. This operation cannot be canceled after the data collection begins.
 
Note: Data that is collected during this operation is not automatically forwarded to IBM. The user must contact IBM and open a problem management report (PMR) to manually move the collected data off the system.
When data collection is started, a message is displayed that contains a button linking to the Tasks window. The user can click this button to view the progress of data collection.
 
Important: If data collection is started on a cluster that is in service mode, the user might not be able to check the progress of data collection. The Tasks window is not available for clusters in service mode, so there is no link to it in the message.
Data Collection Limit Reached: This dialog box opens if the maximum number of System Snapshot or TS7700 Log Collection files exists. The user can save a maximum number of 24 System Snapshot files or two TS7700 Log Collection files. If the user attempted to save more than the maximum of either type, the user is prompted to delete the oldest existing version before continuing. The name of any file to be deleted is displayed.
Click Continue to delete the oldest files and proceed. Click Cancel to abandon the data collection operation.
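The limit-reached dialog effectively implements a keep-newest retention policy (24 System Snapshot files, two TS7700 Log Collection files). The following Python sketch models that bookkeeping; the file records and names are hypothetical.

MAX_FILES = {"snapshot": 24, "log_collection": 2}

def admit(kind, files):
    """Return the files to delete (oldest first) so that one new 'kind' file fits."""
    limit = MAX_FILES[kind]
    ordered = sorted(files, key=lambda f: f["created"])   # oldest first
    overflow = len(ordered) - limit + 1                   # +1 for the new file
    return ordered[:overflow] if overflow > 0 else []

victims = admit("log_collection",
                [{"name": "log1", "created": 1}, {"name": "log2", "created": 5}])
print([v["name"] for v in victims])   # the oldest log must go before a third is created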
Problem Description (optional): Use this field to enter, before any data collection is initiated, a detailed description of the conditions or problem that was experienced. Include symptoms and any information that can assist IBM Support in the analysis process, such as a description of the preceding operation, the VOLSER ID, the device ID, any host error codes, any preceding messages or events, the time and time zone of the incident, and any PMR number (if available). The description cannot exceed 1000 characters.
Copy Export Recovery window
To test a Copy Export recovery, or to run an actual Copy Export recovery on the TS7700 cluster, use this window.
 
Tip: This window is visible only on a stand-alone tape-attached cluster.
Copy Export enables the export of all virtual volumes and the virtual volume database to physical volumes, which can then be ejected and saved as part of a data retention policy for disaster recovery. The user can also use this function to test system recovery.
For more information about the Copy Export function, see Chapter 12, “Copy Export” on page 747. Also, see IBM TS7700 Series Copy Export Function User's Guide, WP101092:
 
Reminder: The recovery cluster needs tape drives that are compatible with the exported media. Also, if encrypted tapes are used for the export, access to the encryption keys (EKs) must be provided.
Before the user attempts a Copy Export recovery, ensure that all physical media that is used in the recovery is inserted. During a Copy Export recovery, all current virtual and physical volumes are erased from the database, and virtual volumes are erased from the cache. Do not attempt a Copy Export recovery on a cluster where current data must be saved.
 
Important: In a grid configuration, each TS7700 is considered a separate source. Therefore, only the physical volume that is exported from a source TS7700 can be used for the recovery of that source. Physical volumes that are exported from more than one source TS7700 in a grid configuration cannot be combined to use in recovery. Recovery can occur only to a single cluster configuration; the TS7700 that is used for recovery must be configured as Cluster 0.
Secondary Copies window
If the user creates a new secondary copy, the original secondary copy is deleted because it becomes inactive data. For example, if the user modifies constructs for virtual volumes that already were exported and the virtual volumes are remounted, a new secondary physical volume is created.
The original physical volume copy is deleted without overwriting the virtual volumes. When the Copy Export operation is rerun, the new, active version of the data is used.
The following fields and options are presented to the user to help test a recovery or run an actual recovery:
Volser of physical stacked volume for Recovery Test: The physical volume from which the Copy Export recovery attempts to recover the database.
Disaster Recovery Test Mode: This option determines whether a Copy Export Recovery is run as a test or to recover a system that has suffered a disaster. If this box contains a check mark (default status), the Copy Export Recovery runs as a test. If the box is cleared, the recovery process runs in normal mode, as when recovering from an actual disaster.
When the recovery is run as a test, the content of exported tapes remains unchanged. Additionally, primary physical copies remain unrestored and reclaim processing is disabled to halt any movement of data from the exported tapes.
Any new volumes that are written to the system are written to newly added scratch tapes, and do not exist on the previously exported volumes. This ensures that the data on the Copy Export tapes remains unchanged during the test.
In contrast to a test recovery, a recovery in normal mode (box cleared) rewrites virtual volumes to physical storage if the constructs change so that the virtual volume’s data can be put in the correct pools. Also, in this type of recovery, reclaim processing remains enabled and primary physical copies are restored, requiring the addition of scratch physical volumes.
A recovery that is run in this mode enables the data on the Copy Export tapes to expire in the normal manner and those physical volumes to be reclaimed.
 
Note: The number of virtual volumes that can be recovered depends on the number of FC5270 licenses that are installed on the TS7700 that is used for recovery. Additionally, a recovery of more than 2 million virtual volumes must be run by a TS7740 operating with a 3957-V07 and a code level of 8.30.0.xx or higher.
Erase all existing virtual volumes during recovery: This check box is shown if virtual volume or physical volume data is present in the database. A Copy Export Recovery operation erases any existing data. No option exists to retain existing data while running the recovery. The user can check this check box to proceed with the Copy Export Recovery operation.
Submit: Click this button to initiate the Copy Export Recovery operation.
Confirm Submission of Copy Export Recovery: The user is asked to confirm the decision to initiate a Copy Export Recovery option. Click OK to continue with the Copy Export Recovery operation. Click Cancel to abandon the Copy Export Recovery operation and return to the Copy Export Recovery window.
Password: The user password. If the user selected the Erase all existing virtual volumes during recovery check box, the confirmation message includes the Password field. The user must provide a password to erase all current data and proceed with the operation.
Canceling a Copy Export Recovery operation in progress: The user can cancel a Copy Export Recovery operation that is in progress from the Copy Export Recovery Status window.
Copy Export Recovery Status window
Use this window to view information about or to cancel a currently running Copy Export recovery operation on a TS7700 cluster.
 
Important: The Copy Export recovery status is only available for a stand-alone TS7700T cluster.
The table on this window displays the progress of the current Copy Export recovery operation. This window includes the following information:
Total number of steps: The total number of steps that are required to complete the Copy Export recovery operation.
Current step number: The number of steps completed. This value is a fraction of the total number of steps that are required to complete, not a fraction of the total time that is required to complete.
Start time: The time stamp for the start of the operation.
Duration: The amount of time the operation has been in progress, in hours, minutes, and seconds.
Status: The status of the Copy Export recovery operation. The following values are possible:
 – No task: No Copy Export operation is in progress.
 – In progress: The Copy Export operation is in progress.
 – Complete with success: The Copy Export operation completed successfully.
 – Canceled: The Copy Export operation was canceled.
 – Complete with failure: The Copy Export operation failed.
 – Canceling: The Copy Export operation is in the process of cancellation.
Operation detail: This field displays informative status about the progress of the Copy Export recovery operation.
Cancel Recovery: Click the Cancel Recovery button to end a Copy Export recovery operation that is in progress and erase all virtual and physical data. The Confirm Cancel Operation dialog box opens to confirm the decision to cancel the operation. Click OK to cancel the Copy Export recovery operation in progress. Click Cancel to resume the Copy Export recovery operation.
9.4 Call Home and Electronic Customer Care
The tape subsystem components include several external interfaces that are not directly associated with data paths. Rather, these interfaces are associated with system control, service, and status information. They support customer interaction and feedback, and attachment to IBM remote support infrastructure for product service and support.
These interfaces and facilities are part of the IBM System Storage Data Protection and Retention (DP&R) storage system. The main objective of this mechanism is to provide a safe and efficient way to deliver the system Call Home (outbound) and remote support (inbound) connectivity capabilities.
For more information about the connectivity mechanism and related security aspects, see IBM Data Retention Infrastructure (DRI) System Connectivity and Security, WP102531:
The Call Home function automatically generates a service alert when a problem occurs with one of the following components:
TS7720
TS7740
TS7760
TS3500 tape library
TS4500 tape library
Error information is transmitted to the TSSC for service, and then to the IBM Support Center for problem evaluation. The IBM Support Center can dispatch an IBM SSR to the client installation. Call Home can send the service alert to a pager service to notify multiple people, including the operator. The IBM SSR can deactivate the function through service menus, if required.
A high-level view of call home and remote support capabilities is shown in Figure 9-82.
Figure 9-82 Call home and remote support functions
The TSSC can be ordered as a rack mount feature for a range of products. Feature Code 2725 provides the enhanced TS3000 TSSC. Physically, the TS3000 TSSC is a standard rack 1U mountable server that is installed within the 3592 F05 or F06 frame.
Feature code 2748 provides an optical drive, which is needed for the Licensed Internal Code changes and log retrieval. With the new TS3000 TSSC provided by FC2725, remote data link or call home by using an analog telephone line and modem is no longer supported. Dial-in function through Assist On-site (AOS) and Call Home with ECC functions are both available using an HTTP/HTTPS broadband connection.
9.4.1 Electronic Customer Care
ECC provides a method to connect IBM storage systems with IBM remote support. The package provides support for dial-out communication for broadband Call Home and modem connections. All information that is sent back to IBM is Secure Sockets Layer (SSL) or Transport Layer Security (TLS) encrypted. Modem connectivity protocols follow similar standards as for directly connected modems, and broadband connectivity uses the HTTPS protocol.
 
Note: The modem option is no longer offered with the latest TSSC FC 2725. All modem-based call home and remote support was discontinued at the end of 2017.
ECC is a family of services featuring problem reporting by opening a problem management record (PMR), sending data files, and downloading fixes. The ECC client provides a coordinated end-to-end electronic service between IBM business operations, its IBM Business Partners, and its clients.
The ECC client runs electronic serviceability activities, such as problem reporting, inventory reporting, and fix automation. This feature becomes increasingly important because customers are running heterogeneous, disparate environments, and are seeking a means to simplify the complexities of those environments.
The TSSC enables the use of a proxy server or a direct connection. Direct connection implies that no HTTP proxy exists between the configured TS3000 and the outside network to IBM; selecting this method requires no further setup. ECC also supports a customer-provided HTTP proxy. A customer might require all traffic to go through a proxy server; in this case, the TSSC connects directly to the proxy server, which initiates all communications to the internet.
 
Note: All inbound connections are subject to the security policies and standards that are defined by the client. When a Storage Authentication Service, Direct Lightweight Directory Access Protocol (LDAP), or RACF policy is enabled for a cluster, service personnel (local or remote) are required to use the LDAP-defined service login.
Important: Be sure that local and remote authentication is allowed, or that an account is created to be used by service personnel, before enabling storage authentication, LDAP, or RACF policies.
The outbound communication that is associated with ECC call home can be through an Ethernet connection, a modem, or both, in the form of a failover setup. Modem is not supported in the new TS3000 TSSC. The local subnet LAN connection between the TSSC and the attached subsystems remains the same. It is still isolated without any outside access. ECC adds another Ethernet connection to the TSSC, bringing the total number to three. These connections are labeled:
The External Ethernet Connection, which is the ECC Interface
The Grid Ethernet Connection, which is used for the TS7700 Autonomic Ownership Takeover Manager (AOTM)
The Internal Ethernet Connection, which is used for the local attached subsystem’s subnet
 
Note: The AOTM and ECC interfaces should be in different TCP/IP subnets. This setup avoids both communications from using the same network connection.
All of these connections are set up using the Console Configuration Utility User Interface that is on the TSSC. TS7700 events that start a Call Home are displayed in the Events window under the Monitor icon.
9.4.2 Assist On-site
Enhanced support capabilities include the introduction of AOS to expand maintenance capabilities. Assist On-site allows IBM support personnel to remotely access the local TSSC and the tape subsystems under it to identify and resolve technical issues in real time. Assist On-site facilitates problem determination and resolution by providing a powerful suite of tools that enables the IBM support team to quickly identify and fix issues with the system.
AOS uses the same network as broadband call home, and works on either HTTP or HTTPS. Although the same physical Ethernet adapter is used for these functions, different ports must be opened in the firewall for the different functions. For more information, see 4.1.3, “TCP/IP configuration considerations” on page 146. The AOS function is disabled by default.
When enabled, the AOS can be configured to run in either attended or unattended modes:
Attended mode requires that the AOS session be initiated at the TSSC associated with the target TS7700, which requires physical access to the TSSC by the IBM SSR or by the client through the customer interface.
Unattended mode, also called Lights Out mode, enables a remote support session to be established without manual intervention at the TSSC associated with the target TS7700.
All AOS connections are outbound, so no connection is initiated from the outside to the TSSC; it is always the TSSC that initiates the connection. In unattended mode, the TSSC periodically connects to the regional AOS relay servers and checks whether a request for a session exists. When a session request exists, AOS authenticates and establishes the connection, allowing remote access to the TSSC.
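Because every AOS connection is outbound, unattended mode can be pictured as a polling loop that the TSSC drives. The following Python sketch is purely schematic; the relay check and session establishment callables are stand-ins for IBM's actual protocol.

import time

def unattended_aos_loop(check, establish, interval_s=300, cycles=1):
    """Schematic outbound-only polling: the TSSC always calls out; nothing calls in."""
    for _ in range(cycles):
        request = check()                  # outbound query: any pending session request?
        if request and request.get("authenticated"):
            establish(request)             # outbound tunnel for the remote session
        time.sleep(interval_s)

# Stubbed demo: one pending, authenticated session request.
unattended_aos_loop(check=lambda: {"authenticated": True},
                    establish=lambda r: print("session established (outbound)"),
                    interval_s=0)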
Assist On-site uses current security technology to ensure that the data that is exchanged between IBM Support engineers and the TSSC is secure. Identities are verified and protected with industry-standard authentication technology, and Assist On-site sessions are kept secure and private by using randomly generated session keys plus advanced encryption.
 
Note: All authentications are subject to the authentication policy that is in effect, as described in 9.3.9, “The Access icon” on page 465.
9.5 Common procedures
This section describes how to run some tasks that are necessary during the implementation stage of the TS7700, whether in stand-alone or grid mode. Some procedures that are described here might also be useful later in the lifecycle of the TS7700, when a change in configuration or operational parameters is necessary for the subsystem to meet new requirements.
The tasks are grouped by the following criteria:
Procedures that are related to the tape library connected to a TS7700 tape-attached model
Procedures that are used with all TS7700 cluster models
9.5.1 The tape library with the TS7700T cluster
The following sections describe the steps necessary to configure a TS7700 tape-attached cluster with a tape library.
Defining a logical library
The tape library GUI is required to define a logical library and run the following tasks. Therefore, ensure that it is set up correctly and working. For access through a standard-based web browser, an IP address must be configured in the tape library, which is done initially by the IBM SSR during the hardware installation at the TS3500 or TS4500.
 
Important: Each tape attach TS7700 cluster requires its own logical library in the tape library.
The ALMS feature must be installed and enabled to define a logical library partition in both TS3500 and TS4500 tape libraries.
Ensuring that ALMS is enabled
Before enabling ALMS, the ALMS license key must be entered through the TS3500 tape library operator window because ALMS is a chargeable feature.
You can check the status of ALMS with the TS3500 tape library GUI by clicking Library → ALMS, as shown in Figure 9-83.
Figure 9-83 TS3500 tape library GUI Summary and ALMS window
When ALMS is enabled for the first time in a partitioned TS3500 tape library, the contents of each partition are migrated to ALMS logical libraries. When enabling ALMS in a non-partitioned TS3500 tape library, cartridges that are already in the library are migrated to the new ALMS single logical library.
Figure 9-84 shows how to check the ALMS status with the TS4500 tape library. If necessary, the license key for the ALMS feature can be entered and activated in the same page.
Figure 9-84 ALMS installed and enabled on TS4500
Creating a logical library with TS4500
For more information about TS4500 operations and configuration, see IBM Knowledge Center, which is available locally at TS4500 GUI by clicking the question mark icon, or at:
Complete the following steps for a TS4500 tape library:
1. From the initial page of the TS4500 GUI, click the Library icon and select Logical Libraries (see Figure 9-85).
Figure 9-85 TS4500 create logical library page
2. Use the Create Logical Library option (see Figure 9-85) to complete the task.
Notice that the TS4500 GUI features selected presets, which help in the setup of the new logical library. For the TS7700 logical library, use the TS7700 option that is highlighted in Figure 9-85. This option uses the 3592 tape drives that are not assigned to any existing logical library within the TS4500 tape library. Also, it selects up to four drives as control paths, distributing them in two separate frames when possible.
 
Note: The TS7700 preset is disabled when fewer than four unassigned tape drives are available to create a logical library.
Figure 9-86 shows how to display which tape drives are available (unassigned) to be configured in the new logical library. Always work with your IBM service representative when defining drives for the TS7700 during installation or any further change in the environment. Those drives must be correctly cabled to the fibre switches dedicated to the TS7700, and the new back-end resources need to be configured (or reconfigured) within the cluster for proper operation.
Figure 9-86 Display available tapes
The preset also sets the System-Managed encryption method for the new TS7700 logical library.
An example of logical library definition is shown in Figure 9-87.
Figure 9-87 Defining the new Logical Library for the TS7700T
After the configuration of the logical library is completed, your IBM service representative can complete the TS7700 tape-attached cluster installation, and the tape cartridges can be inserted in the TS4500 tape library.
Creating a logical library with ALMS on the TS3500 tape library
Complete these steps for a TS3500 tape library:
1. From the main section of the TS3500 tape library GUI Welcome window, go to the work items on the left side of the window and click Library → Logical Libraries.
2. From the Select Action menu, select Create and then click Go.
An extra window, named Create Logical Library, opens.
3. Type the logical library name (up to 15 characters), select the media type (3592 for TS7740 or TS7720T), and then click Apply. The new logical library is created and is displayed in the logical library list when the window is refreshed.
4. After the logical library is created, you can display its characteristics by selecting Library → Logical Libraries under the work items on the left side of the window. Figure 9-88 shows a summary of the windows in the Create Logical Library sequence.
Figure 9-88 Creating a Logical Library with the TS3500 tape library
Maximum number of slots, 8-character volser, and VIO
Define the maximum number of cartridge slots for the new logical library. If multiple logical libraries are defined, you can define the maximum number of tape library cartridge slots for each logical library. This enables a logical library to grow without changing the configuration each time you want to add empty slots.
Ensure that the new logical library has the eight-character VOLSER reporting option set. Another item to consider is VIO usage: whether VIO is enabled and, if so, how many cells should be defined.
For more information about virtual I/O slots and their applicability, see the TS3500 tape library documentation that is available at this web page:
All of those items can be selected and defined at the Manage Logical Libraries window in the TS3500 tape library, as shown in Figure 9-89.
Figure 9-89 Manage Logical Libraries in the TS3500
Assigning drives
Now, the TS7700T tape drives should be added to the logical library.
From the Logical Libraries window that is shown in Figure 9-90 on page 532, use the work items on the left side of the window to go to the required web window by clicking Drives → Drive Assignment. This link takes you to a filtering window where you can choose to have the drives displayed by drive element or by logical library.
 
 
Note: For the 3592 J1A, E05, E06, and E07 drives, an intermix of tape drive models is not supported by TS7720T or TS7740, except for 3592-E05 tape drives working in J1A emulation mode and 3592-J1A tape drives (the first and second generation of the 3592 tape drives).
Upon selection, a window opens so that a drive can be added to or removed from a library configuration. Also, you can use this window to share a drive between Logical Libraries and define a drive as a control path.
Figure 9-90 on page 532 shows the drive assignment window of a logical library that has all drives assigned.
Unassigned drives appear in the Unassigned column with the box checked. To assign them, select the appropriate drive box under the logical library name and click Apply.
 
Note: Do not share drives belonging to a TS7700T. They must be exclusive.
Click the Help link at the upper-right corner of the window that is shown in Figure 9-90 to see extended help information, such as detailed explanations of all the fields and functions of the window. The other TS3500 tape library GUI windows provide similar help support.
Figure 9-90 Drive Assignment window
TS7700 R4.0 works with the TS1150 tape drives in a homogeneous or heterogeneous configuration. A heterogeneous configuration of the tape drives means a mix of TS1150 (3592 E08) drives and one previous generation of the 3592 tape drives to facilitate data migration from legacy media. Tape drives from the previous generation are used only to read legacy media (JA/JB), while the TS1150 reads and writes the newer media types. Because there are no writes to the legacy media type, the support for a heterogeneous configuration of the tape drives is deemed limited.
In a multi-platform environment, logical libraries show up as in Figure 9-90. Physical tape drives can be reassigned from one logical library to another. This can be easily done for the Open Systems environment, where the tape drives attach directly to the host systems without a tape controller or VTS/TS7700.
 
Note: Do not change drive assignments if they belong to an operating TS7700T, or tape controller. Work with your IBM SSR, if necessary.
In an IBM Z environment, a tape drive always attaches to one tape control unit (CU) only. If it is necessary to change the assignment of a tape drive from a TS7700T, the CU must be reconfigured to reflect the change. Otherwise, the missing resource is reported as defective to the MI and hosts. Work with your IBM SSRs to perform these tasks in the correct way, avoiding unplanned outages.
 
Important: In an IBM Z environment, use the Drive Assignment window only for these functions:
Initially assign the tape drives from the TS3500 tape library GUI to a logical partition (LPAR).
Assign more tape drives after they are attached to the TS7740 or a tape controller.
Remove physical tape drives from the configuration after they are physically detached from the TS7740 or tape controller.
In addition, never disable ALMS at the TS3500 tape library after it is enabled for IBM Z host support and IBM Z tape drive attachment.
Defining control paths
Each TS7740 requires four control path drives defined. If possible, distribute the control path drives over more than one TS3500 tape library frame to avoid single points of failure.
Defining the encryption method for the new logical library
After adding tape drives to the new logical library, the encryption method for the new logical library (if applicable) needs to be defined.
 
Reminders: Tape drives must be set to Native mode when encryption is used.
To activate encryption, FC9900 must be ordered for the TS7740 or the TS7720T, and the license key must be installed. In addition, the associated tape drives must be Encryption Capable 3592-E05, 3592-E06, 3592-E07, or 3592-E08.
Complete the following steps:
1. Check the drive mode by opening the Drives summary window in the TS3500 tape library GUI, as shown in Figure 9-91, and look in the Mode column. This column is displayed only if drives in the tape library are emulation-capable.
Figure 9-91 Checking drive mode
2. If necessary, change the drive mode to Native mode (3592-E05 only). In the Drives summary window, select a drive and select Change Emulation Mode, as shown in Figure 9-92.
Figure 9-92 Change the drive emulation
3. In the next window that opens, select the native mode for the drive. After the drives are in the wanted mode, proceed with the Encryption Method definition.
4. In the TS3500 MI, click Library → Logical Libraries, select the logical library with which you are working, select Modify Encryption Method, and then click Go. See Figure 9-93.
Figure 9-93 Select the encryption method
5. In the window that opens, select System-Managed for the chosen method, and select all drives for this partition. See Figure 9-94.
Figure 9-94 Set the encryption method
To make encryption fully operational in the TS7740 configuration, more steps are necessary. Work with your IBM SSR to configure the Encryption parameters in the TS7740 during the installation process.
 
Important: Keep the Advanced Encryption Settings as NO ADVANCED SETTING, unless set otherwise by IBM Engineering.
Defining Cartridge Assignment Policies
The Cartridge Assignment Policy (CAP) of the TS3500 tape library is where ranges of physical cartridge volume serial numbers are assigned to specific logical libraries. With CAP correctly defined, when a cartridge with a VOLSER that matches a range is inserted into the I/O station, the library automatically assigns that cartridge to the appropriate logical library.
To add, change, and remove policies, select Cartridge Assignment Policy from the Cartridges work items. The maximum quantity of CAPs for the entire TS3500 tape library must not exceed 300 policies.
Figure 9-95 shows the VOLSER ranges defined for logical libraries.
Figure 9-95 TS3500 Tape Library Cartridge Assignment Policy
The TS3500 tape library enables duplicate VOLSER ranges for different media types only. For example, Logical Library 1 and Logical Library 2 contain Linear Tape-Open (LTO) media, and Logical Library 3 contains IBM 3592 media. Logical Library 1 has a CAP of ABC100-ABC200. The library rejects an attempt to add a CAP of ABC000-ABC300 to Logical Library 2 because the media type is the same (both LTO). However, the library does enable an attempt to add a CAP of ABC000-ABC300 to Logical Library 3 because the media (3592) is different.
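The duplicate-range rule from this example can be captured in a few lines. The following Python sketch is illustrative, assuming six-character VOLSERs that compare lexically; it also honors the 300-policy library-wide limit mentioned earlier.

def cap_allowed(new_range, new_media, existing_caps):
    """existing_caps: iterable of ((first, last), media_type) tuples.
    Overlapping VOLSER ranges are rejected only for the same media type;
    the whole library is limited to 300 policies."""
    caps = list(existing_caps)
    if len(caps) >= 300:
        return False
    first, last = new_range
    for (f, l), media in caps:
        if media == new_media and first <= l and f <= last:   # lexical overlap
            return False
    return True

caps = [(("ABC100", "ABC200"), "LTO")]
print(cap_allowed(("ABC000", "ABC300"), "LTO", caps))    # False: same media, overlapping
print(cap_allowed(("ABC000", "ABC300"), "3592", caps))   # True: different media type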
In a storage management subsystem (SMS-managed) z/OS environment, all VOLSER identifiers across all storage hierarchies are required to be unique. Follow the same rules across host platforms also, whether sharing a TS3500 tape library between IBM Z and Open Systems hosts or not.
 
Tip: The CAP does not reassign an already assigned tape cartridge. If needed, you must first unassign it, then manually reassign it.
Inserting TS7700T physical volumes
The tape attach TS7700 subsystem manages both logical and physical volumes. The CAP of the TS3500 tape library or the associated volume ranges at the TS4500 affect only the physical volumes that are associated with this TS7740 or TS7720T logical library. Logical volumes are managed exclusively from the TS7700 MI.
To add physical cartridges, complete the following steps:
1. Define CAPs at the IBM TS3500 or apply the VOLSER ranges at the TS4500 tape library through the GUI. This process ensures that all TS7700 ranges are recognized and assigned to the correct logical library partition (the logical library that is created for this specific TS7700) before you begin any TS7700 MI definitions.
2. Physically insert volumes into the library by using the I/O station, or by opening the library and placing cartridges in empty storage cells. Cartridges are assigned to the tape attach TS7700 logical library partitions according to the definitions.
 
Important: Before inserting physical volumes belonging to a TS7700T into the tape library, ensure that the VOLSER ranges are defined correctly at the TS7700 MI. For more information, see “Defining VOLSER ranges for physical volumes” on page 542.
These procedures ensure that TS7700 back-end cartridges are never assigned to a host by accident. Figure 9-96 shows the flow of physical cartridge insertion and assignment to logical libraries for TS7740 or TS7720T.
Figure 9-96 Physical volume assignment
Inserting physical volumes into the tape library
Two methods are available for inserting physical volumes into the tape library:
Opening the library doors and inserting the volumes directly into the tape library storage empty cells (bulk loading)
Using the tape library I/O station
Insertion directly into storage cells
Use the operator pane of the tape library to pause it. Open the door and insert the cartridges into any empty slot, except the slots that are reserved for diagnostic cartridges, which are Frame 1, Column 1, Row 1 (F01, C01, R01) in a single media-type library. Also, do not insert cartridges into the shuffle locations in the high-density frames (the top two rows in an HD frame). Always use empty slots in the same frame whose front door was opened; otherwise, the cartridges are not inventoried.
 
Important: Cartridges that are not in a CAP range (TS3500) or associated to any logical library (TS4500) are not assigned to any logical library.
After completing the new media insertion, close the doors. After approximately 15 seconds, the tape library automatically inventories the frame or frames of the door that you opened.
When the tape library finishes the physical inventory, the TS7700T uploads the inventory from its associated logical library. At the end of the inventory upload, the tape library presents the Auto status to the tape attach TS7700 cluster.
 
Tip: Only place cartridges in a frame whose front door is open. Do not add or remove cartridges from an adjacent frame.
Insertion by using the I/O station
The tape library can be operating with or without virtual I/O (VIO) being enabled.
Basically, with VIO enabled, the tape library moves the cartridges from the physical I/O station into the physical library by itself. First, the cartridge leaves the physical I/O station and goes into a slot that is mapped as a VIO SCSI element between 769 (X'301') and 1023 (X'3FF') for the logical library that is designated by the VOLSER association or CAP.
Each logical library has its own set of up to 256 VIO slots, as defined during logical library creation or later.
With VIO disabled, the tape library does not move cartridges from the physical I/O station unless it receives a command from the TS7700T or any other host in control.
In both cases, the tape library detects the presence of cartridges in the I/O station when it transitions from open to closed, and scans all I/O cells by using the bar code reader. The CAP or VOLSER assignment determines to which logical library those cartridges belong, and then the library runs one of the following tasks:
Moves them to the VIO slots of the designated logical library, with VIO enabled.
Waits for a host command in this logical library. The cartridges stay in the I/O station after the bar code scan.
The volumes that are being inserted should belong to the range of volumes that is defined in the tape library (CAP or VOLSER range) for the TS7700 logical library, and those ranges should also be defined in the TS7700 Physical Volume Ranges, as described in “Defining VOLSER ranges for physical volumes” on page 542. Both conditions must be met for a physical cartridge to be successfully inserted into the TS7700T.
If any VOLSER is not in a range that is defined by the policies, the cartridge must be assigned to the correct logical library manually by the operator, as sketched after the following note.
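The two checks can be pictured together. In the following Python sketch, the range tables are illustrative; the real definitions live in the tape library CAP or VOLSER assignment and in the TS7700 MI Physical Volume Ranges window.

def in_ranges(volser, ranges):
    return any(first <= volser <= last for first, last in ranges)

def insert_outcome(volser, library_ranges, ts7700_ranges):
    """A cartridge joins the TS7700T only if both the tape library and the
    TS7700 recognize its VOLSER; otherwise, it lands in a manual-action state."""
    if not in_ranges(volser, library_ranges):
        return "unassigned in the tape library (operator must assign it)"
    if not in_ranges(volser, ts7700_ranges):
        return "Unassigned category in the TS7700 (add a range or eject)"
    return "inserted and categorized by the TS7700T"

print(insert_outcome("JA1234", [("JA0000", "JA9999")], [("JA0000", "JA9999")]))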
 
Note: Make sure that CAP ranges are correctly defined. Insert Notification is not supported on a high-density library. If a cartridge outside the CAP-defined ranges is inserted, it remains unassigned without any notification, and it might be checked in by any logical library of the same media type.
Verify by using the tape library GUI that the cartridges were correctly assigned and not left unassigned. Figure 9-97 shows the relevant pages for the TS4500 and TS3500.
Figure 9-97 Check the volume assignment
When volumes that belong to a logical library are found unassigned, correct the CAP or VOLSER assignment definitions, and reinsert the volumes. Optionally, cartridges can be manually assigned to the correct logical library by the operator through the GUI.
We strongly suggest correctly defining the CAP or VOLSER assignment policies in the tape library for the best operation of the tape system.
Unassigned volumes in the tape attach TS7700
A physical volume goes to the Unassigned category in the TS7740 or TS7720T if it does not fit in any defined range of physical volumes for this TS7700 cluster. Defined ranges and unassigned volumes can be checked in the TS7700 MI Physical Volume Ranges window that is shown in Figure 9-98.
Figure 9-98 TS7740 unassigned physical volumes
If an unassigned volume should be assigned to this TS7700T, a new range that includes this volume must be created, as described in “Defining VOLSER ranges for physical volumes” on page 542. If this volume was incorrectly assigned to the TS7700 cluster, it should be ejected and reassigned to the correct logical library in the tape library. Also, make sure that CAP or volser assignments are correct in the tape library.
Assigning cartridges in the tape library to the logical library partition
This procedure is necessary only if a cartridge was inserted without a CAP or VOLSER assignment being provided in advance (which is not recommended). In this case, you must assign the cartridge manually to a logical library in the tape library.
 
 
Clarifications: Insert Notification is not supported in a high-density library for TS3500. The CAP must be correctly configured to provide automated assignment of all the inserted cartridges.
A cartridge that has been manually assigned to the TS7700 logical library does not display automatically in the TS7700T inventory. An Inventory Upload is needed to refresh the TS7700 cluster inventory. The Inventory Upload function is available on the Physical Volume Ranges menu as shown in Figure 9-98.
Cartridge assignment to a logical library is available only through the tape library GUI.
Assigning a data cartridge
To assign a data cartridge to a logical library in the TS4500 tape library, complete the following steps:
1. Click the Cartridge icon on the TS4500 GUI, and select Cartridges.
2. Find the cartridge that you want to assign (it shows as unassigned at that point), and select it by clicking the line.
3. Right-click it, and select Assign. Choose the correct logical library from the available list.
4. Complete the assignment by clicking the Assign button.
5. For a TS7700T cluster, click Physical → Physical Volumes → Physical Volume Ranges and click Inventory Upload, as shown in Figure 9-101 on page 543.
To assign a data cartridge to a logical library in the TS3500 tape library, complete the following steps:
1. Open the TS3500 tape library GUI (go to the library’s Ethernet IP address or the library URL by using a standard browser). The Welcome window opens.
2. Click Cartridges → Data Cartridges. The Data Cartridges window opens.
3. Select the logical library to which the cartridge is assigned and select a sort view of the cartridge range. The library can sort the cartridge by volume serial number, SCSI element address, or frame, column, and row location. Click Search. The Cartridges window opens and shows all the ranges for the specified logical library.
4. Select the range that contains the data cartridge that should be assigned.
5. Select the data cartridge and then click Assign.
6. Select the logical library partition to which the data cartridge should be assigned.
7. Click Next to complete the function.
8. For a TS7700T cluster, click Physical → Physical Volumes → Physical Volume Ranges and click Inventory Upload, as shown in Figure 9-101 on page 543.
Inserting a cleaning cartridge
Each drive in the tape library requires cleaning from time to time. Tape drives that are used by the TS7700 subsystem can request a cleaning action when necessary. This cleaning is carried out by the tape library automatically. However, the necessary cleaning cartridges must be provided.
 
Remember: Cleaning action is performed automatically by the tape libraries when necessary. A cleaning cartridge is good for 50 cleaning actions.
Use the cartridge magazine to insert cleaning cartridges into the I/O station, and then into the TS4500 tape library. The TS4500 can be set to move expired cleaning cartridges to the I/O station automatically. Figure 9-99 shows how to set it.
Figure 9-99 TS4500 tape library moves expired cleaning cartridge to I/O station automatically
The Cartridges by Logical Library page, under the Cartridge icon, shows how many cleaning cartridges are globally available to the TS4500 tape library, as shown in Figure 9-100.
Figure 9-100 Displaying cleaning cartridges with the TS4500.
Also, TS4500 tape library command-line interface commands are available to check the status of the cleaning cartridges or to alter settings in the tape library. For more information, see the TS4500 documentation, which is available locally by clicking the question mark icon in the top bar of the GUI, or at IBM Knowledge Center.
9.5.2 TS7700T definitions
This section provides information about the following definitions:
Defining physical volume pools in the TS7700 tape-attached cluster
Defining VOLSER ranges for physical volumes
After a cartridge is assigned to a logical library that is associated with a TS7700T by CAP or VOLSER ranges, it is presented to the TS7700 tape-attached cluster. The TS7700T uses the VOLSER ranges that are defined in its VOLSER Ranges table to set the cartridge to the proper category. Define the proper policies in the VOLSER Ranges table before inserting the cartridges into the tape library.
 
Note: VOLSER ranges (or CAP) should be correctly assigned at the tape library before the tape library is used with IBM Z hosts. Native physical volume ranges must fall within ranges that are assigned to IBM Z host logical libraries.
Use the window that is shown in Figure 9-101 to add, modify, and delete physical volume ranges. Unassigned physical volumes are listed in this window. If a volume is listed as unassigned and it belongs to this TS7700, add a new range that includes that volume. If an unassigned volume does not belong to this TS7700 cluster, eject it and reassign it to the proper logical library in the physical tape library.
Figure 9-101 Physical Volume Ranges window
Click Inventory Upload to upload the inventory from the TS3500 tape library and update any range or ranges of physical volumes that were recently assigned to that logical library. The VOLSER Ranges table displays the list of defined VOLSER ranges for a specific component. The VOLSER Ranges table can be used to create a VOLSER range, or to modify or delete a predefined VOLSER range.
 
Important: Operator intervention is required to resolve unassigned volumes.
For more information about how to insert a new range of physical volumes by using the TS7700 Management Interface, see “Physical Volume Ranges window” on page 443.
Defining physical volume pools in the TS7700T
Pooling physical volumes enables data to be placed into separate sets of physical media, with each media group treated in a specific way. For instance, there might be a need to segregate production data from test data, or to encrypt part of the data. All of these goals can be accomplished by defining physical volume pools. Also, the reclaim parameters can be defined for each pool to best suit specific needs. The TS7700 MI is used for pool property definitions.
Items under Physical Volumes in the MI apply only to tape attach clusters. Trying to access those windows from a TS7720 results in the following HYDME0995E message:
This cluster is not attached to a physical tape library.
Use the window that is shown in Figure 9-102 to view or modify settings for physical volume pools, which manage the physical volumes that are used by the TS7700.
Figure 9-102 Physical Volume Pools
The Physical Volume Pool Properties table displays the encryption setting and media properties for every physical volume pool that is defined for TS7700T clusters in the grid.
For more information about how to view, create, or modify Physical volume tape pools by using the TS7700 management interface, see “Physical Volume Pools” on page 428.
To modify encryption settings for one or more physical volume pools, complete the following steps (Figure 9-103 on page 545 shows this sequence). For more information, see “Physical Volume Pools” on page 428:
1. Open the Physical Volume Pools window.
 
Tip: A tutorial is available in the Physical Volume Pools window to show how to modify encryption properties.
2. Click the Physical Tape Encryption Settings tab.
3. Select the check box next to each pool to be modified.
4. Click Select Action → Modify Encryption Settings.
5. Click Go to open the Modify Encryption Settings window (see Figure 9-103).
Figure 9-103 Modify encryption settings parameters
For more information about the parameters and settings on the Modify Encryption Settings window, see “Physical Volume Pools” on page 428.
Defining reclamation settings in a TS7700T
To optimize the use of the subsystem resources, such as processor cycles and tape drive usage, space reclamation can be inhibited during predictable busy periods and reclamation thresholds can be adjusted to the optimum point in the TS7700T through the MI. The reclaim threshold is the percentage that is used to determine when to run the reclamation of free space in a stacked volume.
When the amount of active data on a physical stacked volume drops below this percentage, the volume becomes eligible for reclamation. Reclamation values can be in the range of 0% - 95%, with a default value of 35%. Selecting 0% deactivates this function.
 
Note: Subroutines of the Automated Read-Only Recovery (ROR) process are started to reclaim space in the physical volumes. Those cartridges are made read-only momentarily during the reclaim process, returning to normal status at the end of the process.
Throughout the data lifecycle, new logical volumes are created and old logical volumes become obsolete. Logical volumes are migrated to physical volumes, occupying real space there. When a logical volume becomes obsolete, its space becomes wasted capacity on the physical tape. Therefore, the active data level of that volume decreases over time.
The TS7700T actively monitors the active data on its physical volumes. Whenever the active data level drops below the reclaim threshold that is defined for the Physical Volume Pool to which that volume belongs, the TS7700 places the volume on a candidate list for reclamation.
Reclamation copies active data from that volume to another stacked volume in the same pool. When the copy finishes and the volume becomes empty, the volume is returned to available SCRATCH status. This cartridge is now available for use and is returned to the common scratch pool or directed to the specified reclaim pool, according to the Physical Volume Pool definition.
 
Clarification: Each reclamation task uses two tape drives (source and target) in a tape-to-tape copy function. The TS7700 TVC is not used for reclamation.
Multiple reclamation processes can run in parallel. The maximum number of reclaim tasks is limited by the TS7700T, based on the number of available drives as listed in Table 9-19.
Table 9-19 Installed drives versus maximum reclaim tasks

Number of available drives    Maximum number of reclaims
 3                            1
 4                            1
 5                            1
 6                            2
 7                            2
 8                            3
 9                            3
10                            4
11                            4
12                            5
13                            5
14                            6
15                            6
16                            7
You might want fewer reclaims running, sparing the resources for other activities in the cluster. The user can set the maximum number of drives that are used for reclaim on a per-pool basis. Also, reclaim settings can be changed by using LI REQ commands, as shown in the following example.
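For example, the following hedged command limits the maximum number of concurrent reclaim tasks. DISTLIB1 is a placeholder distributed library name, and the SETTING,RECLAIM,RCLMMAX keywords are those documented for the Host Console Request function; verify them against the LI REQ documentation for your code level:

LIBRARY REQUEST,DISTLIB1,SETTING,RECLAIM,RCLMMAX,4     (cap reclamation at four concurrent tasks)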
The reclamation level for the physical volumes must be set by using the Physical Volume Pools window in the TS7700 MI. Select a pool and click Modify Pool Properties in the menu to set the reclamation level and other policies for that pool.
For example, Figure 9-104 shows the borrow-return policy in effect for Pool 3, meaning that cartridges can be borrowed from the common scratch pool (CSP) and are returned to the CSP upon reclamation. Also, the user has defined that volumes belonging to Pool 3 reclaim into Pool 13.
No more than four drives can be used for premigration in Pool 3. The reclaim threshold percentage has been set to 35%, meaning that when the active data on a physical volume in Pool 3 drops below 35% of its capacity, the stacked cartridge becomes a candidate for reclamation. The other way to trigger reclamation in this example is Days Without Data Inactivation, which applies to tape cartridges with an occupancy level of up to 65%.
Figure 9-104 Pool properties
For more information about parameters and settings, see “Physical Volume Pools” on page 428.
Reclamation enablement
To minimize any effect on TS7700 activity, the storage management software monitors resource use in the TS7700, and enables or disables reclamation. Optionally, reclamation activity can be prevented at specific times by specifying an Inhibit Reclaim Schedule in the TS7700 MI (Figure 9-105 on page 550 shows an example).
However, the TS7700T determines whether reclamation is enabled or disabled once an hour, depending on the number of available scratch cartridges. It disregards the Inhibit Reclaim Schedule if the number of available scratch cartridges in the TS7700T drops below a minimum. In that case, reclamation is enforced by the tape-attached TS7700 cluster.
 
Tip: The maximum number of Inhibit Reclaim Schedules is 14.
Using the Bulk Volume Information Retrieval (BVIR) process, the amount of active data on stacked volumes can be monitored through the PHYSICAL MEDIA POOLS report, helping to plan a reasonable and effective reclaim threshold percentage. Also, the Host Console Request function can be used to obtain the physical volume counts.
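As a sketch, a BVIR request is made by writing a two-record request file to a logical volume and then reading that volume back to retrieve the report. The record contents shown are those described in the BVIR documentation; the JCL around them is installation-specific:

Record 1: VTS BULK VOLUME DATA REQUEST
Record 2: PHYSICAL MEDIA POOLS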
Even when reclamation is enabled, stacked volumes might not be going through the process all of the time. Other conditions must be met, such as stacked volumes that meet one of the reclaim policies and drives that are available to mount the stacked volumes.
Reclamation for a volume is stopped by the TS7700 internal management functions if a tape drive is needed for a recall or copy (because these are of higher priority), or if a logical volume must be recalled from a source or target tape that is in the reclaim process. If this happens, reclamation is stopped for this physical tape after the current logical volume move is complete.
Pooling is enabled as a standard feature of the TS7700, even if only one pool is used. Reclamation can occur on multiple volume pools at the same time, and process multiple tasks for the same pool. One of the reclamation methods selects the volumes for processing based on the percentage of active data.
Individual pools can have separate reclaim policies set. The number of pools can also influence the reclamation process because the TS7700 tape attach always evaluates the stacked media starting with Pool 1.
The scratch count for physical cartridges also affects reclamation. The scratch state of pools is assessed in the following manner:
1. A pool enters a Low scratch state when it has access to fewer than 50, but two or more, empty cartridges (scratch tape volumes).
2. A pool enters a Panic scratch state when it has access to fewer than two empty cartridges (scratch tape volumes).
Panic Reclamation mode is entered when a pool has fewer than two scratch cartridges and no more scratch cartridges can be borrowed from any other pool that is defined for borrowing. Borrowing is described in “Using physical volume pools” on page 51.
 
Important: A physical volume pool that is running out of scratch cartridges might stop mounts in the TS7740 or TS7720T tape-attached partitions, affecting host tape operations. Mistakes in pool configuration (media type, borrow and return, home pool, and so on) or operating with an empty common scratch pool can lead to this situation.
Consider that one reclaim task uses two drives for the data move, plus processor cycles. When a reclamation starts, these drives are busy until the volume that is being reclaimed is empty. If the reclamation threshold level is raised too high, the result is a larger amount of data to be moved, with a resultant penalty in resources that are needed for recalls and premigration. The default setting for the reclamation threshold level is 35%.
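As a hypothetical sizing illustration, assume a stacked volume that holds 7 TB. With the 35% default threshold, a volume can become eligible while still holding up to 0.35 x 7 TB = 2.45 TB of active data to be moved by the reclaim task. Lowering the threshold to 10% caps the move at 0.10 x 7 TB = 0.7 TB per volume, at the cost of keeping more inactive data on tape for longer.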
Ideally, reclaim threshold level should be 10% - 35%. Read more about how to fine-tune this function and about the available host functions in 4.4.4, “Physical volumes for TS7740, TS7720T, and TS7760T” on page 183. Pools in either scratch state (Low or Panic state) get priority for reclamation.
Table 9-20 lists the thresholds.
Table 9-20 Reclamation priority table

Priority 1 - Pool in Panic scratch state:
 – Reclaim schedule honored: No
 – Active data threshold % honored: No
 – Number of concurrent reclaims: At least one, regardless of idle drives. If more idle drives are available, more reclaims are started, up to the maximum limit.

Priority 2 - Priority move:
 – Reclaim schedule honored: Yes or No
 – Active data threshold % honored: No
 – Number of concurrent reclaims: At least one, regardless of idle drives. If more idle drives are available, more reclaims are started, up to the maximum limit.
 – Comments: If a volume is within 10 days of a Secure Data Erasure and still has active data on it, it is reclaimed at this priority. An SDE priority move accepts the inhibit reclaim schedule. For a TS7700 MI-initiated priority move, the option to accept the inhibit reclaim schedule is given to the operator.

Priority 3 - Pool in Low scratch state:
 – Reclaim schedule honored: Yes
 – Active data threshold % honored: Yes
 – Number of concurrent reclaims: At least one, regardless of idle drives. If more idle drives are available, more reclaims are started, up to the maximum limit.
 – Comments: Volumes that are subject to reclaim because of Maximum Active Data, Days Without Access, Age of Last Data Written, and Days Without Data Inactivation use priority 3 or 4 reclamation.

Priority 4 - Normal reclaim:
 – Reclaim schedule honored: Yes
 – Active data threshold % honored: Yes, picked from all eligible pools
 – Number of concurrent reclaims: (Number of idle drives divided by 2) minus 1. For example, 8 drives: 3 maximum; 16 drives: 7 maximum.
 – Comments: Volumes that are subject to reclaim because of Maximum Active Data, Days Without Access, Age of Last Data Written, and Days Without Data Inactivation use priority 3 or 4 reclamation.
 
Tips: A physical drive is considered idle when no activity has occurred for the previous 10 minutes.
The Inhibit Reclaim schedule is not honored by the Secure Data Erase function for a volume that has no active data.
Inhibit Reclaim schedule
The Inhibit Reclaim schedule defines when the TS7700 must refrain from reclaim operations. During times of heavy mount activity, it might be desirable to make all of the physical drives available for recall and premigration operations. If these periods of heavy mount activity are predictable, the Inhibit Reclaim schedule can be used to inhibit reclaim operations for the heavy mount activity periods.
To define the Inhibit Reclaim schedule, click Management Interface → Settings → Cluster Settings, which opens the window that is shown in Figure 9-105.
Figure 9-105 Inhibit Reclaim schedules
The Schedules table (Figure 9-106) displays the day, time, and duration of any scheduled reclamation interruption. All inhibit reclaim dates and times are first displayed in Coordinated Universal Time and then in local time. Use the menu on the Schedules table to add a new Reclaim Inhibit Schedule, or modify or delete an existing schedule, as shown in Figure 9-106.
Figure 9-106 Add Inhibit Reclaim schedule
Defining Encryption Key Server addresses
Set the EKS addresses in the TS7700 cluster (see Figure 9-107).
Figure 9-107 Encryption Key Server Addresses
To watch a tutorial that shows the properties of encryption key management, click the View tutorial link.
The EKS assists encryption-enabled tape drives in generating, protecting, storing, and maintaining encryption keys (EKs) that are used to encrypt information being written to, and decrypt information being read from, tape media (tape and cartridge formats). Also, when the external key management disk encryption feature is installed, the EKS manages the EK for the TVC cache disk subsystem. This moves the responsibility of managing the keys away from the 3957-Vxx and from the disk subsystem controllers.
 
Note: The settings for Encryption Server are shared for both tape and external disk encryption.
9.5.3 TS7700 definitions
This section describes the basic TS7700 definitions.
Inserting virtual volumes
Use the Insert Virtual Volumes window (see Figure 9-108) to insert a range of logical volumes in the TS7700 grid. Logical volumes that are inserted into an individual cluster are available to all clusters within a grid configuration.
Figure 9-108 TS7700 MI Insert Virtual Volumes window
During logical volume entry processing on z/OS, even if the library is online and operational for a specific host, at least one device must be online (or have been online) for that host for the library to send the volume entry attention interrupt to that host. If only the library is online and operational, but no devices are online to a specific host, that host does not receive the attention interrupt from the library unless a device was previously varied online.
To work around this limitation, ensure that at least one device is online (or has been online) to each host, or use the LIBRARY RESET,CBRUXENT command to initiate cartridge entry processing from the host. This task is especially important if only one host is attached to the library that owns the volumes being entered. In general, after the volumes are entered into the library, CBR36xxI cartridge entry messages are expected. The LIBRARY RESET,CBRUXENT command from z/OS can be used to reinitiate cartridge entry processing, if necessary. This command causes the host to ask for any volumes in the insert category.
Normally, as soon as OAM starts for the first time with volumes in the Insert category, entry processing begins immediately, without allowing for operator interruption. The LI DISABLE,CBRUXENT command can be used before starting the OAM address space. This approach allows entry processing to be held before the OAM address space initially starts, as shown in the following example.
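The following sequence is an illustration only; the OAM procedure name (OAM) is an assumption and must match your installation:

LIBRARY DISABLE,CBRUXENT     (hold cartridge entry processing)
S OAM                        (start the OAM address space)
LIBRARY RESET,CBRUXENT       (resume cartridge entry processing when ready)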
For more information about this TS7700 MI page, see “Insert Virtual Volumes window” on page 415.
 
Note: Up to 10,000 logical volumes can be inserted at one time. This applies to both inserting a range of logical volumes and inserting a quantity of logical volumes.
Defining scratch categories
You can use the TS7700 MI to add, delete, or modify a scratch category of virtual volumes. All scratch categories that are defined by using the TS7700 MI inherit the Fast Ready attribute.
 
Note: The Fast Ready attribute identifies a category as a source for scratch mounts. For z/OS, the category values that are used depend on the definitions in DEVSUPxx. The TS7700 MI provides a way to define one or more scratch categories. A scratch category can be added by using the Add Scratch Category menu.
The MOUNT FROM CATEGORY command is not exclusively used for scratch mounts. Therefore, the TS7700 cannot assume that any MOUNT FROM CATEGORY is for a scratch volume.
When defining a scratch category, an expiration time can be set, and it can be further qualified with an Expire Hold time.
The category hexadecimal number depends on the software environment and on the definitions in the SYS1.PARMLIB member DEVSUPxx for library partitioning. Also, the DEVSUPxx member must be referenced in the IEASYSxx member to be activated.
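As a sketch only, a partitioned-library setup might pair the MI scratch category definitions with DEVSUPxx entries such as the following. The member suffix (01) and the category values (0011, 0012) are hypothetical examples and must match the scratch categories that are defined in the MI:

DEVSUP01:  MEDIA1=0011,MEDIA2=0012     (scratch categories for MEDIA1 and MEDIA2 virtual volumes)
IEASYS00:  DEVSUP=01                   (reference that activates the DEVSUP01 member)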
 
Tip: Do not add a scratch category by using the MI if that category was previously designated as a private volume category at the host. Categories should correspond to the categories that are defined in DEVSUPxx on the attached hosts.
To add, modify, or delete a scratch category of virtual volumes, see “Insert a New Virtual Volume Range window” on page 416 for more details. This window can also be used to view total volumes that are defined by custom, inserted, and damaged categories. The Categories table uses the following values and descriptions:
Categories:
 – Scratch
Categories within the user-defined private range 0x0001 through 0xEFFF that are defined as scratch.
 – Private
Custom categories that are established by a user, within the range of 0x0001 through 0xEFFF.
 – Damaged
A system category that is identified by the number 0xFF20. Virtual volumes in this category are considered damaged.
 – Insert
A system category that is identified by the number 0xFF00. Inserted virtual volumes are held in this category until moved by the host into a scratch category.
Owning Cluster
Names of all clusters in the grid.
Counts
The total number of virtual volumes according to category type, category, or owning cluster.
Scratch Expired
The total number of scratch volumes per owning cluster that are expired. The total of all scratch expired volumes is the number of ready scratch volumes.
 
Number of virtual volumes: The sum of all volume counts that are shown in the Counts column does not always equal the total number of virtual volumes because some rare, internal categories are not displayed in the Categories table. Additionally, movement of virtual volumes between scratch and private categories can occur multiple times per second, and any snapshot of volumes on all clusters in a grid is obsolete by the time a total count completes.
The Categories table can be used to add, modify, and delete a scratch category, and to change the way that information is displayed.
Figure 9-109 shows the Add Category window, which you can open by selecting Add Scratch Categories.
Figure 9-109 Scratch Categories - Add Category
The Add Category window shows these fields:
Category
A four-digit hexadecimal number that identifies the category. Valid characters for this field are A-F and 0-9.
 
Important: Do not use category name 0000 or FFxx, where xx equals 0 - 9 or A - F. 0000 represents a null value, and FFxx is reserved for hardware.
Expire
The amount of time after a virtual volume is returned to the scratch category before its data content is automatically delete-expired.
A volume becomes a candidate for delete-expire after all the following conditions are met:
 – The amount of time since the volume entered the scratch category is equal to or greater than the Expire Time.
 – The amount of time since the volume’s record data was created or last modified is greater than 12 hours.
 – At least 12 hours have passed since the volume was migrated out of, or recalled back into, disk cache.
Select an expiration time from the drop-down menu shown on Figure 9-109.
 
Note: If No Expiration is selected, volume data never automatically delete-expires.
Set Expire Hold
Select this box to prevent the virtual volume from being mounted or having its category and attributes changed before the expire time elapses.
Selecting this field activates the hold state for any volumes currently in the scratch category and for which the expire time has not yet elapsed. Clearing this field removes the access restrictions on all volumes currently in the hold state within this scratch category.
If Expire Hold is set, the virtual volume cannot be mounted during the expire time duration and is excluded from any scratch counts surfaced to the IBM Z host. The volume category can be changed, but only to a private category, allowing accidental scratch occurrences to be recovered to private.
If Expire Hold is not set, then the virtual volume can be mounted or have its category and attributes changed within the expire time duration. The volume is also included in scratch counts surfaced to the IBM Z hosts.
 
Tip: Add a comment to DEVSUPxx to ensure that the scratch categories are updated whenever the category values in DEVSUPxx are changed. They always need to be in sync.
Defining the logical volume expiration time
The expiration time is defined from the MI window that is shown in Figure 9-109 on page 555. If the Delete Expired Volume Data setting is not used, logical volumes that have been returned to scratch are still considered active data, allocating physical space in tape cartridges on the tape-attached TS7700. In that case, only rewriting the logical volume expires the old data, enabling the physical space that is occupied by the old data to be reclaimed later.
With the Delete Expired Volume Data setting, the data that is associated with volumes that have been returned to scratch is expired after a specified time period, and its physical space in tape can be reclaimed.
The Expire Time parameter specifies the amount of time, in hours, days, or weeks, that the data continues to be managed by the TS7700 after a logical volume is returned to scratch, before the data that is associated with the logical volume is deleted. A minimum of 1 hour and a maximum of 2,147,483,647 hours (approximately 244,983 years) can be specified.
Specifying a value of zero means that the data that is associated with the volume is to be managed as it was before the addition of this option, which means that it is never deleted. In essence, specifying a nonzero value provides a “grace period” from when the virtual volume is returned to scratch until its associated data is eligible for deletion. A separate Expire Time can be set for each category that is defined as scratch.
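For example (hypothetical values): if a scratch category is defined with an Expire Time of 24 hours and a volume is returned to scratch at 08:00 on Monday, the volume data remains managed, and recoverable by moving the volume back to a private category, until at least 08:00 on Tuesday. Only then does the volume become a candidate for the hourly delete-expire processing, subject also to the 12-hour conditions that are listed earlier in this section.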
 
Remember: Scratch categories are global settings within a multi-cluster grid. Therefore, each defined scratch category and the associated Delete Expire settings are valid on each cluster of the grid.
The Delete Expired Volume Data setting applies also to disk only clusters. If it is not used, logical volumes that have been returned to scratch are still considered active data, allocating physical space in the TVC. Therefore, setting an expiration time on a disk only TS7700 is important to maintain an effective cache usage by deleting expired data.
Expire Time
Figure 9-109 on page 555 shows how to select the amount of time (in hours, days, weeks, or years) for the Expire Time parameter that is related to a given scratch category. A separate Expire Time can be set for each category that is defined as scratch. A minimum of 1 hour and a maximum of 2,147,483,647 hours (approximately 244,983 years) can be specified.
 
Note: The value 0 is not a valid entry on the dialog box for Expire Time on the Add Category page. Use No Expiration instead.
Establishing the Expire Time for a volume occurs as a result of specific events or actions. The following list shows possible events or actions and their effect on the Expire Time of a volume:
A volume is mounted
The data that is associated with a logical volume is not deleted, even if it is eligible, if the volume is mounted. Its Expire Time is set to zero, meaning it will not be deleted. It is reevaluated for deletion when its category is assigned.
A volume’s category is changed
Whenever a volume is assigned to a category, including assignment to the same category in which it currently exists, it is reevaluated for deletion.
Expiration
If the category has a nonzero Expire Time, the volume’s data is eligible for deletion after the specified time period, even if its previous category had a different nonzero Expire Time.
No action
If the category to which the volume is assigned has an Expire Time of zero, the volume’s data is no longer eligible for deletion, even if the volume’s previous category had a nonzero Expire Time or the volume was already eligible for deletion (but had not yet been selected for deletion). Its Expire Time is set to zero.
A category’s Expire Time is changed
If a user changes the Expire Time value through the scratch categories menu on the TS7700 MI, the volumes that are assigned to that category are reevaluated for deletion.
Expire Time is changed from nonzero to zero
If the Expire Time is changed from a nonzero value to zero, volumes that are assigned to the category that currently have a nonzero Expire Time are reset to an Expire Time of zero. If a volume was already eligible for deletion, but had not been selected for deletion, the volume’s data is no longer eligible for deletion.
Expire Time is changed from zero to nonzero
Volumes that are assigned to the category continue to have an Expire Time of zero. Volumes that are assigned to the category later will have the specified nonzero Expire Time.
Expire Time is changed from nonzero to nonzero
Volumes assigned for that category are reevaluated for deletion. Volumes that are assigned to the category later will have the updated nonzero Expire Time.
After a volume’s Expire Time is reached, it is eligible for deletion. However, not all data that is eligible for deletion is deleted in the hour that it first becomes eligible. Once an hour, the TS7700 selects up to 1,000 eligible volumes for data deletion. The volumes are selected based on the time that they became eligible, with the oldest ones being selected first.
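As a simple illustration of this rate: if 5,000 volumes become eligible at the same time and the TS7700 deletes up to 1,000 volumes per hour, the last of those volumes is deleted roughly 5 hours after the first hourly selection pass.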
Defining TS7700 constructs
To use the Outboard Policy Management functions, four constructs must be defined:
Storage Group (SG)
Management Class (MC)
Storage Class (SC)
Data Class (DC)
These construct names are passed down from the z/OS host and stored with the logical volume. The actions that are defined for each construct are performed by the TS7700. For non-z/OS hosts, the constructs can be manually assigned to logical volume ranges.
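For illustration, on z/OS these construct names are typically assigned by the DFSMS automatic class selection (ACS) routines. The following fragment is a minimal sketch only; the construct names DCTS7K and SGTS7K are hypothetical and must match constructs that are defined both in SMS and in the TS7700 MI:

PROC STORGRP                    /* Storage group ACS routine          */
  IF &DATACLAS = 'DCTS7K' THEN  /* Data bound for the TS7700          */
    SET &STORGRP = 'SGTS7K'     /* Name is passed to the TS7700,      */
END                             /* where it selects the storage pool  */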
Storage Groups
On the z/OS host, the SG construct determines into which tape library a logical volume is written. Within the TS7700T, the SG construct defines the storage pool in which the logical volume is placed.
Even before the first SG is defined, there is always at least one SG present. This is the default SG, which is identified by eight dashes (--------). This SG cannot be deleted, but it can be modified to point to another storage pool. Up to 256 SGs, including the default, can be defined.
Use the window that is shown in Figure 9-110 to add, modify, and delete an SG used to define a primary pool for logical volume premigration.
Figure 9-110 Storage Groups
The SGs table displays all existing SGs available for a selected cluster. See “Storage Groups window” on page 456 for details about this page.
Management Classes
Dual copy for a logical volume within the same TS7700T can be defined in the Management Classes window. In a grid configuration, a typical choice is to copy logical volumes over to the other cluster rather than creating a second copy in the same TS7700T.
However, in a stand-alone configuration, the dual copy capability can be used to protect against media failures. The second copy of a volume can be in a pool that is designated as a Copy Export pool. For more information, see 2.3.32, “Copy Export” on page 95.
If you want to have dual copies of selected logical volumes, you must use at least two storage pools because the copies cannot be written to the same storage pool as the original logical volumes.
A default MC is always available. It is identified by eight dashes (--------) and cannot be deleted. You can define up to 256 MCs, including the default. Use the window that is shown in Figure 9-111 to define, modify, or delete the MC that defines the TS7700 copy policy for volume redundancy.
The Current Copy Policy table displays the copy policy in force for each component of the grid. If no MC is selected, this table is not visible. You must select an MC from the MCs table to view copy policy details.
Figure 9-111 shows the MCs table.
Figure 9-111 Management Classes
The MCs table (Figure 9-111) displays defined MC copy policies that can be applied to a cluster. You can use the MCs table to create a new MC, modify an existing MC, and delete one or more existing MCs. See “Management Classes window” on page 457 for information about how to use the Management Classes window.
Storage Classes
By using the SC construct, you can influence when a logical volume is removed from cache, and assign Cache Partition Residency for logical volumes in a TS7700T cluster.
Use the window that is shown in Figure 9-112 to define, modify, or delete an SC that is used by the TS7700 to automate storage management through the classification of data sets and objects.
Figure 9-112 Storage Classes window on a TS7700
The SCs table displays the defined SCs that are available to data sets and objects within a cluster. Although SCs are visible from all TS7700 clusters, only those clusters that are attached to a physical library can alter TVC preferences. A stand-alone TS7700 cluster that does not possess a physical library does not remove logical volumes from the tape cache, so the TVC preference for disk-only clusters is always Preference Level 1.
See “Storage Classes window” on page 460 for detailed information.
Data Classes
From a z/OS perspective (SMS-managed tape), the DFSMS DC defines the following information:
Media type parameters
Recording technology parameters
Compaction parameters
For the TS7700, only the Media type, Recording technology, and Compaction parameters are used. The use of larger logical volume sizes is controlled through DC. Also, LWORM policy assignments are controlled from Data Classes.
Starting with R4.1.2, you can select the compression method for logical volumes per Data Class policy. The compression method that you choose is applied to the z/OS host data that is written to logical volumes that belong to the specific Data Class. The following options are available:
FICON compression: This is the traditional method in place for the TS7700 family, where the compression is performed by the FICON adapters (also known as hardware compression). This algorithm uses no cluster processing resources, but has a lower compression ratio compared with the newer compression methods.
LZ4 Compression: Improved compression method implemented with R4.1.2, using an LZ4 algorithm. This compression method is faster and uses fewer cluster processing resources than the ZSTD method, but it delivers a lower compression ratio than ZSTD.
ZSTD Compression: Improved compression method implemented with R4.1.2, using a zStandard algorithm. This compression method delivers a higher compression ratio than LZ4, but is slower and uses more cluster processing resources than LZ4.
Use the window that is shown in Figure 9-113 to define, modify, or delete a TS7700 DC. The DC is used to automate storage management through the classification of data sets.
Figure 9-113 Data Classes window
See “Data Classes window” on page 463 to see more details about how to create, modify, or delete a Data Class.
Activating a TS7700 license key for a new Feature Code
Use the Feature Licenses page to view information about feature licenses that are currently installed on a TS7700 cluster, or to activate or remove feature licenses from the IBM TS7700.
Figure 9-114 shows an example of the Feature Licenses window.
Figure 9-114 Feature Licenses window
See “Feature licenses” on page 487 for more details about how to use this window.
Defining Simple Network Management Protocol
SNMP is one of the notification channels that the IBM TS7700 uses to inform the user that an unexpected occurrence, malfunction, or event has happened. Use the window that is shown in Figure 9-115 to view or modify the SNMP settings that are configured on a TS7700 cluster.
Figure 9-115 SNMP settings
For more information about how to use this page, see “Simple Network Management Protocol” on page 488.
Enabling IPv6
IPv6 and Internet Protocol Security (IPSec) are supported by the TS7700 clusters.
 
Tip: The client network must use either IPv4 or IPv6 for all functions, such as MI, key manager server, SNMP, Lightweight Directory Access Protocol (LDAP), and NTP. Mixing IPv4 and IPv6 is not currently supported.
Figure 9-116 shows the Cluster Network Settings window from which you can enable IPv6.
Figure 9-116 Cluster Network Settings
For more information about how to use Cluster Network Settings window, see “Cluster network settings” on page 485.
Enabling IPSec
The current configurations of the TS7700 clusters support IPSec over the grid links.
 
Caution: Enabling grid encryption significantly affects the performance of the TS7700.
Figure 9-117 shows how to enable the IPSec for the TS7700 cluster.
Figure 9-117 Enabling IPSec in the grid links
In a multi-cluster grid, the user can choose which link is encrypted by selecting the check boxes in front of the beginning and ending clusters for the selected link. Figure 9-117 shows a two-cluster grid, which is why there is only one option to select.
For more information about IPSec configuration, see “Cluster network settings” on page 485. Also, see IBM Knowledge Center:
Defining security settings
Role-based access control (RBAC) is a general security model that simplifies administration by assigning roles to users and then assigning permissions to those roles. LDAP is a protocol that implements an RBAC methodology.
The TS7700 supports RBAC through the System Storage Productivity Center or by native LDAP by using Microsoft Active Directory (MSAD) or IBM Resource Access Control Facility (RACF).
For information about setting up and checking the security settings for the TS7700 grid, see “Security Settings window” on page 466.
9.5.4 TS7700 multi-cluster definitions
The following sections describe TS7700 multi-cluster definitions.
Defining grid copy mode control
When upgrading a stand-alone cluster to a grid, FC4015, Grid Enablement, must be installed on all clusters in the grid. Also, the Copy Consistency Points in the MC definitions on all clusters in the new grid should be set.
The data consistency point is defined in the MC construct definition through the MI. This task can be performed for an existing grid system. In a stand-alone cluster configuration, the Modify MC definition displays only the lone cluster.
See “Management Classes window” on page 457 for details about how to modify the copy consistency by using the Copy Action table.
Figure 9-118 shows an example of how to modify the copy consistency by using the Copy Action table, and then clicking OK. In the figure, the TS7700 is part of a three-cluster grid configuration. This additional menu is displayed only if a TS7700 is part of a grid environment (options are not available in a stand-alone cluster).
Figure 9-118 Modify Management Classes
For more information about how to modify the copy consistency by using the Copy Action table, see “Management Classes window” on page 457.
For more information about this subject, see the following resources:
IBM Virtualization Engine TS7700 Series Best Practices - TS7700 Hybrid Grid Usage, WP101656:
IBM TS7700 Series Best Practices - Copy Consistency Points, WP101230:
IBM TS7700 Series Best Practices - Synchronous Mode Copy, WP102098:
Defining Copy Policy Override settings
With the TS7700, the optional override settings that influence the selection of the I/O TVC and replication responses can be defined and set.
 
Reminder: The items on this window can modify the cluster behavior regarding local copies and certain I/O operations. Some LI REQ commands can also modify this behavior.
The settings are specific to a cluster in a multi-cluster grid configuration, which means that each cluster can have separate settings, if needed. The settings take effect for any mount requests that are received after the settings were saved. Mounts already in progress are not affected by a change in the settings. The following settings can be defined and set:
Prefer local cache for scratch mount requests
Prefer local cache for private mount requests
Force volumes that are mounted on this cluster to be copied to the local cache
Enable fewer RUN consistent copies before reporting RUN command complete
Ignore cache preference groups for copy priority
For more information about how to view or modify these settings, see “Copy Policy Override” on page 491.
Defining scratch mount candidates
Scratch allocation assistance (SAA) is an extension of the device allocation assistance (DAA) function for scratch mount requests. SAA filters the list of clusters in a grid to return to the host a smaller list of candidate clusters that are designated as scratch mount candidates.
Scratch mount candidates can be defined in a grid environment with two or more clusters. For example, in a hybrid configuration, the SAA function can be used to direct certain scratch allocations (workloads) to one or more TS7720 clusters for fast access. Other workloads can be directed to a TS7740 or TS7720T for archival purposes.
Clusters not included in the list of scratch mount candidates are not used for scratch mounts at the associated MC unless those clusters are the only clusters that are known to be available and configured to the host.
See Chapter 10, “Host Console operations” on page 585 for information about software levels that are required by SAA and DAA to function properly, in addition to the LI REQ commands that are related to the SAA and DAA operation.
Figure 9-119 shows an example of an MC. Select which clusters are candidates by MC. If no clusters are selected, the TS7700 defaults to all clusters being candidates.
Figure 9-119 Scratch mount candidate list in an existing Management Class
Each cluster in a grid can provide a unique list of candidate clusters. Clusters with an ‘N’ copy mode, such as for cross-cluster mounts, can still be candidates. When defining the scratch mount candidates in an MC, you normally want each cluster in the grid to provide the same list of candidates for load balancing. See “Management Classes window” on page 457 for more details about how to create or change an MC.
 
Note: The scratch mount candidate list that is defined in the MI (Figure 9-119) is honored only after it is enabled by using the LI REQ setting, as shown in the following example.
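A hedged example follows; COMPLIB1 is a placeholder composite library name, and the SETTING,DEVALLOC,SCRATCH keywords are those documented for the Host Console Request function (verify them for your code level):

LIBRARY REQUEST,COMPLIB1,SETTING,DEVALLOC,SCRATCH,ENABLE     (enable SAA candidate filtering)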
Retain Copy mode
Retain Copy mode is an optional setting in which the existing Copy Consistency Points for a volume are accepted, rather than applying the Copy Consistency Points that are defined at the mounting cluster. It applies to private volume mounts for reads or write appends. It is used to prevent more copies of a volume being created in the grid than wanted. This is important in a grid with three or more clusters that has two or more clusters online to a host.
Figure 9-119 also shows the Retain copy mode option in the TS7700 MI window.
For more information about how to create or change a MC, see “Management Classes window” on page 457.
 
Note: The Retain Copy mode option is effective only on private (non-scratch) virtual volume mounts.
Defining cluster families
Cluster families can be defined in a grid with three or more clusters.
This function introduces a concept of grouping clusters together into families. Using cluster families, a common purpose or role can be assigned to a subset of clusters within a grid configuration. The role that is assigned, for example, production or archive, is used by the TS7700 Licensed Internal Code to make improved decisions for tasks, such as replication and TVC selection. For example, clusters in a common family are favored for TVC selection, or replication can source volumes from other clusters within its family before using clusters outside of its family.
Use the Cluster Families option on the Actions menu of the Grid Summary window to add, modify, or delete a cluster family.
Figure 9-120 shows an example of how to create a cluster family using the TS7700 MI.
Figure 9-120 Create a cluster family
For more information about how to create, modify, or delete families on the TS7700 MI, see “Cluster Families window” on page 360.
TS7700 cache thresholds and removal policies
This topic describes the boundaries (thresholds) of free cache space in a disk-only TS7700 or TS7700T CP0 partition cluster and the policies that can be used to manage available (active) cache capacity in a grid configuration.
Cache thresholds for a disk only TS7700 or TS7700T resident partition (CP0)
A disk-only TS7700 and the resident partition (CP0) of a TS7700T (tape-attach) configuration do not attach to a physical library. All virtual volumes are stored in the cache. Three thresholds define the active cache capacity in a TS7700 and determine the state of the cache as it relates to remaining free space. In ascending order of occurrence, these are the three thresholds:
Automatic Removal
This policy removes the oldest logical volumes from the disk-only TS7700 cache if a consistent copy exists elsewhere in the grid. This state occurs when the cache is 4 TB below the out-of-cache-resources threshold. In the automatic removal state, the TS7700 automatically removes volumes from the disk-only cache to prevent the cache from reaching its maximum capacity. This state is identical to the limited-free-cache-space-warning state unless the Temporary Removal Threshold is enabled.
Automatic removal can be disabled within any specific TS7700 cluster by using the following library request command:
LIBRARY REQUEST,CACHE,REMOVE,{ENABLE|DISABLE}
 
So that a disaster recovery test can access all production host-written volumes, automatic removal is temporarily disabled while disaster recovery write protect is enabled on a disk-only cluster. When the write protect state is lifted, automatic removal returns to normal operation.
Limited free cache space warning
This state occurs when there is less than 3 TB of free space remaining in the cache. After the cache passes this threshold and enters the limited-free-cache-space-warning state, write operations can use only an extra 2 TB before the out-of-cache-resources state is encountered. When a TS7700 cluster enters the limited-free-cache-space-warning state, it remains in this state until the amount of free space in the cache exceeds 3.5 TB. The following messages can be displayed on the MI during the limited-free-cache-space-warning state:
 – HYDME0996W
 – HYDME1200W
For more information, see IBM Knowledge Center:
 
Clarification: Host writes to the disk only TS7700 cluster and inbound copies continue during this state.
Out of cache resources
This state occurs when there is less than 1 TB of free space remaining in the cache. After the cache passes this threshold and enters the out-of-cache-resources state, it remains in this state until the amount of free space in the cache exceeds 3.5 TB. When a TS7720 cluster is in the out-of-cache-resources state, volumes on that cluster become read-only and one or more out-of-cache-resources messages are displayed on the MI. The following messages can display:
 – HYDME0997W
 – HYDME1133W
 – HYDME1201W
For more information, see IBM Knowledge Center:
https://ibm.biz/Bd24nk
 
Clarification: New host allocations do not choose a disk only cluster in this state as a valid TVC candidate. New host allocations that are sent to a TS7700 cluster in this state choose a remote TVC instead. If all valid clusters are in this state or cannot accept mounts, the host allocations fail. Read mounts can choose the disk only TS7700 cluster in this state, but modify and write operations fail. Copies inbound to this cluster are queued as Deferred until the disk only cluster exits this state.
Table 9-21 lists the start and stop thresholds for each of the active cache capacity states defined.
Table 9-21 Active cache capacity state thresholds

Automatic removal:
 – Enter state (free space available): Less than 4 TB
 – Exit state (free space available): More than 4.5 TB
 – Host message displayed: CBR3750I when automatic removal begins

Limited free cache space warning (CP0 for a TS7700 tape attach):
 – Enter state: 3 TB or less, or 15% or less of the size of cache partition 0, whichever is less
 – Exit state: More than 3.5 TB, or more than 17.5% of the size of cache partition 0, whichever is less
 – Host messages displayed: CBR3792E upon entering the state; CBR3793I upon exiting the state

Out of cache resources (CP0 for a TS7700 tape attach):
 – Enter state: Less than 1 TB, or 5% or less of the size of cache partition 0, whichever is less
 – Exit state: More than 3.5 TB, or more than 17.5% of the size of cache partition 0, whichever is less
 – Host messages displayed: CBR3794A upon entering the state; CBR3795I upon exiting the state

Temporary removal (when enabled):
 – Enter state: Less than (X + 1 TB), where X is the value set by the TVC window on the specific cluster
 – Exit state: More than (X + 1.5 TB)
 – Host message displayed: Console message
The Removal policy is set by using the SC window on the TS7700 MI. Figure 9-121 shows several definitions in place.
Figure 9-121 Storage Classes in TS7700 with removal policies
To add or change an SC, select the appropriate action in the menu, and click Go (see Figure 9-122).
Figure 9-122 Define a new Storage Class with TS7700
Removal Threshold
The Removal Threshold is used to prevent a cache overrun condition in a disk only TS7700 cluster that is configured as part of a grid. By default, it is a 4 TB value (3 TB fixed, plus 1 TB) that, when taken with the amount of used cache, defines the upper limit of a TS7700 cache size. Above this threshold, logical volumes begin to be removed from a disk only TS7700 cache.
 
Note: Logical volumes are only removed if there is another consistent copy within the grid.
Logical volumes are removed from a disk only TS7700 cache in the following order:
1. Volumes in scratch categories
2. Private volumes least recently used, by using the enhanced Removal policy definitions
After removal begins, the TS7700 continues to remove logical volumes until the Stop Threshold is met. The Stop Threshold is the Removal Threshold minus 500 GB. A particular logical volume cannot be removed from a disk only TS7700 cache until the TS7700 verifies that a consistent copy exists on a peer cluster. If a peer cluster is not available, or a volume copy has not yet completed, the logical volume is not a candidate for removal until the appropriate number of copies can be verified later.
 
Tip: This field is only visible if the selected cluster is a disk only TS7700 in a grid configuration.
Temporary Removal Threshold
The Temporary Removal Threshold lowers the default Removal Threshold to a value lower than the Stop Threshold. This resource might be useful in preparation for disaster recovery testing with FlashCopy, or in anticipation of service activity on a member of the grid. Logical volumes might need to be removed to create extra room in cache for FlashCopy volumes that are present during a DR rehearsal, or before one or more clusters go into service mode.
When a cluster in the grid enters service mode, the ability of the remaining clusters to make or validate copies and to automatically remove logical volumes can be affected. Over an extended period, and in the worst case, this situation might result in a disk-only cluster running out of cache resources. The Temporary Removal Threshold resource helps prevent this possibility.
 
Note: The Temporary Removal Threshold is not supported on the TS7740.
The lower threshold creates extra free cache space, which enables the disk-only TS7700 to accept any host requests or copies during the DR testing or service outage without reaching its maximum cache capacity. The Temporary Removal Threshold value must be equal to or greater than the expected amount of compressed host workload written, copied, or both to the disk-only cluster or CP0 partition during the service outage.
The default Temporary Removal Threshold is 4 TB, which provides 5 TB (4 TB plus 1 TB) of existing free space. The threshold can be set to any value between 2 TB and full capacity minus 2 TB.
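As a hypothetical sizing example: if approximately 6 TB of compressed host workload is expected to be written or copied to a disk-only cluster while a peer cluster is in service, the Temporary Removal Threshold on that cluster should be set to at least 6 TB, per the guidance above. The 4 TB default would not create enough free space in that case.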
All disk-only TS7700 clusters or CP0 partitions in the grid that remain available automatically lower their Removal Thresholds to the Temporary Removal Threshold value that is defined for each. Each cluster can use a different Temporary Removal Threshold. The default Temporary Removal Threshold value is 4 TB (1 TB more than the default removal threshold of 3 TB).
Each disk-only TS7700 cluster or CP0 partition uses its defined value until the cluster that started the removal process enters service mode, or until the temporary removal process is canceled. The cluster that initiates the temporary removal process (either a cluster within the grid that is not part of the DR testing, or the one that is scheduled to go into service) does not lower its own removal threshold during this process.
 
Note: The cluster that is elected to initiate Temporary Removal process is not selectable in the list of target clusters for the removal action.
Removal policy settings can be configured by using the Temporary Removal Threshold option on the Actions menu, which is available on the Grid Summary window of the TS7700 MI.
Figure 9-123 shows the Temporary Removal Threshold mode window.
Figure 9-123 Selecting cluster to start removal process and temporary removal threshold levels
The Temporary Removal Threshold mode window includes these options:
Enable Temporary Thresholds
Check this box and click OK to start the pre-removal process. Clear this box and click OK to abandon a current pre-removal process.
Initiate the removal process from (cluster to be serviced):
Select from this menu the cluster that will be put into service mode. The pre-removal process is started from this cluster.
 
Note: This process does not initiate Service Prep mode.
Even when the temporary removal action is started from a disk-only cluster, that cluster is still not selectable on the drop-down menu of the TS7700 List Subject to Auto Removal, because the removal action does not affect the initiating cluster.
This area of the window contains each disk-only TS7700 cluster or CP0 partition in the grid and a field to set the temporary removal threshold for that cluster.
 
Note: The Temporary Removal Threshold task ends when the originator cluster enters in Service mode, or the task is canceled on the Tasks page in MI.
The Temporary Removal Threshold is not supported on the TS7740 cluster.
9.6 Basic operations
This section explains the tasks that might be needed during the operation of a TS7700.
9.6.1 Clock and time setting
The TS7700 time can be set from an NTP server or by the IBM SSR. It is set to Coordinated Universal Time. See “Date and Time coordination” on page 64 for more details about time coordination.
 
Note: Use Coordinated Universal Time in all TS7700 clusters whenever possible.
The TS4500 tape library time can be set from the Management Interface, as shown in Figure 9-124. Notice that the TS4500 can be synchronized with an NTP server, when available.
More information about the TS4500 tape library is available locally in the TS4500 GUI by clicking the question mark icon, or at IBM Knowledge Center.
Figure 9-124 Adjusting date and time at TS4500 GUI
On the TS3500 tape library, the time can be set from the IBM Ultra Scalable Specialist work items by clicking Library → Date and Time, as shown in Figure 9-125.
Figure 9-125 TS3500 tape library GUI Date and Time
9.6.2 Library in Pause mode
During operation, the tape library can be paused, which might affect the related tape-attached cluster, whether or not it is part of a grid. The reasons for the pause can include an enclosure door that is opened to clear a device after a load/unload failure, or to remove cartridges from the high capacity I/O station. The following message is displayed at the host when a library is in Pause or manual mode:
CBR3757E Library library-name in {paused | manual mode} operational state
During Pause mode, all recalls and physical mounts are held up and queued by the TS7740 or TS7720T for later processing when the library leaves Pause mode. Because both scratch mounts and private mounts with data in the cache are allowed to run, but physical mounts are not, no more data can be moved out of the cache after the currently mounted stacked volumes are filled.
During an unusually long pause (such as a physical tape library outage), the cache continues to fill with data that has not been migrated to physical tape volumes. In extreme cases, this might lead to significant throttling and the stopping of any mount activity in the TS7740 cluster or in the partitions of the TS7700T cluster.
For this reason, it is important to minimize the amount of time that is spent with the library in Pause mode.
9.6.3 Preparing a TS7700 for service
The following message is posted to all hosts when the TS7700 Grid is in this state:
CBR3788E Service preparation occurring in library library-name.
Starting with the R4.1.2 level of code, the TS7700 supports Control Unit Initiated Reconfiguration (CUIR). A library request command is used to enable notifications from the TS7700 to the host, which allows the CUIR automation to minimize operator intervention when preparing the TS7700 for service. When service-prep is initiated on the cluster, a Distributed Library Notification is surfaced from the cluster to prompt the attached host to vary off the devices automatically after the following conditions are met:
All clusters in the grid are at microcode level 8.41.200.xx or later.
The attached host logical partition supports CUIR function.
The CUIR function is enabled from the command line, by using the following LI REQ command: LIBRARY REQUEST,library-name,CUIR,SETTING,SERVICE,ENABLE
 
Tip: Before starting service preparation on the TS7700, all virtual devices on this cluster must be in the offline state with regard to the accessing hosts. Pending offline devices (logical volumes that are mounted to a local or remote TVC) with active tasks should be allowed to finish execution and unload their volumes, completing the transition to the offline state.
Virtual devices in other clusters should be made online to provide mount point to new jobs, shifting workload to other clusters in the grid before start service preparation. After scheduled maintenance finishes and TS7700 can be taken out of service, then virtual devices can be varied back online for accessing hosts.
When service is canceled and the local cluster comes online, a Distributed Library Notification is surfaced from the cluster to prompt the attached host to vary on the devices automatically after the following conditions are met:
All clusters in the grid are at microcode level 8.41.200.xx or later.
The attached host logical partition supports CUIR function.
The CUIR function is enabled from the command line, by using the following LI REQ command: LIBRARY REQUEST,library-name,CUIR,SETTING,SERVICE,ENABLE
The AONLINE notification is enabled from the command line, by using the following LI REQ command:
LIBRARY REQUEST,library-name,CUIR,AONLINE,SERVICE,ENABLE
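As a combined illustration of the two enablement commands above, and assuming a hypothetical library name of ATLIB01, both notifications can be enabled as follows:
LIBRARY REQUEST,ATLIB01,CUIR,SETTING,SERVICE,ENABLE
LIBRARY REQUEST,ATLIB01,CUIR,AONLINE,SERVICE,ENABLE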
If the AONLINE notification is disabled by using the LI REQ command LIBRARY REQUEST,library-name,CUIR,AONLINE,SERVICE,DISABLE, a blue informational icon is shown in the lower-left corner of the cluster image to alert the user that the cluster's devices need to be varied online. In this case, devices can be varied online from the Actions menu:
1. Select Vary Devices Online from the Actions menu.
2. Select the radio button next to the cluster whose devices you want to vary online, and then click OK. Click Cancel to exit without varying the devices online.
For more information about service preparation, see 10.2, “Messages from the library” on page 607.
If CUIR is not in place, all the host actions, such as varying devices offline or online, must be performed manually across all LPARs and sysplexes that are attached to the cluster, as shown in the following sketch.
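In that case, standard MVS commands can be used on each LPAR. This is a minimal sketch, assuming a hypothetical virtual device range of 1000-10FF for the cluster that is entering service:
V 1000-10FF,OFFLINE (before service preparation is initiated)
V 1000-10FF,ONLINE (after the cluster exits service mode)
The devices remain pending offline until active jobs finish and the volumes unload, as noted in the Tip earlier in this section.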
Preparing the tape library for service
If the physical tape library in a TS7700 grid must be serviced, the effect on the associated cluster must be evaluated, and a decision must be made about whether to bring the associated cluster (TS7740 or TS7700T) into service mode. The decision depends on the duration of the planned outage for the physical tape library, the role of the tape-attached cluster or partition in the particular grid architecture, the policies in force within the grid, and so on.
There might be cases where the best option is to prepare the cluster (TS7740 or TS7700T) for service before servicing the TS3500 tape library. In addition, there might be other scenarios where the preferred option is to service the tape library without bringing the associated cluster in service.
Work with the IBM SSR to identify which is the preferred option in a specific case.
For information about how to set the TS7700 to service preparation mode, see “Cluster Actions menu” on page 365.
9.6.4 The Tape Library inventory
An inventory of the TS4500 tape library can be started from its management GUI, as shown in Figure 9-126. A full tape library inventory or a frame inventory can be selected.
Figure 9-126 TS4500 inventory options
A partial inventory can be performed for any specific frame of the tape library: Left-click the desired frame on the tape library image to select it (it changes color), right-click to display the options, and then select Inventory Frame from the list.
A complete tape library inventory can be started from the Actions button at the top of the page. Both options open a dialog box that asks whether to scan tiers 0 and 1, or all tiers.
The Scan tier 0 and tier 1 option checks the cartridges in the doors and the outer layer of cartridges on the walls of the library; other tiers are scanned only if a discrepancy is found. This is the preferred option for normal tape library operations, and it runs concurrently with library activity.
The Scan all tiers option performs a full library inventory, shuffling and scanning all cartridges in all tiers. This option is not concurrent (even when it is selected for a single frame) and can take a long time to complete, depending on the number of cartridges in the library. Use Scan all tiers only when a full inventory of the tape library is required.
9.6.5 Inventory upload
For more information about an inventory upload, see “Physical Volume Ranges window” on page 443.
Click Inventory Upload to synchronize the physical cartridge inventory from the attached tape library with the TS7700T database.
 
Note: Perform the Inventory Upload from the TS3500 tape library to all TS7700T clusters that are attached to that tape library whenever a library door is closed, a manual inventory or an Inventory with Audit is run, or a TS7700 cluster is varied online from an offline state.
9.7 Cluster intervention scenarios
This section describes some operator intervention scenarios that might occur. Most errors that require operator attention are reported on the MI or through a Host Notification, which is enabled from the Events window of the MI. For a sample Event message that requires operator intervention, see Figure 9-127.
Figure 9-127 Example of an operator intervention
9.7.1 Hardware conditions
Some potential hardware attention scenarios are described in the following sections. The main reference for the operational and recovery procedures is the IBM TS7700 R4.0 IBM Knowledge Center, which is available directly from the TS7700 MI by clicking the question mark symbol in the upper-right corner of the top bar of the MI, or at IBM Knowledge Center.
Most of the unusual conditions are reported to the host through Host Notification (which is enabled on the Events MI window). In a z/OS environment, these conditions generate the host message CBR3750I Message from library library-name: message-text, which identifies the source of the message and provides information about the failure, intervention, or specific operation that the TS7700 library is bringing to your attention.
The meaningful information that is provided by the tape library (the TS7700 in this book) is contained in the message-text field, which can hold up to 240 characters. This field includes a five-character message ID that can be examined by message automation software to filter the events that should get operator attention. The message ID classifies the reported event by its potential impact on operations. The categories are critical, serious, impact, warning, and information. For more information, see the IBM TS7700 R4.0 IBM Knowledge Center.
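For example, the OP0123 event that is described later in this section reaches the console embedded in the CBR3750I message, with the five-character message ID at the start of the message text (the library name ATLIB01 is illustrative):
CBR3750I MESSAGE FROM LIBRARY ATLIB01: OP0123 Physical volume in read-only status due to successive media errors.
Automation products can key on the OPxxxx identifier to route serious and critical events to the operator.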
For more information about these informational messages, see the IBM TS7700 Series Operator Informational Messages White Paper, WP101689.
IBM 3592 tape drive failure (TS7740 or TS7700T)
When the TS7700 determines that one of its tape drives is not operating correctly and requires service (due to read/write errors, a fibre interface problem, or another hardware-related reason), the drive is marked offline and an IBM SSR must be engaged. The following intervention-required message is displayed on the Library Manager Console:
CBR3750I MESSAGE FROM LIBRARY lib: Device xxx made unavailable by a VTS. (VTS z)
Operation of the TS7700 continues with a reduced number of drives until the repair action on the drive is complete. To recover, the IBM SSR repairs the failed tape drive and makes it available for the TS7700 to use again.
Physical volume in read-only status
The message OP0123 Physical volume in read-only status due to successive media errors reports that a specific cartridge belonging to the TS7740 or TS7700T has exceeded the media error threshold, encountered a permanent error during write or read operations, or is damaged. The faulty condition is reported by the tape drive to the cluster, and the cartridge is flagged Read-Only by the cluster code. Read-only status means that no new data is written to the suboptimal media.
By default, such a cartridge is handled by an internal function of the TS7700 named Automated Read-Only Recovery (ROR). Make sure that the IBM SSR has enabled Automated ROR in the cluster. Automated ROR is the process by which hierarchical storage management (HSM) recalls all active data from a physical volume that has exceeded its error thresholds, encountered a permanent error, or is damaged.
This process extracts all active data (the active logical volumes) that is contained in the read-only cartridge. When all active logical volumes are successfully retrieved from the cartridge, the Automated ROR process ejects the suboptimal physical cartridge from the tape library, ending the recovery process successfully. Message OP0100 A read-only status physical volume xxxxxx has been ejected or OP0099 Volser XXXXXX was ejected during recovery processing reports that the volume was ejected successfully.
After the ejection is complete, the cartridge VOLID is removed from the TS7700 physical cartridges inventory.
 
Note: By design, the tape attach TS7700 never ejects a cartridge that contains any active data. Before a physical cartridge can be ejected, all of its active data must be moved to another physical cartridge in the same pool; only then is the cartridge ejected.
If the Automated ROR process successfully ejects a read-only cartridge, no further action is needed, except to insert a new cartridge to replace the ejected one.
The ROR ejection task runs at a low priority to avoid affecting the production environment. The complete process, from the cartridge being flagged Read-Only to the OP0100 A read-only status physical volume xxxxxx has been ejected message that signals the end of the process, can take several hours (typically one day).
If the process fails to retrieve the active logical volumes from the cartridge because of damaged media or an unrecoverable read error, the next actions depend on the current configuration of the cluster, whether it is stand-alone or part of a multi-cluster grid.
In a grid environment, ROR reaches into the peer clusters to find a valid instance of each missing logical volume and automatically copies it back into the cluster, completing the active data recovery.
If recovery fails because no other consistent copy is available within the grid, or because this is a stand-alone cluster, the media is not ejected. Message OP0115 The cluster attempted unsuccessfully to eject a damaged physical volume xxxxxx is reported, along with OP0107 Virtual volume xxxxxx was not fully recovered from damaged physical volume yyyyyy for each logical volume that failed to be retrieved.
In this situation, the physical cartridge is not ejected. A decision must be made about the missing logical volumes that are reported by the OP0107 messages. Also, the contents of the defective cartridge can be verified through the MI Physical Volume Details window by clicking Download List of Virtual Volumes for the damaged physical volume. Check the list of logical volumes that are contained in the cartridge, and work with the IBM SSR to decide whether data recovery from the damaged tape should be attempted.
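The state of the suspect cartridge can also be queried from the host through the LI REQ interface. The following line is a sketch under stated assumptions: it presumes that the PVOL keyword is available at the installed code level, and the distributed library name DISTLIB1 and volser PV1234 are hypothetical:
LIBRARY REQUEST,DISTLIB1,PVOL,PV1234
The response reports attributes of the physical volume, including its current state.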
If those logical volumes are no longer needed, they should be returned to scratch by using the TMS on the IBM Z host. After this task is done, the IBM SSR can run the ROR process again for the defective cartridge (from the TS7700 internal maintenance window, or through an MI function). This time, because the logical volumes that were not retrieved no longer contain active data, the Automated ROR completes successfully, and the cartridge is ejected from the library.
 
Note: Subroutines of the same Automated ROR process are started to reclaim space in the physical volumes and to perform some MI functions, such as ejecting or moving physical volumes or ranges from the MI. Those cartridges are made read-only momentarily while the function runs, and they return to normal status at the end of the process.
Power failure
User data is protected during a power failure because it is stored on the TVC. Any host jobs that are reading from or writing to virtual tapes fail, just as they would with a real IBM 3490E, and they must be restarted after the TS7700 is available again.
When power is restored and stable, the TS7700 must be started manually. The TS7700 recovers access to the TVC by using information that is available from the TS7700 database and logs.
TS7700 Tape Volume Cache errors
Occasionally, a DDM or another component might fail in the TS7700 TVC. In this situation, the host is notified by the TS7700, and the operator sees the HYDIN0571E Disk operation in the cache is degraded message. Also, the MI shows the Health Status bar (lower-right corner) in yellow, warning about a degraded resource in the subsystem. A degraded TVC requires an IBM SSR engagement. The TS7700 continues to operate normally during the intervention.
The MI has improved the accuracy and comprehensiveness of Health Alert and Health Status messages. For example, newer alert messages report that a DDM failed in a specific cache drawer, compared to the generic degradation message of previous levels. Also, the MI shows enriched information in graphical format.
Accessor failure and manual mode (TS7740 or TS7700T)
If the physical tape library does not have dual accessors installed, a failure of the accessor leaves the library unable to mount physical volumes automatically. If the high-availability dual accessors are installed in the tape library, the second accessor takes over. The IBM SSR should then be notified to repair the failed accessor.
Gripper failure (TS7700T)
The TS3500 and TS4500 tape libraries have dual grippers. If a gripper fails, library operations continue with the remaining gripper. While the gripper is being repaired, the accessor is not available. If dual accessors are installed, the second accessor is used until the gripper is repaired. For more information about operating the tape library, see the documentation for the TS3500 or TS4500.
Out of stacked volumes (TS7700T)
If the tape library runs out of stacked volumes, copying to the 3592 tape drives fails, and an intervention-required message is sent to the host and the TS7700 MI. All further logical mount requests are delayed by the Library Manager until more stacked volumes are added to the tape library that is connected to the TS7700T. To recover, insert more stacked volumes; copy processing then continues.
 
Important: In a TS7700T cluster, only the tape attached partitions are affected.
Damaged cartridge pin
The 3592 has a metal pin that is grabbed by the feeding mechanism in the 3592 tape drive to load the tape onto the take-up spool inside the drive. If this pin gets dislodged or damaged, follow the instructions in IBM Enterprise Tape System 3592 Operators Guide, GA32-0465, to correct the problem.
 
Important: Repairing a 3592 tape must be done only for data recovery. After the data is moved to a new volume, eject the repaired cartridge from the TS7700 library.
Broken tape
If a 3592 tape cartridge is physically damaged and unusable (for example, the tape is crushed or the media is physically broken), a TS7740 or TS7700T that is configured as a stand-alone cluster cannot recover the contents. If the TS7700 cluster is part of a grid, the damaged tape contents (the active logical volumes) are retrieved from the other clusters, and the TS7700 brings those logical volumes back in automatically (provided that the logical volumes have another valid copy within the grid).
Otherwise, treat the cartridge as you would any other damaged tape media: Check the list of the logical volumes that are contained in the cartridge, and work with the IBM SSR to decide whether data recovery from the broken tape should be attempted.
Logical mount failure
When a mount request is received for a logical volume, the TS7700 determines whether the mount request can be satisfied and, if so, tells the host that it will process the request. Unless an error condition is encountered in the attempt to mount the logical volume, the mount operation completes and the host is notified that the mount was successful. With the TS7700, the way that a mount error condition is handled is different than with the prior generations of VTS.
With the prior generation of VTS, the VTS always indicated to the host that the mount completed, even if a problem occurred. When the first I/O command was sent, the VTS failed that I/O because of the error. This resulted in a failure of the job without any opportunity to correct the problem and try the mount again.
With the TS7700 subsystem, if an error condition is encountered during the execution of the mount, rather than indicating that the mount was successful, the TS7700 returns completion and reason codes to the host indicating that a problem was encountered. With DFSMS, the logical mount failure completion code results in the console messages shown in Example 9-2.
Example 9-2 Unsuccessful mount completion and reason codes
CBR4195I LACS RETRY POSSIBLE FOR JOB job-name
CBR4171I MOUNT FAILED. LVOL=logical-volser, LIB=library-name, PVOL=physical-volser, RSN=reason-code
...
CBR4196D JOB job-name, DRIVE device-number, VOLSER volser, ERROR CODE error-code. REPLY ’R’ TO RETRY OR ’C’ TO CANCEL
Reason codes provide information about the condition that caused the mount to fail:
For example, consider CBR4171I. Reason codes are documented in IBM Knowledge Center. As an exercise, assume RSN=32. IBM Knowledge Center describes this reason code as follows:
Reason code x’32’: Local cluster recall failed; the stacked volume is unavailable.
CBR4196D: Error code shows in the format 14xxIT:
 – 14 is the permanent error return code.
 – xx is 01 if the function was a mount request or 03 if the function was a wait request.
 – IT is the permanent error reason code, which determines the recovery action to be taken.
 – In this example, an error code of 140194 is possible, where xx=01 means that the mount request failed.
IT=94 means that the logical volume mount failed: An error was encountered during the execution of the mount request for the logical volume. The reason code that is associated with the failure is documented in CBR4171I. The title of the first book that is referenced below includes the range of message ID prefixes that it covers (CBD-DMO), but the prefixes themselves are not defined in the book.
For CBR messages, see z/OS MVS System Messages, Vol 4 (CBD-DMO), SA38-0671, for an explanation of the reason code and for specific actions that should be taken to correct the failure. See z/OS DFSMSdfp Diagnosis, SC23-6863, for OAM return and reason codes. Take the necessary corrective action and reply ‘R’ to try again. Otherwise, reply ‘C’ to cancel.
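For example, if the condition behind reason code x'32' (the stacked volume was unavailable) has since been cleared, the outstanding CBR4196D WTOR can be answered from the console; the reply ID of 05 is illustrative:
R 05,R
Replying C instead cancels the mount request.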
 
Tip: Always see the appropriate documentation (TS7700 IBM Knowledge Center and MVS System Messages) for the meaning of the messages and the applicable recovery actions.
Orphaned logical volume
A logical volume is orphaned when the TS7700 database has a reference to a logical volume but no reference to its physical location. This can result from hardware or internal processing errors. For more information about orphaned logical volume messages, contact your IBM SSR.
Internal-external label mismatch
If a label mismatch occurs, the stacked volume is ejected to the Convenience Input/Output Station, and the intervention-required condition is posted at the TS7740 or TS7700T MI and sent to the host console (see Example 9-3).
Example 9-3 Label mismatch
CBR3750I MESSAGE FROM LIBRARY lib: A stacked volume has a label mismatch and has been ejected to the Convenience Input/Output Station.
Internal: xxxxxx, External: yyyyyy
The host is notified that intervention-required conditions exist. Investigate the reason for the mismatch. If possible, relabel the volume to use it again.
Failure during reclamation
If a failure occurs during the reclamation process, recovery is managed internally by the TS7740 or TS7700T Licensed Internal Code, and no user action is needed.
Excessive temporary errors on stacked volume
When a stacked volume is determined to have an excessive number of temporary data errors, to reduce the possibility of a permanent data error, the stacked volume is placed in read-only status. The stacked physical volume goes through the ROR process and is ejected after all active data is recalled. This process is handled automatically by the TS7700.
9.7.2 TS7700 LIC processing failure
If a problem develops with the TS7700 Licensed Internal Code (LIC), the TS7700 sends an intervention-required message to the TS7700 MI and the host console, and attempts to recover. In the worst-case scenario, this situation involves a restart of the TS7700 itself. If the problem persists, contact your IBM SSR. The intervention-required message that is shown in Example 9-4 is sent to the host console.
Example 9-4 VTS software failure
CBR3750I MESSAGE FROM LIBRARY lib: Virtual Tape z Systems has a CHECK-1 (xxxx) failure
The TS7700 internal recovery procedures handle this situation and restart the TS7700. For more information, see Chapter 13, “Disaster recovery testing” on page 779.
Monitoring the health of the TS7700 by using the TS7700 MI
For more information about the health status of the various components of the cluster or grid, see “Grid health and details” on page 362, and “Cluster health and detail” on page 374.

1 When operating at code level 8.20.x.x or 8.21.0.63-8.21.0.119 with Storage Authentication Service enabled, a 5-minute web server outage occurs when a service person logs in to the machine.