Storage management hardware
The use of Data Facility Storage Management Subsystem (DFSMS) requires storage management hardware that includes both direct access storage device (DASD) and tape device types. This chapter provides an overview of both storage device categories and a brief introduction to RAID technology.
For many years, DASDs have been the most used storage devices on IBM eServer™ zSeries systems and their predecessors. DASDs deliver the fast random access to data and high availability that customers have come to expect. This chapter covers the following types of DASD:
Traditional DASD
DS8000 family
The era of tapes began before DASD was introduced. During that time, tapes were used as the primary application storage medium. Today, customers use tapes for such purposes as backup, archiving, and data transfer between companies. The following types of tape devices are described:
IBM TS1155
IBM TS4500
IBM TS7700
This chapter also briefly explains the storage area network (SAN) concept.
8.1 Overview of DASD types
In the era of traditional DASD, the hardware consisted of controllers, such as the 3880 and 3990, which contained the intelligent functions needed to operate a storage subsystem. The controllers were connected to S/390 systems through parallel or ESCON channels. Behind a controller were several model groups of the 3390 that contained the disk drives. Depending on the model, these disk drives had different capacities per device.
Within each model group, the various models provided either four, eight, or twelve devices. All A-units came with four controllers, providing a total of four paths to the 3990 Storage Control. At that time, you could not change the characteristics of a given DASD device.
8.1.1 DASD based on RAID technology
With the RAMAC Array in 1994, IBM introduced its first storage subsystems for S/390 systems based on RAID technology. The various RAID implementations are shown in Figure 8-1 on page 139.
The more modern IBM DASD products, such as the DS8000 family, and DASD from other vendors, emulate IBM 3380 and 3390 volumes in geometry, track capacity, and number of tracks per cylinder. This emulation makes all other software components think they are dealing with real 3380s or 3390s, reducing the need for programmers to deal with different DASD technologies and architectures.
One benefit of this emulation is that it allows DASD manufacturers to implement changes in the real disks, including the geometry of tracks and cylinders, without affecting the way the operating system and applications interface with DASD. From an operating system point of view, the device type is always a 3390, sometimes with a much higher number of cylinders, but a 3390 nonetheless.
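Because the emulated geometry is fixed, capacity arithmetic for a 3390 volume depends only on its cylinder count. The following short Python sketch (an illustration, not part of any IBM tool) computes the approximate capacity of an emulated 3390 from the classic geometry of 15 tracks per cylinder and 56,664 bytes per track; the cylinder counts used are just examples.

# Approximate raw capacity of an emulated 3390 volume.
TRACKS_PER_CYL = 15          # classic 3390 geometry
BYTES_PER_TRACK = 56_664     # 3390 track capacity in bytes

def emulated_3390_capacity_gb(cylinders: int) -> float:
    """Return the approximate capacity of a 3390 volume in GB."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 10**9

print(f"3390-3 (3,339 cylinders):  {emulated_3390_capacity_gb(3_339):.2f} GB")
print(f"EAV (1,182,006 cylinders): {emulated_3390_capacity_gb(1_182_006):.0f} GB")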
 
Note: This publication uses the terms disk or head disk assembly (HDA) for the real devices, and the terms DASD volumes or DASD devices for the logical 3380/3390s.
8.2 Redundant Array of Independent Disks
Redundant Array of Independent Disks (RAID) is a direct-access storage architecture where data is recorded across multiple physical disks with parity separately recorded so that no loss of access to data results from the loss of any one disk in the array.
RAID breaks the one-to-one association of volumes with devices. A logical volume is now the addressable entity presented by the controller to the attached systems. The RAID unit maps the logical volume across multiple physical devices. Similarly, blocks of storage on a single physical device can be associated with multiple logical volumes. Because a logical volume is mapped by the RAID unit across multiple physical devices, it is now possible to overlap processing for multiple cache misses to the same logical volume because cache misses can be satisfied by separate physical devices.
The RAID concept replaces one large disk with many smaller serial-attached SCSI (SAS) disks. RAID provides the following major advantages:
Performance (due to parallelism)
Cost (SAS disks are commodities)
zSeries compatibility
Environment (space and energy)
Figure 8-1 shows some sample RAID configurations, which are explained in more detail next.
Figure 8-1 Redundant Array of Independent Disks (RAID)
However, spreading a logical device across many physical disks increases the chance of malfunction caused by media and disk failures. The solution is redundancy, which wastes space and causes performance problems such as the “write penalty” and “free space reclamation.” To address these performance issues, large caches are implemented.
A disk array is a group of disk drive modules (DDMs) that are arranged in a relationship, for example, a RAID 5 or a RAID 10 array. For the DS8000, the arrays are built upon the disks of storage enclosures.
 
Note: The DS8000 storage controllers use the RAID architecture that enables multiple logical volumes to be mapped on a single physical RAID group. If required, you can still separate data sets on a physical controller boundary for availability.
8.2.1 RAID implementations
Except for RAID-1, each manufacturer sets the number of disks in an array. An array is a set of logically related disks to which a common parity applies.
The following implementations are certified by the RAID Advisory Board:
RAID-1 This implementation has simple disk mirroring, like dual copy.
RAID-3 This implementation has an array with one dedicated parity disk and just one I/O request at a time, with intra-record striping. This means that the written physical block is striped and each piece (together with the parity) is written in parallel to each disk of the array. The access arms move together. It has a high data rate and a low I/O rate.
RAID-5 This implementation has an array with distributed parity (there is no dedicated parity disk). It executes I/O requests in parallel with extra-record striping, meaning that each physical block is written to a single disk. The access arms move independently. It has strong caching to avoid write penalties, which involve four disk I/Os per write. RAID-5 has a high I/O rate and a medium data rate. RAID-5 does the following (a short conceptual sketch follows this list):
It reads data from an undamaged disk. This is one single disk I/O operation.
It reads data from a damaged disk, which implies (n-1) disk I/Os, to re-create the lost data where n is the number of disks in the array.
For every write to an undamaged disk, RAID-5 does four I/O operations to store a correct parity block. This is called a write penalty. This penalty can be relieved with strong caching and a slice-triggered algorithm (coalescing disk updates from cache into a single parallel I/O).
For every write to a damaged disk, RAID-5 does n-1 reads and one parity write.
RAID-6 This implementation has an array with two distributed parities and I/O requests done in parallel with extra-record striping. Its access arms move independently (Reed-Solomon P+Q parity). The write penalty is greater than with RAID-5, with six I/Os per write.
RAID-6+ This implementation has no write penalty (due to its log-structured file (LFS) design), and has background free-space reclamation. The access arms all move together for writes.
RAID-10 RAID-10 is also known as RAID 1+0 because it is a combination of RAID 0 (striping) and RAID 1 (mirroring). The striping optimizes the performance by striping volumes across several disk drives. RAID 1 is the protection against a disk failure provided by having a mirrored copy of each disk. By combining the two, RAID 10 provides data protection and I/O performance.
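The parity and write-penalty mechanics described for RAID-5 above can be made concrete with a small sketch. The Python fragment below is purely conceptual (the function names are invented and no controller is implemented this way): parity is the XOR of the data blocks in a stripe, a small write implies the four I/Os listed above, and a failed disk is rebuilt by XORing the n-1 surviving blocks.

from functools import reduce

def parity(blocks):
    """XOR together equal-length blocks of a stripe."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def updated_parity(old_data, new_data, old_parity):
    # A RAID-5 small write costs four disk I/Os:
    # read old data, read old parity, write new data, write new parity.
    return parity([old_parity, old_data, new_data])

# Rebuild of a failed disk: XOR the surviving blocks of the stripe.
stripe = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(stripe)
rebuilt = parity([stripe[0], stripe[2], p])   # recovers stripe[1]
assert rebuilt == stripe[1]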
Spare disks
The DS8000 requires that a device adapter loop have a minimum of two spare disks to enable sparing to occur. The sparing function of the DS8000 is automatically initiated whenever a DDM failure is detected on a device adapter loop and enables regeneration of data from the failed DDM onto a hot spare DDM.
 
Note: Data striping (stripe sequential physical blocks in separate disks) is sometimes called RAID-0, but it is not a real RAID because there is no redundancy, that is, no parity bits.
8.3 IBM DS8000 series
IBM DS8000 series is a high-performance, high-capacity series of disk storage that is designed to support continuous operations. DS8000 series models (machine type 2107) use the IBM POWER8® server technology that is integrated with the IBM Virtualization Engine technology. DS8000 series models consist of a storage unit and one or two management consoles, two being the preferred configuration. The graphical user interface (GUI) or the command-line interface (CLI) allows you to logically partition storage (create storage LPARs) and use the built-in Copy Services functions. For high availability, hardware components are redundant.
The DS8000 series can scale to over 5.22 PB of raw drive capacity, with a wide range of capacity options available for configuration, supporting customer requirements from smaller to very large storage systems.
The DS8000 series offers various choices of base and expansion models, so you can configure storage units that meet your performance and configuration needs:
DS8884 and DS8886
DS8884 and DS8886 provide great scalability and performance, with over 5 PB of raw drive capacity with expansion frames, and up to 2 TB of system memory.
DS8884F, DS8886F, and DS8888F
The DS888xF models feature all-flash drives, and are optimized for performance and throughput by maximizing the number of paths to the storage enclosures.
The DS8000 expansion frames expand the capabilities of the base models, allowing greater capacity levels for your storage controller.
8.3.1 DS8000 hardware overview
This section provides an overview of available DS8000 hardware.
DS8884 (Model 984)
The IBM TotalStorage DS8884, which is Model 984, is a business class machine that offers the following features, among others:
6-core processor complex
Up to 256 GB of system memory
Up to two HPF Enclosures pairs (Gen-2)
Up to eight standard drive enclosures
Up to 16 host adapters
Single-phase power
Up to 192 2.5-inch disk drives
Up to 48 flash cards
Maximum capacity of up to 730 TB
Up to 32 Fibre Channel/FICON ports
The DS8884 model can support two expansion frames. With two expansion frames, you can expand the capacity of the Model 984 as follows:
Up to 768 small form-factor (SFF) drives, and 96 flash cards, for a maximum capacity of 2.61 PB
The DS8884 also supports an all-flash configuration, which is named DS8884F.
DS8886 (Models 985 and 986)
The IBM TotalStorage DS8886, which covers Models 985 and 986, is an enterprise class machine that offers the following features, among others:
Up to 24-core processor complex
Up to 2 TB of system memory
Up to four HPF Enclosures pairs (Gen-2)
Up to six standard drive enclosures
Up to 16 host adapters
Single-phase or three-phase power
Up to 144 2.5-inch disk drives
Up to 96 flash cards
Maximum capacity of up to 739 TB
Up to 64 Fibre Channel/FICON ports
The DS8886 model can support four expansion frames. With four expansion frames, you can expand the capacity of the Model 986 as follows:
Up to 1536 SFF drives, and 192 flash cards, for a maximum capacity of 5.22 PB
The DS8886 also supports an all-flash configuration, which is named DS8886F.
DS8888F (Model 988)
The IBM TotalStorage DS8888F, which is Model 988, is an all-flash machine that offers the following features, among others:
Up to 48-core processor complex
Up to 2 TB of system memory
Up to eight HPF Enclosures pairs (Gen-2)
Up to 128 host adapters
Three-phase power
Up to 192 2.5-inch flash cards
Maximum capacity of up to 614 TB
Up to 64 Fibre Channel/FICON ports
Up to 2.5 million IOPS
The DS8888F model can support one expansion frame. With one expansion frame, you can expand the capacity of the Model 988 as follows:
Up to 384 flash cards, for a maximum capacity of 1.23 PB
Up to 128 Fibre Channel/FICON ports
8.3.2 DS8000 major components
Figure 8-2 shows an IBM DS8886 Model 985 and its major components. As you can see, the machine that is displayed consists of two frames, each with its own power supplies, batteries, I/O enclosures, HPF enclosures, and standard drive enclosures. The base frame also includes the two IBM POWER8 processor-based servers, and the Hardware Management Console (HMC).
Figure 8-2 DS8886 front view, with one expansion frame shown
The major DS8000 components will be briefly described next. For an in-depth description of the DS8000 major components, see IBM DS8880 Product Guide (Release 8.3), REDP-5344.
IBM POWER8 servers
A pair of POWER8 based servers, also known as central processor complexes, are at the heart of the IBM DS8880 models.
These servers share the load of receiving and moving data between the attached hosts and the storage arrays. They also provide redundancy: if either server fails, system operations are processed by the remaining server without any disruption to the service.
Hardware Management Console
The mini PC HMC is a Linux-based server that enables users to interact with the DS8880 by using the HMC GUI (for service purposes) or DS Storage Manager/DS CLI (for storage administration or configuration).
Power subsystem
The power architecture used on DS8000 is based on a direct current uninterruptible power supply (DC-UPS). The power subsystems available depend on the DS8000 model in use. The DS8884 uses single-phase power, which supports 200-240V, whereas DS8886 also includes the option of having three-phase power supporting voltages of 220-240 V and 380-415 V. For DS8888, three-phase power is the standard.
Each DC-UPS has its own battery-backup functions. Therefore, the battery system also provides 2N redundancy. The battery of a single DC-UPS is able to preserve non-volatile storage (NVS) in a complete power outage.
High-performance flash enclosure
The high-performance flash enclosure (HPFE) Gen-2 is a 2U storage enclosure, with additional hardware at the rear (microbay). HPFEs need to be installed in pairs.
Each HPFE Gen-2 pair contains the following hardware components:
16, 32, or 48 2.5-inch SAS flash cards.
A pair of Microbays, where each Microbay contains these components, among others:
 – A Flash RAID adapter that is dual-core IBM PowerPC® based and can do RAID parity processing, as well as provide speed and an amount of I/O that goes far beyond what a usual device adapter could handle.
 – A PCIe switch card. The switch cards carry the signal from the Flash RAID adapters over PCIe Gen-3 to the processor complexes. The Flash RAID adapters connect through eight SAS connections per Microbay pair to a pair of flash enclosures that hold the SAS expanders and the flash cards.
All pathing is redundant. The HPFE Microbays are directly attached to the I/O enclosures by using the PCIe bus fabric, which increases bandwidth and transaction-processing capability compared to Fibre Channel attached standard drives.
Standard drive enclosure
The IBM DS8880 offers 2U FC attached standard drive enclosures in two types. SFF drive enclosures are installed in pairs, and can contain up to 24 2.5-inch SFF SAS drives. Large-Form factor (LFF) drive enclosures are also installed in pairs, and can contain up to 12 3.5-inch LFF SAS drives.
The SFF drives can be Flash Drives (SSDs) or rotational hard disk drives (HDDs), also known as disk drive modules (DDMs). Although Flash Drives and rotational drive types can both be used in the same storage system, they are not intermixed within the same standard drive enclosure pair.
Host adapters
The IBM DS8880 offers 16 Gbps Host Adapters (HAs) with four ports, and 8 Gbps HAs with either four or eight ports. Each HA port can be individually configured for FC or FICON connectivity. The 8 Gbps adapters also support FC-AL connections. Configuring multiple host connections across multiple HAs in multiple I/O enclosures provides the best combination of throughput and fault tolerance.
NVS cache
NVS is used to store a second copy of write data to ensure data integrity if there is a power failure or a cluster failure and the cache copy is lost. The NVS of cluster 1 is located in cluster 2 and the NVS of cluster 2 is located in cluster 1. In this way, during a cluster failure, the write data for the failed cluster will be in the NVS of the surviving cluster. This write data is then de-staged at high priority to the disk arrays. At the same time, the surviving cluster will start to use its own NVS for write data, ensuring that two copies of write data are still maintained. This process ensures that no data is lost even during a component failure.
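A minimal sketch of this dual-cluster write path follows, with invented class and function names (it illustrates the idea only and is not the DS8000 microcode): every write is held in the owning cluster's cache and in the other cluster's NVS, so one copy always survives a cluster failure.

class Cluster:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile read/write cache
        self.nvs = {}     # battery-protected nonvolatile storage

def host_write(owner, partner, track, data):
    owner.cache[track] = data    # copy 1: owning cluster's cache
    partner.nvs[track] = data    # copy 2: partner cluster's NVS
    return "I/O complete"        # presented to the host only after both copies exist

def cluster_failover(survivor):
    # Writes owned by the failed cluster are recovered from the survivor's NVS
    # and destaged at high priority to the disk arrays.
    return dict(survivor.nvs)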
Drive options
The IBM DS8880 offers five different drive types to meet the requirements of various workloads and configurations:
400 GB, 800 GB, 1.6 TB, and 3.2 TB flash cards for the highest performance requirements
400 GB, 800 GB, and 1.6 TB flash drives (SSDs) for higher performance requirements
300 GB and 600 GB 15,000 RPM Enterprise drives for high-performance requirements
600 GB, 1.2 TB, and 1.8 TB 10,000 RPM drives for standard performance requirements
4 TB, and 6 TB 7,200 RPM Nearline drives for large-capacity requirements.
Additional drive types are in qualification. Flash cards and flash drives are treated as the same tier, and the IBM Easy Tier® intra-tier auto-rebalance function enables you to use the higher IOPS capability of flash cards.
For more information about DS8000 family major components, see IBM DS8880 Product Guide (Release 8.3), REDP-5344.
8.4 IBM TotalStorage Resiliency Family
The IBM TotalStorage Resiliency Family is a set of products and features that are designed to help you implement storage solutions that keep your business running 24 hours a day, 7 days a week.
These hardware and software features, products, and services are available on the IBM DS8000 series. In addition, a number of advanced Copy Services features that are part of the IBM TotalStorage Resiliency family are available for the IBM DS6000™ and DS8000 series. The IBM TotalStorage DS Family also offers systems to support enterprise-class data backup and disaster recovery capabilities.
As part of the IBM TotalStorage Resiliency Family of software, IBM TotalStorage FlashCopy point-in-time copy capabilities back up data in the background and allow users nearly instant access to information about both source and target volumes. Metro and Global Mirror capabilities create duplicate copies of application data at remote sites. High-speed data transfers help to back up data for rapid retrieval.
8.4.1 Copy Services
Copy Services is a collection of functions that provides disaster recovery, data migration, and data duplication functions. Copy Services runs on the DS8000 series and supports open systems and zSeries environments.
Copy Services functions also are supported on the previous generation of storage systems, the IBM TotalStorage Enterprise Storage Server®.
Copy Services include the following types of functions:
FlashCopy, which is a point-in-time copy function
Remote mirror and copy functions (previously known as Peer-to-Peer Remote Copy or PPRC), which include the following:
 – IBM Metro Mirror (previously known as Synchronous PPRC)
 – IBM Global Copy (previously known as PPRC Extended Distance)
 – IBM Global Mirror (previously known as Asynchronous PPRC)
 – IBM Metro/Global Mirror
z/OS Global Mirror (previously known as Extended Remote Copy or XRC)
8.4.2 FlashCopy
FlashCopy creates a copy of a source volume on the target volume. This copy is called a point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between a source volume and target volume. A FlashCopy relationship is a “mapping” of the FlashCopy source volume and a FlashCopy target volume.
This mapping allows a point-in-time copy of that source volume to be copied to the associated target volume. The FlashCopy relationship exists between this volume pair from the time that you initiate a FlashCopy operation until the storage unit copies all data from the source volume to the target volume or you delete the FlashCopy relationship, if it is a persistent FlashCopy.
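Conceptually, the relationship behaves like a copy-on-write mapping: the target logically contains the source as it looked when the FlashCopy was established, and source tracks are preserved on demand before they are overwritten. The Python sketch below is only an illustration under that assumption (the class and method names are invented, and the real DS8000 implementation differs).

class PointInTimeCopy:
    def __init__(self, source_tracks):
        self.source = source_tracks
        self.copied = [None] * len(source_tracks)   # target tracks preserved so far

    def write_source(self, track, data):
        # Preserve the original track content for the target before overwriting it.
        if self.copied[track] is None:
            self.copied[track] = self.source[track]
        self.source[track] = data

    def read_target(self, track):
        # Tracks not yet preserved still match the point-in-time source content.
        preserved = self.copied[track]
        return preserved if preserved is not None else self.source[track]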
8.4.3 Metro Mirror function
Metro Mirror is a copy service that provides a continuous, synchronous mirror of one volume to a second volume. The different systems can be up to 300 kilometers apart, so by using Metro Mirror you can make a copy to a location off-site or across town. Because the mirror is updated in real time, no data is lost if a failure occurs. Metro Mirror is generally used for disaster-recovery purposes, where it is important to avoid data loss.
8.4.4 Global Copy function
Global Copy is a nonsynchronous mirroring function and is an alternative mirroring approach to Metro Mirror operations. Host updates to the source volume are not delayed by waiting for the update to be confirmed by a storage unit at your recovery site. The source volume sends a periodic, incremental copy of updated tracks to the target volume instead of a constant stream of updates.
There is no guarantee that dependent write operations are transferred in the same sequence that they have been applied to the source volume. This nonsynchronous operation results in a “fuzzy copy” at the recovery site. However, through operational procedures, you can create a point-in-time consistent copy at your recovery site that is suitable for data migration, backup, and disaster recovery purposes.
The Global Copy function can operate at very long distances, well beyond the 300 km distance that is supported for Metro Mirror, and with minimal impact to applications. The distance is limited only by the network and the channel extended technology.
8.4.5 Global Mirror function
Global Mirror is a copy service that is very similar to Metro Mirror. Both provide a continuous mirror of one volume to a second volume. However, with Global Mirror, the copy is asynchronous. You do not have to wait for the write to the secondary system to complete. For long distances, performance is improved compared to Metro Mirror. However, if a failure occurs, you might lose data. Global Mirror uses one of two methods to replicate data:
Multicycling Global Mirror is designed to replicate data while adjusting for bandwidth constraints, and is appropriate for environments where it is acceptable to lose a few minutes of data if a failure occurs.
For environments with higher bandwidth, noncycling Global Mirror can be used so that less than a second of data is lost if a failure occurs.
Global Mirror works well for data protection and migration when recovery sites are more than 300 kilometers away.
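The essential difference between the two families is whether the host waits for the remote write. The toy Python sketch below (invented names, not IBM code) contrasts a synchronous, Metro Mirror style write, which completes only after the secondary is updated (no data loss, latency grows with distance), with an asynchronous, Global Mirror style write, which completes immediately and lets a background task drain the updates (low latency, a small recovery point exposure).

import queue
import threading

secondary = {}            # the "remote" copy of each track
pending = queue.Queue()   # updates not yet applied remotely

def synchronous_write(track, data):
    secondary[track] = data     # wait for the remote update...
    return "I/O complete"       # ...before completing the host I/O (RPO = 0)

def asynchronous_write(track, data):
    pending.put((track, data))  # complete the host I/O immediately
    return "I/O complete"       # seconds of updates may be lost in a disaster

def drainer():
    while True:
        track, data = pending.get()
        secondary[track] = data
        pending.task_done()

threading.Thread(target=drainer, daemon=True).start()
asynchronous_write("TRK001", b"payload")
pending.join()   # in practice the drain runs continuously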
8.4.6 Metro/Global Mirror function
Metro/Global Mirror is a three-site, high availability disaster recovery solution that uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site. The DS8000 series supports the Metro/Global Mirror function on open systems and IBM Z. You can set up and manage your Metro/Global Mirror configurations using DS CLI and TSO commands.
In a Metro/Global Mirror configuration, a Metro Mirror volume pair is established between two nearby sites (local and intermediate) to protect from local site disasters. The Global Mirror volumes can be located thousands of miles away and can be updated if the original local site has suffered a disaster but has performed failover operations to the intermediate site. In a local-site-only disaster, Metro/Global Mirror can provide zero-data-loss recovery at the remote site and the intermediate site.
8.4.7 z/OS Global Mirror function
z/OS Global Mirror (previously known as XRC) provides a long-distance remote copy solution across two sites for open systems and IBM Z data with asynchronous technology.
The DS8000 series supports the z/OS Global Mirror function on IBM Z hosts, only. The Global Mirror function mirrors data on the storage system to a remote location for disaster recovery. It protects data consistency across all volumes that you define for mirroring. The volumes can be on several different storage systems. The z/OS Global Mirror function can mirror the volumes over several thousand kilometers from the source site to the target recovery site.
With Global Mirror, you can suspend or resume service during an outage. You do not have to end your current data-copy session. You can suspend the session, and then restart it. Only data that changed during the outage must be synchronized again between the copies.
8.5 DS8000 performance features
This section covers DS8000 series performance features.
8.5.1 I/O priority queuing
Before I/O priority queuing, IOS kept the UCB I/O pending requests in a queue named IOSQ. The priority order of the I/O request in this queue, when the z/OS image is in goal mode, is controlled by Workload Manager (WLM), depending on the transaction owning the I/O request. There was no concept of priority queuing within the internal queues of the I/O control units. Instead, the queue regime was FIFO.
It is now possible to have this queue concept internally. I/O Priority Queuing in DS8000 has the following properties:
I/O can be queued with the DS8000 in priority order.
WLM sets the I/O priority when running in goal mode.
There is I/O priority for systems in a sysplex.
Each system gets a fair share.
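A priority queue rather than a FIFO queue is the essential idea. The sketch below is a generic illustration in Python (it is not the DS8000 queuing algorithm, and the numeric priorities stand in for what WLM derives from the workload's goals): more important requests are served first, with FIFO order preserved within a priority.

import heapq
import itertools

class IOQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within one priority

    def enqueue(self, priority, request):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = IOQueue()
q.enqueue(3, "batch read")
q.enqueue(1, "online transaction write")   # lower number = more important
print(q.dequeue())   # the online transaction write is served first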
8.5.2 Custom volumes
Custom volumes provide the possibility of defining DASD volumes of any size, from 1 to 268,434,453 cylinders, though current z/OS versions support DASD volumes with up to 1,182,006 cylinders. This capability gives storage administrators more flexibility to create DASD volumes and reduce wasted space, while reducing the number of UCBs necessary to provide the required capacity.
8.5.3 Improved caching algorithms
With its effective caching algorithms, DS8000 series can minimize wasted cache space and reduce disk drive utilization, reducing its back-end traffic. The current DS8886 has a maximum cache size of 2 TB, and the NVS size can be up to 2 TB.
The DS8000 can manage its cache in 4 KB segments, so for small data blocks (4 KB and 8 KB are common database block sizes), minimal cache is wasted. In contrast, large cache segments can exhaust cache capacity when filling up with small random reads.
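A back-of-the-envelope calculation shows why small segments matter. The Python sketch below is illustrative only (the 64 KB comparison value is an arbitrary example of a large segment size, not a DS8000 parameter): a 4 KB database block wastes nothing in 4 KB segments but wastes most of a large segment.

def wasted_bytes(block_size, segment_size):
    segments_used = -(-block_size // segment_size)   # ceiling division
    return segments_used * segment_size - block_size

for segment in (4 * 1024, 64 * 1024):
    print(segment // 1024, "KB segments:",
          wasted_bytes(4 * 1024, segment), "bytes wasted per 4 KB block")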
This efficient cache management, together with the optional flash cards and flash drives, HPFEs, and caching size improvements, all integrate to give greater throughput while sustaining cache speed response times.
8.5.4 I/O Priority manager
I/O Priority Manager is a feature that provides application-level quality of service (QoS) for workloads that share a storage pool. This feature provides a way to manage QoS for I/O operations that are associated with critical workloads, and gives them priority over other I/O operations that are associated with non-critical workloads. For IBM z/OS, the I/O Priority Manager allows increased interaction with the host side.
8.5.5 Easy Tier
Easy Tier offers capabilities including manual volume capacity rebalance, auto performance rebalancing in both homogeneous and hybrid pools, hot-spot management, rank depopulation, manual volume migration, and thin provisioning support.
Easy Tier determines the appropriate tier of storage based on data access requirements, and then automatically and nondisruptively moves data, at the volume or extent level, to the appropriate tier in the storage system.
8.5.6 Thin provisioning
The DS8000 series allows you to create thin-provisioned volumes in your z/OS systems, maximizing your storage capacity and allowing you to increase it on demand with no disruption to the z/OS systems that access the storage devices. To achieve that, a few features were introduced on the DS8000 series. They are described in the next sections.
Small extents
As an alternative to the standard CKD extents of 1,113 cylinders, you can create volumes using small extents of 21 cylinders, allowing better granularity, especially for thin-provisioned volumes.
Extent Space Efficient volumes
Extent Space Efficient (ESE) volumes are the replacement for the older Track Space Efficient (TSE) volumes, using allocations of 21-cylinder extents to back your data. These extents are allocated to the volumes from a defined extent pool when new data needs to be stored on the specified DASD. Space that is no longer used by the logical volume can be reclaimed by the storage administrator.
ESE volumes provide better performance than TSE volumes.
8.6 Introduction to tape processing
The term tape refers to volumes that can be physically moved. You can only store sequential data sets on tape. Tape volumes can be sent to a safe, or to other data processing centers. Internal labels are used to identify magnetic tape volumes and the data sets on those volumes. You can process tape volumes with these types of labels:
IBM standard labels
Labels that follow standards published by these organizations:
 – International Organization for Standardization (ISO)
 – American National Standards Institute (ANSI)
Nonstandard labels
No labels
 
Note: Your installation can install a bypass for any type of label processing. However, the use of labels is preferred as a basis for efficient control of your data.
IBM standard tape labels consist of volume labels and groups of data set labels. The volume label, identifying the volume and its owner, is the first record on the tape. The data set label, identifying the data set and describing its contents, precedes and follows each data set on the volume:
The data set labels that precede the data set are called header labels.
The data set labels that follow the data set are called trailer labels. They are almost identical to the header labels.
The data set label groups can include standard user labels at your option.
Usually, the formats of ISO and ANSI labels, which are defined by their respective organizations, are similar to the formats of IBM standard labels.
Nonstandard tape labels can have any format and are processed by routines that you provide. Unlabeled tapes contain only data sets and tape marks.
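For orientation, IBM standard labels are 80-byte EBCDIC records whose first four characters identify the label type (VOL1, HDR1, EOF1, and so on). The Python sketch below decodes a label record and pulls out a couple of well-known fields; it is a simplified illustration, and the field offsets should be verified against the DFSMS tape label documentation before relying on them.

def describe_label(record: bytes) -> str:
    text = record.decode("cp037", errors="replace")   # labels are EBCDIC
    label_id = text[:4]
    if label_id == "VOL1":
        return "volume label, volser=" + text[4:10].strip()
    if label_id in ("HDR1", "EOF1", "EOV1"):
        return label_id + ", data set id=" + text[4:21].strip()
    return "other label or data record: " + repr(label_id)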
8.6.1 SL and NL format
Figure 8-3 illustrates the format differences between the following label types:
No labels (NL)
IBM standard labels (SL)
Other types are described in Table 8-1 on page 151.
Figure 8-3 SL and NL format
Using tape with JCL
In the job control statements, you must provide a data definition (DD) statement for each data set to be processed. The LABEL parameter of the DD statement is used to describe the data set's labels.
Other parameters of the DD statement identify the data set, give volume and unit information and volume disposition, and describe the data set's physical attributes. You can use a data class to specify all of your data set's attributes (such as record length and record format), but not data set name and disposition. Specify the name of the data class using the job control language (JCL) keyword DATACLAS. If you do not specify a data class, the automatic class selection (ACS) routines assign a data class based on the defaults defined by your storage administrator.
An example of allocating a tape data set using DATACLAS in the DD statement of the JCL statements follows. In this example, TAPE01 is the name of the data class.
//NEW     DD DSN=DATASET.NAME,UNIT=TAPE,DISP=(,CATLG,DELETE),
//           DATACLAS=TAPE01,LABEL=(1,SL)
Describing the labels
You specify the type of labels by coding one of the subparameters of the LABEL parameter as shown in Table 8-1.
Table 8-1 Types of labels
Code Meaning
SL   IBM Standard Label.
AL   ISO/ANSI/FIPS labels.
SUL  Both IBM and user header or trailer labels.
AUL  Both ISO/ANSI/FIPS and user header or trailer labels.
NSL  Nonstandard labels.
NL   No labels, but the existence of a previous label is verified.
BLP  Bypass label processing. The data is treated in the same manner as though NL had been specified, except that the system does not check for an existing volume label. The user is responsible for the positioning. If your installation does not allow BLP, the data is treated exactly as though NL had been specified. Your job can use BLP only if the Job Entry Subsystem (JES) through job class, RACF through the TAPEVOL class, or DFSMSrmm allows it.
LTM  Bypass a leading tape mark, if encountered, on unlabeled tapes from VSE.
 
Note: If you do not specify the label type, the operating system assumes that the data set has IBM standard labels.
8.6.2 Tape cartridges
The operating system supports several IBM magnetic tape subsystems, such as the IBM 3590 and 3592 Tape Subsystem, as well as virtual tape, all of which use tape cartridges.
Tape mount management
Using DFSMS and tape mount management can help you reduce the number of both tape mounts and tape volumes that your installation requires. The volume mount analyzer reviews your tape mounts and creates reports that provide you with the information that you need to effectively implement the tape mount management methodology suggested by IBM.
Tape mount management allows you to efficiently fill a tape cartridge to its capacity and gain full benefit from compression. By filling your tape cartridges, you reduce your tape mounts and even the number of tape volumes that you need.
With an effective tape cartridge capacity of 15 TB using 3592 JD cartridges on TS1155 tape drives, DFSMS can intercept all but extremely large data sets and manage them with tape mount management. By implementing tape mount management with DFSMS, you can reduce your tape mounts with little or no additional hardware required.
Tape mount management also improves job throughput because jobs are no longer queued up on tape drives. A large portion of all tape data sets queued up on drives are less than 20 MB. With tape mount management, these data sets are on DASD while in use. This feature frees up the tape drives for other allocations.
Tape mount management recommends that you use DFSMShsm to do interval migration to SMS storage groups. You can use ACS routines to redirect your tape data sets to a tape mount management DASD buffer storage group. DFSMShsm scans this buffer regularly and migrates the data sets to migration level 1 DASD or migration level 2 tape as soon as possible, based on the management class and storage group specifications.
8.7 IBM TS1155 tape drive
The IBM TS1155 tape drive is the sixth generation of high capacity and high-performance tape systems. It was announced in May 2017 and connects to IBM Z through Fibre Channel links. The 3592 system is the successor of the IBM Magstar® 3590 family of tape drives and controller types.
The IBM 3592 tape drive can be used as a stand-alone solution or as an automated solution within a TS4500 tape library.
The native rate for data transfer increases up to 360 MBps. The uncompressed amount of data that fits on a single cartridge increases to 15 TB and is used for scenarios where high capacity is needed. The tape drive has a second option, where you can store a maximum of 10 TB per tape. This option is used whenever fast access to tape data is needed.
Unlike other tape drives, the TS1155 provides two different methods of connecting to the host. The first option is an 8 Gbps Fibre Channel interface, allowing connections to the host or to a switched fabric environment. An alternative model (TS1155 Model 55E) has a dual-ported 10 Gb Ethernet port for host attachment, which is optimized for cloud-based and large, open computer environments.
8.8 IBM TS4500 tape library
Tape storage media can provide low-cost data storage for sequential files, inactive data, and vital records. Because of the continued growth in tape use, tape automation has been seen as a way of addressing an increasing number of challenges.
The IBM TS4500 offers a wide range of features that include the following:
Up to 128 tape drives
Support through the Library Control Unit for attachment of up to 17 additional frames
Cartridge storage capacity of over 17,500 3592 tape cartridges
Data storage capacity of up to 263 PB
Support for the High Availability unit that provides a high level of availability for tape automation
Support for the IBM TS7700 through Fibre Channel switches
Support for the IBM Total Storage Peer-to-Peer VTS
Support for the following tape drives:
 – IBM TS1155 Model 55F tape drive
 – IBM TS1155 Model 55E tape drive
 – IBM TS1150 Model E08 tape drives
 – IBM TS1140 Model E07 tape drives
 – IBM Linear Tape-Open (LTO) Ultrium 7 tape drives
Attachment to and sharing by multiple host systems, such as IBM Z, iSeries, pSeries, AS/400, HP, and Sun processors
Data paths through Fibre Channels
Library management commands through RS-232, a local area network (LAN), and parallel, ESCON, and FICON channels
8.9 Introduction to IBM TS7700
The IBM TS7700 series, integrated with the IBM TS4500, delivers an increased level of storage capability beyond the traditional storage products hierarchy. The TS7700 emulates the function and operation of IBM 3490 Enhanced Capacity (3490E). This virtualization of both the tape devices and the storage media to the host allows for transparent utilization of the capabilities of the IBM 3592 tape technology.
With virtualization, a new view of virtual volumes and drives was introduced: users and applications that use the TS7700 do not directly access the physical volumes and drives associated with it. Instead, they use virtual drives and volumes that are managed by the library. Using a TS7700 subsystem, the host application writes tape data to virtual devices. The volumes created by the hosts are called virtual volumes. They are physically stored in a tape volume cache that is built from RAID DASD and, when the TS7700 is attached to a TS4500 and the library cache reaches the defined threshold, they are destaged to physical tapes.
8.9.1 IBM TS7700 models
Two TS7700 models are currently available:
IBM TS7760
The IBM TS7760 is an all-new hardware refresh and features an encryption-capable, high-capacity cache that uses 4 TB serial-attached Small Computer System Interface (SCSI) HDDs in arrays that use a dynamic disk pool configuration.
IBM TS7760T
This option is similar to IBM TS7760, and also has a physical tape library attached to store additional copies or add capacity.
Each FICON channel in the TS7700 can support up to 512 logical paths, providing up to 4,096 logical paths with eight FICON channels. You can also have up to 496 virtual drives defined.
 
Tape volume cache
The IBM TS7700 appears to the host processor as a single automated tape library with up to 496 virtual tape drives and up to 4,000,000 virtual volumes. The configuration of this system has up to 1.3 PB of Tape Volume Cache native, and up to 16 IBM 3592 tape drives.
Through tape volume cache management policies, the TS7700 management software moves host-created volumes from the tape volume cache to a cartridge managed by the TS7700 subsystem. When a virtual volume is moved from the tape volume cache to tape, it becomes a logical volume.
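A conceptual sketch of that behavior follows, with invented names and a deliberately simplified selection policy (the real TS7700 premigration and cache management algorithms are more sophisticated): closed virtual volumes accumulate in the cache, and when a threshold is crossed they are copied to stacked physical tape, becoming logical volumes.

class TapeVolumeCache:
    def __init__(self, capacity_gb, threshold=0.8):
        self.capacity_gb = capacity_gb
        self.threshold = threshold
        self.virtual_volumes = {}   # volser -> size in GB, resident in cache
        self.stacked_tape = {}      # logical volumes copied to physical tape

    def used_gb(self):
        return sum(self.virtual_volumes.values())

    def close_volume(self, volser, size_gb):
        self.virtual_volumes[volser] = size_gb
        if self.used_gb() > self.threshold * self.capacity_gb:
            self._destage()

    def _destage(self):
        # Copy volumes to tape (oldest first here, standing in for the real
        # selection policy) until the cache is back under the threshold.
        for volser, size in list(self.virtual_volumes.items()):
            self.stacked_tape[volser] = size
            del self.virtual_volumes[volser]
            if self.used_gb() <= self.threshold * self.capacity_gb:
                break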
VTS functions
VTS provides the following functions:
Thirty-two 3490E virtual devices.
Tape volume cache (implemented in a RAID-5 disk) that contains virtual volumes.
The tape volume cache consists of a high-performance array of DASD and storage management software. Virtual volumes are held in the tape volume cache when they are being used by the host system. Outboard storage management software manages which virtual volumes are in the tape volume cache and the movement of data between the tape volume cache and physical devices.
The size of the DASD is made large enough so that more virtual volumes can be retained in it than just the ones that are currently associated with the virtual devices.
After an application modifies and closes a virtual volume, the storage management software in the system makes a copy of it onto a physical tape. The virtual volume remains available on the DASD until the space it occupies reaches a predetermined threshold. Leaving the virtual volume in the DASD allows for fast access to it during subsequent requests.
The DASD and the management of the space used to keep closed volumes available is called tape volume cache. Performance for mounting a volume that is in tape volume cache is quicker than if a real physical volume is mounted.
Up to sixteen 3592 tape drives. The real 3592 volume contains logical volumes.
Stacked 3592 tape volumes managed by the TS4500. The TS7700 fills the tape cartridge up to 100%: by putting multiple virtual volumes into a stacked volume, it uses all of the available space on the cartridge. TS7700 uses IBM 3592 cartridges when stacking volumes.
8.10 Introduction to TS7700 Grid configuration
A grid configuration is specifically designed to enhance data availability. It accomplishes this by providing volume copy, remote functionality, and automatic recovery and switchover capabilities. With a design that reduces single points of failure (including the physical media where logical volumes are stored), the grids improve system reliability and availability, as well as data access. A grid configuration can have from two to six clusters that are connected using a grid link. Each grid link is connected to other grid links on a grid network.
Logical volumes that are created within a grid can be selectively replicated to one or more peer clusters by using a selection of different replication policies. Each replication policy or Copy Consistency Point provides different benefits, and can be intermixed. The grid architecture also enables any volume that is located within any cluster to be accessed remotely, which enables ease of access to content anywhere in the grid.
In general, any data that is initially created or replicated between clusters is accessible through any available cluster in a grid configuration. This concept ensures that data can still be accessed even if a cluster becomes unavailable. In addition, it can reduce the need to have copies in all clusters because the adjacent or remote cluster’s content is equally accessible.
8.10.1 Copy consistency
There are currently five available consistency point settings:
Sync As data is written to the volume, it is compressed and then simultaneously written or duplexed to two TS7700 locations. The mount point cluster is not required to be one of the two locations. Memory buffering is used to improve the performance of writing to two locations. Any pending data that is buffered in memory is hardened to persistent storage at both locations only when an implicit or explicit sync operation occurs. This setting provides a zero RPO at tape sync point granularity.
Tape workloads in IBM Z environments already assume sync point hardening through explicit sync requests or during close processing, enabling this mode of replication to be performance-friendly in a tape workload environment. When sync is used, two clusters must be defined as sync points. All other clusters can be any of the remaining consistency point options, enabling more copies to be made.
RUN The copy occurs as part of the Rewind Unload (RUN) operation, and completes before the RUN operation at the host finishes. This mode is comparable to the immediate copy mode of the PtP VTS.
Deferred The copy occurs after the rewind unload operation at the host. This mode is comparable to the Deferred copy mode of the PtP VTS. This is also called asynchronous replication.
Time Delayed The copy occurs only after a specified time (1 hour - 379 days). If the data expires before the Time Delayed setting is reached, no copy is produced at all. For Time Delayed, you can specify after creation or after access in the MC.
No Copy No copy is made.
8.10.2 VTS advanced functions
As with a stand-alone IBM TS7700, the grid configuration has the option to install additional features and enhancements to existing features, including the following:
Outboard policy management: Enables you to better manage your IBM TS7700 stacked and logical volumes. With this support, the SMS construct names that are associated with a volume (storage class, storage group, management class, and data class) are sent to the library. At the library, you can define outboard policy actions for each construct name, enabling you and the TS7700 to better manage your volumes.
For example, through the storage group policy and physical volume pooling, you now can group logical volumes with common characteristics on a set of physical stacked volumes.
Physical volume pooling: With outboard policy management enabled, you can assign logical volumes to selected storage groups. Storage groups point to primary storage pools. These pool assignments are stored in the library manager database. When a logical volume is copied to tape, it is written to a stacked volume that is assigned to a storage pool as defined by the storage group constructs at the library manager.
Tape volume dual copy: With advanced policy management, storage administrators have the facility to selectively create dual copies of logical volumes within a TS7700. This function is also available in the grid environment. At the site or location where the second distributed library is located, logical volumes can also be duplexed, in which case you can have two or four copies of your data.
Tape volume cache management: Before the introduction of these features, there was no way to influence cache residency. As a result, all data written to the tape volume cache (TVC) was pre-migrated using a first-in, first-out (FIFO) method. With the introduction of this function, you can now influence the time that virtual volumes reside in the TVC.
8.11 Storage area network
The Storage Network Industry Association (SNIA) defines a SAN as a network whose primary purpose is the transfer of data between computer systems and storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust.
The term SAN is usually (but not necessarily) identified with block I/O services rather than file access services. It can also be a storage system that consists of storage elements, storage devices, computer systems, and appliances, plus all control software, communicating over a network.
SANs today are usually built by using Fibre Channel technology, but the concept of a SAN is independent of the underlying type of network.
The following are the major potential benefits of a SAN:
Access
Benefits include longer distances between processors and storage, higher availability, and improved performance (because I/O traffic is offloaded from a LAN to a dedicated network, and because Fibre Channel is generally faster than most LAN media). Also, a larger number of processors can be connected to the same storage device, compared to typical built-in device attachment facilities.
Consolidation
Another benefit is replacement of multiple independent storage devices by fewer devices that support capacity sharing, which is also called disk and tape pooling. SANs provide the ultimate in scalability because software can allow multiple SAN devices to appear as a single pool of storage accessible to all processors on the SAN. Storage on a SAN can be managed from a single point of control. Controls over which hosts can see which storage (called zoning and LUN masking) can be implemented.
Protection
LAN-free backups occur over the SAN rather than the (slower) LAN, and server-free backups can let disk storage “write itself” directly to tape without processor overhead.
Various SAN topologies can be built on the basis of Fibre Channel networks:
Point-to-Point
With a SAN, a simple link is used to provide high-speed interconnection between two nodes.
Arbitrated loop
The Fibre Channel arbitrated loop offers relatively high bandwidth and connectivity at a low cost. For a node to transfer data, it must first arbitrate to win control of the loop. After the node has control, it is free to establish a virtual point-to-point connection with another node on the loop. After this point-to-point (virtual) connection is established, the two nodes consume all of the loop’s bandwidth until the data transfer operation is complete. After the transfer is complete, any node on the loop can then arbitrate to win control of the loop. (A simplified model of this behavior follows this list.)
Switched
Fibre Channel switches function in a manner similar to traditional network switches to provide increased bandwidth, scalable performance, an increased number of devices, and, in certain cases, increased redundancy.
Multiple switches can be connected to form a switch fabric capable of supporting many host servers and storage subsystems. When switches are connected, each switch’s configuration information must be copied into all the other participating switches. This process is called cascading.
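As mentioned in the arbitrated loop description, loop ownership is exclusive for the duration of a transfer. The toy Python model below (not a Fibre Channel implementation) captures just that property: the loop is modeled as a lock, only one virtual point-to-point connection exists at a time, and the remaining nodes arbitrate again once it is released.

import threading

loop = threading.Lock()   # whichever node holds the lock "owns" the loop

def transfer(source, target, megabytes):
    with loop:            # arbitrate: wait until the loop is free
        # Virtual point-to-point connection; all loop bandwidth is consumed.
        print(source, "->", target, ":", megabytes, "MB transferred")
    # Releasing the lock lets any other node win arbitration.

nodes = [threading.Thread(target=transfer, args=("node%d" % i, "disk", 100))
         for i in range(3)]
for t in nodes:
    t.start()
for t in nodes:
    t.join()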
Figure 8-4 shows a sample SAN configuration.
Figure 8-4 Sample SAN configuration
8.11.1 FICON and SAN
From an IBM Z perspective, FICON is the protocol that is used in a SAN environment. A FICON infrastructure can be point-to-point or switched, using FICON directors to provide connections between channels and control units. FICON uses Fibre Channel transport protocols, and so uses the same physical fiber. Today, IBM Z has 16 Gbps link data rate support.