Operating system support
This chapter describes the minimum operating system requirements and support considerations for the IBM z15™ servers and their features. It addresses z/OS, z/VM, z/VSE, z/TPF, Linux on Z, and the KVM hypervisor.
 
Note: Throughout this chapter, z15 refers to IBM z15 Model T01 (Machine Type 8561) unless otherwise specified.
Because this information is subject to change, see the hardware fix categories (IBM.Device.Server.z15-8561.*) for the most current information.
Support of z15 functions depends on the operating system, its version, and release.
7.1 Operating systems summary
The minimum operating system levels that are required on z15 servers are listed in Table 7-1.
 
End of service operating systems: Operating system levels that are no longer in service are not covered in this publication. These older levels might support some features.
Table 7-1 z15 minimum operating system requirements
Operating systems¹ | Supported version and release on z15²
z/OS | V2R1³
z/VM | V6R4
z/VSE | V6.2⁴,⁵
z/TPF | V1R1
Linux on Z⁶ | See Table 7-2
1 Only z/Architecture mode is supported.
2 Service is required. For more information, see the shaded box that is titled “Features” on page 229.
3 z/OS V2R1 - Compatibility only. IBM Software Support Services for z/OS V2R1, offered as of October 1, 2018, provides the ability for customers to purchase extended defect support service for z/OS V2.1.
4 End of service date for z/VSE V5R2 was October 31, 2018.
5 End of service date for z/VSE V6R1 is September 30, 2019.
6 KVM hypervisor is supported by Linux distribution partners.
The use of certain features depends on the operating system. In all cases, program temporary fixes (PTFs) might be required with the operating system level that is indicated. Check the z/OS fix categories, or the subsets of the 8561DEVICE PSP buckets for z/VM and z/VSE. The fix categories and the PSP buckets are continuously updated, and contain the latest information about maintenance.
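For z/OS, you can check for missing service in these fix categories with the SMP/E REPORT MISSINGFIX command. The following minimal sketch assumes a target zone named ZOSTGT (an illustrative name):

    SET BOUNDARY(GLOBAL).
    REPORT MISSINGFIX ZONES(ZOSTGT)
                      FIXCAT(IBM.Device.Server.z15-8561.*).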
Hardware and software buckets contain installation information, hardware and software service levels, service guidelines, and cross-product dependencies.
For more information about Linux on Z distributions and KVM hypervisor, see the distributor’s support information.
7.2 Support by operating system
z15 servers introduce several new functions. This section describes the support of those functions by the current operating systems. Also included are some of the functions that were introduced in previous IBM Z servers and carried forward or enhanced in z15 servers. Features and functions that are available on previous servers, but are no longer supported by z15 servers, were removed.
For more information about supported functions that are based on operating systems, see 7.3, “z15 features and function support overview” on page 260. Tables are built by function and feature classification to help you determine, by a quick scan, what is supported and the minimum operating system level that is required.
7.2.1 z/OS
z/OS Version 2 Release 2 is the earliest in-service release that supports z15 servers. Consider the following points:
Service support for z/OS Version 2 Release 1 ended in September of 2018; however, a fee-based extension for defect support (for up to three years) can be obtained by ordering IBM Software Support Services - Service Extension for z/OS 2.1.
z15 capabilities differ depending on the z/OS release. Toleration support is provided on z/OS V2R1; exploitation support is provided on z/OS V2R2 and later only.
For more information about supported functions and their minimum required support levels, see 7.3, “z15 features and function support overview” on page 260.
7.2.2 z/VM
z/VM V6R4 and z/VM V7R1 provide support that enables guests to use the following z15 features:
z/Architecture support
New hardware facilities
ESA/390-compatibility mode for guests
Crypto Clear Key ECC operations
RoCE Express2 support
Dynamic I/O support, which is provided for managing the configuration of OSA-Express7S and OSA-Express6S OSD CHPIDs, FICON Express16SA and FICON Express16S+ (FC and FCP) CHPIDs, and RoCE Express2 features
Improved memory management
For more information about supported functions and their minimum required support levels, see 7.3, “z15 features and function support overview” on page 260.
 
Statements of Direction¹: Consider the following points:
Future z/VM release guest support
z/VM V6.4 is the last release that is supported as a guest of z/VM V6.2 or older releases.
Disk-only support for z/VM dumps
z/VM V6.4 is the last z/VM release to support tape as a media option for stand-alone, hard abend, and snap dumps. Subsequent releases will support dumps to ECKD DASD or FCP SCSI disks only.
Discontinuance of support for separately ordered EREP licensed product
z/VM V7.1 is planned to be the last z/VM release to support EREP as a separately orderable and serviceable IBM licensed product. EREP functionality will continue to be delivered as part of the z/VM offering.
Stabilization of z/VM Support for the IBM EC12 and BC12 server family
z/VM V7.1 is the last z/VM release that is planned to support the EC12 or BC12 family of servers. Therefore, an IBM z13 or an IBM z13s will be the required minimum level of server for future z/VM releases. See the IBM Support Portal for the most current z/VM support lifecycle information.
Removal of the z/VM PAGING63 IPL parameter
z/VM V7.1 is the last z/VM release to support use of the PAGING63 IPL parameter. This parameter directed the paging subsystem to behave as it had in releases before z/VM V6.4. It also prevented use of z/VM V6.4 and V7.1 paging subsystem improvements, which include support for High-Performance FICON, HyperPAV, encryption, and EAV.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
7.2.3 z/VSE
z15 support is provided by z/VSE V6R2 and later, with the following considerations:
z/VSE runs in z/Architecture mode only.
z/VSE supports 64-bit real and virtual addressing.
For more information about supported functions and their minimum required support levels, see 7.3, “z15 features and function support overview” on page 260.
7.2.4 z/TPF
z15 support is provided by z/TPF V1R1 with PTFs. For more information about supported functions and their minimum required support levels, see 7.3, “z15 features and function support overview” on page 260.
7.2.5 Linux on IBM Z (Linux on Z)
Generally, a new machine is transparent to Linux on Z. For z15, toleration support is required for the following functions and features:
IPL in “z/Architecture” mode
Crypto Express7S cards
RoCE Express cards
8-byte LPAR offset
The service levels of SUSE, Red Hat, and Ubuntu releases that are supported at the time of this writing are listed in Table 7-2.
Table 7-2 Linux on Z distributions
Linux on Z distribution¹ | Supported version and release on z15²
SUSE Linux Enterprise Server | 15 SP1
SUSE Linux Enterprise Server | 12 SP4³
SUSE Linux Enterprise Server | 11 SP4³
Red Hat RHEL | 8.0 with service
Red Hat RHEL | 7.7³ with service
Red Hat RHEL | 6.10³ with service
Ubuntu | 18.04.1 LTS³
Ubuntu | 16.04.5 LTS³
KVM Hypervisor⁴ | Offered with the supported Linux distributions

1 Only z/Architecture (64-bit mode) is supported. IBM testing identifies the minimum required level and the recommended levels of the tested distributions.
2 Fix installation is required for toleration.
3 Maintenance is required.
4 For more information about minimal and recommended distribution levels, see the Linux on Z website.
For more information about supported Linux distributions on IBM Z servers, see the Tested platforms for Linux page of the IBM IT infrastructure website.
IBM is working with Linux distribution Business Partners to provide further use of selected z15 functions in future Linux on Z distribution releases.
Consider the following guidelines:
Use SUSE Linux Enterprise Server 15, Red Hat RHEL 8, or Ubuntu 18.04 LTS or newer in any new projects for z15 servers.
Update any Linux distribution to the latest service level before migrating to z15 servers.
Adjust the capacity of any z/VM and Linux on Z LPARs and z/VM guests, in terms of the number of IFLs and CPs (real or virtual), according to the PU capacity of the z15 server.
7.2.6 KVM hypervisor
KVM is offered through the Linux distribution partners for IBM Z and LinuxONE to help simplify delivery and installation: Linux and KVM are provided from a single source. Because KVM is included in the Linux distribution, ordering and installing KVM is easier.
For KVM support information, see the IBM Z website.
7.3 z15 features and function support overview
The following tables list the z15 features and functions and their minimum required operating system support levels. Information about Linux on Z refers exclusively to the appropriate distributions of SUSE, Red Hat, and Ubuntu.
All tables use the following conventions:
Y : The function is supported.
N : The function is not supported.
- : The function is not applicable to that specific operating system.
 
Note: The tables in this section list but do not explicitly mark all the features that require fixes in the corresponding operating system for toleration or exploitation. For more information, see the PSP bucket for 8561DEVICE.
7.3.1 Supported CPC functions
The supported base CPC functions for z/OS and z/VM are listed in Table 7-3.
Table 7-3 Supported base CPC functions for z/OS and z/VM
Function¹ | z/OS V2R4² | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4
z15 servers | Y | Y | Y | Y | Y | Y
Maximum processor units (PUs) per system image | 190 | 190³ | 190³ | 190³ | 64⁴ | 64⁴
Maximum main storage size⁵ | 4 TB | 4 TB | 4 TB | 4 TB | 2 TB | 2 TB
85 LPARs | Y | Y | Y | Y | Y | Y
Separate LPAR management of PUs | Y | Y | Y | Y | Y | Y
Dynamic PU add | Y | Y | Y | Y | Y | Y
Dynamic LPAR memory upgrade | Y | Y | Y | Y | Y | Y
LPAR group absolute capping | Y | Y | Y | Y | Y | Y
Capacity Provisioning Manager | Y | Y | Y | Y | N | N
Program-directed re-IPL | - | - | - | - | Y | Y
HiperDispatch | Y | Y | Y | Y | Y | Y
IBM Z Integrated Information Processors (zIIPs) | Y | Y | Y | Y | Y | Y
Transactional Execution | Y | Y | Y | Y | Y⁶ | Y⁶
Java Exploitation of Transactional Execution | Y | Y | Y | Y | Y⁷ | Y⁷
Simultaneous multithreading (SMT) | Y | Y | Y | Y | Y⁸ | Y⁸
Single Instruction Multiple Data (SIMD) | Y | Y | Y | Y | Y⁹ | Y⁹
Hardware decimal floating point¹⁰ | Y | Y | Y | Y | Y | Y
2 GB large page support | Y | Y | Y | Y | N | N
Large page (1 MB) support | Y | Y | Y | Y | Y⁷ | Y⁷
Out-of-order execution¹¹ | Y | Y | Y | Y | Y | Y
CPUMF (CPU measurement facility) for z15 | Y | Y | Y | Y | Y | Y
Enhanced flexibility for Capacity on Demand (CoD) | Y | Y | Y | Y | Y | Y
IBM Virtual Flash Memory (VFM) | Y | Y | Y | Y | N | N
1 MB pageable large pages¹² | Y | Y | Y | Y | N | N
Guarded Storage Facility (GSF) | Y | Y | Y | N | Y⁷ | Y⁷
Instruction Execution Protection (IEP) | Y | Y | Y | N | Y⁷ | Y⁷
Co-processor Compression Enhancements¹³ (CMPSC) | Y | Y | Y | Y | N | N
System Recovery Boost | Y | Y¹⁴ | N | N | N | Y¹⁵
IBM Integrated Accelerator for zEDC¹⁶ (on-chip compression) | Y | Y | Y¹⁷ | N | N¹⁸ | N¹⁸

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 See the z/OS preview announcement.
3 190-way without multithreading; 128-way with multithreading enabled.
4 64-way without multithreading; 32-way with multithreading enabled.
5 A total of 40 TB of real storage is supported per server.
6 Guests are informed that TX facility is available for use.
7 Guest exploitation support.
8 Dynamic SMT with z15.
9 Guests are informed that SIMD is available for use.
10 Packed decimal conversion support.
11 Enhanced OoO execution for z15 - see 3.4.3, “Out-of-Order execution” on page 102.
12 With IBM Virtual Flash Memory for middleware exploitation.
13 With PTFs for exploitation.
14 With PTFs.
15 Support for subcapacity CPs allocated to the z/VM LPAR.
16 IBM zEnterprise Data Compression Express replacement
17 Compatibility (read only) - with PTFs
18 Transparent for Guest support use of the gzip acceleration; guest support for z/OS Storage Compression
The supported base CPC functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-4.
Table 7-4 Supported base CPC functions for z/VSE, z/TPF, and Linux on Z
Function¹ | z/VSE V6R2 | z/TPF V1R1 | Linux on Z²
z15 servers | Y | Y | Y
Maximum processor units (PUs) per system image | 10 | 86 | 190³
Maximum main storage size⁴ | 32 GB | 4 TB | 16 TB⁵
85 LPARs | Y | Y | Y
Separate LPAR management of PUs | Y | Y | Y
Dynamic PU add | Y | N | Y
Dynamic LPAR memory upgrade | N | N | Y
LPAR group absolute capping | Y | N | N
Program-directed re-IPL | Y | N | Y
HiperDispatch | N | N | Y
IBM Z Integrated Information Processors (zIIPs) | N | N | N
Transactional Execution | N | N | Y
Java Exploitation of Transactional Execution | N | N | Y
Simultaneous multithreading (SMT) | N | N | Y
Single Instruction Multiple Data (SIMD) | Y | N | Y
Hardware decimal floating point⁶ | N | N | Y
2 GB large page support | N | Y | Y
Large page (1 MB) support | Y | Y | Y
Out-of-order execution | Y | Y | Y
CPUMF (CPU measurement facility) for z15 | N | Y | N⁷
Enhanced flexibility for CoD | N | N | N
IBM Virtual Flash Memory (VFM) | N | N | Y
Guarded Storage Facility (GSF) | N | N | Y
Instruction Execution Protection (IEP) | N | N | Y
Co-processor Compression Enhancements | N | N | N
System Recovery Boost | N | N | N
IBM Integrated Accelerator for zEDC⁸ (on-chip compression) | N | N | Y⁹

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 Support statement varies based on Linux on Z distribution and release.
3 For SLES12/RHEL7/Ubuntu 16.04 and later, Linux kernel supports 256 cores without SMT and 128 cores with SMT (= 256 threads).
4 A total of 40 TB of real storage is supported per server.
5 Linux on Z releases can support up to 64 TB of memory.
6 Packed decimal conversion support.
7 IBM is working with its Linux distribution Business Partners to provide this feature.
8 IBM zEnterprise Data Compression Express replacement.
9 Requires Linux kernel exploitation support for gzip/zlib compression.
7.3.2 Coupling and clustering
The supported coupling and clustering functions for z/OS and z/VM are listed in Table 7-5.
Table 7-5 Supported coupling and clustering functions for z/OS and z/VM
Function¹ | z/OS V2R4² | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4
Server Time Protocol (STP) | Y | Y | Y | Y | Y | Y
CFCC Level 24³ | Y | Y | Y | Y⁶ | Y⁸ | Y⁸
CFCC Level 24 Fair Latch Manager | Y | Y⁴ | Y⁴ | Y⁴ | Y⁸ | Y⁸
Message Path System ID (SYID) resiliency | Y | Y⁴ | Y⁴ | Y⁴ | Y⁸ | Y⁸
CFCC Level 23⁵ | Y | Y | Y | Y⁶ | Y⁸ | Y⁸
CFCC Level 22⁷ | Y | Y | Y | Y | Y⁸ | Y⁸
CFCC Level 22 Coupling Thin Interrupts | Y | Y | Y | Y | N | N
CFCC Level 22 Large Memory support | Y | Y | Y | Y | N | N
CFCC Level 22 Support for 256 Coupling CHPIDs per CPC | Y | Y | Y | Y | Y⁸ | Y⁸
CFCC Level 22 Coupling Facility Processor Scalability | Y | Y | Y | Y | Y⁸ | Y⁸
CFCC Level 22 List Notification Enhancements | Y | Y | Y | N | Y⁸ | Y⁸
CFCC Level 22 Encryption Support | Y | Y | N⁹ | N⁹ | Y⁸ | Y⁸
CFCC Level 22 Exploitation of VFM (Virtual Flash Memory) | Y | Y | Y | Y | N | N
RMF coupling channel reporting | Y | Y | Y | Y | N | N
Coupling over InfiniBand CHPID type CIB¹⁰ | Y | Y | Y | Y | N | N
InfiniBand coupling links 12x at a distance of 150 m (492 ft.)¹⁰ | Y | Y | Y | Y | N | N
InfiniBand coupling links 1x at an unrepeated distance of 10 km (6.2 miles)¹⁰ | Y | Y | Y | Y | N | N
Integrated Coupling Adapter (ICA SR) links CHPID CS5 | Y | Y | Y | Y | Y¹¹ | Y¹¹
Coupling Express LR (CE LR) CHPID CL5 | Y | Y | Y | Y | Y¹¹ | Y¹¹
z/VM Dynamic I/O support for InfiniBand CHPIDs¹⁰ | - | - | - | - | Y¹¹ | Y¹¹
z/VM Dynamic I/O support for ICA SR CHPIDs | - | - | - | - | Y¹¹ | Y¹¹
Asynchronous CF Duplexing for lock structures | Y | Y | Y | N | Y⁸ | Y⁸
Asynchronous cross-invalidate (XI) for CF cache structures¹² | Y | Y¹³ | Y¹³ | Y¹⁴ | Y⁸ | Y⁸
Dynamic I/O activation for stand-alone CF CPCs¹⁵ | Y¹⁶ | Y¹⁶ | Y¹⁶ | Y¹⁶ | Y¹⁶ | Y¹⁶

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 See the z/OS preview announcement.
3 CFCC Level 24 with Driver 41 (z15).
4 Requires z/OS XCF/XES toleration APAR.
5 CFCC Level 23 with Driver 36 (z14).
6 Not all functions supported with z/OS 2.1.
7 CFCC Level 22 with Driver 32 (z13/z13s).
8 Virtual guest coupling.
9 Toleration support (“locking out” down level systems that cannot use encrypted structure) provided for z/OS 2.1 and later.
10 InfiniBand Coupling Links are not supported on z15 and z14 ZR1.
11 To define, modify, and delete CHPID type CS5 when z/VM is the controlling LPAR for dynamic I/O.
12 Requires data manager support (Db2 fixes).
13 Requires fixes for APAR OA54688 for exploitation.
14 Toleration support only; requires fixes for APAR OA54985. Functional support in z/OS 2.2 and later.
15 Managing dynamic I/O activation for stand-alone CF CPCs on a z15 requires HMC 2.15.0 (Driver 41).
16 Requires HMC 2.14.1 (Driver 36) or newer and various OS fixes (HCD, HCM, IOS, IOCP).
In addition to the operating system support that is listed in Table 7-5 on page 263, Server Time Protocol is supported on z/TPF V1R1 and Linux on Z, and CFCC Levels 22, 23, and 24 are supported for z/TPF V1R1.
Storage connectivity
The supported storage connectivity functions for z/OS and z/VM are listed in Table 7-6.
Table 7-6 Supported storage connectivity functions for z/OS and z/VM
Function¹ | z/OS V2R4² | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4
zHyperLink Express Read Support | Y | Y³ | Y³ | Y³ | N | N
zHyperLink Express Write Support | Y | Y³ | Y³ | N | N | N
The 63.75-K subchannels | Y | Y | Y | Y | Y | Y
Six logical channel subsystems (LCSSs) | Y | Y | Y | Y | Y | Y
Four subchannel sets per LCSS | Y | Y | Y | Y | Y⁴ | Y⁴
Health Check for FICON Dynamic routing | Y | Y | Y | Y | N | N
z/VM Dynamic I/O support for FICON Express16SA FC and FCP CHPIDs | - | - | - | - | Y | Y
z/VM Dynamic I/O support for FICON Express16S+ FC and FCP CHPIDs | - | - | - | - | Y | Y
CHPID (Channel-Path Identifier) type FC
Extended distance FICON⁵ | Y | Y | Y | Y | Y | Y
FICON Express16SA for support of zHPF (IBM Z High-Performance FICON) | Y | Y | Y | Y | Y | Y
FICON Express16S+ for support of zHPF (IBM Z High-Performance FICON) | Y | Y | Y | Y | Y⁶ | Y⁶
FICON Express16S for support of zHPF | Y | Y | Y | Y | Y⁶ | Y⁶
FICON Express8S for support of zHPF | Y | Y | Y | Y | Y⁶ | Y⁶
MIDAW (Modified Indirect Data Address Word) | Y | Y | Y | Y | Y⁶ | Y⁶
zDAC (z/OS Discovery and Auto-Configuration) | Y | Y | Y | Y | N | N
FICON Express16SA when using FICON or CTC (channel-to-channel) | Y | Y | Y | Y | Y⁷ | Y⁷
FICON Express16S+ when using FICON or CTC (channel-to-channel) | Y | Y | Y | Y | Y⁷ | Y⁷
FICON Express16S when using FICON or CTC | Y | Y | Y | Y | Y⁷ | Y⁷
FICON Express8S when using FICON or CTC | Y | Y | Y | Y | Y⁷ | Y⁷
Global resource serialization (GRS) FICON CTC toleration | Y | Y | Y | Y | N | N
IPL from an alternative subchannel set | Y | Y | Y | Y | N | N
32 K subchannels for the FICON Express16SA | Y | Y | Y | Y | Y | Y
32 K subchannels for the FICON Express16S+ | Y | Y | Y | Y | Y | Y
32 K subchannels for the FICON Express16S | Y | Y | Y | Y | Y | Y
Request node identification data | Y | Y | Y | Y | N | N
FICON link incident reporting | Y | Y | Y | Y | N | Y
CHPID (Channel-Path Identifier) type FCP
FICON Express16SA for support of SCSI devices | - | - | - | - | Y | Y
FICON Express16S+ for support of SCSI devices | - | - | - | - | Y | Y
FICON Express16S for support of SCSI devices | - | - | - | - | Y | Y
FICON Express8S for support of SCSI devices | - | - | - | - | Y | Y
FICON Express16SA support of hardware data router | - | - | - | - | Y⁶ | Y⁶
FICON Express16S+ support of hardware data router | - | - | - | - | Y⁶ | Y⁶
FICON Express16S support of hardware data router | - | - | - | - | Y⁶ | Y⁶
FICON Express8S support of hardware data router | - | - | - | - | Y⁶ | Y⁶
FICON Express16SA T10-DIF support | - | - | - | - | Y⁶ | Y⁶
FICON Express16S+ T10-DIF support | - | - | - | - | Y⁶ | Y⁶
FICON Express16S T10-DIF support | - | - | - | - | Y⁶ | Y⁶
FICON Express8S T10-DIF support | - | - | - | - | Y⁶ | Y⁶
Increased performance for the FCP protocol | - | - | - | - | Y | Y
N_Port ID Virtualization (NPIV) | - | - | - | - | Y | Y
Worldwide port name tool | - | - | - | - | Y | Y

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
3 With PTFs
4 For specific Geographically Dispersed Parallel Sysplex (GDPS) usage only.
5 Transparent to operating systems.
6 For guest use.
7 CTC channel type not supported when CPC is managed in DPM mode.
The supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-7.
Table 7-7 Supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z
Function¹ | z/VSE V6R2 | z/TPF V1R1 | Linux on Z²
zHyperLink Express | - | - | -
The 63.75-K subchannels | N | N | Y
Six logical channel subsystems (LCSSs) | Y | N | Y
Four subchannel sets per LCSS | Y | N | Y
Health Check for FICON Dynamic routing | N | N | N
CHPID (Channel-Path Identifier) type FC
Extended distance FICON³ | Y | Y | Y
FICON Express16SA for support of zHPF (IBM Z High-Performance FICON) | Y⁴ | Y | Y
FICON Express16S+ for support of zHPF (IBM Z High-Performance FICON) | Y⁴ | Y | Y
FICON Express16S for support of zHPF | Y | Y | Y
FICON Express8S for support of zHPF | Y | Y | Y
MIDAW (Modified Indirect Data Address Word) | N | N | N
FICON Express16SA when using FICON or CTC (channel-to-channel) | Y | Y | Y⁵
FICON Express16S+ when using FICON or CTC | Y | Y | Y⁵
FICON Express16S when using FICON or CTC | Y | Y | Y⁵
FICON Express8S when using FICON or CTC | Y | Y | Y⁵
IPL from an alternative subchannel set | N | N | N
32 K subchannels for the FICON Express16SA | N | N | Y
32 K subchannels for the FICON Express16S+ | N | N | Y
32 K subchannels for the FICON Express16S | N | N | Y
Request node identification data | N | N | N
FICON link incident reporting | N | N | N
CHPID (Channel-Path Identifier) type FCP
FICON Express16SA for support of SCSI devices | Y | - | Y
FICON Express16S+ for support of SCSI devices | Y | - | Y
FICON Express16S for support of SCSI devices | Y | - | Y
FICON Express8S for support of SCSI devices | Y | - | Y
FICON Express16SA support of hardware data router | N | N | Y
FICON Express16S+ support of hardware data router | N | N | Y
FICON Express16S support of hardware data router | N | N | Y
FICON Express8S support of hardware data router | N | N | Y
FICON Express16SA T10-DIF support | N | N | Y
FICON Express16S+ T10-DIF support | N | N | Y
FICON Express16S T10-DIF support | N | N | Y
FICON Express8S T10-DIF support | N | N | Y
Increased performance for the FCP protocol | Y | - | Y
N_Port ID Virtualization (NPIV) | Y | N | Y
Worldwide port name tool | - | - | Y

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 Support statement varies based on Linux on Z distribution and release.
3 Transparent to operating systems.
4 Supported on z/VSE V6.2 with PTFs.
5 CTC channel type not supported when CPC is managed in DPM mode.
7.3.3 Network connectivity
The supported network connectivity functions for z/OS and z/VM are listed in Table 7-8.
Table 7-8 Supported network connectivity functions for z/OS and z/VM
Function¹ | z/OS V2R4² | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4
Checksum offload for IPv6 packets | Y | Y | Y | Y | Y³ | Y³
Checksum offload for LPAR-to-LPAR traffic with IPv4 and IPv6 | Y | Y | Y | Y | Y³ | Y³
Querying and displaying an OSA configuration | Y | Y | Y | Y | Y | Y
QDIO data connection isolation for z/VM | - | - | - | - | Y | Y
QDIO interface isolation for z/OS | Y | Y | Y | Y | - | -
QDIO OLM (Optimized Latency Mode) | Y | Y | Y | Y | - | -
Adapter interruptions for QDIO | Y | N | N | N | Y | Y
QDIO Diagnostic Synchronization | Y | Y | Y | Y | N | N
IWQ (Inbound Workload Queuing) for OSA | Y | Y | Y | Y | Y³ | Y³
VLAN management enhancements | Y | Y | Y | Y | Y⁴ | Y⁴
GARP VLAN Registration Protocol | Y | Y | Y | Y | Y | Y
Link aggregation support for z/VM | - | - | - | - | Y | Y
Multi-vSwitch Link Aggregation | - | - | - | - | Y | Y
Large send for IPv6 packets | Y | Y | Y | Y | Y³ | Y³
z/VM Dynamic I/O Support for OSA-Express6S OSD CHPIDs | - | - | - | - | Y | Y
z/VM Dynamic I/O Support for OSA-Express7S OSD CHPIDs | - | - | - | - | Y | Y
OSA Dynamic LAN idle | Y | Y | Y | Y | N | N
OSA Layer 3 virtual MAC for z/OS environments | Y | Y | Y | Y | - | -
Network Traffic Analyzer | Y | Y | Y | Y | N | N
HiperSockets
HiperSockets⁵ | Y | Y | Y | Y | Y | Y
32 HiperSockets | Y | Y | Y | Y | Y | Y
HiperSockets Completion Queue | Y | Y | Y | Y | Y | Y
HiperSockets Virtual Switch Bridge | - | - | - | - | Y | Y
HiperSockets Multiple Write Facility | Y | Y | Y | Y | N | N
HiperSockets support of IPv6 | Y | Y | Y | Y | Y | Y
HiperSockets Layer 2 support | Y | Y | Y | Y | Y | Y
SMC-D and SMC-R
SMC-D⁶ over ISM (Internal Shared Memory) | Y | Y | Y | N | Y³ | Y³
10GbE RoCE⁷ Express | Y | Y | Y | Y | Y³ | Y³
25GbE and 10GbE RoCE Express2 for SMC-R | Y | Y | Y | Y | Y³ | Y³
25GbE and 10GbE RoCE Express2 and 2.1 for Ethernet communications⁸, including Single Root I/O Virtualization (SR-IOV) | N | N | N | N | Y³ | Y³
z/VM Dynamic I/O support for RoCE Express2 and 2.1 | - | - | - | - | Y | Y
Shared RoCE environment | Y | Y | Y | Y | Y | Y
Open Systems Adapter (OSA)⁹
OSA-Express7S 1000BASE-T Ethernet CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express7S 25-Gigabit Ethernet Short Reach (SR and SR1.1) CHPID type OSD | Y | Y¹⁰ | Y¹⁰ | Y¹¹ | Y | Y
OSA-Express7S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express6S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express7S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express7S Gigabit Ethernet LX and SX CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express6S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express7S 1000BASE-T Ethernet CHPID type OSE | Y | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet CHPID type OSE | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet CHPID type OSE | Y | Y | Y | Y | Y | Y

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
3 For guest use or exploitation.
4 Support of guests is transparent to z/VM if the device is directly connected to the guest (pass through).
5 On z15, the CHPID statement of HiperSockets devices requires the keyword VCHID. Therefore, the z15 IOCP definitions must be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number that is associated with the channel path (valid range is 7E0 - 7FF). VCHID is not valid on IBM Z servers before z13.
6 Shared Memory Communications - Direct Memory Access.
7 Remote Direct Memory Access (RDMA) over Converged Ethernet.
8 Does not require a peer OSA.
9 Supported CHPID types: OSC, OSD, OSE, and OSM.
10 Requires PTFs for APARs OA55256 (IBM VTAM®) and PI95703 (TCP/IP).
11 Requires PTF for APAR PI99085.
The supported network connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-9.
Table 7-9 Supported network connectivity functions for z/VSE, z/TPF, and Linux on Z
Function¹ | z/VSE V6R2 | z/TPF V1R1 | Linux on Z²
Checksum offload for IPv6 packets | N | N | Y
Checksum offload for LPAR-to-LPAR traffic with IPv4 and IPv6 | N | N | Y
Adapter interruptions for QDIO | Y | N | Y
QDIO Diagnostic Synchronization | N | N | N
IWQ (Inbound Workload Queuing) for OSA | N | N | N
VLAN management enhancements | N | N | N
GARP VLAN Registration Protocol | N | N | Y³
Link aggregation support for z/VM | N | N | N
Multi-vSwitch Link Aggregation | N | N | N
Large send for IPv6 packets | N | N | Y
z/VM Dynamic I/O Support for OSA-Express6S and OSA-Express7S OSD CHPIDs | N | N | N
OSA Dynamic LAN idle | N | N | N
HiperSockets
HiperSockets⁴ | Y | N | Y
32 HiperSockets | Y | N | Y
HiperSockets Completion Queue | Y | N | Y
HiperSockets Virtual Switch Bridge | - | - | Y⁵
HiperSockets Multiple Write Facility | N | N | N
HiperSockets support of IPv6 | Y | N | Y
HiperSockets Layer 2 support | Y | N | Y
HiperSockets Network Traffic Analyzer for Linux on Z | N | N | Y
SMC-D and SMC-R
SMC-D⁶ over ISM (Internal Shared Memory) | N | N | Y⁷
10GbE RoCE⁸ Express | N | N | Y⁷,⁹
25GbE and 10GbE RoCE Express2 for SMC-R | N | N | Y⁷
25GbE and 10GbE RoCE Express2 for Ethernet communications¹⁰, including Single Root I/O Virtualization (SR-IOV) | N | N | Y⁷,⁹
Shared RoCE environment | N | N | Y
Open Systems Adapter (OSA)
OSA-Express7S 1000BASE-T Ethernet CHPID type OSC | Y | Y | -
OSA-Express6S 1000BASE-T Ethernet CHPID type OSC | Y | Y | -
OSA-Express5S 1000BASE-T Ethernet CHPID type OSC | Y | Y | -
OSA-Express7S 1 GbE LX and SX CHPID type OSC | Y | Y | -
OSA-Express7S 25-Gigabit Ethernet Short Reach (SR) CHPID type OSD | Y | Y | Y
OSA-Express7S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y
OSA-Express6S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y
OSA-Express5S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR) CHPID type OSD | Y | Y | Y
OSA-Express7S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y
OSA-Express6S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y
OSA-Express5S Gigabit Ethernet LX and SX CHPID type OSD | Y | Y | Y
OSA-Express7S 1000BASE-T Ethernet CHPID type OSD | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet CHPID type OSD | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet CHPID type OSD | Y | Y | Y
OSA-Express7S 1000BASE-T Ethernet CHPID type OSE | Y | N | N
OSA-Express6S 1000BASE-T Ethernet CHPID type OSE | Y | N | N
OSA-Express5S 1000BASE-T Ethernet CHPID type OSE | Y | N | N

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 Support statement varies based on Linux on Z distribution and release.
3 By using VLANs.
4 On z15, the CHPID statement of HiperSockets devices requires the keyword VCHID. Therefore, the z15 IOCP definitions must be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number that is associated with the channel path (valid range is 7E0 - 7FF). VCHID is not valid on IBM Z servers before z13.
5 Applicable to guest operating systems.
6 Shared Memory Communications - Direct Memory Access.
7 SMC-R and SMC-D are supported on Linux kernel; see: https://linux-on-z.blogspot.com/p/smc-for-linux-on-ibm-z.html
8 Remote Direct Memory Access (RDMA) over Converged Ethernet.
9 Linux can also use RoCE Express as a standard NIC (Network Interface Card) for Ethernet.
10 Does not require a peer OSA.
7.3.4 Cryptographic functions
The z15 supported cryptography functions for z/OS and z/VM are listed in Table 7-10.
Table 7-10 Supported cryptography functions for z/OS and z/VM
Function¹ | z/OS V2R4² | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4
CP Assist for Cryptographic Function (CPACF) | Y | Y | Y | Y | Y³ | Y³
CPACF greater than 16 Domain Support | Y | Y | Y | Y | Y³ | Y³
CPACF AES-128, AES-192, and AES-256 | Y | Y | Y | Y | Y³ | Y³
CPACF SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y | Y | Y | Y³ | Y³
CPACF protected key | Y | Y | Y | Y | Y³ | Y³
Crypto Express7S | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express7S Support for Visa Format Preserving Encryption | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express7S Support for Coprocessor in PCI-HSM Compliance Mode⁵ | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express7S supporting up to 85 domains | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express6S | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express6S Support for Visa Format Preserving Encryption | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express6S Support for Coprocessor in PCI-HSM Compliance Mode⁶ | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express6S supporting up to 85 domains | Y | Y | Y⁴ | Y⁴ | Y³ | Y³
Crypto Express5S | Y | Y | Y | Y | Y³ | Y³
Crypto Express5S supporting up to 85 domains | Y | Y | Y | Y | Y³ | Y³
Elliptic Curve Cryptography (ECC) | Y | Y | Y | Y | Y³ | Y³
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | Y | Y | Y | Y | Y³ | Y³
z/OS Data Set Encryption | Y | Y | Y | N | - | -
z/VM Encrypted paging support | - | - | - | - | Y | Y
RMF Support for Crypto Express7 | Y | Y | Y | Y | - | -
RMF Support for Crypto Express6 | Y | Y | Y | Y | - | -
z/OS encryption readiness technology (zERT) | Y | Y | N | N | - | -

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
3 For guest use or exploitation.
4 A web deliverable is required. For more information and to download the deliverable, see the z/OS downloads page of the IBM IT infrastructure website.
5 Requires TKE 9.1 or newer.
6 Requires TKE 9.1 or newer.
The z15 supported cryptography functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-11.
Table 7-11 Supported cryptography functions for z/VSE, z/TPF, and Linux on Z
Function¹ | z/VSE V6R2 | z/TPF V1R1 | Linux on Z²
CP Assist for Cryptographic Function (CPACF) | Y | Y | Y
CPACF greater than 16 Domain Support | Y | N | Y
CPACF AES-128, AES-192, and AES-256 | Y | Y³ | Y
CPACF SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y⁴ | Y
CPACF protected key | N | N | N
Crypto Express7S | Y | Y | Y
Crypto Express7S Support for Visa Format Preserving Encryption | N | N | N
Crypto Express7S Support for Coprocessor in PCI-HSM Compliance Mode⁵ | N | N | N
Crypto Express7S supporting up to 85 domains | Y | N | Y
Crypto Express6S | Y | Y | Y
Crypto Express6S Support for Visa Format Preserving Encryption | N | N | N
Crypto Express6S Support for Coprocessor in PCI-HSM Compliance Mode⁵ | N | N | N
Crypto Express6S supporting up to 85 domains | Y | N | Y
Crypto Express5S | Y | Y | Y
Crypto Express5S supporting up to 85 domains | Y | N | Y
Elliptic Curve Cryptography (ECC) | Y | N | Y
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | N | N | Y
z/VM Encrypted paging support | N | - | N
z/TPF transparent database encryption | - | Y | -

1 PTFs might be required for toleration support or exploitation of z15 features and functions.
2 Support statement varies based on Linux on Z distribution and release.
3 z/TPF supports only AES-128 and AES-256.
4 z/TPF supports only SHA-1 and SHA-256.
5 Requires TKE 9.1 or newer.
7.4 Support by features and functions
This section addresses operating system support by function. Only the currently in-support releases are covered.
Tables in this section use the following convention:
N/A : Not applicable
NA : Not available
7.4.1 LPAR Configuration and Management
A single system image can control several processor units, such as CPs, zIIPs, or IFLs.
Maximum number of PUs per system image
The maximum number of PUs that is supported by each operating system image and by special-purpose LPARs are listed in Table 7-12.
Table 7-12 Maximum number of PUs per system image
Operating system | Maximum number of PUs per system image
z/OS V2R4¹ | 256²,³
z/OS V2R3 | 256²,³
z/OS V2R2 | 256²,³
z/OS V2R1 | 256²,³
z/VM V7R1 | 80⁴
z/VM V6R4 | 64⁵
z/VSE V6.2 and later | The z/VSE Turbo Dispatcher can use up to 4 CPs and tolerates up to 10-way LPARs
z/TPF V1R1 | 86 CPs
CFCC Level 24 | 16 CPs or ICFs (CPs and ICFs cannot be mixed)
Linux on Z | SUSE Linux Enterprise Server 12 and later: 256 CPs or IFLs; SUSE Linux Enterprise Server 11: 64 CPs or IFLs; Red Hat RHEL 7 and later: 256 CPs or IFLs; Red Hat RHEL 6: 64 CPs or IFLs; Ubuntu 16.04 LTS and 18.04 LTS: 256 CPs or IFLs
KVM Hypervisor | Offered with the following Linux distributions (256 CPs or IFLs): SLES 12 SP4 and later; RHEL 7.5 with kernel-alt package (kernel 4.14); Ubuntu 16.04 LTS and 18.04 LTS
Secure Service Container | 80
GDPS Virtual Appliance | 80

2 z15 T01 LPARs support 190-way without multithreading; 128-way with multithreading (SMT).
3 Total characterizable PUs, including zIIPs and CPs.
4 An 80-way without multithreading and 40-way with multithreading enabled. Requires PTF for APAR VM66265. Supported on z14 and z15.
5 A 64-way without multithreading and 32-way with multithreading enabled.
Maximum main storage size
The maximum amount of main storage that is supported by current operating systems is listed in Table 7-13. A maximum of 16 TB of main storage can be defined for an LPAR on a z15 server.
Table 7-13 Maximum memory that is supported by the operating system
Operating system¹ | Maximum supported main storage²
z/OS | z/OS V2R1 and later support 4 TB
z/VM | z/VM V6R4 and V7R1 support 2 TB
z/VSE | z/VSE V6R2 supports 32 GB
z/TPF | z/TPF supports 4 TB
CFCC | Levels 22, 23, and 24 support up to 3 TB
Secure Service Containers | Supports up to 16 TB
Linux on Z (64-bit) | 16 TB³

1 An LPAR on z15 supports up to 16 TB of memory.
2 z15 servers support 40 TB user configurable memory per server.
3 Support may vary by distribution. Check with your distribution provider.
Up to 85 LPARs
This feature was first made available on z13 servers and allows the system to be configured with up to 85 LPARs. Because channel subsystems can be shared by up to 15 LPARs, it is necessary to configure six channel subsystems to reach the 85 LPARs limit.
The supported operating systems are listed in Table 7-3 and Table 7-4 on page 262.
 
Remember: A virtual appliance that is deployed in a Secure Service Container runs in a dedicated LPAR. When activated, it reduces the maximum number of available LPARs by one.
Separate LPAR management of PUs
z15 servers use separate PU pools for each optional PU type. The separate management of PU types enhances and simplifies capacity planning and management of the configured LPARs and their associated processor resources.
The supported operating systems are listed in Table 7-3 and Table 7-4 on page 262.
Dynamic PU add
Planning an LPAR configuration includes defining reserved PUs that can be brought online when extra capacity is needed. Operating system support is required to use this capability without an IPL; that is, nondisruptively. This support has been available in z/OS for some time.
The dynamic PU add function enhances this support by allowing you to dynamically define and change the number and type of reserved PUs in an LPAR profile, which removes any planning requirements. The new resources are immediately made available to the operating system and in the case of z/VM, to its guests.
The supported operating systems are listed in Table 7-3 and Table 7-4 on page 262.
Dynamic LPAR memory upgrade
An LPAR can be defined with an initial and a reserved amount of memory. At activation time, the initial amount is made available to the partition and the reserved amount can be added later, partially or totally. Although these two memory zones do not have to be contiguous in real memory, they appear as logically contiguous to the operating system that runs in the LPAR.
z/OS can take advantage of this support and nondisruptively acquire and release memory from the reserved area. z/VM V6R2 and later can acquire memory nondisruptively and immediately make it available to guests. z/VM virtualizes this support to its guests, which can also increase their memory nondisruptively if the guest operating system supports this capability. Currently, releasing memory from z/VM is not supported. Releasing memory from the z/VM guest depends on the guest's operating system support.
Linux on Z also supports acquiring and releasing memory nondisruptively. This feature is enabled for SUSE Linux Enterprise Server 11 and RHEL 6 and later releases.
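On Linux on Z, standby memory that is defined to the LPAR can be brought online and taken offline again through the standard memory hotplug sysfs interface. The following minimal sketch assumes a standby block named memory64 (an illustrative number; the block size is system-dependent):

    # Size of one memory block, in bytes
    cat /sys/devices/system/memory/block_size_bytes
    # Bring a standby memory block online, or take it offline again
    echo online > /sys/devices/system/memory/memory64/state
    echo offline > /sys/devices/system/memory/memory64/state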
LPAR group absolute capping
On z13 servers, PR/SM is enhanced to support an option to limit the amount of physical processor capacity that is used by an individual LPAR when a PU that is defined as a CP or an IFL is shared across a set of LPARs. This enhancement is designed to provide a physical capacity limit that is enforced as an absolute (versus a relative) limit. It is not affected by changes to the logical or physical configuration of the system. This physical capacity limit can be specified in units of CPs or IFLs.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
Capacity Provisioning Manager
The provisioning architecture enables clients to better control the configuration and activation of On/Off CoD. For more information, see Chapter 8, "System upgrades", on page 331. The new process is inherently more flexible and can be automated. This capability can result in easier, faster, and more reliable management of the processing capacity.
The Capacity Provisioning Manager, which is a feature that is first available with z/OS V1R9, interfaces with z/OS Workload Manager (WLM) and implements capacity provisioning policies. Several implementation options are available, from an analysis mode that issues only guidelines, to an autonomic mode that provides fully automated operations.
Replacing manual monitoring with autonomic management or supporting manual operation with guidelines can help ensure that sufficient processing power is available with the least possible delay. The supported operating systems are listed in Table 7-3 on page 260.
Program-directed re-IPL
First available on System z9®, program-directed re-IPL allows an operating system on a z15 to IPL again without operator intervention. This function is supported for SCSI and IBM extended count key data (IBM ECKD) devices.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
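On Linux on Z, program-directed re-IPL is driven through the /sys/firmware/reipl interface. The following minimal sketch re-IPLs from an ECKD (CCW) device; the device number 0.0.1234 is illustrative:

    # Select CCW re-IPL, point it at the boot device, and reboot
    echo ccw > /sys/firmware/reipl/reipl_type
    echo 0.0.1234 > /sys/firmware/reipl/ccw/device
    reboot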
IOCP
All IBM Z servers require a description of their I/O configuration. This description is stored in I/O configuration data set (IOCDS) files. The I/O configuration program (IOCP) allows for the creation of the IOCDS file from a source file that is known as the I/O configuration source (IOCS).
The IOCS file contains definitions of LPARs and channel subsystems. It also includes detailed information for each channel and path assignment, control unit, and device in the configuration.
IOCP for z15 provides support for the following features:
z15 Base machine definition
New PCI function adapter for zHyperLink (HYL)
New PCI function adapter for RoCE Express2 (CX4)
New IOCP keyword MIXTYPE, which is required for prior FICON cards
New hardware (announced with Driver 41)
IOCP support for Dynamic I/O for stand-alone CF (Driver 36 and later)
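As an illustration of the changed definitions, the HiperSockets requirement that is described with Table 7-8 (the VCHID keyword on CHPID type IQD) results in IOCP statements of the following form. This sketch uses illustrative CSS, CHPID, and partition names:

    CHPID PATH=(CSS(0),F4),SHARED,PARTITION=((LP01,LP02)),TYPE=IQD,VCHID=7E0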
 
IOCP required level for z15 servers: The required level of IOCP for the z15 is IOCP 5.5.0 with PTFs. For more information, see the following publications:
IBM Z Stand-Alone Input/Output Configuration Program User's Guide, SB10-7173
IBM Z Input/Output Configuration Program User’s Guide for ICP IOCP, SB10-7172
 
Dynamic Partition Manager V4.0: Dynamic Partition Manager V4.0 is available for managing IBM Z servers that are running Linux. DPM 4.0 is available with HMC Driver Level 41 (HMC Version 2.15.0). IOCP does not need to configure a server that is running in DPM mode. For more information, see IBM Dynamic Partition Manager (DPM) Guide, SB10-7176.
7.4.2 Base CPC features and functions
In this section, we describe the features and functions of Base CPC.
HiperDispatch
The HIPERDISPATCH=YES/NO parameter in the IEAOPTxx member of SYS1.PARMLIB and on the SET OPT=xx command controls whether HiperDispatch is enabled or disabled for a z/OS image. It can be changed dynamically, without an IPL or any outage.
In z/OS, the IEAOPTxx keyword HIPERDISPATCH defaults to YES when it is running on a z15, z14, z13, and z13s server.
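As a minimal sketch, enabling HiperDispatch therefore requires only one keyword; the member suffix is illustrative:

    In IEAOPTxx (activate dynamically with SET OPT=xx):
        HIPERDISPATCH=YES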
The use of SMT on z15 servers requires that HiperDispatch is enabled on the operating system. For more information, see “Simultaneous multithreading” on page 281.
Additionally, any LPAR that is running with more than 64 logical processors is required to operate in HiperDispatch Management Mode.
The following rules control this environment:
If an LPAR is defined at IPL with more than 64 logical processors, the LPAR automatically operates in HiperDispatch Management Mode, regardless of the HIPERDISPATCH= specification.
If logical processors are added to an LPAR that has 64 or fewer logical processors and the extra logical processors raise the number of logical processors to more than 64, the LPAR automatically operates in HiperDispatch Management Mode, regardless of the HIPERDISPATCH=YES/NO specification. That is, even if the LPAR has the HIPERDISPATCH=NO specification, that LPAR is converted to operate in HiperDispatch Management Mode.
An LPAR with more than 64 logical processors that are running in HiperDispatch Management Mode cannot be reverted to run in non-HiperDispatch Management Mode.
HiperDispatch on z15 servers uses a new chip and CPC drawer configuration to improve cache access performance. Beginning with z/OS V2R1, HiperDispatch was changed to use the PU chip/cluster/drawer cache structure of z15 servers. The base support is provided by PTFs that are identified by the fix category IBM.Device.Server.z15-8561.RequiredService.
PR/SM on z15 servers seeks to assign all memory in one CPC drawer that is striped across the clusters of that drawer to take advantage of the lower latency memory access in a drawer. Also, PR/SM tries to consolidate storage onto drawers with the most processor entitlement.
PR/SM on z15 servers seeks to assign all logical processors of a partition to one CPC drawer, packed into the PU chips of that drawer, in cooperation with operating system HiperDispatch to optimize shared cache usage.
PR/SM automatically keeps a partition's memory and logical processors on the same CPC drawer. This arrangement looks simple for a partition, but it is a complex optimization for multiple logical partitions because some must be split among processor drawers.
In z15, all processor types can be dynamically reassigned except IFPs.
To use HiperDispatch effectively, WLM goal adjustment might be required. Review the WLM policies and goals and update them as necessary. WLM policies can be changed without turning off HiperDispatch. A health check is provided to verify whether HiperDispatch is enabled on a system image.
z/VM V7R1 and V6R4
z/VM also uses the HiperDispatch facility for improved processor efficiency through better use of the processor cache, taking advantage of the cache-rich processor, node, and drawer design of the z15 system. The supported processor limit was increased to 80 for z/VM 7.1 (40 cores with SMT, which supports up to 80 threads running simultaneously), whereas it remains at 64 for z/VM 6.4 (32 cores with SMT, which supports up to 64 threads running simultaneously).
CPU polarization support in Linux on Z
You can optimize the operation of a vertical SMP environment by adjusting the SMP factor based on the workload demands. For more information about CPU polarization support in Linux on Z, see the CPU polarization page of IBM Knowledge Center.
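On recent distributions, polarization can also be inspected and changed directly through sysfs. The following minimal sketch assumes an s390x kernel that exposes the dispatching attribute:

    # Show the polarization of each CPU (s390x-specific lscpu column)
    lscpu -e=CPU,POLARIZATION
    # 1 = vertical polarization, 0 = horizontal polarization
    echo 1 > /sys/devices/system/cpu/dispatching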
z/TPF
z/TPF on z15 can use more processors immediately without reactivating the LPAR or IPLing the z/TPF system.
On servers before z14, the z/TPF workload is evenly distributed across all available processors, even in low-utilization situations. This configuration causes cache and core contention with other LPARs. When z/TPF runs in a shared processor configuration, the achieved MIPS is higher when z/TPF uses a minimum set of processors.
In low-utilization periods, z/TPF now minimizes the processor footprint by compressing TPF workload onto a minimal set of I-streams (engines), which reduces the effect on other LPARs and allows the entire CPC to operate more efficiently.
As a consequence, z/OS and z/VM experience less contention from the z/TPF system when the z/TPF system is operating at periods of low demand.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
zIIP support
zIIPs do not change the model capacity identifier of z15 servers. IBM software product license charges that are based on the model capacity identifier are not affected by the addition of zIIPs. On a z15 server, z/OS Version 2 Release 1 is the minimum level for supporting zIIPs.
No changes to applications are required to use zIIPs. They can be used by the following applications:
Db2 V8 and later for z/OS data serving for applications that use Distributed Relational Database Architecture (DRDA) over TCP/IP, such as data serving, data warehousing, and selected utilities.
z/OS XML services.
z/OS CIM Server.
z/OS Communications Server for network encryption (Internet Protocol Security [IPSec]) and for large messages that are sent by HiperSockets.
IBM GBS Scalable Architecture for Financial Reporting.
IBM z/OS Global Mirror (formerly XRC) and System Data Mover.
IBM z/OS Container Extensions.
IBM OMEGAMON® XE on z/OS, OMEGAMON XE on Db2 Performance Expert, and Db2 Performance Monitor.
Any Java application that uses the current IBM SDK.
WebSphere Application Server V5R1 and later, and products that are based on it, such as WebSphere Portal, WebSphere Enterprise Service Bus (WebSphere ESB), and WebSphere Business Integration (WBI) for z/OS.
CICS/TS V2R3 and later.
Db2 UDB for z/OS Version 8 and later.
IMS Version 8 and later.
zIIP Assisted HiperSockets for large messages.
z/OSMF (z/OS Management Facility).
IBM z/OS Platform for Apache Spark.
IBM Watson® Machine Learning for z/OS.
z/OS System Recovery Boost.
The functioning of a zIIP is transparent to application programs. The supported operating systems are listed in Table 7-3 on page 260.
On z15 servers, the zIIP processor is designed to run in SMT mode, with up to two threads per processor. This function is designed to help improve throughput for zIIP workloads and provide appropriate performance measurement, capacity planning, and SMF accounting data. This support is available for z/OS V2.1 and later, with PTFs.
Use the PROJECTCPU option of the IEAOPTxx parmlib member to help determine whether zIIPs can be beneficial to the installation. Setting PROJECTCPU=YES directs z/OS to record the amount of eligible work for zIIPs in SMF record type 72 subtype 3. The field APPL% IIPCP of the Workload Activity Report listing by WLM service class indicates the percentage of a processor that is zIIP eligible. Because of the zIIP’s lower price as compared to a CP, even a utilization as low as 10% can provide cost benefits.
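A minimal IEAOPTxx sketch for this option follows; the member suffix is illustrative:

    In IEAOPTxx:
        PROJECTCPU=YES

With this setting, the Workload Activity Report shows the zIIP-eligible percentage (APPL% IIPCP) without zIIPs being installed.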
Transactional Execution
The IBM zEnterprise EC12 introduced an architectural feature called Transactional Execution (TX). This capability is known in academia and industry as hardware transactional memory. Transactional execution is also implemented on subsequent IBM Z servers.
This feature enables software to indicate to the hardware the beginning and end of a group of instructions that must be treated in an atomic way. All of their results occur or none occur, in true transactional style. The execution is optimistic.
The hardware provides a memory area to record the original contents of affected registers and memory as the instruction’s execution occurs. If the transactional execution group is canceled or must be rolled back, the hardware transactional memory is used to reset the values. Software can implement a fallback capability.
This capability increases the software’s efficiency by providing a way to avoid locks (lock elision). This advantage is of special importance for speculative code generation and highly parallelized applications.
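As an illustration of lock elision, the following simplified C sketch uses the GCC transactional-execution builtins for IBM Z (compile with -mhtm on a zEC12 or later target). The names fallback_lock, counter, and increment are illustrative, and production code also must verify that the TX facility is available and check the lock state inside the transaction:

    #include <htmintrin.h>   /* transactional-execution intrinsics for IBM Z */
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter;

    void increment(void)
    {
        if (__builtin_tbegin(0) == _HTM_TBEGIN_STARTED) {
            counter++;              /* executes atomically; no lock is taken */
            __builtin_tend();
        } else {
            /* The transaction aborted: fall back to the lock */
            pthread_mutex_lock(&fallback_lock);
            counter++;
            pthread_mutex_unlock(&fallback_lock);
        }
    }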
TX is used by IBM Java virtual machine (JVM) and might be used by other software. The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
System Recovery Boost
System Recovery Boost is a new feature that was implemented on the IBM z15 system. This feature provides higher temporary processing capacity to LPARs for speeding up shutdown and IPL operations, without increasing software costs. For more information, see Appendix B, “System Recovery Boost” on page 491.
The temporary processing capacity boost is intended to help workloads catch up after an IPL (planned or unplanned) and work through a backlog faster after a downtime, which improves overall system availability by reducing the elapsed time that is required to recover service.
The capacity boost is available on an LPAR basis and is provided for general-purpose processors (CPs) and the operating systems that use them. The following types of temporary boost capacity are available (see Table 7-14 on page 281 for operating system support):
Subcapacity Boost (for systems with CPs running at a subcapacity index 4xx, 5xx, or 6xx). For LPARs running on subcapacity processors, during the boost period the allocated CPs are boosted to full capacity (7xx) without increasing software cost.
CP capacity boost using zIIP conversion during the boost period. If the customer has zIIP processors, these processors can be converted for the temporary boost interval to general-purpose processors (CPs) for the selected LPARs. After the boost period ends, the zIIPs resume their characterization. No other software cost is associated with the zIIPs during the conversion (boost) period.
More temporary zIIP boost records can be obtained by way of eBOD for supplementing the existing available customer zIIPs. The temporary zIIP boost records must be activated before planned operations and deactivated at the end of the boost period (automation software can be used for this purpose).
Table 7-14 Operating system support for System Recovery Boost
Boost type¹ | z/OS V2R4 | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/VM V7R1 | z/VM V6R4 | z/TPF V1R1 | z/VSE V6R2 | Linux on Z
Subcapacity Boost | Y | Y² | N | N | N | Y² | N | N | N
zIIP to CP capacity boost⁴ | Y | Y² | N | N | N | N | N | N | N

1 Boost must be enabled for LPARs to opt in.
2 With Fixes.
3 Subcapacity boost might be available during the boost period to guest operating systems except for z/OS.
4 zIIP processor capacity boost is only available if customer has at least one active processor characterized as zIIP. More zIIPs can be used if obtained through eBOD (temporary zIIP boost records).
Automation
The client’s automation product can be used to automate and control the following System Recovery Boost activities:
To activate and deactivate the eBOD temporary capacity record to provide more physical zIIPs for an IPL or shutdown boost.
To dynamically modify LPAR weights, as might be needed to modify the sharing of physical zIIP capacity during a Boost period.
To drive the invocation of the PROC that indicates the beginning of a shutdown process (and the start of the shutdown boost).
To take advantage of new composite HW API reconfiguration actions.
To control the level of parallelism that is present in the workload at startup (for example, starting middleware regions) and shutdown (for example, performing an orderly shutdown of middleware).
Simultaneous multithreading
SMT is the hardware capability to process up to two simultaneous threads in a single core, sharing the resources of the superscalar core. This capability improves the system capacity and efficiency in the usage of the processor, which increases the overall throughput of the system.
The z15 can run up to two threads simultaneously in the same processor core, which dynamically share the resources of the core, such as the cache, the translation lookaside buffer (TLB), and execution resources. SMT provides better utilization of the cores and more processing capacity.
SMT is supported for zIIPs and IFLs.
 
Note: For zIIPs and IFLs, SMT must be enabled on z/OS, z/VM, or Linux on Z instances. An operating system with SMT support can be configured to dispatch work to a thread on a zIIP (for eligible workloads in z/OS) or an IFL (for z/VM) core in single-thread or SMT mode.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
An operating system that uses SMT controls each core and is responsible for maximizing core throughput and meeting workload goals with the smallest number of cores. In z/OS, HiperDispatch cache optimization is considered when choosing the two threads to be dispatched on the same processor.
HiperDispatch attempts to dispatch guest virtual CPUs on the same logical processor on which they last ran. PR/SM attempts to dispatch a vertical low logical processor on the same physical processor; if that is not possible, it attempts to dispatch it on the same node, or on the same CPC drawer where it was dispatched before, to maximize cache reuse.
From the point of view of an application, SMT is transparent and no changes are required in the application for it to run in an SMT environment, as shown in Figure 7-1.
Figure 7-1 Simultaneous multithreading
z/OS
The following APARs must be applied to z/OS V2R1 to use SMT:
OA43366 (BCP)
OA43622 (WLM)
OA44439 (XCF)
The use of SMT on z/OS V2R1 requires enabling HiperDispatch, and defining the processor view (PROCVIEW) control statement in the LOADxx parmlib member and the MT_ZIIP_MODE parameter in the IEAOPTxx parmlib member.
The PROCVIEW statement is defined for the life of IPL, and can have the following values:
CORE: This value specifies that z/OS should configure a processor view of core, in which a core can include one or more threads. On z15, the number of threads is limited to two. If the underlying hardware does not support SMT, a core is limited to one thread.
CPU: This value is the default. It specifies that z/OS should configure a traditional processor view of CPU and not use SMT.
CORE,CPU_OK: This value specifies that z/OS should configure a processor view of core (as with the CORE value) but the CPU parameter is accepted as an alias for applicable commands.
When PROCVIEW CORE or CORE,CPU_OK is specified for z/OS running on z15, HiperDispatch is forced to run as enabled, and you cannot disable it. The PROCVIEW statement cannot be changed dynamically; therefore, you must run an IPL after changing it to make the new setting effective.
The MT_ZIIP_MODE parameter in IEAOPTxx controls the zIIP SMT mode. It can be 1 (the default), where only one thread runs on a core, or 2, where up to two threads can run on a core. If PROCVIEW CPU is specified, MT_ZIIP_MODE is always 1. Otherwise, the use of SMT to dispatch two threads on a single zIIP logical processor (MT_ZIIP_MODE=2) can be changed dynamically by using SET OPT=xx in the IEAOPTxx parmlib member. Changing the MT mode for all cores can take some time to complete.
With PROCVIEW CORE, use the DISPLAY M=CORE and CONFIG CORE commands to display the core states and to configure an entire core.
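A minimal sketch of the two settings follows; the member suffixes and the mode value are illustrative:

    In LOADxx (takes effect at IPL and cannot be changed dynamically):
        PROCVIEW CORE,CPU_OK

    In IEAOPTxx (can be switched dynamically with SET OPT=xx):
        MT_ZIIP_MODE=2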
With the introduction of multithreading support for SAPs, a maximum of 88 logical SAPs can be used. RMF is updated to support this change by implementing page break support in the I/O Queuing Activity report that is generated by the RMF Postprocessor.
z/VM V7R1 and V6R4
The use of SMT in z/VM is enabled by using the MULTITHREADING statement in the system configuration file. Multithreading is enabled only if z/VM is configured to run with the HiperDispatch vertical polarization mode enabled and with the dispatcher work distribution mode set to reshuffle.
Multithreading is disabled by default in z/VM. With the addition of dynamic SMT capability to z/VM V6R4 through an SPE, the number of active threads per core can be changed without a system outage, so the potential capacity gain of going from SMT-1 to SMT-2 (one to two threads per core) can be achieved dynamically. Dynamic SMT requires the applicable PTFs and a system that is running in SMT-enabled mode.
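As an illustrative sketch only (operands vary by z/VM level; see z/VM CP Planning and Administration for the exact syntax), the system configuration file might activate two threads per IFL core, and the resulting state can be displayed from CP:
System configuration file (illustrative):
   MULTITHREADING ENABLE TYPE IFL 2
CP command to display the multithreading state:
   QUERY MULTITHREAD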
z/VM supports up to 32 multithreaded cores (64 threads) for IFLs, and each thread is treated as an independent processor. z/VM dispatches virtual IFLs on IFL logical processors so that the same or different guests can share a core. Each core has a single dispatch vector, and z/VM attempts to place virtual sibling IFLs on the same dispatch vector to maximize cache reuse.
Guests have no awareness of SMT and cannot use it; z/VM SMT exploitation does not include guest support for multithreading. The value of this support for guests is that the first-level z/VM host beneath the guests can achieve higher throughput from the multithreaded IFL cores.
Linux on Z and the KVM hypervisor
The upstream kernel 4.0 features SMT functionality that was developed by the Linux on Z development team. SMT is supported on LPAR only (not as a second-level guest). For more information, see the Kernel 4.0 page of the developerWorks website.
The following minimum releases of Linux on Z distributions are supported on z15 (native SMT support):
SUSE:
 – SLES 15 SP1 with service
 – SLES 12 SP4 with service
 – SLES 11 SP4 with service
Red Hat:
 – RHEL 8.0 with service
 – RHEL 7.7 with service
 – RHEL 6.10 with service
Ubuntu:
 – Ubuntu 18.04.1 LTS with service
 – Ubuntu 16.04.5 LTS with service
The KVM hypervisor is supported on the same Linux on Z distributions in this list.
For most current support, see the Linux on IBM Z Tested platforms website.
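On Linux on Z, the active SMT configuration can be verified from a shell. The following commands are illustrative; output formats vary by distribution:
   # Show the number of threads per core that the kernel activated
   lscpu | grep -i "thread(s) per core"
The number of threads per core can also be capped at boot time with the s390-specific smt= kernel parameter (for example, smt=1).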
Single-instruction multiple-data
The SIMD feature introduces a new set of instructions to enable parallel computing that can accelerate code with string, character, integer, and floating point data types. The SIMD instructions allow a larger number of operands to be processed with a single complex instruction.
z15 is equipped with a new set of instructions to improve the performance of complex mathematical models and analytic workloads through vector processing; these complex instructions can process numerous data elements with a single instruction. This set of instructions, which is known as SIMD, enables more consolidation of analytic workloads and business transactions on Z servers.
SIMD on z15 has support for enhanced math libraries that provide performance improvements for analytical workloads by processing more information with a single CPU instruction.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262. Operating System support includes the following features5:
Enablement of vector registers.
Use of vector registers with XL C/C++ ARCH(11) and TUNE(11).
A math library with an optimized and tuned math function (Mathematical Acceleration Subsystem [MASS]) that can be used in place of some of the C standard math functions. It includes a SIMD vectorized and non-vectorized version.
A specialized math library, which is known as Automatically Tuned Linear Algebra Software (ATLAS), that is optimized for the hardware.
IBM Language Environment® for C runtime function enablement for ATLAS.
DBX to support the disassembly of the new vector instructions, and to display and set vector registers.
XML SS exploitation to use new vector processing instructions to improve performance.
MASS and ATLAS can reduce the time and effort for middleware and application developers. IBM provides compiler built-in functions for SIMD that software applications can use as needed (for example, for string instructions).
The use of the new hardware instructions requires the z/OS V2R4 XL C/C++ compiler with the ARCH(13) and TUNE(13) options for targeting z15 instructions. The ARCH(13) compiler option allows the compiler to use any new z15 instructions where appropriate. The TUNE(13) compiler option allows the compiler to tune for the z15 microarchitecture.
Vector programming support is extended for z15 to provide access to the new instructions that were introduced by the Vector Enhancements Facility 2 (VEF 2) specification.
Older levels of z/OS XL C/C++ compilers do not provide z15 exploitation; however, the z/OS V2R4 XL C/C++ compiler can be used to generate code for the older levels of z/OS running on z15.
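As an illustration only (this example is not from the compiler documentation), the following small C program uses the GNU-style vector extension that GCC and Clang on Linux on Z accept. When it is built for a SIMD-capable target (for example, -march=z13 or later), the compiler can map the vector type and the addition onto z/Architecture vector registers and instructions; z/OS XL C/C++ offers equivalent vector types and built-in functions under its VECTOR and ARCH options.

   #include <stdio.h>

   /* Four 32-bit integers held in one 128-bit vector register */
   typedef int v4si __attribute__((vector_size(16)));

   int main(void) {
       v4si a = {1, 2, 3, 4};
       v4si b = {10, 20, 30, 40};
       v4si c = a + b;   /* one vector add instead of four scalar adds */
       for (int i = 0; i < 4; i++)
           printf("%d\n", c[i]);
       return 0;
   }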
The following compilers include built-in functions for SIMD:
IBM Java
XL C/C++
Enterprise COBOL
Enterprise PL/I
Code must be developed to take advantage of the SIMD functions. Applications with SIMD instructions abend if they run on a lower hardware level system. Some mathematical function replacement can be done without code changes by including the scalar MASS library before the standard math library.
The MASS library and the standard math library differ in accuracy, so assess the accuracy of the functions in the context of the user application before deciding whether to use the MASS and ATLAS libraries.
The SIMD functions can be disabled in z/OS partitions at IPL time by using the MACHMIG parameter in the LOADxx member. To disable the use of SIMD, specify MACHMIG VEF, which indicates that z/OS is not to use the hardware vector facility. If you do not specify a MACHMIG statement (the default), the system is unlimited in its use of the Vector Facility for z/Architecture (SIMD).
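For example, a LOADxx member that prevents z/OS from using the vector facility contains the following statement:
   MACHMIG VEF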
Hardware decimal floating point
Industry support for decimal floating point is growing, with IBM leading the open standard definition. Examples of support for the draft standard IEEE 754r include Java BigDecimal, C#, XML, C/C++, GCC, COBOL, and other key software vendors, such as Microsoft and SAP.
Decimal floating point support was introduced with z9 EC. z15 servers inherited the decimal floating point accelerator feature that was introduced with z10 EC.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262. For more information, see 7.5.4, “z/OS XL C/C++ considerations” on page 323.
Out-of-order execution
Out-of-order (OOO) execution yields significant performance benefits for compute-intensive applications by reordering instruction execution, which allows later (newer) instructions to be run ahead of a stalled instruction, and reordering storage accesses and parallel storage accesses. OOO maintains good performance growth for traditional applications.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262. For more information, see 3.4.3, “Out-of-Order execution” on page 102.
CPU Measurement Facility
Also known as Hardware Instrumentation Services (HIS), the CPU Measurement Facility (CPU MF) was introduced with z10 EC to provide insight into the interaction between a workload and the hardware on which it runs. CPU MF data can be collected by z/OS System Management Facilities (SMF) in SMF 113 records. The supported operating systems are listed in Table 7-3 on page 260.
For more information about this function, see The Load-Program-Parameter and the CPU-Measurement Facilities.
For more information about the CPU Measurement Facility, see the CPU MF - Update and WSC Experiences page of the IBM Techdocs Library website.
Large page support
In addition to the existing 1-MB large pages, 4-KB pages, and page frames, z15 servers support pageable 1-MB large pages, large pages that are 2 GB, and large page frames. The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
Virtual Flash Memory
IBM Virtual Flash Memory (VFM) is the replacement for the PCIe-based Flash Express features, which were available on the IBM zEC12 and IBM z13. No application changes are required to move from IBM Flash Express to VFM because VFM implements the EADM architecture by using HSA-like memory instead of Flash card pairs.
IBM Virtual Flash Memory (FC 0643) offers up to 6.0 TB of memory in 0.5 TB increments for improved application availability and to handle paging workload spikes.
IBM Virtual Flash Memory is designed to help improve availability and handling of paging workload spikes when running z/OS V2.1, V2.2, V2.3, or V2.4. With this support, z/OS is designed to help improve system availability and responsiveness by using VFM across transitional workload events, such as market openings, and diagnostic data collection. z/OS is also designed to help improve processor performance by supporting middleware exploitation of pageable large (1 MB) pages.
Therefore, VFM can help organizations meet their most demanding service level agreements and compete more effectively. VFM is easy to configure and provides rapid time to value.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
Guarded Storage Facility
Also known as less-pausing garbage collection, Guarded Storage Facility (GSF) is a new architecture that was introduced with z14 to enable enterprise-scale Java applications to run without periodic pauses for garbage collection on larger heaps.
z/OS
GSF support allows an area of storage to be identified such that an Exit routine assumes control if a reference is made to that storage. GSF is managed by new instructions that define Guarded Storage Controls and system code to maintain that control information across undispatch and redispatch.
Enabling a less-pausing approach improves Java garbage collection. The function is provided on z14 and subsequent servers that are running z/OS 2.2 and later with APAR OA51643 installed. The MACHMIG statement in LOADxx of SYS1.PARMLIB provides the ability to disable the function.
z/VM
With the PTF for APAR VM65987, z/VM V6.4 provides support for guest exploitation of the guarded storage facility. This facility is designed to improve the performance of garbage-collection processing by various languages, in particular Java.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
Instruction Execution Protection
Instruction Execution Protection (IEP) is a hardware function that was introduced with z14. It enables software, such as Language Environment, to mark certain memory regions (for example, a heap or stack) as non-executable, which improves the security of programs that run on IBM Z servers against stack-overflow and similar attacks.
Through enhanced hardware features (based on DAT table entry bit) and explicit software requests to obtain memory areas as non-executable, areas of memory can be protected from unauthorized execution. A Protection Exception occurs if an attempt is made to fetch an instruction from an address in such an element or if an address in such an element is the target of an execute-type instruction.
z/OS
To use IEP, the Real Storage Manager (RSM) is enhanced to request non-executable memory allocation. Use the new EXECUTABLE=YES|NO keyword on STORAGE OBTAIN or IARV64 to indicate whether the memory to be obtained contains executable code. The Recovery Termination Manager (RTM) writes a LOGREC record for any program check that results from IEP.
IEP support is for z/OS 2.2 and later running on z15 with APARs OA51030 and OA51643 installed.
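A hedged assembler sketch of the request follows; the length and the omitted operands are illustrative only:
*  Obtain 4 KB of storage that is marked non-executable;
*  the address is returned in register 1
         STORAGE OBTAIN,LENGTH=4096,EXECUTABLE=NO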
z/VM
Guest exploitation support for the Instruction Execution Protection Facility is provided with APAR VM65986.
The supported operating systems are listed in Table 7-3 on page 260 and Table 7-4 on page 262.
IBM Integrated Accelerator for zEnterprise Data Compression
The IBM Integrated Accelerator for zEnterprise Data Compression (zEDC) is implemented as an on-chip data compression accelerator, that is, the Nest Compression Accelerator (NXU), and is designed to support the DEFLATE, gzip, and zlib algorithms. For more information, see Figure 3-10 on page 106.
Each PU chip has one on-chip compression unit, which is designed to replace the zEnterprise Data Compression (zEDC) Express PCIe feature.
The zEDC Express feature available on older systems is NOT carried forward to z15.
The IBM Integrated Accelerator for zEDC maintains software compatibility with existing zEDC Express use cases. For more information, see Integrated Accelerator for zEnterprise Data Compression.
The z/OS zEDC capability is a software-priced feature that is designed to support compression-capable hardware. With z15, the zEDC Express (hardware) PCIe feature is replaced by the on-chip compression accelerator unit, but the software (z/OS) component is required to maintain the same functionality without increasing CPU costs.
All data interchange with existing (zEDC) compressed data remains compatible as z15 and zEDC-capable machines coexist (accessing the same data). Data that is compressed and written with zEDC can be read and decompressed by z15 well into the future.
The on-chip compression unit has the following operating modes:
Synchronous execution in problem state, where a user application starts the instruction in its virtual address space, which provides low-latency and high-bandwidth compression and decompression operations. This mode does not require any special hypervisor support, which removes the virtualization layer (sharing the zEDC Express PCIe adapter among LPARs required virtualization support).
Asynchronous optimization for large operations under z/OS. An authorized application (for example, BSAM/QSAM) issues an I/O request for asynchronous execution, and a SAP (PU) starts the instruction (synchronously, as described in the previous paragraph) on behalf of the application. The on-chip accelerator enables load balancing of high compression loads with low latency and high bandwidth compared to zEDC Express, while maintaining the current user experience for compression.
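Because the accelerator maintains compatibility with the standard zlib interfaces, DEFLATE users need no source changes. The following minimal C sequence is illustrative only; whether it is routed to the on-chip accelerator depends on the operating system and library level:

   #include <stdio.h>
   #include <string.h>
   #include <zlib.h>

   int main(void) {
       const char *in = "hello, hello, hello, hello";
       unsigned char out[128];
       z_stream s;
       memset(&s, 0, sizeof(s));
       if (deflateInit(&s, Z_DEFAULT_COMPRESSION) != Z_OK)
           return 1;
       s.next_in   = (Bytef *)in;
       s.avail_in  = (uInt)strlen(in);
       s.next_out  = out;
       s.avail_out = (uInt)sizeof(out);
       deflate(&s, Z_FINISH);          /* compress everything in one call */
       printf("compressed %lu bytes to %lu bytes\n",
              (unsigned long)strlen(in), (unsigned long)s.total_out);
       deflateEnd(&s);
       return 0;
   }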
Functionality support for the IBM Integrated Accelerator for zEDC is listed in Table 7-3 on page 260 and Table 7-4 on page 262.
7.4.3 Coupling and clustering features and functions
In this section, we describe the coupling and cluster features.
Coupling facility and CFCC considerations
Coupling facility (CF) connectivity to a z15 is supported on the z14, z13, z13s, or another z15. The CFCC levels that are supported on Z servers are listed in Table 7-15.
Table 7-15 IBM Z CFCC code levels
IBM Z server: Code level
z15: CFCC Level 24
z14 M0x and z14 ZR1: CFCC Level 22 or CFCC Level 23
z13: CFCC Level 20 or CFCC Level 21
z13s: CFCC Level 21
 
Consideration: Because coupling link connectivity on z15 does not support InfiniBand, introducing z15 into an installation requires extra planning. Consider the level of CFCC. For more information, see “Migration considerations” on page 200.
CFCC Level 24
CFCC Level 24 is delivered on z15 servers with driver level 41. CFCC Level 24 introduces the following enhancements:
CFCC Fair Latch Manager
This enhancement to the internals of the Coupling Facility Control Code (CFCC) dispatcher provides CF work management efficiency and processor scalability improvements, and improves the fairness of arbitration for internal CF resource latches across tasks.
CFCC Message Path Resiliency enhancement
CF message paths use a z/OS-provided system identifier (SYID) to uniquely identify which z/OS system image, and which instance of that system image, is sending requests over a message path to the CF. With z15, a new resiliency mechanism transparently recovers from a “missing” message path deactivation (if and when such a missed deactivation ever occurs).
During path initialization, the CF provides more information to z/OS about every message path that appears active, including the SYID for the path. Whenever z/OS interrogates the state of the message paths to the CF, z/OS checks this SYID information for currency and correctness, and if it is incorrect, gathers diagnostic information and reactivates the path to correct the problem.
CFCC Change Shared-Engine CF Default to DYNDISP=THIN
Coupling Facility images can run with shared or dedicated processors. Shared processor CFs can operate with different Dynamic Dispatching (DYNDISP) models:
 – DYNDISP=OFF: LPAR timeslicing completely controls the CF processor.
 – DYNDISP=ON: An optimization over pure LPAR timeslicing, in which the CFCC code manages timer interrupts to share processors more efficiently.
 – DYNDISP=THIN: An interrupt-driven model in which the CF processor is dispatched in response to a set of events that generate Thin Interrupts.
Thin Interrupt support has been available since zEC12/zBC12, and has proven to be efficient and well-performing in numerous test and customer shared-engine coupling facility configurations.
Therefore, z15 makes DYNDISP=THIN the default mode of operation for coupling facility images that use shared processors.
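The mode can still be selected explicitly. As an illustrative sketch (see the CFCC command documentation for the full syntax), the following command, entered at the CF console (the Operating System Messages task), selects thin-interrupt dispatching:
   DYNDISP THIN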
CFCC Level 23
CFCC Level 23 is delivered on z14 servers with driver level 36. In addition to CFCC Level 22 enhancements, it introduces the following enhancements:
Asynchronous cross invalidation (XI) for CF cache structures
This enhancement requires z/OS fixes for APARs OA54688 (exploitation) and OA54985 (toleration). It also requires explicit data manager support (Db2 V12 with PTFs).
Coupling Facility hang detection
These enhancements provide a significant reduction in failure scope and client disruption (CF-level to structure-level), with no loss of FFDC collection capability. With this support, the CFCC dispatcher significantly reduces the CF hang detection interval to only 2 seconds, which allows more timely detection and recovery from such events.
When a hang is detected, in most cases the CF confines the scope of the failure to “structure damage” for the single CF structure that the hung command was processing against, captures diagnostics with a nondisruptive CF dump, and continues operating without stopping or rebooting the CF image.
Coupling Facility granular latching
This enhancement eliminates the performance degradation that is caused by structure-wide latching. With this support, most CF list and lock structure ECR processing no longer uses structure-wide latching. It serializes its execution by using the normal structure object latches that all mainline commands use. However, a few “edge conditions” in ECR processing still require structure-wide latching.
Before you begin the migration process, install the compatibility and coexistence PTFs. A planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 23.
CFCC Level 22
CFCC Level 22 is delivered on z14 servers with driver level 32. CFCC Level 22 introduces the following enhancements:
Coupling Express Long Range (CE LR): A new link type that was introduced with z14 for long-distance coupling connectivity.
Coupling Facility (CF) Processor Scalability: CF work management and dispatching changes for IBM z14™ allow improved efficiency and scalability for coupling facility images.
First, ordered work queues were eliminated from the CF in favor of first-in/first-out queues, which avoids the overhead of maintaining ordered queues.
Second, protocols for system-managed duplexing were simplified to avoid the potential for latching deadlocks between duplexed structures.
Third, the CF image can now use its processors to perform specific work management functions when the number of processors in the CF image exceeds a threshold. Together, these changes improve the processor scalability and throughput for a CF image.
CF List Notification Enhancements: Significant enhancements were made to the CF notifications that inform users about the status of shared objects within a Coupling Facility.
First, structure notifications can use a round-robin scheme for delivering immediate and deferred notifications that avoids excessive “shotgun” notifications, which reduces notification overhead.
Second, an option is now available for delivering “aggressive” notifications, which can drive a notification when new elements are added to a queue. This feature provides initiative to get new work processed in a timely manner.
Third, notifications can now be driven when a queue transitions between full and not-full, which allows users to redrive messages that could not be written previously to a “full” queue. The combination of these notification enhancements provides flexibility to accommodate notification preferences among various CF users and yields more consistent, timely notifications.
Coupling Link Constraint Relief: IBM z14™ provides more physical and logical coupling link connectivity compared to z13. Consider the following points:
 – The maximum number of physical ICA SR coupling links (ports) is increased from 40 per CPC to 80 per CPC. These higher limits on z14 support concurrent use of InfiniBand coupling, ICA SR, and CE LR links, for coupling link technology migration purposes.
 – Maximum number of coupling CHPIDs (of all types) is 256 per CPC (same as z13).
CF Encryption: z/OS 2.3 provides support for end-to-end encryption for CF data in flight and data at rest in CF structures (as a part of the Pervasive Encryption solution). Host-based CPACF encryption is used for high performance and low latency. IBM z14™ CF images are not required, but are recommended to simplify some sysplex recovery and reconciliation scenarios involving encrypted CF structures. (The CF image never decrypts or encrypts any data). IBM z14™ z/OS images are not required, but are recommended for the improved AES CBC encrypt/decrypt performance that z14 provides.
The supported operating systems are listed in Table 7-5 on page 263.
For more information about CFCC code levels, see the Parallel Sysplex page of the IBM IT infrastructure website.
For more information about the latest CFCC code levels, see the current exception letter that is published on Resource Link website (login is required).
CF structure sizing changes are expected when upgrading to a later CFCC level. Review the CF LPAR size by using the CFSizer tool, which is available at the IBM Systems support website.
The Sizer Utility, which is an authorized z/OS program download, is useful when you are upgrading a CF. The tool is available for download at the IBM Systems support website.
Before you begin the migration process, install the compatibility and coexistence PTFs. A planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 22.
Coupling links support
Integrated Coupling Adapter (ICA) Short Reach and Coupling Express Long Reach (CE LR) coupling link options provide high-speed connectivity at short and longer distances over fiber optic interconnections. For more information, see 4.6.4, “Parallel Sysplex connectivity” on page 196.
Integrated Coupling Adapter
PCIe Gen3 coupling fanout, which is also known as Integrated Coupling Adapter Short Range (ICA SR, ICA SR1.1), supports a maximum distance of 150 meters (492 feet) and is defined as CHPID type CS5 in IOCP.
Coupling Express Long Reach
The CE LR link provides point-to-point coupling connectivity at distances of 10 km (6.21 miles) unrepeated and is defined as CHPID type CL5 in IOCP. The supported operating systems are listed in Table 7-5 on page 263.
Virtual Flash Memory use by CFCC
VFM can be used in coupling facility images to provide extended capacity and availability for workloads that use WebSphere MQ Shared Queues structures. The use of VFM can help availability by reducing latency from paging delays that can occur at the start of the workday or during other transitional periods. It is also designed to help eliminate delays that can occur when diagnostic data is collected during failures.
CFCC Coupling Thin Interrupts
The Coupling Thin Interrupts enhancement is delivered with CFCC 19. It improves the performance of a CF partition and the dispatching of z/OS LPARs that are awaiting the arrival of returned asynchronous CF requests when used in a shared engine environment.
For more information, see “Coupling Thin Interrupts (default CF LPAR setting with z15)” on page 116. The supported operating systems are listed in Table 7-5 on page 263.
Asynchronous CF Duplexing for lock structures
Asynchronous CF Duplexing enhancement is a general-purpose interface for any CF lock structure user. It enables secondary structure updates to be performed asynchronously with respect to primary updates. Initially delivered with CFCC 21 on z13 as an enhanced continuous availability solution, it offers performance advantages for duplexing lock structures and avoids the need for synchronous communication delays during the processing of every duplexed update operation.
Asynchronous CF Duplexing for lock structures requires the following software support:
z/OS V2R4
z/OS V2R3, or z/OS V2R2 SPE with PTFs for APARs OA47796 and OA49148
z/VM V7R1, or z/VM V6R4 with PTFs, for z/OS exploitation of the guest coupling environment
Db2 V12 with PTFs for APAR PI66689
IRLM V2.3 with PTFs for APAR PI68378
The supported operating systems are listed in Table 7-5 on page 263.
Asynchronous cross-invalidate for CF cache structures
Asynchronous cross-invalidate (XI) for CF cache structures enables improved efficiency in CF data sharing by adopting a more transactional behavior for cross-invalidate (XI) processing, which is used to maintain coherency and consistency of data managers’ local buffer pools across the sysplex.
Instead of performing XI signals synchronously on every cache update request that causes them, data managers can “opt in” for the CF to perform these XIs asynchronously (and then sync them up with the CF at or before transaction completion). Data integrity is maintained if all XI signals complete by the time transaction locks are released.
The feature enables faster completion of cache update CF requests, especially when cross-site distances are involved, and provides improved cache structure service times and coupling efficiency. It requires explicit data manager exploitation and participation; it is not transparent to the data manager. No SMF data changes were made for CF monitoring and reporting.
The following requirements must be met:
CFCC Level 23 support
z/OS 2.4, or the following PTFs on every exploiting system in the sysplex:
 – Fixes for APAR OA54688 - Exploitation support for z/OS 2.2 and 2.3
 – Fixes for APAR OA54985 - Toleration support for z/OS 2.1
Db2 V12 with PTFs for exploitation
z/VM Dynamic I/O support for InfiniBand7 and ICA CHPIDs
z/VM dynamic I/O configuration support allows you to add, delete, and modify the definitions of channel paths, control units, and I/O devices to the server and z/VM without shutting down the system.
This function refers exclusively to the z/VM dynamic I/O support of InfiniBand and ICA coupling links. Support is available for the CIB and CS5 CHPID type in the z/VM dynamic commands, including the change channel path dynamic I/O command.
Specifying and changing the system name when entering and leaving configuration mode are also supported. z/VM does not use InfiniBand or ICA, and does not support the use of InfiniBand or ICA coupling links by guests. The supported operating systems are listed in Table 7-5 on page 263.
7.4.4 Storage connectivity-related features and functions
In this section, we describe the storage connectivity-related features and functions.
zHyperlink Express
z14 introduced IBM zHyperLink Express, the first new IBM Z input/output (I/O) channel link technology since FICON. zHyperLink Express is designed to help bring data close to processing power, increase the scalability of Z transaction processing, and lower I/O latency.
zHyperLink Express is designed for up to 5x lower latency than High-Performance FICON for Z (zHPF) by directly connecting the Z central processor complex (CPC) to the I/O Bay of the DS8880. This short distance (up to 150 m [492.1 feet]), direct connection is intended to speed Db2 for z/OS transaction processing and improve active log throughput.
The improved performance of zHyperLink Express allows the processing unit (PU) to make a synchronous request for the data that is in the DS8880 cache. This feature eliminates the undispatch of the running task, the queuing delays to resume it, and the PU cache disruption.
Support for zHyperLink Writes can accelerate Db2 log writes to help deliver superior service levels by processing high-volume Db2 transactions at speed. IBM zHyperLink Express requires compatible levels of DS8000/F hardware, firmware R8.5.1 or later, and Db2 12 with PTFs.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
FICON Express16SA
FICON Express16SA supports a link data rate of 16 gigabits per second (Gbps) and autonegotiation to 8 Gbps for synergy with switches, directors, and storage devices. With support for native FICON, High-Performance FICON for Z (zHPF), and Fibre Channel Protocol (FCP), the IBM z15™ server enables you to position your SAN for even higher performance, which helps you to prepare for an end-to-end 16 Gbps infrastructure to meet the lower latency and increased bandwidth demands of your applications.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
FICON Express16S+
FICON Express16S+ supports a link data rate of 16 Gbps and autonegotiation to 4 or 8 Gbps for synergy with switches, directors, and storage devices. With support for native FICON, High-Performance FICON for Z (zHPF), and Fibre Channel Protocol (FCP), the IBM z14™ server enables you to position your SAN for even higher performance, which helps you to prepare for an end-to-end 16 Gbps infrastructure to meet the lower latency and increased bandwidth demands of your applications.
The new FICON Express16S+ channel works with your fiber optic cabling environment (single mode and multimode optical cables). The FICON Express16S+ feature running at end-to-end 16 Gbps link speeds provides reduced latency for large read/write operations and increased bandwidth compared to the FICON Express8S feature.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
FICON Express16S
FICON Express16S supports a link data rate of 16 Gbps and autonegotiation to 4 or 8 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, zHPF, and FCP, the z14 server enables SAN for even higher performance, which helps to prepare for an end-to-end 16 Gbps infrastructure to meet the increased bandwidth demands of your applications.
The new features for the multimode and single mode fiber optic cabling environments reduce latency for large read/write operations and increase bandwidth compared to the FICON Express8S features.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
FICON Express8S
The FICON Express8S provides a link rate of 8 Gbps, with auto-negotiation to 4 or 2 Gbps for compatibility with previous devices and investment protection. Both 10 km (6.2 miles) LX and SX connections are offered (within a feature, all connections must be of the same type).
FICON Express8S introduced a hardware data router for more efficient zHPF data transfers. It is the first channel with hardware that is designed to support zHPF, as compared to FICON Express8, FICON Express4, and FICON Express2, which include a firmware-only zHPF implementation.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
Extended distance FICON
An enhancement to the industry-standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for persistent IU pacing. Extended distance FICON is transparent to operating systems and applies to all FICON Express16S+, FICON Express16S, and FICON Express8S features that carry native FICON traffic (CHPID type FC).
To use this enhancement, the control unit must support the new IU pacing protocol. IBM System Storage™ DS8000 series supports extended distance FICON for IBM Z environments. The channel defaults to current pacing values when it operates with control units that cannot use extended distance FICON.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
High-performance FICON
High-performance FICON (zHPF) was first provided on System z10®, and is a FICON architecture for protocol simplification and efficiency. It reduces the number of information units (IUs) that are processed. Enhancements were made to the z/Architecture and the FICON interface architecture to provide optimizations for online transaction processing (OLTP) workloads.
zHPF is available on z15, z14, z13, z13s, zEC12, and zBC12 servers. The FICON Express16SA, FICON Express16S+, FICON Express16S, and FICON Express8S features (CHPID type FC) concurrently support the existing FICON protocol and the zHPF protocol in the server LIC.
When used by the FICON channel, the z/OS operating system, and the DS8000 control unit or other subsystems, the FICON channel processor usage can be reduced and performance improved. Appropriate levels of Licensed Internal Code (LIC) are required.
Also, the changes to the architectures provide end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).
zHPF is compatible with the following standards:
Fibre Channel Framing and Signaling standard (FC-FS)
Fibre Channel Switch Fabric and Switch Control Requirements (FC-SW)
Fibre Channel Single-Byte-4 (FC-SB-4) standards
For example, the zHPF channel programs can be used by the z/OS OLTP I/O workloads, Db2, VSAM, the partitioned data set extended (PDSE), and the z/OS file system (zFS).
When zHPF was announced, it supported the transfer of small blocks of fixed-size data (4 K) from a single track. This capability was extended, first to 64 KB, and then to multitrack operations. The 64 KB data transfer limit on multitrack operations was removed by z196. This improvement allows the channel to fully use the bandwidth of FICON channels, which results in higher throughput and lower response times.
The multitrack operations extension applies to the FICON Express16SA, FICON Express16S+, FICON Express16S, and FICON Express8S features, when configured as CHPID type FC and connecting to z/OS. zHPF requires matching support by the DS8000 series. Otherwise, the extended multitrack support is transparent to the control unit.
zHPF is enhanced to allow all large write operations (greater than 64 KB) at distances up to 100 km (62.13 miles) to be run in a single round trip to the control unit. This process does not elongate the I/O service time for these write operations at extended distances. This enhancement to zHPF removes a key inhibitor for clients adopting zHPF over extended distances, especially when the IBM HyperSwap capability of z/OS is used.
From the z/OS perspective, the FICON architecture is called command mode and the zHPF architecture is called transport mode. During link initialization, the channel node and the control unit node indicate whether they support zHPF.
 
Requirement: All FICON channel path identifiers (CHPIDs) that are defined to the same LCU must support zHPF. The inclusion of any non-compliant zHPF features in the path group causes the entire path group to support command mode only.
The mode that is used for an I/O operation depends on the control unit that supports zHPF and its settings in the z/OS operating system. For z/OS use, a parameter is available in the IECIOSxx member of SYS1.PARMLIB (ZHPF=YES or NO) and in the SETIOS system command to control whether zHPF is enabled or disabled. The default is ZHPF=NO.
Support is also added for the D IOS,ZHPF system command to indicate whether zHPF is enabled, disabled, or not supported on the server.
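In summary, the zHPF controls take the following form (the xx member suffix is illustrative):
IECIOSxx (SYS1.PARMLIB):
   ZHPF=YES
Dynamic control and status display:
   SETIOS ZHPF=YES
   D IOS,ZHPF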
Similar to the existing FICON channel architecture, the application or access method provides the channel program (CCWs). How zHPF (transport mode) manages channel program operations is different from the CCW operation for the existing FICON architecture (command mode). While in command mode, each CCW is sent to the control unit for execution. In transport mode, multiple channel commands are packaged together and sent over the link to the control unit in a single control block. Fewer processors are used compared to the existing FICON architecture. Certain complex CCW chains are not supported by zHPF.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
For more information about FICON channel performance, see the performance technical papers that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
Modified Indirect Data Address Word facility
The Modified Indirect Data Address Word (MIDAW) facility improves FICON performance. It provides a more efficient channel command word (CCW)/indirect data address word (IDAW) structure for certain categories of data-chaining I/O operations.
The MIDAW facility is a system architecture and software feature that is designed to improve FICON performance. This facility was first made available on System z9 servers, and is used by the Media Manager in z/OS.
The MIDAW facility provides a more efficient CCW/IDAW structure for certain categories of data-chaining I/O operations.
MIDAW can improve FICON performance for extended format data sets. Non-extended data sets can also benefit from MIDAW.
MIDAW can improve channel utilization and I/O response time. It also reduces FICON channel connect time, director ports, and control unit processor usage.
IBM laboratory tests indicate that applications that use EF data sets, such as Db2, or long chains of small blocks can gain significant performance benefits by using the MIDAW facility.
MIDAW is supported on FICON channels that are configured as CHPID type FC. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
MIDAW technical description
An IDAW is used to specify data addresses for I/O operations in a virtual environment.8 The IDAW design allows the first IDAW in a list to point to any address within a page. Subsequent IDAWs in the same list must point to the first byte in a page. Also, IDAWs (except the first and last IDAW) in a list must manage complete 2 K or 4 K units of data.
Figure 7-2 shows a single CCW that controls the transfer of data that spans non-contiguous 4 K frames in main storage. When the IDAW flag is set, the data address in the CCW points to a list of words (IDAWs). Each IDAW contains an address that designates a data area within real storage.
Figure 7-2 IDAW usage
The number of required IDAWs for a CCW is determined by the following factors:
IDAW format as specified in the operation request block (ORB)
Count field of the CCW
Data address in the initial IDAW
For example, three IDAWs are required when the following events occur:
The ORB specifies format-2 IDAWs with 4 KB blocks.
The CCW count field specifies 8 KB.
The first IDAW designates a location in the middle of a 4 KB block.
CCWs with data chaining can be used to process I/O data blocks that have a more complex internal structure, in which portions of the data block are directed into separate buffer areas. This process is sometimes known as scatter-read or scatter-write. However, as technology evolves and link speed increases, data chaining techniques become less efficient because of switch fabrics, control unit processing and exchanges, and other issues.
The MIDAW facility is a method of gathering and scattering data from and into discontinuous storage locations during an I/O operation. The MIDAW format is shown in Figure 7-3. It is 16 bytes long and aligned on a quadword.
Figure 7-3 MIDAW format
An example of MIDAW usage is shown in Figure 7-4.
Figure 7-4 MIDAW usage
The use of MIDAWs is indicated by the MIDAW bit in the CCW. If this bit is set, the skip flag cannot be set in the CCW. The skip flag in the MIDAW can be used instead. The data count in the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends when the CCW count goes to zero or the last MIDAW (with the last flag) ends.
The combination of the address and count in a MIDAW cannot cross a page boundary. Therefore, the largest possible count is 4 K. The maximum data count of all the MIDAWs in a list cannot exceed 64 K, which is the maximum count of the associated CCW.
The scatter-read or scatter-write effect of the MIDAWs makes it possible to efficiently send small control blocks that are embedded in a disk record to separate buffers from those that are used for larger data areas within the record. MIDAW operations are on a single I/O block, in the manner of data chaining. Do not confuse this operation with CCW command chaining.
Extended format data sets
z/OS extended format (EF) data sets use internal structures (often not visible to the application program) that require a scatter-read (or scatter-write) operation. Therefore, CCW data chaining is required, which produces less than optimal I/O performance. Because the most significant performance benefit of MIDAWs is achieved with EF data sets, a brief review of the EF data sets is included here.
VSAM and non-VSAM (DSORG=PS) data sets can be defined as EF data sets. For non-VSAM data sets, a 32-byte suffix is appended to the end of every physical record (that is, block) on disk. VSAM appends the suffix to the end of every control interval (CI), which normally corresponds to a physical record.
A 32 K CI is split into two records to span tracks. This suffix is used to improve data reliability, and facilitates other functions that are described next. Therefore, for example, if the DCB BLKSIZE or VSAM CI size is equal to 8192, the actual block on storage consists of 8224 bytes. The control unit does not distinguish between suffixes and user data. The suffix is transparent to the access method and database.
In addition to reliability, EF data sets enable the following functions:
DFSMS striping
Access method compression
Extended addressability (EA)
EA is useful for creating large Db2 partitions (larger than 4 GB). Striping can be used to increase sequential throughput, or to spread random I/Os across multiple logical volumes. DFSMS striping is useful for using multiple channels in parallel for one data set. The Db2 logs are often striped to optimize the performance of Db2 sequential inserts.
Processing an I/O operation to an EF data set normally requires at least two CCWs with data chaining. One CCW is used for the 32-byte suffix of the EF data set. With MIDAW, the additional CCW for the EF data set suffix is eliminated.
MIDAWs benefit EF and non-EF data sets. For example, to read 12 4 K records from a non-EF data set on a 3390 track, Media Manager chains together 12 CCWs by using data chaining. To read 12 4 K records from an EF data set, 24 CCWs are chained (two CCWs per 4 K record). By using Media Manager track-level command operations and MIDAWs, an entire track can be transferred by using a single CCW.
Performance benefits
z/OS Media Manager includes I/O channel program support for implementing EF data sets, and automatically uses MIDAWs when appropriate. Most disk I/Os in the system are generated by using Media Manager.
Users of the Executing Fixed Channel Programs in Real Storage (EXCPVR) instruction can construct channel programs that contain MIDAWs. However, doing so requires that they construct an IOBE with the IOBEMIDA bit set. Users of the EXCP instruction cannot construct channel programs that contain MIDAWs.
The MIDAW facility removes the 4 K boundary restrictions of IDAWs and for EF data sets, reduces the number of CCWs. Decreasing the number of CCWs helps to reduce the FICON channel processor utilization. Media Manager and MIDAWs do not cause the bits to move any faster across the FICON link. However, they reduce the number of frames and sequences that flow across the link, and therefore use the channel resources more efficiently.
The performance of a specific workload can vary based on the conditions and hardware configuration of the environment. IBM laboratory tests found that Db2 gains significant performance benefits by using the MIDAW facility in the following areas:
Table scans
Logging
Utilities
Use of DFSMS striping for Db2 data sets
Media Manager with the MIDAW facility can provide significant performance benefits when used in combination with applications that use EF data sets (such as Db2) or long chains of small blocks.
For more information about FICON and MIDAW, see the following resources:
The I/O Connectivity page of the IBM IT infrastructure website includes information about FICON channel performance
DS8000 Performance Monitoring and Tuning, SG24-7146
ICKDSF
Device Support Facilities, ICKDSF, Release 17 is required on all systems that share disk subsystems with a z15 processor.
ICKDSF supports a modified format of the CPU information field that contains a two-digit LPAR identifier. ICKDSF uses the CPU information field instead of CCW reserve/release for concurrent media maintenance. It prevents multiple systems from running ICKDSF on the same volume, and at the same time allows user applications to run while ICKDSF is processing. To prevent data corruption, ICKDSF must determine all sharing systems that might run ICKDSF. Therefore, this support is required for z15.
 
Remember: The need for ICKDSF Release 17 also applies to systems that are not part of the same sysplex, or are running an operating system other than z/OS, such as z/VM.
z/OS Discovery and Auto-Configuration
z/OS Discovery and Auto Configuration (zDAC) is designed to automatically run several I/O configuration definition tasks for new and changed disk and tape controllers that are connected to a switch or director, when attached to a FICON channel.
The zDAC function is integrated into the hardware configuration definition (HCD). Clients can define a policy that can include preferences for availability and bandwidth that include parallel access volume (PAV) definitions, control unit numbers, and device number ranges. When new controllers are added to an I/O configuration or changes are made to existing controllers, the system discovers them and proposes configuration changes that are based on that policy.
zDAC provides real-time discovery for the FICON fabric, subsystem, and I/O device resource changes from z/OS. By exploring the discovered control units for defined logical control units (LCUs) and devices, zDAC compares the discovered controller information with the current system configuration. It then determines delta changes to the configuration for a proposed configuration.
All added or changed logical control units and devices are added into the proposed configuration. They are assigned proposed control unit and device numbers, and channel paths that are based on the defined policy. zDAC uses channel path selection algorithms to minimize single points of failure. The zDAC proposed configurations are created as work I/O definition files (IODFs) that can be converted to production IODFs and activated.
zDAC is designed to run discovery for all systems in a sysplex that support the function. Therefore, zDAC helps to simplify I/O configuration on z15 systems that run z/OS, and reduces complexity and setup time.
zDAC applies to all FICON features that are supported on z15 when configured as CHPID type FC. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
Platform and name server registration in FICON channel
The FICON Express16SA, FICON Express16S+, FICON Express16S, and FICON Express8S features support platform and name server registration to the fabric for CHPID types FC and FCP.
Information about the channels that are connected to a fabric (if registered) allows other nodes or storage area network (SAN) managers to query the name server to determine what is connected to the fabric.
The following attributes are registered for z15 servers:
Platform information
Channel information
Worldwide port name (WWPN)
Port type (N_Port_ID)
FC-4 types that are supported
Classes of service that are supported by the channel
The platform and name server registration service is defined in the Fibre Channel Generic Services 4 (FC-GS-4) standard.
The 63.75-K subchannels
Servers before z9 EC reserved 1024 subchannels for internal system use, out of a maximum of 64 K subchannels. Starting with z9 EC, the number of reserved subchannels was reduced to 256, which increased the number of subchannels that are available. Reserved subchannels exist in subchannel set 0 only. One subchannel is reserved in each of subchannel sets 1, 2, and 3.
The informal name, 63.75-K subchannels, represents 65280 subchannels, as shown in the following equation:
63 x 1024 + 0.75 x 1024 = 65280
This equation applies to subchannel set 0. For subchannel sets 1, 2, and 3, the number of available subchannels is derived by using the following equation:
(64 x 1024) - 1 = 65535
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
Multiple subchannel sets
First introduced in z9 EC, multiple subchannel sets (MSS) provide a mechanism for addressing more than 63.75-K I/O devices and aliases for FICON (CHPID types FC) on the z15, z14, z13, z13s, zEC12, and zBC12. z196 introduced the third subchannel set (SS2). With z13, one more subchannel set (SS3) was introduced, which expands the alias addressing by 64-K more I/O devices.
z/VM V6R3 MSS support for mirrored direct access storage device (DASD) provides a subset of host support for the MSS facility to allow the use of an alternative subchannel set for Peer-to-Peer Remote Copy (PPRC) secondary volumes.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266. For more information about channel subsystem, see Chapter 5, “Central processor complex channel subsystem” on page 205.
Fourth subchannel set
With z13, a fourth subchannel set (SS3) was introduced. Together with the second subchannel set (SS1) and third subchannel set (SS2), SS3 can be used for disk alias devices of primary and secondary devices, and as Metro Mirror secondary devices. This set helps facilitate storage growth and complements other functions, such as extended address volume (EAV) and Hyper Parallel Access Volumes (HyperPAV).
See Table 7-6 on page 264 and Table 7-7 on page 266 for the list of supported operating systems.
IPL from an alternative subchannel set
z15 supports IPL from subchannel set 1 (SS1), subchannel set 2 (SS2), or subchannel set 3 (SS3), in addition to subchannel set 0.
See Table 7-6 on page 264 and Table 7-7 on page 266 for the list of supported operating systems. For more information, see “IPL from an alternative subchannel set” on page 302.
32 K subchannels
To help facilitate growth and continue to enable server consolidation, the z15 supports up to 32 K subchannels per FICON Express16SA, FICON Express16S+, and FICON Express16S channel (CHPID). More devices can be defined per FICON channel, which includes primary, secondary, and alias devices. The maximum number of subchannels across all device types that are addressable within an LPAR remains at 63.75 K for subchannel set 0 and 64 K ((64 x 1024) - 1) for subchannel sets 1, 2, and 3.
This support is available on the z15, z14, z13, and z13s servers and applies to the FICON Express16SA, FICON Express16S+, and FICON Express16S features (defined as CHPID type FC). FICON Express8S remains at 24 K subchannel support when defined as CHPID type FC.
The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
Request node identification data
The request node identification data (RNID) function for native FICON CHPID type FC allows isolation of cabling-detected errors. The supported operating systems are listed in Table 7-6 on page 264.
FICON link incident reporting
FICON link incident reporting allows an operating system image (without operator intervention) to register link incident reports. The supported operating systems are listed in Table 7-6 on page 264.
Health Check for FICON Dynamic routing
Starting with z13, the channel microcode was changed to support FICON dynamic routing. Although no change is required in z/OS to support dynamic routing, I/O errors can occur if the FICON switches are configured for dynamic routing despite missing support in the processor or storage controllers. Therefore, a health check is provided that interrogates the switch to determine whether dynamic routing is enabled in the switch fabric.
No action is required on z/OS to enable the health check; it is automatically enabled at IPL and reacts to changes that might cause problems. The health check can be disabled by using the PARMLIB or SDSF modify commands.
The supported operating systems are listed in Table 7-6 on page 264. For more information about FICON Dynamic Routing (FIDR), see Chapter 4, “Central processor complex I/O structure” on page 155.
Global resource serialization FICON CTC toleration
For some configurations that depend on ESCON CTC definitions, global resource serialization (GRS) FICON CTC toleration that is provided with APAR OA38230 is essential, especially after ESCON channel support was removed from IBM Z starting with zEC12.
The supported operating systems are listed in Table 7-6 on page 264.
Increased performance for the FCP protocol
The FCP LIC is modified to help increase I/O operations per second for small and large block sizes, and to support 16-Gbps link speeds.
For more information about FCP channel performance, see the performance technical papers that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
The FCP protocol is supported by z/VM, z/VSE, and Linux on Z. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
T10-DIF support
American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. IBM Z provides added end-to-end data protection between the operating system and the DS8870 unit. This support adds protection information that consists of Cyclic Redundancy Checking (CRC), Logical Block Address (LBA), and host application tags to each sector of FB data on a logical volume.
IBM Z support applies to FCP channels only. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel. First introduced with z9 EC, this feature can be used with supported FICON features on z14 servers. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
Worldwide port name tool
Part of the z15 system installation is the pre-planning of the SAN environment. IBM includes a stand-alone tool to assist with this planning before the installation.
The capabilities of the WWPN tool are extended to calculate and show WWPNs for virtual and physical ports ahead of system installation.
The tool assigns WWPNs to each virtual FCP channel or port by using the same WWPN assignment algorithms that a system uses when assigning WWPNs for channels that use NPIV. Therefore, the SAN can be set up in advance, which allows operations to proceed much faster after the server is installed. In addition, the SAN configuration can be retained instead of altered by assigning the WWPN to physical FCP ports when a FICON feature is replaced.
The WWPN tool takes a .csv file that contains the FCP-specific I/O device definitions and creates the WWPN assignments that are required to set up the SAN. A binary configuration file that can be imported later by the system is also created. The .csv file can be created manually or exported from the HCD/HCM. The supported operating systems are listed in Table 7-6 on page 264 and Table 7-7 on page 266.
The WWPN tool is applicable to all FICON channels that are defined as CHPID type FCP (for communication with SCSI devices) on z15. It is available for download at the Resource Link website (log in is required).
 
Note: An optional feature can be ordered for WWPN persistency before shipment to keep the same I/O serial number on the new CPC. Current information must be provided during the ordering process.
7.4.5 Networking features and functions
In this section, we describe the networking features and functions.
25GbE RoCE Express2.1 and 25GbE RoCE Express2
Based on the RoCE Express2 generation hardware, the 25GbE RoCE Express2 (FC 0430 and 0450) provides two 25GbE physical ports and requires 25GbE optics and Ethernet switch 25GbE support. The switch port must support 25GbE (negotiation down to 10GbE is not supported).
The 25GbE RoCE Express2 has one PCHID and the same virtualization characteristics as the 10GbE RoCE Express2 (FC 0412 and FC 0432); that is, 126 Virtual Functions per PCHID.
z/OS requires fixes for APAR OA55686. RMF 2.2 and later is also enhanced to recognize the CX4 card type and properly display CX4 cards in the PCIe Activity reports.
The 25GbE RoCE Express2 feature also is used by Linux on Z for applications that are coded to the native RoCE verb interface or use Ethernet (such as TCP/IP). This native use does not require a peer OSA.
Support for select Linux on Z distributions is now provided for Shared Memory Communications over Remote Direct Memory Access (SMC-R) by using RoCE Express features. For more information, see this Linux on Z Blogspot web page.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
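As an illustration only, SMC-R is enabled in the z/OS TCP/IP profile by associating the RoCE PCIe Function IDs (PFIDs) with the stack on the GLOBALCONFIG statement. The following sketch uses hypothetical PFID values, which must match the FUNCTION definitions in your IOCDS:
   ; PROFILE.TCPIP sketch: enable SMC-R over two RoCE Express ports
   ; PFID values 0018 and 0019 are examples only
   GLOBALCONFIG SMCR PFID 0018 PORTNUM 1
                     PFID 0019 PORTNUM 2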
10GbE RoCE Express2.1 and 10GbE RoCE Express2
IBM 10GbE RoCE Express2 provides a natively attached PCIe I/O Drawer-based Ethernet feature that supports 10 Gbps Converged Enhanced Ethernet (CEE) and RDMA over CEE (RoCE). The RoCE feature, with an OSA feature, enables shared memory communications between two CPCs by using a shared switch.
RoCE Express2 provides increased virtualization (sharing capability) by supporting 63 Virtual Functions (VFs) per physical port for a total of 126 VFs per PCHID. This configuration allows RoCE to be extended to more workloads.
z/OS Communications Server (CS) provides a new software device driver, ConnectX4 (CX4), for RoCE Express2. The device driver is transparent to the upper layers of CS (the SMC-R and TCP/IP stack) and to application software (using TCP sockets). RoCE Express2 introduces a minor change in how the physical port is configured.
RMF 2.2 and later is also enhanced to recognize the new CX4 card type and properly display CX4 cards in the PCIE Activity reports.
Support in select Linux on Z distributions is now provided for Shared Memory Communications over Remote Direct Memory Access (SMC-R) using the supported RoCE Express features. For more information, see this Linux on Z Blogspot web page.
The 10GbE RoCE Express2 feature also is used by Linux on Z for applications that are coded to the native RoCE verb interface or use Ethernet (such as TCP/IP). This native use does not require a peer OSA.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
10GbE RoCE Express
z15 servers support carrying forward the 10GbE RoCE Express feature. This feature supports the use of the second port on the adapter and the sharing of the ports among up to 31 partitions (per adapter) by using both ports.
The 10GbE RoCE Express feature reduces consumption of CPU resources for applications that use the TCP/IP stack (such as WebSphere accessing a Db2 database). Use of the 10GbE RoCE Express feature also can help reduce network latency with memory-to-memory transfers by using Shared Memory Communications over Remote Direct Memory Access (SMC-R) in z/OS V2R1 or later.
It is transparent to applications and can be used for LPAR-to-LPAR communication on a single z15 server or for server-to-server communication in a multiple CPC environment.
Support in select Linux on Z distributions is now provided for Shared Memory Communications over Remote Direct Memory Access (SMC-R) by using RoCE Express features. For more information, see this Linux on Z Blogspot web page.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
Shared Memory Communication - Direct Memory Access
First introduced with z13 servers, the Shared Memory Communication - Direct Memory Access (SMC-D) feature maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring changes to application software or the IP topology.
Similar to SMC-R, this protocol uses shared memory architectural concepts that eliminate TCP/IP processing in the data path, yet preserve TCP/IP Qualities of Service for connection management purposes.
Support in select Linux on Z distributions is now provided for Shared Memory Communications over Direct Memory Access (SMC-D). For more information, see this Linux on Z Blogspot web page.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
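As an illustration only, SMC-D is enabled on the GLOBALCONFIG statement of the z/OS TCP/IP profile. ISM devices are selected automatically by matching PNet IDs, so no PFID must be coded; the FIXEDMEMORY value (in megabytes) shown here is an example:
   ; PROFILE.TCPIP sketch: enable SMC-D
   GLOBALCONFIG SMCD FIXEDMEMORY 200 TCPKEEPMININTERVAL 300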
HiperSockets Completion Queue
The HiperSockets Completion Queue function is implemented on z15, z14, z13, and z13s. This function is designed to allow HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary. Therefore, it combines ultra-low latency with more tolerance for traffic peaks. HiperSockets Completion Queue can be especially helpful in burst situations. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
HiperSockets Virtual Switch Bridge
The HiperSockets Virtual Switch Bridge is implemented on z15, z14, z13, and z13s. With the HiperSockets Virtual Switch Bridge, z/VM virtual switch is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to also directly communicate with the following components:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
HiperSockets Multiple Write Facility
The HiperSockets Multiple Write Facility allows the streaming of bulk data over a HiperSockets link between two LPARs. Multiple output buffers are supported on a single Signal Adapter (SIGA) write instruction. The key advantage of this enhancement is that it allows the receiving LPAR to process a much larger amount of data per I/O interrupt. This process is transparent to the operating system in the receiving partition. HiperSockets Multiple Write Facility with fewer I/O interrupts is designed to reduce processor utilization of the sending and receiving partitions.
Support for this function is required by the sending operating system. For more information, see “HiperSockets” on page 194. The supported operating systems are listed in Table 7-8 on page 267.
HiperSockets support of IPv6
IPv6 is expected to be a key element in the future of networking. The IPv6 support for HiperSockets allows compatible implementations between external networks and internal HiperSockets networks. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
HiperSockets Layer 2 support
For flexible and efficient data transfer for IP and non-IP workloads, the HiperSockets internal networks on z15 can support two transport modes: Layer 2 (Link Layer) and the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA).
HiperSockets devices are protocol-independent and Layer 3-independent. Each HiperSockets device features its own Layer 2 Media Access Control (MAC) address. This MAC address allows the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same way as they do a non-mainframe environment.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
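For Linux on Z, a HiperSockets interface can be switched to Layer 2 mode when the qeth device group is configured. The following sketch uses the chzdev utility from s390-tools; the device numbers are examples only:
   # Enable a HiperSockets qeth device group in Layer 2 mode (layer2=1)
   chzdev -e qeth 0.0.7000:0.0.7001:0.0.7002 layer2=1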
HiperSockets network traffic analyzer for Linux on Z
Introduced with IBM System z10, the HiperSockets network traffic analyzer (HS NTA) provides support for tracing Layer 2 and Layer 3 HiperSockets network traffic in Linux on Z. This support allows Linux on Z to control the trace for the internal virtual LAN to capture the records into host memory and storage (file systems).
Linux on Z tools can be used to format, edit, and process the trace records for analysis by system programmers and network administrators.
OSA-Express7S 25 Gigabit Ethernet SR
OSA-Express7S 25GbE SR1.1 (FC 0449) and OSA-Express7S 25GbE SR (FC 0429) are installed in the PCIe I/O drawer. Each feature has one 25GbE physical port and requires 25GbE optics and Ethernet switch 25GbE support (negotiation down to 10GbE is not supported).
Consider the following points regarding operating system support:
z/OS V2R1, V2R2, and V2R3 require fixes for the following APARs: OA55256 (VTAM) and PI95703 (TCP/IP).
z/VM V6R4 and V7R1 require the PTF for APAR PI99085.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express7S 10-Gigabit Ethernet LR and SR
OSA-Express7S 10-Gigabit Ethernet features are installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express6S features, and they retain the same form factor and port granularity.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express6S 10-Gigabit Ethernet LR and SR
OSA-Express6S 10-Gigabit Ethernet features are installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express5S 10-Gigabit Ethernet LR and SR
Introduced with the zEC12 and zBC12, the OSA-Express5S 10-Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one port, which is defined as CHPID type OSD that supports the queued direct input/output (QDIO) architecture for high-speed TCP/IP communication.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express7S Gigabit Ethernet LX and SX
z15 introduces an Ethernet technology refresh with OSA-Express7S Gigabit Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express6S features and they also retain the same form factor and port granularity.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express7S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express6S Gigabit Ethernet LX and SX
z14 introduces an Ethernet technology refresh with OSA-Express6S Gigabit Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express6S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express5S Gigabit Ethernet LX and SX
The OSA-Express5S Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). Each port supports attachment to a 1 Gigabit per second (Gbps) Ethernet LAN. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express5S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express7S 1000BASE-T Ethernet
z15 introduces an Ethernet technology refresh with OSA-Express7S 1000BASE-T Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express6S features and they also retain the same form factor and port granularity.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Notes: Consider the following points:
Operating system support is required to recognize and use the second port on the OSA-Express7S 1000BASE-T Ethernet feature.
OSA-Express7S 1000BASE-T Ethernet feature supports only 1000 Mbps full-duplex mode (no auto-negotiation down to 100 or 10 Mbps).
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express6S 1000BASE-T Ethernet
z14 introduces an Ethernet technology refresh with OSA-Express6S 1000BASE-T Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express6S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Express5S 1000BASE-T Ethernet
The OSA-Express5S 1000BASE-T Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one PCIe adapter and two ports. The two ports share a CHPID, which can be defined as OSC, OSD or OSE. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express5S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
OSA-Integrated Console Controller
The OSA-Express 1000BASE-T Ethernet features provide the Integrated Console Controller (OSA-ICC) function, which supports TN3270E (RFC 2355) and non-SNA DFT 3270 emulation. The OSA-ICC function is defined as CHPID type OSC and console controller, and includes multiple LPAR support as shared or spanned channels.
With the OSA-ICC function, 3270 emulation for console session connections is integrated in the z15 through a port on the OSA-Express7S 1000BASE-T, OSA-Express7S GbE, OSA-Express6S 1000BASE-T, or OSA-Express5S 1000BASE-T.
OSA-ICC can be configured on a PCHID-by-PCHID basis, and is supported at any of the feature settings. Each port can support up to 120 console session connections.
To improve the security of console operations and to provide secure, validated connectivity, OSA-ICC supports Transport Layer Security/Secure Sockets Layer (TLS/SSL) with Certificate Authentication, starting with z13 GA2 (Driver level 27).
 
Note: OSA-ICC supports up to 48 secure sessions per CHPID (the overall maximum of 120 connections is unchanged).
OSA-ICC Enhancements
With HMC 2.14.1 and newer the following enhancements are available:
The IPv6 communications protocol is supported by OSA-ICC 3270 so that clients can comply with regulations that require all computer purchases to support IPv6.
TLS negotiation levels (the supported TLS protocol levels) for the OSA-ICC 3270 client connection can now be specified:
 – TLS 1.0 OSA-ICC 3270 server permits TLS 1.0, TLS 1.1, and TLS 1.2 client connections.
 – TLS 1.1 OSA-ICC 3270 server permits TLS 1.1 and TLS 1.2 client connections.
 – TLS 1.2 OSA-ICC 3270 server permits only TLS 1.2 client connections.
Separate and unique OSA-ICC 3270 certificates are supported (for each PCHID) for the benefit of customers who host workloads across multiple business units or data centers, where cross-site coordination is required. Customers can avoid interrupting all TLS connections at the same time when renewing expired certificates.
OSA-ICC continues to also support a single certificate for all OSA-ICC PCHIDs in the system.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
Checksum offload in QDIO mode (CHPID type OSD)
Checksum offload provides the capability of calculating the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and IP header checksums, which verify the integrity of transmitted data. By moving the checksum calculations to a Gigabit or 1000BASE-T Ethernet feature, host processor cycles are reduced and performance is improved.
Checksum offload provides checksum offload for several types of traffic and is supported by the following features when configured as CHPID type OSD (QDIO mode only):
OSA-Express7S GbE
OSA-Express7S 1000BASE-T Ethernet
OSA-Express6S GbE
OSA-Express6S 1000BASE-T Ethernet
OSA-Express5S GbE
OSA-Express5S 1000BASE-T Ethernet
When checksum is offloaded, the OSA-Express feature runs the checksum calculations for Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) packets. The checksum offload function applies to packets that go to or come from the LAN.
When multiple IP stacks share an OSA-Express, and an IP stack sends a packet to a next hop address that is owned by another IP stack that is sharing the OSA-Express, OSA-Express sends the IP packet directly to the other IP stack. The packet does not have to be placed out on the LAN, which is termed LPAR-to-LPAR traffic. Checksum offload is enhanced to support the LPAR-to-LPAR traffic, which was not originally available.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
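In z/OS, checksum offload is controlled at the stack level. A minimal sketch follows, assuming the CHECKSUMOFFLOAD parameters of the IPCONFIG and IPCONFIG6 profile statements (the default setting):
   ; PROFILE.TCPIP sketch: let the OSA feature compute checksums
   IPCONFIG  CHECKSUMOFFLOAD
   IPCONFIG6 CHECKSUMOFFLOAD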
Querying and displaying an OSA configuration
OSA-Express3 introduced the capability for the operating system to query and display directly the current OSA configuration information (similar to OSA/SF). z/OS uses this OSA capability by introducing the TCP/IP operator command display OSAINFO. z/VM provides this function with the NETSTAT OSAINFO TCP/IP command.
The use of display OSAINFO (z/OS) or NETSTAT OSAINFO (z/VM) allows the operator to monitor and verify the current OSA configuration and helps improve the overall management, serviceability, and usability of OSA-Express cards.
These commands apply to CHPID type OSD. The supported operating systems are listed in Table 7-8 on page 267.
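For example, on a z/OS system whose TCP/IP procedure is named TCPIPA and whose OSA interface uses port name OSA1 (both hypothetical names), the commands might be entered as follows:
   D TCPIP,TCPIPA,OSAINFO,PORTNAME=OSA1      (z/OS operator command)
   NETSTAT OSAINFO                           (z/VM TCP/IP command)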
QDIO data connection isolation for z/VM
The QDIO data connection isolation function provides a higher level of security when sharing an OSA connection in z/VM environments that use VSWITCH. The VSWITCH is a virtual network device that provides switching between OSA connections and the connected guest systems.
QDIO data connection isolation allows disabling internal routing for each QDIO connection. It also provides a means for creating security zones and preventing network traffic between the zones.
QDIO data connection isolation is supported by all OSA-Express features on z15. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
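As a sketch, QDIO data connection isolation is enabled for a virtual switch with the CP SET VSWITCH command; the switch name VSW1 is an example:
   SET VSWITCH VSW1 ISOLATION ON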
QDIO interface isolation for z/OS
Some environments require strict controls for routing data traffic between servers or nodes. In certain cases, the LPAR-to-LPAR capability of a shared OSA connection can prevent such controls from being enforced. With interface isolation, internal routing can be controlled on an LPAR basis. When interface isolation is enabled, the OSA discards any packets that are destined for a z/OS LPAR that is registered in the OSA Address Table (OAT) as isolated.
QDIO interface isolation is supported on all OSA-Express features on z15. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
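A minimal z/OS TCP/IP profile sketch follows; the interface name, port name, and IP address are examples. The ISOLATE parameter instructs OSA to discard traffic that is routed directly to other stacks that share the port:
   INTERFACE OSAINT1 DEFINE IPAQENET
             PORTNAME OSA1
             IPADDR 192.168.1.1/24
             ISOLATE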
QDIO optimized latency mode
QDIO optimized latency mode (OLM) can help improve performance for applications that feature a critical requirement to minimize response times for inbound and outbound data.
OLM optimizes the interrupt processing in the following manner:
For inbound processing, the TCP/IP stack looks more frequently for available data to process. This process ensures that any new data is read from the OSA-Express features without needing more program controlled interrupts (PCIs).
For outbound processing, the OSA-Express cards also look more frequently for available data to process from the TCP/IP stack. Therefore, the process does not require a Signal Adapter (SIGA) instruction to determine whether more data is available.
The supported operating systems are listed in Table 7-8 on page 267.
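A minimal sketch follows, assuming the OLM parameter of the INTERFACE statement (interface name, port name, and address are examples). Note that OLM favors low latency over concurrent sharing of the OSA port:
   INTERFACE OSAINT2 DEFINE IPAQENET
             PORTNAME OSA2
             IPADDR 192.168.2.1/24
             OLM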
QDIO Diagnostic Synchronization
QDIO Diagnostic Synchronization enables system programmers and network administrators to coordinate and simultaneously capture software and hardware traces. It allows z/OS to signal OSA-Express features (by using a diagnostic assist function) to stop traces and capture the current trace records.
QDIO Diagnostic Synchronization is supported by the OSA-Express features on z15 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 267.
Adapter interruptions for QDIO
Linux on Z and z/VM work together to provide performance improvements by using extensions to the QDIO architecture. First added to z/Architecture with HiperSockets, adapter interruptions provide an efficient, high-performance technique for I/O interruptions to reduce path lengths and processor usage. These reductions are in the host operating system and the adapter (supported OSA-Express cards when CHPID type OSD is used).
In extending the use of adapter interruptions to OSD (QDIO) channels, the processor utilization to handle a traditional I/O interruption is reduced. This configuration benefits OSA-Express TCP/IP support in z/VM, z/VSE, and Linux on Z. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
Inbound workload queuing (IWQ) for OSA
OSA-Express3 introduced inbound workload queuing (IWQ), which creates multiple input queues and allows OSA to differentiate workloads “off the wire.” It then assigns inbound work to a specific input queue (per device) for z/OS processing.
Each input queue is a unique type of workload, and has unique service and processing requirements. The IWQ function allows z/OS to preassign the appropriate processing resources for each input queue. This approach allows multiple concurrent z/OS processing threads to process each unique input queue (workload), which avoids traditional resource contention.
IWQ reduces the conventional z/OS processing that is required to identify and separate unique workloads. This advantage results in improved overall system performance and scalability.
A primary objective of IWQ is to provide improved performance for business-critical interactive workloads by reducing contention that is created by other types of workloads. In a heavily mixed workload environment, this “off the wire” network traffic separation is provided by OSA-Express7S, OSA-Express6S, or OSA-Express5S features that are defined as CHPID type OSD. OSA IWQ is shown in Figure 7-5.
Figure 7-5 OSA inbound workload queuing
The following types of z/OS workloads are identified and assigned to unique input queues:
z/OS Sysplex Distributor traffic
Network traffic that is associated with a distributed virtual Internet Protocol address (VIPA) is assigned to a unique input queue. This configuration allows the Sysplex Distributor traffic to be immediately distributed to the target host.
z/OS bulk data traffic
Network traffic that is dynamically associated with a streaming (bulk data) TCP connection is assigned to a unique input queue. This configuration allows the bulk data processing to be assigned the appropriate resources and isolated from critical interactive workloads.
EE (Enterprise Extender / SNA traffic)
IWQ for the OSA-Express features is enhanced to differentiate and separate inbound Enterprise Extender traffic to a dedicated input queue.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
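IWQ is requested per interface in the z/OS TCP/IP profile. A sketch follows, assuming the WORKLOADQ parameter of the INTERFACE statement, which requires VMAC (names and address are examples):
   INTERFACE OSAINT3 DEFINE IPAQENET
             PORTNAME OSA3
             IPADDR 192.168.3.1/24
             VMAC
             WORKLOADQ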
VLAN management enhancements
VLAN management enhancements are valid for supported OSA-Express features on z15 that are defined as CHPID type OSD. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
GARP VLAN Registration Protocol
All OSA-Express features support VLAN prioritization, which is a component of the IEEE 802.1 standard. GARP VLAN Registration Protocol (GVRP) support allows an OSA-Express port to register or unregister its VLAN IDs with a GVRP-capable switch and dynamically update its table as the VLANs change. This process simplifies the network administration and management of VLANs because manually entering VLAN IDs at the switch is no longer necessary. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
Link aggregation support for z/VM
Link aggregation (IEEE 802.3ad) that is controlled by the z/VM Virtual Switch (VSWITCH) allows the dedication of an OSA-Express port to the z/VM operating system. The port must be participating in an aggregated group that is configured in Layer 2 mode. Link aggregation (trunking) combines multiple physical OSA-Express ports into a single logical link. This configuration increases throughput, and provides nondisruptive failover if a port becomes unavailable. The target links for aggregation must be of the same type.
Link aggregation is applicable to CHPID type OSD (QDIO). The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
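A minimal z/VM sketch follows; the group name, device numbers, and switch name are examples. The port group is defined with LACP active, physical OSA ports join the group, and the group is then associated with a Layer 2 (ETHERNET) virtual switch:
   SET PORT GROUP LAGGRP LACP ACTIVE
   SET PORT GROUP LAGGRP JOIN 1100.P00 2100.P00
   DEFINE VSWITCH VSW2 ETHERNET GROUP LAGGRP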
Multi-VSwitch Link Aggregation
Multi-VSwitch Link Aggregation support allows a port group of OSA-Express features to span multiple virtual switches within a single z/VM system or between multiple z/VM systems. Sharing a Link Aggregation Port Group (LAG) with multiple virtual switches increases optimization and utilization of the OSA-Express features when handling larger traffic loads.
Higher adapter utilization protects customer investments, which is increasingly important as 10 GbE deployments become more prevalent. The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
Large send for IPv6 packets
Large send for IPv6 packets improves performance by offloading outbound TCP segmentation processing from the host to an OSA-Express feature by using a more efficient memory transfer into the OSA-Express feature.
Large send support for IPv6 packets applies to the OSA-Express7S, OSA-Express6S, and OSA-Express5S features (CHPID type OSD) on z15, z14, z13, and z13s.
z13 added support of large send for IPv6 packets (segmentation offloading) for LPAR-to-LPAR traffic. OSA-Express6S added TCP checksum on large send, which reduces the cost (CPU time) of error detection for large send.
The supported operating systems are listed in Table 7-8 on page 267 and Table 7-9 on page 269.
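In z/OS, large send is enabled at the stack level. A sketch follows, assuming the SEGMENTATIONOFFLOAD parameters of the IPCONFIG and IPCONFIG6 profile statements:
   ; PROFILE.TCPIP sketch: offload outbound TCP segmentation to OSA
   IPCONFIG  SEGMENTATIONOFFLOAD
   IPCONFIG6 SEGMENTATIONOFFLOAD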
OSA Dynamic LAN idle
The OSA Dynamic LAN idle parameter change helps reduce latency and improve performance by dynamically adjusting the inbound blocking algorithm. System administrators can authorize the TCP/IP stack to enable a dynamic setting that previously was static.
The blocking algorithm is modified based on the following application requirements:
For latency-sensitive applications, the blocking algorithm is modified considering latency.
For streaming (throughput-sensitive) applications, the blocking algorithm is adjusted to maximize throughput.
In all cases, the TCP/IP stack determines the best setting based on the current system and environmental conditions, such as inbound workload volume, processor utilization, and traffic patterns. It can then dynamically update the settings.
Supported OSA-Express features adapt to the changes, which avoids thrashing and frequent updates to the OAT. Based on the TCP/IP settings, OSA holds the packets before presenting them to the host. A dynamic setting is designed to avoid or minimize host interrupts.
OSA Dynamic LAN idle is supported by the OSA-Express7S, OSA-Express6S, and OSA-Express5S features on z15 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 267.
OSA Layer 3 virtual MAC for z/OS environments
To help simplify the infrastructure and facilitate load balancing when an LPAR is sharing an OSA MAC address with another LPAR, each operating system instance can have its own unique logical or virtual MAC (VMAC) address. All IP addresses that are associated with a TCP/IP stack are accessible by using their own VMAC address instead of sharing the MAC address of an OSA port. This situation also applies to Layer 3 mode and to an OSA port spanned among channel subsystems.
OSA Layer 3 VMAC is supported by the OSA-Express7S, OSA-Express6S, and OSA-Express5S features on z15 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 267.
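A minimal profile sketch follows (interface name, port name, and address are examples). With VMAC ROUTEALL, OSA generates a virtual MAC for the interface and routes all IP traffic for this stack to it:
   INTERFACE OSAINT4 DEFINE IPAQENET
             PORTNAME OSA4
             IPADDR 192.168.4.1/24
             VMAC ROUTEALL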
Network Traffic Analyzer
The z15 offers systems programmers and network administrators the ability to more easily solve network problems despite high traffic. With the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the server, you can capture trace and trap data. This data can then be forwarded to z/OS tools for easier problem determination and resolution.
The Network Traffic Analyzer is supported by the OSA-Express7S, OSA-Express6S, and OSA-Express5S features on z15 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 267.
7.4.6 Cryptography Features and Functions Support
IBM z15™ provides the following major groups of cryptographic functions:
Synchronous cryptographic functions, which are provided by CPACF
Asynchronous cryptographic functions, which are provided by the Crypto Express7S feature
The minimum software support levels are described in the following sections. Review the current PSP buckets to ensure that the latest support levels are known and included as part of the implementation plan.
CP Assist for Cryptographic Function
Central Processor Assist for Cryptographic Function (CPACF), which is standard on every z15 core, now supports pervasive encryption. Simple policy controls allow businesses to enable encryption to protect data in mission-critical databases without needing to stop the database or re-create database objects. Database administrators can use z/OS Data Set Encryption, z/OS Coupling Facility Encryption, z/VM encrypted hypervisor paging, and z/TPF transparent database encryption, which use the performance enhancements in the hardware.
CPACF supports the following features in z15:
Advanced Encryption Standard (AES, symmetric encryption)
Data Encryption Standard (DES, symmetric encryption)
Secure Hash Algorithm (SHA, hashing)
SHAKE Algorithms
True Random Number Generation (TRNG)
Improved GCM (Galois Counter Mode) encryption (enabled by a single hardware instruction)
In addition, the z15 core implements a Modulo Arithmetic unit in support of Elliptic Curve Cryptography.
CPACF also is used by several IBM software product offerings for z/OS, such as IBM WebSphere Application Server for z/OS. For more information, see 6.4, “CP Assist for Cryptographic Functions” on page 226.
The supported operating systems are listed in Table 7-10 on page 272 and Table 7-11 on page 273.
Crypto Express7S
Introduced with z15, Crypto Express7S includes a single- or dual-port adapter (single or dual PCIe Cryptographic co-processor [PCIeCC - IBM 4769]) and complies with the following Physical Security Standards:
FIPS 140-2 level 4
Common Criteria EP11 EAL4
Payment Card Industry (PCI) HSM
German Banking Industry Commission (GBIC, formerly DK)
Support of Crypto Express7S functions varies by operating system and release and by the way the card is configured as a coprocessor or an accelerator. For more information, see 6.5, “Crypto Express7S” on page 232. The supported operating systems are listed in Table 7-10 on page 272 and Table 7-11 on page 273.
Crypto Express6S (carry forward on z15)
Introduced with z14, Crypto Express6S complies with the following Physical Security Standards:
FIPS 140-2 level 4
Common Criteria EP11 EAL4
Payment Card Industry (PCI) HSM
German Banking Industry Commission (GBIC, formerly DK)
Support of Crypto Express6S functions varies by operating system and release and by the way the card is configured as a coprocessor or an accelerator. For more information, see 6.5, “Crypto Express7S” on page 232. The supported operating systems are listed in Table 7-10 on page 272 and Table 7-11 on page 273.
Crypto Express5S (carry forward on z15)
Support of Crypto Express5S functions varies by operating system and release and by the way the card is configured as a coprocessor or an accelerator. The supported operating systems are listed in Table 7-10 on page 272 and Table 7-11 on page 273.
Web deliverables
For more information about web-deliverable code on z/OS, see the z/OS downloads website.
For Linux on Z, support is delivered through IBM and the distribution partners. For more information, see Linux on Z on the IBM developerWorks website.
z/OS Integrated Cryptographic Service Facility
Integrated Cryptographic Service Facility (ICSF) is a base component of z/OS. It is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express, to balance the workload and help address the bandwidth requirements of the applications.
Despite being a z/OS base component, ICSF functions are generally made available through web deliverable support a few months after a new z/OS release. Therefore, new functions are related to an ICSF function modification identifier (FMID) instead of a z/OS version.
ICSF HCR77D1 - Cryptographic Support for z/OS V2R2, V2R3, and V2R4
z/OS V2.2, V2.3, and V2.4 require ICSF Web Deliverable WD19 (HCR77D1) to support the following features:
Support for CCA 5.5 and CCA 6.3:
 – PCI HSM Phase 4 (AES and RSA) and ANSI TR-34
 – ICSF Support with an SPE for HCR77D0
Support for Crypto Express7S
Support for more than 16 Adapters
Support for carry forward for Crypto Express5S and Crypto Express6S
Support for:
 – EP11 and ECC Protected Key
 – CPACF ECC Enablement MSA-9
 – EP11 and CCA Support for new ECC Curves
 – FPE Voltage Algorithms
 – Post Quantum Crypto PoC
ICSF HCR77D0 - Cryptographic Support for z/OS V2R2 and z/OS V2R3
z/OS V2.2 and V2.3 require ICSF Web Deliverable WD18 (HCR77D0) to support the following features:
Support for the updated German Banking standard (DK):
 – CCA 5.4 and 6.1:
 • ISO-4 PIN Blocks (ISO-9564-1)
 • Directed keys: A key can either encrypt or decrypt data, but not both.
 • Allow AES transport keys to be used to export/import DES keys in a standard ISO 20038 key block. This feature helps with interoperability between CCA and non-CCA systems.
 • Allow AES transport keys to be used to export/import a small subset of AES keys in a standard ISO 20038 key block. This feature helps with interoperability between CCA and non-CCA systems.
 • Triple-length TDES keys with Control Vectors for increased data confidentiality.
 – CCA 6.2: PCI HSM 3K DES - Support for triple length DES keys (standards compliance)
EP11 Stage 4:
 – New elliptic curve algorithms for PKCS#11 signature, key derivation operations:
 • Ed448 elliptic curve
 • EC25519 elliptic curve
 – EP11 Concurrent Patch Apply: Allows service to be applied to the EP11 coprocessor dynamically without taking the crypto adapter offline (already available for CCA coprocessors).
 – eIDAS compliance: eIDAS cross-border EU regulation for portable recognition of electronic identification.
ICSF HCR77C1 - Cryptographic Support for z/OS V2R1 - z/OS V2R3
ICSF Web Deliverable HCR77C1 provides support for the following features:
Usage and administration of Crypto Express6S
This feature might be configured as an accelerator (CEX6A), a CCA coprocessor (CEX6C), or an EP-11 coprocessor (CEX6P).
Coprocessor in PCI-HSM Compliance Mode (enablement requires TKE 9.0 or newer).
z14 CPACF support. For more information, see “CP Assist for Cryptographic Function” on page 316.
The following software enhancements are available in ICSF Web Deliverable HCR77C1:
Crypto Usage Statistics: When enabled, ICSF aggregates statistics that are related to crypto workloads and logs to an SMF record.
Panel-based CKDS Administration: ICSF added an ISPF, panel-driven interface that allows interactive administration (View, Create, Modify, and Delete) of CKDS keys.
CICS End User Auditing: When enabled, ICSF retrieves the CICS user identity and includes it as a log string in the SAF resource check. The user identity is not checked for access to the resource. Instead, it is included in the resource check (SMF Type 80) records that are logged for any of the ICSF SAF classes protecting crypto keys and services (CSFKEYS, XCSFKEY, CRYPTOZ, and CSFSERV).
For more information about ICSF versions and FMID cross-references, see the z/OS: ICSF Version and FMID Cross Reference, TD103782, abstract that is available at the IBM Techdoc website.
For PTFs that allow previous levels of ICSF to coexist with the Cryptographic Support for z/OS V2R1 - z/OS V2R3 (HCR77C1) web deliverable, check the following FIXCAT:
IBM.Coexistence.ICSF.z/OS_V2R1-V2R3-HCR77C1
RMF Support for Crypto Express7S and Crypto Express6S
RMF enhances the Monitor I Crypto Activity data gatherer to recognize and use performance data for the new Crypto Express7S (CEX7) and Crypto Express6S (CEX6) cards. RMF supports all valid card configurations on z15 and provides CEX7 and CEX6 crypto activity data in the SMF type 70 subtype 2 records and the RMF Postprocessor Crypto Activity Report.
Reporting can be done at an LPAR/domain level to provide more granular reports for capacity planning and diagnosing problems. This feature requires the fix for APAR OA54952.
The supported operating systems are listed in Table 7-10 on page 272.
z/OS Data Set Encryption
Aligned with the IBM Z Pervasive Encryption initiative, IBM provides application-transparent, policy-controlled data set encryption in IBM z/OS.
Policy-driven z/OS Data Set Encryption enables users to perform the following tasks:
Decouple encryption from data classification; encrypt data automatically, independent of labor-intensive data classification work.
Encrypt data immediately and efficiently at the time it is written.
Reduce risks that are associated with mis-classified or undiscovered sensitive data.
Help protect digital assets automatically.
Achieve application transparent encryption.
IBM DB2® for z/OS and IBM Information Management System (IMS) intend to use z/OS Data Set Encryption.
With z/OS Data Set Encryption, DFSMS enhances data security with support for data set level encryption through the DFSMS access methods. This function is designed to give users the ability to encrypt their data sets without changing their application programs.
DFSMS users can identify which data sets require encryption by using JCL, Data Class, or the RACF data set profile. Data set level encryption can allow the data to remain encrypted during functions, such as backup and restore, migration and recall, and replication.
z/OS Data Set Encryption requires CP Assist for Cryptographic Functions (CPACF).
Because of the significant enhancements that were introduced with z14, access method encryption uses the XTS encryption mode to obtain the best performance possible. Do not enable z/OS data set encryption until all sharing systems, fallback, backup, and disaster recovery (DR) systems support encryption.
In addition to applying PTFs enabling the support, ICSF configuration is required. The supported operating systems are listed in Table 7-10 on page 272.
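As an illustration, a key label can be assigned when a new extended-format data set is allocated. In the following JCL sketch, the data set name, data class, and key label are examples; the key label must identify an AES-256 key in the ICSF CKDS, and the data class must specify extended format:
   //NEWDS   DD DSN=PROD.SALES.ENCRYPT,DISP=(NEW,CATLG),
   //           DATACLAS=DCEXTEF,SPACE=(CYL,(10,5)),
   //           DSKEYLBL='PROD.DATASET.KEY01'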
Crypto Analytics Tool for Z
The IBM Crypto Analytics Tool (CAT) for Z is an analytics solution that collects data about your z/OS cryptographic infrastructure, presents reports, and analyzes whether any vulnerabilities exist. CAT collects cryptographic information from across the enterprise and provides reports to help users better manage the crypto infrastructure and ensure that it follows best practices. The use of CAT can help you manage complex cryptography resources across your organization.
z/VM encrypted hypervisor paging (encrypted paging support)
With the PTF for APAR VM65993, z/VM V6.4 provides support for encrypted paging, in line with the z15 pervasive encryption philosophy of encrypting all data in flight and at rest. Ciphering occurs as data moves between active memory and a paging volume that is owned by z/VM.
Included in this support is the ability to dynamically control whether a running z/VM system is encrypting this data. This support protects guest paging data from administrators or users with access to volumes. Enabled with AES encryption, z/VM Encrypted Paging incurs low overhead by using CPACF.
The supported operating systems are listed in Table 7-10 on page 272.
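As a sketch, encrypted paging can be switched on dynamically with a CP command and then verified:
   SET ENCRYPT PAGING ON
   QUERY ENCRYPT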
z/TPF transparent database encryption
Shipped in August 2016, z/TPF at-rest Data Encryption provides the following features and benefits:
Automatic encryption of at-rest data by using AES CBC (128 or 256).
No application changes required.
Database level encryption by using highly efficient CPACF.
Inclusion of data on disk and cached in memory.
Ability to include data integrity checking (optionally by using SHA-256) to detect accidental or malicious data corruption.
Tools to migrate a database from unencrypted to encrypted state or change the encryption key/algorithm for a specific DB while transactions are flowing (no database downtime).
Pervasive encryption for Linux on Z
Pervasive encryption for Linux on Z combines the full power of Linux with z15 capabilities by using the support of the following features:
Kernel Crypto: z15 CPACF
LUKS dm-crypt Protected-Key CPACF
Libica and openssl: z15 CPACF and acceleration of RSA handshakes by using SIMD
Secure Service Container: High security virtual appliance deployment infrastructure
Protection of data at-rest
By using the integration of industry-unique hardware accelerated CPACF encryption into the standard Linux components, users can achieve optimized encryption transparently to prevent raw key material from being visible to operating systems and applications.
Because of the potential costs and overheads, most organizations avoid the use of host-based network encryption today. By using enhanced CPACF and SIMD on z15, TLS and IPSec can use hardware performance gains while benefiting from transparent enablement. The reduced cost of encryption enables broad use of network encryption.
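The following Linux on Z sketch shows protected-key dm-crypt with the paes cipher. It assumes the zkey utility from s390-tools and a Crypto Express adapter in CCA coprocessor mode for secure-key generation; the device and file names are examples:
   # Generate a secure AES key for XTS mode (raw key material never leaves the HSM)
   zkey generate seckey.bin --xts --keybits 256
   # Format and open a LUKS2 volume that uses protected-key CPACF (PAES)
   cryptsetup luksFormat --type luks2 --cipher paes-xts-plain64 \
             --master-key-file seckey.bin --key-size 1024 /dev/dasdc1
   cryptsetup open /dev/dasdc1 secvol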
7.5 z/OS migration considerations
Except for base processor support, z/OS releases do not require any of the functions that are introduced with the z15. The minimal toleration support that is needed depends on the z/OS release.
Although z15 servers do not require any “functional” software, it is recommended to install all z15 service before upgrading to the new server. The support matrix for z/OS releases and the Z servers that support them are listed in Table 7-16.
Table 7-16 z/OS support summary
z/OS      z10 EC    z196      zEC12     z13       z14   z15   End of     Extended
Release   z10 BC    z114      zBC12     z13s                  Service    Defect
          (WDFM1)   (WDFM1)   (WDFM1)   (WDFM1)                          Support2
V2R1      X         X         X         X         X     Xb    09/2018    09/2021
V2R2      X         X         X         X         X     X     09/2020b   09/2023b
V2R3      -         -         X         X         X     X     09/2022b   09/2025b
V2R43     -         -         -         X         X     X     -          -

1 Server was withdrawn from marketing (WDFM).
2 The IBM Software Support Services provides the ability for customers to purchase extended defect support service for z/OS.
7.5.1 General guidelines
The IBM z15™ introduces the latest IBM Z technology. Although support is provided by z/OS starting with z/OS 2.1, the capabilities and use of z15 depend on the z/OS release. Also, web deliverables are needed for some functions on some releases. In general, consider the following guidelines:
Do not change software releases and hardware at the same time.
Keep members of the sysplex at the same software level, except during brief migration periods.
Migrate to an STP-only network before introducing a z15 into a sysplex.
Review any restrictions and migration considerations before creating an upgrade plan.
Acknowledge that some hardware features cannot be ordered or carried forward for an upgrade from an earlier server to z15 and plan accordingly.
Determine the changes in IOCP, HCD, and HCM to support defining z15 configuration and the new features and functions it introduces.
Ensure that none of the new z/Architecture Machine Instructions (mnemonics) that were introduced with z15 collide with the names of Assembler macro instructions that you use.
Check the use of MACHMIG statements in LOADxx PARMLIB members, as shown in the sketch after this list.
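For example, a LOADxx member can defer the use of specific facilities during a migration window. In the following sketch, TX (transactional execution) and VEF (vector extension facility) are examples of facility keywords that the MACHMIG statement accepts:
   MACHMIG TX,VEF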
7.5.2 Hardware Fix Categories (FIXCATs)
Base support includes fixes that are required to run z/OS on the IBM z15™ server. They are identified by:
IBM.Device.Server.z15-8561.RequiredService
Exploitation support covers fixes that are required to use the capabilities of the IBM z15™ server. They are identified by:
IBM.Device.Server.z15-8561.Exploitation
Recommended service is identified by:
IBM.Device.Server.z15-8561.RecommendedService
Support for z15 is provided by using a combination of web deliverables and PTFs, which are documented in PSP Bucket Upgrade = 8561DEVICE, Subset = 8561/ZOS.
Consider the following other Fix Categories of Interest:
Fixes that are required to use Parallel Sysplex InfiniBand Coupling links:
IBM.Function.ParallelSysplexInfiniBandCoupling
Fixes that are required to use the Server Time Protocol function:
IBM.Function.ServerTimeProtocol
Fixes that are required to use the High-Performance FICON function:
IBM.Function.zHighPerformanceFICON
PTFs that allow previous levels of ICSF to coexist with the latest Cryptographic Support for z/OS V2R2 - z/OS V2R4 (HCR77D1) web deliverable:
IBM.Coexistence.ICSF.z/OS_V2R2-V2R4-HCR77D1
PTFs that allow previous levels of ICSF to coexist with the latest Cryptographic Support for z/OS V2R2 - z/OS V2R3 (HCR77D0) web deliverable:
IBM.Coexistence.ICSF.z/OS_V2R2-V2R3-HCR77D0
PTFs that allow previous levels of ICSF to coexist with the Cryptographic Support for z/OS V2R1 - z/OS V2R3 (HCR77C1) web deliverable:
IBM.Coexistence.ICSF.z/OS_V2R1-V2R3-HCR77C1
Use the SMP/E REPORT MISSINGFIX command to determine whether any FIXCAT APARs exist that are applicable and are not yet installed, and whether any SYSMODs are available to satisfy the missing FIXCAT APARs.
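A minimal SMP/E job sketch follows; the CSI data set name and target zone name are examples:
   //REPORT  EXEC PGM=GIMSMP
   //SMPCSI  DD DISP=SHR,DSN=SMPE.GLOBAL.CSI
   //SMPCNTL DD *
      SET BOUNDARY(GLOBAL) .
      REPORT MISSINGFIX ZONES(ZOSTGT)
             FIXCAT(IBM.Device.Server.z15-8561.*) .
   /*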
For more information about IBM Fix Category Values and Descriptions, see the IBM Fix Category Values and Descriptions page of the IBM IT infrastructure website.
7.5.3 Coupling links
z15 servers support only active participation in the same Parallel Sysplex with z14, z13, and z13s. Configurations with z/OS on one of these servers can add a z15 Server to their Sysplex for a z/OS or a Coupling Facility image.
Configurations with a Coupling Facility on one of these servers can add a z15 Server to their Sysplex for a z/OS or a Coupling Facility image. z15 does not support participating in a Parallel Sysplex with System zEC12/zBC12 and earlier systems.
Each system can use, or not use, internal coupling links, InfiniBand coupling links, or ICA coupling links independently of what other systems are using.
Coupling connectivity is available only when other systems also support the same type of coupling. For more information about supported coupling link technologies on z15, see 4.6.4, “Parallel Sysplex connectivity” on page 196, and the Coupling Facility Configuration Options white paper.
7.5.4 z/OS XL C/C++ considerations
z/OS V2R4 is required to use the latest level (13) of the following C/C++ compiler options:
ARCHITECTURE: This option selects the minimum level of system architecture on which the program can run. Certain features that are provided by the compiler require a minimum architecture level. ARCH(13) uses instructions that are available on the z15.
TUNE: This option allows optimization of the application for a specific system architecture within the constraints that are imposed by the ARCHITECTURE option. The TUNE level must not be lower than the setting in the ARCHITECTURE option.
The following new functions provide performance improvements for applications by using new z15 instructions:
Vector Programming Enhancements
New z15 hardware instruction support
Packed Decimal support using vector registers
Auto-SIMD enhancements to make use of new data types
To enable the use of the new functions, specify ARCH(13) and VECTOR for compilation. The binaries that are produced by the compiler can be run only on z15 and later because they use the z15 vector facility for the new functions. The use of older versions of the compiler on z15 does not enable the new functions.
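As a sketch under z/OS UNIX System Services (the file names are examples; in batch, the same options can be passed to the compile step, for example with CPARM='ARCH(13),TUNE(13),VECTOR'):
   xlc -qarch=13 -qtune=13 -qvector -O2 -o myapp myapp.c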
For more information about the ARCHITECTURE, TUNE, and VECTOR compiler options, see z/OS V2R2.0 XL C/C++ User’s Guide, SC09-4767.
 
Important: Use the previous ARCHITECTURE or TUNE options for C/C++ programs if the same applications run on previous IBM Z servers. However, if C/C++ applications run on z15 servers only, use the latest ARCHITECTURE and TUNE options to ensure that the best possible performance is delivered through the latest instruction set additions.
For more information, see Migration from z/OS V2R1 to z/OS V2R2, GA32-0889.
7.5.5 z/OS V2.3
Consider the following points before migrating z/OS V2.3 to IBM z15™:
IBM z/OS V2.3 with z15 requires a minimum of 8 GB of memory. When running as a z/VM guest or on an IBM System z® Personal Development Tool, a minimum of 2 GB is required for z/OS V2.3. If the minimum is not met, a warning WTOR is issued at IPL.
Continuing with less than the minimum memory might affect availability. A migration health check is provided for z/OS V2.1 and z/OS V2.2 to warn if the system is configured with less than 8 GB.
Dynamic splitting and merging of Coordinated Timing Network (CTN) is available with z15.
The z/OS V2.3 real storage manager (RSM) is planned to support a new asynchronous memory clear operation to clear the data from 1M page frames by using I/O processors (SAPs). The new asynchronous memory clear operation eliminates the CPU cost for this operation and helps improve the performance of RSM first-reference page fault processing and system services, such as IARV64 and STORAGE OBTAIN.
RMF support is provided to collect SMC-D related performance measurements in SMF 73 Channel Path Activity and SMF 74 subtype 9 PCIE Activity records. It also provides these measurements in the RMF Postprocessor and Monitor III PCIE and Channel Activity reports. This support is also available on z/OS V2.2 with PTF UA80445 for APAR OA49113.
HyperSwap support is enhanced to allow RESERVE processing. When a system runs a request to swap to secondary devices that are managed by HyperSwap, z/OS detects when RESERVEs are held and ensures that the devices that are swapped also hold the RESERVE. This enhancement is provided with collaboration from z/OS, GDPS HyperSwap, and CSM HyperSwap.
7.5.6 z/OS V2.4
7.6 z/VM migration considerations
IBM z15 supports z/VM 7.1 and z/VM 6.4. z/VM is moving to a continuous delivery model. For more information, see this web page.
7.6.1 z/VM 7.1
z/VM 7.1 can be installed directly on IBM z15. z/VM V7R1 includes the following new features:
Single System Image and Live Guest Relocation are included in the base. In z/VM 6.4, this function was delivered as the priced VMSSI feature.
Enhances the dump process to reduce the time that is required to create and process dumps.
Upgrades to a new Architecture Level Set. This feature requires an IBM zEnterprise EC12 or BC12, or later.
Provides the base for more functionality to be delivered as service Small Program Enhancements (SPEs) after general availability.
z/VM 7.1 includes SPEs shipped for z/VM 6.4, including Virtual Switch Enhanced Load Balancing, DS8000 z-Thin Provisioning, and Encrypted Paging.
7.6.2 z/VM V6.4
z/VM V6.4 can be installed directly on a z15 server with an image that is obtained from IBM after Sept. 23, 2019. The PTF for APAR VM65942 must be applied immediately after installing z/VM V6.4 and before configuring any part of the new z/VM system.
A z/VM Release Status Summary for supported z/VM versions is listed in Table 7-17.
Table 7-17 z/VM Release Status Summary
z/VM      General          End of          End of          Minimum            Maximum
Level1    Availability     Marketing       Service         Processor Level    Processor Level
7.1       September 2018   Not announced   Not announced   zEC12 and zBC12    -
6.4       November 2016    Not announced   Not announced   z196 and z114      -

1 Older z/VM versions (6.3, 6.2, and 5.4) are end of service.
7.6.3 ESA/390-compatibility mode for guests
IBM z15™ no longer supports the full ESA/390 architectural mode. However, IBM z15™ does provide ESA/390-compatibility mode, which is an environment that supports a subset of DAT-off ESA/390 applications in a hybrid architectural mode.
z/VM provides the support that is necessary for DAT-off guests to run in this new compatibility mode. This support allows guests, such as CMS, GCS, and those guests that start in ESA/390 mode briefly before switching to z/Architecture mode to continue to run on IBM z15™.
The available PTF for APAR VM65976 provides infrastructure support for ESA/390 compatibility mode within z/VM V6.4. It must be installed on all members of an SSI cluster before any z/VM V6.4 member of the cluster is run on an IBM z15™ server.
In addition to operating system support, all stand-alone utilities that a client uses must be at a minimum level or require a PTF.
7.6.4 Capacity
For any z/VM logical partition (LPAR) and any z/VM guest, you might want to adjust the number of Integrated Facility for Linux (IFL) processors and central processors (CPs), real or virtual, to accommodate the PU capacity of z15 servers.
7.7 z/VSE migration considerations
As described in “z/VSE” on page 258, IBM z15 supports z/VSE 6.2.
Consider the following general guidelines when you are migrating a z/VSE environment to z15 servers:
Collect reference information before migration
This information includes baseline data that reflects the current status, such as performance data, CPU utilization of a reference workload, I/O activity, and elapsed times.
This information is required to size z15 and is the only way to compare workload characteristics after migration.
For more information, see the z/VSE Release and Hardware Upgrade document.
Apply required maintenance for z15
Review the Preventive Service Planning (PSP) bucket 8561DEVICE for z15 and apply the required PTFs for IBM and independent software vendor (ISV) products.
 
Note: IBM z15™ supports z/Architecture mode only.
7.8 Software licensing
The IBM z15™ software portfolio includes operating system software (that is, z/OS, z/VM, z/VSE, and z/TPF) and middleware that runs on these operating systems. The portfolio also includes middleware for Linux on Z environments.
For the z15, the following metric groups for software licensing are available from IBM, depending on the software product:
Monthly license charge (MLC)
MLC pricing metrics feature a recurring charge that applies monthly. In addition to the permission to use the product, the charge includes access to IBM product support during the support period. MLC pricing applies to z/OS, z/VSE, and z/TPF operating systems. Charges are based on processor capacity, which is measured in millions of service units (MSU) per hour.
International Program License Agreement (IPLA)
IPLA metrics have a single, up-front charge for an entitlement to use the product. An optional and separate annual charge (called subscription and support) entitles clients to access IBM product support during the support period. With this option, you can also receive future releases and versions at no extra charge.
Software Licensing References
For more information about software licensing, see the following websites:
The IBM International Passport Advantage® Agreement can be downloaded from the Learn about Software licensing website.
Subcapacity license charges
For eligible programs, subcapacity licensing allows software charges that are based on the measured utilization by logical partitions instead of the total number of MSUs of the CPC. Subcapacity licensing removes the dependency between the software charges and CPC (hardware) installed capacity.
The subcapacity licensed products are charged monthly based on the highest observed 4-hour rolling average utilization of the logical partitions in which the product runs. The exception is products that are licensed by using the Select Application License Charge (SALC) pricing metric. This type of charge requires measuring the utilization and reporting it to IBM.
The 4-hour rolling average utilization of the logical partition can be limited by a defined capacity value on the image profile of the partition. This value activates the soft capping function of the PR/SM, which limits the 4-hour rolling average partition utilization to the defined capacity value. Soft capping controls the maximum 4-hour rolling average usage (the last 4-hour average value at every 5-minute interval), but does not control the maximum instantaneous partition use.
You can also use an LPAR group capacity limit, which sets soft capping by PR/SM for a group of logical partitions that are running z/OS.
Even with the soft capping option, the partition utilization can reach up to its maximum share, based on the number of logical processors and the weights in the image profile. Because only the 4-hour rolling average utilization is tracked, utilization peaks above the defined capacity value are allowed, as the sketch that follows illustrates.
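To make the soft capping calculation concrete, consider the following minimal Python sketch. It is illustrative only; the utilization trace and the 100 MSU defined capacity are hypothetical values, not output from any IBM tool. The sketch computes the 4-hour rolling average (4HRA) from 5-minute MSU samples and tests it against the defined capacity:

from collections import deque

SAMPLES_PER_4H = 48  # one 4-hour window of 5-minute samples

def rolling_averages(msu_samples):
    """Yield the 4-hour rolling average (4HRA) after each 5-minute sample."""
    window = deque(maxlen=SAMPLES_PER_4H)
    for sample in msu_samples:
        window.append(sample)
        yield sum(window) / len(window)

defined_capacity = 100           # hypothetical MSU value from the image profile
trace = [80] * 48 + [150] * 6    # hypothetical trace: 30-minute spike to 150 MSU

for interval, avg in enumerate(rolling_averages(trace)):
    capping = avg > defined_capacity  # soft capping acts on the 4HRA, not the spike
    print(f"{interval:3d}: now={trace[interval]:3d} MSU  4HRA={avg:6.1f}  capping={capping}")

In this trace, the instantaneous utilization reaches 150 MSU, but the 4-hour rolling average peaks at only 88.75 MSU, below the 100 MSU defined capacity, so soft capping is not triggered. This result matches the behavior that is described above: peaks above the defined capacity are allowed while the rolling average remains below it.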
Some pricing metrics apply to stand-alone Z servers. Others apply to the aggregation of multiple Z server workloads within the same Parallel Sysplex.
For more information about Workload License Charges (WLC) and how to combine logical partition utilization, see z/OS Planning for Sub-Capacity Pricing, SA23-2301.
Key MLC metrics and offerings
MLC metrics include various offerings. The following metrics and pricing schemes are available. Offerings often are tied to, or made available only on, certain Z servers:
Key MLC metrics:
 – WLC (Workload License Charges)
 – AWLC (Advanced Workload License Charges)
 – CMLC (Country Multiplex License Charges)
 – VWLC (Variable Workload License Charges)
 – FWLC (Flat Workload License Charges)
 – AEWLC (Advanced Entry Workload License Charges)
 – EWLC (Entry Workload License Charges)
 – TWLC (Tiered Workload License Charges)
 – zNALC (System z New Application License Charges)
 – PSLC (Parallel Sysplex License Charges)
 – MWLC (Midrange Workload License Charges)
 – zELC (zSeries Entry License Charges)
 – GOLC (Growth Opportunity License Charges)
 – SALC (Select Application License Charges)
Pricing:
 – GSSP (Getting Started Sub-Capacity Pricing)
 – IWP (Integrated Workload Pricing)
 – MWP (Mobile Workload Pricing)
 – zCAP (Z Collocated Application Pricing)
 – Parallel Sysplex Aggregated Pricing
 – CMP (Country Multiplex Pricing)
 – ULC (IBM S/390® Usage Pricing)
One of the recent changes in software licensing for z/OS and z/VSE is Multi-Version Measurement (MVM), which replaced Single Version Charging (SVC), the Migration Pricing Option (MPO), and the IPLA Migration Grace Period.
MVM for z/OS and z/VSE removes time limits for running multiple eligible versions of a software program. Clients can run different versions of a program simultaneously for an unlimited duration during a program version upgrade.
Clients can also choose to run multiple different versions of a program simultaneously for an unlimited duration in a production environment. MVM allows clients to selectively deploy new software versions, which provides more flexible control over their program upgrade cycles. For more information, see Software Announcement 217-093, dated February 14, 2017.
Technology Transition Offerings with z15
Complementing the announcement of the z15 server, IBM introduced the following Technology Transition Offerings (TTOs):
Technology Update Pricing for the IBM z15™.
New and revised Transition Charges for Sysplexes or Multiplexes TTOs for actively coupled Parallel Sysplexes (z/OS), Loosely Coupled Complexes (z/TPF), and Multiplexes (z/OS and z/TPF).
Technology Update Pricing for the IBM z15™ extends the software price and performance that is provided by AWLC and CMLC to z15 servers. The new and revised Transition Charges for Sysplexes or Multiplexes offerings provide a transition to Technology Update Pricing for the IBM z15™ for customers who have not yet fully migrated to z15 servers. This transition ensures that aggregation benefits are maintained, and phases in the benefits of Technology Update Pricing for the IBM z15™ as customers migrate.
When a z15 server is in an actively coupled Parallel Sysplex or a Loosely Coupled Complex, you might choose aggregated Advanced Workload License Charges (AWLC) pricing or aggregated Parallel Sysplex License Charges (PSLC) pricing (subject to all applicable terms and conditions).
When a z15 server is part of a Multiplex under Country Multiplex Pricing (CMP) terms, Country Multiplex License Charges (CMLC), Multiplex zNALC (MzNALC), and Flat Workload License Charges (FWLC) are the only pricing metrics available (subject to all applicable terms and conditions).
When a z15 server is running z/VSE, you can choose Midrange Workload License Charges (MWLC), which are subject to all applicable terms and conditions.
For more information about AWLC, CMLC, MzNALC, PSLC, MWLC, or the Technology Update Pricing and Transition Charges for Sysplexes or Multiplexes TTO offerings, see the IBM z Software Pricing page of the IBM IT infrastructure website.
7.9 References
For more information about planning, see the home pages for the following operating systems:
z/OS
z/VM
z/TPF
 

1 Use support for select features by way of PTFs. Toleration support for new hardware might also require PTFs.
2 z/VM Dynamic Memory Downgrade (releasing memory from z/VM LPAR) will be made available in the future with PTFs for APAR VM66173. For more information, see: http://www.vm.ibm.com/newfunction/#dmd
3 FICON Express16SA and FICON Express16S+ do not allow a mixture of CHPID types on the same card.
4 SMT is also enabled (not user configurable) by default for SAPs.
5 The features that are listed here might not be available on all operating systems that are listed in the tables.
6 Hardware-based Vector-extension facility 2.
7 InfiniBand coupling is not supported on z15.
8 Exceptions are made to this statement, and many details are omitted in this description. In this section, we assume that you can merge this brief description with an understanding of I/O operations in a virtual memory environment.
9 Only OSA-Express6S and OSA-Express5S cards are supported on z15 as carry forward.
10 CPACF hardware is implemented on each z15 core. CPACF functionality is enabled with FC 3863.
11 CCA 5.4 and 6.1 enhancements are also supported for z/OS V2R1 with ICSF HCR77C1 (WD17) with SPEs (Small Program Enhancements; z/OS continuous delivery model).
12 For example, the use of Crypto Express7S requires the Cryptographic Support for z/OS V2R1 - z/OS V2R3 web deliverable.
14 IBM z15 does not support InfiniBand coupling links. More planning might be required to integrate the z15 in a Parallel Sysplex with z14 and z13/z13s servers.