Operating system support
This chapter describes the minimum operating system requirements and support considerations for the IBM z14™ servers and their features. It addresses z/OS, z/VM, z/VSE, z/TPF, Linux on Z, and the KVM hypervisor.
 
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906) unless otherwise specified.
Because this information is subject to change, see the hardware fix categories (IBM.Device.Server.z14-3906.*) for the most current information.
Support of z14 functions depends on the operating system, its version, and release.
This chapter includes the following topics:
7.1, "Operating systems summary"
7.2, "Support by operating system"
7.3, "z14 features and function support overview"
7.4, "Support by features and functions"
7.1 Operating systems summary
The minimum operating system levels that are required on z14 servers are listed in Table 7-1.
 
End of service operating systems: Operating system levels that are no longer in service are not covered in this publication. These older levels might support some features.
Table 7-1 z14 minimum operating system requirements

Operating system (1) | Supported version and release on z14 (2)
z/OS | V1R13 (3)
z/VM | V6R4
z/VSE | V6R1 (4)
z/TPF | V1R1
Linux on Z and the KVM hypervisor (5) | Offered with the following Linux distributions: SLES 12 SP2 or higher, and Ubuntu 16.04 LTS or higher

(1) Only z/Architecture mode is supported. For more information, see the shaded box titled "z/Architecture mode" that follows this table.
(2) Service is required. For more information, see the shaded box that is titled "Features" on page 229.
(3) z/OS V1R13 and V2R1: compatibility only. IBM Software Support Services, offered as of October 1, 2016 for z/OS V1.13 and as of October 1, 2018 for z/OS V2.1, provides the ability to purchase extended defect support service for those releases.
(4) As announced on February 7, 2017, the end of service date for z/VSE V5R2 was October 31, 2018.
(5) For more information about minimal and recommended distribution levels, see the Linux on Z website.
 
z/Architecture mode: As announced on January 14, 2015 with Announcement letter 115-001, beginning with IBM z14™, all IBM Z servers support operating systems that are running in z/Architecture mode only. This support applies to operating systems that are running native on PR/SM and operating systems that are running as second-level guests.
IBM operating systems that run in ESA/390 mode are no longer in service or currently available only with extended service contracts, and they are not usable on systems beginning with IBM z14™. However, IBM z14™ does provide ESA/390-compatibility mode, which is an environment that supports a subset of DAT-off ESA/390 applications in a hybrid architectural mode.
Problem state application programs (24-bit and 32-bit) are unaffected by this change.
The use of certain features depends on the operating system. In all cases, program temporary fixes (PTFs) might be required with the operating system level that is indicated. Check the z/OS fix categories, or the subsets of the 3906DEVICE PSP buckets for z/VM and z/VSE. The fix categories and the PSP buckets are continuously updated, and contain the latest information about maintenance.
Hardware and software buckets contain installation information, hardware and software service levels, service guidelines, and cross-product dependencies.
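For example, installed z/OS service can be checked against a fix category with the SMP/E REPORT MISSINGFIX command. The following control statements are a minimal sketch; the target zone name TGTZOS is a placeholder for your own zone name:

   SET BOUNDARY(GLOBAL).
   REPORT MISSINGFIX ZONES(TGTZOS)
          FIXCAT(IBM.Device.Server.z14-3906.*).

The report lists any PTFs in the named fix categories that are not yet applied to the zone.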
For more information about Linux on Z distributions and the KVM hypervisor, see the distributor's support information.
7.2 Support by operating system
z14 servers introduce several new functions. This section describes the support of those functions by the current operating systems. Also included are some of the functions that were introduced in previous IBM Z servers and carried forward or enhanced in z14 servers. Features and functions that are available on previous servers but no longer supported by z14 servers were removed.
For more information about supported functions that are based on operating systems, see 7.3, “z14 features and function support overview” on page 248. Tables are built by function and feature classification to help you determine, by a quick scan, what is supported and the minimum operating system level that is required.
7.2.1 z/OS
z/OS Version 2 Release 2 is the earliest in-service release that supports z14 servers. Consider the following points:
Service support for z/OS Version 1 Release 13 ended in September 2016; however, a fee-based extension for defect support (for up to three years) can be obtained by ordering IBM Software Support Services - Service Extension for z/OS 1.13.
Service support for z/OS Version 2 Release 1 ended in September 2018; however, a fee-based extension for defect support (for up to three years) can be obtained by ordering IBM Software Support Services - Service Extension for z/OS 2.1.
z14 capabilities differ depending on the z/OS release. Toleration support is provided on z/OS V1R13 and V2R1. Exploitation support is provided on z/OS V2R2 and later only.
For more information about supported functions and their minimum required support levels, see 7.3, “z14 features and function support overview” on page 248.
7.2.2 z/VM
z/VM V6R4 and z/VM V7R1 provide support that enables guests to use the following features that are supported by z/VM on IBM z14™:
z/Architecture support
New hardware facilities
ESA/390-compatibility mode for guests
Crypto Clear Key ECC operations
RoCE Express2 support
Dynamic I/O support
Provided for managing the configuration of OSA-Express7S and OSA-Express6S OSD CHPIDs, FICON Express16S+ FC and FCP CHPIDs, zHyperLink Express, RoCE Express2 features, and Regional Crypto Enablement (RCE).
Improved memory management
For more information about supported functions and their minimum required support levels, see 7.3, “z14 features and function support overview” on page 248.
 
Statements of Direction (1): Consider the following points:
Future z/VM release guest support: z/VM V6.4 is the last release that is supported as a guest of z/VM V6.2 or older releases.
Disk-only support for z/VM dumps: z/VM V6.4 is the last z/VM release to support tape as a media option for stand-alone, hard abend, and snap dumps. Subsequent releases will support dumps to ECKD DASD or FCP SCSI disks only.

(1) All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party's sole risk and will not create liability or obligation for IBM.
7.2.3 z/VSE
z14 support is provided by z/VSE V5R2 and later, with the following considerations:
z/VSE runs in z/Architecture mode only.
z/VSE supports 64-bit real and virtual addressing.
For more information about supported functions and their minimum required support levels, see 7.3, “z14 features and function support overview” on page 248.
7.2.4 z/TPF
z14 support is provided by z/TPF V1R1 with PTFs. For more information about supported functions and their minimum required support levels, see 7.3, “z14 features and function support overview” on page 248.
7.2.5 Linux on IBM Z (Linux on Z)
Generally, a new machine is not apparent to Linux on Z. For z14, toleration support is required for the following functions and features:
IPL in “z/Architecture” mode
Crypto Express6S cards
RoCE Express cards
8-byte LPAR offset
The service levels of SUSE, Red Hat, and Ubuntu releases that are supported at the time of this writing are listed in Table 7-2.
Table 7-2 Linux on Z distributions

Linux on Z distribution (1) | Supported version and release on z14 (2)
SUSE Linux Enterprise Server | 12 SP2
SUSE Linux Enterprise Server | 11 SP4
Red Hat RHEL | 7.3
Red Hat RHEL | 6.8
Ubuntu | 16.04 LTS (or higher)

(1) Only z/Architecture (64-bit mode) is supported. IBM testing identifies the minimum required level and the recommended levels of the tested distributions.
(2) Fix installation is required for toleration.
For more information about supported Linux distributions on IBM Z servers, see the Tested platforms for Linux page of the IBM IT infrastructure website.
IBM is working with Linux distribution Business Partners to provide further use of selected z14 functions in future Linux on Z distribution releases.
Consider the following guidelines:
Use SUSE Linux Enterprise Server 12, Red Hat RHEL 7, or Ubuntu 16.04 LTS or newer in any new projects for z14 servers.
Update any Linux distribution to the latest service level before migrating to z14 servers.
Adjust the capacity of any z/VM and Linux on Z LPARs, and of any z/VM guests, in terms of the number of IFLs and CPs (real or virtual) according to the PU capacity of the z14 servers.
7.2.6 KVM hypervisor
KVM is offered through IBM Linux distribution partners for IBM Z and LinuxONE to help simplify delivery and installation: Linux and KVM are provided from a single source. Because KVM is included in the Linux distribution, ordering and installing KVM is easier.
The KVM hypervisor is supported with the following minimum Linux distributions:
SLES 12 SP2 with service.
RHEL 7.5 with kernel-alt package (kernel 4.14).
Ubuntu 16.04 LTS with service and Ubuntu 18.04 LTS with service.
For more information about minimal and recommended distribution levels, see the IBM Z website.
7.3 z14 features and function support overview
The following tables summarize the z14 features and functions and their minimum required operating system support levels:
Information about Linux on Z refers exclusively to the appropriate distributions of SUSE, Red Hat, and Ubuntu.
All tables use the following conventions:
Y : The function is supported.
N : The function is not supported.
- : The function is not applicable to that specific operating system.
 
Note: The following tables list, but do not explicitly mark, all of the features that require fixes in the corresponding operating system for toleration or exploitation. For more information, see the PSP bucket for 3906DEVICE.
7.3.1 Supported CPC functions
The supported base CPC functions for z/OS and z/VM are listed in Table 7-3.
Table 7-3 Supported base CPC functions for z/OS and z/VM

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4
z14 servers | Y | Y | Y | Y | Y | Y
Maximum processor units (PUs) per system image | 170 (2) | 170 (2) | 170 (2) | 100 | 64 (3) | 64 (3)
Maximum main storage size (4) | 4 TB | 4 TB | 4 TB | 1 TB | 2 TB | 2 TB
85 LPARs | Y | Y | Y | Y | Y | Y
Separate LPAR management of PUs | Y | Y | Y | Y | Y | Y
Dynamic PU add | Y | Y | Y | Y | Y | Y
Dynamic LPAR memory upgrade | Y | Y | Y | Y | Y | Y
LPAR group absolute capping | Y | Y | Y | Y | Y | Y
Capacity Provisioning Manager | Y | Y | Y | Y | N | N
Program-directed re-IPL | - | - | - | - | Y | Y
HiperDispatch | Y | Y | Y | Y | Y | Y
IBM Z Integrated Information Processors (zIIPs) | Y | Y | Y | Y | Y | Y
Transactional Execution | Y | Y | Y | Y | Y (5) | Y (5)
Java exploitation of Transactional Execution | Y | Y | Y | Y | Y (6) | Y (6)
Simultaneous multithreading (SMT) | Y | Y | Y | N | Y (7) | Y (7)
Single Instruction Multiple Data (SIMD) | Y | Y | Y | N | Y (8) | Y (8)
Hardware decimal floating point (9) | Y | Y | Y | Y | Y | Y
2 GB large page support | Y | Y | Y | Y | N | N
Large page (1 MB) support | Y | Y | Y | Y | Y (6) | Y (6)
Out-of-order execution | Y | Y | Y | Y | Y | Y
CPU measurement facility (CPUMF) for z14 | Y | Y | Y | N | Y | Y
Enhanced flexibility for Capacity on Demand (CoD) | Y | Y | Y | Y | Y | Y
IBM Virtual Flash Memory (VFM) | Y | Y | Y | N | N | N
1 MB pageable large pages (11) | Y | Y | Y | Y | N | N
Guarded Storage Facility (GSF) | Y | Y | N | N | Y (6) | Y (6)
Instruction Execution Protection (IEP) | Y | Y | N | N | Y (6) | Y (6)
Co-processor compression enhancements | Y | Y | Y | N | N | N

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) 170-way without multithreading; 128-way with multithreading enabled.
(3) 64-way without multithreading; 32-way with multithreading enabled.
(4) A total of 32 TB of real storage is supported per server.
(5) Guests are informed that the TX facility is available for use.
(6) Guest exploitation support.
(7) Dynamic SMT with z14.
(8) Guests are informed that SIMD is available for use.
(9) Packed decimal conversion support.
(10) A web deliverable is required, which is available from the z/OS downloads website.
(11) With IBM Virtual Flash Memory for middleware exploitation.
The supported base CPC functions for z/VSE, z/TPF and Linux on Z are listed in Table 7-4.
Table 7-4 Supported base CPC functions for z/VSE, z/TPF, and Linux on Z

Function (1) | z/VSE V6R2 | z/VSE V6R1 | z/VSE V5R2 | z/TPF V1R1 | Linux on Z (2)
z14 servers | Y | Y | Y | Y | Y
Maximum processor units (PUs) per system image | 10 | 10 | 10 | 86 | 170 (3)
Maximum main storage size (4) | 32 GB | 32 GB | 32 GB | 4 TB | 16 TB (5)
85 LPARs | Y | Y | Y | Y | Y
Separate LPAR management of PUs | Y | Y | Y | Y | Y
Dynamic PU add | Y | Y | Y | N | Y
Dynamic LPAR memory upgrade | N | N | N | N | Y
LPAR group absolute capping | Y | Y | Y | N | N
Capacity Provisioning Manager | - | - | - | - | -
Program-directed re-IPL | Y (6) | Y (6) | Y (6) | N | Y
HiperDispatch | N | N | N | N (7) | Y
IBM Z Integrated Information Processors (zIIPs) | N | N | N | N | N
Transactional Execution | N | N | N | N | Y
Java exploitation of Transactional Execution | N | N | N | N | Y
Simultaneous multithreading (SMT) | N | N | N | N | Y
Single Instruction Multiple Data (SIMD) | Y | N | N | N | Y
Hardware decimal floating point (8) | N | N | N | N | Y
2 GB large page support | N | N | N | Y | Y
Large page (1 MB) support | Y (9) | Y (9) | Y (9) | Y | Y
Out-of-order execution | Y | Y | Y | Y | Y
CPU measurement facility (CPUMF) for z14 | N | N | N | Y | N (10)
Enhanced flexibility for CoD | N | N | N | N (7) | N
IBM Virtual Flash Memory (VFM) | N | N | N | N | Y
1 MB pageable large pages (11) | N | N | N | N | N
Guarded Storage Facility (GSF) | N | N | N | N | Y
Instruction Execution Protection (IEP) | N | N | N | N | Y
Co-processor compression enhancements | N | N | N | N | N

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) Support statement varies based on Linux on Z distribution and release.
(3) For SLES 12/RHEL 7/Ubuntu 16.10, Linux supports 256 cores without SMT and 128 cores with SMT (256 threads).
(4) A total of 32 TB of real storage is supported per server.
(5) Linux on Z releases can support up to 64 TB of memory.
(6) On SCSI disks.
(7) Availability expected in fourth quarter of 2017.
(8) Packed decimal conversion support.
(9) Supported for data spaces.
(10) IBM is working with its Linux distribution Business Partners to provide this feature.
(11) With IBM Virtual Flash Memory for middleware exploitation.
7.3.2 Coupling and clustering
The supported coupling and clustering functions for z/OS and z/VM are listed in Table 7-5.
Table 7-5 Supported coupling and clustering functions for z/OS and z/VM

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4
Server Time Protocol (STP) | Y | Y | Y | Y | Y | Y
CFCC Level 23 (2) | Y | Y | Y (3) | Y (3) | Y (5) | Y (5)
CFCC Level 22 (4) | Y | Y | Y | Y | Y (5) | Y (5)
CFCC Level 22 Coupling Thin Interrupts | Y | Y | Y | Y | N | N
CFCC Level 22 Large Memory support | Y | Y | Y | Y | N | N
CFCC Level 22 support for 256 coupling CHPIDs per CPC | Y | Y | Y | Y | Y (5) | Y (5)
CFCC Level 22 Coupling Facility Processor Scalability | Y | Y | Y | N | Y (5) | Y (5)
CFCC Level 22 List Notification Enhancements | Y | Y | N | N | Y (5) | Y (5)
CFCC Level 22 Encryption Support | Y | N (6) | N (6) | N | Y (5) | Y (5)
CFCC Level 22 exploitation of VFM (Virtual Flash Memory) | Y | Y | Y | Y | N | N
RMF coupling channel reporting | Y | Y | Y | Y | N | N
Coupling over InfiniBand, CHPID type CIB | Y | Y | Y | Y | N | N
InfiniBand coupling links 12x at a distance of 150 m (492 ft.) | Y | Y | Y | Y | N | N
InfiniBand coupling links 1x at an unrepeated distance of 10 km (6.2 miles) | Y | Y | Y | Y | N | N
Integrated Coupling Adapter (ICA SR) links, CHPID type CS5 | Y | Y | Y | Y | Y (7) | Y (7)
Coupling Express LR (CE LR), CHPID type CL5 | Y | Y | Y | N | Y (7) | Y (7)
z/VM Dynamic I/O support for InfiniBand CHPIDs | - | - | - | - | Y (7) | Y (7)
z/VM Dynamic I/O support for ICA CHPIDs | - | - | - | - | Y (7) | Y (7)
Asynchronous CF Duplexing for lock structures | Y | Y | N | N | Y (5) | Y (5)
Asynchronous cross-invalidate (XI) for CF cache structures (8) | Y (9) | Y (9) | Y (10) | N | Y (5) | Y (5)
Dynamic I/O activation for stand-alone CF CPCs | Y (11) | Y (11) | Y (11) | N | Y (11) | Y (11)

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) CFCC Level 23 with Driver 36.
(3) Not all functions are supported with z/OS V2R1 or earlier.
(4) CFCC Level 22 with Driver 32.
(5) Virtual guest coupling.
(6) Toleration support ("locking out" down-level systems that cannot use encrypted structures) is provided for z/OS V2R2 and z/OS V2R1.
(7) To define, modify, and delete CHPID type CS5 when z/VM is the controlling LPAR for dynamic I/O.
(8) Requires data manager support (Db2 fixes).
(9) Requires fixes for APAR OA54688 for exploitation.
(10) Toleration support only; requires fixes for APAR OA54985. Functional support is provided in z/OS V2R2 and later.
(11) Requires HMC 2.14.1 with Driver level 36, and various OS fixes (HCD, HCM, IOS, IOCP).
In addition to the operating system support that is listed in Table 7-5 on page 251, Server Time Protocol is also supported on z/TPF V1R1 and Linux on Z, and CFCC Level 22 and Level 23 are supported for z/TPF V1R1.
Storage connectivity
The supported storage connectivity functions for z/OS and z/VM are listed in Table 7-6.
Table 7-6 Supported storage connectivity functions for z/OS and z/VM

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4
zHyperLink Express | Y | Y | Y | N | N | N
The 63.75-K subchannels | Y | Y | Y | Y | Y | Y
Six logical channel subsystems (LCSSs) | Y | Y | Y | Y | Y | Y
Four subchannel sets per LCSS | Y | Y | Y | Y | Y (2) | Y (2)
Health Check for FICON Dynamic Routing | Y | Y | Y | Y | N | N
z/VM Dynamic I/O support for FICON Express16S+ FC and FCP CHPIDs | - | - | - | - | Y | Y
CHPID (Channel-Path Identifier) type FC:
Extended distance FICON (3) | Y | Y | Y | Y | Y | Y
FICON Express16S+ for support of zHPF (z Systems High Performance FICON) | Y | Y | Y | Y | Y (4) | Y (4)
FICON Express16S for support of zHPF | Y | Y | Y | Y | Y (4) | Y (4)
FICON Express8S for support of zHPF | Y | Y | Y | Y | Y (4) | Y (4)
MIDAW (Modified Indirect Data Address Word) | Y | Y | Y | Y | Y (4) | Y (4)
zDAC (z/OS Discovery and Auto-Configuration) | Y | Y | Y | Y | N | N
FICON Express16S+ when using FICON or CTC (channel-to-channel) | Y | Y | Y | Y | Y (5) | Y (5)
FICON Express16S when using FICON or CTC | Y | Y | Y | Y | Y (5) | Y (5)
FICON Express8S when using FICON or CTC | Y | Y | Y | Y | Y (5) | Y (5)
Global resource serialization (GRS) FICON CTC toleration | Y | Y | Y | Y | N | N
IPL from an alternative subchannel set | Y | Y | Y | Y | N | N
32 K subchannels for the FICON Express16S+ | Y | Y | Y | Y | Y | Y
32 K subchannels for the FICON Express16S | Y | Y | Y | Y | Y | Y
Request node identification data | Y | Y | Y | Y | N | N
FICON link incident reporting | Y | Y | Y | Y | N | Y
CHPID (Channel-Path Identifier) type FCP:
FICON Express16S+ for support of SCSI devices | - | - | - | - | Y | Y
FICON Express16S for support of SCSI devices | - | - | - | - | Y | Y
FICON Express8S for support of SCSI devices | - | - | - | - | Y | Y
FICON Express16S+ support of hardware data router | - | - | - | - | Y (4) | Y (4)
FICON Express16S support of hardware data router | - | - | - | - | Y (4) | Y (4)
FICON Express8S support of hardware data router | - | - | - | - | Y (4) | Y (4)
FICON Express16S+ T10-DIF support | - | - | - | - | Y (4) | Y (4)
FICON Express16S T10-DIF support | - | - | - | - | Y (4) | Y (4)
FICON Express8S T10-DIF support | - | - | - | - | Y (4) | Y (4)
Increased performance for the FCP protocol | - | - | - | - | Y | Y
N_Port ID Virtualization (NPIV) | - | - | - | - | Y | Y
Worldwide port name tool | - | - | - | - | Y | Y

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) For specific Geographically Dispersed Parallel Sysplex (GDPS) usage only.
(3) Transparent to operating systems.
(4) For guest use.
(5) The CTC channel type is not supported when the CPC is managed in DPM mode.
The supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-7.
Table 7-7 Supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z

Function (1) | z/VSE V6R2 | z/VSE V6R1 | z/VSE V5R2 | z/TPF V1R1 | Linux on Z (2)
zHyperLink Express | - | - | - | - | -
The 63.75-K subchannels | N | N | N | N | Y
Six logical channel subsystems (LCSSs) | Y | Y | Y | N | Y
Four subchannel sets per LCSS | Y | Y | Y | N | Y
Health Check for FICON Dynamic Routing | N | N | N | N | N
z/VM Dynamic I/O support for FICON Express16S+ FC and FCP CHPIDs | - | - | - | - | -
CHPID (Channel-Path Identifier) type FC:
Extended distance FICON (3) | Y | Y | Y | Y | Y
FICON Express16S+ for support of zHPF (IBM Z High Performance FICON) (4) | Y | N | N | Y | Y
FICON Express16S for support of zHPF | Y | N | N | Y | Y
FICON Express8S for support of zHPF | Y | N | N | Y | Y
MIDAW (Modified Indirect Data Address Word) | N | N | N | N | N
zDAC (z/OS Discovery and Auto-Configuration) | - | - | - | - | -
FICON Express16S+ when using FICON or CTC (channel-to-channel) | Y | Y | Y | Y | Y (5)
FICON Express16S when using FICON or CTC | Y | Y | Y | Y | Y (5)
FICON Express8S when using FICON or CTC | Y | Y | Y | Y | Y (5)
Global resource serialization (GRS) FICON CTC toleration | - | - | - | - | -
IPL from an alternative subchannel set | N | N | N | N | N
32 K subchannels for the FICON Express16S+ | N | N | N | N | Y
32 K subchannels for the FICON Express16S | N | N | N | N | Y
Request node identification data | N | N | N | N | N
FICON link incident reporting | N | N | N | N | N
CHPID (Channel-Path Identifier) type FCP:
FICON Express16S+ for support of SCSI devices | Y | Y | Y | - | Y
FICON Express16S for support of SCSI devices | Y | Y | Y | - | Y
FICON Express8S for support of SCSI devices | Y | Y | Y | - | Y
FICON Express16S+ support of hardware data router | N | N | N | N | Y
FICON Express16S support of hardware data router | N | N | N | N | Y
FICON Express8S support of hardware data router | N | N | N | N | Y
FICON Express16S+ T10-DIF support | N | N | N | N | Y
FICON Express16S T10-DIF support | N | N | N | N | Y
FICON Express8S T10-DIF support | N | N | N | N | Y
Increased performance for the FCP protocol | Y | Y | Y | - | Y
N_Port ID Virtualization (NPIV) | Y | Y | Y | N | Y
Worldwide port name tool | - | - | - | - | Y

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) Support statement varies based on Linux on Z distribution and release.
(3) Transparent to operating systems.
(4) Will be supported on z/VSE V6.2 with PTFs.
(5) The CTC channel type is not supported when the CPC is managed in DPM mode.
7.3.3 Network connectivity
The supported network connectivity functions for z/OS and z/VM are listed in Table 7-8.
Table 7-8 Supported network connectivity functions for z/OS and z/VM

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4
Checksum offload for IPv6 packets | Y | Y | Y | Y | Y (2) | Y (2)
Checksum offload for LPAR-to-LPAR traffic with IPv4 and IPv6 | Y | Y | Y | Y | Y (2) | Y (2)
Querying and displaying an OSA configuration | Y | Y | Y | Y | Y | Y
QDIO data connection isolation for z/VM | - | - | - | - | Y | Y
QDIO interface isolation for z/OS | Y | Y | Y | Y | - | -
QDIO OLM (Optimized Latency Mode) | Y | Y | Y | Y | - | -
Adapter interruptions for QDIO | N | N | N | N | Y | Y
QDIO Diagnostic Synchronization | Y | Y | Y | Y | N | N
IWQ (Inbound Workload Queuing) for OSA | Y | Y | Y | Y | Y (2) | Y (2)
VLAN management enhancements | Y | Y | Y | Y | Y (3) | Y (3)
GARP VLAN Registration Protocol | Y | Y | Y | Y | Y | Y
Link aggregation support for z/VM | - | - | - | - | Y | Y
Multi-vSwitch Link Aggregation | - | - | - | - | Y | Y
Large send for IPv6 packets | Y | Y | Y | Y | Y (2) | Y (2)
z/VM Dynamic I/O support for OSA-Express6S OSD CHPIDs | - | - | - | - | Y | Y
z/VM Dynamic I/O support for OSA-Express7S OSD CHPIDs | - | - | - | - | Y | Y
OSA Dynamic LAN idle | Y | Y | Y | Y | N | N
OSA Layer 3 virtual MAC for z/OS environments | Y | Y | Y | Y | - | -
Network Traffic Analyzer | Y | Y | Y | Y | N | N
HiperSockets:
HiperSockets (4) | Y | Y | Y | Y | Y | Y
32 HiperSockets | Y | Y | Y | Y | Y | Y
HiperSockets Completion Queue | Y | Y | Y | Y | Y | Y
HiperSockets Virtual Switch Bridge | - | - | - | - | Y | Y
HiperSockets Multiple Write Facility | Y | Y | Y | Y | N | N
HiperSockets support of IPv6 | Y | Y | Y | Y | Y | Y
HiperSockets Layer 2 support | Y | Y | Y | Y | Y | Y
HiperSockets Network Traffic Analyzer for Linux on Z | - | - | - | - | - | -
SMC-D and SMC-R:
SMC-D (5) over ISM (Internal Shared Memory) | Y | Y | N | N | Y (2) | Y (2)
10GbE RoCE (6) Express | Y | Y | Y | Y | Y (2) | Y (2)
25GbE and 10GbE RoCE Express2 for SMC-R | Y | Y | Y | N | Y (2) | Y (2)
25GbE and 10GbE RoCE Express2 for Ethernet communications (7), including Single Root I/O Virtualization (SR-IOV) | N | N | N | N | Y (2) | Y (2)
z/VM Dynamic I/O support for RoCE Express2 | - | - | - | - | Y | Y
Shared RoCE environment | Y | Y | Y | N | Y | Y
Open Systems Adapter (OSA) (8):
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | Y | Y
OSA-Express7S (9) 25-Gigabit Ethernet Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y (10) | Y (10)
OSA-Express6S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express6S Gigabit Ethernet LX and SX, CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S Gigabit Ethernet LX and SX, CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | Y | Y | Y

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) For guest use or exploitation.
(3) Support of guests is transparent to z/VM if the device is directly connected to the guest (pass through).
(4) On z14, the CHPID statement of HiperSockets devices requires the keyword VCHID. Therefore, the z14 IOCP definitions must be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number that is associated with the channel path (valid range: 7E0 - 7FF). VCHID is not valid on z Systems servers before z13.
(5) Shared Memory Communications - Direct Memory Access.
(6) Remote Direct Memory Access (RDMA) over Converged Ethernet.
(7) Does not require a peer OSA.
(8) Supported CHPID types: OSC, OSD, OSE, and OSM.
(9) Requires PTFs for APARs OA55256 (IBM VTAM®) and PI95703 (TCP/IP).
(10) Requires the PTF for APAR PI99085.
The supported network connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-9.
Table 7-9 Supported network connectivity functions for z/VSE, z/TPF, and Linux on Z

Function (1) | z/VSE V6R2 | z/VSE V6R1 | z/VSE V5R2 | z/TPF V1R1 | Linux on Z (2)
Checksum offload for IPv6 packets | N | N | N | N | Y
Checksum offload for LPAR-to-LPAR traffic with IPv4 and IPv6 | N | N | N | N | Y
Querying and displaying an OSA configuration | - | - | - | - | -
QDIO data connection isolation for z/VM | - | - | - | - | -
QDIO interface isolation for z/OS | - | - | - | - | -
QDIO OLM (Optimized Latency Mode) | - | - | - | - | -
Adapter interruptions for QDIO | Y | Y | Y | N | Y
QDIO Diagnostic Synchronization | N | N | N | N | N
IWQ (Inbound Workload Queuing) for OSA | N | N | N | N | N
VLAN management enhancements | N | N | N | N | N
GARP VLAN Registration Protocol | N | N | N | N | Y (3)
Link aggregation support for z/VM | N | N | N | N | N
Multi-vSwitch Link Aggregation | N | N | N | N | N
Large send for IPv6 packets | N | N | N | N | Y
z/VM Dynamic I/O support for OSA-Express6S and OSA-Express7S OSD CHPIDs | N | N | N | N | N
OSA Dynamic LAN idle | N | N | N | N | N
OSA Layer 3 virtual MAC for z/OS environments | - | - | - | - | -
Network Traffic Analyzer | - | - | - | - | -
HiperSockets:
HiperSockets (4) | Y | Y | Y | N | Y
32 HiperSockets | Y | Y | Y | N | Y
HiperSockets Completion Queue | Y | Y | Y | N | Y
HiperSockets Virtual Switch Bridge | - | - | - | - | Y (5)
HiperSockets Multiple Write Facility | N | N | N | N | N
HiperSockets support of IPv6 | Y | Y | Y | N | Y
HiperSockets Layer 2 support | Y | Y | Y | N | Y
HiperSockets Network Traffic Analyzer for Linux on Z | N | N | N | N | Y
SMC-D and SMC-R:
SMC-D (6) over ISM (Internal Shared Memory) | N | N | N | N | N
10GbE RoCE (7) Express | N | N | N | N | Y (8)
25GbE and 10GbE RoCE Express2 for SMC-R | N | N | N | N | N (9)
25GbE and 10GbE RoCE Express2 for Ethernet communications (10), including Single Root I/O Virtualization (SR-IOV) | N | N | N | N | Y (8)
z/VM Dynamic I/O support for RoCE Express2 | - | - | - | - | -
Shared RoCE environment | N | N | N | N | Y
Open Systems Adapter (OSA) (11):
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | -
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | -
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | -
OSA-Express7S 25-Gigabit Ethernet Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express6S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express5S 10-Gigabit Ethernet Long Reach (LR) and Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express6S Gigabit Ethernet LX and SX, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express5S Gigabit Ethernet LX and SX, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express6S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | N | N
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | N | N
OSA-Express4S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | N | N

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) Support statement varies based on Linux on Z distribution and release.
(3) By using VLANs.
(4) On z14, the CHPID statement of HiperSockets devices requires the keyword VCHID. Therefore, the z14 IOCP definitions must be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number that is associated with the channel path (valid range: 7E0 - 7FF). VCHID is not valid on z Systems servers before z13.
(5) Applicable to guest operating systems.
(6) Shared Memory Communications - Direct Memory Access.
(7) Remote Direct Memory Access (RDMA) over Converged Ethernet.
(8) Linux can exploit RoCE Express as a standard NIC (network interface card) for Ethernet.
(9) SLES 12 SP3 includes support for Linux-to-Linux communication as a "Tech Preview."
(10) Does not require a peer OSA.
(11) Supported CHPID types: OSC, OSD, OSE, and OSM.
7.3.4 Cryptographic functions
The supported cryptography functions for z/OS and z/VM are listed in Table 7-10.
Table 7-10 Supported cryptography functions for z/OS and z/VM

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4
CP Assist for Cryptographic Function (CPACF) | Y | Y | Y | Y | Y (2) | Y (2)
CPACF greater than 16 domain support | Y | Y | Y | Y | Y (2) | Y (2)
CPACF AES-128, AES-192, and AES-256 | Y | Y | Y | Y | Y (2) | Y (2)
CPACF SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y | Y | Y | Y (2) | Y (2)
CPACF protected key | Y | Y | Y | Y | Y (2) | Y (2)
Crypto Express6S | Y | Y (3) | Y (3) | Y (3) | Y (2) | Y (2)
Crypto Express6S support for Visa Format Preserving Encryption | Y | Y (3) | Y (3) | Y (3) | Y (2) | Y (2)
Crypto Express6S support for coprocessor in PCI-HSM compliance mode (4) | Y | Y (3) | Y (3) | N | Y (2) | Y (2)
Crypto Express6S supporting up to 85 domains | Y | Y (3) | Y (3) | Y | Y (2) | Y (2)
Crypto Express5S | Y | Y | Y | Y | Y (2) | Y (2)
Crypto Express5S supporting up to 85 domains | Y | Y | Y | Y | Y (2) | Y (2)
Elliptic Curve Cryptography (ECC) | Y | Y | Y | Y | Y (2) | Y (2)
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | Y | Y | Y | Y | Y (2) | Y (2)
RCE (Regional Crypto Enablement) | Y | Y | N | N | Y (2) | Y (2)
z/VM Dynamic I/O support for RCE | - | - | - | - | Y | Y
z/OS data set encryption | Y | Y | N | N | - | -
z/VM encrypted paging support | - | - | - | - | Y | Y
RMF support for Crypto Express6 | Y | Y | Y | N | - | -
z/OS encryption readiness technology (zERT) | Y | Y | Y | N | - | -
z/TPF transparent database encryption | - | - | - | - | - | -

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) For guest use or exploitation.
(3) A web deliverable is required. For more information and to download the deliverable, see the z/OS downloads page of the IBM IT infrastructure website.
(4) Requires TKE 9.0 or newer.
The supported cryptography functions for z/VSE, z/TPF, and Linux on Z are listed in Table 7-11.
Table 7-11 Supported cryptography functions for z/VSE, z/TPF, and Linux on Z

Function (1) | z/VSE V6R2 | z/VSE V6R1 | z/VSE V5R2 | z/TPF V1R1 | Linux on Z (2)
CP Assist for Cryptographic Function (CPACF) | Y | Y | Y | Y | Y
CPACF greater than 16 domain support | Y | Y | Y | N | Y
CPACF AES-128, AES-192, and AES-256 | Y | Y | Y | Y (3) | Y
CPACF SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y | Y | Y (4) | Y
CPACF protected key | N | N | N | N | N
Crypto Express6S | Y | Y | Y | Y | Y
Crypto Express6S support for Visa Format Preserving Encryption | N | N | N | N | N
Crypto Express6S support for coprocessor in PCI-HSM compliance mode (5) | N | N | N | N | N
Crypto Express6S supporting up to 85 domains | Y | Y | Y | N | Y
Crypto Express5S | Y | Y | Y | Y | Y
Crypto Express5S supporting up to 85 domains | Y | Y | Y | N | Y
Elliptic Curve Cryptography (ECC) | Y | N | N | N | Y
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | N | N | N | N | Y
RCE (Regional Crypto Enablement) | N | N | N | N | N
z/VM Dynamic I/O support for RCE | N | N | N | - | N
z/OS data set encryption | - | - | - | - | -
z/VM encrypted paging support | N | N | N | - | N
RMF support for Crypto Express6 | - | - | - | - | -
z/OS encryption readiness technology (zERT) | - | - | - | - | -
z/TPF transparent database encryption | - | - | - | Y | -

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) Support statement varies based on Linux on Z distribution and release.
(3) z/TPF supports only AES-128 and AES-256.
(4) z/TPF supports only SHA-1 and SHA-256.
(5) Requires TKE 9.0 or newer.
7.3.5 Special purpose features
The supported special-purpose feature for z/OS, z/VM, and Linux on Z is listed in Table 7-12.
Table 7-12 Supported special-purpose feature for z/OS, z/VM, and Linux on Z

Function (1) | z/OS V2R3 | z/OS V2R2 | z/OS V2R1 | z/OS V1R13 | z/VM V7R1 | z/VM V6R4 | Linux on Z
zEDC (2) Express | Y | Y | Y | N | Y (3) | Y (3) | Y (4)

(1) PTFs might be required for toleration support or exploitation of z14 features and functions.
(2) zEnterprise Data Compression.
(3) For guest exploitation.
7.4 Support by features and functions
This section addresses operating system support by function. Only the currently in-support releases are covered.
Tables in this section use the following conventions:
N/A : Not applicable
NA : Not available
7.4.1 LPAR Configuration and Management
A single system image can control several processor units, such as CPs, zIIPs, or IFLs.
Maximum number of PUs per system image
The maximum number of PUs that is supported by each operating system image and by special-purpose LPARs is listed in Table 7-13.
Table 7-13 Maximum number of PUs per system image

Operating system | Maximum number of PUs per system image
z/OS V2R3 | 256 (1)(2)
z/OS V2R2 | 256 (1)(2)
z/OS V2R1 | 256 (1)(2)
z/OS V1R13 | 100 (2)
z/VM V7R1 | 64 (3)
z/VM V6R4 | 64 (3)
z/VSE V5R2 and later | The z/VSE Turbo Dispatcher can use up to 4 CPs, and tolerates up to 10-way LPARs
z/TPF V1R1 | 86 CPs
CFCC Level 22 and 23 | 16 CPs or ICFs; CPs and ICFs cannot be mixed
Linux on Z | SUSE Linux Enterprise Server 12: 256 CPs or IFLs; SUSE Linux Enterprise Server 11: 64 CPs or IFLs; Red Hat RHEL 7: 256 CPs or IFLs; Red Hat RHEL 6: 64 CPs or IFLs; Ubuntu 16.04 LTS and 18.04 LTS: 256 CPs or IFLs
KVM hypervisor | 256 CPs or IFLs. The KVM hypervisor is offered with the following Linux distributions: SLES 12 SP2; RHEL 7.5 with kernel-alt package (kernel 4.14); Ubuntu 16.04 LTS and Ubuntu 18.04 LTS
Secure Service Container | 80
GDPS Virtual Appliance | 80

(1) z14 M0x LPARs support 170-way without multithreading; 128-way with multithreading (SMT).
(2) Total characterizable PUs, including zIIPs and CPs.
(3) 64-way without multithreading; 32-way with multithreading enabled.
Maximum main storage size
The maximum amount of main storage that is supported by current operating systems is listed in Table 7-14. A maximum of 16 TB of main storage can be defined for an LPAR on a z14 server.
Table 7-14 Maximum memory that is supported by the operating system

Operating system (1) | Maximum supported main storage (2)
z/OS | z/OS V2R1 and later support 4 TB
z/VM | z/VM V6R4 and V7R1 support 2 TB
z/VSE | z/VSE V5R2 and later support 32 GB
z/TPF | z/TPF supports 4 TB
CFCC | Level 22 and 23 support up to 3 TB
Secure Service Container | Supports up to 3 TB
Linux on Z (64-bit) | SUSE Linux Enterprise Server 12 supports 10 TB; SUSE Linux Enterprise Server 11 supports 4 TB; Red Hat RHEL 7 supports 10 TB; Red Hat RHEL 6 supports 4 TB; Ubuntu 16.04 LTS and 18.04 LTS support 10 TB

(1) An LPAR on z14 supports up to 16 TB of memory.
(2) z14 servers support 32 TB of user-configurable memory per server.
Up to 85 LPARs
This feature was first made available on z13 servers and allows the system to be configured with up to 85 LPARs. Because channel subsystems can be shared by up to 15 LPARs, it is necessary to configure six channel subsystems to reach the 85 LPARs limit.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
 
Remember: A virtual appliance that is deployed in a Secure Service Container runs in a dedicated LPAR. When activated, it reduces the maximum number of available LPARs by one.
Separate LPAR management of PUs
z14 servers use separate PU pools for each optional PU type. The separate management of PU types enhances and simplifies capacity planning and management of the configured LPARs and their associated processor resources.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Dynamic PU add
Planning an LPAR configuration includes defining reserved PUs that can be brought online when extra capacity is needed. Operating system support is required to use this capability without an IPL; that is, nondisruptively. This support has been available in z/OS for some time.
The dynamic PU add function enhances this support by allowing you to dynamically define and change the number and type of reserved PUs in an LPAR profile, which removes any planning requirements. The new resources are immediately made available to the operating system and, in the case of z/VM, to its guests.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Dynamic LPAR memory upgrade
An LPAR can be defined with an initial and a reserved amount of memory. At activation time, the initial amount is made available to the partition and the reserved amount can be added later, partially or totally. Although these two memory zones do not have to be contiguous in real memory, they appear as logically contiguous to the operating system that runs in the LPAR.
z/OS can take advantage of this support and nondisruptively acquire and release memory from the reserved area. z/VM V6R2 and later can acquire memory nondisruptively and immediately make it available to guests. z/VM virtualizes this support to its guests, which now also can increase their memory nondisruptively if supported by the guest operating system. Releasing memory from z/VM is not supported. Releasing memory from the z/VM guest depends on the guest’s operating system support.
Linux on Z also supports acquiring and releasing memory nondisruptively. This feature is enabled for SUSE Linux Enterprise Server 11 and RHEL 6 and later releases.
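As an illustration, standby memory that is defined through the reserved area of an LPAR can be inspected and brought online from a running Linux on Z instance with the lsmem and chmem commands. The size values in this sketch are placeholders:

   # lsmem                # list memory blocks and their online/offline state
   # chmem --enable 2g    # bring 2 GB of standby memory online
   # chmem --disable 1g   # take 1 GB of memory offline again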
LPAR group absolute capping
On z13 servers, PR/SM is enhanced to support an option to limit the amount of physical processor capacity that is used by an individual LPAR when a PU that is defined as a CP or an IFL is shared across a set of LPARs. This enhancement is designed to provide a physical capacity limit that is enforced as an absolute (versus a relative) limit. It is not affected by changes to the logical or physical configuration of the system. This physical capacity limit can be specified in units of CPs or IFLs.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Capacity Provisioning Manager
The provisioning architecture enables clients to better control the configuration and activation of the On/Off CoD. For more information, see 8.8, “Nondisruptive upgrades” on page 355. The new process is inherently more flexible and can be automated. This capability can result in easier, faster, and more reliable management of the processing capacity.
The Capacity Provisioning Manager, which is a feature that is first available with z/OS V1R9, interfaces with z/OS Workload Manager (WLM) and implements capacity provisioning policies. Several implementation options are available, from an analysis mode that issues only guidelines, to an autonomic mode that provides fully automated operations.
Replacing manual monitoring with autonomic management or supporting manual operation with guidelines can help ensure that sufficient processing power is available with the least possible delay. The supported operating systems are listed in Table 7-3 on page 248.
Program-directed re-IPL
First available on System z9, program-directed re-IPL allows an operating system on a z14 to IPL again without operator intervention. This function is supported for SCSI and IBM extended count key data (IBM ECKD) devices.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
IOCP
All IBM Z servers require a description of their I/O configuration. This description is stored in I/O configuration data set (IOCDS) files. The I/O configuration program (IOCP) allows for the creation of the IOCDS file from a source file that is known as the I/O configuration source (IOCS).
The IOCS file contains definitions of LPARs and channel subsystems. It also includes detailed information for each channel and path assignment, each control unit, and each device in the configuration.
IOCP for z14 provides support for the following features:
z14 Base machine definition
New PCI function adapter for zHyperLink (HYL)
New PCI function adapter for RoCE Express2 (CX4)
New IOCP keyword MIXTYPE, which is required for prior FICON cards
New hardware (announced with Driver 36)
IOCP support for Dynamic I/O for standalone CF (Driver 36)
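The following IOCS fragment is a minimal sketch of two such definitions: a HiperSockets CHPID, which on z14 requires the VCHID keyword (see the HiperSockets footnotes in 7.3.3), and a FICON Express16S+ channel. The CHPID numbers, PCHID, and CSS values are placeholders; see the IOCP User's Guide for the complete syntax:

   *  HiperSockets (IQD) on z14 requires a virtual channel ID (VCHID)
         CHPID PATH=(CSS(0,1),F4),VCHID=7E1,TYPE=IQD
   *  FICON Express16S+ channel defined as CHPID type FC
         CHPID PATH=(CSS(0),50),SHARED,PCHID=120,TYPE=FC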
 
IOCP required level for z14 servers: The required level of IOCP for the z14 is IOCP 5.4.0 with PTFs. For more information, see the following publications:
IBM Z Stand-Alone Input/Output Configuration Program User's Guide, SB10-7166
IBM Z Input/Output Configuration Program User’s Guide for ICP IOCP, SB10-7163
 
Dynamic Partition Manager V3.2: At the time of this writing, the Dynamic Partition Manager V3.2 is available for managing IBM Z servers that are running Linux. DPM 3.2 is available with HMC Driver Level 36. IOCP does not need to configure a server that is running in DPM mode. For more information, see IBM Dynamic Partition Manager (DPM) Guide, SB10-7170-02.
7.4.2 Base CPC features and functions
In this section, we describe the base CPC features and functions.
HiperDispatch
The HIPERDISPATCH=YES/NO parameter in the IEAOPTxx member of SYS1.PARMLIB and on the SET OPT=xx command controls whether HiperDispatch is enabled or disabled for a z/OS image. It can be changed dynamically, without an IPL or any outage.
The default is that HiperDispatch is disabled on all releases, from z/OS V1R10 (which requires PTFs for zIIP support) through z/OS V1R12.
Beginning with z/OS V1R13, the IEAOPTxx keyword HIPERDISPATCH defaults to YES when it is running on a z14, z13, z13s, zEC12, or zBC12 server. If HIPERDISPATCH=NO is specified, the specification is accepted as it was on previous z/OS releases.
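For illustration, a minimal IEAOPTxx fragment and the command that activates it follow; the member suffix 01 is a placeholder:

   IEAOPT01:  HIPERDISPATCH=YES
   Console:   SET OPT=01

Because the parameter is read when the member is activated, the change takes effect without an IPL.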
The use of SMT on z14 servers requires that HiperDispatch is enabled on the operating system. For more information, see “Simultaneous multithreading” on page 268.
Additionally, with z/OS V1R12 or later, any LPAR that is running with more than 64 logical processors is required to operate in HiperDispatch Management Mode.
The following rules control this environment:
If an LPAR is defined at IPL with more than 64 logical processors, the LPAR automatically operates in HiperDispatch Management Mode, regardless of the HIPERDISPATCH= specification.
If logical processors are added to an LPAR that has 64 or fewer logical processors and the extra logical processors raise the number of logical processors to more than 64, the LPAR automatically operates in HiperDispatch Management Mode, regardless of the HIPERDISPATCH=YES/NO specification. That is, even if the LPAR has the HIPERDISPATCH=NO specification, that LPAR is converted to operate in HiperDispatch Management Mode.
An LPAR with more than 64 logical processors that are running in HiperDispatch Management Mode cannot be reverted to run in non-HiperDispatch Management Mode.
HiperDispatch on z14 servers uses the new chip and CPC drawer configuration to improve cache access performance. Beginning with z/OS V2R1, HiperDispatch was changed to use the new node cache structure of z14 servers. The base support is provided by PTFs that are identified by IBM.device.server.z14-3906.requiredservice.
PR/SM in the System z9 EC to zEC12 servers stripes the memory across all books in the system to take advantage of the fast book interconnection and to spread memory controller work. PR/SM on z14 servers instead seeks to assign all memory in one CPC drawer, striped across the clusters of that drawer, to take advantage of the lower latency memory access within a drawer.
PR/SM in the System z9 EC to zEC12 servers attempts to assign all logical processors to one book, packed into the PU chips of that book, in cooperation with operating system HiperDispatch, to optimize shared cache usage.
PR/SM on z14 servers seeks to assign all logical processors of a partition to one CPC drawer, packed into the PU chips of that CPC drawer, in cooperation with operating system HiperDispatch, to optimize shared cache usage.
PR/SM automatically keeps a partition's memory and logical processors on the same CPC drawer. This arrangement looks simple for a single partition, but it is a complex optimization across multiple logical partitions because some of them must be split among CPC drawers.
To use HiperDispatch effectively, WLM goal adjustment might be required. Review the WLM policies and goals and update them as necessary. WLM policies can be changed without turning off HiperDispatch. A health check is provided to verify whether HiperDispatch is enabled on a system image that is running on z14 servers.
z/VM V7R1 and V6R4
z/VM also uses the HiperDispatch facility for improved processor efficiency through better use of the processor cache, taking advantage of the cache-rich processor, node, and drawer design of the z14 system. The supported processor limit was increased to 64 logical processors without SMT; with SMT, the limit remains at 32 cores, which supports up to 64 threads running simultaneously.
CPU polarization support in Linux on Z
You can optimize the operation of a vertical SMP environment by adjusting the SMP factor based on the workload demands. For more information about CPU polarization support in Linux on Z, see the CPU polarization page of IBM Knowledge Center.
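For example, the polarization can be displayed and changed from a running Linux instance with the util-linux tools; this is a sketch, and vertical polarization is meaningful only for an LPAR with shared processors:

   # lscpu -e              # the POLARIZATION column shows horizontal or vertical
   # chcpu -p vertical     # switch to vertical CPU polarization
   # chcpu -p horizontal   # switch back to horizontal polarization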
z/TPF
z/TPF on z14 can utilize more processors immediately without reactivating the LPAR or IPLing the z/TPF system.
On servers before z14, the z/TPF workload is evenly distributed across all available processors, even in low-utilization situations. This configuration causes cache and core contention with other LPARs. When z/TPF runs in a shared processor configuration, the achieved MIPS is higher when z/TPF uses a minimum set of processors.
In low-utilization periods, z/TPF now minimizes the processor footprint by compressing TPF workload onto a minimal set of I-streams (engines), which reduces the effect on other LPARs and allows the entire CPC to operate more efficiently.
As a consequence, z/OS and z/VM experience less contention from the z/TPF system when the z/TPF system is operating at periods of low demand.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
zIIP support
zIIPs do not change the model capacity identifier of z14 servers. IBM software product license charges that are based on the model capacity identifier are not affected by the addition of zIIPs. On a z14 server, z/OS Version 1 Release 13 is the minimum level for supporting zIIPs.
No changes to applications are required to use zIIPs. They can be used by the following applications:
Db2 V8 and later for z/OS data serving for applications that use Distributed Relational Database Architecture (DRDA) over TCP/IP, such as data serving, data warehousing, and selected utilities.
z/OS XML services.
z/OS CIM Server.
z/OS Communications Server for network encryption (Internet Protocol Security (IPSec)) and for large messages that are sent by HiperSockets.
IBM GBS Scalable Architecture for Financial Reporting.
IBM z/OS Global Mirror (formerly XRC) and System Data Mover.
IBM OMEGAMON® XE on z/OS, OMEGAMON XE on Db2 Performance Expert, and Db2 Performance Monitor.
Any Java application that uses the current IBM SDK.
WebSphere Application Server V5R1 and later, and products that are based on it, such as WebSphere Portal, WebSphere Enterprise Service Bus (WebSphere ESB), and WebSphere Business Integration (WBI) for z/OS.
CICS/TS V2R3 and later.
Db2 UDB for z/OS Version 8 and later.
IMS Version 8 and later.
zIIP Assisted HiperSockets for large messages.
z/OSMF (z/OS Management Facility).
IBM z/OS Platform for Apache Spark.
IBM Machine Learning for z/OS.
The functioning of a zIIP is transparent to application programs. The supported operating systems are listed in Table 7-3 on page 248.
On z14 servers, the zIIP processor is designed to run in SMT mode, with up to two threads per processor. This function is designed to help improve throughput for zIIP workloads and provide appropriate performance measurement, capacity planning, and SMF accounting data. This support is available for z/OS V2R1 (with PTFs) and later.
Use the PROJECTCPU option of the IEAOPTxx parmlib member to help determine whether zIIPs can be beneficial to the installation. Setting PROJECTCPU=YES directs z/OS to record the amount of eligible work for zIIPs in SMF record type 72 subtype 3. The field APPL% IIPCP of the Workload Activity Report listing by WLM service class indicates the percentage of a processor that is zIIP eligible. Because of the zIIP’s lower price as compared to a CP, even a utilization as low as 10% can provide cost benefits.
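A minimal IEAOPTxx sketch for such a projection run follows; the member suffix 02 is a placeholder:

   IEAOPT02:  PROJECTCPU=YES
   Console:   SET OPT=02

After the workload runs, review the APPL% IIPCP field in the RMF Workload Activity report to see how much CP time was zIIP eligible.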
Transactional Execution
The IBM zEnterprise EC12 introduced an architectural feature called Transactional Execution (TX). This capability is known in academia and industry as hardware transactional memory. Transactional execution is also implemented on z14, z13, z13s, and zBC12 servers.
This feature enables software to indicate to the hardware the beginning and end of a group of instructions that must be treated in an atomic way. All of their results occur or none occur, in true transactional style. The execution is optimistic.
The hardware provides a memory area to record the original contents of affected registers and memory as the instruction’s execution occurs. If the transactional execution group is canceled or must be rolled back, the hardware transactional memory is used to reset the values. Software can implement a fallback capability.
This capability increases the software’s efficiency by providing a way to avoid locks (lock elision). This advantage is of special importance for speculative code generation and highly parallelized applications.
TX is used by IBM Java virtual machine (JVM) and might be used by other software. The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
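As an illustration of this pattern at the instruction level, the following C sketch uses the GCC transactional-execution built-ins that are available for z Systems targets with the -mhtm compiler option. The structure (a transaction body plus a software fallback path) is the pattern that the text describes; the variable names are illustrative:

   #include <stdio.h>

   /* Build on zEC12 or later with: gcc -march=zEC12 -mhtm tx.c */
   static long counter;

   int main(void)
   {
       /* __builtin_tbegin returns 0 when the transaction started */
       if (__builtin_tbegin((void *)0) == 0) {
           counter++;              /* executed atomically */
           __builtin_tend();       /* commit the transaction */
           puts("transaction committed");
       } else {
           /* software fallback: the transaction aborted */
           counter++;              /* in real code, protect this with a lock */
           puts("fallback path used");
       }
       return 0;
   }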
Simultaneous multithreading
SMT is the hardware capability to process up to two simultaneous threads in a single core, sharing the resources of the superscalar core. This capability improves the system capacity and efficiency in the usage of the processor, which increases the overall throughput of the system.
The z14 can run up to two threads simultaneously in the same processor core, which dynamically shares resources of the core, such as cache, translation lookaside buffer (TLB), and execution resources. SMT provides better utilization of the cores and more processing capacity.
SMT is supported for zIIPs and IFLs.
 
Note: For zIIPs and IFLs, SMT must be enabled on z/OS, z/VM, or Linux on Z instances. An operating system with SMT support can be configured to dispatch work to a thread on a zIIP (for eligible workloads in z/OS) or an IFL (for z/VM) core in single-thread or SMT mode.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
An operating system that uses SMT controls each core and is responsible for maximizing core throughput and meeting workload goals with the smallest number of cores. In z/OS, HiperDispatch cache optimization is considered when choosing the two threads to be dispatched on the same processor.
HiperDispatch attempts to dispatch guest virtual CPUs on the same logical processor on which they ran before. PR/SM attempts to dispatch a vertical low logical processor on the same physical processor. If that is not possible, it attempts to dispatch it on the same node, or on the same CPC drawer where it was dispatched before, to maximize cache reuse.
From the point of view of an application, SMT is transparent and no changes are required in the application for it to run in an SMT environment, as shown in Figure 7-1 on page 269.
Figure 7-1 Simultaneous multithreading
z/OS
The following APARs must be applied to z/OS V2R1 to use SMT:
OA43366 (BCP)
OA43622 (WLM)
OA44439 (XCF)
The use of SMT on z/OS V2R1 requires enabling HiperDispatch, and defining the processor view (PROCVIEW) control statement in the LOADxx parmlib member and the MT_ZIIP_MODE parameter in the IEAOPTxx parmlib member.
The PROCVIEW statement is defined for the life of IPL, and can have the following values:
CORE: This value specifies that z/OS should configure a processor view of core, in which a core can include one or more threads. The number of threads is limited by z14 to two threads. If the underlying hardware does not support SMT, a core is limited to one thread.
CPU: This value is the default. It specifies that z/OS should configure a traditional processor view of CPU and not use SMT.
CORE,CPU_OK: This value specifies that z/OS should configure a processor view of core (as with the CORE value) but the CPU parameter is accepted as an alias for applicable commands.
When PROCVIEW CORE or PROCVIEW CORE,CPU_OK is specified for z/OS running on z14, HiperDispatch is forced on, and you cannot disable it. The PROCVIEW statement cannot be changed dynamically; therefore, you must IPL after changing it to make the new setting effective.
The MT_ZIIP_MODE parameter in IEAOPTxx controls the zIIP SMT mode. It can be 1 (the default), where only one thread runs in a core, or 2, where up to two threads can run in a core. If PROCVIEW CPU is specified, MT_ZIIP_MODE is always 1. Otherwise, the use of SMT to dispatch two threads on a single zIIP logical processor (MT_ZIIP_MODE=2) can be changed dynamically by using the SET OPT=xx command. Changing the MT mode for all cores can take some time to complete.
The activation of SMT mode also requires that the option "Do not end the time slice if a partition enters a wait state" in the HMC Customize/Delete Activation Profiles task is not selected; leaving it cleared is the recommended default setting.
With PROCVIEW CORE, the DISPLAY M=CORE and CONFIG CORE commands are used to display the core states and to configure an entire core.
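Putting the statements together, a minimal sketch of the SMT-related definitions follows; the member suffixes are placeholders:

   LOADxx:    PROCVIEW CORE,CPU_OK
   IEAOPTxx:  MT_ZIIP_MODE=2
   Console:   SET OPT=xx        (activate the IEAOPTxx member)
              D M=CORE          (display core states)

The LOADxx change requires an IPL; the MT_ZIIP_MODE change does not.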
With the introduction of multithreading support for SAPs in z14, a maximum of 88 logical SAPs can be used. RMF is updated to support this change by implementing page break support in the I/O Queuing Activity report that is generated by the RMF Postprocessor.
z/VM V7R1 and V6R4
The use of SMT in z/VM is enabled by using the MULTITHREADING statement in the system configuration file. Multithreading is enabled only if z/VM is configured to run with the HiperDispatch vertical polarization mode enabled and with the dispatcher work distribution mode set to reshuffle.
The default in z/VM is multithreading disabled. With the addition of the dynamic SMT capability to z/VM V6R4 through an SPE, the number of active threads per core can be changed without a system outage, so the potential capacity gain of going from SMT-1 to SMT-2 (one to two threads per core) can now be achieved dynamically. Dynamic SMT requires the PTFs to be applied, requires running in SMT-enabled mode, and enables varying the number of active threads per core dynamically.
z/VM supports up to 32 multithreaded cores (64 threads) for IFLs, and each thread is treated as an independent processor. z/VM dispatches virtual IFLs on the IFL logical processor so that the same or different guests can share a core. Each core has a single dispatch vector, and z/VM attempts to place virtual sibling IFLs on the same dispatch vector to maximize cache reuses.
The guests have no awareness of SMT and cannot use it: z/VM SMT exploitation does not include guest support for multithreading. The value of this support for guests is that the first-level z/VM host under the guests can achieve higher throughput from the multithreaded IFL cores.
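As a sketch, SMT is enabled at IPL time through the system configuration file and, with the dynamic SMT SPE applied, the active thread count can then be changed with a CP command. Verify the exact syntax in the z/VM CP Planning and Administration guide:

   SYSTEM CONFIG:  MULTITHREADING ENABLE TYPE IFL 2
   CP command:     SET MULTITHREAD 2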
Linux on Z and the KVM hypervisor
The upstream kernel 4.0 features SMT functionality that was developed by the Linux on Z development team. SMT is supported on LPAR only (not as a second-level guest). For more information, see the Kernel 4.0 page of the developerWorks website.
The following minimum releases of Linux on Z distributions natively support SMT:
Red Hat RHEL 7.2
SUSE Linux Enterprise Server12 SP1
Ubuntu 16.04 LTS
KVM hypervisor, which is offered with the following Linux distributions:
 – SLES 12 SP2 with service.
 – RHEL 7.5 with kernel-alt package (kernel 4.14).
 – Ubuntu 16.04 LTS with service and Ubuntu 18.04 LTS with service
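Whether SMT is active can be verified from a running Linux instance, for example:

   # lscpu | grep -i 'thread(s) per core'   # shows 2 when SMT-2 is active
   # lscpu -e                               # lists one line per thread (logical CPU)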
Single-instruction multiple-data
The SIMD feature introduces a new set of instructions to enable parallel computing that can accelerate code with string, character, integer, and floating point data types. The SIMD instructions allow a larger number of operands to be processed with a single complex instruction.
z14 is equipped with a new set of instructions to improve the performance of complex mathematical models and analytic workloads through vector processing and new complex instructions, which can process large amounts of data with a single instruction. This new set of instructions, which is known as SIMD, enables more consolidation of analytic workloads and business transactions on Z servers.
SIMD on z14 has support for 32-bit floats and enhanced math libraries that provide performance improvements for analytical workloads by processing more information with a single CPU instruction.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249. Operating system support includes the following items:
Enablement of vector registers.
Use of vector registers through XL C/C++ ARCH(11) and TUNE(11).
A math library with an optimized and tuned math function (Mathematical Acceleration Subsystem, or MASS) that can be used in place of some of the C standard math functions. It includes a SIMD vectorized and non-vectorized version.
A specialized math library, which is known as Automatically Tuned Linear Algebra Software (ATLAS), that is optimized for the hardware.
IBM Language Environment for C runtime function enablement for ATLAS.
DBX to support the disassembly of the new vector instructions, and to display and set vector registers.
XML SS exploitation to use new vector processing instructions to improve performance.
MASS and ATLAS can reduce the time and effort for middleware and application developers. IBM provides compiler built-in functions for SIMD that software applications can use as needed, such as for string instructions.
The use of new hardware instructions through XL C/C++ ARCH(12) and TUNE(12), or SIMD usage by the MASS and ATLAS libraries, requires the z14 support that is provided by the z/OS V2R1 XL C/C++ web deliverable.
The following compilers have built-in functions for SIMD (a minimal sketch follows this list):
IBM Java
XL C/C++
Enterprise COBOL
Enterprise PL/I
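As one way to picture the vector programming model that these compilers expose, the following C sketch uses the GCC generic vector extension, which the compiler can map onto the z14 vector registers when built for that target (for example, with -march=z14; XL C/C++ provides equivalent built-ins under ARCH(12)). The type name and build flag are illustrative:

   #include <stdio.h>

   /* Four 32-bit integers packed into one 128-bit vector register. */
   typedef int v4si __attribute__((vector_size(16)));

   int main(void)
   {
       v4si a = {1, 2, 3, 4};
       v4si b = {10, 20, 30, 40};
       v4si c = a + b;   /* one vector add instead of four scalar adds */

       for (int i = 0; i < 4; i++)
           printf("c[%d] = %d\n", i, c[i]);
       return 0;
   }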
Code must be developed to take advantage of the SIMD functions, and applications with SIMD instructions abend if they are run on a lower hardware level system. Some mathematical function replacement can be done without code changes by including the scalar MASS library before the standard math library.
The MASS and standard math libraries differ in accuracy, so assess the accuracy of the functions in the context of the user application before deciding whether to use the MASS and ATLAS libraries.
The SIMD functions can be disabled in z/OS partitions at IPL time by using the MACHMIG parameter in the LOADxx member. To disable SIMD, specify VEF (the hardware-based vector facility) on the MACHMIG statement. If you do not specify a MACHMIG statement, which is the default, the system is unlimited in its use of the Vector Facility for z/Architecture (SIMD).
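For example, a LOADxx member that holds back the vector facility contains the following statement (the statement name and the VEF keyword are the ones described above; all other LOADxx content is omitted from this sketch):

   MACHMIG VEF

Omitting the statement leaves the system free to use SIMD, so this line is needed only where the facility must be disabled, such as in preparation for a migration to a system without the facility.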
Hardware decimal floating point
Industry support for decimal floating point is growing, with IBM leading the open standard definition. Examples of support for the draft standard IEEE 754r include Java BigDecimal, C#, XML, C/C++, GCC, COBOL, and other key software vendors, such as Microsoft and SAP.
Decimal floating point support was introduced with z9 EC. z14 servers inherited the decimal floating point accelerator feature that was introduced with z10 EC.
z14 features a new decimal architecture with the Vector Enhancements Facility and the Vector Packed Decimal Facility for the Data Access Accelerator. The Vector Packed Decimal Facility introduces a set of instructions that perform operations on decimal types in vector registers to improve performance.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249. For more information, see 7.5.4, “z/OS XL C/C++ considerations” on page 308.
Out-of-order execution
Out-of-order (OOO) execution yields significant performance benefits for compute-intensive applications by reordering instruction execution, which allows later (newer) instructions to be run ahead of a stalled instruction, and reordering storage accesses and parallel storage accesses. OOO maintains good performance growth for traditional applications.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249. For more information, see 3.4.3, “Out-of-Order execution” on page 99.
CPU Measurement Facility
Also known as Hardware Instrumentation Services (HIS), the CPU Measurement Facility (CPU MF) was introduced with z10 EC to provide insight into the interaction between a workload and the hardware it runs on. CPU MF data can be collected by z/OS System Management Facilities (SMF) in SMF 113 records. The supported operating systems are listed in Table 7-3 on page 248.
For more information about this function, see The Load-Program-Parameter and the CPU-Measurement Facilities.
For more information about the CPU Measurement Facility, see the CPU MF - Update and WSC Experiences page of the IBM Techdocs Library website.
For more information, see 12.2, “LSPR workload suite” on page 449.
Large page support
In addition to the existing 1-MB large pages, 4-KB pages, and page frames, z14 servers support pageable 1-MB large pages, 2-GB large pages, and large page frames. The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Virtual Flash Memory
IBM Virtual Flash Memory (VFM) is the replacement for the Flash Express features (FC 0402 and FC 0403) that were available on the IBM zEC12 and IBM z13. No application changes are required to move from IBM Flash Express to VFM because VFM implements the EADM architecture by using HSA-like memory instead of Flash card pairs.
IBM Virtual Flash Memory (FC 0604) offers up to 6.0 TB of memory in 1.5 TB increments for improved application availability and to handle paging workload spikes.
IBM Virtual Flash Memory is designed to help improve availability and the handling of paging workload spikes when running z/OS V2.1, V2.2, or V2.3, or z/OS V1.13. With this support, z/OS is designed to help improve system availability and responsiveness by using VFM across transitional workload events, such as market openings and diagnostic data collection. z/OS is also designed to help improve processor performance by supporting middleware exploitation of pageable large (1 MB) pages.
Therefore, VFM can help organizations meet their most demanding service level agreements and compete more effectively. VFM is designed to be easily configurable, and to provide rapid time to value.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Guarded Storage Facility
Also known as less-pausing garbage collection, the Guarded Storage Facility (GSF) is a new architecture that was introduced with z14 to enable enterprise-scale Java applications to run without periodic pauses for garbage collection, even on larger heaps.
z/OS
GSF support allows an area of storage to be identified such that an Exit routine assumes control if a reference is made to that storage. GSF is managed by new instructions that define Guarded Storage Controls and system code to maintain that control information across un-dispatch and re-dispatch.
Enabling a less-pausing approach improves Java garbage collection. The function is provided on z14 servers that run z/OS V2.2 and later with APAR OA51643 installed. The MACHMIG statement in LOADxx of SYS1.PARMLIB provides the ability to disable the function.
z/VM
With the PTF for APAR VM65987, z/VM V6.4 provides support for guest exploitation of the z14 guarded storage facility. This facility is designed to improve the performance of garbage-collection processing by various languages, in particular Java.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Instruction Execution Protection
Instruction Execution Protection (IEP) is a new hardware function that was introduced with z14. It enables software, such as Language Environment, to mark certain memory regions (for example, a heap or stack) as non-executable, which improves the security of programs that run on IBM Z servers against stack-overflow and similar attacks.
Through enhanced hardware features (based on DAT table entry bit) and explicit software requests to obtain memory areas as non-executable, areas of memory can be protected from unauthorized execution. A Protection Exception occurs if an attempt is made to fetch an instruction from an address in such an element or if an address in such an element is the target of an execute-type instruction.
z/OS
To use IEP, the Real Storage Manager (RSM) is enhanced to request non-executable memory allocation. The new EXECUTABLE=YES|NO keyword on STORAGE OBTAIN or IARV64 indicates whether the memory to be obtained contains executable code. The Recovery Termination Manager (RTM) writes a LOGREC record for any program check that results from IEP.
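As a schematic sketch only, a request for non-executable storage might carry the new keyword as follows; every operand other than EXECUTABLE=NO is illustrative, and the complete operand sets are defined in the STORAGE and IARV64 macro documentation:

   STORAGE OBTAIN,LENGTH=WORKLEN,SP=132,EXECUTABLE=NO

An attempt to fetch an instruction from storage that is obtained this way results in a Protection Exception, as described previously.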
IEP support is for z/OS 2.2 and later running on z14 with APARs OA51030 and OA51643 installed.
z/VM
Guest exploitation support for the Instruction Execution Protection Facility is provided with APAR VM65986.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Co-processor compression enhancements
Since z196, IBM Z processors have on-chip co-processors with built-in capabilities, such as compression, expansion, and translation. Each follow-on server generation improved the on-chip co-processor functionality and performance, and z14 takes this idea further with the following enhancements:
Entropy Encoding for CMPSC with Huffman Coding
z14 enables an increased compression ratio (by using Huffman coding) for the on-chip compression coprocessor, which compresses data further while using fewer CPU cycles. This feature improves memory, transfer, and disk efficiency. Software-based expansion of Huffman-encoded text is used when hardware support is not present. Support is provided by z/OS V2.1 and later running on z14 with APAR OA49967 installed.
 
Order Preserving Compression
Order Preserving Compression allows comparisons against compressed data. It helps to achieve further disk and memory savings by compressing search trees, index files, or sort files that were previously impractical to compress.
Combining Order Preserving Compression (individual compression of index entries) with Entropy Encoding (to compress non-index data) can result in up to 35% data reduction6 for Db2, considering that a significant portion of Db2 disk space is used for indexes that provide fast access to data.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
7.4.3 Coupling and clustering features and functions
In this section, we describe the coupling and cluster features.
Coupling facility and CFCC considerations
Coupling facility (CF) connectivity to a z14 is supported on the z13, zEC12, zBC12, or another z14. The CFCC levels that are supported on Z servers are listed in Table 7-15.
Table 7-15 IBM Z CFCC code levels
IBM Z server            Code level
z14 M0x and z14 ZR1     CFCC Level 22 or CFCC Level 23
z13                     CFCC Level 20 or CFCC Level 21
z13s                    CFCC Level 21
zEC12                   CFCC Level 18 or CFCC Level 19
zBC12                   CFCC Level 19
 
Consideration: Because coupling link connectivity to z196 and previous systems is not supported, introducing z14 into an installation requires extra planning. Consider the level of CFCC. For more information, see “Migration considerations” on page 188.
CFCC Level 23
CFCC Level 23 is delivered on z14 servers with driver level 36. In addition to CFCC Level 22 enhancements, it introduces the following enhancements:
Asynchronous cross invalidation (XI) for CF cache structures
This requires z/OS fixes for APARs OA54688 (exploitation) and OA54985 (toleration). It also requires explicit data manager support (Db2 V12 with PTFs).
Coupling Facility hang detection enhancements
These enhancements provide a significant reduction in failure scope and client disruption (from CF-level to structure-level), with no loss of FFDC collection capability. With this support, the CFCC dispatcher reduces the CF hang detection interval to only 2 seconds, which allows more timely detection of and recovery from such events.
When a hang is detected, in most cases the CF confines the scope of the failure to “structure damage” for the single CF structure that the hung command was processing against, captures diagnostics with a non-disruptive CF dump, and continues operating without aborting or rebooting the CF image.
Coupling Facility granular latching
This enhancement eliminates the performance degradation that is caused by structure-wide latching. With this support, most CF list and lock structure ECR processing no longer uses structure-wide latching. It serializes its execution by using the normal structure object latches that all mainline commands use. However, a small number of “edge conditions” in ECR processing still require structure-wide latching.
Before you begin the migration process, install the compatibility and coexistence PTFs. A planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 23.
CFCC Level 22
CFCC Level 22 is delivered on z14 servers with driver level 32. CFCC Level 22 introduces the following enhancements:
Coupling Express Long Range (CE LR): A new link type that was introduced with z14 for long distance coupling connectivity.
Coupling Facility (CF) Processor Scalability: CF work management and dispatching changes for IBM z14™ allow improved efficiency and scalability for coupling facility images.
First, ordered work queues were eliminated from the CF in favor of first-in/first-out queues, which avoids the overhead of maintaining ordered queues.
Second, protocols for system-managed duplexing were simplified to avoid the potential for latching deadlocks between duplexed structures.
Third, the CF image can now use its processors to perform specific work management functions when the number of processors in the CF image exceeds a threshold. Together, these changes improve the processor scalability and throughput for a CF image.
CF List Notification Enhancements: Significant enhancements were made to the CF notifications that inform users about the status of shared objects within a coupling facility.
First, structure notifications can use a round-robin scheme for delivering immediate and deferred notifications that avoids excessive “shotgun” notifications, which reduces notification overhead.
Second, an option is now available for delivering “aggressive” notifications, which can drive a notification when new elements are added to a queue. This option helps get new work processed in a timely manner.
Third, notifications can now be driven when a queue transitions between full and not-full, which allows users to re-drive messages that could not previously be written to a “full” queue. The combination of these notification enhancements provides flexibility to accommodate notification preferences among various CF users and yields more consistent, timely notifications.
Coupling Link Constraint Relief: IBM z14™ provides more physical and logical coupling link connectivity compared to z13. Consider the following points:
 – The maximum number of physical ICA SR coupling links (ports) is increased from 40 per CPC to 80 per CPC. These higher limits on z14 support concurrent use of InfiniBand coupling, ICA SR, and CE LR links, for coupling link technology migration purposes.
 – Maximum number of coupling CHPIDs (of all types) is 256 per CPC (same as z13).
CF Encryption: z/OS 2.3 provides support for end-to-end encryption for CF data in flight and data at rest in CF structures (as a part of the Pervasive Encryption solution). Host-based CPACF encryption is used for high performance and low latency. IBM z14™ CF images are not required, but are recommended to simplify some sysplex recovery and reconciliation scenarios that involve encrypted CF structures. (The CF image never encrypts or decrypts any data.) IBM z14™ z/OS images are not required, but are recommended for the improved AES CBC encrypt/decrypt performance that z14 provides.
The supported operating systems are listed in Table 7-5 on page 251.
For more information about CFCC code levels, see the Parallel Sysplex page of the IBM IT infrastructure website.
For more information about the latest CFCC code levels, see the current exception letter that is published on Resource Link website (login is required).
CF structure sizing changes are expected when upgrading from a previous CFCC level to CFCC Level 22. Review the CF LPAR size by using the CFSizer tool, which is available for download at the IBM Systems support website.
Sizer Utility, an authorized z/OS program download, is useful when you are upgrading a CF. The tool is available for download at the IBM Systems support website.
Before you begin the migration process, install the compatibility and coexistence PTFs. A planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 22.
Coupling links support
Integrated Coupling Adapter (ICA) Short Reach, Coupling Express Long Reach (CE LR) and InfiniBand (IFB) coupling link options provide high-speed connectivity at short and longer distances over fiber optic interconnections. Several areas of this book address CE LR, ICA SR, and InfiniBand characteristics and support. For more information, see 4.7.4, “Parallel Sysplex connectivity” on page 184.
Integrated Coupling Adapter
PCIe Gen3 fanout, which is also known as Integrated Coupling Adapter Short Range (ICA SR), supports a maximum distance of 150 meters (492 feet) and is defined as CHPID type CS5 in IOCP.
Coupling Express Long Reach
The CE LR link provides point-to-point coupling connectivity at distances of up to 10 km (6.2 miles) unrepeated and is defined as CHPID type CL5 in IOCP. The supported operating systems are listed in Table 7-5 on page 251.
InfiniBand coupling links
The HCA3-O7 (12xIFB) fanout supports 12x InfiniBand coupling links at a maximum distance of 150 meters (492 feet) and is defined as CHPID type CIB in IOCP. The supported operating systems are listed in Table 7-5 on page 251.
InfiniBand coupling links at an unrepeated distance of 10 km (6.2 miles)
The HCA3-O LR7 (1xIFB) fanout supports 1x InfiniBand coupling links at an unrepeated distance of 10 km (6.2 miles). The supported operating systems are listed in Table 7-5 on page 251.
 
Note: IBM z14 is the last z Systems and IBM Z server to support the HCA3-O fanout for 12x IFB (#0171) and the HCA3-O LR fanout for 1x IFB (#0170).1 As announced previously, z13s is the last mid-range z Systems server to support these adapters.
Enterprises should begin migrating from HCA3-O and HCA3-O LR adapters on z14, z13, and z13s, and should plan to move off InfiniBand coupling links. For high-speed short-range coupling connectivity, migrate to the Integrated Coupling Adapter (ICA SR).
For long-range coupling connectivity, migrate to the new Coupling Express LR (CE LR) coupling link. Where long-range coupling connectivity requires a DWDM, enterprises must determine their DWDM vendor’s plan to qualify the CE LR. See Hardware Announcement 117-031, dated March 2017.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
Virtual Flash Memory use by CFCC
VFM can be used in coupling facility images to provide extended capacity and availability for workloads that use WebSphere MQ Shared Queues structures. The use of VFM can help availability by reducing the latency from paging delays that can occur at the start of the workday or during other transitional periods. It is also designed to help eliminate the delays that can occur when diagnostic data is collected during failures.
CFCC Coupling Thin Interrupts
The Coupling Thin Interrupts enhancement is delivered with CFCC 19. It improves the performance of a CF partition and improves the dispatching of z/OS LPARs that are awaiting the arrival of returned asynchronous CF requests when used in a shared engine environment. For more information, see “Coupling Thin Interrupts” on page 112. The supported operating systems are listed in Table 7-5 on page 251.
Asynchronous CF Duplexing for lock structures
Asynchronous CF Duplexing enhancement is a general-purpose interface for any CF Lock structure user. It enables secondary structure updates to be performed asynchronously with respect to primary updates. Initially delivered with CFCC 21 on z13 as an enhanced continuous availability solution, it offers performance advantages for duplexing lock structures and avoids the need for synchronous communication delays during the processing of every duplexed update operation.
Asynchronous CF Duplexing for lock structures requires the following software support:
z/OS V2R3, or the z/OS V2R2 SPE with PTFs for APARs OA47796 and OA49148
z/VM V7R1, or z/VM V6R4 with PTFs, for z/OS exploitation in a guest coupling environment
Db2 V12 with PTFs for APAR PI66689
IRLM V2.3 with PTFs for APAR PI68378
The supported operating systems are listed in Table 7-5 on page 251.
Asynchronous cross-invalidate (XI) for CF cache structures
Asynchronous XI for CF cache structures enables improved efficiency in CF data sharing by adopting a more transactional behavior for cross-invalidate (XI) processing, which is used to maintain coherency and consistency of data managers’ local buffer pools across the sysplex.
Instead of performing XI signals synchronously on every cache update request that causes them, data managers can “opt in” for the CF to perform these XIs asynchronously (and then sync them up with the CF at or before transaction completion). Data integrity is maintained if all XI signals complete by the time transaction locks are released.
The feature enables faster completion of cache update CF requests, especially when cross-site distances are involved, and provides improved cache structure service times and coupling efficiency. It requires explicit data manager exploitation and participation; it is not transparent to the data manager. No SMF data changes were made for CF monitoring and reporting.
The following requirements must be met:
CFCC Level 23 support, plus
z/OS PTFs on every exploiting system in the sysplex:
Fixes for APAR OA54688 - Exploitation support z/OS 2.2 and 2.3
Fixes for APAR OA54985 - Toleration support for z/OS 1.13 and 2.1
Db2 V12 with PTFs for exploitation
z/VM Dynamic I/O support for InfiniBand and ICA CHPIDs
z/VM dynamic I/O configuration support allows you to add, delete, and modify the definitions of channel paths, control units, and I/O devices to the server and z/VM without shutting down the system.
This function refers exclusively to the z/VM dynamic I/O support of InfiniBand and ICA coupling links. Support is available for the CIB and CS5 CHPID types in the z/VM dynamic commands, including the change channel path dynamic I/O command.
Specifying and changing the system name when entering and leaving configuration mode are also supported. z/VM does not use InfiniBand or ICA, and does not support the use of InfiniBand or ICA coupling links by guests. The supported operating systems are listed in Table 7-5 on page 251.
7.4.4 Storage connectivity-related features and functions
In this section, we describe the storage connectivity-related features and functions.
zHyperlink Express
z14 introduces IBM zHyperLink Express, the first new IBM Z input/output (I/O) channel link technology since FICON. zHyperLink Express is designed to help bring data close to processing power, increase the scalability of Z transaction processing, and lower I/O latency.
zHyperLink Express is designed for up to 5x lower latency than High Performance FICON for Z (zHPF) by directly connecting the Z Central Processor Complex (CPC) to the I/O Bay of the DS8880. This short distance (up to 150 m), direct connection is intended to speed Db2 for z/OS transaction processing and improve active log throughput.
The improved performance of zHyperLink Express allows the processing unit (PU) to make a synchronous request for data that is in the DS8880 cache. This feature eliminates the undispatch of the running request, the queuing delays to resume it, and the disruption of the PU cache.
Support for zHyperLink Writes can accelerate Db2 log writes to help deliver superior service levels by processing high-volume Db2 transactions at speed. IBM zHyperLink Express (FC 0431) requires compatible levels of DS8880/F hardware, firmware R8.5.1, and Db2 12 with PTFs.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
FICON Express16S+
FICON Express16S+ supports a link data rate of 16 gigabits per second (Gbps) and autonegotiation to 4 or 8 Gbps for synergy with switches, directors, and storage devices. With support for native FICON, High Performance FICON for Z (zHPF), and Fibre Channel Protocol (FCP), the IBM z14™ server enables you to position your SAN for even higher performance, which helps you to prepare for an end-to-end 16 Gbps infrastructure to meet the lower latency and increased bandwidth demands of your applications.
The new FICON Express16S+ channel works with your existing fiber optic cabling environment (single mode and multimode optical cables). The FICON Express16S+ feature running at end-to-end 16 Gbps link speeds provides reduced latency for large read/write operations and increased bandwidth compared to the FICON Express8S feature.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
FICON Express16S
FICON Express16S supports a link data rate of 16 Gbps and autonegotiation to 4 or 8 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, zHPF, and FCP, the z14 server enables SAN for even higher performance, which helps to prepare for an end-to-end 16 Gbps infrastructure to meet the increased bandwidth demands of your applications.
The new features for the multimode and single mode fiber optic cabling environments reduce latency for large read/write operations and increase bandwidth compared to the FICON Express8S features.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
FICON Express8S
FICON Express8S provides a link rate of 8 Gbps, with auto-negotiation to 4 or 2 Gbps for compatibility with previous devices and for investment protection. Both 10 km (6.2 miles) LX and SX connections are offered (all connections on a feature must be of the same type).
 
Statement of Direction1: IBM z14 is the last z Systems and IBM Z high-end server to support FICON Express8S (#0409 and #0410) channels. Enterprises should begin migrating from FICON Express8S channels to FICON Express16S+ channels (FC 0427 and FC 0428). FICON Express8S will be supported on future high-end IBM Z servers only as carry-forward on an upgrade.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
FICON Express8S introduced a hardware data router for more efficient zHPF data transfers. It is the first channel with hardware that is designed to support zHPF, as compared to FICON Express8, FICON Express4, and FICON Express2, which include a firmware-only zHPF implementation.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Extended distance FICON
An enhancement to the industry-standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for persistent IU pacing. Extended distance FICON is transparent to operating systems and applies to all FICON Express16S+, FICON Express16S, and FICON Express8S features that carry native FICON traffic (CHPID type FC).
To use this enhancement, the control unit must support the new IU pacing protocol. IBM System Storage® DS8000 series supports extended distance FICON for IBM Z environments. The channel defaults to current pacing values when it operates with control units that cannot use extended distance FICON.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
High-performance FICON
High-performance FICON (zHPF) was first provided on System z10, and is a FICON architecture for protocol simplification and efficiency. It reduces the number of information units (IUs) that are processed. Enhancements were made to the z/Architecture and the FICON interface architecture to provide optimizations for online transaction processing (OLTP) workloads.
zHPF is available on z14, z13, z13s, zEC12, and zBC12 servers. The FICON Express16S+, FICON Express16S, and FICON Express8S (CHPID type FC) concurrently support the existing FICON protocol and the zHPF protocol in the server LIC.
When used by the FICON channel, the z/OS operating system, and the DS8000 control unit or other subsystems, the FICON channel processor usage can be reduced and performance improved. Appropriate levels of Licensed Internal Code (LIC) are required.
Also, the changes to the architectures provide end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).
zHPF is compatible with the following standards:
Fibre Channel Framing and Signaling standard (FC-FS)
Fibre Channel Switch Fabric and Switch Control Requirements (FC-SW)
Fibre Channel Single-Byte-4 (FC-SB-4) standards
For example, the zHPF channel programs can be used by the z/OS OLTP I/O workloads, Db2, VSAM, the partitioned data set extended (PDSE), and the z/OS file system (zFS).
At the zHPF announcement, zHPF supported the transfer of small blocks of fixed size data (4 K) from a single track. This capability was extended, first to 64 KB, and then to multitrack operations. The 64 KB data transfer limit on multitrack operations was removed by z196. This improvement allows the channel to fully use the bandwidth of FICON channels, which results in higher throughputs and lower response times.
The multitrack operations extension applies exclusively to the FICON Express16S+, FICON Express16S, and FICON Express8S, on the z14, z13, z13s, zEC12, and zBC12, when configured as CHPID type FC and connecting to z/OS. zHPF requires matching support by the DS8000 series. Otherwise, the extended multitrack support is transparent to the control unit.
zHPF is enhanced to allow all large write operations (greater than 64 KB) at distances up to 100 km (62.13 miles) to be run in a single round trip to the control unit. This process does not elongate the I/O service time for these write operations at extended distances. This enhancement to zHPF removes a key inhibitor for clients adopting zHPF over extended distances, especially when the IBM HyperSwap capability of z/OS is used.
From the z/OS perspective, the FICON architecture is called command mode and the zHPF architecture is called transport mode. During link initialization, the channel node and the control unit node indicate whether they support zHPF.
 
Requirement: All FICON channel path identifiers (CHPIDs) that are defined to the same LCU must support zHPF. The inclusion of any features that do not support zHPF in the path group causes the entire path group to support command mode only.
The mode that is used for an I/O operation depends on the control unit that supports zHPF and its settings in the z/OS operating system. For z/OS use, a parameter is available in the IECIOSxx member of SYS1.PARMLIB (ZHPF=YES or NO) and in the SETIOS system command to control whether zHPF is enabled or disabled. The default is ZHPF=NO.
Support is also added for the D IOS,ZHPF system command to indicate whether zHPF is enabled, disabled, or not supported on the server.
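The controls that are named above can be sketched as follows (only the zHPF-related statement is shown; an IECIOSxx member normally contains other parameters):

   ZHPF=YES          in the IECIOSxx member, enables zHPF at IPL
   SETIOS ZHPF=YES   enables zHPF dynamically from the console
   D IOS,ZHPF        displays whether zHPF is enabled, disabled, or not supported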
Similar to the existing FICON channel architecture, the application or access method provides the channel program (CCWs). The way in which zHPF (transport mode) manages channel program operations differs from the CCW operation of the existing FICON architecture (command mode). In command mode, each CCW is sent to the control unit for execution. In transport mode, multiple channel commands are packaged together and sent over the link to the control unit in a single control block, which uses less processing than the existing FICON architecture. Certain complex CCW chains are not supported by zHPF.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
For more information about FICON channel performance, see the performance technical papers that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
Modified Indirect Data Address Word facility
The Modified Indirect Data Address Word (MIDAW) facility is a system architecture and software feature that improves FICON performance by providing a more efficient channel command word (CCW)/indirect data address word (IDAW) structure for certain categories of data-chaining I/O operations. The facility was first made available on System z9 servers, and is used by the Media Manager in z/OS.
MIDAW can improve FICON performance for extended format data sets. Non-extended data sets can also benefit from MIDAW.
MIDAW can improve channel utilization, and can improve I/O response time. It also reduces FICON channel connect time, director ports, and control unit processor usage.
IBM laboratory tests indicate that applications that use EF data sets, such as Db2, or long chains of small blocks can gain significant performance benefits by using the MIDAW facility.
MIDAW is supported on FICON channels that are configured as CHPID type FC. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
MIDAW technical description
An IDAW is used to specify data addresses for I/O operations in a virtual environment.8 The IDAW design allows the first IDAW in a list to point to any address within a page. Subsequent IDAWs in the same list must point to the first byte in a page. Also, IDAWs (except the first and last IDAW) in a list must manage complete 2 K or 4 K units of data.
Figure 7-2 on page 283 shows a single CCW that controls the transfer of data that spans non-contiguous 4 K frames in main storage. When the IDAW flag is set, the data address in the CCW points to a list of words (IDAWs). Each IDAW contains an address that designates a data area within real storage.
Figure 7-2 IDAW usage
The number of required IDAWs for a CCW is determined by the following factors:
IDAW format as specified in the operation request block (ORB)
Count field of the CCW
Data address in the initial IDAW
For example, three IDAWS are required when the following events occur:
The ORB specifies format-2 IDAWs with 4 KB blocks.
The CCW count field specifies 8 KB.
The first IDAW designates a location in the middle of a 4 KB block.
CCWs with data chaining can be used to process I/O data blocks that have a more complex internal structure, in which portions of the data block are directed into separate buffer areas. This process is sometimes known as scatter-read or scatter-write. However, as technology evolves and link speed increases, data chaining techniques become less efficient because of switch fabrics, control unit processing and exchanges, and other issues.
The MIDAW facility is a method of gathering and scattering data from and into discontinuous storage locations during an I/O operation. The MIDAW format is shown in Figure 7-3. It is 16 bytes long and is aligned on a quadword.
Figure 7-3 MIDAW format
An example of MIDAW usage is shown in Figure 7-4.
Figure 7-4 MIDAW usage
The use of MIDAWs is indicated by the MIDAW bit in the CCW. If this bit is set, the skip flag cannot be set in the CCW. The skip flag in the MIDAW can be used instead. The data count in the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends when the CCW count goes to zero or the last MIDAW (with the last flag) ends.
The combination of the address and count in a MIDAW cannot cross a page boundary. Therefore, the largest possible count is 4 K. The maximum data count of all the MIDAWs in a list cannot exceed 64 K, which is the maximum count of the associated CCW.
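For illustration only, the 16-byte, quadword-aligned MIDAW layout that is shown in Figure 7-3 can be sketched as a C structure; the field names are ours, and the flags byte carries the last flag, the skip flag, and the data-transfer-interruption control:

   #include <stdint.h>

   /* One MIDAW: 16 bytes, aligned on a quadword (illustrative sketch). */
   struct midaw {
       uint8_t  reserved[5];  /* reserved, must be zero                     */
       uint8_t  flags;        /* last flag, skip flag, interruption control */
       uint16_t count;        /* byte count; address plus count must stay
                                 within one page, so the maximum is 4 K     */
       uint64_t data_address; /* 64-bit address of the data area            */
   } __attribute__((aligned(16)));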
The scatter-read or scatter-write effect of the MIDAWs makes it possible to efficiently send small control blocks that are embedded in a disk record to separate buffers from those that are used for larger data areas within the record. MIDAW operations are on a single I/O block, in the manner of data chaining. Do not confuse this operation with CCW command chaining.
Extended format data sets
z/OS extended format (EF) data sets use internal structures (usually not visible to the application program) that require a scatter-read (or scatter-write) operation. Therefore, CCW data chaining is required, which produces less than optimal I/O performance. Because the most significant performance benefit of MIDAWs is achieved with EF data sets, a brief review of the EF data sets is included here.
VSAM and non-VSAM (DSORG=PS) data sets can be defined as EF data sets. For non-VSAM data sets, a 32-byte suffix is appended to the end of every physical record (that is, block) on disk. VSAM appends the suffix to the end of every control interval (CI), which normally corresponds to a physical record.
(A 32 K CI is split into two records to span tracks.) This suffix is used to improve data reliability, and it facilitates other functions that are described in the following paragraphs. For example, if the DCB BLKSIZE or VSAM CI size is equal to 8192, the actual block on storage consists of 8224 bytes. The control unit does not distinguish between suffixes and user data. The suffix is transparent to the access method and database.
In addition to reliability, EF data sets enable the following functions:
DFSMS striping
Access method compression
Extended addressability (EA)
EA is useful for creating large Db2 partitions (larger than 4 GB). Striping can be used to increase sequential throughput, or to spread random I/Os across multiple logical volumes. DFSMS striping is useful for using multiple channels in parallel for one data set. The Db2 logs are often striped to optimize the performance of Db2 sequential inserts.
Processing an I/O operation to an EF data set normally requires at least two CCWs with data chaining. One CCW is used for the 32-byte suffix of the EF data set. With MIDAW, the additional CCW for the EF data set suffix is eliminated.
MIDAWs benefit EF and non-EF data sets. For example, to read 12 4 K records from a non-EF data set on a 3390 track, Media Manager chains 12 CCWs together by using data chaining. To read 12 4 K records from an EF data set, 24 CCWs are chained (two CCWs per 4 K record). By using Media Manager track-level command operations and MIDAWs, an entire track can be transferred by using a single CCW.
Performance benefits
z/OS Media Manager has I/O channel program support for implementing EF data sets, and automatically uses MIDAWs when appropriate. Most disk I/Os in the system are generated by using Media Manager.
Users of the EXCPVR macro (executing fixed channel programs in real storage) can construct channel programs that contain MIDAWs. However, doing so requires that they construct an IOBE with the IOBEMIDA bit set. Users of the EXCP macro cannot construct channel programs that contain MIDAWs.
The MIDAW facility removes the 4 K boundary restrictions of IDAWs and, for EF data sets, reduces the number of CCWs. Decreasing the number of CCWs helps to reduce the FICON channel processor utilization. Media Manager and MIDAWs do not cause the bits to move any faster across the FICON link. However, they reduce the number of frames and sequences that flow across the link, and therefore use the channel resources more efficiently.
The performance of a specific workload can vary based on the conditions and hardware configuration of the environment. IBM laboratory tests found that Db2 gains significant performance benefits by using the MIDAW facility in the following areas:
Table scans
Logging
Utilities
Use of DFSMS striping for Db2 data sets
Media Manager with the MIDAW facility can provide significant performance benefits when used in combination with applications that use EF data sets (such as Db2) or long chains of small blocks.
For more information about FICON and MIDAW, see the following resources:
The I/O Connectivity page of the IBM IT infrastructure website includes information about FICON channel performance.
DS8000 Performance Monitoring and Tuning, SG24-7146.
ICKDSF
Device Support Facilities, ICKDSF, Release 17 is required on all systems that share disk subsystems with a z14 processor.
ICKDSF supports a modified format of the CPU information field that contains a two-digit LPAR identifier. ICKDSF uses the CPU information field instead of CCW reserve/release for concurrent media maintenance. It prevents multiple systems from running ICKDSF on the same volume, and at the same time allows user applications to run while ICKDSF is processing. To prevent data corruption, ICKDSF must determine all sharing systems that might run ICKDSF. Therefore, this support is required for z14.
 
Remember: The need for ICKDSF Release 17 also applies to systems that are not part of the same sysplex, or are running an operating system other than z/OS, such as z/VM.
z/OS Discovery and Auto-Configuration
z/OS Discovery and Auto Configuration (zDAC) is designed to automatically run several I/O configuration definition tasks for new and changed disk and tape controllers that are connected to a switch or director, when attached to a FICON channel.
The zDAC function is integrated into the hardware configuration definition (HCD). Clients can define a policy that includes preferences for availability and bandwidth, parallel access volume (PAV) definitions, control unit numbers, and device number ranges. When new controllers are added to an I/O configuration or changes are made to existing controllers, the system discovers them and proposes configuration changes that are based on that policy.
zDAC provides real-time discovery for the FICON fabric, subsystem, and I/O device resource changes from z/OS. By exploring the discovered control units for defined logical control units (LCUs) and devices, zDAC compares the discovered controller information with the current system configuration. It then determines delta changes to the configuration for a proposed configuration.
All added or changed logical control units and devices are added into the proposed configuration. They are assigned proposed control unit and device numbers, and channel paths that are based on the defined policy. zDAC uses channel path selection algorithms to minimize single points of failure. The zDAC proposed configurations are created as work I/O definition files (IODFs) that can be converted to production IODFs and activated.
zDAC is designed to run discovery for all systems in a sysplex that support the function. Therefore, zDAC helps to simplify I/O configuration on z14 systems that run z/OS, and reduces complexity and setup time.
zDAC applies to all FICON features that are supported on z14 when configured as CHPID type FC. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Platform and name server registration in FICON channel
The FICON Express16S+, FICON Express16S, and FICON Express8S features support platform and name server registration to the fabric for CHPID types FC and FCP.
Information about the channels that are connected to a fabric (if registered) allows other nodes or storage area network (SAN) managers to query the name server to determine what is connected to the fabric.
The following attributes are registered for the z14 servers:
Platform information
Channel information
Worldwide port name (WWPN)
Port type (N_Port_ID)
FC-4 types that are supported
Classes of service that are supported by the channel
The platform and name server registration service is defined in the Fibre Channel Generic Services 4 (FC-GS-4) standard.
The 63.75-K subchannels
Servers before z9 EC reserved 1024 subchannels for internal system use, out of a maximum of 64 K subchannels. Starting with z9 EC, the number of reserved subchannels was reduced to 256, which increased the number of subchannels that are available. Reserved subchannels exist in subchannel set 0 only. One subchannel is reserved in each of subchannel sets 1, 2, and 3.
The informal name, 63.75-K subchannels, represents 65,280 subchannels, as shown in the following equation:
63 × 1024 + 0.75 × 1024 = 65,280
This equation applies to subchannel set 0. For subchannel sets 1, 2, and 3, the number of available subchannels is derived by using the following equation:
(64 × 1024) − 1 = 65,535
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Multiple subchannel sets
First introduced in z9 EC, multiple subchannel sets (MSS) provide a mechanism for addressing more than 63.75-K I/O devices and aliases for FICON (CHPID types FC) on the z14, z13, z13s, zEC12, and zBC12. z196 introduced the third subchannel set (SS2). With z13, one more subchannel set (SS3) was introduced, which expands the alias addressing by 64-K more I/O devices.
z/VM V6R3 MSS support for mirrored direct access storage device (DASD) provides a subset of host support for the MSS facility to allow using an alternative subchannel set for Peer-to-Peer Remote Copy (PPRC) secondary volumes.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253. For more information about channel subsystem, see Chapter 5, “Central processor complex channel subsystem” on page 195.
Fourth subchannel set
With z13, a fourth subchannel set (SS3) was introduced. Together with the second subchannel set (SS1) and third subchannel set (SS2), SS3 can be used for disk alias devices of primary and secondary devices, and as Metro Mirror secondary devices. This set helps facilitate storage growth and complements other functions, such as extended address volume (EAV) and Hyper Parallel Access Volumes (HyperPAV).
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
IPL from an alternative subchannel set
z14 supports IPL from subchannel set 1 (SS1), subchannel set 2 (SS2), or subchannel set 3 (SS3), in addition to subchannel set 0.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253. For more information, see “Initial program load from an alternative subchannel set” on page 200.
32 K subchannels for FICON Express16S+ and FICON Express16S
To help facilitate growth and continue to enable server consolidation, the z14 supports up to 32 K subchannels per FICON Express16S+ and FICON Express16S channel (CHPID). More devices can be defined per FICON channel, including primary, secondary, and alias devices. The maximum number of subchannels across all device types that are addressable within an LPAR remains at 63.75 K for subchannel set 0 and at (64 × 1024) − 1 = 65,535 for subchannel sets 1, 2, and 3.
This support is exclusive to the z14, z13, and z13s servers, and applies to the FICON Express16S+ and FICON Express16S features (defined as CHPID type FC). FICON Express8S remains at 24 K subchannels when defined as CHPID type FC.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Request node identification data
First offered on z9 EC, the request node identification data (RNID) function for native FICON CHPID type FC allows isolation of cabling-detected errors. The supported operating systems are listed in Table 7-6 on page 252.
FICON link incident reporting
FICON link incident reporting allows an operating system image (without operator intervention) to register link incident reports. The supported operating systems are listed in Table 7-6 on page 252.
Health Check for FICON Dynamic routing
Starting with z13, the channel microcode was changed to support FICON dynamic routing. However, I/O errors can occur if the FICON switches are configured for dynamic routing while the processors or storage controllers do not support it. Therefore, a health check is provided that interrogates the switch to determine whether dynamic routing is enabled in the switch fabric.
No action is required on z/OS to enable the health check; it is automatically enabled at IPL and reacts to changes that might cause problems. The health check can be disabled by using the PARMLIB or SDSF modify commands.
The supported operating systems are listed in Table 7-6 on page 252. For more information about FICON Dynamic Routing (FIDR), see “Central processor complex I/O system structure” on page 145.
Global resource serialization FICON CTC toleration
For some configurations that depend on ESCON CTC definitions, global resource serialization (GRS) FICON CTC toleration that is provided with APAR OA38230 is essential, especially after ESCON channel support was removed from IBM Z starting with zEC12.
The supported operating systems are listed in Table 7-6 on page 252.
Increased performance for the FCP protocol
The FCP LIC is modified to help increase I/O operations per second for small and large block sizes, and to support 16-Gbps link speeds.
For more information about FCP channel performance, see the performance technical papers that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
The FCP protocol is supported by z/VM, z/VSE, and Linux on Z. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
T10-DIF support
American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. IBM Z provides added end-to-end data protection between the operating system and the DS8870 unit. This support adds protection information that consists of Cyclic Redundancy Checking (CRC), Logical Block Address (LBA), and host application tags to each sector of FB data on a logical volume.
IBM Z support applies to FCP channels only. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel. First introduced with z9 EC, this feature can be used with supported FICON features that are defined as CHPID type FCP on z14 servers. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Worldwide port name tool
Part of the z14 system installation is the pre-planning of the SAN environment. IBM includes a stand-alone tool to assist with this planning before the installation.
The capabilities of the WWPN tool are extended to calculate and show WWPNs for virtual and physical ports ahead of system installation.
The tool assigns WWPNs to each virtual FCP channel or port by using the same WWPN assignment algorithms that a system uses when assigning WWPNs for channels that use NPIV. Therefore, the SAN can be set up in advance, which allows operations to proceed much faster after the server is installed. In addition, the SAN configuration can be retained instead of altered by assigning the WWPN to physical FCP ports when a FICON feature is replaced.
The WWPN tool takes a .csv file that contains the FCP-specific I/O device definitions and creates the WWPN assignments that are required to set up the SAN. A binary configuration file that can be imported later by the system is also created. The .csv file can be created manually or exported from the HCD/HCM. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
The WWPN tool is applicable to all FICON channels that are defined as CHPID type FCP (for communication with SCSI devices) on z14. It is available for download from the Resource Link website (login is required).
 
Note: An optional feature can be ordered for WWPN persistency before shipment to keep the same I/O serial number on the new CPC. Current information must be provided during the ordering process.
7.4.5 Networking features and functions
In this section, we describe the networking features and functions.
25GbE RoCE Express2
Based on the RoCE Express2 generation hardware, the 25GbE RoCE Express2 (FC 0430), which was introduced with the October 2, 2018 announcement, provides two 25GbE physical ports and requires 25GbE optics and 25GbE support in the Ethernet switch. The switch port must support 25GbE (negotiation down to 10GbE is not supported).
The 25GbE RoCE Express2 has one PCHID and the same virtualization characteristics as the 10GbE RoCE Express2 (FC 0412): 126 Virtual Functions per PCHID.
z/OS requires fixes for APAR OA55686. RMF 2.2 and later is also enhanced to recognize the CX4 card type and properly display CX4 cards in the PCIe Activity reports.
The 25GbE RoCE Express2 feature also is exploited by Linux on Z for applications that are coded to the native RoCE verb interface or use Ethernet (such as TCP/IP). This native exploitation does not require a peer OSA.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
10GbE RoCE Express2
IBM 10GbE RoCE Express2 provides a natively attached PCIe I/O Drawer-based Ethernet feature that supports 10 Gbps Converged Enhanced Ethernet (CEE) and RDMA over CEE (RoCE). The RoCE feature, with an OSA feature, enables shared memory communications between two CPCs by using a shared switch.
RoCE Express2 provides increased virtualization (sharing capability) by supporting 63 Virtual Functions (VFs) per physical port for a total of 126 VFs per PCHID. This configuration allows RoCE to be extended to more workloads.
z/OS Communications Server (CS) provides a new software device driver, ConnectX4 (CX4), for RoCE Express2. The device driver is transparent to both the upper layers of the CS (the SMC-R and TCP/IP stack) and application software (exploiting TCP sockets). RoCE Express2 introduces a minor change in how the physical port is configured.
RMF 2.2 and later is also enhanced to recognize the new CX4 card type and properly display CX4 cards in the PCIe Activity reports.
The 10GbE RoCE Express2 feature also is exploited by Linux on Z for applications that are coded to the native RoCE verb interface or use Ethernet (such as TCP/IP). This native exploitation does not require a peer OSA.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
10GbE RoCE Express
z14 servers support carrying forward the 10GbE RoCE Express feature. This feature provides support for the second port on the adapter and for sharing the ports with up to 31 partitions (per adapter) by using both ports.
The 10-Gigabit Ethernet (10GbE) RoCE Express feature is designed to help reduce consumption of CPU resources for applications that use the TCP/IP stack (such as WebSphere accessing a Db2 database). Use of the 10GbE RoCE Express feature also can help reduce network latency with memory-to-memory transfers by using Shared Memory Communications over Remote Direct Memory Access (SMC-R) in z/OS V2R1 or later.
It is transparent to applications and can be used for LPAR-to-LPAR communication on a single z14 server or for server-to-server communication in a multiple CPC environment.
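For illustration, enabling SMC-R in z/OS means pointing the TCP/IP stack at the PCIe function of the RoCE adapter through the GLOBALCONFIG statement in the TCP/IP profile. This is a minimal sketch; the PFID and port number are examples only and must match the FUNCTION definition for the feature in the I/O configuration:

   GLOBALCONFIG SMCR PFID 0018 PORTNUM 1

Eligible TCP connections between SMC-R-capable peers then shift to RDMA transparently, with no application changes.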
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257. For more information, see Appendix D, “Shared Memory Communications” on page 475.
Shared Memory Communication - Direct Memory Access
First introduced with z13 servers, the Shared Memory Communication - Direct Memory Access (SMC-D) feature maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately, without changes to application software or the IP topology. Similar to SMC-R, this protocol uses shared memory architectural concepts that eliminate TCP/IP processing in the data path, yet preserve TCP/IP qualities of service for connection management purposes.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257. For more information, see Appendix D, “Shared Memory Communications” on page 475.
HiperSockets Completion Queue
The HiperSockets Completion Queue function is implemented on z14, z13, z13s, zEC12, and zBC12. The HiperSockets Completion Queue function is designed to allow HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary. Therefore, it combines ultra-low latency with more tolerance for traffic peaks. HiperSockets Completion Queue can be especially helpful in burst situations. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
HiperSockets Virtual Switch Bridge
The HiperSockets Virtual Switch Bridge is implemented on z14, z13, z13s, zEC12, and zBC12. With the HiperSockets Virtual Switch Bridge, the z/VM virtual switch is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to also directly communicate with the following components:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
HiperSockets Multiple Write Facility
The HiperSockets Multiple Write Facility allows the streaming of bulk data over a HiperSockets link between two LPARs. Multiple output buffers are supported on a single Signal Adapter (SIGA) write instruction. The key advantage of this enhancement is that it allows the receiving LPAR to process a much larger amount of data per I/O interrupt. This process is transparent to the operating system in the receiving partition. HiperSockets Multiple Write Facility with fewer I/O interrupts is designed to reduce processor utilization of the sending and receiving partitions.
Support for this function is required in the sending operating system. For more information, see “HiperSockets” on page 183. The supported operating systems are listed in Table 7-8 on page 255.
HiperSockets support of IPV6
IPv6 is expected to be a key element in the future of networking. The IPv6 support for HiperSockets allows compatible implementations between external networks and internal HiperSockets networks. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
HiperSockets Layer 2 support
For flexible and efficient data transfer for IP and non-IP workloads, the HiperSockets internal networks on z14 can support two transport modes: Layer 2 (Link Layer) and the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA).
HiperSockets devices are protocol-independent and Layer 3-independent. Each HiperSockets device features its own Layer 2 Media Access Control (MAC) address. This MAC address allows the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same way as they do a non-mainframe environment.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
HiperSockets network traffic analyzer for Linux on Z
Introduced with IBM System z10, the HiperSockets network traffic analyzer (HS NTA) provides support for tracing Layer 2 and Layer 3 HiperSockets network traffic in Linux on Z. This support allows Linux on Z to control the trace for the internal virtual LAN and to capture the records into host memory and storage (file systems).
Linux on Z tools can be used to format, edit, and process the trace records for analysis by system programmers and network administrators.
OSA-Express7S 25 Gigabit Ethernet SR
OSA-Express7S 25GbE SR (FC 0429) is installed in the PCIe I/O drawer. It has one 25GbE physical port, and requires 25GbE optics and Ethernet switch 25GbE support (negotiation down to 10GbE is not supported).
Consider the following points regarding operating system support:
z/OS V2R1, V2R2, and V2R3 require fixes for the following APARs: OA55256 (VTAM) and PI95703 (TCP/IP).
z/VM V6R4 and V7R1 require the PTF for APAR PI99085.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express6S 10-Gigabit Ethernet LR and SR
OSA-Express6S 10-Gigabit Ethernet features are installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express5S 10-Gigabit Ethernet LR and SR
Introduced with the zEC12 and zBC12, the OSA-Express5S 10-Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one port, which is defined as CHPID type OSD that supports the queued direct input/output (QDIO) architecture for high-speed TCP/IP communication.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express6S Gigabit Ethernet LX and SX
z14 introduces an Ethernet technology refresh with OSA-Express6S Gigabit Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express6S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express5S Gigabit Ethernet LX and SX
The OSA-Express5S Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). Each port supports attachment to a 1 Gigabit per second (Gbps) Ethernet LAN. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express5S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express6S 1000BASE-T Ethernet
z14 introduces an Ethernet technology refresh with OSA-Express6S 1000BASE-T Ethernet features to be installed in the PCIe I/O drawer, which is supported by the 16 GBps PCIe Gen3 host bus. The performance characteristics are comparable to the OSA-Express5S features and they also retain the same form factor and port granularity.
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express6S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
 
Statement of Direction1: The OSA-Express6S 1000BASE-T adapter (FC 0426) is the last generation of OSA 1000BASE-T adapters to support connections operating at 100 Mbps link speed. Future OSA-Express 1000BASE-T adapter generations will support operation only at 1000 Mbps (1 Gbps) link speed.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
OSA-Express5S 1000BASE-T Ethernet
The OSA-Express5S 1000BASE-T Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one PCIe adapter and two ports. The two ports share a CHPID, which can be defined as OSC, OSD, or OSE. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID type OSD
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express5S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express4S 1000BASE-T Ethernet
The OSA-Express4S 1000BASE-T Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature includes one PCIe adapter and two ports. The two ports share a CHPID, which is defined as OSC, OSD, or OSE. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD and OSN
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
 
Note: Operating system support is required to recognize and use the second port on the OSA-Express4S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
OSA-Express NCP support
 
Removal of support for configuring OSN CHPID types: The IBM z13 and z13s are the last z Systems and IBM Z generation to support configuring OSN CHPID types. IBM z14™ servers do not support CHPID type OSN.
OSN CHPIDs were used to communicate between an operating system instance that is running in one logical partition and the IBM Communication Controller for Linux on Z (CCL) product in another logical partition on the same CPC. For more information about withdrawal from marketing for the CCL product, see announcement letter #914-227 dated 12/02/2014.
OSA-Integrated Console Controller
The OSA-Express 1000BASE-T Ethernet features provide the Integrated Console Controller (OSA-ICC) function, which supports TN3270E (RFC 2355) and non-SNA DFT 3270 emulation. The OSA-ICC function is defined as CHPID type OSC and console controller, and includes multiple LPAR support as shared or spanned channels.
With the OSA-ICC function, 3270 emulation for console session connections is integrated in the z14 through a port on the OSA-Express6S 1000BASE-T, OSA-Express5S 1000BASE-T, or OSA-Express4S 1000BASE-T features.
OSA-ICC can be configured on a PCHID-by-PCHID basis, and is supported at any of the feature settings. Each port can support up to 120 console session connections.
To improve the security of console operations and to provide secure, validated connectivity, OSA-ICC supports Transport Layer Security/Secure Sockets Layer (TLS/SSL) with Certificate Authentication, starting with z13 GA2 (Driver level 27).
 
Note: OSA-ICC supports up to 48 secure sessions per CHPID (the overall maximum of 120 connections is unchanged).
OSA-ICC Enhancements with HMC 2.14.1
With HMC 2.14.1, the following enhancements were introduced:
The IPv6 communications protocol is supported by OSA-ICC 3270 so that clients can comply with existing regulations that require all computer purchases to support IPv6.
TLS negotiation levels (the supported TLS protocol levels) for the OSA-ICC 3270 client connection can now be specified:
 – TLS 1.0: The OSA-ICC 3270 server permits TLS 1.0, TLS 1.1, and TLS 1.2 client connections.
 – TLS 1.1: The OSA-ICC 3270 server permits TLS 1.1 and TLS 1.2 client connections.
 – TLS 1.2: The OSA-ICC 3270 server permits only TLS 1.2 client connections.
Separate and unique OSA-ICC 3270 certificates are supported (one for each PCHID), for the benefit of customers who host workloads across multiple business units or data centers, where cross-site coordination is required. Customers can avoid interrupting all TLS connections at the same time when renewing expired certificates.
OSA-ICC continues to also support a single certificate for all OSA-ICC PCHIDs in the system.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Checksum offload in QDIO mode (CHPID type OSD)
Checksum offload provides the capability of calculating the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and IP header checksums, which verify the integrity of transmitted data. By moving the checksum calculations to a Gigabit or 1000BASE-T Ethernet feature, host processor cycles are reduced and performance is improved.
Checksum offload is provided for several types of traffic and is supported by the OSA-Express6S GbE, OSA-Express6S 1000BASE-T Ethernet, OSA-Express5S GbE, OSA-Express5S 1000BASE-T Ethernet, and OSA-Express4S 1000BASE-T Ethernet features when configured as CHPID type OSD (QDIO mode only).
When checksum is offloaded, the OSA-Express feature runs the checksum calculations for Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) packets. The checksum offload function applies to packets that go to or come from the LAN.
When multiple IP stacks share an OSA-Express feature and one IP stack sends a packet to a next-hop address that is owned by another IP stack that shares the same feature, OSA-Express sends the IP packet directly to the other IP stack. The packet does not have to be placed on the LAN; such traffic is termed LPAR-to-LPAR traffic. Checksum offload is enhanced to support this LPAR-to-LPAR traffic, which was not originally available.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Querying and displaying an OSA configuration
OSA-Express3 introduced the capability for the operating system to directly query and display the current OSA configuration information (similar to OSA/SF). z/OS uses this OSA capability by introducing the TCP/IP operator command Display OSAINFO. z/VM provides this function with the NETSTAT OSAINFO TCP/IP command.
The use of display OSAINFO (z/OS) or NETSTAT OSAINFO (z/VM) allows the operator to monitor and verify the current OSA configuration and helps improve the overall management, serviceability, and usability of OSA-Express cards.
These commands apply to CHPID type OSD. The supported operating systems are listed in Table 7-8 on page 255.
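For example (a sketch with hypothetical stack and port names), an operator can display the configuration of an OSA port with the following commands:
On z/OS:  D TCPIP,TCPIPA,OSAINFO,PORTNAME=GBE1
On z/VM:  NETSTAT OSAINFO
The output includes details such as the port configuration and the addresses that are registered with the OSA.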
QDIO data connection isolation for z/VM
The QDIO data connection isolation function provides a higher level of security when sharing an OSA connection in z/VM environments that use VSWITCH. The VSWITCH is a virtual network device that provides switching between OSA connections and the connected guest systems.
QDIO data connection isolation allows internal routing to be disabled for each QDIO connection. It also provides a means for creating security zones and preventing network traffic between the zones.
QDIO data connection isolation is supported by all OSA-Express features on z14. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
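For example (VSW1 is a hypothetical virtual switch name), isolation can be enabled for a z/VM virtual switch with a single CP command:
SET VSWITCH VSW1 ISOLATION ON
With isolation active, guests on VSW1 can still reach the external network through the UPLINK port, but internal routing to other hosts that share the same OSA connection is disabled.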
QDIO interface isolation for z/OS
Some environments require strict controls for routing data traffic between servers or nodes. In certain cases, the LPAR-to-LPAR capability of a shared OSA connection can prevent such controls from being enforced. With interface isolation, internal routing can be controlled on an LPAR basis. When interface isolation is enabled, the OSA discards any packets that are destined for a z/OS LPAR that is registered in the OSA Address Table (OAT) as isolated.
QDIO interface isolation is supported on all OSA-Express features on z14. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
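A minimal sketch of the z/OS definition follows (the interface, port, and IP address values are hypothetical; verify the keyword placement against the z/OS Communications Server IP Configuration Reference). Interface isolation is requested with the ISOLATE keyword on the INTERFACE statement in the TCP/IP profile:
INTERFACE OSAETH1 DEFINE IPAQENET PORTNAME GBE1 ISOLATE IPADDR 10.1.1.1/24
When ISOLATE is coded, the OSA discards packets that another sharing LPAR sends directly to this interface, forcing that traffic onto the external LAN where controls can be enforced.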
QDIO optimized latency mode
QDIO optimized latency mode (OLM) can help improve performance for applications that feature a critical requirement to minimize response times for inbound and outbound data.
OLM optimizes the interrupt processing in the following manner:
For inbound processing, the TCP/IP stack looks more frequently for available data to process. This process ensures that any new data is read from the OSA-Express features without needing more program controlled interrupts (PCIs).
For outbound processing, the OSA-Express cards also look more frequently for available data to process from the TCP/IP stack. Therefore, the process does not require a Signal Adapter (SIGA) instruction to determine whether more data is available.
The supported operating systems are listed in Table 7-8 on page 255.
QDIO Diagnostic Synchronization
QDIO Diagnostic Synchronization enables system programmers and network administrators to coordinate and simultaneously capture software and hardware traces. It allows z/OS to signal OSA-Express features (by using a diagnostic assist function) to stop traces and capture the current trace records.
QDIO Diagnostic Synchronization is supported by the OSA-Express features on z14 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 255.
Adapter interruptions for QDIO
Linux on Z and z/VM work together to provide performance improvements by using extensions to the QDIO architecture. First added to z/Architecture with HiperSockets, adapter interruptions provide an efficient, high-performance technique for I/O interruptions to reduce path lengths and processor usage. These reductions are in the host operating system and the adapter (supported OSA-Express cards when CHPID type OSD is used).
In extending the use of adapter interruptions to OSD (QDIO) channels, the processor utilization to handle a traditional I/O interruption is reduced. This configuration benefits OSA-Express TCP/IP support in z/VM, z/VSE, and Linux on Z. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Inbound workload queuing (IWQ) for OSA
OSA-Express3 introduced inbound workload queuing (IWQ), which creates multiple input queues and allows OSA to differentiate workloads “off the wire.” It then assigns the work to a specific input queue (per device) for z/OS.
Each input queue is a unique type of workload, and has unique service and processing requirements. The IWQ function allows z/OS to preassign the appropriate processing resources for each input queue. This approach allows multiple concurrent z/OS processing threads to process each unique input queue (workload), which avoids traditional resource contention.
IWQ reduces the conventional z/OS processing that is required to identify and separate unique workloads. This advantage results in improved overall system performance and scalability.
A primary objective of IWQ is to provide improved performance for business-critical interactive workloads by reducing contention that is created by other types of workloads. In a heavily mixed workload environment, this “off the wire” network traffic separation is provided by OSA-Express6S, OSA-Express5S, or OSA-Express4S9 features that are defined as CHPID type OSD. OSA IWQ is shown in Figure 7-5.
Figure 7-5 OSA inbound workload queuing (IWQ)
The following types of z/OS workloads are identified and assigned to unique input queues:
z/OS Sysplex Distributor traffic:
Network traffic that is associated with a distributed virtual Internet Protocol address (VIPA) is assigned to a unique input queue. This configuration allows the Sysplex Distributor traffic to be immediately distributed to the target host.
z/OS bulk data traffic:
Network traffic that is dynamically associated with a streaming (bulk data) TCP connection is assigned to a unique input queue. This configuration allows the bulk data processing to be assigned the appropriate resources and isolated from critical interactive workloads.
EE (Enterprise Extender / SNA traffic):
IWQ for the OSA-Express features is enhanced to differentiate and separate inbound Enterprise Extender traffic to a dedicated input queue.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
VLAN management enhancements
VLAN management enhancements are valid for supported OSA-Express features on z14 that are defined as CHPID type OSD. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
GARP VLAN Registration Protocol (GVRP)
All OSA-Express9,10 features support VLAN prioritization, which is a component of the IEEE 802.1 standard. GARP VLAN Registration Protocol (GVRP) support allows an OSA-Express port to register or unregister its VLAN IDs with a GVRP-capable switch and dynamically update its table as the VLANs change. This process simplifies the network administration and management of VLANs because manually entering VLAN IDs at the switch is no longer necessary. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Link aggregation support for z/VM
Link aggregation (IEEE 802.3ad) that is controlled by the z/VM Virtual Switch (VSWITCH) allows the dedication of an OSA-Express9,10 port to the z/VM operating system. The port must be participating in an aggregated group that is configured in Layer 2 mode. Link aggregation (trunking) combines multiple physical OSA-Express ports into a single logical link. This configuration increases throughput, and provides nondisruptive failover if a port becomes unavailable. The target links for aggregation must be of the same type.
Link aggregation is applicable to CHPID type OSD (QDIO). The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
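As an illustration only (the group name, device numbers, and switch name are hypothetical, and the operands should be verified against the z/VM CP Commands and Utilities Reference), a LAG is typically built by defining a port group and then associating it with a virtual switch:
SET PORT GROUP ETHGRP JOIN 1100 2100
DEFINE VSWITCH VSW1 ETHERNET GROUP ETHGRP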
Multi-VSwitch Link Aggregation
Multi-VSwitch Link Aggregation support allows a port group of OSA-Express features to span multiple virtual switches within a single z/VM system or between multiple z/VM systems. Sharing a Link Aggregation Port Group (LAG) with multiple virtual switches increases optimization and utilization of the OSA-Express features when handling larger traffic loads.
Higher adapter utilization protects customer investments, which is increasingly important as 10 GbE deployments become more prevalent. The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Large send for IPv6 packets
Large send for IPv6 packets improves performance by offloading outbound TCP segmentation processing from the host to an OSA-Express feature by employing a more efficient memory transfer into the feature.
Large send support for IPv6 packets applies to the OSA-Express6S, OSA-Express5S, and OSA-Express4S9 features (CHPID type OSD) on z14, z13, z13s, zEC12, and zBC12.
z13 added support of large send for IPv6 packets (segmentation offloading) for LPAR-to-LPAR traffic. OSA-Express6S on z14 added TCP checksum on large send, which reduces the cost (CPU time) of error detection for large send.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
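Segmentation offload is controlled in the z/OS TCP/IP profile. A minimal sketch follows (on supporting hardware and z/OS levels, the same keyword covers IPv4 and IPv6 large send):
GLOBALCONFIG SEGMENTATIONOFFLOAD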
OSA Dynamic LAN idle
The OSA Dynamic LAN idle parameter change helps reduce latency and improve performance by dynamically adjusting the inbound blocking algorithm. System administrators can authorize the TCP/IP stack to enable a dynamic setting that was previously static.
The blocking algorithm is modified based on the following application requirements:
For latency-sensitive applications, the blocking algorithm is modified considering latency.
For streaming (throughput-sensitive) applications, the blocking algorithm is adjusted to maximize throughput.
In all cases, the TCP/IP stack determines the best setting based on the current system and environmental conditions, such as inbound workload volume, processor utilization, and traffic patterns. It can then dynamically update the settings.
Supported OSA-Express features adapt to the changes, which avoids thrashing and frequent updates to the OAT. Based on the TCP/IP settings, OSA holds the packets before presenting them to the host. A dynamic setting is designed to avoid or minimize host interrupts.
OSA Dynamic LAN idle is supported by the OSA-Express6S, OSA-Express5S, and OSA-Express4S9 features on z14 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 255.
OSA Layer 3 virtual MAC for z/OS environments
To help simplify the infrastructure and facilitate load balancing when an LPAR is sharing an OSA MAC address with another LPAR, each operating system instance can have its own unique logical or virtual MAC (VMAC) address. All IP addresses that are associated with a TCP/IP stack are accessible by using their own VMAC address instead of sharing the MAC address of an OSA port. This situation also applies to Layer 3 mode and to an OSA port spanned among channel subsystems.
OSA Layer 3 VMAC is supported by the OSA-Express6S, OSA-Express5S, and OSA-Express4S9 features on z14 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 255.
Network Traffic Analyzer
The z14 offers systems programmers and network administrators the ability to more easily solve network problems despite high traffic. With the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the server, you can capture trace and trap data. This data can then be forwarded to z/OS tools for easier problem determination and resolution.
The Network Traffic Analyzer is supported by the OSA-Express6S, OSA-Express5S, and OSA-Express4S9 features on z14 when in QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on page 255.
7.4.6 Cryptography Features and Functions Support
IBM z14™ provides the following major groups of cryptographic functions:
Synchronous cryptographic functions, which are provided by CPACF
Asynchronous cryptographic functions, which are provided by the Crypto Express6S feature
The minimum software support levels are described in the following sections. Review the current PSP buckets to ensure that the latest support levels are known and included as part of the implementation plan.
CP Assist for Cryptographic Function
Central Processor Assist for Cryptographic Function (CPACF), which is standard11 on every z14 core, now supports pervasive encryption. Simple policy controls allow businesses to enable encryption that protects data in mission-critical databases without needing to stop the database or re-create database objects. Database administrators can use z/OS Data Set Encryption, z/OS Coupling Facility Encryption, z/VM encrypted hypervisor paging, and z/TPF transparent database encryption, which use the performance enhancements in the hardware.
CPACF supports the following features in z14:
Advanced Encryption Standard (AES, symmetric encryption)
Data Encryption Standard (DES, symmetric encryption)
Secure Hash Algorithm (SHA, hashing)
SHAKE Algorithms
True Random Number Generation (TRNG)
Improved Galois/Counter Mode (GCM) encryption (enabled by a single hardware instruction)
CPACF also is used by several IBM software product offerings for z/OS, such as IBM WebSphere Application Server for z/OS. For more information, see 6.4, “CP Assist for Cryptographic Functions” on page 216.
The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on page 260.
Crypto Express6S
Introduced with z14, Crypto Express6S complies with the following Physical Security Standards:
FIPS 140-2 level 4
Common Criteria EP11 EAL4
Payment Card Industry (PCI) HSM
German Banking Industry Commission (GBIC, formerly DK)
Support of Crypto Express6S functions varies by operating system and release and by the way the card is configured as a coprocessor or an accelerator. For more information, see 6.5, “Crypto Express6S” on page 220. The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on page 260.
Crypto Express5S (carry forward on z14)
Support of Crypto Express5S functions varies by operating system and release and by the way the card is configured as a coprocessor or an accelerator. The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on page 260.
Regional Crypto Enablement (RCE)
Starting with z13 GA2, IBM enabled geo-specific cryptographic support that is supplied by IBM approved vendors. China is the first geography to use this support to meet the cryptography requirements of Chinese clients that are required to comply with the People’s Bank of China Financial IC Card Specifications (PBOC 3.0) for payment card processing. When ordered, the Regional Crypto Enablement support reserves the I/O slot or slots for the IBM approved vendor-supplied cryptographic card or cards. Clients must contact the IBM approved vendor directly for purchasing information.
RCE is a framework to enable the integration of IBM certified third-party cryptographic hardware for regional or industry encryption requirements. It also supports the use of cryptography algorithms and equipment from selected providers with IBM Z in specific countries. Support for exploiting international algorithms (AES, DES, RSA, and ECC) with regional crypto devices (supporting regional algorithms, such as SMx) is added to the ICSF PKCS#11 services.
The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on page 260.
Web deliverables
For more information about web-deliverable code on z/OS, see the z/OS downloads website.
For Linux on Z, support is delivered through IBM and the distribution partners. For more information, see Linux on Z on the IBM developerWorks website.
z/OS Integrated Cryptographic Service Facility
Integrated Cryptographic Service Facility (ICSF) is a base component of z/OS. It is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express, to balance the workload and help address the bandwidth requirements of the applications.
Despite being a z/OS base component, ICSF functions are generally made available through web deliverable support a few months after a new z/OS release. Therefore, new functions are related to an ICSF function modification identifier (FMID) instead of a z/OS version.
ICSF HCR77D0 - Cryptographic Support for z/OS V2R2 and z/OS V2R3
z/OS V2.2 and V2.3 require ICSF Web Deliverable WD18 (HCR77D0) to support the following features:
Support for the updated German Banking standard (DK):
 – CCA 5.4 & 6.112:
 • ISO-4 PIN Blocks (ISO-9564-1)
 • Directed keys: A key can either encrypt or decrypt data, but not both.
 • Allow AES transport keys to be used to export/import DES keys in a standard ISO 20038 key block. This feature helps with interoperability between CCA and non-CCA systems.
 • Allow AES transport keys to be used to export/import a small subset of AES keys in a standard ISO 20038 key block. This feature helps with interoperability between CCA and non-CCA systems.
 • Triple-length TDES keys with Control Vectors for increased data confidentiality.
 – CCA 6.2: PCI HSM 3K DES - Support for triple length DES keys (standards compliance)
EP11 Stage 4:
 – New elliptic curve algorithms for PKCS#11 signature and key derivation operations:
 • Ed448 elliptic curve
 • EC25519 elliptic curve
 – EP11 Concurrent Patch Apply: Allows service to be applied to the EP11 coprocessor dynamically without taking the crypto adapter offline (already available for CCA coprocessors).
 – eIDAS compliance: eIDAS cross-border EU regulation for portable recognition of electronic identification.
ICSF HCR77C1 - Cryptographic Support for z/OS V2R1 - z/OS V2R3
ICSF Web Deliverable HCR77C1 provides support for the following features:
Usage and administration of Crypto Express6S
The feature can be configured as an accelerator (CEX6A), a CCA coprocessor (CEX6C), or an EP11 coprocessor (CEX6P).
Coprocessor in PCI-HSM Compliance Mode (enablement requires TKE 9.0 or newer).
z14 CPACF support. For more information, see “CP Assist for Cryptographic Function” on page 301.
The following software enhancements are available in ICSF Web Deliverable HCR77C1 when running on a z14 server:
Crypto Usage Statistics: When enabled, ICSF aggregates statistics that are related to crypto workloads and logs to an SMF record.
Panel-based CKDS Administration: ICSF added an ISPF, panel-driven interface that allows interactive administration (View, Create, Modify, and Delete) of CKDS keys.
CICS End User Auditing: When enabled, ICSF retrieves the CICS user identity and includes it as a log string in the SAF resource check. The user identity is not checked for access to the resource. Instead, it is included in the resource check (SMF Type 80) records that are logged for any of the ICSF SAF classes protecting crypto keys and services (CSFKEYS, XCSFKEY, CRYPTOZ, and CSFSERV).
For more information about ICSF versions and FMID cross-references, see the z/OS: ICSF Version and FMID Cross Reference, TD103782, abstract that is available at the IBM Techdocs website.
For PTFs that allow previous levels of ICSF to coexist with the Cryptographic Support for z/OS V2R1 - z/OS V2R3 (HCR77C1) web deliverable, check the following FIXCAT:
IBM.Coexistence.ICSF.z/OS_V2R1-V2R3-HCR77C1
RMF Support for Crypto Express6
RMF enhances the Monitor I Crypto Activity data gatherer to recognize and exploit performance data for the new Crypto Express6 (CEX6) card. RMF supports all valid card configurations on z14 and provides CEX6 crypto activity data in the SMF type 70 subtype 2 records and RMF Postprocessor Crypto Activity Report.
Reporting can be done at an LPAR/domain level to provide more granular reports for capacity planning and problem diagnosis. This feature requires the fix for APAR OA54952.
The supported operating systems are listed in Table 7-10 on page 259.
z/OS Data Set Encryption
Aligned with the IBM Z pervasive encryption initiative, IBM provides application-transparent, policy-controlled data set encryption in IBM z/OS.
Policy driven z/OS Data Set Encryption enables users to perform the following tasks:
Decouple encryption from data classification; encrypt data automatically, independent of labor-intensive data classification work.
Encrypt data immediately and efficiently at the time it is written.
Reduce risks that are associated with mis-classified or undiscovered sensitive data.
Help protect digital assets automatically.
Achieve application transparent encryption.
IBM Db2 for z/OS and IBM Information Management System (IMS) intend to use z/OS Data Set Encryption.
With z/OS Data Set Encryption, DFSMS enhances data security with support for data set level encryption by using DFSMS access methods. This function is designed to give users the ability to encrypt their data sets without changing their application programs. DFSMS users can identify which data sets require encryption by using JCL, the Data Class, or the RACF data set profile. Data set level encryption can allow the data to remain encrypted during functions such as backup and restore, migration and recall, and replication.
z/OS Data Set Encryption requires CP Assist for Cryptographic Functions (CPACF). For protected keys, it requires z196 or later Z servers with CEX3 or later. The degree of encryption performance improvement is based on the encryption mode that is used.
Considering the significant enhancements that were introduced with z14, the XTS encryption mode is used by access method encryption to obtain the best performance possible. Do not enable z/OS data set encryption until all sharing systems, and all fallback, backup, and disaster recovery (DR) systems, support encryption.
In addition to applying the PTFs that enable the support, ICSF must be configured. The supported operating systems are listed in Table 7-10 on page 259.
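For illustration, the following JCL sketch assigns a key label to a new data set (the data set name, key label, and DCEXT data class are hypothetical; the AES-256 data key must already exist in the CKDS under that label, and the data set must be an SMS-managed extended-format data set):
//NEWDS    DD DSN=PROD.PAYROLL.DATA,DISP=(NEW,CATLG),
//            DSKEYLBL='PROD.PAYROLL.KEY001',
//            DATACLAS=DCEXT,SPACE=(CYL,(10,5))
Alternatively, the key label can be supplied through the data class or the DFP segment of the RACF data set profile, which leaves the JCL unchanged.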
Crypto Analytics Tool for Z
The IBM Crypto Analytics Tool (CAT) is an analytics solution that collects data about your z/OS cryptographic infrastructure, presents reports, and analyzes it for vulnerabilities. CAT collects cryptographic information from across the enterprise and provides reports to help users better manage the crypto infrastructure and ensure that it follows best practices. The use of CAT can help you manage complex cryptography resources across your organization.
z/VM encrypted hypervisor paging (encrypted paging support)
With the PTF for APAR VM65993 (planned to be available December 15, 2017), z/VM V6.4 will provide support for encrypted paging in support of the z14 pervasive encryption philosophy of encrypting all data in flight and at rest. Ciphering will occur as data moves between active memory and a paging volume owned by z/VM.
Included in this support is the ability to dynamically control whether a running z/VM system is encrypting this data. This support will protect guest paging data from administrators or users with access to volumes. Enabled with AES encryption, z/VM Encrypted Paging includes low overhead by using CPACF.
The supported operating systems are listed in Table 7-10 on page 259.
z/TPF transparent database encryption
Shipped in August 2016, z/TPF at-rest data encryption provides the following features and benefits:
Automatic encryption of at-rest data by using AES CBC (128 or 256).
No application changes required.
Database level encryption by using highly efficient CPACF.
Inclusion of data on disk and cached in memory.
Ability to include data integrity checking (optionally by using SHA-256) to detect accidental or malicious data corruption.
Tools to migrate a database from an unencrypted to an encrypted state, or to change the encryption key or algorithm for a specific database while transactions are flowing (no database downtime).
Pervasive encryption for Linux on Z
Pervasive encryption for Linux on Z combines the full power of Linux with z14 capabilities through the following features:
Kernel Crypto: z14 CPACF
LUKS dm-crypt: Protected-key CPACF
Libica and openssl: z14 CPACF and acceleration of RSA handshakes by using SIMD
Secure Service Container: High security virtual appliance deployment infrastructure
Protection of data at-rest
By integrating industry-unique hardware-accelerated CPACF encryption into standard Linux components, users can achieve optimized encryption transparently while preventing raw key material from being visible to the operating system and applications.
Protection of data in-flight
Because of the potential costs and overhead, most organizations avoid host-based network encryption today. By using enhanced CPACF and SIMD on z14, TLS and IPSec can use hardware performance gains while benefiting from transparent enablement. The reduced cost of encryption enables broad use of network encryption.
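As a sketch under stated assumptions (/dev/dasdb1 is a hypothetical device, and options should be verified against the cryptsetup and s390-tools levels in your distribution), an encrypted volume that exploits CPACF can be created with standard Linux tooling:
# AES-XTS through dm-crypt; on Z, the kernel uses CPACF for AES transparently
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/dasdb1
For protected-key operation, the zkey tool from s390-tools generates a secure key that dm-crypt can use with the paes cipher, so that the clear key value never appears in Linux memory.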
7.4.7 Special-purpose Features and Functions
This section describes the zEnterprise Data Compression Express.
zEnterprise Data Compression Express
The growth of data that must be captured, transferred, and stored for extended periods is unrelenting. Software-implemented compression algorithms are costly in terms of processor resources, and storage costs are not negligible.
zEnterprise Data Compression (zEDC) Express is an optional feature that is available on z14, z13, z13s, zEC12, and zBC12 servers that addresses those requirements by providing hardware-based acceleration for data compression and decompression. zEDC provides data compression with lower CPU consumption than the compression technology that previously was available on Z servers.
Support for data recovery (decompression) when zEDC is not installed, or is installed but not available on the system, is provided through software on z/OS V2R2, z/OS V2R1, and z/OS V1R13 with the required PTFs applied. Software decompression is slow and uses considerable processor resources, so it is not recommended for production environments.
zEDC supports QSAM/BSAM (non-VSAM) data set compression, which can be enabled in either of the following ways:
Data class level: Two new values, zEDC Required (ZR) and zEDC Preferred (ZP), can be set with the COMPACTION option in the data class.
System level: Two new values, zEDC Required (ZEDC_R) and zEDC Preferred (ZEDC_P), can be specified with the COMPRESS parameter in the IGDSMSxx member of SYS1.PARMLIB.
Data class takes precedence over system level. The supported operating systems are listed in Table 7-12 on page 261.
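For example (a sketch; DCZEDC is a hypothetical data class name), the system-level default can be set in the IGDSMSxx member:
COMPRESS(ZEDC_P)   /* prefer zEDC compression when available */
A data class with COMPACTION set to ZP or ZR can then be assigned in JCL with DATACLAS=DCZEDC; because data class takes precedence, that setting overrides the system-level default.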
7.5 z/OS migration considerations
Except for base processor support, z/OS releases do not require any of the functions that are introduced with the z14. The minimal toleration support that is needed depends on the z/OS release.
Although z14 servers do not require any “functional” software, it is recommended to install all z14 service before upgrading to the new server. The support matrix for z/OS releases and the Z servers that support them are listed in Table 7-16.
Table 7-16 z/OS Support Summary
z/OS Release | z9 EC, z9 BC (WDFM1) | z10 EC, z10 BC (WDFM1) | z196, z114 (WDFM1) | zEC12, zBC12 (WDFM1) | z13, z13s | z14 | End of Service | Extended Defect Support2
V1R13 | X | X | X | X | X | X2 | 09/2016 | 09/20193
V2R1 | X | X | X | X | X | X4 | 09/2018 | 09/20213
V2R2 | - | X | X | X | X | X | 09/20203 | 09/20233
V2R3 | - | - | - | X | X | X | 09/20223 | 09/20253

1 Server was withdrawn from marketing.
2 The IBM Software Support Services for z/OS V1.13, which was offered as of October 1, 2016, provides the ability for customers to purchase extended defect support service for z/OS V1.13.
3 Planned. All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice.
4 The IBM Software Support Services for z/OS V2.1 provides the ability for customers to purchase extended defect support service for z/OS V2.1.
7.5.1 General guidelines
The IBM z14™ introduces the latest IBM Z technology. Although support is provided by z/OS starting with z/OS V1R13, the capabilities and use of z14 depend on the z/OS release. Also, web deliverables13 are needed for some functions on some releases. In general, consider the following guidelines:
Do not change software releases and hardware at the same time.
Keep members of the sysplex at the same software level, except during brief migration periods.
Migrate to an STP-only network before introducing a z14 into a sysplex.
Review any restrictions and migration considerations before creating an upgrade plan.
Acknowledge that some hardware features cannot be ordered or carried forward for an upgrade from an earlier server to z14 and plan accordingly.
Determine the changes in IOCP, HCD, and HCM to support defining z14 configuration and the new features and functions it introduces.
Ensure that none of the new z/Architecture machine instructions (mnemonics) that were introduced with z14 collide with the names of Assembler macro instructions that you use14.
Check the use of MACHMIG statements in the LOADxx PARMLIB member (see the example that follows this list).
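For example (a sketch; the facilities that you list depend on your migration scenario), a LOADxx member can direct z/OS not to use the vector extension facility, so that the system can later be moved back to hardware that lacks it:
MACHMIG VEF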
7.5.2 Hardware Fix Categories (FIXCATs)
Base support includes fixes that are required to run z/OS on the IBM z14™ server. They are identified by:
IBM.Device.Server.z14-3906.RequiredService
Exploitation support covers fixes that are required to use the capabilities of the IBM z14™ server. They are identified by:
IBM.Device.Server.z14-3906.Exploitation
Recommended service is identified by:
IBM.Device.Server.z14-3906.RecommendedService
Support for z14 is provided by using a combination of web deliverables and PTFs, which are documented in PSP Bucket Upgrade = 3906DEVICE, Subset = 3906/ZOS.
Consider the following other Fix Categories of Interest:
Fixes that are required to use Parallel Sysplex InfiniBand Coupling links:
IBM.Function.ParallelSysplexInfiniBandCoupling
Fixes that are required to use the Server Time Protocol function:
IBM.Function.ServerTimeProtocol
Fixes that are required to use the High-Performance FICON function:
IBM.Function.zHighPerformanceFICON
PTFs that allow previous levels of ICSF to coexist with the latest Cryptographic Support for z/OS 2.2 - z/OS V2R3 (HCR77D0) web deliverable:
IBM.Coexistence.ICSF.z/OS_V2R2-V2R3-HCR77D0
PTFs that allow previous levels of ICSF to coexist with the Cryptographic Support for z/OS 2.1 - z/OS V2R3 (HCR77C1) web deliverable:
IBM.Coexistence.ICSF.z/OS_V2R1-V2R3-HCR77C1
Use the SMP/E REPORT MISSINGFIX command to determine whether any FIXCAT APARs exist that are applicable and are not yet installed, and whether any SYSMODs are available to satisfy the missing FIXCAT APARs.
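For example (ZOSTGT is a hypothetical target zone name), the following SMP/E commands report missing z14 fixes:
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(ZOSTGT)
       FIXCAT(IBM.Device.Server.z14-3906.*).
The report lists the applicable APARs that are not yet installed and any SYSMODs that satisfy them.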
For more information about IBM Fix Category Values and Descriptions, see the IBM Fix Category Values and Descriptions page of the IBM IT infrastructure website.
7.5.3 Coupling links
z14 servers support active participation in the same Parallel Sysplex only with z13, z13s, IBM zEC1215, and zBC12 servers. Configurations with z/OS or a Coupling Facility on one of these servers can add a z14 server to their sysplex for a z/OS or a Coupling Facility image. z14 does not support participating in a Parallel Sysplex with System z196 and earlier systems.
Each system can use, or not use, internal coupling links, InfiniBand coupling links, or ICA coupling links independently of what other systems are using.
Coupling connectivity is available only when other systems also support the same type of coupling. For more information about supported coupling link technologies on z14, see 4.7.4, “Parallel Sysplex connectivity” on page 184, and the Coupling Facility Configuration Options white paper.
 
7.5.4 z/OS XL C/C++ considerations
z/OS V2R1 or later, with PTFs, is required to use the latest level (12) of the following C/C++ compiler options:
ARCHITECTURE: This option selects the minimum level of system architecture on which the program can run. Certain features that are provided by the compiler require a minimum architecture level. ARCH(12) uses instructions that are available on the z14.
TUNE: This option allows optimization of the application for a specific system architecture, within the constraints that are imposed by the ARCHITECTURE option. The TUNE level must not be lower than the setting in the ARCHITECTURE option.
The following new functions provide performance improvements for applications by using new z14 instructions:
Vector Programming Enhancements
New z14 hardware instruction support
Packed Decimal support using vector registers
Auto-SIMD enhancements to make use of new data types
To enable the use of the new functions, specify ARCH(12) and VECTOR for compilation. The binaries that are produced can be run only on z14 and later servers because the new functions use the z14 vector facility. Older versions of the compiler on z14 do not enable the new functions.
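For illustration, the following sketch assumes compilation under z/OS UNIX with the c89 utility (the application and file names are hypothetical; the same options can be supplied in the PARM field of a batch compile step):
c89 -Wc,"ARCH(12),TUNE(12),VECTOR" -o vecapp vecapp.c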
For more information about the ARCHITECTURE, TUNE, and VECTOR compiler options, see z/OS V2R2.0 XL C/C++ User’s Guide, SC09-4767.
 
Important: Use the ARCHITECTURE and TUNE options for the earlier Z server if the same C/C++ applications run on both the z14 and previous IBM Z servers. However, if C/C++ applications run only on z14 servers, use the latest ARCHITECTURE and TUNE options to ensure that the best performance possible is delivered through the latest instruction set additions.
For more information, see Migration from z/OS V2R1 to z/OS V2R2, GA32-0889.
7.5.5 z/OS V2.3
Consider the following points before migrating z/OS V2.3 to IBM z14 Model ZR1:
IBM z/OS V2.3 with z14 ZR1 requires a minimum of 8 GB of memory. When running as a z/VM guest or on an IBM System z Personal Development Tool, a minimum of 2 GB is required for z/OS V2.3. If the minimum is not met, a warning WTOR is issued at IPL.
Continuing with less than the minimum memory might affect availability. A migration health check is introduced on z/OS V2.1 and z/OS V2.2 to warn if the system is configured with less than 8 GB.
Dynamic splitting and merging of Coordinated Timing Network (CTN) is available with z14 ZR1.
The z/OS V2.3 real storage manager (RSM) is planned to support a new asynchronous memory clear operation that clears the data from 1 MB page frames by using I/O processors (SAPs) on next-generation processors. The new asynchronous memory clear operation eliminates the CPU cost for this operation and helps improve the performance of RSM first-reference page fault processing and system services, such as IARV64 and STORAGE OBTAIN.
RMF support is provided to collect SMC-D related performance measurements in SMF 73 Channel Path Activity and SMF 74 subtype 9 PCIE Activity records. It also provides these measurements in the RMF Postprocessor and Monitor III PCIE and Channel Activity reports. This support is also available on z/OS V2.2 with PTF UA80445 for APAR OA49113.
HyperSwap support is enhanced to allow RESERVE processing. When a system runs a request to swap to secondary devices that are managed by HyperSwap, z/OS detects when RESERVEs are held and ensures that the devices that are swapped also hold the RESERVE. This enhancement is provided with collaboration from z/OS, GDPS HyperSwap, and CSM HyperSwap.
7.6 z/VM migration considerations
IBM z14 supports z/VM 7.1 and z/VM 6.4. z/VM is moving to a continuous delivery model. For more information, see this web page.
7.6.1 z/VM 7.1
z/VM 7.1 can be installed directly on IBM z14. z/VM V7R1 includes the following new features:
Single System Image and Live Guest Relocation are included in the base. In z/VM 6.4, this function was available as the separately priced VMSSI feature.
Enhances the dump process to reduce the time that is required to create and process dumps.
Upgrades to a new Architecture Level Set. This feature requires an IBM zEnterprise EC12 or BC12, or later.
Provides the base for additional functionality to be delivered as service Small Program Enhancements (SPEs) after general availability.
z/VM 7.1 includes SPEs shipped for z/VM 6.4, including Virtual Switch Enhanced Load Balancing, DS8K z-Thin Provisioning, and Encrypted Paging.
7.6.2 z/VM V6.4
z/VM V6.4 can be installed directly on a z14 server with an image that is obtained from IBM after August 25, 2017. The PTF for APAR VM65942 must be applied immediately after installing z/VM V6.4 and before configuring any part of the new z/VM system.
A z/VM Release Status Summary for supported z/VM versions is listed in Table 7-17.
Table 7-17 z/VM Release Status Summary
z/VM Level1 | General Availability | End of Marketing | End of Service | Minimum Processor Level | Maximum Processor Level
7.1 | September 2018 | Not announced | Not announced | zEC12 & zBC12 | -
6.4 | November 2016 | Not announced | Not announced | z196 & z114 | -

1 Older z/VM versions (6.3, 6.2, and 5.4) are end of support.
7.6.3 ESA/390-compatibility mode for guests
IBM z14™ no longer supports the full ESA/390 architectural mode. However, IBM z14™ does provide ESA/390-compatibility mode, which is an environment that supports a subset of DAT-off ESA/390 applications in a hybrid architectural mode.
z/VM provides the support necessary for DAT-off guests to run in this new compatibility mode. This support allows guests, such as CMS, GCS, and those that start in ESA/390 mode briefly before switching to z/Architecture mode, to continue to run on IBM z14™.
The available PTF for APAR VM65976 provides infrastructure support for ESA/390 compatibility mode within z/VM V6.4. It must be installed on all members of an SSI cluster before any z/VM V6.4 member of the cluster is run on an IBM z14™ server.
In addition to operating system support, all the stand-alone utilities that a client uses must be at a minimum level or require a PTF.
7.6.4 Capacity
You might want to adjust the number of Integrated Facility for Linux (IFL) processors and central processors (CPs), real or virtual, for any z/VM logical partition (LPAR) and any z/VM guest to accommodate the processor unit (PU) capacity of z14 servers.
7.7 z/VSE migration considerations
As described in “z/VSE” on page 246, IBM z14 ZR1 supports z/VSE 6.2, z/VSE 6.1, z/VSE 5.2, and z/VSE 5.116.
Consider the following general guidelines when you are migrating a z/VSE environment to z14 ZR1 servers:
Collect reference information before migration
This information includes baseline data that reflects, for example, performance data, CPU utilization of a reference workload, I/O activity, and elapsed times.
This information is required to size z14 ZR1 and is the only way to compare workload characteristics after migration.
For more information, see the z/VSE Release and Hardware Upgrade document.
Apply required maintenance for z14 ZR1
Review the Preventive Service Planning (PSP) bucket 3907DEVICE for z14 ZR1 and apply the required PTFs for IBM and independent software vendor (ISV) products.
 
Note: z14 ZR1 supports z/Architecture mode only.
7.8 Software licensing
The IBM z14™ software portfolio includes operating system software (that is, z/OS, z/VM, z/VSE, and z/TPF) and middleware that runs on these operating systems. The portfolio also includes middleware for Linux on Z environments.
For the z14, the following metric groups for software licensing are available from IBM, depending on the software product:
Monthly license charge (MLC)
MLC pricing metrics feature a recurring charge that applies each month. In addition to the permission to use the product, the charge includes access to IBM product support during the support period. MLC pricing applies to z/OS, z/VSE, and z/TPF operating systems. Charges are based on processor capacity, which is measured in millions of service units (MSU) per hour.
International Program License Agreement (IPLA)
IPLA metrics have a single, up-front charge for an entitlement to use the product. An optional and separate annual charge (called subscription and support) entitles clients to access IBM product support during the support period. With this option, you can also receive future releases and versions at no extra charge.
Software Licensing References
For more information about software licensing, see the following websites:
The IBM International Passport Advantage® Agreement can be downloaded from the Learn about Software licensing website.
Subcapacity license charges
For eligible programs, subcapacity licensing allows software charges that are based on the measured utilization by logical partitions instead of the total number of MSUs of the CPC. Subcapacity licensing removes the dependency between the software charges and CPC (hardware) installed capacity.
The subcapacity licensed products are charged monthly based on the highest observed 4-hour rolling average utilization of the logical partitions in which the product runs. The exception is products that are licensed by using the Select Application License Charge (SALC) pricing metric. This type of charge requires measuring the utilization and reporting it to IBM.
The 4-hour rolling average utilization of the logical partition can be limited by a defined capacity value on the image profile of the partition. This value activates the soft capping function of the PR/SM, which limits the 4-hour rolling average partition utilization to the defined capacity value. Soft capping controls the maximum 4-hour rolling average usage (the last 4-hour average value at every 5-minute interval), but does not control the maximum instantaneous partition use.
You can also use an LPAR group capacity limit, which sets soft capping by PR/SM for a group of logical partitions that are running z/OS.
Even by using the soft capping option, the use of the partition can reach up to its maximum share based on the number of logical processors and weights in the image profile. Only the 4-hour rolling average utilization is tracked, which allows utilization peaks above the defined capacity value.
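For example (hypothetical numbers), consider a partition with a defined capacity of 100 MSU. The partition can momentarily consume 150 MSU, provided that its 4-hour rolling average remains at or below 100 MSU; when the rolling average reaches 100 MSU, PR/SM caps the partition until the average drops below the defined capacity. A subcapacity product that runs only in this partition is then charged on a peak 4-hour rolling average of at most 100 MSU for the month.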
Some pricing metrics apply to stand-alone Z servers. Others apply to the aggregation of multiple Z server workloads within the same Parallel Sysplex.
For more information about WLC and details about how to combine logical partition utilization, see z/OS Planning for Sub-Capacity Pricing, SA23-2301.
Key MLC Metrics and Offerings
MLC metrics include various offerings. The following metrics and pricing schemes are available. Offerings often are tied to, or made available only on, certain Z servers:
Key MLC Metrics
 – WLC (Workload License Charges)
 – AWLC (Advanced Workload License Charges)
 – CMLC (Country Multiplex License Charges)
 – VWLC (Variable Workload License Charges)
 – FWLC (Flat Workload License Charges)
 – AEWLC (Advanced Entry Workload License Charges)
 – EWLC (Entry Workload License Charges)
 – TWLC (Tiered Workload License Charges)
 – zNALC (System z New Application License Charges)
 – PSLC (Parallel Sysplex License Charges)
 – MWLC (Midrange Workload License Charges)
 – zELC (zSeries Entry License Charges)
 – GOLC (Growth Opportunity License Charges)
 – SALC (Select Application License Charges)
Pricing
 – GSSP (Getting Started Sub-Capacity Pricing)
 – IWP (Integrated Workload Pricing)
 – MWP (Mobile Workload Pricing)
 – zCAP (Z Collocated Application Pricing)
 – Parallel Sysplex Aggregated Pricing
 – CMP (Country Multiplex Pricing)
 – ULC (IBM S/390® Usage Pricing)
One of the recent changes in software licensing for z/OS and z/VSE is Multi-Version Measurement (MVM), which replaced Single Version Charging (SVC), Migration Pricing Option (MPO), and the IPLA Migration Grace Period.
MVM for z/OS and z/VSE removes time limits for running multiple eligible versions of a software program. Clients can run different versions of a program simultaneously for an unlimited duration during a program version upgrade.
Clients can also choose to run multiple different versions of a program simultaneously for an unlimited duration in a production environment. MVM allows clients to selectively deploy new software versions, which provides more flexible control over their program upgrade cycles. For more information, see Software Announcement 217-093, dated February 14, 2017.
Technology Transition Offerings with z14
Complementing the announcement of the z14 server, IBM has introduced the following Technology Transition Offerings (TTOs):
 – Technology Update Pricing for the IBM z14™
 – New and revised Transition Charges for Sysplexes or Multiplexes TTOs for actively coupled Parallel Sysplexes (z/OS), Loosely Coupled Complexes (z/TPF), and Multiplexes (z/OS and z/TPF)
Technology Update Pricing for the IBM z14™ extends the software price-performance that is provided by AWLC and CMLC to z14 servers. The new and revised Transition Charges for Sysplexes or Multiplexes offerings provide a transition to Technology Update Pricing for the IBM z14™ for customers who have not yet fully migrated to z14 servers. This transition ensures that aggregation benefits are maintained, and phases in the benefits of Technology Update Pricing for the IBM z14™ as customers migrate.
When a z14 server is in an actively coupled Parallel Sysplex or a Loosely Coupled Complex, you can choose aggregated Advanced Workload License Charges (AWLC) pricing or aggregated Parallel Sysplex License Charges (PSLC) pricing (subject to all applicable terms and conditions).
When a z14 server is part of a Multiplex under Country Multiplex Pricing (CMP) terms, Country Multiplex License Charges (CMLC), Multiplex zNALC (MzNALC), and Flat Workload License Charges (FWLC) are the only pricing metrics available (subject to all applicable terms and conditions).
For more information about software pricing for the z14 server, see Software Announcement 217-273, Technology Transition Offerings for the IBM z14™ offer price-performance advantages, dated July 17, 2017.
When a z14 server is running z/VSE, you can choose Mid-Range Workload License Charges (MWLC) (subject to all applicable terms and conditions).
For more information about AWLC, CMLC, MzNALC, PSLC, MWLC, or the Technology Update Pricing and Transition Charges for Sysplexes or Multiplexes TTO offerings, see the IBM z Systems Software Pricing page of the IBM IT infrastructure website.
7.9 References
For more information about planning, see the home pages for each of the following operating systems:
z/OS
z/VM
z/TPF