Software support
This chapter lists the minimum operating system requirements and support considerations for the IBM z13 (z13) and its features. It addresses z/OS, z/VM, z/VSE, z/TPF, and Linux on z Systems. Because this information is subject to change, see the Preventive Service Planning (PSP) bucket for 2964DEVICE for the most current information. Also included is generic software support for IBM z BladeCenter Extension (zBX) Model 004.
Support of z13 functions depends on the operating system, its version, and release.
7.1 Operating systems summary
Table 7-1 lists the minimum operating system levels that are required on the z13. For similar information about the IBM z BladeCenter Extension (zBX) Model 004, see 7.15, “IBM z BladeCenter Extension (zBX) Model 004 software support” on page 298.
 
End-of-service operating systems: Operating system levels that are no longer in service are not covered in this publication. These older levels might provide support for some features.
Table 7-1 z13 minimum operating system requirements
Operating system | ESA/390 (31-bit mode) | z/Architecture (64-bit mode)
z/OS V1R12¹ | No | Yes
z/VM V6R2² | No | Yes³
z/VSE V5R1 | No | Yes
z/TPF V1R1 | Yes | Yes
Linux on z Systems | No⁴ | Yes
Notes: Service is required. See the following box, which is titled “Features”.
1 Regular service support for z/OS V1R12 ended September 2014. However, by ordering the IBM Lifecycle Extension for z/OS V1R12 product, fee-based corrective service can be obtained up to September 2017.
2 z/VM V6R2 with PTFs provides compatibility support (Crypto Express5S with enhanced crypto domain support).
3 VM supports both 31-bit and 64-bit mode guests.
4 64-bit distributions include the 31-bit emulation layer to run 31-bit software products.
 
Features: Usage of certain features depends on the operating system. In all cases, PTFs might be required with the operating system level that is indicated. Check the z/OS, z/VM, z/VSE, and z/TPF subsets of the 2964DEVICE PSP buckets. The PSP buckets are continuously updated, and contain the latest information about maintenance.
Hardware and software buckets contain installation information, hardware and software service levels, service guidelines, and cross-product dependencies.
For Linux on z Systems distributions, consult the distributor’s support information.
7.2 Support by operating system
IBM z13 introduces several new functions. This section addresses support of those functions by the current operating systems. Also included are some of the functions that were introduced in previous z Systems servers and carried forward or enhanced in z13. Features and functions that are available on previous servers but are no longer supported by z13 have been removed.
For a list of supported functions and the z/OS and z/VM minimum required support levels, see Table 7-3 on page 232. For z/VSE, z/TPF, and Linux on z Systems, see Table 7-4 on page 237. The tabular format is intended to help you determine, by a quick scan, which functions are supported and the minimum operating system level that is required.
7.2.1 z/OS
z/OS Version 1 Release 13 is the earliest in-service release that supports z13. After September 2016, a fee-based Extended Service for defect support (for up to three years) can be obtained for z/OS V1R13. Although service support for z/OS Version 1 Release 12 ended in September 2014, a fee-based extension for defect support (for up to three years) can be obtained by ordering the IBM Lifecycle Extension for z/OS V1R12. Also, z/OS.e is not supported on z13, and z/OS.e Version 1 Release 8 was the last release of z/OS.e.
z13 capabilities differ depending on the z/OS release. Toleration support is provided on z/OS V1R12. Exploitation support is provided only on z/OS V2R1 and higher. For a list of supported functions and their minimum required support levels, see Table 7-3 on page 232.
7.2.2 z/VM
At general availability, z/VM V6R2 and V6R3 provide compatibility support with limited use of new z13 functions.
For a list of supported functions and their minimum required support levels, see Table 7-3 on page 232.
 
Capacity: You might want to adjust the capacity of any z/VM logical partition (LPAR), and any z/VM guest, in terms of the number of Integrated Facility for Linux (IFL) processors and central processors (CPs), real or virtual, to accommodate the processor unit (PU) capacity of the z13.
z/VM V6R3 and IBM z Unified Resource Manager: In light of the IBM cloud strategy and adoption of OpenStack, the management of z/VM environments in zManager is now stabilized and will not be further enhanced. Accordingly, zManager does not provide systems management support for z/VM V6R2 on IBM z13 or for z/VM V6.3 and later releases. However, zManager continues to play a distinct and strategic role in the management of virtualized environments that are created by integrated firmware hypervisors (IBM Processor Resource/Systems Manager (PR/SM), PowerVM, and System x hypervisor based on a kernel-based virtual machine (KVM)) of the z Systems.
Statements of Direction1:
Removal of support for Expanded Storage (XSTORE): z/VM V6.3 is the last z/VM release that supports Expanded Storage (XSTORE) for either host or guest use. The z13 will be the last high-end server to support Expanded Storage (XSTORE).
Stabilization of z/VM V6.2 support: The IBM z13 server family is planned to be the last z Systems server supported by z/VM V6.2 and the last z Systems server that will be supported where z/VM V6.2 is running as a guest (second level). This is in conjunction with the statement of direction that the IBM z13 server family will be the last to support ESA/390 architecture mode, which z/VM V6.2 requires. z/VM V6.2 will continue to be supported until December 31, 2016, as announced in announcement letter # 914-012.
Product Delivery of z/VM on DVD/Electronic only: z/VM V6.3 is the last release of z/VM that is available on tape. Subsequent releases will be available on DVD or electronically.
Enhanced RACF password encryption algorithm for z/VM: In a future deliverable, an enhanced RACF/VM password encryption algorithm is planned. This support is designed to provide improved cryptographic strength by using AES-based encryption in RACF/VM password algorithm processing. This planned design is intended to provide better protection for encrypted RACF password data if a copy of the RACF database becomes inadvertently accessible.
z/VM V6.3 Multi-threading CPU Pooling support: z/VM CPU Pooling support will be enhanced to enforce IFL pool capacities as cores rather than as threads in an environment with multi-threading enabled.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
7.2.3 z/VSE
Support is provided by z/VSE V5R1 and later. Note the following considerations:
z/VSE runs in z/Architecture mode only.
z/VSE uses 64-bit real memory addressing.
Support for 64-bit virtual addressing is provided by z/VSE V5R1.
z/VSE V5R1 requires an architectural level set that is specific to the IBM System z9.
For a list of supported functions and their minimum required support levels, see Table 7-4 on page 237.
7.2.4 z/TPF
For a list of supported functions and their minimum required support levels, see Table 7-4 on page 237.
7.2.5 Linux on z Systems
Linux on z Systems distributions are built separately for the 31-bit and 64-bit addressing modes of the z/Architecture. The newer distribution versions are built for 64-bit only. Using the 31-bit emulation layer on a 64-bit Linux on z Systems distribution provides support for running 31-bit applications. None of the current versions of Linux on z Systems distributions (SUSE Linux Enterprise Server (SLES) 12 and SLES 11, and Red Hat Enterprise Linux (RHEL) 7 and RHEL 6) require z13 toleration support. Table 7-2 shows the service levels of the SUSE and Red Hat releases that are supported at the time of writing.
Table 7-2 Current Linux on z Systems distributions
Linux on z Systems distribution | z/Architecture (64-bit mode)
SUSE SLES 12 | Yes
SUSE SLES 11 | Yes
Red Hat RHEL 7 | Yes
Red Hat RHEL 6 | Yes
For the latest information about supported Linux distributions on z Systems, see this website:
IBM is working with its Linux distribution partners to provide further use of selected z13 functions in future Linux on z Systems distribution releases.
Consider the following guidelines:
Use SUSE SLES 12 or Red Hat RHEL 6 in any new projects for the z13.
Update any Linux distributions to their latest service level before the migration to z13.
Adjust the capacity of any z/VM and Linux on z Systems LPARs, and z/VM guests, in terms of the number of IFLs and CPs, real or virtual, according to the PU capacity of the z13.
7.2.6 z13 function support summary
The following tables summarize the z13 functions and their minimum required operating system support levels:
Table 7-3 on page 232 is for z/OS and z/VM.
Table 7-4 on page 237 is for z/VSE, z/TPF, and Linux on z Systems.
Information about Linux on z Systems refers exclusively to the appropriate distributions of SUSE and Red Hat.
Both tables use the following conventions:
Y The function is supported.
N The function is not supported.
- The function is not applicable to that specific operating system.
Although the following tables list all functions that require support, the PTF numbers are not given. Therefore, for the current information, see the PSP bucket for 2964DEVICE.
Table 7-3 z13 function minimum support requirements summary (part 1 of 2)
Function | z/OS V2R1 | z/OS V1R13 | z/OS V1R12 | z/VM V6R3 | z/VM V6R2
z13 | Y | Y | Y | Y | Y
Maximum processor units (PUs) per system image | 141¹ | 100 | 100 | 64² | 32
Support of IBM zAware | Y | Y | N | - | -
z Systems Integrated Information Processors (zIIPs) | Y | Y | Y | Y | Y
Java exploitation of Transactional Execution | Y | Y | N | N | N
Large memory support (TB)³ | 4 TB | 1 TB | 1 TB | 1 TB | 256 GB
Large page support of 1 MB pageable large pages | Y | Y | Y | N | N
2 GB large page support | Y | Y⁴ | N | N | N
Out-of-order execution | Y | Y | Y | Y | Y
Hardware decimal floating point⁵ | Y | Y | Y | Y | Y
85 LPARs | Y⁶ | Y⁶ | N | Y | Y
CPU measurement facility | Y | Y | Y | Y | Y
Enhanced flexibility for Capacity on Demand (CoD) | Y | Y | Y | Y | Y
HiperDispatch | Y | Y | Y | Y | N
Six logical channel subsystems (LCSSs) | Y | Y | N | N | N
Four subchannel sets per LCSS | Y | Y | N | Y | Y
Simultaneous multithreading (SMT) | Y | N | N | Y | N
Single-instruction multiple-data (SIMD) | Y | N | N | N | N
Multi-vSwitch Link Aggregation | N | N | N | Y | N
Cryptography
CP Assist for Cryptographic Function (CPACF) greater than 16 domain support | Y | Y | Y | Y | Y
CPACF AES-128, AES-192, and AES-256 | Y | Y | Y | Y | Y
CPACF SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y | Y | Y | Y
CPACF protected key | Y | Y | Y | Y | Y
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | Y | Y | Y | Y | Y
Crypto Express5S | Y | Y | Y | Y | Y
Elliptic Curve Cryptography (ECC) | Y | Y | Y | Y | Y
HiperSockets
32 HiperSockets | Y | Y | Y | Y | Y
HiperSockets Completion Queue | Y | Y | N | Y | Y
HiperSockets integration with IEDN | Y | Y | N | N | N
HiperSockets Virtual Switch Bridge | - | - | - | Y | Y
HiperSockets Network Traffic Analyzer | N | N | N | Y | Y
HiperSockets Multiple Write Facility | Y | Y | Y | N | N
HiperSockets support of IPV6 | Y | Y | Y | Y | Y
HiperSockets Layer 2 support | Y | Y | Y | Y | Y
HiperSockets | Y | Y | Y | Y | Y
Flash Express Storage
Flash Express | Y | Y | N | N | N
zEnterprise Data Compression (zEDC)
zEDC Express | Y | N | N | Y | N
Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE)
10GbE RoCE Express | Y | Y | Y | Y | N
Shared RoCE environment | Y | N | N | Y | N
FICON (Fibre Connection) and FCP (Fibre Channel Protocol)
FICON Express8S (CHPID type FC) when using z13 FICON or Channel-To-Channel (CTC) | Y | Y | Y | Y | Y
FICON Express8S (CHPID type FC) for support of zHPF single-track operations | Y | Y | Y | Y | Y
FICON Express8S (CHPID type FC) for support of zHPF multitrack operations | Y | Y | Y | Y | Y
FICON Express8S (CHPID type FCP) for support of SCSI devices | N | N | N | Y | Y
FICON Express8S (CHPID type FCP) support of hardware data router | N | N | N | Y | N
T10-DIF support by the FICON Express8S and FICON Express8 features when defined as CHPID type FCP | N | N | N | Y | Y
GRS FICON CTC toleration | Y | Y | Y | N | N
FICON Express8 10KM LX and SX, CHPID type FC | Y | Y | Y | Y | Y
FICON Express16S (CHPID type FC) when using FICON or CTC | Y | Y | Y | Y | Y
FICON Express16S (CHPID type FC) for support of zHPF single-track operations | Y | Y | Y | Y | Y
FICON Express16S (CHPID type FC) for support of zHPF multitrack operations | Y | Y | Y | Y | Y
FICON Express16S (CHPID type FCP) for support of SCSI devices | N | N | N | Y | Y
FICON Express16S (CHPID type FCP) support of hardware data router | N | N | N | Y | N
T10-DIF support by the FICON Express16S features when defined as CHPID type FCP | N | N | N | Y | Y
Health Check for FICON Dynamic Routing | Y | Y | Y | N | N
OSA (Open Systems Adapter)
OSA-Express5S 10 Gigabit Ethernet Long Reach (LR) and Short Reach (SR), CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express5S 10 Gigabit Ethernet LR and SR, CHPID type OSX | Y | Y | Y | N⁷ | N⁷
OSA-Express5S Gigabit Ethernet Long Wave (LX) and Short Wave (SX), CHPID type OSD (two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express5S Gigabit Ethernet LX and SX, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD (two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | Y | Y | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSM | Y | Y | Y | N⁷ | N⁷
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSN | Y | Y | Y | Y | Y
OSA-Express4S 10-Gigabit Ethernet LR and SR, CHPID type OSD | Y | Y | Y | Y | Y
OSA-Express4S 10-Gigabit Ethernet LR and SR, CHPID type OSX | Y | Y | Y | N⁷ | N⁷
OSA-Express4S Gigabit Ethernet LX and SX, CHPID type OSD (two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S Gigabit Ethernet LX and SX, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSC (one or two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSD (two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSE (one or two ports per CHPID) | Y | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSM (one port per CHPID) | Y | Y | Y | N⁷ | N⁷
OSA-Express4S 1000BASE-T, CHPID type OSN | Y | Y | Y | Y | Y
Inbound workload queuing for Enterprise Extender | Y | Y | Y | Y | Y
Checksum offload for IPV6 packets | Y | Y | Y | Y | Y
Checksum offload for LPAR-to-LPAR traffic | Y | Y | Y | Y | Y
Large send for IPV6 packets | Y | Y | Y | Y | Y
Parallel Sysplex and other
STP | Y | Y | Y | - | -
Coupling over InfiniBand, CHPID type CIB | Y | Y | Y | N | N
InfiniBand coupling links 12x at a distance of 150 m (492 ft.) | Y | Y | Y | N | N
InfiniBand coupling links 1x at an unrepeated distance of 10 km (6.2 miles) | Y | Y | Y | N | N
CFCC Level 20 | Y | Y | Y | Y | Y
CFCC Level 20 Flash Express exploitation | Y | Y | N | N | N
CFCC Level 20 Coupling Thin Interrupts | Y | Y | Y | N | N
CFCC Level 20 Coupling Large Memory support | Y | Y | Y | N | N
CFCC Level 20 support for 256 coupling CHPIDs | Y | Y | Y | N | N
IBM Integrated Coupling Adapter (ICA) | Y | Y | Y | N | N
Dynamic I/O support for InfiniBand and ICA CHPIDs | - | - | - | Y | Y
RMF coupling channel reporting | Y | Y | Y | N | N

1 141-way without multithreading. 128-way with multithreading.
2 64-way without multithreading and 32-way with multithreading enabled.
3 10 TB of real storage available per server.
4 PTF support is required, together with the RSM Enablement Offering web deliverable.
5 Packed decimal conversion support.
6 Only 60 LPARs can be defined if z/OS V1R12 is running.
7 Dynamic I/O support only.
Table 7-4 z13 function minimum support requirements summary (part 2 of 2)
Function | z/VSE V5R2 | z/VSE V5R1 | z/TPF V1R1 | Linux on z Systems
z13 | Y⁶ | Y⁶ | Y | Y
Maximum PUs per system image | 10 | 10 | 86 | 141¹
Support of IBM zAware | - | - | - | Y
System z Integrated Information Processors (zIIPs) | - | - | - | -
Java exploitation of Transactional Execution | N | N | N | Y
Large memory support² | 32 GB | 32 GB | 4 TB | 4 TB³
Large page support of pageable 1 MB pages | Y | Y | N | Y
2 GB large page support | - | - | - | -
Out-of-order execution | Y | Y | Y | Y
85 logical partitions | Y | Y | Y | Y
HiperDispatch | N | N | N | N
Six logical channel subsystems (LCSSs) | Y | Y | N | Y
Four subchannel sets per LCSS | Y | Y | N | Y
Simultaneous multithreading (SMT) | N | N | N | Y
Single-instruction multiple-data (SIMD) | N | N | N | N
Multi-vSwitch Link Aggregation | N | N | N | N
Cryptography
CP Assist for Cryptographic Function (CPACF) | Y | Y | Y | Y
CPACF AES-128, AES-192, and AES-256 | Y | Y | Y⁴ | Y
CPACF SHA-1/SHA-2, SHA-224, SHA-256, SHA-384, and SHA-512 | Y | Y | Y⁵ | Y
CPACF protected key | N | N | N | N
Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode | N | N | N | N
Crypto Express5S | Y | Y | Y |
Elliptic Curve Cryptography (ECC) | N | N | N | N⁹
HiperSockets
32 HiperSockets | Y | Y | Y | Y
HiperSockets Completion Queue | Y⁶ | N | N | Y
HiperSockets integration with IEDN | N | N | N | N
HiperSockets Virtual Switch Bridge | - | - | - | Y⁸
HiperSockets Network Traffic Analyzer | N | N | N | Y⁹
HiperSockets Multiple Write Facility | N | N | N | N
HiperSockets support of IPV6 | Y | Y | N | Y
HiperSockets Layer 2 support | N | N | N | Y
HiperSockets | Y | Y | N | Y
Flash Express Storage
Flash Express | N | N | N | Y
zEnterprise Data Compression (zEDC)
zEDC Express | N | N | N | N
Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE)
10GbE RoCE Express | N | N | N | N⁹
FICON and FCP
FICON Express8S support of zHPF single-track operations, CHPID type FC | N | N | N | Y
FICON Express8 support of zHPF multitrack operations, CHPID type FC | N | N | N | Y
High Performance FICON (zHPF) | N | N | N |
GRS FICON CTC toleration | - | - | - | -
N-Port ID Virtualization for FICON (NPIV), CHPID type FCP | Y | Y | N | Y
FICON Express8S support of hardware data router, CHPID type FCP | N | N | N |
FICON Express8S and FICON Express8 support of T10-DIF, CHPID type FCP | N | N | N | Y¹⁰
FICON Express8S, FICON Express8, FICON Express16S 10KM LX, and FICON Express4 SX support of SCSI disks, CHPID type FCP | Y | Y | N | Y
FICON Express8S, CHPID type FC | Y | Y | Y | Y
FICON Express8, CHPID type FC | Y | Y¹² | Y¹² |
FICON Express16S (CHPID type FC) when using FICON or CTC | Y | Y | Y | Y
FICON Express16S (CHPID type FC) for support of zHPF single-track operations | N | N | N | Y
FICON Express16S (CHPID type FC) for support of zHPF multitrack operations | N | N | N | Y
FICON Express16S (CHPID type FCP) for support of SCSI devices | Y | Y | N | Y
FICON Express16S (CHPID type FCP) support of hardware data router | N | N | N | Y
T10-DIF support by the FICON Express16S features when defined as CHPID type FCP | N | N | N | Y
OSA
Large send for IPv6 packets | - | - | - | -
Inbound workload queuing for Enterprise Extender | N | N | N | N
Checksum offload for IPV6 packets | N | N | N | N
Checksum offload for LPAR-to-LPAR traffic | N | N | N | N
OSA-Express5S 10 Gigabit Ethernet LR and SR, CHPID type OSD | Y | Y | Y |
OSA-Express5S 10 Gigabit Ethernet LR and SR, CHPID type OSX | Y | Y | |
OSA-Express5S Gigabit Ethernet LX and SX, CHPID type OSD (two ports per CHPID) | Y | Y | Y¹³ |
OSA-Express5S Gigabit Ethernet LX and SX, CHPID type OSD (one port per CHPID) | Y | Y | Y¹³ | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSC | Y | Y | N | -
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD (two ports per CHPID) | Y | Y | Y¹³ | Y¹⁶
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSD (one port per CHPID) | Y | Y | Y¹³ | Y
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSE | Y | Y | N | N
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSM | N | N | N |
OSA-Express5S 1000BASE-T Ethernet, CHPID type OSN | Y | Y | Y | Y
OSA-Express4S 10-Gigabit Ethernet LR and SR, CHPID type OSD | Y | Y | Y | Y
OSA-Express4S 10-Gigabit Ethernet LR and SR, CHPID type OSX | Y | N | Y |
OSA-Express4S Gigabit Ethernet LX and SX, CHPID type OSD (two ports per CHPID) | Y | Y | Y¹⁸ | Y
OSA-Express4S Gigabit Ethernet LX and SX, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSC (one or two ports per CHPID) | Y | Y | N | -
OSA-Express4S 1000BASE-T, CHPID type OSD (two ports per CHPID) | Y | Y | Y⁶ | Y
OSA-Express4S 1000BASE-T, CHPID type OSD (one port per CHPID) | Y | Y | Y | Y
OSA-Express4S 1000BASE-T, CHPID type OSE (one or two ports per CHPID) | Y | Y | N | N
OSA-Express4S 1000BASE-T, CHPID type OSM | N | N | N | Y
OSA-Express4S 1000BASE-T, CHPID type OSN (one or two ports per CHPID) | Y | Y | Y⁶ | Y
Parallel Sysplex and other
Server Time Protocol (STP) enhancements | - | - | - | Y
STP - Server Time Protocol | - | - | Y |
Coupling over InfiniBand, CHPID type CIB | - | - | Y | -
InfiniBand coupling links 12x at a distance of 150 m (492 ft.) | - | - | - | -
InfiniBand coupling links 1x at an unrepeated distance of 10 km (6.2 miles) | - | - | - | -
Dynamic I/O support for InfiniBand CHPIDs | - | - | - | -
CFCC Level 20 | - | - | Y | -

1 SLES 12 and RHEL 7 can support up to 256 PUs (IFLs or CPs).
2 10 TB of real storage is supported per server.
3 Red Hat (RHEL) supports a maximum of 3 TB.
4 z/TPF supports only AES-128 and AES-256.
5 z/TPF supports only SHA-1 and SHA-256.
6 Service is required.
7 Supported only when running in accelerator mode.
8 Applicable to guest operating systems.
9 IBM is working with its Linux distribution partners to include support in future Linux on z Systems distribution releases.
10 Supported by SLES 11.
11 Supported by SLES 11 SP3 and RHEL 6.4.
13 Requires PUT 5 with PTFs.
14 Requires PUT 8 with PTFs.
15 Supported by SLES 11 SP1, SLES 10 SP4, and RHEL 6, RHEL 5.6.
16 Supported by SLES 11, SLES 10 SP2, and RHEL 6, RHEL 5.2.
17 Supported by SLES 11 SP2, SLES 10 SP4, and RHEL 6, RHEL 5.2.
18 Requires PUT 4 with PTFs.
19 Server Time Protocol (STP) is supported in z/TPF with APAR PJ36831 in PUT 07.
7.3 Support by function
This section addresses operating system support by function. Only the currently in-support releases are covered.
Tables in this section use the following convention:
N/A Not applicable
NA Not available
7.3.1 Single system image
A single system image can control several processor units, such as CPs, zIIPs, or IFLs.
Maximum number of PUs per system image
Table 7-5 lists the maximum number of PUs supported by each operating system image and by special-purpose LPARs.
Table 7-5 Single system image size software support
Operating system | Maximum number of PUs per system image
z/OS V2R1 | 141¹
z/OS V1R13 | 100²
z/OS V1R12 | 100²
z/VM V6R3 | 64³
z/VM V6R2 | 32
z/VSE V5R1 and later | z/VSE Turbo Dispatcher can use up to 4 CPs, and tolerates up to 10-way LPARs
z/TPF V1R1 | 86 CPs
CFCC Level 20 | 16 CPs or ICFs; CPs and ICFs cannot be mixed
IBM zAware | 80
Linux on z Systems⁴ | SUSE SLES 12: 256 CPs or IFLs; SUSE SLES 11: 64 CPs or IFLs; Red Hat RHEL 7: 256 CPs or IFLs; Red Hat RHEL 6: 64 CPs or IFLs

1 128 PUs in multithreading mode and 141 PUs supported without multithreading.
2 Total characterizable PUs including zIIPs and CPs.
3 64 PUs without SMT mode and 32 PUs with SMT.
4 IBM is working with its Linux distribution partners to provide the use of this function in future Linux on z Systems distribution releases.
The zAware-mode logical partition
zEC12 introduced an LPAR mode, called zAware-mode, that is exclusively for running the IBM zAware virtual appliance. The IBM zAware virtual appliance can pinpoint deviations in z/OS normal system behavior. It also improves real-time event diagnostic tests by monitoring the z/OS operations log (OPERLOG). It looks for unusual messages, unusual message patterns that typical monitoring systems miss, and unique messages that might indicate system health issues. The IBM zAware virtual appliance requires the monitored clients to run z/OS V1R13 with PTFs or later. The newer version of IBM zAware is enhanced to work with messages without message IDs. This includes support for Linux running natively or as a guest under z/VM on z Systems.
The z/VM-mode LPAR
z13 supports an LPAR mode, called z/VM-mode, that is exclusively for running z/VM as the first-level operating system. The z/VM-mode requires z/VM V6R2 or later, and allows z/VM to use a wider variety of specialty processors in a single LPAR. For example, in a z/VM-mode LPAR, z/VM can manage Linux on z Systems guests running on IFL processors while also managing z/VSE and z/OS guests on central processors (CPs). It also allows z/OS to fully use zIIPs.
7.3.2 zIIP support
zIIPs do not change the model capacity identifier of the z13. IBM software product license charges based on the model capacity identifier are not affected by the addition of zIIPs. On a z13, z/OS Version 1 Release 12 is the minimum level for supporting zIIPs.
No changes to applications are required to use zIIPs. zIIPs can be used by these applications:
DB2 V8 and later for z/OS data serving, for applications that use Distributed Relational Database Architecture (DRDA) over TCP/IP, such as data serving, data warehousing, and selected utilities.
z/OS XML services.
z/OS CIM Server.
z/OS Communications Server for network encryption (Internet Protocol Security (IPSec)) and for large messages that are sent by HiperSockets.
IBM GBS Scalable Architecture for Financial Reporting.
IBM z/OS Global Mirror (formerly XRC) and System Data Mover.
IBM OMEGAMON® XE on z/OS, OMEGAMON XE on DB2 Performance Expert, and DB2 Performance Monitor.
Any Java application that is using the current IBM SDK.
WebSphere Application Server V5R1 and later, and products that are based on it, such as WebSphere Portal, WebSphere Enterprise Service Bus (WebSphere ESB), and WebSphere Business Integration (WBI) for z/OS.
CICS/TS V2R3 and later.
DB2 UDB for z/OS Version 8 and later.
IMS Version 8 and later.
zIIP Assisted HiperSockets for large messages.
The functioning of a zIIP is transparent to application programs.
In z13, the zIIP processor is designed to run in SMT mode, with up to two threads per processor. This new function is designed to help improve throughput for zIIP workloads and provide appropriate performance measurement, capacity planning, and SMF accounting data. This support is planned to be available for z/OS V2.1 with PTFs at z13 general availability.
Use the PROJECTCPU option of the IEAOPTxx parmlib member to help determine whether zIIPs can be beneficial to the installation. Setting PROJECTCPU=YES directs z/OS to record the amount of zIIP-eligible work in SMF record type 72 subtype 3. The APPL% IIPCP field of the Workload Activity Report, listed by WLM service class, indicates the percentage of a processor that is zIIP eligible. Because a zIIP costs less than a CP, a projected utilization as low as 10% might provide benefits.
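As a minimal sketch (the member suffix 01 and the comment are illustrative only), the projection can be enabled by coding the following statement in an IEAOPTxx member of SYS1.PARMLIB:
   /* IEAOPT01: record zIIP-eligible work in SMF type 72 subtype 3 */
   PROJECTCPU=YES
The member can then be activated dynamically with the SET OPT=01 operator command, so no IPL is needed to start or stop the projection.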
7.3.3 Transactional Execution
The IBM zEnterprise EC12 introduced an architectural feature called Transactional Execution (TX). This capability is known in academia and industry as “hardware transactional memory”. Transactional execution has also been implemented in z13.
This feature enables software to indicate to the hardware the beginning and end of a group of instructions that need to be treated in an atomic way. Either all of their results happen or none happens, in true transactional style. The execution is optimistic. The hardware provides a memory area to record the original contents of affected registers and memory as the instruction’s execution takes place. If the transactional execution group is canceled or must be rolled back, the hardware transactional memory is used to reset the values. Software can implement a fallback capability.
This capability enables more efficient software by providing a way to avoid locks (lock elision). This advantage is of special importance for speculative code generation and highly parallelized applications.
TX is used by the IBM JVM, but potentially can be used by other software. z/OS V1R13 with program temporary fixes (PTFs) or later is required. The feature also is enabled for specific Linux distributions.
7.3.4 Maximum main storage size
Table 7-6 on page 244 lists the maximum amount of main storage that is supported by current operating systems. A maximum of 10 TB of main storage can be defined for an LPAR on a z13.
Expanded storage, although part of the z/Architecture, is used only by z/VM.
 
Statement of direction1: z/VM V6.3 is the last z/VM release that supports Expanded Storage (XSTORE) for either host or guest use. The z13 will be the last high-end server to support Expanded Storage (XSTORE).

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
Table 7-6 Maximum memory that is supported by the operating system
Operating system | Maximum supported main storage¹
z/OS | z/OS V2R1 and later support 4 TB
z/VM | z/VM V6R3 and later support 1 TB; z/VM V6R2 supports 256 GB
z/VSE | z/VSE V5R1 and later support 32 GB
z/TPF | z/TPF supports 4 TB
CFCC | Level 20 supports up to 3 TB per server
IBM zAware | Supports up to 3 TB per server
Linux on z Systems (64-bit) | SUSE SLES 12: 4 TB; SUSE SLES 11: 4 TB; SUSE SLES 10: 4 TB; Red Hat RHEL 7: 3 TB; Red Hat RHEL 6: 3 TB; Red Hat RHEL 5: 3 TB

1 z13 supports 10 TB user configurable memory per server.
7.3.5 Flash Express
IBM z13 continues support for Flash Express, which can help improve the resilience and performance of the z/OS system. Flash Express is designed to assist with the handling of workload spikes or increased workload demand that might occur at the opening of the business day, or in a workload shift from one system to another.
z/OS is the first OS to use Flash Express storage as storage-class memory (SCM) for paging store and supervisor call (SVC) memory dumps. Flash memory is a faster paging device as compared to a hard disk drive (HDD). SVC memory dump data capture time is expected to be substantially reduced. As a paging store, Flash Express storage is suitable for workloads that can tolerate paging. It does not benefit workloads that cannot afford to page. The z/OS design for Flash Express storage does not completely remove the virtual storage constraints that are created by a paging spike in the system. However, some scalability relief is expected because of faster paging I/O with Flash Express storage.
Flash Express storage is allocated to an LPAR similarly to main memory. The initial and maximum amount of Flash Express Storage that is available to a particular LPAR is specified at the Support Element (SE) or Hardware Management Console (HMC) by using a new Flash Storage Allocation panel. The Flash Express storage granularity is 16 GB. The amount of Flash Express storage in the partition can be changed dynamically between the initial and the maximum amount at the SE or HMC. For z/OS, this change also can be made by using an operator command. Each partition’s Flash Express storage is isolated like the main storage, and sees only its own space in the flash storage space.
Flash Express provides 1.4 TB of storage per feature pair. Up to four pairs can be installed, for a total of 5.6 TB. All paging data can easily be kept on Flash Express storage, but not all types of data are eligible. For example, virtual I/O (VIO) data always is placed on an external disk. Local page data sets are still required to support peak paging demands that require more capacity than provided by the amount of configured SCM.
The z/OS paging subsystem works with a mix of internal Flash Express storage and external disk. The placement of data on Flash Express storage and external disk is self-tuning, based on measured performance. At IPL time, z/OS detects whether Flash Express storage is assigned to the partition. z/OS automatically uses Flash Express storage for paging unless specified otherwise by using PAGESCM=NONE in IEASYSxx. No definition is required for placement of data on Flash Express storage.
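For illustration only, a hedged IEASYSxx fragment (shown in isolation; the rest of the member is omitted): by default z/OS uses the available storage-class memory for paging, and the following statement, quoted above, disables that use for the image:
   PAGESCM=NONE
PAGESCM also accepts ALL (the default); check the current MVS Initialization and Tuning Reference for the other accepted values.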
The support is delivered in the z/OS V1R13 real storage manager (RSM) Enablement Offering web deliverable (FMID JBB778H) for z/OS V1R13. The installation of this web deliverable requires careful planning because the size of the nucleus, the extended system queue area (ESQA) per CPU, and the RSM stack is increased. Also, there is a new memory pool for pageable large pages. For web-deliverable code on z/OS, see the z/OS downloads website:
The support also is delivered in z/OS V2R1 (included with the base product) or later.
Linux on z Systems also offers the support for Flash Express as an SCM device. This is useful for workloads with large write operations with a block size of 256 KB or more of data. The SCM increments are accessed through extended asynchronous data mover (EADM) subchannels.
Table 7-7 lists the minimum support requirements for Flash Express.
Table 7-7 Minimum support requirements for Flash Express
Operating system | Support requirements
z/OS | z/OS V1R13¹
Linux on z Systems | SUSE SLES 12, SUSE SLES 11 SP3, Red Hat RHEL 7, Red Hat RHEL 6

1 Web deliverable and PTFs are required.
Flash Express usage by CFCC
Coupling facility control code (CFCC) Level 20 supports Flash Express. Initial CF Flash usage is targeted for WebSphere MQ shared queues application structures. Structures can now be allocated with a combination of real memory and SCM that is provided by the Flash Express feature. For more information, see “Flash Express exploitation by CFCC” on page 289.
Flash Express usage by Java
z/OS Java SDK 7 SR3, CICS TS V5.1, WebSphere Liberty Profile 8.5, DB2 V11, and IMS V12 are targeted for Flash Express usage. There is a statement of direction to support traditional WebSphere V8. The support is for just-in-time (JIT) Code Cache and Java Heap to improve performance for pageable large pages.
7.3.6 zEnterprise Data Compression Express (zEDC)
The growth of data that must be captured, transferred, and stored for a long time is unrelenting. Software-implemented compression algorithms are costly in terms of processor resources, and storage costs are not negligible either.
zEDC is an optional feature that is available on the z13, zEC12, and zBC12. It addresses those requirements by providing hardware-based acceleration for data compression and decompression. zEDC provides data compression with lower CPU consumption than the compression technology that previously was available on z Systems.
Exploitation support of zEDC Express functions is provided exclusively by z/OS V2R1 zEnterprise Data Compression for both data compression and decompression.
Support for data recovery (decompression) when the zEDC is not installed, or installed but not available, on the system, is provided through software on z/OS V2R1, V1R13, and V1R12 with the correct PTFs. Software decompression is slow and uses considerable processor resources; therefore, it is not suggested for production environments.
zEDC is enhanced to support QSAM/BSAM (non-VSAM) data set compression. This support can be enabled in either of the following ways (a configuration sketch follows this list):
Data class level: Two new values, zEDC Required (ZR) and zEDC Preferred (ZP), can be set with the COMPACTION option in the data class.
System Level: Two new values, zEDC Required (ZEDC_R) and zEDC Preferred (ZEDC_P), can be specified with the COMPRESS parameter found in the IGDSMSXX member of the SYS1.PARMLIB data set.
Data class takes precedence over system level.
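As a sketch only (the data set names and the comment are illustrative, and the exact syntax should be verified in the current DFSMS documentation), the system-level setting is made in the IGDSMSxx member of SYS1.PARMLIB, for example:
   /* IGDSMSxx (illustrative): request zEDC compression system-wide */
   SMS ACDS(SYS1.SMS.ACDS)
       COMMDS(SYS1.SMS.COMMDS)
       COMPRESS(ZEDC_R)
The data class COMPACTION values ZR and ZP express the same choice at the data class level and, as noted above, take precedence over the system-level setting.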
Table 7-8 shows the minimum support requirements for zEDC Express.
Table 7-8 Minimum support requirements for zEDC Express
Operating system | Support requirements
z/OS | z/OS V2R1¹; z/OS V1R13¹ (software decompression support only); z/OS V1R12¹ (software decompression support only)

1 PTFs are required.
7.3.7 10GbE RoCE Express
The IBM z13 supports the 10GbE RoCE Express feature. The z13 extends this support by enabling the second port on the adapter and by allowing the ports to be shared by up to 31 partitions per adapter, using both ports.
The 10 Gigabit Ethernet (10GbE) RoCE Express feature is designed to help reduce consumption of CPU resources for applications that use the TCP/IP stack (such as WebSphere accessing a DB2 database). Use of the 10GbE RoCE Express feature also can help reduce network latency with memory-to-memory transfers using Shared Memory Communications over Remote Direct Memory Access (SMC-R) in z/OS V2R1. It is transparent to applications and can be used for LPAR-to-LPAR communication on a single z13 or for server-to-server communication in a multiple CPC environment.
z/OS V2R1 with PTFs is the only supporting OS for the SMC-R protocol. It does not roll back to previous z/OS releases. z/OS V1R12 and z/OS V1R13 with PTFs provide only compatibility support.
IBM is working with its Linux distribution partners to include support in future Linux on z Systems distribution releases.
Table 7-9 lists the minimum support requirements for 10GbE RoCE Express.
Table 7-9 Minimum support requirements for 10GbE RoCE Express
Operating system | Support requirements
z/OS | z/OS V2R1 with supporting PTFs (SPE for IBM VTAM®, TCP/IP, and IOS). The IOS PTF is a minor change to allow greater than 256 (xFF) PFIDs for RoCE.
z/VM | z/VM V6.3 with supporting PTFs. The z/VM V6.3 SPE for base PCIe support is required. There currently are no known additional z/VM software changes that are required for SR-IOV. When running z/OS as a guest, APAR OA43256 is required for RoCE, and APARs OA43256 and OA44482 are required for zEDC. A z/VM website that details the prerequisites for using RoCE and zEDC as a guest can be found at the following location:
Linux on z Systems | Currently limited to experimental support in SUSE SLES 12, SUSE SLES 11 SP3 with the latest maintenance, and RHEL 7.0
7.3.8 Large page support
In addition to the existing 1-MB large pages, 4-KB pages, and page frames, z13 supports pageable 1-MB large pages, large pages that are 2 GB, and large page frames. For more information, see “Large page support” on page 113.
Table 7-10 lists the support requirements for 1-MB large pages.
Table 7-10 Minimum support requirements for 1-MB large pages
Operating system | Support requirements
z/OS | z/OS V1R11; z/OS V1R13¹ for pageable 1-MB large pages
z/VM | Not supported, and not available to guests
z/VSE | z/VSE V4R3: supported for data spaces
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6

1 Web deliverable and PTFs are required, plus the Flash Express hardware feature.
Table 7-11 lists the support requirements for 2-GB large pages.
Table 7-11 Minimum support requirements for 2-GB large pages
Operating system | Support requirements
z/OS | z/OS V1R13
7.3.9 Hardware decimal floating point
Industry support for decimal floating point is growing, with IBM leading the open standard definition. Examples of support for the draft standard IEEE 754r include Java BigDecimal, C#, XML, C/C++, GCC, COBOL, and other key software vendors, such as Microsoft and SAP.
Decimal floating point support was introduced with z9 EC. z13 inherited the decimal floating point accelerator feature that was introduced with z10 EC. For more information, see 3.4.6, “Decimal floating point (DFP) accelerator” on page 96.
Table 7-12 lists the operating system support for decimal floating point. For more information, see 7.6.6, “Decimal floating point and z/OS XL C/C++ considerations” on page 286.
Table 7-12 Minimum support requirements for decimal floating point
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2: support is for guest use
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.10 Up to 85 LPARs
This feature, first made available in z13, allows the system to be configured with up to 85 LPARs. Because channel subsystems can be shared by up to 15 LPARs, it is necessary to configure six channel subsystems to reach the 85 LPARs limit. Table 7-13 lists the minimum operating system levels for supporting 85 LPARs.
Table 7-13 Minimum support requirements for 85 LPARs
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2
z/VSE | z/VSE V5R1
z/TPF | z/TPF V1R1
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
 
Remember: The IBM zAware virtual appliance runs in a dedicated LPAR, so when it is activated, it reduces by one the maximum number of available LPARs.
7.3.11 Separate LPAR management of PUs
The z13 uses separate PU pools for each optional PU type. The separate management of PU types enhances and simplifies capacity planning and management of the configured LPARs and their associated processor resources. Table 7-14 lists the support requirements for the separate LPAR management of PU pools.
Table 7-14 Minimum support requirements for separate LPAR management of PUs
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2
z/VSE | z/VSE V5R1
z/TPF | z/TPF V1R1
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.12 Dynamic LPAR memory upgrade
An LPAR can be defined with both an initial and a reserved amount of memory. At activation time, the initial amount is made available to the partition and the reserved amount can be added later, partially or totally. Those two memory zones do not have to be contiguous in real memory, but appear as logically contiguous to the operating system that runs in the LPAR.
z/OS can take advantage of this support and nondisruptively acquire and release memory from the reserved area. z/VM V6R2 and higher can acquire memory nondisruptively, and immediately make it available to guests. z/VM virtualizes this support to its guests, which now also can increase their memory nondisruptively if supported by the guest operating system. Releasing memory from z/VM is not supported. Releasing memory from the z/VM guest depends on the guest’s operating system support.
Dynamic LPAR memory upgrade is not supported for zAware-mode LPARs.
7.3.13 LPAR physical capacity limit enforcement
On the IBM z13, PR/SM is enhanced to support an option to limit the amount of physical processor capacity that is consumed by an individual LPAR when a PU that is defined as a central processor (CP) or an IFL is shared across a set of LPARs. This enhancement is designed to provide a physical capacity limit that is enforced as an absolute (versus a relative) limit; it is not affected by changes to the logical or physical configuration of the system. This physical capacity limit can be specified in units of CPs or IFLs.
Table 7-15 lists the minimum operating system level that is required on z13.
Table 7-15 Minimum support requirements for LPAR physical capacity limit enforcement
Operating system | Support requirements
z/OS | z/OS V1R12¹
z/VM | z/VM V6R3
z/VSE | z/VSE V5R1¹

1 PTFs are required.
7.3.14 Capacity Provisioning Manager
The provisioning architecture enables clients to better control the configuration and activation of the On/Off CoD. For more information, see 8.8, “Nondisruptive upgrades” on page 346. The new process is inherently more flexible, and can be automated. This capability can result in easier, faster, and more reliable management of the processing capacity.
The Capacity Provisioning Manager, a function that is first available with z/OS V1R9, interfaces with z/OS Workload Manager (WLM) and implements capacity provisioning policies. Several implementation options are available, from an analysis mode that issues only guidelines, to an autonomic mode that provides fully automated operations.
Replacing manual monitoring with autonomic management or supporting manual operation with guidelines can help ensure that sufficient processing power is available with the least possible delay. Support requirements are listed in Table 7-16.
Table 7-16 Minimum support requirements for capacity provisioning
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | Not supported; not available to guests
7.3.15 Dynamic PU add
Planning of an LPAR configuration allows defining reserved PUs that can be brought online when extra capacity is needed. Operating system support is required to use this capability without an IPL, that is, nondisruptively. This support has been in z/OS for a long time.
The dynamic PU add function enhances this support by allowing you to define and change dynamically the number and type of reserved PUs in an LPAR profile, removing any planning requirements. Table 7-17 lists the minimum required operating system levels to support this function.
The new resources are immediately made available to the operating system and, in the case of z/VM, to its guests. The dynamic PU add function is not supported for zAware-mode LPARs.
Table 7-17 Minimum support requirements for dynamic PU add
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2
z/VSE | z/VSE V5R1
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.16 HiperDispatch
The HIPERDISPATCH=YES/NO parameter in the IEAOPTxx member of SYS1.PARMLIB and on the SET OPT=xx command controls whether HiperDispatch is enabled or disabled for a z/OS image. It can be changed dynamically, without an IPL or any outage.
The default is that HiperDispatch is disabled on all releases, from z/OS V1R10 (requires PTFs for zIIP support) through z/OS V1R12.
Beginning with z/OS V1R13, when running on a z13, zEC12, zBC12, z196, or z114 server, the IEAOPTxx keyword HIPERDISPATCH defaults to YES. If HIPERDISPATCH=NO is specified, the specification is accepted as it was on previous z/OS releases.
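As an illustrative sketch (the member suffix 01 and the comment are assumptions, not part of any shipped sample), enabling HiperDispatch explicitly amounts to coding the following in an IEAOPTxx member of SYS1.PARMLIB:
   /* IEAOPT01: run this z/OS image in HiperDispatch mode */
   HIPERDISPATCH=YES
The member is then activated with the SET OPT=01 operator command, which, as noted above, takes effect without an IPL or any outage.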
The usage of SMT in the z13 requires that HiperDispatch is enabled on the operating system (z/OS V2R1 or z/VM V6R3).
Additionally, with z/OS V1R12 or later, any LPAR running with more than 64 logical processors is required to operate in HiperDispatch Management Mode.
The following rules control this environment:
If an LPAR is defined at IPL with more than 64 logical processors, the LPAR automatically operates in HiperDispatch Management Mode, regardless of the HIPERDISPATCH= specification.
If more logical processors are added to an LPAR that has 64 or fewer logical processors and the additional logical processors raise the number of logical processors to more than 64, the LPAR automatically operates in HiperDispatch Management Mode regardless of the HIPERDISPATCH=YES/NO specification. That is, even if the LPAR has the HIPERDISPATCH=NO specification, that LPAR is converted to operate in HiperDispatch Management Mode.
An LPAR with more than 64 logical processors running in HiperDispatch Management Mode cannot be reverted to run in non-HiperDispatch Management Mode.
HiperDispatch in the z13 uses the new chip and CPC drawer configuration to improve cache access performance. Beginning with z/OS V1R13, HiperDispatch changed to use the new node cache structure of the z13. The base support is provided by PTFs that are identified by IBM.device.server.z13-2964.requiredservice.
The PR/SM in the System z9 EC to zEC12 servers stripes the memory across all books in the system to take advantage of the fast book interconnection and to spread memory controller work. The PR/SM in the z13 seeks to assign all memory in one drawer, striped across its two nodes, to take advantage of the lower latency memory access in a drawer and to smooth performance variability across the nodes in the drawer.
The PR/SM in the System z9 EC to zEC12 servers attempts to assign all logical processors to one book, packed into the PU chips of that book, in cooperation with operating system HiperDispatch, to optimize shared cache usage. The PR/SM in the z13 seeks to assign all logical processors of a partition to one CPC drawer, packed into the PU chips of that CPC drawer, in cooperation with operating system HiperDispatch, to optimize shared cache usage.
PR/SM automatically keeps a partition's memory and logical processors on the same CPC drawer. This arrangement looks simple for a single partition, but it is a complex optimization across multiple logical partitions because some must be split among processor drawers.
To use HiperDispatch effectively, WLM goal adjustment might be required. Review the WLM policies and goals, and update them as necessary. You might want to run with the new policies and HiperDispatch on for a period, turn it off, and then run with the older WLM policies. Compare the results of using HiperDispatch, readjust the new policies, and repeat the cycle, as needed. WLM policies can be changed without turning off HiperDispatch.
A health check is provided to verify whether HiperDispatch is enabled on a system image that is running on z13.
z/VM V6R3
z/VM V6R3 also uses the HiperDispatch facility for improved processor efficiency through better use of the processor cache, taking advantage of the cache-rich processor, node, and drawer design of the z13 system. The supported processor limit has been increased to 64 logical processors; with SMT enabled, the limit is 32 cores, which supports up to 64 threads running simultaneously.
The operating system support requirements for HiperDispatch are listed in Table 7-18.
Table 7-18 Minimum support requirements for HiperDispatch
Operating system | Support requirements
z/OS | z/OS V1R11 with PTFs
z/VM | z/VM V6R3
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6

7.3.17 The 63.75-K subchannels
Servers before z9 EC reserved 1024 subchannels for internal system use, out of a maximum of 64 K subchannels. Starting with z9 EC, the number of reserved subchannels was reduced to 256, increasing the number of subchannels that are available. Reserved subchannels exist only in subchannel set 0. One subchannel is reserved in each of subchannel sets 1, 2, and 3.
The informal name, 63.75-K subchannels, represents 65280 subchannels, as shown in the following equation:
63 x 1024 + 0.75 x 1024 = 65280
The preceding equation applies to subchannel set 0. For subchannel sets 1, 2, and 3, the number of available subchannels is derived by using the following equation:
(64 x 1024) - 1 = 65535
Table 7-19 lists the minimum operating system level that is required on the z13.
Table 7-19 Minimum support requirements for 63.75-K subchannels
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.18 Multiple subchannel sets (MSS)
MSS, first introduced in the z9 EC, provides a mechanism for addressing more than 63.75-K I/O devices and aliases for ESCON (CHPID type CNC) and FICON (CHPID type FC) on the z13, zEC12, zBC12, z196, z114, z10 EC, and z9 EC. The z196 introduced the third subchannel set (SS2). With the z13, one more subchannel set (SS3) has been introduced, which expands the alias addressing by 64-K more I/O devices.
Table 7-20 lists the minimum operating system levels that are required on the z13.
Table 7-20 Minimum software requirement for MSS
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R3¹
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6

1 For specific Geographically Dispersed Parallel Sysplex (GDPS) usage only
z/VM V6R3 MSS support for mirrored direct access storage device (DASD) provides a subset of host support for the MSS facility to allow using an alternative subchannel set for Peer-to-Peer Remote Copy (PPRC) secondary volumes.
7.3.19 Fourth subchannel set
With the z13, a fourth subchannel set (SS3) was introduced. It applies to FICON (CHPID type FC, for both FICON and zHPF paths) channels.
Together with the second subchannel set (SS1) and third subchannel set (SS2), SS3 can be used for disk alias devices of both primary and secondary devices, and as Metro Mirror secondary devices. This set helps facilitate storage growth and complements other functions, such as extended address volume (EAV) and Hyper Parallel Access Volumes (HyperPAV).
Table 7-21 lists the minimum operating systems level that is required on the z13.
Table 7-21 Minimum software requirement for SS3
Operating system | Support requirements
z/OS | z/OS V1R13 with PTFs
z/VM | z/VM V6R2 with PTFs
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.20 IPL from an alternative subchannel set
z13 supports IPL from subchannel set 1 (SS1), subchannel set 2 (SS2), or subchannel set 3 (SS3), in addition to subchannel set 0. For more information, see “IPL from an alternative subchannel set” on page 187.
7.3.21 Modified Indirect Data Address Word (MIDAW) facility
The MIDAW facility improves FICON performance. It provides a more efficient channel command word (CCW)/indirect data address word (IDAW) structure for certain categories of data-chaining I/O operations.
Support for the MIDAW facility when running z/OS as a guest of z/VM requires z/VM V6R2 or higher. For more information, see 7.9, “Simultaneous multithreading (SMT)” on page 290.
Table 7-22 lists the minimum support requirements for MIDAW.
Table 7-22 Minimum support requirements for MIDAW
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2 for guest use
7.3.22 HiperSockets Completion Queue
The HiperSockets Completion Queue function is exclusive to z13, zEC12, zBC12, z196, and z114. The HiperSockets Completion Queue function is designed to allow HiperSockets to transfer data synchronously if possible, and asynchronously if necessary. Therefore, it combines ultra-low latency with more tolerance for traffic peaks. This benefit can be especially helpful in burst situations.
Table 7-23 lists the minimum support requirements for HiperSockets Completion Queue.
Table 7-23 Minimum support requirements for HiperSockets Completion Queue
Operating system | Support requirements
z/OS | z/OS V1R13¹
z/VSE | z/VSE V5R1¹
z/VM | z/VM V6R2¹
Linux on z Systems | SUSE SLES 12, SUSE SLES 11 SP2, Red Hat RHEL 7, Red Hat RHEL 6.2

1 PTFs are required.
7.3.23 HiperSockets integration with the intraensemble data network
The HiperSockets integration with the intraensemble data network (IEDN) is exclusive to z13, zEC12, zBC12, z196, and z114. HiperSockets integration with the IEDN combines the HiperSockets network and the physical IEDN to be displayed as a single Layer 2 network. This configuration extends the reach of the HiperSockets network outside the CPC to the entire ensemble, displaying as a single Layer 2 network.
Table 7-24 lists the minimum support requirements for HiperSockets integration with the IEDN.
Table 7-24 Minimum support requirements for HiperSockets integration with IEDN
Operating system | Support requirements
z/OS | z/OS V1R13¹

1 PTFs are required.
7.3.24 HiperSockets Virtual Switch Bridge
The HiperSockets Virtual Switch Bridge is exclusive to z13, zEC12, zBC12, z196, and z114. HiperSockets Virtual Switch Bridge can integrate with the IEDN through OSA-Express for zBX (OSX) adapters. It can then bridge to another central processor complex (CPC) through OSD adapters. This configuration extends the reach of the HiperSockets network outside of the CPC to the entire ensemble and hosts that are external to the CPC. The system is displayed as a single Layer 2 network.
Table 7-25 lists the minimum support requirements for HiperSockets Virtual Switch Bridge.
Table 7-25 Minimum support requirements for HiperSockets Virtual Switch Bridge
Operating system | Support requirements
z/VM | z/VM V6R2¹, z/VM V6R3
Linux on z Systems² | SUSE SLES 12, SUSE SLES 11, SUSE SLES 10 SP4 update (kernel 2.6.16.60-0.95.1), Red Hat RHEL 7, Red Hat RHEL 6, Red Hat RHEL 5.8 (GA-level)

1 PTFs are required.
2 Applicable to guest operating systems.
7.3.25 HiperSockets Multiple Write Facility
The HiperSockets Multiple Write Facility allows the streaming of bulk data over a HiperSockets link between two LPARs. Multiple output buffers are supported on a single Signal Adapter (SIGA) write instruction. The key advantage of this enhancement is that it allows the receiving LPAR to process a much larger amount of data per I/O interrupt. This process is transparent to the operating system in the receiving partition. HiperSockets Multiple Write Facility with fewer I/O interrupts is designed to reduce processor utilization of the sending and receiving partitions.
Support for this function is required by the sending operating system. For more information, see 4.8.6, “HiperSockets” on page 167. Table 7-26 lists the minimum support requirements for HiperSockets Virtual Multiple Write Facility.
Table 7-26 Minimum support requirements for HiperSockets multiple write
Operating system | Support requirements
z/OS | z/OS V1R12
7.3.26 HiperSockets IPv6
IPv6 is expected to be a key element in future networking. The IPv6 support for HiperSockets allows compatible implementations between external networks and internal HiperSockets networks.
Table 7-27 lists the minimum support requirements for HiperSockets IPv6 (CHPID type IQD).
Table 7-27 Minimum support requirements for HiperSockets IPv6 (CHPID type IQD)
Operating system | Support requirements
z/OS | z/OS V1R12
z/VM | z/VM V6R2
Linux on z Systems | SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, Red Hat RHEL 6
7.3.27 HiperSockets Layer 2 support
For flexible and efficient data transfer for IP and non-IP workloads, the HiperSockets internal networks on z13 can support two transport modes. These modes are Layer 2 (Link Layer) and the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA).
HiperSockets devices are protocol-independent and Layer 3-independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address. This MAC address allows the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same way as they do a non-mainframe environment.
Table 7-28 lists the minimum support requirements for HiperSockets Layer 2.
Table 7-28 Minimum support requirements for HiperSockets Layer 2
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2 for guest use
Linux on z Systems
SUSE SLES 11
SUSE SLES 12
Red Hat RHEL 7
Red Hat RHEL 6
7.3.28 HiperSockets network traffic analyzer for Linux on z Systems
HiperSockets network traffic analyzer (HS NTA), introduced with IBM System z10, provides support for tracing Layer 2 and Layer 3 HiperSockets network traffic in Linux on z Systems. This support allows Linux on z Systems to control the trace for the internal virtual LAN to capture the records into host memory and storage (file systems).
Linux on z Systems tools can be used to format, edit, and process the trace records for analysis by system programmers and network administrators.
7.3.29 FICON Express16S
FICON Express16S supports a link data rate of 16 gigabits per second (Gbps) and autonegotiation to 4 or 8 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, High Performance FICON for z Systems (zHPF), and Fibre Channel Protocol (FCP), the z13 server enables SAN for even higher performance, helping to prepare for an end-to-end 16 Gbps infrastructure to meet the increased bandwidth demands of your applications.
The new features for the multimode and single mode fiber optic cabling environments have reduced latency for large read/write operations and increased bandwidth compared to the FICON Express8S features.
Table 7-29 lists the minimum support requirements for FICON Express16S.
Table 7-29 Minimum support requirements for FICON Express16S
Operating system
z/OS
z/VM
z/VSE
z/TPF
Linux on z Systems
Native FICON and CTC
CHPID type FC
V1R121
V6R2
V5R1
V1R1
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF single-track operations
CHPID type FC
V1R12
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF multitrack operations
CHPID type FC
V1R122
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of SCSI devices
CHPID type FCP
N/A
V6R22
V5R1
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of hardware data router
CHPID type FCP
N/A
V6R3
N/A
N/A
SUSE SLES 12
SUSE SLES 11 SP3
Red Hat RHEL 7
Red Hat RHEL 6.4
Support of T10-DIF
CHPID type FCP
N/A
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11 SP3
Red Hat RHEL 7
Red Hat RHEL 6.4

1 PTFs are required to support global resource serialization (GRS) FICON CTC toleration.
2 PTFs are required.
7.3.30 FICON Express8S
The FICON Express8S feature is exclusively installed in the Peripheral Component Interconnect Express (PCIe) I/O drawer. It provides a link rate of 8 Gbps, with auto-negotiation to 4 or 2 Gbps for compatibility with previous devices and investment protection. Both 10 km (6.2 miles) LX and SX connections are offered (in a feature, all connections must have the same type).
With FICON Express8S, clients might be able to consolidate existing FICON, FICON Express2, and FICON Express4 channels, while maintaining and enhancing performance.
FICON Express8S introduced a hardware data router for more efficient zHPF data transfers. It is the first channel with hardware that is designed to support zHPF, as contrasted to FICON Express8, FICON Express4, and FICON Express2, which have a firmware-only zHPF implementation.
Table 7-30 lists the minimum support requirements for FICON Express8S.
Table 7-30 Minimum support requirements for FICON Express8S
Operating system
z/OS
z/VM
z/VSE
z/TPF
Linux on z Systems
Native FICON and CTC
CHPID type FC
V1R121
V6R2
V5R1
V1R1
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF single-track operations
CHPID type FC
V1R12
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF multitrack operations
CHPID type FC
V1R122
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of SCSI devices
CHPID type FCP
N/A
V6R22
V5R1
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of hardware data router
CHPID type FCP
N/A
V6R3
N/A
N/A
SUSE SLES 12
SUSE SLES 11 SP3
Red Hat RHEL 7
Red Hat RHEL 6.4
Support of T10-DIF
CHPID type FCP
N/A
V6R22
N/A
N/A
SUSE SLES 12
SUSE SLES 11 SP3
Red Hat RHEL 7
Red Hat RHEL 6.4

1 PTFs are required to support GRS FICON CTC toleration.
2 PTFs are required.
7.3.31 FICON Express8
The FICON Express8 features provide a link rate of 8 Gbps, with auto-negotiation to 4 Gbps or 2 Gbps for compatibility with previous devices and investment protection. Both 10 km (6.2 miles) LX and SX connections are offered (in a feature, all connections must have the same type).
 
Important1: The z13 is the last z Systems server to support FICON Express8 channels. Enterprises should begin migrating from FICON Express8 channel features (FC 3325 and FC 3326) to FICON Express16S channel features (FC 0418 and FC 0419). FICON Express8 will not be supported on future high-end z Systems servers as carry-forward on an upgrade.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
Table 7-31 lists the minimum support requirements for FICON Express8.
Table 7-31 Minimum support requirements for FICON Express8
Operating system
z/OS
z/VM
z/VSE
z/TPF
Linux on z Systems
Native FICON and CTC
CHPID type FC
V1R12
V6R2
V5R1
V1R1
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF single-track operations
CHPID type FC
V1R12
V6R2
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
zHPF multitrack operations
CHPID type FC
V1R121
V6R2
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of SCSI devices
CHPID type FCP
N/A
V6R21
V5R1
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
Support of T10-DIF
CHPID type FCP
N/A
V6R21
N/A
N/A
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6

1 PTFs are required.
7.3.32 z/OS Discovery and Auto-Configuration (zDAC)
zDAC is designed to automatically run a number of I/O configuration definition tasks for new and changed disk and tape controllers that are connected to a switch or director, when attached to a FICON channel.
The zDAC function is integrated into the existing hardware configuration definition (HCD). Clients can define a policy that can include preferences for availability and bandwidth that include parallel access volume (PAV) definitions, control unit numbers, and device number ranges. When new controllers are added to an I/O configuration or changes are made to existing controllers, the system discovers them and proposes configuration changes that are based on that policy.
zDAC provides real-time discovery for the FICON fabric, subsystem, and I/O device resource changes from z/OS. By exploring the discovered control units for defined logical control units (LCUs) and devices, zDAC compares the discovered controller information with the current system configuration. It then determines delta changes to the configuration for a proposed configuration.
All added or changed logical control units and devices are added into the proposed configuration. They are assigned proposed control unit and device numbers, and channel paths that are based on the defined policy. zDAC uses channel path selection algorithms to minimize single points of failure. The zDAC proposed configurations are created as work I/O definition files (IODF) that can be converted to production IODFs and activated.
zDAC is designed to run discovery for all systems in a sysplex that support the function. Therefore, zDAC helps to simplify I/O configuration on z13 systems that run z/OS, and reduces complexity and setup time.
zDAC applies to all FICON features that are supported on z13 when configured as CHPID type FC. Table 7-32 lists the minimum support requirements for zDAC.
Table 7-32 Minimum support requirements for zDAC
Operating system
Support requirements
z/OS
z/OS V1R121

1 PTFs are required.
7.3.33 High-performance FICON
High-performance FICON (zHPF), first provided on System z10, is a FICON architecture for protocol simplification and efficiency. It reduces the number of information units (IUs) processed. Enhancements have been made to the z/Architecture and the FICON interface architecture to provide optimizations for online transaction processing (OLTP) workloads.
When used by the FICON channel, the z/OS operating system, and the DS8000 control unit or other subsystems, the FICON channel processor usage can be reduced and performance improved. Appropriate levels of Licensed Internal Code (LIC) are required. Additionally, the changes to the architectures provide end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).
zHPF is compatible with these standards:
Fibre Channel Framing and Signaling standard (FC-FS)
Fibre Channel Switch Fabric and Switch Control Requirements (FC-SW)
Fibre Channel Single-Byte-4 (FC-SB-4) standards
The zHPF channel programs can be used, for example, by the z/OS OLTP I/O workloads, DB2, VSAM, the partitioned data set extended (PDSE), and the z/OS file system (zFS).
At the zHPF announcement, zHPF supported the transfer of small blocks of fixed size data (4 K) from a single track. This capability is extended, first to 64 KB, and then to multitrack operations. The 64 KB data transfer limit on multitrack operations was removed by z196. This improvement allows the channel to fully use the bandwidth of FICON channels, resulting in higher throughputs and lower response times.
The multitrack operations extension applies exclusively to the FICON Express8S, FICON Express8, and FICON Express16S, on the z13, zEC12, zBC12, z196, and z114, when configured as CHPID type FC and connecting to z/OS. zHPF requires matching support by the DS8000 series. Otherwise, the extended multitrack support is transparent to the control unit.
From the z/OS point of view, the existing FICON architecture is called command mode and the zHPF architecture is called transport mode. During link initialization, the channel node and the control unit node indicate whether they support zHPF.
 
Requirement: All FICON channel path identifiers (CHPIDs) that are defined to the same LCU must support zHPF. The inclusion of any features that do not support zHPF in the path group causes the entire path group to support command mode only.
The mode that is used for an I/O operation depends on the control unit that supports zHPF and its settings in the z/OS operating system. For z/OS use, there is a parameter in the IECIOSxx member of SYS1.PARMLIB (ZHPF=YES or NO) and in the SETIOS system command to control whether zHPF is enabled or disabled. The default is ZHPF=NO.
Support is also added for the D IOS,ZHPF system command to indicate whether zHPF is enabled, disabled, or not supported on the server.
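For illustration, zHPF can be enabled at IPL time through the IECIOSxx parmlib member and controlled or displayed dynamically from the console (the member suffix is an example):
In SYS1.PARMLIB(IECIOSxx):
ZHPF=YES
From the operator console:
SETIOS ZHPF=YES
D IOS,ZHPF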
Similar to the existing FICON channel architecture, the application or access method provides the channel program (CCWs). The way that zHPF (transport mode) manages channel program operations is different from the CCW operation for the existing FICON architecture (command mode). While in command mode, each CCW is sent to the control unit for execution. In transport mode, multiple channel commands are packaged together and sent over the link to the control unit in a single control block. Fewer processor cycles are used compared to the existing FICON architecture. Certain complex CCW chains are not supported by zHPF.
zHPF is available to z13, zEC12, zBC12, z196, z114, and System z10. The FICON Express8S, FICON Express8, and FICON Express16S (CHPID type FC) concurrently support both the existing FICON protocol and the zHPF protocol in the server LIC.
zHPF is enhanced to allow all large write operations (> 64 KB) at distances up to 100 km to be run in a single round trip to the control unit, thus not elongating the I/O service time for these write operations at extended distances. This enhancement to zHPF removes a key inhibitor for clients adopting zHPF over extended distances, especially when using the IBM HyperSwap® capability of z/OS.
Table 7-33 lists the minimum support requirements for zHPF.
Table 7-33 Minimum support requirements for zHPF
Operating system
Support requirements
z/OS
Single-track operations: z/OS V1R12.
Multitrack operations: z/OS V1R12 with PTFs.
64 K enhancement: z/OS V1R12 with PTFs.
z/VM
z/VM V6R2 for guest use only.
Linux on z Systems
SUSE SLES 12
SUSE SLES 11 SP1
Red Hat RHEL 7
Red Hat RHEL 6
IBM continues to work with its Linux distribution partners on use of appropriate z Systems (z13, zEC12) functions to be provided in future Linux on z Systems distribution releases.
For more information about FICON channel performance, see the performance technical papers on the z Systems I/O connectivity website at:
7.3.34 Request node identification data
First offered on z9 EC, the request node identification data (RNID) function for native FICON CHPID type FC allows isolation of cabling-detected errors.
Table 7-34 lists the minimum support requirements for RNID.
Table 7-34 Minimum support requirements for RNID
Operating system
Support requirements
z/OS
z/OS V1R12
7.3.35 32 K subchannels for the FICON Express16S
To help facilitate growth and continue to enable server consolidation, the z13 supports up to 32 K subchannels per FICON Express16S channel (CHPID). More devices can be defined per FICON channel, which includes primary, secondary, and alias devices. The maximum number of subchannels across all device types that are addressable within an LPAR remains at 63.75 K for subchannel set 0 and at 64 K minus one (64 x 1024 - 1) for subchannel sets 1, 2, and 3.
This support is exclusive to the z13 and applies to the FICON Express16S feature (defined as CHPID type FC). FICON Express8S and FICON Express8 remain at 24 K subchannel support when defined as CHPID type FC.
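The extra subchannels are typically used, for example, to place Parallel Access Volume (PAV) alias devices in an alternate subchannel set. The following IOCP fragment is only an illustrative sketch (device numbers, the control unit number, and unit types are hypothetical):
* Base devices in subchannel set 0
IODEVICE ADDRESS=(9000,064),CUNUMBR=(A000),UNIT=3390B,UNITADD=00
* Alias devices placed in subchannel set 1
IODEVICE ADDRESS=(9100,064),CUNUMBR=(A000),UNIT=3390A,UNITADD=40,SCHSET=1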
Table 7-35 lists the minimum support requirements of 32 K subchannel support for FICON Express.
Table 7-35 Minimum support requirements for 32K subchannel
Operating system
Support requirements
z/OS
z/OS V1R121
z/VM
z/VM V6R2
Linux on z Systems
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6

1 PTFs are required.
7.3.36 Extended distance FICON
An enhancement to the industry-standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for persistent IU pacing. Extended distance FICON is transparent to operating systems and applies to all FICON Express16S, FICON Express8S, and FICON Express8 features that carry native FICON traffic (CHPID type FC).
To use this enhancement, the control unit must support the new IU pacing protocol. IBM System Storage® DS8000 series supports extended distance FICON for IBM z Systems environments. The channel defaults to current pacing values when it operates with control units that cannot use extended distance FICON.
7.3.37 Platform and name server registration in FICON channel
The FICON Express16S, FICON Express8S, and FICON Express8 features (on the z13 servers) support platform and name server registration to the fabric for CHPID types FC and FCP.
Information about the channels that are connected to a fabric, if registered, allows other nodes or storage area network (SAN) managers to query the name server to determine what is connected to the fabric.
The following attributes are registered for the z13 servers:
Platform information
Channel information
Worldwide port name (WWPN)
Port type (N_Port_ID)
FC-4 types that are supported
Classes of service that are supported by the channel
The platform and name server registration services are defined in the Fibre Channel - Generic Services 4 (FC-GS-4) standard.
7.3.38 FICON link incident reporting
FICON link incident reporting allows an operating system image (without operator intervention) to register for link incident reports. Table 7-36 lists the minimum support requirements for this function.
Table 7-36 Minimum support requirements for link incident reporting
Operating system
Support requirements
z/OS
z/OS V1R12
7.3.39 FCP provides increased performance
The FCP LIC is modified to help provide increased I/O operations per second for both small and large block sizes, and to support 16-Gbps link speeds.
For more information about FCP channel performance, see the performance technical papers on the z Systems I/O connectivity website at:
7.3.40 N Port ID Virtualization (NPIV)
NPIV allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel. This feature, first introduced with z9 EC, can be used with earlier FICON features that have been carried forward from earlier servers.
Table 7-37 lists the minimum support requirements for NPIV.
Table 7-37 Minimum support requirements for NPIV
Operating system
Support requirements
z/VM
z/VM V6R2 provides support for guest operating systems and VM users to obtain virtual port numbers.
Installation from DVD to SCSI disks is supported when NPIV is enabled.
z/VSE
z/VSE V5R1.
Linux on z Systems
SUSE SLES 12.
SUSE SLES 11.
Red Hat RHEL 7.
Red Hat RHEL 6.
7.3.41 OSA-Express5S 10-Gigabit Ethernet LR and SR
The OSA-Express5S 10-Gigabit Ethernet feature, introduced with the zEC12 and zBC12, is installed exclusively in the PCIe I/O drawer. Each feature has one port, which is defined as either CHPID type OSD or OSX. CHPID type OSD supports the queued direct input/output (QDIO) architecture for high-speed TCP/IP communication. The z196 introduced the CHPID type OSX. For more information, see 7.3.50, “Intraensemble data network” on page 270.
Table 7-38 lists the minimum support requirements for OSA-Express5S 10-Gigabit Ethernet LR and SR features.
Table 7-38 Minimum support requirements for OSA-Express5S 10-Gigabit Ethernet LR and SR
Operating system
Support requirements
z/OS
OSD: z/OS V1R121
OSX: z/OS V1R121
z/VM
OSD: z/VM V6R2
z/VSE
OSX: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1 PUT 5a
OSX: z/TPF V1R1 PUT 8a
IBM zAware
OSD
OSX
Linux on z Systems
OSD: SUSE SLES 11, SUSE SLES 12, Red Hat RHEL 7, and Red Hat RHEL 6
OSX: SUSE SLES 11 SP12, SUSE SLES 12, Red Hat RHEL 6, Red Hat RHEL 7

1 IBM Lifecycle extension is required for support.
2 Maintenance update is required.
7.3.42 OSA-Express5S Gigabit Ethernet LX and SX
The OSA-Express5S Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature has one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). Each port supports attachment to a 1 Gigabit per second (Gbps) Ethernet local area network (LAN). The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
Operating system support is required to recognize and use the second port on the OSA-Express5S Gigabit Ethernet feature. Table 7-39 lists the minimum support requirements for OSA-Express5S Gigabit Ethernet LX and SX.
Table 7-39 Minimum support requirements for OSA-Express5S Gigabit Ethernet LX and SX
Operating system
Support requirements
using two ports per CHPID
Support requirements
using one port per CHPID
z/OS
OSD: z/OS V1R12
OSD: z/OS V1R12
z/VM
OSD: z/VM V6R2
OSD: z/VM V6R2
z/VSE
OSD: z/VSE V5R1
OSD: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1
OSD: z/TPF V1R1
IBM zAware
OSD
OSD
Linux on z Systems
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
7.3.43 OSA-Express5S 1000BASE-T Ethernet
The OSA-Express5S 1000BASE-T Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature has one PCIe adapter and two ports. The two ports share a CHPID, which can be defined as OSC, OSD, OSE, OSM, or OSN. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems. The OSM CHPID type was introduced with z196. For more information, see 7.3.49, “Intranode management network (INMN)” on page 270.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD and OSN
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
Ensemble management, with CHPID type OSM
Table 7-40 lists the minimum support requirements for OSA-Express5S 1000BASE-T.
Table 7-40 Minimum support requirements for OSA-Express5S 1000BASE-T Ethernet
Operating system
Support requirements
using two ports per CHPID
Support requirements
using one port per CHPID
z/OS
OSC, OSD, OSE, and OSN2: z/OS V1R121
OSC, OSD, OSE, OSM, and OSN2: z/OS V1R121
z/VM
OSC, OSD1, OSE, and OSN2: z/VM V6R2
OSC, OSD, OSE, OSM1,3, and OSN2: z/VM V6R2
z/VSE
OSC, OSD, OSE, and OSN2: z/VSE V5R1
OSC, OSD, OSE, and OSN2: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1
OSN2: z/TPF V1R1
OSD: z/TPF V1R1
OSN2: z/TPF V1R1
IBM zAware
OSD
OSD
Linux on z Systems
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSN2: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSM: SUSE SLES 12, SUSE SLES 11 SP2, Red Hat RHEL 7, and Red Hat RHEL 6
OSN2: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6

1 PTFs are required.
2 Although CHPID type OSN does not use any ports (because all communication is LPAR to LPAR), it is listed here for completeness.
3 OSM support in V6R2 and V6R3 for dynamic I/O only.
7.3.44 OSA-Express4S 10-Gigabit Ethernet LR and SR
The OSA-Express4S 10-Gigabit Ethernet feature, introduced with the zEC12, is installed exclusively in the PCIe I/O drawer. Each feature has one port, which is defined as either CHPID type OSD or OSX. CHPID type OSD supports the QDIO architecture for high-speed TCP/IP communication. The z196 introduced the CHPID type OSX. For more information, see 7.3.50, “Intraensemble data network” on page 270.
The OSA-Express4S features have half the number of ports per feature when compared to OSA-Express3, and half the size as well. This configuration results in an increased number of installable features. It also facilitates the purchase of the correct number of ports to help satisfy your application requirements and to optimize better for redundancy.
Table 7-41 lists the minimum support requirements for OSA-Express4S 10-Gigabit Ethernet LR and SR features.
Table 7-41 Minimum support requirements for OSA-Express4S 10-Gigabit Ethernet LR and SR
Operating system
Support requirements
z/OS
OSD: z/OS V1R121
OSX: z/OS V1R121
z/VM
OSD: z/VM V6R2
OSX: z/VM V6R21 and V6R3 for dynamic I/O only
z/VSE
OSD: z/VSE V5R1
OSX: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1
OSX: z/TPF V1R1
IBM zAware
OSD
OSX
Linux on z Systems
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSX: SUSE SLES 12, SUSE SLES 11 SP12, Red Hat RHEL 7, and Red Hat RHEL 6

1 PTFs are required.
2 Maintenance update is required.
7.3.45 OSA-Express4S Gigabit Ethernet LX and SX
The OSA-Express4S Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature has one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). Each port supports attachment to a 1 Gbps Ethernet LAN. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems.
Operating system support is required to recognize and use the second port on the OSA-Express4S Gigabit Ethernet feature. Table 7-42 lists the minimum support requirements for OSA-Express4S Gigabit Ethernet LX and SX.
Table 7-42 Minimum support requirements for OSA-Express4S Gigabit Ethernet LX and SX
Operating system
Support requirements
using two ports per CHPID
Support requirements
using one port per CHPID
z/OS
OSD: z/OS V1R12
OSD: z/OS V1R12
z/VM
OSD: z/VM V6R2
OSD: z/VM V6R2
z/VSE
OSD: z/VSE V5R1
OSD: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1
OSD: z/TPF V1R1
IBM zAware
OSD
OSD
Linux on z Systems
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
7.3.46 OSA-Express4S 1000BASE-T Ethernet
The OSA-Express4S 1000BASE-T Ethernet feature is installed exclusively in the PCIe I/O drawer. Each feature has one PCIe adapter and two ports. The two ports share a CHPID, which is defined as OSC, OSD, OSE, OSM, or OSN. The ports can be defined as a spanned channel, and can be shared among LPARs and across logical channel subsystems. The OSM CHPID type was introduced with z196. For more information, see 7.3.49, “Intranode management network (INMN)” on page 270.
Each adapter can be configured in the following modes:
QDIO mode, with CHPID types OSD and OSN
Non-QDIO mode, with CHPID type OSE
Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC
Ensemble management, with CHPID type OSM
Operating system support is required to recognize and use the second port on the OSA-Express4S 1000BASE-T feature. Table 7-43 lists the minimum support requirements for OSA-Express4S 1000BASE-T.
Table 7-43 Minimum support requirements for OSA-Express4S 1000BASE-T Ethernet
Operating system
Support requirements
using two ports per CHPID
Support requirements
using one port per CHPID
z/OS
OSC, OSD, OSE, and OSN2: z/OS V1R121
OSC, OSD, OSE, OSM, and OSN2: z/OS V1R121
z/VM
OSC, OSD1, OSE, and OSN2: z/VM V6R2
OSC, OSD, OSE, OSM1,3, and OSN2: z/VM V6R2
z/VSE
OSC, OSD, OSE, and OSN2: z/VSE V5R1
OSC, OSD, OSE, and OSN2: z/VSE V5R1
z/TPF
OSD: z/TPF V1R1
OSN2: z/TPF V1R1
OSD: z/TPF V1R1
OSN2: z/TPF V1R1
IBM zAware
OSD
OSD
Linux on z Systems
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSN2: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSD: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6
OSM: SUSE SLES 12, SUSE SLES 11 SP2, Red Hat RHEL 7, and Red Hat RHEL 6
OSN2: SUSE SLES 12, SUSE SLES 11, Red Hat RHEL 7, and Red Hat RHEL 6

1 PTFs are required.
2 Although CHPID type OSN does not use any ports (because all communication is LPAR to LPAR), it is listed here for completeness.
3 OSM Support in z/VM V6R2 and V6R3 for dynamic I/O only.
7.3.47 Open Systems Adapter for IBM zAware
The IBM zAware server requires connections to the graphical user interface (GUI) browser and z/OS monitored clients. An OSA channel is the most logical choice for allowing GUI browser connections to the server. By using this channel, users can view the analytical data for the monitored clients through the IBM zAware GUI. For z/OS monitored clients that connect to an IBM zAware server, one of the following network options is supported:
A client-provided data network that is provided through an OSA Ethernet channel.
A HiperSockets subnetwork within the z13.
IEDN on the z13 to other CPC nodes in the ensemble. The z13 server also supports the use of HiperSockets over the IEDN.
7.3.48 Open Systems Adapter for Ensemble
Six different OSA-Express5S and OSA-Express4S features are used to connect the z13 to the Unified Resource Manager and other ensemble nodes. These connections are part of the ensemble's two private and secure internal networks.
For the intranode management network (INMN), use these features:
OSA-Express5S 1000BASE-T Ethernet, FC 0417
OSA-Express4S 1000BASE-T Ethernet, FC 0408
For the IEDN, use these features:
OSA-Express5S 10 Gigabit Ethernet (GbE) Long Reach (LR), FC 0415
OSA-Express5S 10 Gigabit Ethernet (GbE) Short Reach (SR), FC 0416
OSA-Express4S 10 Gigabit Ethernet (GbE) Long Reach (LR), FC 0406
OSA-Express4S 10 Gigabit Ethernet (GbE) Short Reach (SR), FC 0407
7.3.49 Intranode management network (INMN)
The INMN is one of the ensemble’s two private and secure internal networks. The INMN is used by the Unified Resource Manager functions.
The INMN is a private and physically isolated 1000Base-T Ethernet internal platform management network. It operates at 1 Gbps, and connects all resources (CPC components) of an ensemble node for management purposes. It is pre-wired, internally switched, configured, and managed with full redundancy for high availability.
The z196 introduced the OSA-Express for Unified Resource Manager (OSM) CHPID type. INMN requires two OSA-Express5S 1000BASE-T or OSA-Express4S 1000BASE-T ports from two different OSA-Express5S 1000BASE-T or OSA-Express4S 1000BASE-T features, which are configured as CHPID type OSM. One port per CHPID is available with CHPID type OSM.
The OSA connection is through the system control hub (SCH) on the z13 to the Hardware Management Console (HMC) network interface.
7.3.50 Intraensemble data network
The intraensemble data network (IEDN) is one of the ensemble’s two private and secure internal networks. The IEDN provides applications with a fast data exchange path between ensemble nodes. Specifically, it is used for communications across the virtualized images (LPARs, z/VM virtual machines, and blade LPARs).
The IEDN is a private and secure 10 Gbps Ethernet network that connects all elements of an ensemble. It is access-controlled by using integrated virtual LAN (VLAN) provisioning. No client-managed switches or routers are required. The IEDN is managed by the primary HMC that controls the ensemble. This configuration helps reduce the need for firewalls and encryption, and simplifies network configuration and management. It also provides full redundancy for high availability.
The z196 introduced the OSX CHPID type. The OSA connection is from the z13 to the ToR switches on zBX.
The IEDN requires two OSA-Express5S or OSA-Express4S 10 GbE ports that are configured as CHPID type OSX.
7.3.51 OSA-Express5S and OSA-Express4S NCP support
The OSA-Express5S 1000BASE-T Ethernet and OSA-Express4S 1000BASE-T Ethernet features can provide channel connectivity from an operating system in a z13 to IBM Communication Controller for Linux on z Systems (CCL). This configuration uses the Open Systems Adapter for Network Control Program (NCP) (CHPID type OSN) in support of the Channel Data Link Control (CDLC) protocol. OSN eliminates the requirement for an external communication medium for communications between the operating system and the CCL image.
The LPAR-to-LPAR data flow is accomplished by the OSA-Express5S or OSA-Express4S feature without ever exiting the card. The OSN support allows multiple connections between the CCL image and the operating system, such as z/OS or z/TPF. The operating system must be in the same physical server as the CCL image.
For CCL planning information, see IBM Communication Controller for Linux on System z V1.2.1 Implementation Guide, SG24-7223. For the most recent CCL information, see this website:
CDLC, when used with CCL, emulates selected functions of IBM 3745/NCP operations. The port that is used with the OSN support is displayed as an ESCON channel to the operating system. This support can be used with OSA-Express5S 1000BASE-T and OSA-Express4S 1000BASE-T features.
Table 7-44 lists the minimum support requirements for OSN.
Table 7-44 Minimum support requirements for OSA-Express5S and OSA-Express4S OSN
Operating system
Support requirements
z/OS
z/OS V1R12a
z/VM
z/VM V6R2
z/VSE
z/VSE V5R1
z/TPF
z/TPF V1R11
Linux on z Systems
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6

1 PTFs are required.
7.3.52 Integrated Console Controller
The 1000BASE-T Ethernet features provide the Integrated Console Controller (OSA-ICC) function, which supports TN3270E (RFC 2355) and non-SNA DFT 3270 emulation. The OSA-ICC function is defined as CHPID type OSC and console controller, and has multiple LPAR support as shared or spanned channels.
With the OSA-ICC function, 3270 emulation for console session connections is integrated in the z13 through a port on the OSA-Express5S 1000BASE-T or OSA-Express4S 1000BASE-T features. This function eliminates the requirement for external console controllers, such as 2074 or 3174, helping to reduce cost and complexity. Each port can support up to 120 console session connections.
OSA-ICC can be configured on a PCHID-by-PCHID basis, and is supported at any of the feature settings (10, 100, or 1000 Mbps, half-duplex or full-duplex).
7.3.53 VLAN management enhancements
Table 7-45 lists the minimum support requirements for VLAN management enhancements for the OSA-Express5S and OSA-Express4S features (CHPID type OSD).
Table 7-45 Minimum support requirements for VLAN management enhancements
Operating system
Support requirements
z/OS
z/OS V1R12.
z/VM
z/VM V6R2. Support of guests is transparent to z/VM if the device is directly connected to the guest (pass through).
7.3.54 GARP VLAN Registration Protocol
All OSA-Express5S and OSA-Express4S features support VLAN prioritization, a component of the IEEE 802.1 standard. GARP VLAN Registration Protocol (GVRP) support allows an OSA-Express port to register or unregister its VLAN IDs with a GVRP-capable switch and dynamically update its table as the VLANs change. This process simplifies the network administration and management of VLANs because manually entering VLAN IDs at the switch is no longer necessary.
The minimum support requirements are listed in Table 7-46.
Table 7-46 Minimum support requirements for GVRP
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2
Linux on z Systems1
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6

1 By using VLANs
7.3.55 Inbound workload queuing for OSA-Express5S and OSA-Express4S
OSA-Express3 introduced inbound workload queuing (IWQ), which creates multiple input queues and allows OSA to differentiate workloads “off the wire.” It then assigns work to a specific input queue (per device) to z/OS. The support also is available with OSA-Express5S and OSA-Express4S. CHPID types OSD and OSX are supported.
Each input queue is a unique type of workload, and has unique service and processing requirements. The IWQ function allows z/OS to preassign the appropriate processing resources for each input queue. This approach allows multiple concurrent z/OS processing threads to process each unique input queue (workload), avoiding traditional resource contention. In a heavily mixed workload environment, this “off the wire” network traffic separation is provided by OSA-Express5S and OSA-Express4S. IWQ reduces the conventional z/OS processing that is required to identify and separate unique workloads. This advantage results in improved overall system performance and scalability.
A primary objective of IWQ is to provide improved performance for business-critical interactive workloads by reducing contention that is created by other types of workloads. The following types of z/OS workloads are identified and assigned to unique input queues:
z/OS Sysplex Distributor traffic: Network traffic that is associated with a distributed virtual Internet Protocol address (VIPA) is assigned to a unique input queue. This configuration allows the Sysplex Distributor traffic to be immediately distributed to the target host.
z/OS bulk data traffic: Network traffic that is dynamically associated with a streaming (bulk data) TCP connection is assigned to a unique input queue. This configuration allows the bulk data processing to be assigned the appropriate resources and isolated from critical interactive workloads.
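As a minimal sketch only (the interface name, port name, and IP address are hypothetical), IWQ is enabled on the z/OS TCP/IP INTERFACE statement; WORKLOADQ requires INBPERF DYNAMIC and a virtual MAC:
INTERFACE OSDINT1 DEFINE IPAQENET
PORTNAME OSAPORT1
IPADDR 192.0.2.10/24
VMAC
INBPERF DYNAMIC WORKLOADQ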
IWQ is exclusive to OSA-Express5S and OSA-Express4S CHPID types OSD and OSX, and the z/OS operating system. This limitation applies to z13, zEC12, zBC12, z196, z114, and System z10. The minimum support requirements are listed in Table 7-47.
Table 7-47 Minimum support requirements for IWQ
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2 for guest use only, service required
7.3.56 Inbound workload queuing for Enterprise Extender
IWQ for the OSA-Express features is enhanced to differentiate and separate inbound Enterprise Extender traffic to a dedicated input queue.
IWQ for Enterprise Extender is exclusive to OSA-Express5S and OSA-Express4S, CHPID types OSD and OSX, and the z/OS operating system. This limitation applies to z13, zEC12, zBC12, z196, and z114. The minimum support requirements are listed in Table 7-48.
Table 7-48 Minimum support requirements for IWQ
Operating system
Support requirements
z/OS
z/OS V1R13
z/VM
z/VM V6R2 for guest use only, service required
7.3.57 Querying and displaying an OSA configuration
OSA-Express3 introduced the capability for the operating system to query and display directly the current OSA configuration information (similar to OSA/SF). z/OS uses this OSA capability by introducing a TCP/IP operator command called display OSAINFO.
Using display OSAINFO allows the operator to monitor and verify the current OSA configuration. Doing so helps improve the overall management, serviceability, and usability of OSA-Express5S and OSA-Express4S.
The display OSAINFO command is exclusive to z/OS, and applies to CHPID types OSD, OSM, and OSX.
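For example (the TCP/IP procedure name and interface name are hypothetical), the command can be entered from the console in a form such as:
D TCPIP,TCPIP1,OSAINFO,INTFNAME=OSDINT1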
7.3.58 Link aggregation support for z/VM
Link aggregation (IEEE 802.3ad) that is controlled by the z/VM Virtual Switch (VSWITCH) allows the dedication of an OSA-Express5S or OSA-Express4S port to the z/VM operating system. The port must be participating in an aggregated group that is configured in Layer 2 mode. Link aggregation (trunking) combines multiple physical OSA-Express5S or OSA-Express4S ports into a single logical link. This configuration increases throughput, and provides nondisruptive failover if a port becomes unavailable. The target links for aggregation must be of the same type.
Link aggregation is applicable to CHPID type OSD (QDIO). Link aggregation is supported by z/VM V6R2 and later.
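A minimal, illustrative z/VM sketch (the group name, virtual switch name, and OSA device numbers are hypothetical, and exact operand forms depend on the z/VM level) might group two OSA ports and attach the group to a virtual switch:
SET PORT GROUP OSAGRP1 JOIN 2100.P00 2200.P00
DEFINE VSWITCH VSW1 ETHERNET GROUP OSAGRP1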
7.3.59 Multi-VSwitch Link Aggregation
Multi-VSwitch Link Aggregation support allows a port group of OSA-Express features to span multiple virtual switches within a single z/VM system or between multiple z/VM systems. Sharing a Link Aggregation Port Group (LAG) with multiple virtual switches increases optimization and utilization of the OSA-Express when handling larger traffic loads. Higher adapter utilization protects customer investments, which is increasingly important as 10 Gb deployments become more prevalent.
The minimum support requirements are listed in Table 7-49.
Table 7-49 Minimum support requirements for Multi-VSwitch Link Aggregation
Operating system
Support requirements
z/VM
z/VM V6R31

1 PTF support is required.
7.3.60 QDIO data connection isolation for z/VM
The QDIO data connection isolation function provides a higher level of security when sharing an OSA connection in z/VM environments that use VSWITCH. The VSWITCH is a virtual network device that provides switching between OSA connections and the connected guest systems.
QDIO data connection isolation allows disabling internal routing for each QDIO connection. It also provides a means for creating security zones and preventing network traffic between the zones.
QDIO data connection isolation is supported by all OSA-Express5S and OSA-Express4S features on z13 and zEC12.
7.3.61 QDIO interface isolation for z/OS
Some environments require strict controls for routing data traffic between servers or nodes. In certain cases, the LPAR-to-LPAR capability of a shared OSA connection can prevent such controls from being enforced. With interface isolation, internal routing can be controlled on an LPAR basis. When interface isolation is enabled, the OSA discards any packets that are destined for a z/OS LPAR that is registered in the OSA address table (OAT) as isolated.
QDIO interface isolation is supported by Communications Server for z/OS V1R12 or later, and all OSA-Express5S and OSA-Express4S features on z13.
7.3.62 QDIO optimized latency mode
QDIO optimized latency mode (OLM) can help improve performance for applications that have a critical requirement to minimize response times for inbound and outbound data.
OLM optimizes the interrupt processing in the following manner:
For inbound processing, the TCP/IP stack looks more frequently for available data to process. This process ensures that any new data is read from the OSA-Express5S or OSA-Express4S without needing more program controlled interrupts (PCIs).
For outbound processing, the OSA-Express5S or OSA-Express4S also looks more frequently for available data to process from the TCP/IP stack. Therefore, the process does not require a Signal Adapter (SIGA) instruction to determine whether more data is available.
7.3.63 Large send for IPv6 packets
Large send for IPv6 packets improves performance by offloading outbound TCP segmentation processing from the host to an OSA-Express5S and OSA-Express4S feature by employing a more efficient memory transfer into OSA-Express5S and OSA-Express4S. Large send support for IPv6 packets applies to the OSA-Express5S and OSA-Express4S features (CHPID type OSD and OSX), and is exclusive to z13, zEC12, zBC12, z196, and z114. With z13, large send for IPv6 packets (segmentation offloading) for LPAR-to-LPAR traffic is supported. The minimum support requirements are listed in Table 7-50.
Table 7-50 Minimum support requirements for large send for IPv6 packets
Operating system
Support requirements
z/OS
z/OS V1R131
z/VM
z/VM V6R2 for guest use only

1 PTFs are required.
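As an illustrative sketch (assuming the stack and the OSA-Express feature support it), TCP segmentation offload (large send) is enabled in the z/OS TCP/IP profile with a GLOBALCONFIG statement; when active on a supporting OSA-Express feature, it also applies to IPv6 packets:
GLOBALCONFIG SEGMENTATIONOFFLOAD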
7.3.64 OSA-Express5S and OSA-Express4S checksum offload
OSA-Express5S and OSA-Express4S features, when configured as CHPID type OSD, provide checksum offload for several types of traffic, as indicated in Table 7-51.
Table 7-51 Minimum support requirements for OSA-Express5S and OSA-Express4S checksum offload
Traffic
Support requirements
LPAR to LPAR
z/OS V1R12 1
z/VM V6R2 for guest use2
IPv6
z/OS V1R13
z/VM V6R2 for guest use2
LPAR-to-LPAR traffic for IPv4 and IPv6
z/OS V1R13
z/VM V6R2 for guest use2

1 PTFs are required.
2 Device is directly attached to guest, and PTFs are required.
7.3.65 Checksum offload for IPv4 and IPv6 packets when in QDIO mode
The checksum offload function supports z/OS and Linux on z Systems environments. It is offered on the OSA-Express5S GbE, OSA-Express5S 1000BASE-T Ethernet, OSA-Express4S GbE, and OSA-Express4S 1000BASE-T Ethernet features. Checksum offload provides the capability of calculating the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and IP header checksum. Checksum verifies the accuracy of files. By moving the checksum calculations to a Gigabit or 1000BASE-T Ethernet feature, host processor cycles are reduced and performance is improved.
When checksum is offloaded, the OSA-Express feature runs the checksum calculations for Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) packets. The checksum offload function applies to packets that go to or come from the LAN. When multiple IP stacks share an OSA-Express, and an IP stack sends a packet to a next hop address that is owned by another IP stack that is sharing the OSA-Express, OSA-Express sends the IP packet directly to the other IP stack. The packet does not have to be placed out on the LAN, which is termed LPAR-to-LPAR traffic. Checksum offload is enhanced to support the LPAR-to-LPAR traffic, which was not originally available.
Checksum offload is supported by the GbE features, which include FC 0404, FC 0405,
FC 0413, and FC 0414. It also is supported by the 1000BASE-T Ethernet features, including FC 0408 and FC 0417, when it is operating at 1000 Mbps (1 Gbps). Checksum offload is applicable to the QDIO mode only (channel type OSD).
z/OS support for checksum offload is available in all in-service z/OS releases, and in all supported Linux on z Systems distributions.
7.3.66 Adapter interruptions for QDIO
Linux on z Systems and z/VM work together to provide performance improvements by using extensions to the QDIO architecture. Adapter interruptions, first added to z/Architecture with HiperSockets, provide an efficient, high-performance technique for I/O interruptions to reduce path lengths and processor usage. These reductions are in both the host operating system and the adapter (OSA-Express5S and OSA-Express4S when using CHPID type OSD).
In extending the use of adapter interruptions to OSD (QDIO) channels, the processor utilization to handle a traditional I/O interruption is reduced. This benefits OSA-Express TCP/IP support in z/VM, z/VSE, and Linux on z Systems.
Adapter interruptions apply to all of the OSA-Express5S and OSA-Express4S features on z13 when in QDIO mode (CHPID type OSD).
7.3.67 OSA Dynamic LAN idle
The OSA Dynamic LAN idle parameter change helps reduce latency and improve performance by dynamically adjusting the inbound blocking algorithm. System administrators can authorize the TCP/IP stack to enable a dynamic setting that previously was static.
For latency-sensitive applications, the blocking algorithm is modified to be latency-sensitive. For streaming (throughput-sensitive) applications, the blocking algorithm is adjusted to maximize throughput. In all cases, the TCP/IP stack determines the best setting based on the current system and environmental conditions, such as inbound workload volume, processor utilization, and traffic patterns. It can then dynamically update the settings. OSA-Express5S and OSA-Express4S features adapt to the changes, avoiding thrashing and frequent updates to the OAT. Based on the TCP/IP settings, OSA holds the packets before presenting them to the host. A dynamic setting is designed to avoid or minimize host interrupts.
OSA Dynamic LAN idle is supported by the OSA-Express5S, and OSA-Express4S features on z13 when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and higher releases.
7.3.68 OSA Layer 3 virtual MAC for z/OS environments
To help simplify the infrastructure and facilitate load balancing when an LPAR is sharing an OSA MAC address with another LPAR, each operating system instance can have its own unique logical or virtual MAC (VMAC) address. All IP addresses that are associated with a TCP/IP stack are accessible by using their own VMAC address instead of sharing the MAC address of an OSA port. This situation also applies to Layer 3 mode and to an OSA port spanned among channel subsystems.
OSA Layer 3 VMAC is supported by the OSA-Express5S and OSA-Express4S features on z13 when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and later.
7.3.69 QDIO Diagnostic Synchronization
QDIO Diagnostic Synchronization enables system programmers and network administrators to coordinate and simultaneously capture both software and hardware traces. It allows z/OS to signal OSA-Express5S and OSA-Express4S features (by using a diagnostic assist function) to stop traces and capture the current trace records.
QDIO Diagnostic Synchronization is supported by the OSA-Express5S and OSA-Express4S features on z13 when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and later.
7.3.70 Network Traffic Analyzer
The z13 offers systems programmers and network administrators the ability to more easily solve network problems despite high traffic. With the OSA-Express Network Traffic Analyzer and QDIO Diagnostic Synchronization on the server, you can capture trace and trap data. This data can then be forwarded to z/OS tools for easier problem determination and resolution.
The Network Traffic Analyzer is supported by the OSA-Express5S and OSA-Express4S features on z13 when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and later.
7.3.71 Program-directed re-IPL
First available on System z9, program directed re-IPL allows an operating system on a z13 to re-IPL without operator intervention. This function is supported for both SCSI and IBM extended count key data (IBM ECKD™) devices. Table 7-52 lists the minimum support requirements for program directed re-IPL.
Table 7-52 Minimum support requirements for program directed re-IPL
Operating system
Support requirements
z/VM
z/VM V6R2
Linux on z Systems
SUSE SLES 12
SUSE SLES 11
Red Hat RHEL 7
Red Hat RHEL 6
z/VSE
V5R1 on SCSI disks
7.3.72 Coupling over InfiniBand and Integrated Coupling Adapter
InfiniBand (IFB) and Integrated Coupling Adapter (ICA) using PCIe Gen3 technology can potentially provide high-speed interconnection at short distances, longer distance fiber optic interconnection, and interconnection between partitions on the same system without external cabling. Several areas of this book address InfiniBand and PCIe Gen3 characteristics and support. For more information, see 4.9, “Parallel Sysplex connectivity” on page 168.
Integrated Coupling Adapter
Support for PCIe Gen3 fanout (also known as Integrated Coupling Adapter Short Range (ICA SR)) that supports a maximum distance of 150 meters (492 feet) is listed in Table 7-53.
Table 7-53 Minimum support requirements for coupling links over the Integrated Coupling Adapter (ICA SR)
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2 (dynamic I/O support for ICA CHPIDs only; coupling over ICA is not supported for guest use.)
z/TPF
z/TPF V1R1
InfiniBand coupling links
Support for HCA3-O (12xIFB) fanout that supports InfiniBand coupling links 12x at a maximum distance of 150 meters (492 feet) is listed in Table 7-54.
Table 7-54 Minimum support requirements for coupling links over InfiniBand
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2 (dynamic I/O support for InfiniBand CHPIDs only; coupling over InfiniBand is not supported for guest use.)
z/TPF
z/TPF V1R1
InfiniBand coupling links at an unrepeated distance of 10 km (6.2 miles)
Support for HCA3-O LR (1xIFB) fanout that supports InfiniBand coupling links 1x at an unrepeated distance of 10 km (6.2 miles) is listed in Table 7-55.
Table 7-55 Minimum support requirements for coupling links over InfiniBand at 10 km (6.2 miles)
Operating system
Support requirements
z/OS
z/OS V1R12
z/VM
z/VM V6R2 (dynamic I/O support for InfiniBand CHPIDs only; coupling over InfiniBand is not supported for guest use.)
7.3.73 Dynamic I/O support for InfiniBand and ICA CHPIDs
This function refers exclusively to the z/VM dynamic I/O support of InfiniBand and ICA coupling links. Support is available for the CIB and CS5 CHPID types in the z/VM dynamic commands, including the change channel path dynamic I/O command. Specifying and changing the system name when entering and leaving configuration mode also are supported. z/VM does not use InfiniBand or ICA, and does not support the use of InfiniBand or ICA coupling links by guests.
Table 7-56 lists the minimum support requirements for dynamic I/O support for InfiniBand CHPIDs.
Table 7-56 Minimum support requirements for dynamic I/O support for InfiniBand CHPIDs
Operating system
Support requirements
z/VM
z/VM V6R2
7.3.74 Simultaneous multithreading (SMT)
SMT is the hardware capability to process up to two simultaneous threads in a single core, sharing the resources of the superscalar core. This capability improves the system capacity and the efficiency in the usage of the processor, increasing the overall throughput of the system.
SMT is supported only by zIIP and IFL speciality engines on z13 and must be used by the operating system. An operating system with SMT support can be configured to dispatch work to a thread on a zIIP (for eligible workloads in z/OS) or an IFL (for z/VM) core in single-thread or SMT mode. For more information, see 7.9, “Simultaneous multithreading (SMT)” on page 290.
Table 7-57 lists the minimum support requirements for SMT.
Table 7-57 Minimum support requirements for SMT
Operating system
Support requirements
z/OS
z/OS V2R1 with APARs
z/VM
z/VM V6R3
 
Statement of Direction1: IBM is working with its Linux distribution partners to include SMT support in future distribution releases.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
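For illustration only (parmlib member suffixes and the thread count are examples, and z/VM operand details depend on the service level), SMT is enabled broadly as follows: z/OS must be IPLed with a core-oriented processor view, zIIP multithreading is then activated through IEAOPTxx, and z/VM enables multithreading in its system configuration file:
In SYS1.PARMLIB(LOADxx):
PROCVIEW CORE
In SYS1.PARMLIB(IEAOPTxx):
MT_ZIIP_MODE=2
In the z/VM SYSTEM CONFIG file:
MULTITHREADING ENABLE TYPE IFL 2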
7.3.75 Single Instruction Multiple Data (SIMD)
The SIMD feature introduces a new set of instructions with z13 to enable parallel computing that can accelerate code with string, character, integer, and floating point data types. The SIMD instructions allow a larger number of operands to be processed with a single complex instruction. For more information, see 3.4.2, “Single-instruction multiple-data (SIMD)” on page 90.
Table 7-58 lists the minimum support requirements for SIMD.
Table 7-58 Minimum support requirements for SIMD
Operating system
Support requirements
z/OS
z/OS V2R1 with Small Programming Enhancement (SPE)
 
Statement of Direction1: IBM is working with its Linux distribution partners to include SIMD support in future distribution releases.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
7.4 Cryptographic support
IBM z13 provides two major groups of cryptographic functions:
Synchronous cryptographic functions, which are provided by CPACF
Asynchronous cryptographic functions, which are provided by the Crypto Express5S feature
The minimum software support levels are listed in the following sections. Obtain and review the current PSP buckets to ensure that the latest support levels are known and included as part of the implementation plan.
7.4.1 CP Assist for Cryptographic Function
In z13, CPACF supports the following encryption types:
The Advanced Encryption Standard (AES, symmetric encryption)
The Data Encryption Standard (DES, symmetric encryption)
The Secure Hash Algorithm (SHA, hashing)
Table 7-59 lists the support requirements for CPACF at z13.
Table 7-59 Support requirements for CPACF
Operating system
Support requirements
z/OS1
z/OS V1R12 and later with the Cryptographic Support for web deliverable that is specific for the operating system level
z/VM
z/VM V6R2 and later with PTFs: supported for guest use
z/VSE
z/VSE V5R1 and later with PTFs
z/TPF
z/TPF V1R1
Linux on z Systems
SUSE SLES 12
Red Hat RHEL 7

1 CPACF also is used by several IBM software product offerings for z/OS, such as IBM WebSphere Application Server for z/OS.
7.4.2 Crypto Express5S
Support of Crypto Express5S functions varies by operating system and release. Table 7-60 lists the minimum software requirements for the Crypto Express5S features when configured as a coprocessor or an accelerator. For more information, see 6.7, “Crypto Express5S” on page 210.
Table 7-60 Crypto Express5S support on z13
Operating system
Crypto Express5S
z/OS
z/OS V2R1, z/OS V1R13, or z/OS V1R12 with the specific web deliverable
z/VM
For guest use: z/VM V6R3 and z/VM V6R2
z/VSE
z/VSE V5R1 and later with PTFs
z/TPF V1R1
Service required (accelerator mode only)
Linux on z Systems
IBM is working with its Linux distribution partners to include support in future Linux on z Systems distribution releases
7.4.3 Web deliverables
For web-deliverable code on z/OS, see the z/OS downloads website:
For Linux on z Systems, support is delivered through IBM and the distribution partners. For more information, see Linux on z Systems on the IBM developerWorks® website:
7.4.4 z/OS Integrated Cryptographic Service Facility (ICSF) FMIDs
ICSF is a base component of z/OS. It is designed to use transparently the available cryptographic functions, whether CPACF or Crypto Express, to balance the workload and help address the bandwidth requirements of the applications.
Despite being a z/OS base component, ICSF functions are generally made available through web deliverable support a few months after a new z/OS release. Therefore, new functions must be related to an ICSF FMID instead of a z/OS version.
For a list of ICSF versions and FMID cross-references, see the Technical Documents website:
Table 7-61 lists the ICSF function modification identifiers (FMIDs) and web-deliverable codes for z/OS V1R10 through z/OS V2R1. Later FMIDs include the functions of previous ones.
Table 7-61 z/OS ICSF FMIDs
ICSF
FMID
z/OS
Web deliverable name
Supported function
HCR7750
V1R10
Included as a base element of z/OS V1R10
CPACF AES-192 and AES-256
CPACF SHA-224, SHA-384, and SHA-512
4096-bit RSA keys
ISO-3 PIN block format
HCR7751
V1R11, V1R10
Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8a
Included as a base element of z/OS V1R11
IBM System z10 BC support
Secure key AES
Keystore policy
Public Key Data Set (PKDS) Sysplex-wide consistency
In-storage copy of the PKDS
13-digit through 19-digit PANs
Crypto Query service
Enhanced SAF checking
HCR7770
V1R12, V1R11, V1R10
Cryptographic Support for z/OS V1R9-V1R11
Included as a base element of z/OS V1R12
Crypto Express3 and Crypto Express3-1P support
PKA Key Management Extensions
CPACF Protected Key
Extended PKCS #11
ICSF Restructure (Performance, RAS, and ICSF-CICS Attach Facility)
HCR7780
V1R13, V1R12, V1R11, V1R10
Cryptographic Support for z/OS V1R10-V1R12
Included as a base element of z/OS V1R13
IBM zEnterprise 196 support
Elliptic Curve Cryptography
Message-Security-Assist-4
HMAC Support
ANSI X9.8 PIN
ANSI X9.24 (CBC Key Wrapping)
CKDS constraint relief
PCI Audit
All callable services AMODE(64)
PKA RSA OAEP with SHA-256 algorithm1
HCR7790
V1R13, V1R12, V1R11
Cryptographic Support for z/OS V1R11-V1R13
Expanded key support for AES algorithm
Enhanced ANSI TR-31
PIN block decimalization table protection
Elliptic Curve Diffie-Hellman (ECDH) algorithm
RSA in the Modulus-Exponent (ME) and Chinese Remainder Theorem (CRT) formats
HCR77A1
V2R1
V1R13, V1R12
Cryptographic Support for z/OS V1R13-V2R1
AP Configuration Simplification
KDS Key Utilization Statistics
Dynamic SSM
UDX Reduction and Simplification
Europay-Mastercard-Visa (EMV) Enhancements
Key wrapping and other security enhancements
OWH/RNG Authorization Access
SAF ACEE Selection
Non-SAF Protected IQF
RKX Key Export Wrapping
AES MAC Enhancements
PKCS #11 Enhancements
Improved CTRACE Support
HCR77B0
V2R1
V1R13
Cryptographic Support for z/OS V1R13-z/OS V2R1 web deliverable
85 domains support
VISA format preserving encryption (FPE)

1 Service is required.
7.4.5 ICSF migration considerations
If you have installed the Cryptographic Support for z/OS V1R13 – z/OS V2R1 web deliverable (FMID HCR77A1), consider the following points regarding the change in the way master keys are processed to determine which coprocessors become active:
The FMID HCR77A1 ICSF level is not integrated in z/OS V2R1 and needs to be downloaded and installed even after ordering a z/OS V2R1 ServerPac. The Cryptographic web deliverable is available at the following website:
Starting with FMID HCR77A1, the activation procedure now uses the master key verification patterns (MKVP) in the header record of the Cryptographic Key Data Set (CKDS) and PKDS to determine which coprocessors become active.
You can use the IBM Health Checker check ICSFMIG77A1_UNSUPPORTED_HW to determine whether your current server can support HCR77A1. The migration check is available for HCR7770, HCR7780, HCR7790, and HCR77A0 through APAR OA42011.
All systems in a sysplex that share a PKDS/TKDS must be at HCR77A1 to use the PKDS/TKDS Coordinated Administration support.
For more information, see Migration from z/OS V1R13 and z/OS V1R12 to z/OS V2R1, GA32-0889.
7.5 GDPS Virtual Appliance
 
Statement of Direction1: In the first half of 2015, IBM intends to deliver a GDPS/PPRC multiplatform resiliency capability for customers who do not run the z/OS operating system in their environment. This solution is intended to provide IBM z Systems customers who run z/VM and their associated guests, for instance, Linux on z Systems, with similar high availability and disaster recovery benefits to those who run on z/OS. This solution will be applicable for any IBM z Systems announced after and including the zBC12 and zEC12.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
The GDPS Virtual Appliance solution implements GDPS/PPRC Multiplatform Resilience for z Systems, also known as xDR. xDR provides coordinated near-continuous availability and disaster recovery through the following features:
Disk error detection
Heartbeat for smoke tests
Re-IPL in place
Coordinated site takeover
Coordinated HyperSwap
Single point of control
The GDPS Virtual Appliance requires z/VM Version 5 Release 4 or later. However, because z/VM V5R4 is not supported on z13, z/VM V6R2 or later is required on z13.
7.6 z/OS migration considerations
Except for base processor support, z/OS software changes do not require any of the functions that are introduced with the z13. Likewise, the new functions do not require exploitation software to be present. Where applicable, this approach allows z/OS to automatically enable a function based on the presence or absence of the required hardware and software.
7.6.1 General guidelines
The IBM z13 introduces the latest z Systems technology. Although support is provided by z/OS starting with z/OS V1R12, use of z13 depends on the z/OS release. z/OS.e is not supported on z13.
In general, consider the following guidelines:
Do not change software releases and hardware at the same time.
Keep members of the sysplex at the same software level, except during brief migration periods.
Migrate to an STP-only network before introducing a z13 into a sysplex.
Review z13 restrictions and migration considerations before creating an upgrade plan.
7.6.2 Hardware configuration definition
On z/OS V1R12 and later, the hardware configuration definition (HCD) function or the Hardware Configuration Manager (HCM) can be used to define a configuration for the z13.
7.6.3 Coupling links
Each system can use, or not use, internal coupling links, InfiniBand coupling links, or ICA coupling links independently of what other systems are using. z13 does not support participating in a Parallel Sysplex with System z10 and earlier systems.
Coupling connectivity is available only when other systems also support the same type of coupling. When you plan to use the InfiniBand coupling or ICA coupling links technology, consult the Coupling Facility Configuration Options white paper, which is available at the following website:
7.6.4 Large page support
The large page support function must not be enabled without the respective software support. If large page support is not specified, page frames are allocated at the standard size of 4 KB.
In z/OS V1R9 and later, the amount of memory to be reserved for large page support is defined by using the LFAREA parameter in the IEASYSxx member of SYS1.PARMLIB:
LFAREA=xx%|xxxxxxM|xxxxxxG
The parameter indicates the amount of storage as a percentage of real storage, or in megabytes or gigabytes. The value cannot be changed dynamically.
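For illustration only, either of the following hypothetical IEASYSxx specifications reserves real storage for large pages; the values 2G and 5% are assumptions for this example and must be sized for the actual workload and installed memory:

   LFAREA=2G
   LFAREA=5%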
7.6.5 Capacity Provisioning Manager
The installation of the capacity provision function on z/OS requires the following prerequisites:
Setting up and customizing z/OS RMF, including the Distributed Data Server (DDS)
Setting up the z/OS CIM Server (included in z/OS base)
Performing capacity provisioning customization, as described in z/OS MVS Capacity Provisioning User’s Guide, SA33-8299
Using the capacity provisioning function requires these prerequisites:
TCP/IP connectivity to observed systems.
RMF Distributed Data Server must be active.
CIM server must be active.
Security and CIM customization.
Capacity Provisioning Manager customization.
In addition, the Capacity Provisioning Control Center must be downloaded from the host and installed on a PC server. This application is used only to define policies. It is not required for regular operation.
Customization of the capacity provisioning function is required on the following systems:
Observed z/OS systems. These are the systems in one or multiple sysplexes that are to be monitored. For more information about the capacity provisioning domain, see 8.8, “Nondisruptive upgrades” on page 346.
Runtime systems. These are the systems where the Capacity Provisioning Manager is running, or to which the server can fail over after server or system failures.
7.6.6 Decimal floating point and z/OS XL C/C++ considerations
z/OS V1R13 or later with PTFs is required to use the latest level (11) of the following two C/C++ compiler options:
ARCHITECTURE: This option selects the minimum level of system architecture on which the program can run. Certain features that are provided by the compiler require a minimum architecture level. ARCH(11) uses instructions that are available on the z13.
TUNE: This option allows optimization of the application for a specific system architecture, within the constraints that are imposed by the ARCHITECTURE option. The TUNE level must not be lower than the setting in the ARCHITECTURE option.
For more information about the ARCHITECTURE and TUNE compiler options, see the z/OS V1R13.0 XL C/C++ User’s Guide, SC09-4767.
 
Important: Use the previous z Systems ARCHITECTURE or TUNE options for C/C++ programs if the same applications run on both the z13 and on previous z Systems servers. However, if C/C++ applications run only on z13 servers, use the latest ARCHITECTURE and TUNE options to ensure that the best performance possible is delivered through the latest instruction set additions.
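As an illustration only, a program that is intended to run only on z13 servers might be compiled under z/OS UNIX System Services with the c89 utility, passing the compiler options through the -Wc flag. The file names here are hypothetical, and lower ARCH and TUNE values should be used if the module must also run on earlier z Systems servers:

   c89 -Wc,"ARCH(11),TUNE(11)" -o app app.c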
For more information, see Migration from z/OS V1R13 and z/OS V1R12 to z/OS V2R1, GA32-0889.
7.7 IBM z Advanced Workload Analysis Reporter (zAware)
IBM zAware is designed to offer a real-time, continuous learning, diagnostic, and monitoring capability. This capability is intended to help you pinpoint and resolve potential problems quickly enough to minimize impacts to your business. IBM zAware runs analytics in firmware and intelligently examines the message logs for potential deviations, inconsistencies, or variations from the norm. Many z/OS environments produce such a large volume of OPERLOG messages that it is difficult for operations personnel to analyze them easily. IBM zAware provides a simple GUI for easy identification of message anomalies, which can facilitate faster problem resolution.
IBM zAware is enhanced as part of z13 general availability to process messages without message IDs. This enhancement makes it possible to analyze the health of Linux operating systems by analyzing the Linux syslog. Linux on z Systems running natively and as a guest on z/VM is supported.
IBM zAware is ordered through specific features of z13, and requires z/OS V1R13 or later with IBM zAware exploitation support to collect specific log stream data. It requires a correctly configured LPAR. For more information, see “The zAware-mode logical partition” on page 242.
 
Statement of Direction1: IBM intends to deliver IBM zAware support for z/VM. This future release of IBM zAware is intended to help identify unusual behaviors of workloads running on z/VM to accelerate problem determination and improve service levels.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
To use the IBM zAware feature, complete the following tasks in z/OS:
For each z/OS that is to be monitored through the IBM zAware client, configure a network connection in the TCP/IP profile. If necessary, update the firewall settings.
Verify that each z/OS system meets the sysplex configuration and OPERLOG requirements for monitored clients of the IBM zAware virtual appliance.
Configure the z/OS system logger to send data to the IBM zAware virtual appliance server, as outlined in the sketch after this list.
Prime the IBM zAware server with prior data from monitored clients.
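The following fragment is a hedged sketch only; the host name, port number, and log stream update are assumptions for illustration, and the exact keywords should be verified in the z/OS system logger documentation for the installed release. It shows the general shape of identifying the IBM zAware server to the system logger through an IXGCNFxx parmlib member and marking the OPERLOG log stream as a data source:

   IXGCNFxx statement (sketch, hypothetical host name and port):
      ZAI SERVER(zaware.example.com) PORT(2001)

   Log stream attribute (sketch) that allows OPERLOG data to be sent to the IBM zAware server:
      ZAI(YES)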
7.8 Coupling facility and CFCC considerations
Coupling facility connectivity to a z13 is supported on the zEC12, zBC12, z196, z114, or another z13. The LPAR running the CFCC can be on any of the previously listed supported systems. For more information about CFCC requirements for supported systems, see Table 7-62 on page 288.
 
Consideration: Because coupling link connectivity to System z10 and previous systems is not supported, introduction of z13 into existing installations might require more planning. Also, consider the level of CFCC. For more information, see “Coupling link considerations” on page 175.
CFCC Level 20
CFCC level 20 is delivered on the z13 with driver level 22. CFCC Level 20 introduces the following enhancements:
Support for up to 141 ICF processors. The maximum number of logical processors in a Coupling Facility Partition remains at 16.
Large memory support:
 – Improve availability/scalability for larger CF cache structures and data sharing performance with larger DB2 group buffer pools (GBPs).
 – This support removes inhibitors to using large CF structures, enabling use of Large Memory to scale to larger DB2 local buffer pools (LBPs) and GBPs in data sharing environments.
 – CF structure size remains at a maximum of 1 TB.
Support for the new IBM Integrated Coupling Adapter (ICA)
 
Statement of Direction1: At the time of writing, IBM plans to support up to 256 coupling CHPIDs on z13, that is, twice the 128 coupling CHPIDs that are supported on zEC12. Each CF image continues to support a maximum of 128 coupling CHPIDs.

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
z13 systems with CFCC Level 20 require z/OS V1R12 or later, and z/VM V6R2 or later for guest virtual coupling.
To support an upgrade from one CFCC level to the next, different levels of CFCC can be run concurrently while the coupling facility LPARs are running on different servers. CF LPARs that run on the same server share the CFCC level. The CFCC level for z13 servers is CFCC Level 20, as shown in Table 7-62.
z13 (CFCC level 20) can coexist in a sysplex with CFCC levels 17 and 19.
Table 7-62 z Systems CFCC code-level considerations
z Systems
Code level
z13
CFCC Level 20
zEC12
CFCC Level 181 or CFCC Level 19
zBC12
CFCC Level 19
z196 and z114
CFCC Level 17
z10 EC or z10 BC
CFCC Level 15a or CFCC Level 16a

1 This CFCC level cannot coexist in the same sysplex with CFCC level 20.
For more information about CFCC code levels, see the Parallel Sysplex website at:
For the latest CFCC code levels, see the current exception letter that is published on Resource Link at the following website:
CF structure sizing changes are expected when upgrading from a previous CFCC Level to CFCC Level 20. Review the CF LPAR size by using the available CFSizer tool, found at the following website:
Sizer Utility, an authorized z/OS program download, is useful when you are upgrading a CF. It is available at the following web page:
Before the migration, install the compatibility/coexistence PTFs. A planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 20.
Flash Express exploitation by CFCC
CFCC Level 19 supports Flash Express. Initial CF Flash Express exploitation is targeted for WebSphere MQ shared queue application structures. It is designed to help improve resilience and provide cost-effective standby capacity to help manage the potential overflow of WebSphere MQ shared queues. Structures now can be allocated with a combination of real memory and SCM that is provided by the Flash Express feature.
Flash memory in the CPC is assigned to a CF partition through hardware definition panels, which is like how it is assigned to the z/OS partitions. The CFRM policy definition allows you to specify, on a structure-by-structure basis, the maximum amount of flash memory to be used by a particular structure.
 
Important: Flash memory is not pre-assigned to structures at allocation time.
Structure size requirements for real memory increase at initial allocation time to accommodate the additional control objects that are needed to use flash memory.
The CFSIZER structure recommendations consider these additional requirements, both for sizing the structure’s flash usage and for the related real memory considerations.
Here are the current CFCC Flash Express exploitation requirements:
CFCC Level 19 support
z/OS support for z/OS V1R13 with PTFs and z/OS V2R1 with PTFs
WebSphere MQ Version 7 is required.
CFCC Coupling Thin Interrupts
The Coupling Thin Interrupts enhancement, which is delivered with CFCC Level 19, improves the performance of a coupling facility partition and improves the dispatching of z/OS LPARs that are awaiting the arrival of returned asynchronous CF requests when a shared engine environment is used.
7.9 Simultaneous multithreading (SMT)
The z13 can run up to two threads simultaneously on the same processor core, sharing all resources of the core, which provides better utilization of the cores and more processing capacity. This function is known as SMT, and it is available only on zIIP and IFL cores. For more information about SMT, see 3.4.1, “Simultaneous multithreading (SMT)” on page 89.
z/OS and z/VM support SMT when the required PTFs are applied and the support is enabled.
The following APARs must be applied to z/OS V2R1 to use SMT11:
OA43366 (BCP)
OA43622 (WLM)
OA44439 (XCF)
The use of SMT on z/OS V2R1 requires enabling HiperDispatch, and defining the processor view (PROCVIEW) control statement in the LOADxx parmlib member and the MT_ZIIP_MODE parameter in the IEAOPTxx parmlib member.
The PROCVIEW statement is defined for the life of IPL and can have the following values:
CORE: This value specifies that z/OS should configure a processor view of core, where a core can have one or more threads. The number of threads is limited by z13 to two threads. If the underlying hardware does not support SMT, a core is limited to one thread.
CPU: This value is the default. It specifies that z/OS should configure a traditional processor view of CPU and not use SMT.
CORE,CPU_OK: This value specifies that z/OS should configure a processor view of core, as with the CORE value, but the CPU parameter is accepted as an alias for applicable commands.
When PROCVIEW CORE or CORE,CPU_OK is specified in z/OS running on z13, HiperDispatch is forced to run as enabled, and you cannot disable HiperDispatch. The PROCVIEW statement cannot be changed dynamically, so you must run an IPL after changing it to make the new setting effective.
The MT_ZIIP_MODE parameter in the IEAOPTxx member controls the zIIP SMT mode and can be 1 (the default), where only one thread can be running on a core, or 2, where up to two threads can be running on a core. If PROCVIEW CPU is specified, MT_ZIIP_MODE is always 1. Otherwise, the use of SMT to dispatch two threads on a single zIIP logical processor (MT_ZIIP_MODE=2) can be changed dynamically by using the SET OPT=xx command to activate a different IEAOPTxx parmlib member. Changing the MT mode for all cores can take some time to complete.
The activation of SMT mode also requires that the option “Do not end the time slice if a partition enters a wait state” in the HMC Customize/Delete Activation Profiles task not be selected. This is the recommended default setting.
With PROCVIEW CORE, use the DISPLAY M=CORE and CONFIG CORE commands to display the core states and to configure an entire core.
Figure 7-1 shows the result of the display core command with processor view core and SMT enabled.
Figure 7-1 Result of the display core command
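The following fragment is a minimal sketch of the statements and commands that are described in this section; the IEAOPTxx member suffix (01) is an assumption for illustration:

   LOADxx statement (sketch) to enable the core processor view:
      PROCVIEW CORE,CPU_OK

   IEAOPTxx statement (sketch) to allow two threads per zIIP core:
      MT_ZIIP_MODE=2

   Operator commands to activate the IEAOPT01 member dynamically and to display the core states:
      SET OPT=01
      DISPLAY M=CORE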
The use of SMT in z/VM V6R312 requires the application of a small programming enhancement (SPE) that introduces a new MULTITHREADING statement in the system configuration file. It also requires that HiperDispatch be enabled and that the dispatcher work distribution mode be set to reshuffle.
In z/VM, multithreading is disabled by default. There is no dynamic switching of the SMT mode: It can be enabled or disabled only by setting the MULTITHREADING statement in the system configuration file. After the statement is changed, you must run an IPL of the partition for the change to take effect.
z/VM supports up to 32 multithreaded cores (64 threads) for IFLs, and each thread is treated as an independent processor. z/VM dispatches virtual IFLs on the IFL logical processors so that the same or different guests can share a core. There is a single dispatch vector per core, and z/VM attempts to place virtual sibling IFLs on the same dispatch vector to maximize cache reuse. The guests have no awareness of SMT, and they cannot use it.
z/VM SMT exploitation does not include guest support for multithreading. The value of this support for guests is that the first-level z/VM host under which the guests run can achieve higher throughput from the multithreaded IFL cores.
IBM is working with its Linux distribution partners to support SMT.
An operating system that uses SMT controls each core and is responsible for maximizing its throughput and meeting workload goals with the smallest number of cores. In z/OS, HiperDispatch cache optimization is taken into account when the two threads to be dispatched on the same core are chosen. HiperDispatch attempts to dispatch a guest virtual CPU on the same logical processor on which it ran previously. PR/SM attempts to dispatch a vertical low logical processor on the same physical processor or, if that is not possible, in the same node or, failing that, in the same CPC drawer where it was dispatched before, to maximize cache reuse.
From the point of view of an application, SMT is transparent and no changes are required in the application for it to run in an SMT environment, as shown in Figure 7-2.
Figure 7-2 Simultaneous Multithreading
7.10 Single-instruction multiple-data (SIMD)
The z13 is equipped with a new set of instructions that improves the performance of complex mathematical models and analytic workloads through vector processing and new complex instructions, which can process a large amount of data with a single instruction.
This new set of instructions, which is known as SIMD, enables more consolidation of analytic workloads and business transactions on z Systems.
z/OS V2R1 has support for SIMD through an SPE. The z/VM and Linux on z Systems SIMD support will be delivered after z13 GA1.
OS support includes the following items:
Enablement of vector registers.
Use of vector registers using XL C/C++ ARCH(11) and TUNE(11).
A math library with an optimized and tuned math function (Mathematical Acceleration Subsystem (MASS)) that can be used in place of some of the C standard math functions. It has a SIMD vectorized and non-vectorized version.
A specialized math library, which is known as Automatically Tuned Linear Algebra Software (ATLAS), which is optimized for the hardware.
IBM Language Environment® for C runtime function enablement for ATLAS.
DBX to support the disassembly of the new vector instructions, and to display and set vector registers.
XML SS exploitation to use new vector processing instructions to improve performance.
MASS and ATLAS can reduce the time and effort for middleware and application developers. IBM provides compiler built-in functions for SIMD that software applications can use as needed, for example, to use string instructions.
The use of new hardware instructions through XL C/C++ ARCH(11) and TUNE(11) or SIMD usage by MASS and ATLAS libraries requires the z13 support for z/OS V2R1 XL C/C++ web deliverable.
The following compilers have built-in functions for SIMD:
IBM Java
XL C/C++
Enterprise COBOL
Enterprise PL/I
Code must be developed to take advantage of the SIMD functions, and applications with SIMD instructions abend if they run on a lower hardware level system. Some mathematical function replacement can be done without code changes by including the scalar MASS library before the standard math library.
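As a hedged illustration that is not taken from product documentation, the following C fragment shows the kind of simple, dependence-free loop that a compilation with ARCH(11) and TUNE(11) may be able to vectorize automatically; whether vectorization actually occurs depends on the compiler level and options in use:

#include <stdio.h>

#define N 1024

/* Element-wise addition of two arrays: a regular loop of this form  */
/* is a typical candidate for automatic use of the z13 vector (SIMD) */
/* instructions when the compiler targets ARCH(11).                  */
static void vadd(const double *x, const double *y, double *z, int n)
{
    for (int i = 0; i < n; i++) {
        z[i] = x[i] + y[i];
    }
}

int main(void)
{
    double x[N], y[N], z[N];

    for (int i = 0; i < N; i++) {
        x[i] = i;
        y[i] = 2.0 * i;
    }

    vadd(x, y, z, N);
    printf("z[10] = %f\n", z[10]);   /* expected: 30.000000 */
    return 0;
}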
The MASS and standard math libraries have different accuracies, so it is necessary to assess the accuracy of the functions in the context of the user application when deciding whether to use the MASS and ATLAS libraries.
The SIMD functions can be disabled in z/OS partitions at IPL time by using the MACHMIG parameter in the LOADxx member. To disable the use of SIMD, specify MACHMIG VEF, where VEF refers to the vector facility. If you do not specify a MACHMIG statement, which is the default, the system is unlimited in its use of the Vector Facility for z/Architecture (SIMD).
 
Statement of Direction1: In a future deliverable, IBM intends to deliver support to enable z/VM guests to use the Vector Facility for z/Architecture (SIMD).

1 All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
7.11 The MIDAW facility
The MIDAW facility is a system architecture and software exploitation that is designed to improve FICON performance. This facility was first made available on System z9 servers, and is used by the Media Manager in z/OS.
The MIDAW facility provides a more efficient CCW/IDAW structure for certain categories of data-chaining I/O operations:
MIDAW can improve FICON performance for extended format data sets. Non-extended data sets can also benefit from MIDAW.
MIDAW can improve channel utilization, and can improve I/O response time. It reduces FICON channel connect time, director ports, and control unit processor usage.
IBM laboratory tests indicate that applications that use EF data sets, such as DB2, or long chains of small blocks can gain significant performance benefits by using the MIDAW facility.
MIDAW is supported on FICON channels that are configured as CHPID type FC.
7.11.1 Modified Indirect Data Address Word (MIDAW) technical description
An IDAW is used to specify data addresses for I/O operations in a virtual environment.13 The existing IDAW design allows the first IDAW in a list to point to any address within a page. Subsequent IDAWs in the same list must point to the first byte in a page. Also, IDAWs (except the first and last IDAW) in a list must deal with complete 2 K or 4 K units of data.
Figure 7-3 shows a single CCW that controls the transfer of data that spans non-contiguous 4 K frames in main storage. When the IDAW flag is set, the data address in the CCW points to a list of words (IDAWs). Each IDAW contains an address that designates a data area within real storage.
Figure 7-3 IDAW usage
The number of required IDAWs for a CCW is determined by these factors:
The IDAW format as specified in the operation request block (ORB)
The count field of the CCW
The data address in the initial IDAW
For example, three IDAWs are required when the following events occur (the sketch after this list computes this case):
The ORB specifies format-2 IDAWs with 4 KB blocks.
The CCW count field specifies 8 KB.
The first IDAW designates a location in the middle of a 4 KB block.
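The following C sketch is illustrative only and is not part of any z/OS interface; it computes the number of format-2 IDAWs that the preceding rules imply for a given CCW count and starting data address. The 4 KB block size and the sample values are assumptions based on the example above:

#include <stdio.h>

#define BLOCK 4096u   /* format-2 IDAWs with 4 KB blocks (assumed) */

/* The first IDAW covers from the starting address to the end of its */
/* 4 KB block; each additional 4 KB (or fraction) of data needs one  */
/* more IDAW.                                                        */
static unsigned idaw_count(unsigned long start_addr, unsigned count)
{
    unsigned first = BLOCK - (unsigned)(start_addr % BLOCK);
    if (count <= first) {
        return 1;
    }
    unsigned rest = count - first;
    return 1 + (rest + BLOCK - 1) / BLOCK;   /* round the remainder up */
}

int main(void)
{
    /* A CCW count of 8 KB starting in the middle of a 4 KB block:   */
    /* 2 KB + 4 KB + 2 KB, so three IDAWs, as in the example above.  */
    printf("%u\n", idaw_count(0x2800, 8192));   /* prints 3 */
    return 0;
}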
CCWs with data chaining can be used to process I/O data blocks that have a more complex internal structure, in which portions of the data block are directed into separate buffer areas. This process is sometimes known as scatter-read or scatter-write. However, as technology evolves and link speed increases, data chaining techniques become less efficient because of switch fabrics, control unit processing and exchanges, and other issues.
The MIDAW facility is a method of gathering and scattering data from and into discontinuous storage locations during an I/O operation. The modified IDAW (MIDAW) format is shown in Figure 7-4. It is 16 bytes long and is aligned on a quadword.
Figure 7-4 MIDAW format
An example of MIDAW usage is shown in Figure 7-5.
Figure 7-5 MIDAW usage
The use of MIDAWs is indicated by the MIDAW bit in the CCW. If this bit is set, the skip flag cannot be set in the CCW. The skip flag in the MIDAW can be used instead. The data count in the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends when the CCW count goes to zero or the last MIDAW (with the last flag) ends. The combination of the address and count in a MIDAW cannot cross a page boundary. This means that the largest possible count is 4 K. The maximum data count of all the MIDAWs in a list cannot exceed 64 K, which is the maximum count of the associated CCW.
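As another illustrative sketch, the following C function checks the MIDAW list constraints that are described above: no entry may cross a 4 KB page boundary, only the final entry may carry the last flag, and the entry counts must sum to the CCW data count, which cannot exceed 64 KB. The structure layout is simplified and hypothetical; it is not the architected 16-byte MIDAW format:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE    4096u
#define MAX_CCW 65536u

/* Simplified stand-in for a MIDAW entry (illustration only). */
struct midaw {
    bool          last;    /* last-MIDAW flag             */
    unsigned      count;   /* data count for this entry   */
    unsigned long addr;    /* data address for this entry */
};

static bool midaw_list_valid(const struct midaw *m, size_t n, unsigned ccw_count)
{
    unsigned long total = 0;

    if (n == 0 || ccw_count == 0 || ccw_count > MAX_CCW) {
        return false;
    }
    for (size_t i = 0; i < n; i++) {
        /* address and count must stay within one 4 KB page */
        if (m[i].count == 0 || m[i].count > PAGE) {
            return false;
        }
        if ((m[i].addr / PAGE) != ((m[i].addr + m[i].count - 1) / PAGE)) {
            return false;
        }
        /* only the final entry may have the last flag set */
        if (m[i].last != (i == n - 1)) {
            return false;
        }
        total += m[i].count;
    }
    /* the MIDAW data counts must equal the CCW data count */
    return total == ccw_count;
}

int main(void)
{
    /* hypothetical 8 KB transfer scattered across three buffers */
    struct midaw list[] = {
        { false, 2048, 0x2800 },
        { false, 4096, 0x3000 },
        { true,  2048, 0x4000 },
    };
    printf("%s\n", midaw_list_valid(list, 3, 8192) ? "valid" : "invalid");
    return 0;
}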
The scatter-read or scatter-write effect of the MIDAWs makes it possible to efficiently send small control blocks that are embedded in a disk record to separate buffers from those used for larger data areas within the record. MIDAW operations are on a single I/O block, in the manner of data chaining. Do not confuse this operation with CCW command chaining.
7.11.2 Extended format (EF) data sets
z/OS extended format data sets use internal structures (usually not visible to the application program) that require a scatter-read (or scatter-write) operation. Therefore, CCW data chaining is required, which produces less than optimal I/O performance. Because the most significant performance benefit of MIDAWs is achieved with EF data sets, a brief review of the EF data sets is included here.
Both VSAM and non-VSAM (DSORG=PS) data sets can be defined as extended format data sets. For non-VSAM data sets, a 32-byte suffix is appended to the end of every physical record (that is, block) on disk. VSAM appends the suffix to the end of every control interval (CI), which normally corresponds to a physical record. A 32 KB CI is split into two records to span tracks. This suffix is used to improve data reliability and facilitates other functions that are described in the following paragraphs. Therefore, for example, if the DCB BLKSIZE or VSAM CI size is equal to 8192, the actual block on storage consists of 8224 bytes. The control unit itself does not distinguish between suffixes and user data. The suffix is transparent to the access method and database.
In addition to reliability, EF data sets enable three other functions:
DFSMS striping
Access method compression
Extended addressability (EA)
EA is useful for creating large DB2 partitions (larger than 4 GB). Striping can be used to increase sequential throughput, or to spread random I/Os across multiple logical volumes. DFSMS striping is useful for using multiple channels in parallel for one data set. The DB2 logs often are striped to optimize the performance of DB2 sequential inserts.
Processing an I/O operation to an EF data set normally requires at least two CCWs with data chaining. One CCW is used for the 32-byte suffix of the EF data set. With MIDAW, the additional CCW for the EF data set suffix is eliminated.
MIDAWs benefit both EF and non-EF data sets. For example, to read twelve 4 K records from a non-EF data set on a 3390 track, Media Manager chains 12 CCWs together by using data chaining. To read twelve 4 K records from an EF data set, 24 CCWs are chained (two CCWs per 4 K record). Using Media Manager track-level command operations and MIDAWs, an entire track can be transferred using a single CCW.
7.11.3 Performance benefits
z/OS Media Manager has I/O channel program support for implementing EF data sets, and automatically uses MIDAWs when appropriate. Most disk I/Os in the system are generated by using Media Manager.
Users of the Executing Fixed Channel Programs in Real Storage (EXCPVR) instruction can construct channel programs that contain MIDAWs. However, doing so requires that they construct an IOBE with the IOBEMIDA bit set. Users of the EXCP instruction cannot construct channel programs that contain MIDAWs.
The MIDAW facility removes the 4 K boundary restrictions of IDAWs and in the case of EF data sets, reduces the number of CCWs. Decreasing the number of CCWs helps to reduce the FICON channel processor utilization. Media Manager and MIDAWs do not cause the bits to move any faster across the FICON link. However, they reduce the number of frames and sequences that flow across the link, therefore using the channel resources more efficiently.
The MIDAW facility with FICON Express8S, operating at 8 Gbps, shows an improvement in throughput for all reads on DB2 table scan tests with EF data sets compared to the use of IDAWs with FICON Express2, operating at 2 Gbps.
The performance of a specific workload can vary according to the conditions and hardware configuration of the environment. IBM laboratory tests found that DB2 gains significant performance benefits by using the MIDAW facility in the following areas:
Table scans
Logging
Utilities
Use of DFSMS striping for DB2 data sets
Media Manager with the MIDAW facility can provide significant performance benefits when used in combination with applications that use EF data sets (such as DB2) or long chains of small blocks.
For more information about FICON and MIDAW, see the following resources:
The I/O Connectivity website contains material about FICON channel performance:
DS8000 Performance Monitoring and Tuning, SG24-7146
7.12 IOCP
All z Systems servers require a description of their I/O configuration. This description is stored in input/output configuration data set (IOCDS) files. The input/output configuration program (IOCP) allows the creation of the IOCDS file from a source file that is known as the input/output configuration source (IOCS).
The IOCS file contains detailed information for each channel and path assignment, each control unit, and each device in the configuration.
The required level of IOCP for the z13 is V5 R1 L0 (IOCP 5.1.0) or later with PTFs. For more information, see the Stand-Alone Input/Output Configuration Program User's Guide, SB10-7152-08.
7.13 Worldwide port name (WWPN) tool
Part of the installation of your z13 system is the pre-planning of the SAN environment. IBM has a stand-alone tool to assist with this planning before the installation.
The capabilities of the worldwide port name (WWPN) tool are extended to calculate and show WWPNs for both virtual and physical ports ahead of system installation.
The tool assigns WWPNs to each virtual FCP channel/port by using the same WWPN assignment algorithms that a system uses when assigning WWPNs for channels using NPIV. Therefore, the SAN can be set up in advance, allowing operations to proceed much faster after the server is installed. In addition, the SAN configuration can be retained, rather than altered, by assigning the WWPNs to the physical FCP ports when a FICON feature is replaced.
The WWPN tool takes a .csv file that contains the FCP-specific I/O device definitions and creates the WWPN assignments that are required to set up the SAN. A binary configuration file that can be imported later by the system also is created. The .csv file can either be created manually, or exported from the HCD/HCM.
The WWPN tool on z13 (CHPID type FCP) requires the following levels:
z/OS V1R12 and later
z/VM V6R2 and later
The WWPN tool is applicable to all FICON channels that are defined as CHPID type FCP (for communication with SCSI devices) on z13. It is available for download at the Resource Link at the following website:
7.14 ICKDSF
Device Support Facilities, ICKDSF, Release 17 is required on all systems that share disk subsystems with a z13 processor.
ICKDSF supports a modified format of the CPU information field that contains a two-digit LPAR identifier. ICKDSF uses the CPU information field instead of CCW reserve/release for concurrent media maintenance. It prevents multiple systems from running ICKDSF on the same volume, and at the same time allows user applications to run while ICKDSF is processing. To prevent data corruption, ICKDSF must be able to determine all sharing systems that can potentially run ICKDSF. Therefore, this support is required for z13.
 
Remember: The need for ICKDSF Release 17 applies even to systems that are not part of the same sysplex, or are running an operating system other than z/OS, such as z/VM.
7.15 IBM z BladeCenter Extension (zBX) Model 004 software support
zBX Model 004 houses two types of blades: Power platform specific, and IBM blades for Linux and Windows operating systems.
7.15.1 IBM blades
IBM offers a selected subset of IBM POWER7 blades that can be installed and operated on the zBX Model 004.
The blades are virtualized by PowerVM Enterprise Edition. Their LPARs run either AIX Version 5 Release 3 technology level (TL) 12 (IBM POWER6® mode), AIX Version 6 Release 1 TL5 (POWER7 mode), or AIX Version 7 Release 1 and subsequent releases. Applications that are supported on AIX can be deployed to blades.
Also offered are selected IBM System x HX5 blades. Virtualization is provided by an integrated hypervisor that uses kernel-based virtual machines and supports Linux on System x and Microsoft Windows operating systems.
Table 7-63 lists the operating systems that are supported by HX5 blades.
Table 7-63 Operating system support for zBX Model 004 HX5 blades
Operating system
Support requirements
Linux on System x
Red Hat RHEL 5.5 and up, 6.0 and up, and 7.0 and up
SUSE SLES 10 (SP4) and up, SLES 11 (SP1)1 and up, and SLES 12.0 and up
Microsoft Windows
Microsoft Windows Server 2008 R22
Microsoft Windows Server 2008 (SP2)2 (Datacenter Edition preferred)
Microsoft Windows Server 20122 (Datacenter Edition preferred)
Microsoft Windows Server 2012 R22

1 Latest patch level required
2 64-bit only
7.15.2 IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise
The IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise (DataPower XI50z) is a special-purpose, double-wide blade.
The DataPower XI50z is a multifunctional appliance that can help provide these features:
Offers multiple levels of XML optimization
Streamlines and secures valuable service-oriented architecture (SOA) applications
Provides drop-in integration for heterogeneous environments by enabling core enterprise service bus (ESB) functions, including routing, bridging, transformation, and event handling
Simplifies, governs, and enhances the network security for XML and web services
Table 7-64 lists the minimum support requirements for DataPower Sysplex Distributor support.
Table 7-64 Minimum support requirements for DataPower Sysplex Distributor support
Operating system
Support requirements
z/OS
z/OS V1R11 for IPv4
z/OS V1R12 for IPv4 and IPv6
7.16 Software licensing
This section briefly describes the software licensing options that are available for the z13. Basic information about software licensing for the IBM z BladeCenter Extension (zBX) Model 004 environments also is covered.
7.16.1 Software licensing considerations
The IBM z13 software portfolio includes operating system software (that is, z/OS, z/VM, z/VSE, and z/TPF) and middleware that runs on these operating systems. The portfolio also includes middleware for Linux on z Systems environments.
zBX software products are covered by the International Program License Agreement (IPLA) and more agreements, such as the IBM International Passport Advantage® Agreement, similar to other AIX, Linux on System x, and Windows environments. PowerVM Enterprise Edition licenses must be ordered for IBM POWER7 blades.
For the z13, two metric groups for software licensing are available from IBM, depending on the software product:
Monthly license charge (MLC)
International Program License Agreement (IPLA)
MLC pricing metrics have a recurring charge that applies each month. In addition to the permission to use the product, the charge includes access to IBM product support during the support period. MLC metrics, in turn, include various offerings.
IPLA metrics have a single, up-front charge for an entitlement to use the product. An optional and separate annual charge, called subscription and support, entitles clients to access IBM product support during the support period. With this option, you can also receive future releases and versions at no additional charge.
For more information about software licensing, see the following websites:
Learn about Software licensing:
Base license agreements:
IBM z Systems Software Pricing Reference Guide:
IBM z Systems Software Pricing:
The IBM International Passport Advantage Agreement can be downloaded from the “Learn about Software licensing” website:
The remainder of this section describes the software licensing options that are available for the z13.
7.16.2 Monthly license charge (MLC) pricing metrics
MLC pricing applies to z/OS, z/VSE, and z/TPF operating systems. Any mix of z/OS, z/VM, Linux, z/VSE, and z/TPF images is allowed. Charges are based on processor capacity, which is measured in millions of service units (MSU) per hour.
Charge models
There are various Workload License Charges (WLC) pricing structures that support two charge models:
Variable charges (several pricing metrics):
Variable charges apply to products such as z/OS, z/VSE, z/TPF, DB2, IMS, CICS, and WebSphere MQ. Several pricing metrics employ the following charge types:
 – Full-capacity license charges:
The total number of MSUs of the CPC is used for charging. Full-capacity licensing is applicable when the CPC of the client is not eligible for subcapacity.
 – Subcapacity license charges:
Software charges that are based on the utilization of the logical partitions where the product is running.
Flat charges:
Software products that are licensed under flat charges are not eligible for subcapacity pricing. There is a single charge for each CPC on the z13.
Subcapacity license charges
For eligible programs, subcapacity licensing allows software charges that are based on the measured utilization by logical partitions instead of the total number of MSUs of the CPC. Subcapacity licensing removes the dependency between the software charges and CPC (hardware) installed capacity.
The subcapacity licensed products are charged monthly based on the highest observed 4-hour rolling average utilization of the logical partitions in which the product runs (except for products that are licensed by using the select application license charge (SALC) pricing metric). This type of charge requires measuring the utilization and reporting it to IBM.
The 4-hour rolling average utilization of the logical partition can be limited by a defined capacity value on the image profile of the partition. This value activates the soft capping function of PR/SM, limiting the 4-hour rolling average partition utilization to the defined capacity value. Soft capping controls the maximum 4-hour rolling average usage (the average of the last 4 hours, evaluated at every 5-minute interval), but does not control the maximum instantaneous partition use.
Also available is an LPAR group capacity limit, which sets soft capping by PR/SM for a group of logical partitions running z/OS.
Even by using the soft capping option, the use of the partition can reach up to its maximum share based on the number of logical processors and weights in the image profile. Only the 4-hour rolling average utilization is tracked, allowing utilization peaks above the defined capacity value.
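To make the 4-hour rolling average concrete, the following C sketch averages the most recent 48 five-minute MSU samples, which is one way a 4-hour rolling value can be derived from 5-minute interval data. It is an illustration only, not the SCRT or WLM algorithm, and the sample values are invented:

#include <stdio.h>

#define SAMPLES_PER_4H 48   /* 4 hours of 5-minute intervals */

/* Average of the last SAMPLES_PER_4H samples ending at index 'end'. */
static double rolling_4h_avg(const double *msu, int end)
{
    int start = end - SAMPLES_PER_4H + 1;
    if (start < 0) {
        start = 0;   /* fewer than 4 hours of history */
    }
    double sum = 0.0;
    for (int i = start; i <= end; i++) {
        sum += msu[i];
    }
    return sum / (end - start + 1);
}

int main(void)
{
    double msu[96];   /* invented utilization samples, one per 5 minutes */

    for (int i = 0; i < 96; i++) {
        msu[i] = (i < 60) ? 40.0 : 90.0;   /* a workload spike after 5 hours */
    }

    /* a short spike raises instantaneous use immediately, but the     */
    /* 4-hour rolling average that soft capping tracks rises gradually */
    printf("4-hour average at sample 95: %.1f MSU\n", rolling_4h_avg(msu, 95));
    return 0;
}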
Some pricing metrics apply to stand-alone z Systems servers. Others apply to the aggregation of multiple z Systems server workloads within the same Parallel Sysplex.
For more information about WLC and details about how to combine logical partition utilization, see z/OS Planning for Workload License Charges, SA22-7506, which is available at:
IBM z13
Metrics that are applicable to a stand-alone z13 include the following charges:
Advanced Workload License Charges (AWLC)
System z New Application License Charge (zNALC)
Parallel Sysplex License Charge (PSLC)
Metrics that are applicable to a z13 in an actively coupled Parallel Sysplex include the following charges:
AWLC, when all nodes are z13, zEC12, zBC12, z196, or z114.
Variable Workload License Charge (VWLC), allowed only under the AWLC Transition Charges for Sysplexes when not all of the nodes are z13, zEC12, zBC12, z196, or z114.
zNALC
PSLC
7.16.3 Advanced Workload License Charges (AWLC)
AWLCs were introduced with the IBM zEnterprise 196. They use the measuring and reporting mechanisms, and the existing MSU tiers, from VWLCs, although the prices for each tier were lowered.
AWLC can be implemented in full-capacity or subcapacity mode. The AWLC applies to z/OS and z/TPF and their associated middleware products, such as DB2, IMS, CICS, and WebSphere MQ, and IBM Lotus® and IBM Domino®.
With z13, Technology Transition Offerings are available that extend the software price and performance of the AWLC pricing metric:
Technology Update Pricing for z13 is applicable for clients that run on a stand-alone z13 or in an aggregated Parallel Sysplex consisting exclusively of z13 servers.
New Transition Charges for Sysplexes (TC3) are applicable when z13, zEC12, and zBC12 are the only servers in an actively coupled Parallel Sysplex.
Transition Charges for Sysplexes (TC2) are applicable when two or more servers exist in an actively coupled Parallel Sysplex consisting of one or more z13, zEC12, zBC12, z196, or z114 servers.
For more information, see the AWLC website:
System z New Application License Charge (zNALC)
zNALC offers a reduced price for the z/OS operating system on logical partitions that run a qualified new workload application. An example is Java language business applications that run under WebSphere Application Server for z/OS or SAP.
z/OS with zNALC provides a strategic pricing model that is available on the full range of
z Systems servers for simplified application planning and deployment. zNALC allows for aggregation across a qualified Parallel Sysplex, which can provide a lower cost for incremental growth across new workloads that span a Parallel Sysplex.
For more information, see the zNALC website:
Midrange workload license charges (MWLCs)
MWLCs apply to z/VSE V4 and later when running on z13, zEC12, z196, System z10, and z9 servers. The exceptions are the z10 BC and z9 BC servers at the capacity setting A01, to which zELC applies, and z114 and zBC12, where MWLC is not available.
Similar to workload license charges, MWLC can be implemented in full-capacity or subcapacity mode. An MWLC applies to z/VSE V4 and later, and several IBM middleware products for z/VSE. All other z/VSE programs continue to be priced as before.
The z/VSE pricing metric is independent of the pricing metric for other systems (for example, z/OS) that might be running on the same server. When z/VSE is running as a guest of z/VM, z/VM V5R4 or later is required.
To report usage, the subcapacity report tool is used. One Sub-Capacity Reporting Tool (SCRT) report per server is required.
For more information, see the MWLC website:
Parallel Sysplex License Charges (PSLCs)
PSLCs apply to a large range of mainframe servers. The list can be obtained from this website:
Although it can be applied to stand-alone CPCs, the metric provides aggregation benefits only when applied to a group of CPCs in an actively coupled Parallel Sysplex cluster according to IBM terms and conditions.
Aggregation allows charging a product that is based on the total MSU value of the systems where the product runs (as opposed to all the systems in the cluster). In an uncoupled environment, software charges are based on the MSU capacity of the system.
For more information, see the PSLC website:
7.16.4 z Systems International Program License Agreement (IPLA)
For z Systems servers, the following types of products are generally in the IPLA category:
Data management tools
DB2 for z/OS VUE
CICS TS VUE V5 and CICS Tools
IMS DB VUE V12 and IMS Tools
Application development tools
Certain WebSphere for z/OS products
Linux middleware products
z/VM V5 and V6
Generally, three pricing metrics apply to IPLA products for z13 and z Systems:
Value unit (VU)
VU pricing applies to the IPLA products that run on z/OS. Value Unit pricing is typically based on the number of MSUs and allows for a lower cost of incremental growth. Examples of eligible products are IMS Tools, CICS Tools, DB2 Tools, application development tools, and WebSphere products for z/OS.
Engine-based value unit (EBVU)
EBVU pricing enables a lower cost of incremental growth with more engine-based licenses that are purchased. Examples of eligible products include z/VM V5 and V6, and certain z/VM middleware, which are priced based on the number of engines.
Processor value unit (PVU)
PVUs are determined from the number of engines, under the Passport Advantage terms and conditions. Most Linux middleware also is priced based on the number of engines. In z/VM environments, CPU pooling can be used to limit the number of engines used to determine the PVUs for a particular software product.
For more information, see the z Systems IPLA website:
7.16.5 zBX licensed software
The software licensing for the zBX select System x and POWER7 blades and DataPower XI50z follows the same rules as licensing for blades that are installed outside of zBX.
PowerVM Enterprise Edition must be licensed for POWER7 blades at the time of ordering the blades.
The hypervisor for the select System x blades for zBX is provided as part of the zEnterprise Unified Resource Manager.
IBM z Unified Resource Manager
The IBM z Unified Resource Manager is available through z13, zEC12 and zBC12 hardware features, either ordered with the system or later. No separate software licensing is required.
7.17 References
For current planning information, see the support website for each of the following operating systems:
z/OS:
z/VM:
z/VSE:
z/TPF:
Linux on z Systems:
 

1 SLES is SUSE Linux Enterprise Server.
2 RHEL is Red Hat Enterprise Linux.
3 SLES11 SP3, RHEL6.4, SLES12, and RHEL7 are eligible for Transactional Execution Support.
4 If an I/O drawer is present (as carry-forward), the LPAR maximum memory is limited to 1 TB.
5 Dynamic reconfiguration support for SCM is available as a web deliverable.
6 ESCON features are not supported on z13 and zEC12.
7 All FICON Express4, FICON Express2, and FICON features have been withdrawn from marketing.
8 For dynamic I/O only
9 HCA2-O is not supported on z13.
10 HCA2-O is not supported on z13.
11 SMT is only available for zIIP workload.
12 The z/VM 6.3 SMT enablement APAR is VM65586.
13 There are exceptions to this statement, and many details are skipped in this description. This section assumes that you can merge this brief description with an existing understanding of I/O operations in a virtual memory environment.