Supported features and functions
This chapter describes the I/O and other miscellaneous features and functions of the z14 ZR1. The information in this chapter expands upon the overview of the key hardware elements that was provided in Chapter 2, “IBM z14 ZR1 hardware overview” on page 17.
Only the enhanced features and functions that were introduced with the z14 ZR1 are described in more detail in this chapter. The remaining supported features from earlier generations of Z platforms are listed for convenience.
 
Note: Throughout the chapter, reference is made to the IBM z14 ZR1 Technical Guide, SG24-8651.
3.1 I/O features at a glance
The z14 ZR1 supports a PCIe-based infrastructure in which PCIe+ I/O drawers host the I/O features that are listed in the tables in this section.
The following clustering and coupling links are supported on the z14 ZR1:
Coupling Express Long Reach (CE LR)
Integrated Coupling Adapter Short Reach (ICA SR)
Internal Coupling (IC) links
The following features that were part of earlier Z platforms are not orderable and cannot be carried forward to the z14 ZR1:
ESCON
FICON Express8 and older
OSA-Express3 and older
ISC-3
Crypto Express4S and older
Flash Express
Host Channel Adapter3 - Optical Long Reach (HCA3-O LR)
Host Channel Adapter3 - Optical (HCA3-O)
Connector type LC Duplex is used for all fiber optic cables, except the cables that are used for the zHyperLink Express and ICA SR connections, which use multi-fiber push-on (MPO) connectors. The MPO connector of the zHyperLink Express and the ICA SR connection has two rows of 12 fibers, and the cables are interchangeable.
Storage connectivity options are listed in Table 3-1. For more information about zHyperLink, FICON, and FCP connectivity in relation to the z14 ZR1, see 3.3, “Storage connectivity” on page 36.
Table 3-1 Storage connectivity features

Feature                   | Feature codes | Bit rate in Gbps (or as stated) | Cable type        | Maximum unrepeated distance | Ordering information
zHyperLink Express        | 0431          | 8 GBps                          | OM4, OM5          | 150 m                       | New build
                          |               |                                 | OM3               | 100 m                       |
FICON Express16S+ 10KM LX | 0427          | 4, 8, or 16                     | SM 9 µm           | 10 km (6.2 miles)           | New build
FICON Express16S+ SX      | 0428          | 4, 8, or 16                     | OM2, OM3, and OM4 | See Table 3-2               | New build
FICON Express16S 10KM LX  | 0418          | 4, 8, or 16                     | SM 9 µm           | 10 km (6.2 miles)           | Carry forward
FICON Express16S SX       | 0419          | 4, 8, or 16                     | OM2, OM3, and OM4 | See Table 3-2               | Carry forward
FICON Express8S 10KM LX   | 0409          | 2, 4, or 8                      | SM 9 µm           | 10 km (6.2 miles)           | Carry forward
FICON Express8S SX        | 0410          | 2, 4, or 8                      | OM2, OM3, and OM4 | See Table 3-2               | Carry forward
The maximum unrepeated distances for different multimode fiber optic cable types when used with FICON SX (shortwave) features running at different bit rates are listed in Table 3-2.
Table 3-2 Unrepeated distances for different multimode fiber optic cable types

Cable type (modal bandwidth) | 2 Gbps          | 4 Gbps          | 8 Gbps          | 16 Gbps
OM1 (62.5 µm at 200 MHz·km)  | 150 m (492 ft)  | 70 m (230 ft)   | 21 m (69 ft)    | N/A
OM2 (50 µm at 500 MHz·km)    | 300 m (984 ft)  | 150 m (492 ft)  | 50 m (164 ft)   | 35 m (115 ft)
OM3 (50 µm at 2000 MHz·km)   | 500 m (1640 ft) | 380 m (1247 ft) | 150 m (492 ft)  | 100 m (328 ft)
OM4 (50 µm at 4700 MHz·km)   | N/A             | 400 m (1312 ft) | 190 m (623 ft)  | 125 m (410 ft)
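The distance limits in Table 3-2 lend themselves to a simple lookup for cable-planning checks. The following Python sketch is illustrative only (the function name is arbitrary); the values are copied directly from Table 3-2, and None represents a combination that is not supported:

# Unrepeated distance limits (meters) for FICON SX links, copied from Table 3-2.
# None means the combination is not supported (N/A).
SX_LIMITS_M = {
    "OM1": {2: 150, 4: 70, 8: 21, 16: None},
    "OM2": {2: 300, 4: 150, 8: 50, 16: 35},
    "OM3": {2: 500, 4: 380, 8: 150, 16: 100},
    "OM4": {2: None, 4: 400, 8: 190, 16: 125},
}

def max_unrepeated_distance(cable, gbps):
    """Return the maximum unrepeated distance in meters, or None if not supported."""
    return SX_LIMITS_M[cable][gbps]

# Example: an OM3 cable run at 16 Gbps is limited to 100 m.
print(max_unrepeated_distance("OM3", 16))  # 100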
The network connectivity options are listed in Table 3-3. For more information about OSA-Express and RoCE Express connectivity in relation to the z14 ZR1, see 3.4, “Network connectivity” on page 40.
Table 3-3 Network connectivity features

Feature                  | Feature codes | Bit rate in Gbps (or as stated) | Cable type           | Maximum unrepeated distance (note 1)  | Ordering information
OSA-Express6S 10GbE LR   | 0424          | 10                              | SM 9 µm              | 10 km (6.2 miles)                     | New build
OSA-Express5S 10GbE LR   | 0415          | 10                              | SM 9 µm              | 10 km (6.2 miles)                     | Carry forward
OSA-Express4S 10GbE LR   | 0406          | 10                              | SM 9 µm              | 10 km (6.2 miles)                     | Carry forward
OSA-Express6S 10GbE SR   | 0425          | 10                              | MM 62.5 µm, MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)  | New build
OSA-Express5S 10GbE SR   | 0416          | 10                              | MM 62.5 µm, MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)  | Carry forward
OSA-Express4S 10GbE SR   | 0407          | 10                              | MM 62.5 µm, MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)  | Carry forward
OSA-Express6S GbE LX     | 0422          | 1.25                            | SM 9 µm              | 5 km (3.1 miles)                      | New build
OSA-Express5S GbE LX     | 0413          | 1.25                            | SM 9 µm              | 5 km (3.1 miles)                      | Carry forward
OSA-Express4S GbE LX     | 0404          | 1.25                            | SM 9 µm              | 5 km (3.1 miles)                      | Carry forward
OSA-Express6S GbE SX     | 0423          | 1.25                            | MM 62.5 µm, MM 50 µm | 275 m (200), 550 m (500)              | New build
OSA-Express5S GbE SX     | 0414          | 1.25                            | MM 62.5 µm, MM 50 µm | 275 m (200), 550 m (500)              | Carry forward
OSA-Express4S GbE SX     | 0405          | 1.25                            | MM 62.5 µm, MM 50 µm | 275 m (200), 550 m (500)              | Carry forward
OSA-Express6S 1000BASE-T | 0426          | 100 or 1000 Mbps                | Cat 5, Cat 6 UTP     | 100 m                                 | New build
OSA-Express5S 1000BASE-T | 0417          | 100 or 1000 Mbps                | Cat 5, Cat 6 UTP     | 100 m                                 | Carry forward
10GbE RoCE Express2      | 0412          | 10                              | MM 62.5 µm, MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)  | New build
10GbE RoCE Express       | 0411          | 10                              | MM 62.5 µm, MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)  | Carry forward
1 The minimum fiber bandwidth distance in MHz-km for multi-mode fiber optic links is included in parentheses, where applicable.
Coupling link options are listed in Table 3-4. For more information about the Parallel Sysplex or STP-only link connectivity in relation to the z14 ZR1, see 3.7, “Coupling and clustering” on page 51 and 3.9, “Server Time Protocol” on page 53.
Table 3-4 Coupling and clustering features

Feature                | Feature codes | Bit rate | Cable type | Maximum unrepeated distance | Ordering information
CE LR                  | 0433          | 10 Gbps  | SM 9 µm    | 10 km (6.2 miles)           | New build or Carry forward
ICA SR                 | 0172          | 8 GBps   | OM4, OM5   | 150 m                       | New build or Carry forward
                       |               |          | OM3        | 100 m                       |
Internal Coupling (IC) | No coupling link feature or fiber optic cable is required.
Special-purpose features, such as the cryptographic and compression features, and Virtual Flash Memory are listed in Table 3-5. For more information about the cryptographic features, see 3.6, “Cryptographic features” on page 49.
Table 3-5 Special-purpose features

Feature              | Feature codes | Bit rate in Gbps | Cable type | Maximum unrepeated distance | Ordering information
Crypto Express6S     | 0893          | N/A              | N/A        | N/A                         | New build
Crypto Express5S     | 0890          | N/A              | N/A        | N/A                         | Carry forward
zEDC Express         | 0420          | N/A              | N/A        | N/A                         | New build or Carry forward
Virtual Flash Memory | 0614          | N/A              | N/A        | N/A                         | New build
3.2 Native PCIe features and integrated firmware processor
The following native PCIe features are available on the z14 ZR1:
zHyperLink Express
Coupling Express Long Reach (CE LR)
10 Gigabit Ethernet (GbE) RoCE Express2
10GbE RoCE Express
zEDC Express
PCIe+ I/O drawer for z14 ZR1
On the z14 ZR1, a PCIe+ I/O drawer (FC 4001) is used to host the PCIe features. The drawer fits in the 19-inch standard rack of the z14 ZR1. The PCIe+ I/O drawer hosts up to 16 PCIe features.
 
Note: PCIe I/O drawers from earlier Z platforms cannot be carried forward to z14 ZR1.
These features are plugged exclusively into the PCIe+ I/O drawer, where they coexist with the other (non-native PCIe) I/O adapters and features. However, they are managed differently from the other I/O adapters and features. The native PCIe features are assigned a PCHID according to their physical location in the PCIe+ I/O drawer.
For the native PCIe features that are supported by z14 ZR1, drivers are included in the operating system and the adaptation layer is not needed. The adapter management functions (such as diagnostics and firmware updates) are provided by Resource Group partitions that are running on the integrated firmware processor (IFP). The z14 ZR1 includes four Resource Groups.
The IFP is used to manage native PCIe adapters that are installed in a PCIe+ I/O drawer. The IFP is allocated from a pool of processor units that are available for the entire system. Because the IFP is exclusively used to manage native PCIe adapters, it is not taken from the pool of processor units that can be characterized for customer usage.
If a native PCIe feature is present in the system, the IFP is initialized and allocated during the system POR phase. Although the IFP is allocated to one of the physical processor units, it is not visible. In error or failover scenarios, the IFP acts as any other processor unit (that is, sparing is started).
3.3 Storage connectivity
In the storage connectivity area, the main focus is on reducing the latency of I/O transmission. The zHyperLink Express feature optimizes the I/O infrastructure for low-latency access, and the FICON Express16S+ feature offers the same functions as its predecessor, with increased performance through new design enhancements.
For more information about FICON channel, see the Z I/O connectivity website. Technical papers about performance data are also available.
3.3.1 zHyperLink Express
IBM zHyperLink Express is a short distance Z I/O adapter with up to 5x lower latency than High-Performance FICON for read requests. This
feature is in the PCIe+ I/O drawer and is a two-port adapter that is used for short distance, direct connectivity between a z14 ZR1 and a DS8880. The zHyperLink Express is designed to support distances up to 150 meters at a link data rate of 8 GigaBytes per second (GBps).
A 24x MTP-MTP cable is required for each port of the zHyperLink Express feature. It is a single 24-fiber cable with Multi-fiber Termination Push-on (MTP) connectors. Internally, the single cable houses 12 fibers for transmit and 12 fibers for receive.
 
Note: FICON connectivity to each storage system is still required. The FICON connection is used for zHyperLink initialization, I/O requests that are not eligible for zHyperLink communications, and as an alternative path if zHyperLink requests fail (for example, storage cache misses or busy storage device conditions).
 
3.3.2 FICON functions
FICON features continue to evolve and deliver improved throughput, reliability, availability, and serviceability (RAS). FICON features in the z14 ZR1 can provide connectivity to systems, Fibre Channel (FC) switches, and various devices in a SAN environment.
The FICON protocol is fully supported on the z14 ZR1. It is commonly used with IBM z/OS, IBM z/VM (and guest systems), Linux on Z, IBM z/VSE, and IBM z/TPF. The FICON enhancements are described next.
FICON multi-hops and cascaded switch support
The z14 ZR1 supports three hops (up to four FC switches) in a cascaded switch configuration. This support can help simplify the infrastructure with optimized RAS functionality. The support for a FICON multi-hop environment must also be provided by the FC switch vendor.
High-Performance FICON for z Systems
High-Performance FICON for z Systems (zHPF) simplifies and improves protocol efficiency by reducing the number of information units (IU) that are processed. Enhancements to the z/Architecture and the FICON protocol provide optimizations for online transaction processing (OLTP) workloads. zHPF can also be used by z/OS for IBM Db2, VSAM, PDSE, and zFS.
zHPF was further enhanced to allow all large write operations (greater than 64 KB) to be run in a single round trip to the control unit at distances up to 100 km (62.1 miles). This enhancement avoids elongating the I/O service time for these write operations at extended distances. It is especially useful for IBM GDPS HyperSwap configurations.
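A rough calculation shows why collapsing a large write into a single round trip matters at extended distances. The following Python sketch assumes that light propagates in fiber at roughly 5 microseconds per kilometer (an approximation, not a figure from this document):

# Rough illustration of protocol propagation delay over fiber.
# Assumption: roughly 5 microseconds per kilometer, one way.
US_PER_KM = 5

def round_trip_us(distance_km, round_trips=1):
    """Approximate propagation delay for the given number of protocol round trips."""
    return 2 * distance_km * US_PER_KM * round_trips

# At 100 km, each additional round trip adds roughly 1 ms of service time,
# so completing a >64 KB write in one trip avoids that elongation.
print(round_trip_us(100, round_trips=1))  # ~1000 microseconds (1 ms)
print(round_trip_us(100, round_trips=2))  # ~2000 microseconds (2 ms)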
Additionally, the changes to the architecture provide end-to-end system enhancements to improve RAS.
zHPF requires matching support by the IBM System Storage® DS8880 series or similar devices from other vendors. FICON Express16S+, FICON Express16S, and FICON Express8S support the FICON protocol and the zHPF protocol in the server Licensed Internal Code.
FICON Forward Error Correction
Even with proper fiber optic cable cleaning discipline, errors can still occur on 16 Gbps links. Forward Error Correction (FEC) is a technique that is used for controlling errors in data transmission over lower quality communication channels. With FEC, I/O errors are decreased, which reduces the potential effect on workload performance that is caused by I/O errors.
When running at 16 Gbps, FICON Express16S+ and FICON Express16S features can use FEC when connected to devices that support FEC, such as the IBM DS8880. FEC allows channels to operate at higher speeds over longer distances and with reduced power and higher throughput, while retaining the same reliability and robustness for which FICON channels are traditionally known.
FICON Dynamic Routing
FICON Dynamic Routing (FIDR) is designed to support the dynamic routing policies that are supplied by FICON Director providers, such as Brocade’s Exchange Based Routing (EBR) and Cisco’s Open Exchange ID Routing (OxID).
With FIDR, you are no longer restricted to the use of static storage area network (SAN) routing policies for inter-switch links (ISLs) in a cascaded FICON Directors configuration. Performance of FICON and FCP traffic improves because of SAN dynamic routing policies that better use all of the available ISL bandwidth through higher utilization.
The IBM DS8880 also supports FIDR. Therefore, in a configuration with the z14 ZR1, capacity planning and management can be simplified and provide persistent and repeatable performance and higher resiliency.
All devices in the SAN environment must support FICON Dynamic Routing to use this feature.
The z14 ZR1 continues to provide the functions that were introduced on other Z platforms with the supported FICON features. For more information, see IBM Z Connectivity Handbook, SG24-5444.
3.3.3 FCP functions
Fibre Channel Protocol (FCP) is fully supported on the z14 ZR1. It is commonly used with Linux on IBM Z and is also supported by z/VM and z/VSE. The key FCP functions are described next.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) is designed to allow the sharing of a single physical FCP channel among operating system images, whether in logical partitions or as z/VM guests. This goal is achieved by assigning a unique worldwide port name (WWPN) for each operating system that is connected to the FCP channel. In turn, each operating system appears to have its own distinct WWPN in the SAN environment, which enables separating the associated FCP traffic on the channel.
Access controls that are based on the assigned WWPN can be applied in the SAN environment. This function can be done by using standard mechanisms, such as zoning in SAN switches and logical unit number (LUN) masking in the storage controllers.
The following preferred and allowable operating characteristic values in the FCP protocol were increased:
The preferred maximum number of NPIV hosts defined to any single physical FCP channel increased from 32 to 64.
The allowable maximum number of remote N_Ports a single physical channel can communicate with increased from 512 to 1024.
The maximum number of LUNs that is addressable by a single physical channel increased from 4096 to 8192.
In support of these increases, the FCP channels also were designed to now support 1528 concurrent I/O operations, which is an increase from the previous generation FCP channel limit of 764.
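For planning purposes, these limits can be expressed as a simple validation, as in the following Python sketch (the limit values come from the list above; the helper function and its names are hypothetical):

# Illustrative planning check against the FCP channel limits stated above.
FCP_CHANNEL_LIMITS = {
    "npiv_hosts_preferred": 64,   # preferred maximum NPIV hosts per physical channel
    "remote_nports": 1024,        # maximum remote N_Ports per physical channel
    "luns": 8192,                 # maximum addressable LUNs per physical channel
    "concurrent_ios": 1528,       # maximum concurrent I/O operations per channel
}

def check_fcp_plan(npiv_hosts, remote_nports, luns, concurrent_ios):
    """Return the names of the limits that the planned configuration exceeds."""
    plan = {
        "npiv_hosts_preferred": npiv_hosts,
        "remote_nports": remote_nports,
        "luns": luns,
        "concurrent_ios": concurrent_ios,
    }
    return [name for name, value in plan.items() if value > FCP_CHANNEL_LIMITS[name]]

# Example: 70 NPIV hosts exceeds the preferred maximum of 64.
print(check_fcp_plan(npiv_hosts=70, remote_nports=800, luns=4000, concurrent_ios=1000))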
Export/import physical port WWPNs for FCP channels
IBM Z platforms automatically assign WWPNs to the physical ports of an FCP channel. This WWPN assignment changes when an FCP channel is moved to a different physical slot position in the I/O drawer. The z14 ZR1 allows for the modification of these default assignments, which permits FCP channels to keep previously assigned WWPNs. This capability eliminates the need for reconfiguration of the SAN environment when a Z platform upgrade occurs or when a FICON Express feature is replaced.
Fibre Channel Read Diagnostic Parameter
An extended link service (ELS) command called Read Diagnostic Parameter (RDP) was added to the Fibre Channel T11 standard to allow Z platforms to obtain more diagnostic data from the Small Form-factor Pluggable (SFP) optics that are throughout the SAN fabric. RDP can identify a failed or failing component without unnecessarily replacing more components in the SAN fabric (such as FICON features, optics, and cables).
FICON and FCP channels provide a means to read this extra diagnostic data for all of the ports that are accessed in the I/O configuration and make the data available to a Z LPAR. For FICON channels, z/OS displays the data with a message and display command. For Linux on IBM Z and the KVM hypervisor, z/VM, and z/VSE, this diagnostic data is made available in a window in the SAN Explorer tool on the Hardware Management Console (HMC).
3.3.4 FICON Express16S+
The following types of transceivers for FICON Express16S+ are supported on a new build system:
FICON Express16S+ LX feature (long wavelength)
FICON Express16S+ SX feature (short wavelength)
Each port supports attachment to the following elements:
FICON/FCP switches and directors that support 4 Gbps, 8 Gbps, or 16 Gbps
Control units (storage subsystems) that support 4 Gbps, 8 Gbps, or 16 Gbps
 
Note: Both ports of a FICON Express16S+ adapter must be the same CHPID type (FC or FCP).
FICON Express16S+ LX feature
The FICON Express16S+ LX feature occupies one I/O slot in the PCIe+ I/O drawer. It features two ports, each supporting an LC duplex connector and auto-negotiated link speeds of 4 Gbps, 8 Gbps, and 16 Gbps up to an unrepeated maximum distance of 10 km (6.2 miles).
FICON Express16S+ SX feature
The FICON Express16S+ SX feature occupies one I/O slot in the PCIe+ I/O drawer. It features two ports, each supporting an LC duplex connector and auto-negotiated link speeds of 4 Gbps, 8 Gbps, and 16 Gbps up to an unrepeated maximum distance of 380 meters (1246.7 feet) at 4 Gbps, 150 meters (492.1 feet) at 8 Gbps, or 100 meters (328 feet) at 16 Gbps.
3.3.5 FICON Express16S (carry forward only)
The following types of transceivers for FICON Express16S are available only when carried forward on upgrades and are supported on z14 ZR1:
FICON Express16S LX feature
FICON Express16S SX feature
Each port supports attachment to the following elements:
FICON/FCP switches and directors that support 4 Gbps, 8 Gbps, or 16 Gbps
Control units (storage subsystems) that support 4 Gbps, 8 Gbps, or 16 Gbps
 
Note: To permit the mix of different CHPID types (FC and FCP) in the FICON Express16S features, the keyword MIXTYPE must be defined in the IODF to at least one port of the adapter.
FICON Express16S LX feature
The FICON Express16S LX feature occupies one I/O slot in the PCIe+ I/O drawer. It features two ports, each supporting an LC duplex connector and auto-negotiated link speeds of 4 Gbps, 8 Gbps, and 16 Gbps up to an unrepeated maximum distance of 10 km (6.2 miles).
FICON Express16S SX feature
The FICON Express16S SX feature occupies one I/O slot in the PCIe+ I/O drawer. It features two ports, each supporting an LC duplex connector and auto-negotiated link speeds of 4 Gbps, 8 Gbps, and 16 Gbps up to an unrepeated maximum distance of 380 meters (1246 feet) at 4 Gbps, 150 meters (492 feet) at 8 Gbps, or 100 meters (328 feet) at 16 Gbps.
3.3.6 FICON Express8S (carry forward only)
The FICON Express8S features are available only when carried forward on upgrades. The following types of transceivers for FICON Express8S are supported on z14 ZR1:
FICON Express8S 10KM LX feature
FICON Express8S SX feature
FICON Express8S 10KM LX feature
The FICON Express8S 10KM LX feature occupies one I/O slot in the PCIe+ I/O drawer. It includes two ports, each supporting an LC duplex connector, and auto-negotiated link speeds of 2 Gbps, 4 Gbps, and 8 Gbps up to an unrepeated maximum distance of 10 km (6.2 miles).
FICON Express8S SX feature
The FICON Express8S SX feature occupies one I/O slot in the PCIe+ I/O drawer. This feature includes two ports, each supporting an LC duplex connector, and auto-negotiated link speeds of 2 Gbps, 4 Gbps, and 8 Gbps up to an unrepeated maximum distance of 500 meters (1640 feet) at 2 Gbps, 380 meters (1246 feet) at 4 Gbps, or 150 meters (492 feet) at 8 Gbps.
3.4 Network connectivity
The z14 ZR1 offers a wide range of functions that can help consolidate or simplify the network environment. These functions are supported by OSA-Express, RoCE Express, and HiperSockets features.
3.4.1 OSA-Express functions
Improved throughput (mixed inbound/outbound) is achieved by the data router function that was introduced in the OSA-Express3, and enhanced in OSA-Express6S and OSA-Express5S features. With the data router, the store and forward technique in DMA is no longer used. The data router enables a direct host memory-to-LAN flow. This function avoids a hop and is designed to reduce latency and increase throughput for standard frames (1492 bytes) and jumbo frames (8992 bytes).
The most current OSA-Express functions are described next.
OSM CHPID for usage with Dynamic Partition Manager
Dynamic Partition Manager (DPM) requires that the z14 ZR1 has two OSA-Express5S 1000BASE-T Ethernet or OSA-Express6S 1000BASE-T Ethernet features that are defined as CHPID type OSM for connectivity. OSA-Express features that are defined with OSM cannot be shared with other CHPID types and must be dedicated for use by DPM. DPM supports Linux on IBM Z running in an LPAR, under the KVM hypervisor, or under z/VM.
DPM can be ordered along with Ensemble membership, but both cannot be enabled at the same time on the z14 ZR1.
 
Statement of direction1: IBM z14 ZR1 is the last platform to support Ensembles and zEnterprise Unified Resource Manager (zManager).

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. The development, release, and timing of any future features or functionality that is described for our products remain at IBM’s sole discretion.
OSA-ICC support for Secure Sockets Layer
When configured as an integrated console controller CHPID type (OSC) on the z14 ZR1, the Open Systems Adapter supports the configuration and enablement of secure connections by using the Transport Layer Security (TLS) protocol versions 1.0, 1.1, and 1.2. Server-side authentication is supported by using a self-signed certificate or customer supplied certificate, which can be signed by a customer-specified certificate authority.
The certificates that are used must include an RSA key length of 2048 bits and be signed by using SHA-256. This support negotiates a cipher suite of AES-128 for the session key.
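As an illustration of the stated certificate requirements (an RSA key length of 2048 bits and a SHA-256 signature), the following Python sketch uses the third-party cryptography package to create a self-signed certificate. The common name is a placeholder, and importing the certificate into the OSA-ICC configuration is performed through HMC/SE tasks, which are not shown here:

# Illustrative self-signed certificate with an RSA 2048-bit key, signed with SHA-256.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # RSA 2048 bits
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "osa-icc.example.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())             # signed by using SHA-256
)
print(cert.public_bytes(serialization.Encoding.PEM).decode())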
Queued direct I/O optimized latency mode
Queued direct I/O (QDIO) optimized latency mode can help improve performance for applications that feature a critical requirement to minimize response times for inbound and outbound data. It optimizes the interrupt processing as noted in the following configurations:
For inbound processing, the TCP/IP stack looks more frequently for available data to process, which ensures that any new data is read from the OSA-Express6S or OSA-Express5S without requiring more program-controlled interrupts.
For outbound processing, the OSA-Express6S or OSA-Express5S looks more frequently for available data to process from the TCP/IP stack; therefore, a Signal Adapter instruction is not required to determine whether more data is available.
Inbound workload queuing
Inbound workload queuing (IWQ) can help to reduce overhead and latency for inbound z/OS network data traffic and implement an efficient way for initiating parallel processing. This improvement is achieved by using OSA-Express features in QDIO mode (CHPID type OSD) with multiple input queues, and by processing network data traffic that is based on workload types. The data from a specific workload type is placed in one of four input queues (per device). A process is created and scheduled to run on one of the multiple processors, independent from the other three queues. This change can improve performance because IWQ can use the symmetric multiprocessor (SMP) architecture of the Z.
Virtual local area network support
Virtual local area network (VLAN) is a function of OSA-Express features that takes advantage of the Institute of Electrical and Electronics Engineers (IEEE) 802.1Q standard for virtual bridged LANs. VLANs allow easier administration of logical groups of stations that communicate as though they were on the same LAN.
In the virtualized environment of the Z, TCP/IP stacks can exist and potentially share OSA-Express features. VLAN provides a greater degree of isolation by allowing contact with a server from only the set of stations that comprise the VLAN.
Virtual MAC support
When sharing OSA port addresses across LPARs, Virtual MAC (VMAC) support enables each operating system instance to have a unique VMAC address. All IP addresses that are associated with a TCP/IP stack are accessible by using their own VMAC address, instead of sharing the MAC address of the OSA port. Advantages can include a simplified configuration setup and improvements to IP workload load balancing and outbound routing.
This support is available for Layer 3 mode. It is used by z/OS and supported by z/VM for guest use.
z/VM multi-VSwitch link aggregation support
z/VM provides multi-VSwitch link aggregation support, which allows a port group of OSA-Express features to span multiple virtual switches within a single z/VM LPAR or between multiple z/VM LPARs. Sharing a link aggregation port group (LAG) with multiple virtual switches increases optimization and utilization of the OSA-Express when handling larger traffic loads. With this support, a port group is no longer required to be dedicated to a single z/VM virtual switch.
QDIO data connection isolation for the z/VM environment
New workloads increasingly require multitier security zones. In a virtualized environment, an essential requirement is to protect workloads from intrusion or exposure of data and processes from other workloads.
The QDIO data connection isolation enables the following elements:
Adherence to security and HIPAA security guidelines and regulations for network isolation between the instances that share physical network connectivity.
Establishment of security zone boundaries that are defined by the network administrators.
Use of a mechanism to isolate a QDIO data connection (on an OSA port) by forcing traffic to flow to the external network. This feature ensures that all communication flows only between an operating system and the external network.
Internal routing can be disabled on a per-QDIO connection basis. This support does not affect the ability to share an OSA port. Sharing occurs as it does today, but the ability to communicate between sharing QDIO data connections can be restricted through this support.
QDIO data connection isolation (also known as VSWITCH port isolation) applies to the z/VM environment when the Virtual Switch (VSWITCH) function is used, and to all supported OSA-Express features (CHPID type OSD) on Z. z/OS supports a similar capability.
QDIO interface isolation for z/OS
Some environments require strict controls for routing data traffic between servers or nodes. In certain cases, the LPAR-to-LPAR capability of a shared OSA port can prevent such controls from being enforced. With interface isolation, internal routing can be controlled on an LPAR basis. When interface isolation is enabled, the OSA discards any packets that are destined for a z/OS LPAR that is registered in the OSA Address Table (OAT) as isolated.
QDIO interface isolation is supported by Communications Server for z/OS V1R11 and later, and for all supported OSA-Express features on Z.
3.4.2 OSA-Express6S
This section describes the connectivity options that are offered by the OSA-Express6S features. The following OSA-Express6S features can be installed on z14 ZR1:
OSA-Express6S 10GbE Long Reach (LR)
OSA-Express6S 10GbE Short Reach (SR)
OSA-Express6S Gigabit Ethernet Long Wavelength (GbE LX)
OSA-Express6S Gigabit Ethernet Short Wavelength (GbE SX)
OSA-Express6S 1000BASE-T Ethernet
OSA-Express6S 10GbE LR feature
The OSA-Express6S 10GbE LR feature occupies one slot in a PCIe+ I/O drawer. It includes one port that connects to a 10 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable that ends with an LC Duplex connector. The feature supports an unrepeated maximum distance of 10 km (6.2 miles).
OSA-Express6S 10GbE SR feature
The OSA-Express6S 10GbE SR feature occupies one slot in the PCIe+ I/O drawer. This feature includes one port that connects to a 10 Gbps Ethernet LAN through a 62.5 µm or 50 µm multimode fiber optic cable that ends with an LC Duplex connector.
The maximum supported unrepeated distance is 33 m (108 feet) on a 62.5 µm multimode fiber optic cable, and 300 m (984 feet) on a 50 µm multimode fiber optic cable.
OSA-Express6S GbE LX feature
The OSA-Express6S GbE LX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports, which represent one channel path identifier (CHPID), that connect to a 1 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable. This cable ends with an LC Duplex connector, which supports an unrepeated maximum distance of 5 km (3.1 miles).
A multimode (62.5 or 50 µm) fiber optic cable can be used with this feature. The use of these multimode cable types requires a Mode Conditioning Patch (MCP) cable at each end of the fiber optic link. Use of the single mode to multimode MCP cables reduces the supported distance of the link to a maximum of 550 meters (1804 feet).
OSA-Express6S GbE SX feature
The OSA-Express6S GbE SX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports (which represent one CHPID) that connect to a 1 Gbps Ethernet LAN through 50 or 62.5 µm multimode fiber optic cable. This cable ends with an LC Duplex connector over an unrepeated distance of 550 meters (1804 feet) for 50 µm fiber or 220 meters (721 feet) for 62.5 µm fiber.
OSA-Express6S 1000BASE-T feature
The OSA-Express6S 1000BASE-T occupies one slot in the PCIe+ I/O drawer. It features two ports (which represent one CHPID) that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port has an RJ-45 receptacle for UTP Cat5 or Cat6 cabling, which supports a maximum distance of 100 meters (328 feet).
 
 
 
Statement of Direction1: The OSA-Express6S 1000BASE-T feature is the last generation to support connections that operate at 100 Mbps link speed. Future OSA-Express 1000BASE-T features will support operation at only 1 Gbps link speed.

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. The development, release, and timing of any future features or functionality that is described for our products remain at IBM’s sole discretion.
3.4.3 OSA-Express5S (carry forward only)
This section describes the connectivity options that are offered by the OSA-Express5S features. The following OSA-Express5S features can be installed on z14 ZR1:
OSA-Express5S 10GbE Long Reach (LR)
OSA-Express5S 10GbE Short Reach (SR)
OSA-Express5S Gigabit Ethernet Long Wavelength (GbE LX)
OSA-Express5S Gigabit Ethernet Short Wavelength (GbE SX)
OSA-Express5S 1000BASE-T Ethernet
OSA-Express5S 10GbE LR feature
The OSA-Express5S 10GbE LR feature occupies one slot in a PCIe+ I/O drawer. It features one port that connects to a 10 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable that ends with an LC Duplex connector. The feature supports an unrepeated maximum distance of 10 km (6.2 miles).
OSA-Express5S 10GbE SR feature
The OSA-Express5S 10GbE SR feature occupies one slot in the PCIe+ I/O drawer. This feature includes one port that connects to a 10 Gbps Ethernet LAN through a 62.5 µm or 50 µm multimode fiber optic cable that ends with an LC Duplex connector. The maximum supported unrepeated distance is 33 m (108 feet) on a 62.5 µm multimode fiber optic cable, and 300 m (984 feet) on a 50 µm multimode fiber optic cable.
OSA-Express5S GbE LX feature
The OSA-Express5S GbE LX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports (which represent one CHPID) that connect to a 1 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable. This cable ends with an LC Duplex connector, which supports an unrepeated maximum distance of 5 km (3.1 miles).
A multimode (62.5 or 50 µm) fiber optic cable can be used with this feature. The use of these multimode cable types requires an MCP cable at each end of the fiber optic link. Use of the single mode to multimode MCP cables reduces the supported distance of the link to a maximum of 550 meters (1804 feet).
OSA-Express5S GbE SX feature
The OSA-Express5S GbE SX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports (which represent one CHPID) that connect to a 1 Gbps Ethernet LAN through 50 or 62.5 µm multimode fiber optic cable. This cable ends with an LC Duplex connector over an unrepeated distance of 550 meters (1804 feet) for 50 µm fiber or 220 meters (721 feet) for 62.5 µm fiber.
OSA-Express5S 1000BASE-T feature
The OSA-Express5S 1000BASE-T occupies one slot in the PCIe+ I/O drawer. It features two ports (which represent one CHPID) that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port includes an RJ-45 receptacle for UTP Cat5 or Cat6 cabling, which supports a maximum distance of 100 meters (328 feet).
3.4.4 OSA-Express4S (carry forward only, select features)
This section describes the connectivity options that are offered by the OSA-Express4S features. The following OSA-Express4S features can be installed on z14 ZR1:
OSA-Express4S 10GbE Long Reach (LR)
OSA-Express4S 10GbE Short Reach (SR)
OSA-Express4S Gigabit Ethernet Long Wavelength (GbE LX)
OSA-Express4S Gigabit Ethernet Short Wavelength (GbE SX)
OSA-Express4S 10GbE LR feature
The OSA-Express4S 10GbE LR feature occupies one slot in a PCIe+ I/O drawer. It features one port that connects to a 10 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable that ends with an LC Duplex connector. The feature supports an unrepeated maximum distance of 10 km (6.2 miles).
OSA-Express4S 10GbE SR feature
The OSA-Express4S 10GbE SR feature occupies one slot in the PCIe+ I/O drawer. This feature includes one port that connects to a 10 Gbps Ethernet LAN through a 62.5 µm or 50 µm multimode fiber optic cable that ends with an LC Duplex connector. The maximum supported unrepeated distance is 33 m (108 feet) on a 62.5 µm multimode fiber optic cable, and 300 m (984 feet) on a 50 µm multimode fiber optic cable.
OSA-Express4S GbE LX feature
The OSA-Express4S GbE LX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports (which represent one CHPID) that connect to a 1 Gbps Ethernet LAN through a 9 µm single mode fiber optic cable. This cable ends with an LC Duplex connector that supports an unrepeated maximum distance of 5 km (3.1 miles).
A multimode (62.5 or 50 µm) fiber optic cable can be used with this feature. The use of these multimode cable types requires an MCP cable at each end of the fiber optic link. Use of the single mode to multimode MCP cables reduces the supported distance of the link to a maximum of 550 meters (1804 feet).
OSA-Express4S GbE SX feature
The OSA-Express4S GbE SX occupies one slot in the PCIe+ I/O drawer. This feature includes two ports (which represent one CHPID) that connect to a 1 Gbps Ethernet LAN through 50 or 62.5 µm multimode fiber optic cable. This cable ends with an LC Duplex connector over an unrepeated distance of 550 meters (1804 feet) for 50 µm fiber or 220 meters (721 feet) for 62.5 µm fiber.
3.4.5 HiperSockets functions
IBM HiperSockets is referred to as the “network in a box” because it simulates LAN environments entirely within the IBM Z platform. The data transfer is from LPAR memory to LPAR memory, mediated by IBM Z firmware. The z14 ZR1 supports up to 32 HiperSockets. One HiperSockets network can be shared by up to 40 LPARs. Up to 4096 communication paths support a total of 12,288 IP addresses across all 32 HiperSockets.
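For planning, the stated HiperSockets limits can be checked with a short script. The following Python sketch is illustrative only; the limit values are taken from the preceding paragraph and the helper function is hypothetical:

# Illustrative check of a HiperSockets plan against the limits stated above.
HIPERSOCKETS_LIMITS = {
    "networks": 32,              # HiperSockets networks per z14 ZR1
    "lpars_per_network": 40,     # LPARs that can share one network
    "paths_total": 4096,         # communication paths across all networks
    "ip_addresses_total": 12288, # IP addresses across all networks
}

def check_hipersockets_plan(networks, max_lpars_on_one_network, paths, ip_addresses):
    """Return the names of the limits that the planned configuration exceeds."""
    plan = {
        "networks": networks,
        "lpars_per_network": max_lpars_on_one_network,
        "paths_total": paths,
        "ip_addresses_total": ip_addresses,
    }
    return [name for name, value in plan.items() if value > HIPERSOCKETS_LIMITS[name]]

print(check_hipersockets_plan(networks=8, max_lpars_on_one_network=12,
                              paths=256, ip_addresses=1024))  # [] means within limits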
The HiperSockets internal networks can support the following transport modes:
Layer 2 (link layer)
Layer 3 (network or IP layer)
Traffic can be Internet Protocol Version 4 or Version 6 (IPv4, IPv6) or non-IP. HiperSockets devices are protocol-independent and Layer 3-independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address. This address is designed to allow the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same way as they do for a non-mainframe environment. HiperSockets Layer 2 support is provided by Linux on IBM Z, and by z/VM for guest use.
The most current HiperSockets functions are described in the following sections.
HiperSockets Multiple Write Facility
HiperSockets performance is enhanced to allow for the streaming of bulk data over a HiperSockets link between LPARs. The receiving LPAR can now process a much larger amount of data per I/O interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility, with fewer I/O interrupts, reduces CPU use of the sending and receiving LPAR. The HiperSockets Multiple Write Facility is supported in the z/OS environment.
zIIP-Assisted HiperSockets for large messages
In z/OS, HiperSockets are eligible for zIIP processing. Specifically, the z/OS Communications Server allows the HiperSockets Multiple Write Facility processing for outbound large messages that originate from z/OS to be performed on a zIIP.
zIIP-Assisted HiperSockets can help make highly secure and available HiperSockets networking an even more attractive option. z/OS application workloads that are based on XML, HTTP, SOAP, Java, and traditional file transfer can benefit from zIIP enablement by lowering general-purpose processor use for such TCP/IP traffic.
When the workload is eligible, the TCP/IP HiperSockets device driver layer (write) processing is redirected to a zIIP, which unblocks the sending application.
HiperSockets network traffic analyzer
HiperSockets network traffic analyzer (NTA) is a function that is available in the Licensed Internal Code (LIC) of IBM Z systems. It can simplify problem isolation and resolution by allowing Layer 2 and Layer 3 tracing of HiperSockets network traffic.
HiperSockets NTA allows Linux on IBM Z to control tracing of the internal virtual LAN. It captures records into host memory and storage (file systems) that can be analyzed by system programmers and network administrators. These administrators can use Linux on IBM Z tools to format, edit, and process the trace records.
A customized HiperSockets NTA rule enables authorizing an LPAR to trace messages only from LPARs that can be traced by the NTA on the selected IQD channel.
HiperSockets completion queue
The HiperSockets completion queue function allows synchronous and asynchronous transfer of data between logical partitions. With the asynchronous support, data can be temporarily held until the receiver has buffers available in its inbound queue during high volume situations. This function can provide performance improvement for LPAR-to-LPAR communication, and can be especially helpful in burst situations.
HiperSockets virtual switch bridge support
The z/VM virtual switch is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to also directly communicate with the following systems:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
A HiperSockets channel alone can provide intra-CPC communications only. The HiperSockets Bridge Port allows a virtual switch to connect z/VM guests by using real HiperSockets devices, which provides the ability to communicate with hosts that are external to the CPC. The virtual switch HiperSockets Bridge Port eliminates the need to configure a separate next hop router on the HiperSockets channel to provide connectivity to destinations that are outside of a HiperSockets channel.
3.4.6 Shared Memory Communications functions
The Shared Memory Communications (SMC) capabilities of the z14 ZR1 optimize the communications between applications in server-to-server (SMC-R) or LPAR-to-LPAR (SMC-D) connectivity.
SMC-R provides application transparent use of the RoCE Express feature that reduces the network overhead and latency of data transfers, which effectively offers the benefits of optimized network performance across processors.
The Internal Shared Memory (ISM) virtual PCI function takes advantage of the capabilities of SMC-D. ISM is a virtual PCI network adapter that enables direct access to shared virtual memory, which provides a highly optimized network interconnect for Z intra-system communications. Up to 32 channels for SMC-D traffic can be defined in a z14 ZR1, whereby each channel can be virtualized to a maximum of 255 Function IDs. No other hardware is required for SMC-D.
3.4.7 10GbE RoCE Express features
This section describes the connectivity options that are offered by the Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express features. The following RoCE features can be installed on z14 ZR1:
10GbE RoCE Express2
10GbE RoCE Express (carry forward only)
The 10GbE RoCE Express feature helps reduce the use of CPU resources for applications that use the TCP/IP stack. It might also help to reduce network latency with memory-to-memory transfers that use SMC-R in z/OS V2R1 and later versions. It is transparent to applications, and can be used for server-to-server communication in a multiple Z platform environment.
This feature is in the PCIe+ I/O drawer and available to the z14 ZR1. The 10GbE RoCE Express features include one PCIe adapter with two ports.
The 10GbE RoCE Express feature uses a short reach (SR) laser as the optical transceiver. It also supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 10 GbE switch are supported. Switches that are used with the 10GbE RoCE Express feature must have the Pause frame function enabled, as defined by the IEEE 802.3x standard.
A maximum of four features (any combination of 10GbE RoCE Express2 or 10GbE RoCE Express features) can be installed in the z14 ZR1.
10GbE RoCE Express2
The 10GbE RoCE Express2 feature occupies one slot in the PCIe+ I/O drawer. This feature includes two ports that connect to a 10 Gbps Ethernet LAN through a 62.5 µm or 50 µm multimode fiber optic cable that ends with an LC Duplex connector.
The maximum supported unrepeated distance is 300 m (984 feet) on an OM3 multimode fiber optic cable, and can be increased to 600 m (1968 feet) when a switch is shared across two 10GbE RoCE Express2 features. The 10GbE RoCE Express2 supports 31 Virtual Functions (VFs) per physical port, for a total of 62 per adapter.
10GbE RoCE Express (carry forward only)
The 10GbE RoCE Express feature occupies one slot in the PCIe+ I/O drawer. This feature includes two ports that connect to a 10 Gbps Ethernet LAN through a 62.5 µm or 50 µm multimode fiber optic cable that ends with an LC Duplex connector. The maximum supported unrepeated distance is 300 m (984 feet) on an OM3 multimode fiber optic cable, and can be increased to 600 m (1968 feet) when sharing a switch across two RoCE Express features. The RoCE Express supports 31 VFs per feature.
3.5 Compression options
Two types of compression options are available with the z14 ZR1: A standard internal compression coprocessor that is tightly connected to each processor unit, and an external native PCIe feature.
3.5.1 Compression Coprocessor
The Compression Coprocessor (CMPSC) is a well-known feature that works with the processor units in the Z platform. This coprocessor works with a proprietary compression format and is used for many types of z/OS data.
3.5.2 zEnterprise Data Compression
zEnterprise Data Compression (zEDC) Express is an optional native PCIe feature that is available in the z14 ZR1. It provides hardware-based acceleration for data compression and decompression for the enterprise, which can help to improve cross platform data exchange, reduce CPU consumption, and save disk space.
A minimum of one feature can be ordered and a maximum of 16 can be installed on the system in the PCIe+ I/O drawer. Up to two zEDC Express features per domain can be installed. Each feature includes one PCIe adapter/compression coprocessor, which implements compression as defined by RFC 1951 (DEFLATE). A zEDC Express feature can be shared by up to 15 LPARs.
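zEDC Express implements the same DEFLATE format (RFC 1951) that is produced by common software libraries such as zlib. The following Python sketch compresses a buffer in software to illustrate the data format only; it does not show the zEDC hardware offload path:

# Software illustration of the DEFLATE (RFC 1951) format that zEDC Express
# accelerates in hardware. This runs the compression on the CPU with Python's
# zlib module; it demonstrates the data format only, not the zEDC offload.
import zlib

data = b"IBM z14 ZR1 " * 1000   # highly repetitive sample data compresses well

# wbits=-15 produces a raw DEFLATE stream (RFC 1951) without the zlib wrapper.
compressor = zlib.compressobj(level=6, wbits=-15)
deflated = compressor.compress(data) + compressor.flush()

# Decompress with the matching raw-DEFLATE setting to verify the round trip.
inflated = zlib.decompress(deflated, wbits=-15)
assert inflated == data
print(f"{len(data)} bytes -> {len(deflated)} bytes")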
3.6 Cryptographic features
The z14 ZR1 provides cryptographic functions that, from an application program perspective, can be categorized in the following groups:
Synchronous cryptographic functions, which are provided by the CP Assist for Cryptographic Function (CPACF) or the Crypto Express features when defined as an accelerator.
Asynchronous cryptographic functions, provided by the Crypto Express features.
3.6.1 CP Assist for Cryptographic Function
CPACF offers a set of symmetric cryptographic functions for high-performance encryption and decryption with clear key operations for SSL/TLS, VPN, and data-storing applications that do not require FIPS 140-2 Level 4 security. The CPACF is an optional feature that is integrated with the compression unit in the coprocessor in the z14 ZR1 microprocessor core.
The CPACF protected key is a function that facilitates the continued privacy of cryptographic key material while keeping the wanted high performance. CPACF ensures that key material is not visible to applications or operating systems during encryption operations. CPACF protected key provides substantial throughput improvements for large-volume data encryption and low latency for encryption of small blocks of data.
The cryptographic assist includes support for the following functions:
Advanced Encryption Standard (AES) for 128-bit, 192-bit, and 256-bit keys
Data Encryption Standard (DES) data encryption and decryption with single, double, or triple length keys.
Pseudo-random number generation (PRNG)
True-random number generator (TRNG)
Message authentication code (MAC)
Hashing algorithms: SHA-1, SHA-2, and SHA-3
SHA-1, SHA-2, and SHA-3 support is enabled on all Z platforms and does not require the CPACF enablement feature. The CPACF functions are supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on IBM Z.
For more information about the use of this function, see “Pervasive encryption” on page 83.
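The following Python sketch names some of the hashing algorithms listed above by using the standard hashlib module. It is illustrative only; whether a particular invocation is actually executed on CPACF depends on the operating system and the cryptographic libraries in use:

# SHA-2 and SHA-3 digests of a sample message, shown with Python's hashlib.
import hashlib

message = b"pervasive encryption sample"

print(hashlib.sha256(message).hexdigest())    # SHA-2 family (256-bit digest)
print(hashlib.sha512(message).hexdigest())    # SHA-2 family (512-bit digest)
print(hashlib.sha3_256(message).hexdigest())  # SHA-3 family (256-bit digest)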
3.6.2 Crypto Express6S
The Crypto Express6S represents the newest generation of the Peripheral Component Interconnect Express (PCIe) cryptographic coprocessors, which are an optional feature that is available on the z14 ZR1. These coprocessors are Hardware Security Modules (HSMs) that provide high-security cryptographic processing as required by banking and other industries.
This feature provides a secure programming and hardware environment wherein crypto processes are performed. Each cryptographic coprocessor includes general-purpose processors, non-volatile storage, and specialized cryptographic electronics, which are all contained within a tamper-sensing and tamper-responsive enclosure that eliminates all keys and sensitive data on any attempt to tamper with the device. The security features of the HSM are designed to meet the requirements of FIPS 140-2, Level 4, which is the highest security level defined.
The Crypto Express6S includes one PCIe adapter per feature. For availability reasons, a minimum of two features is required. Up to 16 Crypto Express6S features are supported. The Crypto Express6S feature occupies one I/O slot in a PCIe+ I/O drawer.
Each adapter can be configured as a Secure IBM CCA coprocessor, a Secure IBM Enterprise PKCS #11 (EP11) coprocessor, or as an accelerator.
Crypto Express6S provides domain support for up to 40 logical partitions.
The accelerator function is designed for maximum-speed Secure Sockets Layer and Transport Layer Security (SSL/TLS) acceleration, rather than for specialized financial applications for secure, long-term storage of keys or secrets. The Crypto Express6S can also be configured as one of the following configurations:
The Secure IBM CCA coprocessor includes secure key functions with emphasis on the specialized functions that are required for banking and payment card systems. It is optionally programmable to add custom functions and algorithms by using User Defined Extensions (UDX).
A new mode, called Payment Card Industry (PCI) PIN Transaction Security (PTS) Hardware Security Module (HSM) (PCI-HSM), is available exclusively for Crypto Express6S in CCA mode. PCI-HSM mode simplifies compliance with PCI requirements for hardware security modules.
The Secure IBM Enterprise PKCS #11 (EP11) coprocessor implements an industry-standardized set of services that adheres to the PKCS #11 specification v2.20 and more recent amendments. It was designed for extended FIPS and Common Criteria evaluations to meet industry requirements.
This cryptographic coprocessor mode introduced the PKCS #11 secure key function.
 
TKE feature: The Trusted Key Entry (TKE) Workstation feature is required for supporting the administration of the Crypto Express6S when configured as an Enterprise PKCS #11 coprocessor or managing the new CCA mode PCI-HSM.
When the Crypto Express6S PCI Express adapter is configured as a secure IBM CCA co-processor, it still provides accelerator functions. However, up to 3x better performance for those functions can be achieved if the Crypto Express6S PCI Express adapter is configured as an accelerator.
3.6.3 Crypto Express5S (carry forward only)
The Crypto Express5S has one PCIe adapter per feature. For availability reasons, a minimum of two features is required. Up to 16 Crypto Express5S features are supported. The Crypto Express5S feature occupies one I/O slot in a PCIe+ I/O drawer.
Each adapter can be configured as a Secure IBM CCA coprocessor, a Secure IBM Enterprise PKCS #11 (EP11) coprocessor, or as an accelerator.
Crypto Express5S provides domain support for up to 40 logical partitions on IBM z14 ZR1.
The Crypto Express5S feature supports all the functions of the Crypto Express6S, except the PCI-HSM standard.
3.7 Coupling and clustering
Coupling connectivity for Parallel Sysplex on the z14 ZR1 uses CE LR and Integrated Coupling Adapter Short Reach (ICA SR) links. The ICA SR is designed to support distances up to 150 m (492 feet). The CE LR supports longer distances between systems, up to 10 km (6.2 miles) unrepeated.
CE LR and ICA SR carry all of the z/OS-to-CF communication, CF-to-CF traffic, and Server Time Protocol (STP) messages over high-speed fiber optic connections at short (up to 150 m, or 492 feet) or long (up to 10 km, or 6.2 miles, unrepeated) distances.
 
Important: zEC12 or zBC12 can coexist in the same parallel sysplex with z14 ZR1 only if the CPC that is hosting the CFs includes coupling connectivity to the zEC12 or zBC12 and z14 ZR1. z14 ZR1 does not support direct coupling connectivity to zEC12, zBC12, or older Z platforms.
For more information about coupling links technologies, see the Coupling Facility Configuration Options white paper.
 
Note: The z14 ZR1 does not support InfiniBand coupling connectivity. For more information, see the Statement Of Direction from Hardware Announcement 117-031 - fulfilled.
3.7.1 Coupling Express Long Reach
The CE LR is a two-port PCIe native adapter that is used for long-distance coupling connectivity. CE LR uses the CL5 coupling channel type. The CE LR feature also uses PCIe Gen3 technology and is hosted in a PCIe+ I/O drawer.
The feature supports communication at unrepeated distances up to 10 km (6.2 miles) by using 9 µm single-mode fiber optic cables and repeated distances up to 100 km (62 miles) by using IBM Z qualified DWDM vendor equipment. It supports up to 4 CHPIDs per port and 8 or 32 subchannels (devices) per CHPID. The coupling links can be defined as shared between images within a CSS or spanned across multiple CSSs in a Z system.
3.7.2 Integrated Coupling Adapter Short Reach
The Integrated Coupling Adapter Short Reach (ICA SR) is a two-port fanout in the CPC drawer. It is used for short distance coupling connectivity and includes the coupling channel type CS5. The ICA SR uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for coupling.
The ICA SR supports cable length of up to 150 m (492 feet) and supports a link data rate of 8 GBps. It also supports up to four CHPIDs per port and eight subchannels (devices) per CHPID. The coupling links can be defined as shared between images within a CSS. They can also be spanned across multiple CSSs in a Z system.
3.7.3 Internal coupling
Internal coupling (IC) links are used for internal communication between LPARs on the same system that are running coupling facilities (CF) and z/OS images. The connection is emulated in Licensed Internal Code (LIC) and provides for fast and secure memory-to-memory communications between LPARs within a single system. No physical cabling is required.
3.7.4 Coupling Facility Control Code Level 22
Various levels of Coupling Facility Control Code (CFCC) are available. CFCC Level 22 is available on z14 ZR1 with driver level 32, and includes the following enhancements:
Support for up to 170 ICF processors.
The maximum number of logical processors in a Coupling Facility Partition remains at 16.
Scalability enhancement
CF work management and dispatcher changes to allow improved efficiency as processors are added to scale up the capacity of a CF image:
 – Non-prioritized (FIFO-based) work queues
 – Simplified system-managed duplexing protocol
CF list notification enhancements
CF list structures support three notification mechanisms to inform exploiters about the status of shared objects in the CF:
 – List (used by many exploiters including XCF Signaling).
 – Keyrange (used predominantly by WebSphere MQ shared queues).
 – Sublist notification (used predominantly by IMS shared queues).
The following enhancements to these notification mechanisms are provided:
 – Immediate/delayed round-robin notification for list and keyrange notifications.
 – More notifications to be sent to users as extra work elements are placed onto list or keyrange (controlled by using the API).
 – List full/not-full notifications.
CF structure size changes are expected to increase when moving from CFCC Level 21 (or earlier) to CFCC Level 22. Review the CF LPAR size by using the following tools:
The CFSizer tool is web-based and is useful when a workload is changed or introduced.
The Sizer Utility, which is an authorized z/OS program download, is useful when upgrading a CF.
3.8 Virtual Flash Memory
IBM Virtual Flash Memory (VFM) is the replacement for the Flash Express features that were available on the z13s and zBC12. On z14 ZR1, the VFM feature (0614) can be ordered in 512 GB increments up to 2 TB in total.
VFM is designed to help improve availability and handling of paging workload spikes when z/OS V2.1, V2.2, or V2.3 is run. With this support, z/OS is designed to help improve system availability and responsiveness by using VFM across transitional workload events, such as market openings, and diagnostic data collection. z/OS is also designed to help improve processor performance by supporting middleware exploitation of pageable large (1 MB) pages.
VFM can also be used in CF images to provide extended capacity and availability for workloads that use WebSphere MQ Shared Queues structures. The use of VFM can help availability by reducing latency from paging delays that can occur at the start of the workday or during other transitional periods. It is also designed to eliminate delays that can occur when diagnostic data is collected during failures.
Therefore, VFM can help meet most demanding service level agreements and compete more effectively. VFM is easy to configure and provides rapid time to value.
No application changes are required to migrate from IBM Flash Express to VFM.
3.9 Server Time Protocol
Each system must have an accurate time source to maintain a time-of-day value. Logical partitions use their system’s time. When system images participate in a Sysplex, coordinating the time across all system images in the sysplex is critical to its operation.
 
Important: If a z14 ZR1 plays a CTN role (PTS/BTS/Arbiter), the other CTN role playing Z platforms must include direct coupling connectivity to the z14 ZR1. The z14 ZR1 does not support direct coupling connectivity to zEC12, zBC12, or older Z platforms.
The z14 ZR1 supports the Server Time Protocol (STP) and can participate in a STP-only Coordinated Timing Network (CTN). A CTN is a collection of Z platforms that are time-synchronized to a time value called Coordinated Server Time (CST). Each CPC to be configured in a CTN must be STP-enabled. STP is intended for CPCs that are configured to participate in a Parallel Sysplex or CPCs that are not in a Parallel Sysplex, but must be time-synchronized.
STP is a message-based protocol in which timekeeping information is passed over coupling links between servers. The timekeeping information is transmitted over externally defined coupling links. The STP feature is the supported method for maintaining time synchronization between the z14 ZR1 and CF in sysplex environments.
STP is implemented in LIC as a system-wide facility of the z14 ZR1 and other Z platforms. STP presents a single view of time to PR/SM and provides the capability for multiple CPCs to maintain time synchronization with each other. The z14 ZR1 is enabled for STP by installing the STP feature code. Extra configuration is required for a z14 ZR1 to become a member of a CTN.
STP supports a multi-site timing network of up to 100 km (62 miles) over fiber optic cabling, without requiring an intermediate site. This protocol allows a Parallel Sysplex to span these distances and reduces the cross-site connectivity that is required for a multi-site Parallel Sysplex.
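To put the 100 km (62 miles) limit in perspective, signal propagation over single-mode fiber adds roughly 5 µs per kilometer of one-way delay. That figure is a common engineering approximation based on the refractive index of optical fiber, not an IBM specification; the following sketch uses it to estimate the delay for a maximum-distance link:

```python
# Rough estimate of one-way fiber propagation delay. The ~5 us/km figure
# is a common approximation for single-mode fiber, not an IBM value.
DELAY_US_PER_KM = 5.0


def one_way_delay_us(distance_km: float) -> float:
    """Approximate one-way propagation delay over fiber, in microseconds."""
    return distance_km * DELAY_US_PER_KM


print(f"{one_way_delay_us(100):.0f} us one-way at 100 km")  # about 500 us
```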
Network Time Protocol client support
The use of Network Time Protocol (NTP) servers as an External Time Source (ETS) usually fulfills a requirement for a time source or common time reference across heterogeneous platforms and for providing a higher time accuracy.
NTP client support is available in the Support Element (SE) code of the z14 ZR1. The code interfaces with the NTP servers. This interaction allows an NTP server to become the single-time source for z14 ZR1 and for other servers that have NTP clients.
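For readers unfamiliar with how an NTP client obtains time from a server, the following minimal SNTP query sketch illustrates the basic exchange. It is a generic protocol example only; it says nothing about the SE's internal NTP client implementation, and the server address shown is a placeholder.

```python
# Minimal SNTP (RFC 4330) query: a generic illustration of how an NTP
# client requests time from an NTP server. Not the SE implementation.
import socket
import struct

NTP_SERVER = "pool.ntp.org"    # placeholder; use your site's NTP server
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

# First byte 0x1B = LI 0, version 3, mode 3 (client); rest of packet zeroed
request = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(512)

# The transmit timestamp (seconds field) starts at byte offset 40
transmit_seconds = struct.unpack("!I", response[40:44])[0]
unix_time = transmit_seconds - NTP_EPOCH_OFFSET
print("Server time (UNIX seconds):", unix_time)
```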
Pulse per second support
Two oscillator cards (OSCs), which are included as a standard feature of the z14 ZR1, provide a dual-path interface for the pulse per second (PPS) signal. The cards contain a BNC connector for PPS attachment at the rear of the CPC drawer. The redundant design allows continuous operation if one card fails and allows concurrent card maintenance.
STP tracks the highly stable, accurate PPS signal from the NTP server and maintains an accuracy of 10 µs as measured at the PPS input of the z14 ZR1.
If STP uses an NTP server without PPS, a time accuracy of 100 ms to the ETS is maintained. A cable connection from the PPS port to the PPS output of an NTP server is required when the z14 ZR1 is configured for the use of NTP with PPS as the ETS for time synchronization.
For more information about STP, see the following publications:
3.10 Hardware Management Console functions
The HMC and SE are appliances that provide Z hardware platform management. Hardware platform management covers a complex set of setup, configuration, operation, monitoring, and service management tasks, and services that are essential to the use of the Z platform.
When tasks are performed on the HMC, the commands are sent to one or more SEs, which issue commands to their Z platforms.
HMC/SE Version 2.14.0 or later is required for the z14 ZR1. For more information about these HMC functions and capabilities, see the IBM z14 ZR1 Technical Guide, SG24-8651.
3.10.1 HMC key enhancements for z14 ZR1
The HMC application includes the following enhancements:
The user interface is redesigned and simplified.
Tasks and windows are updated to support configuring and managing STP, IBM Virtual Flash Memory, IBM Secure Service Container, and 10GbE RoCE Express features.
OSA/SF is available on the HMC, with more monitoring and diagnostic capabilities.
A new set of Firmware Integrity Monitoring features is implemented for tamper protection, including a BIOS secure boot function and signature verification for HMC firmware.
Starting with Version 2.13.1, HMC tasks no longer use Java applet-based implementations. Java applets were previously used in the Operating System Messages task. Version 2.14.0 implements a new IOCDS Source option in the input/output configuration tasks.
FTP, FTPS, and SFTP are supported. All FTP connections that originate from the Support Element are routed through the HMC, which performs the FTP operation on behalf of the SE.
Secure console-to-console communication is established between HMC consoles, which introduces new security standards for internal communication.
Functional enhancements are available in the SNMP and BCPii API interfaces, such as queries to Virtual Flash Memory or to Secure Service Container. Security of the BCPii interface is also enhanced.
The Remote Browser IP Address Limiting function is implemented for security reasons. This function allows you to specify valid remote browser IP addresses or a valid mask for a group of IP addresses (see the sketch after this list). A global setting to enable or disable remote access is still available.
Multi-factor authentication is implemented for the HMC/SE/TKE. The feature enables you to log in with a higher security level by using two factors: the first is the traditional login and password; the second is a passcode that is sent to your smartphone.
The HMC Global OSA/SF now provides a global view of all OSA PCHIDs and the monitoring and diagnostic information that was available in the Query Host command.
The PCI HSM specification defines a set of logical and physical security standards for HSMs that are specific to the needs of the payments industry.
Compliance mode for CCA PCI-HSM, EP11, and other certificates is now displayed on the Support Element. Setup and administration tasks are done on the TKE.
A mobile application interface is provided for the HMC 2.14.0 or later, which includes security technology. The mobile application interface allows HMC users to securely monitor and manage systems from anywhere.
iOS and Android HMC applications can provide system and partition views, the ability to monitor status and hardware and operating system messages, and the ability to receive mobile push notifications from the HMC by using the IBM Z Remote Support Facility (zRSF) connection. A full set of granular security controls is provided, from the HMC console down to the individual user, including monitor-only access, a separate mobile app password, and multi-factor authentication. This mobile interface is optional and disabled by default.
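The concept behind the Remote Browser IP Address Limiting function described above can be illustrated with a short sketch that checks a remote browser address against a configured set of allowed addresses and masks. This is a conceptual example only, not the HMC implementation or its configuration format; the addresses shown are documentation examples.

```python
# Conceptual illustration of limiting remote browser access by IP address
# or address mask; not the HMC implementation. Addresses are examples.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.10/32"),    # a single permitted workstation
    ipaddress.ip_network("198.51.100.0/24"),  # a permitted admin subnet
]


def browser_allowed(remote_ip: str) -> bool:
    """Return True if the remote browser address falls in an allowed range."""
    address = ipaddress.ip_address(remote_ip)
    return any(address in network for network in ALLOWED_NETWORKS)


print(browser_allowed("198.51.100.42"))  # True
print(browser_allowed("203.0.113.5"))    # False
```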
For more information about the key capabilities and enhancements of the HMC, see IBM z14 ZR1 Technical Guide, SG24-8651.
3.10.2 Hardware Management Console and Support Element
The HMC and SE appliances together provide hardware platform management for IBM Z. Hardware platform management covers a complex set of configuration, operation, monitoring, service management tasks, and other services that are essential to the use of the hardware platform product.
With z14 ZR1, the HMC can be a stand-alone desktop computer or an optional 1U rack-mounted computer.
The z14 ZR1 is supplied with a pair of integrated 1U SEs. The primary SE is always active; the other is the alternate SE. Power for the SEs is supplied by the rack PDUs. Each Support Element features dual power supply units in a 1+1 redundant configuration. The SEs are connected to the external customer switches for network connectivity with the CPC and the HMCs.
The SEs and HMCs are closed systems, and no other applications can be installed on them.
The HMCs and SEs of the system are attached to a Customer LAN. An HMC communicates with one or more Z platforms, as shown in Figure 3-1. When tasks are performed on the HMC, the commands are sent to one or more SEs, which then issue commands to their CPCs.
Figure 3-1 HMC and SE connectivity
The HMC Remote Support Facility (RSF) provides communication with the IBM support network for hardware problem reporting and service.
 
Note: An RSF connection through a modem is not supported on the z14 ZR1 HMC. An internet connection to IBM is required to enable hardware problem reporting and service.
 

1 The IBM z14 Model ZR1 does not support InfiniBand features
2 Distances are valid for OM3 cabling. For more information about available options, see Table 3-5 on page 35.
3 The 10GbE RoCE features and the ISM adapters are identified by a hexadecimal Function Identifier (FID) with a range of 00 - FF.
4 Virtual Function ID is defined when PCIe hardware or the ISM is shared between LPARs.
5 Federal Information Processing Standards (FIPS) 140-2 Security Requirements for Cryptographic Modules
6 All coupling links can be used to carry STP timekeeping information.
7 z14 ZR1 supports up to 30 ICFs.