Central processor complex I/O system structure
This chapter describes the I/O system structure and connectivity options that are available on the IBM z14™ servers.
 
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906) unless otherwise specified.
This chapter includes the following topics:
4.1, “Introduction to I/O infrastructure”
4.2, “I/O system overview”
4.3, “PCIe I/O drawer”
4.4, “PCIe I/O drawer offerings”
4.5, “Fanouts”
4.6, “I/O features (cards)”
4.7, “Connectivity”
4.1 Introduction to I/O infrastructure
The IBM z14™ servers support PCIe I/O drawers only. I/O cages and I/O drawers are no longer supported.
4.1.1 I/O infrastructure
IBM extends the use of industry standards on the IBM Z servers by offering a Peripheral Component Interconnect Express Generation 3 (PCIe Gen3) I/O infrastructure. The PCIe I/O infrastructure that is provided by the central processor complex (CPC) improves I/O capability and flexibility, while allowing for the future integration of PCIe adapters and accelerators.
The PCIe I/O infrastructure in z14 consists of the following components:
PCIe fanouts that support 16 GBps I/O bus interconnection for CPC drawer connectivity to the PCIe I/O drawers.
The 7U, 32-slot, and 4-domain PCIe I/O drawer for PCIe I/O features.
The z14 PCIe I/O infrastructure provides the following benefits:
The bus that connects the CPC drawer to an I/O domain in the PCIe I/O drawer has a bandwidth of 16 GBps.
The PCIe I/O drawer doubles the number of I/O ports compared to an I/O drawer (z13 or earlier only). Up to 64 channels (32 PCIe I/O features) are supported in the PCIe I/O drawer.
Granularity for the storage area network (SAN) and the local area network (LAN):
 – The FICON Express16S+ feature has two channels per feature for Fibre Channel connection (FICON), High Performance FICON on Z (zHPF), and Fibre Channel Protocol (FCP) storage area networks.
 – The Open Systems Adapter (OSA)-Express6S GbE and the OSA-Express6S 1000BASE-T features have two ports each (LAN connectivity), while the OSA-Express7S 25GbE and the OSA-Express6S 10 GbE features have one port each (LAN connectivity).
Native PCIe features (plugged into the PCIe I/O drawer):
 – IBM zHyperLink Express (new for z14)
 – 25GbE Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express2 (new)
 – 10 GbE RoCE Express2 (introduced with z14)
 – Coupling Express Long Reach (CE LR) (new for z14, z13, and z13s)
 – zEnterprise Data Compression (zEDC) Express
 – 10GbE RoCE Express (carry forward only)
 – Crypto Express6S (introduced with z14)
 – Crypto Express5S (carry forward only)
4.1.2 PCIe Generation 3
PCIe Generation 3 uses 128b/130b encoding for data transmission. This encoding reduces the encoding overhead to approximately 1.54%, compared with PCIe Generation 2, which has an encoding overhead of 20% because of its 8b/10b encoding.
The PCIe standard uses a low-voltage differential serial bus. Two wires are used for signal transmission, and a total of four wires (two for transmit and two for receive) become a lane of a PCIe link, which is full duplex. Multiple lanes can be aggregated into a larger link width. PCIe supports link widths of one lane (x1), x2, x4, x8, x12, x16, and x32.
The data transmission rate of a PCIe link is determined by the link width (numbers of lanes), the signaling rate of each lane, and the signal encoding rule. The signaling rate of a PCIe Generation 3 lane is 8 gigatransfers per second (GTps), which means that nearly 8 gigabits are transmitted per second (Gbps).
A PCIe Gen3 x16 link has the following data transmission rates:
Data transmission rate per lane: 8 Gbps × 128/130 (encoding) = 7.87 Gbps = 984.6 MBps
Data transmission rate per link: 984.6 MBps × 16 (lanes) = 15.75 GBps
Because the PCIe link operates in full-duplex mode, the data throughput of a PCIe Gen3 x16 link is 31.5 GBps (15.75 GBps in each direction).
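The same arithmetic can be scripted. The following Python sketch (illustrative only) reproduces the per-lane and per-link figures that are quoted above:

# Illustrative recalculation of the PCIe Gen3 figures quoted above.
GT_PER_LANE = 8.0            # PCIe Gen3 signaling rate: 8 GT/s per lane
ENCODING = 128.0 / 130.0     # 128b/130b encoding efficiency (~1.54% overhead)
LANES = 16                   # x16 link, as used for the PCIe I/O drawer fanout

overhead_pct = (1 - ENCODING) * 100                 # ~1.54%
lane_gbps = GT_PER_LANE * ENCODING                  # ~7.88 Gbps per lane
lane_mbps = lane_gbps * 1000 / 8                    # ~984.6 MBps per lane
link_gbps_one_way = lane_mbps * LANES / 1000        # ~15.75 GBps per direction
link_gbps_full_duplex = 2 * link_gbps_one_way       # ~31.5 GBps in both directions

print(f"encoding overhead:       {overhead_pct:.2f}%")
print(f"per lane:                {lane_mbps:.1f} MBps")
print(f"x16 link, one direction: {link_gbps_one_way:.2f} GBps")
print(f"x16 link, full duplex:   {link_gbps_full_duplex:.1f} GBps")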
 
Link performance: The link speeds do not represent the actual performance of the link. The actual performance depends on many factors that include latency through the adapters, cable lengths, and the type of workload.
PCIe Gen3 x16 links are used in IBM z14™ servers for driving the PCIe I/O drawers and for coupling links for CPC-to-CPC communication.
 
Note: Unless specified otherwise, when PCIe is mentioned in the remaining sections of this chapter, it refers to PCIe Generation 3.
4.2 I/O system overview
The z14 I/O characteristics and supported features are described in this section.
4.2.1 Characteristics
The z14 I/O subsystem is designed to provide great flexibility, high availability, and the following excellent performance characteristics:
High bandwidth
 
Link performance: The link speeds do not represent the actual performance of the link. The actual performance depends on many factors that include latency through the adapters, cable lengths, and the type of workload.
IBM z14™ servers use PCIe as an internal interconnect protocol to drive PCIe I/O drawers and CPC to CPC connections. The I/O bus infrastructure data rate increases up to 160 GBps per drawer (10 PCIe Gen3 Fanout slots). For more information about coupling link connectivity, see 4.7.4, “Parallel Sysplex connectivity” on page 184.
Connectivity options
 – IBM z14™ servers can be connected to an extensive range of interfaces, such as FICON/FCP for SAN connectivity; 10 Gigabit Ethernet, Gigabit Ethernet, and 1000BASE-T Ethernet for LAN connectivity; and zHyperLink Express for storage connectivity (lower latency compared to FICON).
 – For CPC to CPC connections, IBM z14™ servers use Integrated Coupling Adapter (ICA SR), CE LR, or Parallel Sysplex InfiniBand (IFB).
 – The 25GbE RoCE Express2, 10GbE RoCE Express2, and 10GbE RoCE Express features provide high-speed memory-to-memory data exchange to a remote CPC by using the Shared Memory Communications over RDMA (SMC-R) protocol for TCP (socket-based) communications.
Concurrent I/O upgrade
You can concurrently add I/O features to IBM z14™ servers if unused I/O slot positions are available.
Concurrent PCIe I/O drawer upgrade
More PCIe I/O drawers can be installed concurrently if free frame slots for the PCIe I/O drawers are available.
Dynamic I/O configuration
Dynamic I/O configuration supports the dynamic addition, removal, or modification of the channel path, control units, and I/O devices without a planned outage.
Pluggable optics:
 – The FICON Express16S+, FICON Express16S, FICON Express8S, OSA-Express7S, OSA-Express6S, OSA-Express5S, RoCE Express2, and RoCE Express features have Small Form-Factor Pluggable (SFP) optics. These optics allow each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.
 – The zHyperLink Express feature uses a cable with an MTP connector; the cable attaches to a CXP connection on the adapter. The CXP optics are provided with the adapter.
Concurrent I/O card maintenance
Every I/O card that is plugged in an I/O drawer or PCIe I/O drawer supports concurrent card replacement during a repair action.
4.2.2 Supported I/O features
The following I/O features are supported:
Up to 320 FICON Express16S+ channels (up to 160 on M01)
Up to 320 FICON Express16S channels (up to 160 on M01)
Up to 320 FICON Express8S channels (up to 160 on M01)
Up to 48 OSA-Express7S 25GbE SR ports
Up to 96 OSA-Express6S ports
Up to 96 OSA-Express5S ports
Up to 16 zEDC Express features
Up to eight 25GbE RoCE Express2 features
Up to eight 10GbE RoCE Express2 features
Up to eight 10GbE RoCE Express features
Up to 16 zHyperLink Express features
Up to 40 ICA SR features with up to 80 coupling links
Up to 32 CE LR features with up to 64 coupling links
Up to 16 InfiniBand fanouts (features) with one of the following options:
 – Up to 32 12x InfiniBand coupling links (HCA3-O fanouts, two ports per feature)
 – Up to 64 1x InfiniBand coupling links (HCA3-O LR fanouts, four ports per feature)
 
Notes: The maximum number of coupling CHPIDs on an IBM z14™ server is 256, which is a combination of the following ports (not all combinations are possible; they are subject to I/O configuration options; for an illustrative check, see the sketch after these notes):
Up to 80 ICA SR ports
Up to 64 CE LR ports
Up to 32 HCA3-O 12x IFB ports
Up to 64 HCA3-O LR 1x IFB ports
IBM Virtual Flash Memory replaces the IBM zFlash Express feature on IBM z14™ servers.
The maximum combined number of RoCE features that can be installed is 8; that is, any combination of 25GbE RoCE Express2, 10GbE RoCE Express2, and 10GbE RoCE Express (carry forward only) features.
Regarding SMC-R, the 25GbE RoCE Express2 feature should not be configured in the same SMC-R link group as 10GbE RoCE features.
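As an illustration of these coupling limits (a minimal sketch; the per-type maximums are taken from the notes above, and the helper name is hypothetical), the following Python fragment checks a planned configuration against the per-type port maximums and the overall 256-CHPID limit:

# Hypothetical helper that validates a coupling plan against the z14 maximums
# quoted in the notes above (per-type port counts and 256 coupling CHPIDs).
MAX_PORTS = {"ICA SR": 80, "CE LR": 64, "HCA3-O 12x IFB": 32, "HCA3-O LR 1x IFB": 64}
MAX_COUPLING_CHPIDS = 256

def check_coupling_plan(ports_per_type, total_coupling_chpids):
    """Raise ValueError if the plan exceeds any of the quoted limits."""
    for link_type, ports in ports_per_type.items():
        if ports > MAX_PORTS[link_type]:
            raise ValueError(f"{link_type}: {ports} ports exceeds the maximum of {MAX_PORTS[link_type]}")
    if total_coupling_chpids > MAX_COUPLING_CHPIDS:
        raise ValueError(f"{total_coupling_chpids} coupling CHPIDs exceeds the maximum of {MAX_COUPLING_CHPIDS}")

# Example: legal port counts can still exceed the CHPID limit when several
# CHPIDs are defined per port (4 per port is assumed here for illustration).
try:
    check_coupling_plan({"ICA SR": 40, "CE LR": 32}, total_coupling_chpids=(40 + 32) * 4)
except ValueError as err:
    print(err)   # 288 coupling CHPIDs exceeds the maximum of 256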
4.3 PCIe I/O drawer
The PCIe I/O drawers (see Figure 4-1) are attached to the CPC drawer through a PCIe bus and use PCIe as the infrastructure bus within the drawer. The PCIe I/O bus infrastructure data rate is up to 16 GBps.
Figure 4-1 PCIe I/O drawer
PCIe switch application-specific integrated circuits (ASICs) are used to fan out the host bus from the processor drawers to the individual I/O features. A maximum of 32 PCIe I/O features (up to 64 channels) per drawer is supported.
The PCIe I/O drawer is a two-sided drawer (I/O cards on both sides, front and back) that is 7U high. The drawer contains the 32 I/O slots for PCIe features, four switch cards (two in the front, two in the back), two distributed converter assemblies (DCAs) to provide redundant power, and two air-moving devices (AMDs) for redundant cooling, as shown in Figure 4-1.
The PCIe I/O drawer slot numbers are shown in Figure 4-2.
Figure 4-2 PCIe I/O drawer slot numbers
The I/O structure in a z14 CPC is shown in Figure 4-3 on page 152. The PCIe switch card provides the fanout from the high-speed x16 PCIe host bus to eight individual card slots. The PCIe switch card is connected to the CPC drawer through a single x16 PCIe Gen3 bus from a PCIe fanout card.
In the PCIe I/O drawer, the eight I/O feature cards that directly attach to the switch card constitute an I/O domain. The PCIe I/O drawer supports concurrent add and replacement of I/O features, which enables you to increase I/O capability as needed without having to plan ahead.
Figure 4-3 z14 I/O connectivity
The PCIe I/O slots are organized into four hardware I/O domains. Each I/O domain supports up to eight features and is driven through a PCIe switch card. Two PCIe switch cards always provide a backup path for each other through the passive connection in the PCIe I/O drawer backplane. During a PCIe fanout card or cable failure, 16 I/O cards in two domains can be driven through a single PCIe switch card.
A switch card in the front is connected to a switch card in the rear through the PCIe I/O drawer board (through the Redundant I/O Interconnect, or RII). In addition, switch cards in the same PCIe I/O drawer are connected to PCIe fanouts across nodes and CPC drawers for higher availability.
The RII design provides a failover capability during a PCIe fanout card failure or CPC drawer upgrade. All four domains in one of these PCIe I/O drawers can be activated with four fanouts. The flexible service processors (FSPs) are used for system control.
The domains and their related I/O slots are shown in Figure 4-4.
Figure 4-4 PCIe I/O drawer with 32 PCIe slots and 4 I/O domains
Each I/O domain supports up to eight features (FICON, OSA, Crypto, and so on). All I/O cards connect to the PCIe switch card through the backplane board. The I/O domains and slots are listed in Table 4-1.
Table 4-1 I/O domains of PCIe I/O drawer
Domain   I/O slot in domain
0        01, 02, 03, 04, 06, 07, 08, and 09
1        30, 31, 32, 33, 35, 36, 37, and 38
2        11, 12, 13, 14, 16, 17, 18, and 19
3        20, 21, 22, 23, 25, 26, 27, and 28
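The domain-to-slot relationship in Table 4-1 can also be captured as a simple lookup. The following Python dictionary (illustrative only) mirrors the table and derives the reverse slot-to-domain mapping:

# Table 4-1 expressed as a lookup: I/O domain -> PCIe I/O drawer slots.
DOMAIN_SLOTS = {
    0: [1, 2, 3, 4, 6, 7, 8, 9],
    1: [30, 31, 32, 33, 35, 36, 37, 38],
    2: [11, 12, 13, 14, 16, 17, 18, 19],
    3: [20, 21, 22, 23, 25, 26, 27, 28],
}

# Reverse mapping: slot number -> I/O domain.
SLOT_DOMAIN = {slot: domain for domain, slots in DOMAIN_SLOTS.items() for slot in slots}

print(SLOT_DOMAIN[26])    # slot 26 belongs to domain 3
print(len(SLOT_DOMAIN))   # 32 feature slots per drawer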
4.4 PCIe I/O drawer offerings
A maximum of five PCIe I/O drawers can be installed that support up to 160 PCIe I/O features.
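The maximum feature and channel counts follow directly from this drawer geometry, as the following quick calculation (illustrative only) shows:

# Maximum I/O capacity that follows from the drawer geometry described above.
MAX_PCIE_IO_DRAWERS = 5
SLOTS_PER_DRAWER = 32        # PCIe I/O feature slots per drawer
PORTS_PER_FEATURE = 2        # for example, FICON Express16S+ has two channels

max_features = MAX_PCIE_IO_DRAWERS * SLOTS_PER_DRAWER   # 160 PCIe I/O features
max_channels = max_features * PORTS_PER_FEATURE         # up to 320 channels
print(max_features, max_channels)                       # 160 320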
For an upgrade to IBM z14™ servers, only the following PCIe I/O features can be carried forward:
FICON Express16S
FICON Express8S
OSA-Express5S (all 5S features)
OSA-Express4S 1000BASE-T (only)
10GbE RoCE Express
Crypto Express5S
zEDC Express
Coupling Express Long Reach (CE LR)
 
Consideration: On a new build IBM z14™ server, only PCIe I/O drawers are supported. No carry-forward of I/O drawers or associated features is supported on upgrades to an IBM z14™ server.
A new build IBM z14 server supports the following PCIe I/O features that are hosted in the PCIe I/O drawers:
FICON Express16S+
OSA-Express7S 25GbE SR
OSA-Express6S
25GbE RoCE Express2
10GbE RoCE Express2
Crypto Express6S
zEDC Express
Coupling Express Long Reach (CE LR)
zHyperLink Express
 
Note: Model upgrades to IBM z14™ are allowed from z13 or zEC12; model downgrades from IBM z14™ are not allowed. Capacity upgrades or downgrades are allowed as part of an upgrade to IBM z14™ from z13 or zEC12.
For frame roll MES from zEC12 and z13 to IBM z14™, new frames are shipped. New PCIe I/O drawers are supplied with the MES for zEC12 to replace the I/O drawers.
4.5 Fanouts
The z14 server uses fanout cards to connect the I/O subsystem to the CPC drawer. The fanout cards also provide the ICA SR and InfiniBand coupling links for Parallel Sysplex. All fanout cards support concurrent add, delete, and move.
The internal z14 I/O infrastructure consists of the following cards:
The PCIe Generation 3 fanout card: This one-port card (feature) connects to a PCIe I/O drawer and supports an eight-slot I/O domain. These cards are always installed in pairs to provide redundant connectivity to the I/O domains.
The InfiniBand HCA3 fanout cards (HCA3-O and HCA3-O LR). The HCA2-O fanout card is not supported on IBM z14™ servers.
 
Note: IBM z14 is the last z Systems and IBM Z server to support the HCA3-O fanout for 12x IFB (FC 0171) and the HCA3-O LR fanout for 1x IFB (FC 0170).1

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
The PCIe and InfiniBand fanouts are installed in the front of each CPC drawer. Each CPC drawer has 10 PCIe Gen3 fanout slots and four InfiniBand fanout slots. The PCIe fanout slots are named LG03 - LG12, left to right. The InfiniBand fanout slots are on the right side of the CPC drawer and are named LG13 - LG16, left to right. Slots LG01 and LG02 are used for FSPs.
The following types of fanout cards are supported by IBM z14™ servers. Each fanout slot can hold one of these fanouts:
PCIe Gen3 fanout card: This copper fanout provides connectivity to the PCIe switch card in the PCIe I/O drawer.
Integrated Coupling Adapter (ICA SR): This adapter provides coupling connectivity between z14, z13, and z13s servers at distances up to 150 meters (492 ft) with an 8 GBps link rate.
Host Channel Adapter (HCA3-O (12xIFB)): This optical fanout provides 12x InfiniBand coupling link connectivity up to a 150-meter (492 ft) distance to IBM z14™, z13, z13s, zEC12, and zBC12 servers.
Host Channel Adapter (HCA3-O LR (1xIFB)): This optical long-range fanout provides 1x InfiniBand coupling link connectivity to IBM z14™, z13, z13s, zEC12, and zBC12 servers. HCA3-O LR supports up to 10 km (6.2 miles) unrepeated distance, or 100 km (62 miles) when IBM Z qualified dense wavelength division multiplexing (DWDM) equipment is used.
The PCIe Gen3 fanout card includes one port. The HCA3-O LR (1xIFB) fanout includes four ports, and the other fanouts include two ports each.
 
Note: HCA2-O fanout card carry-forward is no longer supported on IBM z14™ servers.
The following PCIe and IFB connections are available from the CPC drawer (see Figure 4-7 on page 188):
PCIe I/O drawer (PCIe Gen3)
Z server that is connected through InfiniBand (12x or 1x HCA3-O)
Z server that is connected through a dedicated PCIe ICA SR
Figure 4-3 on page 152 shows an I/O connection scheme that is not tied to a particular CPC drawer. In a real configuration, I/O connectivity is mixed across multiple CPC drawers (if available) for I/O connection redundancy.
4.5.1 PCIe Generation 3 fanout (FC 0173)
The PCIe Gen3 fanout card provides connectivity to a PCIe I/O drawer by using a copper cable. One port on the fanout card is dedicated for PCIe I/O. The bandwidth of this PCIe fanout card supports a link rate of 16 GBps.
A 16x PCIe copper cable of 1.5 meters (4.92 ft) to 4.0 meters (13.1 ft) is used for connection to the PCIe switch card in the PCIe I/O drawer. PCIe fanout cards are always plugged in pairs and provide redundancy for I/O domains within the PCIe I/O drawer.
The PCIe fanout slots of a z14 CPC drawer are named LG03 - LG12, left to right. All z14 models (except model M01) split the PCIe fanout pairs across different processor drawers for redundancy purposes.
 
PCIe fanout: The PCIe fanout is used exclusively for I/O and cannot be shared for any other purpose.
4.5.2 Integrated Coupling Adapter (FC 0172)
Introduced with IBM z13, the IBM ICA SR is a two-port fanout feature that is used for short distance coupling connectivity and uses channel type CS5.
The ICA SR uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for coupling. No performance degradation is expected compared to the coupling over InfiniBand 12x IFB3 protocol.
The ICA SR is designed to drive distances up to 150 m (492 ft) with a link data rate of 8 GBps. ICA SR supports up to four channel-path identifiers (CHPIDs) per port and eight subchannels (devices) per CHPID.
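A quick back-of-the-envelope calculation (illustrative only, based on the limits just quoted) shows what those limits mean per feature:

# ICA SR limits quoted above: 2 ports per feature, up to 4 CHPIDs per port,
# and 8 subchannels (devices) per CHPID.
PORTS_PER_FEATURE = 2
MAX_CHPIDS_PER_PORT = 4
SUBCHANNELS_PER_CHPID = 8

devices_per_port = MAX_CHPIDS_PER_PORT * SUBCHANNELS_PER_CHPID   # 32
devices_per_feature = PORTS_PER_FEATURE * devices_per_port       # 64
print(devices_per_port, devices_per_feature)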
The coupling links can be defined as shared between images within a CSS. They also can be spanned across multiple CSSs in a CPC. Unlike the HCA3-O 12x InfiniBand links, the ICA SR cannot define more than four CHPIDs per port. When STP is enabled, ICA SR coupling links can be defined as timing-only links to other z14 and z13/z13s CPCs.
The ICA SR fanout is housed in a PCIe fanout slot in the z14 CPC drawer, which has 10 PCIe fanout slots. Up to 10 ICA SR fanouts and up to 20 ICA SR ports are supported on a z14 CPC drawer, enabling greater connectivity for short distance coupling on a single processor node compared to previous generations. The maximum number of ICA SR fanout features is 20 per system on IBM z14™ servers.
The ICA SR can be used for coupling connectivity between z14 and z13/z13s servers. It does not support connectivity to zEC12 or zBC12 servers, and it cannot be connected to HCA3-O or HCA3-O LR coupling fanouts.
The ICA SR fanout requires cabling that is different from the 12x IFB cables. For distances up to 100 m (328 ft), OM3 fiber optic cables can be used. For distances up to 150 m (492 ft), OM4 fiber optic cables can be used. For more information, see the following resources:
Planning for Fiber Optic Links, GA23-1407
IBM 3906 Installation Manual for Physical Planning, GC28-6965
4.5.3 HCA3-O (12x IFB) fanout (FC 0171)
The HCA3-O fanout for 12x InfiniBand provides an optical interface that is used for coupling links. The two ports on the fanout are dedicated to coupling links that connect to z14, z13, z13s CPCs. Up to 16 HCA3-O (12x IFB) fanouts are supported and provide up to 32 ports for coupling links.
The fiber optic cables are industry-standard OM3 (2000 MHz-km) 50-µm multimode optical cables with multifiber push-on (MPO) connectors. The maximum cable length is 150 m (492 ft). Each port (link) has 12 pairs of fibers: 12 fibers for transmitting, and 12 fibers for receiving. The HCA3-O (12xIFB) fanout supports a link data rate of 6 GBps.
 
Important: The HCA3-O fanout features two ports (1 and 2). Each port includes one connector for transmitting (TX) and one connector for receiving (RX). Ensure that you use the correct cables. An example is shown in Figure 4-5 on page 157.
For more information, see the following resources:
Planning for Fiber Optic Links, GA23-1407
IBM 3906 Installation Manual for Physical Planning, GC28-6965
Figure 4-5 OM3 50/125 µm multimode fiber cable with MPO connectors
A fanout features two ports for optical link connections, and supports up to 16 CHPIDs across both ports. These CHPIDs are defined as channel type CIB in the I/O configuration data set (IOCDS). The coupling links can be defined as shared between images within a channel subsystem (CSS). They also can be spanned across multiple CSSs in a CPC.
Each HCA3-O (12x IFB) fanout has an assigned Adapter ID (AID) number. This number must be used for definitions in IOCDS to create a relationship between the physical fanout location and the CHPID number. For more information about AID numbering, see “Adapter ID number assignment” on page 158.
For more information about how the AID is used and referenced in the HCD, see Implementing and Managing InfiniBand Coupling Links on System z SG24-7539.
When STP is enabled, IFB coupling links can be defined as timing-only links to other z14, z13, z13s, zEC12, and zBC12 CPCs.
12x IFB and 12x IFB3 protocols
The following protocols are supported by the HCA3-O for 12x IFB feature:
12x IFB3 protocol: This protocol is used when HCA3-O (12xIFB) fanouts are communicating with HCA3-O (12x IFB) fanouts and are defined with four or fewer CHPIDs per port.
12x IFB protocol: If more than four CHPIDs are defined per HCA3-O (12xIFB) port, or HCA3-O (12x IFB) features are communicating with HCA2-O (12x IFB) features on zEC12 and zBC12 CPCs, links run with the 12x IFB protocol.
The HCA3-O feature that supports 12x InfiniBand coupling links is designed to deliver improved service times. When no more than four CHPIDs are defined per HCA3-O (12xIFB) port, the 12x IFB3 protocol is used. When you use the 12x IFB3 protocol, synchronous service times are up to 40% faster than when you use the 12x IFB protocol.
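The protocol selection rule can be summarized in a few lines of Python (a simplified sketch only; the channel firmware makes this determination automatically):

def ifb_12x_protocol(chpids_per_port: int, peer_fanout: str) -> str:
    """Return which 12x protocol applies, per the rules described above."""
    # 12x IFB3 requires HCA3-O fanouts at both ends and no more than
    # four CHPIDs defined per port.
    if peer_fanout == "HCA3-O" and chpids_per_port <= 4:
        return "12x IFB3"   # synchronous service times up to 40% faster
    return "12x IFB"

print(ifb_12x_protocol(4, "HCA3-O"))   # 12x IFB3
print(ifb_12x_protocol(6, "HCA3-O"))   # 12x IFB (more than 4 CHPIDs per port)
print(ifb_12x_protocol(2, "HCA2-O"))   # 12x IFB (peer is HCA2-O on zEC12/zBC12)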
4.5.4 HCA3-O LR (1x IFB) fanout (FC 0170)
The HCA3-O LR fanout for 1x InfiniBand provides an optical interface that is used for coupling links. The four ports on the fanout are dedicated to coupling links that connect to z14, z13, and z13s servers. Up to 16 HCA3-O LR (1xIFB) fanouts are supported on IBM z14™ servers (up to 64 1x IFB ports for coupling).
The HCA3-O LR fanout supports InfiniBand 1x optical links that offer long-distance coupling links. The cable has one lane that contains two fibers. One fiber is used for transmitting, and the other fiber is used for receiving data.
Each connection supports a link rate of up to 5 Gbps if connected to a z14, z13, or z13s server. HCA3-O LR also supports a link rate of 2.5 Gbps when connected to IBM Z qualified DWDM equipment. The link rate is auto-negotiated to the highest common rate.
The fiber optic cables are 9-µm SM optical cables that end with an LC Duplex connector. With direct connection, the supported unrepeated distance is up to 10 km (6.2 miles), and up to 100 km (62 miles) with IBM Z qualified DWDM equipment.
A fanout has four ports for optical link connections, and supports up to 16 CHPIDs across all four ports. These CHPIDs are defined as channel type CIB in the IOCDS. The coupling links can be defined as shared between images within a channel subsystem, and also can be spanned across multiple channel subsystems in a server.
Each HCA3-O LR (1xIFB) fanout can be used for link definitions to another server, or a link from one port to a port in another fanout on the same server.
The source and target operating system image, CF image, and the CHPIDs that are used on both ports in both servers are defined in IOCDS.
Each HCA3-O LR (1xIFB) fanout has an assigned AID number. This number must be used for definitions in IOCDS to create a relationship between the physical fanout location and the CHPID number. For more information about AID numbering, see “Adapter ID number assignment” on page 158.
When STP is enabled, HCA3-O LR coupling links can be defined as timing-only links to other z14, z13, z13s, zEC12, and zBC12 CPCs.
4.5.5 Fanout considerations
Fanout slots in each CPC drawer can be used to plug different fanouts. One drawer can hold up to 10 PCIe fanouts and four InfiniBand fanout cards.
Adapter ID number assignment
PCIe and IFB fanouts and ports are identified by an AID that is initially dependent on their physical locations, which is unlike channels that are installed in a PCIe I/O drawer or I/O drawer. Those channels are identified by a physical channel ID (PCHID) number that is related to their physical location. This AID must be used to assign a CHPID to the fanout in the IOCDS definition. The CHPID assignment is done by associating the CHPID to an AID port (see Table 4-2).
Table 4-2 AID number assignment
Drawer   Location   Fanout slot          AIDs
First    A15A       LG03 - LG12 (PCIe)   2E-37
                    LG13 - LG16 (IFB)    0C-0F
Second   A19A       LG03 - LG12 (PCIe)   24-2D
                    LG13 - LG16 (IFB)    08-0B
Third    A23A       LG03 - LG12 (PCIe)   1A-23
                    LG13 - LG16 (IFB)    04-07
Fourth   A27A       LG03 - LG12 (PCIe)   10-19
                    LG13 - LG16 (IFB)    00-03
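Table 4-2 can also be expressed as a small lookup function (an illustrative sketch only; as noted below, these defaults are valid for a new build system):

# Default AID assignment from Table 4-2 (new build): (drawer, slot) -> AID.
# Drawer 1 = A15A, drawer 2 = A19A, drawer 3 = A23A, drawer 4 = A27A.
PCIE_BASE_AID = {1: 0x2E, 2: 0x24, 3: 0x1A, 4: 0x10}   # for slots LG03 - LG12
IFB_BASE_AID = {1: 0x0C, 2: 0x08, 3: 0x04, 4: 0x00}    # for slots LG13 - LG16

def default_aid(drawer: int, slot: str) -> str:
    """slot is 'LG03' through 'LG16'; returns the default AID as hex text."""
    n = int(slot[2:])
    if 3 <= n <= 12:                      # PCIe fanout slots
        return format(PCIE_BASE_AID[drawer] + (n - 3), "02X")
    if 13 <= n <= 16:                     # InfiniBand fanout slots
        return format(IFB_BASE_AID[drawer] + (n - 13), "02X")
    raise ValueError("LG01 and LG02 hold FSPs, not fanouts")

print(default_aid(1, "LG14"))   # 0D, matching Example 4-1
print(default_aid(2, "LG14"))   # 09, matching Example 4-1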
Fanout slots
The fanout slots are numbered LG03 - LG16, left to right, as shown in Figure 4-4 on page 153. All fanout locations and their AIDs for all four drawers are shown for reference only. Slots LG01 and LG02 never have a fanout installed because they are dedicated to FSPs.
 
Important: The AID numbers that are listed in Table 4-2 on page 158 are valid only for a new build system or if new processor drawers are added. If a fanout is moved, the AID follows the fanout to its new physical location.
The AID assignment is listed in the PCHID REPORT that is provided for each new server or for an MES upgrade on existing servers. Part of a PCHID REPORT for a model M03 is shown in Example 4-1. In this example, one fanout card is installed in the first drawer (location A15A, slot LG14) and is assigned AID 0D. Another fanout card is installed in the second drawer (location A19A, slot LG14) and is assigned AID 09.
Example 4-1 AID assignment in PCHID REPORT
CHPIDSTART
19567745 PCHID REPORT Jul 14,2017
Machine: 3906-M03 00000NEW1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Source Drwr Slot F/C PCHID/Ports or AID Comment
A23/LG14 A23A LG14 0170 AID=05
 
A23/LG16 A23A LG16 0171 AID=07
 
A19/LG14 A19A LG14 0171 AID=09
 
A19/LG15 A19A LG15 0171 AID=0A
 
A19/LG16 A19A LG16 0170 AID=0B
 
A15/LG13 A15A LG13 0171 AID=0C
 
A15/LG14 A15A LG14 0170 AID=0D
 
A15/LG16 A15A LG16 0170 AID=0F
 
A23/LG04 A23A LG04 0172 AID=1B
 
A23/LG11 A23A LG11 0172 AID=22
 
A19/LG04 A19A LG04 0172 AID=25
 
A15/LG11 A15A LG11 0172 AID=36
Fanout features that are supported by the z14 server are listed in Table 4-3, which includes the feature type, feature code, and information about the link supported by the fanout feature.
Table 4-3 Fanout summary
Fanout feature      Feature code   Use                          Cable type                   Connector type   Maximum distance                        Link data rate¹
PCIe fanout         0173           Connect to PCIe I/O drawer   Copper                       N/A              4 m (13.1 ft)                           16 GBps
HCA3-O (12xIFB)     0171           Coupling link                50-µm MM OM3 (2000 MHz-km)   MPO              150 m (492 ft)                          6 GBps²
HCA3-O LR (1xIFB)   0170           Coupling link                9-µm SM                      LC Duplex        10 km (6.2 miles); 100 km³ (62 miles)   5.0 Gbps
ICA SR              0172           Coupling link                OM4                          MTP              150 m (492 ft)                          8 Gbps
                                                                OM3                          MTP              100 m (328 ft)                          8 Gbps

1 The link data rates do not represent the actual performance of the link. The actual performance depends on many factors, including latency through the adapters, cable lengths, and the type of workload.
2 When the 12x IFB3 protocol is used, synchronous service times are 40% faster than the 12x IFB protocol.
3 Up to 100 km (62 miles) with repeaters (IBM Z qualified DWDM).
4.6 I/O features (cards)
I/O features (cards) have ports to connect the z14 server to external devices, networks, or other servers. I/O features are plugged into the PCIe I/O drawer, based on the configuration rules for the server. Different types of I/O cards are available, one for each channel or link type. I/O cards can be installed or replaced concurrently.
4.6.1 I/O feature card ordering information
The I/O features that are supported by z14 servers and the ordering information for them are listed in Table 4-4.
Table 4-4 I/O features and ordering information
Channel feature                        Feature code   New build   Carry-forward
FICON Express16S+ LX                   0427           Y           N/A
FICON Express16S+ SX                   0428           Y           N/A
FICON Express16S 10KM LX               0418           N           Y
FICON Express16S SX                    0419           N           Y
FICON Express8S 10KM LX                0409           N           Y
FICON Express8S SX                     0410           N           Y
OSA-Express7S 25GbE SR                 0429           Y           N/A
OSA-Express6S 10GbE LR                 0424           Y           N/A
OSA-Express6S 10GbE SR                 0425           Y           N/A
OSA-Express6S GbE LX                   0422           Y           N/A
OSA-Express6S GbE SX                   0423           Y           N/A
OSA-Express6S 1000BASE-T Ethernet      0426           Y           N/A
OSA-Express5S 10GbE LR                 0415           N           Y
OSA-Express5S 10GbE SR                 0416           N           Y
OSA-Express5S GbE LX                   0413           N           Y
OSA-Express5S GbE SX                   0414           N           Y
OSA-Express5S 1000BASE-T Ethernet      0417           N           Y
OSA-Express4S 1000BASE-T Ethernet      0408           N           Y
Integrated Coupling Adapter (ICA SR)   0172           Y           Y
Coupling Express LR                    0433           Y           Y
HCA3-O (12xIFB)                        0171           Y           Y
HCA3-O LR (1xIFB)                      0170           Y           Y
Crypto Express6S                       0893           Y           N/A
Crypto Express5S                       0890           N           Y
25GbE RoCE Express2                    0430           Y           N/A
10GbE RoCE Express2                    0412           Y           N/A
10GbE RoCE Express                     0411           N           Y
zEDC Express                           0420           Y           Y
zHyperLink Express                     0431           Y           N/A
 
Important: IBM z14™ servers do not support the ISC-3, HCA2-O (12x), or HCA2-O LR (1x) features and cannot participate in a Mixed Coordinated Timing Network (CTN).
4.6.2 Physical channel ID report
A physical channel ID (PCHID) reflects the physical location of a channel-type interface. A PCHID number is based on the following factors:
I/O drawer and PCIe I/O drawer location
Channel feature slot number
Port number of the channel feature
A CHPID does not directly correspond to a hardware channel port. Instead, it is assigned to a PCHID in the hardware configuration definition (HCD) or IOCP.
A PCHID REPORT is created for each new build server and for upgrades on servers. The report lists all I/O features that are installed, the physical slot location, and the assigned PCHID. A portion of a sample PCHID REPORT is shown in Example 4-2. For more information about the AID numbering rules for InfiniBand coupling links, see Table 4-2 on page 158.
Example 4-2 PCHID REPORT
CHPIDSTART
12519541 PCHID REPORT May 05,2017
Machine: 3906-M05 NEW1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Source Drwr Slot F/C PCHID/Ports or AID Comment
A23/LG16 A23A LG16 0171 AID=07
A19/LG14 A19A LG14 0171 AID=09
A15/LG14 A15A LG14 0170 AID=0D
A27/LG05 A27A LG05 0172 AID=12
A15/LG09 A15A LG09 0172 AID=34
A19/LG07/J01 Z22B 01 0428 100/D1 101/D2
A19/LG07/J01 Z22B 02 0420 104 RG1
A19/LG07/J01 Z22B 03 0412 108/D1D2    RG1
A19/LG07/J01 Z22B 04 0431 10C/D1D2    RG1
A19/LG07/J01 Z22B 06 0433 110/D1D2    RG1
A19/LG07/J01 Z22B 07 0893 114/P00
A19/LG07/J01 Z22B 08 0424 118/D1
A19/LG07/J01 Z22B 09 0425 11C/D1
A27/LG07/J01 Z22B 11 0424 120/D1
A27/LG07/J01 Z22B 12 0424 124/D1
A27/LG07/J01 Z22B 13 0427 128/D1 129/D2
A27/LG07/J01 Z22B 14 0428 12C/D1 12D/D2
A27/LG07/J01 Z22B 16 0433 130/D1D2    RG3
A27/LG07/J01 Z22B 17 0431 134/D1D2    RG3
A27/LG07/J01 Z22B 18 0412 138/D1D2    RG3
A27/LG07/J01 Z22B 19 0420 13C RG3
A15/LG03/J01 Z22B 20 0420 140 RG4
A15/LG03/J01 Z22B 21 0412 144/D1D2    RG4
A15/LG03/J01 Z22B 22 0431 148/D1D2    RG4
A15/LG03/J01 Z22B 23 0433 14C/D1D2    RG4
A15/LG03/J01 Z22B 25 0428 150/D1 151/D2
A15/LG03/J01 Z22B 26 0427 154/D1 155/D2
A15/LG03/J01 Z22B 27 0893 158/P00
A15/LG03/J01 Z22B 28 0425 15C/D1
A23/LG03/J01 Z22B 30 0425 160/D1
A23/LG03/J01 Z22B 31 0424 164/D1
A23/LG03/J01 Z22B 32 0427 168/D1 169/D2
A23/LG03/J01 Z22B 33 0428 16C/D1 16D/D2
A23/LG03/J01 Z22B 35 0433 170/D1D2    RG2
A23/LG03/J01 Z22B 36 0433 174/D1D2    RG2
A23/LG03/J01 Z22B 37 0412 178/D1D2    RG2
A23/LG03/J01 Z22B 38 0412 17C/D1D2    RG2
Legend:
Source Book Slot/Fanout Slot/Jack
A15A CEC Drawer 1 in A frame
A27A CEC Drawer 4 in A frame
A23A CEC Drawer 3 in A frame
A19A CEC Drawer 2 in A frame
Z22B PCIe Drawer 1 in Z frame
Z15B PCIe Drawer 2 in Z frame
Z08B PCIe Drawer 3 in Z frame
0428 16GB FICON Express16S+ SX 2 Ports
RG1 Resource Group 1
0420 zEDC Express
0412 10GbE RoCE Express
0431 zHyperLink Express
0433 Coupling Express LR
0893 Crypto Express6S
0424 OSA Express6S 10 GbE LR 1 Ports
0425 OSA Express6S 10 GbE SR 1 Ports
0427 16GB FICON Express16S+ LX 2 Ports
RG3 Resource Group 3
RG2 Resource Group 2
RG4 Resource Group 4
0171 HCA3 O PSIFB 12x 2 Links
0170 HCA3 O LR PSIFB 1x 4 Links
0172 ICA SR 2 Links
The PCHID REPORT that is shown in Example 4-2 includes the following components:
Feature code 0170 (HCA3-O LR (1xIFB)) is installed in CPC drawer 1 (location A15A, slot LG14), and includes AID 0D assigned.
Feature code 0172 (Integrated Coupling Adapter (ICA SR)) is installed in CPC drawer 4 (location A27A, slot LG05), and has AID 12 assigned.
Feature code 0424 (OSA-Express6S 10 GbE LR) is installed in PCIe I/O drawer 1 (location Z22B, slot 11), and has PCHID 120 assigned.
Feature code 0427 (FICON Express16S+ long wavelength (LX) 10 km (6.2 miles)) is installed in PCIe I/O drawer 1 (location Z22B, slot 26), and has PCHIDs 154 and 155 assigned.
Feature code 0431 (zHyperLink Express) is installed in PCIe I/O drawer 1 (location Z22B, slot 04), and has PCHID 10C assigned. PCHID 10C is shared by ports D1 and D2.
A resource group (RG) parameter is shown in the PCHID REPORT for native PCIe features. Native PCIe features are plugged in a balanced manner across the four resource groups (RG1, RG2, RG3, and RG4).
The preassigned PCHID number of each I/O port relates directly to its physical location (jack location in a specific slot).
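For scripted configuration checks, I/O feature lines of a PCHID REPORT like the ones in Example 4-2 can be parsed with a few lines of Python (an illustrative sketch only; it assumes the line layout shown above):

# Parse I/O feature lines of a PCHID REPORT such as Example 4-2.
# Assumed layout: source, drawer, slot, feature code, PCHID/port entries,
# and optionally a resource group (RGx) for native PCIe features.
def parse_pchid_line(line: str) -> dict:
    fields = line.split()
    entry = {
        "source": fields[0],        # for example, A19/LG07/J01
        "drawer": fields[1],        # for example, Z22B
        "slot": fields[2],
        "feature_code": fields[3],
        "pchids": [],
        "resource_group": None,
    }
    for token in fields[4:]:
        if token.startswith("RG"):
            entry["resource_group"] = token
        else:
            entry["pchids"].append(token.split("/")[0])   # keep the PCHID, drop the port suffix
    return entry

print(parse_pchid_line("A27/LG07/J01 Z22B 13 0427 128/D1 129/D2"))
print(parse_pchid_line("A19/LG07/J01 Z22B 03 0412 108/D1D2 RG1"))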
4.7 Connectivity
I/O channels are part of the CSS. They provide connectivity for data exchange between servers, between servers and external control units (CUs) and devices, or between networks.
For more information about connectivity to external I/O subsystems (for example, disks), see “Storage connectivity” on page 167.
For more information about communication to LANs, see “Network connectivity” on page 173.
Communication between servers is implemented by using CE LR, ICA SR, coupling over InfiniBand, or channel-to-channel (CTC) connections. For more information, see “Parallel Sysplex connectivity” on page 184.
4.7.1 I/O feature support and configuration rules
The supported I/O features are listed in Table 4-5, together with the number of ports per card, port increments, the maximum number of feature cards, and the maximum number of channels for each feature type. The CHPID definitions that are used in the IOCDS are also listed.
Table 4-5 z14 supported I/O features
I/O feature                            Ports per card   Port increments   Maximum ports   Maximum I/O slots   PCHID   CHPID definition
FICON Express16S+ LX/SX                2                2                 320             160                 Yes     FC, FCP¹
FICON Express16S LX/SX                 2                2                 320             160                 Yes     FC, FCP
FICON Express8S LX/SX                  2                2                 320             160                 Yes     FC, FCP
OSA-Express7S 25GbE SR                 1                1                 48              48                  Yes     OSD, OSX
OSA-Express6S 10GbE LR/SR              1                1                 48              48                  Yes     OSD, OSX
OSA-Express6S GbE LX/SX                2                2                 96              48                  Yes     OSD
OSA-Express6S 1000BASE-T               2                2                 96              48                  Yes     OSC, OSD, OSE, OSM
OSA-Express5S 10GbE LR/SR              1                1                 48              48                  Yes     OSD, OSX
OSA-Express5S GbE LX/SX                2                2                 96              48                  Yes     OSD
OSA-Express5S 1000BASE-T               2                2                 96              48                  Yes     OSC, OSD, OSE, OSM
OSA-Express4S 1000BASE-T               2                2                 96              48                  Yes     OSC, OSD, OSE, OSM
25GbE RoCE Express2                    2                2                 16              8                   Yes     N/A²
10GbE RoCE Express2                    2                2                 16              8                   Yes     N/A²
10GbE RoCE Express                     2                2                 16              8                   Yes     N/A²
Coupling Express LR                    2                2                 64              32                  Yes     CL5
Integrated Coupling Adapter (ICA SR)   2                2                 40              20                  N/A     CS5
HCA3-O for 12x IFB and 12x IFB3        2                2                 32              16                  N/A     CIB
HCA3-O LR for 1x IFB                   4                4                 64              16                  N/A     CIB
zHyperLink Express                     2                2                 32              16                  Yes     N/A²

1 Both ports must be defined with the same CHPID type.
2 These features are defined by using Function IDs (FIDs).
At least one I/O feature (FICON) or one coupling link feature (ICA SR or HCA3-O) must be present in the minimum configuration.
The following features can be shared and spanned:
FICON channels that are defined as FC or FCP
OSA-Express features that are defined as OSC, OSD, OSE, OSM, or OSX
Coupling links that are defined as CS5, CIB, or CL5
HiperSockets that are defined as IQD
The following features are exclusively plugged into a PCIe I/O drawer and do not require the definition of a CHPID and CHPID type:
Each Crypto Express (5S/6S) feature occupies one I/O slot, but does not have a CHPID type. However, LPARs in all CSSs have access to the features. Each Crypto Express adapter can be defined to up to 85 LPARs.
Each RoCE Express/Express2 feature occupies one I/O slot but does not have a CHPID type. However, LPARs in all CSSs have access to the feature. The 10GbE RoCE Express can be defined to up to 31 LPARs per PCHID. The 25 GbE RoCE Express2 and the 10GbE RoCE Express2 features support up to 126 LPARs per PCHID.
Each zEDC Express feature occupies one I/O slot but does not have a CHPID type. However, LPARs in all CSSs have access to the feature. The zEDC feature can be defined to up to 15 LPARs.
Each zHyperLink Express feature occupies one I/O slot but does not have a CHPID type. However, LPARs in all CSSs have access to the feature. The zHyperLink Express adapter works as a native PCIe adapter and can be shared by multiple LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to each LPAR. This support gives a maximum of 254 VFs per adapter.
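The virtualization limits that are quoted in this list can be collected into one reference structure (illustrative only; the values are taken directly from the list above):

# Virtualization limits for native PCIe features, as quoted in the list above.
NATIVE_PCIE_LIMITS = {
    "Crypto Express5S/6S": {"lpars_per_adapter": 85},
    "10GbE RoCE Express": {"lpars_per_pchid": 31},
    "10GbE/25GbE RoCE Express2": {"lpars_per_pchid": 126},
    "zEDC Express": {"lpars_per_adapter": 15},
    "zHyperLink Express": {"vfs_per_port": 127, "vfs_per_adapter": 254},
}

for feature, limits in NATIVE_PCIE_LIMITS.items():
    print(f"{feature:26} {limits}")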
I/O feature cables and connectors
The IBM Facilities Cabling Services fiber transport system offers a total cable solution service to help with cable ordering requirements. These services can include the requirements for all of the protocols and media types that are supported (for example, FICON, Coupling Links, and OSA). The services can help whether the focus is the data center, SAN, LAN, or the end-to-end enterprise.
 
Cables: All fiber optic cables, cable planning, labeling, and installation are client responsibilities for new z14 installations and upgrades. Fiber optic conversion kits and mode conditioning patch cables are not orderable as features on z14 servers. All other cables must be sourced separately.
The Enterprise Fiber Cabling Services use a proven modular cabling system, the fiber transport system (FTS), which includes trunk cables, zone cabinets, and panels for servers, directors, and storage devices. FTS supports Fiber Quick Connect (FQC), a fiber harness that is integrated in the frame of a z14 server for quick connection. The FQC is offered as a feature on z14 servers for connection to FICON LX channels.
Whether you choose a packaged service or a custom service, high-quality components are used to facilitate moves, additions, and changes in the enterprise to prevent having to extend the maintenance window.
The required connector and cable type for each I/O feature on IBM z14™ servers are listed in Table 4-6.
Table 4-6 I/O feature connector and cable types
Feature code   Feature name                           Connector type   Cable type
0427           FICON Express16S+ LX 10 km             LC Duplex        9 µm SM
0428           FICON Express16S+ SX                   LC Duplex        50, 62.5 µm MM
0418           FICON Express16S LX 10 km              LC Duplex        9 µm SM
0419           FICON Express16S SX                    LC Duplex        50, 62.5 µm MM
0409           FICON Express8S LX 10 km               LC Duplex        9 µm SM
0410           FICON Express8S SX                     LC Duplex        50, 62.5 µm MM
0429           OSA-Express7S 25GbE SR                 LC Duplex        50 µm MM OM4²
0424           OSA-Express6S 10GbE LR                 LC Duplex        9 µm SM
0425           OSA-Express6S 10GbE SR                 LC Duplex        50, 62.5 µm MM
0422           OSA-Express6S GbE LX                   LC Duplex        9 µm SM
0423           OSA-Express6S GbE SX                   LC Duplex        50, 62.5 µm MM
0426           OSA-Express6S 1000BASE-T               RJ-45            Category 5 UTP¹
0415           OSA-Express5S 10GbE LR                 LC Duplex        9 µm SM
0416           OSA-Express5S 10GbE SR                 LC Duplex        50, 62.5 µm MM
0413           OSA-Express5S GbE LX                   LC Duplex        9 µm SM
0414           OSA-Express5S GbE SX                   LC Duplex        50, 62.5 µm MM
0417           OSA-Express5S 1000BASE-T               RJ-45            Category 5 UTP
0408           OSA-Express4S 1000BASE-T               RJ-45            Category 5 UTP
0430           25GbE RoCE Express2                    LC Duplex        50 µm MM OM4²
0412           10GbE RoCE Express2                    LC Duplex        50, 62.5 µm MM
0411           10GbE RoCE Express                     LC Duplex        50, 62.5 µm MM
0433           CE LR                                  LC Duplex        9 µm SM
0172           Integrated Coupling Adapter (ICA SR)   MTP              50 µm MM OM4 (4.7 GHz-km)²
0171           HCA3-O (12xIFB)                        MPO              50 µm MM OM3 (2 GHz-km)
0170           HCA3-O LR (1xIFB)                      LC Duplex        9 µm SM
0431           zHyperLink Express                     MPO              50 µm MM OM4 (4.7 GHz-km)
1 UTP is unshielded twisted pair. Consider the use of category 6 UTP for 1000 Mbps connections.
2 Or 50 µm MM OM3 (2 GHz-km), but OM4 is highly recommended.
MM = Multi-Mode
SM = Single-Mode
4.7.2 Storage connectivity
Connectivity to external I/O subsystems (for example, disks) is provided by FICON channels and zHyperLink.
FICON channels
z14 supports the following FICON features:
FICON Express16S+
FICON Express16S
FICON Express8S (carry-forward only)
The FICON Express16S+, FICON Express16S, and FICON Express8S features conform to the following architectures:
Fibre Connection (FICON)
High Performance FICON on Z (zHPF)
Fibre Channel Protocol (FCP)
The FICON features provide connectivity between any combination of servers, directors, switches, and devices (control units, disks, tapes, and printers) in a SAN.
Each FICON Express16S+, FICON Express16S, and FICON Express8S feature occupies one I/O slot in the PCIe I/O drawer. Each feature has two ports, each supporting an LC Duplex connector, with one PCHID and one CHPID associated with each port.
All FICON Express16S+, FICON Express16S, and FICON Express8S features use SFP optics that allow for concurrent repairing or replacement for each SFP. The data flow on the unaffected channels on the same feature can continue. A problem with one FICON port no longer requires replacement of a complete feature.
All FICON Express16S+, FICON Express16S, and FICON Express8S features also support cascading, which is the connection of two FICON Directors in succession. This configuration minimizes the number of cross-site connections and helps reduce implementation costs for disaster recovery applications, IBM Geographically Dispersed Parallel Sysplex™ (GDPS), and remote copy.
IBM z14™ servers support 32K devices per FICON channel for all FICON features.
Each FICON Express16S+, FICON Express16S, and FICON Express8S channel can be defined independently for connectivity to servers, switches, directors, disks, tapes, and printers, by using the following CHPID types:
CHPID type FC: The FICON, zHPF, and FCTC protocols are supported simultaneously.
CHPID type FCP: Fibre Channel Protocol that supports attachment to SCSI devices directly or through Fibre Channel switches or directors.
FICON channels (CHPID type FC or FCP) can be shared among LPARs and can be defined as spanned. All ports on a FICON feature must be of the same type (LX or SX). The features connect to a FICON-capable control unit either directly (point-to-point) or through a Fibre Channel switch (switched point-to-point).
FICON Express16S+
The FICON Express16S+ feature is installed in the PCIe I/O drawer. Each of the two independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express16S+ optical transceivers are supported (no mix on same card):
FICON Express16S+ 10 km LX feature, FC #0427, with two ports per feature, supporting LC Duplex connectors
FICON Express16S+ SX feature, FC #0428, with two ports per feature, supporting LC Duplex connectors
Each port of the FICON Express16S+ 10 km LX feature uses an optical transceiver that supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express16S+ SX feature uses an optical transceiver that supports a distance of up to 125 m (410 ft), which varies with the link data rate and fiber type.
 
Consideration: FICON Express16S+ features do not support auto-negotiation to a data link rate of 2 Gbps (only 4, 8, or 16 Gbps).
FICON Express16S
The FICON Express16S feature is installed in the PCIe I/O drawer. Each of the two independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express16S optical transceivers are supported:
FICON Express16S 10 km LX feature, FC #0418, with two ports per feature, supporting LC Duplex connectors
FICON Express16S SX feature, FC #0419, with two ports per feature, supporting LC Duplex connectors
Each port of the FICON Express16S 10 km LX feature uses an optical transceiver that supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express16S SX feature uses an optical transceiver that supports a distance of up to 125 m (410 ft), depending on the fiber that is used.
 
Consideration: FICON Express16S features do not support auto-negotiation to a data link rate of 2 Gbps (only 4, 8, or 16 Gbps).
FICON Express8S
The FICON Express8S feature is installed in the PCIe I/O drawer. Each of the two independent ports is capable of 2 Gbps, 4 Gbps, or 8 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express8S optical transceivers are supported:
FICON Express8S 10 km LX feature, FC #0409, with two ports per feature, supporting LC Duplex connectors
FICON Express8S SX feature, FC #0410, with two ports per feature, supporting LC Duplex connectors
Each port of the FICON Express8S 10 km LX feature uses an optical transceiver that supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express8S SX feature uses an optical transceiver that supports up to 150 m (492 feet) of distance depending on the fiber used.
FICON enhancements
Together with the FICON Express16S+, IBM z14™ servers provide enhancements for FICON in both functional and performance aspects.
Forward Error Correction
Forward Error Correction (FEC) is a technique that is used for reducing data errors when transmitting over unreliable or noisy communication channels (improving the signal-to-noise ratio). By adding redundancy error-correction code (ECC) to the transmitted information, the receiver can detect and correct several errors without requiring retransmission. This process improves signal reliability and bandwidth utilization by reducing retransmissions because of bit errors, especially for connections across long distances, such as an inter-switch link (ISL) in a GDPS Metro Mirror environment.
The FICON Express16S+ and FICON Express16S features are designed to support FEC coding on top of their 64b/66b data encoding for 16 Gbps connections. This design can correct up to 11 bit errors per 2112 bits transmitted. Therefore, when connected to devices that support FEC at 16 Gbps connections, the FEC design allows FICON Express16S+ and FICON Express16S channels to operate at higher speeds, over longer distances, with reduced power and higher throughput, while retaining the same reliability and robustness for which FICON channels are traditionally known.
With the IBM DS8870 or later, IBM z14 (and z13/z13s) servers can extend the use of FEC to the fabric N_Ports for complete end-to-end coverage of 16 Gbps FC links. For more information, see IBM DS8884 and z13s: A New Cost Optimized Solution, REDP-5327.
FICON dynamic routing
With the IBM z14™, IBM z13, and IBM z13s servers, FICON channels are no longer restricted to the use of static SAN routing policies for ISLs for cascaded FICON directors. The Z servers now support dynamic routing in the SAN with the FICON Dynamic Routing (FIDR) feature. It is designed to support the dynamic routing policies that are provided by the FICON director manufacturers; for example, Brocade’s exchange-based routing (EBR) and Cisco’s originator exchange ID (OxID) routing.
A static SAN routing policy normally assigns the ISL routes according to the incoming port and its destination domain (port-based routing), or the source and destination ports pairing (device-based routing).
Port-based routing (PBR) assigns the ISL routes statically, on a first-come, first-served basis, when a port starts a fabric login (FLOGI) to a destination domain. The ISL is selected for assignment in a round-robin manner. Therefore, I/O flow from the same incoming port to the same destination domain is always assigned the same ISL route, regardless of the destination port of each I/O. This setup can result in some ISLs being overloaded while others are underused. The ISL routing table changes whenever the Z server undergoes a power-on reset (POR), so the ISL assignment is unpredictable.
Device-based routing (DBR) assigns the ISL routes statically, based on a hash of the source and destination ports, so I/O flow from the same incoming port to the same destination port is assigned the same ISL route. Compared to PBR, DBR is better at spreading the load across ISLs for I/O flows from the same incoming port to different destination ports within a destination domain.
When a static SAN routing policy is used, the FICON director has limited capability to assign ISL routes based on workload. This limitation can result in unbalanced use of ISLs (some might be overloaded, while others are underused).
With dynamic routing, ISL routes are changed dynamically based on the Fibre Channel exchange ID, which is unique for each I/O operation. The ISL is assigned at I/O request time, so different I/Os from the same incoming port to the same destination port can be assigned different ISLs.
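The difference between the static policies and FIDR can be sketched as follows (a simplified illustration of the selection rules that are described above, not the director firmware's actual algorithm):

import hashlib

ISLS = ["ISL0", "ISL1", "ISL2", "ISL3"]

def pick_isl_pbr(flogi_order: int) -> str:
    # Port-based routing: the ISL is chosen round-robin at fabric login time
    # and then stays fixed for all I/O from that port to that destination domain.
    return ISLS[flogi_order % len(ISLS)]

def pick_isl_dbr(src_port: str, dst_port: str) -> str:
    # Device-based routing: a static hash of the source/destination port pair.
    digest = hashlib.md5(f"{src_port}:{dst_port}".encode()).digest()
    return ISLS[digest[0] % len(ISLS)]

def pick_isl_fidr(exchange_id: int) -> str:
    # FICON Dynamic Routing: the ISL is chosen per Fibre Channel exchange
    # (per I/O), so consecutive I/Os between the same ports can use different ISLs.
    return ISLS[exchange_id % len(ISLS)]

print([pick_isl_fidr(x) for x in range(4)])   # successive I/Os spread across all ISLs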
With FIDR, IBM z14™ servers feature the following advantages for performance and management in configurations with ISL and cascaded FICON directors:
Support sharing of ISLs between FICON and FCP (PPRC or distributed)
I/O traffic is better balanced between all available ISLs
Improved utilization of FICON director and ISL
Easier to manage with a predictable and repeatable I/O performance
FICON Dynamic Routing can be enabled by defining dynamic routing capable switches and control units in HCD. Also, z/OS provides a health check function for FICON Dynamic Routing.
Improved zHPF I/O execution at distance
By introducing the concept of pre-deposit writes, zHPF reduces the number of round trips of standard FCP I/Os to a single round trip. Originally, this benefit was limited to writes that are less than 64 KB. zHPF on IBM z14™, z13s, and z13 servers was enhanced to allow all large write operations (> 64 KB) at distances up to 100 km to be run in a single round trip to the control unit. This improvement avoids elongating the I/O service time for these write operations at extended distances.
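Why the round-trip count matters at distance can be seen with a simple propagation-delay estimate (an illustration only; it assumes roughly 5 microseconds of one-way fiber propagation delay per kilometer and ignores processing time):

# Rough effect of the round-trip count on I/O service time at distance.
PROPAGATION_US_PER_KM_ONE_WAY = 5.0   # assumption: ~5 microseconds per km in fiber

def propagation_delay_us(distance_km: float, round_trips: int) -> float:
    return 2 * PROPAGATION_US_PER_KM_ONE_WAY * distance_km * round_trips

# A large (> 64 KB) write at 100 km: one round trip with the enhanced zHPF.
print(propagation_delay_us(100, 1))   # 1000 microseconds of propagation delay
print(propagation_delay_us(100, 2))   # 2000 microseconds if two round trips were needed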
Read Diagnostic Parameter Extended Link Service support
To improve the accuracy of identifying a failed component without unnecessarily replacing components in a SAN fabric, a new Extended Link Service (ELS) command called Read Diagnostic Parameters (RDP) was added to the Fibre Channel T11 standard to allow Z servers to obtain extra diagnostic data from the SFP optics that are throughout the SAN fabric.
IBM z14™ and z13 servers now can read this extra diagnostic data for all the ports that are accessed in the I/O configuration and make the data available to an LPAR. For z/OS LPARs that use FICON channels, z/OS displays the data with a new message and display command. For Linux on z Systems, z/VM, z/VSE, and KVM LPARs that use FCP channels, this diagnostic data is available in a new window in the SAN Explorer tool.
N_Port ID Virtualization enhancement
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel. First introduced with z9 EC, this feature can be used with earlier FICON features that were carried forward from earlier servers.
By using the FICON Express16S (or later) feature as an FCP channel with NPIV enabled, the following maximums for one FCP physical channel are doubled:
Maximum number of NPIV hosts defined: Increased from 32 to 64
Maximum number of remote N_Ports communicated: Increased from 512 to 1024
Maximum number of addressable LUNs: Increased from 4096 to 8192
Concurrent I/O operations: Increased from 764 to 1528
For more information about operating systems that support NPIV, see “N_Port ID Virtualization” on page 289.
Export/import physical port WWPNs for FCP Channels
IBM Z servers automatically assign worldwide port names (WWPNs) to the physical ports of an FCP channel based on the PCHID. This WWPN assignment changes when an FCP channel is moved to a different physical slot position. IBM z14™, z13, and z13s servers will allow for the modification of these default assignments, which also allows FCP channels to keep previously assigned WWPNs, even after being moved to a different slot position. This capability can eliminate the need for reconfiguration of the SAN in many situations, and is especially helpful during a system upgrade (FC #0099 - WWPN Persistence).
 
Note: For more information about the FICON enhancement of IBM z14™ servers, see Get More Out of Your IT Infrastructure with IBM z13 I/O Enhancements, REDP-5134.
FICON support for multiple-hop cascaded SAN configurations
Before the introduction of z13 and z13s servers, IBM Z FICON SAN configurations only supported a single ISL (a single hop) in a cascaded FICON SAN environment. The IBM z14™, z13, and z13s servers now support up to three hops in a cascaded FICON SAN environment. This support allows clients to more easily configure a three- or four-site disaster recovery solution. For more information about the FICON multi-hop, see the FICON Multihop: Requirements and Configurations white paper at the IBM Techdocs Library website.
FICON feature summary
The FICON feature codes, cable type, maximum unrepeated distance, and the link data rate on an IBM z14™ server are listed in Table 4-7. All FICON features use LC Duplex connectors.
Table 4-7 FICON Features
Channel feature             Feature code   Bit rate           Cable type             Maximum unrepeated distance¹ (MHz-km)
FICON Express16S+ 10KM LX   0427           4, 8, or 16 Gbps   SM 9 µm                10 km
FICON Express16S+ SX        0428           16 Gbps            MM 50 µm               35 m (500), 100 m (2000), 125 m (4700)
                                           8 Gbps             MM 62.5 µm, MM 50 µm   21 m (200), 50 m (500), 150 m (2000), 190 m (4700)
                                           4 Gbps             MM 62.5 µm, MM 50 µm   70 m (200), 150 m (500), 380 m (2000), 400 m (4700)
FICON Express16S 10KM LX    0418           4, 8, or 16 Gbps   SM 9 µm                10 km
FICON Express16S SX         0419           16 Gbps            MM 50 µm               35 m (500), 100 m (2000), 125 m (4700)
                                           8 Gbps             MM 62.5 µm, MM 50 µm   21 m (200), 50 m (500), 150 m (2000), 190 m (4700)
                                           4 Gbps             MM 62.5 µm, MM 50 µm   70 m (200), 150 m (500), 380 m (2000), 400 m (4700)
FICON Express8S 10KM LX     0409           2, 4, or 8 Gbps    SM 9 µm                10 km
FICON Express8S SX          0410           8 Gbps             MM 62.5 µm, MM 50 µm   21 m (200), 50 m (500), 150 m (2000), 190 m (4700)
                                           4 Gbps             MM 62.5 µm, MM 50 µm   70 m (200), 150 m (500), 380 m (2000), 400 m (4700)
                                           2 Gbps             MM 62.5 µm, MM 50 µm   150 m (200), 300 m (500), 500 m (2000), N/A (4700)

1 Minimum fiber bandwidths in MHz/km for multimode fiber optic links are included in parentheses, where applicable
 
 
zHyperLink Express (FC 0431)
zHyperLink is a new technology that provides up to a 5x reduction in I/O latency for Db2 read requests (Db2 V12 with z/OS 2.1 with patches), with the qualities of service that IBM Z clients expect from their I/O infrastructure.
The zHyperLink Express feature (FC 0431) provides a low-latency direct connection between the z14 CPC and a DS8880 I/O port.
The zHyperLink Express is the result of new business requirements that demand fast and consistent application response times. It dramatically reduces latency by interconnecting the z14 CPC directly to the I/O bay of the DS8880 by using a PCIe Gen3 x8 physical link (up to 150 m (492 ft) distance). A new transport protocol is defined for reading and writing IBM ECKD™ data records, as documented in the zHyperLink interface specification.
On z14, the zHyperLink Express card is a new PCIe adapter that is installed in the PCIe I/O drawer. HCD definition support was added for the new PCIe function type with PORT attributes.
Requirements of zHyperLink
The zHyperLink Express feature is available on z14 servers, and requires:
z/OS 2.1 or later
DS888x with I/O Bay Planar board and firmware level 8.3
z14 with zHyperLink Express adapter (FC #0431) installed
FICON channel as a driver
Only ECKD supported
z/VM is not supported
Up to 16 zHyperLink Express adapters can be installed in a z14 CPC (up to 32 links).
The zHyperLink Express is managed as a native PCIe adapter and can be shared by multiple LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to each LPAR. This configuration gives a maximum of 254 VFs per adapter. The zHyperLink Express requires the following components:
zHyperLink connector on DS8880 I/O Bay
For DS8880 firmware R8.3 and later, the I/O bay planar is updated to support the zHyperLink interface. This update includes the replacement of the PEX 8732 switch with the PEX 8733, which includes a DMA engine for zHyperLink transfers, and the upgrade from a copper to an optical interface by way of a CXP connector (provided).
Cable
The zHyperLink Express uses an optical cable with an MTP connector. The maximum supported cable length is 150 m (492 ft).
4.7.3 Network connectivity
Communication for LANs is provided by the following features:
OSA-Express7S
OSA-Express6S
OSA-Express5S
OSA-Express4S (1000BASE-T only)
25GbE RoCE Express2
10GbE RoCE Express2
10GbE RoCE Express features
OSA-Express7S
OSA-Express7S 25 Gigabit Ethernet SR (FC 0429) is installed in the PCIe I/O Drawer.
OSA-Express6S
The OSA-Express6S feature is installed in the PCIe I/O drawer. The following OSA-Express6S features can be installed on z14 servers:
OSA-Express6S 10 Gigabit Ethernet LR, FC 0424
OSA-Express6S 10 Gigabit Ethernet SR, FC 0425
OSA-Express6S Gigabit Ethernet LX, FC 0422
OSA-Express6S Gigabit Ethernet SX, FC 0423
OSA-Express6S 1000BASE-T Ethernet, FC 0426
The supported OSA-Express7S and 6S features are listed in Table 4-8.
Table 4-8 OSA-Express7S and 6S features
I/O feature | Feature code | Number of ports per feature | Port increment | Maximum number of ports | Maximum number of features | CHPID type
OSA-Express7S 25GbE SR | 0429 | 1 | 1 | 48 | 48 | OSD, OSX
OSA-Express6S 10 GbE LR | 0424 | 1 | 1 | 48 | 48 | OSD, OSX
OSA-Express6S 10 GbE SR | 0425 | 1 | 1 | 48 | 48 | OSD, OSX
OSA-Express6S GbE LX | 0422 | 2 | 2 | 96 | 48 | OSD
OSA-Express6S GbE SX | 0423 | 2 | 2 | 96 | 48 | OSD
OSA-Express6S 1000BASE-T | 0426 | 2 | 2 | 96 | 48 | OSC, OSD, OSE, OSM
OSA-Express7S 25 Gigabit Ethernet SR (FC 0429)
The OSA-Express7S 25GbE Short Reach (SR) feature includes one PCIe adapter and one port per feature. The port supports CHPID types OSD and OSX. The 25GbE feature is designed to support attachment to a multimode fiber 25 Gbps Ethernet LAN or Ethernet switch that is capable of 25 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express7S 25GbE SR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express7S 25GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
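For orientation, a minimal IOCP sketch for an OSA port that is defined as a shared CHPID of type OSD might look like the following example. The CHPID number and PCHID are hypothetical placeholders, not values that are tied to this feature.
* Hypothetical OSA definition as a shared CHPID of type OSD (placeholder values)
* Associated CNTLUNIT and IODEVICE statements (UNIT=OSA) complete the definition
CHPID PATH=(CSS(0),10),SHARED,PCHID=11C,TYPE=OSD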
OSA-Express6S 10 Gigabit Ethernet LR (FC 0424)
The OSA-Express6S 10 Gigabit Ethernet (GbE) Long Reach (LR) feature includes one PCIe adapter and one port per feature. The port supports CHPID types OSD and OSX. The 10 GbE feature is designed to support attachment to a single-mode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
 
Note: zBX Model 004 can be carried forward during an upgrade from z13 to IBM z14™ (because the zBX is an independent Ensemble node that is not tied to any IBM Z CPC); however, ordering of zBX features was withdrawn from marketing as of March 31, 2017.
The OSA-Express6S 10 GbE LR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR transceiver. The transceivers at both ends must be the same (LR to LR).
The OSA-Express6S 10 GbE LR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
OSA-Express6S 10 Gigabit Ethernet SR (FC 0425)
The OSA-Express6S 10 GbE Short Reach (SR) feature includes one PCIe adapter and one port per feature. The port supports CHPID types OSD and OSX. The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express6S 10 GbE SR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express6S 10 GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express6S Gigabit Ethernet LX (FC 0422)
The OSA-Express6S GbE LX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express6S GbE LX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an LX transceiver. The sending and receiving transceivers must be the same (LX to LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
OSA-Express6S Gigabit Ethernet SX (FC 0423)
The OSA-Express6S GbE SX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express6S GbE SX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an SX transceiver. The sending and receiving transceivers must be the same (SX to SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express6S 1000BASE-T Ethernet feature (FC 0426)
Feature code 0426 occupies one slot in the PCIe I/O drawer. It features two ports that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port has an SFP with an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle is required to be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 m (328 ft). The SFP allows a concurrent repair or replace action.
 
OSA-Express6S 1000BASE-T adapters¹: OSA-Express6S 1000BASE-T adapters (FC 0426) are the last generation of OSA 1000BASE-T adapters to support connections operating at 100 Mbps link speed. Future OSA-Express 1000BASE-T adapter generations will support operation only at 1000 Mbps (1 Gbps) link speed.

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
The OSA-Express6S 1000BASE-T Ethernet feature supports auto-negotiation when attached to an Ethernet router or switch. If you allow the LAN speed and duplex mode to default to auto-negotiation, the OSA-Express port and the attached router or switch auto-negotiate the LAN speed and duplex mode settings between them. They then connect at the highest common performance speed and duplex mode of interoperation. If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express port examines the signal that it is receiving and connects at the speed and duplex mode of the device at the other end of the cable.
The OSA-Express6S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, OSE, or OSM. Non-QDIO operation mode requires CHPID type OSE.
 
Note: CHPID type OSN is not supported on OSA-Express6S 1000BASE-T Ethernet feature for NCP (LP to LP).
The following settings are supported on the OSA-Express6S 1000BASE-T Ethernet feature port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified speed and duplex mode. If this specified speed and duplex mode do not match the speed and duplex mode of the signal on the cable, the OSA-Express port does not connect.
OSA-Express5S
The OSA-Express5S feature is installed in the PCIe I/O drawer. The following OSA-Express5S features can be installed on z14 servers (carry forward only):
OSA-Express5S 10 Gigabit Ethernet LR, FC 0415
OSA-Express5S 10 Gigabit Ethernet SR, FC 0416
OSA-Express5S Gigabit Ethernet LX, FC 0413
OSA-Express5S Gigabit Ethernet SX, FC 0414
OSA-Express5S 1000BASE-T Ethernet, FC 0417
The OSA-Express5S features are listed in Table 4-9.
Table 4-9 OSA-Express5S features for z14
I/O feature | Feature code | Number of ports per feature | Port increment | Maximum number of ports | Maximum number of features | CHPID type
OSA-Express5S 10 GbE LR | 0415 | 1 | 1 | 48 | 48 | OSD, OSX
OSA-Express5S 10 GbE SR | 0416 | 1 | 1 | 48 | 48 | OSD, OSX
OSA-Express5S GbE LX | 0413 | 2 | 2 | 96 | 48 | OSD
OSA-Express5S GbE SX | 0414 | 2 | 2 | 96 | 48 | OSD
OSA-Express5S 1000BASE-T | 0417 | 2 | 2 | 96 | 48 | OSC, OSD, OSE, OSM
OSA-Express5S 10 Gigabit Ethernet LR (FC 0415)
The OSA-Express5S 10 Gigabit Ethernet (GbE) Long Reach (LR) feature includes one PCIe adapter and one port per feature. The port supports CHPID types OSD and OSX.
The 10 GbE feature is designed to support attachment to a single-mode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express5S 10 GbE LR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR transceiver. The transceivers at both ends must be the same (LR to LR).
The OSA-Express5S 10 GbE LR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
OSA-Express5S 10 Gigabit Ethernet SR (FC 0416)
The OSA-Express5S 10 GbE Short Reach (SR) feature includes one PCIe adapter and one port per feature. The port supports CHPID types OSD and OSX. When defined as CHPID type OSX, the 10 GbE port provides connectivity and access control to the IEDN from IBM z14™ or z13 servers to zBX.
The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express5S 10 GbE SR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an SR transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express5S 10 GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express5S Gigabit Ethernet LX (FC 0413)
The OSA-Express5S GbE LX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express5S GbE LX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an LX transceiver. The sending and receiving transceivers must be the same (LX to LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
OSA-Express5S Gigabit Ethernet SX (FC 0414)
The OSA-Express5S GbE SX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express5S GbE SX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an SX transceiver. The sending and receiving transceivers must be the same (SX to SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express5S 1000BASE-T Ethernet feature (FC 0417)
Feature code 0417 occupies one slot in the PCIe I/O drawer. It has two ports that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port has an SFP with an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle is required to be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 m (328 ft). The SFP allows a concurrent repair or replace action.
The OSA-Express5S 1000BASE-T Ethernet feature supports auto-negotiation when attached to an Ethernet router or switch. If you allow the LAN speed and duplex mode to default to auto-negotiation, the OSA-Express port and the attached router or switch auto-negotiate the LAN speed and duplex mode settings between them. They then connect at the highest common performance speed and duplex mode of interoperation.
If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express port examines the signal that it is receiving and connects at the speed and duplex mode of the device at the other end of the cable.
The OSA-Express5S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, OSE, or OSM. Non-QDIO operation mode requires CHPID type OSE.
The following settings are supported on the OSA-Express5S 1000BASE-T Ethernet feature port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified speed and duplex mode. If this specified speed and duplex mode do not match the speed and duplex mode of the signal on the cable, the OSA-Express port does not connect.
OSA-Express4S features
This section describes the characteristics of all OSA-Express4S features that are supported on z14 servers.
The OSA-Express4S feature is installed in the PCIe I/O drawer. Only OSA-Express4S 1000BASE-T Ethernet, FC #0408 is supported on IBM z14™ servers as a carry forward during an MES.
The characteristics of the OSA-Express4S features that are supported on IBM z14™ are listed in Table 4-10.
Table 4-10 OSA-Express4S features on IBM z14™ servers
I/O feature | Feature code | Number of ports per feature | Port increment | Maximum number of ports (CHPIDs) | Maximum number of features | CHPID type
OSA-Express4S 1000BASE-T | 0408 | 2 | 2 | 96 | 48 | OSC, OSD, OSE, OSM
OSA-Express4S 1000BASE-T Ethernet feature (FC 0408)
The OSA-Express4S 1000BASE-T Ethernet feature occupies one slot in the PCIe I/O drawer. It includes two ports that connect to a 1000 Mbps (1 Gbps), 100 Mbps, or 10 Mbps Ethernet LAN. Each port has an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle must be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 m (328 ft).
The OSA-Express4S 1000BASE-T Ethernet feature supports auto-negotiation when attached to an Ethernet router or switch. If you allow the LAN speed and duplex mode to default to auto-negotiation, the OSA-Express port and the attached router or switch auto-negotiate the LAN speed and duplex mode settings between them. They connect at the highest common performance speed and duplex mode of interoperation.
If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express port examines the signal that it is receiving. It then connects at the speed and duplex mode of the device at the other end of the cable.
The OSA-Express4S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, OSE, or OSM. Non-QDIO operation mode requires CHPID type OSE.
The following settings are supported on the OSA-Express4S 1000BASE-T Ethernet feature port:
Auto-negotiate
10 Mbps half-duplex or full-duplex
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified speed and duplex mode. If these settings do not match the speed and duplex mode of the signal on the cable, the OSA-Express port does not connect.
25GbE RoCE Express2
25GbE RoCE Express2 (FC 0430) is installed in the PCIe I/O drawer and is supported on IBM z14™. The 25GbE RoCE Express2 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
On IBM z14™, both ports are supported by z/OS and can be shared by up to 126 partitions (LPARs) per PCHID. The 25GbE RoCE Express2 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 25GbE switch are supported.
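As an illustration of the FUNCTION definition that is described above, the following hypothetical sketch assigns one virtual function of a 25GbE RoCE Express2 port to an LPAR. The FID, VF, PCHID, PORT, PNETID, and partition values are placeholders, and the TYPE value is an assumption; one FUNCTION statement is required for each VF that an LPAR uses.
* Hypothetical 25GbE RoCE Express2 virtual function (placeholder values)
FUNCTION FID=060,VF=1,PCHID=15C,PORT=1,PNETID=PNETA,PART=((LP01)),TYPE=ROC2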
 
Switch configuration for RoCE Express2: If the IBM 25GbE RoCE Express2 features are connected to 25GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 25GbE RoCE Express2 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group.
The maximum supported unrepeated distance, point-to-point, is 100 meters (328 ft). A client-supplied cable is required. Two types of cables can be used for connecting the port to the selected 25GbE switch or to the 25GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector; supports 70 meters (229 ft)
OM4 50-micron multimode fiber optic cable that is rated at 4700 MHz-km that ends with an LC Duplex connector; supports 100 meters (328 ft)
10GbE RoCE Express2
The 10GbE RoCE Express2 feature (FC 0412) is installed in the PCIe I/O drawer and is supported on IBM z14™. The 10GbE RoCE Express2 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
On IBM z14™ servers, both ports are supported by z/OS and can be shared by up to 126 partitions (LPARs) per PCHID. The 10GbE RoCE Express2 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 10 GbE switch are supported.
 
Switch configuration for RoCE Express2: If the IBM 10GbE RoCE Express2 features are connected to 10 GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 ft). A client-supplied cable is required. Three types of cables can be used for connecting the port to the selected 10 GbE switch or to the 10GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector (supports 300 meters (984 ft))
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an LC Duplex connector (supports 82 meters (269 ft))
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with an LC Duplex connector (supports 33 meters (108 ft))
10GbE RoCE Express
The 10GbE RoCE Express feature (FC #0411) is installed in the PCIe I/O drawer. This feature is supported on z13, z13s, zEC12, and zBC12 servers and can be carried forward during an MES upgrade to a z14.
The 10GbE RoCE Express is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
On zEC12 and zBC12, each feature can be dedicated to an LPAR only, and z/OS can use only one of the two ports. On z14 and z13, both ports are supported by z/OS, and the feature can be shared by up to 31 partitions (LPARs) per PCHID.
The 10GbE RoCE Express feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Point-to-point connections and switched connections with an enterprise-class 10 GbE switch are supported.
 
Switch configuration for RoCE: If the IBM 10GbE RoCE Express features are connected to 10 GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 ft). A client-supplied cable is required. The following types of cables can be used for connecting the port to the selected 10 GbE switch or to the 10GbE RoCE Express feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector (supports 300 meters (984 ft))
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an LC Duplex connector (supports 82 meters (269 ft))
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with an LC Duplex connector (supports 33 meters (108 ft))
Shared Memory Communications functions
The Shared Memory Communication (SMC) capabilities of the z14 help optimize the communications between applications for server-to-server (SMC-R) or LPAR-to-LPAR (SMC-D) connectivity.
SMC-R provides application transparent use of the RoCE-Express feature. This feature reduces the network overhead and latency of data transfers, which effectively offers the benefits of optimized network performance across processors.
SMC-D was introduced with the Internal Shared Memory (ISM) virtual PCI function. ISM is a virtual PCI network adapter that enables direct access to shared virtual memory, providing a highly optimized network interconnect for IBM Z intra-CPC communications.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes. SMC-D completes the overall SMC solution, which provides synergy with SMC-R.
SMC-R and SMC-D use shared memory architectural concepts, which eliminates the TCP/IP processing in the data path, yet preserves TCP/IP Qualities of Service for connection management purposes.
Internal Shared Memory
ISM is a function that is supported by IBM z14™, z13, and z13s machines. It is the firmware that provides connectivity by using shared memory access between multiple operating system images within the same CPC. ISM creates virtual adapters with shared memory that is allocated for each OS image.
ISM is defined by the FUNCTION statement with a virtual CHPID (VCHID) in the hardware configuration definition (HCD)/IOCDS. Identified by the PNETID parameter, each ISM VCHID defines an isolated, internal virtual network for SMC-D communication, without any hardware component required. Virtual adapters are defined by virtual function (VF) statements. Multiple LPARs can access the same virtual network for SMC-D data exchange by associating their VFs with the same VCHID.
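As a minimal sketch of such a definition (all values are placeholders and the TYPE value is an assumption), two LPARs that should exchange SMC-D traffic over the same internal network could each be given a virtual function on the same ISM VCHID and PNETID:
* Hypothetical ISM definitions: two VFs on one VCHID/PNETID (placeholder values)
FUNCTION FID=1700,VF=1,VCHID=7C0,PNETID=PNETA,PART=((LP01)),TYPE=ISM
FUNCTION FID=1701,VF=2,VCHID=7C0,PNETID=PNETA,PART=((LP02)),TYPE=ISM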
Applications that use HiperSockets can benefit from reduced network latency and CPU consumption, and from improved performance, by using SMC-D over ISM.
IBM z14™ and z13 servers support up to 32 ISM VCHIDs per CPC. Each VCHID supports up to 255 VFs, with a total maximum of 8,000 VFs.
For more information about the SMC-D and ISM, see Appendix D, “Shared Memory Communications” on page 475.
HiperSockets
The HiperSockets function of IBM z14™ servers provides up to 32 high-speed virtual LAN attachments.
 
HiperSockets IOCP definitions on IBM z14™: A parameter was added for HiperSockets IOCP definitions on IBM z14™ and z13 servers. Therefore, the IBM z14™ IOCP definitions must be migrated to support the HiperSockets definitions (CHPID type IQD).
On IBM z14™ and z13 servers, the CHPID statement of HiperSockets devices requires the keyword VCHID. VCHID specifies the virtual channel identification number that is associated with the channel path. The valid range is 7E0 - 7FF. A hypothetical example follows this note.
VCHID is not valid on Z servers before z13.
For more information, see IBM Z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7163.
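As a hypothetical example of the definition that is described in the note above (the CHPID number, access list, and VCHID value are placeholders), a HiperSockets network could be defined as follows:
* Hypothetical HiperSockets (IQD) definition with the required VCHID keyword
CHPID PATH=(CSS(0,1),F4),SHARED,TYPE=IQD,VCHID=7E0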
HiperSockets can be customized to accommodate varying traffic sizes. Because HiperSockets does not use an external network, it can free up system and network resources. This advantage can help eliminate attachment costs and improve availability and performance.
HiperSockets eliminates the need to use I/O subsystem operations and traverse an external network connection to communicate between LPARs in the same z14 server. HiperSockets offers significant value in server consolidation when connecting many virtual servers. It can be used instead of certain coupling link configurations in a Parallel Sysplex.
HiperSockets internal networks support the following transport modes:
Layer 2 (link layer)
Layer 3 (network or IP layer)
Traffic can be IPv4 or IPv6, or non-IP, such as AppleTalk, DECnet, IPX, NetBIOS, or SNA.
HiperSockets devices are protocol-independent and Layer 3 independent. Each HiperSockets device (Layer 2 and Layer 3 mode) features its own Media Access Control (MAC) address. This address allows the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support helps facilitate server consolidation, and can reduce complexity and simplify network configuration. It also allows LAN administrators to maintain the mainframe network environment similarly to non-mainframe environments.
Packet forwarding decisions are based on Layer 2 information instead of Layer 3. The HiperSockets device can run automatic MAC address generation to create uniqueness within and across LPARs and servers. The use of Group MAC addresses for multicast is supported, as are broadcasts to all other Layer 2 devices on the same HiperSockets network.
Datagrams are delivered only between HiperSockets devices that use the same transport mode. A Layer 2 device cannot communicate directly to a Layer 3 device in another LPAR network. A HiperSockets device can filter inbound datagrams by VLAN identification, the destination MAC address, or both.
Analogous to the Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors, or multicast routers. This configuration enables the creation of high-performance and high-availability link layer switches between the internal HiperSockets network and an external Ethernet network. It also can be used to connect to the HiperSockets Layer 2 networks of different servers.
HiperSockets Layer 2 on IBM z14™ and z13 servers is supported by Linux on Z, and by z/VM for Linux guest use.
IBM z14™ supports the HiperSockets Completion Queue function that is designed to allow HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary. This feature combines ultra-low latency with more tolerance for traffic peaks.
With the asynchronous support, data can be temporarily held until the receiver has buffers that are available in its inbound queue during high volume situations. The HiperSockets Completion Queue function requires the following applications at a minimum⁶:
z/OS V1.13
Linux on Z distributions:
 – Red Hat Enterprise Linux (RHEL) 6.2
 – SUSE Linux Enterprise Server (SLES) 11 SP2
 – Ubuntu 16.04 LTS
z/VSE V5.1.1⁷
z/VM V6.2⁸ with maintenance
In z/VM (supported versions), the virtual switch function is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to communicate directly with the following systems:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
4.7.4 Parallel Sysplex connectivity
Coupling links are required in a Parallel Sysplex configuration to provide connectivity from the z/OS images to the coupling facility (CF). A properly configured Parallel Sysplex provides a highly reliable, redundant, and robust IBM Z technology solution to achieve near-continuous availability. A Parallel Sysplex is composed of one or more z/OS operating system images that are coupled through one or more CFs.
Coupling links
The type of coupling link that is used to connect a CF to an operating system LPAR is important. The link performance has a significant effect on response times and coupling processor usage. For configurations that cover large distances, the time that is spent on the link can be the largest part of the response time.
The following links are available to connect an operating system LPAR to a CF:
Integrated Coupling Adapter (ICA SR) for short distance connectivity, which is defined as CHPID type CS5. The ICA SR can be used only for coupling connectivity between z14, z13, and z13s servers. It does not support connectivity to zEC12 or zBC12 servers, and it cannot be connected to HCA3-O or HCA3-O LR coupling fanouts.
The ICA SR supports distances up to 150 m (492 ft) and a link data rate of 8 GBps. OM3 fiber optic cable is used for distances up to 100 m (328 ft), and OM4 for distances up to 150 m (492 ft). ICA SR supports four CHPIDs per port and seven subchannels (devices) per CHPID. ICA SR supports transmission of Server Time Protocol (STP) messages. A hypothetical IOCP sketch for an ICA SR CHPID is shown after this list.
Parallel Sysplex that uses IFB 12x connects z14, z13, z13s, zEC12, and zBC12 servers. 12x IFB coupling links are fiber optic connections that support a maximum distance of up to 150 m (492 ft). IFB coupling links are defined as CHPID type CIB. IFB supports transmission of STP messages.
Parallel Sysplex that uses InfiniBand 1x Long Reach (IFB LR) connects z14, z13, z13s, zEC12, and zBC12. 1x InfiniBand coupling links are fiber optic connections that support a maximum unrepeated distance of up to 10 km (6.2 miles), and up to 100 km (62 miles) with an IBM Z qualified DWDM. IFB LR coupling links are defined as CHPID type CIB. IFB LR supports transmission of STP messages.
 
IBM z14 ZR1 (M/T 3907) coupling connectivity: InfiniBand features are not supported (nor available) on IBM z14 ZR1. z14 ZR1 supports only ICA SR and CE LR for sysplex coupling connectivity.
Coupling Express Long Reach: Coupling Express LR (FC #0433) is recommended for long-distance coupling among IBM z14™, z13, and z13s servers. It supports a maximum unrepeated distance of 10 km (6.2 miles) and up to 100 km (62 miles) with a qualified DWDM. CE LR coupling links are defined as CHPID type CL5. CE LR uses the same 9 µm single-mode fiber cable as 1x IFB.
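To illustrate the ICA SR definition that is referenced in the list above, the following hypothetical sketch defines a shared CHPID of type CS5. The CHPID number, AID, and PORT values are placeholders, and the choice of keywords is an assumption; the connection to the peer CHPID (through HCD, or through the CPATH and CSYSTEM keywords) is omitted here.
* Hypothetical ICA SR coupling CHPID (type CS5); all values are placeholders
* AID identifies the fanout adapter, PORT the port on that fanout
CHPID PATH=(CSS(0),E0),SHARED,TYPE=CS5,AID=00,PORT=1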
The coupling link options are listed in Table 4-11.
Table 4-11 Coupling link options that are supported on z14
Type | Description | Use for connecting | Link rate | Distance | z14 maximum number of ports
IFB | 12x InfiniBand (HCA3-O)¹ | z14/z13/z13s/zEC12/zBC12 to z14/z13/z13s/zEC12/zBC12 | 6 GBps | 150 meters (492 feet) | 32
IFB LR | 1x IFB (HCA3-O LR) | z14/z13/z13s/zEC12/zBC12 to z14/z13/z13s/zEC12/zBC12 | 5.0 Gbps | 10 km unrepeated (6.2 miles), 100 km repeated (62 miles) | 64
CE LR | Coupling Express LR | z14/z13/z13s to z14/z13/z13s | 10 Gbps | 10 km unrepeated (6.2 miles), 100 km repeated (62 miles) | 64
ICA SR | Integrated Coupling Adapter | z14/z13/z13s to z14/z13/z13s | 8 GBps | 150 meters (492 feet) | 80
IC | Internal Coupling channel | Internal communication | Internal speeds | N/A | 40

1 12x IFB3 protocol supports a maximum of four CHPIDs and connects to the other HCA3-O port. When connected to a HCA2-O port, 12x IFB protocol is used. The protocol is auto-configured when conditions are met for IFB3. For more information, see 4.5.3, “HCA3-O (12x IFB) fanout (FC 0171)” on page 156.
The maximum number of combined external coupling links (active CE LR, ICA SR, and IFB LR links) is 144 per IBM z14™ server. IBM z14™ servers support up to 256 coupling CHPIDs per CPC. A coupling link support summary for z14 is shown in Figure 4-6.
Figure 4-6 z14 Parallel Sysplex coupling connectivity
When defining IFB coupling links (CHPID type CIB), HCD defaults to seven subchannels. A total of 32 subchannels are supported only on HCA2-O LR (1x IFB) and HCA3-O LR (1x IFB) on zEC12 and later, when both sides of the connection use the 1x IFB protocol.
 
Sysplex Coupling and Timing Connectivity: IBM z14 M0x (M/T 3906) supports N-2 sysplex connectivity (z14 M0x, z14 ZR1, z13, z13s, zEC12, and zBC12), while IBM z14 ZR1 supports only N-1 sysplex connectivity (z14 M0x, z14 ZR1, z13, and z13s).
In a Parallel Sysplex configuration, z/OS and CF images can run on the same or on separate servers. There must be at least one CF that is connected to all z/OS images, even though other CFs can be connected only to selected z/OS images.
Two CF images are required for system-managed CF structure duplexing. In this case, each z/OS image must be connected to both duplexed CFs.
To eliminate any single points of failure in a Parallel Sysplex configuration, have at least the following components:
Two coupling links between the z/OS and CF images.
Two CF images not running on the same server.
One stand-alone CF. If using system-managed CF structure duplexing or running with resource sharing only, a stand-alone CF is not mandatory.
Coupling link features
IBM z14™ server supports the following coupling link features:
HCA3-O fanout for 12x InfiniBand, FC 0171
HCA3-O LR fanout for 1x InfiniBand, FC 0170
ICA SR fanout, FC 0172
CE LR adapter, FC 0433
Extended distance support
For more information about extended distance support, see System z End-to-End Extended Distance Guide, SG24-8047.
Internal coupling links
IC links are LIC-defined links that connect a CF to a z/OS LPAR in the same server. These links are available on all IBM Z servers. The IC link is a Z server coupling connectivity option. It enables high-speed, efficient communication between a CF partition and one or more z/OS LPARs that run on the same server. The IC is a linkless connection (implemented in Licensed Internal Code (LIC)), and so does not require any hardware or cabling.
An IC link is a fast coupling link that uses memory-to-memory data transfers. Although IC links do not have PCHID numbers, they do require CHPIDs.
IC links require an ICP channel path definition at the z/OS and the CF end of a channel connection to operate in peer mode. The links are always defined and connected in pairs. The IC link operates in peer mode, and its existence is defined in HCD/IOCP.
IC links feature the following attributes:
Operate in peer mode (channel type ICP) on IBM Z servers.
Provide the fastest connectivity, which is faster than any external link alternatives.
Result in better coupling efficiency than with external links, which effectively reduces the server cost that is associated with Parallel Sysplex technology.
Can be used in test or production configurations, and reduce the cost of moving into Parallel Sysplex technology while also enhancing performance and reliability.
Can be defined as spanned channels across multiple CSSs.
Are available for no extra fee (no feature code). Employing ICFs with IC channels results in considerable cost savings when you are configuring a cluster.
IC links are enabled by defining channel type ICP. A maximum of 32 IC channels can be defined on a Z server.
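As a minimal, hypothetical sketch of such a pair (the CHPID numbers are placeholders and the CPATH usage is an assumption), two ICP CHPIDs could be defined and connected to each other as follows:
* Hypothetical internal coupling (IC) link: two ICP CHPIDs connected as a pair
CHPID PATH=(CSS(0),F0),SHARED,TYPE=ICP,CPATH=(CSS(0),F1)
CHPID PATH=(CSS(0),F1),SHARED,TYPE=ICP,CPATH=(CSS(0),F0)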
Migration considerations
Upgrading from previous generations of IBM Z servers in a Parallel Sysplex to IBM z14™ servers in that same Parallel Sysplex requires proper planning for coupling connectivity. Planning is important because of the change in the supported type of coupling link adapters and the number of available fanout slots of the z14 CPC drawer, as compared to the number of available fanout slots of the processor books of the previous generation Z servers, such as zEC12.
IBM z14™ does not support ISC-3 links, HCA2-O, or HCA2-O (LR).
 
HCA3-O link compatibility: HCA3-O (LR) links can connect to HCA2-O (LR) on zEC12 and zBC12.
z196 and z114 are not supported in the same Parallel Sysplex or STP CTN with IBM z14™.
The ICA SR fanout provides short-distance connectivity to another IBM z14™/z13s/z13 server. For more information, see Table 4-11 on page 185.
The CE LR adapter provides long-distance connectivity to another IBM z14™/z13s/z13 server. For more information, see “Coupling links” on page 185.
The z14 server fanout slots in the CPC drawer provide coupling link connectivity through the ICA SR and IFB fanout cards. In addition to coupling links for Parallel Sysplex, the fanout slots also provide connectivity to the PCIe I/O drawers (PCIe fanouts).
Up to 10 PCIe and 4 IFB fanout cards can be installed in each CPC drawer, as shown in Figure 4-7.
Figure 4-7 CPC drawer front view showing the coupling links
Previous generations of IBM Z platforms, in particular z196 and zEC12, use processor books, which provide connectivity for up to eight InfiniBand fanouts per book.
As an example of a possible migration case, assume that a one-book zEC12 is used as a stand-alone CF with all eight fanouts used for IFB connectivity. In a 1:1 link migration scenario, this server cannot be upgraded to a single-CPC-drawer z14 because the IBM z14™ server cannot accommodate more than four InfiniBand fanouts in a single CPC drawer. For more information, see 4.5.5, “Fanout considerations” on page 158.
In this case, a second CPC drawer is needed to fulfill all IFB connectivity, as shown in Figure 4-8.
Figure 4-8 HCA3-O Fanouts: z14 versus zEC12 servers
It is beyond the scope of this book to describe all possible migration scenarios. Always consult with subject matter experts to help you to develop your migration strategy.
The following considerations can help you assess possible migration scenarios. The objective of this list is to enable migration to IBM z14™ servers, support legacy coupling where essential, and adopt ICA SR where possible to avoid the need for more CPC drawers and other possible migration issues:
The IBM zEnterprise EC12 and BC12 are the last generation of Z servers to support ISC-3, 12x HCA2-O, and 1x HCA2-O LR. They also are the last Z servers that can be part of a Mixed Coordinated Timing Network (CTN).
Consider Long Distance Coupling requirements first:
 – HCA3-O 1x or CE LR are the long-distance coupling links that are available on IBM z14™ servers.
 – ICA SR or HCA3-O 12x should be used for short distance coupling requirements.
ISC-3 Migration (IBM z14™/z13 servers do not support ISC-3):
 – Evaluate current ISC-3 usage (long- and short-distance, coupling data, or timing only) to determine how to fulfill ISC-3 requirements with the links that are available on IBM z14™/z13 servers.
 – You can migrate from ISC-3 to CE LR, ICA SR, 12x InfiniBand, or 1x InfiniBand on IBM z14™/z13 servers.
 – 1:1 Mapping of ISC-3 to Coupling over InfiniBand. On previous servers, the HCA2-C fanouts enable ISC-3 coupling in the I/O Drawer. Two HCA2-C fanouts can be replaced by two 1x fanouts (eight 1x links) or two 12x fanouts (four 12x links).
 – ISC-3 supports one CHPID/link. Consolidate ISC-3 CHPIDs into CE LR, ICA SR or IFB, and use multiple CHPIDs per link.
Evaluate configurations for opportunities to eliminate or consolidate InfiniBand links:
 – Eliminate any redundant links. Two physical links between CPCs are the minimum requirement from a reliability, availability, and serviceability (RAS) perspective.
 – Share logical CHPIDs on a physical IFB link connection to reduce the usage of IFB links in z14 servers (even multiple sysplexes can share a single link).
 – Coupling Link Analysis: Capacity Planning tools and services can help.
For z14/z13 to z14/z13, use CE LR and ICA SR as much as possible, and use InfiniBand links for z13/zEC12/zBC12 to z13/zEC12/zBC12 connectivity.
Install all of the ICA SR links that are required to fulfill future short distance coupling requirements:
 – When upgrading a CPC to a IBM z14™ server, configure the IBM z14™ server with all the ICA SR coupling links that eventually will be needed (that is, avoid loose piece MES with ICA SR links) in your Parallel Sysplex configuration.
 – Consider a plan-ahead configuration. In a multi-CPC configuration, the final approach can be to establish z14-to-z14 connections mainly through ICA SR connectivity, even if those links are not used immediately. During the zEC12/zBC12 migration phase, most coupling connectivity still uses InfiniBand; however, after the peer systems are migrated to IBM z14™ servers, coupling connectivity can be moved to the ICA SR links.
Upgrade CPCs with fewer coupling constraints first:
 – Consider upgrading CPCs to have sufficient IFB links in the target z14 configuration first (for example, multi-CPC drawer CPCs).
 – Test the new CE LR or ICA SR link on the least constrained CPC (for example, z/OS Host CPCs that have the lowest number of links). For the CPCs that are involved in this test that do not have a CF, you might need to add a CF LPAR and ICF engine to one of the CPCs.
 – When migrating CPCs with more coupling links to a IBM z14™ server, begin by using enough CE LR or ICA SR links in place of Coupling over InfiniBand (for example, half ICA SR, half 12x) to maintain the CPC footprint (that is, avoid extra CPC drawers).
Consider replacing two servers that are close to each other at the same time. Assess the risk and, if it is acceptable, immediately replace 12x IFB with ICA SR links and 1x IFB with CE LR links between the z14 peers, which frees InfiniBand links for connecting to older machines, such as zEC12 and zBC12.
Consider temporary performance reduction when consolidating more than four CHPIDs per IFB link. After the peer system is migrated to a z14 or z13 server, the InfiniBand links can be migrated to CE LR or ICA SR links.
Coupling links and Server Time Protocol
All external coupling links can be used to pass time synchronization signals by using STP. STP is a message-based protocol in which timing messages are passed over data links between servers. The same coupling links can be used to exchange time and CF messages in a Parallel Sysplex.
 
Sysplex Coupling and Timing Connectivity: IBM z14 M0x (M/T 3906) supports N-2 sysplex connectivity (z14 M0x, z14 ZR1, z13, z13s, zEC12, and zBC12), while IBM z14 ZR1 supports only N-1 sysplex connectivity (z14 M0x, z14 ZR1, z13, and z13s).
The use of the coupling links to exchange STP messages has the following advantages:
By using the same links to exchange STP messages and CF messages in a Parallel Sysplex, STP can scale with distance. Servers that are exchanging messages over short distances, such as IFB or ICA SR links, can meet more stringent synchronization requirements than servers that exchange messages over long IFB LR links (distances up to 100 km (62 miles)). This advantage is an enhancement over the IBM Sysplex Timer implementation, which does not scale with distance.
Coupling links also provide the connectivity that is necessary in a Parallel Sysplex. Therefore, a potential benefit can be realized of minimizing the number of cross-site links that is required in a multi-site Parallel Sysplex.
Between any two servers that are intended to exchange STP messages, configure each server so that at least two coupling links exist for communication between the servers. This configuration prevents the loss of one link from causing the loss of STP communication between the servers. If a server does not have a CF LPAR, timing-only links can be used to provide STP connectivity.
The z14 server does not support attachment to the IBM Sysplex Timer. An IBM z14™ server cannot be added into a Mixed CTN; it can participate in an STP-only CTN only.
STP enhancements on z14
 
 
Important: For more information about configuring an STP CTN with three or more servers, see the Important Considerations for STP server role assignments white paper that is available at the IBM Techdocs Library website.
If the guidelines are not followed, all of the servers in the CTN might become unsynchronized. This condition results in a sysplex-wide outage.
STP on z14 features the following enhancements:
CTN split and CTN merge
With HMC 2.14.1, Coordinated Timing Network split and merge are supported. CTN split provides a system-assisted method for dynamically splitting a CTN, while CTN merge automates the time synchronization process and the STP role assignments in the merged CTN.
Extra stratum level
Before z14, the limit was stratum level 3. The extra stratum level allows CPCs to operate as part of a CTN at stratum level 4, which can avoid the extra complexity and expense of system reconfiguration.
 
Warning: This extra stratum level should be used only as a temporary state during reconfiguration. Customers should not run machines at stratum level 4 for extended periods because of the lower quality of the time synchronization.
Graphical display of a Coordinated Timing Network (CTN)
The graphical display improves the user interface to the STP controls. This visual display of the CTN status provides a clearer view of the CTN and can help avoid outages, for example, a user intentionally taking down the Current Time Server (CTS) without realizing that the Backup Time Server (BTS) is also down.
For more information about STP configuration, see the following resources:
Server Time Protocol Planning Guide, SG24-7280
Server Time Protocol Implementation Guide, SG24-7281
Server Time Protocol Recovery Guide, SG24-7380
Pulse per second input
A pulse per second (PPS) signal can be received from an external time source (ETS) device. One PPS port is available on each of the two oscillator cards, which are installed on a small backplane that is mounted in the rear of the z14 frame.
Connections to all of the CEC drawers provide redundancy for continued operation and concurrent maintenance when a single oscillator card fails. Each oscillator card includes a Bayonet Neill-Concelman (BNC) connector for PPS support, which allows the two cards to attach to two different ETSs. Two PPS connections from two different ETSs are preferable for redundancy.
The time accuracy of an STP-only CTN is improved by adding an ETS device with the PPS output signal. STP tracks the highly stable, accurate PPS signal from the ETS and maintains an accuracy of 10 µs as measured at the PPS input of the z14 server. If STP uses an NTP server without PPS, a time accuracy of 100 milliseconds to the ETS is maintained. ETSs with PPS output are available from various vendors that offer network timing solutions.
4.8 Cryptographic functions
Cryptographic functions are provided by the CP Assist for Cryptographic Function (CPACF) and the PCI Express cryptographic adapters. IBM z14™ servers support the Crypto Express6S feature.
4.8.1 CPACF functions (FC 3863)
FC #3863⁹ is required to enable CPACF functions.
4.8.2 Crypto Express6S feature (FC 0893)
Crypto Express6S is a new feature on IBM z14™ servers. On the initial configuration, a minimum of two features are installed. The number of features then increases one at a time up to a maximum of 16 features. Each Crypto Express6S feature holds one PCI Express cryptographic adapter. Each adapter can be configured by the installation as a Secure IBM Common Cryptographic Architecture (CCA) coprocessor, as a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or as an accelerator.
The tamper-resistant hardware security module, which is contained on the Crypto Express6S feature, is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification. It supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as an IBM CCA coprocessor).
The following CCA compliance levels are available:
Non-compliant (default)
PCI-HSM 2016
PCI-HSM 2016 (migration, key tokens while migrating to compliant)
The following EP11 compliance levels are available (Crypto Express6S and Crypto Express5S):
FIPS 2009 (default)
FIPS 2011
BSI 2009
BSI 2011
Each Crypto Express6S feature occupies one I/O slot in the PCIe I/O drawer and has no CHPID assigned. However, it has one PCHID.
4.8.3 Crypto Express5S feature (FC 0890)
Crypto Express5S was introduced with the z13 servers. On the initial configuration, a minimum of two features are installed. The number of features then increases one at a time up to a maximum of 16 features. Each Crypto Express5S feature holds one PCI Express cryptographic adapter. Each adapter can be configured by the installation as a Secure IBM CCA coprocessor, as a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or as an accelerator.
Each Crypto Express5S feature occupies one I/O slot in the PCIe I/O drawer and has no CHPID assigned. However, it has one PCHID.
4.9 Integrated firmware processor
The integrated firmware processor (IFP) was introduced with the zEC12 and zBC12 servers. The IFP is dedicated to managing a new generation of PCIe features. The following native PCIe features are installed in the PCIe I/O drawer:
zEDC Express
25GbE RoCE Express2
10GbE RoCE Express2
10GbE RoCE Express
zHyperLink Express
All native PCIe features should be ordered in pairs for redundancy. The features are assigned to one of the four resource groups (RGs) that run on the IFP, according to their physical location in the PCIe I/O drawer. The RGs provide management and virtualization functions.
If two features of the same type are installed, one is always managed by resource group 1 (RG 1) or resource group 3 (RG 3), while the other is managed by resource group 2 (RG 2) or resource group 4 (RG 4). This configuration provides redundancy if one of the features or resource groups needs maintenance or has a failure.
The IFP and RGs support the following infrastructure management functions:
Firmware update of adapters and resource groups
Error recovery and failure data collection
Diagnostic and maintenance tasks
4.10 zEDC Express
zEDC Express is an optional feature (FC #0420) that is available on IBM z14™, z13, zEC12, and zBC12 servers. It is designed to provide hardware-based acceleration for data compression and decompression.
The IBM zEnterprise Data Compression (zEDC) acceleration capability in z/OS and the zEDC Express feature is designed to help improve cross-platform data exchange, reduce CPU consumption, and save disk space.
The feature is installed exclusively in the PCIe I/O drawer. Up to 16 features can be installed in the system. One PCIe adapter (compression coprocessor) is available per feature, which implements compression as defined by RFC 1951 (DEFLATE).
The zEDC Express feature can be shared by up to 15 LPARs.
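For illustration only, the following hypothetical FUNCTION statement assigns one zEDC Express virtual function to an LPAR. The FID, VF, PCHID, and partition values are placeholders, and depending on the IOCP level a function type keyword might also be required; consult the IOCP User's Guide for the exact syntax.
* Hypothetical zEDC Express virtual function (placeholder values); up to 15
* LPARs can share one feature, each through its own FID/VF pair
FUNCTION FID=080,VF=1,PCHID=1BC,PART=((LP01))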
 

1 OSA-Express4S, 5S and 6S 1000BASE-T features do not have optics (copper only; RJ45 connectors).
2 On special request. For more information, see the Parallel Sysplex page of the IBM IT infrastructure website.
3 Certain I/O features do not have external ports, such as Crypto Express and zEDC.
4 The zHyperLink feature operates together with a FICON channel.
5 Check with the switch provider for their support statement.
6 Minimum OS support for z14 can differ. For more information, see Chapter 7, “Operating system support” on page 243.
7 z/VSE 5.1.1 is end of support.
8 z/VM V6.2 and V6.3 are no longer supported. z/VM V6.4 or newer is needed.
9 Subject to export regulations.