Central processor complex I/O structure
This chapter describes the I/O system structure and connectivity options that are available on the IBM z15 servers.
 
Note: Throughout this chapter, z15 refers to IBM z15 Model T01 (Machine Type 8561).
This chapter includes the following topics:
4.1 Introduction to I/O infrastructure
This section describes the I/O features that are available on the IBM z15 server. The z15 server supports PCIe+ I/O drawers only.
I/O cage, I/O drawer, and PCIe I/O drawer are not supported.
 
Note: Throughout this chapter, the terms adapter and card refer to a PCIe I/O feature that is installed in a PCIe+ I/O drawer.
4.1.1 I/O infrastructure
IBM extends the use of industry standards on the IBM Z platform by offering a Peripheral Component Interconnect Express Generation 3 (PCIe Gen3) I/O infrastructure. The PCIe I/O infrastructure that is provided by the central processor complex (CPC) improves I/O capability and flexibility, while allowing for the future integration of PCIe adapters and accelerators.
The PCIe I/O infrastructure in z15 consists of the following components:
PCIe+ Gen3 dual-port fanouts that support a 16 GBps I/O bus for CPC drawer connectivity to the PCIe+ I/O drawers. Each fanout port connects to the PCIe Interconnect Gen3 in a PCIe+ I/O drawer.
PCIe Gen3 features that support coupling links: the Integrated Coupling Adapter Short Reach (ICA SR and ICA SR1.1). Each ICA SR and ICA SR1.1 feature has two ports, and each port supports 8 GBps.
The 8U, 16-slot, two-domain PCIe+ I/O drawer for PCIe I/O features.
The z15 I/O infrastructure provides the following benefits:
The bus that connects the CPC drawer to an I/O domain in the PCIe+ I/O drawer provides 16 GBps of bandwidth.
Up to 32 channels (16 PCIe I/O cards) are supported in the PCIe+ I/O drawer.
Granularity for the storage area network (SAN) and the local area network (LAN):
 – The FICON Express16SA, FICON Express16S+, FICON Express16S, and FICON Express8S features each provide two channels per feature for Fibre Channel connection (FICON), High-Performance FICON on Z (zHPF), and Fibre Channel Protocol (FCP) storage area networks.
 – The Open Systems Adapter (OSA)-Express7S GbE, OSA-Express7S 1000BASE-T, OSA-Express6S GbE, and the OSA-Express6S 1000BASE-T features include two ports each (LAN connectivity); the OSA-Express7S 25GbE SR1.1, OSA-Express7S 25GbE, OSA-Express7S 10GbE, and the OSA-Express6S 10 GbE features have one port each (LAN connectivity).
Native PCIe features (plugged into the PCIe+ I/O drawer):
 – IBM zHyperLink Express 1.1
 – IBM zHyperLink Express
 – 25GbE and 10GbE Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express2.1
 – 25GbE and 10GbE RoCE Express2
 – Coupling Express Long Reach (CE LR) (available also on z14 M0x, z14 ZR1, z13, and z13s)
 – 10 GbE RoCE Express
 – Crypto Express7S (one or two ports)
 – Crypto Express6S
 – Crypto Express5S
4.1.2 PCIe Generation 3
The PCIe Generation 3 uses 128b/130b encoding for data transmission. This configuration reduces the encoding overhead to about 1.54% versus the PCIe Generation 2 overhead of 20% that uses 8b/10b encoding.
The PCIe standard uses a low-voltage differential serial bus. Two wires are used for signal transmission, and a total of four wires (two for transmit and two for receive) form a lane of a PCIe link, which is full-duplex. Multiple lanes can be aggregated into a larger link width. PCIe supports link widths of 1, 2, 4, 8, 12, 16, and 32 lanes (x1, x2, x4, x8, x12, x16, and x32).
The data transmission rate of a PCIe link is determined by the link width (numbers of lanes), the signaling rate of each lane, and the signal encoding rule. The signaling rate of one PCIe Generation 3 lane is eight gigatransfers per second (GTps), which means that nearly 8 gigabits are transmitted per second (Gbps).
 
Note: The I/O infrastructure for z15 is implemented as PCIe Generation 3. The PU chip PCIe interface is PCIe Generation 4 (x16 @ 32 GBps), but the CPC I/O fanout infrastructure externally uses PCIe Generation 3 connectivity.
A PCIe Gen3 x16 link features the following data transmission rates:
The maximum theoretical data transmission rate per lane:
8 Gbps × 128/130 (encoding) = 7.87 Gbps = 984.6 MBps
The maximum theoretical data transmission rate per link:
984.6 MBps × 16 (lanes) = 15.75 GBps
Considering that the PCIe link is full-duplex mode, the data throughput rate of a PCIe Gen3 x16 link is 31.5 GBps (15.75 GBps in both directions).
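The following Python sketch is illustrative only; it simply reproduces the encoding and lane figures that are quoted in this section (it does not model or query real hardware):

# Illustrative check of the PCIe encoding overhead and Gen3 x16 throughput figures.
GEN2_EFFICIENCY = 8 / 10        # 8b/10b encoding: 20% overhead
GEN3_EFFICIENCY = 128 / 130     # 128b/130b encoding: about 1.54% overhead
print(f"Gen2 overhead: {(1 - GEN2_EFFICIENCY) * 100:.2f}%")   # 20.00%
print(f"Gen3 overhead: {(1 - GEN3_EFFICIENCY) * 100:.2f}%")   # 1.54%

GT_PER_SECOND = 8.0             # PCIe Gen3 signaling rate per lane (GT/s)
LANES = 16                      # x16 link
lane_gbps = GT_PER_SECOND * GEN3_EFFICIENCY        # ~7.87 Gbps per lane
lane_mbps = lane_gbps * 1000 / 8                   # ~984.6 MBps per lane
link_gbps = lane_mbps * LANES / 1000               # ~15.75 GBps per direction
print(f"Per lane: {lane_gbps:.2f} Gbps = {lane_mbps:.1f} MBps")
print(f"Per x16 link: {link_gbps:.2f} GBps each way, {2 * link_gbps:.1f} GBps full duplex")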
 
Link performance: The link speeds do not represent the performance of the link. The performance depends on many factors, including latency through the adapters, cable lengths, and the type of workload.
PCIe Gen3 x16 links are used in z15 servers for driving the PCIe+ I/O drawers, and for coupling links for CPC to CPC communications.
 
Note: Unless specified otherwise, PCIe refers to PCIe Generation 3 in remaining sections of this chapter.
4.2 I/O system overview
The z15 I/O characteristics and supported features are described in this section.
4.2.1 Characteristics
The z15 I/O subsystem is designed to provide great flexibility, high availability, and the following excellent performance characteristics:
High bandwidth
 
Link performance: The link speeds do not represent the performance of the link. The performance depends on many factors, including latency through the adapters, cable lengths, and the type of workload.
z15 servers use the PCIe Gen3 protocol to drive the PCIe+ I/O drawers and CPC to CPC (coupling) connections. The I/O bus infrastructure provides a data rate of up to 128 GBps per system (12 PCIe+ Gen3 fanout slots per CPC drawer). For more information about coupling link connectivity, see 4.6.4, “Parallel Sysplex connectivity” on page 196.
Connectivity options:
 – z15 servers can be connected to an extensive range of interfaces, such as FICON/FCP for SAN connectivity, OSA features for LAN connectivity, and zHyperLink Express for low-latency storage connectivity (compared to FICON).
 – For CPC to CPC connections, z15 servers use Integrated Coupling Adapter (ICA SR) and the Coupling Express Long Reach (CE LR). The Parallel Sysplex InfiniBand is not supported.
 – The 25GbE RoCE Express2.1, 10GbE RoCE Express2.1, 25GbE RoCE Express2, 10GbE RoCE Express2, and 10GbE RoCE Express features provide high-speed memory-to-memory data exchange to a remote CPC by using the Shared Memory Communications over RDMA (SMC-R) protocol for TCP (socket-based) communications.
Concurrent I/O upgrade
You can concurrently add I/O features to z15 servers if unused I/O slot positions are available.
Concurrent PCIe+ I/O drawer upgrade
Extra PCIe+ I/O drawers can be installed concurrently if free frame slots for the PCIe+ I/O drawers and PCIe fanouts in the CPC drawer are available.
Dynamic I/O configuration
Dynamic I/O configuration supports the dynamic addition, removal, or modification of the channel path, control units, and I/O devices without a planned outage.
Pluggable optics:
 – The FICON Express16SA, FICON Express16S+, FICON Express16S, FICON Express8S, OSA-Express7S, OSA-Express6S, OSA-Express5S, RoCE Express2.1, RoCE Express2, and RoCE Express features include Small Form-Factor Pluggable (SFP) optics. These optics allow each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.
 – The zHyperLink Express feature uses a fiber optic cable with an MTP connector, and the cable uses a CXP connection to the adapter. The CXP optics are provided with the adapter.
Concurrent I/O card maintenance
Every I/O card that is plugged into a PCIe+ I/O drawer supports concurrent card replacement during a repair action.
4.2.2 Supported I/O features
The following I/O features are supported (max. for each individual adapter type):
Up to 384 FICON Express16SA channels
Up to 384 FICON Express16S+ channels
Up to 384 FICON Express16S channels
Up to 384 FICON Express8S channels
Up to 48 OSA-Express7S 25GbE SR1.1 ports
Up to 48 OSA-Express7S 25GbE SR ports
Up to 48 OSA-Express7S 10GbE ports
Up to 96 OSA-Express7S GbE ports
Up to 96 OSA-Express7S 1000BASE-T ports
Up to 48 OSA-Express6S 10GbE ports
Up to 96 OSA-Express6S GbE ports
Up to 96 OSA-Express6S 1000BASE-T ports
Up to 48 OSA-Express5S 10GbE ports
Up to 96 OSA-Express5S GbE ports
Up to 96 OSA-Express5S 1000BASE-T ports
Up to eight 25GbE RoCE Express2.1 features
Up to eight 25GbE RoCE Express2 features
Up to eight 10GbE RoCE Express2.1 features
Up to eight 10GbE RoCE Express2 features
Up to eight 10GbE RoCE Express features
Up to 16 zHyperLink Express1.1 features
Up to 16 zHyperLink Express features
Up to 48 ICA SR1.1 features with up to 96 coupling links
Up to 48 ICA SR features with up to 96 coupling links
Up to 32 CE LR features with up to 64 coupling links
 
 
Notes: Consider the following points:
The number of I/O features that are supported might be affected by the machine power infrastructure (PDU versus BPA). For PDU models, a maximum of 12 PCIe+ I/O drawers is supported. For BPA models, the maximum number of PCIe+ I/O drawers is 11.
The maximum number of coupling CHPIDs on a z15 server was increased to 384, which is a combination of the following ports (not all combinations are possible; subject to I/O configuration options):
 – Up to 96 ICA SR ports (48 ICA SR features)
 – Up to 64 CE LR ports (32 CE LR features)
IBM Virtual Flash Memory replaces the IBM zFlash Express feature on z15 servers.
zEDC features are not supported.
The maximum combined number of RoCE features that can be installed is 8; that is, any combination of 25GbE RoCE Express2.1, 25GbE RoCE Express2, 10GbE RoCE Express2.1, 10GbE RoCE Express2, and 10GbE RoCE Express (carry forward only) features.
25GbE RoCE Express2 (or 2.1) features should not be configured in the same SMC-R link group with 10GbE RoCE Express2 (or 2.1) or RoCE Express features. However, 10GbE RoCE Express2 (or 2.1) can be mixed with 10 GbE RoCE Express.
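The mixing rule in the last point can be expressed as a short validation check. The following Python sketch is illustrative only; the feature names and the rule come from this section, and the function is not part of any IBM tool:

# Illustrative check of the SMC-R link group mixing rule stated above.
LINK_SPEED = {
    "25GbE RoCE Express2.1": 25,
    "25GbE RoCE Express2": 25,
    "10GbE RoCE Express2.1": 10,
    "10GbE RoCE Express2": 10,
    "10GbE RoCE Express": 10,
}

def valid_smcr_link_group(features):
    # 25GbE and 10GbE RoCE features must not be mixed in the same SMC-R link group;
    # features of the same speed (for example, 10GbE Express2 with 10GbE Express) can be mixed.
    speeds = {LINK_SPEED[name] for name in features}
    return len(speeds) <= 1

print(valid_smcr_link_group(["25GbE RoCE Express2.1", "25GbE RoCE Express2"]))  # True
print(valid_smcr_link_group(["10GbE RoCE Express2", "10GbE RoCE Express"]))     # True
print(valid_smcr_link_group(["25GbE RoCE Express2", "10GbE RoCE Express2"]))    # False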
4.3 PCIe+ I/O drawer
The PCIe+ I/O drawers (see Figure 4-1) are attached to the CPC drawer through a PCIe cable and use PCIe Gen3 as the infrastructure bus within the drawer. The PCIe Gen3 I/O bus infrastructure data rate is up to 16 GBps.
Figure 4-1 Rear view of PCIe+ I/O drawer
PCIe switch application-specific integrated circuits (ASICs) are used to fan out the host bus from the CPC drawer through the PCIe+ I/O drawer to the individual I/O features. Maximum 16 PCIe I/O features (up to 32 channels) per PCIe+ I/O drawer are supported.
The PCIe+ I/O drawer is a one-sided drawer (all I/O cards on one side, in the rear of the drawer) that is 8U high. The PCIe+ I/O drawer contains the 16 I/O slots for PCIe features, two switch cards, and two power supply units (PSUs) to provide redundant power, as shown in Figure 4-1 on page 160.
The PCIe+ I/O drawer slot numbers are shown in Figure 4-2.
Figure 4-2 PCIe+ I/O drawer slot numbers
The I/O structure in a z15 CPC is shown in Figure 4-3 on page 162. The PCIe switch card provides the fanout from the high-speed x16 PCIe host bus to eight individual card slots. The PCIe switch card is connected to the CPC drawer through a single x16 PCIe Gen 3 bus from a PCIe fanout card.
In the PCIe+ I/O drawer, the eight I/O feature cards that directly attach to the switch card constitute an I/O domain. The PCIe+ I/O drawer supports concurrent add and replacement of I/O features, which allows you to increase I/O capability as needed, depending on the CPC drawer.
Figure 4-3 z15 I/O connectivity (Max34 feature with two PCIe+ I/O drawers represented)
The PCIe slots in a PCIe+ I/O drawer are organized into two I/O domains. Each I/O domain supports up to eight features and is driven through a PCIe switch card. Two PCIe switch cards always provide a backup path for each other through the passive connection in the PCIe+ I/O drawer backplane. During a PCIe fanout card or cable failure, 16 I/O cards in two domains can be driven through a single PCIe switch card. It is not possible to drive 16 I/O cards after one of the PCIe switch cards is removed.
The two switch cards are interconnected through the PCIe+ I/O drawer board (Redundant I/O Interconnect, or RII). In addition, the switch cards in the same PCIe+ I/O drawer are connected to PCIe fanouts across clusters in the CPC drawer for higher availability.
The RII design provides a failover capability during a PCIe fanout card failure: both domains in such a PCIe+ I/O drawer remain active through the two fanouts. The flexible service processors (FSPs) are used for system control.
The domains and their related I/O slots are shown in Figure 4-2 on page 161.
Each I/O domain supports up to eight features (FICON, OSA, Crypto, and so on). All I/O cards connect to the PCIe switch card through the backplane board. The I/O domains and slots are listed in Table 4-1; a small lookup sketch follows the table.
Table 4-1 I/O domains of PCIe+ I/O drawer
Domain | I/O slots in domain
0 | LG02, LG03, LG04, LG05, LG07, LG08, LG09, and LG10
1 | LG12, LG13, LG14, LG15, LG17, LG18, LG19, and LG20
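For configuration planning scripts, the slot-to-domain relationship in Table 4-1 can be captured in a small lookup. The following Python sketch is illustrative only and simply encodes the table above:

# Illustrative lookup of the PCIe+ I/O drawer slot-to-domain mapping in Table 4-1.
DOMAIN_SLOTS = {
    0: ("LG02", "LG03", "LG04", "LG05", "LG07", "LG08", "LG09", "LG10"),
    1: ("LG12", "LG13", "LG14", "LG15", "LG17", "LG18", "LG19", "LG20"),
}

def domain_of(slot):
    # Return the I/O domain (0 or 1) that serves an I/O feature slot.
    for domain, slots in DOMAIN_SLOTS.items():
        if slot in slots:
            return domain
    raise ValueError(f"{slot} is not an I/O feature slot (switch card and PSU locations are excluded)")

print(domain_of("LG07"))  # 0
print(domain_of("LG17"))  # 1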
4.3.1 PCIe+ I/O drawer offerings
A maximum of 12 PCIe+ I/O drawers, depending on system power choice (PDU or BPA), can be installed for supporting up to 192 PCIe I/O features.
For an upgrade to z15 servers, only the following PCIe features can be carried forward:
FICON Express16S+
FICON Express16S
FICON Express8S
zHyperLink Express
OSA-Express7S 25GbE SR
OSA-Express6S (all features)
OSA-Express5S (all features)
25GbE RoCE Express2
10GbE RoCE Express2
10GbE RoCE Express
Crypto Express6S
Crypto Express5S
Coupling Express Long Reach (CE LR)
 
Consideration: On a z15 server, only PCIe+ I/O drawers are supported. No older generation drawers can be carried forward.
The z15 server supports the following new PCIe I/O features that are hosted in the PCIe+ I/O drawers:
FICON Express16SA
OSA-Express7S 25GbE SR1.1
OSA-Express7S 10GbE
OSA-Express7S GbE
OSA-Express7S 1000BASE-T
25GbE RoCE Express2.1
10GbE RoCE Express2.1
Crypto Express7S (one or two ports)
Coupling Express Long Reach (CE LR)
zHyperLink Express1.1
4.4 CPC drawer fanouts
The z15 server uses fanout cards to connect the I/O subsystem in the CPC drawer to the PCIe+ I/O drawers. The fanout cards also provide the ICA SR (ICA SR and ICA SR1.1) coupling links for Parallel Sysplex. All fanout cards support concurrent add, delete, and move.
The z15 CPC drawer I/O infrastructure consists of the following features:
The PCIe+ Generation 3 fanout cards: Two ports per card (feature) that connect to PCIe+ I/O drawers.
ICA SR (ICA SR and ICA SR1.1) fanout cards: Two ports per card (feature) that connect to other (external) CPCs.
 
Note: IBM z15 does not support Parallel Sysplex InfiniBand (PSIFB) links.
Unless otherwise noted, ICA SR refers to both ICA SR and ICA SR1.1 in the rest of this chapter.
The PCIe fanout cards are installed in the rear of the CPC drawers. Each CPC drawer features 12 PCIe+ Gen3 fanout slots.
The PCIe fanouts and ICA SR fanouts are installed in locations LG01 - LG12 at the rear in the CPC drawers (see Figure 2-25 on page 69). The oscillator card (OSC) is combined with the Flexible Support Processor (FSP) and is in the front of the drawer. Two combined FSP/OSC cards are used per CPC drawer.
The following types of fanout cards are supported by z15 servers. Each CPC drawer fanout slot can hold one of the following fanouts:
PCIe+ Gen3 fanout card: This dual-port copper fanout provides connectivity to the PCIe switch cards in the PCIe+ I/O drawers.
Integrated Coupling Adapter (ICA SR): This two-port adapter provides coupling connectivity to z15, z14 ZR1, z14, z13, and z13s servers at distances up to 150 meters (492 feet), with an 8 GBps link rate.
An I/O connection diagram is shown in Figure 4-3 on page 162.
4.4.1 PCIe+ Generation 3 fanout (FC 0175)
The PCIe+ Gen3 fanout card provides connectivity to a PCIe+ I/O drawer by using a copper cable. Each port on the fanout card is dedicated to PCIe I/O. This PCIe fanout card supports a link rate of 16 GBps per port (two links per card).
A 16x PCIe copper cable of 1.5 meters (4.92 feet) to 4.0 meters (13.1 feet) is used for connection to the PCIe switch card in the PCIe+ I/O drawer. PCIe fanout cards are always plugged in pairs and provide redundancy for I/O domains within the PCIe+ I/O drawer.
 
PCIe fanout: The PCIe fanout is used exclusively for I/O and cannot be shared for any other purpose.
4.4.2 Integrated Coupling Adapter (FC 0172 and 0176)
Introduced with IBM z13, the IBM ICA SR (FC 0172) is a two-port fanout feature that is used for short distance coupling connectivity and uses channel type CS5. For z15, the new build feature is ICA SR1.1 (FC 0176).
The ICA SR (FC 0172) and ICA SR1.1 (FC 0176) use PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for coupling. No performance degradation is expected compared to the coupling over InfiniBand 12x IFB3 protocol.
Both cards are designed to drive distances up to 150 meters (492 feet) with a link data rate of 8 GBps. ICA SR supports up to four channel-path identifiers (CHPIDs) per port and eight subchannels (devices) per CHPID.
The coupling links can be defined as shared between images (z/OS) within a CSS. They also can be spanned across multiple CSSs in a CPC. For ICA SR features, a maximum four CHPIDs per port can be defined.
When STP FC 1021 is available, ICA SR coupling links can be defined as timing-only links to other z15, z14 ZR1, z14, and z13/z13s CPCs.
These two fanout features are housed in the PCIe+ Gen3 fanout slots of the z15 CPC drawers. Up to 48 ICA SR and ICA SR1.1 features (up to 96 ports) are supported on a z15 server. This configuration enables greater connectivity for short-distance coupling on a single processor node compared to previous Z generations.
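The coupling capacity figures in this section can be cross-checked with simple arithmetic. The following Python sketch is illustrative only; it uses the per-port and per-feature limits that are stated above:

# Illustrative arithmetic for ICA SR coupling capacity on z15.
CHPIDS_PER_PORT = 4          # maximum CHPIDs per ICA SR port
SUBCHANNELS_PER_CHPID = 8    # subchannels (devices) per CHPID
PORTS_PER_FEATURE = 2
MAX_FEATURES = 48            # maximum ICA SR and ICA SR1.1 features per z15

ports = MAX_FEATURES * PORTS_PER_FEATURE                     # 96 ports
chpids = ports * CHPIDS_PER_PORT                             # 384, the coupling CHPID maximum noted in 4.2.2
devices_per_port = CHPIDS_PER_PORT * SUBCHANNELS_PER_CHPID   # 32 subchannels per port
print(ports, chpids, devices_per_port)                       # 96 384 32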
The ICA SR can be used for coupling connectivity between z15, z14/z14 ZR1, and z13/z13s servers. It cannot be connected to HCA3-O or HCA3-O LR coupling fanouts.
OM3 fiber optic cables can be used for distances up to 100 meters (328 feet). OM4 fiber optic cables can be used for distances up to 150 meters (492 feet). For more information, see the following manuals:
Planning for Fiber Optic Links, GA23-1408
IBM Z 8561 Installation Manual for Physical Planning, GC28-7002
4.4.3 Fanout considerations
Fanout slots in the CPC drawer can be used to plug in different fanout types. The CPC drawers can hold up to 60 PCIe fanout cards in a five-CPC-drawer configuration.
Adapter ID number assignment
PCIe fanouts and ports are identified by an Adapter ID (AID) that is initially dependent on their physical locations, which is unlike channels that are installed in a PCIe+ I/O drawer. Those channels are identified by a physical channel ID (PCHID) number that is related to their physical location. This AID must be used to assign a CHPID to the fanout in the IOCDS definition. The CHPID assignment is done by associating the CHPID to an AID port (see Table 4-2).
Table 4-2 Fanout locations and their AIDs for the CPC drawer
Fanout location | CPC0 (A10) AID (hex) | CPC1 (A15) AID (hex) | CPC2 (A20) AID (hex) | CPC3 (B10) AID (hex) | CPC4 (B15) AID (hex)
LG01 | 00 | 0C | 18 | 24 | 30
LG02 | 01 | 0D | 19 | 25 | 31
LG03 | 02 | 0E | 1A | 26 | 32
LG04 | 03 | 0F | 1B | 27 | 33
LG05 | 04 | 10 | 1C | 28 | 34
LG06 | 05 | 11 | 1D | 29 | 35
LG07 | 06 | 12 | 1E | 2A | 36
LG08 | 07 | 13 | 1F | 2B | 37
LG09 | 08 | 14 | 20 | 2C | 38
LG10 | 09 | 15 | 21 | 2D | 39
LG11 | 0A | 16 | 22 | 2E | 3A
LG12 | 0B | 17 | 23 | 2F | 3B
Fanout slots
The fanout slots are numbered LG01 - LG12, from left to right, as listed in Table 4-2. All fanout locations and their AIDs for the CPC drawer are shown for reference only.
 
Important: The AID numbers that are listed in Table 4-2 are valid only for a new build system. If a fanout is moved, the AID follows the fanout to its new physical location.
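For a new build system, the AIDs in Table 4-2 follow a simple pattern: 12 consecutive AIDs per CPC drawer, one per fanout slot. The following Python sketch is illustrative only; it reproduces the initial assignments in the table and, as noted above, does not apply after a fanout is moved:

# Illustrative reproduction of the new-build AID assignments in Table 4-2.
CPC_DRAWERS = ["CPC0", "CPC1", "CPC2", "CPC3", "CPC4"]   # locations A10, A15, A20, B10, B15

def initial_aid(cpc, slot):
    # Return the initial AID (hex) for fanout slot LG01-LG12 in the named CPC drawer.
    drawer_index = CPC_DRAWERS.index(cpc)
    slot_index = int(slot[2:]) - 1          # LG01 -> 0 ... LG12 -> 11
    return format(drawer_index * 12 + slot_index, "02X")

print(initial_aid("CPC0", "LG01"))  # 00
print(initial_aid("CPC2", "LG09"))  # 20
print(initial_aid("CPC4", "LG12"))  # 3B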
The AID assignment is listed in the PCHID REPORT that is provided for each new server or for an MES upgrade on existing servers. Part of a PCHID REPORT for a z15 is shown in Example 4-1. In this example, four fanout cards are installed in the CPC drawer at location A10A, in slots LG01, LG02, LG04, and LG11, with AIDs 00, 01, 03, and 0A.
Example 4-1 AID assignment in PCHID REPORT
CHPIDSTART
23754978 PCHID REPORT Aug 25,2019
Machine: 8561-T01 NEW1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Source Drwr Slot F/C PCHID/Ports or AID Comment
A10/LG01 A10A LG01 0176 AID=00
A10/LG02 A10A LG02 0176 AID=01
A10/LG04 A10A LG04 0176 AID=03
A10/LG11 A10A LG11 0176 AID=0A
Fanout features that are supported by the z15 server are listed in Table 4-3, which includes the feature type, feature code, and information about the link supported by the fanout feature.
Table 4-3 Fanout summary
Fanout feature | Feature code | Use | Cable type | Connector type | Maximum distance | Link data rate (note 1)
PCIe+ Gen3 fanout | 0175 | Connect to PCIe+ I/O drawer | Copper | N/A | 4 m (13.1 ft) | 16 GBps
ICA SR | 0172 | Coupling link | OM4 | MTP | 150 m (492 ft) | 8 GBps
ICA SR | 0172 | Coupling link | OM3 | MTP | 100 m (328 ft) | 8 GBps
ICA SR1.1 | 0176 | Coupling link | OM4 | MTP | 150 m (492 ft) | 8 GBps
ICA SR1.1 | 0176 | Coupling link | OM3 | MTP | 100 m (328 ft) | 8 GBps
1 The link data rates do not represent the performance of the link. The performance depends on many factors, including latency through the adapters, cable lengths, and the type of workload.
4.5 I/O features
I/O features (adapters) include ports to connect the z15 server to external devices, networks, or other servers. I/O features are plugged into the PCIe+ I/O drawers, based on the configuration rules for the server. Different types of I/O cards are available, one for each channel or link type. I/O cards can be installed or replaced concurrently.
4.5.1 I/O feature card ordering information
The I/O features that are supported by z15 servers and the ordering information for them are listed in Table 4-4.
Table 4-4 I/O features and ordering information
Channel feature | Feature code | New build | Carry-forward
FICON Express16SA LX | 0436 | Y | N/A
FICON Express16SA SX | 0437 | Y | N/A
FICON Express16S+ LX | 0427 | N | Y
FICON Express16S+ SX | 0428 | N | Y
FICON Express16S LX | 0418 | N | Y
FICON Express16S SX | 0419 | N | Y
FICON Express8S LX | 0409 | N | Y
FICON Express8S SX | 0410 | N | Y
OSA-Express7S 25GbE SR1.1 | 0449 | Y | N/A
OSA-Express7S 25GbE SR | 0429 | N | Y
OSA-Express7S 10GbE LR | 0444 | Y | N/A
OSA-Express7S 10GbE SR | 0445 | Y | N/A
OSA-Express7S GbE LX | 0442 | Y | N/A
OSA-Express7S GbE SX | 0443 | Y | N/A
OSA-Express7S 1000BASE-T Ethernet | 0446 | Y | N/A
OSA-Express6S 10GbE LR | 0424 | N | Y
OSA-Express6S 10GbE SR | 0425 | N | Y
OSA-Express6S GbE LX | 0422 | N | Y
OSA-Express6S GbE SX | 0423 | N | Y
OSA-Express6S 1000BASE-T Ethernet | 0426 | N | Y
OSA-Express5S 10GbE LR | 0415 | N | Y
OSA-Express5S 10GbE SR | 0416 | N | Y
OSA-Express5S GbE LX | 0413 | N | Y
OSA-Express5S GbE SX | 0414 | N | Y
OSA-Express5S 1000BASE-T Ethernet | 0417 | N | Y
PCIe+ Gen3 fanout | 0175 | Y | N/A
Integrated Coupling Adapter (ICA SR1.1) | 0176 | Y | N/A
Integrated Coupling Adapter (ICA SR) | 0172 | Y | Y
Coupling Express LR | 0433 | Y | Y
Crypto Express7S (2 ports) | 0898 | Y | N/A
Crypto Express7S (1 port) | 0899 | Y | N/A
Crypto Express6S | 0893 | N | Y
Crypto Express5S | 0890 | N | Y
25GbE RoCE Express2.1 | 0450 | Y | N/A
25GbE RoCE Express2 | 0430 | Y | Y
10GbE RoCE Express2.1 | 0432 | Y | N/A
10GbE RoCE Express2 | 0412 | N | Y
10GbE RoCE Express | 0411 | N | Y
zHyperLink Express1.1 | 0451 | Y | N/A
zHyperLink Express | 0431 | N | Y
 
Coupling links connectivity support: zEC12 and zBC12 are not supported in the same Parallel Sysplex or STP CTN as z15.
4.5.2 Physical channel ID report
A physical channel ID (PCHID) reflects the physical location of a channel-type interface. A PCHID number is based on the following factors:
PCIe+ I/O drawer location
Channel feature slot number
Port number of the channel feature
A CHPID does not directly correspond to a hardware channel port. Instead, it is assigned to a PCHID in the hardware configuration definition (HCD) or IOCP.
A PCHID REPORT is created for each new build server and for upgrades on servers. The report lists all I/O features that are installed, the physical slot location, and the assigned PCHID. A portion of a sample PCHID REPORT is shown in Example 4-2.
Example 4-2 PCHID REPORT
CHPIDSTART
23754978 PCHID REPORT Aug 25,2019
Machine: 8561-T01 NEW1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Source Drwr Slot F/C PCHID/Ports or AID Comment
A10/LG01 A10A LG01 0176 AID=00
A10/LG02 A10A LG02 0176 AID=01
A10/LG04 A10A LG04 0176 AID=03
A10/LG11 A10A LG11 0176 AID=0A
  A10/LG12/J02 Z01B 02 0437 100/D1 101/D2
A10/LG12/J02 Z01B 03 0437 104/D1 105/D2
A10/LG12/J02 Z01B 04 0443 108/D1D2
...........<< snippet >>................
The PCHID REPORT that is shown in Example 4-2 on page 168 includes the following components (among others):
Feature code 0176 (Integrated Coupling Adapter, ICA SR1.1) features are installed in the CPC drawer (location A10A, slots LG01, LG02, LG04, and LG11) and have AIDs 00, 01, 03, and 0A assigned.
Feature code 0437 (FICON Express16SA SX) features are installed in PCIe+ I/O drawer 1:
 – Location Z01B, slot 02 with PCHIDs 100/D1 and 101/D2 assigned
 – Location Z01B, slot 03 with PCHIDs 104/D1 and 105/D2 assigned
A feature code 0443 (OSA-Express7S GbE SX) feature is installed in PCIe+ I/O drawer 1, location Z01B, slot 04, with PCHID 108 (ports D1 and D2)
A resource group (RG) parameter is also shown in the PCHID REPORT for native PCIe features. A balanced plugging of native PCIe features exists between four resource groups (RG1, RG2, RG3, and RG4).
The preassigned PCHID number of each I/O port relates directly to its physical location (jack location in a specific slot).
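The column layout of the PCHID REPORT becomes clearer when a few sample lines from Example 4-2 are split into fields. The following Python sketch is illustrative only and assumes the simple space-separated format that is shown in the example:

# Illustrative parsing of a few PCHID REPORT lines from Example 4-2.
report = """\
A10/LG01 A10A LG01 0176 AID=00
A10/LG12/J02 Z01B 02 0437 100/D1 101/D2
A10/LG12/J02 Z01B 04 0443 108/D1D2
"""

for line in report.splitlines():
    source, drawer, slot, feature, *assignment = line.split()
    kind = "AID" if assignment[0].startswith("AID=") else "PCHID"
    print(f"FC {feature} in {drawer} slot {slot}: {kind} -> {' '.join(assignment)}")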
4.6 Connectivity
I/O channels are part of the CSS. They provide connectivity for data exchange between servers, between servers and external control units (CUs) and devices, or between networks.
For more information about connectivity to external I/O subsystems (for example, disks), see “Storage connectivity” on page 173.
For more information about communication to LANs, see “Network connectivity” on page 182.
Communication between servers is implemented by using CE LR, ICA SR, or channel-to-channel (CTC) connections. For more information, see “Parallel Sysplex connectivity” on page 196.
4.6.1 I/O feature support and configuration rules
The supported I/O features are listed in Table 4-5. Also listed in Table 4-5 are the number of ports per card, port increments, the maximum number of feature cards, and the maximum number of channels for each feature type. The CHPID definitions that are used in the IOCDS also are listed.
Table 4-5 z15 supported I/O features
I/O feature | Ports per card | Port increments | Max. ports | Max. I/O slots | PCHID | CHPID definition
Storage access
FICON Express16SA LX/SX | 2 | 2 | 384 | 192 | Yes | FC, FCP (note 1)
FICON Express16S+ LX/SX | 2 | 2 | 128 | 64 | Yes | FC, FCP (note 1)
FICON Express16S LX/SX | 2 | 2 | 128 | 64 | Yes | FC, FCP (note 2)
FICON Express8S LX/SX | 2 | 2 | 128 | 64 | Yes | FC, FCP (note 2)
zHyperLink Express1.1 | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
zHyperLink Express | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
OSA-Express features (note 3)
OSA-Express7S 25GbE SR1.1 | 1 | 1 | 48 | 48 | Yes | OSD
OSA-Express7S 25GbE SR | 1 | 1 | 48 | 48 | Yes | OSD
OSA-Express7S 10GbE LR/SR | 1 | 1 | 48 | 48 | Yes | OSD
OSA-Express7S GbE LX/SX | 2 | 2 | 96 | 48 | Yes | OSD, OSC
OSA-Express7S 1000BASE-T (note 4) | 2 | 2 | 96 | 48 | Yes | OSC, OSD, OSE
OSA-Express6S 10GbE LR/SR | 1 | 1 | 48 | 48 | Yes | OSD
OSA-Express6S GbE LX/SX | 2 | 2 | 96 | 48 | Yes | OSD
OSA-Express6S 1000BASE-T (note 4) | 2 | 2 | 96 | 48 | Yes | OSC, OSD, OSE
OSA-Express5S 10GbE LR/SR | 1 | 1 | 48 | 48 | Yes | OSD
OSA-Express5S GbE LX/SX | 2 | 2 | 96 | 48 | Yes | OSD
OSA-Express5S 1000BASE-T (note 4) | 2 | 2 | 96 | 48 | Yes | OSC, OSD, OSE
RoCE Express features
25GbE RoCE Express2.1 | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
10GbE RoCE Express2.1 | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
25GbE RoCE Express2 | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
10GbE RoCE Express2 | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
10GbE RoCE Express | 2 | 2 | 32 | 16 | Yes | N/A (note 5)
Coupling features
Coupling Express LR | 2 | 2 | 32 | 16 | Yes | CL5
Integrated Coupling Adapter (ICA SR1.1) | 2 | 2 | 96 | 48 | N/A | CS5
Integrated Coupling Adapter (ICA SR) | 2 | 2 | 96 | 48 | N/A | CS5
Cryptographic features (note 6)
Crypto Express7S (2 ports) (note 7) | 2 | 2 | 16 | 8 | N/A | N/A
Crypto Express7S (1 port) | 1 | 1 | 16 | 16 | N/A | N/A
Crypto Express6S | 1 | 1 | 16 | 16 | N/A | N/A
Crypto Express5S | 1 | 1 | 16 | 16 | N/A | N/A
1 Both ports must be defined with the same CHPID type.
2 CHPID type mixture is allowed. The keyword is MIXTYPE.
3 On z15, the OSX type CHPID cannot be defined. z15 cannot be part of an ensemble that is managed by zManager.
4 On z15, the OSM type CHPID cannot be defined for user configurations in PR/SM mode. It is used in DPM mode for internal management only.
5 These features are defined by using Virtual Function IDs (FIDs).
6 Crypto Express features are defined through the HMC.
7 Crypto Express7S can be ordered with one or two IBM 4769 PCIeCC (cryptographic coprocessor) adapters. Each adapter supports 85 domains.
At least one I/O feature (FICON) or one coupling link feature (ICA SR or CE LR) must be present in the minimum configuration.
The following features can be shared and spanned:
FICON channels that are defined as FC or FCP
OSA-Express features that are defined as OSC, OSD, OSE or OSM
Coupling links that are defined as CS5 or CL5
HiperSockets that are defined as IQD
The following features are plugged into a PCIe+ I/O drawer and do not require the definition of a CHPID and CHPID type:
Each Crypto Express (7S/6S/5S) feature occupies one I/O slot, but does not have a CHPID type. However, LPARs in all CSSs can access the features. Each Crypto Express adapter can support up to 85 domains.
Each 25GbE RoCE Express2.1 feature occupies one I/O slot but does not include a CHPID type. However, LPARs in all CSSs can access the feature. The 25GbE RoCE Express2.1 can be defined to up to 126 virtual functions (VFs) per feature (port is defined in z/OS Communications Server). The 25GbE RoCE Express2.1 features support up to 63 VFs per port (up to 126 VFs per feature).
Each 10GbE RoCE Express2.1 feature occupies one I/O slot but does not include a CHPID type. However, LPARs in all CSSs can access the feature. The 10GbE RoCE Express2.1 can be defined to up to 126 virtual functions (VFs) per feature (the port is defined in z/OS Communications Server), with up to 63 VFs per port.
Each RoCE Express/Express2 feature occupies one I/O slot but does not include a CHPID type. However, LPARs in all CSSs can access the feature. The 10GbE RoCE Express can be defined to up to 31 LPARs per feature (port is defined in z/OS Communications Server). The 25GbE RoCE Express2 and the 10GbE RoCE Express2 features support up to 63 LPARs per port (up to 126 LPARs per feature).
Each zHyperLink Express/zHyperLink Express1.1 feature occupies one I/O slot but does not include a CHPID type. However, LPARs in all CSSs can access the feature. The zHyperLink Express adapter works as a native PCIe adapter and can be shared by multiple LPARs.
Each port supports up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to each LPAR. This support gives a maximum of 254 VFs per adapter.
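The virtual function budgets described above determine how many LPARs can share a native PCIe feature. The following Python sketch is illustrative only; the per-port limits are the ones quoted in this section, and the real assignments are made with FIDs and VFs in HCD or IOCP:

# Illustrative check of per-port virtual function budgets for native PCIe features.
VFS_PER_PORT = {
    "zHyperLink Express": 127,      # 254 VFs per two-port adapter
    "zHyperLink Express1.1": 127,
    "25GbE RoCE Express2.1": 63,    # 126 VFs per two-port feature
    "10GbE RoCE Express2.1": 63,
}

def fits_on_port(feature, vfs_per_lpar):
    # vfs_per_lpar: list with the number of VFs that each sharing LPAR needs on one port.
    return sum(vfs_per_lpar) <= VFS_PER_PORT[feature]

print(fits_on_port("zHyperLink Express", [2] * 30))     # True (60 <= 127)
print(fits_on_port("25GbE RoCE Express2.1", [2] * 40))  # False (80 > 63)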
I/O feature cables and connectors
The IBM Facilities Cabling Services fiber transport system offers a total cable solution service to help with cable ordering requirements. These services can include the requirements for all of the protocols and media types that are supported (for example, FICON, Coupling Links, and OSA). The services can help whether the focus is the data center, SAN, LAN, or the end-to-end enterprise.
 
Cables: All fiber optic cables, cable planning, labeling, and installation are client responsibilities for new z15 installations and upgrades. Fiber optic conversion kits and mode conditioning patch cables are not orderable as features on z15 servers. All other cables must be sourced separately.
The Enterprise Fiber Cabling Services use a proven modular cabling system, the fiber transport system (FTS), which includes trunk cables, zone cabinets, and panels for servers, directors, and storage devices. FTS supports Fiber Quick Connect (FQC), feature code 7960. FC 7961 consists of the FQC feature, bracket, and mounting hardware. FC 7924 consists of the FQC feature, bracket, mounting hardware, and a 2-meter (6.6-foot) LC Duplex harness. These features are optional on z15.
Whether you choose a packaged service or a custom service, high-quality components are used to facilitate moves, additions, and changes in the enterprise to prevent the need to extend the maintenance window.
The required connector and cable type for each I/O feature on z15 servers are listed in Table 4-6.
Table 4-6 I/O feature connector and cable types
Feature code | Feature name | Connector type | Cable type
0436 | FICON Express16SA LX | LC Duplex | 9 µm SM
0437 | FICON Express16SA SX | LC Duplex | 50, 62.5 µm MM
0427 | FICON Express16S+ LX 10 km | LC Duplex | 9 µm SM
0428 | FICON Express16S+ SX | LC Duplex | 50, 62.5 µm MM
0418 | FICON Express16S LX 10 km | LC Duplex | 9 µm SM
0419 | FICON Express16S SX | LC Duplex | 50, 62.5 µm MM
0409 | FICON Express8S LX 10 km | LC Duplex | 9 µm SM
0410 | FICON Express8S SX | LC Duplex | 50, 62.5 µm MM
0449 | OSA-Express7S 25GbE SR1.1 | LC Duplex | 50 µm MM OM4 (note 4)
0429 | OSA-Express7S 25GbE SR | LC Duplex | 50 µm MM OM4 (note 4)
0444 | OSA-Express7S 10GbE LR | LC Duplex | 9 µm SM
0445 | OSA-Express7S 10GbE SR | LC Duplex | 50, 62.5 µm MM
0442 | OSA-Express7S GbE LX | LC Duplex | 9 µm SM
0443 | OSA-Express7S GbE SX | LC Duplex | 50, 62.5 µm MM
0446 | OSA-Express7S 1000BASE-T | RJ-45 | Category 5 UTP (note 1)
0424 | OSA-Express6S 10GbE LR | LC Duplex | 9 µm SM
0425 | OSA-Express6S 10GbE SR | LC Duplex | 50, 62.5 µm MM
0422 | OSA-Express6S GbE LX | LC Duplex | 9 µm SM
0423 | OSA-Express6S GbE SX | LC Duplex | 50, 62.5 µm MM
0426 | OSA-Express6S 1000BASE-T | RJ-45 | Category 5 UTP (note 2)
0415 | OSA-Express5S 10GbE LR | LC Duplex | 9 µm SM
0416 | OSA-Express5S 10GbE SR | LC Duplex | 50, 62.5 µm MM
0413 | OSA-Express5S GbE LX | LC Duplex | 9 µm SM
0414 | OSA-Express5S GbE SX | LC Duplex | 50, 62.5 µm MM
0417 | OSA-Express5S 1000BASE-T | RJ-45 | Category 5 UTP
0450 | 25GbE RoCE Express2.1 | LC Duplex | 50 µm MM
0432 | 10GbE RoCE Express2.1 | LC Duplex | 50, 62.5 µm MM
0430 | 25GbE RoCE Express2 | LC Duplex | 50 µm MM OM4 (note 4)
0412 | 10GbE RoCE Express2 | LC Duplex | 50, 62.5 µm MM
0411 | 10GbE RoCE Express | LC Duplex | 50, 62.5 µm MM
0433 | CE LR | LC Duplex | 9 µm SM
0176 | Integrated Coupling Adapter SR1.1 (ICA SR1.1) | MTP | 50 µm MM OM4 (note 3)
0172 | Integrated Coupling Adapter (ICA SR) | MTP | 50 µm MM OM4 (note 4)
0451 | zHyperLink Express1.1 | MPO | 50 µm MM OM4 (note 4)
0431 | zHyperLink Express | MPO | 50 µm MM OM4 (note 4)
1 UTP is unshielded twisted pair. Consider the use of category 6 UTP for 1000 Mbps connections.
2 UTP is unshielded twisted pair. Consider the use of category 6 UTP for 1000 Mbps connections.
3 Or 50 µm MM OM3, but OM4/OM5 is highly recommended.
4 Or 50 µm MM OM3, but OM4 is highly recommended.
MM = Multi-Mode
SM = Single-Mode
4.6.2 Storage connectivity
Connectivity to external I/O subsystems (for example, disks) is provided by FICON channels and zHyperLink.
FICON channels
z15 supports the following FICON features:
FICON Express16SA (FC 0436/0437)
FICON Express16S+ (FC 0427/0428)
FICON Express16S (FC 0418/0419)
FICON Express8S (FC 0409/0410)
The FICON Express16SA, FICON Express16S+, FICON Express16S, and FICON Express8S features conform to the following architectures:
Fibre Connection (FICON)
High-Performance FICON on Z (zHPF)
Fibre Channel Protocol (FCP)
The FICON features provide connectivity between any combination of servers, directors, switches, and devices (control units, disks, tapes, and printers) in a SAN.
Each FICON Express feature occupies one I/O slot in the PCIe+ I/O drawer. Each feature includes two ports, each supporting an LC Duplex connector, with one PCHID and one CHPID that is associated with each port.
Each FICON Express feature uses SFP (SFP+ for FICON Express16SA) optics that allow for concurrent repairing or replacement for each SFP. The data flow on the unaffected channels on the same feature can continue. A problem with one FICON Express port does not require replacement of a complete feature.
Each FICON Express feature also supports cascading, which is the connection of two FICON Directors in succession. This configuration minimizes the number of cross-site connections and helps reduce implementation costs for disaster recovery applications, IBM Geographically Dispersed Parallel Sysplex (GDPS), and remote copy.
z15 servers support 32 K devices per FICON channel for all FICON features.
Each FICON Express channel can be defined independently for connectivity to servers, switches, directors, disks, tapes, and printers, by using the following CHPID types:
CHPID type FC: The FICON, zHPF, and FCTC protocols are supported simultaneously.
CHPID type FCP: Fibre Channel Protocol that supports attachment to SCSI devices directly or through Fibre Channel switches or directors.
FICON channels (CHPID type FC or FCP) can be shared among LPARs and defined as spanned. All ports on a FICON feature must be of the same type (LX or SX). The features connect to a FICON-capable control unit either directly (point-to-point) or through a Fibre Channel switch (switched point-to-point).
FICON Express16SA
The FICON Express16SA feature is installed in the PCIe+ I/O drawer. Each of the two independent ports is capable of 8 Gbps or 16 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express16SA optical transceivers are supported (no mix on same card):
FICON Express16SA LX feature, FC 0436, with two ports per feature, supporting LC Duplex connectors
FICON Express16SA SX feature, FC 0437, with two ports per feature, supporting LC Duplex connectors
For supported distances, see Table 4-6 on page 172.
 
Consideration: FICON Express16SA features do not support auto-negotiation to a data link rate of 2 or 4 Gbps (only 8 or 16 Gbps) for point-to-point connections; 2 and 4 Gbps connectivity is supported only through a switch with 8 or 16 Gb optics.
FICON Express16S+
The FICON Express16S+ feature is installed in the PCIe+ I/O drawer. Each of the two independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express16S+ optical transceivers are supported (no mix on same card):
FICON Express16S+ LX feature, FC 0427, with two ports per feature, supporting LC Duplex connectors
FICON Express16S+ SX feature, FC 0428, with two ports per feature, supporting LC Duplex connectors
For more information about supported distances, see Table 4-6 on page 172.
For more information, see FICON Express chapter in IBM Z Connectivity Handbook, SG24-5444.
 
Consideration: FICON Express16S+ features do not support auto-negotiation to a data link rate of 2 Gbps (only 4, 8, or 16 Gbps).
FICON Express16S
The FICON Express16S feature is installed in the PCIe+ I/O drawer. Each of the two independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express16S optical transceivers are supported:
FICON Express16S LX feature, FC 0418, with two ports per feature, supporting LC Duplex connectors
FICON Express16S SX feature, FC 0419, with two ports per feature, supporting LC Duplex connectors
For more information about supported distances, see Table 4-6 on page 172.
 
Consideration: FICON Express16S features do not support auto-negotiation to a data link rate of 2 Gbps (only 4, 8, or 16 Gbps).
FICON Express8S
The FICON Express8S feature is installed in the PCIe I/O drawer. Each of the two independent ports is capable of 2 Gbps, 4 Gbps, or 8 Gbps. The link speed depends on the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications.
The following types of FICON Express8S optical transceivers are supported:
FICON Express8S LX feature, FC 0409, with two ports per feature, supporting LC Duplex connectors
FICON Express8S SX feature, FC 0410, with two ports per feature, supporting LC Duplex connectors
For more information about supported distances, see Table 4-6 on page 172.
FICON enhancements
Together with the FICON Express16SA and FICON Express16S+ features, z15 servers provide functional and performance enhancements for FICON, including support for the IBM Fibre Channel Endpoint Security solution.
Forward Error Correction
Forward Error Correction (FEC) is a technique for reducing data errors when transmitting over unreliable or noisy communication channels (improving the signal-to-noise ratio). By adding redundant error-correction code (ECC) to the transmitted information, the receiver can detect and correct several errors without requiring retransmission. This feature improves signal reliability and bandwidth use by reducing retransmissions that are caused by bit errors, especially for connections across long distances, such as an inter-switch link (ISL) in a GDPS Metro Mirror environment.
The FICON Express16SA, FICON Express16S+, and FICON Express16S are designed to support FEC coding on top of its 64b/66b data encoding for 16 Gbps connections. This design can correct up to 11 bit errors per 2112 bits transmitted. Therefore, while connected to devices that support FEC at 16 Gbps connections, the FEC design allows FICON Express16SA, FICON Express16S+, and FICON Express16S channels to operate at higher speeds over longer distances with reduced power and higher throughput while retaining the same reliability and robustness for which FICON channels are traditionally known.
With the IBM DS8870 or newer, z15 servers can extend the use of FEC to the fabric N_Ports for complete end-to-end coverage of 16 Gbps FC links. For more information, see IBM DS8884 and z13s: A new cost optimized solution, REDP-5327.
FICON dynamic routing
With the IBM z15, IBM z14 ZR1, IBM z14 M0x, IBM z13, and IBM z13s servers, FICON channels are no longer restricted to the use of static SAN routing policies for ISLs for cascaded FICON directors. The Z servers now support dynamic routing in the SAN with the FICON Dynamic Routing (FIDR) feature. It is designed to support the dynamic routing policies that are provided by the FICON director manufacturers; for example, Brocade’s exchange-based routing (EBR) and Cisco’s originator exchange ID (OxID) routing.
A static SAN routing policy normally assigns the ISL routes according to the incoming port and its destination domain (port-based routing), or the source and destination ports pairing (device-based routing).
Port-based routing (PBR) assigns ISL routes statically on a first-come, first-served basis when a port starts a fabric login (FLOGI) to a destination domain; the ISL is selected for assignment in a round-robin manner. Therefore, I/O flow from the same incoming port to the same destination domain is always assigned the same ISL route, regardless of the destination port of each I/O. This setup can result in some ISLs being overloaded while others are underused. The ISL routing table changes whenever the Z server undergoes a power-on reset (POR), so the ISL assignment is unpredictable.
Device-based routing (DBR) assigns ISL routes statically based on a hash of the source and destination ports, so I/O flow from the same incoming port to the same destination port is assigned the same ISL route. Compared to PBR, DBR is better able to spread the load across ISLs for I/O flows from the same incoming port to different destination ports within a destination domain.
When a static SAN routing policy is used, the FICON director has limited capability to assign ISL routes based on workload. This limitation can result in unbalanced use of ISLs (some might be overloaded, while others are underused).
With dynamic routing, ISL routes are changed dynamically based on the Fibre Channel exchange ID, which is unique for each I/O operation. The ISL is assigned at I/O request time, so different I/Os from the same incoming port to the same destination port can be assigned different ISLs.
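The difference between the static and dynamic policies can be seen in a toy model. The following Python sketch is illustrative only; it is a simplified abstraction of the routing behaviors described above, not an implementation of any director's firmware:

# Toy model of the SAN routing policies described above (three ISLs between two cascaded directors).
import itertools

ISLS = ["ISL1", "ISL2", "ISL3"]
_round_robin = itertools.cycle(ISLS)   # pool used at fabric login (FLOGI) time
_pbr_table = {}                        # (ingress port, destination domain) -> ISL

def port_based(ingress_port, dest_domain):
    # Static: picked round-robin at FLOGI time, then reused for all I/O to that domain.
    key = (ingress_port, dest_domain)
    if key not in _pbr_table:
        _pbr_table[key] = next(_round_robin)
    return _pbr_table[key]

def device_based(src_port, dst_port):
    # Static: hash of the source and destination port pair.
    return ISLS[hash((src_port, dst_port)) % len(ISLS)]

def dynamic_routing(exchange_id):
    # FIDR (EBR/OxID style): chosen per I/O operation from the Fibre Channel exchange ID.
    return ISLS[exchange_id % len(ISLS)]

print(port_based("CHPID_50", "domain_2"))                  # same ISL for every I/O to domain_2
print(device_based("CHPID_50", "CU_port_07"))              # same ISL for this port pair
print(dynamic_routing(0x1A2B), dynamic_routing(0x1A2C))    # can differ for each I/O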
With FIDR, z15 servers feature the following advantages for performance and management in configurations with ISL and cascaded FICON directors:
Support sharing of ISLs between FICON and FCP (PPRC or distributed)
I/O traffic is better balanced between all available ISLs
Improved use of FICON director and ISL
Easier management with predictable and repeatable I/O performance
FICON dynamic routing can be enabled by defining dynamic routing-capable switches and control units in HCD. z/OS also provides a health check function for FICON dynamic routing.
Improved zHPF I/O execution at distance
By introducing the concept of pre-deposit writes, zHPF reduces the number of round trips of standard FCP I/Os to a single round trip. Originally, this benefit was limited to writes of less than 64 KB. zHPF on z15, z14 ZR1, z14 M0x, z13s, and z13 servers was enhanced to allow all large write operations (greater than 64 KB) at distances up to 100 km (62.1 miles) to be run in a single round trip to the control unit. This improvement avoids elongating the I/O service time for these write operations at extended distances.
Read Diagnostic Parameter Extended Link Service support
To improve the accuracy of identifying a failed component without unnecessarily replacing components in a SAN fabric, a new Extended Link Service (ELS) command called Read Diagnostic Parameters (RDP) was added to the Fibre Channel T11 standard to allow Z servers to obtain extra diagnostic data from the SFP optics that are throughout the SAN fabric.
z15, z14 ZR1, z14 M0x, z13s, and z13 servers now can read this extra diagnostic data for all the ports that are accessed in the I/O configuration and make the data available to an LPAR. For z/OS LPARs that use FICON channels, z/OS displays the data with a new message and display command. For Linux on Z, z/VM, and z/VSE, and LPARs that use FCP channels, this diagnostic data is available in a new window in the SAN Explorer tool.
N_Port ID Virtualization enhancement
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel. First introduced with IBM z9® EC, this feature can be used with earlier FICON features that were carried forward from earlier servers.
By using the FICON Express16S (or newer) as an FCP channel with NPIV enabled, the maximum numbers of the following aspects for one FCP physical channel are doubled:
Maximum number of NPIV hosts defined: 64
Maximum number of remote N_Ports communicated: 1024
Maximum number of addressable LUNs: 8192
Concurrent I/O operations: 1528
Export/import physical port WWPNs for FCP Channels
IBM Z automatically assigns worldwide port names (WWPNs) to the physical ports of an FCP channel that is based on the PCHID. This WWPN assignment changes when an FCP channel is moved to a different physical slot position.
z15, z14 ZR1, z14 M0x, z13, and z13s servers allow for the modification of these default assignments, which also allows FCP channels to keep previously assigned WWPNs, even after being moved to a different slot position. This capability can eliminate the need for reconfiguration of the SAN in many situations, and is especially helpful during a system upgrade (FC 0099 - WWPN Persistence).
 
Note: For more information about the FICON enhancement, see Get More Out of Your IT Infrastructure with IBM z13 I/O Enhancements, REDP-5134.
FICON support for multiple-hop cascaded SAN configurations
Before the introduction of z13 and z13s servers, IBM Z FICON SAN configurations supported a single ISL (a single hop) in a cascaded FICON SAN environment only. The z15, z14 ZR1, z14 M0x, z13, and z13s servers now support up to three hops in a cascaded FICON SAN environment. This support allows clients to more easily configure a three- or four-site disaster recovery solution.
For more information about the FICON multi-hop, see the FICON Multihop: Requirements and Configurations white paper at the IBM Techdocs Library website.
FICON feature summary
Table 4-7 FICON Features
Channel feature | Feature code | Bit rate | Cable type | Maximum unrepeated distance (note 1)
FICON Express16SA 10KM LX | 0436 | 8 or 16 Gbps (note 2) | SM 9 µm | 10 km
FICON Express16SA SX (note 3) | 0437 | 16 Gbps | MM 50 µm | 35 m (500), 100 m (2000), 125 m (4700)
FICON Express16SA SX (note 3) | 0437 | 8 Gbps | MM 62.5 µm | 21 m (200)
FICON Express16SA SX (note 3) | 0437 | 8 Gbps | MM 50 µm | 50 m (500), 150 m (2000), 190 m (4700)
FICON Express16S+ 10KM LX | 0427 | 4, 8, or 16 Gbps | SM 9 µm | 10 km
FICON Express16S+ SX | 0428 | 16 Gbps | MM 50 µm | 35 m (500), 100 m (2000), 125 m (4700)
FICON Express16S+ SX | 0428 | 8 Gbps | MM 62.5 µm | 21 m (200)
FICON Express16S+ SX | 0428 | 8 Gbps | MM 50 µm | 50 m (500), 150 m (2000), 190 m (4700)
FICON Express16S+ SX | 0428 | 4 Gbps | MM 62.5 µm | 70 m (200)
FICON Express16S+ SX | 0428 | 4 Gbps | MM 50 µm | 150 m (500), 380 m (2000), 400 m (4700)
FICON Express16S 10KM LX | 0418 | 4, 8, or 16 Gbps | SM 9 µm | 10 km
FICON Express16S SX | 0419 | 16 Gbps | MM 50 µm | 35 m (500), 100 m (2000), 125 m (4700)
FICON Express16S SX | 0419 | 8 Gbps | MM 62.5 µm | 21 m (200)
FICON Express16S SX | 0419 | 8 Gbps | MM 50 µm | 50 m (500), 150 m (2000), 190 m (4700)
FICON Express16S SX | 0419 | 4 Gbps | MM 62.5 µm | 70 m (200)
FICON Express16S SX | 0419 | 4 Gbps | MM 50 µm | 150 m (500), 380 m (2000), 400 m (4700)
FICON Express8S 10KM LX | 0409 | 2, 4, or 8 Gbps | SM 9 µm | 10 km
FICON Express8S SX | 0410 | 8 Gbps | MM 62.5 µm | 21 m (200)
FICON Express8S SX | 0410 | 8 Gbps | MM 50 µm | 50 m (500), 150 m (2000), 190 m (4700)
FICON Express8S SX | 0410 | 4 Gbps | MM 62.5 µm | 70 m (200)
FICON Express8S SX | 0410 | 4 Gbps | MM 50 µm | 150 m (500), 380 m (2000), 400 m (4700)
FICON Express8S SX | 0410 | 2 Gbps | MM 62.5 µm | 150 m (200)
FICON Express8S SX | 0410 | 2 Gbps | MM 50 µm | 300 m (500), 500 m (2000), N/A (4700)
1 Minimum fiber bandwidths in MHz·km for multimode fiber optic links are included in parentheses, where applicable.
2 2 and 4 Gbps connectivity is not supported for point-to-point connections.
3 2 and 4 Gbps connectivity is supported through a switch with 8 or 16 Gb optics.
zHyperLink Express1.1 (FC 0451)
zHyperLink is a technology that provides up to a 5x reduction in I/O latency for Db2 V12 for z/OS read requests, with the qualities of service that IBM Z clients expect from the I/O infrastructure. The following z/OS versions support zHyperLink reads:
z/OS V2.4
z/OS V2.3 with PTFs
z/OS V2.2 with PTFs
z/OS V2.1 with PTFs
The following z/OS versions support zHyperLink writes:
z/OS V2.4
z/OS V2.3 with PTFs
z/OS V2.2 with PTFs
The zHyperLink Express1.1 feature (FC 0451) provides a low-latency direct connection between the z15 and a DS8k storage system.
The zHyperLink Express1.1 is the result of new business requirements that demand fast and consistent application response times. It dramatically reduces latency by interconnecting the z15 directly to the I/O Bay of the DS8k by using a PCIe Gen3 x8 physical link (up to 150 meters [492 feet] distance). A new transport protocol is defined for reading and writing IBM CKD data records, as documented in the zHyperLink interface specification.
On z15, the zHyperLink Express1.1 card is a PCIe Gen3 adapter that is installed in the PCIe+ I/O drawer. HCD definition support was added for the new PCIe function type with PORT attributes.
Requirements of zHyperLink Express1.1
The zHyperLink Express1.1 feature is available on z15 servers, and includes the following requirements:
z/OS 2.1 or later
150 m maximum distance in a point-to-point configuration
DS8k with I/O Bay Planar board and firmware level 8.4 or later
z15 with zHyperLink Express1.1 adapter (FC 0451) installed
FICON channel as a driver
Only ECKD supported
z/VM is not supported
Up to 16 zHyperLink Express adapters can be installed in a z15 (up to 32 links).
The zHyperLink Express1.1 is virtualized as a native PCIe adapter and can be shared by multiple LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to each LPAR. This configuration gives a maximum of 254 VFs per adapter. The zHyperLink Express1.1 requires the following components:
zHyperLink connector on DS8k I/O Bay
For DS8880 firmware R8.3 or newer, the I/O Bay planar is updated to support the zHyperLink interface. This update includes the update of the PEX 8732 switch to PEX8733 that includes a DMA engine for the zHyperLink transfers, and the upgrade from a copper to optical interface by a CXP connector (provided).
Cable
The zHyperLink Express1.1 uses optical cable with MTP connector. Maximum supported cable length is 150 meters (492 feet).
zHyperLink Express (FC 0431)
zHyperLink is a technology that provides up to a 5x reduction in I/O latency for Db2 V12 for z/OS read requests, with the qualities of service that IBM Z clients expect from the I/O infrastructure. The following z/OS versions support zHyperLink reads:
z/OS V2.4
z/OS V2.3 with PTFs
z/OS V2.2 with PTFs
z/OS V2.1 with PTFs
The following z/OS versions support zHyperLink writes:
z/OS V2.4
z/OS V2.3 with PTFs
z/OS V2.2 with PTFs
The zHyperLink Express feature (FC 0431) provides a low-latency direct connection between the z15 and a DS8k I/O port.
The zHyperLink Express is the result of new business requirements that demand fast and consistent application response times. It dramatically reduces latency by interconnecting the z15 directly to the I/O Bay of the DS8880 by using a PCIe Gen3 x8 physical link (up to 150 meters [492 feet] distance). A new transport protocol is defined for reading and writing IBM CKD data records, as documented in the zHyperLink interface specification.
On z15, the zHyperLink Express card is a PCIe adapter that is installed in the PCIe+ I/O drawer. HCD definition support was added for the new PCIe function type with PORT attributes.
Requirements of zHyperLink
The zHyperLink Express feature is available on z15 servers, and includes the following requirements:
z/OS 2.1 or later
DS888x with I/O Bay Planar board and firmware level 8.4 or later
z15 with zHyperLink Express adapter (FC 0431) installed
FICON channel as a driver
Only ECKD supported
z/VM is not supported
Up to 16 zHyperLink Express adapters can be installed in a z15 (up to 32 links).
The zHyperLink Express is virtualized as a native PCIe adapter and can be shared by multiple LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to each LPAR. This configuration gives a maximum of 254 VFs per adapter. The zHyperLink Express requires the following components:
zHyperLink connector on DS8880 I/O Bay
For DS8880 firmware R8.4 or newer, the I/O Bay planar is updated to support the zHyperLink interface. This update includes the update of the PEX 8732 switch to PEX8733 that includes a DMA engine for the zHyperLink transfers, and the upgrade from a copper to optical interface by a CXP connector (provided).
Cable
The zHyperLink Express uses optical cable with MTP connector. Maximum supported cable length is 150 meters (492 feet).
4.6.3 Network connectivity
Communication for LANs is provided by the OSA-Express7S, OSA-Express6S, OSA-Express5S, 25GbE RoCE Express2.1, 10GbE RoCE Express2.1, 25GbE RoCE Express2, 10GbE RoCE Express2, and 10GbE RoCE Express features.
OSA-Express7S 25GbE SR1.1 (FC 0449)
OSA-Express7S 25 Gigabit Ethernet SR1.1 (FC 0449) is installed in the PCIe+ I/O drawer.
The OSA-Express7S 25 Gigabit Ethernet Short Reach 1.1 (SR1.1) feature includes one PCIe Gen3 adapter and one port per feature. The port supports CHPID type OSD.
The OSA-Express7S 25GbE SR1.1 feature is designed to support attachment to a multimode fiber 25 Gbps Ethernet LAN or Ethernet switch that is capable of 25 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express7S 25GbE SR1.1 feature supports the use of an industry standard small form factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express7S 25GbE SR1.1 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express7S 25GbE SR (FC 0429)
OSA-Express7S 25 Gigabit Ethernet SR (FC 0429) is installed in the PCIe+ I/O drawer.
The OSA-Express7S 25GbE Short Reach (SR) feature includes one PCIe adapter and one port per feature. The port supports CHPID type OSD. The OSA-Express7S 25GbE feature is designed to support attachment to a multimode fiber 25 Gbps Ethernet LAN or Ethernet switch that is capable of 25 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express7S 25GbE SR feature supports the use of an industry standard small form factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR-to-SR).
The OSA-Express7S 25GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
The following other OSA-Express7S features can be installed on z15 servers:
OSA-Express7S 10 Gigabit Ethernet LR, FC 0444
OSA-Express7S 10 Gigabit Ethernet SR, FC 0445
OSA-Express7S Gigabit Ethernet LX, FC 0442
OSA-Express7S Gigabit Ethernet SX, FC 0443
OSA-Express7S 1000BASE-T Ethernet, FC 0446
The supported OSA-Express7S features are listed in Table 4-5 on page 169.
OSA-Express7S 10 Gigabit Ethernet LR (FC 0444)
The OSA-Express7S 10 Gigabit Ethernet (GbE) Long Reach (LR) feature includes one PCIe Gen3 adapter and one port per feature. The port supports CHPID types OSD. The 10 GbE feature is designed to support attachment to a single-mode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express7S 10 GbE LR feature supports the use of an industry standard small form factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream device includes an LR transceiver. The transceivers at both ends must be the same (LR-to-LR).
The OSA-Express7S 10 GbE LR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
OSA-Express7S 10 GbE SR
The OSA-Express7S 10 GbE Short Reach (SR) feature (FC 0445) includes one PCIe Gen3 adapter and one port per feature. The port supports CHPID types OSD. The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express7S 10 GbE SR feature supports the use of an industry standard small form factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express7S 10 GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express7S Gigabit Ethernet LX (FC 0442)
The OSA-Express7S GbE LX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID types OSD or OSC). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express7S GbE LX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an LX transceiver. The sending and receiving transceivers must be the same (LX to LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
OSA-Express7S GbE SX
The OSA-Express7S GbE SX feature (FC 0443) includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID types OSD or OSC). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and shared among LPARs and across logical channel subsystems.
The OSA-Express7S GbE SX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an SX transceiver. The sending and receiving transceivers must be the same (SX-to-SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
OSA-Express7S 1000BASE-T Ethernet, FC 0446
Feature code 0446 occupies one slot in the PCIe+ I/O drawer. It features two ports that connect to a 1000 Mbps (1 Gbps) Ethernet LAN. Each port has an SFP+ with an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle is required to be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 meters (328 feet). The SFP allows a concurrent repair or replace action.
The OSA-Express7S 1000BASE-T Ethernet feature does not support auto-negotiation. It supports links at 1000 Mbps in full duplex mode only.
The OSA-Express7S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, or OSE. Non-QDIO operation mode requires CHPID type OSE.
 
Notes: Consider the following points:
CHPID type OSM is not supported on z15 for user configurations. It is used only in DPM mode for internal management.
CHPID types OSN and OSX are not supported on z15.
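For reference, the following IOCP sketch shows one way an OSA-Express port might be defined as a shared CHPID type OSD with its control unit and devices. It is a minimal sketch under stated assumptions: the CHPID, PCHID, control unit, and device numbers are illustrative only and not a recommended configuration.
*  Minimal OSD definition sketch (all numbers are examples only)
CHPID PATH=(CSS(0),40),SHARED,TYPE=OSD,PCHID=11C
CNTLUNIT CUNUMBR=0400,PATH=((CSS(0),40)),UNIT=OSA
IODEVICE ADDRESS=(0400,015),CUNUMBR=(0400),UNIT=OSA
IODEVICE ADDRESS=(040F,001),CUNUMBR=(0400),UNIT=OSAD,UNITADD=FE
In practice, the TCP/IP stack definitions reference these device numbers; check the IOCP and OSA planning documentation for the exact requirements of each OSA feature.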
OSA-Express6S
The OSA-Express6S feature is installed in the PCIe+ I/O drawer. The following OSA-Express6S features can be installed on z15 servers (carry forward only):
OSA-Express6S 10 Gigabit Ethernet LR (FC 0424)
OSA-Express6S 10 Gigabit Ethernet SR (FC 0425)
OSA-Express6S Gigabit Ethernet LX (FC 0422)
OSA-Express6S Gigabit Ethernet SX (FC 0423)
OSA-Express6S 1000BASE-T Ethernet (FC 0426)
The supported OSA-Express6S features are listed in Table 4-5 on page 169.
OSA-Express6S 10 GbE LR
The OSA-Express6S 10 GbE LR feature (FC 0424) includes one PCIe adapter and one port per feature. On z15, the port supports CHPID type OSD. The 10 GbE feature is designed to support attachment to a single-mode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
The OSA-Express6S 10 GbE LR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR transceiver. The transceivers at both ends must be the same (LR-to-LR).
The OSA-Express6S 10 GbE LR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
For supported distances, see Table 4-8 on page 189.
OSA-Express6S 10 GbE SR
The OSA-Express6S 10 GbE SR feature (FC 0425) includes one PCIe adapter and one port per feature. On z15, the port supports CHPID type OSD. The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express6S 10 GbE SR feature supports the use of an industry-standard small form factor LC Duplex connector. Ensure that the attaching or downstream device has an SR transceiver. The sending and receiving transceivers must be the same (SR-to-SR).
The OSA-Express6S 10 GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
For supported distances, see Table 4-8 on page 189.
OSA-Express6S GbE LX
The OSA-Express6S GbE LX feature (FC 0422) includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express6S GbE LX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an LX transceiver. The sending and receiving transceivers must be the same (LX-to-LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
For supported distances, see Table 4-8 on page 189.
OSA-Express6S GbE SX
The OSA-Express6S GbE SX feature (FC 0423) includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and shared among LPARs and across logical channel subsystems.
The OSA-Express6S GbE SX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an SX transceiver. The sending and receiving transceivers must be the same (SX-to-SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
For supported distances, see Table 4-8 on page 189.
OSA-Express6S 1000BASE-T Ethernet feature
This feature (FC 0426) occupies one slot in the PCIe+ I/O drawer. It features two ports that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port has an SFP with an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle is required to be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 meters (328 feet). The SFP allows a concurrent repair or replace action.
The OSA-Express6S 1000BASE-T Ethernet feature supports auto-negotiation when attached to an Ethernet router or switch. If you allow the LAN speed and duplex mode to default to auto-negotiation, the OSA-Express port and the attached router or switch auto-negotiate the LAN speed and duplex mode settings between them. They then connect at the highest common performance speed and duplex mode of interoperation. If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express port examines the signal that it is receiving and connects at the speed and duplex mode of the device at the other end of the cable.
The OSA-Express6S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, or OSE. Non-QDIO operation mode requires CHPID type OSE.
 
Notes: Consider the following points:
CHPID type OSM is not supported on z15 for user configurations. It is used only in DPM mode for internal management.
CHPID types OSN and OSX are not supported on z15.
The following settings are supported on the OSA-Express6S 1000BASE-T Ethernet feature port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified speed and duplex mode. If this specified speed and duplex mode do not match the speed and duplex mode of the signal on the cable, the OSA-Express port does not connect.
For more information about supported distances, see Table 4-8 on page 189.
OSA-Express5S
The OSA-Express5S feature is installed in the PCIe+ I/O drawer. The following OSA-Express5S features can be installed on z15 servers (carry forward only):
OSA-Express5S 10 Gigabit Ethernet LR, FC 0415
OSA-Express5S 10 Gigabit Ethernet SR, FC 0416
OSA-Express5S Gigabit Ethernet LX, FC 0413
OSA-Express5S Gigabit Ethernet SX, FC 0414
OSA-Express5S 1000BASE-T Ethernet, FC 0417
The OSA-Express5S features are listed in Table 4-5 on page 169.
OSA-Express5S 10 GbE LR
The OSA-Express5S 10 GbE LR feature (FC 0415) includes one PCIe adapter and one port per feature. On z15, the port supports CHPID type OSD.
The 10 GbE feature is designed to support attachment to a single-mode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express5S 10 GbE LR feature supports the use of an industry-standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR transceiver. The transceivers at both ends must be the same (LR-to-LR).
The OSA-Express5S 10 GbE LR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
For supported distances, see Table 4-8 on page 189.
OSA-Express5S 10 Gigabit Ethernet SR
The OSA-Express5S 10 GbE SR feature (FC 0416) includes one PCIe adapter and one port per feature. On z15, the port supports CHPID type OSD.
The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and shared among LPARs within and across logical channel subsystems.
The OSA-Express5S 10 GbE SR feature supports the use of an industry standard small form factor LC Duplex connector. Ensure that the attaching or downstream device includes an SR transceiver. The sending and receiving transceivers must be the same (SR-to-SR).
The OSA-Express5S 10 GbE SR feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
For more information about supported distances, see Table 4-8 on page 189.
OSA-Express5S Gigabit Ethernet LX (FC 0413)
The OSA-Express5S GbE LX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and shared among LPARs and across logical channel subsystems.
The OSA-Express5S GbE LX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an LX transceiver. The sending and receiving transceivers must be the same (LX-to-LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
For more information about supported distances, see Table 4-8 on page 189.
OSA-Express5S Gigabit Ethernet SX (FC 0414)
The OSA-Express5S GbE SX feature includes one PCIe adapter and two ports. The two ports share a channel path identifier (CHPID type OSD exclusively). The ports support attachment to a 1 Gbps Ethernet LAN. Each port can be defined as a spanned channel and can be shared among LPARs and across logical channel subsystems.
The OSA-Express5S GbE SX feature supports the use of an LC Duplex connector. Ensure that the attaching or downstream device has an SX transceiver. The sending and receiving transceivers must be the same (SX-to-SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for connecting each port on this feature to the selected device.
For more information about supported distances, see Table 4-8 on page 189.
OSA-Express5S 1000BASE-T Ethernet feature
This feature (FC 0417) occupies one slot in the PCIe+ I/O drawer. It has two ports that connect to a 1000 Mbps (1 Gbps) or 100 Mbps Ethernet LAN. Each port has an SFP with an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle is required to be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 meters (328 feet). The SFP allows a concurrent repair or replace action.
The OSA-Express5S 1000BASE-T Ethernet feature supports auto-negotiation when attached to an Ethernet router or switch. If you allow the LAN speed and duplex mode to default to auto-negotiation, the OSA-Express port and the attached router or switch auto-negotiate the LAN speed and duplex mode settings between them. They then connect at the highest common performance speed and duplex mode of interoperation. If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express port examines the signal that it is receiving and connects at the speed and duplex mode of the device at the other end of the cable.
The OSA-Express5S 1000BASE-T Ethernet feature can be configured as CHPID type OSC, OSD, or OSE. Non-QDIO operation mode requires CHPID type OSE.
 
Notes: Consider the following points:
CHPID type OSM is not supported on z15 for user configurations. It is used only in DPM mode for internal management.
CHPID types OSN and OSX are not supported on z15.
The following settings are supported on the OSA-Express5S 1000BASE-T Ethernet feature port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified speed and duplex mode. If this specified speed and duplex mode do not match the speed and duplex mode of the signal on the cable, the OSA-Express port does not connect.
For more information about supported distances, see Table 4-8 on page 189.
OSA-Express features summary
The OSA-Express feature codes, cable type, maximum unrepeated distance, and the link rate on a z15 server are listed in Table 4-8.
Table 4-8 OSA features
Channel feature | Feature code | Bit rate | Cable type | Maximum unrepeated distance (MHz-km) (see note 1)
OSA-Express7S 25GbE SR1.1 | 0449 | 25 Gbps | MM 50 µm | 70 m (2000), 100 m (4700)
OSA-Express7S 25GbE SR | 0429 | 25 Gbps | MM 50 µm | 70 m (2000), 100 m (4700)
OSA-Express7S 10GbE LR | 0444 | 10 Gbps | SM 9 µm | 10 km (6.2 miles)
OSA-Express7S 10GbE SR | 0445 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)
OSA-Express7S GbE LX | 0442 | 1.25 Gbps | SM 9 µm | 5 km (3.1 miles)
OSA-Express7S GbE SX | 0443 | 1.25 Gbps | MM 62.5 µm; MM 50 µm | 275 m (200), 550 m (500)
OSA-Express7S 1000BASE-T | 0446 | 1000 Mbps | Cat 5 or Cat 6 unshielded twisted pair (UTP) | 100 m
OSA-Express6S 10GbE LR | 0424 | 10 Gbps | SM 9 µm | 10 km (6.2 miles)
OSA-Express6S 10GbE SR | 0425 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)
OSA-Express6S GbE LX | 0422 | 1.25 Gbps | SM 9 µm | 5 km (3.1 miles)
OSA-Express6S GbE SX | 0423 | 1.25 Gbps | MM 62.5 µm; MM 50 µm | 275 m (200), 550 m (500)
OSA-Express6S 1000BASE-T | 0426 | 100 or 1000 Mbps | Cat 5 or Cat 6 unshielded twisted pair (UTP) | 100 m
OSA-Express5S 10GbE LR | 0415 | 10 Gbps | SM 9 µm | 10 km (6.2 miles)
OSA-Express5S 10GbE SR | 0416 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)
OSA-Express5S GbE LX | 0413 | 1.25 Gbps | SM 9 µm | 5 km (3.1 miles)
OSA-Express5S GbE SX | 0414 | 1.25 Gbps | MM 62.5 µm; MM 50 µm | 275 m (200), 550 m (500)
OSA-Express5S 1000BASE-T (see note 2) | 0417 | 100 or 1000 Mbps | Cat 5 or Cat 6 unshielded twisted pair (UTP) | 100 m
1 Minimum fiber bandwidths in MHz/km for multimode fiber optic links are included in parentheses, where applicable.
2 With OSA-Express5S, the only link rate supported is 1000 Mbps.
25GbE RoCE Express2.1
25GbE RoCE Express2.1 (FC 0450) is installed in the PCIe+ I/O drawer and is supported only on IBM z15 servers. The 25GbE RoCE Express2.1 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
 
Switch configuration for RoCE Express2.1: If the IBM 25GbE RoCE Express2.1 features are connected to 25GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 25GbE RoCE Express2.1 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group. Mixing same speed RoCE features in the same z/OS SMC-R link group is allowed.
The maximum supported unrepeated distance, point-to-point, is 100 meters (328 feet). A client-supplied cable is required. Two types of cables can be used for connecting the port to the selected 25GbE switch or to the 25GbE RoCE Express2.1 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector, which supports 70 meters (229 feet)
OM4 50-micron multimode fiber optic cable that is rated at 4700 MHz-km that ends with an LC Duplex connector, which supports 100 meters (328 feet)
On IBM z15 servers, both ports are supported by z/OS and can be shared by up to 126 partitions (LPARs) per PCHID. The 25GbE RoCE Express2.1 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 25GbE switch are supported.
On z15, RoCE Express2 and 2.1 features support 63 Virtual Functions per port (126 VFs per feature).
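Because the RoCE Express features are native PCIe features, they are defined with FUNCTION statements rather than CHPIDs, as noted above. The following IOCP sketch shows the general pattern of giving two LPARs their own virtual functions on one physical port. It is an illustrative sketch only: the FID, VF, PCHID, port, PNETID, and partition names are assumptions, and the exact TYPE value for a particular RoCE feature should be taken from the IOCP documentation for your configuration.
*  Sketch: two VFs on one RoCE port, one per LPAR (values are examples;
*  the TYPE value shown is assumed - verify it for your feature)
FUNCTION FID=0010,VF=1,PCHID=15C,PORT=1,PNETID=PNETA,TYPE=ROC2,PART=((LPZOS1))
FUNCTION FID=0011,VF=2,PCHID=15C,PORT=1,PNETID=PNETA,TYPE=ROC2,PART=((LPZOS2))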
10GbE RoCE Express2.1
10GbE RoCE Express2.1 (FC 0432) is installed in the PCIe+ I/O drawer and is supported on IBM z15 servers. The 10GbE RoCE Express2.1 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
 
Switch configuration for RoCE Express2.1: If the IBM 10GbE RoCE Express2.1 features are connected to 10GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 10GbE RoCE Express2.1 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group. Mixing same speed RoCE features in the same z/OS SMC-R link group is allowed.
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 feet). A client-supplied cable is required. The following types of cables can be used for connecting the port to the selected 10GbE switch or to the 10GbE RoCE Express2.1 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector; supports 300 meters (984 feet)
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an LC Duplex connector; supports 82 meters (269 feet)
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with an LC Duplex connector; supports 33 meters (108 feet)
The 10GbE RoCE Express2.1 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 10GbE switch are supported.
On z15, RoCE Express2 and 2.1 support 63 Virtual Functions per port (126 VFs per feature).
25GbE RoCE Express2
25GbE RoCE Express2 (FC 0430) is installed in the PCIe+ I/O drawer and is supported on IBM z15 servers. The 25GbE RoCE Express2 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
The 25GbE RoCE Express2 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 25GbE switch are supported.
On z15, RoCE Express2 and 2.1 features support 63 Virtual Functions per port (126 VFs per feature).
 
Switch configuration for RoCE Express2: If the IBM 25GbE RoCE Express2 features are connected to 25GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 25GbE RoCE Express2 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group. Mixing same speed RoCE features in the same z/OS SMC-R link group is allowed.
The maximum supported unrepeated distance, point-to-point, is 100 meters (328 feet). A client-supplied cable is required. Two types of cables can be used for connecting the port to the selected 25GbE switch or to the 25GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector, which supports 70 meters (229 feet)
OM4 50-micron multimode fiber optic cable that is rated at 4700 MHz-km that ends with an LC Duplex connector, which supports 100 meters (328 feet)
10GbE RoCE Express2
10GbE RoCE Express2 (FC 0412) is installed in the PCIe+ I/O drawer and is supported on z15 servers. The 10GbE RoCE Express2 is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
On z15 servers, both ports are supported by z/OS and can be shared by up to 126 partitions (LPARs) per PCHID. The 10GbE RoCE Express2 feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both point-to-point connections and switched connections with an enterprise-class 10 GbE switch are supported.
On z15, RoCE Express2 and 2.1 features support 63 Virtual Functions per port (126 VFs per feature). Reliability, availability, and serviceability (RAS) were improved, and ECC double-bit error correction was added, starting with FC 0412.
 
Switch configuration for RoCE Express2: If the IBM 10GbE RoCE Express2 features are connected to 10GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 10GbE RoCE Express2 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group. Mixing same speed RoCE features in the same z/OS SMC-R link group is allowed.
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 feet). A client-supplied cable is required. The following types of cables can be used for connecting the port to the selected 10 GbE switch or to the 10GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector; supports 300 meters (984 feet)
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an LC Duplex connector; supports 82 meters (269 feet)
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with an LC Duplex connector; supports 33 meters (108 feet)
10GbE RoCE Express
The 10GbE RoCE Express feature (FC 0411) is installed in the PCIe+ I/O drawer. This feature is supported on z14, z14 ZR1, z13, z13s servers and can be carried forward during an MES upgrade to a z15.
The 10GbE RoCE Express is a native PCIe feature. It does not use a CHPID and is defined by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
Both ports are supported by z/OS and can be shared by up to 31 partitions (LPARs) per PCHID on z15, z14 ZR1, z14 M0x, z13s, and z13.
The 10GbE RoCE Express feature uses SR optics and supports the use of a multimode fiber optic cable that ends with an LC Duplex connector. Point-to-point connections and switched connections with an enterprise-class 10 GbE switch are supported.
 
Switch configuration for RoCE: If the IBM 10GbE RoCE Express features are connected to 10 GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 10GbE RoCE Express feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group. Mixing same speed RoCE features in the same z/OS SMC-R link group is allowed.
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 feet). A client-supplied cable is required. The following types of cables can be used for connecting the port to the selected 10 GbE switch or to the 10GbE RoCE Express feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an LC Duplex connector; supports 300 meters (984 feet)
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an LC Duplex connector; supports 82 meters (269 feet)
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with an LC Duplex connector; supports 33 meters (108 feet)
Shared Memory Communications functions
The Shared Memory Communication (SMC) capabilities of the z15 help optimize the communications between applications for server-to-server (SMC-R) or LPAR-to-LPAR (SMC-D) connectivity.
SMC-R
SMC-R provides application transparent use of the RoCE-Express feature. This feature reduces the network overhead and latency of data transfers, which effectively offers the benefits of optimized network performance across processors.
SMC-D
SMC-D was introduced with the Internal Shared Memory (ISM) virtual PCI function. ISM is a virtual PCI network adapter that enables direct access to shared virtual memory, which provides a highly optimized network interconnect for IBM Z intra-CPC communications.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes. SMC-D completes the overall SMC solution, which provides synergy with SMC-R.
SMC-R and SMC-D use a shared memory architectural concept, which eliminates TCP/IP processing in the data path yet preserves TCP/IP qualities of service for connection management purposes.
Internal Shared Memory (ISM)
ISM is a function that is supported by z15, z14 ZR1, z14 M0x, z13, and z13s machines. It is the firmware that provides connectivity by using shared memory access between multiple operating system images within the same CPC. ISM creates virtual adapters with shared memory that is allocated for each operating system image.
ISM is defined by the FUNCTION statement with a virtual CHPID (VCHID) in hardware configuration definition (HCD)/IOCDS. Identified by the PNETID parameter, each ISM VCHID defines an isolated, internal virtual network for SMC-D communication, without any hardware component required. Virtual adapters are defined by virtual function (VF) statements. Multiple LPARs can access the same virtual network for SMC-D data exchange by associating their VF with same VCHID.
Applications that use HiperSockets can realize network latency and CPU reduction benefits and performance improvement by using the SMC-D over ISM.
z15 servers support up to 32 ISM VCHIDs per CPC. Each VCHID supports up to 255 VFs, with a total maximum of 8,000 VFs.
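The following IOCP sketch illustrates the pattern that is described above: one ISM VCHID (one internal virtual network, identified by its PNETID) with one virtual function for each of two LPARs. The FID, VCHID, VF, PNETID, and partition names are illustrative assumptions only.
*  Sketch: one ISM virtual network shared by two LPARs for SMC-D
*  (all identifiers are examples)
FUNCTION FID=001A,VCHID=7C0,VF=1,PNETID=PNETA,TYPE=ISM,PART=((LPZOS1))
FUNCTION FID=001B,VCHID=7C0,VF=2,PNETID=PNETA,TYPE=ISM,PART=((LPZOS2))
For SMC-D to be used between the two LPARs, the ISM PNETID must match the PNETID of the OSA or RoCE interfaces over which the TCP connection is first established.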
HiperSockets
The HiperSockets function of z15 servers provides up to 32 high-speed virtual LAN attachments.
HiperSockets can be customized to accommodate varying traffic sizes. Because HiperSockets does not use an external network, it can free up system and network resources. This advantage can help eliminate attachment costs and improve availability and performance.
HiperSockets eliminates the need to use I/O subsystem operations and traverse an external network connection to communicate between LPARs in the same z15 server. HiperSockets offers significant value in server consolidation when connecting many virtual servers. It can be used instead of certain coupling link configurations in a Parallel Sysplex.
HiperSockets internal networks support the following transport modes:
Layer 2 (link layer)
Layer 3 (network or IP layer)
Traffic can be IPv4 or IPv6, or non-IP, such as AppleTalk, DECnet, IPX, NetBIOS, or SNA.
HiperSockets devices are protocol-independent and Layer 3-independent. Each HiperSockets device (Layer 2 and Layer 3 mode) features its own Media Access Control (MAC) address. This address allows the use of applications that depend on the existence of Layer 2 addresses, such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.
Layer 2 support helps facilitate server consolidation and can reduce complexity and simplify network configuration. It also allows LAN administrators to maintain the mainframe network environment similarly to non-mainframe environments.
Packet forwarding decisions are based on Layer 2 information instead of Layer 3. The HiperSockets device can run automatic MAC address generation to create uniqueness within and across LPARs and servers. The use of group MAC addresses for multicast is supported, as are broadcasts to all other Layer 2 devices on the same HiperSockets network.
Datagrams are delivered only between HiperSockets devices that use the same transport mode. A Layer 2 device cannot communicate directly to a Layer 3 device in another LPAR network. A HiperSockets device can filter inbound datagrams by VLAN identification, the destination MAC address, or both.
Analogous to the Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors, or multicast routers. This configuration enables the creation of high-performance and high-availability link layer switches between the internal HiperSockets network and an external Ethernet network. It also can be used to connect to the HiperSockets Layer 2 networks of different servers.
HiperSockets Layer 2 is supported by Linux on Z, and by z/VM for Linux guest use.
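A HiperSockets internal LAN is defined as a virtual CHPID of type IQD together with its control unit and devices. The following sketch is illustrative only; the CHPID, VCHID, control unit, and device numbers are assumptions, not a recommended configuration.
*  Sketch: one HiperSockets internal LAN (numbers are examples)
CHPID PATH=(CSS(0),FB),SHARED,TYPE=IQD,VCHID=7E1
CNTLUNIT CUNUMBR=0FB0,PATH=((CSS(0),FB)),UNIT=IQD
IODEVICE ADDRESS=(0FB0,016),CUNUMBR=(0FB0),UNIT=IQD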
z15 supports the HiperSockets Completion Queue function that is designed to allow HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary. This feature combines ultra-low latency with more tolerance for traffic peaks.
With the asynchronous support, data can be temporarily held until the receiver has buffers that are available in its inbound queue during high volume situations. The HiperSockets Completion Queue function requires the following minimum software levels9:
z/OS V2.2 with PTFs
Linux on Z distributions:
 – Red Hat Enterprise Linux (RHEL) 6.2
 – SUSE Linux Enterprise Server (SLES) 11 SP2
 – Ubuntu server 16.04 LTS
z/VSE V6.2
z/VM V6.4 with maintenance
In z/VM V6.4 and newer, the virtual switch function transparently bridges a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to communicate directly with the following systems:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
RoCE Express features summary
The RoCE Express feature codes, cable type, maximum unrepeated distance, and the link rate on a z15 server are listed in Table 4-9.
Table 4-9 RoCE Express features summary
Channel feature | Feature code | Bit rate | Cable type | Maximum unrepeated distance (MHz-km) (see note 1)
25GbE RoCE Express2.1 | 0450 | 25 Gbps | MM 50 µm | 70 m (2000), 100 m (4700)
10GbE RoCE Express2.1 | 0432 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)
25GbE RoCE Express2 | 0430 | 25 Gbps | MM 50 µm | 70 m (2000), 100 m (4700)
10GbE RoCE Express2 | 0412 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)
10GbE RoCE Express | 0411 | 10 Gbps | MM 62.5 µm; MM 50 µm | 33 m (200), 82 m (500), 300 m (2000)

1 Minimum fiber bandwidths in MHz/km for multimode fiber optic links are included in parentheses, where applicable.
4.6.4 Parallel Sysplex connectivity
Coupling links are required in a Parallel Sysplex configuration to provide connectivity from the z/OS images to the coupling facility (CF). A properly configured Parallel Sysplex provides a highly reliable, redundant, and robust IBM Z technology solution to achieve near-continuous availability. A Parallel Sysplex is composed of one or more z/OS operating system images that are coupled through one or more CFs.
This section describes coupling link features supported in a Parallel Sysplex in which a z15 can participate.
Coupling links
The type of coupling link that is used to connect a CF to an operating system LPAR is important. The link performance significantly affects response times and coupling processor usage. For configurations that extend over large distances, the time that is spent on the link can be the largest part of the response time.
IBM z15 supports three coupling link types:
Integrated Coupling Adapter Short Reach (ICA SR) links connect directly to the CPC drawer and are intended for short distances between CPCs of up to 150 meters (492.1 feet).
Coupling Express Long Reach (CE LR) adapters are in the PCIe+ I/O drawer and support unrepeated distances of up to 10 km (6.21 miles) or up to 100 km (62.1 miles) over qualified WDM services.
Internal Coupling (IC) links are for internal links within a CPC.
 
Attention: Parallel Sysplex supports connectivity between systems that differ by up to two generations (n-2). For example, an IBM z15 can participate in an IBM Parallel Sysplex cluster with z14, z14 ZR1, z13, and z13s systems.
However, the IBM z15 and IBM z14 ZR1 do not support InfiniBand connectivity so these servers support connectivity by using only Integrated Coupling Adapter Short Reach (ICA SR) and Coupling Express Long Reach (CE LR) features. z15 can connect to z13 and z13s only if these servers have ICA SR or CE LR coupling features.
Figure 4-4 shows the following supported Coupling Link connections for the z15:
InfiniBand links are supported between z13, z13s and z14 machines
Only ICA SR and CE LR links are supported on z15 and z14 ZR1 machines
Figure 4-4 Parallel Sysplex connectivity options
The coupling link options are listed in Table 4-10, along with the coupling link support for each IBM Z platform. Restrictions on the maximum numbers can apply, depending on the configuration. Always check with your IBM support team for more information.
Table 4-10 Coupling link options that are supported on z15
Type | Description | Feature code | Link rate | Maximum unrepeated distance | Maximum number of supported links (z15 / z14 ZR1 / z14 / z13s / z13)
CE LR | Coupling Express LR | 0433 | 10 Gbps | 10 km (6.2 miles) | 64 / 32 / 64 / 32 / 64
ICA SR1.1 | Integrated Coupling Adapter | 0176 | 8 GBps | 150 meters (492 feet) | 96 / N/A / N/A / N/A / N/A
ICA SR | Integrated Coupling Adapter | 0172 | 8 GBps | 150 meters (492 feet) | 96 / 16 / 80 / 16 / 40
IC | Internal Coupling link | N/A | Internal speeds | N/A | 64 / 32 / 32 / 32 / 32
HCA3-O LR | InfiniBand Long Reach (1x IFB) | 0170 | | 10 km (6.2 miles) | N/A / N/A / 64 / 32 / 64
HCA3-O | InfiniBand (12x IFB) | 0171 | | 150 meters (492 feet) | N/A / N/A / 32 / 16 / 32
For some platforms, the maximum number of supported links depends on the IBM Z model or capacity feature code.
The z15 ICA SR maximum depends on the number of CPC drawers. Up to 12 PCIe+ fanout slots are available per CPC drawer, which gives a maximum of 24 ICA SR ports per drawer. The z15 maximum for ICA SR1.1 and ICA SR ports combined is 96.
For more information about distance support for coupling links, see System z End-to-End Extended Distance Guide, SG24-8047.
Internal Coupling link
IC links are Licensed Internal Code-defined links to connect a CF to a z/OS logical partition in the same CPC. These links are available on all IBM Z platforms. The IC link is an IBM Z coupling connectivity option that enables high-speed, efficient communication between a CF partition and one or more z/OS logical partitions that are running on the same CPC. The IC is a linkless connection (implemented in LIC) and does not require any hardware or cabling.
An IC link is a fast coupling link that uses memory-to-memory data transfers. IC links do not have PCHID numbers, but do require CHPIDs.
IC links have the following attributes:
They provide the fastest connectivity that is significantly faster than external link alternatives.
They result in better coupling efficiency than with external links, effectively reducing the CPU cost that is associated with Parallel Sysplex.
They can be used in test or production configurations, reduce the cost of moving into Parallel Sysplex technology, and enhance performance and reliability.
They can be defined as spanned channels across multiple channel subsystems.
They are available at no extra hardware cost (no feature code). Employing ICFs with IC links results in considerable cost savings when configuring a cluster.
IC links are enabled by defining CHPID type ICP. A maximum of 64 IC links can be defined on an IBM z15 CPC.
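IC links are defined as a pair of ICP CHPIDs that point at each other; no PCHID is involved. A minimal sketch follows, with illustrative CHPID numbers and a single channel subsystem assumed.
*  Sketch: one Internal Coupling link between a CF LPAR and z/OS LPARs
*  in the same CPC (CHPID numbers are examples)
CHPID PATH=(CSS(0),F0),SHARED,TYPE=ICP,CPATH=(CSS(0),F1)
CHPID PATH=(CSS(0),F1),SHARED,TYPE=ICP,CPATH=(CSS(0),F0)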
Integrated Coupling Adapter Short Range
The ICA SR (FC 0172) was introduced with the IBM z13. z15 introduces ICA SR1.1 (FC 0176). ICA SR and ICA SR1.1 are two-port, short-distance coupling features that allow the supported IBM Z systems to connect to each other. ICA SR and ICA SR1.1 use coupling channel type CS5. The ICA SR uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for coupling. ICA SR1.1 uses PCIe Gen4 technology, with x16 lanes that are bifurcated into x8 lanes for coupling.
The ICA SR and ICA SR1.1 are designed to drive distances up to 150 meters (492 feet) and support a link data rate of 8 GBps. Each port is designed to support up to four CHPIDs and seven subchannels (devices) per CHPID.
For more information, see IBM Z Planning for Fiber Optic Links (FICON/FCP, Coupling Links, and Open System Adapters), GA23-1407, which is available at this web page.
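In IOCP terms, an ICA SR port is identified by the adapter ID (AID) of the fanout and a port number rather than by a PCHID. The following single-CHPID sketch is illustrative only; the CHPID number, AID, and port are assumptions, and the connection to the peer system (normally defined through HCD) is omitted.
*  Sketch: one CS5 coupling CHPID on an ICA SR port (values are
*  examples; peer-system connection definitions are not shown)
CHPID PATH=(CSS(0),F4),SHARED,TYPE=CS5,AID=00,PORT=1
Coupling Express LR ports are defined similarly as CHPID type CL5 but, because that feature sits in the PCIe+ I/O drawer, they are associated with a PCHID rather than an AID.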
Coupling Express Long Reach
The Coupling Express LR occupies one slot in a PCIe I/O drawer or PCIe+ I/O drawer10. It allows the supported IBM Z systems to connect to each other over extended distance. The Coupling Express LR (FC 0433) is a two-port feature that uses coupling channel type CL5.
The Coupling Express LR uses 10GbE RoCE technology and is designed to drive distances up to 10 km (6.21 miles) unrepeated and support a link data rate of 10 Gigabits per second (Gbps). For distance requirements greater than 10 km (6.21 miles), clients must use a Wavelength Division Multiplexer (WDM). The WDM vendor must be qualified by IBM Z.
Coupling Express LR is designed to support up to four CHPIDs per port, 32 buffers (that is, 32 subchannels) per CHPID. The Coupling Express LR feature is in the PCIe+ I/O drawer on IBM z15.
For more information, see IBM Z Planning for Fiber Optic Links (FICON/FCP, Coupling Links, Open Systems Adapters, and zHyperLink Express), GA23-1408, which is available at this web page.
Extended distance support
For more information about extended distance support, see System z End-to-End Extended Distance Guide, SG24-8047.
Migration considerations
Upgrading from previous generations of IBM Z systems in a Parallel Sysplex to z15 servers in that same Parallel Sysplex requires proper planning for coupling connectivity. Planning is important because of the change in the supported type of coupling link adapters and the number of available fanout slots of the z15 CPC drawers.
The ICA SR fanout features provide short-distance connectivity to another z15, z14 ZR1, z14, z13s, or z13 server.
The CE LR adapter provides long-distance connectivity to another z15, z14 ZR1, z14, z13s, or z13 server.
The z15 server fanout slots in the CPC drawer provide coupling link connectivity through the ICA SR fanout cards. In addition to coupling links for Parallel Sysplex, the fanout cards provide connectivity for the PCIe+ I/O drawer (PCIe+ Gen3 fanout).
Up to 12 PCIe fanout cards can be installed in a z15 CPC drawer.
Migrating from an older generation machine to a z15 without disruption in a Parallel Sysplex environment requires that the older machines are no more than two generations back (n-2, that is, at least z13) and that they have enough coupling links to stay connected to the existing systems while also connecting to the new machine. The n-2 rule and sufficient coupling links are necessary so that individual components (z/OS LPARs and CFs) can be shut down and moved to the target machine while remaining connected to the other systems.
It is beyond the scope of this book to describe all possible migration scenarios. Always consult with subject matter experts to help you to develop your migration strategy.
Coupling links and Server Time Protocol
All external coupling links can be used to pass time synchronization signals by using Server Time Protocol (STP). STP is a message-based protocol in which timing messages are passed over data links between servers. The same coupling links can be used to exchange time and CF messages in a Parallel Sysplex.
The use of the coupling links to exchange STP messages has the following advantages:
STP can scale with distance by using the same links to exchange STP messages and CF messages in a Parallel Sysplex. Servers that are exchanging messages over short distances, such as IFB or ICA SR links, can meet more stringent synchronization requirements than servers that exchange messages over long IFB LR links, with distances up to 100 kilometers (62 miles). This advantage is an enhancement over the IBM Sysplex Timer implementation, which does not scale with distance.
Coupling links also provide the connectivity that is necessary in a Parallel Sysplex. Therefore, a potential benefit can be realized of minimizing the number of cross-site links that is required in a multi-site Parallel Sysplex.
Between any two servers that are intended to exchange STP messages, configure each server so that at least two coupling links exist for communication between the servers. This configuration prevents the loss of one link from causing the loss of STP communication between the servers. If a server does not have a CF LPAR, timing-only links can be used to provide STP connectivity.
4.7 Cryptographic functions
Cryptographic functions are provided by the CP Assist for Cryptographic Function (CPACF) and the PCI Express cryptographic adapters. z15 servers support the Crypto Express7S feature; Crypto Express6S and Crypto Express5S are supported as carry forward only.
4.7.1 CPACF functions (FC 3863)
FC 386311 is required to enable CPACF functions.
4.7.2 Crypto Express7S feature (FC 0898 and FC 0899)
The Crypto Express7S represents the newest generation of the Peripheral Component Interconnect Express (PCIe) cryptographic coprocessors, which are an optional feature that is available on the z15. These coprocessors are Hardware Security Modules (HSMs) that provide high-security cryptographic processing as required by banking and other industries.
This feature provides a secure programming and hardware environment wherein crypto processes are performed. Each cryptographic coprocessor includes general-purpose processors, non-volatile storage, and specialized cryptographic electronics, which are all contained within a tamper-sensing and tamper-responsive enclosure that eliminates all keys and sensitive data on any attempt to tamper with the device. The security features of the HSM are designed to meet the requirements of FIPS 140-2, Level 4, which is the highest security level defined.
The Crypto Express7S two port includes two PCIe adapters; the Crypto Express7S one port includes one PCIe adapter per feature. For availability reasons, a minimum of two features is required for the one port feature. Up to eight Crypto Express7S two port features are supported. The maximum number of the one-port features is 16. The Crypto Express7S feature occupies one I/O slot in a PCIe+ I/O drawer.
Each adapter can be configured as a Secure IBM CCA coprocessor, Secure IBM Enterprise PKCS #11 (EP11) coprocessor, or accelerator.
Crypto Express7S provides domain support for up to 85 logical partitions.
The accelerator function is designed for maximum-speed Secure Sockets Layer and Transport Layer Security (SSL/TLS) acceleration, rather than for specialized financial applications for secure, long-term storage of keys or secrets. The Crypto Express7S can also be configured as one of the following configurations:
The Secure IBM CCA coprocessor includes secure key functions with emphasis on the specialized functions that are required for banking and payment card systems. It is optionally programmable to add custom functions and algorithms by using User Defined Extensions (UDX).
A new mode, called Payment Card Industry (PCI) PIN Transaction Security (PTS) Hardware Security Module (HSM) (PCI-HSM), is available exclusively for Crypto Express6S in CCA mode. PCI-HSM mode simplifies compliance with PCI requirements for hardware security modules.
The Secure IBM Enterprise PKCS #11 (EP11) coprocessor implements an industry-standardized set of services that adheres to the PKCS #11 specification v2.20 and more recent amendments. It was designed for extended FIPS and Common Criteria evaluations to meet industry requirements.
This cryptographic coprocessor mode introduced the PKCS #11 secure key function.
 
TKE feature: The Trusted Key Entry (TKE) Workstation feature is required for supporting the administration of the Crypto Express7S when it is configured as an Enterprise PKCS #11 coprocessor or when managing the CCA mode PCI-HSM.
When the Crypto Express7S PCI Express adapter is configured as a secure IBM CCA co-processor, it still provides accelerator functions. However, up to 3x better performance for those functions can be achieved if the Crypto Express7S PCI Express adapter is configured as an accelerator.
CCA enhancements include the ability to use triple-length (192-bit) Triple-DES (TDES) keys for operations, such as data encryption, PIN processing, and key wrapping to strengthen security. CCA also extended the support for the cryptographic requirements of the German Banking Industry Committee Deutsche Kreditwirtschaft (DK).
Several features that support the use of the AES algorithm in banking applications also were added to CCA. These features include the addition of AES-related key management features and the AES ISO Format 4 (ISO-4) PIN blocks as defined in the ISO 9564-1 standard. PIN block translation and the use of AES PIN blocks in other CCA callable services are supported. IBM continues to add enhancements as AES finance industry standards are released.
4.7.3 Crypto Express6S feature (FC 0893) as carry forward only
Crypto Express6S was introduced with z14 servers. On the initial configuration, a minimum of two features are installed. The number of features then increases one at a time up to a maximum of 16 features.
Each Crypto Express6S feature holds one PCI Express cryptographic adapter. Each adapter can be configured by the installation as a Secure IBM Common Cryptographic Architecture (CCA) coprocessor, as a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or as an accelerator.
The tamper-resistant hardware security module, which is contained on the Crypto Express6S feature, conforms to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification. It supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as an IBM CCA coprocessor).
The following CCA compliance levels are available:
Non-compliant (default)
PCI-HSM 2016
PCI-HSM 2016 (migration, key tokens while migrating to compliant)
The following EP11 compliance levels are available (Crypto Express6S and Crypto Express5S):
FIPS 2009 (default)
FIPS 2011
BSI 2009
BSI 2011
Each Crypto Express6S feature occupies one I/O slot in the PCIe+ I/O drawer and has no CHPID assigned. However, it includes one PCHID.
4.7.4 Crypto Express5S feature (FC 0890) as carry forward only
Crypto Express5S was introduced with z13 servers. On the initial configuration, a minimum of two features are installed. The number of features then increases individually to a maximum of 16 features.
Each Crypto Express5S feature holds one PCI Express cryptographic adapter. Each adapter can be configured by the installation as a Secure IBM CCA coprocessor, as a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or as an accelerator.
Each Crypto Express5S feature occupies one I/O slot in the PCIe+ I/O drawer and has no CHPID assigned. However, it includes one PCHID.
4.8 Integrated Firmware Processor
The Integrated Firmware Processor (IFP) was introduced with the zEC12 and zBC12 servers. The IFP is dedicated to managing the native PCIe features. The following features are installed in the PCIe+ I/O drawer and are managed by the IFP:
25GbE RoCE Express2.1
10GbE RoCE Express2.1
25GbE RoCE Express2
10GbE RoCE Express2
10GbE RoCE Express
Coupling Express Long Reach (CE LR)
All native PCIe features should be ordered in pairs for redundancy. The features are assigned to one of the four resource groups (RGs) that run on the IFP, according to their physical location in the PCIe+ I/O drawer. The RGs provide management and virtualization functions.
If two features of the same type are installed, one always is managed by resource group 1 (RG 1) or resource group 3 (RG 3) while the other feature is managed by resource group 2 (RG 2) or resource group 4 (RG 4). This configuration provides redundancy if one of the features or resource groups needs maintenance or fails.
The IFP and RGs support the following infrastructure management functions:
Firmware update of adapters and resource groups
Error recovery and failure data collection
Diagnostic and maintenance tasks
 

1 OSA-Express5S and 6S 1000BASE-T features do not have optics (copper only, RJ45 connectors).
2 Multifiber Termination Push-On.
3 For more information, see this web page: https://cw.infinibandta.org/document/dl/7157
4 Certain I/O features do not have external ports, such as Crypto Express.
5 zHyperLink feature operates with a FICON channel.
6 Check with the switch provider for their support statement.
7 CKD data records are handled by using IBM Enhanced Count Key Data (ECKD) command set.
8 CKD data records are handled by using IBM Enhanced Count Key Data (ECKD) command set.
9 Minimum OS support for z15 can differ. For more information, see Chapter 7, “Operating system support” on page 255.
10 The PCIe+ I/O drawer (FC 4021) is introduced with z15 and is built in a 19-inch format. FC 4021 contains 16 I/O slots and can host up to 16 PCIe I/O features (adapters). The PCIe I/O drawer (FC 4032 on z14) cannot be carried forward during an MES upgrade to z15; z15 supports only PCIe+ I/O drawers.
11 Subject to export regulations.