Planning considerations for CPC in PR/SM mode
This chapter describes the following scenarios for planning and configuration of an IBM z14 Model ZR1 server by using traditional tools (CPC in PR/SM mode):
Upgrading or migrating an existing configuration to a z14 ZR1
Installing a new z14 ZR1
 
Dynamic Partition Manager: For more information about planning and configuration for a CPC by using Dynamic Partition Manager, see Chapter 16, “Configuring a z14 ZR1 server by using Dynamic Partition Manager” on page 387.
Whenever possible, worksheets that support the planning tasks that are described in this chapter are provided. Throughout this book, we provide various definition examples that use hardware configuration definition (HCD) as the preferred method for configuration. Other tools, such as Hardware Configuration Manager (HCM) and ICP/IOCP, are included for reference only.
This chapter also provides a short overview of tools that IBM provides to help with the complexity of configuring a z14 ZR1, and information about where to obtain the tools and their intended use.
This chapter includes the following topics:
2.1, "Scenarios"
2.2, "Tools"
2.3, "IBM Resource Link"
2.4, "Hardware Configuration Definition tool"
2.5, "CHPID Mapping Tool"
2.6, "Other tools"
2.7, "Hardware Management Console/Support Element setup"
2.8, "Activities centered on the IODF"
2.1 Scenarios
Throughout this book, we use two distinct scenarios by which we explain the tasks and procedures that are involved to successfully install and configure a z14 ZR1 server.
2.1.1 Scenario 1: Upgrading an existing IBM Z server to a z14 ZR1
This scenario assumes an existing IBM Z environment in which the IBM Z server is upgraded to a z14 ZR1 by using miscellaneous equipment specifications (MES). The scenario includes a planned outage period for the physical upgrade of the machine. The software environments that are supported by this machine are not available during this period. The serial number of the machine remains the same after the upgrade.
2.1.2 Scenario 2: Installing a new z14 ZR1 server
This scenario assumes that a new z14 ZR1 is installed in a mainframe environment. The z14 ZR1 machine is physically installed along with an existing IBM Z machine. After the installation of the z14 ZR1 is successfully completed and the system is handed over by the IBM service representative, the software environment on the machine to be replaced must be stopped and recabling actions must be performed.
When recabling is complete, postinstallation activities must be performed and the software environment can be brought back online on the new system (z14 ZR1). An outage still must be planned for this scenario. Because the new system has a new serial number, software keys for the new system must be available.
2.1.3 Differences in planning for the two scenarios
In the first scenario, the physical platform identity to be configured remains the same. No hardware configuration files must be physically migrated to another platform. Because the machine serial number remains the same after the upgrade, no changes to the software licenses are required.
In the second scenario, the physical platform to be configured changes. Hardware configuration files must be prepared on the existing machine, and must be migrated to the new z14 ZR1 server with the attached cabling. The serial number changes with the activation of the z14 ZR1 machine, which means that planning and preparing for software license changes must be considered.
In both scenarios, we assume that bringing up the existing features and functions has the highest priority. Adding features and functions that were acquired with the system upgrade or installed in the new z14 ZR1 has a lower priority. The elapsed time of the planned outage can vary significantly, depending on the approach that is chosen in either scenario.
In both scenarios, the following information must be obtained before starting the process of changing to or installing the new z14 ZR1:
The new processor ID: The processor ID is used to assign a unique name to identify the processor in the HCD. For more information, see z/OS HCD User's Guide, SC34-2669.
The CFReport file: The CFReport file is downloadable from IBM Resource Link® by entering the Configuration Control Number (CCN). The CCN is provided by your IBM representative.
The system serial number: If a new z14 ZR1 is installed, a new serial number is provided by your IBM representative.
2.2 Tools
IBM provides several tools to help with the complexity of configuring an IBM Z server. This section summarizes the various tools that are available for the IBM Z platform. It also briefly outlines their benefits for the planning process.
The machine types for the current IBM Z platform are listed in Table 2-1.
Table 2-1 Machine types
Server name                        Server short name   Machine type (M/T)
IBM Z z14 Model ZR1                z14 ZR1             3907
IBM Z z14                          z14                 3906
IBM z Systems® z13s®               z13s                2965
IBM z Systems z13®                 z13                 2964
IBM zEnterprise® BC12              zBC12               2828
IBM zEnterprise EC12               zEC12               2827
IBM zEnterprise 114                z114                2818
IBM zEnterprise 196                z196                2817
IBM System z10® Business Class     z10 BC              2098
IBM System z10 Enterprise Class    z10 EC              2097
IBM System z9® Business Class      z9 BC               2096
IBM System z9 Enterprise Class     z9 EC               2094
The examples in this book use tools, such as HCD and the channel-path identifier (CHPID) Mapping Tool (CMT), that refer to the machine type rather than the server name. For more information, see Chapter 4, “Mapping CHIDs to CHPIDs by using the CMT” on page 49.
2.3 IBM Resource Link
The first step in planning for the installation of the z14 ZR1 is to access IBM Resource Link. You must register with Resource Link by providing a client site number, ID, and a valid email address. Your IBM representative can assist you with the registration process. After you have an IBM ID, you can customize your profile to accommodate the servers for which you are responsible.
On the Resource Link website, you can access various resources and tools that are designed to help the installation process. Several tools are available to simplify the installation process of a z14 ZR1 server. Even if you worked with most of these tools before, be sure to check for the latest versions that are relevant to z14 ZR1.
The Education and Library tabs on the website display information about the IBM Z family and some online tutorials. Under the Tools tab, you can download the latest version of the most frequently used tools and obtain system and configuration information.
2.4 Hardware Configuration Definition tool
HCD is an application that runs on z/OS and IBM z/VM and supplies an interactive dialog to generate the input/output definition file (IODF) and the input/output configuration data set (IOCDS). Generally, use HCD or HCM to generate the I/O configuration, rather than writing your own IOCP statements.
HCD performs validation as you enter the data, thus minimizing the risk of errors. This book provides examples for using HCD, with some examples of the use of HCM (see 2.4.1, “Hardware Configuration Manager” on page 10).
New hardware (z14 ZR1) requires program temporary fixes (PTFs) to enable definition support in HCD.
For the most current information about HCD, see the Hardware Configuration page.
When defining devices in HCD, the hardware features can be selected according to the physical setup of the devices that are attached to the z14 ZR1. Detailed forms and charts that describe the current environment facilitate the planning process.
2.4.1 Hardware Configuration Manager
HCM provides a graphical user interface to HCD and the associated IODF. HCM runs on a workstation and can also define and store more information about the physical hardware to which the IODF is defined.
HCM does not replace HCD. It is used with HCD and the associated IODF. However, HCM can be used in a stand-alone mode after an IODF is built and the configuration files (IODF##.HCM or IODF##.HCR) are created on your HCM workstation.
For the most current information about HCM, see the Hardware Configuration page.
2.5 CHPID Mapping Tool
The CMT provides a mechanism to map physical channel IDs (PCHIDs) to CHPIDs as required on a z14 ZR1. The CMT is optional but is preferred to manually mapping the PCHIDs to CHPIDs. The use of the CMT provides the best availability recommendations for a particular configuration.
The following files are needed to obtain an IODF file that contains the correct PCHID numbers by using CMT:
A production IODF file without PCHID numbers. For more information about how to obtain this file, see Chapter 4, “Mapping CHIDs to CHPIDs by using the CMT” on page 49.
The CFReport file reflecting the physical configuration of the ordered z14 ZR1 server, which is obtained from the Resource Link website. The CCN is generated by your IBM Client Representative when building the order for your configuration.
2.5.1 HCD and the CMT
The HCD process flow for a new z14 ZR1 installation is shown in Figure 2-1.
Figure 2-1 CMT: I/O configuration definition flow for a new installation
Some of the tasks that are shown in Figure 2-1 might also apply to an upgrade, depending on the hardware configuration of the upgraded machine.
To download the CMT, log in to the Resource Link site by using a registered Resource Link ID.
For more information, see the CHPID Mapping Tool User's Guide, GC28-6984.
For more information about how to use the CMT, see Chapter 4, “Mapping CHIDs to CHPIDs by using the CMT” on page 49.
2.6 Other tools
The tools that are described in this section are not referenced in this book. However, they can help speed up the process of planning and configuring for specific topics that are outside of this book.
2.6.1 Input/output configuration program
ICP IOCP Version 5 Release 4 or later is required for a z14 ZR1 server. You can define the z14 ZR1 configuration by using only IOCP. However, HCD is suggested because of its verification and validation capabilities. By using ICP IOCP, it is possible to write an IOCDS in preparation for a CPC upgrade.
For more information about the changes and requirements for ICP IOCP, see IBM Z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7172.
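Because ICP IOCP processes plain configuration statements, a small deck is often the easiest way to see how the pieces fit together. The following minimal sketch shows the basic building blocks (ID, RESOURCE, CHPID, CNTLUNIT, and IODEVICE statements) for one FICON-attached disk control unit; the LPAR names, PCHID, CHPID, control unit, and device numbers are illustrative placeholders only and are not taken from a real configuration:

  ID       MSG1='SAMPLE Z14 ZR1',SYSTEM=(3907,1)
  RESOURCE PARTITION=((CSS(0),(LPAR01,1),(LPAR02,2)))
  CHPID    PATH=(CSS(0),50),SHARED,PCHID=11C,TYPE=FC
  CNTLUNIT CUNUMBR=5000,PATH=((CSS(0),50)),UNITADD=((00,064)),UNIT=2107
  IODEVICE ADDRESS=(5000,064),CUNUMBR=(5000),UNIT=3390B,STADET=Y

When you use HCD, equivalent statements are generated for you during the IOCDS build, which is one reason HCD (with its validation capabilities) is the preferred method.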
2.6.2 World Wide Port Name Prediction Tool
The Worldwide Port Name (WWPN) Prediction Tool for IBM Z Fibre Channel Protocol (FCP) Channels helps prepare configuration files that are required or generated by the IBM Z platform when FCP Channels are installed. In particular, this tool helps during the installation of new systems and system upgrades.
Among the most important configuration parameters are WWPNs, which uniquely identify physical or virtual Fibre Channel ports. They are typically used in Fibre Channel Storage Area Network (SAN) switches to assign the corresponding ports to zones of a SAN. They are also used in storage subsystems to grant access from these ports to specific storage devices that are identified by logical unit numbers (LUNs).
The capability of the WWPN Prediction Tool is extended to calculate and show WWPNs for both virtual and physical ports before system installation.
The WWPN Prediction Tool is available for download from IBM Resource Link and is applicable to all FICON® channels that are defined as CHPID type FCP (for communication with SCSI devices) on z14 ZR1. For more information about this tool, see this web page (IBMid required).
WWPN Persistence
The FCP WWPNs are determined based on the I/O serial number of the CPC, the IOCDS configuration details (for NPIV WWPNs), and the PCHID values (for physical WWPNs). With the introduction of the z13, the WWPN Persistence configuration option was introduced. When FC 0099 (WWPN Persistence) is ordered as part of a new or upgraded configuration for a z14 ZR1, the I/O serial number part of the WWPN for the new z14 ZR1 is the same serial number as for the source machine configuration.
For more information, see the Techdocs website.
2.6.3 Coupling Facility Structure Sizer
Moving to a new z14 ZR1 means migrating to a higher CFCC level (CFCC level 22). If your existing CF data structures are adequately sized, and you want to know how much these structures might need to grow to accommodate the same workload at the new CFCC level, you can use the current structure sizes to calculate the new sizes. The Coupling Facility Structure Sizer (CFSizer) Tool helps you evaluate the sizing of the CF structures.
Use the CFSizer tool to plan the amount of storage that must be allocated for coupling facility partitions more accurately. For more information about this tool, see the CFSizer page.
2.6.4 Power Estimation Tool
The Power Estimation Tool is a web-based tool that estimates the power consumption of your IBM Z server configuration. The tool also estimates the machine’s weight.
For more information about this tool, see the IBM Resource Link.
2.6.5 Shared Memory Communications Applicability Tool
The Shared Memory Communications (SMC) Applicability Tool (SMCAT) helps customers determine the value of SMC-R and SMC-D in their environment with minimal effort and minimal impact.
SMCAT is integrated within the TCP/IP stack and gathers new statistics that are used to project SMC applicability and benefits for the current system. For more information, see the Shared Memory Communications Reference Information website.
2.6.6 zBNA Tool
zBNA is a PC-based productivity tool that provides a means of estimating the elapsed time for batch jobs solely based on the differences in CPU speeds for a base processor and a target processor, the number of engines on each system, and system capacities. Data sharing is not considered. zBNA provides a powerful, graphical view of the z/OS batch window.
The zBNA Tool also provides the capability to project the benefits of deploying the zEDC Express feature and the ability to estimate the benefit of zHyperLink I/O activity.
The zBNA tool and its User's Guide can be downloaded from the IBM z Systems Batch Network Analyzer (zBNA) Tool website.
2.7 Hardware Management Console/Support Element setup
This section introduces the configuration and management tools and procedures available on the Hardware Management Console (HMC) and the Support Element (SE).
2.7.1 Defining the HMC Activation Profiles
Activation profiles must be customized by using the HMC. Activation profiles are required for central processor complex (CPC) and CPC image activation. They are used to tailor the operation of a CPC and are stored in the SE that is associated with the CPC. The following types of activation profiles are available:
Reset: A reset profile is used to activate a CPC and its images.
Image: An image profile is used to activate an image of a previously activated CPC.
Load: A load profile is used to load an activated image with a control program or operating system.
Group: A group profile is used to define the group capacity value for all logical partitions belonging to that group.
Default profiles of each of these types are provided. The Activate task activates the CPC or CPC image. Initially, the Default profile is selected. You can specify an activation profile other than the Default. This feature provides you with the capability to have multiple profiles, for example one for every IOCDS file managed by the CPC.
Reset Profile
Every CPC in the processor cluster needs a reset profile to determine the mode in which the CPC Licensed Internal Code is loaded and how much main storage is used. Using the reset profile, you must provide the order in which the LPARs are activated during power-on reset (POR). The maximum number of Reset profiles that is allowed for each CPC is 26.
Image Profile
Select the appropriate RESET profile and within the profile, select the appropriate IOCDS. The list of LPARs that are defined in the IOCDS is displayed. Parameters must be set for each LPAR before it can be activated and IPLed. The parameters for each LPAR define the following settings:
General: The mode of operation and its identifier
Processor: The number of logical CPs, zIIPs, and the weight assigned to the processor
Security: The security options for this LPAR
Storage: Memory and Virtual Flash Memory assigned to this LPAR
Options: The I/O priority and defined capacity options
Load: The load parameters necessary to IPL this LPAR
Crypto: The Crypto Express parameters (also see 2.7.2, “Cryptographic configuration” on page 15)
 
Note: To help you gather the necessary input, a worksheet is provided with this book. For more information about downloading the worksheet that is associated with this material, see Appendix A, “Additional material” on page 429.
For more information about how to define an Image Profile, see 5.4, “Creating an Image Profile on the 3907 Support Element” on page 99.
Load profile
A Load profile is needed to define the channel address of the device from which the operating system is loaded. Depending on the SE model and machine type, the maximum number of Load profiles that are allowed for each CPC is 511.
Group profile
A Group profile defines the group capacity value that is used to determine the allocation and management of the processor resources that are assigned to the logical partitions in a group.
2.7.2 Cryptographic configuration
The activation profile that you use to activate a logical partition prepares it for running software products that use the Crypto Express feature. The use of the feature’s cryptographic facilities and functions requires customizing the logical partition’s activation profile to complete the following tasks:
Install the CP Assist for Cryptographic Functions (CPACF) DES/TDES Enablement feature if you are planning to use ICSF.
Provide it access to at least one Crypto Express feature. This goal is accomplished by selecting from the Usage Domain Index and the Cryptographic Candidate list.
Load it with an operating system, such as z/OS, that supports the use of cryptographic functions.
2.7.3 Defining the LPAR Group Control
The following methods can be used to limit the processor capacity usage for a group of LPARs and help you control software cost:
Group Capacity caps the processor consumption of a group of LPARs based on the four-hour rolling average (4HRA).
LPAR group absolute capping value is independent of the four-hour rolling average consumption and limits the amount of physical processor capacity that is used by a group of LPARs.
Both of these methods can be used concurrently and in combination with LPAR capping.
Consider reevaluating the parameters in a scenario where the values must be migrated from a previous generation CPC to a z14 ZR1 so that they fit the new CPC.
 
Tip: Capacity management that uses capping technologies is an ongoing process that must be monitored and adjusted over time. Temporary or permanent capacity changes also must be considered when capping technologies are used.
2.7.4 Defining the Console (HMC part)
The OSA-ICC function of the OSA-Express 1000Base-T feature supports TN3270 enhancements (TN3270E) and non-SNA distributed function terminal (DFT) 3270 emulation. Planning for an IBM z14 Model ZR1 OSA-ICC implementation requires input from the following disciplines within a customer organization:
IBM Z server I/O subsystem configuration
Operating system configuration
OSA-Express feature configuration
Ethernet LAN configuration
Client TN3270E configuration
The OSA-Express feature configuration requires configuration tasks to be performed on the HMC by using the OSA Advanced Facilities task. Collect information for the following parameters before starting the configuration activities:
OSA-ICC server: Name, Host IP address, TCP port number, Gateway IP address, the netmask, the network type, and the MTU size
OSA-ICC session definitions: Channel subsystem, the MIF (LPAR) ID, Device number, LU-name, clients’ IP address, clients’ DHDTO/RSP/RTO
 
Note: Consider defining multiple sessions per LPAR to allow access for a number of users at the same time.
For an upgrade of an IBM Z server to a z14 ZR1, these definitions can be exported from the source machine by using on-board HMC facilities and imported back again after the upgrade is complete.
For more information about the definitions, see Chapter 7, “Defining console communication” on page 143. For more information about implementation, see OSA-Express Integrated Console Controller Implementation Guide, SG24-6364.
2.7.5 Support Element settings
The SEs that are supplied with the z14 ZR1 are two appliances based on 1U x86 servers. Both units are installed at the top of the A frame. One is the primary SE and the other is the alternate SE.
Generally, the SE settings are considered part of the physical installation of the z14 ZR1 server and are not presented in this book.
 
For a new z14 ZR1 server, a new range of TCP/IP addresses must be provided by the customer to the system services representative (SSR) who performs the physical installation. As an extra measure of security, provisioning a separate LAN segment for the management functions is preferred. During an upgrade from an older IBM Z server to a z14 ZR1, the current settings on the SEs should be backed up for migration purposes.
In addition to the standard SE configuration, other parameters should be backed up, such as the API Settings. These parameters can be accessed through the Customize API Settings task on the SE.
2.7.6 Setting up Server Time Protocol
STP provides the means by which the time of day (TOD) clocks in various systems can be synchronized by using messages that are transported over coupling links. STP operates along with the TOD-clock steering facility, which provides a new timing mode, timing states, external interrupts, and machine check conditions.
 
STP connectivity for z14 ZR1 and CTN roles: The z14 ZR1 server does not support coupling connectivity by using the InfiniBand feature. As such, the z14 ZR1 CPC can connect only to a z13/z13s or to another z14 M0x/z14 ZR1 CPC for transmitting coupling or timing (STP) data. In a CTN that also contains zEC12/zBC12 servers, z14 ZR1 cannot play a role in the CTN (PTS/BTS/Arbiter) for availability reasons.
The HMC provides the user interface to manage an STP-only Coordinated Timing Network (CTN).
Consider the following points when setting up an HMC for STP:
A CTN ID must be unique for all IBM Z servers that will be part of the CTN.
To synchronize IBM Z servers to an External Time Source (ETS), Network Time Protocol (NTP) server information (and network connectivity that uses the NTP/NTPS protocol with optional pulse per second [PPS]) must be provided.
The customer must have the time zone offset, Daylight Saving Time offset, and leap second offset available.
Optionally, the HMC can be configured as an NTP server.
For the IBM Z servers that are part of a CTN, STP roles must be planned (Preferred, Backup, and Current Time Servers and Arbiter).
As part of a migration, changing the Current Time Server must be done before migration to the new platform (z14 ZR1).
 
Note: The z14 ZR1 supports STP stratum level 4. This feature avoids the added complexity and expense of system reconfiguration. This change must be installed on all systems that might become exposed to this situation. Stratum level 4 should be used only during a migration, and for a short period.
For more information about planning, implementing, and managing an STP environment, see the following publications:
Server Time Protocol Planning Guide, SG24-7280
Server Time Protocol Recovery Guide, SG24-7380
2.8 Activities centered on the IODF
This section describes the information (I/O configuration) in the IODF.
2.8.1 Logical channel subsystems
An IBM Z processor manages I/O resources (including logical partitions, channel paths, control units, and I/O devices) by housing them in multiple logical channel subsystems. Each logical channel subsystem (LCSS) can have up to 256 channel paths. The z14 ZR1 supports up to three LCSSs.
A spanned channel path is one that can be used by partitions in more than one logical channel subsystem. You must use the same CHPID value across all logical channel subsystems that share a spanned channel. However, logical channel subsystems that do not share a spanned channel can use that CHPID for other channels.
For more information, see z/OS Hardware Configuration Definition Planning, GA32-0907.
Consider the use of multiple logical channel subsystems during the planning phase. By using multiple logical channel subsystems, you can logically partition your physical channel resources to accommodate large-scale enterprise workload connectivity and high-bandwidth demands.
Each LCSS can have up to 256 CHPIDs. On the z14 ZR1, you can define up to three LCSSs. Each LCSS can support up to 15 logical partitions (LPARs) except for LCSS 2, which can support up to 10 LPARs for a total of 40 LPARs per z14 ZR1 server.
Also, LCSSs provide for multiple subchannel sets for expanding the number of I/O devices that are managed in each CSS. The z14 ZR1 supports up to three subchannel sets per LCSS.
Not all device types are eligible for nonzero subchannel sets. Subchannel set 0 (SS0) can be used for any type of device. More subchannel sets (for example: subchannel set 1 [SS1]) can be used for certain classes of devices only, such as parallel access volume alias devices.
For more information, see IBM z14 Model ZR1 Technical Guide, SG24-8651. Use multiple subchannel sets to move devices of eligible device types to extra subchannel sets, then define more physical devices to SS0.
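As an illustration of these concepts, the following hedged IOCP fragment (the LPAR names, CSS layout, and device numbers are placeholders) defines two logical channel subsystems and places parallel access volume alias devices into subchannel set 1 by using the SCHSET keyword, while the base devices remain in subchannel set 0:

  RESOURCE PARTITION=((CSS(0),(PROD1,1),(PROD2,2)),(CSS(1),(TEST1,1)))
  IODEVICE ADDRESS=(9000,032),CUNUMBR=(5000),UNIT=3390B,STADET=Y
  IODEVICE ADDRESS=(9100,032),CUNUMBR=(5000),UNIT=3390A,SCHSET=1

Devices that are defined without the SCHSET keyword default to subchannel set 0.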
2.8.2 Defining partitions
The IBM Processor Resource/System Manager (PR/SM) feature allows a single CPC to run multiple operating systems in LPAR mode. Each operating system has its own logical partition, which is a separate set of system resources that includes the following items:
A portion of storage (memory).
One or more central and specialty processors. The processors can be dedicated or shared.
Only LPAR mode (not basic mode) is supported on IBM Z servers.
Profile data can be exported on the older server and imported on the z14 ZR1. If the LPAR data was imported from an older server, consider the LPAR sizing before the LPAR migration to the z14 ZR1. For more information, see the IBM Resource Link (log in required).
For more information about how to define LPARs in IODF, see Chapter 3, “Preparing for a new z14 ZR1” on page 31.
Planning considerations for Virtual Flash Memory
IBM Virtual Flash Memory (VFM - Feature Code 0614) is the replacement for the Flash Express features (FC 0402 and FC 0403).
IBM VFM includes the following minimum software requirements:
z/OS V2.3.
z/OS V2.2.
z/OS V2.1.
z/OS V1.13 with PTFs, the z/OS V1.13 RSM Enablement Offering web deliverable installed, and an extended support contract for IBM Software Support Services. The web deliverable is available at the z/OS downloads page.
VFM (FC 0614) is available in 512 GB increments, each feature providing for 512 GB of memory. Up to four VFM features can be ordered, which results in a total of 2 TB of virtual flash memory. The plan ahead memory option must consider VFM requirements.
With the introduction of VFM, the existing operating system interface for handling storage-class memory (SCM) is not changed. Operating systems handle VFM in the same way as Flash Express. The allocation of VFM storage is done during LPAR activation because the LPAR hypervisor manages the partition memory.
The initial and maximum amounts of VFM are specified in the LPAR image profile. VFM can be added or deleted to or from operating systems by using SCM commands after the LPAR is activated. VFM allocation and definition for all partitions can be displayed on the Storage Information window on the HMC and by using SCM commands in z/OS.
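For example, a minimal operator sketch, assuming that the z/OS storage-class memory commands (DISPLAY M=SCM and CONFIG SCM) apply unchanged to VFM and that 512 GB of VFM is available to the partition, looks as follows; the amount that is shown is illustrative only:

  D M=SCM
  CF SCM(512G),ONLINE

The DISPLAY command shows the online, offline, and pending amounts of SCM (VFM) for the partition, and the CONFIG command brings an increment online.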
 
Virtual Flash Memory allocation: The VFM values for Initial and Maximum allocations cannot be dynamically changed. One or more partitions must be activated (or reactivated) for VFM allocation changes to take effect.
As such, it is recommended to assign the maximum amount installable (2 TB) for all LPARs that are candidates for the use of VFM and set initial allocation to zero for the LPARs that do not require immediate activation of VFM. By doing so, you ensure that you can later use any available VFM when required.
At partition activation time, over-commitment of VFM storage is supported. This setting allows more storage to be added to partitions subject to the amount that is not assigned to other partitions. For more information, see 10.3.3, “Configuring VFM” on page 247.
If the total amount of VFM that is allocated to all active partitions is equal to the LICCC value, but the sum of active partition maximums is larger than the installed amount, a customer might concurrently add VFM and increase allocations without reactivating partitions. This feature is shown in the examples that are described next.
Non-disruptive migration
An example of a non-disruptive migration includes the following features:
A z14 ZR1 CPC has three VFM features installed (512 GB each), LICCC = 1.5 TB.
LPAR A has 1.0 TB assigned, max = 1.5 TB.
LPAR B has 512 GB assigned, max = 1.0 TB.
LPAR B must be altered to have 1.0 TB assigned. This change is not possible within the constraints of the installed VFM.
Another 512 GB VFM feature is purchased and installed concurrently. Now up to 512 GB can be added concurrently to LPAR B without reactivating the LPAR.
Figure 2-2 shows the non-disruptive migration example.
Figure 2-2 Non-disruptive VFM migration example
Disruptive migration
An example of a disruptive migration includes the following features:
A z14 ZR1 CPC has two VFM features installed (512 GB per feature), LICCC = 1.0 TB.
LPAR A has 512 GB assigned, max = 1.0 TB.
LPAR B has 256 GB assigned, max = 1.0 TB.
LPAR A must be altered to have up to 1.5 TB. This change falls outside the range of maximum installed VFM.
Two extra 512 GB VFM features are purchased and activated concurrently (assuming plan ahead memory was ordered and memory is available). LPAR A must be reactivated with the new maximum VFM value of at least 1.5 TB and less than or equal to 2.0 TB.
Figure 2-3 shows the disruptive migration example.
Figure 2-3 Disruptive VFM migration example
For more information about how to configure VFM, see 10.3, “Virtual Flash Memory” on page 247.
2.8.3 Defining Storage I/O - FICON and FCP
FICON Express16S+, FICON Express16S, and FICON Express8S features provide connectivity to storage devices by using Fibre Connection (FICON) or Fibre Channel Protocol (FCP). FICON Express16S+ and FICON Express16S features support auto negotiation for the link data rate: 4 Gbps, 8 Gbps, and 16 Gbps. FICON Express8S supports auto negotiation for the link data rate at 2 Gbps, 4 Gbps, and 8 Gbps.
FICON Express16S+, FICON Express16S, and FICON Express8S support High Performance FICON for IBM z Systems (zHPF). zHPF is an extension to the FICON architecture that provides performance improvement for single-track and multi-track operations.
On a new build z14 ZR1 server, only the FICON Express16S+ feature can be ordered. The FICON Express16S and FICON Express8S features can be carried forward when upgrading from an older IBM Z server.
 
Note: On a FICON Express16S+ feature, both ports must be configured as channel type FC or FCP. A mixed configuration is not allowed.
For more information about how to configure a FICON Express16S+ feature, see Chapter 12, “Adding storage devices” on page 275.
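As a sketch of the HCD/IOCP side of these definitions (the CHPID, PCHID, control unit, and device values are placeholders), one channel path is defined as type FC for FICON-attached disk and another as type FCP for SCSI (LUN) access; keep in mind that both ports of a FICON Express16S+ feature must use the same channel type:

  CHPID    PATH=(CSS(0),51),SHARED,PCHID=120,TYPE=FC
  CHPID    PATH=(CSS(0),60),SHARED,PCHID=124,TYPE=FCP
  CNTLUNIT CUNUMBR=6000,PATH=((CSS(0),60)),UNIT=FCP
  IODEVICE ADDRESS=(6000,064),CUNUMBR=(6000),UNIT=FCP

The FC channel is then connected to disk control units and devices in the usual way (see the sketch in 2.6.1); the FCP devices are used by the operating system as subchannels for SCSI access.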
2.8.4 Defining the IBM zHyperLink Express
For more information about defining zHyperLink Express, see 10.6, “IBM zHyperlink Express” on page 262.
 
Important: IBM intends to deliver IMS exploitation of IBM z14 and DS8880 zHyperLink WRITE operations1. zHyperLink Express is a direct connect short distance IBM Z I/O adapter that is designed to work with a FICON or High-Performance FICON SAN infrastructure.

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
2.8.5 Defining Network
This section provides planning considerations for deploying the following network-related features:
Open Systems Adapter (OSA)
Shared Memory Communications (SMC-R and SMC-D):
 – SMC - RDMA over Converged Ethernet (RoCE) Express features (SMC-R)
 – SMC - Direct Memory Access over Internal Shared Memory (SMC-D)
HiperSockets
Open Systems Adapter
The OSA Express features are installed in an IBM z14 Model ZR1 server PCIe+ I/O drawer. The features are available as different types and support several networking protocols. Depending on the types of OSAs installed in the z14 ZR1, the CPC supports attachment with the following characteristics:
Copper-based Ethernet (10, 100 and 1000 Mbps)
Fiber-based Gigabit Ethernet (GbE), Short Wave (SX), and Long Wave (LX)
Fiber-based 10-Gigabit Ethernet Short Reach (SR) and Long Reach (LR)
Based on the intended use, the operating modes must be defined with channel type and device address. For more configuration information, see Chapter 6, “Configuring network features” on page 123 and the OSA-Express Implementation Guide, SG24-5948.
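As a sketch (the CHPID, PCHID, and device numbers are placeholders), an OSA-Express port that is used for QDIO TCP/IP traffic is defined with channel type OSD, a control unit of type OSA, a range of OSA data devices, and, optionally, an OSAD device for OSA/SF:

  CHPID    PATH=(CSS(0),04),SHARED,PCHID=1D0,TYPE=OSD
  CNTLUNIT CUNUMBR=0400,PATH=((CSS(0),04)),UNIT=OSA
  IODEVICE ADDRESS=(0400,015),CUNUMBR=(0400),UNIT=OSA
  IODEVICE ADDRESS=(040F,001),CUNUMBR=(0400),UNIT=OSAD

Other channel types (for example, OSE for non-QDIO or OSC for OSA-ICC) follow the same pattern with different CHPID TYPE values.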
Starting with Driver Level 22 (HMC 2.13.0) installed on z13, HMC was enhanced to take advantage of the Open Systems Adapter/Support Facility (OSA/SF) function for the OSA-Express6S, OSA-Express5S, and OSA-Express4S features. OSA/SF on the HMC or the OSA/SF in the operating system component can be used for the OSA-Express4S features. For the OSA-Express6S and OSA-Express5S features, OSA/SF on the HMC is required. The OSA/SF is used primarily for the following purposes:
Manage all OSA ports.
Configure all OSA non-QDIO ports.
Configure local MAC addresses.
Display registered IPv4 addresses (in use and not in use). This function is supported on the IBM Z platform for QDIO ports.
Display registered IPv4 or IPv6 Virtual MAC and VLAN ID associated with all OSA Ethernet features configured as QDIO Layer 2.
Provide status information about an OSA port and its shared or exclusive use state.
For more information about the use of OSA/SF on the HMC, see 6.3, “Customizing OSA-Express using OSA Advanced facilities” on page 127.
 
OSA-Express6S 1000BASE-T adapters1: OSA-Express6S 1000BASE-T adapters (FC 0426) will be the last generation of OSA 1000BASE-T adapters to support connections operating at 100 Mbps link speed. Future OSA-Express 1000BASE-T adapter generations will support operation only at 1000 Mbps (1 Gbps) link speed.

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
Shared Memory Communications - RDMA
The 10GbE RoCE Express (FC 0411) and 10GbE RoCE Express2 (FC 0412) features are designed to help reduce CPU consumption for applications that use the TCP/IP stack without requiring application changes. The use of the RoCE Express features also helps to reduce network latency by using the SMC-R protocol in z/OS V2.1 or later. For more information, see RFC 7609. SMC-R is transparent to applications and can be used for LPAR-to-LPAR communications on a single CPC or for server-to-server communications across multiple IBM Z CPCs.
Deployment of the RoCE Express features is supported in a point-to-point configuration or switched configurations. When planning to deploy RoCE Express features in a switched configuration, the switches must meet the following requirements:
Global Pause function frame (as described in the IEEE 802.3x standard) should be enabled
Priority Flow Control (PFC) disabled
No firewalls, no routing
IBM provides the SMC Applicability Tool (SMCAT) that helps determine the potential gains of using SMC-R in an environment (see 2.6.5, “Shared Memory Communications Applicability Tool” on page 13).
With z14 ZR1, the new 10GbE RoCE Express2 feature is available. This feature provides increased virtualization (sharing) capabilities. For more information, see IBM z14 Model ZR1 Technical Guide, SG24-8651.
 
RoCE Express features port configuration: Consider the following points:
For the 10GbE RoCE Express2 feature (FC 0412), the port number is now configured with the FID number in HCD (or IOCDS), and the port number must be configured explicitly (no default exists).
The port number for the 10GbE RoCE Express feature (FC 0411) is configured in the z/OS TCP/IP profile and does not change.
When defining a FID in the TCP/IP profile for 10GbE RoCE Express2 (FC 0412), the port number is no longer applicable.
When preparing to deploy the RoCE Express features, consider the following items:
The RoCE Express features are “Native” PCIe features; therefore, the following configuration items must be provided (a sample definition follows this list):
 – Function ID
 – Type
 – PCHID
 – Virtual Function ID (VF)
 – Port number
Determine which LPARs are to be shared by one 10GbE RoCE Express port.
Assign the VFs between the sharing LPARs.
For more configuration information, see 15.2.3, “Defining a RoCE-2 PCIe function” on page 375.
For 10GbE RoCE Express2 feature management information, see 10.4.4, “SMC-R Management” on page 259.
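The following hedged sketch pairs a FUNCTION definition (as generated from HCD or coded in IOCP) with the matching z/OS TCP/IP profile statement for SMC-R. The FID, VF, PCHID, port, PNet ID, and LPAR names are assumptions for illustration, and the exact TYPE keyword value for the RoCE Express2 feature should be verified against the IOCP level for your driver (see 15.2.3):

  FUNCTION FID=018,VF=1,PCHID=15C,PART=((LPAR01),(=)),PNETID=NETA,TYPE=ROC2,PORT=1

  GLOBALCONFIG SMCR PFID 0018 PORTNUM 1

In the TCP/IP profile, the PFID value corresponds to the FID that is defined in HCD, and the PNet ID ties the RoCE interface to the OSA interface that carries the same network.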
Considerations for native PCIe feature plugging and Resource Groups
The native PCIe feature support is provided by Resource Group (RG) code that runs on the integrated firmware processor (IFP). For resilience, four independent RGs that share the IFP are always present on the system. For high availability purposes, always use at least two PCIe features that are located in different RGs, as shown in Figure 2-4.
Figure 2-4 Relationship among PCIe+ I/O drawer slots, domains, and RGs in the z14 ZR1
Shared Memory Communications - Direct Memory Access (SMC-D)
With the z13 (Driver 27) and z13s servers, IBM introduced SMC-D. SMC-D uses the Internal Shared Memory (ISM) virtual PCIe adapter to provide direct memory access communications between LPARs inside the same IBM Z CPC.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes. SMC-D completes the overall Shared Memory Communications solution, which provides synergy with SMC-R. Both protocols use shared memory architectural concepts, which eliminates TCP/IP processing in the data path, yet preserves TCP/IP Qualities of Service for connection management purposes.
From a planning standpoint, SMC-D is similar to SMC-R; therefore, the same planning considerations apply. The objective is to provide consistent operations and management tasks for SMC-D and SMC-R. SMC-D uses a new virtual PCI adapter that is called Internal Shared Memory (ISM). The ISM Interfaces are associated with IP interfaces; for example, HiperSockets or OSA. ISM interfaces do not exist without an IP interface.
ISM interfaces are not defined in software. Instead, ISM interfaces are dynamically defined and created, and automatically started and stopped. You do not need to operate (Start or Stop) the ISM interfaces. Unlike RoCE, ISM FIDs (PFIDs) are not defined in software. Instead, they are auto-discovered based on their PNet ID.
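As a sketch (the FID, VCHID, PNet ID, and LPAR names are illustrative assumptions; see 15.2.2 for the exact definition steps), an ISM virtual PCIe function is defined in HCD/IOCP with a PNet ID that matches the associated OSA or HiperSockets interface, and SMC-D is then enabled in the z/OS TCP/IP profile without naming any PFIDs, because the ISM FIDs are auto-discovered through the PNet ID:

  FUNCTION FID=061,VF=1,VCHID=7C1,PART=((LPAR01),(=)),PNETID=NETA,TYPE=ISM

  GLOBALCONFIG SMCD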
Before implementing SMC-R or SMC-D, check your environment for the following items:
Run the SMCAT to evaluate applicability and potential value. For more information about the SMCAT, see the IBM z/OS SMC Applicability Test (SMC-AT) document.
Review and adjust as needed the available real memory and fixed memory usage limits (z/OS and CS). SMC requires fixed memory. You might need to review the limits and provision extra real memory for z/OS.
Review IP topology, VLAN usage considerations, and IPSec.
Review changes to messages, monitoring information, and diagnostic tools. Many updates are available for the following items:
 – Messages (IBM VTAM® and TCP stack)
 – Netstat (status, monitoring, and display information)
 – CS diagnostic tools (VIT, Packet trace, CTRACE, and IPCS formatted memory dumps)
For more information about SMC-R planning and security considerations, see the SMC-R tab on the Shared Memory Communications Reference Information page.
For more information about SMC-D planning and security considerations, see the SMC-D tab on the Shared Memory Communications Reference Information page.
For more information about how to define SMC-D, see 15.2.2, “Defining an ISM PCIe function” on page 372.
For an overview of how to manage an SMC-D connection, see 10.5.4, “SMC-D management” on page 261.
HiperSockets
HiperSockets provides the fastest TCP/IP communications between z/OS, z/VM, IBM z/VSE®, and Linux logical partitions within a z14 ZR1 CPC and acts like an internal “virtual” local area network. This HiperSockets implementation is achieved by using the Licensed Internal Code (LIC) and supporting device drivers in the operating systems. HiperSockets establishes a network with higher availability, security, simplicity, performance, and cost effectiveness than can be achieved by using an external IP network.
The HiperSockets function is based on the OSA-Express queued direct input/output (QDIO) protocol and therefore, HiperSockets is called internal QDIO (iQDIO). The LIC emulates the link control layer of an OSA-Express QDIO interface, and uses no physical cabling or external networking connections. Data access is performed at memory speeds, which bypasses external network delays and provides users high-speed logical LANs with minimal system and network overhead.
HiperSockets can be defined as Multiple Image Facility (MIF)-shared in a CSS and as spanned channels across multiple CSSs. A HiperSockets CHPID can be seen as an internal LAN to the server. The level of sharing is determined by the logical partitions you want to grant access to that LAN.
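A minimal definition sketch for a HiperSockets LAN (the CHPID, VCHID, and device numbers are placeholders) spans two channel subsystems and uses channel type IQD; on z13 and later servers, a virtual channel ID (VCHID) is also specified for IQD CHPIDs:

  CHPID    PATH=(CSS(0,1),F4),SHARED,TYPE=IQD,VCHID=7C0
  CNTLUNIT CUNUMBR=F400,PATH=((CSS(0),F4),(CSS(1),F4)),UNIT=IQD
  IODEVICE ADDRESS=(F400,010),CUNUMBR=(F400),UNIT=IQD

Each TCP/IP stack that connects to this internal LAN then uses a group of these IQD devices for its HiperSockets interface.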
HiperSockets is supported by the following operating systems:
All in-service z/OS releases
All in-service z/VM releases
All in-service z/VSE releases
Linux on Z
On a z14 ZR1, HiperSockets supports the following functions:
HiperSockets Broadcast
Supported across HiperSockets on Internet Protocol Version 4 (IPv4) for applications. Applications that use the broadcast function can propagate the broadcast frames to all TCP/IP applications that use HiperSockets. This support is applicable in Linux, z/OS, and z/VM environments.
VLAN support
Virtual local area networks (VLANs) are supported by Linux on z Systems and z/OS V1R8 or later for HiperSockets. VLANs can reduce overhead by allowing networks to be organized by traffic patterns rather than physical location. This enhancement allows traffic flow on a VLAN connection over HiperSockets and between HiperSockets and OSA-Express Ethernet features.
IPv6 support on HiperSockets
HiperSockets Network Concentrator
Traffic between HiperSockets and OSA-Express can be transparently bridged by using the HiperSockets Network Concentrator. This configuration eliminates intervening network routing overhead, which results in increasing performance and a simplified network configuration. This improvement is achieved by configuring a connector Linux system that has HiperSockets and OSA-Express connections defined to it.
HiperSockets Layer 2 support
HiperSockets supports two transport modes on the z14 ZR1: Layer 2 (Link Layer) and Layer 3 (Network and IP Layer).
As with Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers. These configurations enable high-performance and highly available Link Layer switches between the HiperSockets network and an external Ethernet.
HiperSockets multiple write facility
HiperSockets performance was increased by allowing streaming of bulk data over a HiperSockets link between logical partitions. Multiple writes with fewer I/O interrupts reduce processor usage on both the sending and receiving logical partitions. This function is supported in z/OS.
HiperSockets Completion Queue
The HiperSockets Completion Queue function is designed to allow HiperSockets to transfer data synchronously if possible, and asynchronously if necessary. This function combines ultra-low latency with more tolerance for traffic peaks.
With the asynchronous support, during high volume situations, data can be temporarily held until the receiver has buffers available in its inbound queue. This function provides end-to-end performance improvement for LPAR to LPAR communication.
HiperSockets Virtual Switch Bridge Support
The z/VM virtual switch is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. z/VM 6.2 or later, TCP/IP, and Performance Toolkit APARs are required for this support.
This bridge allows a single HiperSockets guest virtual machine network connection to also directly communicate with the following devices:
 – Other guest virtual machines on the virtual switch
 – External network hosts through the virtual switch OSA UPLINK port
zIIP-Assisted HiperSockets for large messages
In z/OS, HiperSockets was enhanced for zIIP exploitation. Specifically, the z/OS Communications Server allows the HiperSockets Multiple Write Facility processing for large outbound messages that originate from z/OS to be run on a zIIP.
z/OS application workloads that are based on XML, HTTP, SOAP, Java, and traditional file transfer can benefit from zIIP enablement by lowering general-purpose processor usage.
When the workload is eligible, the HiperSockets device driver layer processing (write command) is redirected to a zIIP, which unblocks the sending application.
For more information about the technical details of each function, see IBM Z Connectivity Handbook, SG24-5444.
2.8.6 Defining the console (OSA-ICC)
The OSA-ICC function of the OSA-Express 1000Base-T feature supports TN3270 enhancements (TN3270E) and non-SNA DFT 3270 emulation. Planning for an IBM Z z14 Model ZR1 OSA-ICC implementation requires input from several disciplines within a customer organization.
The following aspects of system configuration provide input for configuring OSA-ICC:
IBM Z server I/O subsystem configuration
Operating system configuration
OSA-Express feature configuration
Ethernet LAN configuration
Client TN3270E configuration
In HCD, the OSA-Express feature must be defined to operate as an Integrated Console Controller (ICC). The configuration includes the following requirements:
IBM Z server I/O subsystem configuration: The same basic rules for adding an OSA-ICC adapter apply as to any other new device.
Operating system configuration: To have a Nucleus Initialization Program (NIP) console available, ensure that the correct device number is defined in the HCD Operating system Work with consoles dialog.
During an upgrade from an IBM Z server to a z14 ZR1, the same definitions can be used for the new machine as on the source configuration.
For more implementation information, see OSA-Express Integrated Console Controller Implementation Guide, SG24-6364.
The following planning topics must be considered:
Reserve at least one OSA-Express 1000Base-T port to be defined as channel type OSC
Define 3270-X devices in HCD to act as system consoles (a sample definition follows this list)
Use of the OSA Advanced Facilities task to configure the sessions
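A minimal HCD/IOCP sketch for the OSA-ICC definitions (the CHPID, PCHID, and device numbers are placeholders) is shown below; the console devices are defined as device type 3270-X in HCD, and the TN3270E session details themselves are configured afterward through the OSA Advanced Facilities task on the HMC:

  CHPID    PATH=(CSS(0),08),SHARED,PCHID=1D4,TYPE=OSC
  CNTLUNIT CUNUMBR=0800,PATH=((CSS(0),08)),UNIT=OSC
  IODEVICE ADDRESS=(0800,008),CUNUMBR=(0800),UNIT=3270,MODEL=X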
For more information about how to configure non-SNA consoles, see Chapter 7, “Defining console communication” on page 143.
2.8.7 Defining coupling and timing only links
Support for Parallel Sysplex includes the Coupling Facility Control Code and coupling links. Coupling connectivity in support of Parallel Sysplex environments is provided on the z14 ZR1 by the following features:
Coupling Express Long Reach (CE LR): The feature (FC 0433) has two ports and provides coupling link connectivity for a distance of up to 10 km (6.2 miles).
Integrated Coupling Adapter (ICA SR), which is FC 0172.
Internal Coupling (IC) channels, which operate at memory speed.
For more information, see IBM Z Connectivity Handbook, SG24-5444.
All coupling link types can be used to carry STP messages.
 
Note: The CE LR is a two-port card that occupies one PCIe+ I/O drawer slot. Therefore, an IBM z14 Model ZR1 server that is configured as a stand-alone Coupling Facility (CF) must include at least one PCIe+ I/O drawer.
Planning considerations
The CF link connections between CPCs must be configured in HCD to enable the exchange of CF link signals. HCD generates the Control Unit (CU) and device definitions automatically if the CPCs are known within the same IODF file and the AIDs or PCHIDs are not reserved by other definitions.
 
Coupling connectivity for z14 ZR1: The z14 ZR1 CPC does not support coupling connectivity using InfiniBand features. As such, it can connect only for transmitting coupling or timing (STP) data to a z13/z13s or to another z14 M0x/z14 ZR1 CPC.
In a Parallel Sysplex that also contains zEC12/zBC12 servers, the z14 ZR1 or the zEC12/zBC12 cannot be used for running the Coupling Facility LPAR. The CF LPAR must be run on a CPC that includes coupling connectivity to both the z14 ZR1 and the zEC12/zBC12.
Depending on the hardware that is configured on the CPC, a different channel type must be defined.
Depending on the type of the CF link hardware, CF links operate up to a set distance. Physical placement of the CPCs or CFs must be considered to avoid exceeding the maximum distance that is supported by the CF link. For the Coupling Express Long Reach links, dense wavelength division multiplexing (DWDM) technology can be used to extend the maximum length of the CF links.
For more information about qualified devices, see the IBM Resource Link.
STP signals can be exchanged between two CPCs without any CF LPARs involved. If physical coupling links are established between two CPCs, HCD allows the configuration of STP links (timing-only links).
For more information, see z/OS HCD User’s Guide, SC34-2669, and Chapter 8, “Preparing for Sysplex and configuring Server Time Protocol” on page 159.
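As an illustration (the CHPID, AID, and port values are placeholders), an ICA SR coupling link is defined with channel type CS5 against an adapter ID (AID) and a port rather than a PCHID. When both CPCs are defined in the same IODF and the links are connected in HCD, the coupling control unit and device definitions (type CFP) are generated automatically; a timing-only link is flagged as such in HCD and carries only STP signals:

  CHPID PATH=(CSS(0),80),SHARED,TYPE=CS5,AID=00,PORT=1
  CHPID PATH=(CSS(0),81),SHARED,TYPE=CS5,AID=00,PORT=2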
 
IBM z14 Model M0x (machine type 3906) will be the last z Systems and IBM Z server to support HCA3-O and HCA3-O LR adapters1: z14 M0x will be the last z Systems and IBM Z server to support the HCA3-O fanout for 12x IFB (#0171) and the HCA3-O LR fanout for 1x IFB (#0170). As announced previously, z13s is the last mid-range z Systems server to support these adapters. Enterprises should begin migrating from HCA3-O and HCA3-O LR adapters to ICA SR and Coupling Express Long Reach (CE LR) adapters on z14, z13, and z13s.
For high-speed short-range coupling connectivity, enterprises should migrate to the Integrated Coupling Adapter (ICA SR). For long-range coupling connectivity, enterprises should migrate to the new Coupling Express LR coupling adapter. For long-range coupling connectivity that requires a DWDM, enterprises must determine their preferred DWDM vendor's plan to qualify the planned replacement long-range coupling link.

1 IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
2.8.8 Planning considerations for zEDC
This section provides planning considerations for installing the zEDC Express feature in a z14 ZR1.
The zEDC Express feature is a hardware feature that allows for data compression and decompression. It is a PCIe native feature that allows for high-performance, low-latency compression that reduces processor use. The hardware device is a standard computer expansion card that is installed in the PCIe+ I/O drawer.
Be sure to install a minimum of two zEDC Express features, one per Resource Group (RG). For the best data throughput and availability, install two features per RG, for a total of four features. For the full zEDC benefit, zEDC should be active on all systems that might access or share compressed format data sets. This configuration eliminates instances where software inflation is used when zEDC is not available.
For more information about the zEDC Express feature, see Reduce Storage Occupancy and Increase Operations Efficiency with IBM zEnterprise Data Compression, SG24-8259.
This section provides a short summary of planning considerations for the zEDC Express feature. The following tasks must be completed to use zEDC features:
1. Planning the installation:
 – Consider the number and sharing of one or more zEDC Express features.
 – Update the IFAPRDxx PARMLIB member in z/OS 2.1 (a sample entry follows this list).
 – Plan for IPLs before activating the priced software feature for the first time.
2. z/OS: Verifying the prerequisites: Look up the IBM.Function.zEDC fixcat for proper PTFs.
3. z/OS: Enabling the Priced Software Feature.
4. HCD: Defining the PCIe features:
5. Managing the zEDC Express PCIe features:
For more information, see 10.2.4, “Handling zEDC” on page 244.
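The following IFAPRDxx PARMLIB entry is a sketch of enabling the zEDC priced feature; verify the exact product values against the documentation for your z/OS release before use:

  PRODUCT OWNER('IBM CORP')
          NAME('Z/OS')
          ID(5650-ZOS)
          VERSION(*) RELEASE(*) MOD(*)
          FEATURENAME('ZEDC')
          STATE(ENABLED)

After the entry is active and the PTFs that are identified by the IBM.Function.zEDC fix category are applied, the zEDC Express functions that are defined in HCD can be used by z/OS.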
 