Strengths of the z14 ZR1
Computer systems that stay relevant for more than 50 years demonstrate a forward-looking system architecture. IBM Z platforms, built on the IBM z/Architecture, are a prime example of this idea.
Each new Z platform introduces advanced capabilities that move it ahead of its predecessors in efficiency, flexibility, security, reliability, and much more. Whenever new capabilities are implemented, the z/Architecture is extended rather than replaced, which helps sustain the compatibility, integrity, and longevity of the Z platform. Backward compatibility and investment protection for existing workloads and solutions are therefore key requirements for every new capability that is introduced in the z/Architecture.
To handle new and different workloads, the operating system must accommodate a broad scope of software and application options, and the hardware and firmware components of the system must provide a viable way to integrate new functions into the architecture's capabilities. The Z platforms and operating systems always conform to the z/Architecture to ensure support of current and future workloads and solutions.
This chapter highlights several z14 ZR1 capabilities and strengths, and explains how they can be of value for businesses and organizations. Throughout the chapter, reference is made to the following IBM Redbooks publications:
IBM z14 Model ZR1 Technical Guide, SG24-8651
IBM Z Connectivity Handbook, SG24-5444
4.1 Technology improvements
The z14 ZR1 includes technology improvements that are intended to make systems integration more scalable, flexible, manageable, and secure.
The following sections provide more information about the technology improvements for the z14 ZR1.
4.1.1 Processor design highlights
The z/Architecture that underlies the z14 ZR1 offers a rich complex instruction set computer (CISC) instruction set that supports multiple arithmetic formats.
 
z/Architecture addressing modes: The z/Architecture simultaneously supports 24-bit, 31-bit, and 64-bit addressing modes. This feature provides backward compatibility and, with it, investment protection.
Compared to its predecessor system, the z14 ZR1 processor design includes the following improvements and architectural extensions:
Better performance and throughput:
 – Faster processor units (4.5 GHz compared to 4.3 GHz in the z13s).
 – More capacity (up to 30 characterizable processor units versus 20 on the z13s).
 – Larger cache (and shorter path to cache) means faster uniprocessor performance.
 – Innovative core-cache design (L1 and L2), processor chip-cache design (L3), and cluster design (L4). The objective is to keep more data closer to the processor by increasing the cache sizes and decreasing the latency to access the next levels of cache.
Reoptimized pipeline depth for power and performance:
 – Improved instruction delivery
 – Faster branch wakeup
 – Reduced execution latency
 – Improved Operand Store Compare (OSC) avoidance on Dispatch Store Table (DST)
 – Optimized second-generation SMT2
New translation design:
 – Four concurrent translations (from one in the z13s)
 – Reduced latency
 – Lookup that is integrated into L2 access pipe
 – Translation Lookaside Buffer enhancements:
 • 2x CRSTE growth
 • 1.5x PTE growth
 • New 64-entry array for 2 GB pages
Dedicated co-processor for each processor unit (PU):
 – The Central Processor Assist for Cryptographic Function (CPACF) in the z14 ZR1 is optimized to provide up to 6x faster encryption functions than the z13s. CPACF on the z14 ZR1 supports the new SHA-3 standard and a True Random Number Generator, and provides a 4x Advanced Encryption Standard (AES) speedup.
 – The on-chip compression coprocessor (CMPSC) on the z14 ZR1 offers up to a 2x expansion speedup and supports entropy encoding (Huffman coding) and order-preserving compression for index and sort-file compression.
Transactional Execution Facility
The Transactional Execution Facility, which is known in the industry as hardware transactional memory, allows a group of instructions to run atomically: either all results of the instructions in the group are committed, or none are, in a truly transactional manner. The execution is optimistic.
The instructions are issued, but previous state values are saved in transactional memory. If the transaction succeeds, the saved values are discarded. If it fails, they are used to restore the original values. Software can test the success of execution and redrive the code, if needed, by using the same path or a different path.
The Transactional Execution Facility provides several instructions, including instructions to declare the start and end of a transaction and to cancel the transaction. This capability can provide performance benefits and scalability to workloads by helping to avoid most of the locks on data. This ability is especially important for heavily threaded applications, such as Java.
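The facility can be used from high-level languages. The following minimal C sketch (for Linux on Z with a recent GCC, compiled with -mhtm or -march=z14) shows the general pattern of starting a transaction, committing it, and retrying after transient aborts; the shared counter and the retry policy are illustrative assumptions, not taken from this publication.

/* Minimal sketch of the Transactional Execution Facility, using the
   GCC hardware transactional memory builtins for z/Architecture. */
#include <htmintrin.h>
#include <stdio.h>

static long shared_counter;

static int increment_transactionally(void)
{
    for (int tries = 0; tries < 5; tries++) {
        int cc = __builtin_tbegin((void *)0);  /* TBEGIN, no diagnostic block */
        if (cc == _HTM_TBEGIN_STARTED) {
            shared_counter++;                  /* speculative store */
            __builtin_tend();                  /* TEND: commit all or nothing */
            return 0;
        }
        if (cc == _HTM_TBEGIN_PERSISTENT)      /* retrying will not help */
            break;
    }
    return -1;  /* caller takes a software fallback path, such as a lock */
}

int main(void)
{
    if (increment_transactionally() != 0)
        puts("transaction aborted; a lock-based fallback would run here");
    printf("counter = %ld\n", shared_counter);
    return 0;
}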
Guarded Storage Facility
Also known as pause-less garbage collection, the Guarded Storage Facility is a new architectural facility that was introduced with the z14 ZR1 to enable enterprise-scale Java applications to run without periodic pauses for garbage collection on larger heaps. This facility improves Java performance by reducing program pauses during Java garbage collection.
Instruction Execution Protection
Instruction Execution Protection (IEP) is a hardware function on the z14 ZR1 that enables software, such as Language Environment, to mark certain memory regions (for example, a heap or stack) as non-executable to improve the security of programs that are running on Z against stack-overflow or similar attacks.
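IEP provides the hardware basis for the no-execute semantics that operating systems expose. The following sketch is a generic Linux illustration of the concept (not the z/OS Language Environment interface): memory that holds only data is mapped without execute permission, so an attempt to branch into it raises a protection exception.

/* Generic Linux illustration: a data-only region mapped without
   PROT_EXEC; on z14 hardware the kernel can enforce this with IEP. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,  /* no PROT_EXEC */
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(buf, 0, len);  /* use as heap-like or stack-like data storage */
    /* Branching into buf is now blocked by the hardware. */
    munmap(buf, len);
    return 0;
}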
Simultaneous multithreading
Simultaneous multithreading (SMT) is built into the z14 ZR1 IFLs, zIIPs, and system assist processors (SAPs), which allows more than one thread to run simultaneously in the same core while sharing all of its resources. This function improves utilization of the cores and increases processing capacity.
Tuned to the growth in the core caches and TLB2, second-generation SMT on the z14 ZR1 improves thread balancing, supports multiple outstanding translations, optimizes hang-avoidance mechanisms, and delivers improved virtualization performance to benefit Linux. The z14 ZR1 provides economies of scale with next-generation multithreading (SMT) for Linux and zIIP-eligible workloads while adding SMT support for the I/O SAPs.
Hardware decimal floating point function
The hardware decimal floating point (HDFP) function is designed to speed up calculations and provide the precision that financial institutions and others demand. HDFP fully implements the IEEE 754r (now IEEE 754-2008) decimal floating point standard.
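Decimal formats avoid the rounding artifacts that binary floating point introduces for decimal quantities, such as currency. The following C sketch contrasts the two; it relies on the _Decimal64 type, a GCC extension based on the ISO decimal floating point specifications, which recent Z hardware executes with DFP instructions.

/* Decimal versus binary floating point: 0.10 + 0.20 is exact in
   decimal but not in binary.  Compile with GCC on Linux on Z. */
#include <stdio.h>

int main(void)
{
    _Decimal64 a = 0.10DD, b = 0.20DD;          /* DD: _Decimal64 literals */
    printf("decimal: 0.10 + 0.20 == 0.30 -> %s\n",
           (a + b == 0.30DD) ? "yes" : "no");   /* prints "yes" */

    double x = 0.10, y = 0.20;
    printf("binary:  0.10 + 0.20 == 0.30 -> %s\n",
           (x + y == 0.30) ? "yes" : "no");     /* prints "no" */
    return 0;
}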
Vector Packed Decimal Facility
The Vector Packed Decimal Facility allows packed decimal operations to be performed in registers rather than in memory by using new fast decimal instructions. Compilers and run times, such as Enterprise COBOL for z/OS V6.2, Enterprise PL/I for z/OS V5.2, z/OS V2.3 XL C/C++, the COBOL optimizer, Automatic Binary Optimizer for z/OS V1.3, and Java, are optimized to use this facility on the z14 ZR1.
Single instruction, multiple data
The z14 ZR1 includes a set of instructions called single instruction, multiple data (SIMD) that can improve the performance of complex mathematical models and analytics workloads. This improvement is accomplished through vector processing and complex instructions that can process a large volume of data with a single instruction.
SIMD is designed for parallel computing and can accelerate code that contains integer, string, character, and floating point data types. This capability enables better consolidation of analytics workloads and business transactions on the Z platform.
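The following C sketch shows the programming style that SIMD enables. It uses the generic vector extension of GCC and Clang, which the compiler can map to z/Architecture vector instructions when built with -march=z13 or later (explicit intrinsics are also available through vecintrin.h with -mzvector); the data values are illustrative.

/* One vector add processes four 32-bit integers at a time. */
#include <stdio.h>

typedef int v4si __attribute__((vector_size(16)));   /* 4 x 32-bit int */

int main(void)
{
    v4si a = {1, 2, 3, 4};
    v4si b = {10, 20, 30, 40};
    v4si c = a + b;             /* a single vector instruction on Z */

    for (int i = 0; i < 4; i++)
        printf("%d ", c[i]);    /* 11 22 33 44 */
    printf("\n");
    return 0;
}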
Runtime Instrumentation Facility
The Runtime Instrumentation Facility provides managed run times and just-in-time compilers with enhanced feedback about application behavior. This capability allows dynamic optimization of code generation as it is being run.
Large page support
For a long time, the size of pages and page frames remained at 4 KB. IBM Z platforms support large pages of 1 MB in addition to pages of 4 KB. This capability relates primarily to the use of large amounts of main storage. VFM supports large pages and can provide increased performance. Both page frame sizes can be used simultaneously.
Large pages enable the translation lookaside buffer (TLB) to better represent the working set and suffer fewer misses by allowing a single TLB entry to cover more address translations. Large pages are better represented in the TLB and are expected to perform better.
 
Note: Large pages can benefit long-running applications that are memory-access intensive and might not be the best fit for general use. Short-lived processes with small working sets see little to no improvement. Base the decision to use large pages on measurements of memory usage and page translation overhead for specific workloads.
Support for 2 GB large pages
z14 ZR1 uses 2 GB page frames to increase efficiency for DB2 buffer pools, Java heaps, and other large structures. The use of 2 GB pages increases TLB coverage without proportional growth in the size of the TLB. Consider the following points, which are worked through in the sketch after this list:
A 2 GB memory page is 2048x larger than a 1 MB large page, and 524,288x larger than an ordinary 4 KB base page.
A 2 GB page allows a single TLB entry to fulfill many more address translations than a large page or ordinary base page.
A 2 GB page provides users with much better TLB coverage, which improves the following aspects of performance:
 – Decreases the number of TLB misses that an application incurs
 – Spends less time converting virtual addresses into physical addresses
 – Uses less real storage to maintain DAT structures
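The following small C program works through the ratios that are cited in this list and, assuming the 64-entry array for 2 GB pages that is mentioned in 4.1.1, the memory that a fully populated array can cover.

/* Worked sizing sketch for the page-size ratios cited above. */
#include <stdio.h>

int main(void)
{
    unsigned long long base  = 4ULL << 10;   /* 4 KB ordinary page */
    unsigned long long large = 1ULL << 20;   /* 1 MB large page    */
    unsigned long long huge  = 2ULL << 30;   /* 2 GB large page    */

    printf("2 GB / 1 MB = %llux\n", huge / large);   /* 2048x   */
    printf("2 GB / 4 KB = %llux\n", huge / base);    /* 524288x */

    /* Assuming a 64-entry TLB array for 2 GB pages, one full array covers: */
    printf("coverage = %llu GB\n", 64ULL * (huge >> 30));   /* 128 GB */
    return 0;
}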
Central Processor Assist for Cryptographic Function
CPACF is a high-performance, low-latency co-processor that can use DES, TDES, AES-128, AES-256, SHA-1, SHA-2, and SHA-3 ciphers to perform symmetric key encryption and calculate message digests in hardware. It is well-suited for encrypting large amounts of data in real time because of its proximity to the processor unit. For the z14 ZR1, CPACF encryption modes are accelerated 4 - 6x over the z13s.
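Applications normally do not call CPACF directly; they inherit the acceleration through standard libraries. The sketch that follows is ordinary OpenSSL EVP code for AES-256-CBC (link with -lcrypto): on Linux on Z, the OpenSSL libcrypto backend detects and uses CPACF transparently, so no source change is needed. The key, IV, and data are placeholders.

/* Ordinary OpenSSL encryption; hardware accelerated by CPACF on Z. */
#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32] = {0};            /* AES-256 key (dummy) */
    unsigned char iv[16]  = {0};            /* CBC IV (dummy)      */
    unsigned char in[64]  = "data to protect";
    unsigned char out[80];                  /* room for padding    */
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &len, in, sizeof(in));
    total = len;
    EVP_EncryptFinal_ex(ctx, out + total, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes\n", total);
    return 0;
}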
Compression Coprocessor
Compression Coprocessor (CMPSC) is a high-performance coprocessor that uses compression algorithms to help reduce disk space and memory usage. Each processor unit features a dedicated CMPSC that connects to the main cache structure for better throughput of the compression dictionaries.
In the z14 ZR1, compression and expansion performance is improved with fewer CPU cycles. In addition, the improved compression ratio with Huffman coding yields more disk space and memory savings, even where compression is already in use. Also, order-preserving compression for search trees and sort files can be used for large parts of data and for DB2 indexes that previously were not practical to compress.
4.1.2 Memory
Memory is significantly greater in the new Z models. The z14 ZR1 can have up to 8 TB of usable memory installed, compared with the 4 TB maximum on the z13s.
In addition, the hardware system area (HSA) on the z14 ZR1 is expanded to 64 GB (from 40 GB on the z13s). The HSA has a fixed size and is not counted in the memory that the client orders.
The maximum memory size per logical partition (LPAR) also changed. For example, on the z14 ZR1, up to 8 TB of memory can now be defined to an LPAR in the image profile. Each operating system can allocate main storage up to the maximum memory amount that it supports.
Plan-ahead memory
If you anticipate increasing the installed memory someday, the initial system order can contain starting and potential extra memory sizes. The extra memory is referred to as plan-ahead memory, which has a specific memory pricing model to support it.
The starting memory size is activated when the system is installed, and the rest remains inactive. When more physical memory is required, it is fulfilled by activating the appropriate number of plan-ahead memory features. This activation is concurrent and might be nondisruptive to applications, depending on the level of operating system support. z/OS and z/VM support this function.
 
Note: Do not confuse plan-ahead and flexible1 memory support. Consider the following points:
Plan-ahead memory is for a permanent increase of installed memory.
Flexible memory provides a temporary replacement of a part of memory that becomes unavailable.

1 Flexible memory option is not supported on z14 ZR1 (which has a single CPC drawer).
IBM Virtual Flash Memory
The Virtual Flash Memory (VFM) feature is carved out of the main memory capacity. For z14 ZR1, up to four VFM features can be ordered, each of 512 GB. VFM replaces the Flash Express adapters that were available on the zBC12 and z13s.
VFM provides much simpler management and better performance by eliminating the I/O of the adapters that are in the PCIe drawers. VFM does not require any application changes when moving from IBM Flash Express.
VFM can help improve availability and the handling of paging workload spikes when running z/OS. VFM can also be used in coupling facility images to provide extended capacity and availability for workloads that use WebSphere MQ Shared Queues structures.
VFM can improve availability by reducing latency from paging delays that can occur during peak workload periods. It is also designed to help eliminate delays that can occur when diagnostic data is collected during failures.
4.2 Virtualization
Virtualization is a key strength of Z platforms. It is embedded in the architecture and built into the hardware, firmware, and operating systems. For decades, Z platforms were designed based on the concept of partitioning resources (such as CPU, memory, storage, and network resources) so that each set of resources can be used independently with its own operating environment.
Virtualization requires a hypervisor, which is the control code that manages resources that are required for multiple independent operating system images. Hypervisors can be implemented as software or hardware, and the z14 ZR1 has both.
The hardware hypervisor is IBM Processor Resource/System Manager (PR/SM). PR/SM is implemented in firmware as part of the base system, fully virtualizes the system resources, and runs without any other software.
A software hypervisor is implemented with the z/VM operating system or the KVM hypervisor, both of which use PR/SM functions.
The hypervisors are designed to enable simultaneous execution of multiple operating systems, which provides operating systems with virtual resources.
Multiple software hypervisors can exist on the same Z platform (see Figure 4-1).
Figure 4-1 Support for coexistence of different hypervisors (in PR/SM mode)
The various virtualization options in Z platforms allow you to build flexible virtualized environments to take advantage of open source software or upgrade to new cloud service offerings, such as infrastructure as a service (IaaS) and platform as a service (PaaS).
PR/SM
Unique to Z platforms, PR/SM is a Type-1 hypervisor that runs directly on bare metal, which allows you to create multiple LPARs on the same physical server. PR/SM is a highly stable, proven, and secure, firmware-encapsulated virtualization technology that allows multiple operating systems to run on the same physical platform. Each operating system runs in its own logical partition.
PR/SM logically partitions the platform across the various LPARs to share resources, such as processor units, memory, and I/O (for networks and storage), which allows for a high degree of virtualization.
Dynamic Partition Manager
Dynamic Partition Manager (DPM) is a management infrastructure mode in the z14 ZR1. It is intended to simplify virtualization management and is easy to use, especially for users who are less experienced with Z. It does not require you to learn complex syntax or command structures.
Software hypervisors, operating systems, secure service containers, and software appliances that can exist on the Z platform in DPM mode are shown in Figure 4-2 on page 66.
Figure 4-2 Support for coexistence of different software hypervisors (in DPM mode)
DPM provides simplified hardware and virtual infrastructure management, including partition lifecycle and integrated dynamic I/O and PCIe functions management for Linux that is running in an LPAR, under the KVM hypervisor, and under z/VM. By using DPM, an environment can be created, provisioned, and modified without disrupting running workloads. It also can be monitored for troubleshooting.
DPM provides the following capabilities through the Hardware Management Console (HMC):
Create and provision an environment, including new partitions, assignment of processors and memory, and configuration of I/O adapters.
Manage the environment, including the ability to modify system resources without disrupting workloads.
Monitor and troubleshoot the environment to identify system events that might lead to degradation.
Enhancements to DPM on z14 ZR1 simplify the installation of the Linux operating system, support more hardware cards, and enable base cloud provisioning through OpenStack. The following enhancements are included:
Support for auto-configuration of devices to simplify Linux operating system installation, where Linux distribution installers can use these functions
Secure FTP through the HMC for booting and installing an operating system by using FTP
Support for OSA-Express6S, FICON Express16S+, Crypto Express6S, and 10GbE RoCE Express2 features
 
Configuration note: The z14 ZR1 can be configured in DPM mode or in PR/SM mode, but cannot be configured in both modes at the same time. As of this writing, DPM supports FCP storage.
z/VM
z/VM is a Type-2 hypervisor that allows sharing the mainframe's physical resources, such as disk, memory, network adapters, and CPUs (called CPs and IFLs). These resources are managed by the z/VM hypervisor, which typically runs in an LPAR, and are shared among the virtual machines (VMs) that run on top of the hypervisor. Typically, the z/VM hypervisor is used to run Linux virtual servers, but other operating systems (such as z/OS) can also run on z/VM. z/VM is a proven and well-established virtualization platform that provides industry-leading capabilities to efficiently scale both horizontally and vertically.
KVM hypervisor
The KVM hypervisor available in recent Linux on Z distributions is a Type-2 hypervisor that provides simple, cost-effective server virtualization for Linux workloads that are running on the Z platform. It enables you to share real CPUs (called IFLs), memory, and I/O resources through platform virtualization and can coexist with z/VM virtualization environments, Linux on IBM Z, z/OS, z/VSE, and z/TPF.
The KVM hypervisor support information is provided by the Linux distribution partners. For more information, see the documentation for your distribution.
For more information about the use of KVM on Z, see the Linux on KVM page of IBM Knowledge Center.
4.2.1 Hardware virtualization
PR/SM was first implemented in the mainframe in the late 1980s. It allows you to define and manage LPARs. PR/SM virtualizes processor units, memory, and I/O features. Certain features exist only as virtualized implementations.
PR/SM technology on the Z platform received Common Criteria EAL5+ security certification. PR/SM is always active on the system and is enhanced to provide better performance and platform management benefits.
The LPAR definition includes several logical processor units (LPUs), memory, and I/O devices. PR/SM is designed to meet these requirements with low overhead, with logical partitions as the specific Target of Evaluation. This design was proven in many installations over several decades.
Up to 40 LPARs can be defined on the IBM z14 Model ZR1 and hundreds or even thousands of virtual servers can be run under z/VM or the KVM hypervisor.
Logical processors
Logical processors are defined and managed by PR/SM and are perceived by the operating systems as real processors. These processors are sorted into the following characterizations:
Central processors (CP) are standard processors for use with any supported operating system and user applications.
IBM System z Integrated Information Processor (zIIP) is used under z/OS for designated workloads, including the following workloads:
 – IBM Java virtual machine (JVM)
 – Various XML System Services
 – IPSec offload
 – Certain parts of IBM DB2 DRDA
 – DFSMS System Data Mover for z/OS Global Mirror
 – IBM HiperSockets for large messages
 – IBM GBS Scalable Architecture for Financial Reporting (SAFR) enterprise business intelligence reporting
IFL is exclusively used with Linux on IBM Z, and for running the z/VM and KVM hypervisors in support of Linux VMs (also called guests).
Internal Coupling Facility (ICF) is used for z/OS clustering. ICFs are dedicated to this function and exclusively run the Coupling Facility Control Code (CFCC).
In addition, the following pre-characterized processors are part of the base system configuration and are always present:
SAPs, which run I/O operations
The integrated firmware processor (IFP), which supports native PCIe features
Although these processors provide support for all LPARs, they are never part of an LPAR configuration.
PR/SM accepts requests for work on logical processors by dispatching logical processors on physical processors. Physical processors can be shared across LPARs, but can also be dedicated to an LPAR. However, the logical processors of an LPAR must be all shared or all dedicated.
The sum of the logical processors that are defined in all active LPARs in a Z system might be higher than the number of physical processor units. The maximum number of LPUs that can be defined in a single LPAR cannot exceed the total number of physical processor units that are available in the CPC. To achieve optimal ITR performance when sharing LPUs, keep the total number of online LPUs to a minimum, which reduces software and hardware overhead.
PR/SM ensures that the processor state is properly saved and restored (including all registers) when switching a physical processor from one logical processor to another. Data isolation, integrity, and coherence inside the system are always strictly enforced.
Logical processors can be dynamically added to and removed from LPARs. Operating system support is required to use this capability. z/OS, z/VM, and z/VSE each can dynamically define and change the number and type of reserved processor units in an LPAR profile. No pre-planning is required.
The new resources are immediately available to the operating systems and, for z/VM, to its guest images. Linux on IBM Z provides the Standby CPU activation and deactivation functions.
Memory
To ensure security and data integrity, memory cannot be concurrently shared by active LPARs. In fact, a strict isolation is maintained.
A logical partition can be defined with an initial and reserved amount of memory. At activation time, the initial amount is made available to the partition, and the reserved amount can later be added, partially or totally. Those two memory zones do not have to be contiguous in real memory, but the addressing area (for initial and reserved memory) is presented to the operating system that runs in the LPAR as contiguous.
By using the plan-ahead option, memory can be physically installed without being enabled. It can then be enabled when necessary. z/OS can use this support by nondisruptively acquiring and releasing memory from the reserved area.
z/VM can acquire memory nondisruptively and quickly make it available to guests. z/VM virtualizes this support to its guests, which can also increase their memory nondisruptively. Releasing memory is still a disruptive operation.
LPAR memory is said to be virtualized in the sense that, within each LPAR, memory addresses are contiguous and start at address zero. LPAR memory addresses are different from the system's absolute memory addresses, which also are contiguous and have a single address of zero. Do not confuse this capability with the memory virtualization that the operating system performs within its LPAR through the creation and management of multiple address spaces.
The z/Architecture features a robust virtual storage architecture that allows LPAR-by-LPAR definition of an unlimited number of address spaces and the simultaneous use by each program of up to 1023 of those address spaces. Each address space can be up to 16 EB (1 exabyte = 2^60 bytes). Thus, the architecture has no real limits. Practical limits are determined by the available hardware resources, including disk storage for paging.
Isolation of the address spaces is strictly enforced by the Dynamic Address Translation hardware mechanism. A program's right to read or write in each page frame is validated by comparing the page key with the key of the program that is requesting access. This mechanism has been in use since System/370. Memory keys were part of, and used by, the original System/360 systems.
Definition and management of the address spaces is under operating system control. Three addressing modes (24-bit, 31-bit, and 64-bit) are simultaneously supported, which provides backward compatibility and investment protection.
z14 ZR1 supports 4 KB, 1 MB, and 2 GB pages, and an extension to the z/Architecture that is called Enhanced Dynamic Address Translation-2 (EDAT-2).
Operating systems can allow sharing of address spaces, or parts of them, across multiple processes. For example, under z/VM, a single copy of the read-only part of a kernel can be shared by all VMs that use that operating system. Known as discontiguous shared segment (DCSS), this shared memory exploitation for many VMs can result in large savings of real memory and improvements in performance.
I/O virtualization
The z14 ZR1 supports three logical channel subsystems (LCSSs), each with 256 channels, for a total of 768 channels. In addition to the dedicated use of channels and I/O devices by an LPAR, I/O virtualization allows concurrent sharing of channels. The z/Architecture also allows sharing the I/O devices that are accessed through these channels by several active LPARs. This function is known as multiple image facility (MIF). The shared channels can belong to different channel subsystems, in which case they are known as spanned channels.
Data streams for the sharing LPARs are carried on the same physical path with total isolation and integrity. For each active LPAR that includes the channel configured online, PR/SM establishes one logical channel path. For availability reasons, multiple logical channel paths should be available for critical devices (for instance, disks that contain vital data sets).
When more isolation is required, configuration rules allow restricting the access of each logical partition to particular channel paths and specific I/O devices on those channel paths.
Many installations use the parallel access volume (PAV) function, which allows accessing a device by several addresses (normally one base address and an average of three aliases). This feature increases the throughput of the device by using more device addresses.
HyperPAV takes the technology a step further by allowing the I/O Supervisor (IOS) in z/OS (and the equivalent function in the Control Program of z/VM) to create PAV structures dynamically. The structures are created depending on the current I/O demand in the system, which lowers the need for manually tuning the system for PAV use.
In large installations, the total number of device addresses can be high. Therefore, the concept of channel sets is part of the z/Architecture.
Subchannel sets
On the z14 ZR1, up to three sets of approximately 64,000 device addresses are available. This availability allows the base addresses to be defined on set 0 (IBM reserves 256 subchannels on set 0) and the aliases on sets 1 and 2. In total, 196,349 subchannel addresses are available per channel subsystem.
Subchannel sets are used by the Metro Mirror (also referred to as synchronous Peer-to-Peer Remote Copy [PPRC]) function by having the Metro Mirror primary devices defined in subchannel set 0. Secondary devices can be defined in subchannel sets 1 and 2, which frees device addresses in subchannel set 0 for more connectivity.
To reduce the complexity of managing large I/O configurations further, Z introduced extended address volumes (EAV). EAV provides large disk volumes. In addition to z/OS, z/VM and Linux on IBM Z support EAV.
By extending the disk volume size, potentially fewer volumes are required to hold the same amount of data, which simplifies systems and data management. EAV is supported by the IBM DS8000® series. For more information about EAV compatibility of devices from other vendors, see the vendor documentation.
The dynamic I/O configuration function is supported by z/OS and z/VM. It provides the capability to change the active I/O configuration concurrently. Changes can be made to channel paths, control units, and devices. The fixed HSA in the z14 ZR1 greatly eases the planning requirements and enhances the flexibility and availability of these reconfigurations.
The health checker function in z/OS includes a health check in the I/O Supervisor that can help system administrators identify single points of failure in the I/O configuration.
4.2.2 IBM Z based clouds
Cloud computing capitalizes on the ability to rapidly and securely deliver standardized service offerings, while retaining the capacity for customizing the environment. Elasticity and just-in-time provisioning allow the system to deal with the ebbs and flows of demand dynamically.
Virtualization is critical to the economic and financial viability of cloud service offerings because it allows minimizing the over-provisioning of resources and reusing them at the end of the virtual server lifecycle.
Because of the extreme integration in the hardware, virtualization on z14 ZR1 is highly efficient (the best in the industry) and encompasses computing and I/O resources, including the definition of internal virtual networks with virtual switches. These characteristics are common to software-defined environments.
These characteristics also allow a single platform to support dense sets of virtual servers and server networks at up to 100% sustained resource utilization, with the highest levels of isolation and security. Therefore, the cloud solution costs, whether hardware, software, or management, are minimized.
Cloud elasticity requirements are covered by the z14 ZR1 granularity offerings, including capacity levels and capacity on demand. These and other technology leadership characteristics make the Z platforms the gold standard among servers.
In addition, managing a cloud environment requires tools that can take advantage of a pool of virtualized compute, storage, and network resources, and present them to the consumer as a service in a secure way. A cloud management system should also help with the following tasks:
Offering open cloud management and application programming interfaces (APIs)
Improving the usage of the infrastructure
Lowering administrative overhead and improving operations productivity
Reducing management costs and improving responsiveness to changing business needs
Automating resource allocation
Providing a self-service interface
Tracking and metering resource usage
A cloud management system must also support the management of virtualized IT resources to support different types of cloud service models and cloud deployment models. OpenStack (offered for z/VM and the KVM hypervisor) can satisfy a wide range of cloud management demands. It integrates various components to automate IT infrastructure service provisioning.
The Z cloud architecture that provides an industrial-strength base for hybrid cloud and API economy is shown in Figure 4-3.
Figure 4-3 IBM Z cloud architecture
4.2.3 Secure Service Container
The IBM Secure Service Container provides the base infrastructure to create and deploy an IBM Z Appliance, which includes operating system, middleware, SDK, and firmware support. With a Secure Service Container, deploying an appliance that provides a function or a service takes minutes instead of days, while providing simplified management and maintenance. When deployed in a Secure Service Container LPAR (SSC LPAR), the workload is protected from inadvertent access from an external attacker or even from a system administrator.
IBM Z Appliance is an integration of operating system, middleware, and various software components that work autonomously to provide core infrastructure services while focusing on consumability and security. The appliance is deployed in an SSC LPAR that is running in an IBM Z platform. The SSC LPAR in the Z platform provides support for the following components:
Encapsulated Operating Systems
Remote APIs (RESTful) and web interfaces
Embedded monitoring and self-healing
Tamper-protection
Protected IP
The appliance also is tested and qualified by IBM for a specific use case and can be delivered as a firmware, platform, or software solution.
At the time of this writing, the following IBM Z Appliances were available to be deployed in a Secure Service Container:
z/VSE Network Appliance (VNA)
IBM z Systems Advanced Workload Analysis Reporter (IBM zAware), now deployed as a software appliance that runs in a Secure Service Container and is integrated with IBM Operations Analytics for Z
4.3 Capacity and performance
The z14 ZR1 offers significant increases in capacity and performance over its predecessor, the z13s. Several elements contribute to this improvement, including the larger number of processors, improved individual processor performance, larger memory caches, SMT, and new machine instructions, such as SIMD. Subcapacity settings continue to be offered.
 
Note: Capacity and performance ratios are based on measurements and projections that use standard IBM benchmarks in a controlled environment. Actual throughput can vary, depending on several factors, such as the job stream, I/O and storage configurations, and workload type.
4.3.1 z14 ZR1 capacity settings
The z14 ZR1 offers processor subcapacity settings. The fine granularity in capacity levels allows the growth of installed capacity to more closely follow the enterprise growth, for a smoother, pay-as-you-go investment profile. Many performance and monitoring tools are available on Z environments that are coupled with the flexibility of the capacity on-demand options (see 4.3.2, “Capacity on demand” on page 73). These features help to manage growth by making capacity available when needed.
Capacity levels
The z14 ZR1 offers 26 distinct capacity levels for up to six CPs in the configuration, for a total of 156 capacity settings (26 x 6). These processors deliver the scalability and granularity to meet the needs of small and medium-sized enterprises.
A Processor Unit that is characterized as anything other than a CP, such as a zIIP, IFL, or an ICF, is always set to full capacity.
The z14 ZR1 capacity settings are shown in Figure 4-4.
Figure 4-4 z14 ZR1 capacity settings offerings
A capacity level is a setting of each CP to a subcapacity of the full CP capacity. The clock frequency of those processors remains unchanged; the capacity adjustment is achieved through other means.
To help you size a Z platform, IBM provides a no-cost tool that reflects the latest IBM LSPR measurements, called the IBM Processor Capacity Reference for Z (zPCR). You can download the tool from the IBM Presentation and Tools website.
For more information about LSPR measurements, see 4.3.3, “z14 ZR1 performance” on page 75.
4.3.2 Capacity on demand
The z14 ZR1 continues to provide capacity on-demand (CoD) offerings. They provide flexibility and control to the client, ease the administrative burden in the handling of the offerings, and give the client finer control over resources that are needed to meet the resource requirements in various situations.
The z14 ZR1 can perform concurrent upgrades, which provide an increase of processor capacity with no server outage. In most cases, a concurrent upgrade can also be nondisruptive to the operating system with operating system support. It is important to consider that these upgrades are based on the enablement of resources that are physically present in the z14 ZR1.
Capacity upgrades cover permanent and temporary changes to the installed capacity. The changes can be made by using the Customer Initiated Upgrade (CIU) facility, without requiring IBM service personnel involvement. Such upgrades are started through the web by using IBM Resource Link. Use of the CIU facility requires a special contract between the client and IBM, through which the terms and conditions for online buying of CoD upgrades and other types of CoD upgrades are accepted. For more information, see IBM Resource Link.
For more information about the CoD offerings, see IBM z14 Model ZR1 Technical Guide, SG24-8651.
Permanent upgrades
Permanent upgrades of processors (CP, IFL, ICF, zIIP, and SAP) and memory, or changes to a platform’s Model-Capacity Identifier up to the limits of the installed processor capacity on an existing z14 ZR1, can be performed by the client through the IBM Online Permanent Upgrade offering by using the CIU facility.
Temporary upgrades
Temporary upgrades of a z14 ZR1 can be done by On/Off CoD, Capacity Backup (CBU), or Capacity for Planned Event (CPE) that is ordered from the CIU facility.
On/Off CoD function
On/Off CoD is a function that is available on the z14 ZR1 that enables concurrent and temporary capacity growth of the CPC. On/Off CoD can be used for client peak workload requirements, for any length of time. It features a daily hardware charge and can include an associated software charge.
On/Off CoD offerings can be prepaid or post-paid. Capacity tokens are available on the z14 ZR1. They are always present in prepaid offerings and can optionally be present in post-paid offerings. In both cases, capacity tokens are used to control the maximum resource and financial consumption.
When the On/Off CoD function is used, the client can concurrently add processors (CP, IFL, ICF, zIIP, and SAP), increase the CP capacity level, or both.
Capacity Backup function
CBU allows the client to perform a concurrent and temporary activation of more CPs, ICFs, IFLs, zIIPs, and SAPs, an increase of the CP capacity level, or both. This function can be used during an unforeseen loss of Z capacity within the client's enterprise, or to test the client's disaster recovery procedures. The capacity of a CBU upgrade cannot be used for peak workload management.
CBU features are optional and require unused capacity to be available on CPC drawers of the backup system as unused processor units, as a possibility to increase the CP capacity level on a subcapacity system, or both. A CBU contract must be in place before the LIC-CC that enables this capability can be loaded on the system.
An initial CBU record provides for one test for each CBU year (each test up to 10 days in duration) and one disaster activation (up to 90 days in duration). The record can be configured to be valid for up to five years. Clients can also order more tests for a CBU record, in quantities of five, up to a maximum of 15 tests.
Proper use of the CBU capability does not incur any extra software charges from IBM.
Capacity for Planned Event function
CPE allows the client to perform a concurrent and temporary activation of more CPs, ICFs, IFLs, zIIPs, and SAPs, an increase of the CP capacity level, or both. This function can be used during a planned outage of Z capacity within the client’s enterprise (for example, data center changes, system or power maintenance). CPE cannot be used for peak workload management and can be active for a maximum of three days.
The CPE feature is optional and requires unused capacity to be available on CPC drawers of the backup system, as unused processor units, as a possibility to increase the CP capacity level on a subcapacity system, or both. A CPE contract must be in place before the LIC-CC that enables this capability can be loaded on the system.
z/OS capacity provisioning
Capacity provisioning helps clients manage the CP and zIIP capacity of z14 ZR1 that is running one or more instances of the z/OS operating system. By using the z/OS Capacity Provisioning Manager (CPM) component, On/Off CoD temporary capacity can be activated and deactivated under control of a defined policy. Combined with functions in z/OS, the z14 ZR1 provisioning capability gives the client a flexible, automated process to control the configuration and activation of On/Off CoD offerings.
4.3.3 z14 ZR1 performance
The Z microprocessor chip of the z14 ZR1 features a high-frequency design that uses leading IBM technology and offers more cache per core than other chips. In addition, an enhanced instruction execution sequence, along with processing technologies such as SMT, delivers world-class per-thread performance. The z/Architecture is enhanced with more instructions, including SIMD, that are intended to deliver improved CPU-centric performance and analytics.
For CPU-intensive workloads, more gains can be achieved by multiple compiler-level improvements. Improved performance of the z14 ZR1 is a result of the enhancements that are described in Chapter 2, “IBM z14 ZR1 hardware overview” on page 17 and in 4.1, “Technology improvements” on page 60.
A fully configured z14 ZR1 (Max30, FC 0639) offers up to 54% more capacity than the largest z13s Model N20. Uniprocessor performance also increased significantly. Single processor capacity of a z14 ZR1 is approximately 10% greater than a z13s with equal n-way configurations. Performance varies depending on workload type and configuration.
LSPR workload suite: z14 ZR1 changes
To help you better understand workload variations, IBM provides a no-cost tool, zPCR, which is available at the IBM Presentation and Tools website.
IBM continues to measure performance of the systems by using various workloads and publishes the results in the Large Systems Performance Reference (LSPR) report.
IBM also provides a list of MSU ratings for reference.
Capacity performance is closely associated with how a workload uses and interacts with a particular processor hardware design. Workload capacity performance is sensitive to the following major factors:
Instruction path length
Instruction complexity
Memory hierarchy
The CPU measurement facility (MF) data allows you to gain insight into the interaction of workload with the hardware design. CPU MF data helps LSPR to adjust workload capacity curves that are based on the underlying hardware sensitivities, in particular the processor access to caches and memory. With the Z, the LSPR introduced the following workload capacity categories that replace all older primitives and mixes:
LOW (relative nest intensity): Represents light use of the memory hierarchy.
AVERAGE (relative nest intensity): Represents average use of the memory hierarchy. This category is expected to represent most production workloads.
HIGH (relative nest intensity): Represents heavy use of the memory hierarchy.
These categories are based on the relative nest intensity, which is influenced by many variables, such as application type, I/O rate, application mix, CPU usage, data reference patterns, LPAR configuration, and the software configuration that is running, among others. CPU MF data can be collected by z/OS System Management Facilities (SMF) in SMF 113 records, or by z/VM Monitor starting with z/VM V5R4.
In addition to low, average, and high categories, the latest zPCR provides the low-average and average-high mixed categories, which allow better granularity for workload characterization.
The LSPR tables continue to rate all z/Architecture processors that are running in LPAR mode and 64-bit mode. The single-number values are based on a combination of the default mixed workload ratios, typical multi-LPAR configurations, and expected early-program migration scenarios. In addition to z/OS workloads that are used to set the single-number values, the LSPR tables contain information that pertains to Linux and z/VM environments.
The LSPR includes the internal throughput rate ratios (ITRRs) for the z14 ZR1 and the previous generations of processors that are based on measurements and projections that use standard IBM benchmarks in a controlled environment. The actual throughput that any user might experience varies depending on several factors, such as the amount of multiprogramming in the user’s job stream, I/O configuration, and processed workload.
Experience shows that Z platforms can be run at up to 100% utilization levels, sustained. However, most clients prefer to leave a bit of white space and run at 90% or slightly under. For any capacity comparison, the use of “one number,” such as the MIPS or MSU metrics, is not a valid method. Therefore, use zPCR and include IBM technical support when you are planning for capacity. For more information about z14 ZR1 performance, see IBM z14 Model ZR1 Technical Guide, SG24-8651.
Throughput optimization with z14 ZR1
The memory and cache structure that is implemented in the z14 ZR1 was significantly enhanced compared to previous generations to provide sustained throughput and performance improvements. Processors within the z14 ZR1 CPC drawer feature different distance-to-memory attributes. To minimize latency, the system attempts to dispatch and later redispatch work to a group of physical CPUs that share cache levels.
PR/SM manages the use of physical processors by LPARs by dispatching the logical processors on the physical processors. However, PR/SM is not aware of which workloads the operating system dispatches on which logical processors. The Workload Manager (WLM) component of z/OS has this information at the task level, but is unaware of physical processors.
This disconnect is solved by enhancements that enable PR/SM and WLM to work more closely together. They can cooperate to create an affinity between task and physical processor rather than between logical partition and physical processor, which is known as HiperDispatch.
HiperDispatch
HiperDispatch combines two functional enhancements, one of which is in the z/OS dispatcher and the other in PR/SM. This function is intended to improve computing efficiency in the hardware, z/OS, and z/VM.
The PR/SM dispatcher assigns work to the minimum number of logical processors that are needed for the priority (weight) of the LPAR. On the z14 ZR1, PR/SM attempts to group the logical processors into the same logical cluster or into the neighboring logical cluster in the same CPC drawer and, if possible, on the same chip. This configuration reduces multi-processor effects, maximizes use of shared cache, and lowers the interference across multiple partitions.
The z/OS dispatcher is enhanced to operate with multiple dispatching queues, and tasks are distributed among these queues. Specific z/OS tasks can be dispatched to a small subset of logical processors. PR/SM ties these logical processors to the same physical processors, which improves the hardware cache reuse and locality of reference characteristics, such as reducing the rate of cross communication.
To use the correct logical processors, the z/OS dispatcher obtains the necessary information from PR/SM through interfaces that are implemented on the z14 ZR1. The entire z14 ZR1 stack (hardware, firmware, and software) tightly collaborates to obtain the full potential of the hardware. z/VM HiperDispatch provides support similar to the z/OS HiperDispatch in z/OS. It is possible to dynamically turn on and off HiperDispatch without requiring an initial program load (IPL).
 
Note: HiperDispatch is required if SMT is enabled.
4.4 Reliability, availability, and serviceability
The IBM Z family presents numerous enhancements in reliability, availability, and serviceability (RAS). Focus was given to reducing the planning requirements, while continuing to reduce planned, scheduled, and unscheduled outages. One of the contributors to scheduled outages is Licensed Internal Code (LIC) driver updates that are performed in support of new features and functions. Enhanced Driver Maintenance (EDM) can help reduce the necessity and eventual duration of a scheduled outage.
When properly configured, the z14 ZR1 can concurrently activate a new LIC Driver level. Concurrent activation of the select new LIC Driver level is supported at specifically released synchronization points. However, for certain LIC updates, a concurrent update or upgrade might not be possible.
On a z14 ZR1, concurrent repair or upgrade of the CPC drawer is not supported.
z14 ZR1 builds on the RAS characteristics of the z13s, with the following RAS improvements:
Level 3 cache enhancements use powerful symbol ECC, which extends the prior z13s cache and memory improvements for better availability and makes the Level 3 cache resistant to more failure mechanisms. The z13s hardened the Level 4 cache, and main memory was hardened with RAIM before that.
Preemptive DRAM marking was added to the main memory to isolate and recover failures more quickly.
Small array error handling was improved in the processor cores.
Error thresholding was added to the processor core to isolate “sick but not dead” failure scenarios.
The number of Resource Groups was increased to four to reduce the impact of firmware updates and failures.
TCP checksum support on large sends was added to OSA-Express6S.
z14 ZR1 also continues to feature a redundant array of independent memory (RAIM), which provides a method to increase memory availability: a fully redundant memory system can identify and correct memory errors without stopping. The implementation is similar to the RAID concept that has been used in storage systems for years. For more information about RAS features, see IBM z14 Model ZR1 Technical Guide, SG24-8651.
The z14 ZR1 consists of a single CPC drawer that is designed as a field replaceable unit (FRU). The CPC drawer uses a modular, field-upgradable construction that consists of two CP clusters and contains one, two, or four processing unit (PU) single-chip modules (SCMs) and one storage controller (SC) SCM. In addition to the SCMs, the CPC drawer hosts memory DIMMs, connectors for I/O, the oscillator interface, and Flexible Support Processors (FSPs).
The CPC drawer is equipped with two or four redundant power supply units (PSUs, AC to DC), depending on the CPC drawer configuration. A redundant pair of Distributed Converter Assemblies (DCAs) steps down the bulk power and connects to six point-of-load (POL) cards, which provide power conversion and regulation. Two redundant oscillators are connected to the drawer.
Time domain reflectometry (TDR) techniques are applied to isolate failures between chips (PU-PU and PU-SC), and between the processor unit chips and DIMMs. More redundancy is designed into N+1 Ethernet switches, which replace the System Control Hubs (SCHs), and associated Power Distribution Units (PDUs) and 1U Support Elements (SEs).
z14 ZR1 inherits I/O infrastructure reliability improvements from the z13s, including Forward Error Correction (FEC) technology, which enables better recovery on FICON channels. The system is air cooled, with redundant N+1 fans for the CPC drawer and the PCIe+ I/O drawer.
The following RAS enhancements also are included:
Improved integrated sparing
Error detection and recovery improvements in caches and memory
Fibre Channel Protocol support for T10-DIF
A fixed HSA with its size increased to 64 GB on the z14 ZR1
OSA-Express firmware changes to increase the capability of concurrent maintenance change level (MCL) updates
Air cooled system with redundant fans (N+1) for all major components
New CFCC level
Enhanced IBM RMF™ reporting
z14 ZR1 continues to support the concurrent addition of resources, such as processors or I/O cards, to an LPAR to achieve better serviceability. If another SAP is required on a z14 ZR1 (for example, as a result of a disaster recovery situation), the SAPs can be concurrently added to the CPC configuration.
CP, zIIP, IFL, and ICF processors can be added concurrently to an LPAR. This function is supported by z/VM, and by z/OS and z/VSE with appropriate PTFs. Previously, proper planning was required to concurrently add CP, zAAP, and zIIP to a z/OS LPAR. Concurrently adding memory to an LPAR also is possible. This ability is supported by z/OS and z/VM.
z14 ZR1 supports adding Crypto Express features to an LPAR dynamically by changing the cryptographic information in the image profiles. Users can also dynamically delete or move Crypto Express features. This enhancement is supported by z/OS, z/VM, and Linux on IBM Z.
4.4.1 RAS capability of the PDUs and Ethernet switches
The IBM z14 ZR1 uses a modular construction with single phase power that is provided to the rack components by way of up to four redundant (N+1) intelligent PDUs. The System Control Hubs were replaced with two redundant GbE switches that connect the internal management infrastructure (FSPs, SEs, and PDUs).
4.4.2 RAS capability for the Support Element
Enhancements are made to the SE design for z14 ZR1. Notebooks that were used on past generations of Z servers were replaced with rack-mounted 1U servers in a redundant configuration on z14 ZR1. The more powerful SEs offer RAS improvements, such as ECC memory, redundant physical networks for SE networking requirements, redundant power modules, and better thermal characteristics. Also, the SEs provide Firmware Integrity Monitoring.
4.4.3 RAS capability for the Hardware Management Console
Enhancements are also made to the HMC designs for z14 ZR1. New for z14 ZR1 is an option to order 1U servers for traditional and ensemble HMC configurations. This 1U HMC offers the same RAS improvements as the improvements in the 1U SE. The 1U HMC option is a customer-supplied rack and power consolidation solution that can save space in data centers. The MiniTower design used before z14 ZR1 is still available.
4.5 High availability with parallel sysplex
Parallel Sysplex is a clustering technology for logical and physical servers that allows highly reliable, redundant, and robust Z environments to achieve near-continuous availability. Hardware and software tightly cooperate to achieve this result.
A parallel sysplex features the following minimum components:
Coupling facility (CF)
The CF is the cluster center. It can be implemented as an LPAR of a stand-alone Z platform, or as another LPAR of a Z platform in which other LPARs are running. Processor units that are characterized as CPs or ICFs can be configured to this LPAR. ICFs are often used because they do not require any software license charges. Two or more CFs are recommended for availability.
Coupling Facility Control Code
This IBM LIC is the operating system and application that runs in the CF. No other code runs in the CF. The code is used to create and maintain the structures. These structures are used under z/OS by software components, such as z/OS, DB2 for z/OS, CICS TS, and WebSphere MQ.
CFCC can also run in a z/VM virtual machine (as a z/VM guest system). A complete sysplex can be set up under z/VM, which allows testing and operations training, for example. This setup is not recommended for production environments.
Coupling links
These high-speed links connect the several system images (each running in its own logical partition) that participate in the parallel sysplex. At least two connections between each physical server and the CF must exist. When all of the system images belong to the same physical server, internal coupling links are used.
On the software side, the z/OS operating system uses the hardware components to create a parallel sysplex. One example of z/OS and CF collaboration is the system-managed CF structure duplexing, which provides a general-purpose, hardware-assisted, easy-to-use mechanism for duplexing structure data held in CFs. This function provides a robust recovery mechanism for failures, such as loss of a single structure on CF or loss of connectivity to a single CF. The recovery is done through rapid failover to the other structure instance of the duplex pair.
For more information about deploying system-managed CF structure duplexing, see the technical paper System-Managed CF Structure Duplexing, ZSW01975USEN, which is available by clicking Learn more at the Parallel Sysplex website.
 
Note: z/TPF can also use the CF hardware components. However, the term sysplex exclusively applies to z/OS use of the CF.
Normally, two or more z/OS images are clustered to create a Parallel Sysplex. Multiple clusters can span several Z platforms, although a specific image (logical partition) can belong to only one Parallel Sysplex.
A z/OS Parallel Sysplex implements shared-all access to data. This configuration is facilitated by Z I/O virtualization capabilities, such as MIF. MIF allows several logical partitions to share I/O paths in a secure way, which maximizes use and greatly simplifies the configuration and connectivity.
A Parallel Sysplex comprises one or more z/OS operating system images that are coupled through one or more coupling facilities. A properly configured Parallel Sysplex cluster is designed to maximize availability at the application level. Rather than quick recovery from a failure, the Parallel Sysplex design objective is zero failure.
Parallel Sysplex includes the following major characteristics:
Data sharing with integrity
The CF is key to the implementation of shared-all access to data: every z/OS system image can access all of the data. Subsystems in z/OS declare resources to the CF. The CF accepts and manages lock and unlock requests on those resources, which helps ensure data integrity (see the sketch after this list). A second, duplexed CF further enhances availability. Key users of the data sharing capability are DB2, WebSphere MQ, WebSphere ESB, IMS, and CICS.
Because these components are major infrastructure components, applications that use them inherently benefit from sysplex characteristics. For example, many large SAP implementations have the database component on DB2 for z/OS in a Parallel Sysplex.
Near-continuous (application) availability
Changes, such as software upgrades and patches, can be introduced one image at a time, while the remaining images continue to process work. For more information, see Improving z/OS Application Availability by Managing Planned Outages, SG24-8178.
High capacity
A Parallel Sysplex scales from 2 to 32 images. Each image can have 1 - 170 processor units6. Scalability is near-linear as z/OS images are added to a sysplex. This structure contrasts with other forms of clustering that use n-to-n messaging, in which the number of point-to-point connections grows as n(n-1)/2 (496 connections for 32 nodes, versus 32 connections to a CF hub), which leads to rapidly degrading performance as nodes are added.
Dynamic workload balancing
Because the system is viewed as a single logical resource, work can be directed to any of the Parallel Sysplex cluster operating system images where capacity is available.
Systems management
This architecture provides the infrastructure to satisfy client requirements for continuous availability, and it enables techniques for achieving simplified systems management consistent with that requirement.
Resource sharing
Several base z/OS components use CF shared storage. This usage enables the sharing of physical resources, which yields significant improvements in cost and performance and simplifies systems management.
Single system image
The collection of system images in the Parallel Sysplex appears as a single entity to the operator, user, database administrator, and so on. A single system image ensures reduced complexity from operational and definition perspectives.
N-1 support
Three hardware generations (the current and the two previous generations) are often supported in the same Parallel Sysplex. However, the z14 ZR1 supports N/N-1 coupling and STP connectivity only. The z14 ZR1 supports the following coupling features:
 – Integrated Coupling Adapter Short Reach (ICA SR; 8x link, up to 150 m)
 – Coupling Express Long Reach (CE LR; 1x link, up to 10 km unrepeated)
Multiple software releases or versions are supported.
CF Encryption support
Provides support for encrypting data while it is being transferred to and from the CF and while it resides in the CF structures:
 – z/OS systems must have the cryptographic hardware configured and activated to perform cryptographic functions and to hold AES master keys within a secure boundary. Feature 3863 (CPACF DES/TDES Enablement) must be installed to use the Crypto Express4 Coprocessor (CEX4C), Crypto Express5 Coprocessor (CEX5C), or Crypto Express6 Coprocessor (CEX6C) feature.
 – Full support can be enabled only when all systems are at z/OS 2.3 or later. Toleration support with reduced functionality is provided for z/OS 2.2 and z/OS 2.1.
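To make the data-sharing idea above concrete, the following minimal Python sketch (referenced in the data sharing item) models a CF-style central lock table. The class and method names are invented for illustration; they bear no relation to real CFCC interfaces, and real CF lock structures are far more sophisticated.

# Conceptual sketch of CF-style central lock management (hypothetical
# names; not a real CFCC interface). Each z/OS image asks the single
# CF hub for locks, so adding images adds connections linearly rather
# than the n*(n-1)/2 links that n-to-n messaging would need.

class CouplingFacilityLockTable:
    def __init__(self):
        self.locks = {}  # resource name -> owning image

    def lock(self, resource, image):
        # Grant the lock only if no other image holds the resource.
        owner = self.locks.get(resource)
        if owner is None or owner == image:
            self.locks[resource] = image
            return True
        return False  # requester must wait or retry

    def unlock(self, resource, image):
        if self.locks.get(resource) == image:
            del self.locks[resource]

cf = CouplingFacilityLockTable()
assert cf.lock("DB2.PAGESET.42", "SYSA")      # SYSA gets the lock
assert not cf.lock("DB2.PAGESET.42", "SYSB")  # SYSB must wait
cf.unlock("DB2.PAGESET.42", "SYSA")
assert cf.lock("DB2.PAGESET.42", "SYSB")      # now SYSB can update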
The components of a Parallel Sysplex as implemented within the Z architecture are shown in Figure 4-5. The configuration is one of many possible Parallel Sysplex configurations.
Figure 4-5 Sysplex hardware overview
Figure 4-5 shows a z14 ZR1 that contains multiple z/OS sysplex partitions and an internal coupling facility (CF02), a z14 ZR1 server that contains a stand-alone CF (CF01), and a z13 that contains multiple z/OS sysplex partitions. Server Time Protocol (STP) over coupling links provides time synchronization to all servers.
 
Note: The z14 ZR1 does not include InfiniBand coupling. The zEC12 and zBC12 do not include ICA SR or CE LR coupling links; therefore, they cannot connect directly to a z14 ZR1.
The z14 ZR1 allows direct connectivity only to a z13 or z13s that migrated to ICA SR or CE LR coupling links.
Selection of the appropriate CF link technology (1x IFB, 12x IFB, ICA SR, or CE LR) depends on the server configurations and on the physical distance between the servers. ICA SR links can be used over short distances only, while CE LR supports an unrepeated distance of up to 10 km (6.2 miles). For more information about coupling link options, see 3.7, “Coupling and clustering” on page 51.
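As a toy restatement of the distance rules that are quoted in this chapter (ICA SR up to 150 m; CE LR up to 10 km unrepeated), the following Python helper encodes the selection logic. The function name and its interface are invented for illustration; real coupling-link planning involves many more factors than distance alone.

# Hypothetical helper that encodes the quoted distance limits.
# Invented for illustration; not a real planning tool.
def pick_coupling_link(distance_m: float, internal: bool = False) -> str:
    if internal:
        return "IC"       # internal coupling link, same physical server
    if distance_m <= 150:
        return "ICA SR"   # short-reach, 8x link
    if distance_m <= 10_000:
        return "CE LR"    # long-reach, 1x link, unrepeated
    raise ValueError("distance requires repeaters/DWDM; see SG24-5444")

print(pick_coupling_link(100))     # ICA SR
print(pick_coupling_link(5_000))   # CE LR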
4.6 Pervasive encryption
Data protection and security are business imperatives, and regulatory compliance is increasingly complex. Extensive use of encryption is one of the best ways to reduce the risks and financial losses of a data breach and meet complex compliance mandates. However, implementing encryption can be a complex process for organizations. The following factors must be determined:
What data should be encrypted?
Where should encryption occur?
Who is responsible for encryption?
Because the data is the new perimeter, encryption policies must cover data in-flight and data at-rest, but should not require costly application changes to achieve this goal. Organizations need a transparent and consumable approach to enable extensive encryption of data in-flight and at-rest to substantially simplify and reduce the costs that are associated with protecting the data at the core of their enterprise and achieving compliance mandates.
With solutions around privileged identity management, sensitive data protection, and integrated security intelligence, Z security offers the next generation of secure, trusted transactions.
Pervasive encryption is a data-centric approach to information security that entails protecting data entering and exiting the z14 ZR1 platform. It involves encrypting data in-flight and at-rest to meet complex compliance mandates and to reduce the risks and financial losses of a data breach. It is a paradigm shift from selective encryption (where only the data that is required to achieve compliance is encrypted) to pervasive encryption. Pervasive encryption with z14 ZR1 is enabled through tight platform integration that includes the following features:
Integrated cryptographic hardware: CPACF is a co-processor on every processor unit that accelerates encryption. Crypto Express features can be used as hardware security modules (HSMs)7.
Data set and file encryption: You can protect Linux file systems and z/OS data sets by using policy-controlled encryption that is not apparent to applications and databases (a generic sketch of the cryptographic idea follows this list).
Network encryption: You can protect network data traffic by using standards-based encryption from endpoint to endpoint.
Full disk encryption: You can use disk drive encryption that protects data at rest when disk drives are retired, sent for repair, or repurposed.
CF encryption: This encryption secures the Parallel Sysplex infrastructure, including the CF links and the data that is stored in the CF, by using policy-based encryption.
Secure Service Container: This container secures the deployment of software appliances, including tamper protection during installation and runtime, restricted administrator access, and encryption of data and code in-flight and at-rest.
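As a rough, generic illustration of the encryption that underlies the data set and file encryption item in this list, the following Python sketch uses the pyca/cryptography package (an assumption; it must be installed separately) to perform authenticated encryption of a record before it is written out. On Z, this work is done in CPACF hardware under policy control and is not visible to applications; the sketch shows only the cryptographic idea.

# Generic illustration of authenticated encryption for data at rest,
# using the pyca/cryptography package. This is NOT how z/OS data set
# encryption is implemented; on Z, CPACF performs this work in
# hardware and policy decides which data is encrypted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, an HSM-protected key
aesgcm = AESGCM(key)

record = b"customer account data"
nonce = os.urandom(12)                     # unique per encryption
ciphertext = aesgcm.encrypt(nonce, record, None)

# Only the ciphertext and nonce are written to disk; the clear text
# exists only while the operating system processes the record.
assert aesgcm.decrypt(nonce, ciphertext, None) == record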
Data is encrypted when in-flight and at-rest, as shown in Figure 4-6. Data is decrypted only when it is processed by the operating system.
Figure 4-6 Protecting data in-flight and at-rest
Pervasive encryption includes the following advantages:
The ability to encrypt data by policy without application change
A simplified way to protect data at a much coarser scale (data sets, files, or entire disks) with industry-leading performance
Greatly simplified audit, which enables clients to pass compliance audits more easily

1 IBM z/Architecture is the conceptual structure of the Z platform that determines its basic behavior. The architecture was first introduced as System/360 in 1964.
2 IBM is changing how KVM for IBM Z is delivered. KVM hypervisor will now be offered through our Linux distribution partners.
3 Only a z/OS base device must be in subchannel set 0. Linux on IBM Z supports base devices in the other subchannels sets.
4 The CP is the standard processor for use with any supported operating system, but is required to run z/OS.
5 Meaney, P.J.; Lastras-Montano, L.A.; Papazova, V.K.; Stephens, E.; Johnson, J.S.; Alves, L.C.; O'Connor, J.A.; Clarke, W.J., “IBM zEnterprise redundant array of independent memory subsystem,” IBM Journal of Research and Development, vol.56, no.1.2, pp.4:1,4:11, Jan.-Feb. 2012, doi: 10.1147/JRD.2011.2177106
6 The IBM z14 ZR1 can have a maximum of 6 CPs or up to 30 ICFs. The maximum number of PUs per CF LPAR is 16.
7 An HSM is a physical computing device that safeguards and manages digital keys for strong authentication and provides crypto processing.