ISV IBM Z Program Development Tool concepts and terminology
 
Important: This IBM Redbooks publication contains material from the Independent Software Vendor (ISV) IBM Z Program Development Tool (IBM zPDT), referred to here as ISV zPDT, at the GA11 level. Some of the details might not match earlier versions of ISV zPDT.
This chapter contains various topics that provide conceptual information about ISV zPDT technology. Later chapters provide specific installation, operation, and management details. The topics in this chapter are grouped as follows:
Some of the terminology that is used throughout this publication.
2.2, “IBM zSystems characteristics for ISV zPDT” on page 21 describes higher-level design details for ISV zPDT, and it also describes the practical usage and limitations of the design. Topics include the Extended Binary Coded Decimal Interchange Code (EBCDIC) character set; emulated I/O devices; excluded IBM zSystems features; ISV zPDT design and operational environment considerations; security; reliability, availability, and serviceability (RAS) considerations; ISV zPDT performance; concurrent personal computer (PC) workloads; IBM zSystems architecture levels; and virtual environments.
2.2.9, “Hardware tokens” on page 33 provides a brief description of token details.
2.3, “PC selection overview for ISV zPDT” on page 35 provides a description of the practical details that are involved when you select or configure a PC for ISV zPDT usage. Topics include a general overview of PCs to be considered, some basic PC notes, PC memory use understanding, PC disk space for emulated direct access storage device volumes, and an “official” IBM statement about PC selections for ISV zPDT.
2.4, “Practical ISV zPDT operational notes” on page 38 describes Linux software levels, Linux user IDs, ISV zPDT operational components, consoles, devmaps, Linux directories, PC local area network (LAN) topics, and multiple ISV zPDT instances.
2.5, “ISV zPDT releases” on page 44 contains notes about the ISV zPDT release (at the time of writing) and minor notes about previous releases.
As a reminder, this publication is about the ISV zPDT product. Many of the details that are provided also apply to the IBM Z Development and Test Environment (IBM ZD&T) product, but you must consult IBM ZD&T documentation for more information. A brief overview of IBM ZD&T differences is in Appendix D, “IBM Z Development and Test Environment notes” on page 387.
2.1 Terminology
In this publication, we use the following terminology:
The base machine, underlying host, underlying Linux, or host Linux is the Intel compatible PC that is running Linux.
z/OS is used to refer to a recent release of the z/OS operating system, and likewise for z/VM and others.
A device map (devmap) is used to specify the operational configuration of ISV zPDT. It is a simple Linux flat file. (A small example devmap appears at the end of this section.)
A token or license key refers to a hardware device that supplies an ISV zPDT license. The terms token, key, and license are used interchangeably. A license also can be supplied by a software-only mechanism when using IBM ZD&T. One license is needed for each IBM zSystems CP emulation. CP refers to a general IBM zSystems processor that is the major functional element of ISV zPDT. By default, ISV zPDT provides IBM zSystems CPs. Optionally, you can convert a CP to an IBM zSystems Integrated Information Processor (zIIP), IBM zSeries Application Assist Processor (zAAP), or IBM Integrated Facility for Linux (IFL) processor.1
Processor or core normally refers to the Intel or AMD processor cores in the base machine. A 2-core machine has two processors, although both are typically in one hardware “processor” module.
Open Systems Adapter (OSA) is sometimes used as shorthand for an OSA-Express adapter. The ISV zPDT system provides OSA-Express6s emulation at the time of writing.
Many Linux commands for the base Linux system are shown throughout this publication. If a command is preceded with # (a hash or pound symbol), the command is entered in root mode. If the command is preceded with a dollar sign ($), it is not entered in root mode. The mode is important: do not routinely run zPDT as root.
ISV zPDT releases are denoted by GA6, GA7, GA8, and so on, where GA means general availability. GA10, for example, means ISV zPDT Version 1 Release 10. All ISV zPDT releases, to date, have been Version 1. GA10.1 means the first “fix pack” for the GA10 release.
IBM zSystems I/O devices have device numbers that are used to specify a specific device or interface. An older term for a device number is address, and this older term is still widely used. This publication uses address and device number interchangeably.
Just In Time (JIT) technology has different meanings in different environments. In
ISV zPDT, it provides a method of improving performance by consolidating the emulation of a string of IBM zSystems instructions into an optimized string of Intel-compatible instructions.
The primary operational characteristic of ISV zPDT, in which the instruction set of one computer platform (IBM zSystems) is implemented through another platform (Intel or AMD), has a long history in the computer business. This design has been described with many terms, including microcode, millicode, simulation, emulation, translation, interception, assisted instructions, machine interface (MI) architecture, machine level code, and others. We attempt to avoid all this terminology and simply refer to the ISV zPDT product.
This publication covers the ISV zPDT product. Many of the details also apply to the IBM ZD&T product, but any differences are not described here. Appendix D, “IBM Z Development and Test Environment notes” on page 387 describes some of the minor differences as they relate to tokens.
2.2 IBM zSystems characteristics for ISV zPDT
ISV zPDT functions include IBM zSystems processor (CP) operation and the emulation of various I/O devices. As a general statement at the time of writing, all the functions (instructions and I/O) that are needed to run IBM zSystems operating systems are provided.
2.2.1 ISV zPDT character sets
ISV zPDT character data is typically in EBCDIC, which is true for any IBM zSystems processor. Emulated disks and tapes typically contain EBCDIC data, although they logically contain whatever mix of EBCDIC, binary, ASCII, Unicode, or other formats that are produced by the IBM zSystems operating system and applications. There is no routine translation to the ASCII of the underlying host Linux system. The same binary data representation that is used on large IBM zSystems servers also is used on ISV zPDT systems. This representation extends to fixed point, packed decimal, and all floating point formats. All ISV zPDT data is in IBM zSystems representation.
IBM zSystems software running in an ISV zPDT environment is binary compatible with large IBM zSystems machines. For example, application programs that are compiled and linked in one environment can generally run unchanged in the other environment,2 assuming that the configuration elements are compatible. There can be a few exceptions to this compatibility, usually based on rather obscure application designs.
There are special cases for emulated card readers and printers, where the character set involved is relevant, and conversions between ASCII and EBCDIC are needed and are automatically provided. (Usage of zPDT emulated card readers and printers is rare.)
2.2.2 ISV zPDT emulated I/O devices
An ISV zPDT system includes 12 device managers, each of which provides emulation for a related group of devices. A device manager can emulate multiple instances of its devices.
aws3274 Emulates a local, channel-attached 3274 control unit. This device manager is almost always used to provide the IBM MVS operator console and 3270 application sessions. Each terminal appears (to the IBM zSystems operating system) as operating through a channel-attached non-Systems Network Architecture (SNA) DFT
IBM 3274 control unit. TN3270 sessions are used through the base Linux TCP/IP interface. Typically, 3279 terminals (and rarely, 3284 printers) are the devices that are emulated.
awsckd Emulates IBM 3390 (and IBM 3380) disk units by using a single Linux file for each 3390 or 3380 device. 3390-1, -2, -3, and -9 are the most common devices that are emulated, although 3390 volumes of almost any size can be emulated.
awsosa Emulates an IBM OSA-Express6s adapter in either QDIO (OSA-Express Direct (OSD)) or non-QDIO (OSA-Express (OSE)) mode. The hardware that is involved is an Ethernet adapter on the underlying PC.3 This device manager can support TCP/IP operation. SNA operation is not supported.4 It can also support Open Systems Adapter/Support Facility (OSA/SF) usage when using older IBM zSystems operating systems that provide it.
awstape Emulates a 3420, 3480, 3490, or 3590 tape drive by using a Linux file in place of the tape media.
awscmd Emulates a tape drive, but routes output records to the base Linux system, where they are run as commands, and returns Linux output to the emulated tape drive.
awsfba Emulates Fixed Block Architecture (FBA) devices, which are supported by z/VM and a few other operating systems. A Linux file is used for each emulated device. This device manager is not the Fibre Channel (Open Systems) FBA on recent IBM zSystems machines. IBM 3996-1 or -2 devices are emulated.
awsoma Emulates the Optical Media Attach interface, working with Linux files in this format. This function is read-only.
aws3215 Emulates a 3215 console device (seldom used today) by using a Linux terminal window for the interface.
awsprt Emulates a 1403-N1 or 3211 printer by using a Linux file for output. Forms Control Buffer (FCB) emulation for the 3211 is provided, but Universal Character Set (UCS) functions are not provided. Automatic ASCII translation (with a fixed translation table) is provided.
awsrdr Emulates a 2540 card reader by using Linux files as input. (The 2540 card punch functions are not emulated.) Both EBCDIC and ASCII data can be used.
awsscsi Uses a Linux SCSI-attached tape drive as an IBM zSystems tape drive, which provides a way to read/write “real” mainframe tape volumes. For specific drive and adapter details, see Chapter 14, “Tape drives and tapes” on page 303.
awsctc Emulates an IBM 3088 channel-to-channel (CTC) adapter by using TCP/IP as the communication mechanism. The connection can be the same ISV zPDT instance, another instance in the same PC, or an
ISV zPDT instance in a LAN-connected machine.
A typical ISV zPDT user, running z/OS, normally uses aws3274, awsckd, awsosa (if connectivity other than local 3270s is needed), and awstape. The other device managers are used much less often.
The current ISV zPDT design allows a maximum of 2048 emulated I/O devices, which are often described as 2048 subchannels. A practical number of emulated I/O devices is usually much less than 2048, depending on many factors.
2.2.3 Excluded IBM zSystems functions
Not all IBM zSystems instructions and functions are available with ISV zPDT. Instructions that are related to specific hardware facilities or that are optionally used by specialized programs might not be present. The excluded items include the following:
Base Control Program internal interface (BCPii) functions.
List-directed initial program load (IPL) and Internal IPL.
The accelerator; PKCS#11, EP11, and customized cryptographic routines (user-defined extensions (UDXs)) functions of cryptographic coprocessors; and IBM Trusted Key Entry (TKE) functions and interfaces.
Time-of-day (TOD) steering.
IBM zEnterprise® BladeCenter Extension (zBX) functions.
CPU Measurement Facility or Hardware Instrumentation Services (HIS), including various counters and activities.
Asynchronous data movers.
IBM FICON®, and Transport Mode I/O.
Parallel Access Volumes (PAVs).
Logical channel subsystems.
IBM HiperSockets functions.
Logical partitions (LPARs).
Functions involving the usage of Hardware System Area (HSA).
Flash memory.
Multiple I/O paths.
Multithreading (MT) CPs (MT or symmetric multithreading (SMT)).
Various fields within the Channel Measurement Facility reports.
Hardware Management Consoles (HMCs).
Some CHSC commands (such as crypto adapter measurements).
System Recovery Boost.
The DFLTCC asynchronous facility is missing, but the synchronous instruction is present. However, the feature indicator for the instruction is disabled. (It can be enabled with the zPDT command dflt.)
The IBM S/390® Compatibility Mode function has limitations. For more information, see “S/390 Compatibility Mode” on page 274.
IBM Secure Service Container (SSC) functions.
OSA Address Table (OAT) configuration with the QUERYINFO command (for example, in z/OS).
Dynamic sense for I/O operations in some cases (such as for mini-disks).
The OSPROTECT=1 function of z/OS. (OSPROTECT=SYSTEM is accepted.)
Advanced I/O attachments or functions, such as IBM HyperSwap®.
The Single System Image (SSI) and Live Guest Migration, as used by z/VM.
ISV zPDT crypto master keys are not always usable between a previous zPDT release and a new zPDT release. In such cases, you must enter a new master key for the new zPDT release, and it can be the same key as used in the previous release.
Various forms of sophisticated performance management, which are provided by complex instructions on recent IBM zSystems system processors, are not implemented by
ISV zPDT. The actual instructions can be run (to avoid ABENDs), but they generally do nothing or effectively provide zero data.
The zPDT emulation of 3590 tape drives has minor restrictions regarding the meaning of some sense data.
Current releases of zPDT do not provide options to emulate older IBM zSystems architecture functions that have since changed in the architecture. For example, the OSAINFO function that was in OSA-Express3 is no longer provided.
The IBM z16™ architecture offers certain “counters” reflecting the operation of selected features, such as some crypto operations. Such counters might not be present (or might not reflect appropriate values) during ISV zPDT operation.
IBM z15™ and later systems do not have zAAP specialty processors. zAAP specialty processors are mentioned throughout this publication for compatibility with earlier ISV zPDT releases.
2.2.4 ISV zPDT design and operational environments
This section lists several specific IBM zSystems features that are generally available with ISV zPDT. In addition to this list, ISV zPDT users should be aware of the general environments for which ISV zPDT is designed and tested, and which environments might not fit it well.
ISV zPDT is designed and tested to include the following environments:
z/OS usage is accepted if the emulated memory for ISV zPDT is a reasonable size for the z/OS workload that is used. A 4 GB definition within the devmap for ISV zPDT might suffice for a minimal z/OS, and much larger sizes might be needed for reasonable performance of a particular workload.
z/VM usage is accepted if the necessary memory sizes are present. For example, a z/VM image plus two z/OS images plus two coupling facility (CF) images to produce an IBM Parallel Sysplex system is acceptable.
z/VSE usage is accepted if necessary memory sizes are present. However, there is no formal zPDT support for z/VSE. For more information, contact your z/VSE provider.
Linux for IBM zSystems (or Linux for S/390) is typically accepted if the necessary memory sizes are present. You (the ISV zPDT owner) must obtain, install, and configure the Linux for z package. IBM does not produce Linux for IBM zSystems. New releases sometimes contain unexpected installation “techniques” that might take time to resolve.
Multiple ISV zPDT instances can be used, potentially with some shared emulated devices.
Multiple concurrent users (for appropriate IBM zSystems software applications) are allowed and usually expected.
A base Linux (to run ISV zPDT) that has sufficient memory (beyond the amount that is defined in ISV zPDT) to provide an effective Linux disk cache is important. This cache can be critical for reasonable performance by ISV zPDT.
ISV zPDT supports “large memory” environments (1 MB and 2 GB pages for z/OS) and IBM zSystems system applications that use much memory. These environments can work well, but, especially with such usage, you must understand the performance implications of PC memory size, Linux disk cache usage, paging implications, and others.
Operating in a virtual or container environment generally works if memory allocations are handled reasonably and the overall workload remains reasonable for the PC that is involved. LAN setup and operation can be more complex.
The “internals” (hardware and firmware) for an IBM zSystems machine are complex to support complex environments with z/OS, z/VM, IBM z/Transaction Processing Facility (z/TPF), various Linux distributions for IBM zSystems, and others. ISV zPDT emulates much of this complexity when it is used in the environments that are listed here.
ISV zPDT is generally not sensitive to the particular PC model that is used as the base, although a suitably configured high-end PC server can provide modest performance improvements and improved reliability.
ISV zPDT is usually operated with a simple Linux desktop GUI for control and a relatively simple emulated 3270 window for a z/OS operator console. However, simple remote command-line interface (CLI) access to a PC server running ISV zPDT can be used instead of a GUI Linux console.
The DFLTCC instruction is present as a synchronous instruction. The DFLTCC asynchronous function is not available in ISV zPDT. Most usages of the DFLTCC operation (by z/OS system code) detect and work with this configuration, but a few exceptions exist. One known case involves early versions of z/OS 2.4 that result in 11E ABENDs followed by a wait state during a z/OS IPL. APAR fixes are available. One bypass is to use the zPDT dflt command after starting zPDT but before performing an IPL of z/OS.
ISV zPDT is not designed and tested to include the following areas; nevertheless, these options and directions might work for you. However, there is no unique ISV zPDT support for these areas, and you might need to address and resolve individual problems that you encounter.
Many concurrent guest operating systems under z/VM can create issues. ISV zPDT design and testing usually stop at two z/OS systems (typically in a Parallel Sysplex environment) or two or three smaller operating systems. With larger environments, you must resolve any special problems that arise in your environment. Similar potential exposures exist with many virtual machines (VMs) (or containers) running ISV zPDT.
IBM does not provide specific performance specifications for any version of ISV zPDT.
Multiple ISV zPDT instances, when used excessively, can create system overloads that are difficult to resolve. At a basic level, the multiple instances have approximately the same considerations as multiple operating systems instances under z/VM or with containers.
There is no formal testing or support for using ISV zPDT in a container. We are aware of many customers working in container environments and, as far as we understand, there have been no basic problems. However, good planning is needed, with special considerations for the amount of memory that is usable by zPDT and for the networking setup.
There is no formal planning, testing, or support for using ISV zPDT in a cloud environment. We are aware of some customers doing so, but the zPDT developers do not provide or recommend any details about such operations.
Massive emulated disk operations can be a problem. Normal ISV zPDT operation typically involves a single emulated IBM zSystems channel. (OSA emulation uses extra emulated channels.) Excessive loads (as with too many separate operating system images under z/VM or through containers or other virtual environments) can result in significant I/O overloads and delays that can trigger watchdog timer issues or other complications. This situation is seldom a problem for “normal” ISV zPDT usage, but occurs in extreme situations.
Java performance within ISV zPDT is slower than most other emulation functions, and extensive dependence on Java performance might not be the best option under current ISV zPDT versions.
Some IBM zSystems I/O functions can create obscure problems. For example, converting z/OS volumes between z/VM mini-disks and “real” disks (both are zPDT emulated 3390 disks) involves thorough understanding of many rules and conventions to avoid obscure issues.5 Also, projects that require alterations to the emulated cryptographic adapter are not generally supported. In particular, TKE operation is not supported.
Multiple concurrent users (such as Time Sharing Option (TSO) users) are permitted, but the general design is for “some” with whatever limitations are appropriate for the hardware and the software configuration. Dozens of concurrent users are not considered to be within the general ISV zPDT sphere, although the specific numbers that are involved depend on many factors.
The ISV zPDT license typically includes copying a particular z/OS system (the Application Development Controlled Distribution (ADCD) z/OS system) that is prepared by IBM.
(A z/VM system is also available for ISV zPDT users.) Other IBM software is not included with ISV zPDT, and exceptions must be discussed with an IBM representative. (Each IBM ZD&T license contract might be different.)
Complex virtual or container environments can have many issues, especially for memory allocation, networking, or shared I/O accesses. Overloading the base system can create various problems. Although there are many zPDT owners operating in these environments, the setup skills are up to the zPDT owner, and they can be more complex than anticipated.
ISV zPDT is tested with some of the more basic virtual and container environments, but not with all versions, and there is a growing number of “other” versions and options. This area can be more complex than expected. In rare cases, we have seen zPDT errors such as “Processor family not supported” when running in “other” virtual or container environments. More often, we hear about performance problems that are often related to the amount of memory or the usage of it in the zPDT environment.
When started, ISV zPDT coordinates TOD with the underlying Linux (with some leap second adjustment), but then manages its various internal clocks and timers separately.6 Sustained heavy system loads or long “production-like” intervals between ISV zPDT restarts can skew these clocks or timers to the point where the ISV zPDT license is considered invalid and an ISV zPDT restart is needed.
The ISV zPDT package (with the normal ADCD z/OS system) is not designed for people unfamiliar with IBM zSystems machines and z/OS. Most ISV zPDT users are familiar with these topics. The basic ISV zPDT can be used in a learning environment, although a simpler, more basic z/OS package is better suited, as is appropriate “educational” documentation, monitoring, and support.
ISV zPDT development and testing are done with several different PC Linux distributions. There is no testing (or design) suitable for all Linux releases and versions. ISV zPDT is assumed to be the major function on its base Linux PC, and it is better to avoid “other” Linux versions and, in some cases, the most recent updates. A stable base is a better goal. The Linux distributions and levels that are used for ISV zPDT releases are briefly listed in 2.5, “ISV zPDT releases” on page 44.
New Linux distributions or updates sometimes change the default installed library modules and other factors, and the ISV zPDT developers cannot always foresee these changes. One solution is to stay with the Linux distributions and levels that are documented for your ISV zPDT release. Another solution is to observe and search for various internet discussions about new Linux versions.
New Linux distribution levels sometimes change setup options, the configuration controls, or the firewall functions that are used for network usage. An existing ISV zPDT release might require user assistance when encountering these issues.
Terminal connections (through emulated OSA or emulated IBM 3274 interfaces) depend on ISV zPDT operation, base Linux operation and configuration, and the IBM zSystems operating system parameters for the links that are involved. This situation can become complex, and solutions depend on user or installation skills.
IBM has not tested extreme ISV zPDT configurations. For example, in theory an ISV zPDT instance can have up to 2048 devices and up to 8 CPs, and base Linux can have up to 15 concurrent ISV zPDT instances. In a wildly extreme configuration, this setup might represent 15 x 2048 = 30,720 emulated devices being used by 120 CPs. Extreme configurations, even though much smaller than this example, might not be practical. Among other considerations, each emulated device requires control blocks in Linux shared memory, and a large configuration might cause difficulties with Linux shared memory and swap file configurations.
ISV zPDT performance is partly determined by the effectiveness of the Linux disk cache, and ISV zPDT cannot directly manage this aspect of normal Linux operation. The effectiveness of the Linux disk cache can change as workload characteristics change, and it is determined by many factors. It is easier to understand in simple cases where it is not affected by virtual operations, multiple IBM zSystems operating systems, or container usage.
At the time of writing, network-attached storage (NAS) disks had mixed reviews by
ISV zPDT users. The issues are at the Linux level. ISV zPDT is unaware of the nature of the Linux disks except when access delays are so extensive that z/OS timeouts are triggered. If Linux detects I/O problems with fairly intensive usage of the NAS disks, they are not appropriate for ISV zPDT use. In some cases, the usability might be related to the congestion and bandwidth of the LAN that is involved.
Linux root authority is needed to install ISV zPDT and run a few of the administrative commands. Once ISV zPDT is installed, the release of ISV zPDT at the time of writing (GA10.3 and later) does not need base Linux root access for normal operation.
New releases of Linux for S/390 (or for IBM zSystems) sometimes have differing technology that can cause problems with ISV zPDT. For example, at the time of writing, a particular new version fails when installing it directly on ISV zPDT, but it can be installed under z/VM. This particular problem probably will be resolved by the time you read this publication, but other similar problems might be in future releases. Future ISV zPDT releases can address some of these problems, but there is usually a time gap between the occurrence of such problems and a future ISV zPDT release or fix pack that addresses some such problems.
zPDT does not specify how you might arrange emulated volumes (and other files that zPDT might access) within your Linux environment, but zPDT must have at least read access to whatever Linux directories are involved. In a realistic sense, zPDT probably should have read/write access to such directories and the files representing emulated volumes.7
z/OS is not an integrated part of ISV zPDT, but the ADCD implementation of z/OS is often included in an ISV zPDT product package. z/OS questions and issues are not a direct part of ISV zPDT support, and some issues might require more user administration skills. For example, the ADCD z/OS package uses RACF for security controls. Using other security controls instead of RACF is an example that can introduce additional administration effort.
2.2.5 ISV zPDT security, integrity, and RAS concepts
ISV zPDT emulates IBM zSystems architecture while running as a PC Linux application.
ISV zPDT has no control over the security or integrity environment of this “base” Linux. Although ISV zPDT generally follows reasonable Linux application standards, ISV zPDT should not be considered a secure system unless all aspects of access to the base Linux are also considered. Within ISV zPDT itself (while running an IBM operating system), the normal security and integrity of that environment exists.8 For example, the z/OS ADCD package that is typically available to ISV zPDT users contains IBM RACF, which can be used to manage security within the z/OS environment.
At the base Linux level, there is potential for many exposures. For example, a root user can inspect any emulated 3390 or emulated tape file and potentially uncover confidential data that is stored in these emulated devices. A malicious Linux user might “front end” various
ISV zPDT administrative commands (which run as ordinary Linux commands), although
ISV zPDT provides some protection against this action.
Ideally, access to the base Linux system running ISV zPDT should be limited to only the necessary trusted administrative personnel. General user access to the IBM zSystems operating system running under ISV zPDT would then be only through IBM zSystems interfaces, such as emulated 3270 terminals.9 Such a restrictive environment is not always possible, and even where this environment is intended, skills are needed to create and maintain it.
ISV zPDT does not provide the RAS of a standard IBM zSystems system. ISV zPDT has no control over exposures in the underlying Linux system or the underlying PC system. Careful selection, configuration, and management of these elements can produce a good system, but there is no claim that it equals the RAS of a standard IBM zSystems system. Prudent users should have defined backup procedures (for emulated volumes), procedures for monitoring Linux and z/OS consoles (for error messages), periodic checking of emulated volume structure validity (by using the alcckd -rs command, among other tools), and monitoring to verify that the system is not routinely overloaded.
zPDT can emulate some features of the IBM zSystems system cryptographic adapter, including the use of “master keys.” These (emulated) master keys are stored in a normal base Linux file, which is not secure.
 
Important: ISV zPDT is intended for development work. It is not intended as a secure system. Think carefully before moving any confidential data to your ISV zPDT systems. You should not enter a “real” master key (as used on a real IBM zSystems system) in a zPDT system without fully understanding the potential exposures that are involved.
2.2.6 Performance
IBM does not provide performance or capacity specifications for ISV zPDT. Specifying performance or capacity for ISV zPDT is too difficult for many reasons, including the following ones:
Performance depends on the power of the underlying hardware, which changes frequently. Performance is related to the clock speed of the underlying processor (such as 3.4 GHz for an Intel processor) and the memory design, pipelining, caching, instruction availability, and translation design of the underlying processor.
Linux performance (including applications such as ISV zPDT) can be greatly influenced by how the Linux disk cache (and swap file) is performing, and the nature of the Linux disks.
The number of IBM zSystems CPs that are emulated by ISV zPDT has an obvious effect, as does the number of cores in the PC processor, but the effect is not linear.
Every new release or update of ISV zPDT can change performance.
The IBM zSystems instruction mix and memory reference pattern have a profound impact on performance, a greater impact than is observed on a larger IBM zSystems system.
Million instructions per second (MIPS) is a rather discredited metric, although it is still informally used with smaller IBM zSystems machines. Any MIPS number for ISV zPDT is very dependent on the nature of the workload and the Linux configuration.
I/O performance must be considered. For example, all emulated disk and tape operations for ISV zPDT might be from a single (relatively slow) computer disk drive or solid-state drives (SSDs). Workloads with modest I/O loads (when run on a larger IBM zSystems system) might be I/O-bound on an ISV zPDT system.
Virtualization or containers can add much more variability, especially when the host computer is overcommitted.
z/VM can be used with ISV zPDT. The performance of guest operating systems under z/VM (such as z/OS running under z/VM) is influenced by the usage of the START INTERPRETIVE EXECUTION (SIE) function. On a large IBM zSystems machine, this function provides a “microcode assist”10 for many of the virtualization functions that are performed by z/VM. Most SIE functions are provided by ISV zPDT, but there is no direct equivalent of a “microcode assist” level, and the virtualization performance boost that is provided by SIE is modest.
In most cases, ISV zPDT performance partly depends on the JIT operation that is employed by the ISV zPDT emulator.11 This function is internal to ISV zPDT and is not documented other than this brief note. The function is, in a sense, a parallel operation to the simple emulation of each IBM zSystems instruction. This parallel function can detect reasonable
IBM zSystems instruction loops. When the loops are detected, the instructions in the loop are forwarded to the JIT “dynamic compiler”, where they are consolidated into an optimized string of Intel compatible instructions. This optimized string is run instead of using individual emulation of each IBM zSystems instruction that is encountered in the loop. The implementation is complex, and it tends to be updated with each new ISV zPDT release.
ISV zPDT is not intended to replace normal IBM zSystems configurations. If you are considering a configuration with more than, for example, a hundred emulated devices, with heavily loaded I/O devices, or with many z/VM guests, discuss your requirements with your ISV zPDT supplier. Moderately large configurations are possible and can be acceptable, but you should review your plans with knowledgeable ISV zPDT people. The key to understanding ISV zPDT capacity and performance is a detailed understanding of your workload.
2.2.7 IBM zSystems architecture levels
IBM zSystems machines have architectural characteristics, such as new instructions on newer systems or changed firmware characteristics. The architectures are not always compatible with earlier versions, and might require software updates to run older operating systems on newer architectures. For example, z15 machines (including ISV zPDT GA10) do not run with the original z/VM 6.2 release. In this case, a program temporary fix (PTF) is needed to resolve the incompatibility. Such software updates are often known as “toleration PTFs”, and they are usually available for some older operating system releases when a new IBM zSystems series becomes available. IBM does not provide such updates for much earlier operating systems.
The IBM zSystems architecture levels for the ISV zPDT CPs are shown in Table 2-1.
Table 2-1 IBM zSystems architecture levels

   Release date    ISV zPDT release   ISV zPDT build level   IBM zSystems architecture   ARCH level (for compilers)
   2009 and 2010   V1R1 “GA1”         39.xx                  z800 and z900               ARCH(7)
   2011            V1R2 “GA2”         41.xx                  IBM z10                     ARCH(8)
   2012            V1R3 “GA3”         43.xx                  z196                        ARCH(9)
   2013            V1R4 “GA4”         45.xx                  EC 12                       ARCH(10)
   2014            V1R5 “GA5”         47.xx                  EC 12 GA 2                  ARCH(10)
   2015            V1R6 “GA6”         49.xx                  IBM z13®                    ARCH(11)
   1Q 2017         V1R7 “GA7”         49.xx                  z13 GA2                     ARCH(11)
   4Q 2017         V1R8 “GA8”         51.xx                  IBM z14                     ARCH(12)
   2Q 2019         V1R9 “GA9”         53.xx                  IBM z14 GA2                 ARCH(12)
   1Q 2020         V1R10 “GA10”       55.xx                  IBM z15                     ARCH(13)
   3Q 2022         V1R11 “GA11”       57.xx                  z16                         ARCH(14)
ISV zPDT does not have a facility to emulate older IBM zSystems architectures. For example, the current release (ISV zPDT 1.11) is at the z16 level and cannot be set to a z196 or z10 level. Providing a switchable architecture-level facility would result in reduced performance.
If you want to test software on older IBM zSystems architectures (and older z/OS releases), you must retain older versions of ISV zPDT. Older ISV zPDT releases might or might not work correctly with the latest Linux distributions, and IBM cannot help in this area. In general, you must retain older PC hardware, older Linux releases, older z/OS releases, and older
ISV zPDT releases if you want to consistently run your software in older operating environments. IBM does not have a way to distribute older ISV zPDT releases or older ADCD releases.
2.2.8 Virtualization and containers
ISV zPDT can be used in a virtual environment or in Docker containers. As a best practice, you should have some experience with ISV zPDT (and whatever operating systems are used under it) in a more basic environment before attempting to use ISV zPDT in a virtual environment or Docker containers.
The most common performance problem that we encountered with virtual or container environments is constrained memory and memory planning.
Docker
IBM has done basic testing of ISV zPDT running in a Docker environment, which can be a workable environment for ISV zPDT operation. As with other types of virtualization, we caution against over-commitment of the base system. These cautions are related to memory sizes, availability, and system overload.
Simple Docker usage involves applications that are represented by a single Linux process. ISV zPDT operation has many Linux processes; furthermore, ISV zPDT operation requires the availability of auxiliary Linux programs such as the awsckd and aws3274 device managers, and the awsstart and ipl commands. The network configuration can be especially complex to implement.
ISV zPDT does not provide a sample Docker script or image. If you build your own image, you should consider the following goals:
The Linux CLI that is used to start ISV zPDT, the 3270 window for the z/OS master console, and a 3270 window for TSO usage should be available in some manner outside the immediate Docker environment. Likewise, the ISV zPDT logs directory and files must be available for debugging.
Any ISV zPDT core image files should be exposed to the base Linux. They might need to be examined or retained for debugging assistance.
The need for Linux root authority is limited in basic ISV zPDT operation. However, your Docker configuration and parameters might increase the need for Linux root authority when running ISV zPDT.
Virtual environments
The VMware, Kernel-based Virtual Machine (KVM), and Xen virtual environments are sometimes used during ISV zPDT development, although they are not the basic focus of
ISV zPDT. Other virtual environments might operate correctly, but have not been tested by ISV zPDT developers.
ISV zPDT (running z/OS) is typically a “heavy” workload. We strongly advise that you do not run ISV zPDT in overcommitted virtual environments. Among other effects, a substantially overcommitted virtual server might cause delays that trigger z/OS missing interrupt handler or SPINLOOP warnings. Another danger might be cascading page faults, where the VM hypervisor and the guest Linux running ISV zPDT and z/OS might all be paging due to several levels of overcommitted memory.
You (the ISV zPDT owner, user, or administrator) must obtain the necessary skills to install, configure, and use your virtual server. We do not attempt to document or provide instructions about installing and managing the virtual server environment. We encountered several cases of virtual ISV zPDT operation in servers that are managed by staff who are not familiar with the workload characteristics of ISV zPDT, z/OS, and others. In some cases, the server is overcommitted, with the server staff commenting, “We always plan this way”. Insufficient resources for ISV zPDT can produce many problems that are difficult to diagnose.
The tested virtual environments normally use remote ISV zPDT license and Unique Identity Manager (UIM) servers. These servers might not be necessary for smaller configurations where a separate USB port (for a zPDT license token) could be assigned to each virtual guest ISV zPDT. A single ISV zPDT token (connected to a USB port) typically cannot be shared by multiple VMs. In general, the first VM that starts (that specifies USB usage) occupies the USB interface to the token. Using a remote license server allows a token to be shared by multiple virtual ISV zPDT instances (assuming the token can provide sufficient licenses).
It is possible to configure logical disk drives to be shared among multiple virtual guest machines. Do not take this action unless you are certain that you know what you are doing. Linux (which we assume is the basic operating system on all the VMs) does not routinely support shared disks.
We found that z/OS guests in a virtual environment have performance ranging from excellent to unacceptable, depending on the nature of the workload and whether the server was overcommitted in some way. A virtualized environment cannot create more machine capacity than what exists in the underlying hardware. The key consideration is the nature of the workloads.
The focus in this publication is z/OS, but this focus does not imply any particular workload under z/OS. A z/OS system with many TSO users (mostly editing source code or doing occasional compilations) might be considered lightly loaded, and another z/OS system with only a few users running large Db2 or Java jobs might be heavily loaded. You cannot draw any conclusions about performance unless you can realistically understand your workloads.
2.2.9 Hardware tokens
The hardware tokens at the time of writing are shown in Figure 2-1.
Figure 2-1 The 1090 (top one in photo) and 1091 hardware keys
These tokens are the IBM 1090 (for ISV zPDT) and IBM 1091 (for IBM ZD&T) tokens. Here are the specific “token” aspects:
An ISV zPDT hardware token is a USB device that resembles a typical USB flash drive, and it must be installed in a USB port for ISV zPDT to be operational.
The hardware tokens that are shown are Generation 1 tokens. At the time of writing, these tokens are the only hardware tokens that are available. Generation 2 hardware tokens are briefly described in Appendix C, “Generation 2 tokens and licenses” on page 379, and they might be available in the future.
An ISV zPDT hardware token can be installed in a USB port of the PC that is running
ISV zPDT or it can be installed in a “remote license server” that has a network connection to the PC that is running ISV zPDT. The remote hardware token server also has the
ISV zPDT product that is installed to provide the token interface and the network interface for ISV zPDT operation. A remote license server can provide more security for the hardware token itself and ISV zPDT licenses for multiple operational ISV zPDT systems if sufficient licenses are available on the token.
Generation 1 hardware tokens require Linux “drivers” (furnished by the token vendor) that are 32-bit Linux functions. It might be necessary to take special steps with a Linux installation to find the 32-bit Linux libraries that are needed. These “drivers” are packaged with ISV zPDT and only these versions can be used.
The current ISV zPDT tokens contain one, two, or three licenses, and they have numbers such as 1090-L01, 1090-L02, and 1090-L03. The number of licenses corresponds to the number of IBM zSystems CPs that can be concurrently emulated. The licenses are typically good for 1 year, and they can be renewed through your ISV zPDT vendor. (A small example of displaying token information appears at the end of this section.)
Multiple ISV zPDT hardware tokens can be installed on the PC to obtain more licenses. For example, using two 1090-L03 tokens provides up to six CPs. The maximum number of CPs that is supported for a single instance of ISV zPDT operation is eight (including zIIP, zAAP, and IFL processors, but not counting CFs).
Starting with ISV zPDT GA8, zIIPs do not “count” when considering the number of licenses in your tokens, although they do count toward the maximum number of CPs. For more information, see 13.1, “Minor ISV zPDT notes” on page 274.
When “token” is mentioned in this publication, we are referring to 1090 tokens (for
ISV zPDT). The token can be installed on the PC running ISV zPDT, or this PC can be connected to a remote ISV zPDT license server.
A 1090 token should always have an attached tag, as shown in Figure 2-1 on page 33. This tag contains serial numbers that are needed to renew the ISV zPDT licenses in the token. (IBM ZD&T tokens have a serial number that is engraved on the back of the token.)
A Generation 2 “software” license is available for IBM ZD&T usage, which is briefly described in Appendix C, “Generation 2 tokens and licenses” on page 379 and Appendix D, “IBM Z Development and Test Environment notes” on page 387. This license is more generally described in IBM ZD&T documentation.
If the token is removed while ISV zPDT is operational or if the connection to a remote license server is lost, the operation pauses with a series of messages. If the intervening time interval does not disrupt the operating system or application programs, ISV zPDT operation can be resumed by connecting the license again.
An ISV zPDT USB token is normally valid for 1 year after it is initialized or activated. It can be reinitialized12 at any time, which normally extends the validity for 1 year beyond the date of the most recent reinitialization.13 The procedure for initializing the key (or reinitializing it) depends on the channel that you used to obtain your ISV zPDT system, such as an
IBM Business Partner or other supplier. For more information about token updating, see 8.11, “Token activation and renewal” on page 196.
Various “SMP effects” reduce the effectiveness of extra CPs. For example, going from seven to eight CPs might offer minimal performance enhancements for many workloads. The I/O capability of the underlying PC must also be considered. However, this performance determination is left to you.
2.3 PC selection overview for ISV zPDT
Various PCs can be chosen for ISV zPDT operation if you consider some basic parameters.
ISV zPDT is generally not sensitive to the brand or model of the underlying PC. We see systems ranging from rather old laptops (two cores with 8 GB of memory) to large servers
(24 cores or more with up to 256 GB or more memory). Considerations for selecting a PC include the following items:
Is absolute peak performance required? The fastest machine that we have seen (at the time of writing) in terms of raw emulated CPU performance is a large server. In second place is a high-end laptop. However, the performance differences among recent high-end PCs (assuming they are reasonably configured for the workload involved) are not great.
How important is reliability? Elements include RAID, dual power supplies, memory recovery technology, inclusion of support processor functions, battery backup, and others.
Do you need a graphic desktop? Laptops are good at GUIs, but servers are often rack-mounted in an “operations” area without a good display, keyboard, or mouse. A remote CLI connection to your zPDT system (probably by using SSH) can be used for zPDT administration.
How much memory do you need? This topic can be complex. In this publication, we recommend at least 1 GB more than the emulated IBM zSystems size, but that is an absolute minimum. The Linux disk cache is an important performance element that depends on sufficient memory. Likewise, the ability to avoid Linux swapping is important for performance. As a starting point, use at least 8 GB more than the zPDT memory that is defined in the devmap. Typical ISV zPDT systems might have a PC memory size at least twice as large as the defined ISV zPDT IBM zSystems memory.
How many PC cores do you need? ISV zPDT requires a number of cores that is at least equal to the number of ISV zPDT CPs, including zIIPs, zAAPs, and IFLs. Again, this number is the minimum, and two or three more cores can improve performance. If significant additional workloads are present in Linux, then the number of cores that is needed is likely much larger.
Do you plan a virtual or container environment? These items always have some performance implications, but can have a major impact if the server is over-committed or if memory usage is not planned.
Does your z/OS system (including expected applications and subsystems) need a large amount of memory? A z/OS system with a number of TSO users editing and submitting compilations might perform better (that is, not paging) with 8 GB of memory, but a large
IBM Db2 workload that uses several buffers might perform better with 100+ GB of memory. We observed a tendency to overestimate the z/OS memory that is needed (in the devmap), and underestimate the importance of memory for Linux. Available PC memory is not dedicated to either ISV zPDT or the disk cache, but is managed by Linux.
Will your terminal users be on a local network or the internet? Local networks (not directly connected to the internet) typically have IP addresses of 192.168.x.x or 10.x.x.x and allow local management of specific IP addresses. Local networks often allow general routing that effectively interconnects all the PCs on the network. Conversely, connections to the internet typically require a specially assigned IP address that is known to be on a specific route throughout the internet. Such IP addresses tend not to be portable. (DHCP addresses for a PC running ISV zPDT are alternative options for internet connections, but tend to be impractical for external users wanting to connect to the ISV zPDT system.)
Base PC hardware notes
IBM uses various PCs for core ISV zPDT development and testing. It is not possible to list all the hardware that is used, but practical notes on the hardware include the following items:
In all test cases, a minimum of 16 GB of PC memory was available. Systems with up to 256 GB were used.
Hardware RAID adapters are used for most systems above the laptop level, especially when the systems are rack-mounted in areas where there is not much visual inspection. Furthermore, routine verification of the “normal” RAID operation is suggested.
Even for rack-mounted servers, a GUI display and keyboard (often mounted on a movable cart) are available when needed.
In cases where many PCs are involved, the ISV zPDT tokens are installed in remote
ISV zPDT license servers. (Remote often means in the same floor area and not distantly remote.) These cases typically require an additional PC for the license server and a reliable local network.
A suitable USB port must be available for the hardware tokens. This item applies whether tokens are used on the base ISV zPDT machine or on a remote license server. Do not use an unpowered USB port expander for ISV zPDT tokens.
Multiple LAN interfaces might be useful in larger configurations, although this situation is rare. Do not use multiple LAN interfaces unless you know that they are needed for a specific reason.
Disable hyperthreading (if available) at the BIOS level. Hyperthreading can produce slowdowns when z/OS is running due to spinloops. If many PC cores are available, the slowdowns might be resolved before z/OS console messages are produced, which means that there is no indication of a problem other than reduced performance. (A quick way to check the hyperthreading state from Linux is shown after this list.) For more information, see 13.1, “Minor ISV zPDT notes” on page 274.
The Linux distribution must operate correctly on the base PC. New adapters, various power management options, new USB chips, new display parameters, new disk technology, and other technology-related items might not work correctly with all Linux distributions or might require extra Linux device drivers or Linux updates.
Some SCSI tape drives can be used with ISV zPDT, but not all SCSI tape drives are usable by ISV zPDT. The usability depends on the exact model, the exact firmware level, the exact SCSI adapter that is used, and the firmware options that are set in the drive. IBM cannot predict whether your SCSI drive works with ISV zPDT. If this item is important to you, discuss your requirements with your ISV zPDT provider. For more information, see Chapter 14, “Tape drives and tapes” on page 303.
Although PC hardware incompatibility has not been extensively explored by IBM, it is rare. The only two cases that are known at the time of writing are related to old PCs (preventing any ISV zPDT operation), and a problem with crypto adapter emulation on an older machine (since resolved with an ISV zPDT update).
Although some ISV zPDT testing is done with virtual or container environments, this testing does not attempt to explore all the possible alternatives in these areas. You must have the skills to explore and solve unexpected problems in these environments.
2.3.1 zPDT and PC memory
The complete ISV zPDT memory environment exists in Linux virtual memory. Linux is aggressive in allocating real PC memory frames to virtual memory pages and disk file data by using its own judgment about what is the best usage of real memory. The situation is variable when Linux caching of disk I/O is considered, and disk caching is an important element of Linux performance.
ISV zPDT exists in Linux virtual memory. We might informally say something like, “With an 8 GB machine, we can allow 1 GB for Linux and 7 GB for ISV zPDT,” but such statements must not be taken literally. ISV zPDT does not physically partition PC memory. If we inspect the machine in this example at a random time, we might find 1.2 GB of memory that is owned by the primary ISV zPDT module, 0.2 GB of memory that is owned by recognizable core Linux functions, 3.8 GB of memory that is used for disk data cache, 0.2 GB of memory that is used by various other processes (such as ISV zPDT device managers), and the rest unassigned. A few seconds later, the usage statistics might be different.
The PC memory size should be substantially larger than the sum of all concurrent ISV zPDT defined IBM zSystems memory. More is usually better. An arrangement might have PC memory at least twice the size of the defined IBM zSystems memory. The primary goals are to avoid Linux paging that stalls ISV zPDT operation and allow Linux to have an effective disk cache. There is no easy way to directly manage either of these goals. They are indirectly managed by providing ample PC memory. This management is often overlooked for virtual or container environments, which results in unexpected poor performance. Some practical experimentation, based on your configurations and your workloads, might be needed.
2.3.2 PC disk space
The disk space for the ISV zPDT executable programs and control files is relatively small.14 The disk space for emulated IBM zSystems volumes is not small, so some planning is needed. The space for emulated disk volumes can be calculated accurately, but the space for emulated tape volumes depends on the amount of data on the emulated tape volumes.
For practical purposes, we consider only 3390 emulated disk volumes. For the standard 3390 models, the required space is shown in Table 2-2.
Table 2-2 Required disk space for 3390 emulated disk volumes

   3390 model   Approximate space that is required   Exact space that is required
   3390-1       0.95 GB                              948,810,752 bytes
   3390-2       1.9 GB                               1,897,620,992 bytes
   3390-3       2.8 GB                               2,846,431,232 bytes
   3390-9       8.5 GB                               8,539,292,672 bytes
One 3390 cylinder occupies 852,480 bytes.15 The per-cylinder space can be used to calculate the disk space that is needed for nonstandard 3390 sizes. (A short worked example appears at the end of this section.)
Disk space is needed for 3390 volumes that contain the operating system, and for whatever local data volumes that you might produce. For example, the z/OS ADCD system, at the time of writing, needs 112 GB - 224 GB, depending on which volumes you decide to include.
Emulated tape sizes reflect the size of the data that is written on the tape with a small, additional space (less than 1%) that is needed for awstape control blocks.16 (Optionally, the awstape device manager can compress these files, which can reduce the amount of space that is used.)
2.3.3 PC model information for IBM
The ISV zPDT formal IBM license statement regarding base systems includes the following text:
“The Program may be used on the following systems that are running versions of Linux as specified in the Program’s read-me file: IBM System x 3500 M1, 3500 M2, 3500 M3, 3500 M4, 3650 M1, 3650 M2, 3650 M3, or 3650 M4; Lenovo Thinkpad W Series; or systems that are otherwise approved by IBM.”
The license agreements might contain reporting requirements that must be understood by the user. These requirements are not covered in this publication, but they can be reviewed with your IBM representative or ISV zPDT supplier. (As a practical matter, most ISV zPDT operation is on later systems than the ones that are listed here.)
2.4 Practical ISV zPDT operational notes
A number of general concepts that are involved in ISV zPDT usage are described in this section. Specific instructions about ISV zPDT installation, operation, and management are provided in later chapters in this publication.
2.4.1 PC software levels
Both PC hardware and base Linux software change frequently. ISV zPDT changes are needed to maintain a reasonable level of compatibility. ISV zPDT is not intended to be compatible with all levels of Linux or with all available PC hardware.
Base Linux
At the time of writing, ISV zPDT is built for operation on the Linux levels that are listed in 2.5.1, “Current release” on page 44. These levels are the “supported” base Linux releases. Earlier Linux distributions should not be used due to potential Linux library differences. Various Linux distributions and levels might require you to make administrative adjustments. For example, at the time of writing, some Linux distributions require additional commands to provide reasonable OSA performance.
 
Important: Over time, ISV zPDT will follow general Linux developments and changes, but constantly following the latest Linux distributions and updates is not a primary ISV zPDT goal.
Do not confuse the following two Linux distributions:
The Linux that you install on your PC to run ISV zPDT, which is your base Linux.
The Linux for IBM zSystems (or Linux for S/390) distribution that you might run under ISV zPDT.
These two distributions are separate topics. With few exceptions, all mentions of “Linux” in this publication refer to the Linux that you install on your PC.
3270 emulator
A suitable 3270 emulator is usually needed, but it is not distributed with ISV zPDT. The most common 3270 emulator for Linux is x3270. Some current Linux distributions might not include the x3270 package, but it can be downloaded from various sites. Other 3270 emulators might be used, but their operation with ISV zPDT must be checked by you. IBM developers have used recent releases of the IBM Personal Communications package (on Microsoft Windows systems).
2.4.2 Linux user IDs
In principle, any Linux user ID can be used to install17 or operate ISV zPDT, with the exception that an ISV zPDT operational Linux user ID cannot be longer than 8 characters (unless the system_name statement is used in the devmap, in which case there is no special limit on the Linux user ID length). All our examples assume user ID ibmsys1 is used, but there is nothing special about this name. The ISV zPDT system uses several default path names that are related to the current Linux user ID.
In principle, a different Linux user ID can be used to create a different ISV zPDT operational environment with different control files. Also, multiple Linux user IDs must be used when running multiple ISV zPDT instances concurrently. We use ibmsys2 and ibmsys3 as examples of these additional user IDs.
Current Linux operating systems automatically create home directories for user IDs in /home/<user ID>. For example, the home directory for user ID ibmsys1 is /home/ibmsys1. It is possible to specify a different home directory for a user ID. Throughout this publication, we often use /home/ibmsys1 to indicate the home directory for the ISV zPDT user ID, even though the specific usage of ibmsys1 is not required.
2.4.3 ISV zPDT operational components
At the highest level, ISV zPDT has or needs the following components:
A base Linux system, which is not provided with ISV zPDT. You must acquire it separately.
A suitable 3270 emulator (which is usually run on the same PC that is hosting ISV zPDT, although this setup is not required). The ISV zPDT package does not provide a 3270 emulator.
A hardware USB token, which is required for ISV zPDT operation.
An ISV zPDT program package file. Within this single file are the following items:
 – Two prerequisite SafeNet driver programs for communicating with the token. These two drivers are provided with ISV zPDT and only these provided versions can be used. Other versions that are available from the web must not be used even if they appear to be a later level. These two programs are installed even if a remote license server is used.
 – A program for communicating with the license servers.
 – The Red Hat Enterprise Linux (RHEL) version of ISV zPDT.
 – The Novell (SUSE Linux Enterprise Server) version of ISV zPDT.
 – The Ubuntu version of ISV zPDT.
 – An installer program that displays a license, installs the prerequisite drivers (if not already present), and then selects and installs the correct ISV zPDT version.
 – Components that provide remote license and identity management functions.
IBM zSystems software, such as z/OS, is not part of ISV zPDT, although it is typically included in the IBM product package that includes ISV zPDT.
With only a few exceptions (usually dealing with installation), ISV zPDT discussions are the same whether the Red Hat, Novell, or Ubuntu distributions are used. Different versions of ISV zPDT (for Red Hat, Novell, and Ubuntu) are provided within the ISV zPDT package due to slightly different library levels or contents in these three environments.
The ISV zPDT installation also creates the /usr/z1090/man and /usr/z1090/uim directories. The uim directory contains several small files that are used to provide a consistent serial number for IBM zSystems compatibility. The man directory contains normal Linux man pages for ISV zPDT.
The first start of ISV zPDT creates several subdirectories (that are placed in the z1090 subdirectory) in the user’s home directory.18 Briefly, these subdirectories are as follows:
cards, lists: Can hold input files for an emulated card reader (cards) and output from an emulated printer (lists).19 If not used, they are empty. Few ISV zPDT customers use these directories.
disks, tapes: Can be used to hold emulated disk or tape volumes, but these subdirectories are typically not used for anything. The emulated volumes are usually placed elsewhere, in other Linux file systems.
logs: Used by ISV zPDT to hold various dumps, logs, and traces. ISV zPDT partly manages the contents of this subdirectory. The contents of this directory are important if it becomes necessary to investigate an ISV zPDT failure.
configs, pipes, srdis: Used for ISV zPDT internal processing. Do not erase or alter the contents of these small subdirectories.
Minor usage of the /tmp file system occurs during ISV zPDT installation, ADCD installation, aws3270 device manager start, and optionally for Server Time Protocol (STP) logs.
2.4.4 ISV zPDT console
An ISV zPDT system is partly administered from Linux CLIs. This operation can be done remotely through telnet or SSH connections. A graphics connection is not needed.
Do not confuse the MVS operator console (for example, running in a 3270 window) with the Linux terminal commands that are used to administer ISV zPDT, which are run from a Linux CLI. We are describing zPDT administrative commands, not z/OS operator commands.
There is no dedicated console program for sending commands to an operational ISV zPDT environment. All ISV zPDT commands are Linux executable files that are run from a Linux shell. The commands require that the ISV zPDT instance is started by the same Linux user ID that issues the subsequent ISV zPDT commands for that instance. For example, if Linux user ID ibmsys1 starts ISV zPDT, then only Linux user ID ibmsys1 can issue an ipl command. The ipl command is a Linux executable file that is supplied with the other executable files that constitute the ISV zPDT package.
ISV zPDT sometimes issues asynchronous messages, which are sent to the Linux CLI that was used to start that ISV zPDT instance. If that CLI is closed, the asynchronous messages are not seen.20 You can issue ISV zPDT commands from any Linux CLI running under the Linux user ID that started the ISV zPDT instance.
2.4.5 ISV zPDT device maps
A devmap is a simple Linux text file. You might have many devmaps, each of which is a separate Linux file. A devmap is specified when ISV zPDT starts, and you can use a different devmap each time an ISV zPDT instance starts. A devmap specifies the IBM zSystems characteristics to use and the device managers (with their parameters) to use for an instance of ISV zPDT operation.
The following devmap illustrates a simple IBM zSystems configuration:
[system]
memory 8000m # emulated IBM zSystems to have 8000 MB memory
3270port 3270 # tn3270e connections specify this Linux port
processors 1 # create one CP
 
[manager]
name awsckd 0008 # define two 3390 units
device 0a80 3390 3990 /z/SARES1
device 0a81 3390 3990 /z/WORK02
 
[manager]
name awstape 0020
device 0580 3480 3480 /z/SAINIT # tape drive with premounted tape volume
device 0581 3480 3480 # tape drive with no premounted volume
 
[manager]
name aws3274 0300 # define two local 3270s
device 0700 3279 3274
device 0701 3279 3274
Device managers (such as awsckd, awstape, and aws3274 in the example) are the ISV zPDT programs that emulate various device types. The number after the device manager name is an arbitrary hexadecimal number (up to 4 digits) that must be different for each name statement.
Device statements in the devmap specify details such as a device number (“address”), device type, the Linux file that is used for volume emulation, and various other parameters. The volume that is mounted at an address can be specified in the devmap or changed by running the awsmount command while ISV zPDT is running. In this example, the emulated tape volume in Linux file /z/SAINIT is already mounted when ISV zPDT is started. We can change the volume (while ISV zPDT is running) by running an awsmount command that specifies a different Linux file. (The files must be in the proper emulated format, of course.) This action corresponds to changing a tape volume on a tape drive or changing a disk “pack” in earlier years.
2.4.6 ISV zPDT directory structure in Linux
A Linux user running ISV zPDT has the following default directory structure in Linux:
Directory path                     Purpose
/home/<userid>/z1090/logs/         Various logs and traces are placed here
/home/<userid>/z1090/configs/      (Internal 1090 functions)
/home/<userid>/z1090/disks/        Default emulated disk volumes
/home/<userid>/z1090/tapes/        Default emulated tape volumes
/home/<userid>/z1090/cards/        Input to the emulated card reader
/home/<userid>/z1090/lists/        Emulated printer output
/home/<userid>/z1090/pipes/        (Internal 1090 functions)
/home/<userid>/z1090/srdis/        (Internal 1090 functions)
 
/usr/z1090/bin                     Executable 1090 code and scripts
/usr/z1090/man                     Minor documentation (man pages)
/usr/z1090/uim                     Identity manager files
Different Linux user IDs would have different default 1090 directories and files, so ISV zPDT operation is sensitive to the Linux user ID that is used. The usage of the default logs, lists, srdis, and configs directories is mandatory for some operations, but the default locations are optional for other files, such as emulated disk and tape volumes and emulated card reader and printer files. Emulated devices have default file names that are based on the assigned device number, but specified file names can be used instead of the defaults. (We always use specified file names in our examples; none of our examples use the default disks and tapes subdirectories, which are typically empty.)
These subdirectories are created in the current home directory (if they do not exist) when the ISV zPDT operation first starts.
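As a small illustration (assuming only the default home-directory layout described in this section), the following Python sketch reports whether the expected z1090 subdirectories exist for the current Linux user:

import os

# Subdirectories that ISV zPDT creates under ~/z1090 on first start,
# as listed above; the check itself is only an illustration.
expected = ["cards", "configs", "disks", "lists", "logs", "pipes", "srdis", "tapes"]
base = os.path.expanduser("~/z1090")

for name in expected:
    path = os.path.join(base, name)
    print(path, "present" if os.path.isdir(path) else "missing")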
Use a separate Linux file system for emulated disk volumes. This arrangement insulates the volumes from Linux reinstallations and prevents unplanned growth in either the emulated volume file system or the base Linux file system (or systems) from affecting the other. For these reasons, most of the examples in this publication assume that all emulated I/O files are placed in the /z directory.21 In our case, when we installed Linux, we created a separate partition (with a large amount of disk space) that is mounted at /z, and we use this directory to hold all the emulated volumes. The cards, tapes, disks, and lists directories in the default directory path are seldom used in typical operation.
There are no default file or directory names for ISV zPDT emulated volumes if you are not using locations such as /home/<userid>/z1090/disks. (Few zPDT customers use these default emulated locations.) In our examples, we tend to use file names that match the volser of an emulated disk volume, but this approach is not required.
2.4.7 PC LAN adapters
Both the base Linux and the IBM zSystems operating system can use (at the same time) a single Ethernet adapter (NIC) in the base PC. More than one adapter can be used, but this approach is usually unnecessary, and it can lead to external routing complications. Wired or wireless adapters can be used, although care must be taken not to disrupt a wireless connection when it is used for 3270 sessions or CTC connections.
For more information about LAN usage, see Chapter 7, “Local area networks” on page 149.
LAN setup is often the most complex part of ISV zPDT installation, especially when you are configuring both the base Linux LAN connections and the IBM zSystems operating system TCP/IP connections. Some degree of LAN and TCP/IP understanding is required for any ISV zPDT configuration beyond the most basic “local” 3270 operation.
2.4.8 ISV zPDT control structure
The general structure of the ISV zPDT control files is shown in Figure 2-2. This structure involves the ISV zPDT awsstart command, which has a parameter pointing to a devmap, and the devmap contains the names of the Linux files that are used for emulated devices (such as IBM 3390 volumes) and the amount of storage that is seen by the emulated IBM zSystems functions.
Figure 2-2 Control files: general structure
A devmap is a simple Linux text file containing specifications for the IBM zSystems machine.
2.4.9 ISV zPDT instances
Logging in to Linux and starting an ISV zPDT operation creates an instance of ISV zPDT usage. This instance might have one or more IBM zSystems CPs that are associated with it, depending on the licenses that are available and the parameters in the devmap. Then, if you log in to Linux with a second Linux user ID and start another ISV zPDT operation, this action creates a second instance. Multiple instances mean that multiple, independent ISV zPDT environments are run in parallel. The total number of CPs across all concurrent instances cannot exceed the number that is allowed by the token.22
A 1090 model L03 token can have up to three IBM zSystems CPs (or mixtures of CPs, zAAPs, and IFLs) plus up to three zIIPs. These CPs can be used for three ISV zPDT instances, each with a single CP, separate IBM zSystems memory,23 and a separate IBM zSystems operating system. Alternatively, a single ISV zPDT instance could be used with one, two, or three CPs, which is more likely for most ISV zPDT users.
The use of multiple CPs is subject to the following restrictions and considerations:
The number of defined CPs (including zIIPs, zAAPs, or IFLs) in one ISV zPDT instance must not be more than the number of processor cores on the base Linux system. If there are fewer cores than requested CPs, ISV zPDT does not start all the requested CPs.
A full ISV zPDT operation can use more cores in the base PC system than there are IBM zSystems CPs defined in any one instance. The additional cores are used for I/O, for preparing IBM zSystems instructions for execution, and for non-ISV zPDT Linux processes.
In basic usage, emulated I/O devices are unique to an ISV zPDT instance. However, advanced ISV zPDT options permit sharing emulated I/O devices among multiple instances. As stated earlier, the number of base processor cores should be at least equal to the maximum number of CPs in any instance; beyond that minimum, there is no fixed association of particular base processor cores with CPs.
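These restrictions amount to simple counting rules. The following Python sketch is a hypothetical illustration only (the core, license, and instance numbers are examples, not recommendations):

# Hypothetical check of planned instances against the rules described above.
pc_cores = 8                  # processor cores in the base PC
licensed_cps = 3              # CPs allowed by the token or licenses
cps_per_instance = [1, 1, 1]  # CPs defined in each planned devmap

assert max(cps_per_instance) <= pc_cores, "one instance defines more CPs than PC cores"
assert max(cps_per_instance) <= 8, "design limit of eight CPs per instance"
assert sum(cps_per_instance) <= licensed_cps, "total CPs exceed the licensed number"
print("Planned instances fit within the core and license limits.")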
Most of this publication is focused on single instance operation. Chapter 10, “Multiple instances and guests” on page 215 provides setup and usage instructions for multiple ISV zPDT instances.
2.5 ISV zPDT releases
There have been many ISV zPDT releases over the years. The following sections summarize significant changes. In the tables that follow, the information about Linux levels is important. Lower-level Linux systems should not be used when running the associated ISV zPDT release. Other Linux distributions with libraries at an equivalent level (or later) might be used, although only the listed distributions were tried by the developers.
2.5.1 Current release
Key elements in GA11 (or in GA10 fix packs) include the following items, which are summarized in Table 2-3 on page 45:
General implementation of IBM zSystems z16 instructions (with some exceptions), which involves extensive additions to the IBM zSystems architecture.
Corrections for IBM z/OS Container Extensions (zCX) startup problems.
The Linux distribution levels that are used to build this release are updated from the ones that are used for the GA10 release.
Fixes for obscure channel command word (CCW) program failures to detect end of file.
Implementation of the DFLTCC instruction, but only in synchronous mode. However, the facility indication for this deflate function is turned off. The indicator can be turned on with the zPDT dflt command. For more information, see 4.2.31, “The dflt command” on page 93.
Removes the need to run ISV zPDT with some access to Linux root authority. However, root authority is still needed to install ISV zPDT or for some administrative commands. (This function was added in fix packs for the previous ISV zPDT release, but is consolidated in release GA11.)
Multiple minor fixes for small problems, including a method of handling “dynamic sensing” when z/OS runs under z/VM.
The MSA9 facility is included. This facility can be important for some of the more advanced cryptographic environments. The general Common Cryptographic Architecture (CCA) level is now 8.0.
The zPDT ztrace command was added.
An acptool command was added for specialized cryptographic control, although it is intended for limited usage.
Various minor fixes that are related to Ubuntu 20.04 were resolved in this release or in fix packs for the previous ISV zPDT release.
OSA emulation changed from the OSA-Express5S level to the OSA-Express6S level.
Minor problems with special usages of the CMPSC instruction were corrected.
Cryptographic emulation is now level CEX8S. This change involved changing to a new format that zPDT uses to store the customer-provided master keys. If customer-encrypted data is carried forward from earlier zPDT releases, the customer must reinitialize the GA11 emulated cryptographic adapter master keys by using the same keys that were used on the previous zPDT system.
Some outdated commands (senderrdata and snapdump) were removed.
An updated version of Coupling Facility Control Code (CFCC) is included.
The zPDT emulation option for IBM zEnterprise Data Compression (zEDC) was removed.
Table 2-3 Version 1 Release 11 (known as GA11)
Date released: August 2022
Initial ISV zPDT driver level: z1090-1-11.57.06
IBM zSystems architecture level: z16
Linux levels used to build and test this release (the “official” levels): RHEL 8.6, SUSE Linux Enterprise Server 15 SP3, and Ubuntu 20.04
Informally tested Linux levels (earlier levels are not recommended): CentOS Stream 8.6 and 9.0, Fedora 35 and 36, and openSUSE 15.3
Tested z/OS levels: 2.4 and 2.5
z/VM levels used during development: 7.2 (not all functions)
z/VSE: No formal testing
Tested Linux for IBM zSystems level: RHEL 8, with other testing in progress
Machines used for testing: A wide range of servers and laptops
Virtual environments used during testing: KVM
2.5.2 Previous ISV zPDT releases
Key characteristics of earlier releases are provided in Table 2-4.
Table 2-4 ISV zPDT releases GA1 - GA10

GA1 (October 2009) - driver 39.11 - z900, ALS3
Base: RHEL 5.2 or openSUSE 10.3
Informal: Fedora or openSUSE 11
z/OS: 1.9 or 1.10
Note: Has a 32-bit version.

GA2 (June 2011) - driver 41.21 - z10, ALS3
Base: RHEL 5.3, openSUSE 10.3, or openSUSE 11.1
Informal: openSUSE 11.1 or Fedora
z/OS: 1.10 or 1.11
Note: pdsUtil, listVtoc, Customized Offering Driver (COD), MSA 3 & 4, and a 32-bit version.

GA3 (March 2012) - driver 43.20 - z196
Base: RHEL 5.4, SUSE Linux Enterprise Server 11, or openSUSE 11.2
Informal: openSUSE 11.3 and 11.4, or Fedora 12
z/OS: 1.11, 1.12, or 1.13; z/VM 5.3, 5.4, or 6.1; or z/VSE 4.2, 4.3, or 5.1
Note: Remote license servers, devmap updates, CFCC17, LAN interface names, token RAS, 8 CP + specialty (32-bit version no longer available), shared direct access storage device, migration utility, and RDzUT.

GA4 and 4.1 (December 2012) - driver 45.18 - z196, EC12
Base: RHEL 6.1 or openSUSE 11.3
Informal: openSUSE 11.3 or 12.2; SUSE Linux Enterprise Server 11 SP2; or Fedora 15 or 17
z/OS: 1.12 or 1.13; or z/VM 6.1 or 6.2
Note: Many new instructions, zBX, virtualization, integrated consoles, z1090term, CEX4C, and CFCC18.

GA5 (February 2014) - driver 47.xx - EC12 GA
Base: RHEL 6.x, openSUSE 11.3, or SUSE Linux Enterprise Server 11 SP2
Informal: Fedora 17 or 19, or openSUSE 11.4 or 12.1
z/OS: 1.12, 1.13, or 2.1; or z/VM 6.2, and some 6.3
Note: CEX4, CP Assist for Cryptographic Functions (CPACF) updates, CFCC19, and SCSI 359x.

GA6 (March 2015) - driver 49.xx - z13
Base: RHEL 7.0 or SUSE Linux Enterprise Server 11 SP3
Informal: openSUSE 13.1, SUSE Linux Enterprise Server 11 SP3, or Fedora 20
z/OS: 1.13 or 2.1; z/VM 6.2 or 6.3; or z/VSE 5.1 or 5.2
Note: Many new instructions, CEX5S, CPACF updates, STP, CFCC20 SL16, r/o direct access storage device, KVM, and zBX.

GA7 (March 2017) - driver 49.xx - z13 GA2
Base: RHEL 6.3 - 7.1, SUSE Linux Enterprise Server 11 SP2, or Ubuntu 16.04
Informal: Fedora 25, Leap 42.1, or SUSE Linux Enterprise Server 11 SP3
z/OS: 1.13, 2.1, or 2.2; z/VM 6.2 - 6.4; or z/VSE 6.1
Note: Ubuntu base, z/TPF, software license server, CFCC21, up to 2048 devices, and zBX dropped.

GA8 (December 2017) - driver 51.xx - IBM z14
Base: RHEL 7.3, SUSE Linux Enterprise Server 12 SP1, Ubuntu 16.04, or Leap 42.1
Informal: Leap 42.1, openSUSE 13.1, or Fedora 25
z/OS: 2.1, 2.2, or 2.3; z/VM 6.2 - 6.4; or z/VSE 6.1
Note: New IBM z14 instructions, “free” zIIPs, CFCC22, revised awsckd (2017, MIDAW), jumbo frames, and I/O counts by using awsstat.

GA9 (March 2019) - driver 53.xx - IBM z14 GA2
Base: RHEL 7.5, SUSE Linux Enterprise Server SP3, or Ubuntu 18.04
Informal: Leap 42.3; Leap 15; Fedora 27 or 28; or CentOS 7.0 or 7.4
z/OS: 2.2, 2.3, or 2.4
Note: Special tape CCWs, LPAR name, console file, CFCC23, zEDC, channel measurements, layer 2 fixed, and 64-bit support for gen2 licenses.

GA10 (March 2020) - driver 55.xx - z15
Base: RHEL 7.7, SUSE Linux Enterprise Server 12 SP3, or Ubuntu 16.04.6 LTS
Informal: RHEL 7.x or 8.0; SUSE Linux Enterprise Server 12 SP3; openSUSE Leap 42.3 or 15.0; or Ubuntu 16.04 LTS or 18.04
z/OS: 2.2, 2.3, or 2.4; or z/VM 6.4 or 7.1
Note: DFLTCC missing, CFCC level 24, Crypto 7S, MSA9 missing, and various bugs fixed.
 

1 Using more general IBM zSystems terminology, ISV zPDT provides processing units (PUs). By default, the PUs are characterized as CPs, but can be characterized as zIIP, zAAP, or IFL processors instead. Throughout this publication, we refer only to CPs, and this reference should be understood to include zIIP, zAAP, and IFL processors when they are used. zAAPs are no longer available at the IBM z14® level, but were available in earlier ISV zPDT releases.
2 This general statement assumes that relevant operating system and other libraries are at compatible levels. It also assumes that relevant licenses allow such use.
3 Wireless can be considered an Ethernet adapter.
4 Initiating SNA operations (in non-QDIO mode) might be possible, but this usage has not been tested and is not supported by IBM at the time of writing.
5 One problem element, which is related to I/O dynamic sensing, was recently bypassed with ISV zPDT.
6 Date and time values in Linux can be handled in many ways, which is seldom a problem in a single ISV zPDT system, but if several ISV zPDT systems are linked together (on separate PCs), time management on the different Linux PCs might need to be done in a unified manner.
7 Intentional planning for read-only emulated volumes is an exception for requiring write access.
8 A notable exception is that cryptographic keys for emulated cryptographic adapters are stored in standard Linux files. These files are secure from the IBM zSystems viewpoint, but not from the Linux viewpoint.
9 The awscmd device manager provides a method for z/OS or IBM z/VM users to send commands to the base Linux. This function is useful to some ISV zPDT customers, but might provide a security exposure for other customers.
10 This term is the common one for SIE operations, although the actual implementation might be much more complex than implied by this statement.
11 JIT is often connected to Java operation, which is not the case here, where JIT is used as a more generalized term. Also, JIT is undergoing changes at the time of writing.
12 This action is also known as a “lease extension.”
13 The extension period might differ depending on the IBM channel that was used to obtain the ISV zPDT system.
14 It is typically less than about 70 MB.
15 Each volume has an extra 512 bytes overhead.
16 The actual overhead is 6 bytes for each block that is written (including a tape mark, which counts as a block).
17 Part of the installation process must be done as root, but the initial login should be with the user ID that will be used to operate ISV zPDT.
18 When ISV zPDT starts, a z1090 subdirectory is created in the home directory of the user (if it does not exist). The subdirectories that are described here are under the z1090 subdirectory.
19 These directories are not required for emulated card reader or printer operation.
20 However, the messages are added to a console log file. For more information about these log files, see 13.5, “zPDT log files” on page 282.
21 The mount point name, /z in our examples, is arbitrary.
22 Or by the total number of licenses from multiple tokens or software licenses, with a design limit of eight for any ISV zPDT instance.
23 The combined IBM zSystems memory is subject to a later discussion about memory.