Multiple instances and guests
Independent Software Vendor (ISV) IBM Z Program Development Tool (ISV zPDT) supports both guest operations (under z/VM) and multiple instances of ISV zPDT. Both approaches provide a way to run multiple z/OS or other operating systems concurrently.
In this chapter, we present basic information about the usage of multiple ISV zPDT instances. Practical operation with multiple instances might be complex. You might need to work with your ISV zPDT provider to clarify usage of more complex configurations.
 
Tip: For newer ISV zPDT users, as a best practice, first work with a basic system (not in a virtual, container, or multiple-instance environment) to become familiar with routine operations before venturing into more complex environments.
10.1 Multiple instances or multiple guests
As a best practice, use a single ISV zPDT instance in native mode to become familiar with basic ISV zPDT operation. Avoid running z/OS (or other IBM zSystems operating systems) under z/VM or in multiple ISV zPDT instances until you are comfortable with basic ISV zPDT usage.
Also, you cannot exceed the number of ISV zPDT licenses that are provided by your token (or tokens, or license server). The total number of ISV zPDT licenses applies whether the ISV zPDT CPs are in a single instance or spread over multiple instances.
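If you are not sure how many CP licenses are available, you can query the token (or license server) before planning your instances. The token query command shown in this sketch displays the visible license positions; the exact output format varies by ISV zPDT release:
$ token #Display the license positions that are visible to this machine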
Multiple instances mean running more than one copy of ISV zPDT. Each instance must run under a different Linux user ID. This task can be accomplished by logins through Telnet (or SSH) or by careful usage of su commands from different windows on the Linux desktop. Each instance must have its own device map (devmap).
The usage of TCP/IP interfaces is an essential part of this discussion. For this reason, we combine a discussion of multiple ISV zPDT instances with the usage of guests under z/VM in a single instance. Our examples use OSA-Express Direct (OSD) (QDIO) interfaces for TCP/IP. Using OSA-Express (OSE) interfaces (non-QDIO) might not be possible because you must configure the Open Systems Adapter (OSA) Address Table (OAT) by using the Open Systems Adapter/Support Facility (OSA/SF) utility, and this utility is no longer provided with z/OS.
10.2 Multiple guests in one instance
A typical z/VM configuration is outlined in Figure 10-1. z/VM itself typically “owns” all the 3270 sessions. Guests (z/OS and Conversational Monitor System (CMS)) acquire a 3270 when a user logs on as a guest or uses a DIAL command.
Figure 10-1 Guests in a single ISV zPDT instance
Each guest (under z/VM) can access a LAN interface. The awsosa device manager can handle up to 16 of these “stacks”. An awsosa device manager that uses a tunnel to Linux can be used, as shown in Figure 10-1 on page 216. Each z/VM guest uses different IP addresses on each OSA interface. Alternatively (not shown in Figure 10-1), z/VM can establish an internal VSWITCH for guest use. Using the IP address patterns from our other examples, we might have the following addresses in Figure 10-1:
192.168.1.80 Linux IP address for the Ethernet adapter
192.168.1.80 port 3270 Address for external TN3270e connections to aws3274
10.1.1.1 Linux IP address for the tunnel interface
192.168.1.81 z/VM IP address for Ethernet
10.1.1.2 z/VM IP address for the tunnel interface
192.168.1.82 z/OS #1 address for Ethernet
10.1.1.3 z/OS #1 address for the tunnel interface
192.168.1.83 z/OS #2 address for Ethernet
10.1.1.4 z/OS #2 address for the tunnel interface
127.0.0.1 Localhost connection for local x3270 sessions
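For reference, the z/OS #1 addresses might be defined to the z/OS TCP/IP stack with INTERFACE statements similar to the following sketch. The interface names (ETH1 and TUN1) and port names (PORTA and PORTB) are assumptions for illustration and must match the TRLE definitions in your VTAM configuration:
INTERFACE ETH1 DEFINE IPAQENET PORTNAME PORTA
   IPADDR 192.168.1.82/24 MTU 1492
INTERFACE TUN1 DEFINE IPAQENET PORTNAME PORTB
   IPADDR 10.1.1.3/24 MTU 1492
START ETH1
START TUN1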
10.3 Independent instances
We can have two independent ISV zPDT instances, meaning that emulated I/O devices are not shared between the instances. In common terms, there is no shared direct access storage device (DASD) (or any other shared device).
Figure 10-2 shows an example of independent instances.
Figure 10-2 Independent instances
In this example, we assume that there are three ISV zPDT licenses (and a base Linux machine with at least four cores), and we assigned two CPs to one instance and one CP to the other instance. Different port numbers are needed in the 3270port statements in the devmaps. Emulated device addresses (device numbers) are independent between the instances, and both might use the same addresses.
Each emulated OSA requires its own Ethernet adapter, and two adapters are necessary in this case. Two emulated OSAs cannot share an Ethernet adapter. This example uses LAN Channel Station (LCS) (non-QDIO) mode for both instances, but they both can be QDIO or a mixture of LCS and QDIO.1 The following devmaps create a tunnel interface for only one of the instances to illustrate that different configurations are possible for independent instances.
Simplified devmaps, matching Figure 10-2 on page 217, might be as follows:
(file /home/ibmsys1/aprof1)
[system]
memory 6000m # emulated zSeries to have 6000 MB memory
3270port 3270 # tn3270e connections specify this port.
processors 2 cp cp
 
[manager]
name awsckd 0001 # Define a single 3390 disk.
device 0a80 3390 3990 /z/SARES1
 
[manager]
name aws3274 0003 # Define two local 3270 devices.
device 0700 3279 3274 mstcon
device 0701 3279 3274 tso
 
[manager]
name awsosa 00C0 --path=F0 --pathtype=OSE
device E20 osa osa --unitadd=0
device E21 osa osa --unitadd=1
 
[manager]
name awsosa 00A0 --path=A0 --pathtype=OSE --tunnel_intf=y
device E22 osa osa --unitadd=0
device E23 osa osa --unitadd=1
 
 
(file /home/ibmsys2/profSB)
[system]
memory 4000m # emulated zSeries to have 4000 MB memory
3270port 3271 # tn3270e connections specify this port.
processors 1
 
[manager]
name awsckd 0001 # Define a single 3390 disk.
device 0a80 3390 3990 /z/SA9999
device 0200 3390 3990 /z/VMBASE
 
[manager]
name aws3274 0003 # Define two local 3270 devices.
device 0700 3279 3274 L700
device 0701 3279 3274 L701
 
[manager]
name awsosa 0123 --path=F1 --pathtype=OSE
device E20 osa osa --unitadd=0
device E21 osa osa --unitadd=1
Starting these instances when working from the Linux desktop might go as follows. Log in as root and open a terminal window.
# xhost + #Allow multiple users to start x3270.
# su ibmsys1
$ cd /home/ibmsys1
$ awsstart aprof1 #Working as ibmsys1
(startup messages) #ibmsys1 instance
$ x3270 -port 3270 mstcon@localhost & #Working as ibmsys1
$ x3270 -port 3270 tso@localhost & #Working as ibmsys1
$ ipl a80 parm 0a8200 #Working as ibmsys1. Perform an IPL of z/OS.
(open another terminal window)
# su ibmsys2
$ cd /home/ibmsys2
$ awsstart profSB #Working as ibmsys2
(startup messages) #ibmsys2 instance
$ x3270 -port 3271 localhost & #Working as ibmsys2
$ x3270 -port 3271 localhost & #Working as ibmsys2
$ ipl 200 #Working as ibmsys2. Perform an IPL of z/VM.
Each instance starts with its own devmap. Each devmap must specify a different port address for local 3270 connections. Each instance must specify different emulated disk volumes. Attempting to share an emulated disk volume in this situation (by specifying the same Linux file for the emulated volume) might result in corrupted data on the volume.
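After both instances are started, you might verify that each one is listening on its own 3270 port by using a standard Linux query, as in this sketch:
$ ss -ltn | grep ':327' #Expect listeners on ports 3270 and 3271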
The usage of xhost + presents a security exposure. Tailor this command to suit your security environment.
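A narrower alternative to xhost +, assuming our example user IDs, is to grant X display access only to the specific local users that run the instances:
# xhost +si:localuser:ibmsys1 #Allow local X access for ibmsys1 only
# xhost +si:localuser:ibmsys2 #Allow local X access for ibmsys2 only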
10.4 Instances with shared I/O
It is possible for multiple instances to share certain devices, such as emulated DASD and emulated OSAs. Also, a single pool of 3270 devices can be used and accessed through a common Linux port number, although this option has more complex side effects. The most common usage of a shared configuration is to provide shared DASD among the instances.
 
Tip: We assume that readers are familiar with shared DASD concepts. zPDT allows shared emulated DASD, which is equivalent to sharing DASD hardware among multiple IBM zSystems servers (or among logical partitions (LPARs)). Shared DASD among multiple z/OS systems typically requires more sophisticated software control, usually through global resource serialization (GRS) and JCL (or equivalent) parameters.
The --shared option in a devmap causes zPDT to emulate the RESERVE function. zPDT does not automatically emulate GRS or any other software elements that are used to protect user data on shared DASD. This level of protection might be obtained through a Parallel Sysplex operation or through a GRS ring over channel-to-channel (CTC) connections.
ISV zPDT does not support the virtual MAC (VMAC) function of z/OS. The only virtual MAC that is supported is generated on z/VM with the layer-2 vswitch.
A configuration with shared I/O devices requires a group controller, as shown in Figure 10-3. The group controller is like another ISV zPDT instance, but without an associated CP or defined memory. The group controller must have its own Linux user ID and devmap, and it starts with its own awsstart command. It must be started before other instances are started. As a basic concept, the I/O devices that are defined in the group controller’s devmap are inherited and shared by the other instances.
Figure 10-3 Shared emulated I/O
The Linux user ID that is associated with the group controller must have the correct path information that is set in the .bashrc file, like that for the user IDs that are associated with each instance. All the user IDs that are involved (the group controller and the instances) must be in the same Linux group, which is group ibmsys in our examples.
The Linux user ID (and group ID) for running ISV zPDT is arbitrary. However, there is a special case if you plan to run multiple ISV zPDT instances with a group controller. In this case (with a group controller), the Linux group names must not be the same as the user IDs. For example, if user IDs ibmsys1, ibmsys2, and ibmsys3 are used for ISV zPDT with the controller instance (ibmsys1) and the ISV zPDT operational instances (ibmsys2 and ibmsys3), then there must not be Linux groups that are named ibmsys1, ibmsys2, or ibmsys3.
With recent Linux distributions, creating a user ID (ibmsys1, for example) automatically creates a group with the same name. This action prevents starting an ISV zPDT operation that involves a group controller and one or more ISV zPDT operational instances. The error message is AWSSTA020E Unable to load DEVMAP file, which might not be helpful in this situation.
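One way to avoid the conflicting group names is to assign a common group when the user IDs are created (or to move existing user IDs into a common group and then delete the automatically created groups). The following sketch assumes the user IDs from our example and a common group named ibmsys:
# groupadd ibmsys #Common group for all ISV zPDT user IDs
# useradd -m -g ibmsys ibmsys1 #-g suppresses creation of group ibmsys1
# useradd -m -g ibmsys ibmsys2
# usermod -g ibmsys ibmsys3 #For an existing user ID: change the primary group,
# groupdel ibmsys3 #then remove the conflicting group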
Furthermore, the home directories of the user IDs that are involved (ibmsys1, ibmsys2, and ibmsys3 in our example) should be mutually readable/writable. Recent Linux distributions have default home directory permissions of 700 (rwx------), which prevents the operational instances from starting under the group controller.2
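A coarse way to satisfy this requirement, assuming that all three user IDs are in group ibmsys, is sketched here; see the associated footnote for narrower alternatives that expose only the z1090 subdirectories:
# chgrp ibmsys /home/ibmsys1 /home/ibmsys2 /home/ibmsys3
# chmod 770 /home/ibmsys1 /home/ibmsys2 /home/ibmsys3 #Group read/write/search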
All of the emulated volume files must be readable and writable (possibly through the group ID) by all the user IDs that are involved.3 For our example, assume that we have three user IDs that are defined (ibmgroup, ibmsys1, and ibmsys2) and all of them are in group ibmsys. We can define three devmaps, as follows:
/home/ibmgroup/group1
[system]
members ibmsys1 ibmsys2 # user IDs for the instances
[manager]
name awsckd 8765 --shared
device A80 3390 3990 /z/Z9RES1
device A81 3390 3990 /z/Z9RES2
device A82 3390 3990 /z/Z9SYS1
device A83 3390 3990 /z/Z9RES3
device A84 3390 3990 /z/Z9USS1
device A90 3390 3990 /z/SARES1
 
[manager]
name awsosa 1223 --path=F0 --pathtype=OSD
device 400 osa osa
device 401 osa osa
device 402 osa osa
device 403 osa osa
device 404 osa osa
device 405 osa osa
device 406 osa osa
device 407 osa osa
 
/home/ibmsys1/aprof1
[system]
memory 8000m
3270port 3270
processors 1
group ibmgroup #user ID of the group controller
 
[manager]
name aws3274 4455
device 0700 3279 3274 mstcon
device 0701 3279 3274 tso
/home/ibmsys2/aprofSB
[system]
memory 5000m
3270port 3271
processors 2
group ibmgroup #user ID of the group controller
 
[manager]
name aws3274 5544
device 0700 3279 3274 mstcon
device 0701 3279 3274 tso
Notice the two new devmap statements in this example. Both are in the [system] stanzas:
members name1 name2 is used in the group controller definitions and specifies the Linux user ID that is associated with each instance in the group.
group cntlname is used in each instance and specifies the Linux user ID that is associated with the group controller.
TN3270e sessions are directed to the desired instance by using the appropriate 3270port number:
$ x3270 -port 3270 localhost & #Connects to the ibmsys1 instance.
$ x3270 -port 3271 localhost & #Connects to the ibmsys2 instance.
There is no need to coordinate device numbers or unit addresses among multiple instances that use shared OSA. For example, each instance might use an OSA interface at addresses 400 - 403. Each instance might start unit addresses (as specified in the devmap) at address zero. (Multiple guests under z/VM, in a single instance, must manage the addresses. Do not confuse multiple guests under z/VM with multiple instances.)
Only DASD (count key data (CKD) or Fixed Block Architecture (FBA)), aws3274 devices, and OSAs can be shared. Additional devices, such as tape drives, can be included in the group controller devmap. These additional device definitions are inherited by all instances, but each instance uses the definitions as though they were part of the devmap for that instance. The two instances in the previous example have different 3270port addresses. We decided to not use shared 3270 definitions in this example.4 No DASD is defined for the instances in this example, so the instances share the DASD that is defined for the group controller.
All sharing instances use the same addresses (device numbers) for the shared devices. There is no provision for different addresses (for different instances) for the same shared device.
If an ISV zPDT instance operates under the group controller, then any OSA devices might be shared devices that are managed by the group controller, or each instance can have a private OSA. If the OSA is used in OSE (non-QDIO mode), then the OAT definitions must be customized with the names of the instance members (specified as “MEMBER names”) and the IP addresses for each instance. (As a best practice, use OSD mode.)
Standard operating rules apply. You cannot perform an initial program load (IPL) for the same z/OS system into two instances at the same time.5 In our small example, we have two z/OS systems (the second one is on the SARES1 volume that is provided with the Application Development Controlled Distribution (ADCD) package). In the absence of shared ENQ functions,6 you must manage any active data set sharing. The ISV zPDT system correctly emulates disk RESERVE and RELEASE functions, and they protect the Volume Table of Contents (VTOC), catalog, and some other updates in the normal z/OS manner.
Starting the controller and two instances, working from the Linux desktop, for our example, might go as follows. Log in as root, and open a terminal window:
# xhost + #Allow multiple users to start x3270.
# su ibmgroup #Work as a group controller.
$ cd /home/ibmgroup
$ awsstart group1 #Start as a group controller.
(startup messages)
(open another terminal window)
# su ibmsys1
$ cd /home/ibmsys1
$ awsstart aprof1 #Working as ibmsys1
(startup messages) #ibmsys1 instance
$ x3270 -port 3270 mstcon@localhost & #Working as ibmsys1
$ x3270 -port 3270 tso@localhost & #Working as ibmsys1
$ ipl a80 parm 0a8200 #Working as ibmsys1. Perform an IPL of z/OS.
Open another terminal window:
# su ibmsys2
$ cd /home/ibmsys2
$ awsstart aprofSB #Working as ibmsys2
(startup messages) #ibmsys2 instance
$ x3270 -port 3271 mstcon@localhost & #Working as ibmsys2
$ x3270 -port 3271 tso@localhost & #Working as ibmsys2
$ ipl a90 parm 0a90sa #Working as ibmsys2. Perform an IPL of z/OS.
In this example, we used a different terminal window to start the group controller and each z/OS instance. We can send commands (such as awsstop) to the appropriate application later. Three base Linux user IDs are used: ibmgroup, ibmsys1, and ibmsys2.
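For example, to stop the ibmsys1 instance later, return to (or open) a terminal window under that user ID and issue awsstop, as in this sketch. You would typically stop the operational instances before stopping the group controller:
# su ibmsys1
$ cd /home/ibmsys1
$ awsstop #Stop the ibmsys1 instance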
10.5 Additional shared functions
The previous section outlined the key shared device usage rules as they apply to DASD and OSA devices. The group controller also can include a shared aws3274 function and passive definitions that are inherited by all instances.
Shared aws3274 options
The group controller devmap can include aws3274 definitions, such as in this example:
/home/ibmgroup/group1
[system]
3270port 3270
members ibmsys1 ibmsys2
 
[manager]
name aws3274 1234
device 700 3279 3274 mstcon
device 701 3279 3274
device 702 3279 3274
In this case, the group controller specified a port address for TN3270e connections. Each instance inherits the complete set of 3270 device definitions (700, 701, and 702) but not the 3270port address. Each instance has a 3270 at address 700, 701, and so forth. If a user starts a TN3270e session that is connected to the 3270port number on Linux, the user has several options:
$ x3270 -port 3270 localhost & #Example 1
$ x3270 -port 3270 ibmsys1@localhost & #Example 2
$ x3270 -port 3270 ibmsys2.701@localhost & #Example 3
$ x3270 -port 3270 ibmsys1.mstcon@localhost & #Example 4
In Example 1, the instance is not specified. In this case, the group controller displays a selection menu (on the new 3270 session) and you must indicate which instance you want and, optionally, which terminal in that instance.7 This selection menu is illustrated in Figure 10-4.
*** Welcome to the zPDT selection menu ***
Select the member to connect from the list below
or type in the member or LU name and depress ENTER
 
Selection => __ (0 to disconnect) MEMBER:________ LU:________
1) IBMSYS1
2) IBMSYS2
Figure 10-4 Selection menu with two instances running
In Example 2, the first available 3270 in the ibmsys1 instance is assigned. (The instance name corresponds to the Linux user ID that started the instance.) Examples 3 and 4 specify both an instance name and the 3270 device identifier.
You can specify a different 3270port number and aws3274 device definitions in each instance. In this case, the shared aws3274 conditions do not apply. You can specify a 3270port number and aws3274 devices in the group controller and also specify aws3274 devices in each instance (but without a 3270port number in the instances). In this case, all the 3270 devices (from the controller list that is inherited by all instances, and from the unique list in each instance) can be accessed from the selection menu.
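The combined arrangement that is described here might look like the following simplified sketch. The manager numbers, device addresses, and LU names are illustrative only:
(group controller devmap)
[system]
3270port 3270
members ibmsys1 ibmsys2
[manager]
name aws3274 1234
device 700 3279 3274 mstcon
(instance devmap for ibmsys1, with no 3270port statement)
[system]
memory 8000m
processors 1
group ibmgroup
[manager]
name aws3274 4455
device 710 3279 3274 tso1
With this configuration, all TN3270e sessions connect through port 3270, and the selection menu offers both the inherited devices and the devices that are unique to each instance.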
Yet another option exists for accessing shared aws3274 functions. It involves an InetD service that automatically detects which 1090 instances are running and constructs a selection menu that is based on this information. The InetD setup varies with different Linux distributions.
Inherited devices
The group controller devmap can include definitions for device managers other than awsckd, awsfba, awsosa, and aws3274, as in the following example:
/home/ibmgroup/group1
[system]
members ibmsys1 ibmsys2
 
[manager]
name awstape 4444
device 580 3480 3480
device 581 3480 3480
In this case, each instance (ibmsys1 and ibmsys2) has emulated 3480 tape drives at addresses 580 and 581. There is no connection between these drives in the two instances. It is exactly as though the awstape stanza appeared in the devmaps for each instance. The sole purpose is to remove the necessity for defining these devices for each instance, which is not meaningful for a small device list as shown here, but might be more meaningful for longer lists.
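For example, after an instance starts, a tape image file might be mounted on one of the inherited drives by using the awsmount command from that instance's user ID. This sketch assumes a hypothetical image file named /z/tape01.aws:
$ awsmount 580 -m /z/tape01.aws #Mount the image file on drive 580 of this instance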
 

1 Because each instance has its own OSA, no OAT configuration is needed if the default OAT definitions are used.
2 There are many options for solving this situation, depending on your security requirements. You might, for example, create a unique Linux group for ISV zPDT and allow access to the home directories through that group. The access that is needed is to the z1090 subdirectory in each home directory, and you can arrange suitable permissions to allow only this access.
3 This setup is controlled by normal Linux permission settings for each file. For example, the command chmod g+w /z/* can be used to make all the files in directory /z writable by members of the current group.
4 An example that uses a single 3270port number is given later in the text.
5 This statement ignores situations where the usage of different parmlib members allows an IPL of the same z/OS in multiple LPARs or instances. This task involves separate paging, spooling, and various Virtual Storage Access Method (VSAM) data sets for each LPAR or instance. The z/OS ADCD system that is used for many of our examples is not configured for this type of usage.
6 Sharing ENQ/DEQ functions is typically done by the GRS functions of a sysplex configuration. We do not have a sysplex here, and there are no global ENQ/DEQ controls.
7 If a specific terminal within the instance is not specified (by address or by LU name), then the first available 3270 in that instance is used.