Fast Path to Linux on System z
This chapter describes the Fast Path to Linux on System z function that allows selected TCP/IP applications that are running under z/VSE to communicate with a TCP/IP stack on Linux on System z without the use of a TCP/IP stack on z/VSE.
Linux Fast Path (LFP) was introduced with z/VSE V4.3 for use in a z/VM guest environment. With z/VSE 5.1, support for the use of LFP in a z/VM IP Assist (IBM VIA®) environment and in a logical partition (LPAR) environment was added to extend the connectivity options for z/VSE clients. Linux Fast Path in an LPAR environment requires IBM zEnterprise technology with the HiperSockets Completion Queue function.
Although LFP provides TCP/IP socket APIs for programs that are running on z/VSE, it is not intended to replace existing TCP/IP stacks, such as TCP/IP for VSE/ESA and IPv6/VSE. You still need vendor applications, such as FTP, Telnet, LPR, or LPD in your IT environment.
This chapter includes the following topics:
 
4.1 Overview
By using LFP, selected TCP/IP applications can communicate with the TCP/IP stack on Linux on System z without the use of a TCP/IP stack on z/VSE. All socket requests are transparently forwarded to a Linux on System z system that is running in the same z/VM, or in an LPAR on the same System z processor. Since z/VSE V5.1, LFP supports IPv6.
LFP can be used in the following environments:
LFP in a z/VM environment
When LFP is used in a z/VM environment, z/VSE and Linux on System z run as z/VM guests in the same z/VM-mode LPAR on IBM z10, z114, z196, zEC12, or zBC12 servers. They use an Inter-User Communication Vehicle (IUCV) connection between z/VSE and Linux.
LFP in a z/VM IP Assist (z/VSE VIA) environment
The z/VSE VIA function uses a pre-configured appliance that is loaded from the system’s Support Element (SE). This removes the need for configuring a Linux guest system. Another z/VM CMS guest is required to configure the z/VSE VIA function.
LFP in an LPAR environment
When LFP is used in an LPAR environment, z/VSE and Linux on System z run in their own LPARs on a zEnterprise server and a HiperSockets connection is used between z/VSE and Linux on System z. In this case, LFP requires the HiperSockets Completion Queue (HSCQ) function that is available with a zEnterprise server (z196, z114, and zEC12).
The LFP on System z provides standard TCP/IP socket APIs for programs that are running on z/VSE. Other than the basic socket API, no other tools are provided.
Performance improvements include the following results:
Less TCP/IP processing effort on z/VSE (sequence numbers and acknowledgments, checksums, resends, and so on).
A more reliable communication method (IUCV) compared to HiperSockets, which is a network device with packet drops, resends, and so on.
Table 4-1 summarizes the availability of LFP.
Table 4-1 Availability of LFP

                             z/VSE 5.1.1   z/VSE 5.1.0   z/VSE 4.3
LFP                          yes           yes           yes
LFP-VIA                      yes           yes           N/A
LFP in LPAR (requires        yes           N/A           N/A
z196 GA2 or later, or
z114 or later)
 
Note: Connectivity Systems International (CSI) applications, such as FTPBATCH, do not work with the LFP stack because they use internal CSI interfaces. However, applications that are provided with the IPv6/VSE product from Barnard Software, Inc. and applications and utilities that are provided by IBM can be used. For more information, see 4.6, “IBM applications that support LFP” on page 144.
For a complete list of hardware and software prerequisites, see z/VSE TCP/IP Support, SC34-2640.
4.2 Concept of LFP instances and LFP daemons
Setting up LFP always means configuring one or more LFP instances on z/VSE, each having one counterpart on Linux or VIA: an LFP daemon (LFPD). LFP instances are identified by their system ID (SYSID) in the range 00 - 99, which allows a maximum of 100 instances per VSE system.
Figure 4-1 shows an overview of two VSE systems with a total of three LFP instances that are connecting to one Linux with three LFP daemons. All systems run under z/VM.
Figure 4-1 LFP instances and their counterparts on Linux on System z
In general, each LFP instance and LFP daemon identify their peers by using a destination system name and a destination application name. Depending on which environment is used (LFP under z/VM, VIA, or LFP in LPAR), these names have different meanings.
An LFP daemon is often dedicated to exactly one z/VSE LFP instance. This is controlled by the optional parameter peer_iucv_vmid, which specifies the peer z/VM ID that can connect to this LFPD. If this parameter is not specified, connections from any LFP instance on any VSE system are accepted. However, only one LFP instance can use an LFPD at a time.
For more information about LFP configuration parameters, see z/VSE TCP/IP Support, SC34-2640.
The following sections describe each of the three LFP environments.
4.3 LFP in a z/VM environment
In a z/VM environment, the LFP uses an IUCV connection between z/VSE and Linux, where both systems run in the same z/VM-mode LPAR on an IBM System z10 or later processor. Selected TCP/IP applications can communicate with the TCP/IP stack on Linux without the use of a TCP/IP stack on z/VSE.
Figure 4-2 shows a scenario in which an IBM DB2 client that is running on z/VSE communicates with a DB2 Server for Linux by using LFP. No TCP/IP stack is running on z/VSE.
Figure 4-2 LFP in a z/VM environment
LFP directly communicates with the Linux device driver for IUCV, which passes the data to the LFP daemon. The LFPD receives the data, translates it into socket calls, and sends it to the Linux TCP/IP stack, which, in turn, forwards the data to the DB2 Server.
The following sections describe how to set up the VSE and Linux guest systems. The complete setup includes two z/VM users:
VM user that is running a VSE guest
VM user that is running the Linux guest
4.3.1 Linux guest setup
On the Linux side, you must install and configure the LFPD.
Downloading the LFPD
The LFPD is part of VSE Connectors and VSE/AF (5686-CF9-38/06). It is shipped on the z/VSE Extended Base Tape as part of component 5686CF8-38 CONN.C/W. After the VSE Connectors Workstation Code component is installed, it is available as member IJBLFPLX.W in PRD2.PROD. However, you can always download the latest version from the following VSE home page:
Installing the LFPD
When you download member IJBLFPLX.W from z/VSE, you must rename the downloaded file to IJBLFPLX.rpm before you can install it by using the RPM package manager.
For more information about installing and configuring the LFPD, see Chapter 13, “Running z/VSE With a Linux Fast Path” of z/VSE TCP/IP Support, SC34-2640.
Configuring the LFPD
For each LFPD, you must provide a configuration file, which is read during the start of LFPD and cannot be changed while LFPD is running.
According to z/VSE TCP/IP Support, each configuration file must be named lfpd-XXX.conf, where XXX is the IUCV_SRC_APPNAME or HS_SRC_APPNAME that is specified in the configuration file. The XXX characters in the file name must be specified in uppercase. For example, a configuration file with an IUCV_SRC_APPNAME of LINLFP is named lfpd-LINLFP.conf.
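As a quick sanity check, the naming rule can be verified with a small script. The following sketch (the helper name check_lfpd_name is ours, not part of the LFPD package) extracts the IUCV_SRC_APPNAME from a configuration file and compares it with the file name:

```shell
# Sketch: check that an lfpd configuration file follows the
# lfpd-XXX.conf naming rule (XXX = IUCV_SRC_APPNAME in uppercase).
check_lfpd_name() {
    conf=$1
    # Pull the app name out of the file and force it to uppercase.
    app=$(sed -n 's/^ *IUCV_SRC_APPNAME *= *\([A-Za-z0-9]*\).*/\1/p' "$conf" |
          tr '[:lower:]' '[:upper:]')
    expected="lfpd-${app}.conf"
    actual=$(basename "$conf")
    if [ "$actual" = "$expected" ]; then
        echo "OK: $actual matches IUCV_SRC_APPNAME $app"
    else
        echo "MISMATCH: $actual should be $expected"
        return 1
    fi
}
```

For example, `check_lfpd_name /etc/opt/ibm/vselfpd/confs-available/lfpd-LINLFP.conf` reports OK when the file contains IUCV_SRC_APPNAME = LINLFP.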
You must store all available configuration files in the /etc/opt/ibm/vselfpd/confs-available directory. The /etc/opt/ibm/vselfpd/confs-enabled directory contains the enabled configuration files. If a configuration is enabled, it can be easily started with the LFPD control script lfpd-ctl. Additionally, the SysV init script starts all enabled configurations during the start of the Linux system. For each available configuration that is to be enabled, you must create a symbolic link.
In our tests, we created a configuration with the name LINLFP, so the configuration file is /etc/opt/ibm/vselfpd/confs-available/lfpd-LINLFP.conf. Example 4-1 shows the contents of the configuration file.
Example 4-1 LFPD configuration file for LFP under z/VM
# lfpd configuration file
IUCV_SRC_APPNAME = LINLFP
# ensure that only TESTVSE from TESTVSE can connect
PEER_IUCV_VMID = TESTVSE
PEER_IUCV_APPNAME = TESTVSE
IUCV_MSGLIMIT = 1024
MTU_SIZE = 8192
MAX_SOCKETS = 1024
INITIAL_IO_BUFS = 128
WINDOW_SIZE = 65535
WINDOW_THRESHOLD = 25
VSE_CODEPAGE = EBCDIC-US
VSE_HOSTID = 10.0.0.1
RESTRICT_TO_HOSTID = yes
LOG_INFO_MSG = no
To enable the configuration, create a symbolic link to the configuration file in the enabled configurations directory by using the following command:
ln -s /etc/opt/ibm/vselfpd/confs-available/lfpd-LINLFP.conf
/etc/opt/ibm/vselfpd/confs-enabled
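When several configurations must be enabled at once, the symbolic links can be created in a loop. This sketch (the function name enable_all_lfpd_confs is ours; the directory defaults are the ones described above) mirrors the manual ln -s step:

```shell
# Sketch: enable every available LFPD configuration by creating the
# symbolic links that lfpd-ctl and the SysV init script evaluate.
enable_all_lfpd_confs() {
    avail=${1:-/etc/opt/ibm/vselfpd/confs-available}
    enabled=${2:-/etc/opt/ibm/vselfpd/confs-enabled}
    for f in "$avail"/lfpd-*.conf; do
        [ -e "$f" ] || continue          # no configurations present
        ln -sf "$f" "$enabled/$(basename "$f")"
    done
}
```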
Starting the LFPD
Example 4-2 shows how to start the LFPD on Linux.
Example 4-2 Starting the LFPD on Linux
linlfp:~ # lfpd-ctl start linlfp
Starting lfpd (LINLFP): success
The next section describes how to set up the corresponding VSE guest.
4.3.2 VSE guest setup
In our test setup, the configuration in Example 4-3 was used for the VSE guest system.
Example 4-3 VSE configuration for LFP
User = TESTVSE
IP address: none
The VSE guest system was named TESTVSE. There is no IP stack active and there is no assigned IP address. To enable the VSE system for LFP, the VM directory entry must be updated. Then, an LFP instance must be configured and started on VSE.
VM directory entry
The statements that are shown in Example 4-4 must be included into the VM directory entry to enable IUCV communication.
Example 4-4 IUCV definitions in VM directory entry for LFP
IUCV ANY PRIORITY MSGLIMIT 1024
IUCV ALLOW
Example 4-5 shows the complete directory entry from our test system.
Example 4-5 Complete z/VM directory entry
USER TESTVSE BBRK 64M 128M GUB
*----------------------------------------------
ACCOUNT 203 DOSSYS
CPU 00 CPUID 100080 NODEDICATE
CPU 01 CPUID 100081 NODEDICATE
CPU 02 CPUID 100082 NODEDICATE
IPL CMS
IUCV ANY PRIORITY MSGLIMIT 1024
IUCV ALLOW
MACHINE ESA
OPTION MAXCONN 15 QUICKDSP MAINTCCW TODENABLE
DEDICATE 0D00 F54F
DEDICATE 0D01 F550
DEDICATE 0D02 F551
*--------------------
CONSOLE 0009 3215 T
SPECIAL 0060 3270
SPECIAL 0061 3270
SPECIAL 0062 3270
SPECIAL 0063 3270
SPECIAL 0064 3270
*--------------------
SPOOL 000C 2540 READER A
SPOOL 000D 2540 PUNCH A
SPOOL 000E 4248 W
*--OSA---------------
LINK MAINT 0190 0190 RR
LINK MAINT 019E 019E RR
LINK TCPMAINT 0592 0592 RR
After the VM directory entry is set up, LFP must be configured in Linux and z/VSE.
LFP configuration in VSE
ICCF library 59 contains the following skeletons for LFP-related configuration tasks:
SKLFPACT: Activate the Language Environment socket interface for LFP. This is necessary to allow Language Environment programs to use LFP
SKLFPINF: Get information about an LFP instance
SKLFPLST: List active LFP instances
SKLFPSTA: Start an LFP instance
SKLFPSTO: Stop an LFP instance
Starting an LFP instance
The job control language (JCL) that is shown in Example 4-6 is derived from skeleton SKLFPSTA.
Example 4-6 JCL for starting an LFP instance
* $$ JOB JNM=LFPSTART,CLASS=A,DISP=D
// JOB LFPSTART START AN LFP INSTANCE
// EXEC IJBLFPOP,PARM='START DD:SYSIPT LOGALL'
ID = 01 <- SYSID of the LFP stack
MTU = 8192
IUCVMSGLIMIT = 1024
INITIALBUFFERSPACE = 512 K
MAXBUFFERSPACE = 4M
IUCVSRCAPPNAME = TESTVSE <- must match the LFPD config file on Linux
IUCVDESTAPPNAME = LINLFP    <- name of the LFP application on Linux
IUCVDESTVMID = LINLFP    <- Linux VM user
WINDOWSIZE = 65535
WINDOWTHRESHOLD = 25
/*
/&
* $$ EOJ
When this LFP instance is active, it works like another IP stack on your VSE system. We used ID = 01 because SYSID=00 was already used by the CSI stack.
Example 4-7 shows the console output from starting an LFP instance with this SYSID.
Example 4-7 Console output from starting an LFP instance
BG 0000 // JOB LFPSTART START AN LFP INSTANCE
DATE 08/24/2011, CLOCK 13/16/53
BG 0000 LFPB013I STARTED LFP INSTANCE '01'.
BG 0000 EOJ LFPSTART MAX.RETURN CODE=0000
An LFP instance is not a long-running server task. When a TCP/IP application makes a socket call, LFP code is loaded and run in the caller’s partition. When data arrives from outside, LFP code is triggered by the z/VSE interrupt handler and runs in the z/VSE supervisor’s context.
When you are querying the status of the LFP daemon on Linux, it shows the LINLFP application that is connected to the VSE system, as shown in Example 4-8.
Example 4-8 Status of LFP daemon
linlfp:~ # lfpd-ctl list
name enabled running status
----------------------------------
LINLFP yes yes Connected to TESTVSE
The next step enables VSE Language Environment applications to use the LFP stack.
Configuring the Language Environment multiplexer
Enabling Language Environment applications for LFP is done by configuring the TCP/IP multiplexer phase, as shown in Example 4-9. You can use skeleton EDCTCPMC in ICCF library 62.
Example 4-9 Configuring the TCP/IP multiplexer phase
* $$ JOB JNM=EDCTCPMC,CLASS=A,DISP=D,LDEST=*,PDEST=*
// JOB EDCTCPMC - GENERATE TCP/IP MULTIPLEXER CONFIG PHASE
// LIBDEF *,CATALOG=PRD2.CONFIG
// LIBDEF *,SEARCH=(PRD2.SCEEBASE,PRD1.BASE)
// OPTION ERRS,SXREF,SYM,NODECK,CATAL,LISTX
PHASE EDCTCPMC,*,SVA
// EXEC ASMA90,SIZE=(ASMA90,64K),PARM='EXIT(LIBEXIT(EDECKXIT)),SIZE(MAXC
-200K,ABOVE)'
EDCTCPMC CSECT
EDCTCPMC AMODE ANY
EDCTCPMC RMODE ANY
*
EDCTCPME SYSID='00',PHASE='$EDCTCPV' For use with CSI/IBM
EDCTCPME SYSID='01',PHASE='IJBLFPLE' For use with LFP
EDCTCPME SYSID='02',PHASE='BSTTTCP6' For use with IPV6/VSE
* EDCTCPME SYSID='03',PHASE='VENDTCPI' Other vendor interface
*
END
/*
// IF $MRC GT 4 THEN
// GOTO NOLINK
// EXEC LNKEDT,PARM='MSHP'
/. NOLINK
/*
/&
* $$ EOJ
Phase IJBLFPLE provides the socket API for LFP. The SYSID, together with the associated Language Environment sockets interface phase, determines which IP stack an application uses. Although multiple IP stacks might be active at any time, a given application can use only one IP stack, which is specified by the SYSID in its startup JCL.
In the next section, we describe how to replace the Linux side by the z/VM IP Assist (VIA) function. The VSE side remains unchanged.
4.4 z/VM IP Assist
This section describes the setup of LFP by using z/VSE VIA (z/VM IP Assist). VIA simplifies the setup of LFP by providing a pre-configured appliance that is loaded from the system’s Support Element (SE). The VIA image is booted from a file on the SE and is initially loaded into a z/VM user that must be configured for IUCV. The complete setup includes the following VM users:
VM user that is running a VSE guest
VM user that is running the VIA guest
VM user that is used for configuring the VIA user
Figure 4-3 shows an overview of the involved VM users and how they are connected.
Figure 4-3 Overview of z/VM users for LFP
The VIA guest and configuration user must be configured to access two CMS minidisks with static addresses D4C (config disk) and D4D (data disk). The data disk is needed for debugging only, so it is normally not used. Table 4-2 shows how the disks are accessed.
Table 4-2 Access of config and data disks

Disk                  VIA user      Configuration user
Config disk (D4C)     Read Only     Read/Write
Data disk (D4D)       Read/Write    Not needed
In the following sections, we describe how the three VM users are configured.
4.4.1 Configuration user setup
The configuration user is a standard CMS user that must be authorized to link the VIA configuration disk D4C in read/write mode.
 
Note: The first thing to check is that the two disks are formatted as CMS minidisks.
In our test setup, the configuration that is shown in Example 4-10 was used.
Example 4-10 Configuration user setup for LFP
VM user = JSCHMIDB
You must create the following files on the configuration disk:
The SENDERS.ALLOWED file contains a list of authorized configuration users
The LFPDCONF file contains information about the LFPD configuration
Configuring the SENDERS.ALLOWED file
To prevent unauthorized SMSG traffic, each sender is validated against a list of authorized users that is contained in a CMS file SENDERS.ALLOWED, which is on the configuration disk (0D4C). This file contains one z/VM user ID per line. All specified IDs are authorized to send SMSG commands to the z/VSE VIA guest.
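The effect of SENDERS.ALLOWED can be illustrated with a small sketch. The following function is our illustration (not the actual VIA code) of the kind of whole-line, case-insensitive match that a one-user-ID-per-line file implies:

```shell
# Sketch: validate an SMSG sender against the z/VM user IDs that are
# listed (one per line) in a SENDERS.ALLOWED-style file.
is_sender_allowed() {
    sender=$1
    file=$2
    # -i: z/VM user IDs are not case-sensitive; -x: match whole lines only
    grep -qix "$sender" "$file"
}
```

For example, with a file that contains only the line JSCHMIDB, `is_sender_allowed jschmidb senders.allowed` succeeds, while any other sender is refused.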
To set up the CMS file, you must link the configuration disk in read/write mode, as shown in Example 4-11. Then, you use XEDIT to create the file.
Example 4-11 Linking the config disk
link vsevia d4c d4c mr
Ready; T=0.01/0.01 11:32:02
CP Q DASD
...
DASD 0D4C 3390 35456R R/W 10 CYL ON DASD 6078 SUBCHANNEL = 0008
The configuration disk is empty when it is accessed the first time. You can use XEDIT to create the SENDERS.ALLOWED file. Insert one line with the name of your configuration user, as shown in Example 4-12.
Example 4-12 Creating the senders.allowed file
===== * * * Top of File * * *
===== JSCHMIDB
===== * * * End of File * * *
Configuring the LFPDCONF files
You must configure one LFPDCONF file for each LFPD instance. For more information, see Chapter 13, “Running z/VSE with a Linux Fast Path”, of z/VSE V4 R3.0 TCP/IP Support, SC34-2640.
The configuration files have the same content as described in the official documentation of the LFPD config files in the z/VSE TCP/IP Support manual. Each configuration file must have the same file name as the IUCV_SRC_APPNAME that it configures, and it must have the CMS file type LFPDCONF.
In Example 4-13, we create one configuration file that is called TESTVSE.LFPDCONF.
Example 4-13 Creating the lfpdconf file
# lfpd configuration file for TESTVSE
IUCV_SRC_APPNAME = TESTVSE
IUCV_MSGLIMIT = 1024
MTU_SIZE = 8192
MAX_SOCKETS = 1024
INITIAL_IO_BUFS = 128
WINDOW_SIZE = 65535
WINDOW_THRESHOLD = 25
VSE_CODEPAGE = EBCDIC-US
VSE_HOSTID = 9.152.111.11 <- valid static IP address of VSEVIA
RESTRICT_TO_HOSTID = no
At the end of this step, we have these files on the configuration disk, as shown in Example 4-14.
Example 4-14 Overview on configuration files
Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time
TESTVSE LFPDCONF C1 F 80 10 1 8/24/11 13:31:19
SENDERS ALLOWED C1 F 80 1 1 8/24/11 11:36:06
The configuration user is now ready. The next step is configuring the VIA guest.
4.4.2 Setting up the VIA guest
VM setup for the VIA user includes the following tasks:
Enabling the VIA user for IUCV, and
Setting up the network connection to the VSE user
In our test setup, the configuration that is shown in Example 4-15 was used.
Example 4-15 VIA guest setup summary
VM user = VSEVIA
IP address: 9.152.111.11
Network address: 9.152.108.0/22
Subnet mask: 255.255.252.0
Broadcast: 9.152.111.255
Gateway: 9.152.108.1
Name servers: 9.152.120.241; 9.152.64.172
MTU size: 1492
After initially loading the VIA image, there is no need (and no possibility) for any manual action from this VM user. The underlying appliance is configured by the configuration user by creating and modifying the configuration files on disk D4C. Therefore, you can use AUTOONLY to get the VM user loaded automatically at VM startup without any manual intervention.
VM directory entry
Example 4-16 shows the VM directory entry of our VSEVIA guest.
 
Note: The network definitions contain brackets ([ and ]) and braces ({ and }). Therefore, you must use EBCDIC code page 924 in your terminal emulator to enter these characters correctly.
Example 4-16 VM directory entry for VIA user
USER VSEVIA BRITTA 1G 1G G
COMMAND SET D8ONECMD * OFF
COMMAND SET RUN ON
COMMAND TERM LINEND #
* IPL VIA Image from SE
IPL _0___1__
* IUCV definitions
IUCV ANY PRIORITY MSGLIMIT 1024
IUCV ALLOW
* Standard virtual devices
LOADDEV PORT 0
LOADDEV LUN 0
LOADDEV BOOT 0
* Load Image 601 from SE
LOADDEV BR_LBA 601
* Network adapters and configuration
LOADDEV SCPDATA '{"profiles":["zVSE-VIA"],"networkCards":[{"OSA"',
':"0D00","staticIPv4":"9.152.111.11/22"}],"defaultGateway":',
'"9.152.108.1","DNS":["9.152.120.241","9.152.64.172"]',
',"hostName":"vsevia"}'
* Machine type and number of CPUs
MACH XA 1
OPTION LXAPP LANG AMENG MAXCONN 128
DEDICATE 0D00 F552
DEDICATE 0D01 F553
DEDICATE 0D02 F554
* Console
CONSOLE 0009 3215 T
SPOOL 000C 2540 READER *
SPOOL 000D 2540 PUNCH A
SPOOL 000E 1403 A
* OSA
LINK MAINT 0190 0190 RR
LINK MAINT 019D 019D RR
LINK MAINT 019E 019E RR
* Minidisks
MDISK 0D4C 3390 9279 10 35456R RR R W M
MDISK 0D4D 3390 9289 10 35456R MR R W M
Administering the LFP daemon
After reloading the VSEVIA user, the LFPD daemon is automatically started in VSEVIA. You can check this by using SMSG commands from the configuration user.
The LFPD command features the following syntax:
SMSG <guest> APP LFPD (start|stop|restart|force-reload|status|startdbg) <IUCVNAME>
SMSG <guest> APP LFPD (list|startall|stopall)
Now we check whether the configuration is correct and whether the LFPD is started. Normally, the output looks as it does in Example 4-17.
Example 4-17 Displaying the LFP daemon by using SMSG
smsg vsevia app lfpd list
Ready; T=0.01/0.01 14:13:40
14:13:40 * MSG FROM VSEVIA : NAME ENABLED RUNNING STATUS
14:13:40 * MSG FROM VSEVIA : ----------------------------------
14:13:40 * MSG FROM VSEVIA : TESTVSE YES YES LISTENING
Debugging
If an error occurs, you can use the STARTDBG parameter of the LFPD command. In our first test (as shown in Example 4-18), the LFPD did not start and the debug output showed the reason.
Example 4-18 Debugging the LFP daemon
smsg vsevia app lfpd startdbg testvse
Ready; T=0.01/0.01 13:53:34
13:53:38 * MSG FROM VSEVIA : STARTING LFPD (TESTVSE): FAILED
13:53:38 * MSG FROM VSEVIA : STARTUP LOG:
13:53:38 * MSG FROM VSEVIA : LOGGING STARTUP MESSAGES TO FILE '/TMP/LFPD-CTL
.2567.TMP'.
13:53:38 * MSG FROM VSEVIA : READING CONFIGURATION FROM FILE '/ETC/OPT/IBM/V
SELFPD/CONFS-ENABLED/LFPD-TESTVSE.CONF'.
13:53:38 * MSG FROM VSEVIA : CONFIG_READCFGFILE: ERROR: CONFIGURATION PARAME
TER RESTRICT_TO_HOSTID IS ENABLED, BUT VSE_HOSTID6 IS NOT SET.
13:53:38 * MSG FROM VSEVIA : ERROR: CONFIGURATION COULD NOT BE LOADED OR IS
INCOMPLETE, FILE='/ETC/OPT/IBM/VSELFPD/CONFS-ENABLED/LFPD-TESTVSE.CONF'.
13:53:38 * MSG FROM VSEVIA : STOPPING TO LOG THE STARTUP BECAUSE LFPD IS ABO
UT TO EXIT.
Adding RESTRICT_TO_HOSTID = no to the TESTVSE.LFPDCONF file solved the problem. You can use the LFPD-CTL command to restart the LFPD daemon after you make any configuration changes.
Now that the VIA user is configured, we can enable the VSE system for LFP.
4.4.3 Setting up VSE guest
Setting up the VSE guest system in a VIA environment is identical to the configuration that is done with LFP in a VM environment. Ensure that the IUCVDESTVMID is now set correctly to your VIA user, as shown in Example 4-19.
Example 4-19 Specifying the IUCVDESTVMID in the LFP start JCL
* $$ JOB JNM=LFPSTART,CLASS=A,DISP=D
// JOB LFPSTART START AN LFP INSTANCE
// EXEC IJBLFPOP,PARM='START DD:SYSIPT LOGALL'
ID = 01 <- SYSID of the LFP stack
MTU = 8192
IUCVMSGLIMIT = 1024
INITIALBUFFERSPACE = 512 K
MAXBUFFERSPACE = 4M
IUCVSRCAPPNAME = TESTVSE <- must match the LFPD config file on D4C
IUCVDESTAPPNAME = TESTVSE <- name of the LFP application on VSEVIA
IUCVDESTVMID = VSEVIA
WINDOWSIZE = 65535
WINDOWTHRESHOLD = 25
/*
/&
* $$ EOJ
After the LFP instance is started, you should be able to query the VIA status. When you are querying the status of the LFP daemon from the configuration user, Example 4-20 shows that the TESTVSE application is connected to the VSE system.
Example 4-20 Displaying the LFP daemon status using SMSG
smsg vsevia app lfpd list
Ready; T=0.01/0.01 15:20:03
15:20:03 * MSG FROM VSEVIA : NAME ENABLED RUNNING STATUS
15:20:03 * MSG FROM VSEVIA : ----------------------------------
15:20:03 * MSG FROM VSEVIA : TESTVSE YES YES CONNECTED TO TESTVSE
4.4.4 Using the LFP trace with VIA
By using the LFPD-ADMIN command, the communication between VSE and VIA can be traced. The command syntax is shown in Example 4-21.
Example 4-21 LFPD-ADMIN command syntax
SMSG <guest> APP LFPD-ADMIN <IUCVNAME> trace start <FILENAME> (debug|packets|all) (single|wrap) (maxsize)
SMSG <guest> APP LFPD-ADMIN <IUCVNAME> trace stop
SMSG <guest> APP LFPD-ADMIN <IUCVNAME> status
Before the trace is started, ensure that the VIA user linked the data disk (D4D) in read/write mode and that the disk is formatted as a CMS minidisk, as shown in Example 4-22.
Example 4-22 Accessing the data disk for LFP trace
DASD 0D4C 3390 35456R R/O 10 CYL ON DASD 6078 SUBCHANNEL = 000A
DASD 0D4D 3390 35456R R/W 10 CYL ON DASD 6078 SUBCHANNEL = 000B
 
Note: Only one trace can be started at one time.
The trace output is written to a CMS file whose name is specified in the trace start command. The name testvse is used in Example 4-23.
Example 4-23 Starting the LFP trace
SMSG vsevia APP LFPD-ADMIN testvse trace start testvse all single
Ready; T=0.01/0.01 16:23:39
16:23:40 * MSG FROM VSEVIA : TRYING TO CONNECT TO LFPD WITH IUCV APPLICATION
NAME TESTVSE...
16:23:40 * MSG FROM VSEVIA : CONNECTED.
16:23:40 * MSG FROM VSEVIA : ANSWER FROM LFPD:
16:23:40 * MSG FROM VSEVIA : -----------------
16:23:40 * MSG FROM VSEVIA : TRACE STARTED TO OUTPUTFILE /MNT/LFPD-TRACE/TES
TVSE.LFPDTRC
16:23:40 * MSG FROM VSEVIA : -----------------
The trace is stopped by using the trace stop command, as shown in Example 4-24.
Example 4-24 Stopping the LFP trace
SMSG vsevia APP LFPD-ADMIN testvse trace stop
Ready; T=0.01/0.01 16:29:38
16:29:38 * MSG FROM VSEVIA : TRYING TO CONNECT TO LFPD WITH IUCV APPLICATION
NAME TESTVSE...
16:29:38 * MSG FROM VSEVIA : CONNECTED.
16:29:38 * MSG FROM VSEVIA : ANSWER FROM LFPD:
16:29:38 * MSG FROM VSEVIA : -----------------
16:29:38 * MSG FROM VSEVIA : TRACE IS STOPPED.
16:29:38 * MSG FROM VSEVIA : -----------------
When you are reviewing the trace file from the configuration user (as shown in Example 4-25), it still appears to be empty.
Example 4-25 Checking the size of the trace file
Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time
TESTVSE LFPDTRC D1 V - 0 0 8/29/11 15:23:40
After the D4D disk is detached and relinked, the trace contents are available, as shown in Example 4-26.
Example 4-26 Accessing the trace file
Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time
* TESTVSE LFPDTRC D1 V 4096 20 20 8/29/11 15:29:38
The trace data is written in plain ASCII text; therefore, it cannot be read on z/VM, which uses EBCDIC. Download the trace file in binary mode to any ASCII platform, such as a PC. The trace file contains UNIX-style line breaks; therefore, on a Windows based system, you must use an editor that can correctly read and display UNIX-style text files.
On Windows XP, you can use the WordPad editor, as shown in Figure 4-4.
Figure 4-4 Displaying the LFP trace
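If no such editor is at hand, the line ends can also be converted after the download. A minimal sketch (the function name unix2dos_copy is ours) that turns UNIX (LF) line ends into Windows (CRLF) line ends:

```shell
# Sketch: copy a text file while converting LF line ends to CRLF,
# so that simple Windows editors display the lines correctly.
unix2dos_copy() {
    # awk reads each line without its LF and re-emits it with CRLF
    awk '{ printf "%s\r\n", $0 }' "$1" > "$2"
}
```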
4.5 LFP in an LPAR environment
From a functionality perspective, LFP in LPAR is based on HiperSockets Completion Queue support (HSCQ).
Running LFP in an LPAR environment means that Linux on System z typically runs on the less expensive, full-speed Integrated Facility for Linux (IFL) processors in one LPAR, whereas z/VSE runs on standard processors in a different LPAR.
Figure 4-5 shows the overall setup.
Figure 4-5 LFP in an LPAR environment
In this case, LFP starts the HiperSockets device driver to send the data to the Linux image. The HiperSockets Completion Queue function ensures successful data transmission. The Linux HiperSockets device driver forwards the data directly to the LFP daemon.
 
Note: The communication between z/VSE and Linux on System z is established directly by using HiperSockets without the involvement of the OSA device driver.
4.5.1 Prerequisites
The use of LFP in an LPAR environment includes the following prerequisites:
A zEnterprise server at driver level 93 or later because of the HiperSockets Completion Queue function
z/VSE Version 5 Release 1 with the following APARs/PTFs installed:
 – DY47300 / UD53758
 – PM56023 / UK76218
 – PM56056 / UK76252
One of the following Linux on System z Operating Systems:
 – SUSE SLES 11 SP2 or later
 – Red Hat RHEL 7 or later
One z/VSE system and one Linux on System z system that is running in LPAR mode
4.5.2 Hardware setup
To use LFP in an LPAR environment, you must define a HiperSockets network between the two LPARs. Use a separate network for LFP; do not use the same network for other TCP/IP stacks.
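For illustration only, the IOCP definitions for such a dedicated HiperSockets (internal queued direct, IQD) network follow this general pattern. The CHPID, control unit, and device numbers are placeholders, and the partition access list is omitted; see the IOCP User’s Guide for the exact syntax on your machine:

```
* Shared HiperSockets (IQD) channel path for the LFP network
CHPID PATH=(CSS(0),F4),SHARED,TYPE=IQD
* Control unit and a range of devices on that CHPID
CNTLUNIT CUNUMBR=F400,PATH=((CSS(0),F4)),UNIT=IQD
IODEVICE ADDRESS=(0500,16),CUNUMBR=(F400),UNIT=IQD
```

On the z/VSE side, three consecutive devices from this range (for example, 500 - 502) are then defined as device type OSAX and named in the HSDevices parameter.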
4.5.3 VSE setup
Configuring an LFP instance when it is running in LPAR mode is similar to z/VM mode. Table 4-3 shows the LPAR-specific parameters.
Table 4-3 LPAR-specific parameters for z/VSE

HSDevices: Specifies the device addresses of a HiperSockets device. The device must be defined with device type OSAX in the IPL procedure.
HSSrcAppName: Defines a local port. Must be unique in combination with the HSSrcSystemName for the z/VSE system.
HSSrcSystemName: Defines the system name of the LFP instance. The name must be unique across the HiperSockets network that is used.
HSDestAppName: Must match the HS_SRC_APPNAME configuration parameter of an LFPD that is running on Linux.
HSDestSystemName: Must match the HS_SRC_SYSTEMNAME configuration parameter (HSUID) of an LFPD that is running on Linux.
HSKeepAliveTime: A decimal number that represents seconds. The default is 5 seconds. The keep-alive mechanism is used to detect an unexpected termination of the connection between z/VSE and Linux.
HSMsgLimit: Specifies the maximum number of outstanding messages that are not yet received by the LFP instance. Higher values might result in better performance but require more memory. A reasonable value is 32.
Example 4-27 shows a sample setup that is taken from z/VSE TCP/IP Support, SC34-2640.
Example 4-27 Starting an LFP instance for LFP in LPAR
* $$ JOB JNM=LFPSTART,CLASS=0,DISP=L
// JOB LFPSTART
// EXEC IJBLFPOP,PARM='START DD:SYSIPT LOGALL'
* Instance ID
ID = 02
InitialBufferSpace = 1M
MaxBufferSpace = 4M
WindowSize = 65535
WindowThreshold = 25
DeviceType = HS
HSDevices = 500,501,502
HSMsgLimit = 128
HSSrcAppName = TESTV
HSDestAppName = LNXSYS1
HSSrcSystemName = VSELPAR
HSDestSystemName = LNXLPAR
HSKeepAliveTime = 30
/*
/&
* $$ EOJ
4.5.4 Linux setup
Similar to the LFP setup for LPAR on the z/VSE side, there also are LPAR-specific parameters for the LFPD configuration for Linux, as shown in Table 4-4.
Table 4-4 LPAR-specific parameters for Linux

HS_MSGLIMIT: Specifies the maximum number of outstanding messages that are not yet received by LFPD. Higher values might result in better performance but require more memory.
HS_SRC_APPNAME: Defines a local port. Must be unique for the Linux on System z system. This parameter value must match the HSDestAppName that is used for an LFP instance on z/VSE.
HS_SRC_SYSTEMNAME: Defines the HSUID of the HiperSockets device that LFPD uses. Internally, the HSUID is converted to an IPv6 link-local address that is used to start the HiperSockets device. Multiple LFPDs can run on the same HiperSockets device if each one uses a different HS_SRC_APPNAME. This parameter value must match the HSDestSystemName that is used for an LFP instance on z/VSE.
PEER_HS_APPNAME: Defines the HSSrcAppName of an instance on z/VSE. If this parameter is set, LFPD checks the name of the source application of any incoming connection. If the name does not match the name in the configuration, the connection is rejected.
PEER_HS_SYSTEMNAME: Defines the HSSrcSystemName of an instance on z/VSE. If this parameter is set, LFPD checks the name of the source system of any incoming connection. If the name does not match the name in the configuration, the connection is rejected.
Example 4-28 shows a sample setup that is taken from z/VSE TCP/IP Support, SC34-2640.
Example 4-28 LFPD configuration for LPAR
DEVICETYPE = HS
HS_MSGLIMIT = 128
HS_SRC_APPNAME = LNXSYS1
HS_SRC_SYSTEMNAME = LNXLPAR
# ensure that only TESTV from VSELPAR can connect
PEER_HS_APPNAME = TESTV
PEER_HS_SYSTEMNAME = VSELPAR
MAX_SOCKETS = 1024
INITIAL_IO_BUFS = 128
WINDOW_SIZE = 65535
WINDOW_THRESHOLD = 25
VSE_CODEPAGE = EBCDIC-US
VSE_HOSTID = 10.0.0.1
RESTRICT_TO_HOSTID = no
LOG_INFO_MSG = no
Starting, stopping, and operating LFP in an LPAR environment is the same as for LFP in a z/VM environment.
4.6 IBM applications that support LFP
The following IBM applications can be used with LFP:
VSE Connector Server
CICS Web Support
VSE Web Services (SOAP) support (client and server)
CICS Listener
DB2/VSE Server and Client
WebSphere MQ Server and Client
VSAM Redirector
VSE VTAPE
VSE LDAP Support
VSE Script Client
VSE/POWER PNET
All applications that are included in the IPv6/VSE product
DB Call Level Interface (DBCLI)
z/VSE Monitoring Agent
z/VSE SNMP Trap Client
IBM Geographically Dispersed Parallel Sysplex™ (GDPS) Client
In addition to these applications, you can download tools that support LFP. For example, the z/VSE Virtual FTP daemon was developed especially for LFP.
Customer applications should run unchanged if they use one of the supported socket APIs (Language Environment/C, EZA, or ASM SOCKET).
In the next sections, we describe the setup that is necessary for the use of the VSE Connector Server and z/VSE virtual FTP server.
4.6.1 VSE Connector Server
The only change that is needed to enable the VSE Connector Server for LFP is to specify the SYSID that is assigned to the LFP instance, as shown in the following example:
// OPTION SYSPARM='01'
However, the STARTVCS job can contain a statement to check whether the TCP/IP stack is running, as shown in the following example:
* WAITING FOR TCP/IP TO COME UP
// EXEC REXX=IESWAITR,PARM='TCPIP00'
Remove this statement when LFP is used. Starting the Connector Server by using the LFP stack is transparent to any connector client application.
4.6.2 Using the Virtual z/VSE FTP Daemon
The Virtual z/VSE FTP Daemon is a Java application that provides FTP server functionality and uses the VSE Connector Server as the back end; therefore, it is a type of proxy solution. It requires that the VSE Connector Server is running on z/VSE, and it can connect to a z/VSE host only.
 
Note: The Virtual z/VSE FTP daemon can be used with all IP stacks, not only with LFP.
You can download the Virtual z/VSE FTP Daemon from the VSE home page, which is available at this website:
After the Virtual z/VSE FTP Daemon is installed, you must edit its properties file (VirtualVseFtpServer.properties), which is in the server’s installation directory. Here, you specify the IP address of the Linux system or VIA guest, as shown in the following example:
defaultVseHost=9.152.111.11
Before the FTP daemon is started, verify that the VSE Connector Server is running LFP-enabled on z/VSE.
In our VIA environment, we started the FTP daemon on a workstation, as shown in Figure 4-6. Therefore, the FTP client connects to the local host first.
Figure 4-6 Using the z/VSE virtual FTP daemon for LFP
The FTP daemon then forwards the connection to the VIA user, which, in turn, uses IUCV or HiperSockets to get to the VSE system.
Example 4-29 shows some sample output of the FTP connect.
Example 4-29 FTP connect by using the Virtual z/VSE FTP Daemon
D:>ftp 127.0.0.1
Connected to 127.0.0.1.
220 IBM Virtual z/VSE FTP Server on BR8ERNAN (version 1.0) ready to serve.
User (127.0.0.1:(none)): jsch
331 Password required for jsch.
Password:
230 User jsch logged in. Idle timeout is 15 minutes.
ftp> dir
200 PORT command successful.
150 ASCII data connection for / (127.0.0.1,2570).
drwxrwxrwx 1 vse folder 0 Aug 29 14:52 ICCF
drwxrwxrwx 1 vse folder 0 Aug 29 14:52 LIBR
drwxrwxrwx 1 vse folder 0 Aug 29 14:52 POWER
drwxrwxrwx 1 vse folder 0 Aug 29 14:52 VSAM
226 ASCII transfer complete.
ftp: 261 bytes received in 0.00Seconds 261000.00Kbytes/sec.
ftp>
 
Note: When a Linux on System z system is used to run the LFPD instead of the pre-configured VIA image, install the FTP daemon on this Linux system for best performance. In this case, the FTP client connects to the Linux IP address, as shown in Figure 4-7.
Figure 4-7 Running the Virtual z/VSE FTP Daemon server on Linux on System z
On Windows, the virtual FTP daemon can be started as a Windows service. For more information, see the server’s online help.
4.7 Using secure connections with SSL
The LFP stack does not provide an SSL implementation. To use SSL, you must have the CSI stack installed and running. LFP then uses the SSL implementation from CSI, but transfers the data by using the LFP stack.
4.7.1 Using a VIA guest
Figure 4-8 shows the data flow when VIA is used. In this case, the VIA guest serves as a proxy that forwards data between VSE and any remote platform. It is not possible to install any other applications in the VIA guest. In particular, it is not possible to run an FTP daemon there.
Figure 4-8 Using the z/VSE virtual FTP server in a VIA environment
In this example, the virtual FTP daemon runs on the same workstation as the FTP client. It acts as the SSL client and connects to the VSE Connector Server, which acts as the SSL server. The VIA guest, which acts as a proxy that forwards data to the VSE guest, is not visible to the two SSL end points. VIA forwards the encrypted data over the IUCV connection to VSE and vice versa. Therefore, SSL is configured between the workstation and VSE. The VIA guest is not visible.
4.7.2 Using a Linux on System z guest
Figure 4-9 shows the setup with a Linux on System z that is running the LFP daemon. In this case, the FTP daemon can be installed on the same Linux instance.
Figure 4-9 Running the z/VSE virtual FTP server on Linux on System z
In this example, the virtual FTP daemon is an SSL server and client at the same time. In the next section, we describe how to configure the virtual FTP daemon for both sides.
4.7.3 Configuring the z/VSE virtual FTP daemon for SSL
The Virtual z/VSE FTP daemon can be configured by using one of the following methods:
As an SSL client with multiple SSL servers
Use the HostAliases.properties file to specify the SSL properties for one or more SSL servers (for example, VSE Connector Servers).
As SSL server
Use the VirtualVseFtpServer.properties file to configure the SSL server properties.
Figure 4-10 shows the overall scenario.
Figure 4-10 Scenario with Virtual z/VSE FTP daemon
For more information about configuring SSL with the z/VSE Virtual FTP daemon, see the FTP daemon’s online help. Open the index.html file that is in the /doc directory within the server’s installation directory.
For more information about setting up SSL with the VSE Connector Server and creating the necessary keys and certificates, see Security on IBM z/VSE, SG24-7691.
4.8 Known problems
In this section, we describe issues that we experienced and insights that we had during our test setup.
4.8.1 Error accessing the config disk
An error in accessing the config disk was reported.
Symptom
acc d4c c
DMSACP112S C(D4C) device error
Possible reason
The disk is not formatted. Try format d4c c.
4.8.2 SE file transfer had a problem
An error in the SE file transfer was reported.
Symptom
The following messages appear during the VIA boot process:
[100.584937] Wait: DIAG 0x2C4 RETURNED BAD STATUS_CODE=0x11.
[100.584942] VMSE: SE file transfer had a problem !
[100.584946] VMSE: Could not get container size.
Reason
VIA attempted to download a file from the SE that is needed only for debugging. This is not an error. Everything should work as expected.
4.8.3 User ID not authorized for SMSG
A message that the user ID is not authorized for SMSG was reported.
Symptom
smsg vsevia app lfpd list
Ready; T=0.01/0.01 14:38:33
14:38:33 * MSG FROM VSEVIA : YOUR USER ID IS NOT AUTHORIZED TO CALL THIS FUNCTION.
Reason
You did not add your configuration user ID to the SENDERS.ALLOWED file on the configuration disk.
4.8.4 Invalid command response from VIA user
An error concerning an invalid command response from the VIA user was reported.
Symptom
Apparently, the LFPD command is not recognized. Although the command is entered correctly, the following command syntax is displayed:
smsg vsevia app lfpd startdbg testvse
Ready; T=0.01/0.01 14:00:01
14:00:01 * MSG FROM VSEVIA : USAGE: SMSG <GUEST> APP LFPD (START|STOP|RESTART|FORCE-RELOAD|STATUS|STARTDBG) <IUCVNAME>
14:00:01 * MSG FROM VSEVIA : SMSG <GUEST> APP LFPD (LIST|STARTALL|STOPALL)
Possible reason
The VIA user cannot link the configuration disk D4C. In our case, the disk was already linked read/write by the configuration user, but the VIA user also was configured to link it read/write. Because only one user can hold a write link to a minidisk at a time, the link for the VIA user failed.
4.8.5 No response from VIA user
An error concerning receiving no response from the VIA user was reported.
Symptom
There is no response from the VIA user on an SMSG command.
Possible reason
It is likely that there is a typographical error in the name of the called LFP script, so that you are, in fact, calling a non-existent script, as shown in the following example:
SMSG vsevia APP LFP-ADMIN2
Ready; T=0.01/0.01 17:12:19
The LFP script is called by using the Linux uevent mechanism. SMSG sends the string APP LFP-ADMIN2 to the VIA guest. In this example, the uevent mechanism calls a script that is named LFP-ADMIN2, which, in this case, does not exist. Normally, an entry is written to the Linux system log, but this log is not visible to the VSE user.
4.8.6 Profile cannot be loaded
An error in loading the profile was reported.
Symptom
At the end of the VIA boot process, the following messages are displayed:
fpcConfigurationExpander(704): Sorry: cannot receive file 'profiles/LFP.json' from SE as '/tmp/LFP.json' under VM
fpcConfigurationExpander(704): Could not load file profiles/LFP.json from SE. RC=2
Possible reason
You are using the incorrect name for the LFP profile. The correct name is zVSE-LFP.json. Change the VM directory entry and try again, as shown in the following example:
* Network adapters and configuration
LOADDEV SCPDATA '{"profiles":["zVSE-VIA"],"networkCards":[{"OSA"',
':"0D00","staticIPv4":"9.152.111.11"}],"defaultGateway":',
'"9.152.108.1","DNS":["9.152.120.241","9.152.64.172"]',
',"hostName":"vsevia"}'