HiperSockets technology
This chapter presents a high-level overview of the IBM HiperSockets capabilities as they pertain to the IBM z14, z13, z13s, zEC12, and zBC12 platforms.
This chapter includes the following sections:
8.1, “Description”
8.2, “Connectivity”
8.3, “Summary”
8.4, “References”
8.1 Description
The HiperSockets function, also known as internal queued direct input/output (iQDIO) or internal QDIO, is an integrated function of the Licensed Internal Code (LIC) of the z14, z13, z13s, zEC12, and zBC12. It provides an attachment to high-speed logical LANs with minimal system and network overhead.
HiperSockets provides internal virtual LANs that act like IP networks within the IBM Z platform. Therefore, HiperSockets provides the fastest IP network communication between consolidated Linux, IBM z/VM, IBM z/VSE, and IBM z/OS virtual servers on a z14, z13, z13s, zEC12, or zBC12.
The virtual servers form a virtual LAN. Using iQDIO, the communication between virtual servers is through I/O queues that are set up in the system memory of the z14, z13, z13s, zEC12, or zBC12. Traffic between the virtual servers is passed at memory speeds. Section 8.2, “Connectivity” on page 145 describes the number of HiperSockets that are available for each z14, z13, z13s, zEC12, and zBC12.
This LIC function, coupled with supporting operating system device drivers, establishes a higher level of network availability, security, simplicity, performance, and cost effectiveness than is available when connecting single servers or logical partitions together by using an external IP network.
HiperSockets is supported by the following operating systems:
All in-service z/OS releases
All in-service z/VM releases
All in-service z/VSE releases
Linux on z Systems
8.1.1 HiperSockets benefits
Using the HiperSockets function has several benefits:
HiperSockets eliminates the need to use I/O subsystem operations and the need to traverse an external network connection to communicate between logical partitions in the same z14, z13, z13s, zEC12, or zBC12.
HiperSockets offers significant value in server consolidation, connecting many virtual servers in the same z14, z13, z13s, zEC12, or zBC12. It can be used rather than certain coupling link configurations in a Parallel Sysplex cluster. All of the consolidated hardware servers can be eliminated, along with the cost, complexity, and maintenance of the networking components that connect them.
Consolidated servers that must access data on the z14, z13, z13s, zEC12, or zBC12 can do so at memory speeds, bypassing all of the network overhead and delays.
HiperSockets can be customized to accommodate varying traffic sizes. In contrast, LANs such as Ethernet and Token Ring have a maximum frame size that is predefined by their architecture. With HiperSockets, a maximum frame size can be defined according to the traffic characteristics transported for each of the possible HiperSockets virtual LANs.
Because there is no server-to-server traffic outside of the z14, z13, z13s, zEC12, or zBC12, a much higher level of network availability, security, simplicity, performance, and cost-effectiveness is achieved, compared to servers that communicate across an external LAN. For example:
 – Because the HiperSockets feature has no external components, it provides a secure connection. For security purposes, servers can be connected to different HiperSockets. All security features, such as firewall filtering, are available for HiperSockets interfaces, as they are for other IP network interfaces.
 – HiperSockets looks like any other IP network interface. Therefore, it is transparent to applications and supported operating systems.
HiperSockets can also improve IP network communications within a sysplex environment when the DYNAMICXCF facility is used, as shown in the sketch after this list.
HiperSockets integration with the intraensemble data network (IEDN) function extends the reach of the HiperSockets network outside of the CPC to the entire ensemble, where it appears as a single Layer 2 network.
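As an illustration, in z/OS the dynamic XCF function is enabled in the TCP/IP profile with the IPCONFIG statement. A minimal sketch, in which the IP address, subnet mask, and cost metric are placeholder values:

   IPCONFIG DYNAMICXCF 10.1.100.1 255.255.255.0 1

With this statement in effect, the stack dynamically creates XCF interfaces, which use HiperSockets when the partner stacks are in the same CPC.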
8.1.2 Server integration with HiperSockets
Many data center environments run multi-tiered server applications, with various middle-tier servers that surround the z14, z13, z13s, or zEnterprise data and transaction server. Interconnecting multiple servers affects the cost and complexity of many networking connections and components. The performance and availability of the interserver communication depends on the performance and stability of the set of connections. The more servers that are involved, the greater the number of network connections and the greater the complexity of installing, administering, and maintaining them.
Figure 8-1 shows two configurations. The configuration on the left shows a server farm that surrounds an IBM Z platform, with its corporate data and transaction servers. This configuration involves considerable complexity in backing up the servers and network connections, and it results in high administration costs.
Figure 8-1 Server consolidation
Consolidating that mid-tier workload onto multiple Linux virtual servers that run on a z14, z13, z13s, zEC12, or zBC12 requires a reliable, high-speed network for those servers to communicate over. HiperSockets provides that network. In addition, those consolidated servers also have direct high-speed access to database and transaction servers that are running under z/OS on the same z14, z13, z13s, zEC12, or zBC12.
This configuration is shown on the right in Figure 8-1 on page 135. Each consolidated server can communicate with others on the z14, z13, z13s, zEC12, or zBC12 through HiperSockets. In addition, the external network connection for all servers is concentrated over a few high-speed OSA-Express interfaces.
8.1.3 HiperSockets function
HiperSockets implementation is based on the OSA-Express QDIO protocol. Therefore, HiperSockets is also called internal QDIO (iQDIO). The LIC emulates the link control layer of an OSA-Express QDIO interface.
Typically, before a packet can be transported on an external LAN, a LAN frame must be built. The MAC address of the destination host or router on that LAN must then be inserted into the frame. HiperSockets does not use LAN frames, destination hosts, or routers. IP network stacks are addressed by inbound data queue addresses rather than MAC addresses.
The z14, z13, z13s, or zEnterprise LIC maintains a lookup table of IP addresses for each HiperSockets function. This table represents a virtual LAN. At the time that an IP network stack starts a HiperSockets device, the device is registered in the IP address lookup table with its IP address and its input and output data queue pointers. If an IP network device is stopped, the entry for this device is deleted from the IP address lookup table.
HiperSockets copies data synchronously from the output queue of the sending IP network device to the input queue of the receiving IP network device by using the memory bus to copy the data through an I/O instruction.
To the controlling operating system, the I/O processing is identical to OSA-Express in QDIO mode. The data transfer time is similar to a cross-address space memory move, with hardware latency close to zero. For the total elapsed time of a data move, the operating system I/O processing time must be added to the LIC data move time.
HiperSockets operations run on the processor where the I/O request is initiated by the operating system. HiperSockets starts write operations. The completion of a data move is indicated by the sending side to the receiving side with a signal adapter (SIGA) instruction. Optionally, the receiving side can use dispatcher polling rather than handling SIGA interrupts. The I/O processing is performed without using the system assist processor (SAP). This implementation is also called thin interrupt.
The data transfer is handled much like a cross-address space memory move that uses the memory bus, not the IBM Z I/O bus. Therefore, HiperSockets does not contend with other system I/O activity in the system. Figure 8-2 shows the basic operation of HiperSockets.
Figure 8-2 HiperSockets basic operation
The HiperSockets operational flow consists of five steps:
1. Each IP network stack registers its IP addresses in a server-wide common address lookup table. There is one lookup table for each HiperSockets virtual LAN. The scope of each LAN is the logical partitions that are defined to share the HiperSockets IQD channel-path identifier (CHPID).
2. The address of the IP network stack’s receive buffers is appended to the HiperSockets queues.
3. When data is being transferred, the send operation of HiperSockets performs a table lookup for the addresses of the sending and receiving IP network stacks and their associated send and receive buffers.
4. The sending virtual server copies the data from its send buffers into the target virtual server’s receive buffers (z14, z13, z13s, zEC12, or zBC12 system memory).
5. The sending virtual server optionally delivers an interrupt to the target IP network stack. This optional interrupt uses the thin interrupt support function of the z14, z13, z13s, zEC12, or zBC12. This feature means that the receiving virtual server looks ahead, detecting and processing inbound data. This technique reduces the frequency of real I/O or external interrupts.
Hardware assists
A complementary virtualization technology that includes the following features is available for z14, z13, z13s, zEC12, and zBC12:
QDIO Enhanced Buffer-State Management (QEBSM)
Two hardware instructions that are designed to help eliminate the overhead of hypervisor interception.
Host Page-Management Assist (HPMA)
An interface to the z/VM main storage management function designed to allow the hardware to assign, lock, and unlock page frames without z/VM hypervisor assistance.
These hardware assists allow a cooperating guest operating system to start QDIO operations directly to the applicable channel, without interception by the z/VM operating system. This process improves performance. Support is integrated in IBM Z. However, always check the appropriate subsets of the 3906DEVICE, 2964DEVICE, 2965DEVICE, 2828DEVICE, and 2827DEVICE Preventive Service Planning (PSP) buckets before implementation.
8.1.4 Supported functions
This section describes additional functions that are supported by HiperSockets technology.
Broadcast support
Broadcasts are supported across HiperSockets on Internet Protocol version 4 (IPv4) for applications. Applications that use the broadcast function can propagate the broadcast frames to all IP network applications that are using HiperSockets. This support is applicable to Linux, z/OS, and z/VM environments.
VLAN support
Virtual local area networks (VLANs), IEEE standard 802.1q, are supported by Linux on z Systems and z/OS version 1.8 or later for HiperSockets. VLANs can reduce overhead by allowing networks to be organized by traffic patterns rather than physical location. This enhancement permits traffic flow on a VLAN connection both over HiperSockets and between HiperSockets and OSA-Express Ethernet features.
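For example, on Linux on z Systems a VLAN can be stacked on a HiperSockets interface with the standard iproute2 tools. A minimal sketch, assuming the HiperSockets interface is named hsi0 and VLAN ID 100 is used (both are placeholder values):

   # Create VLAN 100 on the HiperSockets interface, address it, and activate it
   ip link add link hsi0 name hsi0.100 type vlan id 100
   ip addr add 192.168.100.1/24 dev hsi0.100
   ip link set hsi0.100 up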
IPv6 support
HiperSockets supports Internet Protocol version 6 (IPv6). IPv6 is the protocol that was designed by the Internet Engineering Task Force (IETF) to replace IPv4 to help satisfy the demand for additional IP addresses.
The support of IPv6 on HiperSockets (CHPID type IQD) is available on z14, z13, z13s, zEC12, and zBC12, and is supported by z/OS and z/VM. IPv6 support is available on the OSA-Express6S, OSA-Express5S, OSA-Express4S, and OSA-Express3 features in the z/OS, z/VM, and Linux on z Systems environments.
Support of guests is expected to be transparent to z/VM if the device is directly connected to the guest (pass through).
HiperSockets Network Concentrator
Traffic between HiperSockets and OSA-Express can be transparently bridged by using the HiperSockets Network Concentrator. This technique does not require intervening network routing overhead, thus increasing performance and simplifying the network configuration. This goal is achieved by configuring a connector Linux system that has HiperSockets and OSA-Express connections that are defined to it.
The HiperSockets Network Concentrator registers itself with HiperSockets as a special network entity to receive data packets that are destined for an IP address on the external LAN through an OSA port. The HiperSockets Network Concentrator also registers IP addresses to the OSA feature on behalf of the IP network stacks by using HiperSockets, thus providing inbound and outbound connectivity.
HiperSockets Network Concentrator support uses the next-hop IP address in the QDIO header, rather than a Media Access Control (MAC) address. Therefore, VLANs in a switched Ethernet fabric are not supported by the HiperSockets Network Concentrator. IP network stacks that use only HiperSockets to communicate, with no external network connection, see no difference, so the HiperSockets support and networking characteristics are unchanged.
To use HiperSockets Network Concentrator unicast and multicast support, you need a supported Linux distribution and the s390-tools package, which is available from IBM developerWorks®.
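On the connector Linux system, the concentrator function itself is started with a helper script that is shipped in s390-tools. A hedged sketch (verify the script name and location against your distribution’s s390-tools package):

   # Start the HiperSockets Network Concentrator after the HiperSockets
   # and OSA interfaces are configured on the connector system
   start_hsnc.sh &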
HiperSockets Layer 2 support
The IBM HiperSockets feature supports two transport modes on the z14, z13, z13s, zEC12, and zBC12:
Layer 2 (link layer)
Layer 3 (network and IP layer)
HiperSockets is protocol-independent and supports the following traffic types:
Internet Protocol (IP) version 4 or version 6
Non-IP (such as AppleTalk, DECnet, IPX, NetBIOS, or SNA)
Each HiperSockets device has its own Layer 2 MAC address and allows the use of applications that depend on a Layer 2 address, such as DHCP servers and firewalls. LAN administrators can configure and maintain the mainframe environment in the same fashion as they do in other environments. This feature eases server consolidation and simplifies network configuration.
The HiperSockets device automatically generates a MAC address to ensure uniqueness within and across logical partitions and servers. MAC addresses can be locally administered, and the use of group MAC addresses for multicast and broadcasts to all other Layer 2 devices on the same HiperSockets network is supported. Datagrams are delivered only between HiperSockets devices that use the same transport mode (for example, Layer 2 with Layer 2 and Layer 3 with Layer 3).
A HiperSockets device can filter inbound datagrams by VLAN identification, the Ethernet destination MAC address, or both. This feature reduces the amount of inbound traffic, which leads to lower processor use by the operating system.
As with Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers that enable high-performance and highly available Link Layer switches between the HiperSockets network and an external Ethernet.
HiperSockets Layer 2 support is available on z14, z13, z13s, zEC12, and zBC12 with Linux on z Systems and by z/VM guest use.
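For example, on Linux on z Systems the transport mode is chosen when the qeth device group is configured. A minimal sketch with the znetconf tool from s390-tools, assuming the HiperSockets device triplet starts at device number 0.0.7000 (a placeholder value):

   # Configure the HiperSockets device group in Layer 2 mode
   znetconf -a 7000 -o layer2=1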
HiperSockets multiple write facility
HiperSockets performance has been increased by allowing streaming of bulk data over a HiperSockets link between logical partitions. The receiving partition can process larger amounts of data per I/O interrupt. The improvement is transparent to the operating system in the receiving partition. Multiple writes with fewer I/O interrupts reduce processor use of both the sending and receiving logical partitions. The facility is supported in z/OS.
zIIP-Assisted HiperSockets for large messages
In z/OS, HiperSockets is enhanced for IBM z Integrated Information Processor (zIIP) use. Specifically, the z/OS Communications Server allows the HiperSockets Multiple Write Facility processing of large outbound messages that originate from z/OS to be performed on zIIP.
z/OS application workloads that are based on XML, HTTP, SOAP, Java, and traditional file transfer can benefit from zIIP enablement by lowering general-purpose processor use.
When the workload is eligible, the HiperSockets device driver layer processing (write command) is redirected to a zIIP, which unblocks the sending application.
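Both the multiple write facility and its zIIP assist are controlled in the z/OS TCP/IP profile through the GLOBALCONFIG statement. A minimal sketch, assuming zIIPs are available to the LPAR:

   GLOBALCONFIG IQDMULTIWRITE
   GLOBALCONFIG ZIIP IQDIOMULTIWRITE

The first statement enables HiperSockets multiple write; the second directs the eligible large-message processing to available zIIPs.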
HiperSockets Network Traffic Analyzer
HiperSockets Network Traffic Analyzer (HS NTA) is a function available in the z14, z13, z13s, and zEnterprise LIC. It can make problem isolation and resolution simpler by allowing Layer 2 and Layer 3 tracing of HiperSockets network traffic.
HiperSockets NTA allows Linux on z Systems to control tracing of the internal virtual LAN. It captures records into host memory and storage (file systems) that can be analyzed by system programmers and network administrators by using Linux on z Systems tools to format, edit, and process the trace records.
A customized HiperSockets NTA rule enables you to authorize an LPAR to trace messages only from LPARs that are eligible to be traced by the NTA on the selected IQD channel.
HS NTA rules can be set up on the Support Element (SE). There are four types of rules for the HS NTA:
Tracing is disabled for all IQD channels in the system (the default rule).
Tracing is disabled for a specific IQD channel.
Tracing is allowed for a specific IQD channel. All LPARs can be set up for NTA, and all LPARs are eligible to be traced by an active Network Traffic Analyzer.
Customized tracing is allowed for a specific IQD channel.
HiperSockets Completion Queue
The HiperSockets Completion Queue function enables HiperSockets to transfer data synchronously, if possible, and asynchronously if necessary. This process combines ultra-low latency with more tolerance for traffic peaks. With the asynchronous support, during high volume situations, data can be temporarily held until the receiver has buffers available in its inbound queue. This feature provides end-to-end performance improvement for LPAR to LPAR communication.
The HiperSockets Completion Queue function is supported on the z14, z13, z13s, zEC12, and zBC12, and requires at a minimum:
z/OS version 1.13
Linux on z Systems distributions:
 – Red Hat Enterprise Linux (RHEL) version 6.2
 – SUSE Linux Enterprise Server version 11, SP2
 – Ubuntu 16.04 LTS
z/VSE version 5.1.1
z/VM V6.3 with PTFs, which provide guest exploitation support
HiperSockets Completion Queue is supported by Linux on z Systems through AF_IUCV socket communication. Fast Path to Linux in a Linux LPAR requires the HiperSockets Completion Queue function of the IBM Z platform.
HiperSockets integration with the IEDN
The zEnterprise systems provide the capability to integrate HiperSockets connectivity to the IEDN. This feature extends the reach of the HiperSockets network outside the CPC to the entire ensemble, where it appears as a single Layer 2 network.
Within each CPC that is a member of an ensemble, a single iQDIO CHPID for HiperSockets can be defined to provide connectivity to the IEDN. The designated IQD CHPID is configured by using a channel parameter in the hardware configuration definition (HCD), which enables the internal Queued Direct I/O extensions (iQDX) function of HiperSockets. The IQDX function is a channel function and is not a new CHPID type. When the IQDX function is configured, the single IQD CHPID is integrated with the IEDN.
The support of HiperSockets integration with the IEDN function is available on z/OS Communications Server 1.13 and z/VM 6.2. z/VM V6.3 does not support the IEDN function.
In a z/OS environment, HiperSockets connectivity to the IEDN is referred to as the z/OS Communications Server IEDN-enabled HiperSockets function.
In an IBM z/OS environment, HiperSockets integration with the IEDN function provides the following benefits:
Combines the existing high-performance attributes of HiperSockets for intra-CPC communications with the secure access control, virtualization, and management functions of the IEDN that are provided by the Unified Resource Manager.
Converges OSA connectivity and HiperSockets connectivity into a single logical network interface.
Eliminates the HiperSockets configuration tasks within z/OS Communications Server and the Unified Resource Manager.
Simplifies z/OS movement by eliminating or minimizing reconfiguration tasks.
Enables sysplex network traffic to use HiperSockets connectivity for VIPAROUTE processing over OSA-Express for z Systems BladeCenter Extension (zBX) (CHPID type OSX) interfaces.
Figure 8-3 shows a sample configuration of HiperSockets integration with the IEDN.
Figure 8-3 HiperSockets IEDN Access IQDX sample configuration
HiperSockets virtual switch bridge support
The z/VM virtual switch is enhanced to transparently bridge a guest virtual machine network connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest virtual machine network connection to also communicate directly with the following:
Other guest virtual machines on the virtual switch
External network hosts, through the virtual switch OSA UPLINK port
 
Note: IBM z/VM 6.2 IP network and Performance Toolkit APARs are required for this support.
A HiperSockets channel alone can provide only intra-CPC communications. The HiperSockets bridge port allows a virtual switch to connect IBM z/VM guests by using real HiperSockets devices. This feature provides the ability to communicate with hosts that are external to the CPC. A single IP address and virtual machine network connection can be used to communicate over the internal and external segments of the LAN. Whether a particular destination address is on the local HiperSockets channel or outside of the CPC is transparent to the bridge-capable port.
Incorporating the HiperSockets channel into the flat Layer 2 broadcast domain through OSD or OSX adapters simplifies networking configuration and maintenance. The virtual switch HiperSockets bridge port eliminates the need to configure a separate next-hop router on the HiperSockets channel to reach destinations outside the channel. This configuration avoids defining the internal route in every hosted server, and it removes the extra hop through a router that would otherwise provide the Layer 3 routing function.
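In z/VM, the bridge port is associated with the virtual switch when the switch is defined. A hedged sketch, in which the switch name and the real device numbers are placeholders (see the z/VM CP Commands and Utilities Reference for the complete syntax):

   DEFINE VSWITCH BRIDGE1 ETHERNET RDEV 2100 BRIDGEPORT RDEV 7000

In this sketch, RDEV 2100 identifies the OSA UPLINK device and BRIDGEPORT RDEV 7000 identifies the bridge-capable HiperSockets device.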
Figure 8-4 shows an example of a bridged HiperSockets configuration.
Figure 8-4 Bridge HiperSockets channels
The virtual switch HiperSockets bridge support expands the use cases and capabilities of the HiperSockets channel to include the following items:
Full-function, industry-standard robust L2 bridging technology.
Single NIC configuration, which simplifies network connectivity and management.
No guest configuration changes are required for use (apparent to guest OS).
Live Guest Relocation (LGR) of guests with real HiperSockets bridge-capable IQD connections within and between bridged CPCs.
Option of including a HiperSockets connected guest within the IEDN network to allow communications with other LPARs and zBXs that are part of an ensemble.
Support for the network management attributes of the IEDN.
No limit on the number of z/VM LPARs that can participate in a bridged HiperSockets LAN.
Ability to create a single broadcast domain across multiple CPCs (Cross CPC bridged HiperSockets channel network).
Highly available network connection to the external network, provided by the z/VM virtual switch by default.
Figure 8-5 shows a sample Cross CPC HiperSockets LAN configuration. This configuration enables the creation of a single broadcast domain across CPCs and HiperSockets channels, while delivering a highly available configuration on both CPCs.
In this configuration, VSwitch B in LPAR B and VSwitch D in LPAR D are the active bridge ports that provide external connectivity between the external bridged IQD channel in CPC OPS1 and the external IQD channel in CPC OPS2. This flat Layer 2 LAN essentially joins, or extends, the HiperSockets LAN between CPCs across the external Ethernet network through the VSwitch B UPLINK port and the VSwitch D UPLINK port.
Figure 8-5 HiperSockets LAN spanning multiple CPCs
IBM z/VSE Fast Path to Linux
Fast Path to Linux allows z/VSE IP network applications to communicate with an IP network stack on Linux without using an IP network stack on z/VSE. Fast Path to Linux in an LPAR requires the HiperSockets Completion Queue function, which is available on the z14, z13, z13s, zEC12, and zBC12 platforms. The Fast Path to Linux function is supported starting with z/VSE 5.1.1 (version 5, release 1.1).
Figure 8-6 shows a sample configuration of z/VSE and the Fast Path to Linux function. z/VSE applications can communicate directly with the Linux IP network stack through HiperSockets without involving the IP network stack of z/VSE.
Figure 8-6 Fast Path to Linux on z Systems in LPAR
 
8.2 Connectivity
HiperSockets has no external components or external network. There is no internal or external cabling. The HiperSockets data path does not go outside of the z14, z13, z13s, or zEnterprise platform. In addition, HiperSockets integration with the IEDN enables CPC to CPC communication in the same ensemble.
HiperSockets is not allocated a CHPID until it is defined. It does not occupy an I/O cage, an I/O drawer, or a PCIe I/O drawer slot. HiperSockets cannot be enabled if all of the available CHPIDs on the z14, z13, z13s, zEC12, or zBC12 have been used. Therefore, HiperSockets must be included in the overall channel I/O planning.
HiperSockets IP network devices are configured similarly to OSA-Express QDIO devices. Each HiperSockets requires the definition of a CHPID like any other I/O interface. The CHPID type for HiperSockets is IQD, and the CHPID number must be in the range of hex 00 to hex FF. No other I/O interface can use a CHPID number that is defined for a HiperSockets, even though HiperSockets does not occupy any physical I/O connection position.
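On Linux on z Systems, for example, a HiperSockets interface uses the same read, write, and data ccwgroup device triplet as an OSA-Express QDIO interface. A minimal sketch with znetconf from s390-tools, assuming the triplet 0.0.7000-0.0.7002 (placeholder device numbers):

   # List network devices that are not yet configured, then add the triplet
   znetconf -u
   znetconf -a 7000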
Real LANs have a maximum frame size limit that is defined by their architecture. The maximum frame size for Ethernet is 1492 bytes. Gigabit Ethernet has a jumbo frame option for a maximum frame size of 9 KB. The maximum frame size for a HiperSockets LAN is assigned when the HiperSockets CHPID is defined. Frame sizes of 16 KB, 24 KB, 40 KB, and 64 KB can be selected. The default maximum frame size is 16 KB. The selection depends on the characteristics of the data that is transported over the HiperSockets LAN, and it is also a trade-off between performance and storage allocation.
The MTU size that is used by the IP network stack for the HiperSockets interface is also determined by the maximum frame size. Table 8-1 lists these values.
Table 8-1 Maximum frame size and MTU size
Maximum frame size      Maximum transmission unit size
16 KB                   8 KB
24 KB                   16 KB
40 KB                   32 KB
64 KB                   56 KB
The maximum frame size is defined in the hardware configuration (IOCP) by using the CHPARM parameter of the CHPID statement.
z/OS allows the operation of multiple IP network stacks within a single image. The read control and write control I/O devices are required only once per image, and are controlled by VTAM. Each IP network stack within the same z/OS image requires one I/O device for data exchange.
Running one IP network stack per logical partition requires three I/O devices for z/OS (the same requirement as for z/VM and Linux on z Systems). Each additional IP network stack in a z/OS Logical Partition requires only one more I/O device for data exchange. The I/O device addresses can be shared between z/OS systems that are running in different logical partitions. Therefore, the number of I/O devices is not a limitation for z/OS.
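For illustration, an IPv4 HiperSockets interface can be defined in the z/OS TCP/IP profile with the IPAQIDIO interface type. A hedged sketch, assuming IQD CHPID F4 and a placeholder IP address:

   INTERFACE HIPERF4 DEFINE IPAQIDIO CHPID F4 IPADDR 10.1.4.1/24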
An IP address is registered with its HiperSockets interface by the IP network stack when the IP network device is started. IP addresses are removed from an IP address lookup table when a HiperSockets device is stopped. Under operating system control, IP addresses can be reassigned to other HiperSockets interfaces on the same HiperSockets LAN. This feature allows flexible backup of IP network stacks.
Reassignment is only possible within the same HiperSockets LAN. A HiperSockets is one network or subnetwork. Reassignment is only possible for the same operating system type. For example, an IP address that is originally assigned to a Linux IP network stack can be reassigned only to another Linux IP network stack. A z/OS dynamic VIPA can be reassigned only to another z/OS IP network stack, and a z/VM IP network VIPA can be reassigned only to another z/VM IP network stack. The LIC forces the reassignment. It is up to the operating system’s IP network stack to control this change.
Enabling HiperSockets requires the CHPID to be defined as type=IQD by using HCD and IOCP. This CHPID is treated like any other CHPID and is counted as one of the available channels within the IBM Z platform.
 
HiperSockets definition change: A new parameter for HiperSockets IOCP definition was introduced with the z13 and z13s. As such, the z14, z13, and z13s IOCP definitions for HiperSockets devices require the keyword VCHID. VCHID specifies the virtual channel identification number that is associated with the channel path (type IQD). The valid range is 7E0 - 7FF.
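For example, an IQD CHPID definition in IOCP might look like the following sketch. The CHPID number, CSS list, and CHPARM value are placeholders: CHPARM selects the maximum frame size from Table 8-1 (CHPARM=40 is assumed here to select 24 KB; verify the encoding in the IOCP User’s Guide for your system), and VCHID supplies the required virtual channel identification.

   CHPID PATH=(CSS(0,1),F4),SHARED,TYPE=IQD,CHPARM=40,VCHID=7E0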
The HiperSockets LIC on z14, z13, z13s, zEC12, or zBC12 supports these features:
Up to 32 independent HiperSockets.
For z/OS, z/VM, Linux, and z/VSE, the maximum number of IP network stacks or HiperSockets communication queues that can concurrently connect on a single IBM Z platform is 4096.
Maximum total of 12288 I/O devices (valid subchannels) across all HiperSockets.
Maximum total of 12288 IP addresses across all HiperSockets. These IP addresses include the HiperSockets interface and virtual IP addresses (VIPA) and dynamic VIPA that are defined for the IP network stack.
With the introduction of the channel subsystem, sharing of HiperSockets is possible with the extension to the multiple image facility (MIF). HiperSockets channels can be configured to multiple channel subsystems (CSSes). They are transparently shared by any or all of the configured logical partitions without regard for the CSS to which the partition is configured.
Figure 8-7 shows spanned HiperSockets that are defined on an IBM Z platform. For more information about spanning, see 2.1.8, “Channel spanning” on page 24.
Figure 8-7 Spanned and non-spanned HiperSockets defined
8.3 Summary
HiperSockets is part of IBM z/Architecture technology and includes QDIO and advanced adapter interrupt handling. The data transfer is handled much like a cross-address space memory move, by using the memory bus. Therefore, HiperSockets does not contend with other I/O activity in the system.
HiperSockets can be defined to separate traffic between specific virtual servers and logical partitions on one IBM Z system. Virtual private networks (VPNs) or network virtual LANs across HiperSockets are supported to further isolate traffic as required. With integrated HiperSockets networking, there are no server-to-server traffic flows outside the IBM Z system. The only way to probe these VLANs is by using the NTA function, and strict controls are required for that procedure.
The z14, z13, z13s, zEC12, and zBC12 support up to 32 HiperSockets. Spanned channel support allows sharing of HiperSockets across multiple CSSes and LPARs.
8.4 References
For more information about the HiperSockets function and configuration, see IBM HiperSockets Implementation Guide, SG24-6816.
For more information about the HiperSockets virtual bridge support for z/VM, see z/VM Connectivity, SC24-6174.
For more information about the HiperSockets integration with IEDN and z/OS Communications Server, see z/OS Communications Server: IP Configuration Guide, SC31-8775.
 