Virtual I/O Server
SAP clients have used the Virtual I/O Server (VIOS) in their IT landscapes for several years. Multiple white papers and technical reference papers are available to document their experiences.
The white paper System landscape at Stadtwerke Düsseldorf on a virtualized IBM POWER5 environment, GM 12-6851-00, describes the system landscape of a large utility provider in Germany. It covers the technical implementation and some utilization data of the VIOS, as shown in Figure 5-1.
Figure 5-1 IT landscape of Stadtwerke Düsseldorf from 2007
A large-scale example of an automotive supplier is described in the technical reference paper Advantages of IBM POWER technology for mission-critical environments, GE 12-2885, which is available at:
5.1 Motivation
Using the Virtual I/O Server, you can define logical partitions (LPARs) independently of physical I/O resources. The VIOS itself runs in a special logical partition. The physical I/O resources, such as SCSI controllers and network adapters, are assigned to the VIOS partition. The VIOS provides virtual SCSI adapters, virtual network access, and virtual Fibre Channel access to the LPARs and maps these virtual resources to the physical resources. The LPARs use only the virtual network, virtual SCSI, or virtual Fibre Channel adapters that the VIOS provides.
Using a VIOS provides the following advantages:
You can use LPARs independently from physical I/O resources. Several LPARs can share the same physical I/O resource; therefore, the number of physical adapters is reduced.
Because all physical resources are assigned to the VIOS, no special hardware drivers are needed inside the client LPARs. The LPARs only need drivers for a virtual SCSI (vSCSI) host adapter and a virtual network adapter. These drivers are delivered together with the operating system.
On Linux LPARs, using a VIOS avoids using proprietary hardware drivers in the Linux operating system.
Using a VIOS is a prerequisite for using the Live Partition Mobility feature. See also Chapter 6, “IBM PowerVM Live Partition Mobility” on page 63.
In the first part of this chapter, we describe the basic usage types of a VIOS and show the schematic setup for using virtual network and virtual SCSI.
In the next part of this chapter, we show the management tasks that are necessary to set up a VIOS and connect a client LPAR to it.
In the last part of this chapter, we describe how to monitor a VIOS.
More detailed information and advanced scenarios are described in the IBM Redpaper publication Advanced POWER Virtualization on IBM System p Virtual I/O Server Deployment Examples, REDP-4224.
5.2 Virtual I/O Server: basic usage types
Next we cover:
Using virtual SCSI
High availability for vSCSI
Using the virtual network
High availability for virtual networks
5.2.1 Using virtual SCSI
The VIOS provides disk resources to LPARs using virtual SCSI (vSCSI) server adapters. Inside the VIOS, the administrator maps a disk resource (backing device) to a vSCSI server adapter. The backing device can be a physical hard disk or a logical volume inside a volume group. A vSCSI target device provides the connection between the vSCSI server adapter and the backing device.
The client LPAR uses a vSCSI client adapter to connect to the disk resource through the VIOS. Figure 5-2 shows a vSCSI setup with three client LPARs connected to the VIOS. Inside the operating system of the client LPAR, the hard disk resource is recognized as a standard SCSI disk device (for example, hdisk0 on AIX, DC01 on IBM i and /dev/sda on Linux).
Figure 5-2 Basic virtual SCSI setup
5.2.2 High availability for vSCSI
The setup in Figure 5-2 does not provide any high-availability features. If the VIOS fails, all client LPARs connected to it are disconnected from the hard disk resources.
To achieve higher availability, you can set up two redundant Virtual I/O Servers. Both are connected to the same hard disk resource (for example, a SAN volume), and each VIOS provides this resource to the client partition through vSCSI. The client partition uses multipath I/O (MPIO) to connect to this resource, so if one VIOS fails, hard disk access is still possible through the second VIOS. Figure 5-3 on page 46 shows the schematic setup for this usage type.
Figure 5-3 Redundant access to hard disk resources using two VIO servers
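As a quick check of this setup, the following minimal sketch shows how the two paths can be verified on an AIX client LPAR. The device names hdisk0, vscsi0, and vscsi1 are examples and depend on your configuration, and the hcheck_interval value shown is a commonly used setting, not a mandatory one:
$ lspath -l hdisk0                           # list all paths to the virtual disk
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
$ chdev -l hdisk0 -a hcheck_interval=60 -P   # enable periodic path health checking (effective after reboot)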
5.2.3 Using the virtual network
The POWER Hypervisor implements a virtual LAN (VLAN) aware Ethernet switch that the client LPARs and the Virtual I/O Server can use to provide network access. Figure 5-4 shows a scenario using the virtual network.
Figure 5-4 Basic virtual network setup
The physical network interfaces are assigned to the VIOS partition. In Figure 5-4 on page 46, ent0 is the physical adapter that is assigned to the VIOS.
Virtual Ethernet adapters are assigned to the VIOS and the client LPAR. All virtual Ethernet adapters that use the same virtual LAN ID are connected through a virtual Ethernet switch that the Hypervisor provides. In Figure 5-4 on page 46, ent1 is the virtual network adapter of the VIOS. In the client LPARs, the virtual network adapters appear as standard network interfaces (for example, ent0 in the AIX LPAR, CMN02 in the IBM i LPAR, and eth0 in the Linux LPAR).
The VIOS uses a Shared Ethernet Adapter (SEA) to connect the virtual Ethernet adapter to the physical adapter. The SEA acts as a layer-2 bridge, and it transfers packets from the virtual Ethernet adapter to the physical adapter. In Figure 5-4 on page 46, ent2 is the shared Ethernet adapter, and en2 is the corresponding network interface.
5.2.4 High availability for virtual networks
The setup in Figure 5-4 on page 46 does not provide any high-availability features. If the VIOS fails, all connected LPARs lose their network connection. To overcome this single point of failure, use a setup with two Virtual I/O Servers. Each VIOS uses a separate VLAN ID to provide network access to the client LPARs. The client LPARs use link aggregation devices (LA) for the network access. If one VIOS fails, network access is still possible through the remaining VIOS. Figure 5-5 shows the schematic setup for this usage type.
Figure 5-5 Redundant network access using two Virtual I/O Servers
5.3 Setting up a VIOS partition
Next we discuss:
Defining the VIOS LPAR
Installing the VIOS
Creating virtual SCSI server adapters
Gathering information about existing virtual adapters
Connecting a client LPAR to a virtual SCSI server adapter
Creating virtual Ethernet adapters
Connecting a client LPAR to the virtual network
TCP/IP address for the VIOS
N_Port ID Virtualization
VIOS backup
5.3.1 Defining the VIOS LPAR
To set up a new VIOS, create a new LPAR with the partition type “VIO Server”, and assign all required physical I/O resources to it. It is recommended that you run the VIOS as an uncapped shared processor partition; defining two virtual processors is sufficient for most cases. The amount of RAM that the VIOS needs does not depend on the number of client partitions or on the amount of expected I/O operations; assigning 1 GB of RAM to the VIOS partition is a good choice.
 
Important: The VIOS partition must have the highest uncapped weight on the system. If there are two VIOS defined, give each of them the same weight.
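If you prefer the HMC command line over the GUI, a VIOS partition can also be created with the mksyscfg command. The following is a minimal, hypothetical sketch: the managed system name and all sizing values are placeholders, and the exact attribute names can vary with the HMC level.
$ mksyscfg -r lpar -m Server-9117-MMA-SN65121DA \
  -i "name=vios1,profile_name=default,lpar_env=vioserver,min_mem=512,desired_mem=1024,max_mem=2048,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=255"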
5.3.2 Installing the VIOS
The VIOS is based on the AIX operating system. The installation is the same as a standard AIX operating system installation, but uses a different installation source:
1. After the installation finishes, log on as user padmin, and define a password for this user.
2. Before you use the VIOS, you must accept the VIOS license using the license -accept command.
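For reference, a minimal sketch of these first steps on the VIOS command line follows. The ioslevel command, shown here as an additional optional check, reports the installed VIOS level:
$ license -accept    # accept the VIOS license before any other configuration
$ ioslevel           # verify the installed VIOS level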
5.3.3 Creating virtual SCSI server adapters
To create a virtual SCSI server adapter:
1. In the partition profile on the HMC, click the Virtual Adapters tab, and select Actions → Create → SCSI Adapter, as shown in Figure 5-6 on page 49 and Figure 5-7 on page 49.
Figure 5-6 Virtual SCSI server adapters in the VIOS LPAR profile
Figure 5-7 Create a Virtual SCSI server adapter
In the VIOS, the virtual SCSI server adapters appear as vhost-devices:
$ lsdev -virtual
name status description
[...]
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
2. After you set up the virtual SCSI server adapter, map it to the physical resource by creating a mapping device with the mkvdev command:
$ mkvdev -vdev hdisk28 -vadapter vhost0 -dev vls1234_sda
In this example, the backing device hdisk28 is mapped to the virtual SCSI server adapter vhost0. The name of the mapping device is vls1234_sda.
5.3.4 Gathering information about existing virtual adapters
Sometimes it is useful to obtain the actual configuration of a virtual SCSI server adapter. To do so, execute the lsmap command, as shown in Example 5-1.
Example 5-1 Obtaining configuration of a virtual SCSI server adapter
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9117.MMA.65121DA-V2-C21                     0x00000004

VTD                   vls1234_sda
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk28
Physloc               U789D.001.DQD51K5-P1-C1-T1-W200300A0B817B55D-L4000000000000
 
To get information about all existing virtual SCSI server adapters, run the lsmap -all command.
5.3.5 Connecting a client LPAR to a virtual SCSI server adapter
To connect a client LPAR to a hard disk resource, you must define a virtual SCSI client adapter in the partition profile of the client LPAR:
1. Open the partition profile on the HMC, and click the Virtual Adapters tab. Select Actions → Create → SCSI Adapter. Choose an adapter ID, and mark the adapter as a required resource for partition activation, as shown in Figure 5-8.
Figure 5-8 Create vSCSI Client Adapter
2. Click System VIOS Info to get an overview of the vSCSI server adapters on the selected VIOS partition. Figure 5-9 shows an overview of the server adapters.
3. Select the required resource and activate the partition.
Figure 5-9 Overview of vSCSI server adapters
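After the client LPAR is activated with this profile, the new virtual disk can be discovered and checked from within the client operating system. The following minimal sketch assumes an AIX client; the device names and output lines are examples and depend on your configuration:
$ cfgmgr                            # scan for newly available devices
$ lsdev -Cc disk
hdisk0 Available  Virtual SCSI Disk Drive
$ lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter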
5.3.6 Creating virtual Ethernet adapters
Virtual Ethernet adapters are defined in the LPAR profile on the HMC:
1. In the VIOS partition profile, click the Virtual Adapters tab, and select Actions → Create → Ethernet Adapter. Select the This adapter is required for partition activation option. Because this adapter is used to connect the client LPARs to the external network, select the Access external network option, as shown in Figure 5-10.
Figure 5-10 Create a virtual Ethernet adapter in the VIOS partition profile
After you activate the changes, the new adapter appears in the VIOS:
$ lsdev -virtual
name status description
[...]
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
[...]
2. Create the Shared Ethernet Adapter (SEA) to connect the virtual adapter to the physical adapter:
$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
ent2 Available
In this example, ent0 is the physical adapter, and ent1 is the virtual Ethernet adapter. The -defaultid parameter defines the default VLAN ID to use. The new Shared Ethernet Adapter is created with the name ent2.
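To verify the result, the VIOS provides commands that show how virtual and physical adapters are bridged. This is a minimal sketch; ent2 is the adapter name from the example above:
$ lsmap -all -net          # show which virtual Ethernet adapters are bridged by which SEA
$ lsdev -dev ent2 -attr    # list the attributes of the new Shared Ethernet Adapter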
5.3.7 Connecting a client LPAR to the virtual network
In the partition profile of the client LPAR, create a virtual Ethernet adapter, similar to the setup in the VIOS. Do not select the Access external network option for the client LPAR, because the network access is provided by the VIOS. Click View Virtual Network to get an overview of all VLAN IDs in the system.
In the client LPAR, the virtual network device appears as a standard network device (for example, ent0 in AIX and eth0 in Linux). Configure it in the same way as a standard physical network device.
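For example, on an AIX client LPAR, the corresponding interface can be configured with the mktcpip command. This is a minimal sketch; the host name, addresses, and interface name are placeholders:
$ mktcpip -h lpar5 -a 10.1.1.55 -m 255.255.255.0 -i en0 -g 10.1.1.1   # host name, IP address, netmask, interface, and default gateway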
5.3.8 TCP/IP address for the VIOS
The VIOS does not necessarily need a TCP/IP network address. The VIOS can be accessed from the HMC using the secure and private HMC to service processor network, as shown in Figure 5-11.
Nevertheless, you must assign a TCP/IP address to the VIOS for at least the following reasons:
Upgrading the VIOS through the network
You can upgrade the VIOS using a CD or using network access (for example, through FTP). Upgrading the VIOS over the network requires that a TCP/IP address is assigned to it, at least for the duration of the upgrade.
Monitoring the VIOS
If you plan to monitor the VIOS, as described in 5.4, “VIO Server monitoring” on page 54, assign a TCP/IP address to it permanently.
Prerequisite for Live Partition Mobility
There are two ways to assign a TCP/IP address to the VIOS:
 – Use a free physical network adapter that is not used for the communication between the public network and the client LPARs.
 – Create an additional virtual network adapter in the VIOS, and assign a TCP/IP address to it, for example, using the mktcpip command (a minimal sketch follows Figure 5-11). You might want to put the VIOS on a specific VLAN that is different from the VLAN that the LPARs use. See Figure 5-11.
For more information, visit:
Figure 5-11 TCP/IP address for the VIOS using an additional virtual network adapter (ent3/en3)
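The following is a minimal sketch of assigning an address to such an additional virtual adapter with the VIOS mktcpip command. The interface name en3 matches Figure 5-11, while the host name and addresses are placeholders:
$ mktcpip -hostname vios1 -inetaddr 10.1.1.10 -interface en3 -netmask 255.255.255.0 -gateway 10.1.1.1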
5.3.9 N_Port ID Virtualization
With a new generation of Fibre Channel adapters, the VIO server also supports N_Port ID Virtualization (NPIV) as an alternative technology to attach SAN devices to VIO clients. NPIV is a Fibre Channel standard that enables multiple Fibre Channel initiators to share a single physical Fibre Channel port. PowerVM facilitates this feature with the introduction of virtual Fibre Channel adapters, which can be defined and mapped in the partition profiles for the VIO server and clients. The virtual Fibre Channel adapters are then connected to a physical Fibre Channel port.
After the initial setup, the client partitions can access storage subsystems directly. On a storage subsystem, the configured storage device is assigned directly to a client partition instead of to a VIO server partition. The biggest advantage of this method is a simplified assignment of storage devices to client partitions: a client partition immediately sees all disks that are assigned to its virtual Fibre Channel adapters, and no additional configuration steps are required in the VIO server.
One disadvantage can be that the client partitions’ virtual Fibre Channel adapters now have to participate in the SAN zoning again. On the other hand, this brings back the more traditional zoning model that most storage and SAN administrators are familiar with.
Client partitions with NPIV use traditional multipath drivers again. These drivers typically enable load balancing over multiple paths, spreading the load over multiple Fibre Channel adapters in single or dual VIO server configurations.
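On the VIOS command line, the virtual Fibre Channel server adapters appear as vfchost devices and are mapped to a physical port in a way similar to vSCSI. This is a minimal sketch; vfchost0 and fcs0 are example device names:
$ lsnports                               # check which physical Fibre Channel ports are NPIV capable
$ vfcmap -vadapter vfchost0 -fcp fcs0    # map the virtual Fibre Channel server adapter to the physical port
$ lsmap -vadapter vfchost0 -npiv         # verify the mapping and the client status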
Refer to the NPIV section in the IBM PowerVM Virtualization Managing and Monitoring Redbooks publication for detailed information about the design, configuration, and usage of virtual Fibre Channel adapters on Power Systems:
IBM PowerVM Virtualization Introduction and Configuration
IBM PowerVM Virtualization Managing and Monitoring
5.3.10 VIOS backup
After the VIOS is set up, and after any configuration change, take a backup of the VIOS.
The viosbr command can be used to back up, list, and restore the virtual and logical device configuration of the Virtual I/O Server.
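The following minimal sketch shows the basic use of viosbr. The file name is a placeholder, and the backup is written as a compressed archive (shown here with a .tar.gz suffix):
$ viosbr -backup -file vios1_config           # save the current virtual and logical configuration
$ viosbr -view -file vios1_config.tar.gz      # list the contents of an existing backup file
$ viosbr -restore -file vios1_config.tar.gz   # restore the configuration from the backup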
For complete documentation of this command, see:
5.4 VIO Server monitoring
As far as the operating system is concerned, the VIO Server is integrated into the SAP CCMS monitoring. In this section, we cover the additional metrics that are available for monitoring the various I/O components and that can be used for further integration. The section is based on a simple load example that demonstrates how some of these metrics are captured and interpreted. It is not an SAP example, because SAP systems normally have a far more sophisticated I/O layout.
We show a subset of the nmon values, which can be useful when you monitor a landscape that uses Virtual I/O (VIO). nmon version 12e and later supports the VIO Server. This overview is based on a functional test that creates non-SAP load on a simple landscape to capture and demonstrate the metrics that are available from nmon.
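As a minimal sketch of how such data can be captured, nmon can record snapshots to a file for later analysis. The interval and count below are example values, and on the VIOS the command might have to be run from the root shell (oem_setup_env), depending on the VIOS level:
$ nmon -f -s 30 -c 120    # record a snapshot every 30 seconds, 120 times, to a .nmon file in the current directory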
Test scenario and setup
On this POWER6-based machine, as shown in Figure 5-12, we have:
VIO Server = VIOS 1.5.2.1 that manages shared Ethernet and shared SAN infrastructure.
VIO Client = LPAR5 running AIX 6.1 with virtual Ethernet through the VIOS and a virtual disk, which is a SAN LUN on the VIOS.
The client has no local disk or direct network connection.
Figure 5-12 Test scenario and setup
The tests were performed in the following manner, as shown in Figure 5-13:
Inbound: An external machine uses FTP to LPAR5 through the VIO Server, sending a 200 MB file over a physical network to the VIO Server and then over the virtual network to LPAR5, where it is written to the virtual disk that is attached through the VIO Server.
Outbound: Before this test, the file system that is used in test 1 on LPAR5 is unmounted and remounted to clear the file system cache, so that access to this file is from the disk rather than from the cache. A reverse FTP then causes the file to be read by LPAR5 from disk using the VIO Server and SAN, and then transferred to the external network through the virtual network and VIO Server. FTP was running at about 8 MByte/s, which was limited by the speed of the physical network (100 Mbit/s corresponds roughly to a maximum of 10 MByte/s).
Parallel write: Four programs were started on LPAR5. Each program writes as fast as possible to the LPAR5 virtual disk using the VIO Server and SAN. This is the famous “yes >/home/filename” benchmark (sketched below).
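For reference, the parallel write load can be generated with a single line of shell, as in the following minimal sketch; the file path is a placeholder:
$ for i in 1 2 3 4; do yes > /home/testfile$i & done    # four writers, each streaming "y" lines to its own file as fast as possible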
Figure 5-13 View of activities on LPAR5: VIOS client
1. FTP inbound: Data is written to the disk (attached through the VIO Server) but is still visible here as disk I/O. AIX thinks it has a real disk. Only the disk device driver knows differently.
2. Outbound: The data is read from the disk (attached through the VIO Server). The difference in the disk I/O when reading is that we can see the benefit of read ahead: AIX changes from the 128 KB blocks of the previous test to 256 KB reads. This reduces the number of disk I/O transfers, increases the disk throughput, and reduces the processor time that is required to drive the disk I/O, which is a double win.
3. Four programs writing to disk in parallel (disk attached through the VIOS): In contrast to a single-threaded application, such as FTP, a multithreaded workload shows the opportunity for much higher disk I/O.
Figure 5-14 Processor usage and disk I/O
Figure 5-14 is a view of the client that shows that the disk I/O is directly related to the processor time in LPAR5. In this case, there is little application time (“FTP” and “yes” do practically no processing on the data). This highlights that processor time is required to perform I/O operations, in terms of AIX kernel and device driver work.
Figure 5-15 Read and write load on disk adapter in KB/s
Figure 5-15 on page 57 shows the disk activities on the virtual SCSI adapter. The blue Average is calculated over the complete time, including the time when no disk I/O occurred. The Weighted Average (WAvg.) only includes the time when the disk was actually busy, which is the number to look at; otherwise, the average includes idle time and is artificially low.
Figure 5-16 View of activities on VIO server
Figure 5-16 shows the number of physical processors that the VIO Server uses over time. For Shared Processor LPARs (SPLPAR) that are uncapped, this is the best number to monitor. The utilization percentages are misleading because they approach 100% busy as you reach your entitlement, so you cannot detect when you are using more processor time than that. Ignore the Unfolded VP line here; it just shows AIX optimizing the number of virtual processors that it schedules for maximum efficiency.
Figure 5-17 Virtual Ethernet I/O read and write throughput
Figure 5-17 on page 58 shows the virtual Ethernet I/O throughput. The read and write lines are on top of each other because every byte read from a VIOS client is also written to the physical network. If the VIO Server itself also uses the network through the SEA (perhaps you are logged on using telnet or performing an update using FTP), the numbers can differ slightly. It is not possible to determine from this view which client consumes what share of the capacity, only the total capacity that is consumed; look at the performance data of each VIO client to determine which one is doing the I/O. In this case, we only have one active LPAR, so we know who to “blame”.
Figure 5-18 Shared Ethernet adapter read and write activities
Figure 5-18 shows the average (Avg.), maximum, and the Weighted Average (WAvg.) for both read and write activities over the measurement period for the Shared Ethernet Adapter. When busy, it was just over 10 MBps for both receive and transfer.
Figure 5-19 shows the SEA ent10 and the loopback network pseudo adapter lo0.
Figure 5-19 Network packets on the Shared Network Adapter and loopback
The number of write packets across the Shared Ethernet Adapter is much lower than the number of read packets. Although the same amount of data is transferred, it looks like the VIO Server optimized the network traffic and merged packets. These larger network packets mean that the network is used more efficiently.
Figure 5-20 shows the load on the Fibre Channel adapter (called fcs0) that is used for the VIO Server client LPARs and on the internally used SAS adapter (called sissas0), both for read and write.
Figure 5-20 Load on the Fibre Channel adapter
The third test was 100% write and thus reached a higher disk I/O throughput, which we can see in the Weighted Average of the fcs0_write column on the throughput graph in kilobytes per second, as shown in Figure 5-20 and Figure 5-21.
Figure 5-21 Throughput in KB/s on the Fibre Channel adapter
Figure 5-22 shows an overview of disk I/O on a disk controller. In this simple benchmark case, we only have the one LUN, so the graphs are a lot simpler than in a production type machine.
Figure 5-22 Disk transfers per second on the disk controller
Figure 5-23 shows the disk service time, which is extremely healthy: most of the time it is between two and three milliseconds.
Figure 5-23 Disk service time
Figure 5-24 shows the VIO Server view of disk I/O for each disk. The VIO Server is performing a little disk I/O to the internal disks, but the bulk of the disk I/O goes to hdisk6, and the VIO Server client’s LUN is running at nearly 250 transfers per second.
Figure 5-24 VIO server view of disk I/O for each disk
 