IBM PowerVC planning
This chapter describes the key aspects of IBM® Power Virtualization Center (IBM PowerVC) installation planning divided into the following sections:
Sections 2.1, “IBM PowerVC requirements” on page 14 and 2.2, “IBM PowerVM NovaLink requirements” on page 19 present the hardware and software requirements for the various components of an IBM PowerVC environment: management station, managed hosts, network, storage area network (SAN), and storage devices.
Sections 2.3, “Host and partition management planning” on page 22 through 2.10, “Product information” on page 66 provide detailed planning information for various aspects of the environment’s setup including:
2.1 IBM PowerVC requirements
This section describes the necessary software and hardware components to implement IBM PowerVC to manage AIX, IBM i, and Linux platforms.
IBM PowerVC for Private Cloud continues to be included along with IBM PowerVC in the IBM PowerVC 2.0.0 installation media. For information about available releases, see this website:
In addition to the functionality offered by IBM PowerVC, IBM PowerVC for Private Cloud provides cloud capabilities, such as a self-service portal that allows for the provisioning of new virtual machines (VMs) in a PowerVM-based private cloud without direct system administrator intervention.
2.1.1 Hardware and software information
The following sections describe the hardware, software, and minimum resource requirements for Version 2.0.0 of IBM PowerVC for Private Cloud and IBM PowerVC. For the complete requirements, see the following IBM Knowledge Center:
IBM PowerVC Version 2.0.0
IBM PowerVC for Private Cloud Version 2.0.0
 
Note: Support for KVM has been deprecated in IBM PowerVC Version 2.0.0.
2.1.2 Hardware and software requirements for IBM PowerVC
The following information provides a consolidated view of the hardware and software requirements for both IBM PowerVC and IBM PowerVC for Private Cloud.
IBM PowerVC management host and managed hosts
The IBM PowerVC architecture supports a single management host for each managed domain. It is not possible to configure redundant IBM PowerVC management hosts that control the same objects.
The VM that hosts the IBM PowerVC management host should be dedicated to this function. No other software or application should be installed on this VM. However, you can install software for the management of this VM, such as monitoring agents and data collection tools for audit or security. Table 2-1 on page 15 lists the IBM PowerVC hardware and
software requirements.
Table 2-1 Hardware and OS requirements

Host type: IBM PowerVC management server
Supported hardware: ppc64le (POWER8 and above) or x86_64. Support for PowerVC installation on the ppc64 architecture is being withdrawn.
Supported operating systems: Power platform: Red Hat Enterprise Linux 8.2 and Red Hat Enterprise Linux 8.3; SUSE Linux Enterprise Server 15 SP1 and SUSE Linux Enterprise Server 15 SP2. x86_64 platform: Red Hat Enterprise Linux 8.2 and Red Hat Enterprise Linux 8.3.

Host type: Managed hosts
Supported hardware: IBM POWER7 or later processor-based servers. All form factors are included, such as chassis, rack, blade, and Power Flex. Managed hosts should have a minimum of 4 cores and 8 GB of memory.
Table 2-2 describes the minimum and recommended resources that are required for IBM PowerVC VMs. In the table, the meaning of the processor capacity row depends on the type of host that is used as the IBM PowerVC management host:
If the IBM PowerVC management host is PowerVM, processor capacity refers to either the number of processor units of entitled capacity for shared processors, or the number of dedicated processors.
If the IBM PowerVC management host is x86, processor capacity refers to the number of physical cores.
Table 2-2 Resource requirements for 5 hosts, 2 storage providers, and 2 fabrics

Item | Minimum | Recommended: up to 500 VMs | 501-1000 | 1001-2000 | 2001-3000 | 3001-6000
Processor capacity | 1 | 2 | 4 | 8 | 8 | 12
Virtual CPUs | 2 | 2 | 4 | 8 | 8 | 12
Memory and swap space (GB) | 22 | 32 | 35 | 40 | 45 | 55
Disk used (GB) | 80 | 100 | 120 | 140 | 160 | 180
 
Note: The resource requirements for PowerVC and PowerVC for Private Cloud will vary when there are more than five hosts, two storage providers, and two fabrics. Refer to 2.1.1, “Hardware and software information” on page 14 for the links to IBM Knowledge Center.
PowerVC has the following file system space requirements:
/tmp 250 MB
/usr 250 MB
/opt 5 GB
/srv 2 GB
/var 3 GB
It is recommended that 20% of the disk space be assigned to /var.
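Before installation, you can confirm that the management VM provides these file system sizes; a minimal sketch (mount layouts vary by distribution, so some of these paths might reside on a shared root file system):
# df -h /tmp /usr /opt /srv /var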
Guest operating system support
Table 2-3 lists the supported virtual machine (VM) operating systems on the managed hosts.
Table 2-3 Supported operating systems for VMs on the managed hosts

AIX (BE): 7.1 TL0 SP0 and 7.2 TL0 SP0
IBM i (BE): 7.2 TL1, 7.3 TR3, and 7.4. Note: Version 7.2 TR8 or 7.3 TR4 is required for POWER9.
Red Hat Enterprise Linux (LE): 7.6, Red Hat Enterprise Linux 7.6-ALT, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, and 8.3
SUSE Linux Enterprise Server (LE): 12 and 15; SUSE Linux Enterprise Server 15 SP1 and SUSE Linux Enterprise Server 15 SP2
Ubuntu (LE): 16.04
 
Note: The operating system versions shown above are the minimum levels supported by PowerVC. Newer versions may be available from the vendor. It is recommended that systems be maintained at versions that are currently supported. Check the support lifecycle documentation for the respective vendor.
Hardware Management Console
Table 2-4 shows the Hardware Management Console (HMC) version and release requirements to support IBM PowerVC.
Table 2-4 HMC minimum requirements

Software level: V9.1.941 or V9.2.950
Hardware level: For the HMC hardware that supports the software levels listed above, refer to: https://www.ibm.com/support/pages/recommended-fixes-hmc-code-upgrades
HMC memory capacity (required): 4 GB for up to 300 VMs; 8 GB for more than 300 VMs
HMC memory capacity (recommended): 8 GB for up to 300 VMs; 16 GB for more than 300 VMs
As a preferred practice, update to the latest HMC fix pack for the specific HMC release. You can check the fixes for HMC and other IBM products by using the IBM Fix Level Recommendation Tool at the following website:
You can get the latest fix packages from IBM Fix Central at the following website:
Virtual I/O Server
Table 2-5 includes the Virtual I/O Server (VIOS) version requirements for IBM PowerVC managing PowerVM.
Table 2-5 Supported VIOS platforms
Platform: Virtual I/O Server
Requirement: Version 3.1.0.30, Version 3.1.1.25, or Version 3.1.2.10
 
Note: A minimum of 6 GB is required for VIOS 3.1.x.
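To confirm the installed level on each VIOS before registering a host, you can run the following command from the VIOS restricted shell; a minimal sketch:
$ ioslevel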
Network
Table 2-6 lists the network infrastructure that is supported by IBM PowerVC.
PowerVC can communicate with hosts and storage over IPv4 addresses. PowerVC Version 2.0.0 does not support IPv6.
Table 2-6 Supported network hardware and software
Item
Requirement
Network switches
IBM PowerVC does not manage network switches, but it supports network configurations that use virtual LAN (VLAN)-capable switches.
Virtual Networks
PowerVC supports Shared Ethernet Adapters (SEAs) for virtual machine networking and SR-IOV-based vNICs on PowerVM.
Storage
Table 2-7 lists the storage systems and drivers, SAN switches and connectivity types that are supported by PowerVC.
Table 2-7 Supported Storage and Fabric drivers for PowerVM
Item
Supported
Storage systems and drivers
IBM Storwize family.
IBM FlashSystem® A9000 and A9000R.
IBM SAN Volume Controller (SVC).
IBM XIV® Storage system.
IBM System Storage DS8000®.
Enterprise Hitachi Block Storage Driver (HBSD) 10.0.0.
Dell EMC VNX and Dell EMC PowerMAX (VMAX).
Pure Storage.
Pluggable (a pluggable storage device is an OpenStack-supported storage device).
Dell EMC VNX requires that the PowerVC management server run on an x86 host.
SAN switches and Fabric Drivers
Brocade zone OpenStack driver for Ussuri.
Cisco Fibre Channel zone driver. VSAN support is included.
Pluggable. Any fabric supported by OpenStack driver can be registered with PowerVC.
PowerVC supports up to 25 fabrics.
Storage connectivity type
FC attachment through at least one N_Port ID Virtualization (NPIV)-capable host bus adapter (HBA) on each host.
Virtual SCSI (vSCSI).
Shared Storage Pools.
Supported storage connectivity options
Each virtual machine is associated with a single connectivity group that manages the virtual machine’s connectivity to storage volumes. Each storage connectivity group supports a single connectivity type for volumes in the boot set. Each VM can have only one type of boot connectivity and one type of data connectivity.
Table 2-8 shows the supported storage connectivity options. Both NPIV and vSCSI data volumes are shown as supported for a vSCSI boot volume, but you can have either NPIV or vSCSI data volumes, not both, because storage connectivity groups support only one type of connectivity. Similarly, you cannot have both shared storage pool volumes and NPIV boot volumes in a VM.
Table 2-8 Supported storage connectivity options

Boot volume | Supported data volumes
Shared storage pool (SSP) | Shared storage pool, NPIV
NPIV | NPIV
vSCSI | NPIV, vSCSI
Security
PowerVC can be optionally configured to work with an existing Lightweight Directory Access Protocol (LDAP) server.
Table 2-9 shows the supported LDAP software and requirements.
Table 2-9 Supported LDAP server versions
Item
Requirement
Lightweight Directory Access Protocol (LDAP) server (optional)
OpenLDAP version 2.0
Microsoft Active Directory 2016
IBM Security™ Directory Suite 8.0.1
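Before registering an LDAP server in PowerVC, it can be useful to verify that the management host can reach and query it; a minimal sketch with the OpenLDAP client tools (the server URI, bind DN, and search base are hypothetical examples):
# ldapsearch -x -H ldap://ldap.example.com -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=testuser)"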
2.1.3 Other hardware compatibility
IBM PowerVC is based on OpenStack, so rather than being compatible with specific hardware devices, IBM PowerVC is compatible with drivers that conform to OpenStack standards. They are called pluggable devices in IBM PowerVC. Therefore, IBM PowerVC can take advantage of hardware devices that are available from vendors that provide OpenStack-compatible drivers for their products. The level of functionality that the pluggable devices have depends on the driver. IBM cannot state the level of support that other hardware vendors provide for their specific devices and drivers that work with IBM PowerVC, so check with the vendors to learn about their drivers. For more information about pluggable devices, see IBM Knowledge Center:
2.1.4 Web browser supported
IBM PowerVC works with the following web browsers:
Mozilla Firefox ESR 83
 
Note: PowerVC does not load if Ask me every time is selected as the custom
history setting.
Google Chrome, Version 86.0.4240.198
Microsoft Edge Version 86.0.622.58 (official build) (64-bit) and later
Safari, Version 12
2.2 IBM PowerVM NovaLink requirements
PowerVM NovaLink is a software interface that is used for virtualization management. You can install PowerVM NovaLink on a supported PowerVM server. PowerVM NovaLink enables highly scalable modern cloud management and deployment of critical enterprise workloads. PowerVM NovaLink can be used to provision large numbers of Virtual Machines (VMs) on PowerVM servers quickly and at a reduced cost.
2.2.1 PowerVM NovaLink system requirements
For normal operation, PowerVM NovaLink requires specific hardware and software criteria to be met.
Power Systems requirements
PowerVM NovaLink can be installed on POWER8 processor-based servers running firmware level FW840 or later, and on POWER9 processor-based servers running firmware level FW910 or later. If the target server does not meet the minimum firmware requirements, it must be updated or upgraded before installing PowerVM NovaLink.
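One way to confirm the current system firmware level of an HMC-managed server before installation is from the HMC command line; a minimal sketch (Server-9009-22A is a hypothetical managed system name):
$ lslic -m Server-9009-22A -t sys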
 
Note: NovaLink levels older than NovaLink 1.0.0.16 Feb 2020 release with partitions running certain SR-IOV capable adapters are NOT supported on POWER9 servers running firmware level FW930 and FW940. Upgrading systems in this configuration is supported only if NovaLink is first updated to NovaLink 1.0.0.16 Feb 2020 release or later. Additional details are available on IBM Support Fix Central website, in the firmware description file.
IBM PowerVC requirements
For normal operation, IBM PowerVC Version 2.0.0 requires PowerVM NovaLink version 2.0.0.
PowerVM NovaLink partition requirements
PowerVM NovaLink requires its own partition on the managed system. The PowerVM NovaLink Version 2.0.0 partition requires the following system resources:
0.5 shared processors that are uncapped with a non-zero weight and two
virtual processors.
8 GB of memory is required for Red Hat Enterprise Linux 8.2. The minimum required memory capacity for Ubuntu 18.04 is 6.5 GB; the memory capacity can be lowered to 2.5 GB after installation. See Table 2-10 for the memory requirements when scaling VMs.
At least 30 GB of vSCSI storage (LV, PV, or NPIV).
A virtualized network that is bridged through SEA.
Minimum virtual slots that are set to 200 or higher.
Table 2-10 Amount of memory that is needed for a single NovaLink Version 2.0.0 partition

Number of VMs | Up to 250 | 251 - 500 | More than 500
Red Hat Enterprise Linux 8.2 - memory needed in a standard environment (GB) | 8 | 14 | 18
Ubuntu 18.04 - memory needed in a standard environment (GB) | 2.5 | 5 | 10
PowerVM NovaLink considerations:
When the PowerVM NovaLink environment is installed on a new managed system, the PowerVM NovaLink installer creates the PowerVM NovaLink partition automatically. In this case, the installer always uses virtualized storage that is provisioned from the Virtual I/O Server: it allocates logical volumes from the VIOS rootvg to the PowerVM NovaLink partition.
If the PowerVM NovaLink installer is set to use I/O redundancy, the storage for the PowerVM NovaLink partition is automatically mirrored to accomplish redundancy by using RAID 1.
If the PowerVM NovaLink software is installed on an HMC-managed system, the HMC is used to create a Linux logical partition (LPAR) and define its associated amounts of resources. When the HMC is used to create the Linux LPAR, the powervm_mgmt_capable flag must be set to true.
The default OS installed by the PowerVM NovaLink media is Ubuntu 16.04.1 LTS. Red Hat Enterprise Linux version 7.3, or later, is also supported.
NovaLink 2.0.0 is also supported on Linux partitions running Ubuntu 18.04 with the latest fixes applied, as well as Red Hat Enterprise Linux 8.2 and Red Hat Enterprise Linux 8.3 (LE) (valid as of the date when this document was created). The installer provides an option to install Red Hat Enterprise Linux once the required setup and configuration steps are complete.
 
Note: Software-defined infrastructure (SDI) is no longer supported with NovaLink 2.0.0 valid as of the date when this document was created.
Supported operating systems for hosted logical partitions
PowerVM NovaLink supports all operating systems that are supported on the machine type and model of the managed system.
Virtual I/O Server partition requirements
Table 2-11 shows the PowerVC, PowerVM NovaLink and Virtual I/O Server minimum
required versions.
Table 2-11 Supported virtualization platforms

PowerVC | NovaLink | Platform | VIOS
1.4.2.0 / 1.4.2.1 | 1.0.0.12 / 1.0.0.13 | POWER8 and POWER9 | Version 3.1.0.10; Version 2.2.6.23; Version 2.2.5.10 (1)
1.4.3.0 / 1.4.3.1 | 1.0.0.14 / 1.0.0.15 | POWER8 and POWER9 | Version 2.2.6.41 with mandatory iFix IJ16853 applied; Version 3.1.0.21 with mandatory iFix IJ16854 applied
1.4.4.0 / 1.4.4.1 / 1.4.4.2 | 1.0.0.16 | POWER8 and POWER9 | Version 2.2.6.51; Version 3.1.0.21 with mandatory iFix IJ16854 applied; Version 3.1.0.30; Version 3.1.1.10; Version 3.1.1.25; Version 3.1.2.0
2.0.0.0 | 2.0.0.0 | POWER8 and POWER9 | Version 3.1.0.30; Version 3.1.1.25; Version 3.1.2.10

(1) This PowerVM Virtual I/O Server level is compatible only with POWER8 systems.
Before installing PowerVM NovaLink, review the following considerations to determine whether any are applicable to your environment:
If you install the PowerVM NovaLink environment on a new managed system, configure one disk with at least 60 GB of storage for each Virtual I/O Server instance that you plan to create on the system. You can configure the system's internal serial-attached SCSI (SAS) disk units; alternatively, disk units residing on a storage area network (SAN) can be used.
If you create two instances of Virtual I/O Server, create each disk on a separate SAS controller or Fibre Channel (FC) card to accomplish redundancy. Otherwise, the resource requirements for Virtual I/O Servers that are installed by the PowerVM NovaLink installer are the same as the resource requirements for Virtual I/O Servers that are not installed by the PowerVM NovaLink.
Reliable Scalable Cluster Technology (RSCT) for Resource Monitoring Control (RMC) connections
To enable IPv6 link-local address support for Resource Monitoring Control (RMC) connections, update the Reliable Scalable Cluster Technology (RSCT) packages on AIX and Linux logical partitions to be at version 3.2.1.0 or later.
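To check the installed RSCT level, you can query the package database in each partition; a minimal sketch (exact package and fileset names can vary by operating system level):
On AIX:
# lslpp -l rsct.core.rmc
On Linux:
# rpm -qa | grep -i rsct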
IBM PowerVC requirements
IBM PowerVC Version 1.3 (or later) is required to manage a PowerVM NovaLink host. IBM PowerVC Version 2.0.0 is required to manage a host that has PowerVM NovaLink version 2.0.0. Support for Software Defined Networking (SDN) is discontinued with NovaLink 2.0.0. Also, make sure that your NovaLink version 2.0.0 native Linux host meets the following levels:
Ubuntu 18.04 with latest fixes applied (recommended)
Red Hat Enterprise Linux 8.2 and Red Hat Enterprise Linux 8.3 (LE)
Hardware Management Console requirements
HMC Version 8.4.0 service pack 1, or later, is required to co-manage a system with
PowerVM NovaLink.
2.3 Host and partition management planning
When planning your IBM Power Virtualization Center (PowerVC) environment, consider your host and I/O management strategy, the limitations on the number of hosts and VMs that can be managed by IBM PowerVC, and the benefits of using multiple VIOSs.
2.3.1 Physical server configuration
If you plan to use Live Partition Mobility (LPM), you must ensure that all servers are configured with the same logical memory block size. The logical memory block size can be changed from the Advanced System Management Interface (ASMI) as shown in Figure 2-1.
Figure 2-1 Changing the logical memory block size
2.3.2 HMC or PowerVM NovaLink planning
Data centers can contain hundreds of hosts and thousands of VMs. For IBM PowerVC Version 2.0.0, the following maximum numbers are recommended:
IBM PowerVC Version 2.0.0 managing PowerVM hosts by using an HMC:
 – A maximum of 30 HMC-managed hosts are supported.
 – Each host can have up to 1000 virtual machines on it.
 – A maximum of 3000 virtual machines can be on all of the hosts combined.
 – Each HMC can manage up to 2000 virtual machines.
IBM PowerVC Version 2.0.0 managing PowerVM hosts by using PowerVM NovaLink:
 – Up to 50 NovaLink-managed hosts are supported.
 – Up to 10000 virtual machines and 20000 volumes can be on all of the NovaLink-managed hosts combined.
 – Up to 1000 virtual machines (NovaLink, Virtual I/O Servers, or client workloads) per PowerVM host are supported (this number is valid as of the date when this document was released; the maximum number also depends on the PowerVM system firmware version). For more information on scaling your system, see IBM PowerVM Best Practices, SG24-8062.
When your IBM PowerVC version 2.0.0 environment consists of both HMC-managed and NovaLink-managed hosts:
 – Up to 50 hosts are supported, 30 of which can be HMC-managed.
 – A maximum of 3000 virtual machines can be on all of the hosts combined.
 
Notes:
No hard limitations exist in IBM PowerVC. These are suggested values from a performance standpoint only.
KVM is no longer supported on PowerVC 2.0.0 and NovaLink 2.0.0 (valid as of the date when this document was created).
You must consider how to partition your HMC-managed and PowerVM NovaLink-managed hosts into subsets, where each subset is managed by an IBM PowerVC management host.
Advanced system planning and installation typically use redundant HMCs to manage the hosts. Support for redundant HMCs was added with the release of PowerVC Version 1.2.3. Support for redundant HMCs was enhanced with version 1.4.2.0, which allowed the user to manage a host via more than one HMC, where the first registered HMC would be the primary HMC for the host.
If there is another HMC managing the host and it is added to PowerVC, it is set as the secondary HMC. In case the primary HMC fails, PowerVC automatically fails over the host connection to the secondary HMC. Alternatively, in case of a planned HMC outage, you can manually switch the connection to the Secondary HMC, so that the secondary takes over the role of primary HMC. See Figure 2-2 on page 25 for how to add redundant HMC in
PowerVC 2.0.0.
 
Note: It is recommended that you create a separate HMC user that is dedicated to PowerVC for authenticating to the HMC.
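A minimal sketch of creating such a dedicated user from the HMC command line (the user name and task role are examples; choose a role that matches your security policy):
$ mkhmcusr -u powervc -a hmcsuperadmin --passwd <password>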
Figure 2-2 Add redundant HMC
1. To add a redundant HMC, select your host from the host list menu and click Add secondary HMC, located under Details in the Secondary HMC connection field (Figure 2-3).
Figure 2-3 Add HMC details
2. Enter a unique name that will be displayed in PowerVC, the HMC IP address, and a user ID and password for authentication.
Note: PowerVC does not support management of OpenPOWER systems through HMC.
The secondary HMC is now successfully added, as shown in Figure 2-4.
Figure 2-4 Secondary HMC available
Planning for AME
Active Memory Expansion (AME) improves memory usage for virtual machines. It expands the physical memory that is assigned to a virtual machine by allowing the operating system to effectively use more memory than it was assigned. The hypervisor compresses the least used memory pages to achieve the expansion. The increase in available memory is called the expansion factor. To enable AME, use an advanced compute template. See Figure 2-5 for how to enable AME to expand the effective memory capacity of an AIX virtual machine beyond its physical allocation.
Figure 2-5 Enable Active Memory Expansion (AME)
AME Requirements in PowerVC 2.0.0
Your system must meet the following requirements to use AME with PowerVC:
PowerVC Version 2.0.0
AIX Version 6.1, or later
POWER7, or later
2.3.3 Virtual I/O Server planning
IBM PowerVC supports more than one VIOS.
Consider a second VIOS to provide redundancy and I/O connectivity resilience to the hosts. Use two VIOSs to avoid outages to the hosts when you must perform maintenance, updates, or changes in the VIOS configuration.
If you plan to make partitions mobile, define the VIOS that provides the mover service on all hosts, and ensure that the Mover service partition option is enabled in the profile of these VIOSs. Save configuration changes to profile must be set to Enable on all VIOSs. On the HMC, verify the settings of all VIOSs, as shown in Figure 2-6.
Figure 2-6 VIOS settings that must be managed by IBM PowerVC
Important: Configure the maximum number of virtual resources (virtual adapters) for the VIOS to at least 200. This setting provides sufficient resources on your hosts while you create and migrate VMs throughout your environment. Otherwise, IBM PowerVC indicates a warning during the verification process.
Changing the maximum virtual adapters in a Virtual I/O Server
The preferred way to change the Maximum Virtual Adapters in the HMC running at version 9 is using the Enhanced GUI:
1. Select Resources → All Systems then click the name of the desired managed server.
2. The partitions view is displayed. Under PowerVM, click Virtual I/O Servers and select the VIOS.
3. Click Actions → View Virtual I/O Server Properties.
4. Click Advanced (see Figure 2-7), which is displayed in the upper right corner.
5. Check the Maximum Virtual Adapters field. In order to edit that field, the target VIOS must be powered off in advance (Figure 2-7).
Figure 2-7 How to change the Maximum Virtual Adapters
 
Note: On certain HMC versions, the Advanced settings tab is located in the upper right corner and is represented by the Advanced button. See Figure 2-8 for an example of an HMC running version V9R1M942 (MH01876).
Figure 2-8 HMC Advanced button in HMC Version V9R1M942
Another way to change the Maximum Virtual Adapters in HMC Version 9:
1. Select Resources → All Systems, then click the name of the desired managed server to open the Partitions view.
2. Under PowerVM click Virtual I/O Servers.
3. Now click the name of the desired VIOS.
4. Under VIOS Actions click Profiles → Manage Profiles, select the desired profile.
5. Then under Actions click Edit and switch to the Virtual Adapters tab.
6. Now replace the value in the Maximum virtual adapters field with a new value and click OK to save it. See Figure 2-9 for an example.
Figure 2-9 Maximum virtual adapters in a VIOS profile
Updating the profile settings sets Save configuration changes to profile to Disabled until next activation, as shown in Figure 2-10.
Figure 2-10 Save configuration changes to profile disable until next activated
 
Note: Save the current configuration of the Virtual I/O Server to a new partition profile. To save the current configuration to a new partition profile, you must be a super administrator, service representative, operator, or product engineer.
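As an alternative to the GUI steps above, the maximum virtual adapters value can also be changed by editing the partition profile from the HMC command line; a minimal sketch (the managed system, VIOS, and profile names are hypothetical):
$ chsyscfg -r prof -m Server-9009-22A -i "name=default_profile,lpar_name=vios1,max_virtual_slots=200"
As with the GUI method, the new maximum takes effect the next time the VIOS is activated with this profile.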
2.4 Placement policies and templates
One goal of IBM PowerVC is to simplify the management of VMs and storage by providing the automated creation of partitions, virtual storage disks, and the automated placement of partitions on physical hosts. This automation replaces the manual steps that are needed when you use PowerVM directly. In the manual steps, you must create disks, select all parameters that define each partition to deploy, and configure the mapping between the storage units and the partitions in the VIOSs.
This automation is performed by using placement policies and various templates.
2.4.1 Host groups
Use host groups to group hosts logically regardless of the features that they might share. For example, the hosts do not need the same architecture, network configuration, or storage. Host groups have these important features:
Every host must be in a host group.
Any hosts that do not belong to a user-defined host group are members of the default host group. The default host group cannot be deleted.
VMs are kept within the host group.
A VM can be deployed to a specific host or to a host group. After deployment, if that VM is migrated, it must always be migrated within the host group. Also, advanced PowerVC features such as the Dynamic Resource Optimizer (DRO), maintenance mode, and automated simplified remote restart operate only within a host group.
Placement policies are associated with host groups.
Every host within a host group is subject to the host group’s placement policy. The default placement policy is striping.
An enterprise client can group its hosts to meet different business needs, for example, for test, development, and production, as shown in Figure 2-11. With different placement policies, even with different hardware, the client can achieve different service levels.
Figure 2-11 Host group sample
2.4.2 Placement policies
If you want to deploy a new partition, you can indicate to IBM PowerVC the host on which you want to create this partition. Alternatively, you can ask IBM PowerVC to identify the host in a host group on which the partition will best fit, based on a policy that matches your business needs. In that case, IBM PowerVC compares the requirements of the partition with the availability of resources on the possible set of target hosts, and it considers the selected placement policy to make a choice.
IBM PowerVC offers six policies to deploy VMs:
Striping placement policy
The striping placement policy distributes your VMs evenly across all of your hosts. For each deployment, IBM PowerVC determines the hosts with sufficient processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, IBM PowerVC chooses the host that contains the fewest number of VMs and places the VM on that host.
Packing placement policy
The packing placement policy places VMs on a single host until its resources are fully used, and then it moves on to the next host. For each deployment, IBM PowerVC determines the hosts with sufficient processing units and memory to meet the requirements of the VM. Other factors for determining eligible hosts include the storage and network connectivity that are required by the VM. From the group of eligible hosts, IBM PowerVC chooses the host that contains the most VMs and places the VM on that host. After the resources on this host are fully used, IBM PowerVC moves on to the next eligible host that contains the most VMs.
This policy can be useful when you deploy large partitions on small servers. For example, you must deploy four partitions that require eight, eight, nine, and seven cores on two servers, each with 16 cores. If you use the striping policy, the first two 8-core partitions are deployed one on each server, which leaves only eight free cores on each. IBM PowerVC cannot deploy the 9-core partition because an LPM operation must be performed before the 9-core partition can be deployed.
By using the packing policy, the first two 8-core partitions are deployed on the first host, and IBM PowerVC can then deploy the 9-core and 7-core partitions on the second host. This example is simplistic, but it illustrates the difference between the two policies: The striping policy optimizes performance, and the packing policy optimizes human operations.
CPU utilization balance placement policy
This placement policy places VMs on the host with the lowest CPU utilization in the host group. The CPU utilization is computed as a running average over the last 15 minutes.
CPU allocation balance placement policy
This placement policy places VMs on the host with the lowest percentage of its CPU that is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
 – Host 1 has 16 total processors, four of which are assigned to VMs.
 – Host 2 has four total processors, two of which are assigned to VMs.
Assume that the user deploys a VM that requires one processor. Host 1 has (4+1)/16, or 5/16 of its processors that are allocated. Host 2 has (2+1)/4, or 3/4 of its processors that are allocated. Therefore, the VM is scheduled to Host 1.
Memory utilization balanced
This placement policy places virtual machines on the host that has the lowest memory utilization in the host group. The memory utilization is computed as a running average over the last 15 minutes.
 – HMC-managed hosts do not accurately report their memory utilization (only NovaLink-managed systems do). Therefore, host groups that use this policy should not contain HMC-managed hosts. If there are any HMC-managed hosts in the host group, PowerVC always targets the HMC hosts for placement because their utilization is recorded as 0.
 – All virtual machines on a PowerVC host should have RMC running for the most accurate memory utilization estimates.
Memory allocation balanced placement policy
This placement policy places VMs on the host with the lowest percentage of its memory that is allocated post-deployment or after relocation.
For example, consider an environment with two hosts:
 – Host 1 has 24 GB total memory, 11 GB of which are assigned to VMs.
 – Host 2 has 8 GB total memory, 2 GB of which are assigned to VMs.
Assume that the user deploys a VM that requires 1 GB of total memory. Host 1 has (11+1)/24, or 1/2 of its memory that is allocated. Host 2 has (2+1)/8, or 3/8 of its memory that is allocated. Therefore, the VM is scheduled to Host 2.
 
Note: A default placement policy change does not affect existing VMs. It affects only new VMs that are deployed after the policy setting is changed. Therefore, changing the placement policy for an existing environment does not result in moving existing partitions.
Tip: The following settings might increase the throughput and decrease the duration of deployments:
Use the striping policy rather than the packing policy.
Limit the number of concurrent deployments to match the number of hosts.
When a new host is added to the host group that is managed by IBM PowerVC and the placement policy is set to striping mode, new VMs are deployed on the new host until it catches up with the existing hosts. IBM PowerVC allocates partitions only on this new host until the resource use of this host is about the same as on the previously installed hosts.
When a new partition is deployed, the placement algorithm uses several criteria to select the target server for the deployment, such as availability of resources and access to the storage that is needed by the new partitions. By design, the IBM PowerVC placement policy is deterministic. Therefore, the considered resources are the amounts of processing power and memory that are needed by the partition, as defined in the partition profile (virtual processors, entitlement, and memory). Dynamic resources, such as I/O bandwidth, are not considered, because they result in a non-deterministic placement algorithm.
 
Note: The placement policies are predefined. You cannot create your own policies.
The placement policy can also be used when you migrate a VM. Figure 2-12 shows the IBM PowerVC user interface for migrating a partition. Use this interface to select between specifying a specific target or letting IBM PowerVC select a target according to the current placement policy.
Figure 2-12 Migration of a partition by using a placement policy
2.4.3 Template types
Rather than define all characteristics for each partition or each storage unit that must be created, the usual way to create them in IBM PowerVC is to instantiate these objects from a template that was previously defined. The amount of effort that is needed to define a template is similar to the effort that is needed to define a partition or storage unit. Therefore, reusing templates saves significant effort for the system administrator, who must deploy
many objects.
IBM PowerVC provides a GUI to help you create or customize templates. Templates can be easily defined to accommodate your business needs and your IT environment.
Three types of templates are available:
Compute templates These templates are used to define processing units, memory, and disk space that are needed by a partition. Compute templates also define additional details of a VM, such as remote restart capability, compatibility mode, or secure boot. They are described in 2.4.4, “Information that is required for compute template planning” on page 34.
Deploy templates These templates are used in PowerVC for Private Cloud to allow authorized self-service users to quickly, easily, and reliably deploy an image. They are described in 2.4.5, “Tips for deploy template planning” on page 40.
Storage templates These templates are used to define storage settings, such as a specific volume type, storage pool, and storage provider. They are described in 2.6.2, “Storage templates” on page 49.
Use the templates to deploy new VMs. This approach propagates the values for all of the resources into the VMs. The templates accelerate the deployment process and create a baseline for standardization.
2.4.4 Information that is required for compute template planning
The IBM PowerVC management host provides six predefined compute templates (sizes from tiny to xxlarge). These predefined templates can be edited and removed. You can also create your own templates.
Before you create templates, plan for the amount of resources that you need for the different classes of partitions. For example, different templates can be used for partitions that are used for development, test, and production, or you can have different templates for database servers, application servers, and web servers.
The following information about the attributes of a compute template helps your planning efforts regarding compute templates:
Template name
The name to use for the template.
Virtual processors
The number of virtual processors. A VM usually performs best if the number of virtual processors is close to the number of processing units that is available to the VM. You can specify the following values:
Minimum The smallest number of virtual processors that you accept for deploying
a VM.
Desired The number of virtual processors that you want for deploying a VM.
Maximum The largest number of virtual processors that you allow when you resize a VM. This value is the upper limit to resize a VM dynamically. When it is reached, you need to power off the VM, edit the profile, change the maximum to a new value, and restart the VM.
Use shared processors
If checked, a VM with shared processors will be deployed. If not checked, the VM gets dedicated processors that belong to just that LPAR.
The following attributes are only visible if Use shared processor is checked, meaning the template creates a shared processor VM:
Processing units
Number of entitled processing units. A processing unit is the minimum amount of processing resource that the VM can use. For example, a value of 1 (one) processing unit corresponds to 100% use of a single physical processor.
Processing units are split between virtual processors, so a VM with two virtual processors and one processing unit appears to the VM user as a system with two processors, each running at 50% speed.
You can specify the following values:
Minimum The smallest number of processing units that you accept for deploying a VM. If this value is not available, the deployment does not occur.
Desired The number of processing units that you want for deploying a VM. The deployment occurs with a number of processing units that is less than or equal to the wanted value and greater than or equal to the minimum value.
Maximum The largest number of processing units that you allow when you resize a VM. This value is the upper limit to which you can resize dynamically. When it is reached, you must power off the VM, edit the profile, change the maximum value to a new value, and restart the VM.
Shared processor pool
PowerVC supports multiple shared processor pools. This allows you to share a group of processors between multiple virtual machines. You can group your applications together and set a shared processor pool size limit on the total number of processing units for each pool, which limits the software license exposure for that pool.
The default is DefaultPool.
Uncapped
If checked, this template creates uncapped VMs that can use processing units that are not being used by other VMs, up to the number of virtual processors that is assigned to the uncapped VM.
If not checked, this template creates capped VMs that can use only the number of processing units that are assigned to them.
Weight (0 - 255)
The Weight attribute is only available for uncapped VMs.
If multiple uncapped VMs require unused processing units, the uncapped weights of the uncapped VMs determine the ratio of unused processing units that are assigned to each VM. For example, an uncapped VM with an uncapped weight of 200 receives two processing units for every processing unit that is received by an uncapped VM with an uncapped weight of 100.
Important: Processing units and virtual processor are values that work closely and must be calculated carefully. For more information about virtual processor and processing units, see IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
The following attribute is only visible if Use shared processor is unchecked, meaning the template creates a dedicated processor VM:
Idle sharing
This setting enables this VM to share its dedicated processors with other VMs that run in shared processor mode (also known as a dedicated donating partition).
The possible selections are:
 – Allow when virtual machine is inactive
 – Allow when virtual machine is active
 – Allow always
 – Never
The rest of the attributes are not related to shared or dedicated processor usage.
Memory (GB)
Amount of memory, expressed in GB. The value for memory must be a multiple of the memory region size that is configured on your host. The minimum value is 16 MB. To see the region size for your host, open the Properties window for the selected host on the HMC, and then open the Memory tab to view the memory region size. Figure 2-13 shows an example.
You can specify the following values:
Minimum The smallest amount of memory that you want for deploying a VM. If the value is not available, the deployment does not occur.
Desired The total memory that you want in the VM. The deployment occurs with an amount of memory less than or equal to the wanted amount and greater than or equal to the minimum amount that is specified.
Maximum The largest amount of memory that you allow when you resize a VM. This value is the upper limit to resize a VM dynamically. When it is reached, you must power off the VM, edit the profile, change the maximum to a new value, and restart the VM.
Figure 2-13 Memory region size view on the HMC
Enable Active memory expansion (AME)
AME is an AIX-only feature.
Active Memory Expansion (AME) improves memory usage for virtual machines. It expands the physical memory that is assigned to a virtual machine by allowing the operating system to effectively use more memory than it was assigned. The hypervisor compresses the least used memory pages to achieve the expansion. The increase in available memory is called the expansion factor.
AME expansion factor
The expansion factor can be set only when AME is enabled (checked) in the template. A factor of 1.5 means that 50% memory expansion is provided. You can set the factor from 1 to 10. A factor of 1.0 means that AME is enabled but effectively not expanding memory. If AME is enabled in a VM, the factor can be changed while the VM is running.
 
Note: Compressing memory needs processor resources. If the expansion factor is set too high and the VM needs much more memory than is physically available, this can cause significant processor overhead.
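To estimate a suitable expansion factor for an existing AIX workload before enabling AME, the AIX Active Memory Expansion planning tool can be run inside the VM; a minimal sketch (the 60-minute monitoring duration is an example):
# amepat 60
The generated report models several expansion factors together with their estimated processor cost.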
For more information about AME, consult IBM Knowledge Center:
The attributes discussed so far are on the Details tab of a compute template. The following attributes are on the Miscellaneous tab.
Enable virtual machine remote restart
Users can easily remote restart a VM on another host if the current host fails. This feature enhances the availability of applications in addition to the solutions that are based on IBM PowerHA and LPM.
 
Note: This function is based on the PowerVM simplified remote restart function and is supported only by POWER8 and POWER9 servers. For the requirements of remote restart, see IBM Knowledge Center:
Enable performance information collection
Enable the operating system on a partition to collect performance information.
Availability priority
To avoid shutting down mission-critical workloads when your server firmware unconfigures a failing processor, set availability priorities for the VMs (0 - 255). A VM with a failing processor can acquire a replacement processor from a VM with a lower availability priority. The acquisition of a replacement processor allows the VM with the higher availability priority to continue running after a processor failure.
Compatibility mode
Select the processor compatibility that you need for your VM. Table 2-12 describes each compatibility mode and the servers on which the VMs that use each mode can operate.
Table 2-12 Processor compatibility modes
Processor compatibility mode
Description
Supported servers
POWER6
Use the POWER6 processor compatibility mode to run operating system versions that use all of the standard features of the POWER6 processor.
POWER6 can still be selected, but POWER6 servers are no longer supported by PowerVC.
VMs that use the POWER6 processor compatibility mode can run on servers that are based on POWER6, IBM POWER6+, POWER7, or POWER8 processors.
POWER6+
Use the POWER6+ processor compatibility mode to run operating system versions that use all of the standard features of the POWER6+ processor.
POWER6+ can still be selected, but POWER6+ servers are no longer supported by PowerVC.
VMs that use the POWER6+ processor compatibility mode can run on servers that are based on POWER6+, POWER7, or POWER8 processors.
POWER7, including POWER7+
Use the POWER7 processor compatibility mode to run operating system versions that use all of the standard features of the POWER7 processor.
VMs that use the POWER7 processor compatibility mode can run on servers that are based on POWER7, POWER8, or POWER9 processors.
POWER8
Use the POWER8 processor compatibility mode to run operating system versions that use all of the standard features of the POWER8 processor.
VMs that use the POWER8 processor compatibility mode can run on servers that are based on POWER8 or POWER9 processors.
POWER9_Base
The operating system version on the logical partition is set to use all the standard features of the POWER9 processor enabled by the firmware at level FW910.
VMs that use the POWER9_Base processor compatibility mode can run on POWER9 servers with firmware level FW910 or later.
POWER9
In POWER9 mode, all the features of POWER9 are available, as introduced by Firmware version 940.
New features with FW940 include, for example:
Support for the External Interrupt Virtualization Engine (XIVE)
User Mode NX Acceleration Enablement for PowerVM
Extended support for PowerVM Firmware Secure Boot.
For more information visit the following website:
VMs that use the POWER9 processor compatibility mode can run on POWER9 servers with firmware level FW940 or later.
Default
The default processor compatibility mode is a preferred processor compatibility mode that enables the hypervisor to determine the current mode for the VM. When the preferred mode is set to Default, the hypervisor sets the current mode to the most fully featured mode that is supported by the operating environment. In most cases, this mode is the processor type of the server on which the VM is activated. For example, assume that the preferred mode is set to Default and the VM is running on a POWER9 processor-based server. The operating environment supports the POWER9 processor with all capabilities, so the hypervisor sets the current processor compatibility mode to POWER9.
The servers on which VMs with the preferred processor compatibility mode of Default can run depend on the current processor compatibility mode of the VM. For example, if the hypervisor determines that the current mode is POWER9, the VM can run on servers that are based on POWER9 processors.
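To confirm which compatibility mode a running VM has negotiated, you can check from the guest operating system; a minimal sketch (the exact output wording differs between operating system levels):
On AIX:
# prtconf | grep -i "Processor Implementation Mode"
On Linux:
# grep -m1 "^cpu" /proc/cpuinfo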
Secure boot
Secure boot is a PowerVM feature introduced in POWER9 systems. On supported operating systems, the kernel and applications will have their digital signatures verified before being allowed to run. This attribute can be enabled or disabled.
Physical page table ratio
The Physical Page Table (PPT) ratio. The PPT ratio is the ratio of the maximum memory to the size of the Physical Page Table. It controls the size of the page table that is used by the hypervisor when performing live partition migration. The larger the PPT, the more entries are available for use by the partition for mapping virtual addresses to physical real addresses. If the PPT is set too small and the partition is running workloads, performance in the partition can decline during live partition migration. If this ratio is too large, more memory is reserved for the PPT with no performance gain. A higher ratio reserves less memory than a lower ratio. For example, a ratio of 1:4096 reserves less memory for the PPT than a ratio of 1:64 does.
If a value other than default is selected, hosts are filtered during deploys, resizes, and migrations to exclude hosts that do not support specifying the PPT ratio. However, this filtering is not applied during remote restarts.
This setting is particularly useful for SAP HANA workloads. The recommended setting for SAP HANA workloads is 1:1024.
Check affinity score during migrations
This attribute applies to POWER9 hosts only. When this value is true, the affinity score for the partition on the destination host is calculated before migration. If migrating to the proposed target host would result in a lower affinity score, the migration fails.
The recommended value for SAP HANA workloads is true. The default value is false.
Use SAP HANA recommended Values
If this attribute is checked, the physical page table ratio and the affinity score check cannot be selected. In this case, the physical page table ratio is set to 1:1024 and Check affinity score is set to true.
Persistent memory volumes
Persistent memory is a new concept that is available with POWER9 servers. Persistent memory is presented to the LPAR as volumes that persist when the VM is shut down. Persistent memory volumes can be defined as part of a compute template or individually during the deployment of a new VM.
To add a persistent memory volume, click Add a persistent memory to add a new line. Then, enter the size and check Affinity if you want the physical memory that is used to be close to the physical processors that are used. To delete a persistent memory volume, click the trash symbol.
Special considerations:
 – Users cannot perform migrate or remote restart operations on the virtual machine with persistent memory volume.
 – Virtual machines with persistent volumes are hard pinned to the host.
 – Users cannot unpin a virtual machine with persistent memory volume.
 – PowerVC allows only DRAM based persistent memory volumes.
To use persistent memory for SAP HANA implementations, see the paper SAP HANA and PowerVM Virtual Persistent Memory at the following link:
2.4.5 Tips for deploy template planning
Use the following information when planning deploy templates:
Deploy templates are only available in PowerVC for Private Cloud.
Administrators can configure image deployment properties and save them as a deploy template. A deploy template includes an image that contains the operating system and may also include data volumes. Additionally, a deploy template has the necessary information to create a VM quickly, including the deployment target, storage connectivity group, compute template, and so on. This hides many resource details from self-service users.
For more information on PowerVC Private Cloud see Chapter 5, “IBM PowerVC for Private Cloud” on page 255.
2.5 IBM PowerVC storage access SAN planning
IBM PowerVC and IBM PowerVC for Private Cloud can manage different types of storage volumes, which can be attached to virtual servers. The virtual servers can access their storage by using one of three protocols:
Normal vSCSI
NPIV
vSCSI to SSP
These storage volumes can be integrated devices or pluggable devices. A minimum configuration of the SAN and storage is necessary before IBM PowerVC can use them. IBM PowerVC creates virtual disks on storage devices, but these devices must be set up first, so you must perform the following actions before you use IBM PowerVC:
Configuration of the FC fabric for the IBM PowerVC environment must be planned first, including cable attachments, SAN fabrics, and redundancy. It is always recommended to create at least two independent fabrics to provide SAN redundancy.
IBM PowerVC provides storage for VMs through the VIOS.
The storage is accessed by using NPIV, vSCSI, or an SSP that uses vSCSI. The VIOS and SSP must be configured before IBM PowerVC can manage them.
Be aware of the following information when working with Virtual Fabrics:
PowerVC supports a total of 25 fabrics. Each Virtual Fabric counts toward this total.
The number of Virtual Fabrics that you can create depends on the switch model.
The switch must be at the appropriate fabric operating system level.
The SAN switch administrator user ID and password must be set up. These IDs are used by IBM PowerVC.
The storage controller administrator user ID and password must be set up.
2.5.1 vSCSI storage access
Before using vSCSI attached storage in your environment, ensure that your environment is configured correctly and be aware of the following considerations:
The supported multipathing software driver solutions are the AIX path control module (PCM) and EMC PowerPath. The number of volumes that can be attached to a virtual machine is not limited by PowerVC. However, each virtual machine's operating system limits how many volumes its vSCSI driver supports.
PowerVM supports migration only between Virtual I/O Servers that use the same multipathing software driver solution.
Before using vSCSI attached storage, perform the following steps:
1. Turn off SCSI reserves for volumes being discovered on all the Virtual I/O Servers being used for vSCSI connections. This is required for live partition mobility (LPM) operations and for dual VIOSs (see the verification sketch after this list). For the IBM Storwize family, XIV, IBM System Storage DS8000, and EMC devices that use the AIX PCM, you must run the following command on every Virtual I/O Server where vSCSI operations will be run:
$ chdef -a reserve_policy=no_reserve -c disk -s fcp -t mpioosdisk
2. If using DS8000, you must run the following command on every Virtual I/O Server where vSCSI operations will be run:
$ chdef -a reserve_policy=no_reserve -c disk -s fcp -t aixmpiods8k
3. If using Hitachi ODM attributes in the Virtual I/O Server, you must run the following command on every Virtual I/O Server where vSCSI operations will be run:
$ chdef -a reserve_policy=no_reserve -c disk -s fcp -t htcvspmpio
4. Zoning between the Virtual I/O Server and the storage device ports must be configured to allow vSCSI environments to be imported easily and also to allow you to use many fabrics with vSCSI.
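After the appropriate chdef command has been run, newly discovered disks should pick up the no_reserve policy; a minimal verification sketch for a single disk (hdisk5 is an example device name):
$ lsdev -dev hdisk5 -attr reserve_policy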
Figure 2-14 shows how VMs in IBM PowerVC access storage by using vSCSI.
Figure 2-14 IBM PowerVC storage access by using vSCSI
vSCSI best practices
Be aware of the following best practices when working with virtual SCSI (vSCSI) attached storage:
After a volume is attached to a virtual machine, log onto that virtual machine and discover the disk by running the appropriate configuration manager command per the operating system type:
 – For the IBM AIX operating system, run the following command as root:
# cfgmgr
 – For Linux, run the following command as root:
# ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done
Before you detach a volume from a virtual machine, that virtual machine should stop using the disk device for the volume. If a file system is mounted on the disk device on the virtual machine, that file system should first be unmounted and the disk device closed.
After you detach a volume from the virtual machine, a disk device is left on the virtual machine. This device can be removed by running the appropriate command:
 – For the IBM AIX operating system, run the following command as root:
# rmdev -Rdl hdiskX
 – For Linux, run the following command as root:
# echo 1 > /sys/block/<device-name>/device/delete
It is always recommended to turn off disconnected ports. Using disconnected Fibre Channel ports for vSCSI operations can cause volume attach or detach operations to take longer because the Virtual I/O Servers will try to discover the attached volume on the disconnected ports. To turn off ports, follow these steps:
a. From the PowerVC user interface, select Storages → FC ports.
b. For each port you want to turn off, select None for Connectivity.
c. After making all necessary changes, click Save.
The default setting for Virtual I/O Server pathing is failover, in which the Virtual I/O Server uses a single path until it fails. It is therefore recommended to configure the Virtual I/O Server to use all available paths to send I/O requests by running the following command:
$ chdef -a algorithm=round_robin -c PCM -s friend -t fcpother
For fast failure detection in the Fibre Channel fabric, run the following command on each Virtual I/O Server for each Fibre Channel adapter:
$ chdev -l fscsiX -a fc_err_recov=fast_fail
To detect when Fibre Channel cables are moved, the following command should be run on each Virtual I/O Server for each Fibre Channel adapter:
$ chdev -l fscsiX -a dyntrk=yes
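To confirm the adapter settings after making these changes, you can list the attributes of each Fibre Channel adapter; a minimal sketch (fscsi0 is an example adapter name):
$ lsdev -dev fscsi0 -attr fc_err_recov
$ lsdev -dev fscsi0 -attr dyntrk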
2.5.2 NPIV storage access
PowerVC performs automatic actions to manage the flow of storage from physical storage LUNs to VMs. The following is a list of these actions:
Access to the SAN from VMs is configured on VIOSs by using an FC adapter pair and NPIV, by running the vfcmap command to map the virtual Fibre Channel adapters to physical adapters.
LUNs are provisioned on a supported storage controller. The following storage providers are supported to work with PowerVC:
 – EMC VNX and EMC PowerMax (VMAX)
 – Enterprise Hitachi Block Storage Driver (HBSD)
 – Hitachi Global-Active Device (GAD)
 – IBM System Storage DS8000
 – IBM Storwize family
 – IBM XIV
 – Pure Storage
See the following link for more details:
LUNs are masked to VM virtual FC ports.
SAN zoning is adjusted so VMs have access from their virtual FC ports to the storage controller host ports. Changes in zoning are performed automatically by IBM PowerVC.
LUNs are viewed as logical devices in VMs.
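For illustration only (PowerVC performs the mapping automatically; the vfchost0 and fcs0 device names are placeholders for your environment), an NPIV mapping can be created and the mappings listed from the VIOS restricted shell:
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -all -npiv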
Figure 2-15 shows how VMs in IBM PowerVC access storage by using NPIV.
Figure 2-15 IBM PowerVC storage access by using NPIV
2.5.3 Shared storage pools
Shared storage pools allow a set of SAN volumes on one or more supported storage controllers to be managed as a clustered storage device from any Virtual I/O Server that is in the cluster. Those Virtual I/O Servers share access to aggregated logical volumes and present this aggregated storage space as a single pool of storage, optionally divided into separate tiers for Quality of Service (QoS) needs. PowerVC can manage and allocate storage volumes from this pool as it does from Fibre Channel SAN providers.
A cluster consists of up to 24 Virtual I/O Servers with a shared storage pool that provides distributed storage access to the Virtual I/O Servers in the cluster.
The flow of storage management from physical storage LUNs to VMs in IBM PowerVC is as follows:
The SSP is configured manually by creating the SSP cluster, including the VIOSs in the cluster, and adding disks to the pool.
See the following two links for more details:
IBM PowerVC discovers the SSP when it discovers the VIOSs.
IBM PowerVC can create additional logical units (LUs) in the SSP when it creates a VM.
IBM PowerVC instructs the VIOS to map the SSP LUs for the VIO client partitions that access them through vSCSI devices. The mappings can be displayed with several SSP commands and with the VIOS lsmap command (see the example after this list).
SSPs are supported on hosts that are managed either by HMC or NovaLink.
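A minimal sketch of creating and inspecting an SSP cluster from the VIOS restricted shell (the cluster, pool, disk, and host names are examples; IBM PowerVC discovers the pool after the VIOSs are managed):
$ cluster -create -clustername sspcluster1 -repopvs hdisk2 -spname ssppool1 -sppvs hdisk3 hdisk4 -hostname vios1
$ cluster -addnode -clustername sspcluster1 -hostname vios2
$ lssp -clustername sspcluster1
$ lsmap -all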
Figure 2-16 shows how VMs in IBM PowerVC access SSP storage using vSCSI.
Figure 2-16 IBM PowerVC storage access by using an SSP
2.6 Storage management planning
IBM PowerVC offers a platform for aspects of the enterprise infrastructure to be managed. Supported storage subsystems can be added and managed by IBM PowerVC. Functions such as creating, attaching, detaching, sharing, deleting, and cloning volumes can be performed through IBM PowerVC. IBM PowerVC requires IP connectivity to the storage provider, or to a REST API server that is connected to the storage system, in order to manage the storage. IBM PowerVC uses the OpenStack Cinder block storage service to interact with storage devices.
2.6.1 IBM PowerVC storage terminology
The following sections explain the storage terminology used in PowerVC.
Storage Provider
A system that provides volumes in IBM PowerVC is termed a storage provider. Currently, the supported storage providers in IBM PowerVC 2.0 include the following:
EMC VNX and EMC PowerMax (VMAX)
Enterprise Hitachi Block Storage Driver (HBSD)
Hitachi Global-Active Device (GAD)
IBM System Storage DS8000
IBM FlashSystem family
IBM XIV
Pure Storage
Figure 2-17 shows the managed storage providers, which are accessed by clicking Storage list under Storages in the drop-down list.
Figure 2-17 List of managed storages
Fabric
The storage area network (SAN) switches that link the servers and the SAN storage providers are named fabrics. Fabrics must be initially configured before they can be managed by IBM PowerVC. Figure 2-18 shows the fabrics list page and the managed fabrics.
Figure 2-18 List of managed fabrics
Service node
The service node is the IBM PowerVC management server, which runs an instance of the Cinder backup service. Service nodes are used when exporting and importing images to and from other service nodes. The service node of IBM PowerVC is shown in Figure 2-19.
Figure 2-19 IBM PowerVC Service node
Volumes
These are disks or LUNs that can either be boot or data volumes. These volumes are created from storage pools of storage providers.
Storage templates
Storage templates define the properties of a volume created within IBM PowerVC. These properties include the definition of the type of volume, whether it is a thin, generic or compressed volume. The template also defines properties such as the storage pool the volume is created from, and the port groups to use when zoning the volume to a virtual machine managed by IBM PowerVC. Figure 2-20 shows the properties of a storage template.
Figure 2-20 Storage template of an IBM FlashSystem
Storage connectivity groups
Storage connectivity groups are a concept that is local to IBM PowerVC. They are a logical grouping of resources that connect to the storage. Using storage connectivity groups, redundancy rules at the Virtual I/O Server layer, and optionally at the fabric layer, are defined for connections to the storage provider. Figure 2-21 shows created storage connectivity groups.
Figure 2-21 Storage connectivity groups created in IBM PowerVC
Fibre Channel port tags
Fibre Channel port tags are strings placed on Fibre Channel ports of a host system. Storage connectivity groups can be configured to connect only through tagged Fibre Channel ports. Fibre Channel ports are tagged mainly for two reasons: to separate workloads and to create redundancy at the Fibre Channel adapter, Virtual I/O Server, and fabric layers.
Volume snapshots
Volume snapshots provide a nondisruptive, point-in-time backup of disks. These snapshots can be used to boot virtual machines from that point in time.
Consistency groups
Consistency groups are groups of volumes with consistent data which are used to create a point in time snapshot or used to create a consistent copy of volumes. IBM PowerVC offers three types of consistency groups.
1. Consistent Group Snapshot Enabled
2. Consistent Group Replication Enabled
3. Replication Enabled VMRM
2.6.2 Storage templates
When a storage provider is managed by IBM PowerVC, a default storage template is created. This template can be modified, or more templates can be created to suit a purpose. However, every storage provider must have one default template, which can either be the one created by IBM PowerVC after managing the storage provider or a user-defined template. A storage template speeds up the process of creating volumes by using predefined properties. If a storage template is not used by any volume, you can modify or delete it.
However, if the storage template is in use by existing volumes, you cannot delete the template. The name, whether it is the default template for that storage provider, and the port groups are the properties that can be altered. If you want to change a storage template that is already being used, you can make a copy and update the new version. Preexisting storage volumes managed by IBM PowerVC do not have an associated storage template. However, after the volumes are managed, they can be assigned a storage template by using the set storage template function. The following section provides an example of a storage template for the IBM FlashSystem.
Storage template definition
Figure 2-22 shows the definition of a new storage template.
Figure 2-22 Storage template definition
The following are defined in a storage template:
Template name The template name is the name used to identify the storage template in IBM PowerVC. This name is local to IBM PowerVC.
Storage provider The storage provider refers to the storage system that volumes utilizing this template will be provisioned from. A storage template cannot be used by more than one storage provider.
Template type Three different volume types can be provisioned in IBM PowerVC: Thin, thick (generic) or compressed volumes. Thin and compressed volume types have additional settings that can be set through the storage template, such as:
Real capacity Real capacity is the percentage of the provisioned volume's size that is actually allocated at the time of volume creation. For example, if a 200 GB disk is thin provisioned using the values in Figure 2-22 (a real capacity of 2%), the actual size of the disk after provisioning is 4 GB.
Warning threshold Warning threshold is a percentage of the virtual capacity. Once this percentage is reached, an alert is issued. For example, if a 200 GB disk is thin provisioned using the values in Figure 2-22 on page 50 (warning threshold 80%), a notification appears in the IBM PowerVC logs when the real capacity of the volume reaches 160 GB.
Description Description is an optional text description of the storage template. The purpose of the template or information about the port grouping can be documented here.
Checkboxes on the details tab:
 – The Use this storage template as default ensures that at the time of provisioning volumes from the selected storage provider, this storage template will be used as the default.
 – The Use all available WWPN ensures IBM PowerVC uses all available WWPNs from all of the I/O groups in the storage controller to attach the volume to the VM.
 – The Auto expand ensures thin provisioned volumes do not go offline when they reach their virtual capacity. As a thin-provisioned volume uses more of its capacity, this feature maintains a fixed amount of unused real capacity, which is called the contingency capacity.
 – The Select I/O groups option selects the I/O group to which the volume is added. For the SAN Volume Controller, a maximum of four I/O groups is supported.
 – The Throttle I/O is used to achieve a better distribution of storage controller resources by limiting the volume of I/O processed by the storage controller at various levels. IOPS and bandwidth throttle limits can be set. Throttling can be set at a volume level, host, host cluster and storage pool level. The throttling check box on the storage template sets the throttling at the storage volume level.
 – Flash copy rate controls the rate at which updates are propagated from a source volume to a target volume. IBM FlashCopy® mapping copy rate values can range from 128 KBps (value 10) to 2 GBps (value 150) and can be changed when the FlashCopy mapping is in any state. The default value on IBM FlashSystem systems is 50 (2 MBps). The user can determine the priority that is given to the background copy process and adjust the rate accordingly.
 – Disable fast formatting of disk.
Storage pools Each storage template can have only one storage pool from which volumes are created.
Enable mirroring When checked, you must select another pool for volume mirroring. The volume that is created has one more copy in the mirroring pool. IBM FlashSystem systems clients can use two pools from two different back-end storage devices to provide high availability.
Figure 2-23 shows the powervc_data pool selected with mirroring enabled.
Figure 2-23 Selected storage pool with mirroring enabled
Port groups Storage port groups provide the capability to balance workload across the storage array and improve redundancy when zoning NPIV-attached volumes to virtual machines. When multiple port groups are defined on a storage template, deployed VMs use the port groups in a round-robin fashion; the balance is based on rotation, not on I/O metrics from the array. Figure 2-24 shows two port groups created, the blue and green port groups. If two VMs are deployed using this storage template, VM1 will use one port group and VM2 will use the other port group, creating a balance.
Figure 2-24 Storage port groups page with blue and green port groups created
2.6.3 Storage connectivity group and tags
The basic use of storage connectivity groups is to isolate storage traffic. Storage connectivity groups can be required for various reasons. Popular reasons are as follows:
Ensuring workloads are separated based on function, for example production, test, and development workloads. In this scenario, Fibre Channel ports are tagged based on their use.
Ensuring specific Virtual I/O Server pairs are used when deploying a VM.
Ensuring node or I/O drawer redundancy.
Ensuring virtual machines are deployed on only certain hosts.
Ensuring specific virtual machines can be migrated to only certain hosts.
The default storage connectivity groups are created for NPIV, vSCSI or for shared storage pools (SSP) depending on the situation.
The default storage connectivity group for NPIV connectivity is created when IBM PowerVC first initializes, and as resources are managed in the environment, they are added to this group when applicable.
Default groups that allow vSCSI connectivity are created only when an existing VM with vSCSI connectivity is managed in the environment. A default storage connectivity group that is specific for an SSP is created when the SSP is first automatically managed into the environment. The default storage connectivity groups can be disabled but not deleted.
When a new virtual machine is deployed from an image, you must specify a storage connectivity group. The virtual machine will be deployed to a host that satisfies the storage connectivity group settings.
Fibre Channel port tags are vital in ensuring adapter-level and port-level redundancy. Assume a configuration such as a Power E980 system with Fibre Channel adapters split between two I/O drawers. It is essential to have the virtual machine's virtual Fibre Channel adapters mapped to physical Fibre Channel ports on both I/O drawers in order to provide adapter-level and port-level redundancy. This can be ensured by using Fibre Channel port tags. Figure 2-25 shows two ports tagged with SCG1.
Figure 2-25 Fibre Channel port tags
Assume that fcs0 and fcs1 are on the same Fibre Channel adapter assigned to vios7, and that this adapter is located on a separate I/O drawer relative to the adapter with ports fcs0 and fcs1 assigned to vios8. Creating a storage connectivity group CG1 and restricting all virtual machine deployments to Fibre Channel ports with the tag SCG1 ensures Fibre Channel port-layer and adapter-layer high availability for the deployed virtual machines. This is the foundation for workload separation in IBM PowerVC.
If we tag other ports on another VIOS pair on a different host with the same SCG1 tag and ensure the VIOS pair is part of CG1, then during live migrations, virtual machines deployed with the storage connectivity group CG1 will be migrated to the VIOS pair that is part of CG1 and will be mapped to the ports tagged SCG1. This ensures priority workloads are migrated to priority ports.
Working with initiator port groups
Initiator port groups (IPGs) define the set of VIOS ports to be used for volume attachment when using NPIV storage. This feature provides an IBM PowerVC administrator with further flexibility in specifying VIOS ports for volume attachment. It also allows a virtual machine to scale the number of volume attachments. With IPGs, boot and data volumes can be attached to separate ports. IPGs are defined in storage connectivity groups.
Multiple IPGs can be defined for a single storage connectivity group, but each IPG must contain ports from all VIOS members of the storage connectivity group. During live migration, the ports within the same IPG are selected on the target host. A VIOS port can be a member of only one IPG, and after a virtual machine is associated with a shared storage connectivity group, you cannot edit the IPG. IPGs can be used in the storage template to define an exact path for attached volumes by matching IPGs and storage port groups. This is possible in the storage template for IBM storage systems (DS8000, XIV/A9000), PowerMax, and Hitachi.
2.6.4 Combining storage connectivity groups, tags and storage port groups
By using the storage connectivity groups, Fibre Channel port tags, and storage port group functions, you can tailor them to the specific needs of virtual machines and ensure high availability in different layers of the infrastructure: port, adapter, VIOS, fabric, and the storage port layer. Figure 2-26 shows an example of two virtual machines, one for production and the other for testing. Their paths to storage have been separated by using storage connectivity groups and storage port groups.
Figure 2-26 Utilizing storage connectivity groups and storage port groups example
Using the IBM PowerVC GUI, the following groupings were made to achieve the workload separation:
VIOS 1 and VIOS 2 ports fcs0 and fcs1 have been labelled as SCG1.
VIOS 1 and VIOS 2 ports fcs2 and fcs3 have been labelled as SCG2.
Two storage connectivity groups have been created with the following rules:
 – VIOS redundancy: At least two VIO servers must be mapped to a VM for
storage connectivity.
 – Fabric redundancy: Every fabric per VIOS.
Two storage port groups were defined on the same storage template with the WWPN of fc0 and fc1 belonging to SPG1 and fc2 and fc3 belonging to SPG2. When multiple port groups are defined on a storage template, deployed VMs utilize port groups in an iterative fashion ensuring balance.
2.7 Network management planning
A network represents a set of Layer 2 and 3 network specifications, such as how your network is subdivided with VLANs. It also provides information about the subnet mask, gateway, and other characteristics. When you are deploying an image, you choose one or more existing networks to apply to the new virtual machine. Setting up networks beforehand reduces the amount of information that you need to input during each deployment and helps to ensure a successful deployment.
During deploy time, the network with the Primary Network flag provides the system default gateway address. You can add more networks to segregate and manage the network traffic. If using vNIC, note that a maximum of 32 vNICs can be attached to a virtual machine.
IBM PowerVC supports IP addresses by using hardcoded /etc/hosts or Domain Name Server (DNS)-based host name resolution. IBM PowerVC also supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. For DHCP, an external DHCP server is required to provide the address on the VLANs of the objects that are managed by IBM PowerVC.
2.7.1 Infoblox support
Typically, when a virtual machine is deployed or deleted, you must manually create or delete the DNS record. However, you can optionally register an Infoblox vNIOS server into PowerVC. Infoblox then automatically updates the DNS records when virtual machines are deployed or deleted, including details such as the IP address and the virtual machine name.
Infoblox Version 6.8 and later is supported. Running create_ea_defs.py requires Version 7.2 or later. This command is used to create environment variables in Infoblox. Previous versions of Infoblox require the user interface to create environment variables.
To register Infoblox, go to the Network page, click Configure DNS Authority and add network details. The details are saved as a zone name in the Infoblox appliance. Make sure that you have valid Infoblox server credentials for configuring the DNS authority.
 
Note: The value that is provided for hostname or IP address must match the name of the grid member in the Infoblox DNS configuration authority.
You need to configure several attributes that act as environment variables in Infoblox to generate zones and DNS records. Set the following values in the Members section of the Infoblox appliance:
Default Domain Name Pattern: {network_name} - Neutron network name that is used to create the authoritative zone if it is not present in Infoblox.
Default Host Name Pattern: {instance_name} - Virtual machine name that is used to create a DNS record.
DNS Support: True.
Admin Network Deletion: True - Deletes the zone when a network is deleted.
 
Note: To automatically configure the above settings, run $ python create_ea_defs.py from the PowerVC management system. This file is in the following directory:
/usr/lib/python2.7/site-packages/networking_infoblox/tools/create_ea_defs.py
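As a sketch, the tool can be run from the PowerVC management server after sourcing the PowerVC environment file and providing valid credentials (the exact invocation can vary with your installation):
# source /opt/ibm/powervc/powervcrc
# python /usr/lib/python2.7/site-packages/networking_infoblox/tools/create_ea_defs.py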
When you deploy a virtual machine, PowerVC adds the virtual machine name as the hostname, and when you delete a virtual machine, PowerVC deletes the DNS record in the Infoblox appliance. If two virtual machines have identical names, Infoblox creates a duplicate record with a separate IP address allotted to each of the virtual machines. In this situation, DNS resolution might not work as expected.
Considerations
You need to understand these considerations before adding Infoblox into PowerVC:
1. When you migrate a virtual machine, DNS reconfiguration is not needed. Because the hostname is set by using cloud-init during the initial deployment on the source host, the same hostname is used on the destination host.
2. During upgrade or update operations, DNS configuration is not required for existing virtual machines. DNS configuration is only performed for virtual machines that are deployed post DNS configuration.
3. When a virtual machine is unmanaged, no DNS record is updated.
4. When a virtual machine is managed or remanaged, you must manually update the IP address to create a DNS record.
Limitations
Consider the following limitations while working with vNICs, performing virtual machine activities, or adding a network.
1. When a vNIC is attached or detached from a virtual machine, no DNS records are created or removed.
2. After registering Infoblox support, you should run sync_neutron_to_infoblox.py to add existing networks to Infoblox. Currently, when you run the sync tool, zones are not created because Infoblox does not support Keystone v3. In this case, DNS records will not be written in Infoblox when a virtual machine is deployed.
3. You cannot create a duplicate network with existing subnet details.
4. When you delete virtual machine or network details on PowerVC, Infoblox does not delete the DNS records if they were manually created.
2.7.2 Multiple network planning
Each virtual machine that you deploy must have one or more networks. Using multiple networks allows you to separate traffic. If you have multiple projects, you can create shared networks from the Networks page.
PowerVM Enterprise Edition generally uses three types of networks when you are deploying virtual machines.
Data network
A data network provides the route over which workload traffic is sent. At least one data network is required for each virtual machine, and more than one data network is allowed.
Management network
A management network is optional but highly recommended because it provides a higher level of function with the virtual machine. A management network provides the Remote Monitoring and Control (RMC) connection between the management console and the client LPAR. Several PowerVC features, such as live migration and dynamic LPAR addition or removal of NICs, require an active RMC connection between the management console (HMC or the NovaLink partition) and the virtual machine.
In a NovaLink environment, the system will try to use an internal virtual switch (named MGMTSWITCH) to provide the RMC connections. This internal management network requires that images be created with rsct 3.2.1.0-15216 or later. Virtual machines are not required to have a dedicated management network, but having one provides beneficial advanced features.
PowerVC provides the ability for you to connect to a management network, but you must first set up the networking on the switches and the shared Ethernet adapter to support it.
Live Partition Migration network
This optional network provides the route over which migration data is sent from one host to another. Creating a separate network for migration data helps you to control and prioritize your network traffic. For example, you can specify a higher or lower priority for the migration data as compared with standard data or management traffic. If you do not want to use a separate network for Live Partition Migration (LPM), you can reuse an existing data network connection or a management network connection.
2.7.3 Shared Ethernet Adapter planning
If you plan to use Shared Ethernet Adapters for your virtual machine networking, the Shared Ethernet Adapters must be created outside of PowerVC. The configuration of each host's Shared Ethernet Adapters determines how networks treat each host.
When you create a network in PowerVC, a Shared Ethernet Adapter is automatically chosen from each registered host. The Shared Ethernet Adapter is chosen based on the VLAN that you specified when you defined the network. You can always change the Shared Ethernet Adapter to which a network maps or remove the mapping altogether. However, consider the automatic assignment when you set up your networks if you do not want to change many settings later.
For each host, you can change the network mapping of the Shared Ethernet Adapter. You can also opt to not map the network to any of the host's Shared Ethernet Adapters. If you want virtual machines that use a network to not reside on a particular host, do not assign to the network any Shared Ethernet Adapters for that host.
The Shared Ethernet Adapter that is chosen as the default is the one with the same network VLAN as the new network. If no such Shared Ethernet Adapter exists, the adapter with the lowest primary VLAN ID (PVID) that is in an available state is chosen.
Note the following regarding Shared Ethernet Adapters:
If there are no usable Shared Ethernet Adapters on any host for a specific VLAN ID, you are directed to choose a different VLAN ID.
If a Shared Ethernet Adapter is set to Do Not Use, you can select it. However, you cannot use it in a deployment until it is no longer set to Do Not Use.
If the status for the Shared Ethernet Adapter is Unavailable, the RMC connection may be down. The connection must be fixed before you can select this adapter.
Certain configurations might assure the assignment of a particular Shared Ethernet Adapter to a network. For example, assume that you create a new network in PowerVC and choose the PVID of the Shared Ethernet Adapter or one of the additional VLANs of the primary virtual Ethernet adapter as the VLAN. In this case, the chosen Shared Ethernet Adapter must back the network; no other options are made available.
Note these considerations if you change the Shared Ethernet Adapters after the initial configuration:
If you create a network, deploy virtual machines to use it, and then change the Shared Ethernet Adapter to which that network is mapped, your workloads are impacted. At a minimum, the network experiences a short outage while the reconfiguration takes place.
If you modify a network to use a different Shared Ethernet Adapter and that existing VLAN is already deployed by other networks, those other networks also move to the new adapter. To split a single VLAN across multiple Shared Ethernet Adapters, you must have separate virtual switches assigned to each of those Shared Ethernet Adapters.
Multiple servers and switches
In PowerVC, a host can have multiple dual VIOS pairs. A dual VIOS setup promotes redundancy, accessibility, and serviceability. It can enhance virtual I/O client partition performance and allows you to easily expand hardware or add new functions. It also offers load balancing capabilities for Multipath I/O (MPIO) and multiple Shared Ethernet Adapter configurations.
2.7.4 Planning Single Root I/O Virtualization networks
You can deploy virtual machines that leverage Single Root Input/Output Virtualization (SR-IOV). SR-IOV supports pass-through of Ethernet data from guest virtual machines directly to hardware. This improves performance by allowing data to pass directly from guest virtual machines to physical adapters with minimal processing between, allowing a guest virtual machine to achieve near wire-speed Ethernet performance. SR-IOV also supports some additional configuration options, such as Quality of Service (QoS) for enforcing bandwidth allocations to guest virtual machines.
A given SR-IOV adapter can have multiple physical ports, connected to external switches. Each physical port is divided into logical ports. These logical ports are connected to a virtual machine for network connectivity. These logical ports allow a single physical hardware device to appear as multiple devices to guest virtual machines.
SR-IOV versus Shared Ethernet adapters
In PowerVC, without SR-IOV, you have an SEA and a virtual Ethernet adapter on the Virtual I/O Server (VIOS). These adapters connect to a physical network adapter and to a client network adapter on the virtual machine. This setup allows you to segment your network using VLAN IDs and to virtualize your network hardware, providing migration and failover support. An SEA environment also supports higher virtual machine density. However, all network traffic is routed through the SEA on the VIOS, which adds processing cycles.
SR-IOV with vNIC provides a separation of the control plane and data plane for Ethernet within a host. Therefore, an SR-IOV environment performs better because the VIOS is only used to set up and manage the communication channel and provide failover and migration support. SR-IOV does not scale to hundreds or thousands of virtual machines per host. Instead, it is used to set up a few very fast virtual machines.
You can use VLAN IDs to segment your network whether you are using SR-IOV or SEA, but virtual machines cannot be migrated in an SR-IOV environment.
Redundancy support
When deploying a virtual machine that uses SR-IOV networks, PowerVC creates a vNIC adapter for that virtual machine. If you select Redundant adapter when deploying a virtual machine, the vNIC adapter created for the virtual machine includes multiple logical ports. These logical ports are isolated in order to provide redundancy by using multiple physical ports, multiple SR-IOV adapters, and multiple VIOSs if available.
Quality of service
When you create an SR-IOV network, you can use the Virtual NIC capacity field to specify the minimum bandwidth of the network. If that capacity is not available when deploying a virtual machine, the deploy is not allowed.
Requirements
The requirements for implementing SR-IOV networks are:
POWER8 or later system.
The adapter must be in SR-IOV mode.
 – On NovaLink hosts, use pvmctl sriov to view and update the adapter mode.
 – On HMC hosts, in the HMC user interface, modify the SR-IOV adapter, then choose Shared mode.
The operating system on the virtual machine must be supported.
 
Note: Ensure that the slot on your server is capable of supporting an SR-IOV adapter.
See more details about SR-IOV and vNIC adapters in the following links:
Restrictions
You cannot directly connect the SR-IOV adapter, physical ports, or logical ports to a
virtual machine.
Activation engine is not supported.
2.8 Planning users and groups
The following sections describe the planning that is required for users and groups.
2.8.1 User management planning
When you install IBM PowerVC, it is configured to use the security features of the operating system on the management host by default. This configuration sets the root operating system user account as the only initially available account with access to the IBM PowerVC server.
Upon installation of IBM PowerVC, a new operating system group named powervc-filter is created. The root user account gets added to this group by default. IBM PowerVC has visibility only to the user and group accounts that are part of the powervc-filter group. The other operating system users and groups are not exposed to PowerVC unless they are added to the powervc-filter group.
As a preferred practice, create at least one system administrator user account to replace the root user account as the IBM PowerVC management administrator. For more information, see section 4.17.1, “Adding user accounts” on page 247. After a new administrator ID is defined, remove the IBM PowerVC administrator rights from the root user ID, as explained in 4.17.3, “Disabling the root user account from IBM PowerVC” on page 250.
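A minimal sketch of creating an operating system user and making it visible to IBM PowerVC (pvcadmin is an example user name; assign it a PowerVC role afterward from the user interface or CLI):
# useradd pvcadmin
# passwd pvcadmin
# usermod -aG powervc-filter pvcadmin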
 
Important: IBM PowerVC also requires user IDs that are defined in /etc/passwd and that must not be modified, such as nova, neutron, keystone, and cinder. All of these users are used by OpenStack services and must not be changed or deleted.
For security purposes, you cannot connect remotely to these user IDs. These users are configured with the login shell /sbin/nologin.
User account planning is important to define standard accounts and the process and requirements for managing these accounts. An IBM PowerVC management host can take advantage of user accounts that are managed by the Linux operating system security tools or can be configured to use the services that are provided by LDAP.
IBM PowerVC does not create users or groups in the underlying operating system. PowerVC backups include information about the configured user and group filters. If operating system users and groups are configured differently when the backup is restored, it may lead to administration issues.
Table 2-13 describes the available attributes to use when working with user and group filters.
Table 2-13 User and Group filters
User filter: Limits which users are visible to PowerVC. The default is “(memberOf=powervc-filter)”.
Group filter: Limits which groups are visible to PowerVC. The default is “(name=powervc-filter)”.
A freshly installed PowerVC system displays the default values, as shown in Example 2-1.
Example 2-1 Default user and group filter settings
# powervc-config identity repository
Type: os
User filter: (memberOf=powervc-filter)
Group filter: (name=powervc-filter)
2.8.2 Projects and role management planning
This section describes the settings that are required for each user and group to operate and perform actions and work with projects.
Managing projects
A project, sometimes referred to as a tenant, is a unit of ownership. VMs, volumes, images, and networks belong to a specific project. Only users with a role assignment for a given project can work with the resources belonging to that project. At the time of installation, the ibm-default project is created, but IBM PowerVC also supports the creation of more projects for resource segregation.
To work with projects, an admin can log in to the ibm-default project and click Projects from the configuration page.
You can also use the openstack project command to manage projects as needed. As an OpenStack administrator, you can create, delete, list, set, and show projects:
Create a project by running the following command:
openstack project create project-name
Delete an existing project by running the following command:
openstack project delete project-name
List projects by running the following command:
openstack project list
Set project properties (name or description) by running the following commands:
openstack project set --name <name> project-name
openstack project set --description <description> project-name
Display project details by running the following command:
openstack project show project-name
After you create a project, you must grant at least one user a role on that project.
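As a sketch (the project, user, and role names are examples), a role can be granted with the standard OpenStack CLI from the management server after sourcing the PowerVC environment file:
# source /opt/ibm/powervc/powervcrc
# openstack role add --project project-name --user username deployer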
Project quotas
Project quotas set limits on the various types of resources within each project. Administrators can edit, enable, and disable the quotas. Project quotas are set from the Project quotas tab of the user interface in the Dashboard menu.
 
Notes:
When a quota is disabled, that resource is unlimited.
You can set a quota to be smaller than its current value. The quota is considered exceeded in this case. PowerVC does not change the effective resource usage, but subsequent requests for resources will fail.
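These quotas are managed from the PowerVC user interface. As a sketch, the underlying OpenStack quotas can also be displayed and adjusted with the standard CLI (the values and project name are examples, and not every PowerVC-specific quota is necessarily exposed through these commands):
# openstack quota show project-name
# openstack quota set --instances 200 --cores 800 project-name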
Table 2-14 provides the quotas that can be set per project.
Table 2-14 Available quotas (quota: description; default value)
Collocation Rules: The total number of collocation rules allowed. Default: 25
External IP addresses: The maximum number of external (floating) IP addresses that can be assigned in the project. Default: 100
Injected files: The total number of injected files allowed for a project. The data is injected at the time of VM provisioning. Default: 5
Injected File Content (Bytes): The maximum size of each injected file that is allowed in the project. Default: 10,240
Injected File Path (Bytes): The maximum length of each injected file path. Default: 255
Memory (GB): The total memory that can be used across all virtual machines in the project. Default: 40000 GB
Per Volume (GB): The maximum amount of storage that can be allocated to each volume in the project, in GB. Default: Unlimited (disabled)
Processing Units: The total number of entitled processing units of all virtual machines within the project. Default: 5500
Snapshots: The total number of volume snapshots that are allowed in the project. Default: 100,000
Virtual Machines: The total number of virtual machines that are allowed in the project. Default: 5500
Virtual Processors: The total number of virtual processors (cores) allowed across all virtual machines in the project. Default: 55000
Volume Backup (GB): The total amount of storage for volume backups allowed per project. Default: 15,000
Volume Backups: The number of volume backups allowed per project. Default: 30
Volume Groups: The number of volume groups allowed per project. Default: 200
Volume Storage (GB): The total amount of disk space that can be used across all volumes within the project. Default: 10,000,000
Volumes: The total number of volumes that can be part of the project. Default: 100,000
Managing roles
Roles are assigned to a user or group. They are inherited by all users in that group. A user or group can have more than one role, allowing them to perform any action that at least one of their roles allows.
Roles are used to specify what actions users can perform. Table 2-15 shows the available roles and actions each role is allowed to perform.
Table 2-15 IBM PowerVC Security Roles
Administrator (admin)
Users with this role can perform all tasks and have access to all resources.
Administrator assistant (admin_assist)
Users with this role can perform create and edit tasks but do not have privileges to perform remove or delete operations. The admin_assist user can perform all virtual machine, image, and volume lifecycle operations except Delete.
Deployer (deployer)
Users with this role can perform the following tasks:
 – Deploying a virtual machine from an image
 – Viewing all resources except users and groups
Image manager (image_manager)
Users with this role can perform the following tasks:
 – Creating, capturing, importing, or deleting an image
 – Editing the description of an image
 – Viewing all resources except users and groups
Storage manager (storage_manager)
Users with this role can perform the following tasks:
 – Creating, deleting, or resizing a volume
 – Viewing all resources except users and groups
Viewer (viewer)
Users with this role can view resources and the properties of resources, but cannot perform any tasks. They cannot view users and groups.
Virtual Machine Manager (vm_manager)
Users with this role can perform the following tasks:
 – Deploying a virtual machine from an image
 – Deleting, resizing, starting, stopping, or restarting a virtual machine
 – Attaching or detaching a volume
 – Taking a snapshot of a volume and restoring it
 – Attaching or detaching a network interface
 – Editing details of a deployed virtual machine
 – Viewing all resources except users and groups
 – Creating, attaching, detaching, and deleting floating IP addresses
Virtual machine user (vm_user)
Users with this role can perform the following tasks:
 – Starting, stopping, or restarting a virtual machine
 – Viewing all resources except users and groups
Role assignments are specific to a project. Users can log in to only one project at a time in the IBM PowerVC user interface. If they have a role on multiple projects, they can switch to one of those other projects without having to log out and log back in. When users log in to a project, they see only the resources, messages, and other information that belong to that project. They cannot see or manage resources that belong to a project where they have no role assignment. There is one exception to this rule: the admin role can operate across projects in many cases. Be mindful of this when handing out admin role assignments.
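As a sketch (ibm-default is the project created at installation), the role assignments for a project can be reviewed with the standard OpenStack CLI:
# openstack role assignment list --project ibm-default --names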
 
 
 
 
Important: OpenStack does not support moving resources from one project to another project. You can move volumes by unmanaging them and then remanaging them in the new project, but it is not possible to perform the same action for VMs because the network on which that VM depends is tied to the original project.
2.9 Security management planning
IBM PowerVC provides security services that support a secure environment and, in particular, the following security features:
Starting with IBM PowerVC Version 2.0.0, an additional authentication mechanism called Time-based One-Time Password (TOTP) has been added to provide enhanced security for users logging in to IBM PowerVC. For a user to be authenticated, TOTP along with a password must be provided by the user.
PowerVC uses the HSTS, X-XSS-Protection, and X-Content-Type-Options HTTP security response headers.
Signing packages adds an extra level of trustworthiness to a product. IBM PowerVC ships both RPM packages and Debian packages with its installer.
2.9.1 Ports that are used by IBM PowerVC
Information about the ports that are used by IBM PowerVC management hosts for inbound and outbound traffic is on the following IBM Knowledge Center websites:
Ports used on the management server
Ports used by IBM PowerVC on the management server
Ports used by PowerVM NovaLink managed host
2.9.2 Providing a certificate
The IBM PowerVC management server uses a self-signed X.509 certificate, by default, to secure its web interface and REST APIs. Because self-signed certificates can be created by anyone, they are not automatically trusted by clients' web browsers. To improve security, a certificate signed by a certificate authority should be used to replace the default self-signed certificate. Expiring or revoked certificates also need to be replaced.
The web interface and REST APIs use the private key and certificate at the following locations:
/etc/pki/tls/private/powervc.key
/etc/pki/tls/certs/powervc.crt
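Before replacing the certificate, you can check the subject and expiration date of the current one with OpenSSL; a minimal sketch using the default file location:
# openssl x509 -in /etc/pki/tls/certs/powervc.crt -noout -subject -issuer -enddate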
The process to replace the existing certificates can be found here:
2.10 Product information
For additional planning information, see the following resources.
IBM support
IBM Support is your gateway to technical support tools and resources that are designed to help you save time and simplify support. IBM Support can help you find answers to questions, download fixes, troubleshoot, submit and track problem cases, and build skills:
Learn and stay informed about the transformation of IBM Support, including new tools, new processes, and new capabilities, by going to the IBM Support Insider:
IBM Support Guide
IBM Support gives you an advantage by helping you drive success with your IBM products and services across cloud, on-premises, and hybrid cloud platforms:
Offering Information
Product information is available on the IBM Offering Information website:
Packaging
This offering is delivered through IBM My Entitled System Support Site (ESS) as an electronic download. There is no physical media.
It is possible to obtain the electronic version at IBM My Entitled System Support Site:
Click My entitled software in the left pane, and then click My entitled software.
Software maintenance
The IBM Agreement for Acquisition of Software Maintenance (Z125-6011) applies for Subscription and Support and does not require client signatures.
Licenses under the IBM International Program License Agreement (IPLA) provide for support with ongoing access to releases and versions of the program. IBM includes one year of Software Subscription and Support (also referred to as Software Maintenance) with the initial license acquisition of each program acquired. The initial period of Software Subscription and Support can be extended by the purchase of a renewal option, if available. Two charges apply: a one-time license charge for use of the program and an annual renewable charge for the enhanced support that includes telephone assistance (voice support for defects during normal business hours), as well as access to updates, releases, and versions of the program as long as support is in effect.
IBM Enterprise Support and Preferred Care
IBM System Storage or Power Systems Hardware and Software Support Services provide around-the-clock integrated hardware and software services backed by our global support infrastructure, product expertise and proprietary analytics tools:
Licensing
IBM International Program License Agreement including the License Information document and Proof of Entitlement (PoE) govern your use of the program. PoEs are required for all authorized use.
This software license includes Software Subscription and Support (also referred to as Software Maintenance).
Hardware requirements
Any IBM system that includes an IBM POWER7+ or later processor.
Software requirements
PowerVM Standard Edition (5765-VS3) for basic functions, and PowerVM Enterprise Edition (5765-VE3) or PowerVM PowerLinux Edition (5765-VL3) for full function.
Firmware v8.2, or higher, is required for the new Remote Restart function for PowerVM that is managed by PowerVC.
The program's specifications and specified operating environment information may be found in documentation accompanying the program, if available, such as a readme file, or other information published by IBM, such as an announcement letter. Documentation and other program content may be supplied only in the English language.