Planning
In this chapter, we provide guidelines for planning a PowerHA SystemMirror 7.1.2 Enterprise Edition deployment. Planning is an important step in any PowerHA cluster deployment. We focus on several infrastructure and software requirements for deploying a PowerHA SystemMirror 7.1.2 Enterprise Edition cluster. For general planning purposes, refer also to the PowerHA SystemMirror planning publication for the version you are using at:
http://pic.dhe.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.powerha.plangd/hacmpplangd_pdf.pdf
This chapter contains the following topics:
3.1 Infrastructure considerations and support
In this section, we detail several aspects regarding the infrastructure requirements for PowerHA SystemMirror 7.1.2 Enterprise Edition.
3.1.1 Hostname and node name
Unlike earlier versions, PowerHA SystemMirror 7.1.1 and later have strict rules about which interface can be used as the hostname, due to the new CAA layer requirements.
 
Important:
The hostname cannot be an alias in the /etc/hosts file.
Name resolution for the hostname must work in both directions (forward and reverse); therefore, only a limited set of characters can be used.
The IP address that belongs to the hostname must be reachable on the server, even when PowerHA is down.
The hostname cannot be a service address.
The hostname cannot be an address located on a network which is defined as private in PowerHA.
The hostname, the CAA node name, and the “communication path to a node” must be the same.
By default, the PowerHA node name, the CAA node name, and the “communication path to a node” are set to the same name.
The hostname and the PowerHA nodename can be different.
The hostname cannot be changed after the cluster configuration.
The rules leave the base addresses and the persistent address as candidates for the hostname. You can use the persistent address as the hostname only if you set up the persistent alias manually before you configure the cluster topology.
On the Domain Name System (DNS) server, you can specify any “external” name for the service addresses so that the clients can use this name when they connect to the application that runs in the cluster.
 
Note: We could not configure a PowerHA 7.1.1 cluster because of a mismatch in how the hostname was written. The hostname was defined using lowercase characters, but in the /etc/hosts file it was written in uppercase. All the standard TCP/IP commands worked fine, but the cluster setup failed. After we corrected the /etc/hosts file, the cluster setup worked.
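Before configuring the cluster, you can quickly verify the resolution rules listed above. The following is a minimal sketch; the hostname and IP address are placeholders for your environment:

host $(hostname)                  # forward lookup, for example svca1 -> 192.168.100.60
host 192.168.100.60               # reverse lookup must return the same hostname
grep -i "$(hostname)" /etc/hosts  # the hostname must be a primary entry, not only an alias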
3.1.2 Network considerations
The network infrastructure plays a major role in defining the cluster configuration. When using sites, several considerations apply: the network technologies used for node communication within and between the sites, the network bandwidth and latency when replicating data over TCP/IP with GLVM, the IP segments for each site, the firewall, and the DNS configurations. The network configuration also dictates what communication paths are used for heartbeating.
You can find general aspects regarding the networking in an environment using PowerHA Enterprise Edition in Exploiting IBM PowerHA SystemMirror Enterprise Edition, SG24-7841.
Cluster multicast IP address and PowerHA site configuration
Cluster monitoring and communication require that multicast communication be used. Multicast consists of sending messages or information from the source to a group of hosts simultaneously in a single transmission. This kind of communication uses network infrastructure very efficiently because the source sends a packet only once, even if it needs to be delivered to a large number of receivers, and other nodes in the network replicate the packet to reach multiple receivers only when necessary. Cluster multicast communication is implemented at the CAA level. A multicast address is also known as a class D address. Every IP datagram whose destination address starts with 1110 is an IP multicast datagram. The remaining 28 bits identify the multicast group on which the datagram is sent.
Starting with PowerHA SystemMirror 7.1.2, sites can be used in the cluster configuration in both the Standard and Enterprise editions. In this version, only two sites can be configured in a cluster. When using sites, the multicast communication depends on the cluster type:
Stretched clusters use multicast communication between all nodes across all sites. In this case, a single multicast IP address is used in the cluster. CAA is unaware of the PowerHA site definition.
Linked clusters use multicast communication only within a site and the unicast protocol for communication between the cluster nodes across the sites. Unlike multicast, unicast communication involves a one-to-one communication path with one sender and one receiver. This cluster type uses two multicast IP group addresses, one in each site. For the particular case of a linked cluster with two nodes, one at each site, the multicast addresses are defined, but the node-to-node communication uses only unicast.
For PowerHA operation, the network infrastructure must handle the IP multicast traffic properly:
Enable multicast traffic on all switches used by the cluster nodes.
Check the available multicast traffic IP address allocation.
Ensure that the multicast traffic is properly forwarded by the network infrastructure (firewalls, routers) between the cluster nodes, according to your cluster type requirements (stretched or linked).
You can specify the multicast address when you create the cluster, or you can have it generated automatically when you synchronize the initial cluster configuration. The following examples detail the multicast IP address generation by the CAA software for the two types of PowerHA clusters:
A stretched cluster. The multicast IP address is generated as in the case of a cluster without sites. CAA generates the multicast address based on a local IP address, associated with the hostname of a cluster node, by replacing the first byte of the IP address with 228. The address is generated during the first synchronization, at the time of CAA cluster creation. For example, consider a three-node cluster with a single IP segment spanning both sites:
Site1:
      svca1: 192.168.100.60 (Site 1)
      svca2: 192.168.100.61 (Site 1)
Site2:
      svcb1: 192.168.100.62 (Site 2)
The default generated multicast IP is 228.168.100.60.
You can check the generated multicast address, after CAA cluster creation, using lscluster -c. See Example 3-1.
Example 3-1 Node IP addresses and multicast IP - stretched cluster
root@svca1:/>lscluster -c
Cluster Name: ihs_cluster
Cluster UUID: 73a14dec-42d5-11e2-9986-7a40cdea1803
Number of nodes in cluster = 3
Cluster ID for node svca1: 1
Primary IP address for node svca1: 192.168.100.60
Cluster ID for node svca2: 2
Primary IP address for node svca2: 192.168.100.61
Cluster ID for node svcb1: 3
Primary IP address for node svcb1: 192.168.100.62
Number of disks in cluster = 1
Disk = hdisk2 UUID = ffe72932-2d01-1307-eace-fc7b141228ed cluster_major = 0 cluster_minor = 1
Multicast for site LOCAL: IPv4 228.168.100.60 IPv6 ff05::e4a8:643c
In our test environment we observed that the IP address used for building the multicast IP is the IP address of the node initializing the CAA cluster configuration, at the time of first synchronization of the PowerHA cluster configuration (node svca1 in our case).
A linked cluster. Two multicast IP addresses are generated, one at each site. For Site1, the multicast IP is generated as in the stretched cluster case. The second multicast IP address is generated using the first three bytes of the multicast IP in Site1 and the last byte of the IP address of one of the nodes in Site2. Site1 and Site2 are associated according to their IDs in the HACMPsite ODM class. For example, for a cluster with four nodes, two in each site and one IP segment in each site:
Site1:
    glvma1:192.168.100.55
    glvma2:192.168.100.56
Site2:
    glvmb1:10.10.100.57
    glvmb2:10.10.100.58
The generated multicast IP addresses are:
Site1: 228.168.100.56
Site2: 228.168.100.57
Example 3-2 shows the output of lscluster -c command for this cluster.
Example 3-2 Node IP addresses and multicast IP - linked cluster
# lscluster -c
Cluster Name: glvma2_cluster
Cluster UUID: 6609a6cc-4512-11e2-9fce-7a40cf0aea03
Number of nodes in cluster = 4
Cluster ID for node glvma1: 1
Primary IP address for node glvma1: 192.168.100.55
Cluster ID for node glvma2: 3
Primary IP address for node glvma2: 192.168.100.56
Cluster ID for node glvmb1: 4
Primary IP address for node glvmb1: 10.10.100.57
Cluster ID for node glvmb2: 8
Primary IP address for node glvmb2: 10.10.100.58
Number of disks in cluster = 2
Disk = hdisk2 UUID = 565cf144-52bd-b73d-43ea-4375b50d290b cluster_major = 0 cluster_minor = 1
Disk = UUID = c27d8171-6da8-728a-7ae8-ee9d455b95df cluster_major = 0 cluster_minor = 2
Multicast for site siteA: IPv4 228.168.100.56 IPv6 ff05::e4a8:6438
Multicast for site siteB: IPv4 228.168.100.57 IPv6 ff05::e4a8:6439
 
In our case, we generated the CAA cluster by creating the PowerHA cluster definition on node glvma2 and synchronizing the configuration. We observed that the multicast IP address in the second site is the one associated with the node having the lowest ID in the second site, as indicated by the lscluster command in Example 3-2 on page 32.
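As an illustration only, the following shell sketch (not a CAA tool, and not the exact algorithm CAA uses internally) mimics the first-site derivation described above; the node IP address is an example:

# Replace the first octet of the node IP address with 228
NODE_IP=192.168.100.60
echo $NODE_IP | awk -F. '{ printf "228.%s.%s.%s\n", $2, $3, $4 }'
# -> 228.168.100.60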
You can specify the multicast IP addresses at the time of creating the PowerHA cluster rather than having them generated automatically. In that case, do not use the following multicast groups:
224.0.0.1 This is the all-hosts group. If you ping that group, all multicast-capable hosts on the network should answer because every multicast-capable host must join that group at startup on all its multicast-capable interfaces.
224.0.0.2 This is the all-routers group. All multicast routers must join that group on all their multicast-capable interfaces.
224.0.0.4 This is the all DVMRP (Distance Vector Multicast Routing Protocol) routers group.
224.0.0.5 This is the all OSPF (Open Shortest Path First) routers group.
224.0.0.13 This is the all PIM (Protocol Independent Multicast) routers group.
 
Note: The range 224.0.0.0 to 224.0.0.255 is reserved for local purposes such as administrative and maintenance tasks, and data that they receive is never forwarded by multicast routers. Similarly, the range 239.0.0.0 to 239.255.255.255 is reserved for administrative purposes. These special multicast groups are regularly published in the Assigned Numbers RFC. You can check the RFCs published on the Internet at:
http://www.ietf.org/rfc.html
To verify whether nodes in your environment support multicast-based communication, use the mping command, which is part of the CAA framework in AIX, and is included in the bos.cluster.rte fileset. The mping command options are shown in Figure 3-1 on page 34.
# mping
mping version 1.1
Must specify exactly one of -r or -s
 
Usage: mping -r|-s [-a address] [-p port] [-t ttl] [-c|-n pings] [-v]
-r|-s Receiver or sender. Required argument,
and are mutually exclusive
-a address Multicast address to listen/send on,
overrides the default of 227.1.1.1.
This can accept either an IPv4 or IPv6 address in decimal notation.
-p port Multicast port to listen/send on,
overrides the default of 4098.
-t ttl Multicast Time-To-Live to send,
overrides the default of 1.
-n|-c pings The number of pings to send,
overrides the default of 5.
-6 Sets the default multicast group address to the IPv6 default of ff05::7F01:0101.
-v Verbose mode. Declare multiple times to increase verbosity.
-?|-h This message.
Figure 3-1 mping command options
For an example of using the mping command, we used a stretched cluster with three nodes (nodea1, nodea2, and nodeb1) and the multicast group address 228.100.100.10. We sent five packets from nodea1 and received them on nodea2 and nodeb1, using the multicast IP address. We first started mping in receive mode on nodes nodea2 and nodeb1 using the following command options:
mping -r -v -a 228.100.100.10
Then we started the sender (nodea1) using the following command options:
mping -s -v -c 5 -a 228.100.100.10
The whole picture of the test is illustrated in Figure 3-2 on page 35.
Figure 3-2 mping example
IPv6 address planning
This section explains the IPv6 concepts and provides details for planning a PowerHA cluster using IPv6 addresses. IPv6 support is available for PowerHA 7.1.2, and later versions.
IPv6 address format
IPv6 increases the IP address size from 32 bits to 128 bits, thereby supporting more levels of addressing hierarchy, a much greater number of addressable nodes, and simpler auto configuration of addresses.
Figure 3-3 shows the basic format for global unicast IPv6 addresses.
Figure 3-3 IPv6 address format
IPv6 addresses contain three parts:
Global Routing Prefix
The first 48 bits (in general) form the global routing prefix, which is assigned for global routing.
Subnet ID
The next 16 bits form a freely configurable field used to define subnets within a site.
Interface ID
The last 64 bits identify an individual network interface (device).
Subnet prefix considerations
The subnet prefix, which corresponds to the subnet mask for IPv4, is a combination of the global routing prefix and subnet ID. Although you are free to have longer subnet prefixes, in general 64 bits is a suitable length. IPv6 functions such as Router Advertisement are designed and assumed to use the 64-bit length subnet prefix. Also, the 16-bit subnet ID field allows 65,536 subnets, which are usually enough for general purposes.
IPv6 address considerations
The three basic IPv6 addresses are the following:
Link-local address
Link-local addresses are IP addresses used to communicate only on the local network segment (they cannot pass through a router). The term also exists in IPv4. The link-local address ranges for IPv4 and IPv6 are:
 – 169.254.0.0/16 for IPv4
 – fe80::/10 for IPv6
Although this address was optional in IPv4, in IPv6 it is required. Currently in AIX, these addresses are automatically generated based on the EUI-64 format, which uses the network card's MAC address. The logic of the link-local address creation is as follows (see the illustrative sketch after this list):
Say you have a network card with a MAC address of 96:D8:A1:5D:5A:0C.
a. The 7th bit of the first octet (the universal/local bit) is flipped and FFFE is inserted after the 24th bit, making it 94:D8:A1:FF:FE:5D:5A:0C.
b. The subnet prefix fe80:: is added, providing you with the link-local address FE80::94D8:A1FF:FE5D:5A0C.
In AIX, the autoconf6 command is responsible for creating the link-local address.
Global unicast address
These are IP addresses configured to communicate outside of the router. The range 2000::/3 is provided for this purpose. The following ones are predefined global unicast addresses:
 – 2001:0000::/32 - Teredo address defined in RFC 4380
 – 2001:db8::/32 - Provided for document purposes defined in RFC 3849
 – 2002::/16 - 6to4 address defined in RFC 3056
Loopback address
The same term as for IPv4. This uses the following IP address:
 – ::1/128
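The following minimal shell sketch (illustrative only, not the autoconf6 logic itself; it assumes ksh93 or bash arithmetic) reproduces the link-local derivation for the example MAC address shown earlier:

# Derive an EUI-64 based IPv6 link-local address from a MAC address
MAC=96:D8:A1:5D:5A:0C
OIFS=$IFS; IFS=:
set -- $MAC                                  # split the MAC into its six octets
IFS=$OIFS
first=$(printf "%02X" $(( 16#$1 ^ 2 )))      # flip the universal/local bit of the first octet
printf "FE80::%s%s:%sFF:FE%s:%s%s\n" $first $2 $3 $4 $5 $6
# -> FE80::94D8:A1FF:FE5D:5A0C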
For PowerHA, you can have your boot IPs configured with link-local addresses if this suits you. However, for configurations involving sites, it is more suitable to configure boot IPs with global unicast addresses that can communicate with each other. The benefit is that you have additional heartbeating paths, which helps prevent cluster partitions.
The global unicast address can be configured manually and automatically as follows:
Automatically configured IPv6 global unicast address
 – Stateless IP address
Global unicast addresses provided through the Neighbor Discovery Protocol (NDP). Similar to link-local addresses, these IPs are generated based on the EUI-64 format, but the subnet prefix is provided by the network router. The client and the network router must be configured to communicate through NDP for this address to be configured.
 – Stateful IP address
Global unicast addresses provided through an IPv6 DHCP server.
Manually configured IPv6 global unicast address
The same concept as an IPv4 static address.
In general, automatic IPv6 addresses are suggested for unmanaged devices such as client PCs and mobile devices. Manual IPv6 addresses are suggested for managed devices such as servers.
For PowerHA, you can use either automatic or manual IPv6 addresses. However, consider that automatically assigned IPs are not guaranteed to persist. CAA requires the hostname to resolve to a configured IP address, and it does not allow you to change the IPs while the cluster services are active.
IPv4/IPv6 dual stack environment
When migrating to IPv6, in most cases, it will be suitable to keep your IPv4 networks. An environment using a mix of different IP address families on the same network adapter is called a dual stack environment.
PowerHA allows you to mix different IP address families on the same adapter (for example, IPv6 service label on the network with IPv4 boot, IPv4 persistent label on the network with IPv6 boot). However, the best practice is to use the same family as the underlying network for simplifying planning and maintenance.
Figure 3-4 on page 38 shows an example of this configuration.
Figure 3-4 IPv6 dual stack environment
Multicast and IPv6
PowerHA SystemMirror 7.1.2 or later supports IP version 6 (IPv6). However, you cannot explicitly specify the IPv6 multicast address. CAA uses an IPv6 multicast address that is derived from the IP version 4 (IPv4) multicast address.
To determine the IPv6 multicast address, a standard prefix of 0xFF05 is combined, using the logical OR operator, with the hexadecimal equivalent of the IPv4 address. For example, if the IPv4 multicast address is 228.8.16.129 (0xE4081081), the transformation is 0xFF05:: | 0xE4081081, and the resulting IPv6 multicast address is 0xFF05::E408:1081.
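As an illustration only, the following shell one-liner (not a CAA tool) shows the hexadecimal conversion for the example above; the four arguments are the octets of the IPv4 multicast address:

printf "FF05::%02X%02X:%02X%02X\n" 228 8 16 129
# -> FF05::E408:1081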
The netmon.cf file
The netmon.cf file is an optional configuration file that complements the normal detection of interface failures, which is based on the available communication paths between the cluster nodes. It provides additional ping targets, outside the cluster itself, that the cluster nodes must be able to reach.
This file is used by the RSCT services. It was introduced in earlier releases for clusters with single adapter networks. With the introduction of CAA features exploited by PowerHA 7.1, this file is optional for the clusters using real network adapters, since the interface status is determined by CAA. However, in a VIOS environment, failure of the physical network adapter and network components outside the virtualized network might not be detected reliably. To detect external network failures, you must configure the netmon.cf file with one or more addresses outside of the virtualized network.
As with the virtual environment case, the netmon.cf file can also be used for site configurations to determine a network down condition when the adapters are active in the site, but the link between the sites is down.
In the current implementation of RSCT 3.1.4 and PowerHA 7.1.2, the netmon functionality is supported by the RSCT group services instead of the topology services. Group services report an adapter as DOWN only if all the following conditions are met simultaneously:
!REQD entries are present in netmon.cf for a given adapter
Netmon reports adapter down
The adapter is reported “isolated” by CAA
You can configure the netmon.cf file in a manner similar to previous releases. In PowerHA clusters, the file has the following path: /usr/es/sbin/cluster/netmon.cf, and it needs to be manually populated on each cluster node. Environments using virtual Ethernet and shared Ethernet adapters on VIOS to access external resources require the use of the netmon.cf file populated with IP addresses outside the machine hosting the logical partition for proper interface status determination. In this case, the specification of the netmon.cf file is the same as in prior releases:
!REQD <owner> <target>
<owner> The originating interface name, or the IP address of that interface.
<target> The target ping address used to test connectivity.
You can find more details in the description text of APAR IZ01331, which introduces the netmon functionality for VIOS environments:
http://www.ibm.com/support/docview.wss?uid=isg1IZ01332
Additional considerations for the netmon.cf file are:
You can add up to 32 lines for one adapter. If there is more than one entry for an adapter, then PowerHA tries to ping them all. As long as at least one target replies, the interface is marked good.
Ensure that you add at least one line for each base adapter. It is advised to add a ping target for your persistent addresses.
The file can be different on each cluster node.
The target IP address should be associated with a critical device in your network, such as a gateway, a firewall system, or a DNS.
Example 3-3 shows a netmon.cf file with multiple IP/host interface examples.
Example 3-3 netmon.cf definition examples
#This is an example for monitoring the en0 Virtual Ethernet interface on an LPAR
#using two IP addresses of the DNS systems
!REQD en0 192.168.100.1
!REQD en0 192.168.100.2
 
#Example using the IP address of the interface instead of the interface name
#
!REQD 10.10.10.60 10.10.10.254
 
#Example using the IP interface names (they need to be resolvable to the actual
#IP addresses)
!REQD host1_en2 gateway1
 
Note: PowerHA 7.1.2 supports IPv6. In this case, netmon.cf can be configured with IPv6 addresses in a similar manner as the IPv4 case. There are no special considerations for the IPv6 case.
3.1.3 Storage and SAN considerations
This section describes the storage and SAN considerations.
SAN-based heartbeat
PowerHA 7.1.2 supports SAN-based heartbeat only within a site. IBM intends to enhance its facility for inter-site heartbeating in upcoming releases of the PowerHA SystemMirror software solution.
The SAN heartbeating infrastructure can be accomplished in several ways:
Using real adapters on the cluster nodes and enabling the storage framework capability (sfwcomm device) of the HBAs. Currently, FC and SAS technologies are supported. Refer to the following Infocenter page for further details about supported HBAs and the required steps to set up the storage framework communication:
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.clusteraware/claware_comm_setup.htm
In a virtual environment using NPIV or vSCSI with a VIO server, enabling the sfwcomm interface requires activating target mode (the tme attribute) on the real adapter in the VIO server and defining a private VLAN (ID 3358) for communication between the partition containing the sfwcomm interface and the VIO server. The real adapter on the VIO server needs to be a supported HBA, as indicated in the previous reference link. For a practical example of setting up a storage communication framework interface in a virtual environment, refer to section 3.7.13, “Configure SAN heart beating in virtual environment”, in IBM PowerHA SystemMirror Standard Edition 7.1.1 for AIX Update, SG24-8030.
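A minimal sketch of the typical commands involved follows; it assumes that fcs0 is the supported physical HBA on the VIO server and that device names match your environment (check the referenced documentation for the exact procedure):

# On the VIO server (from oem_setup_env), enable target mode on the physical HBA;
# the adapter must be reconfigured (or the VIO server rebooted) for the change to apply
chdev -l fcs0 -a tme=yes -P

# On the cluster LPARs, verify that the storage framework communication device exists
lsdev -C | grep sfwcomm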
Storage-based replication environments
The PowerHA clusters using storage-based replication can be implemented only with the PowerHA Enterprise Edition version. In this section, we describe the storage-based replication technologies integrated with PowerHA SystemMirror 7.1.2 Enterprise Edition.
Out-of-band versus in-band storage control
In a cluster using storage-based replication, communication between the cluster nodes and the storage system is required so that PowerHA can manage the replication pairs. This communication depends on the storage technology and can be performed in two ways:
Out-of-band control
The cluster software uses the storage management CLI functions and communicates with the storage system over a communication path that is separate from the disk I/O traffic, usually a TCP/IP network.
In-band control
The cluster software communicates with the storage system using the same path as the disk I/O, usually the SAN Fibre Channel network.
In PowerHA 7.1.2, in-band communication can be used with DS8800 storage systems while out-of-band control can be used with all DS8000 storage models. Figure 3-5 shows a comparative approach between the two communication models.
Figure 3-5 Out-of-band versus in-band storage control
Using in-band communication offers multiple benefits over the out-of-band model:
Better performance - A TCP/IP network is usually slower than the Fibre Channel infrastructure, and using a storage agent (storage HMC) for CLI command execution adds delays to command execution and event flow. As a direct result, a reduced failover time is achieved using in-band communication.
Facilitates tighter integration with the host SCSI disk driver.
Ease of configuration - In-band control is embedded in the disk driver, so no PowerHA configuration is required for storage communication.
Enhanced reliability - Disks on an AIX host can be easily mapped to their corresponding DS8000 volume IDs, resulting in enhanced RAS capability. Due to its tight integration with the host disk driver, it allows for more robust error checking.
In PowerHA 7.1.2, the following storage-based replication technologies are included:
Out-of-band:
 – ESS/DS8000/DS6000™ PPRC
 – DS8700/DS8800 Global Mirror
 – XIV
 – SVC
In-band
 – DS8800 in-band and HyperSwap
DS8000 Metro and Global Mirror
ESS/DS6000/DS8000 Metro Mirror and DS8000 Global Mirror are technologies supported with PowerHA Enterprise Edition.
ESS/DS replicated resources can use the following replication configurations:
For Metro Mirror Replication (formerly PPRC):
 – Direct Management (ESS 800)
This is the oldest configuration type used for IBM ESS storage systems. In this configuration, PowerHA directly manages the fallover and resynchronization of the PPRC pairs by issuing commands directly to the ESS systems. PowerHA manages the PPRC resources by communicating with the Copy Services Server (CSS) on ESS systems via the ESS CLI.
 – DSCLI management (ESS/DS6000/DS8000)
This type of configuration uses the DS CLI interface management for issuing commands to the storage system, via an Enterprise Storage Server® Network Interface (ESSNI) server on either storage controller or storage HMC.
DSCLI-based PowerHA SystemMirror supports more than one storage system per site; you can configure and use more than one DS storage system in a single site. Each PPRC-replicated resource still has only one primary and one secondary storage system, but you can use any of the configured DS systems, as long as a single one per site is used in each PPRC-replicated resource group.
Global Mirror
This is an asynchronous replication technology using a Global Copy relationship between two storage systems and a FlashCopy relationship at the remote site, coordinated by a Global Mirror session that ensures consistent point-in-time data is generated at the remote site. Currently, the DS8700 and DS8800 models are supported with PowerHA SystemMirror 7.1.2 using Global Mirror replicated resources.
 
Note: PowerHA Enterprise Edition with DSCLI Metro Mirror replicated resources supports Virtual I/O environments with vSCSI and N-PIV disk attachment. For Global Mirror replicated resources, vSCSI attachment is not supported.
For more details on DS8000 Copy Services features, refer to the following publication:
IBM System Storage DS8000: Copy Services in Open Environments, SG24-6788-03 at:
http://www.redbooks.ibm.com/abstracts/sg246788.html
For DS8700/DS8800 specific information, refer to:
IBM System Storage DS8000: Architecture and Implementation, SG24-8886-02
http://www.redbooks.ibm.com/abstracts/sg248886.html
IBM SAN Volume Controller (SVC)
PowerHA Enterprise Edition with SVC-based replication provides a fully automated, highly available disaster recovery management solution by taking advantage of SVC’s ability to provide virtual disks derived from varied disk subsystems.
SVC hardware supports only the FC protocol for data traffic within and between sites, and it requires a switched SAN environment. FCIP routers can also be used to transport the Fibre Channel (FC) data frames over a TCP/IP network between the sites.
Management of the SVC replicated pairs is performed using ssh over a TCP/IP network. Each cluster node needs to have the openssh package installed and configured to access the SVCs in both sites.
PowerHA Enterprise Edition using SVC replication supports the following options for data replication between the SVC clusters:
Metro Mirror providing synchronous remote copy
Changes are sent to both primary and secondary copies, and the write confirmation is received only after the operations are complete at both sites.
Global Mirror providing asynchronous replication
Global Mirror periodically invokes a point-in-time copy at the primary site without impacting the I/O to the source volumes. Global Mirror is generally used for greater distances and complements the synchronous replication facilities. This feature was introduced in SVC Version 4.1.
For PowerHA Enterprise Edition integration, SAN Volume Controller code Version 4.2 or later is required. At the time of writing, the latest supported SVC code version is 6.4.
 
Note: PowerHA using SVC/V7000 replicated resources can be used in an environment using Virtual I/O resources and supports both vSCSI and N-PIV disk attachment.
At this time SVC and Storwize® V7000 storage systems use the same version of code (v6.x). Storwize V7000 replication is also supported with PowerHA Enterprise Edition under the same circumstances as SVC replication. A mixed environment using SVC and V7000 is also supported.
 
Note: In a mixed SVC/V7000 replication environment, V7000 storage needs to be changed from the default “storage” mode of interoperation with an SVC to the “replication” mode. This can be accomplished only using the CLI by issuing the command:
chsystem -layer replication
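You can verify the resulting mode from the same CLI; this is a minimal sketch, assuming ssh access with the superuser account is already configured (the account name and cluster IP are placeholders):

ssh superuser@<v7000_cluster_ip> lssystem | grep layer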
For more details on PowerHA Enterprise Edition with SVC replicated resources and for practical implementation examples refer to Chapter 6, “Configuring PowerHA SystemMirror Enterprise Edition linked cluster with SVC replication” on page 181.
IBM XIV Remote Mirroring
PowerHA Enterprise Edition supports XIV Remote Mirroring technology. The XIV Remote Mirror function of the IBM XIV Storage System enables a real-time copy between two or more storage systems over Fibre Channel or iSCSI links. This function provides a method to protect data from site failures for both synchronous and asynchronous replication.
XIV enables a set of remote mirrors to be grouped into a consistency group. When using synchronous or asynchronous mirroring, the consistency groups handle many remote mirror pairs as a group to make mirrored volumes consistent. Consistency groups simplify the handling of many remote volume pairs because you do not have to manage the remote volume pairs individually.
PowerHA using XIV replicated resources requires the use of consistency groups for both types of replication. Plan to allocate the volumes on the XIV systems in consistency groups according to the application layout. All the volumes of an application should be part of the same consistency group. Remember when creating the consistency groups that the same name must be used for a set of XIV replicated pairs on both XIV systems for proper integration with PowerHA.
PowerHA with XIV replicated resources can be implemented in an environment using virtual storage resources. Disk volumes in a Virtual I/O server environment can be attached using NPIV. vSCSI disks are not supported.
For more details about PowerHA Enterprise Edition with XIV replicated resources and for practical implementation examples, refer to Chapter 7, “Configuring PowerHA SystemMirror 7.1.2 Enterprise Edition with XIV replication” on page 239.
DS8800 in-band communication and HyperSwap
PowerHA SystemMirror 7.1.2 Enterprise Edition and later can use in-band communication for management of the replication pairs on a DS8800 storage system. This functionality is enabled by the following components:
DS8800 storage system with the appropriate level of firmware
AIX MPIO with AIX 6.1 TL8 or AIX 7.1 TL2 (SDDPCM is not supported)
PowerHA 7.1.2 Enterprise Edition, or later
The in-band communication and HyperSwap features are configured using common SMIT panels. The following storage-related considerations apply for an environment using DS8000 in-band resources and HyperSwap:
Currently only the DS8800 model is supported. The minimum code bundle required is 86.30.49.0.
DS8800 can be attached to the cluster nodes using FC, NPIV or FCoE. vSCSI is not supported.
HyperSwap is supported only with Metro Mirror replication between the DS8800 storage systems. DS8800 requires the Metro Mirror license to be applied on both storage systems.
 
Note: Metro/Global Mirror is not supported at this time with PowerHA Enterprise Edition. Also, the HyperSwap functionality cannot be used with volumes on a storage system that is part of Metro/Global Mirror relationships.
The storage systems must be accessible from all nodes in both sites. The hardware connectivity and SAN zoning configuration must allow a cluster node to access both the local storage in its own site and the secondary storage in the remote site.
SCSI Reservations are not supported with HyperSwap disks. AIX hdisks must be configured with the reserve_policy attribute set to no_reserve.
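For example, you can check and change the attribute as follows (the hdisk number is a placeholder; add the -P flag and reconfigure or reboot later if the disk is in use):

lsattr -El hdisk2 -a reserve_policy            # check the current setting
chdev -l hdisk2 -a reserve_policy=no_reserve   # disable SCSI reservations on the disk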
The host profile for the volumes attached to the cluster nodes must be set to “IBM pSeries - AIX with Powerswap”. You can check this setting using the lshostconnect dscli command. See Example 3-4.
Example 3-4 Host type attachment required for HyperSwap
dscli> lshostconnect
Date/Time: December 22, 2012 7:51:44 AM CST IBM DSCLI Version: 6.6.0.305 DS: IBM.2107-75TL771
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
======================================================================================================================================
r9r2m12_fcs0 0008 10000000C951075D - IBM pSeries - AIX with Powerswap support 46 V4 all
Storage-level Peer-to-Peer Remote Copy (PPRC) relationships and PPRC paths must be defined before you configure HyperSwap for PowerHA SystemMirror. Plan for the volumes and the logical subsystem (LSS) assignment. The LSS corresponds to the first two hexadecimal digits of the volume ID on a DS8000 storage system. You can check the LSS and volume ID of a volume in the storage using the dscli CLI tool and the lsfbvol command. See Example 3-5.
Example 3-5 lsfbvol output example
dscli> lsfbvol
Date/Time: December 22, 2012 6:22:09 AM CST IBM DSCLI Version: 6.6.0.305 DS: IBM.2107-75TL771
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
====================================================================================================================
r1r4m27-m38_1    B000 Online Normal Normal 2107-900 FB 512 P0 2.0 - 4194304
r1r4m27-m38_2    B001 Online Normal Normal 2107-900 FB 512 P0 2.0 - 4194304
r1r4m27-m38_caa  B200 Online Normal Normal 2107-900 FB 512 P0 2.0 - 4194304
r1r4m27-m38_rvg  B401 Online Normal Normal 2107-900 FB 512 P0 2.0 - 4194304
........
We suggest allocating different LSSs for the HyperSwap storage resources than for other replicated volumes that are not part of the HyperSwap configuration. Also consider allocating distinct LSSs for the application, system, and CAA repository disks, because they will be included in different PowerHA mirror groups.
Take into consideration that suspend operations on the DS8800 act on the entire LSS. If a single DS8800 LSS contains PPRC volumes from more than one application and one of the replication connections breaks, all PPRC paths are removed. If the applications are not managed by PowerHA SystemMirror, the PPRC paths must be manually re-created after the replication connection is reestablished.
 
Note: The DS8800 in-band communication can be used without HyperSwap enablement for DS8800 Metro Mirror replicated resources. This case is similar to the DSCLI PPRC case, except that the cluster nodes communicate with the storage system for managing the PPRC pairs using the SAN FC infrastructure used for disk I/O, rather than using dscli and the IP network connectivity.
For more details about HyperSwap functionality and practical implementation examples, refer to Chapter 4, “Implementing DS8800 HyperSwap” on page 59.
Storage and Firmware requirements
Table 3-1 on page 46 summarizes the particular types of replication technologies currently supported with PowerHA SystemMirror 7.1.2 Enterprise Edition and the storage software related requirements.
Table 3-1 Storage technology replication supported with PowerHA 7.1.2
Storage type           Type of replication environment   Firmware/management software requirements
DS8000/DS6000/ESS      Metro Mirror                      ibm2105cli.rte 32.6.100.13 (ESS)
                                                         ibm2105esscli.rte 2.1.0.15 (ESS)
                                                         DSCLI version 5.3.1.236 or later (DS6K/DS8K)
DS8700/DS8800          Global Mirror                     Minimum levels for DS8700:
                                                         Code bundle 75.1.145.0 or later
                                                         DSCLI version 6.5.1.203 or later
SVC/V7000              Metro/Global Mirror               SVC code 4.2 or later (currently 6.3 is the latest supported version)
                                                         SVC code 6.2/6.3 (V7000 storage)
                                                         openssh version 3.6.1 or later (for access to SVC interfaces)
XIV Gen2/Gen3 models   Sync/Async                        System firmware 10.2.4 (Gen2)
                                                         System firmware 11.0.0a (Gen3)
                                                         XCLI 2.4.4 or later
DS8800                 DS8800 in-band/HyperSwap          DS8800 code bundle 86.30.49.0 or later
We suggest using the latest supported version of firmware and storage management software for integration with PowerHA Enterprise Edition.
Note that IBM intends to support the following storage-based mirroring in upcoming releases of the PowerHA SystemMirror software solution. While management interfaces might be visible for these HADR solutions, their support is still pending qualification:
PowerHA SystemMirror Enterprise Edition for EMC Symmetrix Remote Data Facility (SRDF)
PowerHA SystemMirror Enterprise Edition for Hitachi TrueCopy and Universal Replicator (HUR)
Firmware levels can be verified using the management tools GUI/CLI for that particular storage type. Table 3-2 provides the CLI subcommands that can be used to get the current firmware level for the currently supported storage with PowerHA SystemMirror 7.1.2 Enterprise Edition.
Table 3-2 Getting the firmware level of the storage system
Storage system    CLI tool    Subcommand
DS6000/DS8000     dscli       ver -l
SVC               ssh         svcinfo lscluster
XIV               xcli        version_get
In the case of a DS8000 storage system, dscli can be used to query the current firmware version of the storage system using the ver -l command, as shown in Example 3-6.
Example 3-6 DS8800 verifying the code version
dscli> lssi
Date/Time: December 13, 2012 8:12:03 PM CST IBM DSCLI Version: 6.6.0.305 DS: -
Name ID Storage Unit Model WWNN State ESSNet
=============================================================================
DS8K4 IBM.2107-75TL771 IBM.2107-75TL770 951 500507630AFFC16B Online Enabled
 
dscli> ver -l
Date/Time: December 13, 2012 8:14:26 PM CST IBM DSCLI Version: 6.6.0.305 DS: -
DSCLI 6.6.0.305
StorageManager 7.7.3.0.20120215.1
================Version=================
Storage Image LMC
===========================
IBM.2107-75TL771 7.6.30.160
Status of the replication relationships and the MANUAL recovery option
PowerHA Enterprise Edition using storage-based replication provides the option of manual recovery of the storage-replicated resources in certain conditions.
When defining the replicated resources, two recovery options are available:
AUTO - Involves automatic recovery of the storage-replicated resource during the site failover. The PowerHA software performs the activation of the mirrored copy at the recovery site, and brings up the resource group at the remote site.
MANUAL - User action is required at site failover time. The cluster will not automatically bring up the replicated resources and the related resource groups. User intervention is required to manually recover the replicated volumes at the recovery site. This option is taken into consideration only in certain situations, such as in a replication link down case. At the time of site failover, PowerHA software checks the replication status. If the replication is found in a normal state, the cluster performs an automatic failover of the associated resource group in the recovery site.
The status of the replicated pairs determines whether the MANUAL recovery action prevents a failover. The states vary between the replication types. Resource groups whose replicated resources are in the states shown in Table 3-3 do not fail over automatically during a site failover.
Table 3-3 Replication states for the MANUAL option
Replication type          Replication state
DS Metro Mirror           Target-FullDuplex-Source-Unknown
                          Source-Suspended-Source-Unknown
                          Source-Unknown-Target-FullDuplex
                          Source-Unknown-Source-Suspended
SVC Metro/Global Mirror   idling_disconnected
                          consistent_disconnected
XIV Remote Mirror         Unsynchronized
                          RPO lagging
3.1.4 Cluster repository disk
PowerHA SystemMirror uses a shared disk to store Cluster Aware AIX (CAA) cluster configuration information. You must have at least 512 MB and no more than 460 GB of disk space allocated for the cluster repository disk. This feature requires that a dedicated shared disk be available to all nodes that are part of the cluster. This disk cannot be used for application storage or any other purpose.
When planning for a repository disk in case of a multi-site cluster solution:
Stretched cluster
Requires and shares only one repository disk. When implementing the cluster configuration with multiple storage systems in different sites, consider allocating the CAA repository and the backup repositories on different storage systems across the sites to increase the availability of the repository disk in case of a storage failure. As an example, when using a cross-site LVM mirroring configuration with a storage subsystem in each site, you can allocate the primary repository disk in Site1 and the backup repository on the storage in Site2.
Linked clusters
Require a repository disk to be allocated to each site. If there is no other storage at a site, plan to allocate the backup repository disk on a different set of disks (other arrays) within the same storage system to increase the repository disk availability in case of disk failures.
Considerations for a stretched cluster
This section describes some considerations when implementing a stretched cluster:
There is only one repository disk in a stretched cluster.
Repository disks cannot be mirrored using AIX LVM. Hence, we strongly advise that the repository disk be RAID-protected by a redundant and highly available storage configuration.
All nodes must have access to the repository disk.
In the event the repository disk fails or becomes inaccessible by one or more nodes, the nodes stay online and the cluster is still able to process events such as node, network or adapter failures, and so on. Upon failure, the cluster ahaFS event REP_DOWN occurs. However, no cluster configuration changes can be performed in this state. Any attempt to do so will be stopped with an error message.
A backup repository disk can be defined in case of a failure. When planning the disks that you want to use as repository disks, you must plan for a backup or replacement disk that can be used if the primary repository disk fails. The backup disk must be the same size and type as the primary disk, but it can be in a different physical storage system. Update your administrative procedures and documentation with the backup disk information. You can also replace a working repository disk with a new one to increase the size or to change to a different storage subsystem. To replace a repository disk, you can use the SMIT interface or PowerHA SystemMirror for IBM Systems Director. The cluster ahaFS event REP_UP occurs upon replacement.
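After the cluster is created, you can check the state of the repository disk with the CAA lscluster command; this is a quick sanity check (the exact output fields vary with the AIX level):

lscluster -d     # lists the cluster disks; the repository disk is shown with type REPDISK and its state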
Additional considerations for linked clusters
This section provides additional considerations when implementing linked clusters:
The nodes within a site share a common repository disk with all its characteristics specified earlier.
The repositories between sites are kept in sync internally by CAA.
When sites split and are then merged, CAA provides a mechanism to reconcile the two repositories. This can be done either through a reboot (of all the nodes on the losing side) or through APIs implemented exclusively for RSCT; see 10.4.2, “Configuring the split and merge policy” on page 441.
3.1.5 Tie breaker disk
The tie breaker disk concept was introduced in PowerHA 7.1.2 as an additional mechanism for preventing cluster partitioning. In the case of a long-distance cluster where nodes communicate between the sites using only IP networks, the risk of cluster partitioning caused by network link failures can be mitigated by using a tie breaker disk in a tertiary location accessible from both sites. Although it proves more efficient in linked clusters, the tie breaker disk can be used in both stretched and linked cluster types.
Requirements for a tie breaker
The following requirements and restrictions apply for a tie breaker:
A disk supporting SCSI-3 persistent reserve. Such a device can use a Fibre Channel, FCoE, or iSCSI attachment that supports SCSI-3 persistent reservation.
Disk device must be accessible by all nodes.
The repository disk cannot be used as tie breaker.
 
Note: There is no specific capacity or bandwidth access required for the tie breaker disk. PowerHA SystemMirror software does not store any configuration data on it or use it for heartbeating. It is only used with SCSI-3 persistent reservation for handling site split/merge events.
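To confirm that a candidate tie breaker disk accepts SCSI-3 persistent reservations, you can query it with the AIX devrsrv command; this is a minimal sketch under the assumption that the command is available at your AIX level and that hdisk3 is the candidate disk:

devrsrv -c query -l hdisk3     # displays the persistent reservation capability and current state of the disk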
Where to locate the tie breaker disk
The tie breaker disk should be located where both sites can access it in any situation, including a total site failure. Ideally, it should be placed at a location other than the sites where the cluster nodes are running.
Figure 3-6 illustrates this design.
Figure 3-6 Tie breaker disk location
Keep in mind that the AIX nodes must be able to issue SCSI commands to the tie breaker disk to gain the reservation.
The simplest way to accomplish this is to configure physical Fibre Channel access to the location where the tie breaker disk resides. However, in many cases this is not realistic. Instead, methods that carry SCSI commands over IP networks can be used.
The following are examples of technology that can be used for connecting the tie breaker disk:
Fibre Channel (FC) using direct attachment or FCIP
Fibre Channel over Ethernet (FCoE)
Internet Small Computer Systems Interface (iSCSI)
 
Important: The current AIX iSCSI device driver does not support SCSI-3 persistent reservations. IBM PowerHA development is currently working to remove this limitation, but at the time of writing, iSCSI devices could not be used for tie breaker disks.
For further details regarding the tie breaker and site split/merge handling, see Chapter 10, “Cluster partition management” on page 435.
3.2 Hardware and software requirements
This section describes the hardware and software requirements for the IBM PowerHA SystemMirror for AIX solution.
3.2.1 Hardware requirements for the AIX solution
PowerHA SystemMirror 7.1.2 can be installed on any hardware supported by AIX 6.1 TL8 SP2 or AIX 7.1 TL2 SP2. For supported IBM server models, expansion cards, storage subsystems, SAN Volume Controller, NAS devices, and adapters, refer to the PowerHA SystemMirror hardware support matrix at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105638
For the list of supported Fibre Channel adapters for SAN heartbeating, refer to the “Setting up cluster storage communication” page at:
http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=/com.ibm.aix.clusteraware/claware_comm_setup.htm
PowerHA supports full system partitions as well as fully virtualized LPARs. You can use a mix of physical and virtual interfaces for cluster communication.
3.2.2 Software requirements for PowerHA 7.1.2
This section provides the software requirements for PowerHA 7.1.2.
Base requirements for PowerHA 7.1.2
PowerHA SystemMirror 7.1.2 requires the following minimum levels of software on the nodes:
AIX 6.1 TL8 or AIX 7.1 TL2
RSCT 3.1.4
Because the AIX installation might be new or migrated, also check that the bos.cluster.rte fileset is at level 6.1.8 for AIX 6.1 or at level 7.1.2 for AIX 7.1.
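For example, you can verify the installed AIX CAA and RSCT fileset levels with lslpp (rsct.core.rmc is used here as a representative RSCT fileset):

lslpp -l bos.cluster.rte rsct.core.rmc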
The latest service pack for PowerHA SystemMirror software should be installed for all components. At the time of writing, we are referring to Service Pack 1 as the latest SP, available as of November 2012. We suggest installing the latest service pack (SP) for the AIX operating system. You can download the latest SP for PowerHA SystemMirror and for AIX from the IBM FixCentral page at:
http://www-933.ibm.com/support/fixcentral/
The IBM PowerHA SystemMirror Enterprise Edition requires the installation and acceptance of license agreements for both the Standard Edition cluster.license fileset and the Enterprise Edition cluster.xd.license fileset as shown in Table 3-4, in order for the remainder of the filesets to install.
Table 3-4 PowerHA Enterprise Edition - required fileset
Required package            Filesets to install
Enterprise Edition license  cluster.xd.license
The base filesets in the Standard Edition are required to install the Enterprise Edition filesets. The Enterprise package levels must match those of the base runtime level (cluster.es.server.rte). Table 3-5 displays the itemized list of filesets for each of the integrated offerings.
Table 3-5 PowerHA Enterprise Edition - integrated offering solution filesets
Replication type
Fileset to install
ESS-Direct Management PPRC
cluster.es.pprc.rte
cluster.es.pprc.cmds
cluster.msg.en_US.pprc
ESS/DS6000/DS8000 Metro Mirror (DSCLI PPRC)
cluster.es.spprc.cmds
cluster.es.spprc.rte
cluster.es.cgpprc.cmds
cluster.es.cgpprc.rte
cluster.msg.en_US.cgpprc
SAN Volume Controller (SVC)
cluster.es.svcpprc.cmds
cluster.es.svcpprc.rte
cluster.msg.en_US.svcpprc
XIV, DS8800 in-band and HyperSwap, DS8700/DS8800 Global Mirror
cluster.es.genxd.cmds
cluster.es.genxd.rte
cluster.msg.en_US.genxd
Geographic Logical Volume Mirroring
cluster.doc.en_US.glvm.pdf
cluster.msg.en_US.glvm
cluster.xd.glvm
glvm.rpv.client (part of AIX base install)
glvm.rpv.man.en_US (part of AIX base install)
glvm.rpv.msg.en_US (part of AIX base install)
glvm.rpv.server (part of AIX base install)
glvm.rpv.util (part of AIX base install)
EMC SRDF (see Note)
cluster.es.sr.cmds
cluster.es.sr.rte
cluster.msg.en_US.sr
Hitachi TrueCopy/Universal Replicator (see Note)
cluster.es.tc.cmds
cluster.es.tc.rte
cluster.msg.en_US.tc
 
Note: Although the current package of PowerHA 7.1.2 includes the filesets for Hitachi and EMC replication technologies, IBM intends to provide support in upcoming releases of the PowerHA SystemMirror software solution.
Additional software requirements
The following optional filesets can be installed:
devices.common.IBM.storfwork (for SAN-based heartbeating)
cas.agent (optional, used for IBM Systems Director plug-in)
clic.rte (for secure encryption communication option of clcomd)
The following additional software requirements apply:
PowerHA SystemMirror Enterprise Edition and HyperSwap support requires PowerHA SystemMirror 7.1.2 Service Pack 1 with APAR IV27586.
IBM Systems Director plug-in for PowerHA SystemMirror 7.1.2 has been certified with IBM Systems Director version 6.3.1.
In a virtual environment, use VIOS 2.2.0.1-FP24 SP01 or later.
Visit the IBM FixCentral website for all available service packs for AIX, Virtual I/O server, PowerHA SystemMirror, RSCT and Systems Director, at:
http://www.ibm.com/support/fixcentral/
Multipath software requirements
The multipath software is required for redundant access to the storage system using the SAN communication paths. You have to consider the appropriate device driver option for a specific environment.
When using PowerHA SystemMirror 7.1.2 Enterprise Edition storage-replicated resources, you have the following multipath software options:
Native AIX MPIO driver
AIX 6.1 TL8 and 7.1 TL2 have been enhanced to support the DS8800 in-band communication and HyperSwap features. The XIV FC storage attachment is also natively supported by the AIX MPIO driver with no additional software needed. This driver supports two options: active/passive (AIX_APPCM), used for active/passive controllers such as the DS4000/DS5000 storage models, and active/active (AIX_AAPCM), currently supported with DS8000 models and XIV storage systems.
Subsystem Device Driver (SDD)
This is the legacy multipath driver for ESS/DS6000/DS8000/SVC. When using SDD, an hdisk# is created for each path to a LUN, and a pseudo-device named vpath# is created for multipath I/O access; the vpath device is used for AIX LVM operations.
Subsystem Device Driver MPIO (SDDPCM)
This is an extension to AIX MPIO designed for ESS, DS6000, DS8000, and SVC and also for DS4000, DS5000, and DS3950 storage systems. With SDDPCM, a single hdisk# is presented by AIX for use in the LVM operations.
At the time of writing, the support for integrating EMC and Hitachi storage replications with PowerHA 7.1.2 is pending qualification. In such cases, vendor-specific multipath software might be required. For example, EMC storage uses Powerpath software for AIX.
Multiple driver options can apply for a particular environment. For example, in an environment using DS8000 DSCLI Metro Mirror replicated resources, you can use either SDD or SDDPCM. You must use a consistent multipath driver option across the cluster nodes.
 
Note: The SDD and SDDPCM multipath drivers cannot coexist on the same server.
The following SDD software levels are required:
IBM 2105 Subsystem Device Driver (SDD): ibmSdd_510nchacmp.rte 1.3.3.6 or later
IBM SDD FCP Host Attachment Script: devices.fcp.disk.ibm.rte version 1.0.0.12 or later
IBM Subsystem Device Driver (SDD): devices.sdd.XX.rte version 1.6.3.0 or later (where XX corresponds to the associated level of AIX); see Table 3-6.
The latest AIX SDD levels available at the time of writing that we suggest to install, and the supported operating system, are shown in Table 3-6.
Table 3-6 SDD and operating system levels
AIX OS level    ESS 800    DS8000    DS6000    SVC
AIX 6.1         1.7.2.1    1.7.2.8   1.7.2.3   1.7.2.5
For AIX 7.1 use AIX MPIO or SDDPCM.
To obtain the latest version of this driver, go to:
http://www.ibm.com/servers/storage/support/software/sdd/
The following SDDPCM prerequisite software and microcode levels are required:
IBM Multipath Subsystem Device Driver Path Control Module (SDDPCM): devices.sddpcm.XX.rte version 2.2.0.0 or later (where XX corresponds to the associated level of AIX); see Table 3-7.
The host attachment for SDDPCM adds 2105, 2145, 1750, and 2107 device information to allow AIX to properly configure 2105, 2145, 1750, and 2107 hdisks. The device information allows AIX to:
Identify the hdisk as a 2105, 2145, 1750, or 2107 hdisk.
Set default hdisk attributes such as queue_depth and timeout values.
Indicate to the AIX device driver configure method to configure these hdisks as MPIO-capable devices.
The AIX SDDPCM levels available at the time of writing, which we suggest to install, and the supported operating systems, are shown in Table 3-7.
Table 3-7 SDDPCM and operating system levels
AIX OS level    ESS        DS8000    DS6000    SVC
AIX 6.1         2.2.0.4    2.6.3.2   2.4.0.2   2.6.3.2
AIX 7.1         N/A        2.6.3.2   N/A       2.6.3.2
 
Note: Persistent Reservation with PowerHA SystemMirror 7.1 is not supported. Shared volume groups managed by PowerHA SystemMirror and accessed through SDDPCM must be set in enhanced concurrent mode.
For the latest version of SDDPCM for AIX, refer to:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000201
In the case of the DS8800 in-band configuration and HyperSwap, only the AIX PCM device driver is supported. The minimum required AIX level is AIX 6.1 TL8 or AIX 7.1 TL2. The manage_disk_drivers command was updated to support DS8000 storage as an option. The default option for DS8000 storage is NO_OVERRIDE, which uses the highest-priority ODM mapping. Note that with this option, when SDDPCM is present, it has a higher priority than the AIX PCM (AIX_AAPCM) option.
To set AIX PCM as driver option, run:
manage_disk_drivers -d 2107DS8K -o AIX_AAPCM
To set NO_OVERRIDE as driver option, run:
manage_disk_drivers -d 2107DS8K -o NO_OVERRIDE
 
Note: A reboot is required for applying the changes to the driver option.
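You can display the currently selected driver for each supported storage family, and verify the change after the reboot, with the list option:

manage_disk_drivers -l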
Refer also to the System Storage Interoperability Center (SSIC) for verifying the combination between the multipath software level, AIX, and storage firmware, at:
http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss
GLVM considerations for PowerHA Enterprise Edition
The following considerations apply for Geographical LVM mirroring:
Sites
GLVM for PowerHA SystemMirror Enterprise Edition requires two PowerHA SystemMirror sites. Each PowerHA SystemMirror site must have the same name as the RPV server site name.
Enhanced concurrent volume groups
In addition to non-concurrent volume groups, you can have enhanced concurrent mode volume groups configured with RPVs, so that they can serve as geographically mirrored volume groups. You can include such volume groups in both concurrent and non-concurrent resource groups in PowerHA SystemMirror.
 
 
Note: Enhanced concurrent volume groups can be accessed concurrently only on nodes within the same site. You cannot access enhanced concurrent mode geographically mirrored volume groups across sites concurrently. Fast disk takeover is not supported for remote disks that are part of a geographically mirrored volume group.
 
Replication networks
In a PowerHA SystemMirror cluster that has sites configured, you can have up to four XD_data networks used for data mirroring. This increases data availability and mirroring performance. For instance, if one of the data mirroring networks fails, the GLVM data mirroring can continue over the redundant networks. Also, you have the flexibility to configure several low bandwidth XD_data networks and take advantage of the aggregate network bandwidth. Plan the data mirroring networks so that they provide similar network latency and bandwidth. This is because, for load balancing, each RPV client communicates with its corresponding RPV server over more than one IP-based network at the same time (by sending I/O requests across each of the networks in a round-robin order).
PowerHA SystemMirror lets you configure site-specific service IP labels, so you can create a resource group that activates a given IP address on one site and a different IP address on the other site. Site-specific IP labels are supported within different subnets, thus allowing you to have subnetting between sites. When using site-specific IP addresses, you can also plan for DNS integration. At the time of site failover, a user-defined script can update the DNS entry associated with a unique application service name with the service IP address specific to the active site. See an example in Appendix B, “DNS change for the IBM Systems Director environment with PowerHA” on page 481.
Asynchronous mirroring allows the local site to be updated immediately and the remote site to be updated as network bandwidth allows. The data is cached and sent later, as network resources become available. While this can greatly improve application response time, there is some inherent risk of data loss after a site failure due to the nature of asynchronous replication. Asynchronous mirroring requires AIX super-strict mirror pools.
For further details about PowerHA SystemMirror using GLVM, refer to:
http://pic.dhe.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.powerha.geolvm/ha_glvm_kick.htm
Required release of AIX and PowerHA SystemMirror for GLVM
GLVM data mirroring functionality is provided by the following filesets, which are available from the base AIX installation media:
glvm.rpv.client Remote Physical Volume Client
glvm.rpv.server Remote Physical Volume Server
glvm.rpv.util Geographic LVM Utilities
The software requirements for PowerHA 7.1.2 also apply for GLVM. See “Base requirements for PowerHA 7.1.2” on page 50. For implementing GLVM for PowerHA SystemMirror Enterprise Edition we suggest using the latest version of PowerHA SystemMirror, AIX, and RSCT.
Refer also to the release notes file for the PowerHA version you are using, located in: /usr/es/sbin/cluster/release_notes_xd.
3.2.3 Capacity on Demand (CoD) support
PowerHA supports Dynamic LPAR, Capacity on Demand (CoD) processor, and memory resources. You can configure the minimum and desired number of processors, virtual processors, and amount of memory for each application. When an application is activated on a node, PowerHA contacts the HMC to acquire these resources in addition to the current resources of the LPAR.
Table 3-8 on page 56 describes the types of Capacity on Demand (CoD) licenses that are available. It also indicates whether PowerHA supports the use of a particular license.
Table 3-8 Capacity on Demand (CoD) licenses
On/Off license
  Description: CPU: allows you to start and stop using processors as needs change. Memory: not allowed.
  PowerHA support: CPU: Yes; Memory: N/A
  Comments: PowerHA does not manage licenses. The resources remain allocated to an LPAR until PowerHA releases them through a DLPAR operation, or until you release them dynamically outside of PowerHA. If the LPAR node goes down outside of PowerHA, the CoD resources are also released.

Trial license
  Description: CPU and Memory: the resources are activated for a single period of 30 consecutive days. If your system was ordered with CoD features and they have not yet been activated, you can turn the features on for a one-time trial period. With the trial capability, you can gauge how much capacity you might need in the future, if you decide to permanently activate the resources you need.
  PowerHA support: CPU: Yes; Memory: Yes
  Comments: PowerHA activates and deactivates trial CoD resources. Note: Once the resources are deactivated, the trial license is used and cannot be reactivated.