Extending resource group capabilities
In this chapter, we describe how PowerHA advanced resource group capabilities can be used to meet specific requirements of particular environments. The attributes listed in Table 10-1 can influence the behavior of resource groups during startup, fallover, and fallback.
Table 10-1 Resource group attribute behavior relationships
Attribute                                          Startup   Fallover   Fallback
Settling time                                      Yes
Node distribution policy                           Yes
Dynamic node priority                                        Yes
Delayed fallback timer                                                  Yes
Resource group parent/child dependency             Yes       Yes        Yes
Resource group location dependency                 Yes       Yes        Yes
Resource group start after/stop after dependency   Yes       Yes        Yes
This chapter contains the following topics:
10.1, "Settling time attribute"
10.2, "Node distribution policy"
10.3, "Dynamic node priority (DNP)"
10.4, "Delayed fallback timer"
10.5, "Resource group dependencies"
10.1 Settling time attribute
With the settling time attribute, you can delay the acquisition of a resource group so that, in the event of a higher priority node joining the cluster during the settling period, the resource group will be brought online on the higher priority node instead of being activated on the first available node.
10.1.1 Behavior of settling time attribute
The following characteristics apply to the settling time:
If configured, settling time affects the startup behavior of all offline resource groups in the cluster for which you selected the Online on First Available Node startup policy.
The only time that this attribute is ignored is when the node joining the cluster is the first node in the node list for the resource group. In this case, the resource group is acquired immediately.
If a resource group is currently in the ERROR state, PowerHA waits for the settling time period before attempting to bring the resource group online.
The current settling time continues to be active until the resource group moves to another node or goes offline. A DARE operation might result in the release and re-acquisition of a resource group, in which case the new settling time values are effective immediately.
10.1.2 Configuring settling time for resource groups
To configure a settling time for resource groups, follow these steps:
1. Enter the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Settling Time for Resource Group, and press Enter.
2. Enter a value in the Settling Time field. Any positive integer is valid; the default is zero (0):
Settling Time (sec.)
If this value is set and the node that joins the cluster is not the highest priority node, the resource group will wait the duration of the settling time interval. When this time expires, the resource group is acquired on the node that has the highest priority among the list of nodes that joined the cluster during the settling time interval.
Remember that this is valid only for resource groups that use the Online on First Available Node startup policy.
10.1.3 Displaying the current settling time
To display the current settling time in a cluster that is already configured, you can run the clsettlingtime list command:
# /usr/es/sbin/cluster/utilities/clsettlingtime list
#SETTLING_TIME
120
During the acquisition of the resource groups on cluster startup, you can also see the settling time value by running the clRGinfo -t command as shown in Example 10-1 on page 381.
Example 10-1 Displaying the RG settling time
# /usr/es/sbin/cluster/utilities/clRGinfo -t
-------------------------------------------------------------------------------
Group Name Group State Node Delayed Timers
-------------------------------------------------------------------------------
xsiteGLVMRG ONLINE jessica@dallas 120 Seconds
               ONLINE SECONDARY shanley@fortwo 120 Seconds
 
newconcRG ONLINE jessica        120 Seconds
               OFFLINE cassidy        120 Seconds
 
Note: A settling time with a non-zero value will be displayed only during the acquisition of the resource group. The value will be set to 0 after the settling time expires and the resource group is acquired by the appropriate node.
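While you wait for a settling time to expire, you can watch the resource group state and the timer with a simple polling loop. The following is a minimal sketch that uses only the clRGinfo -t command shown in Example 10-1; the 30-second interval is an arbitrary choice:
#!/bin/ksh
# Poll the resource group states and delayed timers until interrupted
while :; do
    /usr/es/sbin/cluster/utilities/clRGinfo -t
    sleep 30
done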
10.1.4 Settling time scenarios
To demonstrate how this feature works, we created two settling time scenarios and configured a two-node cluster using a single resource group. In our scenario, we showed the following characteristics:
The settling time period is enforced and the resource group is not acquired on the node startup (while the node is not the highest priority node) until the settling time expires.
If the highest priority node joins the cluster during the settling period, then it does not wait for settling time to expire and acquires the resource group immediately.
We specified a settling time of 6 minutes (360 seconds) and configured a resource group named SettleRG1 to use the Online on First Available Node startup policy. We set the node list for the resource group so that node jessica would fall over to node cassidy.
For the first test, the following steps demonstrate how we let the settling time expire and how the secondary node acquires the resource group:
1. With cluster services inactive on all nodes, define a settling time value of 360 seconds.
2. Synchronize the cluster.
3. Validate the settling time by running clsettlingtime as follows:
[jessica:root] / # /usr/es/sbin/cluster/utilities/clsettlingtime list
#SETTLING_TIME
360
4. Start cluster services on node cassidy.
We started cluster services on this node because it is the last node in the node list for the resource group. After cluster services started, the resource group was not acquired by node cassidy because the settling time was in effect. Running the clRGinfo -t command displays the 360-second settling time, as shown in Example 10-2.
Example 10-2 Checking the settling time with clRGinfo -t
[cassidy:root] / # clRGinfo -t
-----------------------------------------------------------------------------
Group Name State Node Delayed Timers
-----------------------------------------------------------------------------
SettleRG1 OFFLINE jessica 360 Seconds
OFFLINE cassidy 360 Seconds
5. Wait for the settling time to expire.
Upon the expiration of the settling time, SettleRG1 was acquired by node cassidy. Because the first node in the node list (jessica) did not become available within the settling time period, the resource group was acquired on the next node in the node list (cassidy).
Figure 10-1 Settling time scenario waiting
For the next test scenario, we demonstrate how the primary node will start the resource group when the settling time does not expire.
1. Repeat step 1 through step 4 on page 381.
2. Start cluster services on node jessica.
We waited about two minutes until the cluster stabilized on node cassidy and then started cluster services on node jessica. Because jessica is the highest priority node for the resource group, the settling time is ignored and the resource group is brought online on node jessica, as shown in Figure 10-2 on page 383.
Figure 10-2 Settling time scenario, no waiting
 
Note: This feature is effective only when cluster services on a node are started. This is not enforced when C-SPOC is used to bring a resource group online.
10.2 Node distribution policy
One of the startup policies that can be configured for resource groups is the Online Using Node Distribution policy.
This policy spreads the resource groups that use it across the cluster nodes so that each node acquires only one such resource group during startup. It can be used, for instance, to distribute CPU-intensive applications across different nodes.
If two or more resource groups are offline when a particular node joins the cluster, this policy determines which resource group is brought online based on the following criteria and order of precedence:
1. The resource group with the least number of participating nodes will be acquired.
2. In case of a tie, the resource group to be acquired is chosen alphabetically.
3. A parent resource group is preferred over a resource group that does not have any child resource group.
10.2.1 Configuring a resource group node-based distribution policy
To configure this type of startup policy, follow these steps:
1. Enter the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Add a Resource Group, and press Enter.
2. Specify a resource group name.
3. Select the Online Using Node Distribution Policy startup policy and press Enter, as shown in Example 10-3.
Example 10-3 Configuring resource group node-based distribution policy
                       Add a Resource Group (extended)
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Resource Group Name [RG1]
* Participating Nodes (Default Node Priority) [node1 node2 node3]
 
Startup Policy Online On Home Node O>
Fallover Policy Fallover To Next Prio>
Fallback Policy Fallback To Higher Pr>
+--------------------------------------------------------------------------+
¦ Startup Policy ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Online On Home Node Only ¦
¦ Online On First Available Node ¦
¦ Online Using Node Distribution Policy ¦
¦ Online On All Available Nodes ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
¦ /=Find n=Find Next ¦
+--------------------------------------------------------------------------+
10.2.2 Node-based distribution scenario
To show how this feature functions, and to understand the difference between this policy and the Online On Home Node Only policy, we created a node-based distribution scenario: a two-node cluster with three resource groups, all of which use the Online Using Node Distribution startup policy. The cluster nodes and resource groups are shown in Figure 10-3 on page 385. Note that the number of resource groups that use this policy is greater than the number of cluster nodes.
Figure 10-3 Online Using Node Distribution policy scenario
For our scenario, we use the following steps:
1. Start cluster services on node jessica. RG1 was acquired because of alphabetical order, as shown in Example 10-4.
Example 10-4 Node jessica starts RG1
[jessica:root] / # clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
RG1 ONLINE jessica
OFFLINE cassidy
 
RG2 OFFLINE cassidy
OFFLINE jessica
 
RG3 OFFLINE jessica
OFFLINE cassidy
2. Start cluster services on node cassidy. RG2 was acquired because of alphabetical order, as shown in Example 10-5.
Example 10-5 Node cassidy starts RG2
[jessica:root] / # clRGinfo
-----------------------------------------------------------------------------
Group Name State Node
-----------------------------------------------------------------------------
RG1 ONLINE jessica
OFFLINE cassidy
 
RG2 ONLINE cassidy
OFFLINE jessica
 
RG3 OFFLINE jessica
OFFLINE cassidy
RG3 stays offline. It can be brought online manually through C-SPOC by running smitty cspoc, selecting Resource Group and Applications → Bring a Resource Group Online, choosing RG3, and then choosing the node on which you want to start it.
10.3 Dynamic node priority (DNP)
The default node priority order for a resource group is the order in the participating node list. Implementing a dynamic node priority for a resource group allows you to go beyond the default fallover policy behavior and influence the destination of a resource group upon fallover. The two types of dynamic node priorities are as follows:
Predefined Resource Monitoring and Control (RMC) based: These are included as standard with the PowerHA base product.
Adaptive fallover: These are two extra priorities that require customization by the user.
 
Important: Dynamic node priority is relevant only to clusters with three or more nodes participating in the resource group.
Predefined RMC based dynamic node priorities
These priorities are based on the following three RMC preconfigured attributes:
cl_highest_free_mem - node with highest percentage of free memory
cl_highest_idle_cpu - node with the most available processor time
cl_lowest_disk_busy - node with the least busy disks
The cluster manager queries the RMC subsystem every three minutes to obtain the current value of these attributes on each node and distributes them cluster wide. The interval at which the queries of the RMC subsystem are performed is not user-configurable. During a fallover event of a resource group with dynamic node priority configured, the most recently collected values are used in the determination of the best node to acquire the resource group.
For dynamic node priority (DNP) to be effective, consider the following information:
DNP cannot be used with fewer than three nodes.
DNP cannot be used for Online on All Available Nodes resource groups.
DNP is most useful in a cluster where all nodes have equal processing power and memory.
 
Important: The highest free memory calculation is performed based on the amount of paging activity taking place. It does not consider whether one cluster node has less real physical memory than another.
For more details about how predefined DNP values are used, see step 3 on page 391.
Adaptive fallover dynamic node priority
Introduced in PowerHA v7.1, adaptive fallover lets you choose the dynamic node priority based on a user-defined property by selecting one of the following attributes:
cl_highest_udscript_rc
cl_lowest_nonzero_udscript_rc
When you select one of these criteria, you must also provide values for the DNP script path and DNP time-out attributes of the resource group. When the DNP script path attribute is specified, the given script is invoked on all nodes and the return values are collected from all nodes. The fallover node decision is made by using these values and the specified criteria. If you choose the cl_highest_udscript_rc attribute, the collected values are sorted and the node that returned the highest value is selected as the candidate node for fallover. Similarly, if you choose the cl_lowest_nonzero_udscript_rc attribute, the collected values are sorted and the node that returned the lowest nonzero positive value is selected as the candidate node. If the return values of the script are the same on all nodes, or are zero, the default node priority is used instead. PowerHA verifies that the script exists and has execution permissions during cluster verification.
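As an illustration, the following is a minimal sketch of a user-defined DNP script, similar in spirit to the DNP.sh example in 10.3.4. It assumes that paging space utilization is an acceptable measure of node load; the script logic and the use of the lsps command are our own choices, not part of the product:
#!/bin/ksh
# Hypothetical DNP script: return the paging space utilization
# percentage as the exit code. With cl_lowest_nonzero_udscript_rc,
# the node whose script returns the lowest nonzero value is
# preferred as the fallover candidate.
pct=$(lsps -s | awk 'NR==2 { sub("%","",$NF); print int($NF) }')
# A return value of zero would exclude this node from the decision,
# so report at least 1.
[ "$pct" -eq 0 ] && pct=1
exit $pct
If all nodes return the same value, the default node priority order applies, as noted above.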
 
Demonstration: See the demonstration about user-defined adaptive fallover node priority:
10.3.1 Configuring a resource group with predefined RMC-based DNP policy
When you set up DNP for a resource group, resources cannot already be assigned to the resource group. You must assign the Fallover Using Dynamic Node Priority fallover policy when the resource group is created. For your resource group to be able to use one of the three DNP policies, you must set the fallover policy as shown in Example 10-6.
1. Enter the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Add a Resource Group, and press Enter.
2. Set the Fallover Policy field to Fallover Using Dynamic Node Priority (Example 10-6).
Example 10-6 Adding a resource group using DNP
                         Add a Resource Group (extended)
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Resource Group Name [DNP_test1]
* Participating Nodes (Default Node Priority) [alexis jessica jordan] +
 
Startup Policy Online On Home Node O> +
Fallover Policy Fallover Using Dynami> +
Fallback Policy Fallback To Higher Pr> +
3. Assign the resources to the resource group by selecting Change/Show Resources and Attributes for a Resource Group and press Enter, as shown in Example 10-7.
Example 10-7 Selecting the dynamic node priority policy to use
        Change/Show All Resources and Attributes for a Resource Group
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[TOP] [Entry Fields]
Resource Group Name DNP_test1
Participating Nodes (Default Node Priority) alexis jessica jordan
* Dynamic Node Priority Policy [] +
 
Startup Policy Online On Home Node O>
Fallover Policy Fallover Using Dynami>
Fallback Policy Fallback To Higher Pr>
4. Select one of the three available RMC based policies from the pull-down list:
 – cl_highest_free_mem
 – cl_highest_idle_cpu
 – cl_lowest_disk_busy
5. Continue selecting the resources that will be part of the resource group.
6. Verify and synchronize the cluster.
You can display the current DNP policy for an existing resource group (Example 10-8).
Example 10-8 Displaying DNP policy for a resource group
root@ xdsvc1[] odmget -q group=test_rg HACMPresource|more
 
HACMPresource:
group = "test_rg"
name = "NODE_PRIORITY_POLICY"
value = "cl_highest_free_mem"
id = 21
monitor_method = ""
 
Notes:
Using the information retrieved directly from the ODM is for informational purposes only because the format within the stanzas might change with updates or new versions.
Hardcoding ODM queries within user-defined applications is not supported and should be avoided.
10.3.2 How predefined RMC based dynamic node priority functions
ClstrmgrES polls the Resource Monitoring and Control (ctrmc) daemon every three minutes and maintains a table that stores the current memory, CPU, and disk I/O state of each node.
The following resource monitors contain the information for each policy:
IBM.PhysicalVolume
IBM.Host
Each of these monitors can be queried during normal operation by running the commands shown in Example 10-9.
Example 10-9 Querying resource monitors
root@ xdsvc1[] lsrsrc -Ad IBM.Host | grep TotalPgSpFree
TotalPgSpFree = 128829
PctTotalPgSpFree = 98.2887
root@ xdsvc1[] lsrsrc -Ad IBM.Host | grep PctTotalTimeIdle
PctTotalTimeIdle = 99.0069
root@ xdsvc1[] lsrsrc -Ap IBM.PhysicalVolume
Resource Persistent Attributes for IBM.PhysicalVolume
resource 1:
Name = "hdisk2"
PVId = "0x000fe401 0xd39e2344 0x00000000 0x00000000"
ActivePeerDomain = ""
NodeNameList = {"xdsvc1"}
resource 2:
Name = "hdisk1"
PVId = "0x000fe401 0xd39e0575 0x00000000 0x00000000"
ActivePeerDomain = ""
NodeNameList = {"xdsvc1"}
resource 3:
Name = "hdisk0"
PVId = "0x000fe401 0xafb3c530 0x00000000 0x00000000"
ActivePeerDomain = ""
NodeNameList = {"xdsvc1"}
root@ xdsvc1[] lsrsrc -Ad IBM.PhysicalVolume
Resource Dynamic Attributes for IBM.PhysicalVolume
resource 1:
PctBusy = 0
RdBlkRate = 0
WrBlkRate = 39
XferRate = 4
resource 2:
PctBusy = 0
RdBlkRate = 0
WrBlkRate = 0
XferRate = 0
resource 3:
PctBusy = 0
RdBlkRate = 0
WrBlkRate = 0
XferRate = 0
You can display the current table maintained by clstrmgrES by running the command shown in Example 10-10.
Example 10-10 DNP values maintained by cluster manager
[cassidy:root] /inst.images # lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.118 src/43haes/usr/sbin/cluster/hacmprd/main.C,hacmp.pe,61haes_r713,1343A_hacmp713 10/21/"
build = "May 6 2014 15:08:06 1406D_hacmp713"
i_local_nodeid 0, i_local_siteid 2, my_handle 3
ml_idx[3]=0
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 15
local node vrmf is 7131
cluster fix level is "1"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 0 NodeName - cassidy
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
DNP Values for NodeId - 0 NodeName - jessica
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000
CAA Cluster Capabilities
CAA Cluster services are active
There are 4 capabilities
Capability 0
id: 3 version: 1 flag: 1
Hostname Change capability is defined and globally available
Capability 1
id: 2 version: 1 flag: 1
Unicast capability is defined and globally available
Capability 2
id: 0 version: 1 flag: 1
IPV6 capability is defined and globally available
Capability 3
id: 1 version: 1 flag: 1
Site capability is defined and globally available
trcOn 0, kTraceOn 0, stopTraceOnExit 0, cdNodeOn 0
Last event run was JOIN_NODE_CO on node 3
The values in the table are used for the DNP calculation in the event of a fallover. If clstrmgrES is in the middle of polling the current state when a fallover occurs, then the value last taken when the cluster was in a stable state is used to determine the DNP.
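To extract only the cached DNP values from this verbose status output, you can filter it. This is a convenience sketch; the section markers are taken from the output in Example 10-10:
# Print the "Current DNP values" section of the cluster manager status
lssrc -ls clstrmgrES | sed -n '/Current DNP values/,/CAA Cluster Capabilities/p'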
10.3.3 Configuring resource group with adaptive fallover DNP policy
When you define DNP for a resource group, resources cannot already be assigned to the resource group. You must assign the Fallover Using Dynamic Node Priority fallover policy when the resource group is created. For your resource group to be able to use one of the DNP policies, you must set the fallover policy as shown in Example 10-6 on page 387.
Complete the following steps:
1. Enter the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Add a Resource Group, and press Enter. Set the Fallover Policy field to Fallover Using Dynamic Node Priority (Example 10-11).
Example 10-11 Adding a resource group using DNP
                            Add a Resource Group
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Resource Group Name [DNPrg]
* Participating Nodes (Default Node Priority) [jessica cassidy shanl> +
 
Startup Policy Online On Home Node O> +
Fallover Policy Fallover Using Dynami> +
Fallback Policy Never Fallback +
2. Assign the resources to the resource group by selecting Change/Show Resources and Attributes for a Resource Group and then press Enter, as shown in Example 10-12.
Example 10-12 Selecting the dynamic node priority policy to use
           Change/Show All Resources and Attributes for a Resource Group
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[TOP] [Entry Fields]
Resource Group Name DNPrg
Participating Nodes (Default Node Priority) jessica cassidy shanl>
* Dynamic Node Priority Policy [cl_lowest_nonzero_uds> +
DNP Script path [/HA713/DNP.sh] /
DNP Script timeout value [20] #
 
Startup Policy Online On Home Node O>
Fallover Policy Fallover Using Dynami>
Fallback Policy Never Fallback
 
Service IP Labels/Addresses [dallasserv] +
Application Controllers [dummyap] +
3. Select one of the two adaptive fallover policies from the pull-down list:
 – cl_highest_udscript_rc
 – cl_lowest_nonzero_udscript_rc
4. Continue selecting the resources that will be part of the resource group.
5. Verify and synchronize the cluster.
You can display the current DNP policy for an existing resource group as shown in Example 10-13.
Example 10-13 Displaying DNP policy for a resource group
[jessica:root] /HA713 # odmget -q group=DNPrg HACMPresource|pg
 
HACMPresource:
group = "DNPrg"
type = ""
name = "NODE_PRIORITY_POLICY"
value = "cl_lowest_nonzero_udscript_rc"
id = 1
monitor_method = ""
10.3.4 Testing adaptive fallover dynamic node priority
We created a three-node cluster that uses the cl_lowest_nonzero_udscript_rc DNP policy, as shown in Example 10-13. The contents of our DNP.sh script are shown in Example 10-14.
Example 10-14 DNP.sh script contents
[cassidy:root] /HA713 # clcmd more /HA713/DNP.sh
 
-------------------------------
NODE cassidy
-------------------------------
#!/bin/ksh
exit 5
 
-------------------------------
NODE shanley
-------------------------------
#!/bin/ksh
exit 3
 
-------------------------------
NODE jessica
-------------------------------
#!/bin/ksh
exit 1
Although our default node priority list has cassidy listed next, we are using DNP with the lowest nonzero return code, so when a fallover occurs the resource group actually falls over to node shanley.
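Before relying on the policy, you can confirm what each node's script returns. A quick check with the clcmd command (also used in Example 10-14) might look like the following sketch; it assumes that clcmd passes the quoted string to a shell on each node:
# Run the DNP script on every node and print its return code
clcmd "/HA713/DNP.sh; echo DNP return code: \$?"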
 
Demonstration: See the demonstration about this exact fallover:
10.4 Delayed fallback timer
With this feature you can configure the fallback behavior of a resource group to occur at one of the predefined recurring times: daily, weekly, monthly, yearly. Alternatively you can specify a particular date and time. This feature can be useful for scheduling fallbacks to occur during off-peak business hours. Figure 10-4 shows how the delayed fallback timers can be used.
Figure 10-4 Delayed fallback timer usage
Consider a simple scenario with a cluster having two nodes and a resource group. In the event of a node failure, the resource group will fallover to the standby node. The resource group remains on that node until the fallback timer expires. If cluster services are active on the primary node at that time, the resource group will fallback to the primary node. If the primary node is not available at that moment, the fallback timer is reset and the fallback will be postponed until the fallback timer expires again.
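While a fallback timer is pending, it is listed in the Delayed Timers column of the clRGinfo -t output that is introduced in Example 10-1, so a quick check from any node is as follows:
# Display resource group states and any pending delayed timers
/usr/es/sbin/cluster/utilities/clRGinfo -t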
10.4.1 Delayed fallback timer behavior
When using delayed fallback timers, observe these considerations:
The delayed fallback timer applies only to resource groups that have the fallback policy set to Fallback To Higher Priority Node In The List.
If there is no higher priority node available when the timer expires, the resource group remains online on the current node. The timer is reset and the fallback will be retried when the timer expires again.
If a specific date is used for a fallback timer and at that moment there is no higher priority node, the fallback will not be rescheduled.
If a resource group that is part of an Online on the Same Node dependency relationship has a fallback timer, the timer will apply to all resource groups that are part of the Online on the Same Node dependency relationship.
When you use the Online on the Same Site dependency relationship, if a fallback timer is used for a resource group, it must be identical for all resource groups that are part of the same dependency relationship.
10.4.2 Configuring delayed fallback timers
To configure a delayed fallback policy, complete the following steps:
1. Use the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Delayed Fallback Timer Policies → Add a Delayed Fallback Timer Policy, and then press Enter.
2. Select one of the following options:
 – Daily
 – Weekly
 – Monthly
 – Yearly
 – Specific Date
3. Specify the following data (see Example 10-15):
 – Name of Fallback Policy
Specify the name of the policy using no more than 32 characters. Use alphanumeric characters and underscores only. Do not use a leading numeric value or any reserved words.
 – Policy-specific values
Based on the previous selection, enter values suitable for the selected policy.
Example 10-15 Create fallback timer
                     Configure Daily Fallback Timer Policy
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[Entry Fields]
* Name of the Fallback Policy [daily515]
* HOUR (0-23) [17] #
* MINUTES (0-59) [15]
To assign a fallback timer policy to a resource group, complete the following steps:
1. Use the smitty sysmirror fast path and select Cluster Applications and Resources → Resource Groups → Change/Show Resources and Attributes for a Resource Group. Select a resource group from the list and press Enter.
2. Press F4 to select one of the policies configured in the previous steps. The display is similar to Example 10-16 on page 394.
Example 10-16 Assigning a fallback timer policy to a resource group
     Change/Show All Resources and Attributes for a Resource Group
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[TOP] [Entry Fields]
Resource Group Name FBtimerRG
Participating Nodes (Default Node Priority) jessica cassidy shanley
 
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node I>
Fallback Policy Fallback To Higher Priority Node>
Fallback Timer Policy (empty is immediate) [daily515] +
 
Service IP Labels/Addresses [dallasserv] +
Application Controllers [dummyapp] +
 
3. Select a fallback timer policy from the pick list and press Enter.
4. Add any extra resources to the resource group and press Enter.
5. Run verification and synchronization on the cluster to propagate the changes to all cluster nodes.
10.4.3 Displaying delayed fallback timers in a resource group
You can display existing fallback timer policies for resource groups by using the clshowres command as shown in Example 10-17.
Example 10-17 Displaying resource groups having fallback timers
[cassidy:root] /utilities # clshowres -g FBtimerRG | egrep -i "resource group|timer"
Resource Group Name FBtimerRG
Delayed Fallback Timer daily515
An alternative is to query the HACMPtimer object class as shown in Example 10-18.
Example 10-18 Displaying fallback timers using ODM queries
[jessica:root] / # odmget HACMPtimer
 
HACMPtimer:
policy_name = "daily515"
recurrence = "daily"
year = -3800
month = 0
day_of_month = 1
week_day = 0
hour = 17
minutes = 30
 
 
Demonstration: See the demonstration about this exact scenario:
10.5 Resource group dependencies
Large business environments that accommodate sophisticated business solutions are common. Complex applications often contain multiple modules that rely on the availability of various resources. Highly available applications that have a multitiered architecture can use PowerHA capabilities to ensure that all required resources remain available and are started in the proper order. PowerHA can include the components that an application uses in resource groups and establish resource group dependencies that accurately reflect the logical relationships between application components.
For instance, a database must be online before the application server is started. If the database goes down and falls over to a different node, the resource group that contains the application server will also be brought down and back up on any of the available cluster nodes. If the fallover of the database resource group is not successful, then both resource groups (database and application) will be put offline.
To understand how PowerHA can be used to ensure high availability of multitiered applications, understand the following concepts:
Parent resource group
The parent resource group is the first resource group to be acquired during the resource groups acquisition. This resource group does not have any other resource group as a prerequisite. Here, you should include application components or modules that do not rely on the presence of other components or modules.
Child resource group
A child resource group depends on a parent resource group. This type of resource group assumes the existence of another resource group. Here, you should include application components or modules that do rely on the availability of other components or modules.
A child resource group cannot and will not be brought online unless the parent resource group is online. If the parent resource group is put offline, the child resource group is also put offline.
Parent/child dependency
A parent/child dependency allows binding resource groups in a hierarchical manner. There can be up to three levels of dependency between resource groups. A resource group can act both as a parent and as a child. You cannot specify circular dependencies among resource groups. You can also configure a location dependency between resource groups to control the collocation of your resource groups.
Location dependency
Resource group location dependency gives you the means to ensure that certain resource groups will always be online on the same node or site, or that certain resource groups will always be online on different nodes or sites.
Start after dependency
This dependency means the target resource group must be online on any node in the cluster before a source (dependent) resource group can be activated on a node. There is no dependency when releasing resource groups and the groups are released in parallel.
Stop after dependency
In this type of dependency, the target resource group must be offline on any node in the cluster before a source (dependent) resource group can be brought offline on a node. There is no dependency when acquiring resource groups and the groups are acquired in parallel.
10.5.1 Resource group parent/child dependency
You can configure parent/child dependencies between resource groups to ensure that resource groups are processed properly during cluster events.
Planning for parent/child resource group dependencies
When you plan to use parent/child resource group dependencies, consider these factors:
Carefully plan which resource groups will contain which application component. Ensure that application components that rely on the availability of other components are placed in separate resource groups. The resource group parent/child relationships should reflect the logical dependencies between application components.
A parent/child relationship can span up to three levels.
No circular dependencies should exist between resource groups.
A resource group can act as a parent for a resource group and as a child for another resource group.
Plan for application monitors for each application that you are planning to include in a child or parent resource group.
For an application in a parent resource group, configure a monitor in the startup monitoring mode. After the parent resource group is online, the child resource groups will also be brought online.
Configuring a resource group parent/child dependency
To configure parent/child resource group dependency, complete the following steps:
1. Use the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Parent/Child Dependency → Add Parent/Child Dependency between Resource Groups, and press Enter.
2. Complete the fields as follows:
 – Parent Resource Group
Select the parent resource group from the list. During resource group acquisition the parent resource group will be brought online before the child resource group.
 – Child Resource Group
Select the child resource group from the list and press Enter. During resource group release, PowerHA takes the child resource group offline before the parent resource group. PowerHA will prevent you from specifying a circular dependency.
3. Use the Verify and Synchronize option to validate the dependencies and propagate them to all cluster nodes. You can then confirm the relationship as shown below.
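After synchronization, you can confirm the new relationship from the command line by using the same clrgdependency query that is shown in Example 10-19:
# List all configured parent/child dependencies
/usr/es/sbin/cluster/utilities/clrgdependency -t PARENT_CHILD -sl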
10.5.2 Resource group location dependency
You can configure location dependencies between resource groups to control the location of resource groups during cluster events. With PowerHA you can configure the following types of resource group location dependencies:
Online on the Same Node dependency
Online on the Same Site dependency
Online on Different Nodes dependency
You can combine resource group parent/child and location dependencies.
Planning for Online on the Same Node dependency
When you plan to use Online on the Same Node dependencies, consider these factors:
All resource groups that have an Online on the Same Node dependency relationship must have the same node list and the participating nodes must be listed in the same order.
Both concurrent and non-concurrent resource groups are allowed.
You can have more than one Online on the Same Node dependency relationship in the cluster.
All non-concurrent resource groups in the same Online on the Same Node dependency relationship must have identical startup, fallover, and fallback policies.
 – Online Using Node Distribution Policy is not allowed for startup policy.
 – If the Dynamic Node Priority policy is used as the fallover policy, all resource groups in the dependency must use the same DNP policy.
 – If one resource group has a fallback timer configured, the timer also applies to the other resource groups that take part in the dependency relationship. All resource groups must have identical fallback timer settings.
 – If one or more resource groups in the Online on the Same Node dependency relationship fail, cluster services try to place all the resource groups on a node that can accommodate both the resource groups that are currently online and the failed resource groups.
Configuring Online on the Same Node location dependency
To configure an Online on the Same Node resource group dependency, do these steps:
1. Use the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Online on the Same Node Dependency → Add Online on the Same Node Dependency Between Resource Groups, and select the resource groups that will be part of that dependency relationship.
To have resource groups activated on the same node, they must have identical participating node lists.
2. Propagate the change across all cluster nodes by verifying and synchronizing your cluster.
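After synchronization, the dependency can also be inspected in the ODM, as shown in full in 10.5.5, "Displaying resource group dependencies". A targeted query might look like the following sketch; the loc_dep_type value is taken from Example 10-20:
# Show only the node collocation (same node) dependency entries
odmget -q "loc_dep_type=NODECOLLOCATION" HACMPrg_loc_dependency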
Planning for Online On Different Nodes dependency
When you configure resource groups in the Online On Different Nodes dependency relationship, you assign priorities to each resource group in case there is contention for a particular node at any point in time. You can assign High, Intermediate, and Low priorities. Higher priority resource groups take precedence over lower priority resource groups upon startup, fallover, and fallback.
When you plan to use Online on Different Nodes dependencies, consider these factors:
Only one Online On Different Nodes dependency is allowed per cluster.
Each resource group must have a different home node for startup.
When using this policy, a higher priority resource group takes precedence over a lower priority resource group during startup, fallover, and fallback:
 – If a resource group with High priority is online on a node, no other resource group that is part of the Online On Different Nodes dependency can be put online on that node.
 – If a resource group that is part of the Online On Different Nodes dependency is online on a cluster node, and a higher priority resource group in the same dependency falls over or falls back to that node, the higher priority resource group is brought online. The lower priority resource group is taken offline or migrated to another cluster node if one is available.
 – Resource groups that are part of the Online On Different Nodes dependency and have the same priority cannot be brought online on the same cluster node. The precedence of resource groups that are part of the Online On Different Nodes dependency and have the same priority is determined by alphabetical order.
 – Resource groups that are part of the Online On Different Nodes dependency and have the same priority do not cause each other to be moved from a cluster node after a fallover or fallback.
 – If a parent/child dependency is being used, the child resource group cannot have a priority higher than its parent.
Configuring Online on Different Node location dependency
To configure an Online on Different Node resource group dependency, do these steps:
1. Use the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Online on Different Nodes Dependency → Add Online on Different Nodes Dependency between Resource Groups, and press Enter.
2. Complete the following fields and press Enter:
 – High Priority Resource Group(s)
Select the resource groups that will be part of the Online On Different Nodes dependency and should be acquired and brought online before all other resource groups.
On fallback and fallover, these resource groups are processed simultaneously and brought online on different cluster nodes before any other resource groups. If different cluster nodes are unavailable for fallover or fallback, then these resource groups, having the same priority level, can remain on the same node.
The highest relative priority within this set is the resource group listed first.
 – Intermediate Priority Resource Group(s)
Select the resource groups that will be part of the Online On Different Nodes dependency and should be acquired and brought online after high priority resource groups and before the low priority resource groups.
On fallback and fallover, these resource groups are processed simultaneously and brought online on different target nodes before low priority resource groups. If different target nodes are unavailable for fallover or fallback, these resource groups, having same priority level, can remain on the same node.
The highest relative priority within this set is the resource group that is listed first.
 – Low Priority Resource Group(s)
Select the resource groups that will be part of the Online On Different Nodes dependency and that should be acquired and brought online after all other resource groups. On fallback and fallover, these resource groups are brought online on different target nodes after all the higher priority resource groups are processed.
Higher priority resource groups moving to a cluster node can cause these resource groups to be moved to another cluster node or be taken offline.
3. Continue configuring runtime policies for other resource groups or verify and synchronize the cluster.
Planning for Online on the Same Site dependency
When you plan to use Online on the Same Site dependencies, consider these factors:
All resource groups in an Online on the Same Site dependency relationship must have the same Inter-Site Management policy. However, they might have different startup, fallover, and fallback policies. If fallback timers are used, these must be identical for all resource groups that are part of the Online on the Same Site dependency.
The fallback timer does not apply to moving a resource group across site boundaries.
All resource groups in an Online on the Same Site dependency relationship must be configured so that the nodes that can own the resource groups are assigned to the same primary and secondary sites.
The Online Using Node Distribution policy is supported.
Both concurrent and non-concurrent resource groups are allowed.
You can have more than one Online on the Same Site dependency relationship in the cluster.
All resource groups that have an Online on the Same Site dependency relationship are required to be on the same site, although some of them might be in the OFFLINE or ERROR state.
If you add a resource group that is part of an Online on the Same Node dependency to an Online on the Same Site dependency, you must add all other resource groups that are part of the Online on the Same Node dependency to the Online on the Same Site dependency.
Configuring Online on the Same Site Location dependency
To configure an Online on the Same Site resource group dependency, do these steps:
1. Use the smitty sysmirror fast path, select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Online on the Same Site Dependency → Add Online on the Same Site Dependency Between Resource Groups, and press Enter.
2. Select from the list the resource groups to be put online on the same site. During acquisition, these resource groups are brought online on the same site according to the site and the specified node startup policy for the resource groups. On fallback or fallover, the resource groups are processed simultaneously and brought online on the same site.
3. Verify and synchronize the cluster.
10.5.3 Start and stop after dependency
Dependency policies also include Start After dependency and Stop After dependency.
Start After dependency
In this type of dependency, the target resource group must be online on any node in the cluster before a source (dependent) resource group can be activated on a node. There is no dependency when releasing resource groups and the groups are released in parallel.
These are the guidelines and limitations:
A resource group can serve as both a target and a source resource group, depending on which end of a given dependency link it is placed.
You can specify three levels of dependencies for resource groups.
You cannot specify circular dependencies between resource groups.
This dependency applies only at the time of resource group acquisition. There is no dependency between these resource groups during resource group release.
A source resource group cannot be acquired on a node until its target resource group is fully functional. If the target resource group does not become fully functional, the source resource group goes into an OFFLINE DUE TO TARGET OFFLINE state. If you notice that a resource group is in this state, you might need to troubleshoot which resources might need to be brought online manually to resolve the resource group dependency.
When a resource group in a target role falls over from one node to another, there will be no effect on the resource groups that depend on it.
After the source resource group is online, any operation (bring offline, move resource group) on the target resource group does not affect the source resource group.
A manual resource group move or bring resource group online on the source resource group is not allowed if the target resource group is offline.
To configure a Start After resource group dependency, do these steps:
1. Use the smitty sysmirror fast path and select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Start After Resource Group Dependency → Add Start After Resource Group Dependency.
2. Choose the appropriate resource group to complete each field:
 – Source Resource Group
Select the source resource group from the list and press Enter. PowerHA SystemMirror prevents you from specifying circular dependencies. The source resource group depends on services that another resource group provides. During resource group acquisition, PowerHA SystemMirror acquires the target resource group on a node before the source resource group is acquired.
 – Target Resource Group
Select the target resource group from the list and press Enter. PowerHA SystemMirror prevents you from specifying circular dependencies. The target resource group provides services that another resource group depends on. During resource group acquisition, PowerHA SystemMirror acquires the target resource group on a node before the source resource group is acquired. There is no dependency between source and target resource groups during release.
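To confirm the new dependency from the command line, a query analogous to the PARENT_CHILD listing in Example 10-19 might be used. The START_AFTER type keyword here is our assumption by analogy; verify it against your release before scripting with it:
# List start after dependencies (type keyword assumed by analogy)
/usr/es/sbin/cluster/utilities/clrgdependency -t START_AFTER -sl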
Stop After dependency
In this type of dependency, the target resource group must be offline on any node in the cluster before a source (dependent) resource group can be brought offline on a node. There is no dependency when acquiring resource groups and the groups are acquired in parallel.
These are the guidelines and limitations:
A resource group can serve as both a target and a source resource group, depending on which end of a given dependency link it is placed.
You can specify three levels of dependencies for resource groups.
You cannot specify circular dependencies between resource groups.
This dependency applies only at the time of resource group release. There is no dependency between these resource groups during resource group acquisition.
A source resource group cannot be released on a node until its target resource group is offline.
When a resource group in a source role falls over from one node to another, first the target resource group is released and then the source resource group is released. After that, both resource groups are acquired in parallel, assuming that there is no start after or parent/child dependency between them.
A manual resource group move or bring resource group offline on the source resource group is not allowed if the target resource group is online.
To configure a Stop After resource group dependency, do these steps:
1. Use the smitty sysmirror fast path and select Cluster Applications and Resources → Resource Groups → Configure Resource Group Run-Time Policies → Configure Dependencies between Resource Groups → Configure Stop After Resource Group Dependency → Add Stop After Resource Group Dependency.
2. Choose the appropriate resource group to complete each field:
 – Source Resource Group: Select the source resource group from the list and press Enter. PowerHA SystemMirror prevents you from specifying circular dependencies. The source resource group will be stopped only after the target resource group is completely offline. During the resource group release process, PowerHA SystemMirror releases the target resource group on a node before releasing the source resource group. There is no dependency between source and target resource groups during acquisition.
 – Target Resource Group: Select the target resource group from the list and press Enter. PowerHA SystemMirror prevents you from specifying circular dependencies. The target resource group provides services on which another resource group depends. During the resource group release process, PowerHA SystemMirror releases the target resource group on a node before releasing the source resource group. There is no dependency between source and target resource groups during acquisition.
10.5.4 Combining various dependency relationships
When combining multiple dependency relationships, consider the following information:
Only one resource group can belong to both an Online on the Same Node dependency relationship and an Online on Different Nodes dependency relationship.
If a resource group belongs to both an Online on the Same Node dependency relationship and an Online on Different Nodes dependency relationship, then all other resource groups that are part of the Online on the Same Node dependency will have the same priority as the common resource group.
Only resource groups having the same priority and being part of an Online on Different Nodes dependency relationship can be part of an Online on the Same Site dependency relationship.
10.5.5 Displaying resource group dependencies
You can display resource group dependencies by using the clrgdependency command, as shown in Example 10-19.
Example 10-19 Displaying resource group dependencies
[jessica:root] / # clrgdependency -t PARENT_CHILD -sl
# Parent Child
rg_parent rg_child
An alternative is to query the HACMPrg_loc_dependency and HACMPrgdependency object classes, as shown in Example 10-20.
Example 10-20 Displaying resource group dependencies using ODM queries
[jessica:root] / # odmget HACMPrgdependency
 
HACMPrgdependency:
id = 0
group_parent = "rg_parent"
group_child = "rg_child"
dependency_type = "PARENT_CHILD"
dep_type = 0
group_name = ""
root@ xdsvc1[] odmget HACMPrg_loc_dependency
 
HACMPrg_loc_dependency:
id = 1
set_id = 1
group_name = "rg_same_node2"
priority = 0
loc_dep_type = "NODECOLLOCATION"
loc_dep_sub_type = "STRICT"
 
HACMPrg_loc_dependency:
id = 2
set_id = 1
group_name = "rg_same_node_1"
priority = 0
loc_dep_type = "NODECOLLOCATION"
loc_dep_sub_type = "STRICT"
 
HACMPrg_loc_dependency:
id = 4
set_id = 2
group_name = "rg_different_node1"
priority = 1
loc_dep_type = "ANTICOLLOCATION"
loc_dep_sub_type = "STRICT"
 
HACMPrg_loc_dependency:
id = 5
set_id = 2
group_name = "rg_different_node2"
priority = 2
loc_dep_type = "ANTICOLLOCATION"
loc_dep_sub_type = "STRICT"
 
Note: Using the information retrieved directly from the ODM is for informational purposes only, because the format within the stanzas might change with updates or new versions.
Hardcoding ODM queries within user-defined applications is not supported and should be avoided.
 