CHAPTER 14


RAC One Node

by Syed Jaffar Hussain

RAC One Node, widely acknowledged as "the always-on single-server database," was first introduced with Oracle 11gR2 (11.2.0.1) Enterprise Edition to provide High Availability for single-instance databases. RAC One Node is a single-instance RAC database that runs on only one node of a cluster at any given time. The core objective of RAC One Node is to minimize the impact of planned and unplanned server outages on single-instance database availability through its online database migration capabilities. In addition, with its online conversion capabilities, a RAC One Node database can easily be scaled out to a fully functional, traditional RAC database.

The Big Picture

RAC One Node provides a traditional cold-failover solution to confront unplanned database server outages by automatically starting the impacted database, either on the same node or on a preconfigured candidate node in the cluster. For prolonged planned database or server maintenance, RAC One Node offers online database migration between active nodes of a cluster. Likewise, when the node hosting a RAC One Node database becomes overloaded, the database can easily be moved online to another node in the same cluster. Relocating a RAC One Node database instance from one node to another requires no application downtime, as the operation is performed entirely online. Moreover, when the application workload ramps up over time and the business demands additional database resources to handle it efficiently, the RAC One Node database can be quickly upgraded to a multinode, fully functional, traditional RAC database without application downtime.

RAC One Node fully supports Oracle Data Guard, and runs on Exadata machines and Oracle virtual machines (VMs). With RAC One Node, multiple single-instance databases can be consolidated onto a single server with less overhead. In a nutshell, RAC One Node reduces planned and unplanned database outages and significantly improves overall database availability through its unique online migration capabilities.

Note   RAC One Node is not supported on third-party clustering solutions.

Some of the key benefits of RAC One Node are as follows:

  • Easy and fast online database conversion to a traditional RAC database to meet increased workload demands
  • Quick online database migration from one node to another in a cluster during planned software/hardware downtime or when a node becomes overloaded
  • Uninterrupted single-instance service, except during unplanned outages
  • Ability to automatically fail over a database during unplanned outages with no admin intervention
  • Runs on Oracle's Exadata machines
  • Better server consolidation: many single-instance databases can be consolidated onto a single server, with CPU resources managed through the Instance Caging feature
  • Certified compatibility with Oracle VM environments
  • Automated failover of a database when database or cluster health degrades
  • Less expensive than third-party vendor cold-failover technologies

Oracle RAC One Node comes at an additional cost. You will need an additional licensing agreement with Oracle to use RAC One Node in your production environment and to access support for the product. The license cost is based on the number of CPUs on the node where RAC One Node is configured; hence, all nodes where RAC One Node is configured must be licensed. However, the licensing agreement includes a ten-day-per-year rule that allows the database to run on a nonlicensed node, upon database relocation, for up to ten days per year at no additional cost. For more details about licensing and cost, visit the official Oracle website or contact your local Oracle Support.

Upgrading to 11.2.0.2 or Higher

To upgrade a pre-11.2.0.2 RAC One Node database to 11.2.0.2 or higher, the following steps need to be completed:

  1. Upgrade the existing Grid Infrastructure (GI) using the out-of-place upgrade method, if not upgraded already
  2. Install 11.2.0.2 or higher RAC or RAC One Node RDBMS binaries
  3. Use Database Upgrade Assistant (DBUA) tool to upgrade the existing RAC One Node database

    Note   Pre-11.2.0.3 DBUA is not aware of the RAC One Node database type. Hence, after upgrading, the database type will automatically be set to RAC instead of RACOneNode. Therefore, after the database upgrade, you need to explicitly convert the database back to RACOneNode using the srvctl convert command.

  4. As a workaround, before upgrading the database, first convert the RAC One Node database to RAC using the racone2rac.sh script, then use DBUA to upgrade the RAC database. After completing the upgrade, convert the database back to RACOneNode using the srvctl convert command, as sketched below
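The following is a minimal sketch of the workaround flow; the database name RONDB and instance name RONDB_1 are illustrative, and the exact prompts of the racone2rac.sh script vary by patch level:

$ ./racone2rac.sh        # from the pre-11.2.0.2 home: convert the RAC One Node database to RAC
# ... run DBUA from the new 11.2.0.2+ home to upgrade the (now RAC) database ...
$ srvctl convert database -d RONDB -c RACONENODE -i RONDB_1    # convert back to RAC One Node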

Deploying RAC One Node Binaries

Although you can very well make use of the DBCA tool from a typical RAC RDBMS (binaries) home to create a RAC One Node database, optionally, you can also deploy RAC One Node binaries in a separate Oracle Home on the nodes, as per your licensing terms. However, before you install RAC One Node software on any node, ensure the Clusterware software is configured and the cluster stack is up and running on the local node.

The following steps walk through a RAC One Node software installation:

  • Go to the RDBMS software staging location and initiate the ./runInstaller from the command prompt.
  • Select the Oracle RAC One Node database installation option on the Grid Installation Options screen and click Next, as shown in Figure 14-1.


Figure 14-1. Oracle RAC One Node database installation option

  • On the Node Selection screen, select the node names from the given list on which the RAC One Node software binaries should be installed and click Next (Figure 14-2).


Figure 14-2. Node selection

  • Input the software location details on the Specify Installation Location screen and click Next (Figure 14-3).


Figure 14-3. Specify Installation Location screen

  • From here on, just follow the typical installation procedure, running through the rest of the interactive screens, and click Next to move forward.

Deploying a RAC One Node Database

There is no substantial difference in the database creation procedures between RAC and RAC One Node databases. The steps remain the same except for selecting the database type option. Simply follow the same database creation steps that you have already been following over the years.

Note   Prior to 11gR2 Patchset 1 (11.2.0.2), one could create only a single-instance or RAC database with the Database Configuration Assistant (DBCA). Beginning with 11.2.0.2, DBCA is capable of creating a RAC One Node database.

Satisfying Prerequisites

The following prerequisites need to be in place before creating a RAC One Node database (quick verification commands are sketched after this list):

  • GI must be configured, and the cluster stack should be up and running across the nodes
  • RAC One Node software or RAC binaries must be installed across nodes
  • ASM instance should be up and running
  • Required ASM disk groups need to be prepared and mounted across the ASM instances
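These prerequisites can be verified quickly from the command line; the following is a minimal sketch (run as the Grid Infrastructure owner; output will vary by environment):

$ crsctl check cluster -all     # cluster stack status on all nodes
$ srvctl status asm             # ASM instance status
$ asmcmd lsdg                   # confirm the required disk groups are mounted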

Initiating DBCA’s Creation Process

On the local node, initiate the DBCA tool from the $ORACLE_HOME/bin location, as shown in the following example. Doing so starts the database creation procedure. The explanation that follows focuses mainly on those aspects unique to creating a RAC One Node database with DBCA. Here is the command to invoke DBCA:

$ORACLE_HOME/bin/dbca

Follow these steps on the Database Template screen to create a RAC One Node database:

  • Select the Oracle RAC One Node database option from the Database Type drop-down list.
  • Choose the Admin-Managed database type from the RAC Configuration Type drop-down list.
  • Under the Select Template section, select the appropriate database template option, as shown in Figure 14-4.


    Figure 14-4. The DBCA Database Template screen

  • Click Next to proceed.


    Figure 14-5. Database Identification screen

On the Database Identification screen (Figure 14-5), do the following:

Input the Global Database name, SID prefix, and a service name, which will later be used by the application to connect to the database. Avoid making the service name the same as the database name; DBCA raises an error if you do.

Click Next to continue to the next steps, which are illustrated in Figure 14-6.


Figure 14-6. Database placement and node selection options

Select the required nodes from the available nodes list under the Select Nodes section. If you select only the local node, you will get a warning message that you have selected just one node. When multiple nodes are selected, the database is initially created on the local node, and the other nodes are kept as preferred available nodes for online migration purposes.

Click Next to continue. From here on, follow the typical database creation procedure steps that you have been using over the years to create a new database.

Parameters Specific to RAC One Node

You might be wondering whether RAC One Node database-specific parameters exist. As of now, there are none; however, the following parameters on a RAC One Node database reflect a RAC database type:

cluster_database = TRUE
instance_type = RDBMS
instance_number = <instance_number>
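To confirm these values on a running RAC One Node instance, a quick SQL*Plus check such as the following sketch can be used (the instance number and values returned will vary):

SQL> SELECT name, value FROM v$parameter
  2  WHERE name IN ('cluster_database', 'instance_type', 'instance_number');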

Managing RAC One Node Database

This section focuses on RAC One Node database administration concepts. You will learn how to extract RAC One Node database configuration details and identify the predefined candidate server list for instance failover or online migration. You’ll also learn to distinguish the type of database and relocate the database online.

Verifying Configuration Details

Verify the RAC One Node database configuration details after creating the database. The following example extracts useful information about the RAC One Node database configuration:

$ srvctl config database -d RONDB

Database unique name: RONDB
Database name: RONDB
Oracle home: /u01/app/oracle/product/12.1.0/ron
Oracle user: oracle
Spfile: +DG_RONDB/RONDB/spfileRONDB.ora
Password file: +DG_RONDB/rondb/orapwrondb
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RONDB
Database instances:
Disk Groups: DG_RONDB
Mount point paths:
Services: RONDB_MAIN
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONDB
Candidate servers: rac1,rac2
Database is administrator managed

Let’s have a closer look at some of the key configuration elements in the preceding listing.

  • Type: This is one of the vital attributes requiring your attention. It identifies the database type: RAC or RACOneNode.
  • Online relocation timeout: Displays the timeout duration (in minutes) allowed for ongoing transactions on the instance to complete before the instance is terminated on the local node as part of an online database relocation. Typically, when online database relocation is initiated, the instance waits the specified amount of time for active transactions to complete before it is terminated. If a session can't complete all its active transactions within the allotted time frame, the transactions are canceled and the session is subsequently terminated. The -w option lets you specify a preferred timeout of up to 720 minutes (see the sketch after this list).
  • Candidate servers: Lists the names of all candidate nodes eligible for instance failover. In the event of instance failure, Oracle selects a server from this list in no particular order to fail over the instance. Hence, it is strongly recommended to use SCAN to connect to the database, to avoid connection failures when the instance migrates or fails over to a different node (a sample SCAN-based connect descriptor also follows this list).
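As mentioned above, the relocation timeout is adjustable. The following is a minimal sketch using srvctl modify; the database name RONDB and the 10-minute value are illustrative:

$ srvctl modify database -d RONDB -w 10    # set the online relocation timeout to 10 minutes

Likewise, a SCAN-based tnsnames.ora entry for the RONDB_MAIN service might look like the following sketch; the SCAN name cluster-scan.example.com is an assumption for illustration:

RONDB_MAIN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = RONDB_MAIN))
  )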

Verifying the Online Relocation Status

Execute the following command to verify the database’s online relocation status:

$ srvctl status database -d RONDB
Instance RONDB_1 is running on node rac1
Online relocation: INACTIVE

This output illustrates that the RONDB_1 instance is currently active on node rac1. More importantly, no online database relocation is taking place at this moment.

Stop and Start the Database

As with typical RAC and non-RAC databases, a RAC One Node database can be stopped and started either from the SQL*Plus prompt or with the srvctl utility. Although srvctl is highly recommended for such database management operations, you can fall back on SQL*Plus to start or stop the database if srvctl is unable to complete the job.

The following cluster command stops the database:

$ srvctl stop database -d RONDB

In contrast to RAC and non-RAC databases, starting a RAC One Node database has something new to offer. On instance failure or stop, Oracle will select an eligible candidate server from the predefined candidate server list to bring up the instance automatically. Presuming that Node A and Node B are in the candidate server list and the database was running on Node A before the failure or shutdown, Oracle may pick either server from the list to start the replacement instance. Therefore, there is no guarantee that the new instance will restart on the same node.

When you want to start the database on a particular node with the srvctl utility, use the -n argument to specify the name of that node. For automatic instance failover, Oracle might select the candidate server in the order displayed in the config output.

Use the following example to start up the database:

$ srvctl start database -d RONDB -n rac1

Performing Online Database Relocation

Just for a moment, imagine yourself in any of the following circumstances surrounding common maintenance activity:

  • You are planning a prolonged maintenance activity on the database server
  • The node on which your RAC One Node database is running becomes overloaded, running out of resources and capacity
  • You are applying Oracle patches on the node
  • You need to move the database over to another node in the cluster to minimize the downtime during a planned outage, perhaps as a prelude to decommissioning an older node
  • You are performing a server upgrade or patching

In these contexts, database availability is critically important. The online database relocation feature sets RAC One Node apart from traditional third-party failover technologies. Online database movement from one node to another in a cluster is one of the key features of RAC One Node, and its ability to relocate a database without disrupting applications or end users can't be overemphasized.

The following example kicks off an online database migration with a 5-minute timeout to complete any ongoing transactions in the current instance. The -w option specifies the preferred timeout in minutes, and the -v option enables verbose output:

$ srvctl relocate database -d RONDB -n rac2 -w 5 -v

The following will be displayed if the target node is not part of the predefined server availability list:

Added target node rac2
Configuration updated to two instances
 
Instance RONDB_2 started
Services relocated
Waiting for 5 minutes for instance RONDB_1 to stop.....
Instance RONDB_1 stopped
Configuration updated to one instance

If you are migrating the database to a new node on which an instance was not previously configured, RAC One Node automatically prepares the prerequisites for the second instance: it creates an UNDO tablespace, adds new redo logs, and so forth (a quick verification sketch follows the numbered steps below). The relocation proceeds as follows:

  1. RAC One Node configures instance 2 and starts it up on the target host.
  2. Database services are automatically relocated to the target host.
  3. A transactional shutdown is issued on the first instance.
  4. RAC One Node waits 5 minutes for any active sessions to complete their ongoing transactions on the local instance. During this short period, the database is effectively an active/active cluster with two instances up and running; however, only instance 2 receives new connections.
  5. If a session can't complete its current transaction within the specified timeout, the transaction is canceled and the session is terminated.
  6. If all transactions complete before the given timeout, the instance is stopped sooner.
  7. Existing connections are gracefully migrated to instance 2.
  8. If any active connections remain on instance 1, that instance is stopped with a shutdown abort command.
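As a quick check that the second thread's redo and undo were indeed created during such a relocation, a SQL*Plus sketch like the following can be used (names and group numbers will vary):

SQL> SELECT thread#, group#, status FROM v$log ORDER BY thread#;
SQL> SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'UNDO';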

In the alert.log of the first instance, you will notice the transactional shutdown described in step 3. In addition, you will notice that Oracle updates the timeout setting for active sessions to complete their transactions. The command to look for in the log is the following:

ALTER SYSTEM SET shutdown_completion_timeout_mins=30 SCOPE=MEMORY;
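A quick way to locate this entry is to search the alert log; the path below assumes the default ADR layout and is illustrative only:

$ grep -i shutdown_completion_timeout \
    $ORACLE_BASE/diag/rdbms/rondb/RONDB_1/trace/alert_RONDB_1.log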

When an active session can't complete its ongoing transaction within the defined timeout, the transactional shutdown is replaced by a shutdown abort, which kills the still-active transactions without waiting for them to complete.

While online database relocation is taking place, you can verify the relocation status from another window on the local node by executing a srvctl status command, as shown in the following example:

$ srvctl status database -d RONDB

Instance RONDB_1 is running on node rac1
Online relocation: ACTIVE
Source instance: RONDB_1 on rac1
Destination instance: RONDB_2 on rac2

The online database relocation status shows an ACTIVE status during relocation. Furthermore, the output gives a clear idea about where the source and target instances are located and whether they are up and configured.

Now verify the configuration after the online relocation completes by issuing the srvctl command in this next example:

$ srvctl config database -d RONDB

Database unique name: RONDB
Database name: RONDB
...
Services: RONDB_MAIN
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: RONDB
Candidate servers: rac2,rac1
Database is administrator managed

THE OMOTION UTILITY

In this section, I will briefly discuss the utility that was originally used to perform online database relocation. The Omotion utility was initially shipped with Oracle 11gR2 (11.2.0.1) on the Linux platform to support Oracle RAC One Node online database migration between nodes in the same cluster. The utility played a significant role in moving an instance from one node to another with no downtime or disruption to service availability.

In releases before 11.2.0.2, the Omotion utility was explicitly used to perform an online instance movement between nodes in the same cluster. Starting with 11.2.0.2, Omotion functionality is part of the srvctl relocate command.

Handling Unplanned Node and Cluster Reboots

Here is a delicate question for your consideration: how do you deal with unplanned server outages? Perhaps you are using a cold-failover technology from a third-party vendor, a technology such as IBM’s HACMP, HP’s Serviceguard, or Veritas’ Cluster Server. Although this software provides a solution to handle unplanned outages, you are still required to have an administration team—composed of either DBAs or operating system administrators—to intervene to manage the situation. This could mean that it takes a while to complete the failover. Even if you can automate the process to handle unplanned node outages, how do you deal with the problem of hung and unresponsive databases?

A RAC One Node database requires no DBA or operating system administrator intervention to complete the activities for recovering from unplanned server outages. RAC One Node automatically performs cold failover without administrator intervention. If you suspect that a failover has occurred, you can execute the previously mentioned srvctl status command to verify the node on which the database is currently active.

Unlike the third-party cold-failover technologies, Oracle RAC One Node is tightly integrated with Oracle Clusterware and is closely monitored by the cluster. When a node suffers an unplanned outage, Oracle Clusterware will detect the failure and try to bring the database up on the same server in the first attempt. If the server is inaccessible for any reason, Oracle Clusterware will automatically relocate the impacted database onto a predetermined alternate server within the cluster. This is all managed automatically and avoids any DBA intervention.

Figure 14-7 depicts a cluster node reboot scenario in a two-node cluster environment where the RONDB database automatically fails over from Node A to Node B. As you can see in the figure, once Node A became unavailable, the RONDB_1 instance was restarted on Node B as RONDB_2; at that point, Node B had two database instances running: RONDB_2 and its own existing instance.


Figure 14-7. The cluster node reboot scenario

Although very little application downtime is involved, Oracle RAC One Node's ability to automatically restart and relocate the impacted database services onto a surviving node puts it far ahead of traditional third-party cold-failover solutions and virtualization solutions.

Converting Between RAC One Node and Standard RAC

With Oracle RAC One Node, you can easily scale out the database to a fully functional, traditional RAC database, subject to the licensing terms, to meet and serve increased business workload demands. With RAC One Node, these scaling changes can be made online. No application downtime is needed.

Scaling Up to Standard RAC

The following command initiates the online upgrade of a RAC One Node database to a fully functional RAC database:

$ srvctl convert database -d RONDB -c RAC

The following slightly modified syntax is used for the same conversion on a 12c database:

$ srvctl convert database -db db_unique_name -dbtype RAC [-node node_name]

In this command, the -c parameter (replaced by -dbtype in 12c) has two options: RAC and RACONENODE. Specify RAC to scale up, taking a RAC One Node database to a standard RAC database.

Execute the srvctl convert database command on the node on which RAC One Node is running. Otherwise, you will get the following error message:

PRKO-2159 : Option '-i' should be specified to convert an administrator-managed RAC database to its equivalent RAC One Node database configuration

Specify the -n option when the RAC One Node database is not running on the local node. The following is the syntax to use:

$ srvctl convert database -d <dbname> -c RAC [-n <node_name>]

In this syntax, the parameter -n specifies the node name on which the RAC One Node database is running.

When the conversion procedure completes successfully, you can verify the database configuration to ensure that the database has successfully converted to RAC. Do so by issuing the srvctl config command as in the following example:

$ srvctl config database -d RONDB
      
Database unique name: RONDB
Database name: RONDB
..........
Database instances: RONDB_2
Disk Groups: DG_RONDB
Mount point paths:
Services: RONDB_MAIN
Type: RAC
Database is administrator managed

At this point, you now have a full-fledged RAC database with a single instance. You will have to add additional instances according to your business needs. You can do so using the following syntax:

$ srvctl add instance -d RONDB -i RONDB_1 -n rac1

After adding an instance, bring up the new instance using the following srvctl command:

$ srvctl start instance -d RONDB -i RONDB_1

The RONDB database now has two instances. Verify using the following command:

$ srvctl status database -d RONDB
 
Instance RONDB_2 is running on node rac2
Instance RONDB_1 is running on node rac1

Scaling Down to RAC One Node

The process to fall back or scale down a RAC database to RAC One Node is pretty simple and straightforward. However, the following prerequisites must be fulfilled:

  • The RAC database must not contain more than one instance. If it does, all instances except one must be stopped and removed from the RAC database to avoid the "PRCD-1214 : Administrator-managed RAC database RONDB has more than one instance" error. Use the following commands to stop and remove instance 2:

    $ srvctl stop instance -d RONDB -i RONDB_2
    $ srvctl remove instance -d RONDB -i RONDB_2

  • At the same time, ensure that the instances that are going to be removed are not in the preferred instances list of any database services. If they are, modify the existing database services accordingly (a sketch of such a modification follows this list).
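The following sketch removes a soon-to-be-dropped instance from a service's preferred list; the service name RONDB_MAIN and the instance names are illustrative:

$ srvctl modify service -d RONDB -s RONDB_MAIN -n -i RONDB_1    # keep only RONDB_1 as the preferred instance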

When you’ve met all the prerequisites, issue the following command to convert a RAC database to a RAC One Node database:

$ srvctl convert database -d RONDB -c RACONENODE -i RONDB_2 -w 5

When the command completes, your standard RAC database will now be a RAC One Node database. To verify the database type, use the srvctl config command.

Managing RAC One Node with Cloud Control 12c

As stated earlier, an Oracle RAC One Node database can be thoroughly monitored and managed with Cloud Control 12c. This section demonstrates some of these actions using a sequence of screenshots. However, we won't describe how to install Oracle Enterprise Manager (OEM) agents and discover the targets, as that process is no different from usual.

Note   Management with OEM 11gR2 is possible, although making use of Cloud Control 12c is the preferred method. For this reason, we will limit our discussion to the use of Cloud Control 12c. If you currently have OEM 11gR2, then visit Oracle Support's website to obtain the required patches to make RAC One Node work with OEM 11gR2.

Database Relocation with Cloud Control 12c

To perform a relocation, go to the Availability menu. Select Cluster Database Operations ➤ Online Database Relocation. The screen is shown in Figure 14-8.


Figure 14-8. Database Availability screenshot

You then need to input the database user credentials to connect to the database. After you've also input the host credentials, the relocation screen shown in Figure 14-9 will be displayed.


Figure 14-9. The RAC One Node Online Database Relocation screen

Verify the current online relocation status, which in this case is "Inactive." The online relocation timeout can be changed from the current (default) value; whatever value you define is passed to the srvctl utility's relocate command in the -w parameter.

The Destination Server field is empty if only one server is available to relocate to; if your cluster contains more servers, the field will present a list of the available servers. If a previous relocation took place, the name of the relocation job will be listed as well.

Provide the required relocation details and click the Start Online Database Relocation button. This button is shown in Figure 14-10. A job will be created to execute the relocation.


Figure 14-10. The buttons at the bottom of the Online Database Relocation screen

When the job is submitted, a message is displayed; the relocation then takes place in the background. You can see such a submission in Figure 14-11.


Figure 14-11. Startup of an online database relocation job

Figure 14-12 shows that the relocation is complete and that the job was successful. Figure 14-13 shows the resulting status. You can see that one instance is up, and the other is down. The database has been successfully relocated to the instance that shows as up.


Figure 14-12. Relocation job status


Figure 14-13. EM12c reporting one instance as up and the other as down

Third-Party Cold Failover vs. RAC One Node

A third-party technology such as HP's Serviceguard, IBM's HACMP, or Veritas Cluster Server can be used to address unplanned server outages. These technologies manage a database in an active/passive configuration to offer a failover solution for a single-instance database. In the event of a server failure, the passive instance on the other node is started manually or with automated scripts. However, application connections need to be redirected to the newly active instance.

With RAC One Node, Oracle offers a similar solution to deal automatically with long unplanned as well as planned downtimes. However, Oracle RAC One Node has the upper hand over any traditional third-party database failover solution. The following are some things that Oracle RAC One Node provides that the competition does not:

  • Database and OS patching while the database continues to be available
  • Online storage migration and consolidation when ASM is used
  • Easy upgrade to a full RAC database, with no downtime
  • Monitoring of database and Clusterware health status
  • Faster database failure notification to clients, leading to quicker reconnections
  • Automatic restart of an instance in the event of problems, either on the same node in the cluster or on a different node, whichever is needed

Most importantly, RAC One Node provides a single-vendor solution. You can easily implement it without having to tie together multiple products. Support comes from a single source.

Summary

In a nutshell, this chapter explained the concepts and overall benefits of Oracle's RAC One Node feature. We covered deployment of RAC One Node binaries and how to create a RAC One Node database using the DBCA tool. We also demonstrated online database relocation and showed how to scale out to a full RAC database to meet increased workload demands. When you need a High Availability solution for your single-instance database to confront planned and unplanned outages, RAC One Node is a perfect choice.
