CHAPTER 13


Clusterware and Database Upgrades

by Syed Jaffar Hussain

Whenever Oracle announces a new version, the million-dollar question facing every organization and every DBA is whether to upgrade their current (and mostly stable) environment. There is little doubt that the prospect of an upgrade gives you, as a DBA, butterflies in your stomach and the feeling that it will take a lot of hard work to make the process successful. It’s hardly surprising that many organizations are afraid to accept the risks of an upgrade.

The objective of this chapter is to explain the key elements that drive an organization to consider upgrading. It also offers the best practices, tips, and tricks to upgrade your current Oracle Grid Infrastructure (GI) and databases to the new Oracle 12c.

Typically, people reject the risk of an upgrade for the following main reasons:

  • The organization’s legacy application doesn’t support the new Oracle release.
  • The organization doesn’t want to face new challenges that might materialize after an upgrade.
  • There are no experienced or skilled DBAs to perform such an important action.
  • Resources, such as a nonproduction environment in which to simulate the change before taking it into production, are lacking.

Here are a few key factors that drive an organization or DBA to consider an upgrade of their existing environment nevertheless:

  • To stay up to date with the latest technology as dictated by standards or security policies
  • To address or fix some of the critical issues encountered in the current environment
  • To leverage the enhancements of the new version to improve database performance or flexibility
  • To continue to receive Oracle Support; Oracle will stop providing bug fixes, patches, etc., for the older version after a grace period or once the version’s extended support window ends

Configuration

This chapter used the following setup configuration to demonstrate an Oracle Clusterware 11gR2 (11.2.0.3) and Database upgrade to Oracle 12c (12.1.0.0.2):

  • Oracle 11gR2: 11.2.0.3
  • RedHat Enterprise Linux 5: OS
  • Oracle ASM: for database storage
  • Two-node RAC: rac1, rac2
  • RAC database with two instances: PRDDB (prddb1,prddb2)
  • Oracle 12c: 12.1.0.0.2
  • OCR (Oracle Cluster Registry) and voting disk: on ASM diskgroup
  • /u00 filesystem: 100GB space
  • Following RPMs:
      • binutils-2.17.50.0.6
      • compat-libstdc++-33-3.2.3
      • compat-libstdc++-33-3.2.3 (32 bit)
      • gcc-4.1.2
      • gcc-c++-4.1.2
      • glibc-2.5-58
      • glibc-2.5-58 (32 bit)
      • glibc-devel-2.5-58
      • glibc-devel-2.5-58 (32 bit)
      • ksh
      • libaio-0.3.106
      • libaio-0.3.106 (32 bit)
      • libaio-devel-0.3.106
      • libaio-devel-0.3.106 (32 bit)
      • libgcc-4.1.2
      • libgcc-4.1.2 (32 bit)
      • libstdc++-4.1.2
      • libstdc++-4.1.2 (32 bit)
      • libstdc++-devel-4.1.2
      • libXext-1.0.1
      • libXext-1.0.1 (32 bit)
      • libXtst-1.0.1
      • libXtst-1.0.1 (32 bit)
      • libX11-1.0.3
      • libX11-1.0.3 (32 bit)
      • libXau-1.0.1
      • libXau-1.0.1 (32 bit)
      • libXi-1.0.1
      • libXi-1.0.1 (32 bit)
      • make-3.81
      • sysstat-7.0.2

Pre-Upgrade Checklist

Every successful deployment needs careful planning and must go through prerequisite verification before upgrading an Oracle environment. You are therefore advised to work through some of the important pre-upgrade tasks mentioned in the subsequent sections to ensure a smooth and successful upgrade.

Oracle Software Backups

Before any upgrade, it is essential to have an up-to-date backup of the existing Oracle software (the GI and RDBMS homes) and of the cluster’s critical components, the OCR and OLR (Oracle Local Registry), to protect yourself from any disaster that might arise during or after the upgrade. We recommend that you consider the following backup actions:

  • Back up your current GI and RDBMS homes, preferably the entire Oracle software filesystem.
  • Perform a manual physical OCR/OLR backup.
  • Perform a full Oracle Database backup of the source databases.
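The following is a minimal sketch of these three backups, using the Grid home path and database from our configuration; the /backup destination is an assumption, so adjust it to your environment:

# tar -czf /backup/grid_11203_home.tar.gz /u01/app/11203/grid
# /u01/app/11203/grid/bin/ocrconfig -manualbackup
# /u01/app/11203/grid/bin/ocrconfig -local -manualbackup
 
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

The first command (run as root) archives the GI home; the two ocrconfig commands take manual physical backups of the OCR and OLR, respectively; and the RMAN command takes the full database backup.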

Validating Node Readiness

Validating node (server) readiness prior to the upgrade procedure is one of the key steps of a successful upgrade. It is mandatory to verify all the prerequisites, such as storage, OS kernel parameters, OS patches, etc., on the current nodes to ensure that they meet the minimal requirements to proceed with the upgrade.

Prerequisite validations can be performed in two ways: with the Cluster Verification Utility (runcluvfy.sh) or by letting the Oracle Universal Installer (OUI) perform the verification as part of the prerequisite checking during the upgrade process. The following command demonstrates how to use the cluvfy utility to perform the verification prior to the Clusterware upgrade:

./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -fixup
-src_crshome /u01/app/11203/grid -dest_crshome /u01/app/12.1.0/grid -dest_version 12.1.0 -verbose

image Note   The runcluvfy.sh script must be executed from the Oracle 12c software staging location.

In the event of any validation failures produced by the preceding command, you should fix the issues and re-execute the command to ensure that all the prerequisite checks have been met.
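When the -fixup flag is used, cluvfy also generates a fixup script for failures it can correct automatically, such as kernel parameter settings, and it must be run as root on each affected node. The generated location varies by run; the path below is only an illustration:

# /tmp/CVU_12.1.0.0.2_oracle/runfixup.sh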

Unsetting Oracle Variables

Ensure the Oracle variables for GRID and RDBMS homes on the local node are unset prior to cluster upgrade. The following example demonstrates how to unset Oracle variables on the Unix/Linux platforms:

$ unset ORACLE_HOME ORA_CRS_HOME TNS_ADMIN ORA_NLS10

The $ORACLE_HOME/bin path should not be part of the current PATH variable settings on the local node. If it is, remove the path from the current PATH settings.
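A quick sketch of checking and scrubbing the PATH on Linux, assuming the 11.2 Grid home path from our configuration:

$ echo $PATH | tr ':' '\n' | grep -i grid
$ export PATH=$(echo "$PATH" | sed -e 's|/u01/app/11203/grid/bin:||g')

The first command lists any PATH entries pointing at the old Grid home; the second removes the matching entry for the current session.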

Upgrading Flexibility and Restrictions

A few important points to keep in mind before proceeding with an upgrade process:

  • Depending on your needs, you have the flexibility to choose a rolling cluster upgrade or a non-rolling cluster upgrade method. A rolling upgrade option lets you perform an upgrade on one node at a time while the other nodes in the cluster go on working, hence requiring less application downtime for the activity.
  • For a non-rolling cluster upgrade, on the other hand, you will have to stop the cluster stack completely across all nodes in the cluster. This type of upgrade requires more downtime and a complete system stop.
  • The Clusterware and the ASM instance can be upgraded together in one pass, or you may choose to upgrade them individually. With the latter approach, you upgrade the Clusterware first and then, after a successful cluster upgrade to 12c, upgrade the ASM instance using the ASM Configuration Assistant tool (asmca) from the 12c Grid home. Upgrading the Clusterware and ASM instance together is highly recommended.
  • Before upgrading to 12c, the OCR and voting disks must be migrated from RAW/Block devices to ASM diskgroup (refer to Chapter 2 to learn how to migrate OCR/voting disks to ASM diskgroups), as Oracle 12c no longer supports RAW/Block devices. If the OCR/voting disk files are already on ASM, then you can safely ignore this suggestion.
  • You can’t directly upgrade Oracle 10g, 11gR1, or 11gR2 Clusterware to an Oracle 12c Flex Cluster; it can be upgraded only to a standard cluster. To make use of Oracle 12c Flex Cluster benefits, all nodes in the cluster must be successfully upgraded to 12c before the Flex Cluster is configured.
  • Only the owner of existing GI software can perform an upgrade to Oracle 12c. You can’t use a different user to upgrade the existing GI home.
  • Upgrades on shared Oracle homes are feasible.

Cluster and ASM Oracle 12c Upgrade Compatibility Matrix

Before upgrading the current environment, review the compatibility matrix to determine whether the current environment supports a direct upgrade to Oracle 12c. You also need to be prepared for the case where it does not: how then would you proceed?

Table 13-1 depicts a compatibility matrix for performing direct Oracle 12c upgrades from earlier Oracle versions.

Table 13-1. Cluster Upgrade Compatibility Matrix

Oracle Version             Compatibility
Oracle 10gR1 (10.1.0.5)    Direct upgrade possible
Oracle 10gR2 (10.2.0.3)    Direct upgrade possible
Oracle 11gR1 (11.1.0.6)    Direct upgrade possible
Oracle 11gR2 (11.2.0.2)    Direct upgrade possible; patch set 11.2.0.2.3 (PSU 3) or later must be applied

If your current Oracle environment doesn’t match any of the Oracle versions listed in Table 13-1, you will have to perform an indirect upgrade, in which you need to first either apply a patch or upgrade it to the version specified in the table before you upgrade it to 12c.
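To see where your cluster stands against Table 13-1, you can query the active Clusterware version from the existing Grid home before you begin; for example, with the 11.2.0.3 home used in this chapter:

$ /u01/app/11203/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]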

Launching OUI

When all the prerequisites are met, and all upgrade best practices have been performed, you are good to go with the upgrade process. So, let’s get into the real action. Navigate through the Oracle 12c GI software staging location and launch OUI by running the following command:

./runInstaller

Initiating an Oracle Clusterware Upgrade

The guidelines provided here, with the help of the screenshots, will demonstrate a two-node Oracle 11gR2 (11.2.0.3) GI upgrade to Oracle 12c (12.1.0.0.2) on Linux 5 OS.

  1. If you don’t want to download software updates from My Oracle Support, select the Skip Software updates option on the Software Updates screen.
  2. Select the Upgrade Oracle GI or Automatic Storage Management option on the Installation Option screen, as demonstrated in Figure 13-1.

    9781430250449_Fig13-01.jpg

    Figure 13-1. Select Installation Option screenshot

  3. Select the preferred languages based on your requirements on the Select Product Languages screen.
  4. Select all nodes from the given list as part of the cluster upgrade procedure on the Node Selection screen, as demonstrated in Figure 13-2.

    9781430250449_Fig13-02.jpg

    Figure 13-2. GI Node Selection screenshot

  5. When the GI Management Repository is configured as part of a new Oracle 12c installation or upgrade, a new database is created to maintain Cluster Health Monitor (CHM) data and data for other cluster features. The historical data kept in this database can be used later to analyze cluster issues. At the time of writing this chapter, there is no option to configure the GI Management Repository afterward if you skip it during the installation or upgrade. If you select the configuration (Figure 13-3), the OCR and voting disks will be automatically created in the same ASM diskgroup used for the MGMT database.

    9781430250449_Fig13-03.jpg

    Figure 13-3. GI Management Repository Option screenshot

  6. Set the appropriate groups on the Operating System Groups screen.
  7. Input the Oracle 12c GI location details on the Installation Location screen, as shown in Figure 13-4.

    9781430250449_Fig13-04.jpg

    Figure 13-4. Specify Installation Location screenshot

  8. As of 11gR2, all upgrades are performed as out-of-place upgrades, where the new GI home must reside in a different location from the existing GI home. Therefore, you must enter a different GI home location for 12c.
  9. Starting in Oracle 12c, you have the flexibility to automate the root.sh or rootupgrade.sh execution across all nodes in the cluster during a new GI installation or upgrade deployment. You can choose to automate the execution of the script either by providing the root user login credentials or by defining a sudo user to execute the script automatically. Automating the script execution eliminates the need for manual execution.
  10. If you don’t select the Automatically run configuration scripts option, you will have to run the script manually when prompted, as you used to do in earlier releases (Figure 13-5).

    9781430250449_Fig13-05.jpg

    Figure 13-5. Root script execution configuration screenshot

  11. It is common practice to run the root.sh script in parallel to expedite the deployment when a large number of nodes is involved. For example, in earlier Oracle releases, with, say, six nodes in the deployment, we could initiate the script as follows:
    • First, run the script on the local node, that is, node 1.
    • Run the script in parallel on nodes 2 to 5.
    • Finally, execute the script on the last node, that is, node 6.
  12. Similar behavior can now be achieved in 12c by grouping the nodes into batches. You can spread all your nodes across up to a maximum of three batches. The Installer will then run the root.sh/rootupgrade.sh script on the nodes batch by batch, asking for confirmation before proceeding to the next batch. Figure 13-6 describes this.

    9781430250449_Fig13-06.jpg

    Figure 13-6. Select batches for nodes screenshot

  13. The Installer then performs the prerequisite verification checks and displays the result on the Prerequisite Checks screen. Look for any verification failures and take appropriate action before moving forward.
  14. Review the Summary report, and if you find anything colored red, take the appropriate steps and then proceed to the real action.
  15. The Installer subsequently proceeds with the installation of Oracle 12c GI and other steps, as shown in Figure 13-7.

    9781430250449_Fig13-07.jpg

    Figure 13-7. Install Product screenshot

  16. Since we enabled automation of root.sh execution, you will have to respond with Yes to execute the root.sh automatically, as demonstrated in Figure 13-8.

    9781430250449_Fig13-08.jpg

    Figure 13-8. Install Product root execution screenshot

    If you have opted to run the root.sh/rootupgrade.sh script manually, the screen shown in Figure 13-9 will appear.

    9781430250449_Fig13-09.jpg

    Figure 13-9. rootupgrade script execution screenshot

    Based on your response or action, the Installer then proceeds with running the script on the nodes, as per the instructions (batches) specified previously. While the script is being executed, the Install Product screen shows on which node the script is currently running.

    After the script completes on the local node, you can query the CRS active/software versions from another command prompt on the local node. You will notice that the CRS software version shows the new version, whereas the CRS active version remains at the previous release. This is the expected result: until the script completes on all nodes, the CRS active version stays at the previous release.

    $ ./crsctl query crs activeversion
    Oracle Clusterware active version on the cluster is [11.2.0.3.0]
     
    $ ./crsctl query crs softwareversion
    Oracle Clusterware version on node [rac1] is [12.1.0.0.2]

    After successfully completing the execution of rootupgrade.sh on all nodes, the Installer then proceeds with configuring the GI Management Repository database, if selected. This can be observed in Figure 13-10.

    9781430250449_Fig13-10.jpg

    Figure 13-10. Progress bar screenshot

  17. Complete the upgrade procedure when the status bar shows 100% completion. You can then verify the active and software versions using the following examples. Both outputs must show the same cluster version, 12.1.0.0.2 in our example.
    $ ./crsctl query crs activeversion
    Oracle Clusterware active version on the cluster is [12.1.0.0.2]
     
    $ ./crsctl query crs softwareversion
    Oracle Clusterware version on node [rac1] is [12.1.0.0.2]

The Importance of the Rootupgrade.sh Script

Until the rootupgrade.sh script is executed, only the software binaries are deployed under the respective homes on the local and remote nodes. The rootupgrade.sh script actually performs the core Clusterware stack configuration and upgrade. Specifically, the script performs the following core duties:

On the local node (first node):

  1. ASM upgrade will be performed.
  2. OLR will be configured accordingly.
  3. The cluster auto start entries in the /etc/inittab file on the OS will be modified.
  4. The existing cluster stack will be shut down.
  5. The Oracle 12c cluster stack on the local node will be started up.
  6. The OCR will be configured with necessary information before displaying the success Cluster configuration message.

The following output is expected when the rootupgrade.sh script is executed on the local and other nodes of the cluster, except the last node in the cluster:

[root@rac1 oracle]# /u01/app/12.1.0/grid_1/rootupgrade.sh
Performing root user operation for Oracle 12c
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid_1
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_1/crs/install/crsconfig_params
2013/04/12 10:07:18 CLSRSC-363: User ignored prerequisites during installation
 
ASM upgrade has started on first node.
 
OLR initialization - successful
2013/04/12 10:12:29 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
 
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
2013/04/12 10:17:24 CLSRSC-343: Successfully started Oracle clusterware stack
 
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2013/04/12 10:20:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

On the last node:

  1. OLR will be configured accordingly.
  2. The cluster auto start entries in the /etc/inittab file on the OS will be modified.
  3. The existing cluster stack will be shut down.
  4. The Oracle 12c cluster stack on the local node will be started up.
  5. The OCR will be configured with the necessary information.
  6. The actual OCR, CSS, ASM, and CRS upgrades will be performed.

The following output is expected when the rootupgrade.sh script is executed on the last node in the cluster:

[root@rac2 ∼]# /u01/app/12.1.0/grid_1/rootupgrade.sh
Performing root user operation for Oracle 12c
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid_1
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_1/crs/install/crsconfig_params
2013/04/12 10:22:05 CLSRSC-363: User ignored prerequisites during installation
 
OLR initialization - successful
2013/04/12 10:25:18 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
 
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
2013/04/12 10:29:28 CLSRSC-343: Successfully started Oracle clusterware stack
 
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.0.2
2013/04/12 10:35:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Post-Upgrade Tasks

After a Clusterware upgrade, quickly go through the following post-upgrade checklist to confirm success:

  1. Verify Cluster active/software versions.
  2. Ensure that the Clusterware daemons are started from the new Oracle 12c home, as shown in Figure 13-11.

    9781430250449_Fig13-11.jpg

    Figure 13-11. 12c cluster output screenshot

  3. Verify the status of all cluster-critical resources using the following example:
    $ ./crsctl stat res -t -init
    ------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    ------------------------------------------------------------------------------
    Cluster Resources
    ------------------------------------------------------------------------------
    ora.asm
          1        ONLINE  ONLINE       rac1                     Started
    ora.cluster_interconnect.haip
          1        ONLINE  ONLINE       rac1
    ora.crsd
          1        ONLINE  ONLINE       rac1
    ora.cssd
          1        ONLINE  ONLINE       rac1
    ora.cssdmonitor
          1        ONLINE  ONLINE       rac1
    ora.ctssd
          1        ONLINE  ONLINE       rac1                     ACTIVE:0
    ora.diskmon
          1        OFFLINE OFFLINE
    ora.drivers.acfs
          1        ONLINE  ONLINE       rac1
    ora.evmd
          1        ONLINE  ONLINE       rac1
    ora.gipcd
          1        ONLINE  ONLINE       rac1
    ora.gpnpd
          1        ONLINE  ONLINE       rac1
    ora.mdnsd
          1        ONLINE  ONLINE       rac1
    ora.storage
          1        ONLINE  ONLINE       rac1
  4. Modify all existing scripts to reflect the new Oracle 12c GI home settings; for example, the .bash_profile and any other scripts you maintain at your site.
  5. Perform the filesystem backup of the new GI home.
  6. Perform the manual physical backup of the OCR.
  7. Optionally, deconfigure and clean up the previous Clusterware home (see the example following this list).
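For step 7, one common approach is to detach the old 11.2 Grid home from the central inventory before removing its filesystem. The following is a sketch, assuming the old home path from our configuration; run it as the GI software owner on each node:

$ /u01/app/11203/grid/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/u01/app/11203/grid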

Clusterware Downgrade

Whenever you prepare to introduce a new change to the existing environment, it is always a recommended and safe approach to have a complete back-out plan to recover from any disasters that may occur after the change. If you work for a reputable organization, the management naturally expects a back-out plan to accompany all changes to the current environment. The back-out plan is a procedure to roll back the applied changes if the change didn’t go well or created new problems.

The objective of this section is to summarize all the necessary steps required to downgrade a successfully upgraded, or a failed, Clusterware/ASM installation to an earlier version.

Oracle’s out-of-place upgrade option makes the downgrade procedure faster by keeping the old and new versions of Oracle Clusterware software in different homes/locations. This makes the downgrade procedure a very light and quick process, in contrast with pre-11gR2 versions.

Initiating Downgrade Procedure

In order to kick-start the downgrade procedure, perform the following steps:

From the new 12c Grid Home, run the following command on all nodes of the cluster sequentially, preferably starting from the first node to the last node:

$GRID_HOME/crs/install/rootcrs.pl -downgrade

Use the [-force] option in case you encounter any issues with the preceding command due to a failed upgrade.

The command will stop all currently running resources on the server and shut down the 12c cluster stack subsequently.

After successfully executing the preceding command across all nodes in the cluster, execute the following command on the first node:

$GRID_HOME/crs/install/rootcrs.pl -downgrade -lastnode

This process downgrades the OCR to the previous release.

As GI installation owner, run the following commands from the 12c $GRID_HOME/oui/bin location on every successfully upgraded node to complete the downgrade procedure:

./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid
 
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/11.2.0/grid

image Note   You need to provide the 12c Grid Home location in the first command, and in the second command, you need to provide the previous release Grid Home location.

For Oracle version 11gR2 or later, after successfully executing the preceding steps, as the root user, bring up the previous release cluster stack using the following command on each node in the cluster:

$ /u01/app/11.2.0/grid/bin/crsctl start crs

For pre–Oracle 11gR2 versions, as root user, execute the root.sh script manually from the earlier Cluster release on each node in sequence to bring up the previous cluster stack.

Oracle 12c GI supports downgrading to Oracle 11gR2 and Oracle 11gR1.

image Note   Keep in mind that any pre-/post-upgrade configuration modifications performed will be removed and unrecoverable after the downgrade.

Forcing Upgrade—When Some Nodes Are Inaccessible

Before Oracle 12c, if some nodes were not accessible or became unreachable before or during the upgrade, the Clusterware upgrade could not be completed due to the missing/inactive nodes in the cluster. In contrast, with the introduction of the -force option for the rootupgrade.sh script, you can now perform a forced upgrade despite missing/inactive nodes in the cluster. Therefore, when you query the CRS active version to confirm the upgrade, you will see that it is set to the new cluster version, in contrast to the pre–Oracle 12c releases.

Here is an example to run the rootupgrade.sh script with the –force option:

$ /u01/app/12.1.0/grid/rootupgrade.sh -force

When the inactive nodes become accessible after a forced upgrade, you can either join them to the upgraded cluster or delete them from the cluster. Execute the following example on the first node of the cluster to join a node that was not accessible during the course of the upgrade:

$ /u01/app/12.1.0/grid/crs/install/rootcrs.pl -join -existingNode rac1 -upgrade_node rac2

In the preceding example, existingNode indicates the node on which the upgrade was successful, and upgrade_node is the node that was not reachable during the course of the upgrade and is to be joined to the cluster after the forced upgrade.

Installing Oracle 12c RDBMS Software

You will have to install the new Oracle 12c RDBMS binaries into a new RDBMS home right after your Clusterware upgrade to complete the overall environment upgrade procedure.

As stated earlier, from 11gR2 onward all upgrades are out-of-place upgrades; you will have to create a new location for the Oracle 12c binaries and install the software there. This procedure will not have any impact on the active databases on the respective nodes as long as you select the “Install database software only” option. In other words, we advise you not to select the “Upgrade an existing database” option on the Select Installation Option screen, as shown in Figure 13-12.

9781430250449_Fig13-12.jpg

Figure 13-12. Database Select Installation Option screenshot

Follow the typical RDBMS software installation procedure for a RAC environment and complete the software installation, keeping in mind the preceding recommendations.

Once you successfully complete the software installation, you may then proceed with the database upgrade process as per your schedule and convenience. Manual and Database Upgrade Assistant (DBUA) upgrade methods will be demonstrated in more detail in the subsequent section.

Database Upgrade

Although there are various methods, processes, and tools to perform an upgrade of an existing database to the new Oracle release, this section mainly focuses on the manual and DBUA upgrade methods. Just before the upgrade, it is important for you as a DBA to become familiar with those key enhancements brought into the upgrade procedure of the new release for a successful database upgrade. The new enhancements with regard to the database upgrade include the following:

Pre-upgrade tool: The new preupgrd.sql script in Oracle Database 12c replaces the pre-upgrade tool of the previous Oracle release, the utlu112i.sql script. It provides recommendations to fix any pre– and post–database upgrade issues that might emerge and disrupt the upgrade process. Therefore, before the database upgrade, you must execute the script from the Oracle 12c home against the source database to gather the pre– and post–database upgrade recommendations. The DBUA tool runs this script by default.

Parallel Upgrade Utility (catctl.pl): Unlike in earlier releases of Oracle, with the introduction of the new Parallel Upgrade Utility in Oracle 12c you can now make full use of the system’s CPU capacity by running the database upgrade script, catupgrd.sql, in parallel. This greatly improves the database upgrade runtime. The DBUA tool takes full advantage of this utility by running it by default, as the utility is integrated with the tool. The following example summarizes the use of the utility in a manual database upgrade:

$ORACLE_HOME/perl/bin/perl catctl.pl -n 2 -l /tmp catupgrd.sql

The -n parameter specifies the degree of parallelism; the value must be between 0 and 8.
 
The -l parameter specifies the location where the logs will be spooled.

Restart DBUA database upgrades: Unlike in earlier Oracle releases, the DBUA tool in Oracle 12c has the ability to resume a failed database upgrade from the point of failure without starting over. A guaranteed restore point, created before the upgrade begins, helps in recovering from failed database upgrades initiated by the DBUA tool.

Database upgrade compatibility matrix: Before you start upgrading an existing database to the new release, it is essential to determine whether the new release supports a direct upgrade from the existing database version. Table 13-2 outlines the database upgrade compatibility matrix of the Oracle releases that support direct and indirect database upgrades to Oracle 12c.

Table 13-2. Database Upgrade Compatibility Matrix

Direct Database Upgrade      Indirect Database Upgrade
10.2.0.5                     9.2.0.8 or earlier
11.1.0.7                     10.1.0.5
11.2.0.2 or later            10.2.0.2, 10.2.0.3, 10.2.0.4
                             11.1.0.6
                             11.2.0.1

For example, if the existing Database version is 11.2.0.1, you need to upgrade it to 11.2.0.2 or higher first and then proceed with 12c upgrade.

Deploying Manual Database Upgrade

In this section, you will learn all the mandatory steps that are required to perform a manual database upgrade to Oracle 12c. The following demonstrates the procedure to upgrade an Oracle 11.2.0.3 database, named PRDDB, to Oracle 12c:

Back up the source database, if no recent backup exists.

Connect to the source database as SYSDBA and execute the pre-upgrade script from the Oracle 12c home. This is a mandatory step when you perform a manual database upgrade; if the script is not run, the post-upgrade utlrp.sql step will fail. Figure 13-13 shows how to run the pre-upgrade script on the source database:

export ORACLE_SID=PRDDB1
$ sqlplus / as sysdba
SQL> @/u01/app/oracle/product/12.1.0/db_1/rdbms/admin/preupgrd.sql
 
You are likely to see the following output on running the pre-upgrade tool:

9781430250449_Fig13-13.jpg

Figure 13-13. Pre-upgrade screenshot

Unlike in earlier Oracle database releases, the pre-upgrade output is written to a log file. You will have to review the preupgrade.log file generated by the pre-upgrade tool for any warnings and recommendations. Similarly, review and apply the recommendations stated in the preupgrade_fixups.sql and postupgrade_fixups.sql scripts to fix any pre– and post–database upgrade issues flagged by the pre-upgrade tool on the source database; for example, collecting dictionary and fixed-objects statistics, database initialization parameter adjustments, invalid object details, tablespace size adjustments, etc.

The log file and script files will typically be created under the $ORACLE_BASE/cfgtoollogs/PRDDB/preupgrade location. If no $ORACLE_BASE variable is set on the server, the files will be created under the $ORACLE_HOME/cfgtoollogs/PRDDB/preupgrade location. The DBUA tool runs the pre-upgrade tool by default. The following listing shows the files created in that location:

-rw-r--r-- 1 oracle dba 1444 Apr 13 09:34 postupgrade_fixups.sql
-rw-r--r-- 1 oracle dba 2491 Apr 13 09:34 preupgrade_fixups.sql
-rw-r--r-- 1 oracle dba 6397 Apr 13 09:34 preupgrade.log

Another important task in a manual database upgrade is to copy the source database password file and init/spfile from the Oracle 11g home to the Oracle 12c database home. Also, ensure that the local/remote listener TNS configuration details are added to the new Oracle 12c tnsnames.ora file to avoid database startup failures when the database is started from the Oracle 12c home. For a RAC database, you must do this for all instances. The following example demonstrates the copy command on Linux:

$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwPRDDB1 /u01/app/oracle/product/12.1.0/db_1/dbs
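If the source database uses a text initialization file rather than an spfile stored in ASM, copy it across as well; a hypothetical example, assuming the default init file name for the first instance:

$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/initPRDDB1.ora /u01/app/oracle/product/12.1.0/db_1/dbs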

If required, run the preupgrade_fixups.sql script. Subsequently, shut down and mount the database and disable archivelog mode, to avoid excessive archive generation during the upgrade and to expedite the upgrade process:

srvctl stop database -d PRDDB
 
export ORACLE_SID=PRDDB1
 
sqlplus / as sysdba
 
SQL> STARTUP MOUNT
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;

Set $ORACLE_HOME to the Oracle Database 12c home on the server.

export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
cd $ORACLE_HOME

Connect as SYSDBA and start the database in upgrade mode:

SQL> CONNECT / AS SYSDBA
SQL> STARTUP UPGRADE

Exit from the SQL prompt and subsequently initiate the upgrade process in parallel mode, using the new Parallel Upgrade Utility, as explained here:

$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 2 -l /tmp catupgrd.sql

If you prefer to perform an upgrade in a serial mode, like in previous releases, do the following at the SQL prompt:

SQL>@?/rdbms/admin/catupgrd.sql parallel=no

The database will be shut down automatically once the preceding command successfully completes. You therefore have to perform the following steps to re-enable CLUSTER_DATABASE, set the database COMPATIBLE parameter to 12c, re-enable archivelog mode, etc.:

SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET COMPATIBLE='12.1.0' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
SQL> execute dbms_stats.gather_fixed_objects_stats;
SQL> @/u01/app/oracle/product/12.1.0/db_1/rdbms/admin/utlrp.sql
SQL> @/u01/app/oracle/product/12.1.0/db_1/rdbms/admin/utlobj.sql
SQL> @/u01/app/oracle/product/12.1.0/db_1/rdbms/admin/utlu121s.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> EXIT
 
utlrp.sql          recompiles all invalid objects in the database
utlobj.sql         verifies the validity of all packages/classes in the database
utlu121s.sql       displays the database upgrade summary

After successfully completing the database upgrade process, you will have to modify the cluster database configuration details in the OCR using the following command:

srvctl upgrade database -d PRDDB -o /u01/app/oracle/product/12.1.0/db_1

At this point in time, your cluster database is successfully upgraded to 12c. Now, open the database using the following cluster command, which in turn will start all RAC database instances:

srvctl start database -d PRDDB

image Note   Executing utlrp.sql is mandatory to avoid performance degradation when accessing data dictionary objects after the database upgrade completes. If the script is not run right after the upgrade, the invalid objects will instead be recompiled on first access, with a significant performance impact. Therefore, it is advised to run the script immediately after the upgrade.
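To confirm that the recompilation left nothing behind, you can count the remaining invalid objects; a simple check from SQL*Plus:

SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';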

Post-Database Upgrade Steps

Once the database is successfully upgraded to Oracle 12c, go through the following recommended post–database upgrade steps:

  • Run the postupgrade_fixups.sql script, as explained earlier
  • Back up the database
  • Migrate the source database listener to Oracle 12c by dropping and re-creating the existing listener
  • Adjust any RMAN backup scripts, crontab entries, or other scripts, environment, and profile variables, etc., to reflect the new Oracle home settings
  • Ensure that the database points to the new Oracle home in the oratab file
  • Set the ATTRIBUTE 'compatible.asm'='12.1' on the ASM diskgroups to make use of new ASM features (see the example following this list)
  • Verify the database configuration details with the following:
    $ ./srvctl config database -d PRDDB
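The compatible.asm adjustment mentioned in the list can be made from the ASM instance; a minimal sketch, assuming a diskgroup named DATA. Note that raising a diskgroup compatibility attribute cannot be reversed:

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '12.1';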

Database Upgrade Using the DBUA

After learning the manual database upgrade procedure, it’s time to shift focus to performing a database upgrade using the DBUA tool in Oracle 12c. In addition to portraying the DBUA database upgrade procedure with a set of screenshots, this section also explains why DBUA is the preferred method for a RAC database upgrade.

The step-by-step procedure that outlines database upgrade with DBUA tool is as follows:

Launch ./dbua from the Oracle 12c home.

Select the Upgrade Oracle database option on the Select Operation page, as shown in Figure 13-14.

9781430250449_Fig13-14.jpg

Figure 13-14. Select Operation screenshot

Select the database from the given database list on the Select Database page, as shown in Figure 13-15.

9781430250449_Fig13-15.jpg

Figure 13-15. Select Database screenshot

Prerequisite checks are carried out and the results displayed. You have the option to Ignore, Fix, or Revalidate any prerequisite failures on this page. You can also rerun the prerequisite checks by clicking the Check Again button at the top right (Figure 13-16).

9781430250449_Fig13-16.jpg

Figure 13-16. Prerequisite Checks screenshot

On the Upgrade Options screen, set the degree of parallelism for performing the upgrade and for recompiling invalid objects in parallel. Optionally, you can also run any custom pre–/post–database upgrade scripts.

Specify the configuration options for EM Cloud Control on the Management Options screen.

The new Move Database Files screen allows you to move the datafiles and the FRA location between filesystem and ASM storage.

Also, the Network Configuration screen permits you to migrate an existing listener to the new Oracle release.

Optionally, the Recovery Options screen has the flexibility to define a recovery point to recover the database to the same point in case of any failures during the database upgrade process.

Once you input all the required information through the interactive screens, the Database Upgrade Information screen provides all details that you entered for this process.

Click on the Finish button to kick-start the upgrade action.

You will see the Upgrade Screen after the database upgrade is complete.

After going through the preceding steps, you will have a successful database upgrade for an Oracle 11gR2 (11.2.0.3) to Oracle 12c Database.

DBUA Advantages

This section quickly takes you through some of the advantages of DBUA over other database upgrade options:

  • When a RAC database is upgraded with DBUA, the tool automatically takes the appropriate actions, such as disabling and re-enabling the CLUSTER_DATABASE parameter.
  • There is the option to disable the archivelog mode during the course of upgrade, to avoid excessive archive log generation, which ultimately expedites the upgrade.
  • DBUA makes use of the new tools by default: initiating the catupgrd process in parallel, running the pre-upgrade tool, etc.
  • Recommendations are provided in the form of the preupgrade_fixups.sql and postupgrade_fixups.sql scripts, which also fix many of the reported issues.
  • An option to recompile all objects post–database upgrade and gather pre-/post-database statistics is provided.
  • The existing database listener is upgraded to the new Oracle home.
  • Most importantly, the tool allows you to define a recovery point to roll back the upgrade in case of failure.
  • You will also have an option to migrate the database from one Oracle 12c home to another 12c home.

Database Downgrade

Sometimes after a successful database upgrade, you might have to downgrade the database to its previous release for various reasons; hence, you should have a back-out action plan in mind whenever you perform an upgrade. The following database downgrade procedure lets you downgrade a database from 12c to a previous Oracle Database release:

Perform a full backup of the current database.

Disable the database vault, if it exists.

Ensure that the database COMPATIBLE initialization parameter is still set to the value of the release you are downgrading to (refer to Table 13-2); once COMPATIBLE has been raised to 12.1, a downgrade is no longer possible.

Drop the sysman user if OEM is configured. You will have to reconfigure the OEM after you finish the downgrade process.
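A minimal sketch of dropping the repository owner, assuming the Database Control SYSMAN schema is present in the source database:

SQL> DROP USER sysman CASCADE;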

Disable the CLUSTER_DATABASE initialization parameter, and stop the database as follows:

export ORACLE_SID=PRDDB1
$ sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;
 
srvctl stop database -d PRDDB

Start up the database in downgrade mode and then execute the downgrade script, as follows:

SQL> STARTUP DOWNGRADE;
SQL> SPOOL /tmp/dbdowngrade.log
SQL> @?/rdbms/admin/catdwgrd.sql

The catdwgrd.sql script under the Oracle 12c $ORACLE_HOME/rdbms/admin directory downgrades the 12c database components to the previous release. If you encounter any ORA- errors during the course of the downgrade, fix the issue and rerun the downgrade script.
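Because the downgrade output is spooled (to /tmp/dbdowngrade.log in the preceding example), a quick way to scan it for errors after the run:

$ grep -i 'ORA-' /tmp/dbdowngrade.log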

After completing the script, shut down the database and exit from the SQL prompt.

Set the ORACLE_HOME and PATH OS environment variables to the previous Oracle version.

Start up the database in UPGRADE mode to reload the previous version components by running the catrelod.sql script from the Oracle 11g /rdbms/admin home, as follows:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
sqlplus / as sysdba
 
SQL> STARTUP UPGRADE
SQL> spool /tmp/dbdowngrade.log
SQL> @?/rdbms/admin/catrelod.sql

Enable the CLUSTER_DATABASE parameter, and shut down and start up the database.

SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP

Recompile all invalid objects by running the utlrp.sql script.

SQL> @?/rdbms/admin/utlrp.sql

Downgrade the database version in the OCR using the srvctl downgrade command from the Oracle 12c database home.

srvctl downgrade database -d PRDDB -o /u01/app/oracle/product/11.2.0/db_1 -to_version 11.2.0.3.0

Summary

This chapter explained the core advantages of upgrading and why an organization needs to upgrade its current (stable) environment to new releases. Before demonstrating a two-node cluster and RAC database upgrade with several hands-on examples and screenshots, it covered all the important prerequisites and precautions required for a smooth and successful upgrade. You also learned a step-by-step procedure for downgrading a successfully upgraded, failed, or partially upgraded Clusterware and database to the previous version.
