Manual Duplication Techniques
This chapter will walk you through manual methods for cloning databases and tablespaces. If you’re already familiar with these techniques, then feel free to move on to the next chapters in this book, which illustrate how to employ the RMAN duplication process. Otherwise, there are several manual replication scenarios covered in this chapter:
Knowledge of these methods will help you understand when it’s appropriate to use a technique and its advantages and disadvantages. This information will help you better understand the other chapters in this book that contrast these techniques with the RMAN DUPLICATE functionality (covered in Chapters 3, 4, and 5). First up is cloning a database using a cold backup.
Cloning from Cold Backup
If you worked with Oracle twenty or so years ago, you probably used a cold backup to move a database from one server to another. Even though this is an old technique, I still find myself occasionally using this method for cloning a database. For example, recently my supervisor asked me to copy a database (about 20 gigabytes in size) from one server to another. In this scenario, the destination server directory structure was different from the source server directory structure, and the destination database needed to have a different name than the source database. In this situation I used a cold backup to move the database for the following reasons:
For this example to work you need the same version of Oracle installed on both the source and destination servers. The scenario is depicted in Figure 2-1.
Figure 2-1. Cloning with a cold backup
Next are the detailed descriptions of each of the steps shown in Figure 2-1.
SQL> select name from v$datafile;
Here’s some output for the database used in this example:
NAME
--------------------------------
/u01/dbfile/TRG/repdata.dbf
/u01/dbfile/TRG/repidx.dbf
/u01/dbfile/TRG/sysaux01.dbf
/u01/dbfile/TRG/system01.dbf
/u01/dbfile/TRG/undotbs01.dbf
/u01/dbfile/TRG/users01.dbf
SQL> alter database backup controlfile to trace as '/tmp/dk.sql' resetlogs;
$ scp /tmp/dk.sql oracle@shrek2:/tmp
$ sqlplus / as sysdba
SQL> shutdown immediate;
$ mkdir /u01/dbfile/DUP
$ mkdir /u01/oraredo/DUP
$ scp oracle@shrek:/u01/dbfile/TRG/*.dbf /u01/dbfile/DUP
Notice there’s no need to copy the control files or the online redo logs in this scenario. Because the destination directory structure and database name will differ from those of the source, the control files and online redo logs must be recreated. If the directory structure and the database name were the same on both the source and the destination, the procedure would be as simple as shutting down the source database; copying all control files, data files, online redo logs, and the initialization file to the destination server; and then starting the database.
Note After the copy is complete you can restart the source database.
SQL> create pfile from spfile;
This command will place a text-based initialization file named init<SID>.ora in the ORACLE_HOME/dbs directory. If you don’t want the text-based file placed in that directory, you can override the default behavior as follows:
SQL> create pfile='/tmp/initTRG.ora' from spfile;
This example uses the Linux/UNIX scp command (initiated from the destination server). Modify this appropriately for your environment. Assuming the file is in the default location of ORACLE_HOME/dbs:
$ scp oracle@shrek:$ORACLE_HOME/dbs/initTRG.ora $ORACLE_HOME/dbs/initDUP.ora
$ vi $ORACLE_HOME/dbs/initDUP.ora
Here is the content of the initDUP.ora file after the modifications:
db_name='DUP'
control_files='/u01/dbfile/DUP/control01.ctl','/u01/dbfile/DUP/control02.ctl'
db_block_size=8192
fast_start_mttr_target=500
job_queue_processes=10
memory_max_target=500M
memory_target=500M
open_cursors=100
os_authent_prefix=''
processes=100
remote_login_passwordfile='EXCLUSIVE'
resource_limit=true
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
workarea_size_policy='AUTO'
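If the TRG string appears in the source initTRG.ora only in the db_name and file path parameters, the manual edits above can be sketched as a single global substitution. This is only a rough sketch (demonstrated here on two sample lines, since the full source file isn't shown); always review the resulting file, as parameters such as the memory targets may need independent adjustment:

```shell
# Demonstrated on sample input; in practice run something like:
#   sed 's/TRG/DUP/g' initTRG.ora > initDUP.ora
printf "db_name='TRG'\ncontrol_files='/u01/dbfile/TRG/control01.ctl'\n" |
  sed 's/TRG/DUP/g'
```

This prints db_name='DUP' and the control file path under /u01/dbfile/DUP, matching the edited file shown above.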
$ vi /tmp/dk.sql
Change the first line to include the SET keyword and change the database name and directory structures to reflect the destination environment. Here are the contents of dk.sql after the modifications:
CREATE CONTROLFILE REUSE SET DATABASE "DUP" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 4
MAXDATAFILES 1024
MAXINSTANCES 1
MAXLOGHISTORY 876
LOGFILE
GROUP 1 '/u01/oraredo/DUP/redo01a.rdo' SIZE 50M BLOCKSIZE 512,
GROUP 2 '/u01/oraredo/DUP/redo02a.rdo' SIZE 50M BLOCKSIZE 512
DATAFILE
'/u01/dbfile/DUP/system01.dbf',
'/u01/dbfile/DUP/sysaux01.dbf',
'/u01/dbfile/DUP/undotbs01.dbf',
'/u01/dbfile/DUP/users01.dbf',
'/u01/dbfile/DUP/repdata.dbf',
'/u01/dbfile/DUP/repidx.dbf'
CHARACTER SET AL32UTF8;
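The edits to the trace script can likewise be scripted rather than done by hand in vi. Here is a hedged sketch, demonstrated on the script's first line only; it assumes (as in this example) that the TRG name appears solely in the quoted database name and in the /TRG/ path components:

```shell
# Demonstrated on the first line of the trace script; in practice run the
# same sed against /tmp/dk.sql and redirect the output to a new file.
printf 'CREATE CONTROLFILE REUSE DATABASE "TRG" RESETLOGS ARCHIVELOG\n' |
  sed -e 's/REUSE DATABASE/REUSE SET DATABASE/' \
      -e 's/"TRG"/"DUP"/' \
      -e 's|/u01/dbfile/TRG/|/u01/dbfile/DUP/|g' \
      -e 's|/u01/oraredo/TRG/|/u01/oraredo/DUP/|g'
```

The first expression inserts the required SET keyword; the rest rename the database and repoint the file paths.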
$ export ORACLE_SID=DUP
$ sqlplus / as sysdba
SQL> startup nomount;
SQL> @/tmp/dk.sql
You should see this message if successful:
Control file created.
At this point you have new control files and the database is in mount mode.
SQL> alter database open resetlogs;
Database altered.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/dbfile/DUP/temp01.dbf'
SIZE 524288000 REUSE AUTOEXTEND OFF;
Also keep in mind that other steps may be required for your environment depending on your standards. For example, you might want to ensure the database SID is listed in the oratab file, or you might require the use of an SPFILE, enabling a password file, changing passwords, enabling archiving, taking a backup, adding entries into Oracle Net files, and so on.
The advantages of the cold backup approach to cloning a database are:
The downside to this approach is that it requires you to shut down the source database while it is being copied. Thus, if you work in an environment that can’t afford any downtime with the source database, then this approach isn’t appropriate.
Copying from an RMAN Backup
When you think about architecting your backup strategy, as part of the process you must also consider how you’re going to restore and recover. Your backups are only as good as the last time you tested a restore and recovery. A backup can be rendered worthless without a good restore and recovery strategy. The last thing you want to happen is to experience a media failure, go to restore your database, and then find out you’re missing a file, you don’t have enough space to restore, something is corrupt, and so on.
One of the best ways to test an RMAN backup is to restore and recover it to a different database server. This will exercise all your backup, restore, and recovery DBA skills. If you can restore and recover an RMAN backup on a different server, it will give you confidence when a real disaster hits. Moving a database from one server to another using an RMAN backup requires an expert-level understanding of the Oracle architecture and how backup and recovery works. The next example will do just that; it uses an RMAN backup to restore and recover a database on a different server. This scenario is depicted in Figure 2-2.
Figure 2-2. Manually cloning a database using an RMAN backup
Notice in Figure 2-2 that only step 1 occurs on the source database server. All remaining steps are performed on the destination server. For this example the source database is named TRG, and the destination database is named DUP. Also notice that the originating source server and destination server have different directory names. You’ll have to adjust these directory names to reflect the directory structures on your database servers. Let’s get started with step 1:
$ rman target /
RMAN> configure controlfile autobackup on;
Also include the archive redo logs as part of the backup, as shown:
RMAN> backup database plus archivelog;
Verify that a backup of the control file exists:
RMAN> list backup of controlfile;
Here’s some sample output:
Piece Name: /u01/rman/TRG/TRGctl_c-1251088236-20141228-00.bk
You’ll need to reference the prior backup piece file when you restore the control file on the destination server (step 8). Also notice for this example that the backup pieces on the source server are in the /u01/rman/TRG directory.
$ mkdir -p /u01/rman/DUP
$ mkdir -p /u01/dbfile/DUP
$ mkdir -p /u01/oraredo/DUP
$ mkdir -p /u01/arch/DUP
$ scp oracle@shrek:/u01/rman/TRG/*.* /u01/rman/DUP
Note If the RMAN backups are on tape instead of on disk, then the same media manager software must be installed/configured on the destination server. Also, that server must have direct access to the RMAN backups on tape.
$ export ORACLE_SID=TRG
$ echo $ORACLE_SID
TRG
$ echo $ORACLE_HOME
/orahome/app/oracle/product/12.1.0.2/db_1
db_name='TRG'
control_files='/u01/dbfile/DUP/control01.ctl','/u01/dbfile/DUP/control02.ctl'
log_archive_dest_1='LOCATION=/u01/arch/DUP'
log_archive_format='DUP%t_%s_%r.arc'
db_block_size=8192
fast_start_mttr_target=500
job_queue_processes=10
memory_max_target=800M
memory_target=800M
open_cursors=100
processes=100
remote_login_passwordfile='EXCLUSIVE'
resource_limit=true
standby_file_management='auto'
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
workarea_size_policy='AUTO'
$ rman target /
RMAN> startup nomount;
RMAN> restore controlfile from
'/u01/rman/DUP/TRGctl_c-1251088236-20141228-00.bk';
The control file will be restored to all locations specified by the CONTROL_FILES initialization parameter in the destination init.ora file. Here is some sample output from the restore operation:
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output file name=/u01/dbfile/DUP/control01.ctl
output file name=/u01/dbfile/DUP/control02.ctl
You may see an error like this:
RMAN-06172: no AUTOBACKUP found or specified handle ...
In this situation, ensure that the path and backup piece names are correctly specified.
RMAN> alter database mount;
At this point, your control files exist and have been opened, but none of the data files or online redo logs exist yet.
RMAN> crosscheck backup; # Crosscheck backups
RMAN> crosscheck copy; # Crosscheck image copies and archive logs
You’ll probably see output indicating that RMAN can’t validate that archive redo logs exist:
archived log file name=/u01/arch/TRG/TRG1_16_869840124.arc
RECID=765 STAMP=869842623
That’s the expected behavior because those archive redo logs do not exist on the destination server.
Next use the CATALOG command to make the control file aware of the location and names of the backup pieces that were copied to the destination server.
Note Don’t confuse the CATALOG command with the recovery catalog schema. The CATALOG command adds RMAN metadata to the control file, whereas the recovery catalog schema is a user, generally created in a separate database, which can be used to store RMAN metadata.
In this example, any RMAN files that are in the /u01/rman/DUP directory will be cataloged in the control file:
RMAN> catalog start with '/u01/rman/DUP';
Here is some sample output:
List of Files Unknown to the Database
=====================================
File Name: /u01/rman/DUP/TRGctl_c-1251088236-20141228-00.bk
File Name: /u01/rman/DUP/TRGrman1_b7pr9m9q_1_1.bk
File Name: /u01/rman/DUP/TRGrman2_b6pr9m82_1_1.bk
File Name: /u01/rman/DUP/TRGrman2_b4pr9m6k_1_1.bk
File Name: /u01/rman/DUP/TRGrman1_b2pr9m4c_1_1.bk
File Name: /u01/rman/DUP/TRGrman2_b3pr9m4c_1_1.bk
File Name: /u01/rman/DUP/TRGrman1_b5pr9m82_1_1.bk
Do you really want to catalog the above files (enter YES or NO)?
Now, type YES (if everything looks okay). You should then be able to use the RMAN LIST BACKUP command to view the newly cataloged backup pieces:
RMAN> list backup;
You should see output indicating that RMAN is aware of the backups that were copied to the destination server. Here’s a small snippet of the output:
BP Key: 280 Status: AVAILABLE Compressed: NO Tag: TAG20150108T203552
Piece Name: /u01/rman/DUP/TRGrman2_jkps7th9_1_1.bk
RMAN> restore database;
However, when restoring data files to locations that are different from the original directories, you’ll have to use the SET NEWNAME command. Create a file containing an RMAN run{} block with the appropriate SET NEWNAME and RESTORE commands. I like to use a SQL script that generates these RMAN commands as a starting point. Here is a sample script:
set head off feed off verify off echo off pages 0 trimspool on
set lines 132 pagesize 0
spo newname.sql
--
select 'run{' from dual;
--
select
'set newname for datafile ' || file# || ' to ' || '''' || name || '''' || ';'
from v$datafile;
--
select
'restore database;' || chr(10) ||
'switch datafile all;' || chr(10) ||
'}'
from dual;
--
spo off;
Run the prior script from SQL*Plus as SYS. In this example, the prior code is placed in a file named gen.sql and executed as follows:
SQL> @gen.sql
After running the script, these are the contents of the newname.sql script that was generated:
run{
set newname for datafile 1 to '/u01/dbfile/TRG/system01.dbf';
set newname for datafile 2 to '/u01/dbfile/TRG/sysaux01.dbf';
set newname for datafile 3 to '/u01/dbfile/TRG/undotbs01.dbf';
set newname for datafile 4 to '/u01/dbfile/TRG/users01.dbf';
set newname for datafile 5 to '/u01/dbfile/TRG/repdata.dbf';
set newname for datafile 6 to '/u01/dbfile/TRG/repidx.dbf';
restore database;
switch datafile all;
}
Then, modify the contents of the newname.sql script to reflect the directories on the destination database server. Here is what the final newname.sql script looks like for this example:
run{
set newname for datafile 1 to '/u01/dbfile/DUP/system01.dbf';
set newname for datafile 2 to '/u01/dbfile/DUP/sysaux01.dbf';
set newname for datafile 3 to '/u01/dbfile/DUP/undotbs01.dbf';
set newname for datafile 4 to '/u01/dbfile/DUP/users01.dbf';
set newname for datafile 5 to '/u01/dbfile/DUP/repdata.dbf';
set newname for datafile 6 to '/u01/dbfile/DUP/repidx.dbf';
restore database;
switch datafile all;
}
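When the only difference between the generated script and the final one is the directory name, this edit can also be done with a one-line substitution instead of by hand. A sketch, assuming (as in this example) that only the /TRG/ path component changes:

```shell
# Demonstrated on one generated line; in practice apply the same sed to
# newname.sql, write the result to a new file, and review it.
printf "set newname for datafile 1 to '/u01/dbfile/TRG/system01.dbf';\n" |
  sed 's|/u01/dbfile/TRG/|/u01/dbfile/DUP/|'
```

This leaves the SET NEWNAME syntax intact and rewrites only the directory portion of each file name.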
Now, connect to RMAN and run the prior script to restore the data files to the new locations:
$ rman target /
RMAN> @newname.sql
Here’s a small sample of the output from the prior script:
executing command: SET NEWNAME
executing command: SET NEWNAME
...
channel ORA_DISK_1: restoring datafile 00001 to /u01/dbfile/DUP/system01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/dbfile/DUP/users01.dbf
...
input datafile copy RECID=16 STAMP=869854446 file
name=/u01/dbfile/DUP/repidx.dbf
RMAN> **end-of-file**
All the data files have been restored to the new database server. You can use the RMAN REPORT SCHEMA command to verify that the files have been restored and are in the correct locations:
RMAN> report schema;
Here is some sample output:
RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name TRG
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 500 SYSTEM *** /u01/dbfile/DUP/system01.dbf
2 500 SYSAUX *** /u01/dbfile/DUP/sysaux01.dbf
3 200 UNDOTBS1 *** /u01/dbfile/DUP/undotbs01.dbf
4 10 USERS *** /u01/dbfile/DUP/users01.dbf
5 10 REPDATA *** /u01/dbfile/DUP/repdata.dbf
6 10 REPIDX *** /u01/dbfile/DUP/repidx.dbf
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 500 TEMP 500 /u01/dbfile/TRG/temp01.dbf
From the prior output you can see that the database name and temporary tablespace data file still don’t reflect the destination database (DUP). These will be modified in subsequent steps.
RMAN> recover database;
RMAN will restore and apply as many archive redo logs as it has in the backup pieces; it may throw an error when it reaches an archive redo log that doesn’t exist. For example:
RMAN-06054: media recovery requesting unknown archived log for...
That error message is fine. The recovery process will restore and recover archive redo logs contained in the backups, which should be sufficient to open the database. The recovery process doesn’t know when to stop applying archive redo logs and therefore will continue to attempt to do so until it can’t find the next log. Having said that, now is a good time to verify that your data files are online and not in a fuzzy state:
SQL> select file#, status, fuzzy, error, checkpoint_change#,
to_char(checkpoint_time,'dd-mon-rrrr hh24:mi:ss') as checkpoint_time
from v$datafile_header;
Here is a small sample of the output:
FILE# STATUS FUZ ERROR CHECKPOINT_CHANGE# CHECKPOINT_TIME
---------- ------- --- ---------- ------------------ ---------------------
1 ONLINE NO 1.3790E+13 23-jan-2015 15:23:37
2 ONLINE NO 1.3790E+13 23-jan-2015 15:23:37
...
If you do have a file with a fuzzy status of YES, this indicates more redo logs need to be applied to the data file (normally this should not happen in this scenario).
set head off feed off verify off echo off pages 0 trimspool on
set lines 132 pagesize 0
spo renlog.sql
select
'alter database rename file ' || chr(10)
|| '''' || member || '''' || ' to ' || chr(10) || '''' || member || '''' ||';'
from v$logfile;
spo off;
set feed on verify on echo on
For this example, assume the prior code was placed in a file named genredo.sql; run it as follows:
SQL> @genredo.sql
Here is a snippet of the renlog.sql file that was generated:
alter database rename file
'/u01/oraredo/TRG/redo01a.rdo' to
'/u01/oraredo/TRG/redo01a.rdo';
alter database rename file
'/u01/oraredo/TRG/redo02a.rdo' to
'/u01/oraredo/TRG/redo02a.rdo';
The contents of renlog.sql need to be modified to reflect the directory structure on the destination server. Here is what renlog.sql looks like after being edited:
alter database rename file
'/u01/oraredo/TRG/redo01a.rdo' to
'/u01/oraredo/DUP/redo01a.rdo';
alter database rename file
'/u01/oraredo/TRG/redo02a.rdo' to
'/u01/oraredo/DUP/redo02a.rdo';
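Because only the destination path (the second file name, on the line ending with a semicolon) should change, a substitution restricted to those lines performs this edit safely. A sketch using the names from this example, demonstrated on sample text:

```shell
# Rename TRG to DUP only on lines ending in ';' (the rename target),
# leaving the source path on the preceding line untouched.
# In practice apply the same sed to renlog.sql and review the result.
printf "alter database rename file\n'/u01/oraredo/TRG/redo01a.rdo' to\n'/u01/oraredo/TRG/redo01a.rdo';\n" |
  sed '/;$/ s|/TRG/|/DUP/|'
```

The /;$/ address matches only the terminating line of each statement, so the "from" path keeps its original TRG directory, as required.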
Update the control file by running the renlog.sql script:
SQL> @renlog.sql
You can select from V$LOGFILE to verify that the online redo log names are correct:
SQL> select member from v$logfile;
Here is the output for this example:
/u01/oraredo/DUP/redo01a.rdo
/u01/oraredo/DUP/redo02a.rdo
SQL> alter database open resetlogs;
If successful, you should see this message:
Statement processed
Note Keep in mind that all the passwords from the newly restored copy are as they were in the source database. You may want to change the passwords in a replicated database, especially if it was copied from production.
SQL> alter database tempfile '/u01/dbfile/TRG/temp01.dbf' offline;
SQL> alter database tempfile '/u01/dbfile/TRG/temp01.dbf' drop;
Next, add a temporary tablespace file to the TEMP tablespace that matches the directory structure of the destination database server:
SQL> alter tablespace temp add tempfile '/u01/dbfile/DUP/temp01.dbf'
size 100m;
You can run the REPORT SCHEMA command to verify that all files are in the correct locations.
Tip If you don’t rename the database, be careful about connect and resync operations against the same recovery catalog used by the original/source database. This confuses the recovery catalog as to which is the real source database and may jeopardize your ability to restore and recover it.
Also keep in mind that other steps may be required for your environment depending on your standards. For example, you might want to ensure the database SID is listed in the oratab file, or you might require the use of an SPFILE, enabling a password file, changing passwords, taking a backup, adding entries into Oracle Net files, and so on.
This section shows you how to rename a database. If you’re working with a critical database, make sure you have a good backup of the data files, control files, and any relevant archive redo logs before you change the name.
Two different ways of renaming your database are described next. The first renaming method walks you through the manual steps. The second technique describes renaming a database with the Oracle NID utility. If you need to assign a new DBID to the renamed database, then you should use the NID utility.
Manual
In this example, the database is renamed from TRG to DUP. The steps for manually renaming your database are as follows:
SQL> alter database backup controlfile to trace as '/tmp/cf.sql' resetlogs;
SQL> shutdown immediate;
CREATE CONTROLFILE REUSE SET DATABASE "DUP" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 4
MAXDATAFILES 1024
MAXINSTANCES 1
MAXLOGHISTORY 876
LOGFILE
GROUP 1 '/u01/oraredo/DUP/redo01a.rdo' SIZE 50M BLOCKSIZE 512,
GROUP 2 '/u01/oraredo/DUP/redo02a.rdo' SIZE 50M BLOCKSIZE 512
DATAFILE
'/u01/dbfile/DUP/system01.dbf',
'/u01/dbfile/DUP/sysaux01.dbf',
'/u01/dbfile/DUP/undotbs01.dbf',
'/u01/dbfile/DUP/users01.dbf',
'/u01/dbfile/DUP/repdata.dbf',
'/u01/dbfile/DUP/repidx.dbf'
CHARACTER SET AL32UTF8;
If you don’t specify SET DATABASE in the top line of this script, when you run the script (as shown later in this example) you’ll receive an error such as this:
ORA-01161: database name ... in file header does not match...
$ cd $ORACLE_HOME/dbs
$ cp init<old_sid>.ora init<new_sid>.ora
In this example, the prior line of code looks like this:
$ cp initTRG.ora initDUP.ora
db_name='DUP'
SQL> shutdown immediate;
$ export ORACLE_SID=DUP
$ echo $ORACLE_SID
DUP
$ sqlplus / as sysdba
SQL> startup nomount;
SQL> @/tmp/cf.sql
If successful you should see:
Control file created.
Note In this example, the control files already exist in the location specified by the CONTROL_FILES initialization parameter; therefore, the REUSE parameter is used in the CREATE CONTROLFILE statement.
SQL> alter database open resetlogs;
If successful you should see:
Database altered.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/dbfile/DUP/temp01.dbf'
SIZE 104857600 REUSE AUTOEXTEND OFF;
You now have a database that is a copy of the original database. All the data files, control files, archive redo logs, and online redo logs are in the new locations, and the database has a new name. Now would be a good time to take a backup of the newly renamed database, recreate the password file (if using), and modify service name values in Oracle Net files.
If you need to assign the database a new DBID, then you can use a utility such as NID to accomplish this.
NID
This section describes using the NID utility to rename a database. This procedure will rename the database and assign it a new DBID.
$ sqlplus / as sysdba
SQL> startup mount;
$ nid target=sys/foo dbname=DUP_NEW
In the output you should see a line similar to this:
Change database ID and database name DUP to DUP_NEW? (Y/[N]) =>
Respond with Y if you wish to proceed. Here’s a sample of the output for this example:
Proceeding with operation
Changing database ID from 1251088236 to 1191846239
Changing database name from DUP to DUP_NEW
Control File /u01/dbfile/DUP/control01.ctl - modified
Control File /u01/dbfile/DUP/control02.ctl - modified
Datafile /u01/dbfile/DUP/system01.dbf - dbid changed, wrote new name
...
All previous backups and archived redo logs for this database are unusable.
Database has been shut down, open database with RESETLOGS option.
Successfully changed database name and ID.
DBNEWID - Completed successfully.
Next create an initialization file that corresponds to the new database name:
$ cd $ORACLE_HOME/dbs
$ cp initDUP.ora initDUP_NEW.ora
db_name=DUP_NEW
$ export ORACLE_SID=DUP_NEW
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database open resetlogs;
You can verify that the database has the new DBID assigned to it using the following:
SQL> select dbid from v$database;
DBID
----------
1191846239
Now would be a good time to take a backup of the newly renamed database, recreate the password file (if using), and modify service name values in Oracle Net files.
Tip See MOS note 863800.1 for more details regarding NID. You can use NID to change only the DBID (and not the database name), or to change only the database name (and not the DBID).
Replicating with Data Pump Across a Network Link
Data Pump is a powerful and flexible tool for moving data from one environment to another. This utility has significant advantages over other data replication methods, especially in the following situations:
A real-world example will help illustrate the utility. Suppose you have two database environments—a production database running on a Solaris box and a test database running on a Linux server. Your manager comes to you with these instructions:
First consider the steps required to transfer data from one database to another when using the old exp/imp utilities. The steps would look something like this:
You can perform those same steps using Data Pump. However, Data Pump provides a much more efficient and transparent method for executing those steps. If you have direct network connectivity between the source and destination database servers, you can import directly into the destination database over the network without having to create and/or copy any dump files. Furthermore, you can rename schemas on the fly as you perform the import. Additionally, it doesn’t matter if the source database is running on a different operating system than that of the destination database. Figure 2-3 illustrates the environment and required steps.
Figure 2-3. Data Pump export and import across a network link
For this example, in the source database there’s a schema named STAR. You want to move this user into the destination database and rename it to STAR_JUL. Also assume that the tablespace names are the same in both the source and the destination databases.
Notice that all of the following steps are performed on the destination server. No steps are required on the source server.
define star_user=star_jul
define star_user_pwd=star_jul_pwd
--
create user &&star_user identified by &&star_user_pwd;
grant create session, create table, create procedure to &&star_user;
$ sqlplus auxdba/auxfoo
Here is a sample CREATE DATABASE LINK script:
SQL> create database link trg
connect to trgdba identified by trgfoo
using 'shrek:1521/TRG';
SQL> create or replace directory dpdir as '/orahome/oracle/dpdir';
With the prior directory object, if you’re not using a privileged user (e.g., a user that has been granted the DBA role) you may need to additionally grant READ and WRITE privileges on the directory to the user. For instance:
SQL> grant read, write on directory dpdir to auxdba;
$ impdp auxdba/auxfoo directory=dpdir network_link=trg
schemas='STAR' remap_schema=STAR:STAR_JUL
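The same invocation can be captured in a Data Pump parameter file, which avoids shell quoting issues with the schema remap. A sketch using the names from this example (the file name star.par is hypothetical):

```
network_link=trg
directory=dpdir
schemas=STAR
remap_schema=STAR:STAR_JUL
```

You would then invoke the import with impdp auxdba/auxfoo parfile=star.par.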
This technique allows you to move large amounts of data between disparate databases without having to create or copy any dump files or data files. You can rename schemas on the fly via the REMAP_SCHEMA parameter, and if the tablespace names weren’t the same on both the source and destination, you could use the REMAP_TABLESPACE parameter to place tables in different tablespaces in the destination database. This makes Data Pump a very efficient way to transfer data between disparate databases.
Tip For a complete description of Data Pump’s features, see Pro Oracle Database 12c Administration available from Apress.
Replicating with Data Pump Transportable Tablespaces
Oracle provides a mechanism for copying data files from one database to another in conjunction with using Data Pump to transport the associated metadata. This is known as the transportable tablespace feature. The amount of time this task requires is directly proportional to the time it takes to copy the data files to the destination server. In this scenario both the source and destination servers have the same operating system platform. Figure 2-4 shows the systems and the steps required to transport tablespaces for this scenario.
Figure 2-4. Using Data Pump with transportable tablespaces
The steps depicted in Figure 2-4 are described in detail next.
Run the following check on the source database to see if the set of tablespaces being transported violates any of the self-contained rules:
SQL> exec dbms_tts.transport_set_check('REPDATA,REPIDX', TRUE);
Now, see if Oracle detected any violations:
SQL> select * from transport_set_violations;
If you don’t have any violations, you should see this:
no rows selected
If you do have violations, such as an index built on a table that exists in a tablespace not being transported, you’ll have to rebuild the index in a tablespace that is being transported. Be aware that detecting and resolving violations can require adding other tablespaces to the transportable set, which can turn into a much bigger task than you might initially anticipate.
SQL> alter tablespace repdata read only;
SQL> alter tablespace repidx read only;
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ expdp trgdba/trgfoo directory=dpdir
dumpfile=trans.dmp logfile=trans.log
transport_tablespaces=REPDATA,REPIDX
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ scp oracle@shrek:/orahome/oracle/dpdir/trans.dmp /orahome/oracle/dpdir
$ scp oracle@shrek:/u01/dbfile/TRG/rep*.dbf /u01/dbfile/DUP
$ impdp auxdba/auxfoo directory=dpdir dumpfile=trans.dmp
transport_datafiles=/u01/dbfile/DUP/repdata.dbf,
/u01/dbfile/DUP/repidx.dbf
Additionally, ensure that the owners of any tables or indexes within the tablespaces being transported exist in the destination database. If an owning schema doesn’t exist in the destination database, you’ll receive this error:
ORA-29342: user ... does not exist in the database
If everything goes well, you should see some output indicating success:
job "AUXDBA"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed ...
SQL> alter tablespace repdata read write;
SQL> alter tablespace repidx read write;
If the data files that are being transported have a block size different from that of the destination database, then you must modify your initialization file (or use an ALTER SYSTEM command) and add a buffer pool that contains the block size of the source database. For example, to add a 16KB buffer cache, place this in the initialization file:
db_16k_cache_size=200M
You can check a tablespace’s block size via this query:
SQL> select tablespace_name, block_size from dba_tablespaces;
The transportable tablespace mechanism allows you to quickly move data files between databases. It’s an appropriate method for moving data in data warehouse–type environments where you might have a staging database that periodically needs data transferred to a reporting database.
Note To export transportable tablespaces, you must use Oracle Enterprise Edition. You can use other editions of Oracle to import transportable tablespaces.
RMAN Replication Using Transportable Tablespaces
You can use RMAN in conjunction with transportable tablespaces to copy data files from one database to a different database. In RMAN terminology the source database is referred to as the target and the destination database is referred to as the auxiliary. The target server can have the same operating system as the auxiliary server, or the target server can have a different operating system than the auxiliary server.
When the operating system is the same, you can issue the RMAN TRANSPORT TABLESPACE command to generate the tablespace metadata dump file. In this mode, the advantage of using RMAN is that you can keep the live data files online during the procedure (meaning you don’t have to place the tablespaces in read-only mode, as Data Pump requires when transporting tablespaces).
When the operating systems are different (such as different endian formats), you’ll have to use the RMAN CONVERT command to create data files that can be used by a database running on a different operating system with a different endian format. In this mode, the tablespaces must be in read-only mode while the RMAN command is running.
Scenarios for transporting between two servers that have the same operating system and between two that have different operating systems are covered in the following sections.
Same Operating System
Using RMAN to transport tablespaces between servers that have the same operating system platform is fairly straightforward. As mentioned previously, in this configuration you can transport data files while the tablespaces are online. RMAN achieves this high availability by utilizing RMAN backups and archive logs while creating the transportable tablespace dump files. This means you must have a valid RMAN backup of the target database. If you don’t have a backup, when you attempt to run the TRANSPORT TABLESPACE command you’ll receive this error:
RMAN-06024: no backup or copy of the control file found to restore
The target database must also be in archivelog mode; if you attempt to run TRANSPORT TABLESPACE against a noarchivelog mode database, you'll receive the following error:
RMAN-05004: target database log mode is NOARCHIVELOG
Figure 2-5 illustrates the basic steps involved when using RMAN to transport tablespaces from one server and database to another when the servers are of the same operating system.
Figure 2-5. RMAN and transportable tablespaces (same platforms)
The steps shown in Figure 2-5 are described in detail next.
$ mkdir -p /u01/transport/TRG
$ mkdir -p /u01/auxiliary/TRG
$ rman target /
RMAN> list backup;
$ rman target /
RMAN> transport tablespace repdata, repidx
tablespace destination '/u01/transport/TRG'
auxiliary destination '/u01/auxiliary/TRG';
In the prior command, the tablespace destination is where RMAN will place files that can be used to transport the tablespaces. The auxiliary destination is used by RMAN to create files associated with a temporary auxiliary database that is used to generate the transportable tablespace files. This temporary auxiliary database will be dropped after the transportable tablespace files have been generated and placed in the transport directory. Don’t confuse this temporary auxiliary database with the auxiliary database that you’ll be transporting the tablespaces to.
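The directory creation and RMAN invocation shown above can be wrapped in a single driver script. This is only a sketch: the connection string, directories, and tablespace names are the ones used in this chapter's example, and the DRY_RUN flag is a convention added here so the generated RMAN command can be reviewed before anything runs.

```shell
#!/bin/sh
# Sketch: drive the same-OS RMAN TRANSPORT TABLESPACE run.
# Assumes OS ("/") authentication and the chapter's directory layout.
# DRY_RUN=1 (the default) only prints the generated RMAN command.
TS_LIST="repdata, repidx"
TS_DEST=${TS_DEST:-/u01/transport/TRG}
AUX_DEST=${AUX_DEST:-/u01/auxiliary/TRG}
DRY_RUN=${DRY_RUN:-1}

# Build the RMAN command text once so it can be reviewed or executed.
RMAN_CMDS="transport tablespace $TS_LIST
tablespace destination '$TS_DEST'
auxiliary destination '$AUX_DEST';"

if [ "$DRY_RUN" -eq 1 ]; then
  printf '%s\n' "$RMAN_CMDS"
else
  mkdir -p "$TS_DEST" "$AUX_DEST"
  rman target / <<EOF
$RMAN_CMDS
EOF
fi
```

Running the script with DRY_RUN=0 creates the two destination directories and hands the command to RMAN.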
$ mkdir /orahome/oracle/dpdir
This is the directory in which you’re going to place the dmpfile.dmp (from the target). If you’ve worked through examples in previous sections in this book, this directory may already exist, and that’s fine.
SQL> create directory dpdir as '/orahome/oracle/dpdir';
If you’ve worked through previous examples in this book, then this directory object may already exist, and that’s fine. Just make sure the directory object DPDIR points to the /orahome/oracle/dpdir directory.
$ scp oracle@shrek:/u01/transport/TRG/dmpfile.dmp /orahome/oracle/dpdir
$ scp oracle@shrek:/u01/transport/TRG/repdata.dbf /u01/dbfile/DUP
$ scp oracle@shrek:/u01/transport/TRG/repidx.dbf /u01/dbfile/DUP
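Before running the import, it's worth confirming that the copied files arrived intact. Here is a minimal sketch; the helper name verify_copy is made up for illustration, and it assumes both the original and the copy are visible on the local file system (for a remote source you'd run cksum over ssh instead).

```shell
#!/bin/sh
# Sketch: compare POSIX cksum output of an original file and its copy.
# In the chapter's example the files would be dmpfile.dmp, repdata.dbf,
# and repidx.dbf.
verify_copy() {
  src=$1 dst=$2
  # Read via stdin so cksum output contains no file names, only CRC+size.
  s=$(cksum < "$src")
  d=$(cksum < "$dst")
  if [ "$s" = "$d" ]; then
    echo "OK: $dst matches $src"
  else
    echo "MISMATCH: $dst differs from $src" >&2
    return 1
  fi
}
```

Call it as `verify_copy original copy`; a nonzero exit status flags a mismatch, which you'd want to catch before impdp ever sees the file.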
$ impdp auxdba/auxfoo directory=dpdir dumpfile=dmpfile.dmp
transport_datafiles=/u01/dbfile/DUP/repdata.dbf,/u01/dbfile/DUP/repidx.dbf
When finished, you should have the tablespaces transported into the auxiliary (destination) database.
Cross-Platform Replication
In some scenarios you can use RMAN commands such as DUPLICATE, RESTORE, and RECOVER when the target server uses a different operating system than the auxiliary server. RMAN supports these operations only for the following operating system combinations that have the same endian format:
Additionally, these operations are supported only when the target and auxiliary databases run the same Oracle version and are at the same patch level.
Tip See MOS note 1079563.1 for details on operations allowed when different platforms are in use for the target and auxiliary servers.
If an operating system combination is not listed in the prior bulleted list, then you must use another supported migration procedure, such as transportable tablespaces, transportable database, or Data Pump export/import. See the sections in this chapter "Different Operating System (Convert Tablespace)" and "Different Operating System (Convert Data File)" for examples of using RMAN to transport a tablespace (and associated data files) between operating systems with different endian formats.
Different Operating System (Convert Tablespace)
When transporting between operating systems that have different endian formats, the CONVERT TABLESPACE command facilitates converting the data files from the source platform to the destination platform. After the data files have been converted, they can be copied to a host running a different operating system with a different endian format.
I recently worked on a project moving a database from a Solaris SPARC 64-bit server to a Linux x86 64-bit host. The steps involved in using CONVERT TABLESPACE to convert the data files are depicted in Figure 2-6.
Figure 2-6. RMAN and transportable tablespaces (different platforms)
Details of the steps shown in Figure 2-6 follow next.
SQL> SELECT platform_id, platform_name, endian_format
FROM V$TRANSPORTABLE_PLATFORM
WHERE UPPER(platform_name) LIKE 'LINUX%';
Here is some sample output:
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ------------------------------ --------------------
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
13 Linux x86 64-bit Little
In this case, the target server can convert data files to be used on the destination Linux x86 64-bit box with the little endian format.
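As a sanity check, you can also determine the byte order of a host directly from the shell and compare it against the ENDIAN_FORMAT column shown above. The following sketch writes the two bytes 0x01 0x02 and asks od to read them back as one 16-bit word in host byte order; which byte comes out first reveals the endianness.

```shell
#!/bin/sh
# Sketch: report the local host's byte order, as a cross-check against
# V$TRANSPORTABLE_PLATFORM.ENDIAN_FORMAT.
word=$(printf '\001\002' | od -An -tx2 | tr -d ' \n')
case "$word" in
  0201) result="Little" ;;  # low byte stored first (e.g., Linux x86)
  0102) result="Big" ;;     # high byte stored first (e.g., Solaris SPARC)
  *)    result="Unknown" ;;
esac
echo "$result"
```

On a Linux x86 64-bit box this prints Little; on Solaris SPARC it would print Big, confirming that a conversion is required between the two.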
SQL> exec dbms_tts.transport_set_check('REPDATA,REPIDX', TRUE);
Now, see if Oracle detected any violations:
SQL> select * from transport_set_violations;
If you don’t have any violations, you should see this:
no rows selected
If you do have violations, such as an index that is built on a table that exists in a tablespace not being transported, then you’ll have to rebuild the index in a tablespace that is being transported. If you need more details on the DBMS_TTS.TRANSPORT_SET_CHECK procedure, refer to the Oracle Database PL/SQL Package and Types Reference guide available on Oracle’s technology network website.
SQL> alter tablespace repdata read only;
SQL> alter tablespace repidx read only;
$ mkdir /orahome/oracle/convert
$ rman target /
RMAN> CONVERT TABLESPACE repdata, repidx
TO PLATFORM 'Linux x86 64-bit'
FORMAT '/orahome/oracle/convert/%U';
You should now have the converted data files in the specified directory:
$ ls /orahome/oracle/convert
Here is some sample output:
data_D-TRG_I-1251088236_TS-REPDATA_FNO-5_brprfa57
data_D-TRG_I-1251088236_TS-REPIDX_FNO-6_bsprfa57
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ expdp trgdba/trgfoo directory=dpdir
dumpfile=conv.dmp logfile=conv.log
transport_tablespaces=REPDATA,REPIDX
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ scp oracle@shrek:/orahome/oracle/dpdir/conv.dmp /orahome/oracle/dpdir
$ scp oracle@shrek:/orahome/oracle/convert/data_D-TRG_I-1251088236_TS-REPDATA_FNO-5_brprfa57 /u01/dbfile/DUP/repdata.dbf
$ scp oracle@shrek:/orahome/oracle/convert/data_D-TRG_I-1251088236_TS-REPIDX_FNO-6_bsprfa57 /u01/dbfile/DUP/repidx.dbf
$ impdp auxdba/auxfoo directory=dpdir dumpfile=conv.dmp
transport_datafiles=/u01/dbfile/DUP/repdata.dbf,/u01/dbfile/DUP/repidx.dbf
You should now have the converted tablespaces (and associated data files) in the destination database.
SQL> alter tablespace repdata read write;
SQL> alter tablespace repidx read write;
Different Operating System (Convert Data File)
You can also transport data across platforms with different endian formats using the RMAN CONVERT DATAFILE command. This example performs a conversion from a Solaris 64-bit server with big endian format to a Linux server with little endian format, with the conversion taking place on the destination server. The environment used and steps required are shown in Figure 2-7.
Figure 2-7. Converting data files between operating systems with different endian formats
Details of the steps shown in Figure 2-7 follow next.
SQL> SELECT platform_id, platform_name, endian_format
FROM V$TRANSPORTABLE_PLATFORM
WHERE UPPER(platform_name) LIKE 'SOLARIS%';
Here is some sample output:
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ---------------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
17 Solaris Operating System (x86) Little
20 Solaris Operating System (x86-64) Little
In this case, the destination (Linux) server is capable of converting data files from a Solaris box with the big endian format.
SQL> exec dbms_tts.transport_set_check('REPDATA,REPIDX', TRUE);
Now, see if Oracle detected any violations:
SQL> select * from transport_set_violations;
If you don’t have any violations, you should see this:
no rows selected
If you do have violations, such as an index that is built on a table that exists in a tablespace not being transported, then you’ll have to rebuild the index in a tablespace that is being transported.
SQL> alter tablespace repdata read only;
SQL> alter tablespace repidx read only;
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ expdp trgdba/trgfoo directory=dpdir
dumpfile=conv.dmp logfile=conv.log
transport_tablespaces=REPDATA,REPIDX
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ scp oracle@shrek:/orahome/oracle/dpdir/conv.dmp /orahome/oracle/dpdir
$ scp oracle@shrek:/u01/dbfile/TRG/repdata.dbf /tmp
$ scp oracle@shrek:/u01/dbfile/TRG/repidx.dbf /tmp
$ rman target /
RMAN> CONVERT DATAFILE
'/tmp/repdata.dbf',
'/tmp/repidx.dbf'
DB_FILE_NAME_CONVERT
'/tmp',
'/u01/dbfile/DUP'
FROM PLATFORM 'Solaris[tm] OE (64-bit)';
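The CONVERT DATAFILE run above can likewise be wrapped in a small driver script. As before, this is a sketch under assumptions: the paths and platform name are the ones from this example, and the DRY_RUN flag is an added convention so the generated RMAN command can be inspected before execution.

```shell
#!/bin/sh
# Sketch: drive the destination-side RMAN CONVERT DATAFILE run.
# DRY_RUN=1 (the default) only prints the generated RMAN command.
SRC_DIR=${SRC_DIR:-/tmp}
DST_DIR=${DST_DIR:-/u01/dbfile/DUP}
PLATFORM="Solaris[tm] OE (64-bit)"
DRY_RUN=${DRY_RUN:-1}

# Build the RMAN command text once so it can be reviewed or executed.
RMAN_CMDS="convert datafile
'$SRC_DIR/repdata.dbf',
'$SRC_DIR/repidx.dbf'
db_file_name_convert
'$SRC_DIR',
'$DST_DIR'
from platform '$PLATFORM';"

if [ "$DRY_RUN" -eq 1 ]; then
  printf '%s\n' "$RMAN_CMDS"
else
  rman target / <<EOF
$RMAN_CMDS
EOF
fi
```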
$ impdp auxdba/auxfoo directory=dpdir dumpfile=conv.dmp
transport_datafiles=/u01/dbfile/DUP/repdata.dbf,
/u01/dbfile/DUP/repidx.dbf
This imports into the destination database the metadata for the tablespaces (and associated data files) being converted. You can verify the data files exist in the data dictionary via:
SQL> select name from v$datafile where name like '%rep%';
NAME
----------------------------------------------------
/u01/dbfile/DUP/repdata.dbf
/u01/dbfile/DUP/repidx.dbf
You may want to place the newly converted tablespaces into read-write mode at this point:
SQL> alter tablespace repdata read write;
SQL> alter tablespace repidx read write;
Moving Data with External Tables
External tables are primarily used to load data from CSV files into the database. They can also be used to select data from a regular database table and create a binary dump file. The dump file is platform independent and can be used to move large amounts of data between servers of different platforms and endian formats.
You can also encrypt or compress data, or both, when creating the dump file. Doing so provides you with an efficient and secure way of transporting data between database servers.
Figure 2-8 illustrates the components involved in using an external table to unload and load data. On the target (source) database, create a dump file using an external table that selects data from a table named INV. After it’s created, copy the dump file to the auxiliary (destination) server and subsequently load the file into the database using an external table.
Figure 2-8. Using external tables to unload and load data
A small example illustrates the technique of using an external table to unload data. Here are the steps required:
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
SQL> CREATE TABLE inv
(inv_id NUMBER,
inv_desc VARCHAR2(30));
SQL> insert into inv values (1, 'test data');
SQL> commit;
CREATE TABLE inv_et
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpdir
LOCATION ('inv.dmp')
)
AS SELECT * FROM inv;
The previous command creates two things:
$ mkdir /orahome/oracle/dpdir
SQL> create directory dpdir as '/orahome/oracle/dpdir';
$ scp oracle@shrek:/orahome/oracle/dpdir/inv.dmp /orahome/oracle/dpdir
The remote server (to which you copy the dump file) can be a platform different from that of the server on which you created the file. For example, you can create a dump file on a Windows box, copy it to a Linux/UNIX server, and select from the dump file via an external table.
SQL> CREATE TABLE inv_dw
(inv_id number
,inv_desc varchar2(30))
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpdir
LOCATION ('inv.dmp'));
After it’s created, you can access the external table data from SQL*Plus:
SQL> select * from inv_dw;
INV_ID INV_DESC
---------- ------------------------------
1 test data
You can also use the dump file to create and load a regular table:
SQL> create table inv as select * from inv_dw;
This provides a simple and efficient mechanism for transporting data from one platform to another.
Tip For complete details on external tables, see Expert Oracle Database Architecture, available from Apress.
Enabling Parallelism
To maximize the unload performance when you create a dump file via an external table, use the PARALLEL clause. This example creates two dump files in parallel:
CREATE TABLE inv_et
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpdir
LOCATION ('inv1.dmp','inv2.dmp')
)
PARALLEL 2
AS SELECT * FROM inv;
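When dump files are created in parallel and copied to the destination, a select against the reading-side external table fails if any file named in the LOCATION clause is missing. Here is a small sketch that checks for this up front; the helper name check_dumps is made up for illustration, and the paths follow the chapter's example.

```shell
#!/bin/sh
# Sketch: verify that each dump file listed in an external table's
# LOCATION clause is present in the directory object's path.
check_dumps() {
  dir=$1; shift
  missing=0
  for f in "$@"; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $dir/$f" >&2
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all dump files present"
  else
    return 1
  fi
}
```

For this example you would call it as `check_dumps /orahome/oracle/dpdir inv1.dmp inv2.dmp` before creating the external table on the destination.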
To access the data in the dump files, create a different external table that references the two dump files:
CREATE TABLE inv_dw
(inv_id number
,inv_desc varchar2(30))
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpdir
LOCATION ('inv1.dmp','inv2.dmp'));
You can now use this external table to select data from the dump files:
SQL> select * from inv_dw;
Enabling Compression
You can create a compressed dump file via an external table. For example, use the COMPRESSION option of the ACCESS PARAMETERS clause:
CREATE TABLE inv_et
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY dpdir
ACCESS PARAMETERS (COMPRESSION ENABLED BASIC)
LOCATION ('inv.dmp')
)
AS SELECT * FROM inv;
In Oracle 12c there are four levels of compression: BASIC, LOW, MEDIUM, and HIGH. Before using compression, ensure that the COMPATIBLE initialization parameter is set to 12.0.0 or higher. The LOW, MEDIUM, and HIGH levels of compression require Oracle Enterprise Edition, along with the Advanced Compression option.
Tip You can also enable encryption when transporting data via external tables. See the Oracle Advanced Security Administrator’s Guide, which can be freely downloaded from the Technology Network area of the Oracle web site (http://otn.oracle.com), for full details on implementing encryption.
Summary
This chapter laid the foundation for understanding how to move data from one database environment to another, covering several manual techniques:
Understanding these manual methods prepares you to use RMAN's duplicate database functionality intelligently: you'll better understand the advantages and disadvantages of each feature and be in a better position to architect replication solutions. The RMAN duplicate database feature is discussed in detail in Chapters 3, 4, and 5.