© Bobby Curtis 2016

Bobby Curtis, Pro Oracle GoldenGate for the DBA, 10.1007/978-1-4842-1179-3_7

7. Advanced Features

Bobby Curtis

(1)Winston, Georgia, USA

To this point, you have been given a lot of information about how Oracle GoldenGate can be installed, configured, run, verified, and monitored. These are all important things to understand when running Oracle GoldenGate, but they are only the basics. Oracle GoldenGate also provides additional features to help you customize the environment and make management a bit easier. These are the advanced features of Oracle GoldenGate.

Advanced features help administrators customize, scale, and process transactions in a more robust way. This chapter looks at how to create and use macros, how to define and use tokens, and several other advanced features of Oracle GoldenGate.

Macros

Macros are a powerful feature of Oracle GoldenGate. In the many deployments I have done, this seems to be the one feature that can prevent many mistakes within the environment. Macros provide a way to modularize the code base for the replication environment and make it easier to move replication settings between release environments. Many people, however, do not understand this simple time-saving technique.

Although macros are a timesaver, the question is where these modules of code should be stored for reuse. If you look at the subdirectories within the Oracle GoldenGate home, you will notice there is no directory named dirmac. This is normal; most people who use macros simply put the macro files in the dirprm directory. I consider this bad practice because it doesn't provide a clear mapping of what is in the directory. To remedy this problem, you should create the dirmac directory manually; then all the macros that you use within the environment can be called from a single location.
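If you want to follow this convention, creating the directory is a single operating system command run from the Oracle GoldenGate home; a minimal sketch, assuming the GoldenGate home is /opt/app/oracle/ogg (adjust the path for your environment):

cd /opt/app/oracle/ogg
mkdir dirmac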

Creating a Macro

As discussed, macros are a very powerful tool to use within Oracle GoldenGate, as they provide the ability to modularize the GoldenGate environment. To use a macro, it first needs to be created and then set up to be used within the GoldenGate processes.

A macro is simply a code block that can be reused over and over again in the same environment or copied to other environments. The basics of a macro read similar to a PL/SQL code block. Listing 7-1 shows the basic components.

Listing 7-1. Basic Macro Code
MACRO #<name>
BEGIN
<GoldenGate related information>
END;

In Listing 7-1, you see that the macro is created by simply editing a text file and adding lines that name the macro (macro names must begin with the # character) and define the beginning and the end of the macro. Everything between the BEGIN and END keywords of the macro is general GoldenGate parameter text that will be read when the GoldenGate process starts. To illustrate this functionality, Listing 7-2 shows what the macro #rtables looks like in my test environment.

Listing 7-2. Test Macro Example
MACRO #rtables
BEGIN
MAP SOE.ADDRESSES, TARGET SOE.ADDRESSES;
MAP SOE.CARD_DETAILS, TARGET SOE.CARD_DETAILS, THREAD(2);
MAP SOE.CUSTOMERS, TARGET SOE.CUSTOMERS, THREAD(3);
MAP SOE.INVENTORIES, TARGET SOE.INVENTORIES, THREAD(4);
MAP SOE.LOGON, TARGET SOE.LOGON, THREAD(5);
MAP SOE.ORDER_ITEMS, TARGET SOE.ORDER_ITEMS, THREAD(1);
MAP SOE.ORDERENTRY_METADATA, TARGET SOE.ORDERENTRY_METADATA, THREAD(2);
MAP SOE.PRODUCT_DESCRIPTIONS, TARGET SOE.PRODUCT_DESCRIPTIONS, THREAD(4);
MAP SOE.PRODUCT_INFORMATION, TARGET SOE.PRODUCT_INFORMATION, THREAD(5);
MAP SOE.WAREHOUSES, TARGET SOE.WAREHOUSES, THREAD(1);
END;
Note

The lines with THREAD(#) are related to a coordinated replicat.

Now that you have a macro that can be used to map tables, how do you use the macro in a parameter file? As in many programming languages that allow you to modularize code, an INCLUDE statement is needed to tell the GoldenGate process to read the associated file. Listing 7-3 shows how this is done in a replicat parameter file.

Listing 7-3. Illustration of Including Macros
INCLUDE ./dirmac/rtables.mac
REPLICAT RSRC1
SETENV (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
SETENV (ORACLE_SID="rmt12c")
USERID GGATE, PASSWORD ********
ASSUMETARGETDEFS
REPORTCOUNT EVERY 5 SECONDS, RATE
DISCARDFILE ./dirrpt/RSRC1.dsc, append, megabytes 500
#rtables();

Executing a Macro

In Listing 7-3, you see that the INCLUDE statement is listed at the top of the parameter file. At the bottom of the parameter file, the macro that is desired is called. By providing these two pieces of information, you are telling GoldenGate that you want to read the macro and use it as part of the fundamental operation of the GoldenGate process.

When the GoldenGate process is started, the INCLUDE file is read and the macro is expanded inline, as shown in the process report excerpt in Listing 7-4.

Listing 7-4. Execution During Startup of Process
INCLUDE ./dirmac/logon.mac
MACRO #logon_settings
BEGIN
USERID ggate, PASSWORD *****
END;
INCLUDE ./dirmac/rtables.mac
MACRO #rtables
BEGIN
MAP SOE.ADDRESSES, TARGET SOE.ADDRESSES;
MAP SOE.CARD_DETAILS, TARGET SOE.CARD_DETAILS, THREAD(2);
MAP SOE.CUSTOMERS, TARGET SOE.CUSTOMERS, THREAD(3);
MAP SOE.INVENTORIES, TARGET SOE.INVENTORIES, THREAD(4);
MAP SOE.LOGON, TARGET SOE.LOGON, THREAD(5);
MAP SOE.ORDER_ITEMS, TARGET SOE.ORDER_ITEMS, THREAD(1);
MAP SOE.ORDERENTRY_METADATA, TARGET SOE.ORDERENTRY_METADATA, THREAD(2);
MAP SOE.ORDERS, #runrates(3);
MAP SOE.PRODUCT_DESCRIPTIONS, TARGET SOE.PRODUCT_DESCRIPTIONS, THREAD(4);
MAP SOE.PRODUCT_INFORMATION, TARGET SOE.PRODUCT_INFORMATION, THREAD(5);
MAP SOE.WAREHOUSES, TARGET SOE.WAREHOUSES, THREAD(1);
END;
REPLICAT RSRC1
SETENV (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
SETENV (ORACLE_SID="rmt12c")
#logon_settings()
USERID ggate, PASSWORD *****
ASSUMETARGETDEFS
REPORTCOUNT EVERY 5 SECONDS, RATE
DBOPTIONS NOSUPPRESSTRIGGERS
REPERROR (DEFAULT, EXCEPTION)
REPERROR (-1, EXCEPTION)
REPERROR (-1403, EXCEPTION)
REPERROR (-2291, EXCEPTION)
DISCARDFILE ./dirrpt/RSRC1.dsc, append, megabytes 500
#rtables();
MAP SOE.ADDRESSES, TARGET SOE.ADDRESSES;
MAP SOE.CARD_DETAILS, TARGET SOE.CARD_DETAILS, THREAD(2);
MAP SOE.CUSTOMERS, TARGET SOE.CUSTOMERS, THREAD(3);
MAP SOE.INVENTORIES, TARGET SOE.INVENTORIES, THREAD(4);
MAP SOE.LOGON, TARGET SOE.LOGON, THREAD(5);
MAP SOE.ORDER_ITEMS, TARGET SOE.ORDER_ITEMS, THREAD(1);
MAP SOE.ORDERENTRY_METADATA, TARGET SOE.ORDERENTRY_METADATA, THREAD(2);
MAP SOE.ORDERS, #runrates(3);
MAP SOE.PRODUCT_DESCRIPTIONS, TARGET SOE.PRODUCT_DESCRIPTIONS, THREAD(4);
MAP SOE.PRODUCT_INFORMATION, TARGET SOE.PRODUCT_INFORMATION, THREAD(5);
MAP SOE.WAREHOUSES, TARGET SOE.WAREHOUSES, THREAD(1);

As you can see from the examples, it is easy to create a macro, include it in a parameter file, and have it execute. The examples shown here are very basic for demonstration purposes; however, the real power of a macro comes into play when you nest macros within macros to build a very modular configuration.
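The #runrates call that appears in Listing 7-4 is an example of such nesting. As a minimal sketch of how a parameterized macro like it might be defined (the body shown here is an assumption, not the exact macro from my environment), the PARAMS clause declares the macro's arguments:

MACRO #runrates
PARAMS (#thread)
BEGIN
TARGET SOE.ORDERS, THREAD(#thread)
END;

With a definition like this in place, MAP SOE.ORDERS, #runrates(3); expands to MAP SOE.ORDERS, TARGET SOE.ORDERS, THREAD(3); when the parameter file is read.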

Tokens

Tokens are another advanced feature that can be used to capture and store data within the trail file's user token area. Data stored as tokens can be used to customize the way that Oracle GoldenGate delivers information. A few places where tokens can be used are:

  • Column mappings

  • Stored procedures called by SQLEXEC

  • User exits

  • Macros

Note

There are two types of tokens: user-defined and GoldenGate-specific tokens (GGSToken). GGSTokens are used to store LOGSCN and TRANID, among other internal information.

Because tokens can be used in a wide range of GoldenGate areas, these bits of user-defined information can be powerful for implementing business rules or tracking what is happening in the GoldenGate environment.

Defining Tokens

To define a token, you must define the token name and associate it with data that should be captured; this is normally environment-related information. The data that can be defined within a token can be any valid character data or values retrieved from an Oracle GoldenGate column-conversion function.

The record header of the trail file permits up to 2,000 bytes of data to be stored for user token information. The token name, length of data, and the data itself must all fit into the 2,000 bytes allocated in the record header.

You can define a token in the capture and pump processes of the GoldenGate environment. The TOKENS option is part of the TABLE parameter for the extract process. Listing 7-5 highlights the syntax needed for a token definition.

Listing 7-5. Token Definition Syntax
TABLE <table defined>, TOKENS (<token name> = <token_data> [, ...]);

The syntax for defining a token can be placed either directly into the parameter file for the extract or it can be defined in a macro for reuse.
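As a quick sketch of the macro approach, the token definitions could be centralized in a macro file under dirmac and invoked per table; the macro name #std_tokens and the tokens chosen here are hypothetical:

MACRO #std_tokens
BEGIN
TOKENS (
TK-OSUSER = @GETENV ('GGENVIRONMENT' , 'OSUSERNAME'),
TK-HOST = @GETENV ('GGENVIRONMENT' , 'HOSTNAME'))
END;

A table in the extract parameter file could then be written as TABLE SOE.ADDRESSES, #std_tokens(); so that every table picks up the same tokens without repeating the definitions.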

Running Tokens

Now that you know how to define a token, putting tokens to use is the next step. As previously mentioned, tokens are defined as part of the TABLE parameter in the extract process. This allows you to define custom user tokens. To illustrate how to define a token in an extract, Listing 7-6 shows how this is done per table.

Listing 7-6. Defining Tokens in Extract Process
INCLUDE ./dirmac/logon.mac
--CHECKPARAMS
EXTRACT ESRC1
#logon_settings()
TRANLOGOPTIONS DBLOGREADER
SETENV (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
SETENV (ORACLE_SID="src12c")
EXTTRAIL ./dirdat/lt
WARNLONGTRANS 1h, CHECKINTERVAL 30m
WILDCARDRESOLVE IMMEDIATE
REPORTCOUNT EVERY 3 MINUTES, RATE
INCLUDE ./dirmac/heartbeat_extract.mac
TABLE SOE.ADDRESSES, TOKENS (
TK-OSUSER = @GETENV ('GGENVIRONMENT' , 'OSUSERNAME'),
TK-GROUP = @GETENV ('GGENVIRONMENT' , 'GROUPNAME'),
TK-HOST = @GETENV ('GGENVIRONMENT' , 'HOSTNAME'),
TK-TABLE = @GETENV ('GGHEADER' , 'TABLENAME'));
TABLE SOE.CARD_DETAILS;
TABLE SOE.CUSTOMERS;
TABLE SOE.INVENTORIES;
TABLE SOE.LOGON;
TABLE SOE.ORDER_ITEMS;
TABLE SOE.ORDERENTRY_METADATA;
TABLE SOE.ORDERS;
TABLE SOE.PRODUCT_DESCRIPTIONS;
TABLE SOE.PRODUCT_INFORMATION;
TABLE SOE.WAREHOUSES;

In Listing 7-6, notice that there are four tokens defined. These tokens start with the prefix TK-.

Note

A token can be named anything you want.

These are the names that will be used on the replicat side when reading the tokens from the trail file being processed. Also notice that the information being gathered comes from the environment that GoldenGate is running in, by way of the @GETENV function. The information that is obtained from @GETENV is stored as the value of the token with the TK- prefix. At a later point in the replication, this information can be retrieved by calling the token name.

Note

Tokens can be populated with any type of information; if you need more information on the @GETENV function, you can find it at http://docs.oracle.com/goldengate/c1221/gg-winux/GWURF/column_conversion_functions015.htm#GWURF788.

Applying Tokens

With tokens defined in the extract process and your desired information captured, you will want this information applied on the receiving end of replication. To do this, the replicat has to be told about the tokens and how they should be applied in the database.

To apply tokens, they need to be mapped to columns in the target table. The target table can either be the replicated table with additional columns for tokens or a separate table with columns that only map the tokens. Either way, the tokens have to be mapped to columns. In Listing 7-7, you will see how I map the table SOE.ADDRESSES to a table that will only store the contents of the tokens.
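For reference, here is a minimal sketch of what a token-only target table might look like; the column data types and lengths are assumptions for illustration, not the exact DDL from my environment:

CREATE TABLE SOE.TOKEN_INFO
(OSUSERNAME VARCHAR2(30),
GROUPNAME   VARCHAR2(30),
HOSTNAME    VARCHAR2(64),
TABLENAME   VARCHAR2(61));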

Listing 7-7. Replicat Mapping with Tokens Via Macro
MACRO #rtables
BEGIN
MAP SOE.ADDRESSES, TARGET SOE.ADDRESSES;
MAP SOE.ADDRESSES, TARGET SOE.TOKEN_INFO, COLMAP(OSUSERNAME=@TOKEN('TK-OSUSER'),GROUPNAME=@TOKEN('TK-GROUP'),HOSTNAME=@TOKEN('TK-HOST'),TABLENAME=@TOKEN('TK-TABLE'));
MAP SOE.CARD_DETAILS, TARGET SOE.CARD_DETAILS, THREAD(2);
MAP SOE.CUSTOMERS, TARGET SOE.CUSTOMERS, THREAD(3);
MAP SOE.INVENTORIES, TARGET SOE.INVENTORIES, THREAD(4);
MAP SOE.LOGON, TARGET SOE.LOGON, THREAD(5);
MAP SOE.ORDER_ITEMS, TARGET SOE.ORDER_ITEMS, THREAD(1);
MAP SOE.ORDERENTRY_METADATA, TARGET SOE.ORDERENTRY_METADATA, THREAD(2);
MAP SOE.PRODUCT_DESCRIPTIONS, TARGET SOE.PRODUCT_DESCRIPTIONS, THREAD(4);
MAP SOE.PRODUCT_INFORMATION, TARGET SOE.PRODUCT_INFORMATION, THREAD(5);
MAP SOE.WAREHOUSES, TARGET SOE.WAREHOUSES, THREAD(1);
END;

The key thing to notice in this MAP statement is that the TARGET table is SOE.TOKEN_INFO. These tables are different in structure; however, the COLMAP option allows you to map what information is getting applied to what columns. In this case, I want to map all the tokens previously defined to the columns in the SOE.TOKEN_INFO table. If you mapped this out logically in a table, it would look similar to Table 7-1.

Table 7-1. Example of Column Mapping for Tokens

Column Name    Token Mapping
OSUSERNAME     @TOKEN('TK-OSUSER')
GROUPNAME      @TOKEN('TK-GROUP')
HOSTNAME       @TOKEN('TK-HOST')
TABLENAME      @TOKEN('TK-TABLE')

When the replicat begins to apply the information in the trail files to the database, the tokens will be mapped to the corresponding columns for SOE.TOKEN_INFO. After the apply process is complete, you can query the SOE.TOKEN_INFO table and verify that the token information has been applied to the table. Figure 7-1 shows the token information applied to the correct table.
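The verification itself is a simple query; a minimal sketch, assuming the column names shown in Table 7-1:

SELECT osusername, groupname, hostname, tablename
FROM soe.token_info;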

Figure 7-1. Token info in the table

As you can see, the transactions that ran against the SOE.ADDRESSES table have presented the token information for the OS user who ran the GoldenGate process, the GoldenGate process group, the hostname where the token was defined, and the table from which it was captured.

Just like macros, tokens are a very powerful tool that can be used to customize the replication process. Tokens provide a way for you to define and capture custom or granular detail information from the source database and retain a record of it on the target side. Tokens also allow you to execute defined procedures using SQLEXEC depending on the contents of the token.

Heartbeat

Another feature of Oracle GoldenGate is the ability to track the latency within the replication framework by using a heartbeat process. A heartbeat process is a set of GoldenGate processes used to calculate the "true" end-to-end lag, including the network. From an administration standpoint, this is a handy tool for validating the lag in the network.

Starting in Oracle GoldenGate 12c (12.2.0.1.0), there are two types of heartbeat processes. The first type is the traditional heartbeat, which is external to the existing GoldenGate processes. The second type is the integrated heartbeat, which is new in 12.2.0.1.0. Let’s take a look at both processes so you can get an understanding of how they work.

Traditional Heartbeat

The traditional heartbeat is a process that was developed to measure lag between the source and target by using "heartbeat" tables. Besides the tables themselves, there are a few other parts that need to be configured to make the process work.

To successfully set up the traditional heartbeat structure within a GoldenGate environment, a number of objects need to be created. Table 7-2 provides a brief breakdown of what needs to be added to the source and target databases to run a traditional heartbeat.

Table 7-2. Traditional Heartbeat Components

Location    Component                  Type
Source      HEARTBEAT                  Table
Source      SEQ_GGS_HEARTBEAT_ID       Sequence
Source      HEARTBEAT_TRIG             Trigger
Source      <no name required>         Scheduler Job
Target      GGS_HEARTBEAT_HISTORY      Table
Target      SEQ_GGS_HEARTBEAT_HIST     Sequence
Target      GGS_HEARTBEAT_TRIG_HIST    Trigger
Target      TOTAL_LAG_HB               View

Note

The component names in Table 7-2 can be changed when creating your heartbeat framework. The names are mostly taken from documentation and personal setup information.

The original heartbeat information provided by Oracle can be found in My Oracle Support Note ID 1299679.1.

As you can tell, there are a few things that need to be configured to make the traditional heartbeat work. The components in Table 7-2 are just the basics; they provide only the database-level framework and do not include the GoldenGate processes that will replicate the heartbeat information. Let's take a closer look at this SQL framework before diving into the specifics at the GoldenGate process level.

Source Database Configuration

The first thing that needs to be done is to set up the traditional heartbeat database components in the source database. These components consist of a table specific to heartbeat information, a sequence, a trigger, and a scheduler job that updates the table (which in turn fires the trigger). Let's take a look at each of these components.

Traditional Heartbeat Table

The traditional heartbeat table is just a standard table that you can create in any schema that you like. Traditionally, the heartbeat table should be created in the schema that will house your Oracle GoldenGate objects. Listing 7-8 shows the DDL for building out the table.

Note

Your heartbeat table can be different from this framework.

Listing 7-8. Create DDL for Traditional Heartbeat Table
drop table &&ogg_user..heartbeat;

-- Create table statement
CREATE TABLE &&ogg_user..HEARTBEAT
(ID NUMBER ,
SRC_DB           VARCHAR2(30),
EXTRACT_NAME     varchar2(8),
SOURCE_COMMIT    TIMESTAMP,
TARGET_COMMIT    TIMESTAMP,
CAPTIME          TIMESTAMP,
CAPLAG           NUMBER,
PMPTIME          TIMESTAMP,
PMPGROUP         VARCHAR2(8 BYTE),
PMPLAG           NUMBER,
DELTIME          TIMESTAMP,
DELGROUP         VARCHAR2(8 BYTE),
DELLAG           NUMBER,
TOTALLAG         NUMBER,
thread           number,
update_timestamp timestamp,
EDDLDELTASTATS   number,
EDMLDELTASTATS   number,
RDDLDELTASTATS   number,
RDMLDELTASTATS   number,
CONSTRAINT HEARTBEAT_PK PRIMARY KEY (SRC_DB)
)
/

Notice that the table keeps track of key lag metrics and that the ID column is a standard NUMBER data type, populated by a sequence and trigger. This pattern suits databases at version 11g and below; if you build a heartbeat table in Oracle Database 12c, you can leverage the identity column feature of the database instead. Now that you have an 11g version of the table created, you will need a sequence and a trigger to increment the information stored in the table. These parts of the traditional heartbeat can be created with the examples in Listing 7-9.
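As a hedged sketch of the 12c alternative, only the ID column definition changes; the rest of the DDL in Listing 7-8 stays the same:

ID NUMBER GENERATED BY DEFAULT AS IDENTITY,

With an identity column, the SEQ_GGS_HEARTBEAT_ID sequence and the :NEW.ID assignment in the trigger in Listing 7-9 would no longer be needed.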

Note

The PL/SQL code presented here is a commented set of code that can be obtained from an Oracle white paper at http://www.ateam-oracle.com/wp-content/uploads/2013/04/OGG-Best-Practice-heartbeat-table-using-DBMS_SCHEDULER-V11_0-ID1299679.1.pdf.

Listing 7-9. Sequence and Trigger Required
DROP SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_ID ;

CREATE SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_ID INCREMENT BY 1 START WITH 1 ORDER ;

CREATE OR REPLACE TRIGGER &&ogg_user..HEARTBEAT_TRIG
BEFORE INSERT OR UPDATE ON &&ogg_user..HEARTBEAT
FOR EACH ROW
BEGIN
select SEQ_GGS_HEARTBEAT_ID.nextval
into :NEW.ID
from dual;
select systimestamp
into :NEW.target_COMMIT
from dual;
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
into :NEW.PMPLAG
from dual;
select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
into :NEW.DELLAG
from dual;
select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.TOTALLAG
from dual;
end ;
/

After creating the components in Listing 7-9, the table needs to be primed to ensure that all the components are working. This is done by simply inserting an initial record into the table (Listing 7-10). Once this record is inserted, verify that the trigger fires and updates the key columns.

Listing 7-10. Insert Statement for Heartbeat
-- this assumes that the table is empty
INSERT INTO &&ogg_user..HEARTBEAT (SRC_DB) SELECT name FROM v$database;
commit;

Now that the source side of the heartbeat is configured, the next thing that needs to happen is to ensure this table is updated on a regular basis. To do this, you will need to set up a scheduler job that updates the table (and thus fires the trigger) every minute, or at whatever interval you are comfortable with. Listing 7-11 provides a scheduler job normally used for heartbeat tables.

Listing 7-11. Schedule Job Setup
grant select on v_$database to &&ogg_user;

BEGIN
    SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '&&ogg_user..OGG_HB',
                                defer => false,
                                force => false);
END;
/


CREATE OR REPLACE PROCEDURE &&ogg_user..gg_update_hb_tab IS
v_thread_num     NUMBER;
v_db_unique_name VARCHAR2 (128);
BEGIN
SELECT db_unique_name
INTO  v_db_unique_name
FROM v$database;


UPDATE &&ogg_user..heartbeat
SET update_timestamp = SYSTIMESTAMP
,src_db = v_db_unique_name;
END;
/


BEGIN
SYS.DBMS_SCHEDULER.CREATE_JOB (
job_name => '&&ogg_user..OGG_HB',
job_type => 'STORED_PROCEDURE',
job_action => '&&ogg_user..GG_UPDATE_HB_TAB',
number_of_arguments => 0,
start_date => NULL,
repeat_interval => 'FREQ=MINUTELY',
end_date => NULL,
job_class => '"SYS"."DEFAULT_JOB_CLASS"',
enabled => FALSE,
auto_drop => FALSE,
comments => 'GoldenGate',
credential_name => NULL,
destination_name => NULL);


SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '&&ogg_user..OGG_HB',
attribute => 'restartable', value => TRUE);


SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '&&ogg_user..OGG_HB',
attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);


SYS.DBMS_SCHEDULER.enable(
name => '&&ogg_user..OGG_HB');
END;
/

You will notice that this scheduler job will update the heartbeat table; however, the heartbeat table will have a few columns with NULL values (Figure 7-2). This is normal because the trigger doesn’t have all the information needed. As the changes in the table are captured and replicated, the additional information will be provided on the target side.
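One quick way to confirm the job is running is to watch the update_timestamp column advance between queries; a minimal sketch using the columns from Listing 7-8 (the lag columns remain NULL on the source, as noted):

SELECT src_db, update_timestamp, caplag, pmplag, dellag, totallag
FROM heartbeat;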

Figure 7-2. Output of heartbeat table

Target Database Configuration

With the source side set up, you need to set up the target side. The target side configuration has to be set up the same way using SQL. The components needed for the heartbeat to work are similar to those for the source side: a table, a sequence, and a trigger.

The table that needs to be created is a simple table to keep track of all the historical information for the heartbeat. Historical information provides a way to validate the lag over time. Listing 7-12 provides the DDL for the table.

Listing 7-12. Target Side Heartbeat Table
DROP TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY;

CREATE TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY
(      ID NUMBER ,
SRC_DB           VARCHAR2(30),
EXTRACT_NAME     varchar2(8),
SOURCE_COMMIT    TIMESTAMP,
TARGET_COMMIT    TIMESTAMP,
CAPTIME          TIMESTAMP,
CAPLAG           NUMBER,
PMPTIME          TIMESTAMP,
PMPGROUP         VARCHAR2(8 BYTE),
PMPLAG           NUMBER,
DELTIME          TIMESTAMP,
DELGROUP         VARCHAR2(8 BYTE),
DELLAG           NUMBER,
TOTALLAG         NUMBER,
thread           number,
update_timestamp timestamp,
EDDLDELTASTATS   number,
EDMLDELTASTATS   number,
RDDLDELTASTATS   number,
RDMLDELTASTATS   number
);

You will notice that the table structure is the same as the source table. Like many other tables that are replicated in GoldenGate, having a matching table structure helps to simplify the replication requirements. Keeping this in mind, you will also need to create a sequence for the table to use (Listing 7-13).

Listing 7-13. Sequence Needed for Target Heartbeat Table
DROP SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST ;
CREATE SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST INCREMENT BY 1 START WITH 1 ORDER ;

Finally, you need to create the trigger that will fire when transactions happen against the table. This trigger, just like the one on the source table, performs a few calculations against the incoming data and calculates the lag from source to target. Listing 7-14 provides the DDL for the trigger.

Listing 7-14. Target Side Triggers
create or replace TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG_HIST
BEFORE INSERT OR UPDATE ON &&ogg_user..GGS_HEARTBEAT_HISTORY
FOR EACH ROW
BEGIN
select seq_ggs_HEARTBEAT_HIST.nextval into :NEW.ID
from dual;
select systimestamp into :NEW.target_COMMIT from dual;
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME), instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
into :NEW.PMPLAG
from dual;
select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1, instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME), instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
into :NEW.DELLAG
from dual;
select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1, instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT), instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.TOTALLAG
from dual;
end ;
/
ALTER TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG_HIST ENABLE;

Now that the SQL components of the traditional heartbeat are in place, you will need to configure the GoldenGate processes for the heartbeat.

Heartbeat Configuration

Configuring the heartbeat is just like setting up normal replication; the only difference is that the heartbeat table is the only object being replicated across the extract, pump, and replicat. This design keeps the heartbeat as lightweight as possible so that it does not affect existing replication while still providing near real-time information on the network lag.

To do this, you will need to have three parameter files created, one parameter file for each of the processes needed in the heartbeat replication. Listing 7-15 provides a breakdown of the parameter files and their contents.

Listing 7-15. Heartbeat Parameter Files and Contents (Extract/Pump/Replicat)

Extract:

INCLUDE ./dirmac/logon.mac
--CHECKPARAMS
EXTRACT EXT_HB
#logon_settings()
TRANLOGOPTIONS DBLOGREADER
SETENV (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
SETENV (ORACLE_SID="src12c")
WARNLONGTRANS 1h, CHECKINTERVAL 30m
EXTTRAIL ./dirdat/h1
--WILDCARDRESOLVE IMMEDIATE
REPORTCOUNT EVERY 5 MINUTES, RATE
INCLUDE ./dirmac/heartbeat_extract.mac

Pump (Extract):

INCLUDE ./dirmac/logon.mac
EXTRACT PMP_HB
#logon_settings()
RMTHOST 10.10.1.12, MGRPORT 15000, COMPRESS
RMTTRAIL ./dirdat/h2
INCLUDE ./dirmac/heartbeat_pump.mac

Replicat:

INCLUDE ./dirmac/logon.mac
REPLICAT REP_HB
SETENV (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
SETENV (ORACLE_SID="rmt12c")
#logon_settings()
ASSUMETARGETDEFS
REPORTCOUNT EVERY 5 MINUTES, RATE
DISCARDFILE ./dirrpt/RSRC1.dsc, append, megabytes 500
DBOPTIONS NOSUPPRESSTRIGGERS
INCLUDE ./dirmac/heartbeat_replicat.mac

You will notice that each of these parameter files includes a macro with its process-specific replication information. The macro for each process provides the table statement and tokens needed to replicate the heartbeat information. Listing 7-16 shows the contents of these macros.

Listing 7-16. Macro Information for Heartbeat

Extract:

--Heartbeat Extract Macro
--Heartbeat Table
--Initial write - 12-21-2013 - BLC
table ggate.heartbeat,
tokens(
capgroup=@getenv('GGENVIRONMENT','GROUPNAME'),
captime=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
eddldeltastats=@getenv ('DELTASTATS', 'DDL'),
edmldeltastats=@getenv ('DELTASTATS', 'DML')
);

Pump (extract):

--Heartbeat Pump Macro
--Heartbeat Table
--Initial write - 12-21-2013 - BLC
table ggate.heartbeat,
tokens(
pmpgroup=@getenv('GGENVIRONMENT','GROUPNAME'),
pmptime=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP'))
);

Replicat:

--Heartbeat Replicat Macro
--Tar_Heartbeat Table
--Initial write - 12-21-2013 - BLC
MAP ggate.HEARTBEAT, TARGET ggate.GGS_HEARTBEAT_HISTORY,
KEYCOLS (DELGROUP),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT=@GETENV ('GGHEADER', 'COMMITTIMESTAMP'),
EXTRACT_NAME=@TOKEN ('CAPGROUP'),
CAPTIME=@TOKEN ('CAPTIME'),
PMPGROUP=@TOKEN ('PMPGROUP'),
PMPTIME=@TOKEN ('PMPTIME'),
DELGROUP=@GETENV ('GGENVIRONMENT', 'GROUPNAME'),
DELTIME=@DATE ('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV ('JULIANTIMESTAMP')),
EDDLDELTASTATS=@TOKEN ('EDDLDELTASTATS'),
EDMLDELTASTATS=@TOKEN ('EDMLDELTASTATS'),
RDDLDELTASTATS=@GETENV ('DELTASTATS', 'DDL'),
RDMLDELTASTATS=@GETENV ('DELTASTATS', 'DML')
);

With everything in place, you can now create and start the extract, pump, and replicat processes associated with the heartbeat. As the scheduler job begins to update the table on the source side, the information will be replicated and applied to the target side. All the information captured in the tokens will be inserted into the table on the target side, providing you with near real-time information on the network lag. Figure 7-3 provides a view into the table on the target side.

Figure 7-3. Output of the heartbeat process

Another nice feature of the heartbeat process is that if you use the macros as shown here, you can integrate the heartbeat into your existing GoldenGate processes. This gives you a way to calculate the lag on the processes that are replicating your data, as illustrated in Figure 7-3 as well.
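As a minimal sketch of that integration, the same include used by the dedicated heartbeat pump can be added to an existing data pump; the process name PMP1 and trail ./dirdat/r1 below are hypothetical:

INCLUDE ./dirmac/logon.mac
EXTRACT PMP1
#logon_settings()
RMTHOST 10.10.1.12, MGRPORT 15000, COMPRESS
RMTTRAIL ./dirdat/r1
INCLUDE ./dirmac/heartbeat_pump.mac
TABLE SOE.*;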

Integrated Heartbeat

The heartbeat process has become such a core component of many GoldenGate environments that Oracle decided to take the process a step further and make it a built-in feature in the latest release of Oracle GoldenGate 12c. Starting with GoldenGate 12.2.0.1.0, the heartbeat process is integrated into the core code base. In my opinion, this is both a good thing and a bad thing: the integrated heartbeat only measures lag through your existing processes; it does not let you monitor network lag independently the way the traditional heartbeat can. In the end, the integrated heartbeat is a great feature if you want to quickly set up and monitor lag on your existing replication processes.

Parameters for Integrated Heartbeat

Setting up the integrated heartbeat requires minimal configuration; however, like other aspects of GoldenGate, the setup requires adding parameters to the configuration. If you want the parameters to take effect at the global level, the changes need to be made in the GLOBALS file. If the changes are to be localized to a process, they can be added to the individual process parameter files. The parameters are the following:

  • HEARTBEATTABLE <table_name>: This parameter allows you to define the heartbeat table you want to use; the default name will be GG_HEARTBEAT.

  • ENABLE_HEARTBEAT_TABLE | DISABLE_HEARTBEAT_TABLE: These parameters are used to either start or stop the heartbeat-related items in the GoldenGate environment. (The parameters can be used in either GLOBALS, Extract, or Replicat to enable or disable the heartbeat.)

The HEARTBEATTABLE parameter tells GoldenGate where to build and look for the heartbeat table. This parameter is normally placed in the GLOBALS file so that when the ADD HEARTBEATTABLE command is run, GoldenGate builds the table with the name you specified. After the table is built, it is referenced based on the information in the GLOBALS file.

The next set of parameters used with the integrated heartbeat table is ENABLE_HEARTBEAT_TABLE and DISABLE_HEARTBEAT_TABLE. These parameters turn heartbeat processing on and off for the associated processes, and they can either be added to the GLOBALS file or localized in the process parameter files.
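A minimal GLOBALS sketch, assuming the GoldenGate schema is GGATE and a custom table name is wanted (both names are placeholders for your environment):

HEARTBEATTABLE GGATE.GG_HEARTBEAT
ENABLE_HEARTBEAT_TABLE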

Set Up Integrated Heartbeat

Once you have decided where the parameters for integrated heartbeat should be located, the next step is to set up the heartbeat process. To do this, you need to see what options you have for creating the heartbeat table. Figure 7-4 provides the Command Summary that is used with GGSCI. You will notice that there is a HEARTBEATTABLE object with the commands for ADD, DELETE, ALTER, and INFO listed.

Figure 7-4. GGSCI Command Summary

To create the heartbeat table, you simply need to run ADD HEARTBEATTABLE if your heartbeat table name is listed in the GLOBALS file. If not, you will have to provide a schema and table name with the ADD HEARTBEATTABLE command. Listing 7-17 provides an example of the command structure if needed.

Listing 7-17. ADD HEARTBEATTABLE Without GLOBALS
GGSCI> dblogin userid [ gg user ] password ******
GGSCI> ADD HEARTBEATTABLE [ schema ].[ table name ]

After you have run the command for creating the heartbeat table, you will notice that GoldenGate creates all the items needed for the heartbeat process to work. Figure 7-5 provides a view of all the objects created.

Figure 7-5. Objects created by ADD HEARTBEATTABLE

If you are having a hard time reading the image in Figure 7-5, here is a list of all the database objects that the command creates for you.

Tables:

<heartbeat_table>_SEED (default GG_HEARTBEAT_SEED)
<heartbeat_table> (default GG_HEARTBEAT)
<heartbeat_table>_HISTORY (default GG_HEARTBEAT_HISTORY)

Views:

GG_LAG
GG_LAG_HISTORY

Stored Procedures:

GG_UPDATE_HB_TAB
GG_PURGE_HB_TAB

Scheduler Jobs:

GG_UPDATE_HEARTBEATS
GG_PURGE_HEARTBEATS

As you can tell, unlike the manual setup of the traditional heartbeat, Oracle has integrated all the required steps and objects into a single command. This provides a very effective way to use the heartbeat process within the GoldenGate framework.
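Once the objects exist and the processes are running, lag can be checked from the database side by querying the views the command created; a hedged sketch (consult your release's view definitions for the exact column names):

SELECT * FROM gg_lag;
SELECT * FROM gg_lag_history;

GG_LAG shows the current lag, while GG_LAG_HISTORY can be used to examine lag trends over time.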

Functions

The last advanced feature covered here is the set of column-conversion functions that Oracle GoldenGate provides to test and transform data while it is in transit from the source system to the target system. These functions are executed on a column basis. Table 7-3 provides a summary of these functions for quick reference.

Table 7-3. Oracle GoldenGate Functions

Function           Category
CASE               Performing tests
EVAL               Performing tests
IF                 Performing tests
COLSTAT            Handling missing columns
COLTEST            Handling missing columns
DATE               Dates
DATEDIFF           Dates
DATENOW            Dates
COMPUTE            Arithmetic calculations
NUMBIN             Strings
NUMSTR             Strings
STRCAT             Strings
STRCMP             Strings
STREXT             Strings
STREQ              Strings
STRFIND            Strings
STRLEN             Strings
STRLTRIM           Strings
STRNCAT            Strings
STRNCMP            Strings
STRNUM             Strings
STRRTRIM           Strings
STRSUB             Strings
STRTRIM            Strings
STRUP              Strings
VALONEOF           Strings
AFTER              Others
BEFORE             Others
BEFOREAFTER        Others
BINARY             Others
BINTOHEX           Others
GETENV             Others
GETVAL             Others
HEXTOBIN           Others
HIGHVAL | LOWVAL   Others
RANGE              Others
TOKEN              Others

As you can tell, there are quite a few functions that can be used against data as it is being shipped. These functions fit into six different categories, and every category can help in identifying or shaping what is happening with the data. To illustrate how functions work, let's take a look at the IF function.

IF Function

The IF function belongs in the performing tests category and is helpful when you want to test data for conditions before the data arrive in the target database. This function operates just like a normal programming IF statement, returning one of two values based on a defined condition, as mapped in the COLMAP statement of the replicat. We take a closer look at this in a moment.

Note

The @IF function can be used with other conditional arguments to test one or more exceptions.

To understand how the @IF function works, the syntax is as follows:

@IF (condition, value_if_non-zero, value_if_zero)

To use the @IF function, the replicat that is applying transactions needs to evaluate the data and make changes as required based on the values of the function. To do this, the MAP clause of the replicat parameter file needs to be updated. Listing 7-18 provides an example of a replicat parameter file using the @IF function in the MAP statement.

Listing 7-18. Replicat Using @IF Function
--CHECKPARAMS
REPLICAT REP
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12cr1")
SETENV (ORACLE_SID="oragg")
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 50
WILDCARDRESOLVE IMMEDIATE
BATCHSQL
map SF.ORDERS, target ATL.ORDERS
COLMAP (USEDEFAULTS, PRICE = @IF(PRICE>100, PRICE, 1000));

In Listing 7-18, you are checking the incoming PRICE column to see whether the value is greater than 100. Because the comparison returns non-zero when true, the price is applied as replicated when it is greater than 100; otherwise, the price is set to 1,000.

Using conditional checking within Oracle GoldenGate, data can be evaluated and changed as it is replicated between environments. Doing these conditional checks during replication enables the administrator to quickly make changes to data as needed without spending a lot of time scrubbing data beforehand.
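The note earlier mentioned combining @IF with other conditional arguments. Here is a hedged sketch pairing @IF with @COLTEST to default a missing or invalid price, reusing the same hypothetical ORDERS mapping from Listing 7-18:

map SF.ORDERS, target ATL.ORDERS
COLMAP (USEDEFAULTS, PRICE = @IF(@COLTEST(PRICE, MISSING, INVALID), 1000, PRICE));

In this case, @COLTEST returns a non-zero value when PRICE is missing or invalid, so @IF substitutes 1,000; otherwise, the replicated value passes through unchanged.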

Summary

In this chapter, you have seen four types of advanced features that Oracle GoldenGate provides to help make configuration, running, transforming, and monitoring simpler. These features provide a rich framework for working with and around data that are being replicated. The next chapter builds on some of these features and how they relate to the security model of Oracle GoldenGate.
