© Bobby Curtis 2016

Bobby Curtis, Pro Oracle GoldenGate for the DBA, 10.1007/978-1-4842-1179-3_10

10. GoldenGate Utilities

Bobby Curtis

(1)Winston, Georgia, USA

Everything you have looked at in the previous chapters has been about how to configure and run Oracle GoldenGate. Beyond the basics of building a GoldenGate environment, the base product ships with a number of utilities for working with the data in that environment. These tools let you look up errors, examine trail file data, map table definitions, and validate your configuration. This chapter highlights some of the basic tools you will use to interact with your GoldenGate environment. By the end, you should have a basic understanding of what these tools look like and how to use them.

You might be wondering what utilities come with Oracle GoldenGate. Many of these tools can be found in the Oracle GoldenGate Home. The following is a list of utilities that are discussed in this chapter:

  • Oracle GoldenGate Error (OGGERR)

  • Logdump

  • Definition Generator (Defgen)

  • Check Parameters (checkprm)

  • ConvChk

  • Key Generation (keygen)

Oracle GoldenGate Error

Every now and then, Oracle GoldenGate throws an error. These errors are commonly known as Oracle GoldenGate errors (OGG errors). In the early days of Oracle GoldenGate, when these errors were thrown, users didn't know what they meant or even how to look them up. Fortunately, both before and after Oracle's acquisition of GoldenGate, a tool called oggerr has been available to look up these errors.

The oggerr utility resides in the Oracle GoldenGate Home directory and can be called from the command line. By simply calling the utility without any options, you will get a usage display (Listing 10-1).

Listing 10-1. oggerr Usage
$ ./oggerr
Usage: ./oggerr (-v | [OGG-code...])

Looking at the usage in Listing 10-1, you can see that the oggerr utility accepts a single flag (-v) followed by one or more OGG error codes. Listing 10-2 provides you with an example of how to look up an error.

Listing 10-2. oggerr Usage with Error Code
$ ./oggerr -v OGG-00001
00001, 00000, "Execution cannot continue - Program Terminating"
// *Cause:  This is a generic message that indicates a process failure.
// *Action: Look for other messages in the process report and error log that
//          provide more context for this failure. If you cannot determine and
//          resolve the problem, contact Oracle Support.

In the preceding example, you are asking for the verbose explanation of the error code provided. Using the -v flag, you are presented with detailed information about the error, including its cause and the recommended action. Although some errors might not give you enough detail when looked up this way, it is still a handy utility when you need a quick explanation of an error.
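When you are troubleshooting a failure, it can be useful to pull every distinct OGG error code out of the error log and look them all up in one pass. The following is a minimal shell sketch, assuming GNU grep and a standard ggserr.log in the Oracle GoldenGate Home; the paths are illustrative only.

$ cd $OGG_HOME
$ grep -o 'OGG-[0-9]\{5\}' ggserr.log | sort -u | xargs -n 1 ./oggerr -v

Each unique code is passed to oggerr exactly as in Listing 10-2, giving you the cause and action text for every error the log has recorded.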

Note

You can find a list of all the error codes for Oracle GoldenGate in the reference guides at docs.oracle.com ( http://docs.oracle.com/goldengate/c1221/gg-winux/GMESG/toc.htm ).

Logdump

Of all the utilities discussed in this chapter, logdump is the one you will use most often. Logdump lets you examine the trail files that are generated when data is captured on the source or before the data are applied on the target. It is also the largest and most capable of the tools Oracle provides with GoldenGate.

The logdump utility is located directly in the root of the Oracle GoldenGate Home directory, which makes it simple to find and use when you need to access trail files. Your trail files should be somewhere you can reach easily, because the logdump utility requires you to open the trail file from within the utility. To start using logdump, simply navigate to the GoldenGate Home directory and execute the utility. Listing 10-3 illustrates this process.

Listing 10-3. Executing Logdump
oracle@fred:/home/oracle >cd $OGG_HOME
oracle@fred:/opt/app/oracle/product/12.2.0.0/oggcore_1 >./logdump


Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2
Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.
Logdump 4 >

Once you have opened the logdump utility, there are a few things that need to be done before you can start reading trail files. There are quite a few options that can be used within logdump. To find out what these options are, you can type help at the logdump prompt. This generates a list of commands, shown in Listing 10-4, that can be used within logdump.

Listing 10-4. Logdump Help Output
Logdump 4 >help

FC [<num> | <string>]     - Edit previous command
HISTORY                   - List previous commands
OPEN | FROM  <filename>   - Open a Log file
RECORD | REC              - Display audit record
NEXT [ <count> ]          - Display next data record
SKIP [ <count> ] [FILTER] - Skip down <count> records
     FILTER               - Apply filter during skip
COUNT                     - Count the records in the file
      [START[time] <timestr>,]
      [END[time] <timestr>,]
      [INT[erval] <minutes>,]
      [LOG[trail] <wildcard-template>,]
      [FILE <wildcard-template>,]
      [DETAIL ]
       <timestr> format is
         [[yy]yy-mm-dd] [hh[:mm][:ss]]
POSITION [ <rba> | FIRST | LAST | EOF ] - Set position in file
         REVerse | FORward              - Set read direction
RECLEN [ <size> ]  - Sets max output length
EXIT | QUIT        - Exit the program
FILES | FI | DIR   - Display filenames
ENV                - Show current settings
VOLUME | VOL | V   - Change default volume
DEBUG              - Enter the debugger
GHDR  ON | OFF     - Toggle GHDR display
DETAIL ON | OFF | DATA - Toggle detailed data display
DECRYPT OFF | ON [KEYNAME key_name]. - Decrypt data encrypted with GoldenGate trail encryption.
RECLEN <nnn>        - Set data display length
SCANFORHEADER (SFH)  [PREV]  - Search for the start of a header
SCANFORTYPE   (SFT) - Find the next record of <TYPE>
      <typename> | <typenumber>
      [,<filename-template>]
SCANFORRBA    (SFR) - Find the next record with <SYSKEY>
      <syskey>                - syskey = -1 scans for next record
      ,<filename-template>
SCANFORTIME  (SFTS) - Find the next record with timestamp
      <date-time string>
      [,<filename-template>]
         <date-time string> format is
           [[yy]yy-mm-dd] [hh[:mm][:ss]]
SCANFORENDTRANS  (SFET) - Find the end of the current transaction
SCANFORNEXTTRANS (SFNT) - Find start of the next transaction
SCANFORMETADATA  (SFMD) - Find a metadata record
      [DDR] | [TDR]
      [NEXT] | [<Index>]
SHOW <option>       - Display internal information
      [OPEN]        - list open files
      [TIME]        - print current time in various formats
      [ENV]         - show current environment
      [RECTYPE]     - show list of record types
      [FILTER]      - show active filter items
BIO  <option>       - Set LargeBlock I/O info
      [ON]          - Enable LargeBlock I/O (default)
      [OFF]         - Disable LargeBlock I/O
      [BLOCK <nnnn>]- Set LargeBlock I/O size
TIMEOFFSET <option> - Set the time offset from GMT
      [LOCAL]            - Use local time
      [GMT]              - Use GMT time
      [GMT +/- hh[:mm]]  - Offset +/- from GMT
FILTER SHOW
FILTER ENABLE | ON   - Enable filtering
FILTER DISABLE | OFF - Disable filtering
FILTER CLEAR [ <filterid> | <ALL> ]
FILTER MATCH     ANY | ALL
FILTER [INClude | EXCLude] <filter options>
   <filter options> are
       RECTYPE  <type number | type name>
       STRING [BOTH] /<text>/ [<column range>]
       HEX      <hex string>  [<column range>]
       TRANSID  <TMF transaction identifier>
       FILENAME <filename template>
       PROCESS  <processname template>
       INT16    <16-bit integer>
       INT32    <32-bit integer>
       INT64    <64-bit integer>
       STARTTIME <date-time string>
       ENDTIME   <date-time string>
       SYSKEY   [<comparison>] <32/64-bit syskey>
       SYSKEYLEN [<comparison>] [<value>]
       TRANSIND [<comparison>] <nn>
       UNDOFLAG [<comparison>] <nn>
       RECLEN   [<comparison>] <nn>
       AUDITRBA [<comparison>] <nnnnnnnn>
       ANSINAME <ansi table name>
       GGSTOKEN <tokenname> [<comparison>] [<tokenvalue>]
       USERTOKEN <tokenname> [<comparison>] [<tokenvalue>]
       CSN | LogCSN [<comparison>] [<value>]
   <column range>
       <start column>:<end column>, ie  0:231
   <comparison>
        =, ==, !=, <>, <, >, <=, >=  EQ, GT, LE, GE, LT, NE
X <program> [string]  - Execute <program>
TRANSHIST nnnn        - Set size of transaction history
TRANSRECLIMIT nnnn    - Set low record count threshold
TRANSBYTELIMIT nnnn   - Set low byte count threshold
LOG {STOP} | { [TO] <filename> } - Write a session log
BEGIN <date-time>     - Set next read position using a timestamp
SAVEFILECOMMENT on | OFF  - Toggle comment records in a savefile
SAVE <savefilename> [!] <options>  - Write data to a savefile
   <options> are
   nnn RECORDS | nnn BYTES
   [NOCOMMENT]  - Suppress the Comment header/trailer recs, Default
   [COMMENT]    - Insert Comment header/trailer recs
   [TRUNCATE ]  - purgedata an existing savefile
   [EXT ( <pri>, <sec> [,<max>])] - Savefile Extent sizes on NSK
   [MEGabytes <nnnn>]             - For extent size calculation
   [TRANSIND <nnn>]               - Set the transind field
   [COMMITTS <nnn>]               - Set the committs field
USERTOKEN     on  | OFF | detail  - Show user token info
HEADERTOKEN   on  | OFF | detail  - Show header token info
GGSTOKEN      on  | OFF | detail  - Show GGS token info
FILEHEADER    on  | OFF | detail  - Display file header contents
ASCIIHEADER   ON  | off           - Toggle header charset
EBCDICHEADER  on  | OFF           - Toggle header charset
ASCIIDATA     ON  | off           - Toggle user data charset
EBCDICDATA    on  | OFF           - Toggle user data charset
ASCIIDUMP     ON  | off           - Toggle charset for hex/ascii display
EBCDICDUMP    on  | OFF           - Toggle charset for hex/ascii display
PRINTMXCOLUMNINFO  on | OFF       - Toggle SQL/MX columninfo display
TMFBEFOREIMAGE     on | OFF       - Toggle display of TMF before images
FLOAT  <value>                    - Interpret a floating point number
       [FORMAT <specifier>]       - sprintf format default %f

As you can tell, there are quite a few commands, spanning multiple screens of help output, that can be used within the logdump utility. Each command gives you a different way to access and view the data contained in the trail files. Now that you have seen the options available with logdump, let's take a look at accessing some data within a trail file.
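As a preview of how a handful of these commands combine in a typical session, the following sketch opens a trail, turns on the header and data displays, filters on a single table, and steps to the first matching record. The trail name and table name are illustrative only; opening trail files is covered in the next section.

Logdump 5 >open ./dirdat/rt000000000
Logdump 6 >ghdr on
Logdump 7 >detail data
Logdump 8 >filter include filename SOE.LOGON
Logdump 9 >n

The filter stays active until you disable or clear it, so every subsequent next command returns only records for the filtered table.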

Accessing Trail File Data

To access a trail file using the logdump utility, you have to explicitly tell logdump what trail file you wish to open. This is done by issuing the open command with the trail file you want to look at. Listing 10-5 illustrates this process.

Listing 10-5. Open a Trail File
Logdump 5 >open ./dirdat/rt000000000
Current LogTrail is /opt/app/oracle/product/12.2.0.0/oggcore_1/dirdat/rt000000000

Note

You can open a range of trail files by using wildcards.

Now that you have an open trail file, you want to look at some data. Before you can do that, you have to set up the environment to display what you want to see. To do this, you need to refer back to the list of commands in Listing 10-4. Although there are several commands to choose from, there are just a few common ones that will get you started. These common logdump commands are highlighted in Listing 10-6.

Listing 10-6. Common Logdump Commands
GHDR  ON | OFF     - Toggle GHDR display
DETAIL ON | OFF | DATA - Toggle detailed data display
USERTOKEN     on  | OFF | detail  - Show user token info
HEADERTOKEN   on  | OFF | detail  - Show header token info
GGSTOKEN      on  | OFF | detail  - Show GGS token info

You might be asking yourself how to use these commands within logdump. It is quite simple: enter the commands at the logdump prompt to enable or disable the display options you want. Use of these common commands can be seen in Listing 10-7.

Listing 10-7. Common Logdump Commands
Logdump 6 >ghdr on
Logdump 7 >detail on
Logdump 8 >detail data
Logdump 9 >ggstoken on
Logdump 10 >usertoken on

With the commands in Listing 10-7 executed, you now have a basic setup for reviewing a trail file. The next thing you need to do is start looking at the trail file and reviewing the data that it contains. By default you are positioned at the top of the trail file; that is, relative byte address (RBA) 0. All you need to do at this point is use the next (n) command to move forward in the file. Listing 10-8 shows the contents of the trail file starting at RBA 0 after you issue the next command.

Listing 10-8. Trail File  Header Information
Logdump 11 >n
2016/03/14 19:34:21.383.380 FileHeader           Len  1497 RBA 0
Name: *FileHeader*
 3000 0366 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..f0...GG..TL..1...
 0005 3200 0004 2000 0000 3300 0008 02f2 543e 2d1c | ..2... ...3.....T>-.
 82d4 3400 0043 0041 7572 693a 6672 6564 3a61 636d | ..4..C.Auri:fred:acm
 653a 636f 6d3a 3a75 3031 3a61 7070 3a6f 7261 636c | e:com::u01:app:oracl
 653a 7072 6f64 7563 743a 3132 2e32 2e30 3a6f 6767 | e:product:12.2.0:ogg
 636f 7265 5f31 3a50 4747 3132 4335 0000 4735 0000 | core_1:PGG12C5..G5..
 4300 4175 7269 3a66 7265 643a 6163 6d65 3a63 6f6d | C.Auri:fred:acme:com

All you see when you start at RBA 0 is the header of the trail file. The header tells you where the trail file came from, the version of GoldenGate being used, and what data pump process created the trail file. To get to an actual record in the trail file, you need to move forward in the trail file again using the next command.

After moving forward in the trail file a few times, you should come upon the first record that is stored in the trail file.

Note

Starting in Oracle GoldenGate 12c (12.2.0.1.0) you will begin seeing table metadata in the trail files.

As part of the record, you will notice metadata for the table that was captured. This is new in Oracle GoldenGate 12c and helps with the mapping of data if columns are in different orders. Listing 10-9 shows the transaction header that accompanies the metadata record for this table. Take a moment to review the header information.

Listing 10-9. Trail File Output (1)
Logdump 14 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x00)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :   281  (x0119)   IO Time    : 2016/03/14 20:27:28.773.265
IOType     :   170  (xaa)     OrigNode   :     2  (x02)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
DDR/TDR Idx:   (001, 001)     AuditPos   : 25851148
Continued  :     N  (x00)     RecCount   :     1  (x01)


2016/03/14 20:27:28.773.265 Metadata             Len 281 RBA 1707
Name: SOE.LOGON
*
 1)Name              2)Data Type          3)External Length   4)Fetch Offset      5)Scale
 6)Level             7)Null               8)Bump if Odd       9)Internal Length  10)Binary Length
11)Table Length     12)Most Sig DT       13)Least Sig DT     14)High Precision   15)Low Precision
16)Elementary Item  17)Occurs            18)Key Column       19)Sub DataType     20)Native DataType
21)Character Set    22)Character Length  23)LOB Type         24)Partial Type
*
TDR version: 1
Definition for table SOE.LOGON
Record Length: 134
Columns: 3
LOGON_ID      64     50        0  0  0 1 0     50     50     50 0 0 0 0 1    0 1   2    2       -1      0 0 0
CUSTOMER_ID   64     50       56  0  0 1 0     50     50     50 0 0 0 0 1    0 1   2    2       -1      0 0 0
LOGON_DATE   192     19      112  0  0 1 0     19     19     19 0 5 0 0 1    0 1   0   12       -1      0 0 0
End of definition

This example of a trail file header looks like Greek to the untrained eye. The information contained in the header is for the SOE.LOGON table. The cryptic header section provides information about the transaction that is coming through, such as the record length, whether it is a before or after image, the date the transaction occurred, and the number of records contained within the transaction.

The next section of the output provides details on the metadata for the SOE.LOGON table. The row of numbered labels acts as a key for deciphering the column attributes listed just below it. Finally, the definition section lists the record length, the number of columns, and the names of the columns in the record being sent. This information helps with mapping resolution between the source and target metadata.

The first section of the output showed how the metadata are passed in the trail file. The next record is the transaction itself. Just like the metadata record, the transaction record begins with a header that carries a great deal of information about the transaction. Listing 10-10 provides this output for review.

Listing 10-10. Trail Output (2)
Logdump 15 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x0c)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    54  (x0036)   IO Time    : 2016/03/14 20:27:20.008.002
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :       4814       AuditPos   : 25851148
Continued  :     N  (x00)     RecCount   :     1  (x01)


2016/03/14 20:27:20.008.002 Insert               Len    54 RBA 2048
Name: SOE.LOGON  (TDR Index: 1)
After  Image:                                             Partition 12   G  s
 0000 000b 0000 0007 3239 3832 3938 3600 0100 0a00 | ........2982986.....
 0000 0637 3034 3737 3000 0200 1500 0032 3031 362d | ...704770......2016-
 3033 2d31 343a 3230 3a32 373a 3139                | 03-14:20:27:19
Column     0 (x0000), Len    11 (x000b)
 0000 0007 3239 3832 3938 36                       | ....2982986
Column     1 (x0001), Len    10 (x000a)
 0000 0006 3730 3437 3730                          | ....704770
Column     2 (x0002), Len    21 (x0015)
 0000 3230 3136 2d30 332d 3134 3a32 303a 3237 3a31 | ..2016-03-14:20:27:1
 39                                                | 9


GGS tokens:
 5200 0014 4141 415a 324a 4141 4841 4141 674d 2b41 | R...AAAZ2JAAHAAAgM+A
 4345 0001 7401 0000 4c00 0008 3235 3835 3835 3430 | CE..t...L...25858540
 3600 000a 322e 3235 2e32 3032 3137 6901 0002 0001 | 6...2.25.20217i.....

The header information provides the same details as the metadata record: the record length, the before/after image indicator, and the date the transaction occurred. Just below the header you will see a line with a date and timestamp. This line identifies the type of operation, the length of the record, and the relative byte address. In this case, it is an insert with a length of 54 bytes located at RBA 2048.

Note

The RBA can be used to calculate a rough estimate of size between transactions. Additionally, you can use the count command to get a sense of how many transactions are being applied in a replicat environment.
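For example, a quick sketch of using count against the open trail might look like the following; the time window shown is illustrative only.

Logdump 20 >count detail
Logdump 21 >count starttime 2016-03-14 20:00, endtime 2016-03-14 21:00

The first command summarizes record counts per table for the open trail, and the second restricts the count to records whose timestamps fall inside the given window.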

Immediately after the header comes the transaction-specific information. This section shows the record data in both hexadecimal and ASCII form; the hex dump appears to the left of the ASCII representation. The output also breaks the record down by column; however, it identifies columns only by number, not by name. This is where the metadata record is helpful: by marrying the two together, you can quickly see which columns are being mapped for the transaction.

This was a quick overview of running logdump and reviewing a trail file. Spending some time browsing trail files is the best way to get a feel for what logdump can do for you, and logdump is one of the most powerful tools in your GoldenGate toolkit when problems arise.

Definition Generator

Starting with Oracle GoldenGate 12c (12.2.0.1.0), Oracle provides the metadata for tables in the trail files to make mapping of data between source and target a bit easier. Even so, the flexibility of Oracle GoldenGate allows different versions of GoldenGate to replicate to each other. This means you can have Oracle GoldenGate 11g replicating to an Oracle GoldenGate 12c environment and vice versa. So how would you successfully map data in such an architecture? If the table definitions are different or you are performing replication in a heterogeneous environment, you will need to use a definition file.

Definition files are generated by an Oracle GoldenGate utility called defgen, short for definition generator, which comes bundled with the core product. The purpose of this utility is to provide a means of mapping the table structure while the data are actively replicated. Defgen creates definition files that describe the table metadata structure on either the source or the target side of the replication process. This section looks at how to use the defgen utility to map the source table to the target table.

When replicating data, the assumption is that the metadata matches on both sides (source and target) of the replication environment. In some situations, this assumption is not accurate and leads to processes abending due to mismatched structures. When this happens, the mapping between source and target metadata needs to be defined explicitly to ensure successful replication. This is where the defgen utility ensures successful integration of data on the target side.

Note

The defgen utility is installed in the Oracle GoldenGate Home directory.

Configure Defgen

Like anything in Oracle GoldenGate, before defgen can be used it needs to be configured. To configure a source-side mapping with the defgen utility, we need to create a parameter file. The following steps will create a parameter file called defgen.prm.

  1. From GGSCI, edit a parameter file called defgen.

    edit params defgen
  2. Edit the parameter file with the parameters needed.

    --Your parameter file will vary, this is an example only--
    DEFSFILE ./dirdef/defgen.def
    USERID ggate, PASSWORD ggate
    TABLE sfi.clients;
  3. Save and close the file.

  4. Exit GGSCI.

After saving the parameter file, you will be ready to run the defgen utility to generate the definition file needed to ensure mapping of the source columns to the columns on the target side.

Note

Parameters that can be used in the parameter file for generating a definition file can be found in Table 10-1.

Table 10-1. Parameters for Defgen

CHARSET character_set
Use this parameter to specify a character set that defgen will use to read the parameter file. By default, the character set of the parameter file is that of the local operating system. If used, CHARSET must be the first line of the parameter file.

DEFSFILE file_name [APPEND | PURGE] [CHARSET character_set] [FORMAT RELEASE major.minor]
Specifies the relative or fully qualified name of the data definitions file that is to be the output of defgen.

[{SOURCEDB | TARGETDB} datasource] {USERIDALIAS alias | USERID user, PASSWORD password [encryption_options]}
Specifies the database connection. The datasource can be a DSN (Data Source Name), a SQL/MX catalog, or a container of an Oracle container database (CDB). If connecting to an Oracle CDB, connect to the root container as the common user if you need to generate definitions for objects in more than one container. Otherwise, you can connect to a specific container to generate definitions only for that container.

NOCATALOG
Removes the container name (Oracle) or the catalog name (SQL/MX) from table names before their definitions are written to the definitions file. Use this parameter if the definitions file is to be used for mapping to a database that only supports two-part names (owner.object).

TABLE [container. | catalog.] owner.table [, {DEF | TARGETDEF} template];
Specifies the fully qualified name of a table or tables for which definitions will be generated and, optionally, uses the metadata of the table as a basis for a definitions template.

Table 10-1 summarizes the parameters you can use when building the defgen parameter file that generates a definition file.
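For reference, a slightly fuller defgen parameter file might combine several of these options. This is only a sketch: the definitions file name, credential alias, and table names are hypothetical, and FORMAT RELEASE is shown only to illustrate writing a file readable by an older GoldenGate release.

DEFSFILE ./dirdef/source.def, PURGE, FORMAT RELEASE 11.2
USERIDALIAS gguser
TABLE sfi.clients;
TABLE sfi.orders, DEF orders_template;

The DEF option on the second TABLE entry creates a named definitions template that a replicat MAP statement can reference later.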

Running Defgen

After you have built and saved your parameter file, the next step is to run the defgen utility to create the definition file. To run defgen, make sure that you are in the Oracle GoldenGate Home directory. The utility is run from the GGSCI prompt using the parameters that are defined in Table 10-2.

Table 10-2. Parameters for Defgen Run

PARAMFILE
Relative or full path name of the defgen parameter file.

REPORTFILE
Sends output to the screen and to the designated report file.

[ NOEXTATTR ]
Used to support backward compatibility with Oracle GoldenGate versions older than Release 11.2.1, which do not support character sets other than ASCII, case-sensitive object names, or object names quoted with spaces.

To run the defgen utility that will create the definition file needed, the command is fairly straightforward. Listing 10-11 provides an example of running the command using relative paths.

Listing 10-11. Running Defgen Utility
$ defgen paramfile ./dirprm/defgen.prm reportfile ./dirrpt/defgen.rpt

After the definition file has been generated, it needs to be copied over to the target system and specified in the replicat parameter file. To copy the definition file to the target system, any FTP (File Transfer Protocol) or secure copy (SCP) utility can be used as long as the transfer is done in binary mode. This ensures that the file is copied correctly between platforms.

Note

The definition file should be copied in binary mode to avoid any unexpected characters being placed in the file by the FTP utility.
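For example, here is a minimal sketch of copying the file with scp, which always transfers in binary mode; the target host name and path are illustrative only.

$ scp ./dirdef/defgen.def oracle@target-host:/opt/app/oracle/product/12.2.0.0/oggcore_1/dirdef/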

Once the definition file is copied to the target system, the file is associated with the replicat parameter file using the SOURCEDEFS parameter. Listing 10-12 displays how a replicat parameter file would look with the definition file defined.

Listing 10-12. Replicat Parameter File with Definition File Defined
-- Example replicat parameter file. Your requirements may vary.--
REPLICAT REPI
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12cr1")
SETENV (ORACLE_SID="oragg")
USERID ggate, PASSWORD <password>
DISCARDFILE ./dirrpt/REPI.dsc, append, megabytes 50
SOURCEDEFS ./dirdef/defgen.def
map SFI.CLIENTS, target ATLI.CLIENTS;

After configuring the replicat to use the definition file, data can be mapped between source and target without errors. Notice in Listing 10-12 that the SOURCEDEFS parameter appears before the MAP statement and points the replicat at the definition file; MAP also accepts a DEF option that references a definitions template created by defgen. Using a definition file allows the table metadata to differ between the source and target systems, which makes the process of integrating data between different systems easier and more flexible.
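If a definitions template was created in the defgen parameter file (for example, TABLE sfi.clients, DEF clients_template;), a sketch of referencing it from the replicat might look like the following; the template name here is hypothetical.

SOURCEDEFS ./dirdef/defgen.def
MAP SFI.CLIENTS, TARGET ATLI.CLIENTS, DEF clients_template;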

Check Parameters

Introduced in Oracle GoldenGate 12c (12.2.0.1.0) is a new way of validating parameter files: checkprm. The checkprm command is used to assess the validity of the specified parameter file. This utility can provide a simple PASS/FAIL or, optionally, details about how the value of each parameter is stored and interpreted.

The checkprm command is a command-line tool that runs out of the Oracle GoldenGate Home directory. When a parameter file is the only input provided, checkprm attempts to automatically detect whether it belongs to an extract or a replicat, along with the platform and database of the installation. The syntax of the checkprm utility is provided in Listing 10-13.

Listing 10-13. Checkprm Syntax
checkprm <param file> [--COMPONENT(-C) <component name>]
           [--MODE(-M) <mode name>] [--PLATFORM(-P) <platform name>]
           [--DATABASE(-D) <database name>] [--VERBOSE(-V)]

As you can see, a handful of optional flags can be passed along with the parameter file, and checkprm validates the file against whatever context you provide.

Note

Additional information on checkprm can be found in the Oracle Docs at http://docs.oracle.com/goldengate/c1221/gg-winux/GWURF/non_ggsci_commands001.htm#GWURF1245 .

Using Checkprm

Let’s take a look at a few examples of using the checkprm utility against a replicat parameter file. These examples provide you with a quick understanding of how this utility can be used. In Listing 10-14, there are no parameters passed and the parameter file is strictly being tested for validity.

Listing 10-14. No Parameters Passed
oracle@fred:/opt/app/oracle/product/12.2.0.0/oggcore_1 >./checkprm ./dirprm/rgg12c.prm

2016-03-16 21:15:46 INFO OGG-02095 Successfully set environment variable ORACLE_HOME=/opt/app/oracle/product/12.1.0.2/dbhome_1.

2016-03-16 21:15:46 INFO OGG-02095 Successfully set environment variable ORACLE_SID=rmt12c.

2016-03-16 21:15:46 INFO OGG-10139 Parameter file ./dirprm/rgg12c.prm: Validity check: PASS.

Runtime parameter validation is not reflected in the above check.

In reviewing the output, you will notice that the utility successfully set the Oracle Home, set the Oracle SID, and verified the parameter file with a PASS. This means that the parameter file validated successfully and can be used within your environment.

Now, let's take a more verbose look at the same parameter file. You know that the parameter file passes validation, but suppose you want to see what is contained in the file and whether every parameter is syntactically valid. To do this, you pass the verbose option (-V). Listing 10-15 provides a sample of this output.

Listing 10-15. Verbose Checking with Checkprm
oracle@fred:/opt/app/oracle/product/12.2.0.0/oggcore_1 >./checkprm ./dirprm/rgg12c.prm -V

2016-03-16 21:21:46 INFO OGG-02095 Successfully set environment variable ORACLE_HOME=/opt/app/oracle/product/12.1.0.2/dbhome_1.

2016-03-16 21:21:46 INFO OGG-02095 Successfully set environment variable ORACLE_SID=rmt12c.

Parameter file validation context:

component(s): REPLICAT
mode(s)     : N/A
platform(s) : Linux
database(s) : Oracle 12c


GLOBALS

enablemonitoring                     : <enabled>
checkpointtable                      : ggate.checkpoint
ggschema                             : ggate


./dirprm/rgg12c.prm

replicat                             : RGG12C
setenv                               : (ORACLE_HOME="/opt/app/oracle/product/12.1.0.2/dbhome_1")
setenv                               : (ORACLE_SID="rmt12c")
userid                               : ggate
  password                           : *******
reperror                             : <enabled>
  spec_option                        : default
    discard                          : <enabled>
assumetargetdefs                     : <enabled>
map                                  : SOE.*
  target                             : SOE.*


2016-03-16 21:21:46 INFO OGG-10139 Parameter file ./dirprm/rgg12c.prm:  Validity check: PASS.
Runtime parameter validation is not reflected in the above check.

Notice that in the verbose output, you can see what options have been enabled, the name of the process, what environment is set, and what the mappings are tied to. Just like before, the parameter file passes and can be used with your environment.

The last example shows how you can check whether the parameter file and its associated parameters can be used with a different mode of a process. The parameter file used in the previous two examples came from a classic replicat process. What if you wanted to take that same parameter file and convert it to an integrated replicat? You can use the mode (-M) option of checkprm to validate the parameter file before attempting the change. Listing 10-16 shows how you can validate the parameter file for such a move.

Listing 10-16. Validating for Integrated Processes
oracle@fred:/opt/app/oracle/product/12.2.0.0/oggcore_1 >./checkprm ./dirprm/rgg12c.prm -M Integrated Replicat

2016-03-16 21:24:17 INFO OGG-02095 Successfully set environment variable ORACLE_HOME=/opt/app/oracle/product/12.1.0.2/dbhome_1.

2016-03-16 21:24:17 INFO OGG-02095 Successfully set environment variable ORACLE_SID=rmt12c.

2016-03-16 21:24:17 INFO OGG-10139 Parameter file ./dirprm/rgg12c.prm: Validity check: PASS.

Runtime parameter validation is not reflected in the above check.

From looking at the output, you can see that the current parameter file can be moved to an integrated replicat process without any issues. This is designed to help make your decision-making process easier and to check legacy environments before converting to newer Oracle GoldenGate standards.
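You can also be explicit instead of relying on auto-detection. The following sketch spells out the component, mode, platform, and database for a hypothetical extract parameter file; the file name and values shown are illustrative only.

$ ./checkprm ./dirprm/egg12c.prm -C EXTRACT -M integrated -P Linux -D "Oracle 12c" -V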

ConvChk

In Oracle GoldenGate environments earlier than version 12.2.0.1.0, your trail files were limited to two-letter, six-digit names. Starting with Oracle GoldenGate 12c (12.2.0.1.0), the default name of a trail file is two letters with nine digits. This allows for long-running environments to have more distinct trail file names. If you are upgrading your Oracle GoldenGate environment to 12.2.0.1.0 or later, you will want to upgrade your trail file names as well. This is where the convchk utility comes in handy.

The convchk utility is used to upgrade the checkpoint record for a given trail from the six-digit sequence number format to the nine-digit format (or to downgrade it back to six digits). The syntax for this utility is provided in Listing 10-17.

Listing 10-17. Convchk Syntax
convchk extract trail [ seqlen_9d | seqlen_6d ]
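For example, assuming an extract group named EXT12C writing to a local trail ./dirdat/rt (both names are hypothetical), converting its checkpoint record to the nine-digit format would look something like this:

$ ./convchk EXT12C ./dirdat/rt seqlen_9d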

Key Generator

Oracle GoldenGate uses a user ID and password to interact with the database, and these credentials are stored in the parameter files in clear text. Does anyone else see a security violation here? If you are like 90 percent of the other administrators out there, you have a need to ensure that all passwords are secured and not passed in clear text. Oracle provides you a way to do this with the keygen utility.

The keygen utility is used to generate one or more encryption keys to be used with Oracle GoldenGate in the ENCKEYS files. The syntax for the keygen utility is illustrated in Listing 10-18.

Listing 10-18. Keygen Syntax
keygen key_length n

When using keygen without any options, the utility runs in an interactive mode. The key_length option is the length of the encryption key, up to 256 bits (32 bytes), and n is the number of keys to generate to the screen. Listing 10-19 displays the output of running the keygen utility.

Listing 10-19. Keygen Output
oracle@fred:/opt/app/oracle/product/12.2.0.0/oggcore_1 >./keygen 128 5
0xB2BB5879131B52726615FD18668C116C

0x3DE1362B38DCB711E6A77F26F4AF924C

0xC706155D5C9D1D31663A023482D3132D

0x522CF30E805E8350E6CC844110F7940D

0xDC51D140A41FE96F665F074F9D1A166E

Now all you need to do is copy each of the generated keys into a file named ENCKEYS, giving each key a logical name. The ENCKEYS file is then shared between the Oracle GoldenGate environments that need it, and the key names can be referenced in the parameter files instead of a clear-text password.
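As a sketch of how this fits together, the ENCKEYS file pairs a logical name with each key; the key names here are illustrative, and the values would come from your own keygen run.

superkey   0xB2BB5879131B52726615FD18668C116C
secretkey  0x3DE1362B38DCB711E6A77F26F4AF924C

With the ENCKEYS file in place on both systems, you can encrypt the database password in GGSCI with ENCRYPT PASSWORD ggate AES128 ENCRYPTKEY superkey and place the resulting encrypted string, along with ENCRYPTKEY superkey, in the parameter file instead of the clear-text password.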

Summary

This chapter looked at a few of the utilities that come bundled with Oracle GoldenGate. Some of them have been part of GoldenGate from day one; others are new as of Oracle GoldenGate 12c (12.2.0.1.0). All of these utilities help you configure and manage your environments in the most efficient way possible.

At this point in the book, you have covered a wide range of items in the Oracle GoldenGate application from the command line. The next chapter looks at the next generation of building Oracle GoldenGate environments with the new Studio tool.
