Downstream Capture — avoid white water at the source

In Chapter 1, we briefly mentioned Downstream Capture (DSC) and how it can be used to offload the Capture and Propagation processes from a Production/Source server. In this section, we will take a deeper dive into DSC and cover:

  • What is DSC?
  • When to use DSC?
  • How to set up DSC?

By working through these questions with the example provided, you should have the information needed to understand and properly use DSC.

DSC is a configuration of Streams in which the Capture and Propagation processes run on a different server from the one where the transactions are processed. Recall the image of DSC first seen in Chapter 1.

[Image: DSC configuration: Source Production database, Worker database running Capture and Propagate, and Target database running Apply]

All transactions are run against the Source Production database. The Worker database receives redo logs from the Source Production database; the normal Streams processes (Capture, Propagate) then happen on the Worker database (flowing from left to right in the image), and the Apply process runs at the Target database. Notice that we wrote that the Worker database receives the redo logs.

The components of DSC are a mixture of concepts from Oracle Data Guard and Oracle Streams. We already know about the Oracle Streams processes. From Oracle Data Guard, DSC borrows the redo transport service, which is used to move logs from a Source Production database to a Worker database. Also note that a Worker database can be the end Target database, as seen in the following image:

[Image: DSC configuration in which the Worker database is also the Target database, running both Capture and Apply]

From the previous image, we can see that the Target database has both the Capture and Apply processes. As all the processing is on the Target database, there is no need for a Propagate process. To get a better understanding, let us start with one of the differences between DSC and a "regular" Streams configuration: the parameter LOG_ARCHIVE_DEST_n.

[Image: LOG_ARCHIVE_DEST_n configured on both the source and target sides]

LOG_ARCHIVE_DEST_n is a parameter that is used on both the source and target sides. Setting this parameter controls how the redo logs are sent and received on each side. We will configure the parameter LOG_ARCHIVE_DEST_2 on both source and target as part of the example used in this section.

[Image: DSC configuration showing the local Online redo logs on each database and the Standby redo logs on the Target database]

Evolving the diagram, we now show the local Online redo logs for each of the respective databases. Also, Standby redo logs have been added on the Target database side. The Oracle recommendation for Standby redo logs is to have at least one more group than the number of online redo log file groups on the source. Redo data crosses the network as a result of configuring the parameter LOG_ARCHIVE_DEST_2 on both the source and target sides, and it is written into the Standby redo logs, which the properly configured Capture process then scans.
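As a quick check of that recommendation, you can count the online redo log groups on the source and add one. This is a minimal sketch for the single-instance case used in this section; V$LOG returns one row per online redo log group.

    -- run on the source (STRM1); one row in V$LOG per online redo log group
    SELECT COUNT(*) + 1 AS MIN_STANDBY_GROUPS FROM V$LOG;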

Up to this point, we have covered a little of the what and how of DSC. Shifting focus to the when of DSC, we answer the question: when should you use DSC? DSC is most often used to "offload" the Capture and Propagation processes from the host machine of your Source Production database.

[Image: DSC scenario: STRM1 (Source Production) ships redo to STRM2, where the Capture and Apply processes run]

Now back to the how. We are going to take the previous image and fully work through the scenario from ground zero. The scenario and assumptions are:

  • We have a Production Level OLTP database, STRM1, and due to policy and performance demands, only required and necessary processing can be done on the host server.
  • A replicated database is needed where activities that are not "required and necessary" can be done. The replicated database will be STRM2.
  • Network and Storage resources have been properly scoped and allocated.
  • Both databases are configured in archive log mode.
  • The necessary supplemental logging and pre-work for Streams configuration has been completed and this includes setting up a Streams Administrator. We will use STRM_ADMIN as the Streams Administrator for this chapter. Please refer to Chapter 3 for details on the pre-work for Streams.
  • The schema in this example is LEARNING.

We will go over the setup of DSC in the following steps:

  1. Setting up the redo log transport.
  2. Configuring the Streams part of DSC.

Setting up the redo log transport

  1. The SYS password needs to be the same on the source and target. Run the following command on both source and target to make sure that the SYS password is the same. Replace <password> with a password that meets your security policy requirements.
    alter user sys identified by <password>;
    
  2. A database link from STRM2 to STRM1 will also be used (the Capture process we create later, with use_database_link => true, needs it). Create this as the Streams Administrator.
    -- run on STRM2.US.APGTECH.COM
    conn STRM_ADMIN/STRM_ADMIN
    -- create dblink to the source
    create database link STRM1.US.APGTECH.COM connect to STRM_ADMIN identified by STRM_ADMIN using 'STRM1';
    -- test it
    select * from dual@STRM1.US.APGTECH.COM;
    
  3. The configuration of LOG_ARCHIVE_DEST_2 on STRM2 should be:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u07/STRM2/standby-archives/
    VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
    SCOPE=SPFILE;
    
  4. Set the parameter LOG_ARCHIVE_CONFIG to enable the sending and receiving of redo logs between STRM1 and STRM2:
    -- run on STRM2.US.APGTECH.COM
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(STRM1,STRM2)' SCOPE=SPFILE;
    
  5. Now we create the Standby redo logs on STRM2. This example is for a single instance database; we will cover RAC at the end of this example. Let's "Keep It Simple" for right now.
    • Determine the size of the redo logs on the source (STRM1):
    SELECT L.THREAD#,
    L.GROUP#,
    L.BYTES / 1024 / 1024 MB,
    LF.MEMBER LOCATION
    FROM V$LOG L, V$LOGFILE LF
    WHERE L.GROUP# = LF.GROUP#;
    
    • The result should be similar to the following:

      THREAD#  GROUP#  MB  LOCATION
      -------  ------  --  ---------------------------
      1        1       50  /u04/oracle/STRM1/archives/
      1        2       50  /u05/oracle/STRM1/archives/
      1        3       50  /u06/oracle/STRM1/archives/

    • Once the size of the redo logs is determined from the source, we can create the Standby redo logs on STRM2. Remember to create one additional group, as recommended by Oracle.
      -- run on STRM2.US.APGTECH.COM
      conn /as sysdba
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('/u07/STRM2/standby-archives/slog4.rdo') SIZE 50M;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
      ('/u07/STRM2/standby-archives/slog5.rdo') SIZE 50M;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
      ('/u07/STRM2/standby-archives/slog6.rdo') SIZE 50M;
      ALTER DATABASE ADD STANDBY LOGFILE GROUP 7
      ('/u07/STRM2/standby-archives/slog7.rdo') SIZE 50M;
      
      • Confirm that standby redo logs were created successfully:
      -- run on STRM2.US.APGTECH.COM
      conn /as sysdba
      SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS
      FROM V$STANDBY_LOG;
      
    • Now we configure the redo transport service for the STRM1 side.
  6. The configuration of LOG_ARCHIVE_DEST_2 on STRM1 should be:
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=STRM2 LGWR SYNC NOREGISTER
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=STRM2'
    SCOPE=SPFILE;
    

    Note that we are using the SYNC attribute because we will configure a real-time downstream Capture process. Otherwise, use ASYNC.

  7. Set the parameter LOG_ARCHIVE_CONFIG to enable the sending and receiving of redo logs between STRM1 and STRM2:
    -- run on STRM1.US.APGTECH.COM
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(STRM1,STRM2)' SCOPE=SPFILE;
    

Now we have the redo transport service configured for real-time downstream capture. Because these parameters were set with SCOPE=SPFILE, they take effect once the databases are restarted. Our next step is to configure the Streams part of DSC.
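Before moving on, it is worth confirming that redo is actually arriving at STRM2. The following is a minimal check, assuming both databases have been restarted so the SPFILE changes are in effect:

    -- run on STRM1.US.APGTECH.COM: force a log switch so redo ships
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- run on STRM2.US.APGTECH.COM: a standby redo log group should show
    -- a THREAD# and SEQUENCE# from the source
    SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;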

Configuring the Streams part of DSC

If you have not already done so, create the schema LEARNING with a couple of tables on STRM1 and STRM2. You can modify (see the notes at the beginning of the script) and use the supplied script, Create_LEARNING.sql, to quickly create a tablespace, user, and the EMPLOYEES table with data.
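If the supplied script is not at hand, the following sketch gives the general idea; the tablespace name, datafile path, password, sample row, and column datatypes are assumptions (only the EMPLOYEES column names are taken from the INSERT statements used later in this section).

    -- run as SYSDBA on both STRM1 and STRM2; names and paths are assumptions
    CREATE TABLESPACE learning_tbs DATAFILE '/u05/oracle/learning01.dbf' SIZE 100M;
    CREATE USER learning IDENTIFIED BY learning DEFAULT TABLESPACE learning_tbs;
    GRANT CONNECT, RESOURCE TO learning;
    CREATE TABLE learning.employees (
      employee_id NUMBER PRIMARY KEY,
      first_name  VARCHAR2(50),
      last_name   VARCHAR2(50),
      time        DATE);
    INSERT INTO learning.employees VALUES (1, 'Sample', 'Row', NULL);
    COMMIT;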

From this point forward, the setting up of DSC should be fairly familiar, with some slight but important differences. The steps involved are all on STRM2; the following scripts are all run on STRM2.US.APGTECH.COM.

  1. Create a single Queue to be used by both the Capture and Apply processes.
    conn STRM_ADMIN/STRM_ADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STRM_ADMIN.DOWNSTREAM_Q_TABLE',
    queue_name => 'STRM_ADMIN.DOWNSTREAM_Q',
    queue_user => 'STRM_ADMIN');
    END;
    /
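    To confirm that the queue and its queue table exist, query the DBA_QUEUES dictionary view (we assume STRM_ADMIN was granted the privileges from the Chapter 3 pre-work):

    conn STRM_ADMIN/STRM_ADMIN
    SELECT name, queue_table, enqueue_enabled, dequeue_enabled
    FROM DBA_QUEUES
    WHERE owner = 'STRM_ADMIN';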
    
  2. Create the Apply process.
    conn STRM_ADMIN/STRM_ADMIN
    BEGIN
    DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name => 'STRM_ADMIN.DOWNSTREAM_Q',
    apply_name => 'DOWNSTREAM_APPLY',
    apply_captured => TRUE
    );
    END;
    /
    
  3. Create the Capture process.
    conn STRM_ADMIN/STRM_ADMIN
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name => 'STRM_ADMIN.DOWNSTREAM_Q',
    capture_name => 'DOWNSTREAM_CAPTURE',
    rule_set_name => NULL,
    start_scn => NULL,
    source_database => 'STRM1.US.APGTECH.COM',
    use_database_link => true,
    first_scn => NULL,
    logfile_assignment => 'implicit'); -- Refer to the Note below.
    END;
    /
    

    It is at this point that we specifically focus on the logfile_assignment parameter. We set this parameter to 'implicit' to instruct the CAPTURE process to scan all redo log files added by redo transport services or manually from the source database to the downstream database.
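    Once redo log files start arriving, you can see what has been registered for the Capture process by querying the DBA_REGISTERED_ARCHIVED_LOG dictionary view; CONSUMER_NAME identifies the Capture process assigned to scan each file.

    conn STRM_ADMIN/STRM_ADMIN
    SELECT consumer_name, source_database, thread#, sequence#, name
    FROM DBA_REGISTERED_ARCHIVED_LOG;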

  4. Alter the Capture process for real-time capturing

    Once the Capture process is created, this command will alter it so that real-time capturing can occur. Real-time Capture captures changes from the online redo logs on the Source.

    conn STRM_ADMIN/STRM_ADMIN
    BEGIN
    DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'DOWNSTREAM_CAPTURE',
    parameter => 'downstream_real_time_mine',
    value => 'y');
    END;
    /
    

    You will also need to archive the current redo log on the Source with this command:

    ALTER SYSTEM ARCHIVE LOG CURRENT;
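    To verify that the parameter took effect, check the DBA_CAPTURE_PARAMETERS dictionary view and look for DOWNSTREAM_REAL_TIME_MINE with a value of Y:

    conn STRM_ADMIN/STRM_ADMIN
    SELECT parameter, value
    FROM DBA_CAPTURE_PARAMETERS
    WHERE capture_name = 'DOWNSTREAM_CAPTURE';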
    
  5. Add the rule to the Capture process
    conn STRM_ADMIN/STRM_ADMIN
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'LEARNING',
    streams_type => 'capture',
    streams_name => 'downstream_capture',
    queue_name => 'STRM_ADMIN.downstream_q',
    include_dml => true,
    include_ddl => true,
    include_tagged_lcr => false,
    source_database => 'STRM1.US.APGTECH.COM',
    inclusion_rule => TRUE);
    END;
    /
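    The rules just added can be seen in the DBA_STREAMS_RULES dictionary view; with include_dml and include_ddl both set to true, you should see one DML rule and one DDL rule for the LEARNING schema:

    conn STRM_ADMIN/STRM_ADMIN
    SELECT rule_name, rule_type, schema_name, streams_type
    FROM DBA_STREAMS_RULES
    WHERE streams_name = 'DOWNSTREAM_CAPTURE';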
    
  6. Instantiate the schema
    conn STRM_ADMIN/STRM_ADMIN
    DECLARE
    -- Variable to hold instantiation SCN value
    iscn NUMBER;
    BEGIN
    -- Get current SCN from Source
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@STRM1.US.APGTECH.COM;
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name => 'LEARNING',
    source_database_name => 'STRM1.US.APGTECH.COM',
    instantiation_scn => iscn,
    recursive => TRUE);
    END;
    /
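    You can confirm that the instantiation SCN was recorded by querying the DBA_APPLY_INSTANTIATED_SCHEMAS dictionary view:

    conn STRM_ADMIN/STRM_ADMIN
    SELECT source_schema, source_database, instantiation_scn
    FROM DBA_APPLY_INSTANTIATED_SCHEMAS;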
    
  7. Start up the Apply and Capture processes
    Start the Apply process:
    conn STRM_ADMIN/STRM_ADMIN
    exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'DOWNSTREAM_APPLY');
    select apply_name, status from dba_apply;
    Start the Capture process:
    conn STRM_ADMIN/STRM_ADMIN
    exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'DOWNSTREAM_CAPTURE');
    select capture_name, status from dba_capture;
    
  8. Test and Celebrate

    Now, on STRM1.US.APGTECH.COM, add some additional data and confirm that it streams to STRM2 (a verification query follows the inserts).

    Insert into LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME) Values (5, 'Larry', 'Jonson', NULL);
    Insert into LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME) Values (6, 'Karen', 'Kim', NULL);
    Commit;
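    After allowing a few moments for the Capture and Apply processes to do their work, the new rows should be visible on the target:

    -- run on STRM2.US.APGTECH.COM
    SELECT employee_id, first_name, last_name
    FROM LEARNING.EMPLOYEES
    WHERE employee_id IN (5, 6);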
    

So, DSC is just another configuration of Oracle Streams, with a little bit of help from the redo transport service. If you are in a situation where policy or performance reasons dictate that only "pure" OLTP processing can run on Production, then DSC is a possible solution.
