
5. Managing Control Files, Online Redo Logs, and Archivelogs


An Oracle database consists of three types of mandatory files: data files, control files, and online redo logs. Chapter 4 focused on tablespaces and data files. This chapter looks at managing control files and online redo logs and implementing archivelogs. The first part of the chapter discusses typical control file maintenance tasks, such as adding, moving, and removing control files. The middle part of the chapter examines DBA activities related to online redo log files, such as renaming, adding, dropping, and relocating these critical files. Finally, the architectural aspects of enabling and implementing archiving are covered.

Managing Control Files

A control file is a small binary file that stores the following types of information:
  • Database name

  • Names and locations of data files

  • Names and locations of online redo log files

  • Current online redo log sequence number

  • Checkpoint information

  • Names and locations of RMAN backup files

You can query much of the information stored in the control file from data dictionary views. This example displays the types of information stored in the control file by querying v$controlfile_record_section:
SQL> select distinct type from v$controlfile_record_section;
Here is a partial listing of the output:
TYPE
----------------------------
FILENAME
TABLESPACE
RMAN CONFIGURATION
BACKUP CORRUPTION
PROXY COPY
FLASHBACK LOG
REMOVABLE RECOVERY FILES
AUXILIARY DATAFILE COPY
DATAFILE
You can view database-related information stored in the control file via the v$database view. The v$ views are based on underlying x$ structures; in the case of v$database, the information is read directly from the control file:
SQL> select name, open_mode, created, current_scn from v$database;
Here is the output for this example:
NAME      OPEN_MODE            CREATED   CURRENT_SCN
--------- -------------------- --------- -----------
O18C      READ WRITE           28-SEP-12     2573820
Every Oracle database must have at least one control file. When you start your database in nomount mode, the instance knows the location of the control files from the CONTROL_FILES initialization parameter in the spfile or init.ora file. When you issue a STARTUP NOMOUNT command, Oracle reads the parameter file, starts the background processes, and allocates memory structures:
-- locations of control files are known to the instance
SQL> startup nomount;
At this point, the control files have not been touched by any processes. When you alter your database into mount mode, the control files are read and opened for use:
-- control files opened
SQL> alter database mount;

If any of the control files listed in the CONTROL_FILES initialization parameter are not available, then you cannot mount your database.

When you successfully mount your database, the instance is aware of the locations of the data files and online redo logs but has not yet opened them. After you alter your database into open mode, the data files and online redo logs are opened:
-- datafiles and online redo logs opened
SQL> alter database open;

Note

Keep in mind that when you issue the STARTUP command (with no options), the previously described three phases are automatically performed in this order: nomount, mount, open. When you issue a SHUTDOWN command , the phases are reversed: close the database, unmount the control file, stop the instance.
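
You can confirm which phase the instance is in at any point by querying V$INSTANCE; the STATUS column reports STARTED, MOUNTED, or OPEN, corresponding to the nomount, mount, and open phases:
SQL> select status from v$instance;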

The control file is created when the database is created. As you saw in Chapter 2, you should create at least two control files when you create your database (to avoid a single point of failure). Ideally, multiple control files are stored on separate storage devices controlled by separate controllers; however, with modern storage arrays it can be difficult to determine whether devices are truly separate, so it is important to use fault-tolerant devices with mirroring. The control file is a very important part of the database and needs to be available, or very quickly restored, if needed.

Control files can also be placed on ASM disk groups; for example, one control file in the +ORADATA disk group and another in the +FRA disk group. Managing the control files and the details inside them remains the same as on a file system; the only difference is that the control files reside in ASM disk groups.
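
For example, with Oracle Managed Files naming in ASM, the control file names look similar to the following (a hypothetical listing; the file and incarnation numbers will differ in your environment):
SQL> select name from v$controlfile;

NAME
------------------------------------------------
+ORADATA/O18C/CONTROLFILE/current.260.1053538947
+FRA/O18C/CONTROLFILE/current.256.1053538947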

After the database has been opened, Oracle will frequently write information to the control files, such as when you make any physical modifications (e.g., creating a tablespace, adding/removing/resizing a data file). Oracle writes to all control files specified by the CONTROL_FILES initialization parameter. If Oracle cannot write to one of the control files, an error is thrown:
ORA-00210: cannot open the specified control file

If one of your control files becomes unavailable, shut down your database, and resolve the issue before restarting (see Chapter 19 for using RMAN to restore a control file). Fixing the problem may mean resolving a storage-device failure or modifying the CONTROL_FILES initialization parameter to remove the control file entry for the control file that is not available.

Displaying the Contents of a Control File

You can use the ALTER SESSION statement to display the physical contents of the control file; for example,
SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> alter session set events 'immediate trace name controlf level 9';
SQL> oradebug tracefile_name
The prior line of code displays the following name of the trace file:
/ora01/app/oracle/diag/rdbms/o18c/o18c/trace/o18c_ora_4153.trc
The trace file is written to the $ADR_HOME/trace directory. You can also view the trace directory name via this query:
SQL> select value from v$diag_info where name='Diag Trace';
Here is a partial listing of the contents of the trace file:
***************************************************************************
DATABASE ENTRY
***************************************************************************
 (size = 316, compat size = 316, section max = 1, section in-use = 1,
  last-recid= 0, old-recno = 0, last-recno = 0)
 (extent = 1, blkno = 1, numrecs = 1)
 09/28/2012 16:04:54
 DB Name "O18C"
 Database flags = 0x00404001 0x00001200
 Controlfile Creation Timestamp  09/28/2012 16:04:57
 Incmplt recovery scn: 0x0000.00000000

You can inspect the contents of the control file when troubleshooting or when trying to gain a better understanding of Oracle internals.

Viewing Control File Names and Locations

If your database is in a nomount state, a mounted state, or an open state, you can view the names and locations of the control files, as follows:
SQL> show parameter control_files
You can also view control file location and name information by querying the V$CONTROLFILE view. This query works while your database is mounted or open:
SQL> select name from v$controlfile;
If, for some reason, you cannot start your database at all, and you need to know the names and locations of the control files, you can inspect the contents of the initialization (parameter) file to see where they are located. If you are using a spfile, even though it is a binary file, you can still open it with a text editor. The safest approach is to make a copy of the spfile and then inspect its contents with an OS editor:
$ cp $ORACLE_HOME/dbs/spfileo18c.ora $ORACLE_HOME/dbs/spfileo18c.copy
$ vi $ORACLE_HOME/dbs/spfileo18c.copy
You can also use the strings command to search for values in a binary file:
$ strings spfileo18c.ora | grep -i control_files
If you are using a text-based initialization file, you can view the file directly, with an OS editor, or use the grep command:
$ grep -i control_files $ORACLE_HOME/dbs/inito18c.ora

Adding a Control File

Adding a control file means copying an existing control file and making your database aware of the copy by modifying your CONTROL_FILES parameter. This task must be done while your database is shut down. This procedure only works when you have a good existing control file that can be copied. Adding a control file isn’t the same thing as creating or restoring a control file.

Tip

See Chapter 4 for an example of re-creating a control file for the purpose of renaming and moving data files. See Chapter 19 for an example of re-creating a control file for the purpose of renaming a database.

If your database uses only one control file, and that control file becomes damaged, you need to either restore a control file from a backup (if available) and perform a recovery or re-create the control file. If you are using two or more control files, and one becomes damaged, you can use the remaining good control file(s) to quickly get your database into an operating state.

If a database is using only one control file, the basic procedure for adding a control file is as follows:
  1. Alter the initialization file CONTROL_FILES parameter to include the new location and name of the control file.

  2. Shut down your database.

  3. Use an OS command to copy an existing control file to the new location and name.

  4. Restart your database.
Depending on whether you use a spfile or an init.ora file, the previous steps vary slightly. The next two sections detail these different scenarios.

Spfile Scenario

If your database is open, you can quickly determine whether you are using a spfile with the following SQL statement:
SQL> show parameter spfile
Here is some sample output:
NAME                            TYPE        VALUE
------------------------------- ----------- ------------------------------
spfile                          string      /ora01/app/oracle/product/18.1
                                            .0.1/db_1/dbs/spfileo18c.ora
When you have determined that you are using a spfile, use the following steps to add a control file:
  1. Determine the CONTROL_FILES parameter’s current value:
    SQL> show parameter control_files
    The output shows that this database is using only one control file:
    NAME                            TYPE        VALUE
    ------------------------------- ----------- ------------------------------
    control_files                   string      /u01/dbfile/o18c/control01.ctl
     
  2. Alter your CONTROL_FILES parameter to include the new control file that you want to add, but limit the scope of the operation to the spfile (you cannot modify this parameter in memory). Make sure you also include any control files listed in step 1:
    SQL> alter system set control_files='/u01/dbfile/o18c/control01.ctl',
    '/u01/dbfile/o18c/control02.ctl' scope=spfile;
     
  3. Shut down your database:
    SQL> shutdown immediate;
     
  4. Copy an existing control file to the new location and name. In this example, a new control file named control02.ctl is created via the OS cp command:
    $ cp /u01/dbfile/o18c/control01.ctl /u01/dbfile/o18c/control02.ctl
     
  5. Start up your database:
    SQL> startup;
     
You can verify that the new control file is being used by displaying the CONTROL_FILES parameter:
SQL> show parameter control_files
Here is the output for this example:
NAME                  TYPE        VALUE
--------------------- ----------- ------------------------------
control_files         string      /u01/dbfile/o18c/control01.ctl
                                  ,/u01/dbfile/o18c/control02.ctl

Init.ora Scenario

Run the following statement to verify that you are using an init.ora file. If you are not using a spfile, the VALUE column is blank:
SQL> show parameter spfile
NAME                            TYPE        VALUE
------------------------------- ----------- ------------------------------
spfile                          string
To add a control file when using a text init.ora file, perform the following steps:
  1. Shut down your database:
    SQL> shutdown immediate;
     
  2. Edit your init.ora file with an OS utility (such as vi), and add the new control file location and name to the CONTROL_FILES parameter. This example opens the init.ora file, using vi, and adds control02.ctl to the CONTROL_FILES parameter:
    $ vi $ORACLE_HOME/dbs/inito18c.ora
    Listed next is the CONTROL_FILES parameter after control02.ctl is added:
    control_files='/u01/dbfile/o18c/control01.ctl',
                  '/u01/dbfile/o18c/control02.ctl'
     
  3. From the OS, copy the existing control file to the location and name of the control file being added:
    $ cp /u01/dbfile/o18c/control01.ctl /u01/dbfile/o18c/control02.ctl
     
  4. Start up your database:
    SQL> startup;
     
You can view the control files in use by displaying the CONTROL_FILES parameter:
SQL> show parameter control_files
For this example, here is the output:
NAME                       TYPE        VALUE
-------------------------- ----------- ------------------------------
control_files              string      /u01/dbfile/o18c/control01.ctl
                                       ,/u01/dbfile/o18c/control02.ctl

Moving a Control File

You may occasionally need to move a control file from one location to another. For example, if new storage is added to the database server, you may want to move an existing control file to the newly available location.

The procedure for moving a control file is very similar to adding a control file. The only difference is that you rename the control file instead of copying it. This example shows how to move a control file when you are using a spfile:
  1. Determine the CONTROL_FILES parameter’s current value:
    SQL> show parameter control_files
    The output shows that this database is using only one control file:
    NAME                            TYPE        VALUE
    ------------------------------- ----------- ------------------------------
    control_files                   string      /u01/dbfile/o18c/control01.ctl
     
  2. Alter your CONTROL_FILES parameter to reflect that you are moving a control file. In this example, the control file is currently in this location:
    /u01/dbfile/o18c/control01.ctl
    You are moving the control file to this location:
    /u02/dbfile/o18c/control01.ctl
    Alter the spfile to reflect the new location for the control file. You have to specify SCOPE=SPFILE because the CONTROL_FILES parameter cannot be modified in memory:
    SQL> alter system set
         control_files='/u02/dbfile/o18c/control01.ctl' scope=spfile;
     
  3. Shut down your database:
    SQL> shutdown immediate;
     
  4. At the OS prompt, move the control file to the new location. This example uses the OS mv command:
    $ mv /u01/dbfile/o18c/control01.ctl /u02/dbfile/o18c/control01.ctl
     
  5. Start up your database:
    SQL> startup;
     
You can verify that the new control file is being used by displaying the CONTROL_FILES parameter:
SQL> show parameter control_files
Here is the output for this example:
NAME                            TYPE        VALUE
------------------------------- ----------- ------------------------------
control_files                   string      /u02/dbfile/o18c/control01.ctl

Removing a Control File

You may run into a situation in which you experience a media failure with a storage device that contains one of your multiplexed control files:
ORA-00205: error in identifying control file, check alert log for more info
In this scenario, you still have at least one good control file. To remove a control file, follow these steps:
  1. Identify which control file has experienced media failure by inspecting the alert.log for information:
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '/u01/dbfile/o18c/control02.ctl'
     
  2. Remove the unavailable control file name from the CONTROL_FILES parameter. If you are using an init.ora file, modify the file directly with an OS editor (such as vi). If you are using a spfile, modify the CONTROL_FILES parameter with the ALTER SYSTEM statement. In this spfile example the control02.ctl control file is removed from the CONTROL_FILES parameter:
    SQL> alter system set control_files='/u01/dbfile/o18c/control01.ctl'
         scope=spfile;

    This database now has only one control file associated with it. You should never run a production database with just one control file. See the section “Adding a Control File,” earlier in this chapter, for details on how to add more control files to your database.

     
  3. Stop and start your database:
    SQL> shutdown immediate;
    SQL> startup;
     

Note

If SHUTDOWN IMMEDIATE does not work, use SHUTDOWN ABORT to shut down your database. There is nothing wrong with using SHUTDOWN ABORT to quickly close a database when SHUTDOWN IMMEDIATE hangs; however, keep in mind that the database may simply be rolling back changes rather than truly hanging. Depending on the transactions being rolled back, the subsequent startup may take some time or hinder performance.

Control files can also be placed in an ASM disk group. This allows you to move the back-end disks and storage around without having to relocate data files or control files. When the ASM layer is used, the storage devices and disks are transparent to the database files. This does not protect against corruption of a file, or against a file being removed, but it does prevent having to move files when the underlying location or disk changes. In the ASM views, these files have the type CONTROLFILE, which lets you identify their locations.
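
For example, while connected to the ASM instance, a query such as the following (a sketch using the standard V$ASM_ALIAS and V$ASM_FILE views) lists the aliases of all files of type CONTROLFILE:
select a.name, f.group_number, f.type
from v$asm_alias a, v$asm_file f
where a.group_number = f.group_number
and   a.file_number  = f.file_number
and   f.type = 'CONTROLFILE';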

Online Redo Logs

Online redo logs store a record of transactions that have occurred in your database. These logs serve the following purposes:
  • Provide a mechanism for recording changes to the database so that in the event of a media failure, you have a method of recovering transactions.

  • Ensure that in the event of total instance failure, committed transactions can be recovered (crash recovery) even if committed data changes have not yet been written to the data files.

  • Allow administrators to inspect historical database transactions through the Oracle LogMiner utility.

  • Enable Oracle tools such as GoldenGate and Streams to read the redo and replicate data.

You are required to have at least two online redo log groups in your database. Each online redo log group must contain at least one online redo log member. The member is the physical file that exists on disk. You can create multiple members in each redo log group, which is known as multiplexing your online redo log group.

Tip

I highly recommend that you multiplex your online redo log groups and, if possible, have each member on a separate physical device governed by a separate controller.

The log-writer process writes the contents of the redo log buffer (in the SGA) to the online redo log files (on disk). Each redo record has a system change number (SCN) assigned to it in order to identify the transaction redo information. Both committed and uncommitted records are written to the redo logs. The log writer flushes the contents of the redo log buffer when any of the following are true:
  • A COMMIT is issued.

  • A log switch occurs.

  • Three seconds go by.

  • The redo log buffer is one-third full.

Because redo is generated at the instance level, the container database (CDB) manages the redo logs. PDBs do not have their own redo logs, which also means that planning for redo log space and sizing is done at the CDB level and must include all of the PDB transactions. This architecture is discussed more in Chapter 22, but transaction sizing is based on all of the PDBs in a CDB.

The online redo log group that the log writer is actively writing to is the current online redo log group. The log writer writes simultaneously to all members of a redo log group. The log writer needs to successfully write to only one member in order for the database to continue operating. The database ceases operating if the log writer cannot write successfully to at least one member of the current group.

When the current online redo log group fills up, a log switch occurs, and the log writer starts writing to the next online redo log group. At each log switch, a log sequence number is assigned to the redo log; this number is used for archiving. The log writer writes to the online redo log groups in a round-robin fashion. Because you have a finite number of online redo log groups, eventually the contents of each online redo log group are overwritten. If you want to save a history of the transaction information, you must place your database in archivelog mode (see the section “Implementing Archivelog Mode” later in this chapter).

When your database is in archivelog mode, after every log switch the archiver background process copies the contents of the online redo log file to an archived redo log file. In the event of a failure, the archived redo log files allow you to restore the complete history of transactions that have occurred since your last database backup.

Figure 5-1 displays a typical setup for the online redo log files. This figure shows three online redo log groups, each containing two members. The database is in archivelog mode. In the figure, group 2 has recently been filled with transactions, a log switch has occurred, and the log writer is now writing to group 3. The archiver process is copying the contents of group 2 to an archived redo log file. When group 3 fills up, another log switch will occur, and the log writer will begin writing to group 1. At the same time, the archiver process will copy the contents of group 3 to archive log sequence 3 (and so forth).
Figure 5-1

Online redo log configuration

The online redo log files are not intended to be backed up. These files contain only the most recent redo transaction information generated by the database. When you enable archiving, the archived redo log files are the mechanism for protecting your database transaction history.

The contents of the current online redo log files are not archived until a log switch occurs. This means that if you lose all members of the current online redo log file, you lose transactions. Listed next are several mechanisms for protecting your online redo log files:
  • Multiplex the groups.

  • Consider setting the ARCHIVE_LAG_TARGET initialization parameter to ensure that the online redo logs are switched at regular intervals.

  • If possible, never allow two members of the same group to share the same physical disk.

  • Ensure that OS file permissions are set appropriately (restrictive, so that only the owner of the Oracle binaries has permission to read and write the files).

  • Use physical storage devices that are redundant (i.e., RAID [redundant array of inexpensive disks]).

  • Appropriately size the log files, so that they switch and are archived at regular intervals.

Note

The only tool provided by Oracle that can protect you and preserve all committed transactions in the event that you lose all members of the current online redo log group is Oracle Data Guard, implemented in maximum protection mode. See MOS note 239100.1 for more details regarding Oracle Data Guard protection modes.

Flash storage is another option for redo logs. Because the logs are written out to archivelogs and require fast writes, flash drives are a way to improve the performance of redo logs. If flash is not available, place the redo logs on physical disks and follow the previous list to minimize failures. Solid state disks might not provide faster writes, which makes them a less-than-ideal choice for redo logs.

The online redo log files are never backed up by an RMAN backup or by a user-managed hot backup. If you did back up the online redo log files, it would be meaningless to restore them. The online redo log files contain the latest redo generated by the database; you would not want to overwrite them from a backup with old redo information. For a database in archivelog mode, the online redo log files contain the most recently generated transactions that are required to perform a complete recovery. The online redo log files should also be excluded from OS-level (non-database) backups, along with the data files.

Displaying Online Redo Log Information

Use the V$LOG and V$LOGFILE views to display information about online redo log groups and corresponding members:
COL group#     FORM 99999
COL thread#    FORM 99999
COL grp_status FORM a10
COL member     FORM a30
COL mem_status FORM a10
COL mbytes     FORM 999999
--
SELECT
 a.group#
,a.thread#
,a.status grp_status
,b.member member
,b.status mem_status
,a.bytes/1024/1024 mbytes
FROM v$log     a,
     v$logfile b
WHERE a.group# = b.group#
ORDER BY a.group#, b.member;
Here is some sample output:
GROUP# THREAD# GRP_STATUS MEMBER                         MEM_STATUS  MBYTES
------ ------- ---------- ------------------------------ ---------- -------
     1       1 INACTIVE   /u01/oraredo/o18c/redo01a.rdo                  50
     1       1 INACTIVE   /u02/oraredo/o18c/redo01b.rdo                  50
     2       1 CURRENT    /u01/oraredo/o18c/redo02a.rdo                  50
     2       1 CURRENT    /u02/oraredo/o18c/redo02b.rdo                  50
When you are diagnosing online redo log issues, the V$LOG and V$LOGFILE views are particularly helpful. You can query these views while the database is mounted or open. Table 5-1 briefly describes each view.
Table 5-1

Useful Views Related to Online Redo Logs

View

Description

V$LOG

Displays the online redo log group information stored in the control file

V$LOGFILE

Displays online redo log file member information

The STATUS column of the V$LOG view is especially useful when you are working with online redo log groups. Table 5-2 describes each status and its meaning for the V$LOG view.
Table 5-2

Status for Online Redo Log Groups in the V$LOG View

Status

Meaning

CURRENT

The log group is currently being written to by the log writer.

ACTIVE

The log group is required for crash recovery and may or may not have been archived.

CLEARING

The log group is being cleared out by an ALTER DATABASE CLEAR LOGFILE command.

CLEARING_CURRENT

The current log group is being cleared of a closed thread.

INACTIVE

The log group is not required for crash recovery and may or may not have been archived.

UNUSED

The log group has never been written to; it was recently created.

The STATUS column of the V$LOGFILE view also contains useful information. This view offers information about each physical online redo log file member of a log group. Table 5-3 provides descriptions of each status and its meaning for each log file member.
Table 5-3

Status for Online Redo Log File Members in the V$LOGFILE View

Status

Meaning

INVALID

The log file member is inaccessible or has been recently created.

DELETED

The log file member is no longer in use.

STALE

The log file member’s contents are not complete.

NULL

The log file member is being used by the database.

It is important to differentiate between the STATUS column in V$LOG and the STATUS column in V$LOGFILE. The STATUS column in V$LOG reflects the status of the log group. The STATUS column in V$LOGFILE reports the status of the physical online redo log file member. Refer to these tables when diagnosing issues with your online redo logs.

Determining the Optimal Size of Online Redo Log Groups

Try to size the online redo logs so that they switch anywhere from two to six times per hour. The V$LOG_HISTORY view contains a history of how frequently the online redo logs have switched. Execute this query to view the number of log switches per hour:
select count(*)
,to_char(first_time,'YYYY:MM:DD:HH24')
from v$log_history
group by to_char(first_time,'YYYY:MM:DD:HH24')
order by 2;
Here is a snippet of the output:
  COUNT(*) TO_CHAR(FIRST
---------- -------------
         1 2012:10:23:23
         3 2012:10:24:03
        28 2012:10:24:04
        23 2012:10:24:05
        68 2012:10:24:06
        84 2012:10:24:07
        15 2012:10:24:08

From the previous output, you can see that a great deal of log switch activity occurred from approximately 4:00 am to 7:00 am. This could be due to a nightly batch job or to users in different time zones updating data. For this database the size of the online redo logs should be increased. You should try to size the online redo logs to accommodate peak transaction loads on the database.
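
A rough sizing calculation based on this output: the peak hour shows 84 switches of 50MB logs, so to bring that down to roughly 4 switches per hour, the logs would need to be about 84/4 x 50MB, or approximately 1GB each. You can approximate this with a query (a sketch, assuming the current logs are 50MB and a target of about 4 switches per hour):
select round(max(cnt)/4 * 50) suggested_mbytes
from (
select count(*) cnt
from v$log_history
group by to_char(first_time,'YYYY:MM:DD:HH24')
);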

The V$LOG_HISTORY view also records the log sequence number and the beginning system change number (SCN) for each log. As stated, a general rule of thumb is that you should size your online redo log files so that they switch approximately two to six times per hour. You do not want them switching too often, because there is overhead with each log switch; however, leaving transaction information in the redo log without archiving creates issues for recovery. If a disaster causes a media failure in your current online redo log, you can lose those transactions that haven’t been archived.

Oracle initiates a checkpoint as part of a log switch. During a checkpoint, the database-writer background process writes modified (also called dirty) blocks to disk, which is resource intensive. Checkpoint messages in the alert log are also a way to see how fast the logs are switching and whether there are waits associated with archiving.

Tip

Use the ARCHIVE_LAG_TARGET initialization parameter to set a maximum amount of time (in seconds) between log switches. A typical setting for this parameter is 1,800 seconds (30 minutes). A value of 0 (default) disables this feature. This parameter is commonly used in Oracle Data Guard environments to force log switches after the specified amount of time elapses.
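
For example, to force a log switch at least every 30 minutes (the parameter is dynamic, so no restart is required):
SQL> alter system set archive_lag_target=1800 scope=both;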

You can also query the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view to determine if your online redo log files have been sized correctly:
SQL> select optimal_logfile_size from v$instance_recovery;
Here is some sample output:
OPTIMAL_LOGFILE_SIZE
--------------------
349

This column reports the redo log file size (in megabytes) that is considered optimal, based on the initialization parameter setting of FAST_START_MTTR_TARGET. Oracle recommends that you configure all online redo logs to be at least the value of OPTIMAL_LOGFILE_SIZE. However, when sizing your online redo logs, you must take into consideration information about your environment (such as the frequency of the switches) .

Determining the Optimal Number of Redo Log Groups

Oracle requires at least two redo log groups in order to function. But, having just two groups sometimes isn’t enough. To understand why this is so, remember that every time a log switch occurs, it initiates a checkpoint. As part of a checkpoint the database writer writes all modified (dirty) blocks from the SGA to the data files on disk. Also recall that the online redo logs are written to in a round-robin fashion, and that eventually the information in a given log is overwritten. Before the log writer can begin to overwrite information in an online redo log, all modified blocks in the SGA associated with that redo log must first be written to a data file. If all the modified blocks have not been written to the data files, you see this message in the alert.log file:
Thread 1 cannot allocate new log, sequence <sequence number>
Checkpoint not complete
Another way to explain this issue is that Oracle needs to store in the online redo logs any information that would be required to perform a crash recovery. To help you visualize this, see Figure 5-2.
Figure 5-2

Redo protected until the modified (dirty) buffer is written to disk

At time 1, Block A is read from Data File AA into the buffer cache and modified. At time 2 the redo-change vector information (how the block changed) is written to the log buffer. At time 3 the log-writer process writes the Block A change-vector information to online redo log 1. At time 4 a log switch occurs, and online redo log 2 becomes the current online redo log.

Now, suppose that online redo log 2 fills up quickly and another log switch occurs, at which point the log-writer attempts to write to online redo log 1. The log writer isn’t allowed to overwrite information in online redo log 1 until the database writer writes Block A to Data File AA. Until Block A is written to Data File AA, Oracle needs information in the online redo logs to recover this block in the event of a power failure or shutdown abort. Before Oracle overwrites information in the online redo logs, it ensures that blocks protected by redo have been written to disk. If these modified blocks haven’t been written to disk, Oracle temporarily suspends processing until this occurs. There are a few ways to resolve this issue:
  • Add more redo log groups.

  • Lower the value of FAST_START_MTTR_TARGET . Doing so causes the database-writer process to write older modified blocks to disk in a shorter time frame.

  • Tune the database-writer process (modify DB_WRITER_PROCESSES).

If you notice that the Checkpoint not complete message is occurring often (say, several times a day), I recommend that you add one or more log groups to resolve the issue. Adding an extra redo log gives the database writer more time to write modified blocks in the database buffer cache to the data files before the associated redo with a block is overwritten. There is little downside to adding more redo log groups. The main concern is that you could bump up against the MAXLOGFILES value that was used when you created the database. If you need to add more groups and have exceeded the value of MAXLOGFILES, then you must re-create your control file and specify a high value for this parameter.

If adding more redo log groups doesn’t resolve the issue, you should carefully consider lowering the value of FAST_START_MTTR_TARGET . When you lower this value, you can potentially see more I/O because the database-writer process is more actively writing modified blocks to data files. Ideally, it would be nice to verify the impact of modifying FAST_START_MTTR_TARGET in a test environment before making the change in production. You can modify this parameter while your instance is up; this means you can quickly modify it back to its original setting if there are unforeseen side effects.
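
For example, you can inspect the current value and then lower it dynamically (the value of 60 seconds here is only illustrative):
SQL> show parameter fast_start_mttr_target
SQL> alter system set fast_start_mttr_target=60 scope=both;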

Finally, consider increasing the value of the DB_WRITER_PROCESSES parameter . Carefully analyze the impact of modifying this parameter in a test environment before you apply it to production. This value requires that you stop and start your database; therefore, if there are adverse effects, downtime is required to change this value back to the original setting.
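
Because DB_WRITER_PROCESSES is static, changing it requires bouncing the instance; here is a sketch (the value of 2 is only an example):
SQL> alter system set db_writer_processes=2 scope=spfile;
SQL> shutdown immediate;
SQL> startup;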

Adding Online Redo Log Groups

If you determine that you need to add an online redo log group, use the ADD LOGFILE GROUP statement. In this example, the database already contains two online redo log groups that are sized at 50MB each. An additional log group is added that has two members and is sized at 50MB:
alter database add logfile group 3
('/u01/oraredo/o18c/redo03a.rdo',
 '/u02/oraredo/o18c/redo03b.rdo') SIZE 50M;

In this scenario I highly recommend that the log group you add be the same size and have the same number of members as the existing online redo logs. If the newly added group doesn’t have the same physical characteristics as the existing groups, it’s harder to accurately determine performance issues. If a larger size is preferred, the new group can be added at the larger size, and then the other groups can be dropped and re-created at the larger size so that all the redo logs remain the same size (an example of this is in the next section).

For example, if you have two log groups sized at 50MB, and you add a new log group sized at 500MB, this is very likely to produce the Checkpoint not complete issue described in the previous section. This is because flushing all modified blocks from the SGA that are protected by the redo in a 500MB log file can potentially take much longer than flushing modified blocks from the SGA that are protected by a 50MB log file.

Resizing and Dropping Online Redo Log Groups

You may need to change the size of your online redo logs (see the section “Determining the Optimal Size of Online Redo Log Groups” earlier in this chapter). You cannot directly modify the size of an existing online redo log (as you can a data file). To resize an online redo log, you have to first add online redo log groups that are the size you want, and then drop the online redo logs that are the old size.

Say you want to resize the online redo logs to be 200MB each. First, you add new groups that are 200MB, using the ADD LOGFILE GROUP statement. The following example adds log group 4, with two members sized at 200MB:
alter database add logfile group 4
('/u01/oraredo/o18c/redo04a.rdo',
 '/u02/oraredo/o18c/redo04b.rdo') SIZE 200M;

Note

You can specify the size of the log file in bytes, kilobytes, megabytes, or gigabytes.

After you’ve added the log files with the new size, you can drop the old online redo logs. A log group must have an INACTIVE status before you can drop it. You can check the status of the log group, as shown here:
SQL> select group#, status, archived, thread#, sequence# from v$log;
You can drop an inactive log group with the ALTER DATABASE DROP LOGFILE GROUP statement:
SQL> alter database drop logfile group <group #>;
If you attempt to drop the current online log group, Oracle returns an ORA-01623 error, stating that you cannot drop the current group. Use the ALTER SYSTEM SWITCH LOGFILE statement to switch the logs and make the next group the current group:
SQL> alter system switch logfile;
After a log switch the log group that was previously the current group retains an active status as long as it contains redo that Oracle requires to perform crash recovery. If you attempt to drop a log group with an active status, Oracle throws an ORA-01624 error, indicating that the log group is required for crash recovery. Issue an ALTER SYSTEM CHECKPOINT command to make the log group inactive:
SQL> alter system checkpoint ;

Additionally, you cannot drop an online redo log group if doing so leaves your database with only one log group. If you attempt to do this, Oracle throws an ORA-01567 error and informs you that dropping the log group is not permitted because it would leave you with fewer than two log groups for your database (as mentioned earlier, Oracle requires at least two redo log groups in order to function).

Dropping an online redo log group does not remove the log files from the OS. You have to use an OS command to do this (such as the rm Linux/Unix command). Before you remove a file from the OS, ensure that it is not in use and that you do not remove a live online redo log file. For every database on the server, issue this query to view which online redo log files are in use:
SQL> select member from v$logfile;
Before you physically remove a log file, first switch the online redo logs enough times that all online redo log groups have recently been switched; doing so causes the OS to write to the file and thus give it a new timestamp. For example, if you have three groups, make sure you perform at least three log switches:
SQL> alter system switch logfile;
SQL> /
SQL> /

Tip

Adding and removing redo logs is another exercise to perform before turning over a new database, or during a regularly scheduled testing period, so that you can practice and validate the scripts you will run against production databases.

Now, verify at the OS prompt that the log file you intend to remove does not have a new timestamp. First, go to the directory containing the online redo log files:
$ cd  /u01/oraredo/o18c
Then, list the files to view the latest modification date:
$ ls -altr

When you are absolutely sure the file is not in use, you can remove it. The danger in removing a file is that if it happens to be an in-use online redo log, and the only member of a group, you can cause serious damage to your database. Ensure that you have a good backup of your database and that the file you are removing is not used by any databases on the server.

Adding Online Redo Log Files to a Group

You may occasionally need to add a log file to an existing group. For example, if you have an online redo log group that contains only one member, you should consider adding a log file (to provide a higher level of protection against a single–log file member failure). Use the ALTER DATABASE ADD LOGFILE MEMBER statement to add a member file to an existing online redo log group. You need to specify the new member file location, name, and group to which you want to add the file:
SQL> alter database add logfile member '/u02/oraredo/o18c/redo01b.rdo'
     to group 1;

Make certain you follow standards with regard to the location and names of any newly added redo log files.

Removing Online Redo Log Files from a Group

Occasionally, you may need to remove a log file from a group. For example, your database may have experienced a failure with one member of a multiplexed group, and you want to remove the failed member. First, make sure the log file you want to drop is not in the current group:
SELECT a.group#, a.member, b.status, b.archived, SUM(b.bytes)/1024/1024 mbytes
FROM v$logfile a, v$log b
WHERE a.group# = b.group#
GROUP BY a.group#, a.member, b.status, b.archived
ORDER BY 1, 2;
If you attempt to drop a log file that is in the group with the CURRENT status, you receive the following error:
ORA-01623: log 2 is current log for instance o18c (thread 1) - cannot drop
If you are attempting to drop a member from the current online redo log group, then force a switch, as follows:
SQL> alter system switch logfile ;
Use the ALTER DATABASE DROP LOGFILE MEMBER statement to remove a member file from a log group. You do not need to specify the group number because you are removing a specific file:
SQL> alter database drop logfile member '/u01/oraredo/o18c/redo04a.rdo';
You also cannot drop the last remaining log file of a group. A group must contain at least one log file. If you attempt to drop the last remaining log file of a group, you receive the following error:
ORA-00361: cannot remove last log member ...

Moving or Renaming Redo Log Files

Sometimes you need to move or rename online redo log files. For example, you may have added some new mount points to the system, and you want to move the online redo logs to the new storage. You can use two methods to accomplish this task:
  • Add the new log files in the new location and drop the old log files.

  • Physically rename the files from the OS.

If you cannot afford any downtime, consider adding new log files in the new location and then dropping the old log files. See the section “Adding Online Redo Log Groups,” earlier in this chapter, for details on how to add a log group. See also the section “Resizing and Dropping Online Redo Log Groups,” earlier in this chapter, for details on how to drop a log group.

Alternatively, you can physically move the files from the OS. You can do this with the database open or closed. If your database is open, ensure that the files you move are not part of the current online redo log group (because those are actively written to by the log-writer background process). It is dangerous to try to do this task while your database is open because on an active system, the online redo logs may be switching at a rapid rate, which creates the possibility of attempting to move a file while it is being switched to be the current online redo log. Therefore, I recommend that you only try to do this while your database is closed.

The next example shows how to move the online redo log files with the database shut down. Here are the steps:
  1. Shut down your database:
    SQL> shutdown immediate;
     
  2. From the OS prompt, move the files. This example uses the mv command to accomplish this task:
    $ mv /u02/oraredo/o18c/redo02b.rdo /u01/oraredo/o18c/redo02b.rdo
     
  3. Start up your database in mount mode:
    SQL> startup mount;
     
  4. Update the control file with the new file locations and names:
    SQL> alter database rename file '/u02/oraredo/o18c/redo02b.rdo'
         to '/u01/oraredo/o18c/redo02b.rdo';
     
  5. Open your database:
    SQL> alter database open;
     

You can verify that your online redo logs are in the new locations by querying the V$LOGFILE view. I recommend as well that you switch your online redo logs several times and then verify from the OS that the files have recent timestamps. Also check the alert.log file for any pertinent errors.

Controlling the Generation of Redo

For some types of applications, you may know beforehand that you can easily re-create the data. An example might be a data warehouse environment in which you perform direct path inserts or use SQL*Loader to load data. In these scenarios you can turn off the generation of redo for direct path loading. You use the NOLOGGING clause to do this:
create tablespace inv_mgmt_data
  datafile '/u01/dbfile/o18c/inv_mgmt_data01.dbf' size 100m
  extent management local
  uniform size 128k
  segment space management auto
  nologging;
If you have an existing tablespace and want to alter its logging mode, use the ALTER TABLESPACE statement:
SQL> alter tablespace inv_mgmt_data nologging;
You can confirm the tablespace logging mode by querying the DBA_TABLESPACES view:
SQL> select tablespace_name, logging from dba_tablespaces;
The generation of redo logging cannot be suppressed for regular INSERT, UPDATE, and DELETE statements. For regular data manipulation language (DML) statements, the NOLOGGING clause is ignored. The NOLOGGING clause does apply, however, to the following types of DML:
  • Direct path INSERT statements (see the example following this list)

  • Direct path SQL*Loader
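
For example, a direct path insert uses the APPEND hint. The table names here are hypothetical, and the redo suppression assumes the target table (or its tablespace) is set to NOLOGGING; some redo is still generated for data dictionary changes and index maintenance:
SQL> insert /*+ APPEND */ into inv_mgmt_hist
     select * from inv_mgmt_stage;
SQL> commit;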

The NOLOGGING clause also applies to the following types of DDL statements:
  • CREATE TABLE ... AS SELECT (NOLOGGING only affects the initial create, not subsequent regular DML statements against the table)

  • ALTER TABLE ... MOVE

  • ALTER TABLE ... ADD/MERGE/SPLIT/MOVE/MODIFY PARTITION

  • CREATE INDEX

  • ALTER INDEX ... REBUILD

  • CREATE MATERIALIZED VIEW

  • ALTER MATERIALIZED VIEW ... MOVE

  • CREATE MATERIALIZED VIEW LOG

  • ALTER MATERIALIZED VIEW LOG ... MOVE

Be aware that if redo isn’t logged for a table or index, and you have a media failure before the object is backed up, then you cannot recover the data; you receive an ORA-01578 error, indicating that there is logical corruption of the data.

Note

You can also override the tablespace level of logging at the object level. For example, even if a tablespace is specified as NOLOGGING, you can create a table with the LOGGING clause.
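
For instance, even if the INV_MGMT_DATA tablespace was created with NOLOGGING, the following creates a table whose changes are logged (a sketch with a hypothetical table):
SQL> create table inv (inv_id number)
     tablespace inv_mgmt_data logging;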

Implementing Archivelog Mode

Recall from the discussion earlier in this chapter that archive redo logs are created only if your database is in archivelog mode. If you want to preserve your database transaction history to facilitate point-in-time and other types of recovery, you need to enable that mode.

In normal operation, changes to your data generate entries in the database redo log files. As each online redo log group fills up, a log switch is initiated. When a log switch occurs, the log-writer process stops writing to the most recently filled online redo log group and starts writing to a new online redo log group. The online redo log groups are written to in a round-robin fashion—meaning the contents of any given online redo log group will eventually be overwritten. Archivelog mode preserves redo data for the long term by employing an archiver background process to copy the contents of a filled online redo log to what is termed an archive redo log file. The trail of archive redo log files is crucial to your ability to recover the database with all changes intact, right up to the precise point of failure.

Making Architectural Decisions

When you implement archivelog mode, you also need a strategy for managing the archived log files. The archive redo logs consume disk space. If left unattended, these files will eventually use up all the space allocated for them. If this happens, the archiver cannot write a new archive redo log file to disk, and your database will stop processing transactions. At that point, you have a hung database. You then need to intervene manually by creating space for the archiver to resume work. For these reasons, there are several architectural decisions you must carefully consider before you enable archiving:
  • Where to place the archive redo logs and whether to use the fast recovery area to store them

  • How to name the archive redo logs

  • How much space to allocate to the archive redo log location

  • How often to back up the archive redo logs

  • When it’s okay to permanently remove archive redo logs from disk

  • How to remove archive redo logs (e.g., have RMAN remove the logs, based on a retention policy)

  • Whether multiple archive redo log locations should be enabled

  • When to schedule the small amount of downtime that is required (if a production database)

As a general rule of thumb, you should have enough space in your primary archive redo location to hold at least a day’s worth of archive redo logs. This lets you back them up on a daily basis and then remove them from disk after they have been backed up.
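
Once archiving is enabled, you can estimate how much space a day’s worth of archive redo logs consumes with a query like this (a sketch using the V$ARCHIVED_LOG view):
select trunc(completion_time) arch_day
,      round(sum(blocks*block_size)/1024/1024) mbytes
from v$archived_log
group by trunc(completion_time)
order by 1;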

If you decide to use a fast recovery area (FRA) for your archive redo log location, you must ensure that it contains sufficient space to hold the number of archive redo logs generated between backups. Keep in mind that the FRA typically contains other types of files, such as RMAN backup files, flashback logs, and so on. If you use an FRA, be aware that the generation of other types of files can potentially impact the space required by the archive redo log files. There are parameters that can be set to manage the FRA and provide a way to resize the space for recovery in order for the database to continue instead of having to increase space on the file system.

The parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE set the file location for the FRA and the size of the space to be used by the database. These parameters can also prevent one database from filling up the space needed by other databases on the same server. An ASM disk group named FRA can be created so that ASM manages the space; setting DB_RECOVERY_FILE_DEST=+FRA allows the recovery area to use that disk group. Again, there are advantages to having space managed behind the scenes when an area fills up. Using these parameters along with ASM removes the dependence on specific file systems and provides more options for quickly addressing issues with archivelogs and the recovery area. The FRA is recommended here because these parameters are dynamic, allowing changes that prevent the database from hanging. This should be included in the planning and architecture of the archiving mode of the database.

You need a strategy for automating the backup and removal of archive redo log files. For user-managed backups, this can be implemented with a shell script that periodically copies the archive redo logs to a backup location and then removes them from the primary location. As you will see in later chapters, RMAN automates the backup and removal of archive redo log files.

If your business requirements are such that you must have a certain degree of high availability and redundancy, then you should consider writing your archive redo logs to more than one location. Some shops set up jobs to copy the archive redo logs periodically to a different location on disk or even to a different server.

Setting the Archive Redo File Location

Before you set your database mode to archiving, you should specifically instruct Oracle where you want the archive redo logs to be placed. You can set the archive redo log file destination with the following techniques:
  • Set the LOG_ARCHIVE_DEST_N database initialization parameter.

  • Implement FRA.

These two approaches are discussed in detail in the following sections.

Tip

If you do not specifically set the archive redo log location via an initialization parameter or by enabling the FRA, then the archive redo logs are written to a default location. For Linux/Unix, the default location is ORACLE_HOME/dbs. For Windows, the default location is ORACLE_HOME\database. For active production database systems, the default archive redo log location is rarely appropriate.

Setting the Archive Location to a User-Defined Disk Location (non-FRA)

If you are using an init<SID>.ora file, modify the file with an OS utility (such as vi). In this example the archive redo log location is set to /u01/oraarch/o18c:
log_archive_dest_1='location=/u01/oraarch/o18c'
log_archive_format='o18c_%t_%s_%r.arc'

In the prior line of code, my standard for naming archive redo log files includes the ORACLE_SID (in this example, o18c to start the string); the mandatory parameters %t, %s, and %r; and the string .arc, to end. I like to embed the name of the ORACLE_SID in the string to avoid confusion when multiple databases are housed on one server. I like to use the extension .arc to differentiate the files from other types of database files.
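
With this format string, an archived redo log generated by thread 1 at log sequence 100 would have a name similar to the following (the resetlogs ID shown is hypothetical):
o18c_1_100_1053538947.arc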

Tip

If you do not specify a value for LOG_ARCHIVE_FORMAT, Oracle uses a default, such as %t_%s_%r.dbf. One aspect of the default format that I do not like is that it ends with the extension .dbf, which is widely used for data files. This can cause confusion about whether a particular file can be safely removed because it is an old archive redo log file or should not be touched because it is a live data file. Most DBAs are reluctant to issue commands such as rm *.dbf for fear of accidentally removing live data files.

If you are using a spfile, use ALTER SYSTEM to modify the appropriate initialization variables:
SQL> alter system set log_archive_dest_1='location=/u01/oraarch/o18c' scope=both;
SQL> alter system set log_archive_format='o18c_%t_%s_%r.arc' scope=spfile;

You can dynamically change the LOG_ARCHIVE_DEST_n parameters while your database is open. However, you have to stop and start your database for the LOG_ARCHIVE_FORMAT parameter to take effect.

Recovering from Setting a Bad spfile Parameter

Take care not to set the LOG_ARCHIVE_FORMAT to an invalid value; for example,
SQL> alter system set log_archive_format='%r_%y_%dk.arc' scope=spfile;
If you do so, when you attempt to stop and start your database, you won’t even get to the nomount phase (because the spfile contains an invalid parameter):
SQL> startup nomount;
ORA-19905: log_archive_format must contain %s, %t and %r

In this situation, if you are using a spfile, you cannot start your instance. You have a couple of options here. If you are using RMAN and are backing up the spfile, then restore the spfile from a backup.

If you are not using RMAN, you can also try to edit the spfile directly with an OS editor (such as vi), but Oracle doesn’t recommend or support this.

The alternative is to create an init.ora file manually from the contents of the spfile. First, rename the spfile that contains a bad value:
$ cd $ORACLE_HOME/dbs
$ mv spfile<SID>.ora spfile<SID>.old

In SQL*Plus, create the init.ora file from the renamed spfile:

SQL> create pfile='inito18c.ora' from spfile='spfileo18c.old';

Then, open the pfile with a text editor, such as vi:
$ vi inito18c.ora
Modify the bad parameter to contain a valid value, and save the file. You should now be able to start up your database using the pfile. Then you can re-create the spfile from the fixed pfile:
SQL> create spfile from pfile;

Then you can start up the database using the fixed spfile.

When you specify LOG_ARCHIVE_FORMAT, you must include %t (or %T), %s (or %S), and %r in the format string. Table 5-4 lists the valid variables you can use with the LOG_ARCHIVE_FORMAT initialization parameter.
Table 5-4

Valid Variables for the Log Archive Format String

Format String

Meaning

%s

Log sequence number

%S

Log sequence number padded to the left with zeros

%t

Thread number

%T

Thread number padded to the left with zeros

%a

Activation ID

%d

Database ID

%r

Resetlogs ID required to ensure uniqueness across multiple incarnations of the database

You can view the value of the LOG_ARCHIVE_DEST_N parameter by running the following:
SQL> show parameter log_archive_dest
Here is a partial listing of the output:
NAME                             TYPE        VALUE
-------------------------------- ----------- --------------------------
log_archive_dest                 string
log_archive_dest_1               string      location=/u01/oraarch/o18c
log_archive_dest_10              string

You can enable up to 31 different locations for the archive redo log file destination. For most production systems, one archive redo log destination location is usually sufficient. If you need a higher degree of protection, you can enable multiple destinations. Keep in mind that when you use multiple destinations, the archiver must be able to write to at least one location successfully. If you enable multiple mandatory locations and set LOG_ARCHIVE_MIN_SUCCEED_DEST to be higher than 1, then your database may hang if the archiver cannot write to all mandatory locations.
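
For example, to make a second location mandatory and require that both destinations succeed (the path here is illustrative; use this combination with care, for the reasons just described):
SQL> alter system set log_archive_dest_2='location=/u02/oraarch/o18c mandatory';
SQL> alter system set log_archive_min_succeed_dest=2;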

You can check the details regarding the status of archive redo log locations via this query:
select
 dest_name
,destination
,status
,binding
from v$archive_dest;
Here is a small sample of the output:
DEST_NAME            DESTINATION                    STATUS    BINDING
-------------------- ------------------------------ --------- ---------
LOG_ARCHIVE_DEST_1   /u01/oraarch/o18c              VALID     OPTIONAL
LOG_ARCHIVE_DEST_2                                  INACTIVE  OPTIONAL

Using the FRA for Archive Log Files

The FRA is an area on disk—specified via database initialization parameters—that can be used to store files, such as archive redo logs, RMAN backup files, flashback logs, and multiplexed control files and online redo logs. To enable the use of FRA, you must set two initialization parameters (in this order):
  • DB_RECOVERY_FILE_DEST_SIZE specifies the maximum space to be used for all files that are stored in the FRA for a database.

  • DB_RECOVERY_FILE_DEST specifies the base directory for the FRA.

When you create an FRA, you are not really creating anything—you are telling Oracle which directory to use when storing files that go in the FRA. For example, say 200GB of space is reserved on a mount point, and you want the base directory for the FRA to be /u01/fra. To enable the FRA, first set DB_RECOVERY_FILE_DEST_SIZE:
SQL> alter system set db_recovery_file_dest_size=200g scope=both;
Next, set the DB_RECOVERY_FILE_DEST parameter:
SQL> alter system set db_recovery_file_dest='/u01/fra' scope=both;

If you are using an init.ora file, modify it with an OS utility (such as vi) with the appropriate entries.

After you enable FRA, by default, Oracle writes archive redo logs to subdirectories in the FRA.

Note

If you’ve set the LOG_ARCHIVE_DEST_N parameter to be a location on disk, archive redo logs are not written to the FRA.
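If you later decide you do want the FRA to receive the archive redo logs, one approach (a sketch, assuming no other LOG_ARCHIVE_DEST_N parameters are set) is to clear the disk-based destination so that the FRA default takes over:
SQL> alter system set log_archive_dest_1='' scope=both;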

You can verify that the archive location is using FRA:
SQL> archive log list;
If archive files are being written to the FRA, you should see output like this:
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
You can display the directory associated with the FRA like this:
SQL> show parameter db_recovery_file_dest
When you first implement FRA, there are no subdirectories beneath the base FRA directory (specified with DB_RECOVERY_FILE_DEST). The first time Oracle needs to write a file to the FRA, it creates any required directories beneath the base directory. For example, after you implement FRA, if archiving for your database is enabled, then the first time a log switch occurs, Oracle creates the following directories beneath the base FRA directory:
<SID>/archivelog/<YYYY_MM_DD>

Each day that archive redo logs are generated results in a new directory’s being created in the FRA, using the directory name format YYYY_MM_DD. Archive redo logs written to the FRA use the OMF format naming convention (regardless of whether you’ve set the LOG_ARCHIVE_FORMAT parameter).
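Because every file written to the FRA counts against DB_RECOVERY_FILE_DEST_SIZE, it is worth monitoring consumption before the archiver runs out of room. Here is a sketch of such a check:
select
 file_type
,percent_space_used
,percent_space_reclaimable
from v$recovery_area_usage;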

If you want archive redo logs written to both FRA and a non-FRA location, you can enable that, as follows:
SQL> alter system set log_archive_dest_1='location=/u01/oraarch/o18c';
SQL> alter system set log_archive_dest_2='location=USE_DB_RECOVERY_FILE_DEST';

Enabling Archivelog Mode

After you have set the location for your archive redo log files, you can enable archivelog mode. To do so, connect as SYS (or a user with the SYSDBA privilege) and do the following:
$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
You can confirm archivelog mode with this command:
SQL> archive log list;
You can also confirm it as follows:
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG

Disabling Archivelog Mode

Usually, you don’t disable archivelog mode for a production database. However, you may be doing a big data load and want to reduce any overhead associated with the archiving process, and so you want to turn off archivelog mode before the load begins and then re-enable it after the load. If you do this, be sure you make a backup as soon as possible after re-enabling archiving.

To disable archiving, do the following as SYS (or a user with the SYSDBA privilege):
$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database noarchivelog;
SQL> alter database open;
You can confirm the log mode with this command:
SQL> archive log list;
You can also confirm the log mode, as follows:
SQL> select log_mode from v$database;
LOG_MODE
------------
NOARCHIVELOG

Reacting to a Lack of Disk Space in Your Archive Log Destination

The archiver background process writes archive redo logs to a location that you specify. If, for any reason, the archiver process cannot write to the archive location, your database hangs. Any users attempting to connect receive this error:
ORA-00257: archiver error. Connect internal only, until freed.
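If you are archiving to the FRA, a quick first check (a sketch) is whether the recovery area has filled up:
select
 name
,space_limit
,space_used
from v$recovery_file_dest;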

As a production-support DBA, you never want to let your database get into that state. Sometimes, unpredictable events happen, and you have to deal with unforeseen issues.

Note

DBAs who support production databases have a mindset completely different from that of architect DBAs. Rather than exchanging ideas only in the middle of troubleshooting, set up time with production DBAs and architects to discuss new technologies and to plan which strategies will or won't work in your environment.

In this situation your database is as good as down and completely unavailable. To fix the issue, you have to act quickly:
  • Move files to a different location.

  • Compress old files in the archive redo log location.

  • Permanently remove old files.

  • Switch the archive redo log destination to a different location (this can be changed dynamically, while the database is up and running).

  • If using FRA, increase the space allocation for DB_RECOVERY_FILE_DEST_SIZE.

  • If using FRA, change the destination to a different location via the DB_RECOVERY_FILE_DEST parameter.

  • Use RMAN to back up and then delete the archive redo log files.

  • Remove expired files from the directory using RMAN.

Moving files is usually the quickest and safest way to resolve the archiver error; if you are using the FRA, increasing DB_RECOVERY_FILE_DEST_SIZE or repointing DB_RECOVERY_FILE_DEST is similarly fast. You can use an OS utility such as mv to move old archive redo logs to a different location. If they are needed for a subsequent restore and recovery, you can let the recovery process know about the new location. Be careful not to move an archive redo log that is currently being written to; if an archived redo log file appears in V$ARCHIVED_LOG, it has been completely archived.
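For example, here is a sketch of a query that lists fully archived files and their completion times, oldest first, so that you can safely pick the oldest files to move:
select
 name
,completion_time
from v$archived_log
where name is not null
order by completion_time;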

You can use an OS utility such as gzip to compress archive redo log files in the current archive destination. If you do this, you have to remember to uncompress any files that may be later needed for a restore and recovery. Be careful not to compress an archive redo log that is currently being written to.
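As an illustrative sketch (assuming the archive destination used throughout this chapter and standard GNU utilities), the following compresses all but the ten most recently written archive redo logs:
$ cd /u01/oraarch/o18c
$ ls -t *.arc | tail -n +11 | xargs gzip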

Another option is to use an OS utility such as rm to remove archive redo logs from the disk permanently. This approach is dangerous because you may need those archive redo logs for a subsequent recovery; it should be used only as a last resort, because if you delete archive redo logs that haven't been backed up, you risk not being able to perform a complete recovery. If you do remove archive redo log files and you don't have a backup of them, make a full backup of your database as soon as possible. Using RMAN to back up and then delete the files is a much safer approach and ensures that recovery remains possible until another backup is performed.
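Here is a minimal RMAN sketch of the back-up-then-delete approach (assuming a working RMAN configuration; see Chapter 18):
$ rman target /
RMAN> backup archivelog all delete input;
This backs up every archive redo log on disk and deletes each one only after it has been successfully backed up.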

If another location on your server has plenty of space, you can consider changing the location to which the archive redo logs are being written. You can perform this operation while the database is up and running; for example,
SQL> alter system set log_archive_dest_1='location=/u02/oraarch/o18c';

After you've resolved the issue with the primary location, you can switch back to the original location.

Note

When a log switch occurs, the archiver determines where to write the archive redo logs, based on the current FRA setting or a LOG_ARCHIVE_DEST_N parameter. It doesn’t matter to the archiver if the destination has recently changed.

When the archive redo log file destination is full, you have to scramble to resolve it. This is why a good deal of thought should precede enabling archiving for 24-7 production databases.

For most databases, writing the archive redo logs to one location is sufficient. However, if you have any type of disaster recovery or high-availability requirement, then you should write to multiple locations. Sometimes, DBAs set up a job to back up the archive redo logs every hour and copy them to an alternate location or even to an alternate server.
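As a sketch of that last approach (the schedule, paths, and host name here are hypothetical), an hourly cron entry that mirrors the archive destination to another server might look like this:
# hypothetical: copy archive redo logs to a standby host at the top of each hour
0 * * * * rsync -a /u01/oraarch/o18c/ standby01:/u01/oraarch/o18c/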

Backing Up Archive Redo Log Files

Depending on your business requirements, you may need a strategy for backing up archive redo log files. Minimally, you should back up any archive redo logs generated during a backup of a database in archivelog mode. Additional strategies may include the following:
  • Periodically copying archive redo logs to an alternate location and then removing them from the primary destination

  • Copying the archive redo logs to tape and then deleting them from disk

  • Using two archive redo log locations

  • Using Data Guard for a robust disaster recovery solution

Keep in mind that you need all archive redo logs generated since the begin time of the last good backup to ensure that you can completely recover your database. Only after you are sure you have a good backup of your database should you consider removing archive redo logs that were generated prior to the backup.

If you are using RMAN as a backup and recovery strategy, then you should use RMAN to back up the archive redo logs. Additionally, you should specify an RMAN retention policy for these files and have RMAN remove the archive redo logs only after the retention policy requirements are met (e.g., files must be backed up at least once before being removed from disk). See Chapter 18 for details on using RMAN.
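For instance, here is a sketch of a deletion policy that stops RMAN from removing an archive redo log until the log has been backed up at least once to disk:
RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;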

Summary

This chapter described how to configure and manage control files and online redo log files and enable archiving. Control files and online redo logs are critical database files; a normally operating database cannot function without them.

Control files are small binary files that contain information about the structure of the database. Any control files specified in the parameter file must be available in order for you to mount the database. If a control file becomes unavailable, then your database will cease operating at the next log switch or the next required write to the control file, until you resolve the issue. I highly recommend that you configure your database with at least three control files. If one control file becomes unavailable, you can replace it with a copy of a good existing control file. It is critical that you know how to configure, add, and remove these files.

Online redo logs are crucial files that record the database's transaction history. If you have multiple instances connected to one database, then each instance generates its own redo thread. Each database must be created with two or more online redo log groups. You can operate a database with each group having just one online redo log member. However, I highly recommend that you create your online redo log groups with two members in each group. If an online redo log group has at least one member that can be written to, your database will continue to function. If all members of an online redo log group are unavailable, then your database will cease to operate. As a DBA you must be extremely proficient in creating, adding, moving, and dropping these critical database files. Storage options such as flash can allow for faster writes to the redo logs; however, solid-state disks do not always provide faster sequential writes, so they are not necessarily the ideal choice for redo logs.

Archiving is the mechanism for ensuring you have all the transactions required to recover the database. Once enabled, the archiver needs to successfully copy the online redo log after a log switch occurs. If the archiver cannot write to the primary archive destination, then your database will hang. Therefore, you need to map out carefully the amount of disk space required and how often to back up and subsequently remove these files.

Using Fast Recovery Area (FRA) provides additional ways to manage the archive logs and allows for some flexibility in planning for growth and sizing of the disk needed for the archive logs.

The chapters up to this point in the book have covered tasks such as installing the Oracle software; creating databases; and managing tablespaces, data files, control files, online redo log files, and archiving. The next several chapters concentrate on how to configure a database for application use and include topics such as creating users and database objects.
