CHAPTER 4
Memory Structures

In this chapter, we'll look at Oracle's three major memory structures:

  • System Global Area (SGA): This is a large, shared memory segment that virtually all Oracle processes will access at one point or another.
  • Process Global Area (PGA): This is memory that is private to a single process or thread, and is not accessible from other processes/threads.
  • User Global Area (UGA): This is memory associated with your session. It will be found either in the SGA or the PGA depending on whether you are connected to the database using shared server (then it will be in the SGA), or dedicated server (it will be in the PGA, in the process memory).

Note In earlier releases of Oracle, shared server was referred to as Multi-Threaded Server or MTS. In this book, we will always use the term "shared server."


We'll first discuss the PGA and UGA, and then we'll move on to examine the really big structure: the SGA.

The Process Global Area and User Global Area

The PGA is a process-specific piece of memory. In other words, it is memory specific to a single operating system process or thread. This memory is not accessible by any other process/thread in the system. It is typically allocated via either of the C runtime calls malloc() or mmap(), and it may grow (and even shrink) at runtime. The PGA is never allocated in Oracle's SGA—it is always allocated locally by the process or thread.

The UGA is, in effect, your session's state. It is memory that your session must always be able to get to. The location of the UGA is wholly dependent on how you connected to Oracle. If you connected via a shared server, then the UGA must be stored in a memory structure that every shared server process has access to—and that would be the SGA. In this way, your session can use any one of the shared servers, since any one of them can read and write your session's data. On the other hand, if you are using a dedicated server connection, this need for universal access to your session state goes away, and the UGA becomes virtually synonymous with the PGA; it will, in fact, be contained in the PGA of your dedicated server. When you look at the system statistics, you'll find the UGA reported in the PGA in dedicated server mode (the PGA will be greater than or equal to the UGA memory used; the PGA memory size will include the UGA size as well).

So, the PGA contains process memory and may include the UGA. The other areas of PGA memory are generally used for in-memory sorting, bitmap merging, and hashing. It would be safe to say that, besides the UGA memory, these are the largest contributors by far to the PGA.

Starting with Oracle9i Release 1, there are two ways to manage this other non-UGA memory in the PGA:

  • Manual PGA memory management, where you tell Oracle how much memory it is allowed to use to sort and hash any time it needs to sort or hash in a specific process
  • Automatic PGA memory management, where you tell Oracle how much memory it should attempt to use systemwide

The manner in which memory is allocated and used differs greatly in each case and, as such, we'll discuss each in turn. It should be noted that in Oracle9i, when using a shared server connection, you can only use manual PGA memory management. This restriction was lifted with Oracle 10g Release 1; in that release and later, you can use either automatic or manual PGA memory management with shared server connections.

PGA memory management is controlled by the database initialization parameter WORKAREA_SIZE_POLICY and may be altered at the session level. This initialization parameter defaults to AUTO, for automatic PGA memory management when possible in Oracle9i Release 2 and above. In Oracle9i Release 1, the default setting was MANUAL.
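
For example, you can verify the current policy and switch your own session over; a quick sanity-check sketch (the prompt and output shown are illustrative, and your default value depends on the release, as just noted):

sys@ORA10G> show parameter workarea_size_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
workarea_size_policy                 string      AUTO

sys@ORA10G> alter session set workarea_size_policy = manual;
Session altered.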

In the sections that follow, we'll take a look at each approach.

Manual PGA Memory Management

In manual PGA memory management, the parameters that will have the largest impact on the size of your PGA, outside of the memory allocated by your session for PL/SQL tables and other variables, will be as follows:

  • SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk.
  • SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE was 512KB and SORT_AREA_RETAINED_SIZE was 256KB, then your server process would use up to 512KB of memory to sort data during the initial processing of the query. When the sort was complete, the sorting area would be "shrunk" down to 256KB, and any sorted data that did not fit in that 256KB would be written out to the temporary tablespace.
  • HASH_AREA_SIZE: The amount of memory your server process would use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory and anything that didn't fit in the hash area region of memory would be stored in the temporary tablespace by the join key.

These parameters control the amount of space Oracle will use to sort or hash data before writing (swapping) it to disk, and how much of that memory segment will be retained after the sort is done. The difference between SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE portion will be in your UGA. You can discover your current usage of PGA and UGA memory and monitor its size by querying special Oracle V$ views, also referred to as dynamic performance views.
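
For example, a session can peek at its own memory statistics with a query such as the following (the statistic names shown are the Oracle 9i/10g names; we'll use a more elaborate two-session technique next):

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name like '%ga memory%';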

For example, let's run a small test whereby in one session we'll sort lots of data and, from a second session, we'll monitor the UGA/PGA memory usage in that first session. To do this in a predictable manner, we'll make a copy of the ALL_OBJECTS table, with about 45,000 rows in this case, without any indexes (so we know a sort has to happen):

ops$tkyte@ORA10G> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA10G> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.

To remove any side effects from the initial hard parsing of queries, we'll run the following script, but for now ignore its output. We'll run the script again in a fresh session so as to see the effects on memory usage in a controlled environment. We'll use the sort area sizes of 64KB, 1MB, and 1GB in turn:

create table t as select * from all_objects;
exec dbms_stats.gather_table_stats( user, 'T' );
alter session set workarea_size_policy=manual;
alter session set sort_area_size = 65536;
set termout off
select * from t order by 1, 2, 3, 4;
set termout on
alter session set sort_area_size=1048576;
set termout off
select * from t order by 1, 2, 3, 4;
set termout on
alter session set sort_area_size=1073741820;
set termout off
select * from t order by 1, 2, 3, 4;
set termout on

Note When we process SQL in the database, we must first "parse" the SQL statement. There are two types of parses available. The first is a hard parse, which is what happens the first time a query is parsed by the database instance and includes query plan generation and optimization. The second is a soft parse, which can skip many of the steps a hard parse must take. We hard parse the previous queries here so that the work performed by that operation is not measured in the following section.


Now, I would suggest logging out of that SQL*Plus session and logging back in before continuing, in order to get a consistent environment, or one in which no work has been done yet.

To ensure we're using manual memory management, we'll set it specifically and specify our rather small sort area size of 64KB. Also, we'll identify our session ID (SID) so we can monitor the memory usage for that session.

ops$tkyte@ORA10G> alter session set workarea_size_policy=manual;
Session altered.

ops$tkyte@ORA10G> select sid from v$mystat where rownum = 1;

       SID
----------
       151

Now, we need to measure SID 151's memory from a second separate session. If we used the same session, then our query to see how much memory we are using for sorting might itself influence the very numbers we are looking at. To measure the memory from this second session, we'll use a small pair of SQL*Plus scripts I developed for this. The first script resets a small table and sets a SQL*Plus variable to the SID; it is called reset_stat.sql:

drop table sess_stats;

create table sess_stats
( name varchar2(64), value number, diff number );

variable sid number
exec :sid := &1

Note Before using this script (or any script, for that matter), make sure you understand what the script does. This script is dropping and re-creating a table called SESS_STATS. If your schema already has such a table, you'll probably want to use a different name!
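
To run it, you pass the SID of the session to be monitored as an argument; for example, for the session with SID 151 identified earlier, the invocation would look like this:

ops$tkyte@ORA10G> @reset_stat 151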


The other script is called watch_stat.sql, and for this case study, it uses the MERGE SQL statement so we can initially INSERT the statistic values for a session and then later come back and update them—without needing a separate INSERT/UPDATE script:

merge into sess_stats
using
(
select a.name, b.value
  from v$statname a, v$sesstat b
 where a.statistic# = b.statistic#
   and b.sid = :sid
   and (a.name like '%ga %'
        or a.name like '%direct temp%')
) curr_stats
on (sess_stats.name = curr_stats.name)
when matched then
  update set diff = curr_stats.value - sess_stats.value,
             value = curr_stats.value
when not matched then
  insert ( name, value, diff )
  values
  ( curr_stats.name, curr_stats.value, null )
/

select *
  from sess_stats
 order by name;

I emphasized the phrase "for this case study" because the statistic names in the WHERE clause—the names of the statistics we're interested in looking at—change from example to example. In this particular case, we're interested in anything with ga in it (pga and uga), or anything with direct temp, which in Oracle 10g will show us the direct reads and writes against temporary space (how much I/O we did reading and writing to temp).


Note In Oracle9i, direct I/O to temporary space was not labeled as such. We would use a WHERE clause that included and (a.name like '%ga %' or a.name like '%physical % direct%') in it.


When this watch_stat.sql script is run from the SQL*Plus command line, we'll see a listing of the PGA and UGA memory statistics for the session, as well as temporary I/O. Before we do anything in session 151, the session using manual PGA memory management, let's use this script to find out how much memory that session is currently using and how many temporary I/Os we have performed:

ops$tkyte@ORA10G> @watch_stat
6 rows merged.


NAME                                             VALUE       DIFF
------------------------------------------- ---------- ----------
physical reads direct temporary tablespace           0
physical writes direct temporary tablespace          0
session pga memory                              498252
session pga memory max                          498252
session uga memory                              152176
session uga memory max                          152176

So, before we begin we can see that we have about 149KB (152,176/1,024) of data in the UGA and 487KB of data in the PGA. The first question is "How much memory are we using between the PGA and UGA?" That is, are we using 149KB + 487KB of memory, or are we using some other amount? This is a trick question, and one that you cannot answer unless you know whether the monitored session with SID 151 was connected to the database via a dedicated server or a shared server—and even then it might be hard to figure out. In dedicated server mode, the UGA is totally contained within the PGA, in which case we would be consuming 487KB of memory in our process or thread. In shared server mode, the UGA is allocated from the SGA, and the PGA is in the shared server. So, in shared server mode, by the time we get the last row from the preceding query, the shared server process may be in use by someone else. That PGA isn't "ours" anymore, so technically we are using 149KB of memory (except when we are actually running the query, at which point we are using 487KB of memory between the combined PGA and UGA).

Let's now run the first big query in session 151, which is using manual PGA memory management in dedicated server mode. Note that we are using the same script from earlier, so the SQL text matches exactly, thus avoiding the hard parse:


Note Since we haven't set a SORT_AREA_RETAINED_SIZE, its reported value will be zero, but its used value will match SORT_AREA_SIZE.


ops$tkyte@ORA10G> alter session set sort_area_size = 65536;
Session altered.

ops$tkyte@ORA10G> set termout off;
query was executed here
ops$tkyte@ORA10G>  set termout on;

Now if we run our script again in the second session, we'll see something like this. Notice this time that the session xxx memory and session xxx memory max values do not match. The session xxx memory value represents how much memory we are using right now. The session xxx memory max value represents the peak value we reached at some point during our session while processing the query.

ops$tkyte@ORA10G> @watch_stat
6 rows merged.

NAME                                             VALUE       DIFF
------------------------------------------- ---------- ----------
physical reads direct temporary tablespace        2906       2906
physical writes direct temporary tablespace       2906       2906
session pga memory                              498252          0
session pga memory max                          563788      65536
session uga memory                              152176          0
session uga memory max                          217640      65464

6 rows selected.

As you can see, our memory usage went up—we've done some sorting of data. Our UGA temporarily increased from 149KB to 213KB (a 64KB increase) during the processing of our query, and then it shrank back down. To perform our query and the sorting, Oracle allocated a sort area for our session. Additionally, the PGA memory went from 487KB to 551KB, a jump of 64KB. Also, we can see that we did 2,906 writes and reads to and from temp.

By the time we finished our query and exhausted the resultset, we can see that our UGA memory went back down to where it started (we released the sort areas from our UGA) and the PGA shrank back somewhat (note that in Oracle8i and before, you would not expect to see the PGA shrink back at all; this is new with Oracle9i and later).

Let's retry that operation, but play around with the size of our SORT_AREA_SIZE, increasing it to 1MB. We'll log out of the session we're monitoring, log back in, and use the reset_stat.sql script to start over. As the beginning numbers are consistent, I don't display them here—only the final results:

ops$tkyte@ORA10G> alter session set sort_area_size=1048576;
Session altered.

ops$tkyte@ORA10G> set termout off;
query was executed here
ops$tkyte@ORA10G>  set termout on

Now in the other session we can measure our memory usage again:

ops$tkyte@ORA10G> @watch_stat
6 rows merged.

NAME                                             VALUE       DIFF
------------------------------------------- ---------- ----------
physical reads direct temporary tablespace         684        684
physical writes direct temporary tablespace        684        684
session pga memory                              498252          0
session pga memory max                         2398796    1900544
session uga memory                              152176          0
session uga memory max                         1265064    1112888

6 rows selected.

As you can see, our PGA grew considerably this time during the processing of our query. It temporarily grew by about 1,856KB (the reported diff of 1,900,544 bytes), but the amount of physical I/O we had to do to sort this data dropped considerably as well (use more memory, swap to disk less often). We may have avoided a multipass sort as well, a condition that happens when there are so many little sets of sorted data to merge together that Oracle ends up writing the data to temp more than once. Now, let's go to an extreme here:

ops$tkyte@ORA10G> alter session set sort_area_size=1073741820;
Session altered.

ops$tkyte@ORA10G> set termout off;
query was executed here
ops$tkyte@ORA10G> set termout on

Measuring from the other session, we can see the memory used so far:

ops$tkyte@ORA10G> @watch_stat
6 rows merged.

NAME                                             VALUE       DIFF
------------------------------------------- ---------- ----------
physical reads direct temporary tablespace           0          0
physical writes direct temporary tablespace          0          0
session pga memory                              498252          0
session pga memory max                         7445068    6946816
session uga memory                              152176          0
session uga memory max                         7091360    6939184

6 rows selected.

We can observe that even though we allowed for up to 1GB of memory with the SORT_AREA_SIZE, we really used only about 6.6MB. This shows that the SORT_AREA_SIZE setting is an upper bound, not the default and only allocation size. Notice also that we did only one sort again, but this time it was entirely in memory; there was no temporary space on disk used, as evidenced by the lack of physical I/O.

If you run this same test on various versions of Oracle, or perhaps even on different operating systems, you might see different behavior, and I would expect that your numbers in all cases would be a little different from mine. But the general behavior should be the same. In other words, as you increase the permitted sort area size and perform large sorts, the amount of memory used by your session will increase. You might notice the PGA memory going up and down, or it might remain constant over time, as just shown. For example, if you were to execute the previous test in Oracle8i, I am sure that you would notice that PGA memory does not shrink back in size (i.e., the SESSION PGA MEMORY equals the SESSION PGA MEMORY MAX in all cases). This is to be expected, as the PGA is managed as a heap in 8i releases and is created via malloc()-ed memory. In 9i and 10g, new methods attach and release work areas as needed using operating system-specific memory allocation calls.

The important things to remember about using the *_AREA_SIZE parameters are as follows:

  • These parameters control the maximum amount of memory used by a SORT, HASH, and/or BITMAP MERGE operation.
  • A single query may have many operations taking place that use this memory, and multiple sort/hash areas could be created. Remember that you may have many cursors opened simultaneously, each with its own SORT_AREA_RETAINED_SIZE needs. So, if you set the sort area size to 10MB, you could use 10, 100, 1,000, or more megabytes of RAM in your session. These settings are not session limits; rather, they are limits on a single operation, and your session could have many sorts in a single query or many queries open that require a sort.
  • The memory for these areas is allocated on an "as needed" basis. If you set the sort area size to 1GB as we did, it does not mean you will allocate 1GB of RAM. It only means that you have given the Oracle process the permission to allocate that much memory for a sort/hash operation (see the sketch following this list).
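
To make these limits concrete, here is a sketch of how a dedicated server session might opt into manual memory management and size its own work areas (the values are illustrative, not recommendations):

alter session set workarea_size_policy = manual;
-- illustrative: up to 10MB per sort operation, retaining 1MB after each sort
alter session set sort_area_size = 10485760;
alter session set sort_area_retained_size = 1048576;
-- illustrative: up to 20MB per hash operation
alter session set hash_area_size = 20971520;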

Automatic PGA Memory Management

Starting with Oracle9i Release 1, a new way to manage PGA memory was introduced that avoids using the SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and HASH_AREA_SIZE parameters. It was introduced to attempt to address a few issues:

  • Ease of use: Much confusion surrounded how to set the proper *_AREA_SIZE parameters. There was also much confusion over how those parameters actually worked and how memory was allocated.
  • Manual allocation was a "one-size-fits-all" method: Typically as the number of users running similar applications against a database went up, the amount of memory used for sorting/hashing went up linearly as well. If 10 concurrent users with a sort area size of 1MB used 10MB of memory, 100 concurrent users would probably use 100MB, 1,000 would probably use 1,000MB, and so on. Unless the DBA was sitting at the console continually adjusting the sort/hash area size settings, everyone would pretty much use the same values all day long. Consider the previous example, where you saw for yourself how the physical I/O to temp decreased as the amount of RAM we allowed ourselves to use went up. If you run that example for yourself, you will almost certainly see a decrease in response time as the amount of RAM available for sorting increases. Manual allocation fixes the amount of memory to be used for sorting at a more or less constant number, regardless of how much memory is actually available. Automatic memory management allows us to use the memory when it is available; it dynamically adjusts the amount of memory we use based on the workload.
  • Memory control: As a result of the previous point, it was hard, if not impossible, to keep the Oracle instance inside a "box" memory-wise. You could not control the amount of memory the instance was going to use, as you had no real control over the number of simultaneous sorts/hashes taking place. It was far too easy to use more real memory (actual physical free memory) than was available on the machine.

Enter automatic PGA memory management. Here, you first simply set up and size the SGA. The SGA is a fixed-size piece of memory, so you can very accurately see how big it is, and that will be its total size (until and if you change that). You then tell Oracle, "This is how much memory you should try to limit yourself to across all work areas"—work area being a new umbrella term for the sorting and hashing areas you use. Now, you could in theory take a machine with 2GB of physical memory and allocate 768MB of memory to the SGA and 768MB of memory to the PGA, leaving 512MB of memory for the OS and other processes. I say "in theory" because it doesn't work exactly that cleanly, but it's close. Before I discuss why that is true, we'll take a look at how to set up automatic PGA memory management and turn it on.

The process to set this up involves deciding on the proper values for two instance initialization parameters, namely

  • WORKAREA_SIZE_POLICY: This parameter may be set to either MANUAL, which will use the sort area and hash area size parameters to control the amount of memory allocated, or AUTO, in which case the amount of memory allocated will vary based on the current workload present in the database. The default and recommended value is AUTO.
  • PGA_AGGREGATE_TARGET: This parameter controls how much memory the instance should allocate, in total, for all work areas used to sort/hash data. Its default value varies by version and may be set by various tools such as the DBCA. In general, if you are using automatic PGA memory management, you should explicitly set this parameter.

So, assuming that WORKAREA_SIZE_POLICY is set to AUTO, and PGA_AGGREGATE_TARGET has a nonzero value, you will be using the new automatic PGA memory management. You can "turn it on" in your session via the ALTER SESSION command or at the system level via the ALTER SYSTEM command.
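
As a sketch, enabling it instance-wide might look like this (the 256MB target is purely an example value):

-- use automatic PGA memory management with a 256MB systemwide target
alter system set workarea_size_policy = auto;
alter system set pga_aggregate_target = 256m;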


Note Bear in mind the previously discussed caveat that in Oracle9i, shared server connections will not use automatic memory management; rather, they will use the SORT_AREA_SIZE and HASH_AREA_SIZE parameters to decide how much RAM to allocate for various operations. In Oracle 10g and up, automatic PGA memory management is available to both connection types. It is important to properly set the SORT_AREA_SIZE and HASH_AREA_SIZE parameters when using shared server connections with Oracle9i.


So, the entire goal of automatic PGA memory management is to maximize the use of RAM while at the same time not using more RAM than you want. Under manual memory management, this was virtually an impossible goal to achieve. If you set SORT_AREA_SIZE to 10MB, when one user was performing a sort operation that user would use up to 10MB for the sort work area. If 100 users were doing the same, they would use up to 1,000MB of memory. If you had 500MB of free memory, the single user performing a sort by himself could have used much more memory, and the 100 users should have used much less. That is what automatic PGA memory management was designed to do. Under a light workload, memory usage could be maximized; as the load on the system increases and more users perform sort or hash operations, the amount of memory allocated to them would decrease. The goal is to use all available RAM, but not to attempt to use more than physically exists.

Determining How the Memory Is Allocated

Questions that come up frequently are "How is this memory allocated?" and "What will be the amount of RAM used by my session?" These are hard questions to answer for the simple reason that the algorithms for serving out memory under the automatic scheme are not documented and can and will change from release to release. When using things that begin with "A"—for automatic—you lose a degree of control, as the underlying algorithms decide what to do and how to control things.

We can make some observations based on information from MetaLink note 147806.1 (a query sketch against V$PGASTAT, which you can use to watch some of this behavior, follows the list):

  • The PGA_AGGREGATE_TARGET is a goal of an upper limit. It is not a value that is pre-allocated when the database is started up. You can observe this by setting the PGA_AGGREGATE_TARGET to a value much higher than the amount of physical memory you have available on your server. You will not see any large allocation of memory as a result.
  • A serial (nonparallel query) session will use a small percentage of the PGA_AGGREGATE_TARGET, about 5 percent or less. So, if you have set the PGA_AGGREGATE_TARGET to 100MB, you would expect to use no more than about 5MB per work area (e.g., the sort or hash work area). You may well have multiple work areas in your session for multiple queries, or more than one sort/hash operation in a single query, but each work area will be about 5 percent or less of the PGA_AGGREGATE_TARGET.
  • As the workload on your server goes up (more concurrent queries, concurrent users), the amount of PGA memory allocated to your work areas will go down. The database will try to keep the sum of all PGA allocations under the threshold set by PGA_AGGREGATE_TARGET. It would be analogous to having a DBA sit at a console all day, setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters based on the amount of work being performed in the database. We will directly observe this behavior shortly in a test.
  • A parallel query may use up to 30 percent of the PGA_AGGREGATE_TARGET, with each parallel process getting its slice of that 30 percent. That is, each parallel process would be able to use about 0.3 * PGA_AGGREGATE_TARGET / (number of parallel processes).
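
One way to watch the instance working toward its target is to query V$PGASTAT; a minimal sketch follows (the row names shown are standard Oracle 9i/10g statistic names):

select name, value
  from v$pgastat
 where name in ( 'aggregate PGA target parameter',
                 'total PGA allocated',
                 'total PGA inuse',
                 'over allocation count' );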

OK, so how can we observe the different work area sizes being allocated to our session? We can apply the same technique we used earlier in the manual memory management section to observe the memory used by our session and the amount of I/O to temp we performed. The following test was performed on a Red Hat Advanced Server 3.0 Linux machine using Oracle 10.1.0.3 and dedicated server connections. This was a two-CPU Dell PowerEdge with hyperthreading enabled, so it was as if there were four CPUs available. Using reset_stat.sql and a slightly modified version of watch_stat.sql from earlier, I captured the session statistics for a session as well as the total statistics for the instance. The slightly modified watch_stat.sql script captured this information via the MERGE statement:

merge into sess_stats
using
(
select a.name, b.value
  from v$statname a, v$sesstat b
 where a.statistic# = b.statistic#
   and b.sid = &1
   and (a.name like '%ga %'
       or a.name like '%direct temp%')
union all
select 'total: ' || a.name, sum(b.value)
  from v$statname a, v$sesstat b, v$session c
 where a.statistic# = b.statistic#
   and (a.name like '%ga %'
        or a.name like '%direct temp%')
   and b.sid = c.sid
   and c.username is not null
 group by 'total: ' || a.name
) curr_stats
on (sess_stats.name = curr_stats.name)
when matched then
  update set diff = curr_stats.value - sess_stats.value,
             value = curr_stats.value
when not matched then
  insert ( name, value, diff )
  values
  ( curr_stats.name, curr_stats.value, null )
/

I simply added the UNION ALL section to capture the total PGA/UGA and sort writes by summing over all sessions, in addition to the statistics for a single session. I then ran the following SQL*Plus script in that particular session. The table BIG_TABLE had been created beforehand with 50,000 rows in it. I dropped the primary key from this table, so all that remained was the table itself (ensuring that a sort process would have to be performed):

set autotrace traceonly statistics;
select * from big_table order by 1, 2, 3, 4;
set autotrace off

Note The BIG_TABLE table is created as a copy of ALL_OBJECTS with a primary key, and it can have as many or as few rows as you like. The big_table.sql script is documented in the "Setting Up" section at the beginning of this book.


Now, I ran that small query script against a database with a PGA_AGGREGATE_TARGET of 256MB, meaning I wanted Oracle to use up to about 256MB of PGA memory for sorting. I set up another script to be run in other sessions to generate a large sorting load on the machine. This script loops and uses a built-in package, DBMS_ALERT, to see if it should continue processing. If it should, it runs the same big query, sorting the entire BIG_TABLE table. When the simulation was over, a session could signal all of the sorting processes, the load generators, to "stop" and exit. The script used to perform the sort is as follows:

declare
    l_msg  long;
    l_status number;
begin
    dbms_alert.register( 'WAITING' );
    for i in 1 .. 999999 loop
        dbms_application_info.set_client_info( i );
        dbms_alert.waitone( 'WAITING', l_msg, l_status, 0 );
        exit when l_status = 0;
        for x in ( select * from big_table order by 1, 2, 3, 4 )
        loop
            null;
        end loop;
    end loop;
end;
/
exit

The script to stop these processes from running is as follows:

begin
    dbms_alert.signal( 'WAITING', '' );
    commit;
end;

To observe the differing amounts of RAM allocated to the session I was measuring, I initially ran the SELECT in isolation—as the only session. I captured the same six statistics and saved them into another table, along with the count of active sessions. Then I added 25 sessions to the system (i.e., I ran the preceding benchmark script with the loop in 25 new sessions). I waited a short period of time (one minute) for the system to adjust to this new load, and then I created a new session, captured the statistics for it with reset_stat.sql, ran the query that would sort, and then ran watch_stat.sql to capture the differences. I did this repeatedly, for up to 500 concurrent users.

It should be noted that I asked the database instance to do an impossible thing here. As noted previously, based on the first time we ran watch_stat.sql, each connection to Oracle, before even doing a single sort, consumed almost 0.5MB of RAM. At 500 users, we would be very close to the PGA_AGGREGATE_TARGET setting just by having them all logged in, let alone actually doing any work! This drives home the point that the PGA_AGGREGATE_TARGET is just that: a target, not a directive. We can and will exceed this value for various reasons.

Table 4-1 summarizes my findings using approximately 25 user increments.

Table 4-1. PGA Memory Allocation Behavior with Increasing Numbers of Active Sessions, with PGA_AGGREGATE_TARGET Set to 256MB

Active     PGA Used by           PGA in Use by   Writes to Temp by   Reads from Temp by
Sessions   Single Session (MB)   System (MB)     Single Session      Single Session
--------   -------------------   -------------   -----------------   ------------------
       1   7.5                      2               0                   0
      27   7.5                    189               0                   0
      51   4.0                    330             728                 728
      76   4.0                    341             728                 728
     101   3.2                    266             728                 728
     126   1.5                    214             728                 728
     151   1.7                    226             728                 728
     177   1.4                    213             728                 728
     201   1.3                    218             728                 728
     226   1.3                    211             728                 728
     251   1.3                    237             728                 728
     276   1.3                    251             728                 728
     301   1.3                    281             728                 728
     326   1.3                    302             728                 728
     351   1.3                    324             728                 728
     376   1.3                    350             728                 728
     402   1.3                    367             728                 728
     426   1.3                    392             728                 728
     452   1.3                    417             728                 728
     476   1.3                    439             728                 728
     501   1.3                    467             728                 728

Note You might wonder why only 2MB of RAM is reported in use by the system with one active user. It has to do with the way I measured. The simulation would snapshot the statistics of the single session of interest. Next, I would run the big query in that session and then snapshot its statistics again. Finally, I would measure how much PGA was used by the system. By the time I measured, the single session of interest would have already completed and given back some of the PGA it was using to sort. So, the number for PGA used by the system is an accurate measurement of the system's PGA memory at the time it was measured.


As you can see, when I had few active sessions, my sorts were performed entirely in memory. For an active session count of 1 to somewhere less than 50, I was allowed to sort entirely in memory. However, by the time I had 50 users logged in, actively sorting, the database started reining in the amount of memory I was allowed to use at a time. It took a couple of minutes before the amount of PGA being used fell back within acceptable limits (the 256MB request), but it eventually did. The amount of PGA memory allocated to the session dropped from 7.5MB to 4MB to 3.2MB, and eventually down to the area of 1.7 to 1.3MB (remember, parts of that PGA are not for sorting, but are for other operations; just the act of logging in created a 0.5MB PGA). The total PGA in use by the system remained within tolerable limits until somewhere around 300 to 351 users. There I started to exceed the PGA_AGGREGATE_TARGET on a regular basis, and I continued to do so until the end of the test. I gave the database instance an impossible task in this case—the very act of having 350 users, most executing a PL/SQL loop, plus the sort they were all requesting, just did not fit into the 256MB of RAM I had targeted. It simply could not be done. Each session, therefore, used as little memory as possible, but had to allocate as much memory as it needed. By the time I finished this test, 500 active sessions were using a total of 467MB of PGA memory—as little as they could.

You should, however, consider what Table 4-1 would look like under a manual memory management situation. Suppose the SORT_AREA_SIZE had been set to 5MB. The math is very straightforward: each session would be able to perform the sort in RAM (or virtual memory as the machine ran out of real RAM), and thus would consume 6 to 7MB of RAM per session (the amount used without sorting to disk in the previous single-user case). I ran the preceding test again with SORT_AREA_SIZE set to 5MB, and as I went from 1 user to adding 25 at a time, the numbers remained consistent, as shown in Table 4-2.

Table 4-2. PGA Memory Allocation Behavior with Increasing Numbers of Active Sessions, with SORT_AREA_SIZE Set to 5MB (Manual Memory Management)

Active     PGA Used by           PGA in Use by   Writes to Temp by   Reads from Temp by
Sessions   Single Session (MB)   System (MB)     Single Session      Single Session
--------   -------------------   -------------   -----------------   ------------------
       1   6.4                       5            728                 728
      26   6.4                     137            728                 728
      51   6.4                     283            728                 728
      76   6.4                     391            728                 728
     102   6.4                     574            728                 728
     126   6.4                     674            728                 728
     151   6.4                     758            728                 728
     176   6.4                     987            728                 728
     202   6.4                     995            728                 728
     226   6.4                    1227            728                 728
     251   6.4                    1383            728                 728
     277   6.4                    1475            728                 728
     302   6.4                    1548            728                 728

Had I been able to complete the test (I have 2GB of real memory on this server and my SGA was 600MB; by the time I got to 325 users, the machine was paging and swapping to the point where it was impossible to continue), at 500 users I would have allocated around 2,750MB of RAM! So, the DBA would probably not set the SORT_AREA_SIZE to 5MB on this system, but rather to about 0.5MB, in an attempt to keep the maximum PGA usage at a bearable level at peak. Now at 500 users I would have had about 500MB of PGA allocated, perhaps similar to what we observed with automatic memory management—but at times when there were fewer users, we would still have written to temp rather than performing the sort in memory. In fact, when running the preceding test with a SORT_AREA_SIZE of 0.5MB, we would observe the data in Table 4-3.

Table 4-3. PGA Memory Allocation Behavior with Increasing Numbers of Active Sessions, with SORT_AREA_SIZE Set to 0.5MB (Manual Memory Management)

Active     PGA Used by           PGA in Use by   Writes to Temp by   Reads from Temp by
Sessions   Single Session (MB)   System (MB)     Single Session      Single Session
--------   -------------------   -------------   -----------------   ------------------
       1   1.2                       1            728                 728
      26   1.2                      29            728                 728
      51   1.2                      57            728                 728
      76   1.2                      84            728                 728
     101   1.2                     112            728                 728
     126   1.2                     140            728                 728
     151   1.2                     167            728                 728
     176   1.2                     194            728                 728
     201   1.2                     222            728                 728
     226   1.2                     250            728                 728

This represents a very predictable—but suboptimal—use of memory as the workload increases or decreases over time. Automatic PGA memory management was designed exactly to allow the small community of users to use as much RAM as possible when it was available, backing off on that allocation as the load increased, and increasing the amount of RAM allocated for individual operations as the load decreased.

Using PGA_AGGREGATE_TARGET to Control Memory Allocation

Earlier, I wrote that "in theory" we can use the PGA_AGGREGATE_TARGET to control the overall amount of PGA memory used by the instance. We saw in the last example that this is not a hard limit, however. The instance will attempt to stay within the bounds of the PGA_AGGREGATE_TARGET, but if it cannot, it will not stop processing; rather, it will just be forced to exceed that threshold.

Another reason this limit is "in theory" is because the work areas, while a large contributor to PGA memory, are not the only contributor to PGA memory. Many things contribute to your PGA memory allocation, and only the work areas are under the control of the database instance. If you create and execute a PL/SQL block of code that fills in a large array with data in dedicated server mode where the UGA is in the PGA, Oracle cannot do anything but allow you to do it.

Consider the following quick example. We'll create a package that can hold some persistent (global) data in the server:

ops$tkyte@ORA10G> create or replace package demo_pkg
  2  as
  3          type array is table of char(2000) index by binary_integer;
  4          g_data array;
  5  end;
  6  /
Package created.

Now we'll measure the amount of memory our session is currently using in the PGA/UGA (I was using dedicated server in this example, so the UGA is a subset of the PGA memory):

ops$tkyte@ORA10G> select a.name, to_char(b.value, '999,999,999') value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and a.name like '%ga memory%';


NAME                           VALUE
------------------------------ ------------
session uga memory                1,212,872
session uga memory max            1,212,872
session pga memory                1,677,900
session pga memory max            1,677,900

So, initially we are using about 1.6MB of PGA memory in our session (as a result of compiling a PL/SQL package, running this query, etc.). Now, we'll run our query against BIG_TABLE again using the same 256MB PGA_AGGREGATE_TARGET (this was done in an otherwise idle instance; we are the only session requiring memory right now):

ops$tkyte@ORA10GR1> set autotrace traceonly statistics;
ops$tkyte@ORA10GR1> select * from big_table order by 1,2,3,4;
50000 rows selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        721  consistent gets
          0  physical reads
          0  redo size
    2644246  bytes sent via SQL*Net to client
      37171  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      50000  rows processed
ops$tkyte@ORA10GR1> set autotrace off

As you can see, the sort was done entirely in memory, and in fact if we peek at our session's PGA/UGA usage, we can see how much we used:

ops$tkyte@ORA10GR1> select a.name, to_char(b.value, '999,999,999') value
  2   from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4    and a.name like '%ga memory%';

NAME                           VALUE
------------------------------ ------------
session uga memory                1,212,872
session uga memory max            7,418,680
session pga memory                1,612,364
session pga memory max            7,838,284

The same 7.5MB of RAM we observed earlier. Now, we will proceed to fill up that CHAR array we have in the package (a CHAR datatype is blank-padded so each of these array elements is exactly 2,000 characters in length):

ops$tkyte@ORA10G> begin
  2          for i in 1 .. 100000
  3          loop
  4                  demo_pkg.g_data(i) := 'x';
  5          end loop;
  6  end;
  7  /
PL/SQL procedure successfully completed.

Upon measuring our session's current PGA utilization after that, we find something similar to the following:

ops$tkyte@ORA10GR1> select a.name, to_char(b.value, '999,999,999') value
  2   from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4    and a.name like '%ga memory%';

NAME                           VALUE
------------------------------ ------------
session uga memory              312,952,440
session uga memory max          312,952,440
session pga memory              313,694,796
session pga memory max          313,694,796

Now, that is memory allocated in the PGA that the database itself cannot control. We already exceeded the PGA_AGGREGATE_TARGET and there is quite simply nothing the database can do about it—it would have to fail our request if it did anything, and it will do that only when the OS reports back that there is no more memory to give. If we wanted, we could allocate more space in that array and place more data in it, and the database would just have to do it for us.

However, the database is aware of what we have done. It does not ignore the memory it cannot control; rather, it recognizes that the memory is being used and backs off the size of memory allocated for work areas accordingly. So if we rerun the same sort query, we see that this time we sorted to disk—the database did not give us the 7MB or so of RAM needed to do this in memory since we had already exceeded the PGA_AGGREGATE_TARGET:

ops$tkyte@ORA10GR1> set autotrace traceonly statistics;
ops$tkyte@ORA10GR1> select * from big_table order by 1,2,3,4;
50000 rows selected.

Statistics
----------------------------------------------------------
          6 recursive calls
          2 db block gets
        721 consistent gets
        728 physical reads
          0 redo size
    2644246 bytes sent via SQL*Net to client
      37171 bytes received via SQL*Net from client
       3335 SQL*Net roundtrips to/from client
          0 sorts (memory)
          1 sorts (disk)
      50000 rows processed
ops$tkyte@ORA10GR1> set autotrace off

So, because some PGA memory is outside of Oracle's control, it is easy for us to exceed the PGA_AGGREGATE_TARGET simply by allocating lots of really large data structures in our PL/SQL code. I am not recommending you do that by any means—I'm just pointing out that the PGA_AGGREGATE_TARGET is more of a request than a hard limit.
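
If you do want to give such memory back without ending the session, you can empty the collection and ask Oracle to release what is no longer needed. Here is a sketch using the supplied DBMS_SESSION package (the procedure is a standard supplied one; how much memory the OS actually reclaims varies by platform and release):

begin
    -- empty the PL/SQL collection from the earlier example
    demo_pkg.g_data.delete;
    -- ask Oracle to release unused UGA/PGA memory back to the OS
    dbms_session.free_unused_user_memory;
end;
/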

Choosing Between Manual and Auto Memory Management

So, which method should you use: manual or automatic? My preference is to use the automatic PGA memory management by default.


Caution I'll repeat this from time to time in the book: please do not make any changes to a production system—a live system—without first testing for any side effects. For example, please do not read this chapter, check your system, and find you are using manual memory management, and then just turn on automatic memory management. Query plans may change, and performance may be impacted. One of three things could happen:

  • Things run exactly the same.
  • Things run better than they did before.
  • Things run much worse than they did before.

Exercise caution before making changes; test the proposed change first.


One of the most perplexing things for a DBA can be setting the individual parameters, especially parameters such as SORT|HASH_AREA_SIZE and so on. Many times, I see systems running with incredibly small values for these parameters—values so small that system performance is massively impacted in a negative way. This is probably a result of the fact that the default values are very small themselves: 64KB for sorting and 128KB for hashing. There is a lot of confusion over how big or small these values should be. Not only that, but the values you would like to use for them might vary over time, as the day goes by. At 8:00 am, with two users logged in, a 50MB sort area size might be reasonable. However, at 12:00 pm with 500 users, 50MB might not be appropriate. This is where the WORKAREA_SIZE_POLICY = AUTO setting and the corresponding PGA_AGGREGATE_TARGET come in handy. Setting the PGA_AGGREGATE_TARGET, the amount of memory you would like Oracle to feel free to use to sort and hash, is conceptually easier than trying to figure out the perfect SORT|HASH_AREA_SIZE, especially since there isn't a perfect value for these parameters; the perfect value varies by workload.

Historically, the DBA configured the amount of memory used by Oracle by setting the size of the SGA (the buffer cache; the log buffer; and the Shared, Large, and Java pools). The remaining memory on the machine would then be used by the dedicated or shared servers in the PGA region. The DBA had little control over how much of this memory would or would not be used. She could set the SORT_AREA_SIZE, but if there were 10 concurrent sorts, then Oracle could use as much as 10 * SORT_AREA_SIZE bytes of RAM. If there were 100 concurrent sorts, then Oracle would use 100 * SORT_AREA_SIZE bytes; for 1,000 concurrent sorts, 1,000 * SORT_AREA_SIZE; and so on. Couple that with the fact that other things go into the PGA, and you really don't have good control over the maximal use of PGA memory on the system.

What you would like to have happen is for this memory to be used differently as the memory demands on the system grow and shrink. The more users, the less RAM each should use. The fewer users, the more RAM each should use. Setting WORKAREA_SIZE_POLICY = AUTO is just such a way to achieve this. The DBA specifies a single size now, the PGA_AGGREGATE_TARGET, or the maximum amount of PGA memory that the database should strive to use. Oracle will distribute this memory over the active sessions as it sees fit. Further, with Oracle9i Release 2 and up, there is even a PGA advisory (part of Statspack, available via a V$ dynamic performance view, and visible in Enterprise Manager), much like the buffer cache advisor. It will tell you over time what the optimal PGA_AGGREGATE_TARGET for your system is to minimize physical I/O to your temporary tablespaces. You can use this information either to dynamically change the PGA size online (if you have sufficient RAM) or to decide whether you might need more RAM on your server to achieve optimal performance.
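
For example, you could query the advisory directly. A sketch against the V$PGA_TARGET_ADVICE view (available in Oracle9i Release 2 and up) might look like this:

select pga_target_for_estimate/1024/1024 target_mb,
       estd_pga_cache_hit_percentage cache_hit_pct,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;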

Are there times, however, when you won't want to use it? Absolutely, and fortunately they seem to be the exception and not the rule. The automatic memory management was designed to be multiuser "fair." In anticipation of additional users joining the system, the automatic memory management will limit the amount of memory allocated as a percentage of the PGA_AGGREGATE_TARGET. But what happens when you don't want to be fair, when you know that you should get all of the memory available? Well, that would be the time to use the ALTER SESSION command to disable automatic memory management in your session (leaving it in place for all others) and to manually set your SORT|HASH_AREA_SIZE as needed. For example, that large batch process that takes place at 2:00 am and does tremendously large hash joins, some index builds, and the like? It should be permitted to use all of the resources on the machine. It does not want to be "fair" about memory use—it wants it all, as it knows it is the only thing happening in the database right now. That batch job can certainly issue the ALTER SESSION commands and make use of all resources available.
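
A sketch of what such a batch job might issue before its big operations follows (the sizes are illustrative; pick values appropriate to your machine):

-- switch this session to manual management for the duration of the batch run
alter session set workarea_size_policy = manual;
-- illustrative values: allow very large in-memory sorts and hash joins
alter session set sort_area_size = 1073741820;
alter session set hash_area_size = 1073741820;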

So, in short, I prefer to use automatic PGA memory management for end user sessions—for the applications that run day to day against my database. Manual memory management makes sense for large batch jobs that run during time periods when they are the only activities in the database.

PGA and UGA Wrap-Up

So far, we have looked at two memory structures: the PGA and the UGA. You should understand now that the PGA is private to a process. It is the set of variables that an Oracle dedicated or shared server needs to have independent of a session. The PGA is a "heap" of memory in which other structures may be allocated. The UGA is also a heap of memory in which various session-specific structures may be defined. The UGA is allocated from the PGA when you use a dedicated server to connect to Oracle, and from the SGA under a shared server connection. This implies that when using a shared server, you must size your SGA's Large pool to have enough space in it to cater for every possible user that will ever connect to your database concurrently. So, the SGA of a database supporting shared server connections is generally much larger than the SGA for a similarly configured, dedicated server-only database. We'll cover the SGA in more detail next.
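
For example, a DBA anticipating shared server connections might size the Large pool explicitly with something like the following (the 300MB figure is hypothetical; size it for your expected concurrent session memory):

-- hypothetical value; takes effect at the next restart
alter system set large_pool_size = 300m scope=spfile;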

The System Global Area

Every Oracle instance has one big memory structure referred to as the System Global Area (SGA). This is a large, shared memory structure that every Oracle process will access at one point or another. It will vary in size from a few megabytes on small test systems, to hundreds of megabytes on medium to large systems, up to many gigabytes for really big systems.

On a UNIX operating system, the SGA is a physical entity that you can "see" from the OS command line. It is physically implemented as a shared memory segment—a stand-alone piece of memory to which processes may attach. It is possible to have an SGA on a system without having any Oracle processes; the memory stands alone. It should be noted, however, that if you have an SGA without any Oracle processes, this is an indication that the database crashed in some fashion. It is an unusual situation, but it can happen. This is what an SGA "looks like" on Red Hat Linux:

[tkyte@localhost tkyte]$ ipcs -m | grep ora
0x99875060 2031619    ora10g    660        538968064 15
0x0d998a20 1966088    ora9ir2   660        117440512 45
0x6b390abc 1998857    ora9ir1   660        130560000 50

Three SGAs are represented here: one owned by the OS user ora10g, another by the OS user ora9ir2, and the third by the OS user ora9ir1. They are about 512MB, 112MB, and 124MB, respectively.

On Windows, you really cannot see the SGA as a distinct entity the way you can in UNIX/Linux. Because on the Windows platform Oracle executes as a single process with a single address space, the SGA is allocated as private memory to the oracle.exe process. If you use the Windows Task Manager or some other performance tool, you can see how much memory oracle.exe has allocated, but you cannot see which part is the SGA versus any other piece of allocated memory.

Within Oracle itself, you can see the SGA regardless of platform, using another magic V$ view called V$SGASTAT. It might look as follows (note that this code does not come from the preceding system; it's from a system with all features configured to enable viewing of all pools available):

ops$tkyte@ORA10G> compute sum of bytes on pool
ops$tkyte@ORA10G> break on pool skip 1
ops$tkyte@ORA10G> select pool, name, bytes
  2 from v$sgastat
  3 order by pool, name;

POOL         NAME                                BYTES
------------ ------------------------------ ----------
java pool    free memory                      16777216
************                                ----------
sum                                           16777216

large pool   PX msg pool                         64000
             free memory                      16713216
************                                ----------
sum                                           16777216

shared pool  ASH buffers                       2097152
             FileOpenBlock                      746704
             KGLS heap                          777516
             KQR L SO                            29696
             KQR M PO                           599576
             KQR M SO                            42496
...
             sql area                          2664728
             table definiti                        280
             trigger defini                       1792
             trigger inform                       1944
             trigger source                        640
             type object de                     183804
************                                ----------
sum                                          352321536

streams pool free memory                      33554432
************                                ----------
sum                                           33554432

             buffer_cache                   1157627904
             fixed_sga                          779316
             log_buffer                         262144
************                                ----------
sum                                         1158669364


43 rows selected.

The SGA is broken up into various pools:

  • Java pool: The Java pool is a fixed amount of memory allocated for the JVM running in the database. In Oracle 10g, the Java pool may be resized online while the database is up and running.
  • Large pool: The Large pool is used by shared server connections for session memory, by parallel execution features for message buffers, and by RMAN backup for disk I/O buffers. This pool is resizable online in both Oracle 10g and 9i Release 2.
  • Shared pool: The Shared pool contains shared cursors, stored procedures, state objects, dictionary caches, and many dozens of other bits of data. This pool is resizable online in both Oracle 10g and 9i.
  • Streams pool: This is a pool of memory used exclusively by Oracle Streams, a data-sharing tool within the database. This pool is new in Oracle 10g and is resizable online. In the event the Streams pool is not configured and you use the Streams functionality, Oracle will use up to 10 percent of the Shared pool for streams memory.
  • The "Null" pool: This one doesn't really have a name. It is the memory dedicated to block buffers (cached database blocks), the redo log buffer, and a "fixed SGA" area.

A typical SGA might look as shown in Figure 4-1.


Figure 4-1. Typical SGA

The parameters that have the greatest effect on the overall size of the SGA are as follows (a hypothetical sizing sketch using several of them follows the list):

  • JAVA_POOL_SIZE: Controls the size of the Java pool.
  • SHARED_POOL_SIZE: Controls the size of the Shared pool, to some degree.
  • LARGE_POOL_SIZE: Controls the size of the Large pool.
  • DB_*_CACHE_SIZE: Eight of these CACHE_SIZE parameters control the sizes of the various buffer caches available.
  • LOG_BUFFER: Controls the size of the redo buffer, to some degree.
  • SGA_TARGET: Used with automatic SGA memory management in Oracle 10g and above.
  • SGA_MAX_SIZE: Used to control the maximum size to which the SGA can be resized while the database is up and running.
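
To see how several of these fit together, here is a hypothetical manual-sizing sketch (the parameter names are real; every value is made up, and SCOPE=SPFILE defers the changes until the next restart):

-- all values here are illustrative, not recommendations
alter system set shared_pool_size = 256m scope=spfile;
alter system set large_pool_size  = 16m  scope=spfile;
alter system set java_pool_size   = 16m  scope=spfile;
alter system set db_cache_size    = 512m scope=spfile;
alter system set log_buffer       = 1048576 scope=spfile;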

In Oracle9i, the various SGA components must be manually sized by the DBA but, starting in Oracle 10g, there is a new option to consider: automatic SGA memory management, whereby the database instance will allocate and reallocate the various SGA components at runtime, in response to workload conditions. When using the automatic memory management with Oracle 10g, it is a matter of simply setting the SGA_TARGET parameter to the desired SGA size, leaving out the other SGA-related parameters altogether. The database instance will take it from there, allocating memory to the various pools as needed and even taking memory away from one pool to give to another over time.

Regardless of whether you are using automatic or manual memory management, you will find that memory is allocated to the various pools in units called granules. A single granule is an area of memory either 4MB, 8MB, or 16MB in size. The granule is the smallest unit of allocation, so if you ask for a Java pool of 5MB and your granule size is 4MB, Oracle will actually allocate 8MB to the Java pool (8 being the smallest number greater than or equal to 5 that is a multiple of the granule size of 4). The size of a granule is determined by the size of your SGA (this sounds recursive to a degree, as the size of the SGA is dependent on the granule size). You can view the granule sizes used for each pool by querying V$SGA_DYNAMIC_COMPONENTS. In fact, we can use this view to see how the total SGA size might affect the size of the granules:

sys@ORA10G> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 576M

sys@ORA10G> select component, granule_size from v$sga_dynamic_components;

COMPONENT                 GRANULE_SIZE
------------------------- ------------
shared pool                    4194304
large pool                     4194304
java pool                      4194304
streams pool                   4194304
DEFAULT buffer cache           4194304
KEEP buffer cache              4194304
RECYCLE buffer cache           4194304
DEFAULT 2K buffer cache        4194304
DEFAULT 4K buffer cache        4194304
DEFAULT 8K buffer cache        4194304
DEFAULT 16K buffer cache       4194304
DEFAULT 32K buffer cache       4194304
OSM Buffer Cache               4194304

13 rows selected.

In this example, I used automatic SGA memory management and controlled the size of the SGA via the single parameter SGA_TARGET. When my SGA size is under about 1GB, the granule is 4MB. When the SGA size is increased to some threshold over 1GB (it will vary slightly from operating system to operating system and even from release to release), I see an increased granule size:

sys@ORA10G> alter system set sga_target = 1512m scope=spfile;
System altered.

sys@ORA10G> startup force
ORACLE instance started.

Total System Global Area 1593835520 bytes
Fixed Size                   779316 bytes
Variable Size             401611724 bytes
Database Buffers         1191182336 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.
sys@ORA10G> select component, granule_size from v$sga_dynamic_components;

COMPONENT                 GRANULE_SIZE
------------------------- ------------
shared pool                   16777216
large pool                    16777216
java pool                     16777216
streams pool                  16777216
DEFAULT buffer cache          16777216
KEEP buffer cache             16777216
RECYCLE buffer cache          16777216
DEFAULT 2K buffer cache       16777216
DEFAULT 4K buffer cache       16777216
DEFAULT 8K buffer cache       16777216
DEFAULT 16K buffer cache      16777216
DEFAULT 32K buffer cache      16777216
OSM Buffer Cache              16777216

13 rows selected.

As you can see, at 1.5GB of SGA, my pools will be allocated using 16MB granules, so any given pool size will be some multiple of 16MB.

With this in mind, let's look at each of the major SGA components in turn.

Fixed SGA

The fixed SGA is a component of the SGA that varies in size from platform to platform and from release to release. It is "compiled" into the Oracle binary itself at installation time (hence the name "fixed"). The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control, and it is generally very small. Think of this area as a "bootstrap" section of the SGA—something Oracle uses internally to find the other bits and pieces of the SGA.
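
You can see the fixed SGA reported directly by the V$SGA view. The figures below are simply the ones from the startup banner shown earlier; yours will vary from platform to platform and release to release:

sys@ORA10G> select * from v$sga;

NAME                      VALUE
-------------------- ----------
Fixed Size               779316
Variable Size         401611724
Database Buffers     1191182336
Redo Buffers             262144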

Redo Buffer

The redo buffer is where data that needs to be written to the online redo logs will be cached temporarily, before it is written to disk. Since a memory-to-memory transfer is much faster than a memory-to-disk transfer, use of the redo log buffer can speed up database operation. The data will not reside in the redo buffer for very long. In fact, LGWR initiates a flush of this area in one of the following scenarios:

  • Every three seconds
  • Whenever someone commits
  • When LGWR is asked to switch log files
  • When the redo buffer gets one-third full or contains 1MB of cached redo log data

For these reasons, it will be a very rare system that will benefit from a redo buffer of more than a couple of megabytes in size. A large system with lots of concurrent transactions could benefit somewhat from large redo log buffers because while LGWR (the process responsible for flushing the redo log buffer to disk) is writing a portion of the log buffer, other sessions could be filling it up. In general, a long-running transaction that generates a lot of redo log will benefit the most from a larger than normal log buffer, as it will be continuously filling up part of the redo log buffer while LGWR is busy writing out some of it. The larger and longer the transaction, the more benefit it could receive from a generous log buffer.

The default size of the redo buffer, as controlled by the LOG_BUFFER parameter, is whatever is the greater of 512KB and (128 * number of CPUs)KB. The minimum size of this area is OS dependent. If you would like to find out what that is, just set your LOG_BUFFER to 1 byte and restart your database. For example, on my Red Hat Linux instance I see the following:

sys@ORA10G> alter system set log_buffer=1 scope=spfile;
System altered.

sys@ORA10G> startup force
ORACLE instance started.

Total System Global Area 1593835520 bytes
Fixed Size                   779316 bytes
Variable Size             401611724 bytes
Database Buffers         1191182336 bytes
Redo Buffers                 262144 bytes
Database mounted.
Database opened.
sys@ORA10G> show parameter log_buffer

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_buffer                           integer     262144

The smallest log buffer I can really have, regardless of my settings, is going to be 256KB on this system.

Block Buffer Cache

So far, we have looked at relatively small components of the SGA. Now we are going to look at one that is possibly huge in size. The block buffer cache is where Oracle stores database blocks before writing them to disk and after reading them in from disk. This is a crucial area of the SGA for us. Make it too small and our queries will take forever to run. Make it too big and we'll starve other processes (e.g., we won't leave enough room for a dedicated server to create its PGA, and we won't even get started).

In earlier releases of Oracle, there was a single block buffer cache, and all blocks from any segment went into this single area. Starting with Oracle 8.0, we had three places to store cached blocks from individual segments in the SGA:

  • Default pool: The location where all segment blocks are normally cached. This is the original—and previously only—buffer pool.
  • Keep pool: An alternate buffer pool where by convention you would assign segments that were accessed fairly frequently, but still got aged out of the default buffer pool due to other segments needing space.
  • Recycle pool: An alternate buffer pool where by convention you would assign large segments that you access very randomly, and which would therefore cause excessive buffer flushing but would offer no benefit, because by the time you wanted the block again it would have been aged out of the cache. You would separate these segments out from the segments in the Default and Keep pools so that they would not cause those blocks to age out of the cache.

Note that in the Keep and Recycle pool descriptions I used the phrase "by convention." There is nothing in place to ensure that you use either the Keep pool or the Recycle pool in the fashion described. In fact, the three pools manage blocks in a mostly identical fashion; they do not have radically different algorithms for aging or caching blocks. The goal here was to give the DBA the ability to segregate segments into hot, warm, and "do not care to cache" areas. The theory was that objects in the Default pool would be hot enough (i.e., used enough) to warrant staying in the cache all by themselves. The cache would keep them in memory since they were very popular blocks. You might have had some segments that were fairly popular, but not really hot; these would be considered the warm blocks. These segments' blocks could get flushed from the cache to make room for some blocks you used infrequently (the "do not care to cache" blocks). To keep these warm segments' blocks cached, you could do one of the following (a syntax sketch follows the list):

  • Assign these segments to the Keep pool, in an attempt to let the warm blocks stay in the buffer cache longer.
  • Assign the "do not care to cache" segments to the Recycle pool, keeping the Recycle pool fairly small so as to let the blocks come into the cache and leave the cache rapidly (decrease the overhead of managing them all).
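
By way of a quick sketch, segments are assigned to these pools via the BUFFER_POOL attribute of the segment's storage clause (the table name T here is hypothetical):

ops$tkyte@ORA10G> alter table t storage ( buffer_pool keep );
Table altered.

From then on, blocks of that segment are cached in the Keep pool; BUFFER_POOL RECYCLE works the same way for the Recycle pool.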

This increased the management work the DBA had to perform, as there were three caches to think about, size, and assign objects to. Remember also that there is no sharing between them, so if the Keep pool has lots of unused space, it won't give it to the overworked Default or Recycle pool. All in all, these pools were generally regarded as a very fine, low-level tuning device, only to be used after most other tuning alternatives had been explored (if I could rewrite a query to do one-tenth the I/O rather than set up multiple buffer pools, that would be my choice!).

Starting in Oracle9i, the DBA had up to four more optional caches, the db_Nk_caches, to consider in addition to the Default, Keep, and Recycle pools. These caches were added in support of multiple blocksizes in the database. Prior to Oracle9i, a database would have a single blocksize (typically 2KB, 4KB, 8KB, 16KB, or 32KB). Starting with Oracle9i, a database can have a default blocksize, which is the size of the blocks stored in the Default, Keep, or Recycle pool, as well as up to four nondefault blocksizes, as explained in Chapter 3.

The blocks in these buffer caches are managed in the same way as the blocks in the original Default pool—there are no special algorithm changes for them either. Let's now move on to cover how the blocks are managed in these pools.

Managing Blocks in the Buffer Cache

For simplicity, assume in this discussion that there is just a single Default pool. Because the other pools are managed in the same way, we need only discuss one of them.

The blocks in the buffer cache are basically managed in a single place with two different lists pointing at them:

  • The dirty list of blocks that need to be written by the database block writer (DBWn; we'll take a look at that process a little later)
  • A list of nondirty blocks

The list of nondirty blocks used to be a Least Recently Used (LRU) list in Oracle 8.0 and before. Blocks were listed in order of use. The algorithm has been modified slightly in Oracle8i and in later versions. Instead of maintaining the list of blocks in some physical order, Oracle employs a touch count algorithm, which effectively increments a counter associated with a block as you hit it in the cache. This count is not incremented every time you hit the block, but about once every three seconds—if you hit it continuously. You can see this algorithm at work in one of the truly magic sets of tables: the X$ tables. The X$ tables are wholly undocumented by Oracle, but information about them leaks out from time to time.

The X$BH table shows information about the blocks in the block buffer cache (which offers more information than the documented V$BH view). Here, we can see the touch count get incremented as we hit blocks. We can run the following query against that view to find the five "currently hottest blocks" and join that information to the DBA_OBJECTS view to see what segments they belong to. The query orders the rows in X$BH by the TCH (touch count) column and keeps the first five. Then we join the X$BH information to DBA_OBJECTS by X$BH.OBJ to DBA_OBJECTS.DATA_OBJECT_ID:

sys@ORA10G> select tch, file#, dbablk,
  2         case when obj = 4294967295
  3              then 'rbs/compat segment'
  4              else (select max( '('||object_type||') ' ||
  5                                owner || '.' || object_name ) ||
  6                           decode( count(*), 1, '', ' maybe!' )
  7                      from dba_objects
  8                        where data_object_id = X.OBJ )
  9         end what
 10   from (
 11 select tch, file#, dbablk, obj
 12   from x$bh
 13  where state <> 0
 14  order by tch desc
 15        ) x
 16  where rownum <= 5
 17 /

       TCH      FILE#     DBABLK WHAT
---------- ---------- ---------- ----------------------------------------
     51099          1       1434 (TABLE) SYS.JOB$
     49780          1       1433 (TABLE) SYS.JOB$
     48526          1       1450 (INDEX) SYS.I_JOB_NEXT
     11632          2         57 rbs/compat segment
     10241          1       1442 (INDEX) SYS.I_JOB_JOB

Note (2^32 - 1) or 4,294,967,295 is a magic number used to denote "special" blocks. If you would like to understand what the block is associated with, use the query select * from dba_extents where file_id = FILE# and block_id <= DBABLK and block_id+blocks-1 >= DBABLK.


You might be asking what is meant by the 'maybe!' and the use of MAX() in the preceding scalar subquery. This is due to the fact that DATA_OBJECT_ID is not a "primary key" in the DBA_OBJECTS view as evidenced by the following:

sys@ORA10G> select data_object_id, count(*)
  2    from dba_objects
  3   where data_object_id is not null
  4   group by data_object_id
  5  having count(*) > 1;

DATA_OBJECT_ID  COUNT(*)
-------------- ----------
             2         17
             6          3
             8          3
            10          3
            29          3
           161          3
           200          3
           210          2
           294          7
           559          2

10 rows selected.

This is due to clusters (discussed in Chapter 10), which may contain multiple tables. Therefore, when joining from X$BH to DBA_OBJECTS to print out a segment name, we would technically have to list the names of all of the objects in the cluster, as a database block does not always belong to a single table.

We can even watch as Oracle increments the touch count on a block that we query repeatedly. We will use the magic table DUAL in this example—we know it is a one row, one column table. We need to find out the block information for that single row. The built-in DBMS_ROWID package is good for getting that. Additionally, since we query the DUMMY column from DUAL, we are making Oracle actually read the real DUAL table from the buffer cache, not the "virtual" DUAL table enhancement of Oracle 10g.


Note Prior to Oracle 10g, querying DUAL would incur a full table scan of a real table named DUAL stored in the data dictionary. If you set autotrace on and query SELECT DUMMY FROM DUAL, you will observe some I/O in all releases of Oracle (consistent gets). In 9i and before, if you query SELECT SYSDATE FROM DUAL or assign variable := SYSDATE in PL/SQL, you will also see real I/O occur. However, in Oracle 10g, that SELECT SYSDATE is recognized as not needing to actually query the DUAL table (since you are not asking for the column or rowid from DUAL) and is done in a manner similar to calling a function. Therefore, DUAL does not undergo a full table scan—just SYSDATE is returned to the application. This small change can dramatically decrease the number of consistent gets performed by a system that uses DUAL heavily.


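As a quick aside, here is one way to retrieve that block information using two documented DBMS_ROWID functions (in many databases the relative file number returned will match the FILE# seen in X$BH, though not always; the values you get back will of course vary):

sys@ORA9IR2> select dbms_rowid.rowid_relative_fno(rowid) file#,
  2         dbms_rowid.rowid_block_number(rowid) dbablk
  3    from dual;

The FILE# and DBABLK values returned are the ones to look for in X$BH.
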
So every time we run the following query, we should be hitting the real DUAL table:

sys@ORA9IR2> select tch, file#, dbablk, DUMMY
  2    from x$bh, (select dummy from dual)
  3   where obj = (select data_object_id
  4                  from dba_objects
  5                 where object_name = 'DUAL'
  6                   and data_object_id is not null)
  7 /

       TCH      FILE#     DBABLK D
---------- ---------- ---------- -
         1          1       1617 X
         0          1       1618 X

sys@ORA9IR2> exec dbms_lock.sleep(3.2);
PL/SQL procedure successfully completed.

sys@ORA9IR2> /

       TCH      FILE#     DBABLK D
---------- ---------- ---------- -
         2          1       1617 X
         0          1       1618 X

sys@ORA9IR2> exec dbms_lock.sleep(3.2);
PL/SQL procedure successfully completed.

sys@ORA9IR2> /

       TCH      FILE#     DBABLK D
---------- ---------- ---------- -
         3          1       1617 X
         0          1       1618 X

sys@ORA9IR2> exec dbms_lock.sleep(3.2);
PL/SQL procedure successfully completed.

sys@ORA9IR2> /

       TCH      FILE#     DBABLK D
---------- ---------- ---------- -
         4          1       1617 X
         0          1       1618 X

I expect output to vary by Oracle release—you may well see more than two rows returned. You might observe TCH not getting incremented every time. On a multiuser system, the results will be even more unpredictable. Oracle will attempt to increment the TCH once every three seconds (there is a TIM column that shows the last update time to the TCH column), but it is not considered important that the number be 100 percent accurate, just that it be close. Also, Oracle will intentionally "cool" blocks and decrement the TCH count over time. So, if you run this query on your system, be prepared to see potentially different results.

So, in Oracle8i and above, a block buffer no longer moves to the head of the list as it used to; rather, it stays where it is in the list and has its touch count incremented. Blocks will naturally tend to "move" in the list over time, however. I put the word "move" in quotes because the block doesn't physically move; rather, multiple lists are maintained that point to the blocks, and the block will "move" from list to list. For example, modified blocks are pointed to by a dirty list (to be written to disk by DBWn). Also, as buffers are reused over time (when the buffer cache is effectively full and a buffer holding a block with a small touch count is freed to hold a newly read block), that buffer is "placed back" into approximately the middle of the list with the new data block.

The whole algorithm used to manage these lists is fairly complex and changes subtly from release to release of Oracle as improvements are made. The actual full details are not relevant to us as developers, beyond the fact that heavily used blocks will be cached, and blocks that are not used heavily will not be cached for long.

Multiple Blocksizes

Starting in Oracle9i, you can have multiple database blocksizes in the same database. Previously, all blocks in a single database were the same size and in order to have a different blocksize, you had to rebuild the entire database. Now you can have a mixture of the "default" blocksize (the blocksize you used when you initially created the database; the size that is used for the SYSTEM and all TEMPORARY tablespaces) and up to four other blocksizes. Each unique blocksize must have its own buffer cache area. The Default, Keep, and Recycle pools will only cache blocks of the default size. In order to have nondefault blocksizes in your database, you will need to have configured a buffer pool to hold them.

In this example, my default blocksize is 8KB. I will attempt to create a tablespace with a 16KB blocksize:

ops$tkyte@ORA10G> create tablespace ts_16k
  2  datafile size 5m
  3  blocksize 16k;
create tablespace ts_16k
*
ERROR at line 1:
ORA-29339: tablespace blocksize 16384 does not match configured blocksizes

ops$tkyte@ORA10G> show parameter 16k

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_16k_cache_size                    big integer 0

Right now, since I have not configured a 16KB cache, I cannot create such a tablespace. I could do one of a few things to rectify this situation. I could set the DB_16K_CACHE_SIZE parameter and restart the database. I could shrink one of my other SGA components in order to make room for a 16KB cache in the existing SGA. Or, I might be able to just allocate a 16KB cache if the SGA_MAX_SIZE parameter were larger than my current SGA size.


Note Starting in Oracle9i, you have the ability to resize various SGA components while the database is up and running. If you want the ability to "grow" the size of the SGA beyond its initial allocation, you must have set the SGA_MAX_SIZE parameter to some value larger than the allocated SGA. For example, if after startup your SGA size was 128MB and you wanted to add an additional 64MB to the buffer cache, you would have had to set the SGA_MAX_SIZE to 192MB or larger to allow for the growth.


In this example, I will shrink my DB_CACHE_SIZE since I currently have it set rather large:

ops$tkyte@ORA10G> show parameter db_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 1G

ops$tkyte@ORA10G> alter system set db_cache_size = 768m;
System altered.

ops$tkyte@ORA10G> alter system set db_16k_cache_size = 256m;
System altered.

ops$tkyte@ORA10G> create tablespace ts_16k
  2 datafile size 5m
  3 blocksize 16k;
Tablespace created.

So, now I have another buffer cache set up: one to cache any blocks that are 16KB in size. The Default pool, controlled by the db_cache_size parameter, is 768MB in size and the 16KB cache, controlled by the db_16k_cache_size parameter, is 256MB in size. These two caches are mutually exclusive; if one "fills up," it cannot use space in the other. This gives the DBA a very fine degree of control over memory use, but it comes at a price: added complexity and management. Multiple blocksizes were not intended as a performance or tuning feature, but rather came about in support of transportable tablespaces—the ability to take formatted data files from one database and transport or attach them to another database. They were implemented in order to take data files from a transactional system that was using an 8KB blocksize and transport that information to a data warehouse using a 16KB or 32KB blocksize.

The multiple blocksizes do serve a good purpose, however, in testing theories. If you want to see how your database would operate with a different blocksize—how much space, for example, a certain table would consume if you used a 4KB block instead of an 8KB block—you can now test that easily without having to create an entirely new database instance.

You may also be able to use multiple blocksizes as a very finely focused tuning tool for a specific set of segments, by giving them their own private buffer pools. Or, in a hybrid system, transactional users could work with one set of data while reporting/warehouse users query a separate set of data. The transactional data would benefit from the smaller blocksizes due to less contention on the blocks (fewer rows per block means fewer people in general going after the same block at the same time) as well as better buffer cache utilization (users read into the cache only the data they are interested in—the single row or small set of rows). The reporting/warehouse data, which might be based on the transactional data, would benefit from the larger blocksizes due in part to less block overhead (it takes less storage overall) and perhaps larger logical I/O sizes. And since reporting/warehouse data does not have the same update contention issues, the fact that there are more rows per block is not a concern, but a benefit. Additionally, the transactional users get their own buffer cache, in effect; they do not have to worry about the reporting queries overrunning their cache.

But in general, the Default, Keep, and Recycle pools should be sufficient for fine-tuning the block buffer cache, and multiple blocksizes would be used primarily for transporting data from database to database and perhaps for a hybrid reporting/transactional system.

Shared Pool

The Shared pool is one of the most critical pieces of memory in the SGA, especially with regard to performance and scalability. A Shared pool that is too small can kill performance to the point where the system appears to hang. A Shared pool that is too large can have the same effect. A Shared pool that is used incorrectly will be a disaster as well.

So, what exactly is the Shared pool? The Shared pool is where Oracle caches many bits of "program" data. When we parse a query, the parsed representation is cached there. Before we go through the job of parsing an entire query, Oracle searches the Shared pool to see if the work has already been done. PL/SQL code that you run is cached in the Shared pool, so the next time you run it, Oracle doesn't have to read it in from disk again. PL/SQL code is not only cached here, it is shared here as well. If you have 1,000 sessions all executing the same code, only one copy of the code is loaded and shared among all sessions. Oracle stores the system parameters in the Shared pool. The data dictionary cache (cached information about database objects) is stored here. In short, everything but the kitchen sink is stored in the Shared pool.

The Shared pool is characterized by lots of small (4KB or less in general) chunks of memory. Bear in mind that 4KB is not a hard limit—there will be allocations that exceed that size—but in general the goal is to use small chunks of memory to prevent the fragmentation that would occur if memory chunks were allocated in radically different sizes, from very small to very large. The memory in the Shared pool is managed on an LRU basis. It is similar to the buffer cache in that respect—if you don't use it, you'll lose it. A supplied package called DBMS_SHARED_POOL may be used to change this behavior—to forcibly pin objects in the Shared pool. You can use this procedure to load your frequently used procedures and packages at database startup time, and make it so they are not subject to aging out. Normally, though, if over time a piece of memory in the Shared pool is not reused, it will become subject to aging out. Even PL/SQL code, which can be rather large, is managed in a paging mechanism so that when you execute code in a very large package, only the code that is needed is loaded into the Shared pool in small chunks. If you don't use it for an extended period of time, it will be aged out if the Shared pool fills up and space is needed for other objects.
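
For instance, a frequently used package can be pinned at startup with something along these lines (in some releases you may need to install the package first by running dbmspool.sql as SYS; the package being pinned here is just an example):

sys@ORA10G> exec dbms_shared_pool.keep( 'SYS.DBMS_OUTPUT', 'P' );
PL/SQL procedure successfully completed.

The 'P' flag indicates a package, procedure, or function; once kept, the object is no longer subject to normal aging.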

The easiest way to break Oracle's Shared pool is to not use bind variables. As you saw in Chapter 1, not using bind variables can bring a system to its knees for two reasons:

  • The system spends an exorbitant amount of CPU time parsing queries.
  • The system uses large amounts of resources managing the objects in the Shared pool as a result of never reusing queries.

If every query submitted to Oracle is a unique query with the values hard-coded, the concept of the Shared pool is substantially defeated. The Shared pool was designed so that query plans would be used over and over again. If every query is a brand-new, never-before-seen query, then caching only adds overhead. The Shared pool becomes something that inhibits performance. A common but misguided technique that many use to try to solve this issue is adding more space to the Shared pool, which typically only makes things worse than before. As the Shared pool inevitably fills up once again, it gets to be even more of an overhead than the smaller Shared pool, for the simple reason that managing a big, full Shared pool takes more work than managing a smaller, full Shared pool.

The only true solution to this problem is to use shared SQL—to reuse queries. Earlier, in Chapter 1, we briefly looked at the parameter CURSOR_SHARING, which can work as a short-term crutch in this area. The only real way to solve this issue, however, is to use reusable SQL in the first place. Even on the largest of large systems, I find that there are typically at most 10,000 to 20,000 unique SQL statements. Most systems execute only a few hundred unique queries.
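
To make the contrast concrete, here is a minimal sketch (the table T and its column ID are hypothetical):

-- A brand-new, never-before-seen statement for each value; every one is hard parsed:
select * from t where id = 12345;
select * from t where id = 12346;

-- One shared statement, parsed once and reused no matter what value is bound to :id:
select * from t where id = :id;

Every execution of the literal form adds another entry to the Shared pool; the bind variable form is cached once and reused.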

The following real-world example demonstrates just how bad things can get if you use the Shared pool poorly. I was asked to work on a system where the standard operating procedure was to shut down the database each and every night, to wipe out the SGA and restart it clean. The reason for doing this was that the system was having issues during the day whereby it was totally CPU-bound and, if the database were left to run for more than a day, performance would really start to decline. They were using a 1GB Shared pool inside of a 1.1GB SGA. This is true: 0.1GB dedicated to block buffer cache and other elements and 1GB dedicated to caching unique queries that would never be executed again. The reason for the cold start was that if they left the system running for more than a day, they would run out of free memory in the Shared pool. At that point, the overhead of aging structures out (especially from a structure so large) was such that it overwhelmed the system and performance was massively degraded (not that performance was that great anyway, since they were managing a 1GB Shared pool). Additionally, the people working on this system constantly wanted to add more and more CPUs to the machine, due to the fact that hard-parsing SQL is so CPU intensive. By correcting the application and allowing it to use bind variables, not only did the physical machine requirements drop (they then had many times more CPU power than they needed), but also the allocation of memory to the various pools was reversed. Instead of a 1GB Shared pool, they had less than 100MB allocated—and they never used it all over many weeks of continuous uptime.

One last comment about the Shared pool and the parameter SHARED_POOL_SIZE. In Oracle9i and before, there is no direct relationship between the outcome of the query

ops$tkyte@ORA9IR2> select sum(bytes) from v$sgastat where pool = 'shared pool';

SUM(BYTES)
----------
100663296

and the SHARED_POOL_SIZE parameter

ops$tkyte@ORA9IR2> show parameter shared_pool_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_size                     big integer 83886080

other than the fact that the SUM(BYTES) FROM V$SGASTAT will always be larger than the SHARED_POOL_SIZE. The Shared pool holds many other structures that are outside the scope of the corresponding parameter. The SHARED_POOL_SIZE is typically the largest contributor to the Shared pool as reported by the SUM(BYTES), but it is not the only contributor. For example, the parameter CONTROL_FILES contributes 264 bytes per file to the "miscellaneous" section of the Shared pool. It is unfortunate that the "Shared pool" in V$SGASTAT and the parameter SHARED_POOL_SIZE are named as they are, since the parameter contributes to the size of the Shared pool, but it is not the only contributor.

In Oracle 10g and above, however, you should see a one-to-one correspondence between the two, assuming you are using manual SGA memory management (i.e., you have set the SHARED_POOL_SIZE parameter yourself):

ops$tkyte@ORA10G> select sum(bytes)/1024/1024 mbytes
  2 from v$sgastat where pool = 'shared pool';

    MBYTES
----------
       128

ops$tkyte@ORA10G> show parameter shared_pool_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_size                     big integer 128M

This is a relatively important change as you go from Oracle9i and before to 10g. In Oracle 10g, the SHARED_POOL_SIZE parameter controls the size of the Shared pool, whereas in Oracle9i and before, it was just the largest contributor to the Shared pool. You would want to review the actual Shared pool size in your 9i and earlier databases (based on V$SGASTAT) and use that figure to set your SHARED_POOL_SIZE parameter in Oracle 10g and above. The various other components that used to add to the size of the Shared pool now expect that memory to have been allocated for them by you.

Large Pool

The Large pool is not so named because it is a "large" structure (although it may very well be large in size). It is so named because it is used for allocations of large pieces of memory that are bigger than the Shared pool is designed to handle.

Prior to the introduction of the Large pool in Oracle 8.0, all memory allocation took place in the Shared pool. This was unfortunate if you were using features that made use of "large" memory allocations, such as shared server UGA memory allocations. The problem was compounded by the fact that such processing used memory in a very different manner from the way the Shared pool managed it. The Shared pool manages memory on an LRU basis, which is perfect for caching and reusing data. Large memory allocations, however, tended to get a chunk of memory, use it, and then be done with it—there was no need to cache this memory.

What Oracle needed was something similar to the Recycle and Keep buffer pools implemented for the block buffer cache. This is exactly what the Large pool and Shared pool are now. The Large pool is a Recycle-style memory space, whereas the Shared pool is more like the Keep buffer pool—if people appear to be using something frequently, then you keep it cached.

Memory allocated in the Large pool is managed in a heap, much in the way C manages memory via malloc() and free(). As soon as you "free" a chunk of memory, it can be used by other processes. In the Shared pool, there really was no concept of freeing a chunk of memory. You would allocate memory, use it, and then stop using it. After a while, if that memory needed to be reused, Oracle would age out your chunk of memory. The problem with using just a Shared pool is that one size doesn't always fit all.

The Large pool is used specifically by

  • Shared server connections, to allocate the UGA region in the SGA
  • Parallel execution of statements, to allow for the allocation of interprocess message buffers, which are used to coordinate the parallel query servers
  • RMAN backup, for disk I/O buffers in some cases

As you can see, none of these memory allocations should be managed in an LRU buffer pool designed to manage small chunks of memory. With shared server connection memory, for example, once a session logs out, this memory is never going to be reused, so it should be immediately returned to the pool. Also, shared server UGA memory allocation tends to be "large." If you review the earlier examples with SORT_AREA_RETAINED_SIZE or PGA_AGGREGATE_TARGET, you'll recall that the UGA can grow very large and is definitely bigger than 4KB chunks. Putting shared server memory into the Shared pool causes it to fragment into odd-sized pieces of memory and, furthermore, you will find that large pieces of memory that will never be reused will age out memory that could be reused. This forces the database to do more work to rebuild that memory structure later.

The same is true for parallel query message buffers, since they are not LRU manageable. They are allocated and cannot be freed until they are done being used. Once they have delivered their message, they are no longer needed and should be released immediately. With backup buffers, this applies to an even greater extent—they are large, and once Oracle is done using them, they should just "disappear."

The Large pool is not mandatory when using shared server connections, but it is highly recommended. If you do not have a Large pool and use a shared server connection, the allocations come out of the Shared pool as they always did in Oracle 7.3 and before. This will definitely lead to degraded performance over some period of time and should be avoided. The Large pool will default to some size if the parameter DBWR_IO_SLAVES or PARALLEL_MAX_SERVERS is set to some positive value. It is recommended that you set the size of the Large pool manually if you are using a feature that uses it. The default mechanism is typically not the appropriate value for your situation.
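
Setting it is straightforward, and the pool is resizable online in 9i Release 2 and 10g (the 64MB figure below is purely illustrative; size it according to your shared server, parallel execution, and RMAN usage):

sys@ORA10G> alter system set large_pool_size = 64m scope=both;
System altered.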

Java Pool

The Java pool was added in version 8.1.5 of Oracle to support running Java in the database. If you code a stored procedure in Java, Oracle will make use of this chunk of memory when processing that code. The parameter JAVA_POOL_SIZE is used to fix the amount of memory allocated to the Java pool for all session-specific Java code and data.

The Java pool is used in different ways, depending on the mode in which the Oracle server is running. In dedicated server mode, the Java pool includes the shared part of each Java class, which is actually used per session. These are basically the read-only parts (execution vectors, methods, etc.) and are about 4KB to 8KB per class.

Thus, in dedicated server mode (which will most likely be the case for applications using purely Java stored procedures), the total memory required for the Java pool is quite modest and can be determined based on the number of Java classes you will be using. It should be noted that none of the per-session state is stored in the SGA in dedicated server mode, as this information is stored in the UGA and, as you will recall, the UGA is included in the PGA in dedicated server mode.

When connecting to Oracle using a shared server connection, the Java pool includes both of the following:

  • The shared part of each Java class.
  • Some of the UGA used for per-session state of each session, which is allocated from the JAVA_POOL within the SGA. The remainder of the UGA will be located as normal in the Shared pool, or if the Large pool is configured, it will be allocated there instead.

As the total size of the Java pool is fixed in Oracle9i and before, application developers will need to estimate the total requirement of their applications and multiply this estimate by the number of concurrent sessions they need to support. This number will dictate the overall size of the Java pool. Each Java UGA will grow or shrink as needed, but bear in mind that the pool must be sized such that all UGAs combined must be able to fit in it at the same time. In Oracle 10g and above, this parameter may be modified, and the Java pool may grow and shrink over time without the database being restarted.
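
To get a feel for how much of the Java pool is actually in use over time, you can watch V$SGASTAT; this is a monitoring aid, not a sizing formula:

sys@ORA10G> select pool, name, bytes
  2    from v$sgastat
  3   where pool = 'java pool';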

Streams Pool

The Streams pool is a new SGA structure starting in Oracle 10g. Streams itself is a new database feature as of Oracle9i Release 2 and above. It was designed as a data sharing/replication tool and is Oracle's stated direction going forward for data replication.


Note The statement that Streams "is Oracle's stated direction going forward for data replication" should not be interpreted as meaning that Advanced Replication, Oracle's now legacy replication feature, is going away anytime soon. Rather, Advanced Replication will continue to be supported in future releases. To learn more about Streams itself, see the Streams Concepts Guide available on http://otn.oracle.com in the Documentation section.


The Streams pool (or up to 10 percent of the Shared pool if no Streams pool is configured) is used to buffer queue messages used by the Streams process as it is moving/copying data from one database to another. Instead of using permanent disk-based queues, with the attendant overhead associated with them, Streams uses in-memory queues. If these queues fill up, they will spill over to disk eventually. If the Oracle instance with the memory queue fails for some reason, due to an instance failure (software crash), power failure, or whatever, these in-memory queues are rebuilt from the redo logs.

So, the Streams pool will be important only in systems using the Streams database feature. In those environments, it should be set in order to avoid having Streams "steal" up to 10 percent of the Shared pool.
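
Configuring it is just a matter of setting another dynamic pool parameter (the value shown is illustrative):

sys@ORA10G> alter system set streams_pool_size = 64m scope=both;
System altered.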

Automatic SGA Memory Management

Just as there are two ways to manage PGA memory, there are two ways to manage SGA memory starting in Oracle 10g: manually, by setting all of the necessary pool and cache parameters; and automatically, by setting just a few memory parameters and a single SGA_TARGET parameter. By setting the SGA_TARGET parameter, you are allowing the instance to size and resize the various SGA components.


Note In Oracle9i and before, only manual SGA memory management was available—the parameter SGA_TARGET did not exist and the parameter SGA_MAX_SIZE was a limit, not a dynamic target.


In Oracle 10g, memory-related parameters are classified into one of two areas:

  • Auto-tuned SGA parameters: Currently these are DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE.
  • Manual SGA parameters: These include LOG_BUFFER, STREAMS_POOL_SIZE, DB_NK_CACHE_SIZE, DB_KEEP_CACHE_SIZE, and DB_RECYCLE_CACHE_SIZE.

At any time in Oracle 10g, you may query V$SGAINFO to see which components of the SGA are resizable.
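
For example (the view and its columns are documented; the rows and sizes you see will vary by release and configuration):

sys@ORA10G> select name, bytes, resizeable
  2    from v$sgainfo;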


Note To use automatic SGA memory management, the parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. If statistics collection is not enabled, the database will not have the historical information needed to make the necessary sizing decisions.


Under automatic SGA memory management, the primary parameter for sizing the auto-tuned components is SGA_TARGET, which may be dynamically resized while the database is up and running, up to the setting of the SGA_MAX_SIZE parameter (which defaults to be equal to SGA_TARGET, so if you plan on increasing SGA_TARGET, you must have set SGA_MAX_SIZE larger before starting the database instance). The database will take the SGA_TARGET value, subtract the size of any of the other manually sized components (such as DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, and so on), and use the remainder to size the default buffer pool, Shared pool, Large pool, and Java pool. Dynamically at runtime, the instance will allocate and reallocate memory among those four areas as needed. Instead of returning an ORA-04031 "Unable to allocate N bytes of shared memory" error to a user when the Shared pool runs out of memory, the instance could instead choose to shrink the buffer cache by some number of megabytes (a granule size) and increase the Shared pool by that amount.

Over time, as the memory needs of the instance are ascertained, the size of the various SGA components would become more or less fixed in size. The database also remembers the sizes of these four components across database startup and shutdown so that it doesn't have to start all over again figuring out the right size for your instance each time. It does this via four double-underscore parameters: __DB_CACHE_SIZE, __JAVA_POOL_SIZE, __LARGE_POOL_SIZE, and __SHARED_POOL_SIZE. During a normal or immediate shutdown, the database will record these values to the stored parameter file and use them upon startup to set the default sizes of each area.
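
If you are curious, you can see these remembered values by dumping the stored parameter file to a plain-text file and looking for the double-underscore entries (the path here is just an example):

sys@ORA10G> create pfile='/tmp/ora10g_init.ora' from spfile;

File created.

The resulting file will contain lines such as __shared_pool_size alongside the parameters you set explicitly.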

Additionally, if you know you want a certain minimum value to be used for one of the four areas, you may set that parameter in addition to setting the SGA_TARGET. The instance will use your setting as the lower bound, or the smallest size that particular area may be.
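
For example, to let the instance manage the SGA overall but guarantee that the Shared pool never drops below 256MB, you might use settings along these lines (the figures are illustrative):

sys@ORA10G> alter system set sga_target = 1g scope=spfile;
System altered.

sys@ORA10G> alter system set shared_pool_size = 256m scope=spfile;
System altered.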

Summary

In this chapter, we took a look at the Oracle memory structures. We started at the process and session level, examining the PGA and UGA and their relationship. We saw how the mode in which we connect to Oracle dictates how memory is organized. A dedicated server connection implies more memory used in the server process than under a shared server connection, but use of a shared server connection implies the need for a significantly larger SGA. Then, we discussed the major components of the SGA itself. We discovered the differences between the Shared pool and the Large pool, and looked at why we might want a Large pool to "save" our Shared pool. We covered the Java pool and how it is used under various conditions, and we looked at the block buffer cache and how that can be subdivided into smaller, more focused pools.

Now we are ready to move on to the physical processes that make up the rest of an Oracle instance.
