Summary

The data cache can be split into multiple buffer pools, either by block size or (with the keep and recycle pools) by usage. Many sites, though, stick to a single buffer pool using the default block size offered by the Database Configuration Assistant (DBCA) for their platform—and this is generally the most sensible strategy.

The data cache is also split into granules of a fixed size—4MB, 8MB, or 16MB, depending on platform, version, and the size of the SGA. The granular approach makes it possible to reallocate memory dynamically between the data cache and other key parts of the SGA. Each buffer pool is made up of many discrete granules, and each individual granule is owned by a single buffer pool. A granule that belongs to the data cache holds an array of buffers and a matching array of buffer headers, as well as a little management overhead.
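As a rough illustration of that layout, the arithmetic below estimates how many buffers fit in one granule once each buffer is paired with its header. All the figures here are assumptions for the sake of the sketch (the header size in particular is illustrative, not Oracle's actual overhead):

```python
# Hedged sketch: how a data-cache granule pairs an array of buffers with a
# matching array of buffer headers. Sizes are illustrative assumptions.
GRANULE_SIZE = 4 * 1024 * 1024   # 4MB granule, one of the possible sizes
BLOCK_SIZE = 8192                # default 8KB block size
HEADER_SIZE = 200                # illustrative per-buffer header overhead

# Each buffered block consumes a buffer plus a header, so one granule holds:
buffers_per_granule = GRANULE_SIZE // (BLOCK_SIZE + HEADER_SIZE)
print(buffers_per_granule)  # -> 499 with these illustrative figures
```

The point is simply that buffers and headers are allocated together, granule by granule, which is what lets a whole granule move cleanly between SGA components.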

Each buffer pool may be split into several working data sets, which are constructed as linked lists of buffer headers and hence, implicitly, the buffers pointed to by the buffer headers. The working data set is an important “unit of complexity” in Oracle; each one is protected by its own cache buffers LRU chain latch and is kept “clean” by a single database writer (dbwr) process. A single database writer, though, may be responsible for many working data sets. The number of working data sets per buffer pool and the number of database writers both depend on the cpu_count.
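The many-to-one relationship between working data sets and database writers can be pictured as a simple round-robin assignment. The counts below are assumptions for illustration (in a real instance both are derived from cpu_count), and the modulo scheme is only a sketch, not Oracle's actual distribution algorithm:

```python
# Hedged sketch: several working data sets shared out across a smaller
# number of database writers. Counts and the round-robin rule are
# illustrative assumptions, not Oracle's real derivation from cpu_count.
n_working_sets = 8   # assumption
n_db_writers = 2     # assumption

# Writer i is responsible for every working set whose index maps to i.
assignment = {ws: ws % n_db_writers for ws in range(n_working_sets)}
print(assignment)  # writer 0 cleans sets 0,2,4,6; writer 1 cleans 1,3,5,7
```

Each working set still has exactly one writer, which is what keeps the "kept clean by a single dbwr" rule intact even when one writer serves many sets.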

Each working data set is split into a pair of linked lists, the main replacement list (REPL_MAIN) and the auxiliary replacement list (REPL_AUX), with buffers moving constantly between the two. (There are other linked lists that connect small subsets of the working data set intermittently, but these relate to writing and will be addressed in Chapter 6.) The function of the REPL_AUX list is to hold buffers that are believed to be instantly reusable if a session needs a buffer to read a block from disc or clone a block that is already in memory. The purpose of REPL_MAIN is to keep track of recently used (buffered) blocks, ensuring that “popular” blocks stay buffered while allowing “unpopular” blocks to fall out of memory as rapidly as possible.
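The division of labour between the two lists can be sketched with a pair of queues: a session takes a reusable buffer from REPL_AUX, loads the block, and links the buffer into REPL_MAIN to record its recent use. The names and function below are hypothetical, and this is only a minimal sketch of the flow, not Oracle's actual list handling:

```python
from collections import deque

# Hedged sketch of one working data set's two replacement lists.
repl_aux = deque(["free1", "free2", "free3"])   # instantly reusable buffers
repl_main = deque()                              # tracks recently used blocks

def read_block_from_disc(block):
    # Hypothetical helper: take a reusable buffer from REPL_AUX, load the
    # block into it, then link the buffer into REPL_MAIN.
    buf = repl_aux.popleft()
    repl_main.append((buf, block))
    return buf

read_block_from_disc("file 5 block 101")
read_block_from_disc("file 5 block 102")
print(len(repl_aux), len(repl_main))  # -> 1 2
```

In the real cache, buffers also flow back from REPL_MAIN to REPL_AUX as "unpopular" blocks become candidates for reuse, which is what keeps the two lists in constant motion.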

There is a second structure imposed on the content of the data cache, again employing linked lists (but very short ones), that uses the data block address of the buffered blocks to scatter the buffers across a very large hash table. This means that if we want to find a block in the data cache, we can work out very rapidly where it should be in the hash table structure and check a small linked list to see if it is currently in memory.
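The lookup described above can be sketched as follows: hash the data block address to pick a bucket, then walk the short chain attached to that bucket. The bucket count and hash function here are toy assumptions (real hash tables are far larger, and the real hash is not this formula):

```python
# Hedged sketch: finding a buffered block via the hash-table structure.
N_BUCKETS = 1024  # assumption; real hash tables are far larger

hash_table = [[] for _ in range(N_BUCKETS)]  # each bucket is a short chain

def bucket_for(file_no, block_no):
    # Toy hash on the data block address, not Oracle's actual function.
    return (file_no * 100000 + block_no) % N_BUCKETS

def cache_block(file_no, block_no, buffer_id):
    hash_table[bucket_for(file_no, block_no)].append((file_no, block_no, buffer_id))

def find_block(file_no, block_no):
    # Only one short chain needs checking, however large the cache is.
    for f, b, buf in hash_table[bucket_for(file_no, block_no)]:
        if (f, b) == (file_no, block_no):
            return buf
    return None

cache_block(5, 101, "buffer A")
print(find_block(5, 101))   # -> buffer A
print(find_block(5, 999))   # -> None
```

Because the block address determines the bucket, the cost of the lookup depends on the length of one chain rather than on the total size of the cache.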

A combination of latches (when we want to manipulate or examine the contents of the linked lists) and pins (when we want to protect or modify the contents of a buffer) allows us to move a buffer from one hash bucket to another as we replace a copy of one block with a copy of another; at the same time, further latch activity allows us to relocate the buffer (strictly speaking, its header) in the replacement lists.

Because the choice of hash bucket depends on the data block address, consistent read copies of a given block will be attached to the same hash bucket. If we have a large number of copies of a block, the hash chain for that bucket will become very long, and the time taken to find the correct copy of a block in the bucket will become a performance threat. For this reason Oracle tries to impose a limit of six on the number of copies that you can have of a block.
