Contents of OnLine Shared Memory and IDS Resident Shared Memory

The engines differ in the way they refer to these similar sections of shared memory. This section covers the complete shared memory of OnLine systems and the resident portion of IDS shared memory. Aside from IDS having more structures, the two are similar enough to discuss at the same time. Informix uses its shared memory to store system information as well as to buffer actual pages from the database. This system information keeps track of system resources and their availability to the user.

This data is stored in arrays and other data structures within shared memory. These data structures are created and sized at the time the shared memory is initialized. The parameters come from the $TB/ONCONFIG file for that particular database instance. The tb/onmonitor utility writes to the $TB/ONCONFIG file as its parameters change. These data structures are accessible to the DBA only through the use of the engine or other utilities, mainly tb/onstat. In IDS systems, many of these structures are also available through tables in the system monitoring interface (SMI) database called sysmaster.
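
For example, on an IDS system you can connect to sysmaster and explore the SMI tables with ordinary SQL. This is a minimal sketch using the standard systables catalog; the exact set of SMI tables varies by version:

database sysmaster;

-- SMI tables are registered in the system catalog like any other table,
-- so you can list what your version provides.
select tabname
from systables
where tabname like 'sys%'
order by tabname;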

These data structures, or pseudotables, contain information that is critical to the DBA. A large portion of the task of understanding what is going on in the database consists of understanding how to use tb/onstat to decipher the contents of the pseudotables. The following are major areas and pseudotables in the shared memory structure.

Latches

An inherent problem with a multiuser database is the situation in which multiple users need concurrent access to a system resource. Shared memory is such a resource, and OnLine needs to consider what happens when there are conflicts in accessing shared memory resources.

Even though a pseudotable may contain locks that grant access to a database resource, access to the pseudotable itself must be controlled. Informix does this through latches.

A latch is a data mechanism that allows a user to modify a system resource. Before a user or process is allowed to modify a system resource, it must first acquire the latch that is associated with the resource. There is not actually a separate pseudotable for latches because the latch data element is the first item in its respective lock table.

IDS systems refer to latches as mutexes. They are the same thing. Latches can be viewed by using the tb/onstat -s command:

joe 53> tbstat -s
RSAM Version 5.01.UC1    -- OnLine-Up 8 days 17:34:19 -- 18704 Kbytes
Latches  with lock or user set
name      address  lock wait user
bbf[0]    e075adb8 1    0    e0002770
physlog   e0001670 1    1    e0002e30
physb2    e0001694 1    1    e0002470
bf[3276]  e072dcac 1    0    e00024d0
pt[33]    e067afa0 1    0    e0002590
pt[48]    e067bdec 1    0    e0002a10
bbf[0]    e075adb8 1    0    e0002a10
bh[4749]  e06e9cb0 1    0    e0001e70
bbf[0]    e075adb8 1    0    e0002a10
bbf[0]    e075adb8 1    0    e0002a10
tblsps    e00000f8 1    0    e0001e70
LRU13     e000046c 1    0    e0002230
(Note: this is a composite run over several tbstat -s invocations.
The data is not necessarily consistent. It's just a format sample.)

Locks

The locks pseudotable is one of the busiest of the pseudotables. The lock table is broken down into a number of linked lists, each controlled by its own latch. The server uses a hashing mechanism based upon the lock requested to control access to the correct linked list. As access to this table is controlled by a small number of latch resources, the locks table can become a bottleneck in active systems. There is one lock for every OnLine resource that is shared or is otherwise in use. Entries in the lock table are cheap because they each use only a small amount of shared memory. Configure your system for as many as you can, within reason. Lock configuration depends on the level of locking that you are using for your tables. If you have a 200,000-row table with row-level locking, you will need 200,000 locks to do an update to every row in that table. If you are using the default page-level locking, you can get by with fewer locks.

One lock space in the lock table is used for each lock, whether that lock is a very high level lock like a database lock or a very low level lock like a row lock. Suppose you were doing an insert of a million rows into a table. The target table is set for row-level locking. Attempting this insert would cause an overflow of the locks table and would cause the job to crash unless you had a million lock table entries. There is a simple rule that helps you avoid this locking problem:

Always lock the target table in exclusive mode, if you can.

The "if you can" is an important caveat here. Locking the target table means that no other users can access the table while the table is locked. If this causes a concurrency problem with your operations, you'll have to split the insert into several separate jobs or, better yet, use the dbload utility to load the rows without having to worry about locks (or long transactions).

Here's an example:

insert into new_table (select * from million_row_table);

This would use a million locks if new_table had been created or altered to row-level locking, but the following would use just one lock (assuming no indexes):

begin work;
lock table new_table in exclusive mode;
insert into new_table (select * from million_row_table);
commit work;

Locks can be viewed by using the tb/onstat -k command:

joe  42>tbstat -k
RSAM Version 5.01.UC1  -- OnLine-Up 7 days 23:48:24 -- 18704 Kbytes
Locks
address  wtlist   owner    lklist   type     tblsnum  rowid    size
e0058e68 0        e0002230 0            S    1000002  202      0
e0059a28 0        e0002bf0 0            S    1000002  202      0
3 active, 200000 total, 16384 hash buckets
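
On IDS systems, you can translate the tblsnum column of this output into a table name through SMI. A minimal sketch, assuming the systabnames table described in the SMI documentation; note that tb/onstat prints tblsnum in hexadecimal, while partnum is stored in decimal:

database sysmaster;

-- tblsnum 1000002 (hex) from the listing above is 16777218 (decimal).
select dbsname, owner, tabname
from systabnames
where partnum = 16777218;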

Chunks and Dbspaces

The chunk pseudotable is actually two pseudotables, one for the primary and one for the mirrored chunks. The chunk pseudotable keeps track of each chunk that has been allocated in an OnLine system. The total number of allowable chunks is the tunable parameter CHUNKS in tb/onmonitor and in the $TB/ONCONFIG file.

One latch or mutex controls access to the chunk pseudotable. The dbspaces pseudotable registers all dbspaces in the system. Access to this table is also controlled by one latch.

There are two tb/onstat options that can view chunk and dbspace information. To see the space utilization of the database, use the tb/onstat -d command:

joe  43>tbstat -d
RSAM Version 5.01.UC1 -- OnLine-Up 7 days 23:48:38 -- 18704 Kbytes
Dbspaces
address  number   flags   fchunk   nchunks  flags    owner    name
e0016e88 1        1       1        3        N        informix rootdbs
e0016eb8 2        1       4        2        N        informix logspace
e0016ee8 3        1       5        2        N        informix dbspace1
e0016f18 4        1       7        2        N        informix logspace2
e0016f48 5        1       10       1        N        informix slowdbspace
5 active, 20 total
Chunks
address  chk/dbs offset size      free     bpages   flags pathname
e0014e78 1   1   0      135000    35718             PO-   /dev/rootdbs3
e0014f10 2   1   0      57000     56997             PO-   /dev/rootdbs1
e0014fa8 3   1   0      75000     74997             PO-   /dev/rootdbs2
e0015040 4   2   0      147500    24542             PO-   /dev/log1
e00150d8 5   3   0      812500    74310             PO-   /dev/chunk1
e0015170 6   3   0      677000    78338             PO-   /dev/chunk2
6 active, 27 total
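
On IDS systems, roughly the same picture is available through SMI. A minimal sketch joining the sysdbspaces and syschunks tables; the column names follow the SMI documentation and may vary by version:

database sysmaster;

-- One row per chunk: its dbspace, total size in pages, and free pages.
select d.name dbspace_name, c.chknum, c.chksize, c.nfree
from sysdbspaces d, syschunks c
where d.dbsnum = c.dbsnum
order by d.name, c.chknum;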

To see read and write information regarding your database's chunks, use the tb/onstat -D command:

joe  44>tbstat -D
RSAM Version 5.01.UC1 -- OnLine-Up 7 days 23:48:53 -- 18704 Kbytes
Dbspaces
address  number   flags   fchunk   nchunks  flags    owner    name
e0016e88 1        1       1        3        N        informix rootdbs
e0016eb8 2        1       4        2        N        informix logspace
e0016ee8 3        1       5        2        N        informix dbspace1
e0016f18 4        1       7        2        N        informix logspace2
e0016f48 5        1       10       1        N        informix dbspace3
5 active, 20 total
Chunks
address  chk/dbs offset  page Rd  page Wr  pathname
e0014e78 1   1   0       731422   1030390  /dev/rootdbs3
e0014f10 2   1   0       114001   0        /dev/rootdbs1
e0014fa8 3   1   0       150001   0        /dev/rootdbs2
e0015040 4   2   0       233963   120000   /dev/log1
e00150d8 5   3   0       8498773  238186   /dev/chunk1
e0015170 6   3   0       10034058 178169   /dev/chunk2
e0015208 7   4   0       341480   173365   /dev/log3
e00152a0 8   2   0       239961   120000   /dev/log2
e0015338 9   4   0       193767   164848   /dev/log4
e00153d0 10  5   0       1628350  114065   /dev/chunk3
10 active, 27 total
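
On IDS systems, per-chunk read and write counters similar to this output are also exposed through SMI. A minimal sketch, assuming the syschkio table described in the SMI documentation (its presence and column names vary by version):

database sysmaster;

-- Cumulative I/O per chunk since shared memory was initialized.
select chunknum, reads, pagesread, writes, pageswritten
from syschkio
order by chunknum;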

Users

Every time a server process starts up, it attaches to the shared memory of the Informix system. When this occurs, the user obtains an entry in the users pseudotable, and certain information, such as process ID and user name, is stored. Besides server processes, some OnLine administrative functions occupy a user slot. This data can be seen using tb/onstat.

You can also see some information about OnLine users by using the UNIX ps command. There is always a tbinit process running. It started up the shared memory in the first place. If your system is using page cleaners, you will see one user tbpgcl for every page cleaner that you have configured. With some versions of UNIX, the page cleaners will be identified as tbinit rather than as tbpgcl.

On IDS systems, the only engine processes that the operating system will see are the oninit processes that are the virtual processors.

OnLine has a "trap-door" built into its user table that often saves the DBA when the user table fills. It saves the last user entry for a tbmonitor process, whether there are existing tbmonitor processes running or not. The DBA can always get into the system as a user through tb/onmonitor.

To see the contents of the users pseudotable, use the tb/onstat -u command:

joe  46>tbstat -u
RSAM Version 5.01.UC1 -- OnLine-Up 7 days 23:49:06 -- 18704 Kbytes
Users
address  flags   pid   user      tty     wait  tout locks  nreads nwrites
e0001b70 ------D 555   informix  console  0       0    0     6746     2607
e0001bd0 ------d 0     informix  console  0       0    0     0        0
e0001c30 ------F 556   informix           0       0    0     0        0
e0001c90 ------F 557   informix           0       0    0     0        0
e0001cf0 ------f 558   informix           0       0    0     0        0
e0002890 ------  4143  dba       -        0       0    1     2907     148
e00028f0 ------  16460 dba       -        0       0    1     54972    4767
e00029b0 ------  12247 support   -        0       0    1     100726   6677
e0002b90 ------  12245 support   -        0       0    1     100554   6651
e0002bf0 --B---- 12246 support   -        0       0    1     101859   6650
e0002c50 ------- 13547 dba       -        0       0    1     6522     1741
e0002e30 --A---M 2740   informix   ttyi02 0       0    0     0        0
42 active, 150 total
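
On IDS systems, the users pseudotable is exposed through SMI as well. A minimal sketch, assuming the syssessions table described in the SMI documentation (column names may vary by version):

database sysmaster;

-- One row per attached session: who they are and where they came from.
select sid, username, pid, hostname
from syssessions;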

Transactions

OnLine versions beginning with version 5.0 include transaction information in the tb/onstat -u output. This is useful in looking at distributed transactions across different databases on different machines. It's not too useful for simple applications, since each user will have only one transaction going at a time. To see users and their transactions (OnLine 5.0 and later), use the tb/onstat -u command:

joe  27>    tbstat -u
RSAM Version 5.01.UC1 -- OnLine-Up 6 days 06:40:42 -- 12976 Kbytes
Users
address  flags   pid   user     tty      wait    tout locks nreads nwrites
e0001bbc ------D 3954  informix ttyp8    0       0    0     1348   16
e0001c28 ------D 0     informix ttypb    0       0    0     0      0
e0001c94 ------F 3956  informix          0       0    0     0      0
e0001d00 ------F 3957  informix          0       0    0     0      0
e000227c ------  16936 dba      -        0       0    1     0      1
e00022e8 ------  16965 dba      -        0       0    1     0      19
e0002354 ------  16957 dba      -        0       0    1     0      16
e00023c0 ------  16948 dba      -        0       0    1     0      1
e000242c ------  22832 tony     -        0       0    1     0      0
e0002498 ------  23247 dba      ttyp1    0       0    1     158    18
e0002504 ------  23246 dba      ttyp1    0       0    1     510    33
11 active, 110 total
Transactions
address  flags user     locks log begin isolation retrys coordinator
e0004a24 A---- e0001bbc 0     0         NOTRANS   0
e0004c04 A---- e0001c28 0     0         NOTRANS   0
e0004de4 A---- e000242c 1     0         COMMIT    0
e0006464 A---- e000227c 1     0         COMMIT    0
e0006644 A---- e00022e8 1     0         COMMIT    0
e0006824 A---- e0002354 1     0         COMMIT    0
e0006a04 A---- e00023c0 1     0         COMMIT    0
e0006be4 A---- e0001d6c 1     0         COMMIT    0
e0006dc4 A---- e0002498 1     0         COMMIT    0
e0006fa4 A---- e0002504 1     0         COMMIT    0
10 active, 110 total

Tblspaces

The tblspace pseudotable tracks open tablespaces in the system. All tables are tracked here, whether they are permanent or temporary. Temporary tables that are created because of ORDER BY clauses also receive entries in this table. If multiple processes are accessing the same tablespace, there is still only a single entry in the pseudotable; the ucnt (use count) column in the tb/onstat -t output shows how many users have it open.

To see the tablespaces in use, use the tb/onstat -t command:

joe  30>    tbstat -t
RSAM Version 5.00.UC2   -- OnLine-Up 6 days 06:41:15 -- 12976 Kbytes

Tblspaces
 n address  flgs ucnt tblnum   physaddr npages nused  npdata nrows  nextns
 0 e00b85f4 1    1    1000001  10000e   5405   408    0      0      1
 1 e00b88b8 1    1    2000001  200004   905    2      0      0      1
 2 e00b8b7c 1    1    3000001  300004   10005  940    0      0      1
 3 e00b8e40 1    1    4000001  500004   1355   2      0      0      1
 4 e00b9104 1    19   3000002  300005   64     59     35     627    8

5 active, 1100 total, 512 hash buckets

Buffers

The buffers are by far the largest portion of the shared memory in OnLine systems and a large portion of the shared memory in many IDS systems. The internal data structure for the buffers is threefold.

First, there is the actual page from the disk itself. The buffer holds the entire page, and a second structure, the buffer header table, tracks information about each buffer and the page it holds. The third structure is a hash table that simplifies locating an individual buffer.

The buffer displays in tb/onstat come in two flavors, depending upon whether you want to see just the buffers in use or all buffer data. To see the status of buffers currently in use by an engine, use the tb/onstat -b command:

joe  31> tbstat -b
RSAM Version 4.10.UD4    -- OnLine-Up 8 days 17:26:14 -- 18704 Kbytes
Buffers
address  user    flgs  pagenum  memaddr  nslots pgflgs xflgs owner  waitlist
e070976c 0        0     651566   e0937000 29     1      400   e00023b0 0
e06fff6c 0        6     50479c   e0807000 23     1      0     0        0
e07052ac 0        0     504894   e08ad800 77     10     400   e00020b0 0
e071c7ac 0        2     50af71   e0b97800 101    90     100   0        0
e072ae6c 0        0     5047b1   e0d65000 29     1      400   e00020b0 0
e073feac 0        0     50661f   e1005800 1      90     400   e00026b0 0
e072876c 0        0     537ffb   e0d17000 51     90     400   e0002530 0
e072986c 0        2     56c416   e0d39000 48     90     100   0        0
2526 modified, 4750  total, 8192 hash buckets, 2048 buffer size
(Note: this output is a composite from several runs of tbstat -b.
There may be inconsistencies because of this. It's just a format sample.)

To see the status of all the buffers, use the tb/onstat -B command:

joe  35> tbstat -B
RSAM Version 4.10.UD4  -- OnLine-Up 7 days 23:51:48 -- 18704 Kbytes
Buffers
address  user     flgs pagenum  memaddr  nslots pgflgs xflgs owner waitlist
e06fa9ac 0        6    69c58b   e075b800 27     1      0     0       0
e06faaac 0        6    57a379   e075d800 144    10     0     0       0
e06faaec 0        6    60f8e3   e075e000 36     90     0     0       0
..
..  (continues for all of your defined buffers)
..
e06facac 0        6    66d87f   e0761800 82     90     0     0     0
e06facec 0        6    69beac   e0762000 75     90     0     0     0
e06fad2c 0        6    69c49f   e0762800 27     1      0     0     0
284 modified, 4750 total, 8192 hash buckets, 2048 buffer size

LRUs

The least recently used (LRU) queues are data structures that track usage of the buffer pool. The queues are allocated in pairs, one for clean data and one for dirty data. Data is clean if the buffer is not flagged as having been modified. The queues are arranged so that the most recently used buffers are found at the head of the queue.

When a buffer is accessed, it is moved to the beginning of the queue. Buffers at the least recently used end of each queue (dirty queue and clean queue) are the ones that are used first for storing new data from disk or for flushing to disk.

There is one latch for each queue, making two latches for every queue pair. The number of LRU queues is determined by the $TB/ONCONFIG parameter LRUS.

To see the status of the LRU queues, use the tb/onstat -R command:

joe  57>tbstat -R
RSAM Version 4.10.UD4   -- OnLine-Up 7 days 23:53:44 -- 18704 Kbytes
16 buffer LRU queues
LRU  0:    6 ( 0.9%) modified of  643 total
LRU  1:    6 ( 1.0%) modified of  631 total
LRU  2:    8 ( 1.2%) modified of  674 total
LRU  3:    6 ( 1.1%) modified of  557 total
LRU  4:   13 ( 2.4%) modified of  551 total
LRU  5:    6 ( 1.0%) modified of  601 total
LRU  6:    8 ( 1.4%) modified of  569 total
LRU  7:    4 ( 0.9%) modified of  524 total

57 dirty, 4750 queued, 4750 total, 8192 hash buckets, 2048 buffer size
start clean at 60% dirty, stop at 50%; first pass search 70%

Page Cleaners

As the buffer pool fills with data, it becomes necessary for some buffers to be reused to hold new data. It is advantageous for there to always be some clean buffers on the queue. If the database engine needs to bring data in from the disk into a buffer, it is most efficient if the target buffer is a clean buffer, since clean buffers can be overwritten with the new data.

A dirty buffer is another thing. If there are no clean buffers, Informix has to use a dirty buffer. First, the dirty buffer needs to be written out (flushed) to disk. Then the engine brings the new data into the newly clean buffer. This creates more load for the engine, as the engine has to perform the flush. This is known as a foreground (fg) write. If you are getting many foreground writes, the buffers need cleaning more often.

The preferred way of cleaning the buffers is through the actions of the page cleaner daemons. Their actions are controlled by the Informix parameters LRU_MAX_DIRTY and LRU_MIN_DIRTY. As each LRU queue becomes populated with dirty buffers, it will eventually reach the LRU_MAX_DIRTY percentage. When a queue reaches this percentage of dirtiness, a page cleaner daemon is awakened and begins cleaning the buffers. When the dirtiness is down to the LRU_MIN_DIRTY percentage, the page cleaner goes back to sleep.

The defaults are usually 60 percent for MAX and 50 percent for MIN. These parameters are tunable. The object is to reduce the foreground writes to a minimum, as these are engine processes. We'll talk extensively about tuning these parameters in Chapter 8.
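
On IDS systems, you can watch these write counters from SQL as well as from tb/onstat -F. A minimal sketch, assuming the sysprofile table described in the SMI documentation; the counter names used here are assumptions and may differ by version:

database sysmaster;

-- Cumulative write counts since shared memory was initialized.
-- A rising fgwrites count is the signal that LRU_MAX_DIRTY needs attention.
select name, value
from sysprofile
where name in ('fgwrites', 'lruwrites', 'chunkwrites');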

To see the status of the page cleaners, use the tb/onstat -F command:

joe  59>tbstat -F
RSAM Version 5.01.UC1  -- OnLine-Up 7 days 23:53:56 -- 18704 Kbytes
Fg Writes     LRU Writes   Idle Writes    Chunk Writes
0             279744       5142           471793

address  flusher  snooze   state    data
e0001c30 0        60       I        0        = 0
e0001c90 1        60       I        0        = 0
e0001cf0 2        60       I        0        = 0
e0001d50 3        60       I        0        = 0
e0001db0 4        60       I        0        = 0
e0001e10 5        60       I        0        = 0
states: Exit Idle Chunk Lru

Logs

There are two types of logs in the Informix engines, physical logs and logical logs. The physical logs store pages of data or indexes from the buffer pool as the pages are modified. The physical log is used primarily in the fast recovery process that accompanies the startup of the database. It also coordinates with the tb/ontape archiving processes when an archive is running.

Access to the physical log is through two physical log buffers, which are periodically flushed to the actual physical log. There is one latch controlling access to each of the two physical log buffers.

The logical logs are areas of the disk (often in their own dbspace) in which Informix stores records of transactions that are in progress or are completed. A newly-installed OnLine engine begins with 3 logs in the root dbspace. IDS systems default to a minimum of 6 logs in the root dbspace. The size of these logs is a tunable parameter in tb/onmonitor.

In OnLine systems only, it is very important to give thought to the sizing of your logical logs before you begin to initialize your system. The sizing of the logical logs can be changed only by reinitializing the entire database. This is a major task, so it is important to get the sizing right the first time. Additional logs can be added and become available for use only after an archive is done.

IDS systems have done away with this limitation. Logical logs can be of various sizes, and the size is controlled at the time the logical log is created.

Access to the logical logs is through three built-in logical log buffers, each controlled by its own latch or mutex.
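
On IDS systems, logical-log status is also available through SMI. A minimal sketch, assuming the syslogs table described in the SMI documentation (column names may vary by version):

database sysmaster;

-- One row per logical log: its size and usage, in pages.
select number, uniqid, size, used, is_backed_up
from syslogs
order by number;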

To see the status of your logs, use the tb/onstat -l command:

joe  62>tbstat -l
RSAM Version 5.01.UC1   -- OnLine-Up 7 days 23:54:27 -- 18704 Kbytes
Physical Logging
Buffer bufused  bufsize  numpages numwrits pages/I/O
  P-1  69       128      713655   5901     120.94
      phybegin physize  phypos   phyused  %used
      117202   6000     2999     69       1.15

Logical Logging
Buffer bufused  bufsize  numrecs  numpages numwrits recs/pages pages/I/O
  L-1  8        64       31030148 668262   21047    46.4       31.8

address  number   flags    uniqid   begin        size     used    %used
e06d25d8 1        F------  0        400b8e      30000        0     0.00
e06d25f4 2        F------  0        4080be      30000        0     0.00
e06d2610 3        F------  0        40f5ee      30000        0     0.00
e06d262c 4        F------  0        10126e      30000        0     0.00
e06d2648 5        F------  0        10879e      30000        0     0.00
e06d2664 6        F------  0        10fcce      30000        0     0.00
e06d2680 7        F------  0        416ble      30000        0     0.00
e06d269c 8        F------  0        800003      30000        0     0.00
e06d26b8 9        F------  0        700a94      30000        0     0.00

Profiles

The profile pseudotable contains various items related to the performance of the Informix system. It includes items such as cache hit ratios for reading and writing, number of different types of calls to the database, number of times pseudotables overflowed, and number of times a process had to wait for different resources.

This is one of the most important pseudotables, as it is just about the only source of data about the performance of the database in OnLine systems. IDS has more depth to its database monitoring capabilities because of the SMI database, but you will still be using tb/onstat -p quite a bit. You'll be doing tb/onstat -p in your sleep the more you work on tuning the database.

The last line of this output differs between OnLine and IDS systems. For more detail see Chapter 10. For an overall performance profile, use the tb/onstat -p command:

joe  63>tbstat -p
RSAM Version 5.01.UC1  -- OnLine-Up 7 days 23:54:38 -- 18704 Kbytes
Profile
dskreads pagreads bufreads %cached dskwrits pagwrits bufwrits %cached
14970755 22166265 498552631 97.00   543747   2139408  49870693 98.91

isamtot  open     start    read     write    rewrite  delete   commit rollbk
424714409 5813437  23348999 175473799 10989374 5499045  2603325  8699910 101
ovtbls   ovlock   ovuser   ovbuff   usercpu  syscpu   numckpts flushes
0        0        0        0        387006.97 41611.33 294      587
bufwaits lokwaits lockreqs deadlks dltouts lchwaits ckpwaits compress
21748    8046     476500421 9       0       1594907  1664     2220019
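
As a sanity check on this output, the read %cached figure is computed as 100 * (bufreads - dskreads) / bufreads; with the sample's dskreads of 14970755 and bufreads of 498552631, that works out to the 97.00 shown. The write figure is computed the same way from dskwrits and bufwrits.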
