Darl Kuhn and Thomas Kyte

Expert Oracle Database Architecture

Techniques and Solutions for High Performance and Productivity

4th ed.
Darl Kuhn
Morrison, CO, USA
Thomas Kyte
Denver, CO, USA
ISBN 978-1-4842-7498-9
e-ISBN 978-1-4842-7499-6
© Darl Kuhn and Thomas Kyte 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Apress imprint is published by the registered company APress Media, LLC, part of Springer Nature.

The registered company address is: 1 New York Plaza, New York, NY 10004, U.S.A.

Introduction

The inspiration for the material contained in this book comes from my experiences developing Oracle software and from working with fellow Oracle developers and DBAs to help them build reliable and robust applications based on the Oracle database. The book is basically a reflection of what I do every day and of the issues I see people encountering each and every day.

I covered what I felt was most relevant, namely, the Oracle database and its architecture. I could have written a similarly titled book explaining how to develop an application using a specific language and architecture—for example, one using JavaServer Pages that speaks to Enterprise JavaBeans, which in turn uses JDBC to communicate with Oracle. However, at the end of the day, you really do need to understand the topics covered in this book in order to build such an application successfully. This book deals with what I believe needs to be universally known to develop successfully with Oracle, whether you are a Visual Basic programmer using ODBC, a Java programmer using EJBs and JDBC, or a Perl programmer using DBI Perl. This book does not promote any specific application architecture; it does not compare three-tier to client/server. Rather, it covers what the database can do and what you must understand about the way it works. Since the database is at the heart of any application architecture, the book should have a broad audience.

As the title suggests, Expert Oracle Database Architecture concentrates on the database architecture and how the database itself works. I cover the Oracle database architecture in depth: the files, memory structures, and processes that comprise an Oracle database and instance. I then move on to discuss important database topics such as locking, concurrency controls, how transactions work, and redo and undo, and why it is important for you to know about these things. Lastly, I examine the physical structures in the database such as tables, indexes, and datatypes, covering techniques for making optimal use of them.

What This Book Is About

One of the problems with having plenty of development options is that it’s sometimes hard to figure out which one might be the best choice for your particular needs. Everyone wants as much flexibility as possible (as many choices as they can possibly have), but they also want things to be very cut and dried—in other words, easy. Oracle presents developers with almost unlimited choice. No one ever says, “You can’t do that in Oracle.” Rather, they say, “How many different ways would you like to do that in Oracle?” I hope that this book will help you make the correct choice.

This book is aimed at those people who appreciate the choice but would also like some guidelines and practical implementation details on Oracle features and functions. For example, Oracle has a really neat feature called parallel execution. The Oracle documentation tells you how to use this feature and what it does. Oracle documentation does not, however, tell you when you should use this feature and, perhaps even more important, when you should not use this feature. It doesn’t always tell you the implementation details of this feature, and if you’re not aware of them, this can come back to haunt you (I’m not referring to bugs, but the way the feature is supposed to work and what it was really designed to do).

In this book, I strove to not only describe how things work but also explain when and why you would consider using a particular feature or implementation. I feel it is important to understand not only the “how” behind things but also the “when” and “why” as well as the “when not” and “why not!”

Who Should Read This Book

The target audience for this book is anyone who develops applications with Oracle as the database back end. It is a book for professional Oracle developers who need to know how to get things done in the database. The practical nature of the book means that many sections should also be very interesting to the DBA. Most of the examples in the book use SQL*Plus to demonstrate the key features, so you won’t find out how to develop a really cool GUI—but you will find out how the Oracle database works, what its key features can do, and when they should (and should not) be used.

This book is for anyone who wants to get more out of Oracle with less work. It is for anyone who wants to see new ways to use existing features. It is for anyone who wants to see how these features can be applied in the real world (not just examples of how to use the feature, but why the feature is relevant in the first place). Another category of people who would find this book of interest is technical managers in charge of the developers who work on Oracle projects. In some respects, it is just as important that they understand why knowing the database is crucial to success. This book can provide ammunition for managers who would like to get their personnel trained in the correct technologies or ensure that personnel already know what they need to know.

To get the most out of this book, the reader should have
  • Knowledge of SQL: You don’t have to be the best SQL coder ever, but a good working knowledge will help.

  • An understanding of PL/SQL: This isn’t a prerequisite, but it will help you to absorb the examples. This book will not, for example, teach you how to program a FOR loop or declare a record type; the Oracle documentation and numerous books cover this well. However, that’s not to say that you won’t learn a lot about PL/SQL by reading this book. You will. You’ll become very intimate with many features of PL/SQL, you’ll see new ways to do things, and you’ll become aware of packages/features that perhaps you didn’t know existed.

  • Exposure to some third-generation language (3GL), such as C or Java: I believe that anyone who can read and write code in a 3GL language will be able to successfully read and understand the examples in this book.

  • Familiarity with the Oracle Database Concepts manual.

A few words on that last point: due to the Oracle documentation set’s vast size, many people find it to be somewhat intimidating. If you’re just starting out or haven’t read any of it as yet, I can tell you that the Oracle Database Concepts manual is exactly the right place to start. It’s about 600+ pages long (I know that because I wrote some of the pages and edited every one) and touches on many of the major Oracle concepts that you need to know about. It may not give you each and every technical detail (that’s what the other 10,000 to 20,000 pages of documentation are for), but it will educate you on all the important concepts. This manual touches on the following topics (to name a few):
  • The structures in the database and how data is organized and stored

  • Distributed processing

  • Oracle’s memory architecture

  • Oracle’s process architecture

  • Schema objects you will be using (tables, indexes, clusters, and so on)

  • Built-in datatypes and user-defined datatypes

  • SQL stored procedures

  • How transactions work

  • The optimizer

  • Data integrity

  • Concurrency control

I will come back to these topics myself time and time again. These are the fundamentals. Without knowledge of them, you will create Oracle applications that are prone to failure. I encourage you to read through the manual and get an understanding of some of these topics.

How This Book Is Structured

This book has 15 chapters, and each is like a “minibook”—a virtually stand-alone component. Occasionally, I refer to examples or features in other chapters, but you could pretty much pick a chapter out of the book and read it on its own. For example, you don’t have to read Chapter 10 on database tables to understand or make use of Chapter 14 on parallelism.

The format and style of many of the chapters is virtually identical:
  • An introduction to the feature or capability.

  • Why you might want to use the feature or capability (or not). I outline when you would consider using this feature and when you would not want to use it.

  • How to use this feature. The information here isn’t just a copy of the material in the SQL reference; rather, it’s presented in a step-by-step manner: here is what you need, here is what you have to do, and these are the switches you need to go through to get started. Topics covered in this section will include

  • How to implement the feature

  • Examples, examples, examples

  • How to debug this feature

  • Caveats of using this feature

  • How to handle errors (proactively)

  • A summary to bring it all together

There will be lots of examples and lots of code, all of which is available for download from the GitHub site. The following sections present a detailed breakdown of the content of each chapter.

Where Can I Find the Book’s Source Code?

The best way to digest the material in this book is to thoroughly work through and understand the hands-on examples. As you work through the examples in this book, you may decide that you prefer to type in all the code by hand. Many readers choose to do this because it is a good way to get familiar with the coding techniques that are being used. Having said that, there are many complex examples in this book. Therefore, you may opt for downloading the source code and running examples without having to manually type them in.

All of the source code for this book can be downloaded from the GitHub website. You do not need a GitHub account to access the source code, but we recommend signing up to make the most of this service. To find the source code for this book (or any Apress book):
  1. Go to the book’s product page on Apress.com, located at www.apress.com/9781484274989.

  2. There will be a button marked Download Source Code. Click this to be taken to the book’s page on GitHub.

  3. Once on GitHub, download the code as a zip using the green button, or, if you have a GitHub account, clone the source code directly to your machine using Git.

  4. That’s it!

Source code can be continuously updated after a book has been published. That means that if there are any corrections, you will always get the latest version. If for any reason you want to get hold of the original source code, exactly as it is in your copy of the book, you can go to https://github.com/Apress/[repository-name-here]/releases and download release v1.0.

If you like to type in the code, you can use the source code files to check the results you should be getting—they should be your first stop if you think you might have typed an error. If you don’t like typing, then downloading the source code from the GitHub site is a must! Either way, the code files will help you with updates and debugging.

Tip

If you have any problems accessing the source code for an Apress book, email [email protected].

Setting Up Your Environment

In this section, I will cover how to set up an environment capable of executing the examples in this book. Specifically:
  • Accessing an Oracle database

  • How to set up the EODA account used for many of the examples in this book

  • How to set up the SCOTT/TIGER demonstration schema properly

  • Installing Statspack

  • Installing and running runstats, creating the BIG_TABLE, and other custom utilities used throughout the book

As described previously in the “Where Can I Find the Book’s Source Code?” section, all of the scripts used in this book are available for download from the GitHub site. There is a chNN folder that contains the scripts for each chapter (where NN is the number of the chapter). The ch00 folder contains the scripts listed here in the “Setting Up Your Environment” section.

Most of the examples in this book are designed to run 100 percent in the SQL*Plus environment. If you already have access to an Oracle database, then you can skip ahead to creating the EODA and SCOTT schemas in your database. You’ll also need to set up Statspack and the custom scripts. These components are used extensively throughout the book.

Accessing an Oracle Database

This book is chock full of hands-on database examples. Therefore, it’s critical that you have access to an Oracle database as you work through the examples in each chapter. If you want to use an Oracle database installed on your PC, there are a couple of free and simple ways of doing this. Both techniques listed next involve using Oracle VM VirtualBox:
  • Installing Oracle VM VirtualBox and a pre-built database VM

  • Installing Oracle VM VirtualBox, cloning a Git repository, and running Vagrant to build your environment

I’ll briefly describe both of the prior techniques in the following sections.

Oracle VM VirtualBox and a Pre-built Database VM

One of the quickest free and easy ways to gain access to a fully functional Oracle database is to download and install Oracle VM VirtualBox and use it with a pre-built database VM. You can literally have a working database within a few minutes of downloading and installing the required software.

First, you must download and install VirtualBox. To do this, go to this link and download and install the software:

www.virtualbox.org/wiki/Downloads

After you have downloaded and installed VirtualBox, then download a pre-built database VM and follow the instructions for importing the appliance VM into VirtualBox. Use this link to download the pre-built VM:

www.oracle.com/downloads/developer-vm/community-downloads.html

The Database App Development VM includes a container database (its root container is named CDB$ROOT). Within this container database, there is a pluggable database named ORCL. When logging onto the Database App Development VM, use the oracle OS account (there should be a default terminal window open for you after starting the VM). In the default terminal window, you can access SQL*Plus as follows:
$ sqlplus / as sysdba
If you’re prompted for a username and password, the TWO_TASK environment variable is most likely set; exit the SQL*Plus session, and clear it from the operating system prompt:
$ export TWO_TASK=
After you’ve set the prior environment variable, you should be able to access SQL*Plus via
$ sqlplus / as sysdba

Using Oracle VM VirtualBox with a pre-built VM is by far the easiest way to gain access to a fully functional Oracle database. If you are a bit more technically savvy, then I would suggest using a Vagrant box described in the next section to build an environment where you can access an Oracle database on your PC.

Oracle VM VirtualBox, Git, and Vagrant

This approach requires that you download and install Oracle VM VirtualBox, Git, and Vagrant. You also need to download the Oracle installation media. After you’ve installed those, then use Git to clone a Vagrant repository and then use a Vagrant box to build a virtual machine on your laptop. This approach might seem a little daunting at first, so I would suggest you look up Tim Hall’s YouTube video titled “Vagrant: Oracle Database Build.” That YouTube video walks you through the entire process.

Described next are the high-level steps for building an Oracle environment. First, navigate to Oracle’s database download site and download the Oracle installation software:

www.oracle.com/database/technologies/oracle-database-software-downloads.html

Now navigate to this link and download and install Oracle VM VirtualBox on your laptop:

www.virtualbox.org/wiki/Downloads

Next, navigate to the Git download page and download and install Git on your laptop:

https://git-scm.com/downloads

Next, navigate to the Vagrant download page and download and install Vagrant on your laptop:

www.vagrantup.com/docs/installation

Now start up the git bash shell. On Windows, you do this by typing “git bash” into the Start menu search box. Using the git bash shell window, create a directory:
mkdir c:\vagrantboxes
Change directories to that directory:
cd c:\vagrantboxes
Now use git to clone the Vagrant box repository:
git clone https://github.com/oracle/vagrant-boxes
Next, change directories to the version of the Oracle database that you want to install:
cd c:\vagrant-boxes\OracleDatabase\<Database Version>
Copy the Oracle installation media to this directory also:
cp <Oracle Install Media> c:\vagrant-boxes\OracleDatabase\<Database Version>
Now type in vagrant up:
vagrant up
After the VM is started, you should be able to access the VM via a secure shell session:
vagrant ssh
Once you’ve logged on to the VM, you should be able to access the root and oracle OS accounts. For example:
$ sudo su - oracle

Using a Vagrant box is an extremely powerful way to create your own VMs that contain Oracle databases. You can even easily build a RAC database environment from scratch using these techniques.

Database Setup

All of the examples in this book are run against a container database named CDB. I’ll connect to the root CDB container to perform tasks such as modifying initialization parameters, stopping/starting the database, and so forth. The CDB container database contains two pluggable databases, PDB1 and PDB2. When demonstrating application-type examples, I’ll connect to the PDB1 pluggable database as either EODA or SCOTT. For reference, here’s the Database Configuration Assistant (DBCA) code that I used to create the CDB multitenant database:
dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-createAsContainerDatabase true \
-pdbName PDB -numberOfPDBs 2 \
-sid CDB -gdbName CDB \
-characterset AL32UTF8 \
-sysPassword foo \
-systemPassword foo \
-pdbAdminUserName pdbadmin \
-pdbAdminPassword foo \
-initParams sga_target=1024M,pga_aggregate_target=512M
After the database is created, I can connect to the CDB root container as SYS:
$ sqlplus / as sysdba
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
As SYS, if I want to switch containers to the pluggable database, I can do so as follows:
SQL> alter session set container=PDB1;
To connect to the pluggable database as SYSTEM, EODA, and/or SCOTT, I’ll use the pluggable database default service to connect, for example:
$ sqlplus eoda/foo@PDB1
SQL> show con_name
CON_NAME
------------------------------
PDB1
For reference, here are the tnsnames.ora file contents on my laptop:
CDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CDB)
    )
  )
PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDB1)
    )
  )
PDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDB2)
    )
  )

Don’t worry if you don’t have access to a container/pluggable database to run the examples. If you’re not using a multitenant database, then all of your connections in the code examples will be to the database itself (since there is no concept of a pluggable database in the older single-tenant Oracle database architecture).
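If you’re not sure which architecture your database uses, a quick way to check from SQL*Plus is to query V$DATABASE; the CDB column reports YES for a container database and NO for a non-container database:
$ sqlplus / as sysdba
SQL> select cdb from v$database;
CDB
---
YES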

Tip

The Oracle database architecture types such as single tenant and multitenant are explained in Chapter 2.

Database Users in This Book

All the concepts in this book are explained through hands-on examples. To execute these examples, I use an Oracle database on my laptop. Depending on the example, I’ll use one of the following four users in my database:
  • SYS: This is an Oracle-created user that has all database privileges. I’ll use this to start/stop the database, add tablespaces, modify initialization parameters, and so on.

  • SYSTEM: This is an Oracle-created user that has elevated database privileges. I’ll use this to create users, grant privileges, and so on.

  • EODA: This is a user that I created that has a variety of special database privileges granted to it. These privileges are required to demonstrate various concepts.

  • SCOTT: This is a user that I created using scripts provided by Oracle. Historically, this user has been used to explain simple database concepts and is the owner of the EMP and DEPT tables.

Setting Up the EODA Schema

The EODA user is used for most of the examples in this book. This is simply a schema that has been granted the DBA role and granted execute and select on certain objects owned by SYS. This example assumes that you’re using a container database that contains a pluggable database. The pluggable database that I’m using in this book is named PDB1. In the following code, you’ll have to specify the name of the pluggable database that you’re using:
-- Script name: creeoda.sql
-- Define the PDB you want to connect to in your database.
-- If you’re using a non-container database, then leave the PDB variable blank.
-- But you really should be using a container database going forward.
define PDB=PDB1
connect / as sysdba
-- Switch containers to the PDB
alter session set container=&&PDB;
define username=eoda
define usernamepwd=foo
create user &&username identified by &&usernamepwd;
grant dba to &&username;
grant execute on dbms_stats      to &&username;
grant select  on sys.V_$STATNAME to &&username;
grant select  on sys.V_$MYSTAT   to &&username;
grant select  on sys.V_$LATCH    to &&username;
grant select  on sys.V_$TIMER    to &&username;
conn &&username/&&usernamepwd@&&PDB
show user
In the previous code, if you’re not using a container/pluggable database, there’s no need to alter your session to set it to a container. Also, if not using a container/pluggable database, then you can connect directly to the user you created via
SQL> conn &&username/&&usernamepwd
Note

You can set up whatever user (schema) you want to run the examples in this book. I picked the username EODA simply because it’s an acronym for the title of the book.

Setting Up the SCOTT/TIGER Schema

The SCOTT/TIGER schema will sometimes already exist in your database. This schema is often used to show basic examples, especially when you require a couple of tables with primary and foreign key relationships (the EMP and DEPT tables). There is nothing magic about using the SCOTT account. You could install the EMP/DEPT tables directly into your own database account if you wish.

Having said that, many of my examples in this book draw on the tables in the SCOTT schema. If you would like to be able to work along with them, you will need these tables. If you are working on a shared database, it would be advisable to install your own copy of these tables in some account other than SCOTT to avoid side effects caused by other users mucking about with the same data.

You can use the following script to create the SCOTT user in a pluggable database. You’ll need to change the pluggable database name and connection string to match your environment (my pluggable database name is PDB1 in this example):
-- Script name: crescott.sql
-- If using a container database, SCOTT needs to be created within a PDB.
-- If you’re using a non-container database, leave PDB blank.
-- This same code to create SCOTT is in $ORACLE_HOME/rdbms/admin/utlsampl.sql
-- Set this to your PDB in your database.
define PDB=PDB1
define scott_pwd=tiger
conn / as sysdba
alter session set container=&&PDB;
-- Be careful when dropping users. Make sure this is a dev/test environment.
drop user scott cascade;
create user scott identified by &&scott_pwd;
grant create session       to scott;
grant create table         to scott;
grant unlimited tablespace to scott;
grant create procedure     to scott;
grant execute on dbms_lock to scott;
grant execute on dbms_flashback to scott;
conn scott/&&scott_pwd@&&PDB
CREATE TABLE EMP
(EMPNO NUMBER(4) NOT NULL,
 ENAME VARCHAR2(10),
 JOB VARCHAR2(9),
 MGR NUMBER(4),
 HIREDATE DATE,
 SAL NUMBER(7, 2),
 COMM NUMBER(7, 2),
 DEPTNO NUMBER(2)
);
INSERT INTO EMP VALUES (7369, 'SMITH',  'CLERK',     7902,
TO_DATE('17-DEC-1980', 'DD-MON-YYYY'),  800, NULL, 20);
INSERT INTO EMP VALUES (7499, 'ALLEN',  'SALESMAN',  7698,
TO_DATE('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);
INSERT INTO EMP VALUES (7521, 'WARD',   'SALESMAN',  7698,
TO_DATE('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);
INSERT INTO EMP VALUES (7566, 'JONES',  'MANAGER',   7839,
TO_DATE('2-APR-1981', 'DD-MON-YYYY'),  2975, NULL, 20);
INSERT INTO EMP VALUES (7654, 'MARTIN', 'SALESMAN',  7698,
TO_DATE('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
INSERT INTO EMP VALUES (7698, 'BLAKE',  'MANAGER',   7839,
TO_DATE('1-MAY-1981', 'DD-MON-YYYY'),  2850, NULL, 30);
INSERT INTO EMP VALUES (7782, 'CLARK',  'MANAGER',   7839,
TO_DATE('9-JUN-1981', 'DD-MON-YYYY'),  2450, NULL, 10);
INSERT INTO EMP VALUES (7788, 'SCOTT',  'ANALYST',   7566,
TO_DATE('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);
INSERT INTO EMP VALUES (7839, 'KING',   'PRESIDENT', NULL,
TO_DATE('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);
INSERT INTO EMP VALUES (7844, 'TURNER', 'SALESMAN',  7698,
TO_DATE('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);
INSERT INTO EMP VALUES (7876, 'ADAMS',  'CLERK',     7788,
TO_DATE('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);
INSERT INTO EMP VALUES (7900, 'JAMES',  'CLERK',     7698,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'),   950, NULL, 30);
INSERT INTO EMP VALUES (7902, 'FORD',   'ANALYST',   7566,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'),  3000, NULL, 20);
INSERT INTO EMP VALUES (7934, 'MILLER', 'CLERK',     7782,
TO_DATE('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 10);
CREATE TABLE DEPT
(DEPTNO NUMBER(2),
 DNAME VARCHAR2(14),
 LOC VARCHAR2(13)
);
INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK');
INSERT INTO DEPT VALUES (20, 'RESEARCH',   'DALLAS');
INSERT INTO DEPT VALUES (30, 'SALES',      'CHICAGO');
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON');
CREATE TABLE BONUS
        (
        ENAME VARCHAR2(10)      ,
        JOB VARCHAR2(9)  ,
        SAL NUMBER,
        COMM NUMBER
        ) ;
CREATE TABLE SALGRADE
      ( GRADE NUMBER,
        LOSAL NUMBER,
        HISAL NUMBER );
INSERT INTO SALGRADE VALUES (1,700,1200);
INSERT INTO SALGRADE VALUES (2,1201,1400);
INSERT INTO SALGRADE VALUES (3,1401,2000);
INSERT INTO SALGRADE VALUES (4,2001,3000);
INSERT INTO SALGRADE VALUES (5,3001,9999);
alter table emp add constraint emp_pk primary key(empno);
alter table dept add constraint dept_pk primary key(deptno);
alter table emp add constraint emp_fk_dept foreign key(deptno) references dept;
alter table emp add constraint emp_fk_emp foreign key(mgr) references emp;
After running the prior code, you should be able to connect as SCOTT to your pluggable database and describe the tables, for example:
$ sqlplus scott/tiger@localhost:1521/PDB1
SQL> desc dept
Name                                Null?    Type
----------------------------------- -------- ----------------------------
DEPTNO                              NOT NULL NUMBER(2)
DNAME                                        VARCHAR2(14)
LOC                                          VARCHAR2(13)
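As a further sanity check (this query isn’t part of the book’s scripts; it’s just a quick verification), you can join the two tables and count the employees in each department. Department 40 has no employees, so an outer join is used to show it:
SQL> select d.deptno, d.dname, count(e.empno) emp_cnt
       from dept d left outer join emp e on e.deptno = d.deptno
      group by d.deptno, d.dname
      order by d.deptno;
    DEPTNO DNAME          EMP_CNT
---------- -------------- -------
        10 ACCOUNTING           3
        20 RESEARCH             5
        30 SALES                6
        40 OPERATIONS           0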


Setting Up Statspack

Statspack is designed to be installed when connected to the root container database as SYS (CONNECT / AS SYSDBA) or as a user granted the SYSDBA privilege. In many installations, installing Statspack will be a task that you must ask the DBA or administrators to perform.

Installing Statspack is trivial. You simply run @spcreate.sql. This script will be found in $ORACLE_HOME/rdbms/admin and should be executed when connected to the root container database as SYS via SQL*Plus. You’ll need to know the following three pieces of information before running the spcreate.sql script:
  • The password you would like to use for the PERFSTAT schema that will be created

  • The default tablespace you would like to use for PERFSTAT

  • The temporary tablespace you would like to use for PERFSTAT

Running the script will look something like this:
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> @spcreate
Enter value for perfstat_password:
... <output omitted for brevity> ...

The script will prompt you for the needed information as it executes. In the event you make a typo or inadvertently cancel the installation, you should use spdrop.sql found in $ORACLE_HOME/rdbms/admin to remove the user and installed views prior to attempting another install of Statspack. The Statspack installation will create a file called spcpkg.lis. You should review this file for any possible errors that might have occurred. The user, views, and PL/SQL code should install cleanly, however, as long as you supplied valid tablespace names (and didn’t already have a user PERFSTAT).
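For example, a failed or mistyped install can be cleaned up and retried as follows (the exact prompts will vary by database version):
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> @spdrop
SQL> @spcreate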

Tip

Statspack is documented in the following text file: $ORACLE_HOME/rdbms/admin/spdoc.txt.

Custom Scripts

In this section, I will describe the requirements (if any) of the various scripts used throughout this book. We will also investigate the code behind the scripts.

Runstats

Runstats is a tool I developed to compare two different methods of doing the same thing and show which one is superior. You supply the two different methods and Runstats does the rest. Runstats simply measures three key things:
  • Wall clock or elapsed time: This is useful to know, but not the most important piece of information.

  • System statistics: This shows, side by side, how many times each approach did something (such as a parse call) and the difference between the two.

  • Latching: This is the key output of this report.

As we’ll see in this book, latches are a type of lightweight lock. Locks are serialization devices. Serialization devices inhibit concurrency. Applications that inhibit concurrency are less scalable, can support fewer users, and require more resources. Our goal is always to build applications that have the potential to scale—ones that can service 1 user as well as 1000 or 10,000. The less latching we incur in our approaches, the better off we will be. I might choose an approach that takes longer to run on the wall clock but that uses ten percent of the latches. I know that the approach that uses fewer latches will scale substantially better than the approach that uses more latches.

Runstats is best used in isolation, that is, on a single-user database. We will be measuring statistics and latching (locking) activity that result from our approaches. We do not want other sessions to contribute to the system’s load or latching while this is going on. A small test database is perfect for these sorts of tests. I frequently use my desktop PC or laptop, for example.

Note

I believe all developers should have a test bed database they control to try ideas on, without needing to ask a DBA to do something all of the time. Developers definitely should have a database on their desktop, given that the licensing for the personal developer version is simply “use it to develop and test with, do not deploy, and you can just have it.” This way, there is nothing to lose! Also, I’ve taken some informal polls at conferences and seminars. Virtually every DBA out there started as a developer! The experience and training developers could get by having their own database—being able to see how it really works—pays dividends in the long run.

In order to use Runstats, you need to set up access to several V$ views, create a table to hold the statistics, and create the Runstats package. The code to create the Runstats package is contained in the runstats.sql script. You will need access to four V$ tables (those magic, dynamic performance tables): V$STATNAME, V$MYSTAT, V$TIMER, and V$LATCH. Here is a view I use:
$ sqlplus eoda/foo@PDB1
SQL> create or replace view stats
       as select 'STAT...' || a.name name, b.value
      from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
    union all
    select 'LATCH.' || name,  gets
      from v$latch
    union all
    select 'STAT...Elapsed Time', hsecs from v$timer;
Note

The actual object names you need to be granted access to will be V_$STATNAME, V_$MYSTAT, and so on—that is, the object name to use in the grant will start with V_$ not V$. The V$ name is a synonym that points to the underlying view with a name that starts with V_$. So, V$STATNAME is a synonym that points to V_$STATNAME—a view. You need to be granted access to the view.

You can either have SELECT on V$STATNAME, V$MYSTAT, V$TIMER, and V$LATCH granted directly to you (so you can create the view yourself), or you can have someone that does have SELECT on those objects create the view for you and grant SELECT privileges on the view to you.
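For example, a DBA could issue the direct grants along these lines (note that the grants name the underlying V_$ objects; EODA is simply the demonstration account used throughout this book):

```sql
-- issued by a suitably privileged user, such as SYS
grant select on v_$statname to eoda;
grant select on v_$mystat   to eoda;
grant select on v_$timer    to eoda;
grant select on v_$latch    to eoda;
```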

Once you have that set up, all you need is a small table to collect the statistics:
SQL> create global temporary table run_stats
( runid varchar2(15),
  name varchar2(80),
  value int )
on commit preserve rows;
Last, you need to create the package that is Runstats. It contains three simple API calls:
  • RS_START (Runstats Start) to be called at the beginning of a Runstats test

  • RS_MIDDLE to be called in the middle, as you might have guessed

  • RS_STOP to finish off and print the report

The specification is as follows:
SQL> create or replace package runstats_pkg
    as
        procedure rs_start;
        procedure rs_middle;
        procedure rs_stop( p_difference_threshold in number default 0 );
    end;
    /
Package created.

The parameter, p_difference_threshold, is used to control the amount of data printed at the end. Runstats collects statistics and latching information for each run and then prints a report of how much of a resource each test (each approach) used and the difference between them. You can use this input parameter to see only the statistics and latches that had a difference greater than this number. By default, this is zero, and you see all of the outputs.
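For example, to see only those statistics and latches whose between-run difference exceeds 1,000, you would end a test with:

```sql
SQL> set serverout on
SQL> exec runstats_pkg.rs_stop( 1000 )
```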

Next, we’ll look at the package body procedure by procedure. The package begins with some global variables. These will be used to record the elapsed times for our runs:
SQL> create or replace package body runstats_pkg
    as
    g_start number;
    g_run1 number;
    g_run2 number;
Next is the RS_START routine. This will simply clear out our statistics holding table and then populate it with the “before” statistics and latches. It will then capture the current CPU time, a clock of sorts that we can use to compute the CPU time consumed by each run in hundredths of seconds:
    procedure rs_start
    is
  begin
    delete from run_stats;
    insert into run_stats
    select 'before', stats.* from stats;
    g_start := dbms_utility.get_cpu_time;
  end;

Next is the RS_MIDDLE routine. This procedure simply records the elapsed time for the first run of our test in G_RUN1. Then it inserts the current set of statistics and latches. If we were to subtract these values from the ones we saved previously in RS_START, we could discover how many latches the first method used, how many cursors (a statistic) it used, and so on.

Last, it records the start time for our next run:
  procedure rs_middle
  is
  begin
    g_run1 := (dbms_utility.get_cpu_time-g_start);
    insert into run_stats
    select 'after 1', stats.* from stats;
    g_start := dbms_utility.get_cpu_time;
  end;
The next and final routine in this package is the RS_STOP routine. Its job is to print out the aggregate CPU times for each run and then print out the difference between the statistic/latching values for each of the two runs (only printing out those that exceed the threshold):
  procedure rs_stop(p_difference_threshold in number default 0)
  is
  begin
    g_run2 := (dbms_utility.get_cpu_time-g_start);
    dbms_output.put_line( 'Run1 ran in ' || g_run1 || ' cpu hsecs' );
    dbms_output.put_line( 'Run2 ran in ' || g_run2 || ' cpu hsecs' );
    if ( g_run2 <> 0 )
    then
      dbms_output.put_line
      ( 'run 1 ran in ' || round(g_run1/g_run2*100,2) ||
      '% of the time' );
    end if;
    dbms_output.put_line( chr(9) );
    insert into run_stats
    select 'after 2', stats.* from stats;
    dbms_output.put_line
    ( rpad( 'Name', 30 ) || lpad( 'Run1', 16 ) ||
    lpad( 'Run2', 16 ) || lpad( 'Diff', 16 ) );
    for x in
    ( select rpad( a.name, 30 ) ||
      to_char( b.value-a.value, '999,999,999,999' ) ||
      to_char( c.value-b.value, '999,999,999,999' ) ||
      to_char( ( (c.value-b.value)-(b.value-a.value)),
      '999,999,999,999' ) data
      from run_stats a, run_stats b, run_stats c
      where a.name = b.name
      and b.name = c.name
      and a.runid = 'before'
      and b.runid = 'after 1'
      and c.runid = 'after 2'
      and abs( (c.value-b.value) - (b.value-a.value) )
      > p_difference_threshold
      order by abs( (c.value-b.value)-(b.value-a.value))
    ) loop
    dbms_output.put_line( x.data );
    end loop;
    dbms_output.put_line( chr(9) );
    dbms_output.put_line
    ( 'Run1 latches total versus runs -- difference and pct' );
    dbms_output.put_line
    ( lpad( 'Run1', 14 ) || lpad( 'Run2', 19 ) ||
      lpad( 'Diff', 18 ) || lpad( 'Pct', 11 ) );
    for x in
    ( select to_char( run1, '9,999,999,999,999' ) ||
      to_char( run2, '9,999,999,999,999' ) ||
      to_char( diff, '9,999,999,999,999' ) ||
      to_char( round( run1/decode( run2, 0, to_number(0), run2) *100,2 ), '99,999.99' ) || '%' data
      from ( select sum(b.value-a.value) run1, sum(c.value-b.value) run2,
      sum( (c.value-b.value)-(b.value-a.value)) diff
      from run_stats a, run_stats b, run_stats c
      where a.name = b.name
      and b.name = c.name
      and a.runid = 'before'
      and b.runid = 'after 1'
      and c.runid = 'after 2'
      and a.name like 'LATCH%'
      )
    ) loop
    dbms_output.put_line( x.data );
    end loop;
  end;
  end;
  /
Package body created.
Now you are ready to use Runstats. By way of example, we’ll demonstrate how to use Runstats to see which is more efficient: a single set-based INSERT versus row-by-row processing. We’ll start by setting up two tables into which we’ll insert 1,000,000 rows (the BIG_TABLE table creation script is provided later in this section):
SQL> create table t1 as select * from big_table  where 1=0;
Table created.
SQL> create table t2 as select * from big_table  where 1=0;
Table created.
And now we are ready to perform the first method of inserting the records, using a single SQL statement. We start by calling RUNSTATS_PKG.RS_START:
SQL> exec runstats_pkg.rs_start;
PL/SQL procedure successfully completed.
SQL> insert into t1
    select *
      from big_table
     where rownum <= 1000000;
1000000 rows created.
SQL> commit;
Commit complete.
Now we are ready to perform the second method, row-by-row insertion of data:
SQL> exec runstats_pkg.rs_middle;
PL/SQL procedure successfully completed.
SQL> begin
            for x in ( select *
                         from big_table
                        where rownum <= 1000000 )
            loop
                    insert into t2 values X;
            end loop;
            commit;
    end;
  /
PL/SQL procedure successfully completed.
And finally, we’ll generate the report:
SQL> set serverout on;
SQL> exec runstats_pkg.rs_stop(1000000)
Run1 ran in 186 cpu hsecs
Run2 ran in 2209 cpu hsecs
run 1 ran in 8.42% of the time
Name                                      Run1            Run2            Diff
STAT...calls to get snapshot s             122       1,000,126       1,000,004
STAT...execute count                        23       1,000,029       1,000,006
STAT...session cursor cache hi              12       1,000,023       1,000,011
STAT...opened cursors cumulati              23       1,000,035       1,000,012
STAT...db block gets from cach         113,703       1,115,364       1,001,661
STAT...db block gets from cach         130,907       1,140,704       1,009,797
STAT...db block gets                   130,907       1,140,704       1,009,797
STAT...recursive calls                     180       1,010,311       1,010,131
STAT...session logical reads           155,498       1,178,758       1,023,260
STAT...db block changes                122,869       2,094,734       1,971,865
STAT...file io wait time               181,916       4,047,635       3,865,719
STAT...session pga memory max                0       3,866,624       3,866,624
LATCH.cache buffers chains             507,687       5,645,160       5,137,473
STAT...physical write total by               0      21,684,224      21,684,224
STAT...undo change vector size       4,079,104      67,922,592      63,843,488
STAT...physical read bytes         276,627,456     343,220,224      66,592,768
STAT...physical read total byt     276,627,456     344,940,544      68,313,088
STAT...KTFB alloc space (block     147,456,000     217,055,232      69,599,232
STAT...cell physical IO interc     276,627,456     366,624,768      89,997,312
STAT...redo size                   135,092,608     412,189,272     277,096,664
STAT...logical read bytes from   1,139,335,168   9,441,386,496   8,302,051,328
Run1 latches total versus runs -- difference and pct
Run1            Run2              Diff          Pct
628,686         6,099,913         5,471,227     10.31%
PL/SQL procedure successfully completed.

This confirms you have the RUNSTATS_PKG package installed and shows you why you should use a single SQL statement instead of a bunch of procedural code when developing applications whenever possible!

Mystat

The mystat.sql script and its companion, mystat2.sql, are used to show the increase in some Oracle “statistic” before and after some operation. The mystat.sql script captures the beginning value of the statistic:
$ sqlplus eoda/foo@PDB1
set echo off
set verify off
column value new_val V
define S="&1"
set autotrace off
select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = lower('&S');
set echo on
And mystat2.sql reports the difference. The &V substitution variable is populated by running the first script, mystat.sql, which uses the SQL*Plus NEW_VAL feature to capture the last VALUE selected by the preceding query:
set echo off
set verify off
select a.name, b.value V, to_char(b.value-&V,'999,999,999,999') diff
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) = lower('&S');
set echo on
For example, to see how much redo is generated by an UPDATE statement, we can do the following:
SQL> @mystat "redo size"
SQL> set echo off
NAME                           VALUE
------------------------------ ---------
redo size                      491167892
SQL> update big_table set owner = lower(owner) where rownum <= 1000;
1000 rows updated.
SQL> commit;
SQL> @mystat2
SQL> set echo off
NAME                           V           DIFF
------------------------------ ----------- ----------------
redo size                      491265640   97,748

This shows our UPDATE of 1000 rows generated 97,748 bytes of redo.

Show_Space

The SHOW_SPACE routine prints detailed space utilization information for database segments. The code for creating the SHOW_SPACE procedure is contained in the show_space.sql script. Here is the interface to it:
$ sqlplus eoda/foo@PDB1
SQL> @show_space.sql
Procedure created.
SQL> desc show_space
PROCEDURE show_space
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_SEGNAME                      VARCHAR2                IN
 P_OWNER                        VARCHAR2                IN     DEFAULT
 P_TYPE                         VARCHAR2                IN     DEFAULT
 P_PARTITION                    VARCHAR2                IN     DEFAULT
The arguments are as follows:
  • P_SEGNAME: Name of the segment—the table or index name, for example.

  • P_OWNER: Defaults to the current user, but you can use this routine to look at some other schema.

  • P_TYPE: Defaults to TABLE and represents the type of object you are looking at. For example, select distinct segment_type from dba_segments lists valid segment types.

  • P_PARTITION: Name of the partition when you show the space for a partitioned object. SHOW_SPACE shows space for only a partition at a time.
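
For instance, to report on a single partition of a partitioned table, a call might look like the following (BIG_TABLE_PART and the partition name P1 are hypothetical names, used here purely for illustration):

```sql
SQL> set serverout on
SQL> begin
       show_space( p_segname   => 'BIG_TABLE_PART',
                   p_type      => 'TABLE PARTITION',
                   p_partition => 'P1' );
     end;
     /
```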

The output of this routine looks as follows, when the segment resides in an Automatic Segment Space Management (ASSM) tablespace:
SQL> set serverout on;
SQL> exec show_space('BIG_TABLE');
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................          14,469
Total Blocks............................          15,360
Total Bytes.............................     125,829,120
Total MBytes............................             120
Unused Blocks...........................             728
Unused Bytes............................       5,963,776
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          43,145
Last Used Block.........................             296
PL/SQL procedure successfully completed.
The items reported are as follows:
  • Unformatted Blocks: The number of blocks that are allocated to the table below the high-water mark, but have not been used. Add unformatted and unused blocks together to get a total count of blocks allocated to the table but never used to hold data in an ASSM object.

  • FS1 Blocks–FS4 Blocks: Formatted blocks with data. The ranges of numbers after their name represent the emptiness of each block. For example, (0-25) is the count of blocks that are between 0 and 25 percent empty.

  • Full Blocks: The number of blocks that are so full that they are no longer candidates for future inserts.

  • Total Blocks, Total Bytes, Total MBytes: The total amount of space allocated to the segment measured in database blocks, bytes, and megabytes.

  • Unused Blocks, Unused Bytes: The amount of allocated space that has never been used. These are blocks allocated to the segment but currently above the high-water mark of the segment.

  • Last Used Ext FileId: The file ID of the file that contains the last extent that contains data.

  • Last Used Ext BlockId: The block ID of the beginning of the last extent; the block ID within the last used file.

  • Last Used Block: The block ID offset of the last block used in the last extent.

For reference, the commented code for SHOW_SPACE follows. This utility is a simple layer on top of the DBMS_SPACE API in the database:
create or replace procedure show_space
( p_segname in varchar2,
  p_owner   in varchar2 default user,
  p_type    in varchar2 default 'TABLE',
  p_partition in varchar2 default NULL )
-- this procedure uses authid current user so it can query DBA_*
-- views using privileges from a ROLE and so it can be installed
-- once per database, instead of once per user that wants to use it
authid current_user
as
    l_free_blks                 number;
    l_total_blocks              number;
    l_total_bytes               number;
    l_unused_blocks             number;
    l_unused_bytes              number;
    l_LastUsedExtFileId         number;
    l_LastUsedExtBlockId        number;
    l_LAST_USED_BLOCK           number;
    l_segment_space_mgmt        varchar2(255);
    l_unformatted_blocks number;
    l_unformatted_bytes number;
    l_fs1_blocks number; l_fs1_bytes number;
    l_fs2_blocks number; l_fs2_bytes number;
    l_fs3_blocks number; l_fs3_bytes number;
    l_fs4_blocks number; l_fs4_bytes number;
    l_full_blocks number; l_full_bytes number;
    -- inline procedure to print out numbers nicely formatted
    -- with a simple label
    procedure p( p_label in varchar2, p_num in number )
    is
    begin
        dbms_output.put_line( rpad(p_label,40,'.') ||
                              to_char(p_num,'999,999,999,999') );
    end;
begin
   -- this query is executed dynamically in order to allow this procedure
   -- to be created by a user who has access to DBA_SEGMENTS/TABLESPACES
   -- via a role as is customary.
   -- NOTE: at runtime, the invoker MUST have access to these two
   -- views!
   -- this query determines if the object is an ASSM object or not
   begin
      execute immediate
          'select ts.segment_space_management
             from dba_segments seg, dba_tablespaces ts
            where seg.segment_name      = :p_segname
              and (:p_partition is null or
                  seg.partition_name = :p_partition)
              and seg.owner = :p_owner
              and seg.tablespace_name = ts.tablespace_name'
             into l_segment_space_mgmt
            using p_segname, p_partition, p_partition, p_owner;
   exception
       when too_many_rows then
          dbms_output.put_line
          ( 'This must be a partitioned table, use p_partition => ');
          return;
   end;
   -- if the object is in an ASSM tablespace, we must use this API
   -- call to get space information, else we use the FREE_BLOCKS
   -- API for the user managed segments
   if l_segment_space_mgmt = 'AUTO'
   then
     dbms_space.space_usage
     ( p_owner, p_segname, p_type, l_unformatted_blocks,
       l_unformatted_bytes, l_fs1_blocks, l_fs1_bytes,
       l_fs2_blocks, l_fs2_bytes, l_fs3_blocks, l_fs3_bytes,
       l_fs4_blocks, l_fs4_bytes, l_full_blocks, l_full_bytes, p_partition);
     p( 'Unformatted Blocks ', l_unformatted_blocks );
     p( 'FS1 Blocks (0-25)  ', l_fs1_blocks );
     p( 'FS2 Blocks (25-50) ', l_fs2_blocks );
     p( 'FS3 Blocks (50-75) ', l_fs3_blocks );
     p( 'FS4 Blocks (75-100)', l_fs4_blocks );
     p( 'Full Blocks        ', l_full_blocks );
  else
     dbms_space.free_blocks(
       segment_owner     => p_owner,
       segment_name      => p_segname,
       segment_type      => p_type,
       freelist_group_id => 0,
       free_blks         => l_free_blks);
     p( 'Free Blocks', l_free_blks );
  end if;
  -- and then the unused space API call to get the rest of the
  -- information
  dbms_space.unused_space
  ( segment_owner     => p_owner,
    segment_name      => p_segname,
    segment_type      => p_type,
    partition_name    => p_partition,
    total_blocks      => l_total_blocks,
    total_bytes       => l_total_bytes,
    unused_blocks     => l_unused_blocks,
    unused_bytes      => l_unused_bytes,
    LAST_USED_EXTENT_FILE_ID => l_LastUsedExtFileId,
    LAST_USED_EXTENT_BLOCK_ID => l_LastUsedExtBlockId,
    LAST_USED_BLOCK => l_LAST_USED_BLOCK );
    p( 'Total Blocks', l_total_blocks );
    p( 'Total Bytes', l_total_bytes );
    p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );
    p( 'Unused Blocks', l_unused_blocks );
    p( 'Unused Bytes', l_unused_bytes );
    p( 'Last Used Ext FileId', l_LastUsedExtFileId );
    p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );
    p( 'Last Used Block', l_LAST_USED_BLOCK );
end;
/

Big_Table

For examples throughout this book, I use a table called BIG_TABLE. The code for creating this table is contained in the big_table.sql script. Depending on which system I use, this table has between one million and four million records and varies in size from 200MB to 800MB. In all cases, the table structure is the same.

To create BIG_TABLE, I wrote a script that does the following:
  • Creates an empty table based on ALL_OBJECTS. This dictionary view is used to populate the BIG_TABLE.

  • Makes this table NOLOGGING. This is optional; I did it for performance. Using NOLOGGING mode for a test table is safe because you won’t use it in a production system, where features such as Oracle Data Guard depend on redo being generated.

  • Populates the table by seeding it with the contents of ALL_OBJECTS and then iteratively inserting into itself, approximately doubling its size on each iteration.

  • Creates a primary key constraint on the table.

  • Gathers statistics.

To build the BIG_TABLE table, you can run the following script at the SQL*Plus prompt, supplying the number of rows you want in the table when prompted for &numrows. The script will stop when it hits that number of rows.
$ sqlplus eoda/foo@PDB1
create table big_table
as
select rownum id, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP,
STATUS, TEMPORARY, GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
  from all_objects
 where 1=0
/
alter table big_table nologging;
declare
  l_cnt number;
  l_rows number := &numrows;
begin
  insert /*+ append */
  into big_table
  select rownum id, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
  DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP,
  STATUS, TEMPORARY, GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
  from all_objects
  where rownum <= &numrows;
  --
  l_cnt := sql%rowcount;
  commit;
  while (l_cnt < l_rows)
  loop
    insert /*+ APPEND */ into big_table
    select rownum+l_cnt,OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
    DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP,
    STATUS, TEMPORARY, GENERATED, SECONDARY, NAMESPACE, EDITION_NAME
    from big_table a
    where rownum <= l_rows-l_cnt;
    l_cnt := l_cnt + sql%rowcount;
    commit;
  end loop;
end;
/
alter table big_table add constraint
big_table_pk primary key(id);
exec dbms_stats.gather_table_stats( user, 'BIG_TABLE', estimate_percent=> 1);
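
After the script completes, a quick count confirms the table was populated as requested (this assumes 1000000 was supplied at the &numrows prompt):

```sql
SQL> select count(*) from big_table;

  COUNT(*)
----------
   1000000
```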

I estimated baseline statistics on the table. The index associated with the primary key will have statistics computed automatically when it is created.

Coding Conventions

The one coding convention I use in this book that I would like to point out is how I name variables in PL/SQL code. For example, consider a package body like this:
create or replace package body my_pkg
as
   g_variable varchar2(25);
   procedure p( p_variable in varchar2 )
   is
      l_variable varchar2(25);
   begin
      null;
   end;
end;
/
Here, I have three variables: a global package variable, G_VARIABLE; a formal parameter to the procedure, P_VARIABLE; and a local variable, L_VARIABLE. I name my variables after the scope they are contained in. All globals begin with G_, parameters with P_, and local variables with L_. The main reason for this is to distinguish PL/SQL variables from columns in a database table. For example, a procedure such as the following would always print out every row in the EMP table where ENAME is not null:
create procedure p( ENAME in varchar2 )
as
begin
   for x in ( select * from emp where ename = ENAME ) loop
      Dbms_output.put_line( x.empno );
   end loop;
end;

SQL sees ename = ENAME and compares the ENAME column to itself (of course). We could use ename = P.ENAME, that is, qualify the reference to the PL/SQL variable with the procedure name, but this is too easy to forget, leading to errors.
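
Rewritten with the naming convention, the same procedure becomes unambiguous; the WHERE clause now compares the column to the parameter, as intended:

```sql
create procedure p( p_ename in varchar2 )
as
begin
   -- ename is the column; p_ename is the parameter: no ambiguity
   for x in ( select * from emp where ename = p_ename ) loop
      dbms_output.put_line( x.empno );
   end loop;
end;
/
```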

I just always name my variables after the scope. That way, I can easily distinguish parameters from local variables and global variables, in addition to removing any ambiguity with respect to column names and variable names.

Errata

Apress makes every effort to make sure that there are no errors in the text or the code. However, to err is human, and as such we recognize the need to keep you informed of any mistakes as they’re discovered and corrected. Errata sheets are available for all our books at www.apress.com. If you find an error that hasn’t already been reported, please let us know. The Apress website acts as a focus for other information and support, including the code from all Apress books, sample chapters, previews of forthcoming titles, and articles on related topics.

Acknowledgments

I would like to thank many people for helping me complete this book.

First, I would like to thank you, the reader of this book. There is a high probability that if you are reading this book, you have participated in my site http://asktom.oracle.com in some fashion, perhaps by asking a question or two. It is that act—the act of asking questions and of questioning the answers—that provides me with the material for the book and the knowledge behind the material. Without the questions, I would not be as knowledgeable about the Oracle database as I am. So, it is you who ultimately makes this book possible.

I would like to thank Tony Davis for his previous work making my work read well. If you enjoy the flow of the sections, the number of section breaks, and the clarity, then that is in some part due to him. I have worked with Tony writing technical material since the year 2000 and have watched his knowledge of Oracle grow over that time. He now has the ability to not only edit the material but in many cases tech edit it as well. Many of the examples in this book are there because of him (pointing out that the casual reader was not going to “get it” without them). This book would not be what it is without him.

Without a technical review team of the caliber I had during the writing of this book and the previous editions, I would be nervous about the content. The first edition had Jonathan Lewis, Roderick Manalac, Michael Möller, and Gabe Romanescu as technical reviewers. They spent many hours poring over the material and verifying it was technically accurate as well as useful in the real world. Subsequent editions had a team of similar caliber: Melanie Caffrey, Christopher Beck, and Jason Straub. I firmly believe a technical book should be judged not only by who wrote it but also by who reviewed it. Given these seven people, I feel confident in the material.

At Oracle, I work with the best and brightest people I have ever known, and they all have contributed in one way or another. I would like to thank Ken Jacobs in particular for his support and enthusiasm over the years. Ken is unfortunately (for us) no longer with Oracle Corporation, but his impact will long be felt.

Lastly, but most important, I would like to acknowledge the unceasing support I’ve received from my family. You know you must be important to someone when you try to do something that takes a lot of “outside of work hours” and that someone lets you know about it. Without the continual support of my wife, Melanie (who also was a technical reviewer on the book), son Alan, and daughter Megan, I don’t see how I could have finished this book.

—Thomas Kyte

I’d like to thank Tom for inviting me to work with him on this book; this is a great technical honor. I’d also like to acknowledge Jonathan Gennick; his guidance (over many years and books) laid the foundation for me being able to work on a book of this caliber. And I’d like to thank Heidi, Lisa, Evan, and Brandi; without their support, I could not have successfully participated.

—Darl Kuhn

About the Authors
Darl Kuhn
is a DBA/developer working for Oracle. He also teaches Oracle classes at the University of Denver in Colorado, and is an active member of the Rocky Mountain Oracle Users Group. Darl enjoys sharing knowledge, which has led to several book projects over the years.
 
Thomas Kyte
has been working for Oracle (the company) since version 7.0.9 (1993) and working with Oracle (software) since version 5.1.5c (the $99 single-user version for DOS on 360KB floppy disks). Before working at Oracle, he worked for more than six years as a systems integrator, building large-scale, heterogeneous databases and applications, mostly for military and government customers. These days, he spends a great deal of his time working with the Oracle database and, more specifically, helping people who are using the Oracle database. He works directly with customers, either in specifying and building their systems or, more frequently, helping them rebuild or tune them. In addition, he authors the “Ask Tom” column in Oracle Magazine, where he answers questions about the Oracle database and tools. On a typical day, he receives and answers dozens of questions at asktom.oracle.com. Every two months, he publishes a “best of” in the magazine (all of the questions asked are available on the Web, stored in an Oracle database). Additionally, he gives technical seminars covering much of the material in this book. He spends a lot of his time helping people be successful with the Oracle database. In his spare time, he builds applications and develops software within Oracle Corporation. This book is a reflection of what he does every day, covering topics and questions that people struggle with every day. The issues are covered from the perspective of “When I use this, I do it this way.” It is the culmination of many years of experience using the product in myriad situations.
 