CHAPTER 10


Examples

One could argue that polymorphism is the subject of this chapter. Polymorphism is about using the same message (or command) with different kinds of objects, so that each object responds with the behavior appropriate to it. Let’s look at a simple example. Figure 10-1 shows the use of the command draw() for three different geometric shapes: a square, a triangle, and a circle.


Figure 10-1. An example of polymorphic command use with different objects

Although the three geometric shapes in Figure 10-1 are different objects, using the same command, you can communicate what you want each object to do, and the object knows how to do it.

The polymorphic use of commands or messages reduces the use of synonyms in a programming language. If you’re a writer, synonyms are wonderful things. They allow you to use words that have similar meanings but sound different, so that your resulting prose can have a somewhat rhythmic cadence. It can sound beautiful.

If you’re a programmer, synonyms are awful things. They cause ambiguity because you can’t know from “the sound” of the synonym if it means something similar (LIKE) or if it means exactly the same thing (=). And in programming, ambiguity is a really bad thing. Furthermore, synonyms expand the size of a language and make it harder to learn and use. The harder the language is to use, the more defects will be created when using it, and that too is bad.

Let’s go back to Figure 10-1 for a second. I could have named the commands to draw each geometric shape in the figure draw_square(), draw_triangle(), and draw_circle(). That’s the procedural programming paradigm. But by doing so, I would have made the programming language three times larger than when I simply used the same command, draw(), in the context of the object to which it belongs.
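To make that concrete in PL/SQL terms, the “object” context can simply be a package name. In the following hypothetical sketch (the package names SQUARE and CIRCLE are mine, purely for illustration), one command name, draw, is reused in two contexts:

create or replace PACKAGE SQUARE as

PROCEDURE draw;

end SQUARE;
/
create or replace PACKAGE CIRCLE as

PROCEDURE draw;

end CIRCLE;
/

A caller then writes SQUARE.draw() or CIRCLE.draw(): the same command in each case, and each object knows how to do it.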

Even if you’re not using an object-oriented language, you can still use an object-oriented approach by employing modularity and polymorphism. Employing modularity means abstracting large program units into smaller components that can then be reused and assembled to accomplish a larger purpose. Employing polymorphism means abstracting out a set of commands that communicate the same action; however, the action taken will be appropriate for the object executing the command. And that’s what I’m going to cover in this chapter, by doing the following:

  • Show you a list of polymorphic commands that I have used repeatedly, project after project, always with the same meaning and to take the same action, but an action appropriate to each object.
  • Tell you a handful of stories in which I use each polymorphic command, so you’ll have some real-world examples to draw from when you employ polymorphism in your next project.
  • Leave you with some advice about what you should do next.

I’ll get you jump-started with a list of time-tempered polymorphic commands.

Polymorphic Commands

Any time you write or rewrite a program or application, you have an opportunity to abstract out patterns of use that you have experienced beforehand, and employ that experience as the intelligent and wise use of object-orientation. As I’ve already stated several times, you do not need to be using an object-oriented language in order to employ object-orientation.

Earlier in this book, I introduced you to the idea of creating a “table package” for every table. Each table package contains any executable behavior that “belongs to” the data held in the table in question. Then, in Chapter 6, I showed you how to create an object table, based on a new object TYPE, that contains any executable behavior that “belongs to” its associated object attributes.

Creating a table package is an example of employing object-orientation in a procedural language setting, and creating an object table with a TYPE is an example of employing object-orientation in an object-oriented language setting. Both examples take advantage of polymorphism.
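To give you a feel for what that looks like before we dive into the list, here’s a hypothetical fragment of a code table package specification declaring two of the polymorphic command names (the signatures match the method examples later in this chapter):

create or replace PACKAGE WORKER_TYPE as

FUNCTION get_id(
aiv_code                       in     WORKER_TYPES.code%TYPE )
return                                WORKER_TYPES.id%TYPE;

PROCEDURE get_code_descr(
ain_id                         in     WORKER_TYPES.id%TYPE,
aov_code                          out WORKER_TYPES.code%TYPE,
aov_description                   out WORKER_TYPES.description%TYPE );

end WORKER_TYPE;
/

Every table package declares the same polymorphic names, so once you’ve learned one package, you’ve effectively learned them all.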

After more than 20 years of coding in PL/SQL, I’ve created a list of 50 polymorphic commands that can be employed to name most, if not all, of the methods you will use while developing components (and applications) with PL/SQL. Table 10-1 is a list of the top 20 polymorphic commands I commonly use for table packages and TYPEs. The first column of the table lists the method name, including whether it is a FUNCTION or PROCEDURE. If a method name is not followed by parentheses, it means that there is a form of the method that takes no parameters. In columns two through four, I indicate whether a method is used for each “kind” of table: a code table, a content table, or an intersection table. Finally, in column five, I indicate whether the method is used in a hierarchical version of one of the three previous table types.

Table 10-1. Polymorphic Commands for Package Tables and TYPEs


Table 10-2 is a list of polymorphic commands employed by packages and TYPEs that play a particular role in a process, rather than act as merely a persistence entity like a table and table package or object table. The first column of Table 10-2 lists method names along with the type of method. Columns two through six indicate the use of the method name by each of the roles listed in the headings:

  • Data migration: A one-time process where data is moved and/or transformed from one location to another, typically from a “staging table” to an “application table.”
  • On-demand data processing: A process that manipulates data in the database, on demand, for the purpose of completing some ongoing business-related task.
  • Polling data processing: A process that manipulates data in the database by managing tasks in a queue or simply sleeping between invocations of the process in question, once again for the purpose of completing some ongoing business-related processing.
  • Data entry: In support of a presentation layer process of entering data into the database for some ongoing business-related task.
  • Reporting: In support of a presentation layer process of viewing data in the database for some ongoing or ad hoc business-related task.

Table 10-2. Polymorphic Commands for Roles


There is some overlap between these five role definitions. For example, a service-oriented interface between two systems will employ data migration, data processing, and table package routines. But that’s OK. What I’m trying to do is to get you to think in polymorphic terms.

If you learn to think in polymorphic terms, and then employ polymorphism, once you know what a polymorphic command does on one object, you’ll already have a good idea of what it does on the next. And that’s quite similar to how we think about the real world. If you tell me you ride a bike, I might respond that I ride a horse. We both know what the other means, even if we’ve never done the other activity, simply because we apply the word ride polymorphically to the context—in your case, a bicycle; in my case, a horse.

In the sections that follow, I’m going to tell you how I’ve used some (it would take another book to explain them all) of these commands in a particular setting.

Let’s start with table packages and TYPEs.

Parametric SQL

I could call this story “The Man Who Hated SQL.” The programmer in question was unhappy because he had to break the rhythm of his coding from a procedural language (PL/SQL) to a nonprocedural language (SQL) and back again—back and forth, back and forth, PL/SQL and SQL, PL/SQL and SQL.

While our unhappy programmer was complaining about the nonprocedural nature of SQL and how bad SQL is, he should have been more concerned about reducing the number of different SQL statements used against the database, because reducing the number of SQL statements used against an Oracle database can improve the database’s overall performance.

I find it surprising, and at the same time somewhat amusing, how many people hate SQL. You would think its creation was the work of the devil. In contrast, my opinion is that SQL is very effective. You can accomplish a lot of data processing work with very little actual programming effort using SQL. OK, I agree, some things don’t work consistently from one implementation to the next, but all in all, no viable alternative has taken its place yet, nor does it appear one will any time soon.

It’s really amazing how easy data processing is with SQL. Regardless, one goal during the development of an application should be to write a particular SQL statement once and only once, just as it should be a goal to write the same code routine once and only once, and then use it as a subroutine. To that end, I create table packages or TYPEs that employ the same set of polymorphic commands to provide a PL/SQL method in place of writing many of the same SQL statements over and over again.

Code Table Methods

As you saw earlier in the book, I use a database architecture where I always store a code table’s primary key, a sequence-generated ID value that points to a code, in content and intersection tables. I never store the code or description value itself. Why?

First, modifying the code or description of a given code table entry will require only an UPDATE against the code table. The next time the code and description values are retrieved for a particular ID value, they will then be the “new” values. The primary key, the ID, will always be the same value, but the code and description values can change without impacting the content and intersection tables that have stored the ID value in question.

Second, in an internationalized application, I can transparently switch the code and description values to those with the same meaning in another language, because the only value that is immutable is the ID. With that in mind, think about this. We code programs in English, therefore we should be able to program access to codes in English, and that’s why the first method I’m going to talk about exists.
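For reference, the method examples that follow assume a code table shaped roughly like this (a hypothetical sketch; your data types and column sizes may differ):

create table WORKER_TYPES (
id                             number        not null,  -- sequence-generated primary key
code                           varchar2(30)  not null,  -- short mnemonic, in English
description                    varchar2(80)  not null,  -- display value
active_date                    date          not null,  -- when the entry becomes usable
inactive_date                  date,                    -- NULL while the entry is active
primary key ( id ) );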

get_id( )

Since we code our programs in English, we should be able to specify our code entries as constant values in our programs in English. So any time you need to get the ID value for a particular code value in English, you call FUNCTION get_id(aiv_code), passing it a code value in English, and it returns the corresponding ID value. If the code you specify does not exist, the underlying singleton SQL SELECT statement raises a NO_DATA_FOUND exception, which you must handle in your calling program unit. Here’s an example of the method from code table package WORKER_TYPE:

FUNCTION get_id(
aiv_code                       in     WORKER_TYPES.code%TYPE )
return                                WORKER_TYPES.id%TYPE is

n_id                                  WORKER_TYPES.id%TYPE;

begin
  select id
  into   n_id
  from   WORKER_TYPES
  where  code = aiv_code;

  return n_id;
end get_id;

Once this method is written and in use, one and only one SQL statement need ever be written to retrieve an ID for a specified code value. You will see me use this method quite often in data migration, data processing, and reporting program units. First, I get the ID values as pseudo-constants in the initialization section of a package whenever there is a small number of codes in scope. That, in turn, lets me reduce the number of tables used in a corresponding cursor’s SQL SELECT statement; instead of joining to the code table, I use an IF statement to assign the code entry’s code and description values inside the cursor’s FOR LOOP. You’ll see some examples of this practice later in this chapter, in the “Reporting” section.
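For example, a hypothetical reporting package (the package name MY_REPORT and the code values 'C' and 'E' are mine, for illustration) might capture the ID values once, in its initialization section:

create or replace PACKAGE BODY MY_REPORT as

-- pseudo-constants, assigned once per session
n_contractor_id                       WORKER_TYPES.id%TYPE;
n_employee_id                         WORKER_TYPES.id%TYPE;

-- ... report procedures compare cursor columns against these
-- variables instead of joining to WORKER_TYPES ...

begin
  -- initialization section: executes once, the first time the
  -- package is referenced in a session
  n_contractor_id := WORKER_TYPE.get_id('C');
  n_employee_id   := WORKER_TYPE.get_id('E');
end MY_REPORT;
/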

get_code_descr( )

Once you’ve decided to use a database architecture where every assigned code value is stored as an ID value, you’ll need some way to retrieve the code and description values so they can be displayed in the presentation layer, whether it is for data entry or reporting. And that’s the purpose of this method. If you need the code and description values for a specific ID value, you simply call PROCEDURE get_code_descr(ain_id, aov_code, aov_descr), passing it the corresponding ID value. The method returns the code and description values as OUT parameter values. If the ID you specify does not exist, the underlying singleton SQL SELECT statement raises a NO_DATA_FOUND exception, which you must handle in your calling program unit. Here’s an example of the method from code table package WORKER_TYPE:

PROCEDURE get_code_descr(
ain_id                         in     WORKER_TYPES.id%TYPE,
aov_code                          out WORKER_TYPES.code%TYPE,
aov_description                   out WORKER_TYPES.description%TYPE ) is

begin
  select code,
         description
  into   aov_code,
         aov_description
  from   WORKER_TYPES
  where  id = ain_id;
end get_code_descr;

Once again, after this method is written and in use, one and only one SQL statement need ever be written to retrieve a code and description for a specified code ID value. Similar functions can be written to get just the code value or description value, and they would be appropriately named get_code() and get_descr().
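For instance, a hypothetical get_descr() is just a narrower cut of get_code_descr():

FUNCTION get_descr(
ain_id                         in     WORKER_TYPES.id%TYPE )
return                                WORKER_TYPES.description%TYPE is

v_description                         WORKER_TYPES.description%TYPE;

begin
  select description
  into   v_description
  from   WORKER_TYPES
  where  id = ain_id;

  return v_description;
end get_descr;

As with get_code_descr(), a nonexistent ID raises NO_DATA_FOUND for the caller to handle.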

get_code_id_descr( )

If an application is written for a high volume of data entry, the GUI in question will present a field for the entry of a code value and a second field to display the specified code’s description value. The users may also have an icon they can click to see a pop-up list of code values from which to select.

The typical end-user action in this design is for the end user to specify all or part of a code value, and then to tab into the next modifiable field, at which point the presentation layer validates the user’s code value entry. That’s what this method is for: validation.

In order to validate a code value, simply pass the code value to PROCEDURE get_code_id_descr(aiov_code, aon_id, aov_description, aid_on). This procedure first tries to find an exact code value match for the specified point in time. If it does not find a match, the procedure tries to find a single LIKE match. If it does not find a single LIKE match, it raises a NO_DATA_FOUND or TOO_MANY_ROWS exception, which the presentation layer will need to handle. Both exceptions mean that the user has specified an invalid code value. Here’s an example from code table package WORKER_TYPE:

PROCEDURE get_code_id_descr(
aiov_code                      in out WORKER_TYPES.code%TYPE,
aon_id                            out WORKER_TYPES.id%TYPE,
aov_description                   out WORKER_TYPES.description%TYPE,
aid_on                         in     WORKER_TYPES.active_date%TYPE ) is

v_code                                WORKER_TYPES.code%TYPE;

begin
  select id,
         description
  into   aon_id,
         aov_description
  from   WORKER_TYPES
  where  code = aiov_code
  and    aid_on between active_date and nvl(inactive_date, DATE_.d_MAX);
exception
  when NO_DATA_FOUND then
    select id,
           code,
           description
    into   aon_id,
           v_code,
           aov_description
    from   WORKER_TYPES
    where  code like aiov_code||'%'
    and    aid_on between active_date and nvl(inactive_date, DATE_.d_MAX);

    aiov_code := v_code;
end get_code_id_descr;

Variable v_code is used in the second half of the method, during the LIKE match, to capture the full-length code value from the database and return it to the presentation layer. You can call this method from the WHEN-VALIDATE-ITEM event in Developer/2000 forms, from the item_changed() event in PowerBuilder, from JavaScript (through a servlet) in the onChange event of an AJAX-based web page, and so on.

Now let’s look at methods used with content tables.

Content Table Methods

What’s the difference between a code table and a content table? In most cases, code tables have a very limited number of entries—values agreed upon by the consensus of users or dictation of management. These “agreed-upon” values are used to categorize the input of content and sometimes intersection entries. In contrast, content tables have an unlimited number of entries, where end users select or specify code values, enter numeric values, or enter textual values as part of doing business.

Because content tables can hold an unlimited number of entries, it’s well worth the effort to code methods that replace the repeated coding of singleton SQL SELECT, INSERT, and UPDATE statements with parametric method calls. Let’s start out by looking at one such method.

get_row( )

Method get_row returns a PL/SQL record (a row) from a relational table or an instance of an object TYPE (a row) from an object table, based on the criteria specified in the record or object passed to the method. As the programmer, it’s your responsibility to code the method to retrieve by the primary key and any other unique keys defined for the table in question. Then, to use this method, follow these steps.

  1. Set column or attribute values in a corresponding PL/SQL record or object instance for only those columns or attributes that are used in a particular primary or unique key constraint.
  2. Pass the PL/SQL record or object instance to the method.

get_row() then returns a matching row from the database if a matching row exists; otherwise, it returns a NULL PL/SQL record or object instance.

This method replaces a singleton SQL SELECT statement with a parametric method call based on a set of primary or unique key values. We never need to code that SQL statement ever again. Where’s that unhappy SQL programmer now?

The following is an example of the get_row method from content table package LOGICAL_WORKPLACE:

FUNCTION get_row(
air_logical_workplace          in     LOGICAL_WORKPLACES%ROWTYPE)
return                                LOGICAL_WORKPLACES%ROWTYPE is

r_logical_workplace                   LOGICAL_WORKPLACES%ROWTYPE;

begin
  if    air_logical_workplace.id is not NULL then
    --pl('retrieve the row by the primary key');
    select *
    into   r_logical_workplace
    from   LOGICAL_WORKPLACES
    where  id = air_logical_workplace.id;
  elsif air_logical_workplace.id_context is not NULL then
    --pl('retrieve the row by the id_context unique key');
    select *
    into   r_logical_workplace
    from   LOGICAL_WORKPLACES
    where  id_context = air_logical_workplace.id_context;
  else
    --pl('retrieve the row by the code, name, and active_date');
    select *
    into   r_logical_workplace
    from   LOGICAL_WORKPLACES
    where  code        = air_logical_workplace.code
    and    name        = air_logical_workplace.name
    and    active_date = air_logical_workplace.active_date;
  end if;
  return r_logical_workplace;
exception
  when NO_DATA_FOUND then
    raise;
  when OTHERS then
    raise_application_error(-20001, SQLERRM||
      ' on select LOGICAL_WORKPLACES'||
      ' in LOGICAL_WORKPLACE.get_row()' );
end get_row;

As you will see shortly, get_row() is called by set_row() in order to determine whether to insert or update.
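Before we move on, here’s a hypothetical anonymous block (the ID value is made up) showing the calling pattern: set only the columns for one key, then pass the whole record:

declare

r_criteria                            LOGICAL_WORKPLACES%ROWTYPE;
r_row                                 LOGICAL_WORKPLACES%ROWTYPE;

begin
  -- set only the primary key; every other column remains NULL
  r_criteria.id := 1;
  r_row := LOGICAL_WORKPLACE.get_row(r_criteria);
exception
  when NO_DATA_FOUND then
    -- no row exists for the specified key values
    null;
end;
/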

get_code_id_name( ), get_code( ), and get_name( )

The methods get_code_id_name(), get_code(), and get_name() serve the same purpose as their counterparts in code table packages. See the section about get_code_id_descr() earlier in the chapter for an explanation.

is_duplicate( )

In a database design where you use sequence-generated ID values for primary keys, every table will have a primary key that is an ID value, and at least one unique key that acts as a “primary key” made up of real-world values. Method is_duplicate() allows the presentation layer to verify that the values specified in fields on a screen are not going to create a row with duplicate values when an entry is saved to the database. Here’s an example of FUNCTION is_duplicate(aiv_code, aiv_name, aid_active_date) from content table package LOGICAL_WORKPLACE:

FUNCTION is_duplicate(
aiv_code                       in     LOGICAL_WORKPLACES.code%TYPE,
aiv_name                       in     LOGICAL_WORKPLACES.name%TYPE,
aid_active_date                in     LOGICAL_WORKPLACES.active_date%TYPE)
return                                boolean is

n_count                               number;

begin
  --pl('retrieve the row by the code, name, and active_date');
  select count(1)
  into   n_count
  from   LOGICAL_WORKPLACES
  where  code               = aiv_code
  and    name               = aiv_name
  and    trunc(active_date) = trunc(aid_active_date);

  if nvl(n_count, 0) > 0 then
    return TRUE;
  else
    return FALSE;
  end if;
end is_duplicate;

Like its presentation layer counterpart get_code_id_name(), is_duplicate() can be called from any presentation layer. This time, however, the purpose is to provide immediate feedback to users that they are specifying duplicate values, before they try to save an entry to the database: the function returns TRUE if the values already exist in the database. If you’re working with a programming language in the presentation layer that does not support a Boolean value from SQL, such as Java, you can simply wrap your call to this function in a stand-alone database function like the following:

create or replace FUNCTION to_boolean_number(
aib_boolean                    in     boolean )
return                                number is
/*
to_boolean_number.fun
by Donald J. Bales on 2014-10-20
A method to return a numeric value for false (0) and true (1).
This is very handy for calling functions that return boolean
values from JDBC, which can't handle boolean database values.
*/
begin
  if aib_boolean is not null then
    if aib_boolean then
      return 1;
    else
      return 0;
    end if;
  else
    return NULL;
  end if;
end to_boolean_number;

FUNCTION to_boolean_number() takes a Boolean value as a parameter, and then returns a 1 for TRUE or 0 for FALSE. If you call is_duplicate() from a Java program, you’ll need to wrap is_duplicate() with to_boolean_number() so you’ll get a numeric value 1 for TRUE and 0 for FALSE. For example, you would code your JDBC stored procedure call as follows:

...
int               duplicate = 0;
CallableStatement cstmt     = null;

try {
  // Create the callable statement
  cstmt = conn.prepareCall(
    "{ ? = call to_boolean_number(is_duplicate(?, ?, ?)) }");

  // Register the OUT parameter
  cstmt.registerOutParameter(1, Types.INTEGER);

  // Set the IN parameters
  cstmt.setString(2, code);
  cstmt.setString(3, name);
  cstmt.setTimestamp(4, activeDate);

  // Execute the stored procedure
  cstmt.execute();

  duplicate = cstmt.getInt(1);
}
catch (SQLException e) {
  System.err.println("SQL Error: " + e.getMessage());
}
finally {
  if (cstmt != null)
    try {  cstmt.close(); } catch (SQLException ignore) { }
}
...

Enough JDBC, let’s get back to PL/SQL.

set_row( )

Method set_row is the ultimate parametric method replacement for SQL—that is, as far as our unhappy SQL programmer is concerned! To INSERT or UPDATE the database with the row you desire, you simply pass the row in question, a PL/SQL record or object instance, to PROCEDURE set_row(air_row). When you do, here’s what set_row() does.

  1. It calls get_row() to determine if a row with the specified primary or unique key values already exists in the database. If it does, get_row() returns the matching row. Otherwise, get_row() returns a NULL row.
  2. If the desired row already exists in the database, set_row() will compare the non-primary key columns. If any non-primary key column values are different, set_row() will UPDATE the database with the modified values.
  3. If the desired row doesn’t exist in the database, set_row() will allocate a primary key value if necessary, and then INSERT the desired row into the database. The row passed to set_row() is an IN OUT parameter, so the value of any allocated primary key will be updated in the passed row if a new row is inserted into the database.

This is a very handy method when you’re coding a data migration program. You can call set_row() as many times as you like with the same values, and it will insert a row only once, and update it only if values have changed. This allows you to code data migration programs that can be rerun as needed until you get the “correct” results.
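For example, a hypothetical migration script (the staging table, its columns, and the WORKPLACE_TYPE code table package are made up for illustration) can be rerun without ever creating duplicates:

begin
  for r_stage in (select * from LOGICAL_WORKPLACE_STAGE) loop
    declare
      r_workplace                     LOGICAL_WORKPLACES%ROWTYPE;
    begin
      r_workplace.workplace_type_id :=
        WORKPLACE_TYPE.get_id(r_stage.workplace_type_code);
      r_workplace.code              := r_stage.workplace_code;
      r_workplace.name              := r_stage.workplace_name;
      r_workplace.active_date       := r_stage.active_date;
      -- inserts on the first run; updates on later runs only if
      -- the values have changed
      LOGICAL_WORKPLACE.set_row(r_workplace);
    end;
  end loop;
  commit;
end;
/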

The following is an example of method set_row() from content table package LOGICAL_WORKPLACE:

PROCEDURE set_row(
aior_logical_workplace         in out LOGICAL_WORKPLACES%ROWTYPE) is

d_null                       constant date        := DATE_.d_MIN;
n_null                       constant number      := 0;
v_null                       constant varchar2(1) := ' ';
r_logical_workplace                   LOGICAL_WORKPLACES%ROWTYPE;

begin
  -- get the existing row
  begin
    r_logical_workplace := get_row(aior_logical_workplace);
  exception
    when NO_DATA_FOUND then
      r_logical_workplace := NULL;
  end;
  -- if a row exists, update it if needed
  if r_logical_workplace.id is not NULL then
    aior_logical_workplace.id         := r_logical_workplace.id;
    aior_logical_workplace.parent_id  := r_logical_workplace.parent_id;
    aior_logical_workplace.id_context := r_logical_workplace.id_context;
    if nvl(r_logical_workplace.workplace_type_id, n_null) <>
       nvl(aior_logical_workplace.workplace_type_id, n_null) or
       nvl(r_logical_workplace.code,              v_null) <>
       nvl(aior_logical_workplace.code,              v_null) or
       nvl(r_logical_workplace.name,              v_null) <>
       nvl(aior_logical_workplace.name,              v_null) or
       nvl(r_logical_workplace.active_date,       d_null) <>
       nvl(aior_logical_workplace.active_date,       d_null) or
       nvl(r_logical_workplace.inactive_date,     d_null) <>
       nvl(aior_logical_workplace.inactive_date,     d_null) then
      begin
        update LOGICAL_WORKPLACES
        set    workplace_type_id    = aior_logical_workplace.workplace_type_id,
               code                 = aior_logical_workplace.code,
               name                 = aior_logical_workplace.name,
               active_date          = aior_logical_workplace.active_date,
               inactive_date        = aior_logical_workplace.inactive_date
        where  id = aior_logical_workplace.id;

        n_updated := nvl(n_updated, 0) + nvl(sql%rowcount, 0);
      exception
        when OTHERS then
          raise_application_error( -20002, SQLERRM||
            ' on update LOGICAL_WORKPLACES'||
            ' in LOGICAL_WORKPLACE.set_row()' );
      end;
    end if;
  else
  -- add the row if it does not exist
    begin
      if aior_logical_workplace.id is NULL then
        aior_logical_workplace.id := get_id();
      end if;
      aior_logical_workplace.id_context :=
        create_id_context(
          aior_logical_workplace.parent_id,
          aior_logical_workplace.id );
      insert into LOGICAL_WORKPLACES (
             id,
             parent_id,
             id_context,
             workplace_type_id,
             code,
             name,
             active_date,
             inactive_date )
      values (
             aior_logical_workplace.id,
             aior_logical_workplace.parent_id,
             aior_logical_workplace.id_context,
             aior_logical_workplace.workplace_type_id,
             aior_logical_workplace.code,
             aior_logical_workplace.name,
             aior_logical_workplace.active_date,
             aior_logical_workplace.inactive_date );

      n_inserted := nvl(n_inserted, 0) + nvl(sql%rowcount, 0);
    exception
      when OTHERS then
        raise_application_error( -20003, SQLERRM||
          ' on insert LOGICAL_WORKPLACES'||
          ' in LOGICAL_WORKPLACE.set_row()' );
    end;
  end if;
end set_row;

You have to admit that that is a pretty cool method—that is, if you hate to code SQL.

WHAT? NO METHODS TO REPLACE CURSORS?

Have you noticed that I have all kinds of table package methods that wrap singleton SQL statements with a method, yet I have none to wrap a cursor with a method call? An esteemed peer of mine, Steven Feuerstein, has written about doing just that: wrapping a cursor with a method that returns a collection. A program is then written to loop through the rows of a collection instead of the rows of a cursor.

Used judiciously—that is, with cursors that return a small number of rows—Steven’s method can be quite effective. However, in practice, I don’t find much use for this technique. Remember that I advocate the use of PL/SQL methods in order to eliminate “duplicate” code. In contrast, I find that most stored procedures that use cursors use a rather unique cursor. So, because of that, I see very little opportunity for reuse.

At the same time, misuse of wrapping cursors with methods that return collections can lead to inordinately large amounts of shared memory use in an Oracle database. This in turn leads to poor performance. Regardless, at some point in time, I suggest you check out Steven’s ideas. Every good idea has its place.

Intersection Table Methods

Tables that act as the intersection between two content tables (commonly called a many-to-many relationship/entity) use all the same methods as a content table, plus a few more. So let’s start by listing the methods that an intersection table has in common with a content table:

  • get_row(): Used to detect and retrieve an existing row based on primary key or unique key data.
  • is_duplicate(): Used to detect a duplicate entry before a presentation layer program tries to insert the values in question into the database.
  • set_row(): Used to conditionally insert or update a row in the database, based on use of get_row() to detect an existing entry.

An intersection table has two additional methods that are concerned with the existence of intersection history: is_active() and is_history(). Let’s look at is_active() first.

is_active( )

Method is_active() returns a Boolean value based on whether or not an “active” entry exists in an intersection table for the specified “primary” content entity at the specified point in time. By “primary” content entity, I mean the content entity that is the subject of the query.

For example, the three intersection entities in the example’s ERD from Chapter 1 (LOGICAL_ASSIGNMENTS, PHYSICAL_ASSIGNMENTS, and WORK_ASSIGNMENTS) have a primary content entity, WORKERS, that is the subject of the relationship. And they have a secondary content entity (LOGICAL_WORKPLACES, PHYSICAL_WORKPLACES, and WORKS, respectively) that is the object of the relationship.

By “active,” I mean that the specified point in time is between an entry’s active date and inactive date. Figure 10-2 illustrates the method’s use.


Figure 10-2. An illustration of method is_active()’s use

If an active entry exists, FUNCTION is_active(ain_primary_id, aid_on) returns TRUE; otherwise, it returns FALSE. The following is an example of the method from intersection table package LOGICAL_ASSIGNMENT:

FUNCTION is_active(
ain_worker_id                  in     LOGICAL_ASSIGNMENTS.worker_id%TYPE,
aid_on                         in     date)
return                                boolean is

pragma AUTONOMOUS_TRANSACTION;

n_count                               number;

begin
  select count(1)
  into   n_count
  from   LOGICAL_ASSIGNMENTS
  where  worker_id       = ain_worker_id
  and    aid_on    between active_date
                       and nvl(inactive_date, DATE_.d_MAX);
  commit;

  if nvl(n_count, 0) > 0 then
    return TRUE;
  else
    return FALSE;
  end if;
end is_active;

In the example above, and in function is_history() below, I am judiciously using pragma AUTONOMOUS_TRANSACTION so these methods can “see” outside of the current transaction context. Note that an autonomous transaction must be committed (or rolled back) before the function returns, which is why the commit appears before the IF statement. That’s a bit of an advanced topic, so until you’re ready to study up on it, make sure you use that pragma extremely carefully, or not at all.

is_history( )

The second intersection table package method, is_history(), exists for a whole different reason. It determines whether there is any history of an active entry over a specified period of time. FUNCTION is_history(ain_primary_id, aid_active_date, aid_inactive_date) is designed to return TRUE if history does exist; otherwise, it returns FALSE. It also uses pragma AUTONOMOUS_TRANSACTION, so it can be used in a before insert for each row (BIR) trigger on an intersection table, to prevent overlapping historical entries. Figure 10-3 illustrates the use of is_history().

9781484207383_Fig10-03.jpg

Figure 10-3. An illustration of method is_history()’s use

The following is an example of method is_history() from intersection table package LOGICAL_ASSIGNMENT:

FUNCTION is_history(
ain_worker_id                  in     LOGICAL_ASSIGNMENTS.worker_id%TYPE,
aid_active_date                in     LOGICAL_ASSIGNMENTS.active_date%TYPE,
aid_inactive_date              in     LOGICAL_ASSIGNMENTS.inactive_date%TYPE)
return                                boolean is

pragma AUTONOMOUS_TRANSACTION;

n_count                               number;

begin
  select count(1)
  into   n_count
  from   LOGICAL_ASSIGNMENTS
  where  worker_id                        = ain_worker_id
  and    active_date                     <=
           nvl(aid_inactive_date, DATE_.d_MAX)
  and    nvl(inactive_date, DATE_.d_MAX) >= aid_active_date;
  commit;

  if nvl(n_count,0) > 0 then
    return TRUE;
  else
    return FALSE;
  end if;
end is_history;

At first glance, most programmers think I’ve coded the date comparisons in this method incorrectly, but examine Figure 10-4. What I’m asking the database in the query is, “Are there any entries where the active date takes place before the end of the range (inactive date) I’m specifying, and, at the same time, the inactive date takes place after the start of the range (active date) I’m specifying?” If there are, then those entries are active at some point (or points) in time during the range I’ve specified.

9781484207383_Fig10-04.jpg

Figure 10-4. An illustration of overlapping historical entries

The use of an autonomous transaction in this method allows it to be used to verify that no existing entries overlap the history of the new entry, without generating a mutating table error.
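The overlap test itself is compact enough to sketch outside the database. Here is the same comparison in Python (a hedged illustration, not the book’s code): two ranges overlap exactly when each one begins on or before the other one ends.

```python
from datetime import date

D_MAX = date(9999, 12, 31)  # stand-in for DATE_.d_MAX

def is_history(active, inactive, new_active, new_inactive):
    # Mirrors the WHERE clause: the existing active date is on or before
    # the end of the specified range, AND the existing inactive date is
    # on or after the start of the specified range
    return (active <= (new_inactive or D_MAX) and
            (inactive or D_MAX) >= new_active)
```

Note that two entries that merely touch (one’s inactive date equals the other’s active date) count as overlapping under these inclusive comparisons, matching the <= and >= in the SQL.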

Hierarchical Table Methods

Hierarchical entities—those that have a relationship with themselves—have three additional methods, which are centered on the hierarchy. A common example of a hierarchical entity is one that stores organization information, as the LOGICAL_WORKPLACES entity does in the example schema from Chapter 1. From the data in this entity, we can build an organization chart, as in Figure 10-5.

9781484207383_Fig10-05.jpg

Figure 10-5. An example of a hierarchical relationship

Determining who reports to an intermediate organization, like Manufacturing in Figure 10-5, can be quite complex and time-consuming. One option is to use Oracle’s CONNECT BY clause in a SQL SELECT statement, but the retrieval times can be onerous. So as a performance work-around, I create what I call a “contextual primary key string,” which is made up of a list of primary keys in the hierarchy separated by some predetermined separator character. I prefer to use a period (.).

When I want to identify everyone who works for Manufacturing, I look up the ID context for Manufacturing, and then ask for any entry that is LIKE it. Let’s investigate this design further by looking at a method to create the ID context.

create_id_context( )

Method create_id_context() returns an ID context string for the specified parent entry and new logical workplace ID. An application programmer must make sure that the ID context column in the associated table is set to the return value of FUNCTION create_id_context(ain_parent_id, ain_id) any time a row is inserted into the database. Here’s an example of the method for hierarchical content table package LOGICAL_WORKPLACE:

FUNCTION create_id_context(
ain_parent_id                  in     LOGICAL_WORKPLACES.parent_id%TYPE,
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE)
return                                varchar2 is

v_id_context                          LOGICAL_WORKPLACES.id_context%TYPE;

begin
  v_id_context := get_parent_id_context(ain_parent_id);

  if v_id_context is not NULL then
    return substr(v_id_context||'.'||to_char(ain_id), 1, 2000);
  else
    return to_char(ain_id);
  end if;
end create_id_context;

Let’s work through an example using the primary key ID values for the organizations represented in Figure 10-5. Table 10-3 lists the primary key values for some of the organizations shown in that figure.

Table 10-3. Selected LOGICAL WORKPLACE Entries

Table10-3

The Chemicals department (ID = 8) is part of the Manufacturing business unit (ID = 5). So when FUNCTION create_id_context() is called, its parent’s ID value, 5, is passed as ain_parent_id, and its ID value, 8, is passed as ain_id.

First, the function retrieves the parent’s ID context, 1.5. Then the method appends a period and the ID value, 8, to the parent’s ID context, to get 1.5.8. This is the value returned by the function.
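The string building is simple enough that a short sketch may make it concrete. Here’s the same logic in Python (an illustration only; the 2,000-character cap matches the PL/SQL substr call):

```python
def create_id_context(parent_id_context, new_id):
    # Append the new primary key value to the parent's ID context,
    # separated by a period; a root entry has no parent context
    if parent_id_context:
        return (parent_id_context + '.' + str(new_id))[:2000]
    return str(new_id)
```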

Once that value is stored in the database, if I want a list of everyone under Manufacturing, all I need to do is add the following SQL to a query:

where id_context like aiv_id_context||'%'
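That LIKE comparison is nothing more than a string-prefix test. This small sketch (with hypothetical ID context values) shows the effect:

```python
id_contexts = ['1', '1.5', '1.5.8', '1.5.9', '1.6', '1.6.12']

def under(prefix, contexts):
    # Equivalent to: where id_context like prefix||'%'
    # (If sibling IDs could share leading digits, say 5 and 50, you would
    # want to match the prefix exactly or prefix + '.' to be safe.)
    return [c for c in contexts if c.startswith(prefix)]
```

With the Manufacturing ID context, 1.5, under('1.5', id_contexts) picks out Manufacturing itself and everything beneath it.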

get_code_context( )

Method get_code_context() returns the hierarchical list of code values as a context string for the specified ID. For example, executing FUNCTION get_code_context(ain_id) will return the string VDMI.MFG.CHEM for the Chemicals department listed in Table 10-3. Here’s an example of the method from hierarchical content table package LOGICAL_WORKPLACE:

FUNCTION get_code_context(
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE)
return                                varchar2 is

cursor c_code_context(
ain_id       in     LOGICAL_WORKPLACES.id%TYPE) is
select upper(code) code
from   LOGICAL_WORKPLACES
connect by prior parent_id = id
start with       id        = ain_id
order by level desc;

v_code_context                        varchar2(2000);

begin
  for r_code_context in c_code_context(ain_id) loop
    v_code_context := substr(v_code_context||'.'||r_code_context.code, 1, 2000);
  end loop;
  return v_code_context;
end get_code_context;

In this case, I do use the CONNECT BY SQL clause in the method’s cursor, c_code_context, to get the list of code values for the context string.
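If the CONNECT BY traversal is new to you, it may help to see the same walk written imperatively. This Python sketch (hypothetical in-memory rows consistent with Table 10-3) climbs the parent chain bottom-up and then joins the codes root-first, which is what ORDER BY level DESC accomplishes in the cursor:

```python
# Hypothetical rows: id -> (code, parent_id)
workplaces = {
    1: ('VDMI', None),
    5: ('MFG', 1),
    8: ('CHEM', 5),
}

def get_code_context(workplace_id):
    codes = []
    current = workplace_id
    while current is not None:
        code, parent_id = workplaces[current]
        codes.append(code.upper())   # like upper(code) in the cursor
        current = parent_id
    # Reverse so the root comes first, then join with periods
    return '.'.join(reversed(codes))[:2000]
```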

get_id_context( )

Method get_id_context() returns the ID context for the specified ID. For example, calling FUNCTION get_id_context(ain_id) for Manufacturing (ID = 5), as in Table 10-3, will return the value 1.5. Here’s an example of the method from hierarchical content table package LOGICAL_WORKPLACE:

FUNCTION get_id_context(
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE)
return                                varchar2 is

v_id_context                          LOGICAL_WORKPLACES.id_context%TYPE;

begin
  if ain_id is not NULL then
    select id_context
    into   v_id_context
    from   LOGICAL_WORKPLACES
    where  id = ain_id;
  end if;

  return v_id_context;
end get_id_context;

This method is used by method create_id_context() to get the ID context for a new entry’s parent entry.

get_name_context( )

Similar to its sibling get_code_context(), get_name_context() returns the hierarchical list of name values as a context string for the specified ID. For example, executing FUNCTION get_name_context(ain_id) will return the string Very Dirty Manufacturing, Inc..Manufacturing.Chemicals for the Chemicals department listed in Table 10-3. Here’s an example of the method from hierarchical content table package LOGICAL_WORKPLACE:

FUNCTION get_name_context(
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE)
return                                varchar2 is

cursor c_name_context(
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE) is
select initcap(name) name
from   LOGICAL_WORKPLACES
connect by prior parent_id = id
start with       id        = ain_id
order by level desc;

v_name_context                        varchar2(2000);

begin
  for r_name_context in c_name_context(ain_id) loop
    v_name_context := substr(v_name_context||'.'||r_name_context.name, 1, 2000);
  end loop;
  return v_name_context;
end get_name_context;

As with get_code_context(), I use the CONNECT BY SQL clause in the method’s cursor, c_name_context, to get the list of name values for the context string.

That’s almost the entire list of table package or object TYPE methods. A few more are also used by most executables, so I will discuss them next.

The Black Box

A “black box” is something that you use but have no idea how it works. A lot of people drive cars, yet they don’t actually know how cars work. Even more people use electronic devices, yet they don’t know how those work either. To these users, cars and electronic devices are black boxes.

These black boxes have an interface. In the case of a car, the interface consists of an ignition switch, steering wheel, gas pedal, brake pedal, and so on. But beyond the use of the interface, users know little about how gasoline put in the tank turns into mechanical energy that takes them from place to place. And why should they?

In this section, I’m going to introduce you to a handful of methods that are used in almost every table package, object TYPE, or role package. Let’s start with those that are common to all table packages and object TYPEs.

Table Methods

A couple of methods are applicable to almost all, if not all, table packages and object TYPEs: get_id and inactivate(). The method get_id is shown without parentheses because it takes no arguments. From my point of view, these “black box” methods should exist in every table package and object TYPE. Let’s start with get_id.

get_id

Method get_id allocates and returns a new primary key value from an underlying Oracle sequence. It acts as an abstraction layer between the underlying Oracle sequence and PL/SQL. This abstraction layer allows you to later change the means by which you allocate a primary key value without having to recode any PL/SQL program units that call the method.

The following is an example of the get_id method from code table package WORKER_TYPE:

FUNCTION get_id
return                                WORKER_TYPES.id%TYPE is

n_id                                  WORKER_TYPES.id%TYPE;

begin
  select WORKER_TYPES_ID.nextval
  into   n_id
  from   SYS.DUAL;

  return n_id;
end get_id;

inactivate( )

Rarely have I encountered an application where I actually delete data from a table. Instead, applications work with the idea of inactivating or invalidating an entry. Method inactivate() handles inactivation.

If you have a code table, you can’t delete a code value that’s in use (one that has any relationships). Yet you may want to stop users from using an “outdated” code value. To do so, you write your code value’s SELECT statement to utilize the code’s inactive date, and then you set a code’s inactive date so users can no longer select it from a list of values.

Along the same line of thinking, if you have a content table, and a particular entry has relationships, but you no longer want users to select the content table entry, you can set its inactive date and prevent it from appearing in the presentation layer. Just as with the code, the content entry has historical significance but is no longer to be used.

With an intersection table, the concept is slightly different. With an intersection, the assignment of a relationship between two tables exists for the period of time defined from its active date through its inactive date. So when you inactivate an intersection entry, you’re simply defining the end of a historical relationship.

Regardless of the type of table for which an entry is inactivated, inactivate() does the same thing: it sets the inactive date. The following is an example of an inactivate() method from content table package LOGICAL_WORKPLACE:

PROCEDURE inactivate(
ain_id                         in     LOGICAL_WORKPLACES.id%TYPE,
aid_inactive_date              in     LOGICAL_WORKPLACES.inactive_date%TYPE) is

begin
  update LOGICAL_WORKPLACES
  set    inactive_date = aid_inactive_date
  where  id            = ain_id;
end inactivate;

That’s straightforward enough. Now let’s look at methods used by all packages and TYPEs.

Universal Methods

I expect some methods to exist in every package or TYPE. You’ve already been introduced to methods help and test. I have one new one here named initialize. Let’s start with help.

help

Method help should display help text for the package or TYPE for which it is executed. PROCEDURE help was covered extensively in Chapter 9. You should code this method to display help text in whatever manner you find suitable. What I showed you in Chapter 9 was just an example. But whatever you do, you must be consistent; otherwise, programmers won’t consider the help reliable documentation, and you’ll have defeated the purpose of coding the method altogether.

The following is an example of the method from code table package WORKER_TYPE:

PROCEDURE help is

begin
  TEXT_HELP.process('WORKER_TYPE');
end help;

In this example, PROCEDURE process in package TEXT_HELP will format and display the contents of the associated specification.

initialize

I know I have not talked about method initialize before, but I feel it’s an important part of testing and should exist in any package that uses its initialization section. Just in case you don’t know what I mean by “initialization section,” let me elaborate.

A package body is a PL/SQL block. Because it is a PL/SQL block, it has a declaration section, which is where you code any instance variables, functions, procedures, and so on, and it has an executable section. Really, it does. If, after all the functions and procedures you coded, you add the keyword BEGIN, any code from the keyword BEGIN to keyword END at the end of the package body is executed once, and only once, any time the package is instantiated. By “instantiated,” I mean loaded into memory and initialized. There it is, that word: initialized.

So for any package where I use the “initialization section,” I add a public PROCEDURE initialize to the package. Why? So I can test the code in the initialization section.

One of the mysterious errors developers, support personnel, and end users encounter is NO_DATA_FOUND the first time they try to use a particular package, and then the error magically goes away. This is almost always caused by a SQL SELECT statement executed in the initialization section of a package that can’t find a matching row in the database. It happens once, and only once, because the package is initialized once, and only once.

A problem like this can be quite elusive. You can go crazy troubleshooting the problem and then testing that your solution works. How do you fix it? Create a PROCEDURE initialize where you put your initialization code, and then execute that procedure in the initialization section of your package. Then, you can execute PROCEDURE initialize as many times as you like during testing, without having to log out and in, in order to get the database to initialize the package.

Here’s an example of the method:

PROCEDURE initialize is

begin
  n_contractor := get_id(v_contractor);
  n_employee   := get_id(v_employee);
  n_unknown    := get_id(v_unknown);
end initialize;

Then add a call to initialize in the initialization section of your package, like so:

...
-- Initialization section
begin
  initialize;

end WORKER_TYPE;

Once this code is in place, any time you access the package the first time, the package will execute the code in method initialize. And any time you need to test the code from the initialization section of a package, all you need do is execute method initialize. Take a look at source file top_100_names.pkb for a complete example.
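The same trick translates to any language with module-level state. Here’s a hedged Python analogy (the names are mine): the one-time initialization code lives in an explicit initialize() so tests can re-run it at will, while the module still calls it once on first load:

```python
# Package-style state, populated once when the module is first loaded
codes = {}

def _get_id(code):
    # Stand-in for a SELECT against a code table; a missing code raises
    # KeyError, the analogue of NO_DATA_FOUND
    table = {'CONTRACTOR': 1, 'EMPLOYEE': 2, 'UNKNOWN': 3}
    return table[code]

def initialize():
    codes['contractor'] = _get_id('CONTRACTOR')
    codes['employee']   = _get_id('EMPLOYEE')
    codes['unknown']    = _get_id('UNKNOWN')

# The "initialization section": runs once, at first load
initialize()
```

Because initialize() is public, a test can call it again at any time instead of forcing the module to be reloaded.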

test

The test procedure should be coded so it performs all necessary tests on its package or TYPE’s methods. I covered this method extensively in Chapter 8. Here’s a partial example of the method from code table package WORKER_TYPE:

PROCEDURE test is

n_id                                  WORKER_TYPES.id%TYPE;
v_code                                WORKER_TYPES.code%TYPE;
v_description                         WORKER_TYPES.description%TYPE;

begin
  -- Send feedback that the test ran
  pl('WORKER_TYPE.test()');

  -- Clear the last set of test results
  &_USER..TEST.clear('WORKER_TYPE');

  -- First, we need some test values

  -- Let's make sure they don't already exist: DELETE
  &_USER..TEST.set_test('WORKER_TYPE', 'DELETE', 0,
    'Delete test entries');
  begin
    delete WORKER_TYPES
    where  code in (
      &_USER..TEST.v_TEST_30,
      &_USER..TEST.v_TEST_30_1,
      &_USER..TEST.v_TEST_30_2);
    &_USER..TEST.ok();
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  -- Now let's add three test codes: INSERT
  &_USER..TEST.set_test('WORKER_TYPE', 'INSERT', 1,
    'Insert 3 test entries');
  begin
    insert into WORKER_TYPES (
           id,
           code,
           description,
           active_date,
           inactive_date )
    values (
           get_id(),
           &_USER..TEST.v_TEST_30,
           &_USER..TEST.v_TEST_80,
           &_USER..TEST.d_TEST_19000101,
           &_USER..TEST.d_TEST_99991231 );

    insert into WORKER_TYPES (
           id,
           code,
           description,
           active_date,
           inactive_date )
    values (
           get_id(),
           &_USER..TEST.v_TEST_30_1,
           &_USER..TEST.v_TEST_80,
           &_USER..TEST.d_TEST_19000101,
           &_USER..TEST.d_TEST_99991231 );

    insert into WORKER_TYPES (
           id,
           code,
           description,
           active_date,
           inactive_date )
    values (
           get_id(),
           &_USER..TEST.v_TEST_30_2,
           &_USER..TEST.v_TEST_80,
           &_USER..TEST.d_TEST_19000101,
           &_USER..TEST.d_TEST_99991231 );

    &_USER..TEST.ok();
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  -- Now that we have test entries,
  -- let's test the package methods
  &_USER..TEST.set_test('WORKER_TYPE', 'get_id()', 2,
    'Get the ID for the specified code');
  begin
    n_id := get_id(&_USER..TEST.v_TEST_30);

    if n_id > 0 then
      &_USER..TEST.ok();
    else
      &_USER..TEST.error();
    end if;
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  &_USER..TEST.set_test('WORKER_TYPE', 'get_code_descr()', 3,
    'Get the code and description for the specified ID');
  begin
    get_code_descr(
      n_id,
      v_code,
      v_description);
    if v_code        = &_USER..TEST.v_TEST_30 and
       v_description = &_USER..TEST.v_TEST_80 then
      &_USER..TEST.ok();
    else
      &_USER..TEST.error();
    end if;
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  &_USER..TEST.set_test('WORKER_TYPE', 'get_code_id_descr()', 4,
    'Get the code, ID, and description for the specified code');
  begin
    v_code := &_USER..TEST.v_TEST_30;
    get_code_id_descr(
      v_code,
      n_id,
      v_description);
    if v_code           = &_USER..TEST.v_TEST_30 and
       n_id             > 0                      and
       v_description    = &_USER..TEST.v_TEST_80 then
      &_USER..TEST.ok();
    else
      &_USER..TEST.error();
    end if;
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  &_USER..TEST.set_test('WORKER_TYPE', 'get_code_id_descr()', 5,
    'Get the code, ID, and description for the specified date');
  begin
    v_code := 'TEST';
    -- This test should raise a TOO_MANY_ROWS exception
    -- because at least three duplicate values will exist
    -- on the date specified
    get_code_id_descr(
      v_code,
      n_id,
      v_description,
      &_USER..TEST.d_TEST_99991231);
    if v_code           = &_USER..TEST.v_TEST_30 and
       n_id > 0                 and
       v_description    = &_USER..TEST.v_TEST_80 then
      &_USER..TEST.ok();
    else
      &_USER..TEST.error();
    end if;
  exception
    when TOO_MANY_ROWS then
      &_USER..TEST.ok();
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  &_USER..TEST.set_test('WORKER_TYPE', 'help()', 6,
    'Display help');
  begin
    help();
    &_USER..TEST.ok();
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  -- Let's make sure they don't already exist: DELETE
  &_USER..TEST.set_test('WORKER_TYPE', 'DELETE', 7,
    'Delete test entries');
  begin
    delete WORKER_TYPES
    where  code in (
      &_USER..TEST.v_TEST_30,
      &_USER..TEST.v_TEST_30_1,
      &_USER..TEST.v_TEST_30_2);
    &_USER..TEST.ok();
  exception
    when OTHERS then
      &_USER..TEST.error(SQLERRM);
  end;

  &_USER..TEST.set_test('WORKER_TYPE', NULL, NULL, NULL);
  &_USER..TEST.success();
end test;

If you include and code method test in every package, then you can totally automate the testing of your PL/SQL executables, as I outlined at the end of Chapter 9.

Divide and Conquer

The traditional concept of “divide and conquer” is a tactic to destroy teamwork. Your actions cause dissension among team members, so they no longer work together toward a mutual goal; instead, each works on his own goals. Hence, they are no longer a team.

So where you had a team of discrete members working together at various tasks to achieve a common goal, you end up with a group of individuals competing with each other to achieve many goals. The latter is what we normally end up with in an application.

  • The database tries to optimize its work around the overall use of reads and writes to the file system while maintaining a transaction context.
  • SQL tries to optimize the overall access to data in the database.
  • PL/SQL tries to optimize the overall performance of its executables.
  • The communication network tries to optimize the overall consumption of bandwidth.
  • The application server tries to optimize the overall performance of its hosted executables.
  • The presentation layer tries to optimize the overall end-user experience.

What do these layers of technology all have in common? They are all optimized for a fictitious set of events: “the average.” They work as a group of individuals competing with each other to achieve their own goals. So how can you turn them into a team whose members work together for a common goal? You divide and conquer. But this time, divide and conquer means you divide up a larger goal into tasks, and then have the best team member perform each task. I know, it’s 180 degrees from the traditional definition of divide and conquer, but it works.

For example, if there is data-intensive work to be accomplished, it should be done in the database. Retrieving data from the database through a network to an application server, or worse, a presentation layer, in order to accomplish data migration or data processing is ludicrous. In addition, you should use SQL and PL/SQL to manipulate data. When you combine these two languages, you have the easiest-to-code yet most powerful combination for data processing that has ever existed.

In addition, if there is behavior related to data, like so-called business rules, it should reside in the database. Otherwise, each additional layer may have its own version of “the rules,” and you’ll end up with business rule anarchy. As I tried to convince you earlier, behavior is just as important to store as the data to which it “belongs.”

Overall, I’m suggesting that you use PL/SQL and SQL to create actors—program units to fulfill particular roles—in order to accomplish the following kinds of tasks:

  • Data migration
  • Data processing
  • Reporting

Accordingly, this section will cover each of these roles and the polymorphic commands they typically employ.

Data Migration

As I stated earlier, my concept of data migration (DM) is a one-time task where you move external data into a highly structured schema in a database. The notion of it being a “one-time” task is important here. If you needed to move external data into a database on a regular basis, I would call it data processing instead.

Here’s the high-level idea: load the external data into staging tables with whatever method is quickest, and then use a PL/SQL data processing program unit to move the data from the staging tables into the application’s data structure.

As illustrated in Figure 10-6, you can use an Oracle database link if you’re moving data from one database to another, or you can use C/C++ with the Oracle Call Interface (OCI) API, Java and JDBC, XML, or better yet SQL*Loader, if you’re loading the data from a file system into the database. Use whatever method is the most efficient.

9781484207383_Fig10-06.jpg

Figure 10-6. An illustration of data migration

After the data is loaded into the database, you can then write a PL/SQL package to move the data from the staging tables into a target application’s schema. Such a package would have at least one public method: process. When executed, process would do the following:

  1. Use the package’s initialization section to load code value IDs that can then be used in a cross-reference IF statement to map external code values to ones in the target schema.
  2. Initialize the set_row() counters—the public package variables n_inserted and n_updated—to zero for each target table.
  3. Use a cursor and cursor FOR LOOP to read through the rows of the staging table.
  4. Use the set_row() method on the target table(s) to insert the external data.
  5. Display the n_inserted and n_updated counts upon completion in order to confirm that the data migration completed successfully.

After you’ve coded PROCEDURE process, all you would need to do is execute the method in order to move the data from a staging table to a (or set of) corresponding application table(s).
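The steps above can be sketched in miniature. This Python illustration (hypothetical names; a stand-in for set_row() keyed on a unique business key, not the book’s WORKER package) shows the counter reset, the staging loop, the upsert, and the final counts:

```python
n_inserted = 0
n_updated = 0
target = {}  # unique business key -> row

def set_row(key, row):
    # Insert the row if its key is new; otherwise update it,
    # bumping the matching counter either way
    global n_inserted, n_updated
    if key in target:
        n_updated += 1
    else:
        n_inserted += 1
    target[key] = row

def process(staging_rows):
    global n_inserted, n_updated
    n_inserted = n_updated = 0              # step 2: reset the counters
    for row in staging_rows:                # step 3: read the staging rows
        set_row(row['external_id'], row)    # step 4: upsert into the target
    return n_inserted, n_updated            # step 5: report the counts
```

Running process() twice over the same staging data would report all rows as updates the second time, which is exactly the idempotent behavior you want from a set_row()-based migration.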

process

Method process is used to start an on-demand or polling process. The following is an example of the method from data migration package TOP_100_NAME:

PROCEDURE process is

-- This is the cursor for the last names.
cursor c_last is
select last_name
from   TOP_100_LAST_NAMES
order by 1;

-- This is the cursor for the first names.
cursor c_first is
select first_name,
       gender_code
from   TOP_100_FIRST_NAMES
order by 1;

-- This is the cursor for the middle initials.
cursor c_middle is
select letter
from   A_THRU_Z
order by 1;

-- This is the number of seconds since midnight.
-- I'll use it to profile my code's performance.
n_start                               number :=
  to_number(to_char(SYSDATE, 'SSSSS'));

-- I'll use this to keep track of how many rows were selected
n_selected                            number := 0;

-- Here, I declare a record anchored to the table so
-- I can set the column values and then insert using
-- the record.
r_worker                              WORKERS%ROWTYPE;

begin
  -- Reset the insert/update counters for set_row()
  WORKER.n_inserted := 0;
  WORKER.n_updated  := 0;

  -- Loop through the last names
  for r_last in c_last loop

    -- While looping through the last names,
    -- loop through the first names
    for r_first in c_first loop

      -- While looping through the last and first names
      -- loop through the 26 letters in the English
      -- Alphabet in order to get middle initials
      for r_middle in c_middle loop
        n_selected                := n_selected + 1;
        -- Initialize the record

        -- Set the PK to NULL so set_row() determines
        -- whether or not the entry already exists
        -- by its unique keys
        r_worker.id               := NULL;

        -- Flip flop from contractor to employee and back again
        if r_worker.worker_type_id = WORKER_TYPE.contractor then
          r_worker.worker_type_id := WORKER_TYPE.employee;
        else
          r_worker.worker_type_id := WORKER_TYPE.contractor;
        end if;

        -- Set the External ID, UK1, to NULL so set_row() determines
        -- whether or not the entry already exists
        -- by its unique keys
        r_worker.external_id      := NULL;

        -- The first, middle, and last names come from the cursors
        r_worker.first_name       := r_first.first_name;
        r_worker.middle_name      := r_middle.letter||'.';
        r_worker.last_name        := r_last.last_name;

        -- get the name using the table's package
        r_worker.name             := WORKER.get_formatted_name(
          r_worker.first_name, r_worker.middle_name, r_worker.last_name);

        -- get the date from determinate function create_birth_date()
        r_worker.birth_date       := create_birth_date(r_worker.name);

        -- select the corresponding ID value
        if r_first.gender_code = 'F' then
          r_worker.gender_type_id := GENDER_TYPE.female;
        else
          r_worker.gender_type_id := GENDER_TYPE.male;
        end if;

        -- Insert the row into the database
        begin
          WORKER.set_row(r_worker);
        exception
          when OTHERS then
            raise_application_error(-20001, SQLERRM||
              ' on call WORKER.set_row()'||
              ' in TOP_100_NAME.process()');
        end;
      end loop; -- c_middle

      commit;  -- commit every 26 rows

    end loop; -- c_first

  end loop; -- c_last

  -- Display the results
  pl(to_char(n_selected)||' rows selected');
  pl(to_char(WORKER.n_inserted)||' rows inserted');
  pl(to_char(WORKER.n_updated)||' rows updated');
  pl('in '||to_char(to_number(to_char(SYSDATE, 'SSSSS')) - n_start)||
    ' seconds.');
end process;

When migrating data, you should be able to take advantage of methods get_row() and set_row() on the target content or intersection tables. Now let’s take a look at an on-demand, data processing PL/SQL program unit.

On-Demand Data Processing

If you have a task that needs to be done now, or on an hourly, daily, weekly, or monthly basis, you’ll use some kind of external scheduler, like cron in UNIX or Linux or Scheduled Tasks in Windows, or perhaps the internal Oracle job queue, to execute the process. I call this kind of processing on-demand data processing (ODDP).

Just like its data migration sibling, on-demand data processing is executed through PROCEDURE process. What’s different here is the fact that this is not a one-time event. You expect to use this process over and over again. Accordingly, I usually create a table to log when the process runs, along with any performance statistics like how long it took to run, how many rows it inserted, how many rows it updated, and so on.

Since I want to keep track of when a process executes, I now have three common method names used for an on-demand data processing program unit: process, start_logging(), and stop_logging(). Let’s look at process first.

process

As before, process is used to start an on-demand or polling process. This time, process incorporates the use of a log so it can record that the process took place, along with some performance statistics. The following is an example of the method from package REPORT_STAGING_TABLE:

PROCEDURE process is

-- Get a list of all the tables that start with REPORT_ in the USER's schema
cursor c_table is
select table_name
from   SYS.USER_TABLES
where  table_name like 'REPORT\_%' escape '\'
order by 1;

n_selected                            number := 0;
n_deleted                             number := 0;

begin
  -- Start logging
  ON_DEMAND_PROCESS_LOG.start_logging(
    'REPORT_STAGING_TABLE', 'process');

  -- For each report "staging" table, delete any data over 2 days old
  for r_table in c_table loop
    n_selected := n_selected + 1;

    execute immediate 'delete '||
      r_table.table_name||
      ' where insert_date < SYSDATE - 2';

    n_deleted := n_deleted + nvl(sql%rowcount, 0);
  end loop;

  -- Stop logging
  ON_DEMAND_PROCESS_LOG.stop_logging(
    n_selected, NULL, NULL, n_deleted, NULL);
exception
  when OTHERS then
    -- Stop logging, but report an error
    ON_DEMAND_PROCESS_LOG.stop_logging(
      n_selected, NULL, NULL, n_deleted, SQLERRM);
    raise;
end process;

If you examine this example, you’ll see that the process starts out by calling method start_logging() at the beginning, and then calls method stop_logging() at the end. The underlying log package, ON_DEMAND_PROCESS_LOG, keeps track of the start and stop time, and then records the statistics to table ON_DEMAND_PROCESS_LOGS when method stop_logging() is called.
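Stripped of the database specifics, that pairing works like this hedged Python sketch (in-memory stand-ins for the package state and the log table; the names are mine):

```python
import time

_start = None   # package-level state: when logging began
log_rows = []   # stand-in for table ON_DEMAND_PROCESS_LOGS

def start_logging(object_name, method_name):
    # Remember the start time and insert a "starting" row
    global _start
    _start = time.time()
    log_rows.append({'object_name': object_name,
                     'method_name': method_name})

def stop_logging(selected=None, inserted=None, updated=None,
                 deleted=None, error=None):
    # Record the statistics and elapsed time on the starting row
    log_rows[-1].update(rows_selected=selected, rows_inserted=inserted,
                        rows_updated=updated, rows_deleted=deleted,
                        elapsed_time=time.time() - _start,
                        result=error or 'OK')
```

Because stop_logging() is also called from the exception handler with SQLERRM, a failed run still leaves a log row, with the error text in place of the OK result.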

The process itself deletes any staged reporting data that is over two days old. As with its data migration sibling, when processing data, you should be able to take advantage of methods get_row() and set_row() on any target content or intersection tables.
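
Running the on-demand process is then just a matter of executing the method from SQL*Plus (or any other client). Here’s a hypothetical session sketch; enabling serveroutput is what lets you see the statistics echoed by stop_logging():

```sql
-- Run the on-demand process and show its logged statistics
set serveroutput on size 1000000;
execute REPORT_STAGING_TABLE.process;
```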

Now let’s take a look at what supporting method start_logging() does.

start_logging( )

Method start_logging() stores the time it was called in memory, allocates a new primary key value, and then inserts a “starting” row in a log table. In this case, it inserts a row into table ON_DEMAND_PROCESS_LOGS, as defined in the following partial listing from script on_demand_process_logs.tab:

create table ON_DEMAND_PROCESS_LOGS (
id                             number                         not null,
object_name                    varchar2(30)                   not null,
method_name                    varchar2(30)                   not null,
rows_selected                  number,
rows_inserted                  number,
rows_updated                   number,
rows_deleted                   number,
result                         varchar2(256),
elapsed_time                   number,
insert_user                    varchar2(30)  default USER     not null,
insert_date                    date          default SYSDATE  not null,
update_user                    varchar2(30)  default USER     not null,
update_date                    date          default SYSDATE  not null)

...

This log table holds information about how long it took to execute the process, row processing statistics, and the time when the process was started and stopped. The following is an example of the method start_logging() from package ON_DEMAND_PROCESS_LOG:

PROCEDURE start_logging(
aiv_object_name                in     ON_DEMAND_PROCESS_LOGS.object_name%TYPE,
aiv_method_name                in     ON_DEMAND_PROCESS_LOGS.method_name%TYPE) is

pragma AUTONOMOUS_TRANSACTION;

begin
  n_start := to_number(to_char(SYSDATE, 'SSSSS'));
  n_id    := get_id();

  insert into ON_DEMAND_PROCESS_LOGS (
         id,
         object_name,
         method_name )
  values (
         n_id,
         upper(aiv_object_name),
         upper(aiv_method_name) );
  commit;
end start_logging;

Variable n_start in method start_logging() is an instance (package body) variable used to capture the time when the method was called. Similarly, variable n_id is used to capture the new log entry’s primary key so the same entry can be updated in method stop_logging(). In addition, this method uses pragma autonomous_transaction, so a process that starts but fails will still leave a starting entry in the log.

Now let’s look at stop_logging().

stop_logging( )

Method stop_logging() takes the row processing information and a result description passed to it, and uses that information along with the stored start time and primary key to update the starting log entry. As part of updating the log entry, the method calculates the elapsed time in seconds between the start and stop of logging. The following is an example of the method from package ON_DEMAND_PROCESS_LOG:

PROCEDURE stop_logging(
ain_rows_selected              in     ON_DEMAND_PROCESS_LOGS.rows_selected%TYPE,
ain_rows_inserted              in     ON_DEMAND_PROCESS_LOGS.rows_inserted%TYPE,
ain_rows_updated               in     ON_DEMAND_PROCESS_LOGS.rows_updated%TYPE,
ain_rows_deleted               in     ON_DEMAND_PROCESS_LOGS.rows_deleted%TYPE,
aiv_result                     in     ON_DEMAND_PROCESS_LOGS.result%TYPE) is

pragma AUTONOMOUS_TRANSACTION;

n_elapsed_time                        number;

begin
  n_elapsed_time := to_number(to_char(SYSDATE, 'SSSSS')) - n_start;

  update ON_DEMAND_PROCESS_LOGS
  set    rows_selected            = ain_rows_selected,
         rows_inserted            = ain_rows_inserted,
         rows_updated             = ain_rows_updated,
         rows_deleted             = ain_rows_deleted,
         result                   = aiv_result,
         elapsed_time             = n_elapsed_time,
         update_user              = USER,
         update_date              = SYSDATE
  where  id = n_id;
  commit;

  n_id := NULL;

  -- Display the results
  if ain_rows_selected is not null then
    pl(to_char(ain_rows_selected)||' rows selected.');
  end if;
  if ain_rows_inserted is not null then
    pl(to_char(ain_rows_inserted)||' rows inserted.');
  end if;
  if ain_rows_updated is not null then
    pl(to_char(ain_rows_updated)||' rows updated.');
  end if;
  if ain_rows_deleted is not null then
    pl(to_char(ain_rows_deleted)||' rows deleted.');
  end if;
  if aiv_result is not null then
    pl(aiv_result);
  end if;
  pl('Elapsed time: '||to_char(n_elapsed_time)||' seconds.');
end stop_logging;

In addition to recording the log statistics, stop_logging() also echoes those statistics to the screen if serveroutput is set to on in SQL*Plus. Like its sibling start_logging(), this method uses pragma autonomous_transaction, so a process that fails can still leave a completed log entry.

Having log information about when on-demand processes execute is an important part of managing their use. Access to statistics about each on-demand process is even more valuable. Using these statistics, you can determine whether a process is fast and efficient, and make adjustments accordingly.
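
For example, an ad hoc query like the following (my own sketch, not part of the packages above) will show you which runs took the longest:

```sql
-- Show the slowest logged runs for each on-demand process
select object_name,
       method_name,
       rows_selected,
       rows_deleted,
       elapsed_time,
       insert_date
from   ON_DEMAND_PROCESS_LOGS
order by elapsed_time desc nulls last;
```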

If you find you’re running an on-demand process every few minutes, or if you’re interested in offloading a long-running process from the presentation layer, then the process is probably better done as a polling process. Let’s look at that next.

Polling Data Processing

Unlike an on-demand process, a polling data process (PDP) just keeps running. It follows three high-level steps.

  1. Check a queue for commands.
  2. Process the “next” command.
  3. Sleep for a predetermined period of time, then go back to step 1.

I’ll give you three common examples here of using “divide and conquer” to offload a portion of data processing to the database rather than the presentation or application layer. Doing so makes the presentation or application layer appear to be fast and efficient. And, since the best place to perform data processing is in the database, the offloaded portion of the larger process is fast and efficient. First, let’s look at an example with the application layer.

Let’s say you have web or message services that, in turn, update your database whenever there is a transaction on an external system. Rather than tie up the services with the entire business process, you can have your services update a set of staging tables in your database, and then have a polling process on your database wake up every so often and finish processing the staged data.

A second example is one where you have a long-running process in the presentation layer. Let’s say you have a security system where an administrator can assign access to worker data based on business organization, location, and so on. Rather than tie up the administrator’s computer while the mass assignments take place, you can have the presentation layer submit the processing request to your polling process’s queue. Then, when your polling process wakes up from its sleep period, it will process your long-running request in the background. What’s the result? To the end user using the presentation layer, the application appears easy, fast, and efficient.

A third example is along the same line of thinking. This time, however, you set up your end user’s reporting subsystem to submit long-running reports to your polling process’s queue, and then have your polling process email the end user with a link (URL) to a completed report when it finishes executing the report in the background. Once again, the end user’s experience is one where submitting the report is easy, fast, and efficient.

Using a polling process requires a different design than an on-demand process. Instead of logging each time a process executes (which you can still do), you need to create a queue for process and processing-related commands, and possibly a table to hold the current status of the polling process. I’ll start out by showing you an example of the process method, and then follow up with discussions on each of the supporting methods.

process

Once again, process is used to start an on-demand or polling process. In this case, it’s a polling data process. A polling process is one that does its thing, goes to sleep, and then wakes up to do its thing all over again and again. A polling process is dependent on a queue, at the very least, so you can tell the process to quit executing.

The queue can also be used to indicate what needs to be processed. For example, if you have a long-running process that must take place as part of inserting an entry into a particular table, you could have a trigger on that table insert a row into a polling process’s queue, like a primary key ID value, and then have the polling process finish the long-running process asynchronously.
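
For instance, a trigger along these lines could do the queuing. The table WORKERS and the sequence POLLING_PROCESS_QUEUES_ID here are hypothetical, used only for illustration:

```sql
-- A hypothetical trigger that queues each new row's primary key
-- so the polling process can finish the work asynchronously
create or replace trigger WORKERS_POLLING_AIR
after insert on WORKERS
for each row
begin
  insert into POLLING_PROCESS_QUEUES (
         id,
         command )
  values (
         POLLING_PROCESS_QUEUES_ID.nextval,
         'PROCESS '||to_char(:new.id) );
end;
/
```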

The following is an example of a polling process that you can find in package POLLING_PROCESS:

PROCEDURE process is

r_polling_process_queues              POLLING_PROCESS_QUEUES%ROWTYPE;

begin
  DEBUG.set_text('POLLING_PROCESS', 'Starting');

  -- Loop manually until the process receives a command to quit
  loop
    DEBUG.set_text('POLLING_PROCESS', 'Getting next command');

    -- Get the next command from the queue
    r_polling_process_queues :=
      POLLING_PROCESS_QUEUE.get_next(r_polling_process_queues);

    -- If it's time to quit, poll the queue once more to delete
    -- the quit command, and then exit
    if r_polling_process_queues.command =
         POLLING_PROCESS_QUEUE.v_QUIT then

      POLLING_PROCESS_STATUS.set_status('Quitting');

      DEBUG.set_text('POLLING_PROCESS', 'Quitting');

      r_polling_process_queues :=
        POLLING_PROCESS_QUEUE.get_next(r_polling_process_queues);
      exit;
    elsif r_polling_process_queues.command = 'DISABLE' then
      DEBUG.disable('POLLING_PROCESS');
    elsif r_polling_process_queues.command = 'ENABLE'  then
      DEBUG.enable('POLLING_PROCESS');
    end if;

    -- *** Now do all your groovy data processing here! ***

    POLLING_PROCESS_STATUS.set_status('Processing');

    DEBUG.set_text('POLLING_PROCESS',
      'I''m doing some groovy data processing at '||
      to_char(SYSDATE, 'HH:MI:SS'));

    -- *** End of your groovy data processing section.  ***

    POLLING_PROCESS_STATUS.set_status('Sleeping');

    DEBUG.set_text('POLLING_PROCESS', 'Sleeping');

    -- Sleep for 10 seconds
    SYS.DBMS_LOCK.sleep(10);

    DEBUG.set_text('POLLING_PROCESS', 'Awake');
  end loop;
  end loop;
end process;

This example has many lines that use DEBUG.set_text() to record what’s going on in the process as it runs. This debug logging, as it is called, can be turned on before running the process, or it can be turned on by executing method enable. Conversely, debug logging can be turned off by executing disable. This means you can enable and disable debug logging on this process from any other database session at any point in time while the process is running. That’s helpful when it comes to troubleshooting a new polling process.

This example also employs method quit to tell the polling process to stop executing. In addition, it updates a status table POLLING_PROCESS_STATUSES with the current status of the process, so you can determine what it’s working on at any point in time while it’s executing.

In this example, the process sleeps for only 10 seconds between processing cycles. I’ve coded it that way to make it easy to demonstrate. In practice, I normally have a polling process sleep 3 to 5 minutes (180 to 300 seconds).
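
Since process runs until it’s told to quit, you’ll normally want to submit it as a background database job rather than tie up your own session. Here’s a sketch using SYS.DBMS_SCHEDULER; the job name is my own invention:

```sql
-- Submit the polling process as a background job so it keeps
-- running after your session disconnects
begin
  SYS.DBMS_SCHEDULER.create_job(
    job_name   => 'POLLING_PROCESS_JOB',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'POLLING_PROCESS.process',
    enabled    => TRUE);
end;
/
```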

Let’s look at method get_next() next.

get_next( )

Method get_next() deletes the last item processed from the queue as specified by the passed parameter, and then retrieves the next command from a queue. In this case, it deletes and selects an entry from table POLLING_PROCESS_QUEUES. The following is an example of a queue table definition from file polling_process_queues.tab:

create table POLLING_PROCESS_QUEUES (
id                             number                         not null,
command                        varchar2(256)                  not null,
insert_user                    varchar2(30)  default USER     not null,
insert_date                    date          default SYSDATE  not null)

...

FUNCTION get_next() is the “heart and soul” of the polling process. Here’s an example of the method from table package POLLING_PROCESS_QUEUE:

FUNCTION get_next(
air_polling_process_queues     in     POLLING_PROCESS_QUEUES%ROWTYPE)
return                                POLLING_PROCESS_QUEUES%ROWTYPE is

pragma AUTONOMOUS_TRANSACTION;

r_polling_process_queues              POLLING_PROCESS_QUEUES%ROWTYPE;

begin
  delete POLLING_PROCESS_QUEUES
  where  id = air_polling_process_queues.id;

  begin
    select q.*
    into   r_polling_process_queues
    from   POLLING_PROCESS_QUEUES q
    where  q.id = (
    select min(n.id)
    from   POLLING_PROCESS_QUEUES n);
  exception
    when NO_DATA_FOUND then
      r_polling_process_queues := NULL;
  end;

  commit;

  return r_polling_process_queues;
end get_next;

The way this method is written, an entry is removed from the queue only after it has been processed successfully: each call to get_next() first deletes the previously processed entry, and then retrieves the next one.

Let’s look at the one method that always uses the queue, quit, next.

quit

Method quit is used to tell a polling or long-running process to quit before its next round of processing. With a polling process, this method stores a quit command in the polling process’s queue, so the next time it wakes and starts processing, it quits executing.

PROCEDURE quit inserts a new message into its queue table with a primary key value of –1 so the command jumps to the top of the queue. Here’s an example of the method from table package POLLING_PROCESS_QUEUE:

PROCEDURE quit is

pragma AUTONOMOUS_TRANSACTION;

begin
  begin
    insert into POLLING_PROCESS_QUEUES (
           id,
           command,
           insert_user,
           insert_date )
    values (
           -1,
           v_QUIT,
           USER,
           SYSDATE );
    pl('Queued to quit.');
  exception
    when DUP_VAL_ON_INDEX then
      pl('Already queued to quit.');
  end;

  commit;
end quit;

A convenience method quit also exists in package POLLING_PROCESS, because it’s easier to remember to execute POLLING_PROCESS.quit to stop the execution of POLLING_PROCESS.process than it is to remember the name of the queue’s table package.
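
The convenience method is nothing more than a pass-through. A minimal sketch, assuming it simply delegates to the table package, might look like this:

```sql
-- A pass-through convenience method in package POLLING_PROCESS
PROCEDURE quit is

begin
  POLLING_PROCESS_QUEUE.quit();
end quit;
```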

So you can now start and quit the process. Wouldn’t it be nice to be able to see what the process is doing whenever you need to? Well, that’s what method enable is for. Let’s look at it next.

enable

Method enable is used to enable something—in this case, the logging of debug information at any desired point in time. PROCEDURE enable inserts a new command into the queue with a primary key value of -2, so enabling debug logging has an even higher priority than quitting. Here’s an example of the method from package POLLING_PROCESS_QUEUE:

PROCEDURE enable is

pragma AUTONOMOUS_TRANSACTION;

begin
  begin
    insert into POLLING_PROCESS_QUEUES (
           id,
           command,
           insert_user,
           insert_date )
    values (
           -2,
           v_ENABLE,
           USER,
           SYSDATE );
    pl('Queued to enable logging.');
  exception
    when DUP_VAL_ON_INDEX then
      pl('Already queued to enable logging.');
  end;

  commit;
end enable;

After executing PROCEDURE enable, and after the polling process’s next access to the queue, PROCEDURE process enables debug logging. As with its sibling method quit, a convenience method enable also exists in package POLLING_PROCESS.

And disable?

disable

Method disable is used to disable something—in this case, the logging of debug information at any desired point in time. PROCEDURE disable inserts a new command into the queue with a primary key value of -3, so disabling debug logging has an even higher priority than enabling debug logging and quitting. Here’s an example of the method from package POLLING_PROCESS_QUEUE:

PROCEDURE disable is

pragma AUTONOMOUS_TRANSACTION;

begin
  begin
    insert into POLLING_PROCESS_QUEUES (
           id,
           command,
           insert_user,
           insert_date )
    values (
           -3,
           v_DISABLE,
           USER,
           SYSDATE );
    pl('Queued to disable logging.');
  exception
    when DUP_VAL_ON_INDEX then
      pl('Already queued to disable logging.');
  end;

  commit;
end disable;

After executing PROCEDURE disable, and after the polling process’s next access to the queue, PROCEDURE process disables debug logging. As with its sibling methods quit and enable, a convenience method disable also exists in package POLLING_PROCESS.

Now let’s look at the methods used to record and read the current status of a polling process.

set_status( )

Method set_status() is used to record a process’s current status in a status table. In this case, set_status() updates a row in table POLLING_PROCESS_STATUSES, as defined in the following partial listing of file polling_process_statuses.tab:

create table POLLING_PROCESS_STATUSES (
status                         varchar2(256)                  not null,
update_user                    varchar2(30)  default USER     not null,
update_date                    date          default SYSDATE  not null)

...

This table is supposed to have one, and only one, row at any point in time. PROCEDURE set_status(aiv_status) updates or inserts an entry in this table as needed. The following is an example of this method from table package POLLING_PROCESS_STATUS:

PROCEDURE set_status(
aiv_status                     in     POLLING_PROCESS_STATUSES.status%TYPE) is

pragma AUTONOMOUS_TRANSACTION;

begin
  update POLLING_PROCESS_STATUSES
  set    status      = aiv_status,
         update_user = USER,
         update_date = SYSDATE;

  if nvl(sql%rowcount, 0) = 0 then
    insert into POLLING_PROCESS_STATUSES (
           status,
           update_user,
           update_date )
    values (
           aiv_status,
           USER,
           SYSDATE );
  end if;

  commit;
exception
  when OTHERS then
    raise_application_error(-20002, SQLERRM||
      ' on update or insert POLLING_PROCESS_STATUSES'||
      ' in POLLING_PROCESS_STATUS.set_status()');
end set_status;

So how do you get the status? Come on now . . .

get_status

Method get_status returns the current status for the associated process from its status table. FUNCTION get_status retrieves the one, and only, row from the table, and then returns the value of column status. Here’s an example of the method from table package POLLING_PROCESS_STATUS:

FUNCTION get_status
return                                POLLING_PROCESS_STATUSES.status%TYPE is

v_status                              POLLING_PROCESS_STATUSES.status%TYPE;

begin
  select status
  into   v_status
  from   POLLING_PROCESS_STATUSES;

  return v_status;
exception
  when NO_DATA_FOUND then
    return 'UNKNOWN';
  when OTHERS then
    raise_application_error(-20001, SQLERRM||
      ' on select POLLING_PROCESS_STATUSES'||
      ' in POLLING_PROCESS_STATUS.get_status()');
end get_status;

And finally, the last polling process method: status.

status

Since we’re in such a “commanding (or demanding) mood” in this section, status is a convenience method in the polling process’s package that executes method get_status from the status table’s table package. Once again, it exists in the polling process package because it’s simply more intuitive to execute commands for a given process from its own package.

Here’s an example from polling process package POLLING_PROCESS:

PROCEDURE status is

begin
  pl(POLLING_PROCESS_STATUS.get_status);
end status;

With the queue and status tables and their associated table packages in place, you can now tell the polling process to do the following:

  • Quit
  • Enable debug logging
  • Disable debug logging
  • Process a given command
  • Display its current status
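
Putting it all together, a hypothetical control session from any other database session might look like this:

```sql
-- Control a running polling process from another session
execute POLLING_PROCESS.enable;   -- queue a command to turn debug logging on
execute POLLING_PROCESS.status;   -- display the current status
execute POLLING_PROCESS.disable;  -- queue a command to turn debug logging off
execute POLLING_PROCESS.quit;     -- queue a command to stop the process
```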

A common use for on-demand or polling processing is to build interfaces between systems. Let’s discuss some common method names used for just that next.

Interfacing

Interfacing, systems integration, or whatever you want to call it, is just data processing. It is commonly a multistep data process (MSDP) that follows a fairly standard set of tasks.

  1. Log that you are going to attempt to download data from a source to your target system.
  2. Download data from the source to your target system.
  3. Log that you downloaded the data successfully.
  4. Verify that you have cross-reference values for all mapped fields.
  5. Upload data from its staging tables to its target tables.
  6. Log that you uploaded the data successfully.

As you can see, interfacing consists of two high-level steps: downloading and uploading (for lack of better terms). Figure 10-7 shows a visual representation of this division of tasks. These steps can be accomplished synchronously or asynchronously. Regardless, I like to separate the major steps by recording their success or failure in a status table. This allows me to rerun the process, as needed, until both steps are completed.

9781484207383_Fig10-07.jpg

Figure 10-7. Interfacing with external systems, two steps: downloading and uploading

Whether or not I use a program external to the database to move data from a source to destination staging tables, the “download” program can record when it starts its processing and when it completes. Then when the upload process starts, it can check to see if the download process has been completed. If the download has been completed, the upload process can log when it starts and when it completes.

Dividing a long-running process into multiple smaller processes like this not only allows you to do asynchronous processing, but also provides you with a framework for building robust processes that can be rerun over and over again until they finish their processing.

For example, if you build an interface with multistep processing that should move data into your database once a week, it can be scheduled to run every day of the week with cron or Scheduled Tasks, and it will just quit if the weekly work has already been completed.
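
If you’d rather do the scheduling from inside the database than with cron or Scheduled Tasks, SYS.DBMS_SCHEDULER can run the process daily just the same. This is a sketch; the job name and run time are my own:

```sql
-- Run the rerunnable weekly interface every day at 2 AM; it will
-- simply quit if the week's work is already done
begin
  SYS.DBMS_SCHEDULER.create_job(
    job_name        => 'WEEKLY_INTERFACE_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'WEEKLY_INTERFACE.process',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
end;
/
```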

Here, I’ll discuss commonly used method names for a multistep interface on-demand process. Let’s start with process.

process

Again, process is used to start an on-demand or polling process. In this case, it’s a multistep data process. A multistep process is one that checks which portion of a process needs to be done and (re)starts the process at the required step. A multistep process is dependent on a status table that is used to record when a step is completed successfully.

The following is an example of a multistep process that you can find in package WEEKLY_INTERFACE:

PROCEDURE process is

begin
  if not WEEKLY_INTERFACE_STATUS.is_downloaded() then
    download();
    if not WEEKLY_INTERFACE_STATUS.is_downloaded() then
      pl('WARNING: download() did not complete successfully.');
    end if;
  end if;

  if WEEKLY_INTERFACE_STATUS.is_downloaded() then
    if not WEEKLY_INTERFACE_STATUS.is_uploaded() then
      upload();
      if not WEEKLY_INTERFACE_STATUS.is_uploaded() then
        pl('WARNING: upload() did not complete successfully.');
      end if;
    end if;
  end if;

  pl('process() completed successfully.');
end process;

As you can see from this example, PROCEDURE process is concerned only with the status of the multistep process and calls other procedures to accomplish its required steps.

Let’s take a look at the supporting methods. We’ll start with is_downloaded.

is_downloaded

Method is_downloaded returns a Boolean value that indicates whether the first step, downloading, has been accomplished. FUNCTION is_downloaded returns TRUE if the download process has been completed; otherwise, it returns FALSE.

The following is an example of the method from supporting table package WEEKLY_INTERFACE_STATUS:

FUNCTION is_downloaded
return                                boolean is

pragma AUTONOMOUS_TRANSACTION;

d_download_date
  WEEKLY_INTERFACE_STATUSES.download_date%TYPE;

begin
  begin
    select download_date
    into   d_download_date
    from   WEEKLY_INTERFACE_STATUSES
    where  id =
      WEEKLY_INTERFACE_STATUS.get_week();
  exception
    when NO_DATA_FOUND then
      d_download_date := NULL;
  end;

  if d_download_date is not NULL then
    return TRUE;
  else
    return FALSE;
  end if;
end is_downloaded;
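
FUNCTION get_week() isn’t listed here. Assuming the interface keys its status rows by calendar week, a minimal sketch might look like this:

```sql
-- A sketch of get_week(): a numeric ISO year-and-week identifier,
-- for example 201523, so each weekly run gets its own status row.
-- The format mask is my assumption.
FUNCTION get_week
return number is

begin
  return to_number(to_char(SYSDATE, 'IYYYIW'));
end get_week;
```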

PROCEDURE WEEKLY_INTERFACE.process calls this method directly from package WEEKLY_INTERFACE_STATUS. In addition, I’ve added a convenience method, is_downloaded, to package WEEKLY_INTERFACE that returns the number 1 for TRUE and 0 for FALSE, because external programs that might want to know the status of the download process, such as a Java JDBC program, cannot call a stored procedure that returns a Boolean value. Here’s an example of the is_downloaded convenience method from package WEEKLY_INTERFACE:

FUNCTION is_downloaded
return                                number is

begin
  -- Convert the BOOLEAN returned by
  -- WEEKLY_INTERFACE_STATUS.is_downloaded() to 1 or 0
  return to_boolean_number(WEEKLY_INTERFACE_STATUS.is_downloaded());
end is_downloaded;
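
Helper function to_boolean_number() isn’t shown; a minimal sketch might be:

```sql
-- A sketch of the assumed helper: maps TRUE to 1 and anything
-- else (FALSE or NULL) to 0
FUNCTION to_boolean_number(
aib_boolean                    in     boolean)
return number is

begin
  if aib_boolean then
    return 1;
  else
    return 0;
  end if;
end to_boolean_number;
```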

Next, let’s take a look at the download method.

download

Method download is used to start a download process—that is, a process to move data between systems—from a data source to your target staging tables. PROCEDURE download can be utilized in your multistep process if the data can be moved from its source to the destination staging tables from inside the database. You can do this in a variety of ways, using the following:

  • A database link between systems
  • An embedded JDBC program to access an external database
  • Package SYS.UTL_MAIL to access an email server
  • Package SYS.UTL_HTTP to access a web server
  • Package SYS.UTL_FILE to access the database server’s file system

If you need to perform this step externally with another program, you can simply code a put_line() message that reminds the user that this step is accomplished externally. By handling the external process in this fashion, you don’t need to modify the code in method process.

The following is an example of the method from package WEEKLY_INTERFACE:

PROCEDURE download is

begin
  pl('Executing download()');
  -- You can code this procedure to move data between systems
  -- using an Oracle database link, loaded JDBC driver and class,
  -- etc. Or, you can change this message to remind the user
  -- that this is an asynchronous process handled by an external
  -- program.

  set_downloaded();
end download;

If you code this method, then you’ll call set_downloaded upon a successful download. If, on the other hand, you code the download process externally, that program will need to call WEEKLY_INTERFACE.set_downloaded in order to let process know that the first step of the multistep process has been completed asynchronously.

While we’re on the topic, let’s look at set_downloaded next.

set_downloaded

Method set_downloaded is used to set the status of the download process to complete. The supporting table package’s procedure, WEEKLY_INTERFACE_STATUS.set_downloaded, is called from inside download if the download method is actually coded to perform the download process.

The following is an example of the supporting status table WEEKLY_INTERFACE_STATUSES from script weekly_interface_statuses.tab:

create table WEEKLY_INTERFACE_STATUSES (
id                             number                         not null,
download_date                  date,
upload_date                    date,
insert_user                    varchar2(30)  default USER     not null,
insert_date                    date          default SYSDATE  not null,
update_user                    varchar2(30)  default USER     not null,
update_date                    date          default SYSDATE  not null)

...

And here’s an example of the method set_downloaded from supporting table package WEEKLY_INTERFACE_STATUS:

PROCEDURE set_downloaded is

pragma AUTONOMOUS_TRANSACTION;

begin
  update WEEKLY_INTERFACE_STATUSES
  set    download_date              = SYSDATE,
         update_user                = USER,
         update_date                = SYSDATE
  where  id =
    WEEKLY_INTERFACE_STATUS.get_week();

  if nvl(sql%rowcount, 0) = 0 then
    insert into WEEKLY_INTERFACE_STATUSES (
           id,
           download_date )
    values (
           WEEKLY_INTERFACE_STATUS.get_week(),
           SYSDATE );
  end if;

  commit;
end set_downloaded;

I’ve also created a convenience method set_downloaded in package WEEKLY_INTERFACE, which can be called by an external download program in order to set the download process to complete, as follows:

PROCEDURE set_downloaded is

begin
  WEEKLY_INTERFACE_STATUS.set_downloaded();
end set_downloaded;

Enough with the download business. Now let’s talk about uploading, starting with method is_uploaded.

is_uploaded

Method is_uploaded returns a Boolean value that indicates whether the second step, uploading, has been accomplished. FUNCTION is_uploaded returns TRUE if the upload process has been completed; otherwise, it returns FALSE.

The following is an example of the method from supporting table package WEEKLY_INTERFACE_STATUS:

FUNCTION is_uploaded
return                                boolean is

pragma AUTONOMOUS_TRANSACTION;

d_upload_date
  WEEKLY_INTERFACE_STATUSES.upload_date%TYPE;

begin
  begin
    select upload_date
    into   d_upload_date
    from   WEEKLY_INTERFACE_STATUSES
    where  id =
      WEEKLY_INTERFACE_STATUS.get_week();
  exception
    when NO_DATA_FOUND then
      d_upload_date := NULL;
  end;

  if d_upload_date is not NULL then
    return TRUE;
  else
    return FALSE;
  end if;
end is_uploaded;

PROCEDURE WEEKLY_INTERFACE.process calls this method directly from package WEEKLY_INTERFACE_STATUS. Unlike its sibling, is_downloaded, there’s no need for a convenience method, because the whole point of the multistep process package is to upload the data from staging to application tables. So no external process will ever be used, and hence no convenience method is required.

Now let’s look at upload.

upload

Method upload is used to start an upload process—that is, a process to move data from staging tables to an application’s tables. This is where you code all your groovy data processing.

Here’s an example of the method from package WEEKLY_INTERFACE:

PROCEDURE upload is

-- Add cursor(s) used to loop through the staging data here.

begin
  pl('Executing upload()');
  if is_verified() then

    -- Add your data migration code here.

    WEEKLY_INTERFACE_STATUS.set_uploaded();
  end if;
end upload;
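
Method is_verified() (step 4 of the task list) isn’t shown either. A hypothetical version that checks the staged rows for missing cross-reference values might look like this; the staging table WEEKLY_STAGING and its xref_id column are assumptions for illustration:

```sql
-- A hypothetical is_verified(): TRUE only if every staged row
-- has a cross-reference value for its mapped field
FUNCTION is_verified
return boolean is

n_unmapped                            number;

begin
  select count(1)
  into   n_unmapped
  from   WEEKLY_STAGING
  where  xref_id is NULL;

  if n_unmapped = 0 then
    return TRUE;
  else
    pl(to_char(n_unmapped)||' staged rows are missing cross-reference values.');
    return FALSE;
  end if;
end is_verified;
```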

And last, but not least, is set_uploaded.

set_uploaded

Method set_uploaded is used to set the status of the upload process to complete. The supporting table package’s procedure, WEEKLY_INTERFACE_STATUS.set_uploaded, is called from inside upload if the upload method completes successfully.

The following is an example of the method from supporting table package WEEKLY_INTERFACE_STATUS:

PROCEDURE set_uploaded is

pragma AUTONOMOUS_TRANSACTION;

begin
  update WEEKLY_INTERFACE_STATUSES
  set    upload_date                = SYSDATE,
         update_user                = USER,
         update_date                = SYSDATE
  where  id =
    WEEKLY_INTERFACE_STATUS.get_week();

  if nvl(sql%rowcount, 0) = 0 then
    insert into WEEKLY_INTERFACE_STATUSES (
           id,
           upload_date )
    values (
           WEEKLY_INTERFACE_STATUS.get_week(),
           SYSDATE );
  end if;

  commit;
end set_uploaded;

Using the WEEKLY_INTERFACE package as a framework for your database-destination interfaces, you can give all interfaces a consistent treatment, and at the same time have the freedom to perform each high-level step in a manner suitable for solving the associated business requirements.

Next, let’s look at the polymorphic use of method names in support of the reporting presentation layer.

Reporting

It has become quite fashionable in our industry to develop or implement the data-entry portion of an application and then throw a reporting utility—like Cognos Impromptu, Business Objects, Crystal Reports, BIRT, and so on—at the application’s relational database or data warehouse for reporting needs. Since I’m not selling any of these “solutions,” I can share my honest opinion. Throwing a reporting utility at an application for its reporting needs is a poor idea. Suddenly, we’ve decided to give our customers half of a solution. What’s with that? I know that it’s all about the money. But I’ve never seen much large-scale reporting success with end users with this strategy. Here’s why:

  • Most end users know very little about how the information they work with is actually organized and related.
  • Most end users know very little about how information is input into their application and how it is consequently stored in the application’s database.
  • Most end users have no time to spend trying to engineer a set of complex reports to help them run the business.

That’s not to say that there aren’t exceptions. But for the most part, reporting requirements and deliverables should be a required part of any application’s development or implementation (depending on whether you build or buy).

Even those exceptional end users who are information and system architects usually get poor performance from the database when they do use reporting utilities. It's not necessarily the database's fault; it's a matter of physics.

A well-designed relational or object-relational database will have tables that are all in third normal form. Accordingly, any SELECT statement written against a well-designed database will have many tables in its FROM clause. The more tables, the more data must be physically read from physical devices, the harder those devices have to work, and the longer it takes. It's not a database problem; it's a physics problem.

Now if you recall from previous chapters, I've spent some time showing you the consequences of physics on database performance. You can use the following tactics to "divide and conquer" in order to improve the speed and efficiency of retrieving data for reporting:

  • Use pseudo-constants and PL/SQL IF statements to replace code tables in a FROM clause of a SELECT statement. This reduces the use of physical devices.
  • Retrieve data from the smallest table that will produce the smallest set of results, and then do programmatic nested loops with cursors to retrieve the “rest” of the data. This reduces the use of physical devices.
  • Create thin (narrow) row extracts of data, and then join them together in a SELECT statement. This reduces the use of memory and physical devices. For example, if the source table has an average row size of 297 bytes, you may extract data from it into a temporary table that has an average row size of 13 bytes. The temporary table is what I call a narrow row table.
  • Post-sort results. Sort the rows for a report after the report data has already been placed in a staging table(s). This reduces the use of memory and CPU cycles.

However, you can use these tactics only if you write stored procedures to produce the result sets for your reports, placing the results in staging tables, which are then later queried by the presentation layer. What I’m advocating here is heresy to some, and bliss for others. I know many of you may want to send me an email explaining why this is such a bad idea. Do it. I would love to argue with you about it. Here’s what I do know:

  • In applications with well over one million rows in each content table, I’m able to produce short reports in less than 3 seconds, most in less than 1 second. That’s compared to over 3 minutes trying to accomplish the same task with a large, complicated SQL SELECT statement.
  • In the same large enterprise applications, I’ve been able to produce large reports in less than 3 minutes. That’s compared to 20 minutes to 28 hours for some of the largest reports. Yes, even reports that took 28 hours to produce using a very large and complicated SQL SELECT statement took less than 3 minutes when these “divide and conquer” principles were applied. And no, there were no accidental Cartesian products in the SELECT statement.
  • When this architecture is adopted, the business intelligence on how to produce the report permanently resides in the database where it is easiest and most efficient to process data. This means that any presentation layer can be used to produce the result set for a given report by executing it in the database, and then simply formatting and displaying the data in whatever fashion is desired. Presentation layers come and go. Does the business knowledge that produces your reports have to go with it? No.

Figure 10-8 demonstrates the reporting architecture I’m advocating. In this architecture, producing a report consists of these three high-level steps.

  1. The presentation layer calls a function get_report() in the database, passing it any required parameters for producing a report.
  2. get_report() produces the result set for the report, placing the results in a staging table specific to the report. Then get_report() returns an ID for the results.
  3. The presentation layer executes a SELECT statement against the report’s staging table, where the report ID is the one returned from get_report(), and orders the results by the report’s sequence.

9781484207383_Fig10-08.jpg

Figure 10-8. A modular PL/SQL- and SQL-driven reporting architecture
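In SQL*Plus terms, the three steps can be sketched like this. The bind-variable style here is illustrative, and REPORT_WKHST is the work history report package developed later in this section:

```sql
-- A hypothetical sketch of the three high-level steps as a
-- presentation layer might issue them.
variable n_report_wkhst_id number

-- Steps 1 and 2: produce and stage the result set, getting back an ID.
execute :n_report_wkhst_id := REPORT_WKHST.get_report(11649889);

-- Step 3: retrieve the staged rows in the report's sequence order.
select worker_name,
       active_date,
       inactive_date
from   REPORT_WKHSTS
where  report_wkhst_id = :n_report_wkhst_id
order by report_wkhst_seq;
```

Any presentation layer follows the same protocol: call the function, hold on to the returned ID, and then select from the staging table ordered by the sequence column.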

I call reports produced in this manner "staged data reports." In practice, I've used this architecture to impressively improve report performance and reduce report complexity, while also reducing the consumption of server resources. That's a win-win situation. I've called the same report stored procedure from Oracle Reports, PowerBuilder, and J2EE, Python Django, and Ruby on Rails applications on the Web, with the same result set serving each of these different presentation layers. This means you can consistently produce the results for a report, and present the results in whatever way you see fit.

Accordingly, in this section, I’m going to cover the polymorphic method names I use in this reporting architecture. Let’s get started with get_report().

get_report( )

Method get_report() produces a report result set by retrieving data from application tables, placing the results in a report-specific set of staging tables, and then returning an ID for the result set. FUNCTION get_report(...) takes runtime parameters passed to it from the presentation layer and uses them to efficiently retrieve and stage the data for a report. Most often, you'll use get_report() simply to improve the performance of a report. However, some reports cannot be created with SQL statements alone, and that's where the use of get_report() becomes critically important.

Here’s an example of the method from a report stored procedure that produces a work history report for a given worker_id. This method is from package REPORT_WKHST:

FUNCTION get_report(
ain_worker_id                  in     REPORT_WKHSTS.worker_id%TYPE)
return                                REPORT_WKHSTS.report_wkhst_id%TYPE is

cursor c_date(
ain_worker_id                  in     REPORT_WKHSTS.worker_id%TYPE) is
select v1.active_date
from ( select active_date
       from   LOGICAL_ASSIGNMENTS
       where  worker_id = ain_worker_id
       UNION
       select active_date
       from   PHYSICAL_ASSIGNMENTS
       where  worker_id = ain_worker_id
       UNION
       select active_date
       from   WORK_ASSIGNMENTS
       where  worker_id = ain_worker_id ) v1
order by 1;

d_logical_inactive_date               REPORT_WKHSTS.inactive_date%TYPE;
d_physical_inactive_date              REPORT_WKHSTS.inactive_date%TYPE;
d_work_inactive_date                  REPORT_WKHSTS.inactive_date%TYPE;

r_worker                              WORKERS%ROWTYPE;

r_report_wkhsts                       REPORT_WKHSTS%ROWTYPE;

begin
  r_report_wkhsts.report_wkhst_id  := get_id();
  r_report_wkhsts.report_wkhst_seq := 0;

  r_worker.id                := ain_worker_id;
  r_worker                   := WORKER.get_row(r_worker);

  r_report_wkhsts.worker_id   := r_worker.id;
  r_report_wkhsts.worker_name :=
    WORKER.get_formatted_name(
      r_worker.first_name,
      r_worker.middle_name,
      r_worker.last_name);

  for r_date in c_date(ain_worker_id) loop
    r_report_wkhsts.active_date := r_date.active_date;

    get_logical_workplace_name(
      r_report_wkhsts.worker_id,
      r_report_wkhsts.active_date,
      r_report_wkhsts.logical_workplace_name,
      d_logical_inactive_date);

    get_physical_workplace_name(
      r_report_wkhsts.worker_id,
      r_report_wkhsts.active_date,
      r_report_wkhsts.physical_workplace_name,
      d_physical_inactive_date);

    get_work_name(
      r_report_wkhsts.worker_id,
      r_report_wkhsts.active_date,
      r_report_wkhsts.work_name,
      d_work_inactive_date);

    r_report_wkhsts.inactive_date :=
      least(
        d_logical_inactive_date,
        d_physical_inactive_date,
        d_work_inactive_date);

    set_report(r_report_wkhsts);
  end loop;

  commit;

  return r_report_wkhsts.report_wkhst_id;
end get_report;

If you examine the listing, you’ll notice that I call three private methods: get_logical_workplace_name(), get_physical_workplace_name(), and get_work_name(), which are used to get the data in a programmatic nested loop fashion. I suppose if I spent enough time, I could write a SQL SELECT statement to get these results, but in this case, it’s much easier and faster to use PL/SQL.

The following script, report_wkhst.get_report.sql, is a test unit I use to run the report from SQL*Plus during development and troubleshooting:

rem report_wkhst.get_report.sql
rem by Donald J. Bales on 2014-10-20
rem Test Unit for REPORT_WKHST.get_report()

column worker_name             format a21 trunc;
column logical_workplace_name  format a11 trunc;
column physical_workplace_name format a11 trunc;
column work_name               format a11 trunc;
column active_date             format a8;
column inactive_date           format a8;

execute pl('report_wkhst_id='||to_char(REPORT_WKHST.get_report(11649889)));

select worker_name,
       logical_workplace_name,
       physical_workplace_name,
       work_name,
       to_char(active_date,   'YYYYMMDD') active_date,
       to_char(inactive_date, 'YYYYMMDD') inactive_date
from   REPORT_WKHSTS
where  report_wkhst_id = &report_wkhst_id
order by report_wkhst_seq;

When executed with worker_id 11649889, the script produces the following output:

WORKER_NAME           LOGICAL_WOR PHYSICAL_WO WORK_NAME   ACTIVE_D INACTIVE
--------------------- ----------- ----------- ----------- -------- --------
PATTERSON, MICHELLE D BIOLOGICALS GRAVITRON 1 ENGINEER1   19740520 19850507
PATTERSON, MICHELLE D BIOLOGICALS GRAVITRON 1 ENGINEER2   19850508 19960425
PATTERSON, MICHELLE D BIOLOGICALS GRAVITRON 1 ENGINEER3   19960426

In practice, every report has a different package name that starts with REPORT_, whose associated table is the plural form of the package’s name. All reports use the same method name, get_report(), with the presentation layer passing the method any required parameter values. Since a report package has an associated table, it’s only fitting for it to have an associated method name for inserting values into that table, and that name is set_report().

set_report( )

Since you’re an astute reader, you’re probably thinking, “Why doesn’t he just use the method name set_row()?” It’s because set_report() does something different than set_row(), so I consistently use the name set_report() in order to emphasize that difference.

Before I get into an explanation of the difference, let’s look at an example of the definition of an associated report staging table. This is the report staging table definition for the work history report. You can find this listing in script report_wkhsts.tab:

create table REPORT_WKHSTS (
report_wkhst_id                number                         not null,
report_wkhst_seq               number                         not null,
worker_id                      number                         not null,
worker_name                    varchar2(100)                  not null,
active_date                    date,
logical_workplace_name         varchar2(80),
physical_workplace_name        varchar2(80),
work_name                      varchar2(80),
inactive_date                  date,
insert_user                    varchar2(30)  default USER     not null,
insert_date                    date          default SYSDATE  not null)
nologging;

And here is an example of method set_report() that inserts the data into the report staging table:

PROCEDURE set_report(
aior_report_wkhsts              in out REPORT_WKHSTS%ROWTYPE) is

begin
  aior_report_wkhsts.report_wkhst_seq := aior_report_wkhsts.report_wkhst_seq + 1;

  if aior_report_wkhsts.inactive_date  = DATE_.d_max then
    aior_report_wkhsts.inactive_date  := NULL;
  end if;

  aior_report_wkhsts.insert_user      := USER;
  aior_report_wkhsts.insert_date      := SYSDATE;

  insert into REPORT_WKHSTS values aior_report_wkhsts;
exception
  when OTHERS then
    raise_application_error(-20004, SQLERRM||
      ' on insert REPORT_WKHSTS'||
      ' in REPORT_WKHST.set_report()');
end;

In contrast to method set_row(), which selects and then inserts or updates, PROCEDURE set_report() strictly inserts. In addition, set_report() increments the value of the report's sequence number. The sequence number is the second half of a two-column primary key for the staging table. With this design, get_report() can return a single value, an ID, that allows the presentation layer to select all the subject rows for a report, and then sort against the sequence value.

This means that the order of the information on the report is driven by a given report’s engine in the database, not the presentation layer. This is also what allows you to, as I mentioned earlier, post-sort.

You can code a report stored procedure so that, after it places all of its data in its staging table, you sort the data in the table by opening a cursor against the staging table with an ORDER BY clause, and then update the sequence numbers in the same staging table so the results will have a new sort order.
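Here's a hypothetical sketch of such a post-sort for the work history report's staging table. The private method name post_sort() and the sort columns are illustrative assumptions:

```sql
-- A hypothetical post-sort: after get_report() has staged its rows,
-- open a cursor against the staging table with the desired ORDER BY,
-- and then renumber the sequence column so the presentation layer's
-- ORDER BY report_wkhst_seq yields the new order.
PROCEDURE post_sort(
ain_report_wkhst_id            in     REPORT_WKHSTS.report_wkhst_id%TYPE) is

cursor c_sorted is
select report_wkhst_seq
from   REPORT_WKHSTS
where  report_wkhst_id = ain_report_wkhst_id
order by worker_name, active_date
for update of report_wkhst_seq;

n_seq                                 number := 0;

begin
  for r_sorted in c_sorted loop
    n_seq := n_seq + 1;
    update REPORT_WKHSTS
    set    report_wkhst_seq = n_seq
    where  current of c_sorted;
  end loop;

  commit;
end post_sort;
```

Because the sort happens against the small, narrow staging table rather than the large content tables, it consumes far less memory and CPU.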

Now you have a means to get the presentation layer to use a modular and efficient approach to creating reports, but what if you want to email a report from the database? That’s the purpose of method report().

report( )

Not all reports need human interaction to produce them. Sometimes, an application has a list of reports that should be produced on an hourly, daily, weekly, or monthly basis. Why should you require users to run them? A better design is to provide end users with a set of batch-processed report distribution screens, where they can configure who gets what report, and then have the application produce the reports for them.

If the database produces the reports, it can send the report via email as fixed-length text, HTML, PDF, and so on, or as a link. The following is an example of an HTML emailed report from report procedure REPORT_WKHST:

PROCEDURE report(
ain_worker_id                  in     REPORT_WKHSTS.worker_id%TYPE,
aiv_to                         in     varchar2) is

cursor c_report_wkhsts(
ain_report_wkhst_id            in     REPORT_WKHSTS.report_wkhst_id%TYPE) is
select initcap(worker_name) worker_name,
       active_date,
       initcap(logical_workplace_name) logical_workplace_name,
       initcap(physical_workplace_name) physical_workplace_name,
       initcap(work_name) work_name,
       inactive_date
from   REPORT_WKHSTS
where  report_wkhst_id = ain_report_wkhst_id
order by report_wkhst_seq;

n_line                                number;
t_lines                               EMAIL.LINES;
n_report_wkhst_id                     REPORT_WKHSTS.report_wkhst_id%TYPE;
v_worker_name                         REPORT_WKHSTS.worker_name%TYPE;

begin
  n_report_wkhst_id := get_report(ain_worker_id);

  t_lines(incr(n_line)) := '</pre><table>';
  for r_report_wkhsts in c_report_wkhsts(n_report_wkhst_id) loop
    if c_report_wkhsts%rowcount = 1 then
      v_worker_name         := r_report_wkhsts.worker_name;
      t_lines(incr(n_line)) := '<tr><td align="center" colspan="5">'||
        '<big>Work History Report</big></td></tr>';
      t_lines(incr(n_line)) := '<tr><td align="center" colspan="5">'||
        'for '||v_worker_name||'</td></tr>';
      t_lines(incr(n_line)) := '<tr><td align="center" colspan="5">'||
       '</td></tr>';
      t_lines(incr(n_line)) := '<tr>'||
        '<th align="left">Logical</th>'||
        '<th align="left">Physical</th>'||
        '<th align="left"></th>'||
        '<th align="left">Active</th>'||
        '<th align="left">Inactive</th>'||
        '</tr>';
      t_lines(incr(n_line)) := '<tr>'||
        '<th align="left">Workplace</th>'||
        '<th align="left">Workplace</th>'||
        '<th align="left">Work</th>'||
        '<th align="left">Date</th>'||
        '<th align="left">Date</th>'||
        '</tr>';
    end if;
    t_lines(incr(n_line)) := '<tr>'||
      '<td align="left">'||
        r_report_wkhsts.logical_workplace_name||'</td>'||
      '<td align="left">'||
        r_report_wkhsts.physical_workplace_name||'</td>'||
      '<td align="left">'||
        r_report_wkhsts.work_name||'</td>'||
      '<td align="left">'||
        to_char(r_report_wkhsts.active_date, 'MM/DD/YYYY')||'</td>'||
      '<td align="left">'||
        to_char(r_report_wkhsts.inactive_date, 'MM/DD/YYYY')||'</td>'||
      '</tr>';
  end loop;
  t_lines(incr(n_line)) := '</table><pre>';

  EMAIL.send(
    EMAIL.get_username,
    aiv_to,
    'Work History Report for '||v_worker_name,
    t_lines);
end report;

In this case, PROCEDURE report(ain_worker_id, aiv_to) calls get_report() to produce the result set, and then queries the staging table to produce an HTML report. report() then calls supporting package EMAIL.send() to send the report.

The following is an example of a test unit for report(). This is from script report_wkhst.report.sql:

rem report_wkhst.report.sql
rem by Donald J. Bales on 2014-10-20
rem Test Unit for REPORT_WKHST.report()

execute REPORT_WKHST.report(11649889, '[email protected]');

After I execute this test unit, I get an email report, as shown in Figure 10-9. In practice, I even use this functionality to send an email report to the support personnel any time a user encounters an error. That way, support personnel can start troubleshooting a given problem before the customer even calls. How’s that for customer service?

9781484207383_Fig10-09.jpg

Figure 10-9. An example of an emailed report

Well, that’s it for polymorphism. I hope you can appreciate the power and simplicity that the use of polymorphic command names provides when used in the development of software. I also hope you take the time to sit down and think about what method names you should use before you start coding your next program.

Summary

In this chapter, I introduced the concept of organizing the nature of behavior behind a common set of method names. In addition, I demonstrated how I've used each method name in the past. With all the content you've seen in this book, you should be well on your way to solving business and science problems with the use of SQL and PL/SQL in a relational or object-relational setting.

Some have given me the feedback that perhaps the object-relational stuff should be treated as an advanced topic. Phooey! There’s nothing advanced about object-relational. It’s actually the correct and natural way of thinking. We should have been doing it this way for at least the past 20 years. So I hope you’ll find yourself in a position to use it.

Just how astute are you? Did you notice that I even used polymorphism with table column names? I consistently use id for the primary key column name, and then the singular of the table name concatenated with _id for the corresponding foreign key column name. I used the same names in this book in order to make the relationships obvious. By treating column names polymorphically, you can implicitly note what a column is for in every table, just as you do with method names in executable code.
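For example, here's an abbreviated, hypothetical table definition that follows the convention. The table and column names are illustrative:

```sql
-- id is this table's primary key; gender_type_id is a foreign key
-- pointing at GENDER_TYPES.id. Any reader can infer both facts from
-- the names alone.
create table WORKERS (
id                             number                         not null,
gender_type_id                 number,
name                           varchar2(100)                  not null);
```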

Wow! That was fast, wasn’t it? Did I make you work hard? Did you learn anything? I sure hope so. As I stated in the introduction, this was a tutorial—a place to get started. The only way you can learn how to program is to discover the possibilities of a programming language and start mimicking someone else’s work by writing programs.

I’ve been extremely successful in my career when working with Oracle and PL/SQL, so if you mimic my coding behavior, I’m guessing that you will be successful, too. But you’ve just started. Here’s what I suggest you do next:

  • Browse the Oracle Database SQL Reference, so you can get an idea of what can be accomplished using SQL. You don’t have to remember it all; you just need to remember where to look for the information when you realize you can put a possibility to work!
  • Read Oracle’s PL/SQL User’s Guide and Reference, or Steven Feuerstein’s Oracle PL/SQL Programming (O’Reilly, 2014). These are both good references. Once again, you don’t need to remember everything you read, just the possibilities. You can always refer to the reference later. You have better use for your memory, like remembering your lover’s birthday or your anniversary!
  • Browse Oracle’s PL/SQL Packages and Types Reference. Why should you build something that’s already been built by experts? Just catalog the possibilities in your gray matter.
  • Check out Oracle’s Application Developers Guides.
  • Share your ideas by writing an article, a white paper, or at least an email about your experience. After you do, go back and write an INSERT statement in order to add yourself to the list of database geniuses from Chapter 1.

As I said before, good skill. Luck is for losers.
