
PL/SQL ENHANCEMENTS IN ORACLE9i

Bryn Llewellyn, PL/SQL Product Manager, Oracle Corp


Chris Racicot, PL/SQL Development Director, Oracle Corp

Delivered as paper #129 at Oracle OpenWorld, San Francisco, Tue 4-Dec-2001

SUMMARY OF ENHANCEMENTS
A number of important enhancements have been made to PL/SQL in Oracle9i in each of these areas: its
implementation (i.e. that which affects the execution characteristics of a given system of source code); language
features (i.e. the addition of new syntax to express powerful new semantics); and Oracle supplied PL/SQL library
units.
Some of the enhancements are transparent, for example: the change to use the same parser for compile-time checked
embedded SQL as is used for compiling SQL issued from other programming environments; or the reimplementation
of the Utl_Tcp package (moving from Java to native C). No user action is required to enjoy these (beyond installing
Oracle9i).
Some are semi-transparent, for example the new option to compile PL/SQL source to native C. Small declarative steps
are required to enjoy these, while existing application code remains unchanged.
And some introduce new semantics, either in the language itself or by virtue of new APIs in the supplied PL/SQL library
units. Study and thought are required in order to design and implement changes to existing code or creation of new
code to leverage these enhancements.
The following enhancements have been selected for detailed description…
• Native compilation of PL/SQL
• CASE statements and CASE expressions
• Bulk binding enhancements:
exception handling with bulk binding;
bulk binding in native dynamic SQL
• Table functions and cursor expressions
• Multilevel collections
• Enhancements to the Utl_Http package

Extensive complete code samples are listed in the appendix to provide working demonstrations for all these features.
The remaining enhancements are: either completely transparent; or need no special explanation from a PL/SQL
perspective (since they are new SQL language features which PL/SQL supports in an obvious way); or are object-
oriented features which are scoped out of this paper; or are features of supplied packages and as such are less
interesting from a PL/SQL language viewpoint. These will be listed briefly in the last sections…
• Transparent enhancements
• New SQL features
• Object oriented features
• New or enhanced supplied packages

In summary, PL/SQL at Oracle9i delivers improved performance and functionality for the application and improved
usability for the developer.

1. NATIVE COMPILATION OF PL/SQL


1.1. OVERVIEW
PL/SQL is often used as a thin wrapper for executing SQL statements, setting bind variables and handling result sets.
See code sample A.1.1 in the Appendix. In such cases the execution speed of the PL/SQL code is rarely an issue. It is
the execution speed of the SQL that determines the performance. (The efficiency of the context switch between the
PL/SQL and the SQL operating environments might be an issue, but that’s a different discussion. See the sections on
bulk binding and table functions.)
However, we see an increasing trend to use PL/SQL for computationally intensive database independent tasks. It is
after all a fully functional 3GL. See code sample A.1.2. Here it is the execution speed of the PL/SQL code that
determines the performance.
In pre-Oracle9i versions, compilation of PL/SQL source code always resulted in a representation (usually referred to
as bytecode) which is stored in the database and interpreted at run-time by a virtual machine implemented within
ORACLE which in turn runs natively on the given platform. Oracle9i introduces a new approach. PL/SQL source
code may optionally be compiled into native object code which is linked into ORACLE. (Note however that an
anonymous PL/SQL block is never compiled natively.)
The A.1.2 program runs about 33% faster when compiled in NATIVE mode than when compiled in interpreted
mode while the A.1.1 program runs about 3% faster when compiled in NATIVE mode. (Each measurement was for
about 12 million iterations.)
While for data intensive programs native compilation may give only a marginal performance improvement, we have
never seen it give performance degradation.
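
The appendix samples are not reproduced in this section, but the sketch below illustrates the kind of computationally
intensive, database independent code that benefits most. The function name and the numeric workload are purely
illustrative; the point is that the loop contains no SQL, so the speed of the PL/SQL engine alone determines the
elapsed time…

create or replace function Sum_Of_Square_Roots ( p_iterations in pls_integer )
return number
is
v_total number := 0;
begin
/* pure computation: no SQL and hence no context switches */
for j in 1..p_iterations
loop
v_total := v_total + sqrt(j);
end loop;
return v_total;
end Sum_Of_Square_Roots;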

1.2. ONE-TIME DBA SETUP


Native PL/SQL compilation is achieved by translating the PL/SQL source code into C source code which is then
compiled on the given platform. The compiling and linking of the generated C source code is done using 3rd party
utilities whose location has been specified by the DBA, typically in init.ora. The DBA should ensure that all these
utilities are owned by the ORACLE owner (or a correspondingly trusted system user) and that only this user has write
access to them. The object code for each natively compiled PL/SQL library unit is stored on the platform’s filesystem
in directories, similarly under the DBA’s control. Native compilation does take longer than interpreted mode
compilation; our tests have shown a factor of about two. This is because it involves these extra steps: generating
C code from the initial output of the PL/SQL compilation; writing this to the filesystem; invoking and running the C
compiler; and linking the resulting object code into ORACLE.
Oracle recommends that the C compiler be configured to do no optimization. Our tests have shown that optimizing
the generated C produces negligible improvement in run-time performance but substantially increases the compilation
time.
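
For orientation, the relevant initialization parameters (the parameter names are those documented for Oracle9i; the
paths shown here are illustrative only and are platform specific) might be set along these lines…

# illustrative init.ora fragment for native PL/SQL compilation
plsql_native_c_compiler     = /usr/bin/cc
plsql_native_linker         = /usr/bin/ld
plsql_native_make_utility   = /usr/bin/make
plsql_native_make_file_name = /u01/app/oracle/product/9.0.1/plsql/spnc_makefile.mk
plsql_native_library_dir    = /u01/app/oracle/native_libs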

1.3. HOW DOES THE USER CHOOSE BETWEEN INTERPRETED AND NATIVE COMPILATION MODES?
The compiler mode is determined by the session parameter plsql_compiler_flags. The user may set it thus…

alter session set plsql_compiler_flags = 'NATIVE' /* or 'INTERPRETED' */;

…to set the compilation mode for subsequently compiled PL/SQL library units. The mode is stored with the library
unit’s metadata, so that if it is implicitly recompiled as a consequence of dependency checking then the mode the user
intended will be used. It may be inspected thus…


select o.object_name name, s.param_value comp_mode
from user_stored_settings s, user_objects o
where o.object_id = s.object_id
and param_name = 'plsql_compiler_flags'
and o.object_type in ( 'PACKAGE', 'PROCEDURE', 'FUNCTION' );

Note however that Dbms_Utility.Compile_Schema uses the current value of plsql_compiler_flags rather than
the stored compilation mode.
Oracle recommends that all the PL/SQL library units that are called from a given top-level unit are compiled in the
same mode. This is because there is a cost for the context switch when a library unit compiled in one mode invokes
one compiled in the other mode. Significantly, this recommendation includes the Oracle-supplied library units. (These
are always shipped compiled in interpreted mode.)
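
Putting this together, an individual library unit can be switched to native compilation by setting the session parameter
and then recompiling it explicitly. (The names Heavy_Calc and My_Pkg below are hypothetical.)

alter session set plsql_compiler_flags = 'NATIVE';
alter procedure Heavy_Calc compile;
/* the specification and body of a package may be recompiled separately */
alter package My_Pkg compile body;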

1.4. UPGRADING A WHOLE DATABASE TO NATIVE


The simplest way to honor the recommendation above (Oracle recommends that all the PL/SQL library units that are called
from a given top-level unit are compiled in the same mode) is to upgrade the whole database so that all PL/SQL library units are
compiled NATIVE. A release soon after Oracle9i Database Version 9.2.0 will include such a script together with its
partner to downgrade a whole database so that all PL/SQL library units are compiled INTERPRETED. Meanwhile,
these are posted on OTN here…

http://otn.oracle.com//tech/pl_sql/htdocs/README_2188517.htm

1.5. ORACLE9i IN ACTION


170 Systems, Inc (www.170Systems.com) has been an Oracle Partner for eleven years and participated in the Beta
Program for the Oracle9i Database with particular interest in PL/SQL Native Compilation. They have now certified
their 170 MarkView Document Management and Imaging System™ against Oracle9i and have updated the install
scripts to optionally turn on Native Compilation. They have observed a performance increase of up to 40% for
computationally intensive routines, and no performance degradation, in line with our observations using code samples
A.1.1 and A.1.2 (see Appendix).
The 170 MarkView Document Management and Imaging System provides Content Management, Document
Management, Imaging and Workflow solutions – all tightly integrated with the Oracle9i Database, Oracle9i
Application Server and the Oracle E-Business Suite. Enabling businesses to capture and manage all of their
information online in a single, unified system – regardless of original source or format – the 170 MarkView solution
provides scalable, secure, production-quality Internet B2B and intranet access to all of an organization’s vital
information, while streamlining the associated business processes and maximizing operational efficiency.
A large-scale multi-user, multi-access system, 170 MarkView™ supports the very large numbers of documents, images,
concurrent users, and the high transaction rates required by 170 Systems customers. Therefore performance and
scalability are especially important. 170 Systems customers include organizations such as British Telecommunications,
E*TRADE Group, the Apollo Group and the University of Pennsylvania. 170 MarkView uses several different
mechanisms to interface to the Oracle9i Database. Part of the business logic, including preparation of data for
presentation, is implemented in the database in PL/SQL. The computation involves string processing supported by
stacks and lists of values modeled as PL/SQL collections. Several PL/SQL modules implement complex logic and
include intensive string manipulation and processing. PL/SQL collections are leveraged in this complex processing.

1.6. BUSINESS BENEFITS OF NATIVE COMPILATION


• Increased speed and scalability


2. THE CASE KEYWORD: CASE STATEMENTS AND CASE EXPRESSIONS


2.1. CASE STATEMENT
While CASE constructs don’t offer any fundamentally new semantics, they do allow a more compact notation and
some elimination of repetition with respect to what otherwise would be expressed with an IF construct. Consider the
implementation of a decision table whose predicate is the value of a particular expression. These two fragments…

case n
when 1 then Action1;
when 2 then Action2;
when 3 then Action3;
else ActionOther;
end case;

…and…

if
n = 1 then Action1;
elsif n = 2 then Action2;
elsif n = 3 then Action3;
else ActionOther;
end if;

…are semantically almost identical. But coding best practice gurus generally recommend the CASE formulation
because it more directly models the idea. By pulling out the decision expression n to the start and by mentioning it
only once the programmer’s intention is clearer. This is significant both to the proof reader and to the compiler, which
therefore has better information from which to generate efficient code. For example, the compiler knows immediately
that the decision expression needs to be evaluated just once. And, since the IF formulation repeats the decision
expression for each leg, there’s a greater risk of typographical error which can be difficult to spot.
Moreover, the CASE formulation makes it explicit that the coded cases are the only ones that need handling (see the
discussion of the case_not_found exception below).
CASE constructs are available in most programming languages. Oracle9i introduces them in PL/SQL (and in SQL).
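
The SQL flavor is directly analogous; for example, a searched CASE expression can be used in a query against the
employees table used later in this paper (the band labels here are arbitrary)…

select last_name,
case
when salary < 3000 then 'low'
when salary < 10000 then 'medium'
else 'high'
end salary_band
from employees;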

2.2. CASE EXPRESSION


Consider these two semantically almost identical fragments…

text := case n
when 1 then one
when 2 then two
when 3 then three
else other
end;

…and

if
n = 1 then text := one;
elsif n = 2 then text := two;
elsif n = 3 then text := three;
else text := other;
end if;


The CASE formulation makes it explicit that the intention of the fragment is to provide a value for text.

2.3. SEARCHED CASE STATEMENT AND SEARCHED CASE EXPRESSION


For both the CASE statement and the CASE expression, the searched variant tests each leg on an arbitrary boolean
expression, rather than on equality on a single expression common to all legs, thus…

case
when n = 1 then Action1;
when n = 2 then Action2;
when n = 3 then Action3;
when ( n > 3 and n < 8 ) then Action4through7;
else ActionOther;
end case;

…and…

text := case
when n = 1 then one
when n = 2 then two
when n = 3 then three
when ( n > 3 and n < 8 ) then four_through_seven
else other
end;

Note: With the CASE formulation as with the IF formulation, the leg which is selected for particular data values will in
general depend on the order in which the legs are written. Consider…

case
when this_patient.pregnant = 'Y' then Action1;
when this_patient.unconscious = 'Y' then Action2;
when this_patient.age < 5 then Action3;
when this_patient.gender = 'F' then Action4;
else ActionOther;
end case;

An unconscious pregnant woman will receive Action1.

2.4. THE CASE_NOT_FOUND EXCEPTION


A subtle difference between the CASE construct and the corresponding IF construct occurs when the ELSE leg is
omitted. With the IF construct, if none of the legs is selected then there is no action. But with the CASE construct, if
none of the legs is selected then the case_not_found exception (ORA-06592: CASE not found while executing CASE
statement) is raised, thus…

...
p:=0; q:=0; r:=0;
case
when p = 1 then Action1;
when r = 2 then Action2;
when q > 1 then Action3;
end case;
exception
when case_not_found


then Show ( 'Trapped: case_not_found' );


...

2.5. BUSINESS BENEFITS OF CASE CONSTRUCTS


• Increased usability for the developer and code reviewer

3. BULK BINDING ENHANCEMENTS


3.1. HANDLING AND REPORTING EXCEPTIONS
Consider a program to insert the elements in a PL/SQL collection into a database table. It’s possible that some
elements might fail and that the designer would regard this as a non-fatal error and want to continue to insert
subsequent elements. The explicit row by row implementation would handle the exception, and probably record it for
subsequent review thus…

declare /* relies on... create table t ( text varchar2(3) ) */
type words_t is table of varchar2(10); words words_t :=
words_t ( 'dog', 'fish', 'cat', 'ball', 'bat', 'spoke', 'pad' )
/* 'fish', 'ball' and 'spoke' will raise ORA-01401 */;
n integer := 0;
type error_indexes_t is table of integer index by binary_integer;
error_indexes error_indexes_t;
type error_codes_t is table of varchar2(255) index by binary_integer;
error_codes error_codes_t;
begin
for j in words.first..words.last
loop
begin
insert into t ( text ) values ( words(j) );
exception when others then
n := n+1; error_indexes(n) := j; error_codes(n) := SQLERRM;
end;
end loop;

for j in 1..n
loop
Show ( error_indexes(j), error_codes(j) );
end loop;
end;

Pre-Oracle9i there was no way to continue after a row-wise exception in the bulk binding approach…

forall j in words.first..words.last
insert into t ( text ) values ( words(j) );

…and the effect of the ORA-01401 on [what would be] just some of the rows meant that no rows were inserted.
Oracle9i introduces the save exceptions syntax and the corresponding “ORA-24381: error(s) in array DML”
exception. This allows the implied loop to continue after row-wise failure…

forall j in words.first..words.last
save exceptions /* new at 9i */
insert into t ( text ) values ( words(j) );

…resulting in the successful insert of 'dog', 'cat', 'bat', 'pad'.


To complement this construct, the sql%bulk_exceptions collection allows reporting of the erroring rows in the
exception handler for ORA-24381 thus…

declare
...
bulk_errors exception;
pragma exception_init ( bulk_errors, -24381 );
begin
forall j in words.first..words.last
save exceptions
insert into t ( text ) values ( words(j) );
exception when bulk_errors then
for j in 1..sql%bulk_exceptions.count
loop
Show (
sql%bulk_exceptions(j).error_index,
Sqlerrm(-sql%bulk_exceptions(j).error_code) );
end loop;
end;

…which produces…

2: ORA-01401: inserted value too large for column
4: ORA-01401: inserted value too large for column
6: ORA-01401: inserted value too large for column

The construct is also supported in native dynamic SQL thus…

forall j in words.first..words.last
save exceptions
execute immediate 'insert into t ( text ) values ( :the_word )'
using words(j);

3.2. BULK BINDING IN NATIVE DYNAMIC SQL


3.2.1. DEFINING
Consider a program to populate elements of a PL/SQL collection from a SELECT query thus…

declare
type employee_ids_t is table of employees.employee_id%type
index by binary_integer;
employee_ids employee_ids_t; n integer:=0;
begin
for j in ( select employee_id from employees where salary < 3000 )
loop
n := n+1; employee_ids(n) := j.employee_id;
end loop;
end;

Each explicit row by row assignment of the collection element to the cursor component causes a context switch
between the PL/SQL engine and the SQL engine resulting in performance overhead. The following formulation (one
of a family of constructs generically referred to as bulk binding and available pre-Oracle9i)…

begin
select employee_id
bulk collect into employee_ids
from employees where salary < 3000;


end;

…substantially improves performance by minimizing the number of context switches required to execute the block.
(The above fragments work pre-Oracle 9i.)
There are many application implementation situations that require dynamic SQL. Native dynamic SQL (execute
immediate and related constructs) is usually preferred over Dbms_Sql because it’s easier to write and proof read
and executes faster. However, pre-Oracle9i, only Dbms_Sql could be used for dynamic bulk binding. Oracle9i
introduces the following syntax for bulk binding in native dynamic SQL …

begin /* new at 9i */
execute immediate 'select employee_id from employees where salary < 3000'
bulk collect into employee_ids;
end;

3.2.2. IN-BINDING
The same progression (explicit row by row processing, bulk binding, bulk binding in native dynamic SQL) is
supported for DML (insert, update and delete) thus…

for j in employee_ids.first..employee_ids.last
loop
update employees set salary = salary*1.1
where employee_id = employee_ids(j);
end loop;

…then…

forall j in employee_ids.first..employee_ids.last
update employees set salary = salary*1.1
where employee_id = employee_ids(j);

…then…

forall j in employee_ids.first..employee_ids.last
execute immediate 'update employees set salary = salary*1.1'
|| ' where employee_id = :the_id'
using employee_ids(j) /* new at 9i */;

3.2.3. OUT-BINDING
The progression is also supported for implicit query in a DML statement via the returning keyword…

for j in employee_ids.first..employee_ids.last
loop
update employees set salary = salary*1.1
where employee_id = employee_ids(j)
returning salary into salaries(j);
end loop;

…then…

forall j in employee_ids.first..employee_ids.last
update employees set salary = salary*1.1
where employee_id = employee_ids(j)
returning salary bulk collect into salaries
/* this is not a typo: employee_ids is subscripted but salaries isn’t */;


…then…

forall j in employee_ids.first..employee_ids.last
execute immediate 'update employees set salary = salary*1.1'
|| ' where employee_id = :the_id'
|| ' returning salary into :the_salary'
using employee_ids(j)
returning bulk collect into salaries /* new at 9i */;

3.2.4. ORACLE9i ENHANCEMENT FOR BULK FETCH FROM CURSOR VARIABLE ASSIGNED BY NATIVE DYNAMIC SQL
This is described in the section “Table Functions and Cursor Expressions” below.

3.3. BUSINESS BENEFITS OF BULK BINDING ENHANCEMENTS


• Increased speed and scalability for appropriate applications
• Improved functionality by virtue of better exception handling

4. TABLE FUNCTIONS AND CURSOR EXPRESSIONS


4.1. OVERVIEW
Cursor expressions (sometimes known as cursor subqueries) are an element of the SQL language and pre-Oracle9i
were supported in SQL and by certain programming environments but not by PL/SQL. Oracle9i introduces PL/SQL
support for cursor expressions. For example, a cursor expression can be used in the SELECT statement used to open
a PL/SQL cursor, and manipulated appropriately thereafter. It can also be used as an actual parameter to a PL/SQL
procedure or function, which has great significance in connection with table functions.
Table functions were also supported (in rudimentary form) in pre-Oracle9i, but a number of major enhancements
have been made at Oracle9i. A table function can now be written to deliver rows pipeline fashion as soon as they are
computed, dramatically improving response time in a “first rows” scenario. It can now be written to accept a SELECT
statement as input, allowing an indefinite number of transformations to be daisy-chained, avoiding the need for
storage of intermediate results. And it can now be written so that its computation can be parallelized to leverage
Oracle’s parallel query mechanism.
The enabling of parallel execution of a table function means that it’s now possible to leverage the power of PL/SQL
in the ETL phase of data warehouse applications without serialization.

4.2. CURSOR VARIABLES – RECAP


This PL/SQL language feature was available pre-Oracle9i. A cursor variable is a pointer (declared as type ref cursor)
to an actual cursor. Code which is written to manipulate a cursor variable can be reused for successive assignments to
different actual cursors. The understanding of the PL/SQL features introduced in Oracle9i for cursor expressions and
table functions depends on understanding cursor variables.
Consider this procedure…

create or replace procedure Fetch_From_Cursor


( p_cursor in sys_refcursor )
is
the_name varchar2(4000);
begin
loop
fetch p_cursor into the_name;


exit when p_cursor%notfound;


Show ( the_name );
end loop;
end Fetch_From_Cursor;

It can be invoked with a cursor variable which has been assigned to any SELECT statement against any table whose
select list is a single VARCHAR2, for example…

declare
the_cursor sys_refcursor;
begin
open the_cursor for
select last_name from employees order by last_name;
Fetch_From_Cursor ( the_cursor );
close the_cursor;

open the_cursor for


select department_name from departments order by department_name;
Fetch_From_Cursor ( the_cursor );
close the_cursor;
end;

Note: the available type sys_refcursor, defining a generic weak cursor, is a usability enhancement, new at Oracle9i.
Pre-Oracle9i it would be necessary to define a type, for example…

create or replace package My_Types is


type Weak_Cursor is ref cursor;
...
end My_Types;

…and then to declare p_cursor in My_Types.Weak_Cursor and the_cursor My_Types.Weak_Cursor.

4.3. ORACLE9i ENHANCEMENT FOR BULK FETCH FROM CURSOR VARIABLE ASSIGNED BY NATIVE DYNAMIC SQL
Consider modifying the Fetch_From_Cursor procedure to use bulk fetch, thus…

create or replace procedure Bulk_Fetch_From_Cursor


( p_cursor in sys_refcursor )
is
type names_t is table of varchar2(4000)
index by binary_integer;
the_names names_t;
begin
fetch p_cursor bulk collect into the_names;

for j in the_names.first..the_names.last
loop
Show ( the_names(j) );
end loop;
end Bulk_Fetch_From_Cursor;

It can be invoked with a cursor variable which has been assigned using native dynamic SQL, thus…

declare
the_cursor sys_refcursor;


begin
open the_cursor for
'select last_name from employees order by last_name';
Bulk_Fetch_From_Cursor ( the_cursor );
close the_cursor;

open the_cursor for


'select department_name from departments order by department_name';
Bulk_Fetch_From_Cursor ( the_cursor );
close the_cursor;
end;

If this is attempted in a pre-Oracle9i environment (making appropriate substitution for sys_refcursor), then: either
bulk fetch can be used when the cursor variable is assigned using static SQL; or explicit row by row fetch can be used
when the cursor variable is assigned using native dynamic SQL. But the attempt to do bulk fetch when the cursor
variable is assigned using native dynamic SQL causes “ORA-01001: invalid cursor”.

4.4. MANIPULATING CURSOR EXPRESSIONS IN PL/SQL


Consider the task: list the department names, and for each department list the names of the employees in that
department. It can be simply implemented by a classical sequential programming approach thus…

begin
for department in (
select department_id, department_name
from departments
order by department_name
)
loop
Show ( department.department_name );
for employee in (
select last_name
from employees
where department_id = department.department_id
order by last_name
)
loop
Show ( employee.last_name );
end loop;
end loop;
end;

The following SELECT expresses the query requirement in a single SQL statement …

select
department_name,
cursor (
select last_name
from employees e
where e.department_id = d.department_id
order by last_name
) the_employees
from departments d
order by department_name;

…and runs in SQL*Plus pre-Oracle9i. (This implies of course that a corresponding cursor can be manipulated in the
programming language used to implement SQL*Plus.) However, an attempt to associate such a SELECT statement
with a PL/SQL cursor pre-Oracle9i fails to compile (with PLS-00103). Oracle9i introduces support for this thus…

declare
cursor the_departments is
select
department_name,
cursor (
select last_name
from employees e
where e.department_id = d.department_id
order by last_name
)
from departments d
where department_name in ( 'Executive', 'Marketing' )
order by department_name;

v_department_name departments.department_name%type;
the_employees sys_refcursor;

type employee_last_names_t is table of employees.last_name%type


index by binary_integer;
v_employee_last_names employee_last_names_t;
begin
open the_departments;
loop
fetch the_departments into v_department_name, the_employees;
exit when the_departments%notfound;
Show ( v_department_name );
fetch the_employees bulk collect into v_employee_last_names;
for j in v_employee_last_names.first..v_employee_last_names.last
loop
Show ( v_employee_last_names(j) );
end loop;
end loop;
close the_departments;
end;

Though this is more lines of code, and arguably less easy to proof read, than the sequentially programmed
implementation, it has this advantage: there is only one SQL statement, and so it can be optimized more effectively
than (what the SQL engine sees as) two unconnected SQL statements.
Note: Bulk fetch is used for the_employees cursor. This is not currently available for the_departments cursor
because the appropriate collection type cannot be declared…

declare
type department_r is record
( department_name departments.department_name%type,
the_employees sys_refcursor );
begin null; end;

…causes “PLS-00989: Cursor Variable in record, object, or collection is not supported by this release”.

4.5. USING A CURSOR EXPRESSION AS AN ACTUAL PARAMETER TO A PL/SQL FUNCTION


A cursor variable (i.e. a variable of type ref cursor) points to an actual cursor, and may be used as a formal
parameter to a PL/SQL procedure or function. A cursor expression defines an actual cursor, and as we have seen is a
construct that’s legal in a SQL statement. (Both these statements are true pre-Oracle9i.) So we would expect that it
would be possible to invoke a PL/SQL procedure or function which has a formal parameter of type ref cursor
with a cursor expression as its actual parameter, thus…

My_Function ( cursor ( select my_col from my_tab ) )

In fact, this was not allowed under any circumstances pre-Oracle9i (ORA-22902). New at Oracle9i, it is now allowed
under certain circumstances: when the function (it cannot be a procedure) is invoked in a top level SQL statement.
Given a function that can be invoked thus…

declare
the_cursor sys_refcursor;
n number;
begin
open the_cursor for
select my_col from my_tab;
n := My_Function ( the_cursor );
close the_cursor;
end;

…it can now be invoked…

select 'My_Function' My_Function from dual


where My_Function ( cursor ( select my_col from my_tab ) ) = 1;

…or…

select 'My_Function' My_Function from dual


order by My_Function ( cursor ( select my_col from my_tab ) );

Most significantly, this syntax is now allowed in the invocation of a table function in the FROM list of a SELECT
statement, see below.
Note: the following syntax…

begin
My_Function ( cursor ( select my_col from my_tab ) );
end;

…is not allowed. (It fails with “PLS-00405: subquery not allowed in this context”.)
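
The body of My_Function is not shown above; purely as an illustration (my_tab and my_col are the placeholder names
already used, and the logic is arbitrary), a function that could be invoked in these ways might simply report whether
the cursor delivered any rows…

create or replace function My_Function ( p_cursor in sys_refcursor )
return number
is
v_value my_tab.my_col%type;
v_count pls_integer := 0;
begin
loop
fetch p_cursor into v_value;
exit when p_cursor%notfound;
v_count := v_count + 1;
end loop;
return case when v_count > 0 then 1 else 0 end;
end My_Function;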

4.6. “YOUNG MANAGERS” SCENARIO


Consider the requirement to find those managers in the employees table, the majority of whose direct reports were
hired before the manager. The algorithm depends on finding the direct reports for each manager and comparing the
number who were hired before him with the number who were hired after him. This can be programmed
straightforwardly in PL/SQL using classical techniques. See code sample A.4.1.1 in the Appendix. (Note that, seeking
to use enhanced Oracle9i functionality, this is implemented using a single SQL SELECT which has a cursor subquery
for the reports of a given manager.) This approach allows the production of a report, or as is illustrated, populating a
table with the results.
But suppose the requirement is more subtle: to create a VIEW to represent managers as specified, so that it can be
leveraged in ad hoc queries representing the current state of the underlying data. In fact, the requirement in this scenario
can be implemented in pure SQL using only SQL functions such as SUM and DECODE. See code sample A.4.1.2.
There are some rules that are too complex to implement by DECODE, in which case the user could write his own
function.


But the approach in A.4.1.2, though it works, feels back to front! Unlike A.4.1.1, it does not model the simple
statement of the algorithm, and is therefore hard to write and to proof read. A more comfortable approach is to define
a view thus…

create view young_managers as


select ...
from employees managers
where Most_Reports_Before_Manager( < stuff for this manager > ) = 1;

We can do this classically (see A.4.1.3) thus…

create view young_managers as


select ...
from employees managers
where Most_Reports_Before_Manager
(
managers.employee_id, managers.hire_date
) = 1;

…or by passing a cursor expression as the actual parameter to a function whose formal parameter is of type ref
cursor (see A.4.1.4) thus…

create view young_managers as


select ...
from employees managers
where Most_Reports_Before_Manager
(
cursor ( < select hire date stuff for this manager’s reports > ),
managers.hire_date
) = 1;
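
The body of Most_Reports_Before_Manager is given in the appendix; a minimal sketch of the ref cursor variant used
in A.4.1.4 (the parameter names here are illustrative) might compare the hire dates delivered by the cursor with the
manager’s hire date thus…

create or replace function Most_Reports_Before_Manager
( p_report_hire_dates in sys_refcursor, p_manager_hire_date in date )
return number
is
v_hire_date employees.hire_date%type;
v_before pls_integer := 0;
v_after pls_integer := 0;
begin
loop
fetch p_report_hire_dates into v_hire_date;
exit when p_report_hire_dates%notfound;
if v_hire_date < p_manager_hire_date
then v_before := v_before + 1;
else v_after := v_after + 1;
end if;
end loop;
close p_report_hire_dates;
return case when v_before > v_after then 1 else 0 end;
end Most_Reports_Before_Manager;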

The A.4.1.4 approach is not possible before Oracle9i. Its advantage over the A.4.1.3 approach is marginal rather than
dramatic: it offers greater potential for reuse in that its logic is expressed in terms of, and depends only on, the select
list for an arbitrary SELECT whereas the A.4.1.3 approach hard-codes the SELECT; and, since there is only one SQL
statement, this can be optimized more effectively than (what the SQL engine sees as) two unconnected SQL
statements (as discussed above).
The dramatic benefit of the new Oracle9i feature allowing a cursor expression as an actual parameter to a PL/SQL
function comes in connection with table functions, discussed below.

4.7. TABLE FUNCTIONS – RECAP


Suppose we have two schema-level types, a tuple analogous to a table row and a table of these, defined thus…

create type lookup_row as object ( idx number, text varchar2(20) );


create type lookups_tab as table of lookup_row;

We can then write a PL/SQL function which returns an instance of the table thus…

create or replace function Lookups_Fn return lookups_tab is


v_table lookups_tab;
begin
/*
To extend a nested table, you must use the built-in procedure EXTEND,
but to extend an index-by table, you just specify larger subscripts.
*/


v_table := lookups_tab ( lookup_row ( 1, 'ONE' ) );


for j in 2..9
loop
v_table.Extend;
if j = 2 then v_table(j) := lookup_row ( 2, 'two' );
elsif j = 3 then v_table(j) := lookup_row ( 3, 'THREE' );
elsif j = 4 then v_table(j) := lookup_row ( 4, 'four' );
elsif j = 5 then v_table(j) := lookup_row ( 5, 'FIVE' );
elsif j = 6 then v_table(j) := lookup_row ( 6, 'six' );
elsif j = 7 then v_table(j) := lookup_row ( 7, 'SEVEN' );
else v_table(j) := lookup_row ( j, 'other' );
end if;
end loop;
return v_table;
end Lookups_Fn;

We can then invoke it in the FROM list of a SELECT statement thus…

select * from table ( cast ( Lookups_Fn() as lookups_tab ) );

This allows a table to be synthesized by computation. For example, the function might call Utl_File procedures (to
parse data that cannot be handled by the SQL*Loader utility or by the external table feature), or might call C routines
(via the callout framework) which access arbitrary external data sources. Or it might access database tables and
perform transformations which cannot be expressed with pure SQL and SQL functions. The SELECT statement can
be used to define a view, and/or combined with other tables in the FROM list in an arbitrarily complex SQL
statement.
A table function, then, is a PL/SQL function which can be invoked in the FROM clause of a SQL SELECT statement.
We’ll see below that a table function which exploits new Oracle9i functionality, which we expect all table functions to
do, can only be invoked in the FROM clause of a SQL SELECT statement.

4.8. PIPELINED TABLE FUNCTIONS – NEW IN ORACLE9i


The above functionality is available pre-Oracle9i. However, it has the limitation that the function must run to
completion, storing all the rows it computes in the PL/SQL table before even the first row can be delivered. (There
are other limitations, see below.) Oracle9i introduces the pipelined construct which allows the function to be re-
written thus…

create or replace function Lookups_Fn return lookups_tab


pipelined
is
v_row lookup_row;
begin
for j in 1..10
loop
v_row :=
case j
when 1 then lookup_row ( 1, 'one' )
...
when 7 then lookup_row ( 7, 'seven' )
else lookup_row ( j, 'other' )
end;
pipe row ( v_row );
end loop;
return;
end Lookups_Fn;


Thus each row is delivered as soon as it is ready, so that the response time characteristics of a table function are
symmetrical with those of a rowsource based on a table scan or an index scan. (For performance, the PL/SQL
runtime system delivers the rows from a pipelined table function in batches.)
Note: the function body now mentions only rows (i.e. not the table), and the table is just implied by the return type.
(For elegance, the IF construct has been replaced with the new CASE formulation.) The same syntax as above can be
used to select from the table function, but it can now be simplified thus…

select * from table ( Lookups_Fn );

(The invocation will be written Lookups_Fn() in the following to emphasize its status as a function.)
Oracle9i also introduces the possibility to create a table function which returns a PL/SQL type thus…

create or replace package My_Types is


type lookup_row is record ( idx number, text varchar2(20) );
type lookups_tab is table of lookup_row;
end My_Types;

create or replace function Lookups_Fn return My_Types.lookups_tab


pipelined
is
v_row My_Types.lookup_row;
begin
for j in 1..10
loop
case j
when 1 then v_row.idx := 1; v_row.text := 'one';
...
when 7 then v_row.idx := 7; v_row.text := 'seven';
else v_row.idx := j; v_row.text := 'other';
end case;
pipe row ( v_row );
end loop;
return;
end Lookups_Fn;

In the limit, a PL/SQL type may be defined in the declare section of an anonymous block and hence have no
persistence. However, to be useful in connection with table functions, the PL/SQL types must be declared in a
package, and so when discussing table functions they are usually referred to as package-level types (in contrast to
schema-level types).
Note: A table function which returns a package-level type must be pipelined. Moreover, the simpler SELECT syntax
(without the CAST) must be used.

4.9. PIPING DATA FROM ONE TABLE FUNCTION TO THE NEXT – NEW IN ORACLE9i
A table function may now be defined with an input parameter of type ref cursor and invoked with a cursor
expression as the actual parameter. Consider the following…

create or replace function Mappings_Fn ( p_input_rows in sys_refcursor )


return My_Types.lookups_tab
pipelined
is
v_in_row My_Types.lookup_row;
v_out_row My_Types.lookup_row;
begin
/*


The following causes...


PLS-00361: IN cursor 'P_INPUT_ROWS' cannot be OPEN'ed
(The system opens the cursor on invoking the function.)
*/
--open p_input_rows;
loop
fetch p_input_rows into v_in_row;
exit when p_input_rows%notfound;

case v_in_row.idx
when 1 then v_out_row.idx := 1*2; v_out_row.text := 'was one';
when 2 then v_out_row.idx := 2*3; v_out_row.text := 'was TWO';
when 3 then v_out_row.idx := 3*4; v_out_row.text := 'was three';
when 4 then v_out_row.idx := 4*5; v_out_row.text := 'was FOUR';
when 5 then v_out_row.idx := 5*6; v_out_row.text := 'was five';
when 6 then v_out_row.idx := 6*7; v_out_row.text := 'was SIX';
when 7 then v_out_row.idx := 7*8; v_out_row.text := 'was seven';
else v_out_row.idx :=
v_in_row.idx*10; v_out_row.text := 'was other';
end case;
pipe row ( v_out_row );
end loop;
close p_input_rows;
return;
end Mappings_Fn;

Suppose t is a table which supports a select list compatible with My_Types.lookup_row. We can now invoke the
table function thus…

select * from table ( Mappings_Fn ( cursor ( select idx, text from t ) ) );

Of course, t might have been a view defined thus…

create or replace view t as


select * from table ( Lookups_Fn() );

…which implies the more compact syntax…

create or replace view v as


select *
from table ( Mappings_Fn ( cursor ( select * from table ( Lookups_Fn() ) ) ) );

Data can be piped from one to the next of an arbitrary number of table functions daisy-chained in succession. And
due to the pipelining feature storage of intermediate results is avoided. Table functions can thus be used to implement
the extraction, transformation and load operation (a.k.a. ETL) for building a datawarehouse from OLTP data. In the
limit, the extraction table function would access a foreign data source as discussed above.
4.10. THE “YOUNG MANAGERS” SCENARIO REVISITED – TABLE FUNCTION APPROACH
We can now use yet another approach! The complete solution can be implemented in a table function. This has the
usability advantage of keeping all the logic in one place, and the performance advantage of invoking the function only
once rather than once per row in the table. See code sample A.4.1.5 in the Appendix. This was derived “mechanically”
from code sample A.4.1.1 simply by creating an appropriate PL/SQL table type and by creating the block as a
pipelined function to return that type, substituting pipe row ( manager_employee_id ) for insert into
young_managers values ( manager_employee_id ).
The function can be made more general by giving it a ref cursor input parameter and by passing in the cursor
expression as the actual parameter. See code sample A.4.1.6. This would allow it to be “pointed at” any table which
expressed a hierarchy where both parent and child have a date.

4.11. FANOUT: USING TABLE FUNCTIONS WITH SIDE EFFECTS


Sometimes the specification for the transformation to be implemented as a table function explicitly excludes source
data with certain characteristics. In such cases, it’s useful to report on the excluded source data and often most
convenient to direct the report to the database for further analysis. A table function may do DML, provided that this
is within an autonomous transaction, thus…

create or replace function Lookups_Fn_With_Side_Effect


return My_Types.lookups_tab
pipelined
/*
uses...
create table exclusions ( n number );
*/
is
pragma autonomous_transaction;
v_row My_Types.lookup_row;
begin
for j in 1..15
loop
case
when j < 11 then
case j
when 1 then v_row.idx := 1; v_row.text := 'one';
when 2 then v_row.idx := 2; v_row.text := 'TWO';
when 3 then v_row.idx := 3; v_row.text := 'three';
when 4 then v_row.idx := 4; v_row.text := 'FOUR';
when 5 then v_row.idx := 5; v_row.text := 'five';
when 6 then v_row.idx := 6; v_row.text := 'SIX';
when 7 then v_row.idx := 7; v_row.text := 'seven';
else v_row.idx := j; v_row.text := 'other';
end case;
pipe row ( v_row );
else
insert into exclusions values ( j );
end case;
end loop;
commit;
return;
end Lookups_Fn_With_Side_Effect;

4.12. PARALLELIZING TABLE FUNCTION EXECUTION – NEW IN ORACLE9i


It is beyond the scope of this paper to describe the details of Oracle’s parallel query feature. Suffice it to say that when
certain environment conditions are met (especially a hardware environment that supports multiple concurrently
executing processes making concurrent disk accesses, and a user environment close to single-user) and when the
objects referenced in a query have appropriate parallel attributes, then the elapsed time for long-running queries can be
cut in direct proportion to the number of available CPUs. This is especially significant in decision support systems
(a.k.a. DSS) both at query time and in the extraction, transformation and load (a.k.a. ETL) operations to populate
them.
Oracle9i introduces table function features to allow their execution to be parallelized. These features require (with one
small exception, see below) that the table function has exactly one strongly typed ref cursor input parameter.
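
A strongly typed ref cursor is one declared with a RETURN clause which fixes the shape of its select list. The My_Types
package used in the samples below is defined in the appendix; a representative sketch (the record field types are
assumed here from the employees table) is…

create or replace package My_Types is
/* record shape matching "select salary, department_id from employees" */
type dept_sal_row is record
( sal employees.salary%type,
dept employees.department_id%type );
type dept_sals_tab is table of dept_sal_row;
/* the RETURN clause makes this a strongly typed ref cursor */
type cur_t is ref cursor return dept_sal_row;
end My_Types;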


4.12.1. SPECIAL CASE: FUNCTION BEHAVIOR IS INDEPENDENT OF THE PARTITIONING OF THE INPUT DATA
Consider a function which processes each row from its input cursor individually. (Such a transformation which
generates two or more output rows from each input row – generically referred to as pivoting – benefits particularly
from a table function implementation.) The syntax to parallelize this is straightforward thus…

create or replace function Rowwise_Xform_Fn ( p_input_rows in sys_refcursor )


return My_Types.xforms_tab
pipelined
parallel_enable ( partition p_input_rows by any )
is
v_in_row My_Types.input_row;
v_out_row My_Types.xform_row;
begin
loop
fetch p_input_rows into v_in_row;
exit when p_input_rows%notfound;
v_out_row.n := v_in_row.n*2; v_out_row.typ := 'a';
pipe row ( v_out_row );
v_out_row.n := v_in_row.n*3; v_out_row.typ := 'b';
pipe row ( v_out_row );
end loop;
close p_input_rows;
return;
end Rowwise_Xform_Fn;

See code sample A.4.2.1 in the Appendix for the complete working example. The keyword any expresses the
programmer’s assertion that the results are independent of the order in which the function gets the input rows. When
this keyword is used, the runtime system randomly partitions the data among the query slaves. This keyword is
appropriate for use with functions that take in one row, manipulate its columns, and generate output row(s) based on
the columns of this row only. (Of course if this assertion doesn’t hold, then the output will not be predictable.) This is
the small exception referred to above: the input ref cursor need not be strongly typed to be partitioned by any.
The ability to exploit the parallel potential of a table function depends on whether the source can be parallelized.
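
For example (the table name input_rows and the degree of parallelism below are assumptions for illustration only),
the source table can be given a parallel attribute before the table function is invoked over it…

alter table input_rows parallel 4;

select * from table ( Rowwise_Xform_Fn ( cursor ( select n from input_rows ) ) );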

4.12.2. GENERAL CASE: FUNCTION BEHAVIOR DEPENDS ON THE PARTITIONING OF THE INPUT DATA
Consider a transformation along the lines of…

select avg ( salary ), department_id from employees group by department_id;

…where the aggregation operation to be performed on the set of salaries for a given department is arbitrarily complex
such that a classical SQL implementation is impossible, slow by virtue of a function invocation for each row of the
source table, or prohibitively challenging to write and debug. For example, it might be that the cost to the employer of
paying a given salary depends on the hire date because of changes in benefits packages that affect only employees
hired after the date of change. This is illustrated in code sample A.4.2.2 in the Appendix, but to avoid obscuring it
with a complicated algorithm, the aggregation is simply the sum of the salaries for each distinct department. This has
the general form…

create or replace function Aggregate_Xform ( p_input_rows in My_Types.cur_t )


return My_Types.dept_sals_tab
pipelined
is
...
begin
Get_Next_Row();
while Got_Next_Dept() /* relies on assumption that


all rows for given dept are delivered consecutively */


loop
v_total_sal := 0;
while Got_Next_Row_In_Dept()
loop
v_total_sal := v_total_sal + g_in_row.sal;
Get_Next_Row();
end loop;
g_out_row.sal := v_total_sal; g_out_row.dept := g_current_dept;
pipe row ( g_out_row );
end loop;
close p_input_rows;
return;
end Aggregate_Xform;

Given that the input rows will be partitioned between different slaves, the integrity of the algorithm requires that all the
rows for a given department go to the same slave, and that all these rows are delivered consecutively. (Strictly speaking, the
requirement for consecutive delivery is negotiable, but the design of the algorithm to handle this case would need to
be much more elaborate. For that reason, Oracle commits to consecutive delivery.) We use the term clustered to signify
this type of delivery, and cluster key for the column (in this case “department”) on which the aggregation is done. But
significantly, the algorithm does not care in what order of cluster key it receives each successive cluster, and Oracle does
not guarantee any particular order here.
This allows the possibility of a quicker algorithm than if rows were required to be clustered and delivered in order of
the cluster key. It scales as order N rather than order N.log(N), where N is the number of rows. The syntax is…

create or replace function Aggregate_Xform ( p_input_rows in My_Types.cur_t )


return My_Types.dept_sals_tab
pipelined
cluster p_input_rows by (dept)
parallel_enable
( partition p_input_rows by hash (dept) )
is...

We can choose between hash (dept) and range (dept) depending on what we know about the distribution of
the values. (hash will be quicker than range and is the natural choice to be used with cluster... by.) Here, to be
partitioned by a specified column, the input ref cursor must be strongly typed. cluster... by is not allowed
without parallel_enable ( partition... by ).
Note: at version 9.0.1 it is necessary to include ORDER BY on the cluster key in the SELECT used to invoke the table
function thus…

select * from table (


Aggregate_Xform (
cursor (
select salary, department_id from employees
where department_id is not null
order by department_id ) ) );

…to preserve correctness of behavior, but this restriction will be removed when the order N clustering algorithm is
productized.

4.12.3. ORDER BY VERSUS CLUSTER BY


This alternative syntax is also allowed…

create or replace function My_Fn ( p_input_rows in My_Types.cur_t )


return My_Types.items_tab
pipelined
order p_input_rows by (c1)
parallel_enable
( partition p_input_rows by range (c1) )
is...

This means that those rows that are delivered to a particular slave as directed by partition... by will be locally
sorted by that slave, thus parallelizing the sort. Therefore there should be no ORDER BY in the SELECT used to
invoke the table function. (To have one would subvert the attempt to parallelize the sort.) Thus it’s natural to use the
range option together with the order by option. This will be slower than cluster by, and so should be used only
when the algorithm depends on it.
Note: the cluster... by construct cannot be used together with the order... by in the declaration of a table
function. This means that an algorithm which depends on clustering on one key, c1, and then on ordering within the
set of rows for a given value of c1 by, say, c2 would have to be parallelized by using the order... by in the declaration
in the table function. (The algorithm in code sample A.5.5 has this character.) Here we would use…

create or replace function Median ( p_input_rows in My_Types.cur_t )


return My_Types.items_tab
pipelined
order p_input_rows by (c1,c2)
parallel_enable
( partition p_input_rows by range (c1) )
is...

The current restriction preventing using cluster... by together with order... by implies no loss of
functionality, but only a missed opportunity to leverage the order N sort.
Caution: It is possible to design an algorithm for a table function which would deliver a different number of rows
according to the degree of parallelism. The simplest example is a function which returns a table of NUMBER
representing the count of the rows its input cursor delivered. A non-parallelized version would deliver just one row
giving count(*) for the input table. A parallelized version would deliver N rows (where N is the degree of parallelism),
the sum of whose values would give count(*) for the input table. However, this breaks the parallel query abstraction.
Oracle recommends against programming this way.

4.12.4. PERFORMANCE EXPERIMENT


Oracle Corp compared pre-Oracle9i and Oracle9i performance using a Sun Ultra-Enterprise 4500 machine with 3 GB
RAM and 10 CPUs at 168 MHz. A 1,000,000 row source table was used for a 1 row in to 7 rows out pivot transform.
The pre-Oracle9i approach was a PL/SQL cursor loop with 7 INSERTs. The Oracle9i approach was a table function
with 7 PIPE ROWs. The experiment is described in Performance and Scalability in DSS Environment
with Oracle9i, by Neil Thombre, on otn.oracle.com/deploy/performance/.
The pre-Oracle9i approach took 87 minutes. The Oracle9i approach with no parallelization took 37 minutes (i.e. an
improvement factor of 2.4x). The Oracle9i approach with parallelization degree 20 took 12 minutes (i.e. an improvement
factor of 7.5x over the pre-Oracle9i baseline).

4.13. SYNTAX FOR TABLE FUNCTION BASED ON SCHEMA-LEVEL TYPE


When a table function is written to return a schema-level type, the syntax required to invoke it is somewhat verbose.
For completeness it is illustrated in code sample A.4.3 in the Appendix.

4.14. BUSINESS BENEFITS OF TABLE FUNCTIONS AND CURSOR EXPRESSIONS


• Cursor expressions allow encapsulation of logic for re-use in compatible query situations, giving increased
developer productivity and application reliability.


• Table functions give increased functionality by allowing sets of tuples from arbitrary external data sources and
sets of tuples synthesized from arbitrary computations to be invoked (as if they were a table) in the FROM list
of a SELECT clause. For convenience they can be used to define a VIEW, giving new functionality.
• Table functions can be used to deliver the rows from an arbitrarily complex PL/SQL transformation sourced
from Oracle tables (including therefore other table functions) as a “VIEW”, without storage of the calculated
rows. This gives increased speed and scalability. And increased developer productivity and application
reliability.
• Taking the “VIEW” metaphor a step further, the input parameters to the table function allow the “VIEW” to
be parameterizable, increasing code re-usability and therefore increasing developer productivity and
application reliability.
• A table function with a ref cursor input parameter can be invoked with another table function as the data
source. Thus table functions can be daisy-chained allowing modular program design and hence increased ease
of programming, re-use and application robustness.
• Table function execution can be parallelized giving improved speed and scalability. This, combined with the
daisy-chaining feature, makes table functions particularly suitable in datawarehouse applications for
implementing Extraction, Transformation and Load operations.
• Fanout (DML from an autonomous transaction in the table function) adds functionality of particular interest
in datawarehouse applications.
• A table function allows data stored in nested tables to be queried as if it were stored relationally, and data
stored relationally to be queried as if it were stored as nested tables. (This will be illustrated in the code
samples for the next section). This allows genuine independence between the format for the persistent storage
of data and the design of the applications which access it. (A VIEW can be defined on a table function, and
INSTEAD OF triggers can be created on the VIEW to complete the picture.)

5. MULTILEVEL COLLECTIONS
5.1. COLLECTIONS - RECAP
There are two schema-level collection prototypes: VARRAY and (nested) TABLE. Both define one-dimensional ordered
arrays of elements of a specified type, and can be leveraged in the creation of user-defined schema-level types thus…

create type Arr_t is varray(255) of number;

…or…

create type Tab_t is table of varchar2(2000);

If appropriate, the element type can be an object type thus…

create type Obj_t is object ( a number, b varchar2(4000), c date );

Instances of schema-level types based on VARRAY or TABLE can be stored as fields of a column in a relational
database table thus…

create type Arr_t is varray(255) of Obj_t;


create table t (id number, arr Arr_t);
insert into t ( id, arr ) values
( 1, Arr_t ( Obj_t ( 1, 'one', '1-Jan-01' ), Obj_t ( 2, 'two', '2-Jan-01' ) ) );
insert into t ( id, arr ) values


( 5, Arr_t ( Obj_t ( 5, 'five', '5-Jan-01' ), Obj_t ( 6, 'six', '6-Jan-01' ) ) );

The main difference between VARRAY and TABLE is that the former has a defined upper bound whereas the latter
is unbounded. A VARRAY field is stored inline for small sizes (<= 4000 bytes) and in an opaque system-managed
LOB within the given relational table for larger sizes, while a TABLE field is stored as several rows in a separate
opaque system managed relational table. This impacts the efficiency of access, leading to a generically familiar trade-
off: non-negotiable maximum collection size with faster access versus unlimited collection size with slower access.
PL/SQL allows variables of user-defined types and provides mechanisms for passing data stored in schema-level
collections to and from the corresponding PL/SQL structures thus…

declare
cursor c is select id, arr from t;
v_id number;
v_arr Arr_t;
begin
open c;
loop
fetch c into v_id, v_arr; exit when c%notfound;
Show ( v_id );
for j in v_arr.first..v_arr.last
loop
Show ( v_arr(j).a, v_arr(j).b, v_arr(j).c );
end loop;
end loop;
close c;
end;

PL/SQL also allows types based on VARRAY or TABLE to be declared within library units. This will typically be in a
package for reuse across several library units. In addition, PL/SQL allows the index-by variant of TABLE. (This variant
is not allowed as the basis of a schema-level type.)
All the above is supported pre-Oracle9i.

5.2. COLLECTIONS OF COLLECTIONS – NEW IN ORACLE9i


Consider extending the departments table with a column to record the list of current projects for each department,
modeled as a collection. A project is characterized by inter alia a list of tasks, and a task is characterized by inter alia a
list of employees currently or previously assigned to it. Thus a project list is a collection of collections of collections.
This implies a requirement, in its simplest form, to support…

create type T1_tab_t is table of number;


create type T2_tab_t is table of T1_tab_t;
create type T3_tab_t is table of T2_tab_t;

…but pre-Oracle9i the create type T2_tab_t statement fails with “PLS-00534: A Table type may not contain a
nested table type or VARRAY”.

Oracle9i adds support for this, allowing collection hierarchies of arbitrary depth. The corresponding syntax for types
defined within a PL/SQL library unit is also supported…

create or replace package p is


type T1_tab_t is table of number index by binary_integer;
type T2_tab_t is table of T1_tab_t index by binary_integer;
type T3_tab_t is table of T2_tab_t index by binary_integer;
end p;


This fails pre-Oracle 9i with “PLS-00507: a PLSQL Table may not contain a table or a record with composite fields”.
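
A minimal sketch of populating and reading the three-level package type declared above (Show is the display helper
used throughout this paper) is…

declare
v1 p.T1_tab_t;
v2 p.T2_tab_t;
v3 p.T3_tab_t;
begin
v1(1) := 42;  /* innermost level: a number */
v2(1) := v1;  /* middle level: a table of T1_tab_t */
v3(1) := v2;  /* outermost level: a table of T2_tab_t */
Show ( v3(1)(1)(1) );  /* appending subscripts drills down the levels; shows 42 */
end;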

5.3. “RUNNER’S TRAINING LOGS” EXAMPLE SCENARIO


Consider implementing a system to allow a running coach to maintain training logs for each of the runners under his
guidance. Each runner is identified by first name and runs several times per week. A run is characterized by the
distance and the average pace. The coach will want to monitor week by week variations and progress. Of course many
designs for the logical data model will work, but we consider just two here…

Single flat relational table…

create table reln_training_logs (


first_name varchar2(20) not null,
week number not null,
run number not null,
distance number not null,
pace number not null );
alter table reln_training_logs
add constraint reln_training_logs_pk primary key (first_name,week,run)
using index;

Relational table with multilevel collection column…

create type run_t as object ( distance number, pace number );


create type weeks_running_t is varray(20) of run_t not null;
create type training_log_t is varray(255) of weeks_running_t not null;
create table nested_training_logs (
first_name varchar2(20) primary key,
training_log training_log_t );

The reln_training_logs approach would be suitable if the typical access was for ad hoc queries across runners,
and the nested_training_logs approach would be suitable if the typical access was to report all the information
for each of a number of selected runners.
We’ll look at code to populate and to report on the nested_training_logs table. And then we’ll see how table
functions can be written to “view” nested_training_logs as reln_training_logs and to “view”
reln_training_logs as nested_training_logs. By writing each with a ref cursor input parameter we can
conveniently test that the result of two successive transformations is identical to the starting data. See section A.5 in
the Appendix for the complete working code.

5.3.1. POPULATING THE NESTED TABLE – SEE A.5.2


The following rely on datastructures for which the code is shown in section A.5.1. The code has this general shape…

v_training_log := training_log_t ( weeks_running_t ( run_t ( 0, 0 ) ) );


v_training_log(1) := weeks_running_t (
run_t ( 1, 6 ),
run_t ( 7, 7 ),
...
run_t ( 18, 10 )
);
v_training_log.extend;
...
insert into nested_training_logs ( first_name, training_log ) values
( 'fred', v_training_log );


5.3.2. REPORTING ON THE NESTED TABLE – SEE A.5.3


The code has this general shape…

for v_row in
( select first_name, training_log from nested_training_logs )
loop
Show ( v_row.first_name );
for week in v_row.training_log.first..v_row.training_log.last
loop
Show ( week );
for run in v_row.training_log(week).first..v_row.training_log(week).last
loop
Show ( run );
Show ( v_row.training_log(week)(run).distance );
Show ( v_row.training_log(week)(run).pace );
end loop;
end loop;
end loop;

…where we see that appending each successive subscript to the variable representing the multilevel collection instance
drills down each successive layer in its structure.
If appropriate, this could be re-written using bulk collect into a local multilevel collection (with one extra level) thus…

declare
type first_name_tab_t is table of nested_training_logs.first_name%type
index by binary_integer;
v_first_name_tab first_name_tab_t;

type training_logs_tab_t is table of training_log_t


index by binary_integer;
v_training_logs_tab training_logs_tab_t;
begin
select first_name, training_log
bulk collect into v_first_name_tab, v_training_logs_tab
from nested_training_logs;

for j in v_first_name_tab.first..v_first_name_tab.last
loop
Show ( v_first_name_tab(j) );
for week in v_training_logs_tab(j).first..
v_training_logs_tab(j).last
loop
Show ( week );
for run in v_training_logs_tab(j)(week).first..
v_training_logs_tab(j)(week).last
loop
Show ( run );
Show ( v_training_logs_tab(j)(week)(run).distance );
Show ( v_training_logs_tab(j)(week)(run).pace );
end loop;
end loop;
end loop;
end;


5.3.3. DERIVING A TABLE FUNCTION FROM THE REPORTING LOGIC TO OUTPUT A RELATIONAL “VIEW” – SEE A.5.4

At the heart of the innermost loop above we have the required information to populate a record corresponding to one
row of the relational representation. (Writing a program to generate a report is a convenient way to test the logic
before converting the code to a table function.) The conversion is relatively routine: (1) surround the block with a
create function statement and declare a ref cursor input parameter; (2) define types for a record, and table of
such records, according to the requirement and add a return declaration for the record type; (3) add the
pipelined keyword; (4) declare local variables v_in_row and v_out_row as records of the appropriate types; (5)
(if it’s not already coded this way) reformulate the cursor loop to use fetch p_in_cursor into v_in_row with
the corresponding exit condition (don’t open it – this is done by the system when the table function is invoked); (6)
replace the Show invocations with assignments for the elements of the target record; (7) deliver the record as the
actual parameter to pipe row(); (8) add close p_in_cursor and return as the last executable statements.
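
Applied to the reporting block above, the conversion yields a function with this shape (a condensed form of the full code in A.5.4, which additionally declares parallel_enable; the numbered comments mark the steps just listed)…

create or replace function Reln_Training_Logs_Fn
  ( p_in_cursor in sys_refcursor )                       -- (1) ref cursor input parameter
return My_Types.reln_training_logs_tab_t                 -- (2) record and table-of-records types, return declaration
pipelined                                                -- (3)
is
  v_in_row  My_Types.nested_training_log_row_t;          -- (4)
  v_out_row My_Types.reln_training_log_row_t;
begin
  loop
    fetch p_in_cursor into v_in_row;                     -- (5) don't open it: the system does
    exit when p_in_cursor%notfound;
    for week in v_in_row.training_log.first..v_in_row.training_log.last
    loop
      for run in v_in_row.training_log(week).first..v_in_row.training_log(week).last
      loop
        v_out_row.first_name := v_in_row.first_name;     -- (6) assignments replace the Show calls
        v_out_row.week := week;
        v_out_row.run := run;
        v_out_row.distance := v_in_row.training_log(week)(run).distance;
        v_out_row.pace := v_in_row.training_log(week)(run).pace;
        pipe row ( v_out_row );                          -- (7)
      end loop;
    end loop;
  end loop;
  close p_in_cursor;                                     -- (8)
  return;
end Reln_Training_Logs_Fn;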

We can now conveniently perform ad hoc queries, for example…

select first_name, avg ( distance ) d, avg ( pace ) p


from
(
select first_name, distance, pace
from table
(
Reln_Training_Logs_Fn
(
cursor
(
select first_name, training_log from nested_training_logs
)
)
)
)
group by first_name;

5.3.4. WRITING A TABLE FUNCTION TO OUTPUT A NESTED “VIEW” FROM THE RELATIONAL REPRESENTATION – SEE A.5.5

Suppose it had been decided to implement the persistent storage as a relational representation. It is still possible to
view it as if it were the nested table representation by using a table function. The simplest design would use nested
PL/SQL cursor loops thus: for each distinct runner… ; for each distinct week for that runner… ; for each run for that
week for that runner add the object to represent the run to the column in the collection for that week; when done with
that week add the column for the whole week to the “plane” of the collection for that runner’s log; when done with
that runner, pipe the record representing the name and the training log collection.
To make the table function more general, it needs to have a ref cursor input parameter to be invoked with a
SELECT having two levels of nested CURSOR subqueries corresponding to the above nested PL/SQL loops. An
alternative is to design the function to accept a “flat” SELECT. The latter approach requires slightly more elaborate
coding of the function logic (to explicitly detect the next week and the next runner) but makes the resulting function
substantially more user-friendly, and so it was selected for implementation in this illustration. To make the example
richer w.r.t. understanding table functions, parallelization declarations are added to ensure that all the rows for a
particular runner go consecutively to the same slave, and that for that runner the input rows are ordered by week and
then run. (The algorithm depends on these assumptions.)
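
The declarations in question, as they appear in the header of Nested_Training_Logs_Fn in A.5.5, are…

order p_reln_training_logs by ( first_name, week, run )
parallel_enable ( partition p_reln_training_logs by range ( first_name ) )
pipelined

The parallel_enable clause routes all the rows for a given range of first_name values to one slave, and the order clause sorts each slave's rows by runner, week and then run before the function sees them.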

5.3.5. END-TO-END TEST – SEE A.5.6


We started with data persistently stored in the nested representation, n. In this test, we populate a table for the
relational representation, r, by…

insert into r ( select * from < N_to_R_Tab_Fn selecting from n > )

Then we populate a second table for the nested representation, n2, by…

insert into n2 ( select * from < R_to_N_Tab_Fn selecting from r > )

…and then populate a second table for the relational representation, r2, by running N_to_R_Tab_Fn on n2,
leveraging the fact that N_to_R_Tab_Fn has a ref cursor input parameter. Then we do…

select * from r2 minus select * from r;


select * from r minus select * from r2;

…and confirm that the two tables r and r2 are identical.

5.4. BUSINESS BENEFITS


Storing data as collection instances in a column of a database table is a pre-optimization to favor certain access paths
(typically accessing all the elements of the collection for each selected row). PL/SQL is needed to populate and query
such collection instances. Modeling data as a collection in a PL/SQL program is essential for the implementation of
certain algorithms (see for example the perfect triangles algorithm in code sample A.1.2 in the Appendix). A collection
can be used as the target of a bulk bind to improve the performance of data transfer between the database and the
PL/SQL processing.
• Previously only one-dimensional phenomena could leverage the above benefits. Multilevel collections now
offer them for an arbitrarily enlarged set of real-world problems.

6. ENHANCEMENTS TO THE UTL_HTTP PACKAGE


6.1. BACKGROUND
The B2B component of eBusiness depends on automatic communication between business sites across the public
internet. The HTTP transport mechanism is used to send the request and to receive the reply. Though partners in a
particular B2B relationship could define standards for their protocols from scratch, the de facto standard is emerging to
use XML for both request and reply. Of course we can expect increasing standardization in future, extending to cover
the specifics of the XML encoding.
Oracle has technology to allow both the sender and the receiver straightforwardly to implement their services backed
by an Oracle9i database, and using only PL/SQL on top of productized APIs. The simplest way to code the receiver
is to use mod_plsql, either directly via the HTTP listener component of the Oracle9i database or via Oracle9iAS and
to write a PL/SQL database procedure which is exposed as the URL representing the request. The XML document
expressing the request is decoded, the database is accessed to supply the reply information and is updated
appropriately, and the reply is encoded and sent using Htp.Print or similar. This end of the dialogue is beyond the
scope of this paper.
The request is typically sent (or more likely queued and then sent later) in the body of a database trigger which fires on
an event like a stock level falling below the defined threshold for reordering. The XML document expressing the
request is encoded by accessing current database values and sent, typically using the “POST” method to ensure that an
arbitrarily large XML request can be sent piecewise. Authentication information (e.g. username and password) is likely
to be required as part of the request. And possibly the request header will need to be explicitly set to reflect an agreed
protocol. Then the fetch of the response is begun, its status code is checked for errors, and its header is
checked for protocol compliance. Then the arbitrarily large XML document expressing the response itself is fetched
piecewise, decoded, and the information is used to update the database. A robust implementation is likely to have a
component which automatically sends a generated email to a system administrator in the event of an error. Oracle has
features for encoding and decoding XML, and for sending email from the database, but these are beyond the scope of
this paper.
Depending on the design of the workflow, state may need to be represented. For example, a customer might request a
price and delivery date for a given quantity of items from several vendors. Each vendor would reply with price and
delivery date and with an “offer good to” date. When the customer site sends a request to the selected vendor to place
a definite order, it will need to refer to the specific offer. If such a scheme is used within a single organization, for
example to communicate between databases at local offices in different countries, then the communication protocol
can be designed from scratch, and most likely an offer reference number will be exchanged as part of the XML
encoding. However, if the partners in the B2B relationship are completely independent, and especially if the
relationship is casual, then the requestor will have to follow whatever protocol the receiver has defined. It might be
that the receiver has implemented the state which represents an ongoing dialogue using cookies. In this case the
sender will need to handle these programmatically.

6.2. ORACLE9i ENHANCEMENTS


6.2.1. GENERAL
The Utl_Http package pre-Oracle9i allowed a basic implementation of the sending site. It allowed an arbitrarily large
response to be handled piecewise in a PL/SQL VARCHAR2. But it supported just the “GET” method, i.e. did not
support sending arbitrarily large messages in the body of the request. And it did not support authentication, setting the
header of the request, inspecting the status code and header of the response, or dealing with cookies. Oracle9i adds
support for all these (including optionally fetching the response “as is” into a PL/SQL RAW), and beyond that
provides full support for the semantics that can be expressed via HTTP. For example, persistent HTTP connections
are now supported. Use of these gives dramatic speed and scalability improvement for applications that repeatedly and
frequently make HTTP requests to the same site. And users now have full control over the encoding of character data,
see below.
HTTP relies on an underlying transport layer. Thus the Utl_Http package (written in PL/SQL) is implemented on
top of the Utl_Tcp package. (The Utl_Smtp package for sending email from the database is the same.) Pre-
Oracle9i, Utl_Tcp was implemented in Java. At Oracle9i it has been reimplemented natively, i.e. in C directly on top
of the socket layer to improve its performance.
The code sample in A.6 in the Appendix shows how to model the message sender at SQL*Plus, and can be used to
inspect the return status and content of an arbitrary password protected URL.
A code sample implementing both the sending site and the receiving site in a complete self-contained B2B simulation
is available for download under Sample Code link on the PL/SQL homepage on OTN, here…

http://otn.oracle.com/tech/pl_sql/

6.2.2. ENCODING OF CHARACTER DATA


In the classical client/server architecture, the database and the client may use different encoding schemes to represent
character data. For example in a Japanese application, the database might use (a variety of) EUC character set and the
client might use (a variety of) SJIS character set. Thus character encoding conversion is required. The solution is well
known and long established: Oracle Net transparently handles the conversion (as specified by the database character set
and the NLS_LANG client environment variable). A corresponding issue exists for Utl_Http. When a request is
sent it might need to be encoded differently than the database character set (because the sender knows that the target
URL requires this). And when a response is received, it may again be encoded differently from the database character
set (because that’s the non-negotiable behavior of the target URL).
There are two areas of concern when sending a request: the URL and the request body. When sending by the “GET”
method, all request parameterization is via the URL itself, typically after the ? delimiter. Search terms for example are
normally handled this way. HTTP defines no convention for specifying different character sets for the URL and
expects that everything is 7-bit ASCII. Other character encoding schemes should be represented as the hex codes of
their bytes using the %nn notation. (The sender of the request must know from documentation what character set the
URL expects to decode from the hex representation.) Oracle9i introduces the Utl_Url package which has functions
to convert from the database character set to a hex coded representation of a specified character set, and vice versa. In
addition, these functions handle the conversion of the reserved symbols: percent (%), semi-colon (;) slash (/),
question mark (?), colon (:), at sign (@), ampersand (&), equals sign (=), plus sign (+), dollar sign ($), and comma (,).
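
For example, a sketch of escaping and then unescaping a parameterized URL (the URL and the UTF8 character set name are assumptions for illustration)…

declare
v_escaped varchar2(4000);
begin
/* escape illegal characters (here the space) into %nn form; the third
argument names the character set the target URL expects to decode */
v_escaped := Utl_Url.Escape ( 'http://example.com/find?q=red shoes', false, 'UTF8' );
Dbms_Output.Put_Line ( v_escaped );
/* and convert back again */
Dbms_Output.Put_Line ( Utl_Url.Unescape ( v_escaped, 'UTF8' ) );
end;
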
When sending by the “PUT” method, the character set of the request body should be set via the charset attribute of
the Content-Type in the request header, using the new Utl_Http.Set_Header procedure. If this is done, it gives
Oracle sufficient information to transform appropriately when sending a character request body (by using
Utl_Http.Write_Text). If the charset attribute is not set in the request header, then no character set
conversion takes place unless the user has catered for it via the overloaded procedure
Utl_Http.Set_Body_Charset. The variant Set_Body_Charset(charset varchar2) – a.k.a. the global
variant - allows the user to set a fallback character set, to be assumed, if no other information is provided, for both
requests and responses for the session. The variant Set_Body_Charset(r Utl_Http.Req, charset
varchar2) – a.k.a the request variant, allows the user to insist on a character set for the body for this request. (A
record of PL/SQL type Utl_Http.Req is returned when the HTTP request is begun with
Utl_Http.Begin_Request.) The choice made via the request variant will not only override that made via the
global variant but will also override that made via the charset attribute of the request header. For this reason, the
recommended way to specify the character set conversion for the request body is via the charset attribute of the
header. Only if the user has a special reason for leaving this unspecified in the request header would he use the request
variant of Set_Body_Charset.
There is just one area of concern when receiving the response: the response body. If the implementation of the URL
is well-mannered, then the character set of the response body will be specified correctly in the charset attribute of
the Content-Type in the response header, accessible to the user via the procedure Utl_Http.Get_Header. Oracle
will implicitly perform the appropriate conversion in connection with calling Utl_Http.Read_Text. However, this
is often not set. In this case the user can use the global variant of Set_Body_Charset to determine the character set
conversion. However, the charset attribute of the response header is sometimes set wrong. (This is likely when pages
in different character sets are served up as files from the filesystem seen by the webserver, since the Content-Type
header information will often be set globally for the server with no mechanism to make it file specific.) For this reason
a third overloaded variant Set_Body_Charset(r Utl_Http.Resp,charset varchar2) is provided – a.k.a.
the response variant. (A record of PL/SQL type Utl_Http.Resp is returned when the HTTP response is got with
Utl_Http.Get_Response.) The choice made via the response variant will override that made via the global variant
and that expressed via the charset attribute of the response header.
Note: from Oracle8i v8.1.6 and pre-Oracle9i, Oracle detected the charset of the response body (if this was specified)
and used the information to do the character set conversion. And if the charset attribute of the response body was
not specified then no conversion took place and no overriding or fallback mechanism was provided. Under special
circumstances (e.g. fetching an SJIS Japanese response where the charset attribute is not specified into an EUC
database) problems arose pre-Oracle9i.
Thus the user now has full control over all character set conversion issues. In an extreme case, where the response
body is Content-Type text/html and where the HTML <meta> tag is used to specify the character set, the user can
retrieve the response body into a PL/SQL RAW with Utl_Http.Read_Raw and then write custom code to parse
the HTML and to convert to the database character set in a PL/SQL VARCHAR2 once the response character set is
discovered.
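
Putting these pieces together, a minimal sketch of the recommended usage (the URL, request body and character set names are assumptions for illustration; A.6 shows the full request and response handling)…

declare
req Utl_Http.Req;
resp Utl_Http.Resp;
v_msg varchar2(2000);
begin
/* global fallback: assume this charset for any body whose charset is unspecified */
Utl_Http.Set_Body_Charset ( 'ISO-8859-1' );

req := Utl_Http.Begin_Request ( url => 'http://b2b.example.com/order', method => 'POST' );

/* recommended: declare the request body charset via the Content-Type header */
Utl_Http.Set_Header ( r => req, name => 'Content-Type', value => 'text/xml; charset=UTF-8' );
Utl_Http.Write_Text ( req, '<order><item>42</item></order>' );

resp := Utl_Http.Get_Response ( r => req );
/* if this particular URL is known to mislabel its responses, insist on a charset */
Utl_Http.Set_Body_Charset ( resp, 'UTF-8' );
Utl_Http.Read_Text ( r => resp, data => v_msg ); /* first chunk only; see A.6 for the read loop */
Utl_Http.End_Response ( r => resp );
end;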

6.3. BUSINESS BENEFITS


• Substantially increased functionality allowing implementation of fully functional B2B applications

7. TRANSPARENT ENHANCEMENTS
Record construction and copying is now faster in Oracle9i. Benchmarks designed to stress this feature improved by up
to 5 times. Less focused benchmarks improved by 5 to 10%.


There is considerable improvement in the execution of PL/SQL programs that reference subprograms that are part of
another package. Benchmarks specifically designed to test this feature showed a speedup of over 50%. In more generic
benchmarks an improvement of 5% has been seen.
There is a 60%, or more, reduction in overhead of calling PL/SQL procedures from SQL statements. Thus the
execution of SQL statements that reference subprograms is faster.
The SQL parser replaces PL/SQL’s compile-time analysis of a static SQL statement with analysis using a SQL
component shared with the RDBMS. Therefore duplication of SQL analysis is reduced. PL/SQL is allowed to pick up
new SQL features as they are implemented in the RDBMS. Errors due to differences in SQL analysis between SQL
and PL/SQL are also eliminated.
The Utl_Tcp package has been reimplemented (moving from java to native C) to deliver increased performance.
Though not strictly speaking transparent changes, we list here for convenience restrictions that have been removed:
it’s now possible to assign (for example) a variable of type VARCHAR2 to one of type NVARCHAR2 and enjoy
implicit conversion; it’s now possible to assign a VARCHAR2 to a CLOB and (provided the CLOB isn’t too big) vice
versa, and to use substr and instr with CLOB variables.
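
For example, the following block (a minimal sketch) now compiles and runs…

declare
v varchar2(100) := 'Oracle9i PL/SQL';
nv nvarchar2(100);
c clob;
begin
nv := v;                                   -- implicit VARCHAR2 to NVARCHAR2 conversion
c := v;                                    -- VARCHAR2 to CLOB assignment
v := c;                                    -- CLOB to VARCHAR2 (the CLOB must fit)
Dbms_Output.Put_Line ( Substr ( c, 1, 8 ) );      -- substr now accepts a CLOB
Dbms_Output.Put_Line ( Instr ( c, 'PL/SQL' ) );   -- so does instr
end;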

BUSINESS BENEFITS
• Increased speed and scalability
• Improved usability for the developer

8. NEW SQL FEATURES


SQL has some new language features and new functions in Oracle9i, which are reflected in PL/SQL as expected: the
MERGE keyword; the datatypes TIMESTAMP (WITH TIME ZONE and WITH LOCAL TIME ZONE variants) and
INTERVAL (YEAR TO MONTH and DAY TO SECOND variants); the functions nullif and coalesce.
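
A minimal sketch of the new datatypes and functions appearing in PL/SQL (MERGE, which needs suitable tables, is omitted)…

declare
t_start timestamp with time zone := systimestamp;
elapsed interval day to second;
v varchar2(10);
begin
elapsed := systimestamp - t_start;   -- timestamp arithmetic yields an INTERVAL DAY TO SECOND
select coalesce ( nullif ( 'a', 'a' ), 'default' )
into v
from dual;                            -- nullif returns null here, so coalesce falls through
Dbms_Output.Put_Line ( v );
end;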

BUSINESS BENEFITS
• Seamless access to SQL features

9. OBJECT ORIENTED FEATURES


Oracle8 introduced support for objects in the server along with relational tables to enable the same data-model
across all tiers. In Oracle9i, Oracle’s Object-Relational vision achieves functional and operational completeness
with the introduction of features such as inheritance, multi-level collections, type evolution and so on. Inheritance
and multi-level collections bring the server’s modeling capabilities closer to that provided by Java, C++ or XML,
making it easy to model business objects in the database and achieve uniformity of data models across
tiers.
PL/SQL now supports the notion of substitutable variables: a variable intended to hold a supertype (or a REF to
one) can be assigned a subtype (or a REF to one). It is also possible to dispatch overriding methods
polymorphically. A method invoked on an object is dispatched (‘virtually’) to the specific implementation based on
the runtime type.

declare
person_var person_type; /* can denote an object of this type
or of any of its subtypes */
begin
person_var := person_type(...);
person_var.some_method(); /* invokes some_method() of person */


person_var := employee_type(...); /* employee_type inherits from


person type and overrides
some_method() */
person_var.some_method(); /* invokes some_method() of employee */
end;

BUSINESS BENEFITS
• Functional and operational object-oriented completeness

Note: The full treatment of Oracle’s object oriented functionality is beyond the scope of this paper.
10. NEW OR ENHANCED SUPPLIED PACKAGES
New packages: Dbms_Xmlgen (creates an XML document from any SQL query, returning the result as a CLOB);
Dbms_Metadata (provides interfaces for extracting complete definitions of database objects either as XML or as
SQL DDL); dbms_aqelm, dbms_encode, dbms_fga, dbms_flashback, dbms_ldap,
dbms_libcache, dbms_logmnr_cdc_publish, dbms_logmnr_cdc_subscribe, dbms_odci,
dbms_outln_edit, dbms_redefinition, dbms_transform, dbms_url, dbms_wm,
dbms_xmlquery, dbms_xmlsave, utl_encode.
New Types: XMLtype, UriType, DBUriType, and HttpUriType, dbms_types, anydata_type,
anydataset_type, anytype_type.
Enhanced packages: Utl_Raw (enhanced with these new APIs: Cast_To_Number, Cast_From_Number,
Cast_To_Binary_Integer, Cast_From_Binary_Integer); Utl_File; Utl_Http (as discussed at length
above).
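
As a small illustration of the first of these, Dbms_Xmlgen turns the result of an arbitrary query into an XML document in a CLOB (a minimal sketch; the query is arbitrary)…

declare
v_xml clob;
begin
v_xml := Dbms_Xmlgen.GetXml ( 'select department_id, department_name from departments' );
-- substr on a CLOB is allowed at Oracle9i (see section 7)
Dbms_Output.Put_Line ( Substr ( v_xml, 1, 200 ) );
end;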

BUSINESS BENEFITS
• Increased out-of-the-box functionality

APPENDIX : CODE SAMPLES


Note: several of the following code samples depend on the employees and departments tables. These are in the
hr sample schema. Scripts to create and populate this are provided with Oracle9i, and it is included in the standard
pre-built database. The samples are complete. They can be copied and pasted as is into SQL*Plus. (Don’t forget that
SQL*Plus requires that anonymous blocks and CREATE statements for PL/SQL library units and for types must be
terminated with a “/”.)
The main sections below are numbered in correspondence to the main sections above which they illustrate, and so
there are gaps in the numbering.

A.1. SAMPLES TO TEST PERFORMANCE IMPROVEMENT FROM PL/SQL NATIVE COMPILATION


A.1.1. THIN WRAPPER FOR EXECUTING SQL STATEMENTS
Constructs like the following (especially those that invoke Htp.Print and similar) are commonly used for presenting
results in a way that can be difficult using only SQL.

begin
for department in ( select department_id d, department_name from departments
order by department_name )
loop
Dbms_Output.Put_Line ( Chr(10) || department.department_name );
for employee in ( select last_name from employees
where department_id = department.d
order by last_name )


loop
Dbms_Output.Put_Line ( '- ' || employee.last_name );
end loop;
end loop;
end;

This example, when run against prepared data to give 12 million iterations (i.e. approximately the same number as
A.1.2 below), runs about 3% faster when compiled in native mode. Note: the example could be re-written to run more
efficiently by using a single SELECT with a CURSOR subquery as described in the section on cursor expressions.
There are many examples of this formulation throughout this paper.
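
Such a rewrite might look like this (a sketch only; the timing above refers to the original formulation)…

declare
cursor depts is
select department_name,
cursor ( select last_name
from employees e
where e.department_id = d.department_id
order by last_name )
from departments d
order by department_name;
v_name departments.department_name%type;
v_emps sys_refcursor;
v_last employees.last_name%type;
begin
open depts;
loop
fetch depts into v_name, v_emps;
exit when depts%notfound;
Dbms_Output.Put_Line ( Chr(10) || v_name );
loop
fetch v_emps into v_last;
exit when v_emps%notfound;
Dbms_Output.Put_Line ( '- ' || v_last );
end loop;
end loop;
close depts;
end;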

A.1.2. COMPUTATIONALLY INTENSIVE ALGORITHM WITH NO DATABASE ACCESS


Consider the task of finding all right-angled triangles with all side lengths integer (a.k.a. perfect triangles). We must
count only unique triangles, i.e. those whose sides are not each the same integral multiple of the sides of a perfect
triangle already found. The following implements an exhaustive search among candidate triangles with all possible
combinations of lengths of the two shorter sides, each in the range 1 to a specified maximum. Each candidate is
coarsely filtered by testing if the square root of the sum of the squares of the two short sides is within 0.01 of an
integer. Triangles which pass this test are tested exactly by applying Pythagoras’s theorem using integer arithmetic.
Candidate perfect triangles are tested against the list of multiples of perfect triangles found so far. Each new unique
perfect triangle is stored in a PL/SQL table, and its multiples (up to the maximum length) are stored in a separate
PL/SQL table to facilitate uniqueness testing.
The implementation thus involves a doubly nested loop with these steps at its heart: several arithmetic operations,
casts and comparisons; calls to procedures implementing comparisons driven by iteration through a PL/SQL table
(with yet more arithmetic operations); and extension of PL/SQL tables where appropriate.
The elapsed time was measured for p_max=5000 (i.e. 12.5 million repetitions of the heart of the loop) using
interpreted and natively compiled versions of the procedure. The times were 548 sec and 366 sec respectively (on a
Sun Ultra60 with no load apart from the test). Thus the natively compiled version was about 33% faster.

create or replace procedure Perfect_Triangles ( p_max in integer ) is


t1 integer; t2 integer;
long integer; short integer; hyp number; ihyp integer;

type side_r is record ( short integer, long integer );


type sides_t is table of side_r index by binary_integer;
unique_sides sides_t; m integer:=0 /* current count of elements in unique_sides */;
dup_sides sides_t; n integer:=0 /* current count of elements in dup_sides */;

procedure Store_Dup_Sides ( p_long in integer, p_short in integer ) is


mult integer:=2; long_mult integer:=p_long*2; short_mult integer:=p_short*2;
begin
while ( long_mult < p_max ) or ( short_mult < p_max )
loop
n := n+1;
dup_sides(n).long := long_mult; dup_sides(n).short := short_mult;
mult := mult+1; long_mult := p_long*mult; short_mult := p_short*mult;
end loop;
end Store_Dup_Sides;

function Sides_Are_Unique ( p_long in integer, p_short in integer )


return boolean is
begin
for j in 1..n
loop
if ( p_long = dup_sides(j).long ) and ( p_short = dup_sides(j).short )


then return false; end if;


end loop;
return true;
end Sides_Are_Unique;

begin /* Perfect_Triangles */
t1 := Dbms_Utility.Get_Time;
for long in 1..p_max
loop
for short in 1..long
loop
hyp := Sqrt ( long*long + short*short ); ihyp := Floor(hyp);
if hyp-ihyp < 0.01
then
if ( ihyp*ihyp = long*long + short*short )
then
if Sides_Are_Unique ( long, short )
then
m := m+1;
unique_sides(m).long := long; unique_sides(m).short := short;
Store_Dup_Sides ( long, short );
end if;
end if;
end if;
end loop;
end loop;
t2 := Dbms_Utility.Get_Time;

Dbms_Output.Put_Line (
chr(10) || To_Char( ((t2-t1)/100), '9999.9' ) || ' sec' );
end Perfect_Triangles;

A.4. SAMPLES TO ILLUSTRATE TABLE FUNCTIONS AND CURSOR EXPRESSIONS


A.4.1. “YOUNG MANAGERS” SCENARIO
Find those managers in the employees table, the majority of whose direct reports were hired before the manager.
A.4.1.1. CLASSICAL PROCEDURAL APPROACH
declare
cursor managers is
select
employee_id, hire_date,
cursor (
select hire_date
from employees reports
where reports.manager_id = managers.employee_id
)
from employees managers;

manager_employee_id employees.employee_id%type;
manager_hire_date employees.hire_date%type;
reports sys_refcursor;
type report_hire_dates_t is table of employees.hire_date%type
index by binary_integer;
report_hire_dates report_hire_dates_t;
before integer; after integer;
begin
open managers;
loop


before:=0; after:=0;
fetch managers into manager_employee_id, manager_hire_date, reports;
exit when managers%notfound;
fetch reports bulk collect into report_hire_dates;
if report_hire_dates.count > 0
then
for j in report_hire_dates.first..report_hire_dates.last
loop
case report_hire_dates(j) < manager_hire_date
when true then before:=before+1;
else after:=after+1;
end case;
end loop;
end if;
if before > after then
insert into young_managers values ( manager_employee_id ); end if;
end loop;
close managers;
end;

A.4.1.2. PURE SQL APPROACH


create or replace view report_hire_dates as
select managers.employee_id manager_employee_id,
managers.hire_date manager_hire_date,
reports.hire_date report_hire_date
from employees managers,
employees reports
where managers.employee_id = reports.manager_id;

create or replace view young_managers as


select manager_employee_id from (
select manager_employee_id,
sum (
Decode (
sign ( report_hire_date - manager_hire_date ),
/* when */ -1, /* then */ 1,
/* when */ 0, /* then */ 1,
/* else */ -1
)
) s
from report_hire_dates
group by manager_employee_id )
where s > 0;

A.4.1.3. USING CLASSICAL FUNCTION IN WHERE CLAUSE


create or replace function Most_Reports_Before_Manager (
p_manager_id in number,
p_manager_hire_date in date )
return number
is
report_hire_date date;
before integer:=0;
after integer:=0;
begin
for report in (
select hire_date, employee_id from employees where manager_id = p_manager_id )
loop


case report.hire_date < p_manager_hire_date


when true then before:=before+1;
else after:=after+1;
end case;
end loop;

case ( before > after ) and ( before+after > 0 )


when true then return 1;
else return 0;
end case;
end Most_Reports_Before_Manager;

create or replace view young_managers as


select employee_id manager_employee_id
from employees managers
where Most_Reports_Before_Manager ( employee_id, hire_date ) = 1;

A.4.1.4. USING FUNCTION WITH REF CURSOR PARAMETER IN WHERE CLAUSE


create or replace function Most_Reports_Before_Manager (
report_hire_dates_cur in sys_refcursor,
manager_hire_date in date )
return number
is
type report_hire_date_t is table of employees.hire_date%type
index by binary_integer;
report_hire_dates report_hire_date_t;
before integer:=0; after integer:=0;
begin
fetch report_hire_dates_cur bulk collect into report_hire_dates;
if report_hire_dates.count > 0
then
for j in report_hire_dates.first..report_hire_dates.last
loop
case report_hire_dates(j) < manager_hire_date
when true then before:=before+1;
else after:=after+1;
end case;
end loop;
end if;
case before > after
when true then return 1;
else return 0;
end case;
end Most_Reports_Before_Manager;

create or replace view young_managers as


select managers.employee_id manager_employee_id
from employees managers
where Most_Reports_Before_Manager
(
cursor ( select reports.hire_date from employees reports
where reports.manager_id = managers.employee_id
),
managers.hire_date
) = 1;

A.4.1.5. USING A TABLE FUNCTION


create or replace package My_Types is


type employee_ids_tab is table of employees.employee_id%type;


end My_Types;

create or replace function Young_Managers_Fn


return My_Types.employee_ids_tab
pipelined
is
cursor managers is
select
employee_id, hire_date,
cursor (
select hire_date
from employees reports
where reports.manager_id = managers.employee_id
)
from employees managers;

manager_employee_id employees.employee_id%type;
manager_hire_date employees.hire_date%type;
reports sys_refcursor;
type report_hire_date_t is table of employees.hire_date%type
index by binary_integer;
report_hire_dates report_hire_date_t;
before integer; after integer;
begin
open managers;
loop
before:=0; after:=0;
fetch managers into manager_employee_id, manager_hire_date, reports;
exit when managers%notfound;
fetch reports bulk collect into report_hire_dates;
if report_hire_dates.count > 0
then
for j in report_hire_dates.first..report_hire_dates.last
loop
case report_hire_dates(j) < manager_hire_date
when true then before:=before+1;
else after:=after+1;
end case;
end loop;
end if;
if before > after then
pipe row ( manager_employee_id ); end if;
end loop;
close managers;
return;
end Young_Managers_Fn;

create or replace view young_managers as


select column_value manager_employee_id from table ( Young_Managers_Fn() );

A.4.1.6. USING A TABLE FUNCTION WITH A REF CURSOR INPUT PARAMETER


create or replace function Young_Managers_Fn ( managers in sys_refcursor )
return My_Types.employee_ids_tab
pipelined
is
manager_employee_id employees.employee_id%type;
manager_hire_date employees.hire_date%type;
reports sys_refcursor;


type report_hire_date_t is table of employees.hire_date%type


index by binary_integer;
report_hire_dates report_hire_date_t;
before integer; after integer;
begin
loop
before:=0; after:=0;
fetch managers into manager_employee_id, manager_hire_date, reports;
exit when managers%notfound;
fetch reports bulk collect into report_hire_dates;
if report_hire_dates.count > 0
then
for j in report_hire_dates.first..report_hire_dates.last
loop
case report_hire_dates(j) < manager_hire_date
when true then before:=before+1;
else after:=after+1;
end case;
end loop;
end if;
if before > after then
pipe row ( manager_employee_id ); end if;
end loop;
close managers;
return;
end Young_Managers_Fn;

select column_value manager_employee_id from table


(
Young_Managers_Fn
(
cursor
(
select
employee_id, hire_date,
cursor (
select hire_date
from employees reports
where reports.manager_id = managers.employee_id
)
from employees managers
)
)
);

Note: An attempt to create a view as the above select statement currently fails with “ORA-22902: CURSOR expression
not allowed” where the exception is raised because the SELECT statement which is the argument of the CURSOR
formal parameter to the table function itself has a cursor expression (a.k.a. cursor subquery). A view can be created
when the SELECT statement does not have a cursor subquery (see the Mappings_Fn example above).

A.4.2. PARALLELIZING TABLE FUNCTION EXECUTION


A.4.2.1. ALGORITHM IS INDEPENDENT OF THE ORDERING OF THE SOURCE ROWS
create or replace package My_Types is
type input_row is record ( n number );
type cur_t is ref cursor return input_row;

type xform_row is record ( n number, typ char(1) );


type xforms_tab is table of xform_row;


end My_Types;

create table t ( n number );


begin
for j in 1..1000
loop insert into t ( n ) values ( j ); end loop;
commit;
end;

create or replace function Rowwise_Xform_Fn ( p_input_rows in sys_refcursor )


return My_Types.xforms_tab
pipelined
parallel_enable ( partition p_input_rows by any )
is
v_in_row My_Types.input_row;
v_out_row My_Types.xform_row;
begin
loop
fetch p_input_rows into v_in_row;
exit when p_input_rows%notfound;
v_out_row.n := v_in_row.n * 2; v_out_row.typ := 'a';
pipe row ( v_out_row );
v_out_row.n := v_in_row.n * 3; v_out_row.typ := 'b';
pipe row ( v_out_row );
end loop;
close p_input_rows;
return;
end Rowwise_Xform_Fn;

select * from table ( Rowwise_Xform_Fn ( cursor ( select n from t ) ) )


where rownum < 11;

A.4.2.2. ALGORITHM REQUIRES ONLY THAT THE SOURCE ROWS ARE CLUSTERED
Note: in order to avoid having to make the algorithm distractingly complex, this DELETE should be issued...

delete from employees where department_id is null;

…before continuing thus…

create or replace package My_Types is


type dept_sal_row is record ( sal number(8,2), dept number(4) );
type cur_t is ref cursor return dept_sal_row;

type dept_sals_tab is table of dept_sal_row;


end My_Types;

create or replace function Aggregate_Xform ( p_input_rows in My_Types.cur_t )


return My_Types.dept_sals_tab
pipelined
--parallel_enable
-- ( partition p_input_rows by [ hash / range] (dept) )
--[ cluster / order ] p_input_rows by (dept)
is
g_in_row My_Types.dept_sal_row;
g_out_row My_Types.dept_sal_row;
g_first_time boolean := true;
g_last_dept number := null;


g_got_a_row boolean;
g_new_dept boolean;
g_current_dept employees.department_id%type;
g_prev_dept employees.department_id%type;
v_total_sal number;

procedure Get_Next_Row is begin


fetch p_input_rows into g_in_row;
g_got_a_row := not p_input_rows%notfound;
if g_got_a_row
then
case g_first_time
when true then
g_first_time := false;
g_new_dept := false;
else
g_new_dept := g_prev_dept <> g_in_row.dept;
end case;
g_prev_dept := g_in_row.dept;
end if;
return;
end Get_Next_Row;

function Got_Next_Dept return boolean is begin


g_current_dept := g_in_row.dept;
g_new_dept := false;
return g_got_a_row;
end Got_Next_Dept;

function Got_Next_Row_In_Dept return boolean is begin


return ( not g_new_dept ) and g_got_a_row;
end Got_Next_Row_In_Dept;

begin
Get_Next_Row();
while Got_Next_Dept()
loop
v_total_sal := 0;
while Got_Next_Row_In_Dept()
loop
v_total_sal := v_total_sal + g_in_row.sal;
Get_Next_Row();
end loop;
g_out_row.sal := v_total_sal; g_out_row.dept := g_current_dept;
pipe row ( g_out_row );
end loop;
close p_input_rows;
return;
end Aggregate_Xform;

A.4.3. THE LOOKUPS_FN AND MAPPINGS_FN EXAMPLE RE-WRITTEN TO RETURN SCHEMA-LEVEL TYPES

Since the query syntax for an object table is rather verbose, we recap it here using a table (the recap assumes the
lookup_row type created in the example proper below).

create table t of lookup_row;


insert into t values ( lookup_row ( 1, 'one' ) );
insert into t values ( lookup_row ( 2, 'TWO' ) );
insert into t values ( lookup_row ( 3, 'three' ) );


insert into t values ( lookup_row ( 4, 'FOUR' ) );


insert into t values ( lookup_row ( 5, 'five' ) );
insert into t values ( lookup_row ( 6, 'SIX' ) );
insert into t values ( lookup_row ( 7, 'seven' ) );
insert into t values ( lookup_row ( 8, 'other' ) );
insert into t values ( lookup_row ( 9, 'other' ) );
insert into t values ( lookup_row ( 10, 'other' ) );
commit;

/* this is how an object query should be written */


select VALUE(a) rec from t a;

/* because it’s verbose, it’s convenient to define a view */


create or replace view v as
select value(a) rec from t a;

/* test the view */


select * from v;

Now the example proper...

create type lookup_row as object ( idx number, text varchar2(20) );

create type lookups_tab as table of lookup_row;

create or replace function Lookups_Fn


return lookups_tab
pipelined
is
v_row lookup_row;
begin
for j in 1..10
loop
v_row :=
case j
when 1 then lookup_row ( 1, 'one' )
when 2 then lookup_row ( 2, 'TWO' )
when 3 then lookup_row ( 3, 'three' )
when 4 then lookup_row ( 4, 'FOUR' )
when 5 then lookup_row ( 5, 'five' )
when 6 then lookup_row ( 6, 'SIX' )
when 7 then lookup_row ( 7, 'seven' )
else lookup_row ( j, 'other' )
end;
pipe row ( v_row );
end loop;
return;
end Lookups_Fn;

Note the syntax of the query. Since the table function returns objects, it follows the syntax used against an object
table above. Again, it’s convenient to encapsulate it in a view.

select value(a) rec


from table
(
cast ( Lookups_Fn() as lookups_tab )
) a;

create or replace view lookups as


select value(a) rec


from table
(
cast ( Lookups_Fn() as lookups_tab )
) a;

select * from lookups;

create or replace function Mappings_Fn ( p_input_rows in sys_refcursor )


return lookups_tab
pipelined
is
v_in_row lookup_row;

/* always initialize an object type using a type constructor


or user defined constructor */
v_out_row lookup_row := lookup_row( 1, 'x' );
begin
loop
fetch p_input_rows into v_in_row;
exit when p_input_rows%notfound;
case v_in_row.idx
when 1 then v_out_row.idx := 1*2; v_out_row.text := 'was one';
when 2 then v_out_row.idx := 2*3; v_out_row.text := 'was TWO';
when 3 then v_out_row.idx := 3*4; v_out_row.text := 'was three';
when 4 then v_out_row.idx := 4*5; v_out_row.text := 'was FOUR';
when 5 then v_out_row.idx := 5*6; v_out_row.text := 'was five';
when 6 then v_out_row.idx := 6*7; v_out_row.text := 'was SIX';
when 7 then v_out_row.idx := 7*8; v_out_row.text := 'was seven';
else v_out_row.idx :=
v_in_row.idx*10; v_out_row.text := 'was other';
end case;
pipe row ( v_out_row );
end loop;
close p_input_rows;
return;
end Mappings_Fn;

Note the syntax of the query. It’s most compactly expressed using the views v or lookups defined above.

select value(b)
from table
(
cast
(
Mappings_Fn
(
cursor ( select * from lookups )
)
as lookups_tab
)
) b;

For completeness, here’s how it looks without the view…

select value(b) from table


(
cast
(


Mappings_Fn
(
cursor
( select value(a) from table
(
cast ( Lookups_Fn() as lookups_tab )
) a
)
)
as lookups_tab
)
) b;

For convenience, we can now establish the whole thing as a view…

create or replace view mapped_lookups as


select value(b) rec from table
(
cast
(
Mappings_Fn
(
cursor
( select value(a) from table
(
cast ( Lookups_Fn() as lookups_tab )
) a
)
)
as lookups_tab
)
) b;

We can now access the view from PL/SQL without restriction, for example…

declare
cursor table_fn_cur is
select * from mapped_lookups;
rec lookup_row;
begin
open table_fn_cur;
loop
fetch table_fn_cur into rec;
exit when table_fn_cur%notfound;
Print ( rec.idx, rec.text );
end loop;
close table_fn_cur;
end;

Note the syntax for the implicit cursor for loop…

begin
for j in ( select * from mapped_lookups )
loop
Show ( j.rec.idx, j.rec.text );
end loop;
end;


A.5. SAMPLES TO ILLUSTRATE MULTILEVEL COLLECTIONS
THE “RUNNER’S TRAINING LOGS” EXAMPLE SCENARIO

A.5.1. DEFINE THE DATASTRUCTURES


create type run_t as object ( distance number, pace number );
create type weeks_running_t is varray(20) of run_t not null;
create type training_log_t is varray(255) of weeks_running_t not null;

create or replace package My_Types is


type reln_training_log_row_t is record (
first_name varchar2(20),
week number,
run number,
distance number,
pace number );

type cur_t is ref cursor


/* strong cursor type for table function partitioning */
return reln_training_log_row_t;

type reln_training_logs_tab_t is table of reln_training_log_row_t;

type nested_training_log_row_t is record (


first_name varchar2(20),
training_log training_log_t );
type nested_training_logs_tab_t is table of nested_training_log_row_t;
end My_Types;

create table nested_training_logs (


first_name varchar2(20) primary key,
training_log training_log_t );

create table nested_training_logs_2 (


first_name varchar2(20) primary key,
training_log training_log_t );

create table reln_training_logs (


first_name varchar2(20) not null,
week number not null,
run number not null,
distance number not null,
pace number not null );
alter table reln_training_logs
add constraint reln_training_logs_pk primary key (first_name,week,run)
using index;

create table reln_training_logs_2 (


first_name varchar2(20) not null,
week number not null,
run number not null,
distance number not null,
pace number not null );
alter table reln_training_logs_2
add constraint reln_training_logs_2_pk primary key (first_name,week,run)
using index;

A.5.2. PROCEDURE TO POPULATE THE NESTED TABLE


create or replace procedure Populate_Nested_Training_Logs is


v_training_log training_log_t;
begin
v_training_log := training_log_t ( weeks_running_t ( run_t ( 0, 0 ) ) );

v_training_log(1) :=
weeks_running_t
(
run_t ( 1, 6 ),
run_t ( 7, 7 ),
run_t ( 3, 6 ),
run_t ( 9, 9 ),
run_t ( 3, 6 ),
run_t ( 18, 10 )
);

v_training_log.extend;
v_training_log(2) :=
weeks_running_t
(
run_t ( 5, 7 ),
run_t ( 9, 8 ),
run_t ( 3, 7 ),
run_t ( 9, 9 ),
run_t ( 3, 7 )
);

v_training_log.extend;
v_training_log(3) :=
weeks_running_t
(
run_t ( 5, 7 ),
run_t ( 9, 8 ),
run_t ( 3, 7 ),
run_t ( 9, 9 ),
run_t ( 3, 7 )
) ;

insert into nested_training_logs ( first_name, training_log ) values


( 'fred', v_training_log );

v_training_log := training_log_t ( weeks_running_t ( run_t ( 0, 0 ) ) );

v_training_log(1) :=
weeks_running_t
(
run_t ( 2, 10 ),
run_t ( 3, 11 ),
run_t ( 3, 11 ),
run_t ( 4, 12 )
);

v_training_log.extend;
v_training_log(2) :=
weeks_running_t
(
run_t ( 1, 10 ),
run_t ( 2, 11 ),
run_t ( 3, 12 ),


run_t ( 2, 10 ),
run_t ( 1, 9 ),
run_t ( 4, 12 )
);

insert into nested_training_logs ( first_name, training_log ) values


( 'sid', v_training_log );
end Populate_Nested_Training_Logs;

A.5.3. REPORT ON THE CONTENTS OF THE NESTED TABLE


begin
for v_row in
( select first_name, training_log from nested_training_logs )
loop
Dbms_Output.Put_Line ( v_row.first_name );
for week in v_row.training_log.first..
v_row.training_log.last
loop
Dbms_Output.Put_Line ( '. week #' || To_Char(week) );
for run in v_row.training_log(week).first..
v_row.training_log(week).last
loop
Dbms_Output.Put_Line
(
'. run #' || To_Char(run) || ': '
|| Lpad ( v_row.training_log(week)(run).distance, 3, ' ' )
|| ' /'
|| Lpad ( v_row.training_log(week)(run).pace, 3, ' ' )
);
end loop;
end loop;
end loop;
end;

A.5.4. TABLE FUNCTION TO “VIEW” THE CONTENTS OF THE NESTED TABLE AS A RELATIONAL TABLE

create or replace function Reln_Training_Logs_Fn


( p_nested_training_logs in sys_refcursor )
return My_Types.reln_training_logs_tab_t
/*
The algorithm handles each row in isolation and thus
is amenable to the simplest form of parallelism
*/
parallel_enable ( partition p_nested_training_logs by any ) pipelined
is
v_in_row My_Types.nested_training_log_row_t;
v_out_row My_Types.reln_training_log_row_t;
begin
loop
fetch p_nested_training_logs into v_in_row;
exit when p_nested_training_logs%notfound;

for week in v_in_row.training_log.first..


v_in_row.training_log.last
loop
for run in v_in_row.training_log(week).first..
v_in_row.training_log(week).last


loop
v_out_row.first_name := v_in_row.first_name;
v_out_row.week := week;
v_out_row.run := run;
v_out_row.distance := v_in_row.training_log(week)(run).distance;
v_out_row.pace := v_in_row.training_log(week)(run).pace;
pipe row ( v_out_row );
end loop;
end loop;
end loop;
close p_nested_training_logs;
return;
end Reln_Training_Logs_Fn;

A.5.5. TABLE FUNCTION TO “VIEW” THE CONTENTS OF THE RELATIONAL TABLE AS A NESTED TABLE

create or replace function Nested_Training_Logs_Fn


( p_reln_training_logs My_Types.cur_t )
return My_Types.nested_training_logs_tab_t
/*
The algorithm depends on assuming that it receives rows ordered by
first_name, week, then run, and that all the rows for
a particular runner go consecutively to the same slave.
These declarations ensure this and remove the need for an
ORDER BY clause in the SELECT that's used to invoke this function.
*/
order p_reln_training_logs by ( first_name, week, run )
parallel_enable ( partition p_reln_training_logs by range ( first_name ) )
pipelined
is
g_in_row My_Types.reln_training_log_row_t;
g_out_row My_Types.nested_training_log_row_t;
g_weeks_running weeks_running_t;
g_training_log training_log_t;
g_first_time boolean := true;
g_got_a_row boolean;
g_new_week boolean;
g_new_runner boolean;
g_current_first_name reln_training_logs.first_name%type;
g_prev_first_name reln_training_logs.first_name%type;
g_current_week reln_training_logs.week%type;
g_prev_week reln_training_logs.week%type;

procedure Get_Next_Row is begin


fetch p_reln_training_logs into g_in_row;
g_got_a_row := not p_reln_training_logs%notfound;
if g_got_a_row
then
case g_first_time
when true then
g_first_time := false;
g_new_runner := false;
g_new_week := false;
else
g_new_runner := g_prev_first_name <> g_in_row.first_name;
g_new_week := case g_new_runner
when true then true
else g_prev_week <> g_in_row.week


end;
end case;
g_prev_first_name := g_in_row.first_name;
g_prev_week := g_in_row.week;
end if;
return;
end Get_Next_Row;

function Got_Next_Runner return boolean is begin


g_current_first_name := g_in_row.first_name;
g_new_runner := false;
return g_got_a_row;
end Got_Next_Runner;

function Got_Next_Week return boolean is begin


g_current_week := g_in_row.week;
g_new_week := false;
return ( not g_new_runner ) and g_got_a_row;
end Got_Next_Week;

function Got_Next_Run return boolean is begin


return ( not g_new_week ) and g_got_a_row;
end Got_Next_Run;

procedure New_Training_Log is begin


g_training_log := null;
end New_Training_Log;

procedure New_Weeks_Running is begin


g_weeks_running := null;
end New_Weeks_Running;

procedure Store_This_Run is begin


if g_weeks_running is null
then
g_weeks_running := weeks_running_t ( run_t ( 0, 0 ) );
else
g_weeks_running.extend;
end if;
g_weeks_running ( g_in_row.run ):=
run_t ( g_in_row.distance, g_in_row.pace );
end Store_This_Run;

procedure Store_This_Weeks_Running is begin


if g_training_log is null
then
g_training_log := training_log_t ( weeks_running_t ( run_t ( 0, 0 ) ) );
else
g_training_log.extend;
end if;
g_training_log ( g_current_week ):= g_weeks_running;
end Store_This_Weeks_Running;

procedure OutPut_This_Runner is begin


g_out_row.first_name := g_current_first_name;
g_out_row.training_log := g_training_log;
end OutPut_This_Runner;

begin
Get_Next_Row();


while Got_Next_Runner()
loop
New_Training_Log;
while Got_Next_Week()
loop
New_Weeks_Running;
while Got_Next_Run()
loop
Store_This_Run;
Get_Next_Row();
end loop;
Store_This_Weeks_Running;
end loop;
OutPut_This_Runner; pipe row ( g_out_row );
end loop;
close p_reln_training_logs;
return;
end Nested_Training_Logs_Fn;

A.5.6. END-TO-END TEST


truncate table nested_training_logs;
execute Populate_Nested_Training_Logs

truncate table reln_training_logs;


insert into reln_training_logs
(
select *
from table
(
Reln_Training_Logs_Fn
(
cursor
(
select first_name, training_log from nested_training_logs
)
)
)
);

truncate table nested_training_logs_2;


insert into nested_training_logs_2
(
select *
from table
(
Nested_Training_Logs_Fn
(
cursor ( select * from reln_training_logs )
)
)
);

truncate table reln_training_logs_2;


insert into reln_training_logs_2
(
select *
from table
(


Reln_Training_Logs_Fn
(
cursor
(
select first_name, training_log from nested_training_logs_2
)
)
)
);

select * from reln_training_logs_2 minus select * from reln_training_logs;

select * from reln_training_logs minus select * from reln_training_logs_2;

A.6. USING UTL_HTTP


The following block shows: how to send an HTTP request, setting the proxy information, setting the method to
“GET”, providing username/password authentication information, and setting the request header; and how to get the
response, retrieving the status code, the header information, and the response body. The “GET” method is suitable
for non-parameterized URLs or for URLs with a manageable volume of parameter name-value pairs. The maximum
length of the URL string is limited by the capacity of the PL/SQL VARCHAR2 variable used to pass it. The “POST”
method is suitable for parameterizing the request with an arbitrarily large volume of data, especially for example as
might be the case when the request is expressed as an XML document.

declare
req Utl_Http.Req;
resp Utl_Http.Resp;
name varchar2(255);
value varchar2(1023);
v_msg varchar2(80);
v_url varchar2(32767) := 'http://otn.oracle.com/';
begin
/* request that exceptions are raised for error Status Codes */
Utl_Http.Set_Response_Error_Check ( enable => true );

/* allow testing for exceptions like Utl_Http.Http_Server_Error */


Utl_Http.Set_Detailed_Excp_Support ( enable => true );

Utl_Http.Set_Proxy (
proxy => 'www-proxy.us.oracle.com',
no_proxy_domains => 'us.oracle.com' );

req := Utl_Http.Begin_Request (
url => v_url,
method => 'GET' );
/*
Alternatively use method => 'POST' and Utl_Http.Write_Text to
build an arbitrarily long message
*/

Utl_Http.Set_Authentication (
r => req,
username => 'SomeUser',
password => 'SomePassword',
scheme => 'Basic',
for_proxy => false /* this info is for the target web server */ );

Utl_Http.Set_Header (
r => req,


name => 'User-Agent',


value => 'Mozilla/4.0' );

resp := Utl_Http.Get_Response ( r => req );

Dbms_Output.Put_Line ( 'Status code: ' || resp.status_code );


Dbms_Output.Put_Line ( 'Reason phrase: ' || resp.reason_phrase );

for i in 1..Utl_Http.Get_Header_Count ( r => resp )


loop
Utl_Http.Get_Header (
r => resp,
n => i,
name => name,
value => value );
Dbms_Output.Put_Line ( name || ': ' || value);
end loop;

begin
loop
Utl_Http.Read_Text (
r => resp,
data => v_msg );
Dbms_Output.Put_Line ( v_msg );
end loop;
exception when Utl_Http.End_Of_Body then null;
end;

Utl_Http.End_Response ( r => resp );


exception
/*
The exception handling illustrates the use of "pragma-ed" exceptions
like Utl_Http.Http_Client_Error. In a realistic example, the program
would use these when it coded explicit recovery actions.

Request_Failed is raised for all exceptions after calling


Utl_Http.Set_Detailed_Excp_Support ( enable=>false )
And it is NEVER raised after calling with enable=>true
*/
when Utl_Http.Request_Failed then
Dbms_Output.Put_Line ( 'Request_Failed: ' || Utl_Http.Get_Detailed_Sqlerrm );

/* raised by URL http://xxx.oracle.com/ */


when Utl_Http.Http_Server_Error then
Dbms_Output.Put_Line ( 'Http_Server_Error: ' || Utl_Http.Get_Detailed_Sqlerrm );

/* raised by URL http://otn.oracle.com/xxx */


when Utl_Http.Http_Client_Error then
Dbms_Output.Put_Line ( 'Http_Client_Error: ' || Utl_Http.Get_Detailed_Sqlerrm );

/* code for all the other defined exceptions you can recover from */

when others then


Dbms_Output.Put_Line (SQLERRM);
end;
