
Fast Track PL/SQL

Turbo-charge
PL/SQL Performance
with Bulk Processing Features

Steven Feuerstein
PL/SQL Evangelist, Quest Software
steven.feuerstein@quest.com
PL/SQL Obsession - www.ToadWorld.com/SF

Copyright 2000-2008 Steven Feuerstein - Page 1


How to benefit most from this session

Watch, listen, focus on concepts and principles.

Download and use any of my training materials:
   PL/SQL Obsession - http://www.ToadWorld.com/SF

Download and use any of my scripts (examples, performance scripts,
reusable code) from the same location: the demo.zip file.
   filename_from_demo_zip.sql

You have my permission to use all these materials to do internal
trainings and build your own applications. But remember: they are not
production ready. You must test them and modify them to fit your needs.
Other Websites of Interest for PL/SQL Developers



Turbo-charge SQL with bulk processing statements

Improve the performance of multi-row SQL operations by an order of
magnitude or more with bulk/array processing in PL/SQL!

   CREATE OR REPLACE PROCEDURE upd_for_dept (
      dept_in     IN employee.department_id%TYPE
    , newsal_in   IN employee.salary%TYPE)
   IS
      CURSOR emp_cur IS
         SELECT employee_id, salary, hire_date
           FROM employee
          WHERE department_id = dept_in;
   BEGIN
      FOR rec IN emp_cur LOOP
         adjust_compensation (rec, newsal_in);

         UPDATE employee
            SET salary = rec.salary
          WHERE employee_id = rec.employee_id;
      END LOOP;
   END upd_for_dept;

Row by row processing: simple and elegant, but inefficient.



Row by row processing of DML in PL/SQL

[Diagram: inside the Oracle server, the PL/SQL runtime engine's
procedural statement executor runs the PL/SQL block; every UPDATE
inside the loop is handed over to the SQL engine's SQL statement
executor.]

   FOR rec IN emp_cur LOOP
      UPDATE employee
         SET salary = ...
       WHERE employee_id = rec.employee_id;
   END LOOP;

Performance penalty for many context switches.



Bulk processing with FORALL

[Diagram: the PL/SQL runtime engine hands the entire binding array to
the SQL engine in a single round trip; the SQL statement executor then
runs each generated UPDATE.]

   FORALL indx IN list_of_emps.FIRST .. list_of_emps.LAST
      UPDATE employee
         SET salary = ...
       WHERE employee_id = list_of_emps (indx);

Fewer context switches, same SQL behavior.
Bulk Processing in PL/SQL

FORALL
   Use with inserts, updates, deletes and merges.
   Move data from collections to tables.

BULK COLLECT
   Use with implicit and explicit queries.
   Move data from tables into collections.

In both cases, the "back end" processing in the SQL engine is
unchanged:
   Same transaction and rollback segment management.
   Same number of individual SQL statements will be executed.
   But BEFORE and AFTER statement-level triggers only fire once per
   FORALL INSERT statement.



statement_trigger_and_forall.sql
Classic Optimization Tradeoff

These features will help you make your code run much faster.
But the user sessions will consume more PGA memory.
Let's review memory usage and PL/SQL.



PL/SQL in Shared Memory

[Diagram: the System Global Area (SGA) of the RDBMS instance contains
the shared pool, whose library cache holds pre-parsed shared SQL
("SELECT * FROM emp", "UPDATE emp SET sal = ...") and pre-parsed
program units (calc_totals, show_emps, upd_salaries), alongside the
reserved pool and large pool. Each session's own data — variables such
as "emp_rec emp%ROWTYPE" and "tot_tab tottabtype" — lives in that
session's PGA/UGA memory, not in the SGA.]

   similar.sql
BULK COLLECT for multi-row querying

   SELECT * BULK COLLECT INTO collection FROM table;

   FETCH cur BULK COLLECT INTO collection;

Fetch one or more rows into a collection.

The collection is always filled sequentially, from index value 1.

The query does not raise NO_DATA_FOUND if no rows are fetched;
instead, the collection is empty.

Use FETCH with LIMIT to manage memory.
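The no-NO_DATA_FOUND rule is easy to see in a minimal block — a sketch
only, with the employee table and last_name column assumed for
illustration:

```sql
DECLARE
   TYPE names_t IS TABLE OF employee.last_name%TYPE;

   l_names   names_t;
BEGIN
   SELECT last_name
     BULK COLLECT INTO l_names
     FROM employee
    WHERE 1 = 2;   -- matches no rows

   -- No NO_DATA_FOUND is raised; the collection simply comes back
   -- empty, and any rows that are fetched always start at index 1.
   DBMS_OUTPUT.put_line ('Fetched ' || l_names.COUNT || ' rows');
END;
```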
An "unlimited" BULK COLLECT

   DECLARE
      /* Never can be more than 100 employees! */
      TYPE employees_aat IS VARRAY (100) OF employees%ROWTYPE;

      l_employees   employees_aat;
   BEGIN
      SELECT *
        BULK COLLECT INTO l_employees
        FROM employees;

      FOR indx IN 1 .. l_employees.COUNT
      LOOP
         process_employee (l_employees (indx));
      END LOOP;
   END;

Declare a VARRAY of records to hold the queried data. Fetch all rows
into the collection sequentially, starting with 1, then iterate
through the collection contents with a loop.

But what if I need to fetch and process millions of rows? This
approach could consume unacceptable amounts of memory.

   bulkcoll.sql
   bulktiming.sql
Limiting retrieval with BULK COLLECT

If you are certain that your table will never have more than N rows,
use a VARRAY (N) to hold the fetched data.
   If that limit is exceeded, Oracle will raise an error.

If you do not know in advance how many rows you might retrieve, you
should:
   Declare an explicit cursor.
   Fetch BULK COLLECT with the LIMIT clause.



Limit rows returned by BULK COLLECT

   CREATE OR REPLACE PROCEDURE bulk_with_limit
      (deptno_in IN dept.deptno%TYPE)
   IS
      CURSOR emps_in_dept_cur IS
         SELECT * FROM emp
          WHERE deptno = deptno_in;

      TYPE emp_tt IS TABLE OF emps_in_dept_cur%ROWTYPE;

      emps   emp_tt;
   BEGIN
      OPEN emps_in_dept_cur;

      LOOP
         FETCH emps_in_dept_cur
            BULK COLLECT INTO emps LIMIT 1000;

         EXIT WHEN emps.COUNT = 0;

         process_emps (emps);
      END LOOP;

      CLOSE emps_in_dept_cur;
   END bulk_with_limit;

Use the LIMIT clause with the INTO to manage the amount of memory used
with the BULK COLLECT operation. Definitely the preferred approach in
production applications with large or varying datasets.

   bulklimit.sql
Details on that LIMIT clause

The limit value can be a literal or a variable.
   I suggest passing the limit as a parameter to give you maximum
   flexibility.

Tom Kyte recommends always using 100.
   Setting it to 500 or 1000 doesn't seem to make much difference in
   performance.
   With very large volumes of data and small numbers of batch
   processes, however, a larger LIMIT could help.



Terminating loops containing BULK COLLECT

   LOOP
      FETCH my_cursor BULK COLLECT INTO l_collection LIMIT 100;
      EXIT WHEN my_cursor%NOTFOUND;   -- BAD IDEA

You will need to break the habit of checking %NOTFOUND right after the
fetch: you might skip processing some of your data.

Instead, do one of the following:
   At the end of the loop, check %NOTFOUND.
   Right after the fetch, exit when collection.COUNT = 0.
   At the end of the loop, exit when collection.COUNT < limit.
bulklimit_stop.sql
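The safe termination patterns above can be sketched as follows; the
cursor, table and process_row procedure are hypothetical stand-ins:

```sql
DECLARE
   CURSOR my_cursor IS SELECT * FROM employee;

   TYPE emp_tt IS TABLE OF employee%ROWTYPE;

   l_collection   emp_tt;
BEGIN
   OPEN my_cursor;

   LOOP
      FETCH my_cursor BULK COLLECT INTO l_collection LIMIT 100;

      -- Alternative: EXIT WHEN l_collection.COUNT = 0; right here.

      -- Process the batch BEFORE any %NOTFOUND test, so the final,
      -- partially filled batch is never skipped.
      FOR indx IN 1 .. l_collection.COUNT
      LOOP
         process_row (l_collection (indx));
      END LOOP;

      -- Safe at the END of the loop:
      EXIT WHEN l_collection.COUNT < 100;
   END LOOP;

   CLOSE my_cursor;
END;
```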
When to convert to BULK COLLECT

Prior to Oracle10g, you should convert all multiple-row fetch logic,
including cursor for loops, to BULK COLLECTs.

For Oracle10g and above, leave your cursor for loops in place if
they...
   contain no DML operations.
   seem to be running fast enough.

Explicit BULK COLLECTs will usually run faster than cursor for loops
optimized by the compiler to BULK COLLECT efficiency.
Use FORALL for multi-row DML operations

   PROCEDURE upd_for_dept (...) IS
   BEGIN
      FORALL indx IN low_value .. high_value
         UPDATE employee
            SET salary = newsal_in
          WHERE employee_id = list_of_emps (indx);   -- binding array
   END;

Convert loops that contain inserts, updates, deletes or merges to
FORALL statements.

The header looks identical to a numeric FOR loop:
   an implicitly declared integer iterator, and
   at least one "bind array" that uses this iterator as its index
   value.
More on FORALL

Use any type of collection with FORALL.

One DML statement is allowed per FORALL.
   Each FORALL is its own "extended" DML statement.

The collection must be indexed by integer.

The binding array must be sequentially filled.
   Unless you use the INDICES OF or VALUES OF clause.

SQL%ROWCOUNT returns the total number of rows modified by the entire
FORALL.
   Unreliable when used with LOG ERRORS.

Use the SQL%BULK_ROWCOUNT cursor attribute to determine how many rows
are modified by each statement.

   bulktiming.sql
bulk_rowcount.sql
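A minimal sketch of SQL%BULK_ROWCOUNT in action; the employee table,
the department values and the 10% raise are assumptions for
illustration:

```sql
DECLARE
   TYPE ids_t IS TABLE OF employee.department_id%TYPE;

   l_depts   ids_t := ids_t (10, 20, 30);
BEGIN
   FORALL indx IN 1 .. l_depts.COUNT
      UPDATE employee
         SET salary = salary * 1.1
       WHERE department_id = l_depts (indx);

   -- Total rows modified by the entire FORALL:
   DBMS_OUTPUT.put_line ('Total: ' || SQL%ROWCOUNT);

   -- Rows modified by each generated UPDATE:
   FOR indx IN 1 .. l_depts.COUNT
   LOOP
      DBMS_OUTPUT.put_line (
         'Dept ' || l_depts (indx) || ': ' || SQL%BULK_ROWCOUNT (indx));
   END LOOP;
END;
```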
FORALL and collections of records

Prior to 11g, you cannot reference a field of a record in FORALL.
   You must instead break the data into separate collections, or
   perform record-level inserts and updates.

In 11g, this restriction is lifted (but it is an undocumented
feature).

   11g_field_of_record.sql
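A record-level insert sidesteps the pre-11g restriction, because the
whole record is bound rather than any individual field. A sketch, with
the employee_archive table assumed for illustration:

```sql
DECLARE
   TYPE emp_tt IS TABLE OF employee%ROWTYPE;

   l_emps   emp_tt;
BEGIN
   SELECT *
     BULK COLLECT INTO l_emps
     FROM employee
    WHERE department_id = 10;

   -- No field of l_emps (indx) is referenced; the record is bound whole.
   FORALL indx IN 1 .. l_emps.COUNT
      INSERT INTO employee_archive
           VALUES l_emps (indx);
END;
```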



INDICES OF and VALUES OF

Prior to Oracle10g R2, the binding arrays in a FORALL statement must
be sequentially filled.

Now, however, you can bind sparse collections by using INDICES OF and
VALUES OF in the FORALL header.

   PROCEDURE upd_for_dept (...) IS
   BEGIN
      FORALL indx IN INDICES OF list_of_emps
         UPDATE employee
            SET salary = newsal_in
          WHERE employee_id = list_of_emps (indx);

   10g_indices_of*.sql
   10g_values_of*.sql
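A sketch of why INDICES OF matters: once elements are removed from an
index-by table, the collection is sparse, and a FIRST..LAST FORALL
would fail on the undefined elements. The collection contents here are
invented for illustration:

```sql
DECLARE
   TYPE ids_t IS TABLE OF employee.employee_id%TYPE
      INDEX BY PLS_INTEGER;

   l_ids   ids_t;
BEGIN
   l_ids (1)   := 100;
   l_ids (500) := 200;   -- sparse: big gap between index values

   -- FORALL indx IN 1 .. 500 would raise an error on the undefined
   -- elements; INDICES OF visits only the defined index values.
   FORALL indx IN INDICES OF l_ids
      DELETE FROM employee
       WHERE employee_id = l_ids (indx);
END;
```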



Exception handling and FORALL

When an exception occurs in a DML statement...
   That statement is rolled back and the FORALL stops.
   All previous, successful statements are not rolled back.

Use the SAVE EXCEPTIONS clause to tell Oracle to continue past
exceptions, and save the error information for later.

Then check the contents of the pseudo-collection of records,
SQL%BULK_EXCEPTIONS.
   Two fields: ERROR_INDEX and ERROR_CODE.



FORALL with SAVE EXCEPTIONS

Add SAVE EXCEPTIONS to enable FORALL to suppress errors at the
statement level.

   CREATE OR REPLACE PROCEDURE load_books (books_in IN book_obj_list_t)
   IS
      bulk_errors   EXCEPTION;
      PRAGMA EXCEPTION_INIT (bulk_errors, -24381);
   BEGIN
      -- SAVE EXCEPTIONS allows processing of all statements,
      -- even after an error occurs.
      FORALL indx IN books_in.FIRST .. books_in.LAST SAVE EXCEPTIONS
         INSERT INTO book VALUES (books_in (indx));
   EXCEPTION
      WHEN bulk_errors THEN
         -- Iterate through the pseudo-collection of errors.
         FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
         LOOP
            log_error (SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
         END LOOP;
   END;

   bulkexc.sql
Converting old-fashioned code to bulk

Change from an integrated, row-by-row approach to a phased approach:
   Phase 1: get the data with BULK COLLECT, filling the collections.
   Phase 2: massage the collections so they are ready for DML
   operations.
   Phase 3: push the data to the database with FORALL.

   cfl_to_bulk_0.sql
   cfl_to_bulk_5.sql
   10g_indices_of.sql
   10g_values_of.sql
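The three phases can be sketched in one block; the employee table, the
department filter and the 10% adjustment are all assumptions for
illustration:

```sql
DECLARE
   TYPE ids_t  IS TABLE OF employee.employee_id%TYPE;
   TYPE sals_t IS TABLE OF employee.salary%TYPE;

   l_ids    ids_t;
   l_sals   sals_t;
BEGIN
   -- Phase 1: get the data with BULK COLLECT.
   SELECT employee_id, salary
     BULK COLLECT INTO l_ids, l_sals
     FROM employee
    WHERE department_id = 10;

   -- Phase 2: massage the collections so they are ready for DML.
   FOR indx IN 1 .. l_sals.COUNT
   LOOP
      l_sals (indx) := l_sals (indx) * 1.1;
   END LOOP;

   -- Phase 3: push the data to the database with FORALL.
   FORALL indx IN 1 .. l_ids.COUNT
      UPDATE employee
         SET salary = l_sals (indx)
       WHERE employee_id = l_ids (indx);
END;
```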
Collections impact on "Rollback segment too small" and "Snapshot too
old" errors

Rollback segment too small...
   Cause: so many uncommitted changes that the rollback segment can't
   handle it all.
   FORALL will cause the error to occur even sooner.
   You still need to use incremental commits.

Snapshot too old...
   Cause: a cursor is held open too long and Oracle can no longer
   maintain the snapshot information.
   Solution: open and close the cursor, or use BULK COLLECT to
   retrieve information more rapidly.

forall_incr_commit.sql
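One way to combine LIMIT batches with incremental commits — a sketch
only, with the table, the batch size of 1000 and the 10% raise assumed
for illustration:

```sql
DECLARE
   CURSOR emp_cur IS SELECT employee_id FROM employee;

   TYPE ids_t IS TABLE OF employee.employee_id%TYPE;

   l_ids   ids_t;
BEGIN
   OPEN emp_cur;

   LOOP
      FETCH emp_cur BULK COLLECT INTO l_ids LIMIT 1000;
      EXIT WHEN l_ids.COUNT = 0;

      FORALL indx IN 1 .. l_ids.COUNT
         UPDATE employee
            SET salary = salary * 1.1
          WHERE employee_id = l_ids (indx);

      COMMIT;   -- bound the volume of uncommitted changes per batch
   END LOOP;

   CLOSE emp_cur;
END;
```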
Bulk Processing Conclusions

The most important performance tuning feature in PL/SQL.
   Almost always the fastest way to execute multi-row SQL operations
   in PL/SQL.

You trade off increased complexity of code for dramatically faster
execution.
   But in Oracle Database 10g and above, the compiler will
   automatically optimize cursor FOR loops to BULK COLLECT efficiency.
   No need to convert unless the loop contains DML or you want to
   maximally optimize your code.

Watch out for the impact on PGA memory!

   emplu.pkg
