
Applications Performance Group

Doc Type: Standards & Guidelines
Subject: Performance Standards Recommendations for the Oracle Applications

Coverage:
- SQL
- Views
- PL/SQL
- Java
- Forms
- Reports
- PRO*C
- Discoverer
- Data Modeling
- Concurrent Manager Jobs

Author(s):
Contributor(s):
Creation Date:
Last Updated:
Version:

Status:

Functional Specification
Table of Contents

1. Overview___________________________________________________________________________4
2. Performance Standards_______________________________________________________________5
2.1. SQL__________________________________________________________________________________5
2.1.1. Bind Variables_________________________________________________________________________________5
2.1.2. nvl() and decode() ______________________________________________________________________________6
2.1.3. IN vs. EXISTS_________________________________________________________________________________8
2.1.4. Sharable Memory______________________________________________________________________________10
2.1.5. Outer-Joins___________________________________________________________________________________11
2.1.6. Execution plans_______________________________________________________________________________11
2.1.7. Deadlock and Locking Order_____________________________________________________________________14
2.1.8. General Guidelines_____________________________________________________________________________16

2.2. Views________________________________________________________________________________17
2.2.1. Creating Views________________________________________________________________________________17
2.2.2. Using Views__________________________________________________________________________________17
2.2.3. View Merging_________________________________________________________________________________17

2.3. PL/SQL______________________________________________________________________________18
2.3.1. Layers of pl/sql-java "objects" ____________________________________________________________________18
2.3.2. PL/SQL table usage____________________________________________________________________________18
2.3.3. Bulk________________________________________________________________________________________19
2.3.4. Shared pool pinning ___________________________________________________________________________20
2.3.5. General PL/SQL performance guidelines ___________________________________________________________21

2.4. Java_________________________________________________________________________________22
2.4.1. Object Creation_______________________________________________________________________________22
2.4.2. Strings and StringBuffers________________________________________________________________________22
2.4.3. Coding Best Practices___________________________________________________________________________24
2.4.4. Synchronization_______________________________________________________________________________25
2.4.5. Collections___________________________________________________________________________________25
2.4.6. Garbage Collection_____________________________________________________________________________26
2.4.7. Weak & Soft References________________________________________________________________________26
2.4.8. JDBC Guidelines______________________________________________________________________________27
2.4.9. Memory Footprint_____________________________________________________________________________29
2.4.10. Reducing Database Trips_______________________________________________________________________29
2.4.11. Deployment_________________________________________________________________________________29
2.4.12. Green Threads versus Native Threads_____________________________________________________________29

2.5. Forms_______________________________________________________________________________30
2.5.1. Forms Blocks_________________________________________________________________________________30
2.5.2. Use of bind variables___________________________________________________________________________30
2.5.3. LOVs_______________________________________________________________________________________30
2.5.4. Record Groups________________________________________________________________________________31
2.5.5. Caching______________________________________________________________________________________31
2.5.6. Item Properties________________________________________________________________________________31

2.6. Reports______________________________________________________________________________31
2.6.1. Reports SQL__________________________________________________________________________________31
2.6.2. Initialization Values____________________________________________________________________________32
2.6.3. Break Groups_________________________________________________________________________________32
2.6.4. Computed Columns____________________________________________________________________________32
2.6.5. Lexical Parameters_____________________________________________________________________________32
2.6.6. Defaulting Report Parameters____________________________________________________________________32

2.7. PRO*C ______________________________________________________________________________34


2.7.1. Arrays processing______________________________________________________________________________34
2.7.2. Linking with the shared library ___________________________________________________________________35
2.7.3. PRO*C Compile options________________________________________________________________________35
Oracle Confidential
2.7.4. Parallel processing using PRO*C_________________________________________________________________35


2.7.5. Object Cache _________________________________________________________________________________36
2.7.6. DML RETURNING ___________________________________________________________________________36
2.7.7. The MAKE FILE _____________________________________________________________________________36

2.8. Discoverer___________________________________________________________________________37
2.9. Materialized Views____________________________________________________________________37
2.10. Data Modeling_______________________________________________________________________38
2.10.1. Data Modeling for OLTP_______________________________________________________________________38
2.10.2. Arrange most used/accessed columns first in a new table______________________________________________38
2.10.3. Primary Keys________________________________________________________________________________39
2.10.4. NULL columns_______________________________________________________________________________39
2.10.5. Indexes_____________________________________________________________________________________39
2.10.6. Attribute Type _______________________________________________________________________________40
2.10.7. Views______________________________________________________________________________________40
2.10.8. General Guidelines____________________________________________________________________________41

2.11. Concurrent Manager Jobs_____________________________________________________________42


2.11.1. Concurrent Manager Management_______________________________________________________________42
2.11.2. Queue Management __________________________________________________________________________42
2.11.3. Concurrent processing Guidelines _______________________________________________________________43

2.12. Who Should Tune ____________________________________________________________________44


2.13. Performance Measurement ____________________________________________________________44
2.14. Administrative Interfaces______________________________________________________________44
2.15. Configuration Parameters_____________________________________________________________44


1. Overview
The objective of this document is to present a series of performance development standards for
use in conjunction with Oracle Applications Release 11.5 and beyond. The standards presented in
this document cover the following areas: SQL, Views, PL/SQL, Java, PRO*C, Forms, Reports, and
Discoverer. We will document the relevant performance development standards for each
individual area. It is important that Applications developers adhere to the standards listed in this
document. Failure to do so often leads to performance issues and bugs that require a major redesign of a feature. Due to the nature of Applications development and the ever-evolving
technology stack, this document will continue to evolve in order to incorporate any new
performance standards.


2. Performance Standards
This section details the performance development standards for the individual areas, such as SQL-related or view-related standards. It is assumed that the reader of this document is fluent in the different areas of writing SQL, etc.

2.1. SQL
This section documents the standards pertaining to the use of SQL in Application
code. Due to the complexity, views are discussed in a separate section. Please note
that the SQL performance standards presented here apply to all clients of SQL
including Forms, PL/SQL, Java, HTML, Perl, PRO*C, Reports, Discoverer, Views, and
any other component where SQL is used.

2.1.1. Bind Variables

Bind variables allow SQL statements to be shared across repeated executions. The
use of bind variables helps prevent a SQL statement from hard parsing on every
execution only because the values supplied have changed. Bind variables help
eliminate hard parses and in certain cases help reduce the soft parse code path (i.e.
PL/SQL).
When using bind variables, you should match the bind variable types with the
database column types to which they are bound. For example, consider [transaction_id
= :b1]. In this case, the bind variable :b1 should be declared as a numeric data type,
provided that the database column is defined as a NUMBER. If the
transaction_id column is NUMERIC and the PL/SQL variable, for example, is a varchar,
then an implicit conversion will be needed in order to make the types consistent.
Inconsistent bind types can cause multiple child cursors to be created for the same
SQL statement and disable the use of an index on that column. Hence, it is
important that when you use bind variables, the types and lengths match
exactly those of the respective database columns. This applies to INSERTs,
SELECTs, UPDATEs, and DELETEs; in other words, any SQL statement where binds are
used. PL/SQL and Forms both perform automatic binding. For example, in Forms,
when you have a SQL statement which references an item in a block such as [a.col =
:MYBLOCK.MYITEM], Forms rewrites this SQL to be [a.col = :1]. PL/SQL also performs
automatic binding when a SQL statement in PL/SQL references a PL/SQL variable.
Hence, it is important that your Forms block items and PL/SQL variables are
consistent with the types of the database columns. All SQL statements in Oracle
Applications should use bind variables, except in the following cases:
statements involving the use of histograms and certain types of upgrade scripts.
Dynamically generated SQL statements should be double-checked for bind variables.
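As an illustration, here is a minimal PL/SQL sketch of the type-matching point; the table and column names are hypothetical, not from any actual Applications schema:

```sql
DECLARE
  -- l_transaction_id is declared NUMBER to match the NUMBER column,
  -- so no implicit conversion is needed and the index stays usable.
  l_transaction_id  NUMBER := 12345;   -- illustrative value
  l_amount          NUMBER;
BEGIN
  -- PL/SQL binds l_transaction_id automatically; the resulting cursor
  -- is shared across executions with different values.
  SELECT amount
    INTO l_amount
    FROM my_transactions            -- hypothetical table
   WHERE transaction_id = l_transaction_id;
END;
```

Had l_transaction_id been declared VARCHAR2, the implicit conversion could create extra child cursors and disable the index on transaction_id.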
2.1.1.1. Histograms
Histograms allow the optimizer to assign the correct selectivity for a column filter or
a join condition using the histogram distribution information rather than assuming a
uniform distribution. For skewed columns such as flags, statuses, or types,
histograms are needed so that the optimizer can accurately estimate the selectivity.
For example, if 95% of the rows have the column value
STATUS=COMPLETE, and only 5% have the value STATUS=PENDING, the
histogram allows the optimizer to assign this correct weight. The lack of a histogram
(uniform distribution) would result in a 50% selectivity for either COMPLETE or
PENDING. The optimizer does not currently use histograms when a bind variable is
used as a value. For this reason, literals should be used only in SQL statements that
contain filters on skewed columns for which a histogram exists and only on that
column. The remaining filters should use bind variables. In addition, the use of
literals should be restricted to a consistent set of values per execution. For example,
consider the following query:
select EI.TASK_ID, EI.BILL_RATE
from PA_EXPENDITURE_ITEMS EI
where EI.TASK_ID = :b1
and EI.COST_BURDEN_DISTRIBUTED_FLAG = 'N'

Notice in the above example that TASK_ID uses a bind variable, while
COST_BURDEN_DISTRIBUTED_FLAG uses a literal ('N'). This is an accepted use of
literals.
2.1.1.2. Non-Repeatable Upgrade Scripts
Non-repeatable upgrade scripts are another exception where literals can be
used in place of bind variables. A non-repeatable upgrade script applies to a
script that is run once and only once during the lifetime of an upgrade cycle.
If the script is run more than once, or is part of a parallel upgrade script, the
script should use bind variables in order to facilitate cursor reuse.

2.1.2. nvl() and decode()

Do not use an nvl() or decode() function on an indexed column, either as an rvalue
or as an lvalue. Such constructs prevent the optimizer from utilizing an index on
the column. For example:
select ai.invoice_num,ai.amount_paid,ai.posting_status,
apv.invoice_date, apv.prepayment_flag
from ap_invoices ai,
ap_invoice_prepays_v apv
where ai.invoice_id=apv.invoice_id
and

ai.invoice_num = nvl(:b1,ai.invoice_num)

Although you may think that the optimizer can use the index on AI.INVOICE_NUM
because the nvl() is on the right-hand side, it will not. Functions such as nvl() and
decode() are considered index-unsafe for the simple reason that the ability to utilize
an index depends on the bind variable value. In the example above, if the bind
variable :b1 is null, then the expression will result in the following: (ai.invoice_num =
ai.invoice_num). Obviously, in this case, the index on AI.INVOICE_NUM cannot be
used because this expression is semantically equivalent to [1=1]. The optimizer has
no way of knowing whether or not a bind variable value is supplied or if it is null.
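One hedged way to avoid the nvl(:b1, col) pattern is to branch in the calling code so that each statement remains index-friendly. The following sketch uses local variables (l_invoice_num, l_cnt) and an illustrative query; it is not the prescribed fix for the example above, just one possible rewrite:

```sql
DECLARE
  l_invoice_num  ap_invoices.invoice_num%TYPE := 'INV-100';  -- illustrative value
  l_cnt          NUMBER;
BEGIN
  IF l_invoice_num IS NOT NULL THEN
    -- The index on invoice_num remains usable for this branch.
    SELECT COUNT(*) INTO l_cnt
      FROM ap_invoices
     WHERE invoice_num = l_invoice_num;
  ELSE
    -- No filter is intended when the value is not supplied.
    SELECT COUNT(*) INTO l_cnt
      FROM ap_invoices;
  END IF;
END;
```

Each branch compiles to its own shared cursor, and the optimizer can commit to an index access path for the filtered branch.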
2.1.2.1. nvl() and optimizer statistics
Another example of an nvl() construct that should not be used is as follows:
update GL_BALANCES GBAL
set PERIOD_NET_DR = :b1
where (GBAL.CODE_COMBINATION_ID, GBAL.PERIOD_NAME,
       GBAL.SET_OF_BOOKS_ID, GBAL.CURRENCY_CODE, GBAL.ACTUAL_FLAG)
in (select CODE_COMBINATION_ID, PERIOD_NAME,
           SET_OF_BOOKS_ID, CURRENCY_CODE, ACTUAL_FLAG
    from POSTING_INTERIM)
and NVL(GBAL.TRANSLATED_FLAG,'X') <> 'R'

In the above example, although TRANSLATED_FLAG is not indexed and is not
the main driving filter, it does have a histogram because it is a skewed
column. However, the nvl() construct prevents the optimizer from accurately
estimating the selectivity of the filter. Hence, a sub-optimal execution plan is
generated.

2.1.2.2. decode() and join-key resolution
You should also not use decode() or nvl() as a run-time join filter. This
prevents the optimizer from assigning the correct join cardinality estimates.
Doing so often leads to poor execution plans. You should join directly to the
tables, and the join keys should be explicitly provided. For example:
select ae.source_table,
       d.invoice_distribution_id,
       ap.invoice_payment_id
from ap_ae_lines_all ae,
     ap_invoice_distributions_all d,
     po_distributions_all pd,
     ap_invoice_payments_all ap
where decode(ae.source_table,'AP_INVOICE_DISTRIBUTIONS',ae.source_id,null)
        = d.invoice_distribution_id (+)
and ae.source_id = 21628
and ae.source_table = 'AP_INVOICE_DISTRIBUTIONS'
and pd.po_distribution_id(+) = d.po_distribution_id
and decode(ae.source_table,'AP_INVOICE_PAYMENTS',ae.source_id,null)
        = ap.invoice_payment_id (+)

In the above example, the join between the table AP_AE_LINES_ALL and
AP_INVOICE_PAYMENTS_ALL depends on the runtime value of the
AE.SOURCE_TABLE column. Hence, the optimizer will not be able to
accurately estimate the join cardinality between these two tables at plan
generation time. The optimizer will use internal defaults, and it may result in
a sub-optimal plan.
2.1.2.3. nvl() and the negation case

Another common misuse of nvl() is the negation case, whereby you want to
retrieve rows matching certain criteria. Consider the following query:
select max(poll2.creation_date)
from po_line_locations_archive poll2,
     po_headers_archive poh,
     po_lines_archive pol1
where pol1.po_line_id = poll2.po_line_id
and poh.po_header_id = pol1.po_header_id
and NVL(POL1.LATEST_EXTERNAL_FLAG,'N') = 'Y'
and pol1.item_id = :b1
and POH.TYPE_LOOKUP_CODE IN ('STANDARD','PLANNED','BLANKET')
and NVL(POH.LATEST_EXTERNAL_FLAG,'N') = 'Y'
and POLL2.SHIPMENT_TYPE != 'PRICE BREAK'
and NVL(POLL2.LATEST_EXTERNAL_FLAG,'N') = 'Y'

In the previous example, the predicate
[NVL(POH.LATEST_EXTERNAL_FLAG,'N') = 'Y'] can be semantically rewritten
as [POH.LATEST_EXTERNAL_FLAG = 'Y']. This avoids the nvl() construct and
the unnecessary overhead of invoking the nvl() SQL function. Do not use nvl()
on a column when you are after the non-null rows and the predicate is an
equality predicate.

2.1.3. IN vs. EXISTS

In certain circumstances, it is better to use IN rather than EXISTS. In general, if the
selective predicate is in the subquery, then use IN. If the selective predicate is in the
parent query, then use EXISTS. Sometimes, Oracle can rewrite a subquery when
used with an IN clause to take advantage of selectivity specified in the subquery.
This is most beneficial when the most selective filter appears in the subquery, and
when there are indexes on the join columns.
Conversely, using EXISTS is beneficial when the most selective filter is in the parent
query. This allows the selective predicates in the parent query to be applied before
filtering the rows against the EXISTS criteria.
delete from mtl_supply ms1
where exists (select 'supply exists'
from po_requisition_lines pl
where pl.requisition_header_id=:b0
and ms1.supply_type_code='REQ'
and ms1.supply_source_id=pl.requisition_line_id
and nvl(pl.modified_by_agent_flag,'N') <>'Y'
and nvl(pl.closed_code,'OPEN')='OPEN'
and nvl(pl.cancel_flag,'N')='N'
and pl.line_location_id is null)

Explain Plan:
DELETE STATEMENT Cost=2070, Rows=30196
DELETE MTL_SUPPLY Cost=, Rows=
FILTER Cost=, Rows=
TABLE ACCESS FULL MTL_SUPPLY Cost=2070, Rows=30196
FILTER Cost=, Rows=
TABLE ACCESS BY INDEX ROWID PO_REQUISITION_LINES_ALL Cost=3, Rows=1
INDEX UNIQUE SCAN PO_REQUISITION_LINES_U1 Cost=2, Rows=1

Below is the execution plan for the preceding statement, rewritten with EXISTS
replaced by IN. Note that the subquery is not correlated when using the IN clause.

delete from mtl_supply ms1


where ms1.supply_source_id in
(select pl.requisition_line_id
from po_requisition_lines pl
where pl.requisition_header_id=:b0
and ms1.supply_type_code='REQ'
and nvl(pl.modified_by_agent_flag,'N') <>'Y'
and nvl(pl.closed_code,'OPEN')='OPEN'
and nvl(pl.cancel_flag,'N')='N'
and pl.line_location_id is null)


Explain Plan:
DELETE STATEMENT Cost=8, Rows=1
DELETE MTL_SUPPLY Cost=, Rows=
NESTED LOOPS Cost=8, Rows=1
TABLE ACCESS BY INDEX ROWID PO_REQUISITION_LINES_ALL Cost=5, Rows=1
INDEX RANGE SCAN PO_REQUISITION_LINES_U2 Cost=3, Rows=1
TABLE ACCESS BY INDEX ROWID MTL_SUPPLY Cost=3, Rows=62439
INDEX RANGE SCAN MTL_SUPPLY_N1 Cost=2, Rows=62439

NOT EXISTS should always be used rather than NOT IN.

2.1.4. Sharable Memory

SQL statements that consume a large amount of sharable memory place a large
burden on the shared pool. The larger the SQL statement, the more memory
allocations and latch gets will be needed in order to build a sharable cursor in the
cursor cache. SQL statements that require a large amount of memory (i.e. several
megabytes) pose a scalability problem since this limits the amount of sharable
cursors that can be active in the shared pool. Suppose for example, that a query Q1
against a view V1 consumes 1.5 MB of sharable memory, and that query Q2 is a
slight variant of Q1 in that it specifies an additional or a different filter. This
results in 3 MB of shared memory allocated for only two cursors. Shared pool
operations are also slightly more expensive in an OPS/RAC environment due to the
need to acquire global cache locks. Hence, it is important that Apps SQL statements
are kept to a reasonable minimum in terms of the sharable memory required. Apps
SQL statements should not exceed 1 MB in terms of the amount of sharable memory
required for the cursor for any particular SQL statement. The amount of sharable
memory for a SQL statement can be measured by querying the v$sql table and
examining the SHARABLE_MEM column. The following is an example of a query that
reports the amount of sharable memory consumed for a given SQL statement:
select sql_text, sharable_mem
from v$sql
where sql_text like '%select ae.source_table%'
SQL_TEXT SHARABLE_MEM (bytes)
select ae.source_table, d.invoice_distribution_id, ap.invoice_payment_id
from ap_ae_lines_all ae,
ap_invoice_distributions_all d,
po_distributions_all pd,
ap_invoice_payments_all ap
where decode(ae.source_table,'AP_INVOICE_DISTRIBUTIONS',ae.source_id,null)
        = d.invoice_distribution_id (+)
and ae.source_id = :b1
and ae.source_table = 'AP_INVOICE_DISTRIBUTIONS'
and pd.po_distribution_id(+) = d.po_distribution_id
and decode(ae.source_table,'AP_INVOICE_PAYMENTS',ae.source_id,null)
        = ap.invoice_payment_id (+)

In the above example, the SQL statement consumed almost 40K of shared memory
for the cursor. It is important that you monitor the amount of sharable memory
consumed by your SQL statements to ensure that it remains reasonable. If
your query references a view, you may need to optimize the view or simplify the
query in order to reduce the amount of sharable memory consumed.

2.1.5. Outer-Joins

Tables that are outer-joined cannot be chosen by the optimizer as driving
tables. This limits the degree of optimization in terms of the join permutations
that the optimizer can consider. Do not outer-join to a table unless it is absolutely
needed. You should consider using default values in the base tables so as to avoid
an outer-join. Outer-joins are typically needed when there is no corresponding match,
or the outer-row key does not exist in the inner table. Never outer-join to a view.
This typically results in a non-mergeable view execution plan
with a full table scan on the adjoining table. If you need outer-join semantics, rewrite
the SQL to outer-join to the required base tables that make up the view.
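A hedged sketch of that rewrite follows; the view, tables, and columns are hypothetical, chosen only to show the shape of the change:

```sql
-- Before: outer-joining to a view forces a non-mergeable plan.
-- select o.order_id, v.customer_name
-- from orders o, customer_details_v v
-- where o.customer_id = v.customer_id (+);

-- After: outer-join directly to the base table behind the view.
select o.order_id, c.customer_name
from orders o,
     customers c
where o.customer_id = c.customer_id (+);
```

With the view removed, the optimizer is free to merge the query blocks and pick a driving table and index access path.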

2.1.6. Execution plans

As a developer, you are responsible for generating and evaluating the execution
plans for each and every SQL statement for which you check in the code. Do not
make assumptions about the execution plans. You should generate an execution
plan, and review the plan to ensure that it is optimal. Things that you should
highlight from the plan are the driving table and the driving index, non-mergable
views, full table scans, non-selective indexes, and the join methods.
The following execution plan example illustrates a full table scan occurring on
both the SO_LINES and SO_HEADERS tables. The optimizer also chose a hash
join as the join method, which is typically the join method of choice when the
estimated join cardinality between the tables is high.
PLAN TABLE:
Operation                          Name          Rows    Bytes   Cost
SELECT STATEMENT                                 1K      287K    568
 COUNT STOPKEY                                   1K      287K    568
  VIEW                                           1K      66M     568
   FILTER                                        1K      66M     568
    SORT GROUP BY
     SORT GROUP BY
      HASH JOIN                                  194K    66M     337
       HASH JOIN                                 15K     3M      253
        TABLE ACCESS FULL         SO_LINES       15K     1M      236
        TABLE ACCESS FULL         SO_HEADERS     1M      128M    16
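As a reminder of the mechanics, a plan can be generated with EXPLAIN PLAN and read back from PLAN_TABLE. This is a sketch: the table in the explained statement is illustrative, and PLAN_TABLE must already exist (e.g. created via the utlxplan.sql script):

```sql
-- Populate PLAN_TABLE with the plan for the statement (it is not executed).
EXPLAIN PLAN FOR
  select l.line_id
  from so_lines l              -- illustrative table
  where l.header_id = :b1;

-- Display the plan; on Oracle9i the utlxpls.sql script can also be used.
select lpad(' ', 2 * level) || operation || ' ' || options
       || ' ' || object_name as plan
from plan_table
start with id = 0
connect by prior id = parent_id;
```

Reviewing this output for the driving table, driving index, full table scans, and join methods is the checklist described above.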

You should also generate a SQL trace file and use tkprof to format the output of the
SQL trace file. You should examine the elapsed times, disk reads, and buffer gets.
For a single execution, a high number of buffer gets typically points to an inefficient
SQL statement. A high number of disk reads can be even worse than a high number
of buffer gets since disk reads will be more expensive than reads from the buffer
cache. This usually indicates that a full table scan on a large table is occurring or
that a large join between two tables is occurring (sort merge or hash). Also check
the sharable memory size of each SQL statement: online queries should not exceed
200KB, and SQL for batch jobs or complex reports should not exceed 1MB.
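A minimal sketch of producing a trace for tkprof; the traced statement is a placeholder, and the trace file appears in the directory pointed to by user_dump_dest:

```sql
-- Enable SQL trace for the current session, run the statement under
-- investigation, then disable tracing.
alter session set sql_trace = true;

select count(*) from dual;   -- placeholder for the statement being tuned

alter session set sql_trace = false;
```

The resulting trace file is then formatted from the OS shell, for example: tkprof mytrace.trc myreport.prf sort=prsela,exeela,fchela (file names here are placeholders).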

The following is an example of tkprof output:

call     count       cpu    elapsed       disk      query    current       rows
------- ------ --------- ---------- ---------- ---------- ---------- ----------
Parse             0.11       1.72          0          0          0          0
Execute           0.17      13.91          0          0         18          0
Fetch    31614   172.14    1257.83      37558    1771379       6639     189682
------- ------ --------- ---------- ---------- ---------- ---------- ----------
total    31617   173.97    1271.91      37558    1771379       6657     189682

Misses in library cache during parse: 1


Optimizer goal: CHOOSE
Parsing user id: 41 (CRP)
Rows      Execution Plan
-------   ---------------------------------------------------
      0   SELECT STATEMENT GOAL: RULE
 189682    SORT (UNIQUE)
 189682     UNION-ALL
 189682      FILTER
 189682       NESTED LOOPS
 189682        NESTED LOOPS
 189682         NESTED LOOPS
      1          NESTED LOOPS
      1           TABLE ACCESS (BY INDEX ROWID) OF MTL_PARAMETERS
 205598            INDEX: ANALYZED (UNIQUE SCAN) OF MTL_PARAMETERS_U1 (UNIQUE)
 205599           TABLE ACCESS (BY INDEX ROWID) OF MTL_SYSTEM_ITEMS_B
 189682            INDEX (RANGE SCAN) OF MTL_SYSTEM_ITEMS_B_N4 (NON-UNIQUE)
 189682          TABLE ACCESS (BY INDEX ROWID) OF MTL_ITEM_CATEGORIES
 189682           INDEX (UNIQUE SCAN) OF MTL_ITEM_CATEGORIES_U1 (UNIQUE)
 189682         TABLE ACCESS (BY INDEX ROWID) OF MTL_CATEGORIES_B
      0          INDEX (UNIQUE SCAN) OF MTL_CATEGORIES_B_U1 (UNIQUE)
      0         INDEX (UNIQUE SCAN) OF MTL_CATEGORIES_TL_U1 (UNIQUE)

For more information on how to interpret execution plans, see Chapter 9,
"Using EXPLAIN PLAN", in the Oracle9i Database Performance Guide and Reference.
For more information on performance heuristics and join methods, see the SQL
Repository documentation.

2.1.7. Deadlock and Locking Order

SQL statements that lock rows should be analyzed carefully to ensure that
application deadlock or lock-ordering issues are avoided. The Oracle database raises
an error (ORA-60) when an application deadlock occurs; however, it does not resolve
the deadlock. The application must be designed in such a way that these scenarios
do not occur. Consider the following cursor that attempts to lock qualifying rows:
CURSOR lock_departure(x_dep_id NUMBER) IS
SELECT DEP.STATUS_CODE,
DEL.STATUS_CODE,
LD.LINE_DETAIL_ID,
PLD.PICKING_LINE_DETAIL_ID
FROM WSH_DEPARTURES DEP,
WSH_DELIVERIES DEL,
SO_LINE_DETAILS LD,
SO_PICKING_LINE_DETAILS PLD
WHERE DEP.DEPARTURE_ID = x_dep_id
AND

DEL.ACTUAL_DEPARTURE_ID(+) = DEP.DEPARTURE_ID

AND

LD.DEPARTURE_ID(+) = DEP.DEPARTURE_ID

AND

PLD.DEPARTURE_ID(+) = DEP.DEPARTURE_ID

FOR UPDATE NOWAIT;

The problem with this query is that the locking order is largely dependent on the
execution plan and the row source order. For example, it is possible that the rows in
SO_LINE_DETAILS can be locked before the rows in SO_PICKING_LINE_DETAILS. It is
also possible that the rows of SO_PICKING_LINE_DETAILS are locked before the rows
in SO_LINE_DETAILS. The locking order is based on the join order (i.e. execution
plan). If one user ran this query under the RBO, and another user ran this query
under the CBO, locking order issues could arise due to the likelihood of a plan
difference. Another problem with this cursor is that it performs non-qualified locking
via the FOR UPDATE clause. The FOR UPDATE clause can take optional
arguments specifying which table's rows are to be locked. For example, FOR UPDATE
OF DEP.STATUS_CODE means that only the rows in WSH_DEPARTURES should be locked.
The solution for this query is to qualify the lock via the OF option of the FOR
UPDATE clause, or to break the query into separate cursors such that each cursor
locks a single table only. For example, the above cursor can be rewritten as follows:
CURSOR lock_departure(x_dep_id NUMBER) IS
select departure_id
from WSH_DEPARTURES
where DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;
CURSOR lock_deliveries(x_dep_id NUMBER) IS
select delivery_id
from WSH_DELIVERIES
where ACTUAL_DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;


CURSOR lock_line_details(x_dep_id NUMBER) IS


select line_detail_id
from SO_LINE_DETAILS
where DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;
CURSOR lock_picking_details(x_dep_id NUMBER) IS
select picking_line_detail_id
from SO_PICKING_LINE_DETAILS
where DEPARTURE_ID = x_dep_id
FOR UPDATE NOWAIT;
BEGIN
  OPEN lock_departure(entity_id);
  CLOSE lock_departure;
  OPEN lock_deliveries(entity_id);
  CLOSE lock_deliveries;
  OPEN lock_line_details(entity_id);
  CLOSE lock_line_details;
  OPEN lock_picking_details(entity_id);
  CLOSE lock_picking_details;
END;

In summary, do not code a SQL statement that performs an unqualified lock via the
FOR UPDATE clause. You should either break up the SQL statement into multiple
single-table cursors or use the OF option of the FOR UPDATE clause.


2.1.8. General Guidelines

- Avoid constructing complex SQL statements that attempt to cover all possible
scenarios. Use conditional logic and break them into simpler and more scalable
SQL statements.
- Avoid creating generic find windows. Find windows should be optimized
based on different common search cases.
- Search screens and UIs should prevent execution of blind queries as well as
non-selective queries. Such queries impact the whole system.
- Do not automatically execute queries or LOVs in UIs when navigating to
them.
- Do not prepend % by default to LOVs or search fields. It will disable index
use.
- Avoid functions and expressions in the conditions used for index access.
- Using character functions (e.g. LIKE) on number columns causes implicit type
conversion that will disable index use.
- Do not use dynamic SQL or REF cursors for frequently executed statements.
- Collapse redundant SQL to reduce server round trips.
- Use the DML RETURNING feature to merge SQL statements and reduce resource
consumption. Replace:

select seq.nextval into id from dual;
insert into tab (tab_id, ...) values (id, ...);

with:

insert into tab (tab_id, ...)
values (seq.nextval, ...)
returning tab_id into id;

- Minimize round trips by using array processing.
- Statements accessing more than one table should use table aliases when
referencing columns, even if the column reference is unambiguous.
- For poorly performing queries, you may need to revisit the functionality or
change the code in another layer in order to tune the entire flow. Do not
focus on the SQL statement by itself; evaluate the entire flow.
- Hints should not be used in Apps code unless approved by the performance
team.
- Do not use the PARALLEL hint or alter objects to PARALLEL. It will bypass the
buffer cache and impact execution plans.
- Gather table and index statistics using the FND_STATS package.
- Do not outer-join to views.

2.2. Views
This section covers performance standards related to creation, maintenance and
optimization of Apps views. You should read through the section on SQL
performance standards before reading through this section.

2.2.1. Creating Views

When creating views, the level of view nesting should be one: views should expand
directly to base tables. Do not create views on top of views on top of views, etc.
Avoid PL/SQL functions in view definitions, in both the WHERE clause and the SELECT
clause. A PL/SQL function in a SQL statement does not provide read consistency and
adds significant overhead to SQL execution due to the context switch between PL/SQL
and SQL for each row, even though this overhead is reduced in Oracle9i.

2.2.2. Using Views

Do not use views blindly. Transparent changes to views can severely impact the
performance of clients of the view (e.g. hr_locations, ra_phones).
Views should not be used in Reports, PL/SQL, Java, or PRO*C. Conditional logic should
be used in the code and the code should join directly to base tables. Views should be
constrained in use within the online (i.e. Forms and Self-Service).
Avoid queries such as select * from <view>. They can break code if columns are
added, and they prevent the column elimination optimization.
Instead of joining to _VL views, join directly to the _TL or _B table and include the NLS
filter where language = USERENV('LANG').

2.2.3. View Merging

The Query Transformer attempts to merge the body of the view with the body of the
SQL statement. The optimizer then considers the resulting statement as a single query,
which allows it to consider more efficient join orders and index access paths.
If the view is not merged, the query block making up the view is executed stand
alone, and the results are joined with the parent query. The lack of view merging
can lead to an inefficient plan because joins that could reduce the view answer set
are not pushed inside the view.
Example of view merging:
SELECT h.header_id, h.org_id, sold_to_org.customer_number
FROM oe_sold_to_orgs_v sold_to_org,
oe_order_headers h
WHERE h.order_number = :b1 AND
h.sold_to_org_id = sold_to_org.organization_id

Explain Plan:
SELECT STATEMENT Cost=7, Rows=1
NESTED LOOPS Cost=7, Rows=1
NESTED LOOPS Cost=6, Rows=1
TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL Cost=4, Rows=1
INDEX RANGE SCAN OE_ORDER_HEADERS_U2 Cost=3, Rows=1
TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS Cost=2, Rows=391250
INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 Cost=1, Rows=391250
INDEX UNIQUE SCAN HZ_PARTIES_U1 Cost=1, Rows=5103330

Example of a non-mergeable view (if the keyword VIEW followed by the view name
appears in the plan, then the view is not mergeable):
SELECT h.header_id, h.org_id,
sold_to_org.customer_number
FROM oe_sold_to_orgs_v sold_to_org,
oe_order_headers h
WHERE h.order_number = :b1 AND
h.sold_to_org_id = sold_to_org.organization_id(+)

Explain Plan:
SELECT STATEMENT Cost=15365, Rows=1
NESTED LOOPS OUTER Cost=15365, Rows=1
TABLE ACCESS BY INDEX ROWID OE_ORDER_HEADERS_ALL Cost=4, Rows=1
INDEX RANGE SCAN OE_ORDER_HEADERS_U2 Cost=3, Rows=1
VIEW OE_SOLD_TO_ORGS_V Cost=, Rows=391250
MERGE JOIN Cost=15361, Rows=391250
INDEX FULL SCAN HZ_PARTIES_U1 Cost=11093, Rows=5103330
SORT JOIN Cost=4268, Rows=391250
TABLE ACCESS FULL HZ_CUST_ACCOUNTS Cost=1719, Rows=391250

2.3. PL/SQL
This section covers performance related standards for PL/SQL.

2.3.1. Layers of PL/SQL-Java "objects"

When calling APIs from Java (java -> group APIs -> public APIs -> private APIs),
overhead can be significant; developers should always call the most detailed level
possible. In general, it is recommended to keep the level of nesting to not more than
five.

2.3.2. PL/SQL table usage

PL/SQL tables should not be iteratively searched when there are hundreds or
thousands of records. To implement PL/SQL table searches, one of the following
methods is recommended:

Use an INDEX BY BINARY_INTEGER or INDEX BY VARCHAR2 (available in 9iR2)
table, indexed by the field you want to search by.

For non-numeric keys or numeric keys that exceed the BINARY_INTEGER size
boundary (e.g., IDs derived from SYSGUIDs), consider implementing hash
lookup searches using the DBMS_UTILITY.GET_HASH_VALUE construct. Care
must be taken to resolve hash collisions (two different values that yield the
same hash value). Alternatively, such searches can be replaced by global
temporary tables, indexed by the search key. Recent tests show that this
approach can give almost a two order of magnitude improvement over linear
searches of PL/SQL tables that exceed 1000 entries.

2.3.3. Bulk Binding

The PL/SQL engine executes procedural statements but sends SQL statements to the SQL
engine, which executes the SQL statements and, in some cases, returns data to the
PL/SQL engine. Too many context switches between the PL/SQL and SQL engines can
harm performance. That can happen when a loop executes a separate SQL
statement for each element of a collection, specifying the collection element as a
bind variable.
A DML statement can transfer all the elements of a collection in a single operation, a
process known as bulk binding. If the collection has x elements, using bulk binding
you can perform the equivalent of x SELECT, INSERT, UPDATE, or DELETE statements
using a single operation. This technique improves performance by minimizing the
number of context switches between the PL/SQL and SQL engines. With bulk binds,
entire collections, not just individual elements, are passed back and forth.
To do bulk binds with INSERT, UPDATE, and DELETE statements, you enclose the SQL
statement within a PL/SQL FORALL statement.
Example:
DECLARE
TYPE NumList IS VARRAY(15) OF NUMBER;
lines NumList := NumList();
BEGIN
/* Populate varray */
...
FORALL j IN 1..lines.COUNT
UPDATE RLM_SCHEDULE_LINES SET PROCESS_STATUS = p_status
WHERE line_id = lines(j);
END;

To do bulk binds with SELECT statements, you include the BULK COLLECT clause
before INTO in the SELECT statement. If you are using cursors, you can still
use bulk processing by including the BULK COLLECT clause in the FETCH.
Example of Retrieving Query Results into Collections with the BULK COLLECT
Clause:
DECLARE
TYPE ModelRecTab IS TABLE OF OE_ORDER_LINES%ROWTYPE;
model_recs ModelRecTab;
BEGIN
SELECT * BULK COLLECT
INTO model_recs
FROM OE_ORDER_LINES
WHERE item_type_code = 'MODEL'
AND ...;
END;

Example of bulk-fetch from a cursor into a collection of records:
DECLARE
TYPE ModelRecTab IS TABLE OF OE_ORDER_LINES%ROWTYPE;
model_recs ModelRecTab;
CURSOR c1 IS
SELECT *
FROM OE_ORDER_LINES
WHERE item_type_code = 'MODEL'
AND ...;
BEGIN
OPEN c1;
FETCH c1 BULK COLLECT
INTO model_recs;
CLOSE c1;
END;

2.3.4. Shared pool pinning

Common APIs should be part of the shared pool pinning script. This will reduce
library cache fragmentation.

2.3.5. General PL/SQL performance guidelines

Use variable assignment instead of SQL.
Replace:
select <value> into <var> from dual;
with:
var := value;
A SQL reference to dual requires 5 buffer gets just for the DUAL scan.

Cache frequently referenced values (organization information, codes, lookups etc.).

Take the effort to write "base table" SQLs only; don't use complex views, as
these APIs will be heavily used. Avoid referencing complex views in PL/SQL APIs.

Consider using IF ELSE constructs around simplified queries rather than
encapsulating the logic in a more complex SQL query using OR predicates:

Replace:
select ... from tab
where col1 = l_col1 OR col2 = l_col2

with:
IF l_col1 is not null THEN
select ... from tab where col1 = l_col1;
ELSIF l_col2 is not null THEN
select ... from tab where col2 = l_col2;
END IF;

Use the MERGE statement instead of INSERT/UPDATE whenever applicable.

Avoid passing a large number of parameters individually. Consider using
NOCOPY to pass large parameter data structures or a large number of
parameters by reference.

Use bulk assignment (PL/SQL record level) rather than field-by-field
assignment, when possible.

For dynamic SQL use EXECUTE IMMEDIATE rather than the DBMS_SQL package.

When using EXECUTE IMMEDIATE, the USING construct should be used to
pass values as bind variables, rather than concatenating the values directly into
the executed string. This will eliminate hard parses and abuse of the library
cache.

Reduce DB round trips/context switches by using bulk operations (BULK
COLLECT and FORALL statements).

Don't initialize variables using the G_MISS_NUM, G_MISS_CHAR, or
G_MISS_DATE values to reflect missing values. The new standard calls for
reversing the logic, treating null (default initialization) as missing. For more
information on PL/SQL variable initialization see also the Oracle Applications
Business Object API Coding Standards document.

Avoid large complex queries; decompose them into multiple cursors.

Avoid passing a large number of parameters to a PL/SQL procedure.

PL/SQL variables and corresponding database columns should be of the same
type.

Use DBMS_PROFILER to profile code where bottlenecks can be isolated and
line-level stats can be obtained.

2.4. Java
This section covers performance related standards for Java.

2.4.1. Object Creation

All chained constructors are automatically called when creating an object with new.
Chaining more constructors for a particular object causes extra overhead at object
creation, as does initializing instance variables more than once. Java initializes
variables to the following defaults:

NULL for objects

0 for integer types of all lengths (byte, char, short, int, long)

0.0 for float types (float and double)

FALSE for booleans

There is no need to reinitialize these values in the constructor (although an
optimizing compiler should be able to eliminate the redundant statements). In
general, if you can identify that the creation of a particular object is a bottleneck,
either because it takes too long or because a great number of those objects is being
created, you should check the constructor hierarchy to eliminate any multiple
initializations of instance variables.
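As a minimal sketch (the class and its fields are illustrative, not taken from Apps code), the fields below already hold the documented JVM defaults, so the no-argument constructor can stay empty rather than repeating those assignments:

```java
public class OrderLine {
    private int quantity;      // JVM default: 0 -- no constructor assignment needed
    private boolean shipped;   // JVM default: false
    private String reference;  // JVM default: null

    // Relying on the defaults above avoids redundant initializations
    // being repeated down a constructor chain.
    public OrderLine() {
    }

    public int getQuantity() { return quantity; }
    public boolean isShipped() { return shipped; }
    public String getReference() { return reference; }
}
```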
You can avoid constructors by unmarshalling objects from a serialized stream,
because deserialization does not use constructors. However, serializing and
deserializing objects is a CPU-intensive procedure and is unlikely to speed up the
application. Another way to avoid constructors when creating objects is by creating
a clone() of an object. You can create new instances of classes that implement the
Cloneable interface using the clone() method. These new instances do not call any
class constructor, thus avoiding constructor initializations. Cloning does not save a
lot of time because the main overhead in creating an object is in the creation, not
the initialization. However, when there are extensive initializations or many objects
generated from a class with some significant initialization, this technique can help.
The compiler can canonicalize Strings that are equal and are compiled in the same pass.
The String.intern() method canonicalizes strings in an internal table. There is only
one copy of each String that has been interned, no matter how many references
point to it. Since Strings are immutable, two different methods can share a copy of
the same string. String constants are automatically interned.
There are two reasons for interning Strings: to save space, by removing String
literal duplicates, and to speed up String equality comparisons.
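A minimal sketch of the second point (the "perf" value is illustrative): a string built at runtime is a distinct object, but its interned form is the same object as the equal literal, so a fast == reference comparison works:

```java
public class InternDemo {
    // True when the interned form of s is the very same object as the
    // interned literal "perf" -- an identity (==) comparison, not equals().
    public static boolean sharesCanonicalCopy(String s) {
        return s.intern() == "perf";
    }

    public static void main(String[] args) {
        // Built at runtime, so NOT automatically interned:
        String dynamic = new StringBuilder("per").append("f").toString();
        System.out.println(dynamic == "perf");            // false: distinct objects
        System.out.println(sharesCanonicalCopy(dynamic)); // true: one canonical copy
    }
}
```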

2.4.2. Strings and StringBuffers

Use StringBuffer rather than the String concatenation operator (+). The String
concatenation operator + involves a lot of work: a new StringBuffer is created, the two
arguments are added to it with append(), and the final result is converted back with a
toString(). This increases cost in both space and time. Especially if you're appending
more than one String, consider using a StringBuffer directly instead.
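For example, a sketch of building a comma-separated list (the method and values are illustrative): one StringBuffer with repeated append() calls avoids the intermediate objects that += concatenation would create on every loop iteration:

```java
public class ConcatDemo {
    public static String join(String[] items) {
        // One buffer reused across the whole loop, instead of a new
        // StringBuffer + toString() per += concatenation:
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < items.length; i++) {
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(items[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new String[] {"a", "b", "c"})); // a, b, c
    }
}
```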


2.4.3. Coding Best Practices

Declare constant variables as final static. That way they will be allocated and
initialized once.

Declare methods that should not be overridden as final. They can be candidates
for inlining.

Avoid accessors for public members. Choose private members wisely.

Avoid casting
Type-specific code is not just faster than code with type casts; it's also
cleaner and safer. Unfortunately, it is sometimes difficult to avoid the use of
casting.
Upcast operations (also called widening conversions in the Java Language
Specification) convert a subclass reference to an ancestor class reference.
This casting operation is normally automatic, since it's always safe and can
be implemented directly by the compiler.
Downcast operations (also called narrowing conversions in the Java Language
Specification) convert an ancestor class reference to a subclass reference.
This casting operation creates execution overhead, since Java requires that the
cast be checked at runtime to make sure that it is valid. If the referenced
object is not an instance of either the target type for the cast or a subclass of
that type, the attempted cast is not permitted and must throw a
java.lang.ClassCastException. Method calls on cast objects are more
expensive.

Local variables are accessed faster than class members, since they are stored
on the stack rather than the heap.
Amount localAmount = MyClass.amount;
for (int x = 0; x < 10000; x++)
    total += localAmount;

Do not use Exceptions for code path execution. An Exception object and a
snapshot of the stack have to be created. Prefer an instanceof check to
catching a ClassCastException.
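A minimal sketch of the last point (the types and method names are illustrative): test the type with instanceof instead of attempting the cast and catching ClassCastException, which forces the JVM to build an exception object and a stack snapshot:

```java
public class CastDemo {
    // Slow style: drives the logic with an exception.
    static boolean isStringViaException(Object o) {
        try {
            String s = (String) o;
            return s != null;
        } catch (ClassCastException e) {
            return false;
        }
    }

    // Preferred style: a plain instanceof check, no exception machinery.
    static boolean isStringViaInstanceof(Object o) {
        return o instanceof String;
    }

    public static void main(String[] args) {
        System.out.println(isStringViaInstanceof("abc"));              // true
        System.out.println(isStringViaInstanceof(Integer.valueOf(1))); // false
    }
}
```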

Use buffered I/O: wrap an InputStreamReader in a BufferedReader.

Serializable classes: use transient to avoid serialization of unnecessary data.

Use reflection to minimize eager class loading for large classes.
if (x == 1)
    AM = new ApplicationModule();
vs
if (x == 1)
    AM = Class.forName("ApplicationModule").newInstance();

Compiler options:
-g:none : no debugging information.
-O : applies optimizations.


2.4.4. Synchronization

In the JDK interpreter, calling a synchronized method is typically 10 times slower
than calling an unsynchronized method. With JIT compilers, this performance gap
has increased to 50-100 times. Avoid synchronized methods if you can; if you can't,
synchronizing on methods rather than on code blocks is slightly faster.
Bad candidates for synchronization:

Read only objects.

Thread local objects.

Synchronization overhead:

Acquire monitor, lock object, execute method then release monitor.

No ordering for synchronization - depends on the thread scheduler.

Only one synchronized method can be executed on an object at a time; serial
access.
public class MyCollection{
public synchronized boolean put(){ }
public synchronized boolean get(){ }
}

Synchronize blocks of code on different objects to avoid serial execution.
public void modify() {
    synchronized (employee) {
        // modify employee info
    }
}

Create non-synchronized classes with synchronized wrappers if needed.
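A sketch of the wrapper approach using the Collections utility methods (the list contents are illustrative): keep the core collection unsynchronized, and add a synchronized view only where a shared instance actually needs it:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WrapperDemo {
    public static void main(String[] args) {
        // Unsynchronized by default: no locking cost for thread-local use.
        List plain = new ArrayList();
        plain.add("order-1");

        // Synchronized view for the shared case only; both views are
        // backed by the same underlying list.
        List shared = Collections.synchronizedList(plain);
        shared.add("order-2");

        System.out.println(plain.size()); // 2
    }
}
```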

2.4.5. Collections

Legacy collections (like Vector and Hashtable) are synchronized, whereas new
collections (like ArrayList and HashMap) are unsynchronized, and must be
"wrapped" via Collections.synchronizedList or Collections.synchronizedMap if
synchronization is desired. Do not use synchronized classes for thread-local
collections.
Do not use object collections for primitive data types. Custom collection classes
should be used instead.
Size collections at their expected maximum size to avoid frequent reallocations,
and rehashing in the case of hashtables or hashmaps.
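For instance (the capacity values are illustrative), passing the expected capacity to the constructor avoids the incremental grow-and-copy or rehash steps as the collection fills:

```java
import java.util.ArrayList;
import java.util.HashMap;

public class SizingDemo {
    // Builds a map of n entries with the table pre-sized so it is not
    // rehashed while filling (0.75 is HashMap's default load factor).
    public static HashMap buildIndex(int n) {
        HashMap index = new HashMap((int) (n / 0.75f) + 1);
        for (int i = 0; i < n; i++) {
            index.put(Integer.valueOf(i), "row-" + i);
        }
        return index;
    }

    public static void main(String[] args) {
        // Backing array allocated once at the expected size, instead of
        // growing repeatedly from the small default capacity:
        ArrayList rows = new ArrayList(1000);
        System.out.println(buildIndex(1000).size()); // 1000
    }
}
```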
Use java.util.Arrays.asList(): it returns a fixed-size list backed by the array, and
changes to the list write through to the underlying array.



2.4.6. Garbage Collection

Canonicalization is one way to reduce garbage collection: fewer objects mean less
to garbage-collect. Similarly, the pooling technique also tends to reduce garbage-collection
requirements, partly because you are creating fewer objects by reusing
them, and partly because you deallocate memory less often by holding on to the
objects you have allocated. Another technique for reducing garbage-collection
impact is to avoid using objects where they are not needed. For example, there is no
need to create an extra unnecessary Integer to parse a String containing an int
value, as in:
String string = "55";
int theInt = new Integer(string).intValue();

Instead, there is a static method available for parsing:
int theInt = Integer.parseInt(string);

When a class does not provide a static method, you can sometimes use a dummy
instance to repeatedly execute instance methods, thus avoiding the need to create
extra objects.
Using primitive data types, in cases when you can hold a value in a primitive-data-type
format rather than an object format, can also reduce garbage collection. For
example, if you have a large number of objects each with a String instance variable
holding a number (e.g., "1492", "1997"), it is better to make that instance variable
an int data type and store the numbers as ints, provided that the conversion
overheads do not swamp the benefits of holding the values in this alternative
format.
Be aware of which methods alter objects directly without making copies and which
ones return a copy of an object. For example, any String method that changes the
string (such as String.trim()) returns a new String object, whereas a method like
Vector.setSize() does not return a copy. If you do not need a copy, use (or create)
methods that do not return a copy of the object being operated on.
Avoid using generic classes that handle Object types when you are dealing with
basic data types. For example, there is no need to use Vector to store ints by
wrapping them in Integers. Instead, implement an IntVector class that holds the ints
directly.
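A minimal sketch of such a primitive-typed collection (the class name and growth policy are illustrative): holding ints directly avoids one Integer wrapper object per element, so there is simply less for the collector to track:

```java
public class IntVector {
    private int[] data;
    private int size;

    public IntVector(int capacity) {
        // Guard against zero so the doubling growth below always works:
        data = new int[Math.max(1, capacity)];
    }

    public void add(int value) {
        if (size == data.length) {
            // Grow by doubling and copy the old contents across.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }

    public int get(int index) {
        return data[index];
    }

    public int size() {
        return size;
    }
}
```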
Avoid Finalization (GC perspective). Finalizers prolong the life of a non-referenced
object!
Do not call System.gc().

2.4.7. Weak & Soft References

WeakReferences and SoftReferences differ essentially in the order in which the
garbage collector clears them. The garbage collector does not clear SoftReference
objects until all WeakReferences have been cleared.
WeakReferences are intended for caches that normally take up more space and are
the first to be reclaimed when memory gets low. SoftReferences are intended for
canonical tables that are normally smaller, and developers prefer them not to be
garbage-collected unless memory gets really low. This differentiation between the
two reference types allows cache memory to be freed up first if memory gets low;
only when there is no more cache memory to be freed does the garbage collector
start looking at canonical table memory.
Java 2 comes with a java.util.WeakHashMap class that implements a hash table with
keys held by weak references.
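A minimal usage sketch (the key and value contents are illustrative): a WeakHashMap entry becomes eligible for removal once no strong reference to its key remains, so a cache built on it cannot keep its keys alive by itself:

```java
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) {
        WeakHashMap cache = new WeakHashMap();

        // new String(...) so the key is not a pinned, interned literal:
        String key = new String("ORDER-42");
        cache.put(key, "cached payload");

        // While a strong reference ('key') exists, the entry is safe:
        System.out.println(cache.containsKey(key)); // true

        // Once 'key' is dropped and no other strong reference exists,
        // the collector may silently remove the entry -- the map itself
        // never keeps its keys alive.
    }
}
```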
A WeakReference normally maintains references to elements in a table of
canonicalized objects. If memory gets low, any of the objects referred to by the table
and not referred to anywhere else in the application except by other weak
references are garbage-collected. This does not affect the canonicalization because
only those objects not referenced anywhere else are removed.

2.4.8. JDBC Guidelines

Create a pool of DB connections

Use reusable connection class to encapsulate driver and connection details

Use Oracle sequences - encapsulate sequence functions into a generic class

Use metadata for dynamic SQL: getMetaData()

Use the index version of the getColumn method instead of the by-name version:
getColumn(index) vs. getColumn(name)

Use native data type get methods instead of getObject:
getNUMBER()
getString()
getDate()
getClob()

Use stored procedures for DB intensive operations

Use of prepared statements eliminates reparses and reduces round-trips and
shared pool overhead:
ps = dbconn.prepareCall("...");
ps.setInt(1, value);

Use of defined column data types reduces JDBC round-trips to obtain column data
types:
stmt = dbconn.createStatement();
stmt.defineColumnType(1, Types.VARCHAR, length);

Use batch operations; they are extremely efficient when dealing with a large
number of rows and reduce the number of round-trips:
ps = dbconn.prepareStatement("...");
ps.setExecuteBatch(100);
ps.set<type>(n, value);
ps.executeUpdate();

PreFetch can be set at the connection or statement level. It improves overall
fetch performance:
dbconn.setDefaultRowPrefetch(100);
rset = stmt.executeQuery("select ename from emp");
rset.next();

2.4.8.1. JDBC and Multithreading
Do not use multithreading on the same JDBC connection object. The Oracle
JDBC driver supports multithreading, however, all JDBC methods are
synchronized. This means that if two threads are trying to execute a
statement on the same connection object then one of them will have to wait
until the other one finishes. This practically defeats the purpose of allowing
multithreading and only adds to the complexity of your program. In addition
to this, using JDBC in a multithreaded fashion is error prone and not well
tested. This will ultimately expose your program to runtime errors that
maybe hard to debug and fix.


2.4.9. Memory Footprint

Optimize the SQL statements for the VOs. Avoid selecting unnecessary columns in
VOs, since all column data is brought into the VM. Specific VOs should be created for
each page.
Restrict the number of rows: rowsetiterator fetches all the rows, and BC4J places the
values in its own collection.
The size of out-binds should always be explicitly defined.

JDBC defaults to 4000 bytes for VARCHAR2

OracleStatement.defineColumnType(int, int, int)

set Precision for View Attribute (in XML)

Release AMs to pool:

DB connections are tied to AMs

pagecontext.releaseRootApplicationModule()

retainAM=N

2.4.10. Reducing Database Trips


Set the VO prefetch size to a proper value by calling VO.setFetchSize, or at design time
in JDev 3.2 (Tuning tab). Avoid the false predicate approach where 1=2. Instead, use
VO methods to set the state:
vo.setMaxFetchSize(0)
vo.setPreparedForExecution(true)

2.4.11. Deployment
Package the application in zip/jar files:

Reduced number of files.

Use non-compressed format.

JVM options:

-Xnoclassgc : disable class gc.

-Xms, -Xmx : initial and maximum heap size.

-noverify : disables byte code verification.

-verbosegc : monitor gc activity.

-native | -green : select native or green threads.

2.4.12. Green Threads versus Native Threads


Green threads are the default threads provided by the JDK. Native threads are the
threads that are provided by the native OS. Native threads can provide several
advantages over the default green threads implementation. If you run Java code in
a multi-processor environment, the Solaris kernel can schedule native threads on
the parallel processors for increased performance. By contrast, green threads exist
only at the user level and are not mapped to multiple kernel threads by the
operating system. Performance enhancement from parallelism cannot be realized
using green threads. Also, when using native threads, the VM can avoid some
inefficient remapping of I/O system calls that is necessary when green threads are
used.

2.5. Forms
2.5.1. Forms Blocks

Do not base blocks on switched UNION/UNION ALL views whereby only one case can
be true depending, for example, on the selection in a search block. Instead, you should
change the query source programmatically by using
set_block_property(..DATA_SOURCE_NAME..).
Blocks shouldn't be based on complex views. Instead, they should be decomposed
and logic should be moved to post-query triggers, since round-trips from the Forms
Server to the DB are not expensive.
Example: the Cash Management Find Window is based on the complex view
CE_AVAILABLE_TRANSACTIONS_V, which consists of a 5-way UNION of 5 other views.
Typically just one branch of the view is used, based on the transaction type. Changing
the data source dynamically allows the main view to be avoided and the correct
view to be used based on the type. As a result, shared memory was reduced from 4.8MB
to 347K and parse time from 8 seconds to 1.4.

2.5.2. Use of bind variables

Use of literals should be avoided. Instead, the PL/SQL variable should be assigned to a
Forms global, and the global variable name appended to the SQL statement, not the
actual value.

2.5.3. LOVs

If a list of values query can return more than 100 values, the list of values must be
defined with Filter Before Display = True, so the user can restrict the number of values
displayed. LOVs should reference base tables only, not views. They should not be
based on UNIONs; instead, the user should be prompted to select a type first.
Search screens and UIs should prevent execution of blind queries as well as
non-selective queries by enforcing that a certain number of characters (other than
%) is entered as search criteria. As a general guideline, the minimum number of
characters required is:
2 char(s) for result sets between 100 and 1000 rows
3 char(s) for result sets between 1000 and 10000 rows
4 char(s) for result sets between 10000 and 1000000 rows
Note: This does not apply to exact searches.
Do not pre-pend % by default to LOVs or search fields. It will disable index use.


2.5.4. Record Groups

Record groups should not be based on complex SQL statements so that they can be
reused for numerous LOVs. Each record group should be based on a tuned SQL
statement specific to the needs of a LOV. If possible, use create_group_from_query so
the query will not be executed and the record group populated until the record group
is needed.

2.5.5. Caching

Validations should not be performed by calling server-side packages. That increases
the load on the DB server, and it is not easy to grow the data server. Also, use Forms
PL/SQL units so that SQL results can be cached in the Form, thus reducing SQL
execution and DB round-trips.

2.5.6. Item Properties

2.5.6.1. Case Insensitive Query Property
Do not set the case insensitive property on items that do not need case insensitivity or
items that are always stored in fixed case. The query generator does not presume that
functional indexes are available, and creates a query like:
select ... from t
where upper(X) = 'BLAKE'
or X like 'Bl%'
or X like 'bL%'
or X like 'BL%'
or X like 'bl%'

2.5.6.2. Visible Property
Folder blocks should not have a large number of fields with VISIBLE set to TRUE. Fields
set to VISIBLE at design time should be the ones that the majority of users would want
to see.
In a form that has Stacked canvases, only the canvas that is displayed when the
form opens should have the VISIBLE property set to TRUE. All other stacked
canvases should have VISIBLE set to FALSE. If that depends on runtime parameters
or profiles, all stacked canvases should have property VISIBLE set to FALSE and it
should be programmatically turned on at runtime for the stacked canvas that needs
to be displayed. The same applies to content canvases that are not displayed at
form startup.
2.5.6.3. Display Property
If a form has multiple Tabs defined, the DISPLAY property should be set to NO for all
tabs except the one that is displayed initially when the form is loaded.

2.6. Reports
2.6.1. Reports SQL

Large and complex SQL statements with a large number of placeholders should be
avoided, since a considerable amount of time will be taken to parse them. This is
mid-tier parsing done by the Reports engine, not DB parsing when the statement is
executed.

2.6.2. Initialization Values

Information that doesn't change for the duration of the report should be cached. Use
a Before Report trigger to cache those values instead of performing unnecessary joins
in the main SQL or calling APIs on a row-by-row basis.
Example:
SELECT ASP.INCOME_TAX_REGION_FLAG
FROM AP_SYSTEM_PARAMETERS ASP;

2.6.3. Break Groups

Limit the number of break groups. Oracle Reports appends each column of the break
group to the main SQL, which will result in a more expensive sort and may influence
the execution plan.

2.6.4. Computed Columns

Try to place all computations into SQL statements. Avoid formula columns, since they
can be expensive if the report produces a large number of rows. Aggregations and
totals should be performed in the SQL statement, not in Reports.

2.6.5. Lexical Parameters

Use lexical parameters to dynamically construct Reports queries. Instead of creating
complex SQL statements with UNIONs or ORs, use lexical parameters to construct the
appropriate query based on the user parameters.
When assigning values to lexical parameters, use bind variables. Example (After
Form trigger):

Proper use with bind variables:
IF :p_trx_number_low is not null THEN
:lp_trx_num_low := ' and a.trx_number >= :p_trx_number_low ';
END IF;

Improper use with literals:
IF :p_trx_number_low is not null THEN
:lp_trx_num_low := ' and a.trx_number >= '||:p_trx_number_low;
END IF;

Use equality parameters whenever possible; they produce a more efficient SQL
statement and execution plan. To achieve that you can also use lexical parameters.
Example:
IF (:p_return_date_low = :p_return_date_high) THEN
:lp_return_date := ' and h.ordered_date = :p_return_date_low ';
ELSE
:lp_return_date :=
' and h.ordered_date between :p_return_date_low and :p_return_date_high ';
END IF;

2.6.6. Defaulting Report Parameters

Report parameters should not be defaulted to, for example, min and max values if the
user doesn't provide parameters. Apps Reports should require a minimal set of
mandatory parameters to avoid executing blind queries. If no values are provided,
default them to the minimum data window that makes functional sense.


2.7. PRO*C
2.7.1. Array processing

The array interface allows you to declare a local C array and populate it with values.
The array can then be used to perform an insert, update or delete. Arrays enable
you to reduce the number of round-trips to the database. For example, suppose you
need to read 1000 rows from the database. Instead of opening a cursor and looping
through each fetch until all rows are retrieved, use an array of 1000 elements, which
will result in only one SQL fetch call. Using arrays can considerably reduce SQL call
overhead as well as network overhead if running in a distributed environment.
With PRO*C releases prior to release 8, you cannot use an array of structures
within PRO*C. You can, however, use arrays within a single structure to
perform batch operations. PRO*C 8.0 allows you to use an array of structures to
perform batch SQL and other object-type operations. Using an array of structures
allows for more elegant programming and also offers more flexibility in organizing
the data structures.
When using array processing, always declare the same length for each array if you
are using multiple arrays within a single batch operation. You need to do the
following:

Declare a standard batch size.

Declare all arrays based on the batch size, and use the FOR
:batch_size clause when you perform SQL operations to specify
explicitly the number of rows to be processed.
For SELECT and DML statements, sqlca.sqlerrd[2] reports the cumulative sum of
rows processed. In some cases, using the array interface has increased performance by
an order of magnitude. Please refer to the PRO*C/PRO*C++ Precompiler
Programmer's Guide (Using Host Arrays section).
2.7.1.1. Selecting the Batch Size
Choosing the optimal batch size depends on many factors, such as the size of the
data set being fetched., the performance characteristics of the network between the
application and the database server, and the latency of round trips. Larger batch
sizes increase the amount of network traffic between the application and the
database server. For networks experiencing performance bottlenecks, using a large
batch size can have a negative effect on performance. A large number of columns,
especially those with LONG row lenghts, can cause a low batch operation. Therefore
choose an appropriate batch size by making the batch size parameter dynamically
configurable upon execution. As a rule of thumb, start with a small batch size (500)
and gradually increment it until you achieve optimal performance.
Do not statically allocate the arrays or structures. Always allocate them
dynamically, using a memory allocation routine such as malloc(), to ensure that
sufficient memory exists. This enables you to free the arrays or structures once
they are no longer needed, and it reduces application startup overhead by
allocating only the amount of memory needed. Also, use the FOR :batch_size clause
when you perform SQL operations to specify explicitly the number of rows to be
processed.
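The pattern above can be sketched in PRO*C as follows. This is a minimal,
hypothetical sketch: the table, column, and cursor names (xx_invoices, invoice_id,
amount, inv_cur) are illustrative, not taken from the standard itself.

```c
/* Hypothetical sketch: dynamically allocated host arrays with FOR :batch_size. */
#include <stdio.h>
#include <stdlib.h>

EXEC SQL BEGIN DECLARE SECTION;
int     batch_size;
int    *invoice_ids;
double *amounts;
EXEC SQL END DECLARE SECTION;

void fetch_invoices(int requested_batch)
{
    batch_size  = requested_batch;                  /* e.g. read from a config parameter */
    invoice_ids = malloc(batch_size * sizeof(int));
    amounts     = malloc(batch_size * sizeof(double));

    EXEC SQL DECLARE inv_cur CURSOR FOR
        SELECT invoice_id, amount FROM xx_invoices;
    EXEC SQL OPEN inv_cur;

    for (;;) {
        /* FOR :batch_size limits the fetch to exactly batch_size rows */
        EXEC SQL FOR :batch_size
            FETCH inv_cur INTO :invoice_ids, :amounts;
        /* sqlca.sqlerrd[2] holds the cumulative row count;
           process this batch of rows here */
        if (sqlca.sqlcode == 1403) break;           /* ORA-1403: no more data */
    }

    EXEC SQL CLOSE inv_cur;
    free(invoice_ids);                              /* release once no longer needed */
    free(amounts);
}
```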
34

Oracle Confidential

Version:

2.7.2. Linking with the shared library

In PRO*C releases 2.1 and above, a client shared library, libclntsh.so, is provided so
that PRO*C applications can be linked against it. Doing so reduces the size of
PRO*C executables from 2-3 MB to 50-100 KB on average. This results not only in
disk storage savings but also saves compile and link time. Since the executables are
linked with the shared library, only functions that are called are paged in, which
increases the performance of the executable since the memory requirements drop
significantly. To make use of the client shared library, relink your PRO*C and OCI
programs with the libclntsh.so library (-lclntsh), or libclntsh.sl on HP-UX. Set the
environment variable ORA_CLIENT_LIB to shared before compiling.
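A hypothetical relink sequence might look like the following. The file names and
include/library paths are examples only; exact compiler flags vary by platform and
release.

```sh
# Assumes ORACLE_HOME is set; paths and flags are illustrative.
export ORA_CLIENT_LIB=shared
proc iname=myapp.pc
cc -o myapp myapp.c -I$ORACLE_HOME/precomp/public \
   -L$ORACLE_HOME/lib -lclntsh
```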

2.7.3. PRO*C Compile options

There are several PRO*C precompiler options that can improve cursor management
performance. The HOLD_CURSOR option, when set to YES, causes Oracle to hold the
cursor handle associated with the SQL statement in the cursor cache. This
eliminates the need to reparse the SQL statement if it is reexecuted at a later
stage in the application, since the cursor can be reused. HOLD_CURSOR set to NO
causes the cursor handle to be marked as reusable after the SQL statement is
executed and the cursor is closed. Set HOLD_CURSOR to YES to increase the cursor
cache hit ratio.
The RELEASE_CURSOR option, when set to YES, releases the private SQL area
associated with the SQL statement cursor. This means that the parsed
representation of the SQL statement is removed. If the SQL statement is reexecuted
later, it must be parsed again, and a private SQL area must be allocated. When
RELEASE_CURSOR = NO, the cursor handle and private SQL area are not reused
unless the number of open cursors exceeds MAXOPENCURSORS. Set
RELEASE_CURSOR = NO and HOLD_CURSOR = YES to increase the cursor cache
hit ratio. Set the MAXOPENCURSORS option to the maximum number of cursors
concurrently used in your application.
PRO*C 8i provides the ability to reduce network round trips to the database server
by prefetching rows of a cursor. The PREFETCH precompiler option allows you to
specify the number of rows to be prefetched. The default value is 1, and the
maximum is 65,535. Prefetching is primarily useful for cursors that do not perform
array processing or fetch rows in batches; however, it can also be used with array
processing. For example, if your cursor fetches in batches of 100 rows and the
PREFETCH option is set to 500, then after 500 rows have been fetched by the
program, another database round trip occurs to fetch the next set of 500 rows.
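As an illustration, the options above might be combined on a single precompile
line like this. The file name and option values are hypothetical examples, not
recommended settings.

```sh
# Illustrative precompile line; tune values for your own application.
proc iname=myapp.pc hold_cursor=yes release_cursor=no \
     maxopencursors=120 prefetch=500
```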

2.7.4. Parallel processing using PRO*C

It is important to divide the tasks in your application so that they can be
parallelized. You can achieve this by using the fork() and exec() calls to invoke
multiple PRO*C applications, or preferably by using the threads feature, which
helps you parallelize your application by creating several threads.
The threads option allows a high degree of parallelism by establishing separate
runtime contexts via the EXEC SQL CONTEXT ALLOCATE statement; each thread may
use a different context to perform SQL operations. Using threads can increase the
performance of your application significantly by processing SQL statements
simultaneously using lightweight threads, and it is more efficient and elegant than
the fork() and exec() technique. When a fork() is issued from a PRO*C application
after a connection has been established, the child process will not be able to make
use of the connection. Although the connection to Oracle is treated as a file
descriptor (socket), and fork() duplicates all open file descriptors, the process id
(PID) of the process that establishes the connection is also used to manage the
connection. Therefore, after the fork() and exec(), you may get ORA-1012 errors
(not logged on) when SQL statements are issued from the child process. If you still
intend to use the fork() and exec() technique, the preferred method is to issue the
fork() and exec() before any connection is established, and then establish separate
connections in the parent and each child process.
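A minimal sketch of the per-thread context pattern follows. The worker function,
connect string, and thread wiring are hypothetical; thread creation itself (e.g. via
pthreads) is omitted for brevity.

```c
/* Sketch only: EXEC SQL ENABLE THREADS must be executed once in main()
   before any thread uses embedded SQL. */
EXEC SQL BEGIN DECLARE SECTION;
char *connect_string = "scott/tiger";   /* illustrative credentials */
EXEC SQL END DECLARE SECTION;

void *worker(void *arg)
{
    sql_context ctx;

    /* Each thread allocates and uses its own runtime context */
    EXEC SQL CONTEXT ALLOCATE :ctx;
    EXEC SQL CONTEXT USE :ctx;
    EXEC SQL CONNECT :connect_string;

    /* SQL statements issued here run on this thread's own connection */

    EXEC SQL COMMIT WORK RELEASE;
    EXEC SQL CONTEXT FREE :ctx;
    return NULL;
}
```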

2.7.5. Object Cache

Two object interfaces, associative and navigational, are used to manipulate
transient copies of objects and persistent objects, respectively. Objects allocated
via the EXEC SQL ALLOCATE statement are transient copies of persistent objects in
the Oracle database. Once an object is fetched into the object cache, updates can be
made to the transient copy, but explicit SQL statements are required to make the
changes persistent in the database. You can use the EXEC SQL FREE statement to
free an object from the object cache, and the EXEC SQL CACHE FREE ALL statement
to free the entire object cache memory. The associative interface is typically used
when accessing a large collection of objects, accessing objects that are not
referenceable, or performing update or insert operations that apply to a set of
objects. The navigational interface is generally used when accessing a small set of
distinct objects. In the navigational interface, the EXEC SQL OBJECT FLUSH
statement flushes the persistent changes to the database.
Two init.ora parameters control the optimal size of the object cache:

OBJECT_CACHE_MAX_SIZE_PERCENT: specifies the maximum size of
the object cache as a percentage of the optimal size.

OBJECT_CACHE_OPTIMAL_SIZE: specifies the size to which the object
cache is reduced when it exceeds the maximum size.

2.7.6. DML RETURNING

This feature allows values to be returned as part of a DML statement. It allows two
SQL statements to be combined into one, thus reducing server round trips.
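A minimal sketch of the RETURNING clause (table, sequence, and bind names are
illustrative): the insert and the subsequent lookup of the generated key are
combined into a single statement and a single round trip.

```sql
-- One round trip: insert the row and get the generated key back
INSERT INTO xx_orders (order_id, amount)
VALUES (xx_orders_s.NEXTVAL, :amt)
RETURNING order_id INTO :new_order_id;
```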

2.7.7. The MAKE FILE

When you develop applications, always use make files to compile and link code; do
not use manual compile scripts to produce executables. Use the new make files
provided with each release, as they incorporate new functionality; code compiled
with manual scripts under a previous release may not link properly under a new
release. Use the sample make files provided (proc.mk or demo_proc.mk) to compile
and link your application. They provide a lot of functionality by checking and
resolving dependencies and reporting errors caused by dependency violations.

2.8. Discoverer
This section covers performance related standards for Discoverer.
Discoverer is designed as an ad-hoc query tool; for true enterprise reporting, Oracle
Reports should be used. Its main focus is to enable end users to access data from
the database, produce standard reports, and perform powerful analytics (ranking,
top ten, drilling capabilities). To provide this analytical capability, Discoverer
creates an indexed cubic cache in the middle tier. The default settings for the
cache are large, so that Discoverer takes advantage of the memory on the server. By
modifying those settings, the system resources available to Discoverer can be
controlled.
Avoid returning tens of thousands of rows. Provide parameters to reduce the number
of rows returned.
The data model should support efficient reporting from Discoverer. If all required
data is already stored in denormalized tables or in materialized views, Discoverer
reports will be simpler and perform better. It is advisable not to create complex
folders, since they are in essence views created on top of other views or tables
(other complex or base folders); whenever you select any information from a
complex folder, the whole complex query is executed. If you are looking for
particular information, try to retrieve a minimal number of rows and involve a
minimal number of tables and views, which is best achieved by selecting from base
folders.
For more information on Oracle Discoverer, see also the Oracle white paper
Oracle9iAS Discoverer Best Practices for release 1.0.2.2.

2.9. Materialized Views


Design fast-refreshable MVs based on 9iR2 functionality.
Consider using nested materialized views to break up a complex MV that is not fast
refreshable.
Materialized views should be created with the REFRESH FAST ON DEMAND option,
not ON COMMIT.
To speed up the refresh of a materialized join view (MJV), create indexes on the
materialized view columns that store the rowids of the fact table.
Always create MV logs with proper storage parameters, as improper storage
parameters can add significant overhead during delta capture.
Indexes on MVs should be created on the columns used in UI queries.
Creating an index on the SNAPTIME$$ column of the MV log will impact the refresh
time as well as MV log maintenance.
Prebuilt tables should not be used; the prebuilt option does not impose the integrity
check on the MV and leaves it up to the users to verify it, making it an unreliable
option.
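A minimal sketch of the recommended pattern (table, column, and MV names are
illustrative; a real fast-refreshable MV has additional prerequisites documented by
Oracle):

```sql
-- MV log capturing the rowid, a prerequisite for fast refresh
CREATE MATERIALIZED VIEW LOG ON xx_sales
  WITH ROWID (sale_id, amount)
  INCLUDING NEW VALUES;

-- Fast-refreshable MV, refreshed on demand rather than on commit
CREATE MATERIALIZED VIEW xx_sales_mv
  REFRESH FAST ON DEMAND
  AS SELECT rowid AS sales_rid, sale_id, amount
       FROM xx_sales;

-- Index on the stored rowid column to speed up refresh
CREATE INDEX xx_sales_mv_n1 ON xx_sales_mv (sales_rid);
```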


2.10. Data Modeling

This section covers performance related standards for Data Modeling, primarily
physical design standards.
Scalability of an application greatly depends on the data model design. A successful
data model should result in an easy-to-code, well-performing application. Tuning
database parameters will not address the issues of a non-scalable model.

2.10.1. Data Modeling for OLTP


OLTP environments tend to be highly normalized. Denormalization has the
disadvantages of longer row lengths, which may slow down DML operations, and
generally higher data volume. On the other hand, a high level of normalization often
requires complex SQL statements with multiple table joins that can limit application
performance.

2.10.2. Arrange most used/accessed columns first in a new table


There are several ways to arrange columns in a table, e.g. mandatory columns
together or functionally related columns together. From the performance
perspective, it is better to start with the most frequently used/accessed columns.
Oracle rows are variable width, so if a SQL statement accesses column 1 and
column 15 frequently, the query pays the price of fast-forwarding through
columns 2 through 14 for each row access.
Below is the recommended order of some of the major columns/column groups
usually found in a table.
Note that this scenario only applies when you create the table for the first time; how
you add columns after the first time/release is discussed in a separate rule below.
Primary key column(s)
OBJECT_VERSION_NUMBER column
Unique key column(s)
(Remaining) Really heavily used/accessed column(s)
(Remaining) Mandatory foreign key column(s)
(Remaining) Mandatory column(s)
(Remaining) Optional foreign key column(s)
(Remaining) Optional column(s)
WHO-column group
Concurrent program column group
Flex field column group
For more information on physical data design standards see also Physical Data
Model Recommendations for the Oracle eBusiness Suite.


2.10.3. Primary Keys


Primary key constraints improve performance because they allow the optimizer to
make use of the constraint information when optimizing joins and filter predicates.
They also help with query rewrite.
2.10.3.1. Do not use Global Unique ID as primary key
Consider using a sequence/ID column instead of the Oracle Global Unique ID (GUID)
feature as the primary key column for tables. A GUID takes up more storage space,
and selecting a new GUID value can be time consuming.
2.10.3.2. Surrogate Keys
Surrogate keys are system-generated primary keys. An example of a surrogate key
is creating an additional header_id column on the order_headers_all table as
opposed to using the order_number as the PK. A surrogate key cannot by itself
enforce business uniqueness; the true natural key of the table must also be kept
unique. Surrogate keys may be considered if the true primary key can change, or if
the primary key consists of numerous columns resulting in very large keys
(composite keys of more than 4 columns). Disadvantages of surrogate key use:

more indexes are required

slower DML operations

more joins are needed in order to navigate through the model;
the number of required joins is proportional to the level of nesting of
surrogate keys

scalability problems (moving temperature problem)

2.10.4. NULL columns


ID or critical flag columns should not use NULL to reflect a processing-ready state.
However, NULL can be used to reflect process completion. Setting the flag to NULL
after successful completion (i.e. for the predominant value) results in a small and
very efficient index.
NULL should never be used as a meaningful value for IDs and critical flags you
search on. NULL values are not indexed, so conditions like
PORL.LINE_LOCATION_ID IS NULL lead to full table scans.
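A minimal sketch of the pattern (table and column names are illustrative): rows
pending processing carry a non-NULL flag and completed rows are set to NULL, so
the index stays small because fully NULL keys are not stored in a B-tree index.

```sql
-- Pending rows: process_flag = 'Y'; completed rows: process_flag = NULL
CREATE INDEX xx_requests_n1 ON xx_requests (process_flag);

-- Picks up only the pending rows via the small index
SELECT request_id
  FROM xx_requests
 WHERE process_flag = 'Y';
```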

2.10.5. Indexes
Join columns on the table should be indexed.
2.10.5.1. Avoid overindexing
Index maintenance is an overhead during DML operations. Create indexes just on
selective columns on which you perform searches. Example:
AP_INVOICES_N2 (VENDOR_ID)
AP_INVOICES_U2 (VENDOR_ID, INVOICE_NUM)
The AP_INVOICES_N2 index is redundant since the optimizer can use the U2 index.

2.10.5.2. Order columns in index by occurrence rather than selectivity
Columns in multi-column indexes should be ordered by their occurrence in the
WHERE clause, not by selectivity.
That is, place columns used frequently and in equality (=) conditions at the front,
and columns used less frequently and in BETWEEN, >, <, MIN, MAX, and ORDER BY
at the end.

2.10.6. Attribute Type


Instead of using long IN-lists, an attribute type should be created to reflect a
certain classification. Example:

SELECT DISTINCT SO_HEADERS.HEADER_ID
  FROM SO_HEADERS, SO_LINES
 WHERE SO_LINES.S2 IN (18, 5)
   AND SO_LINES.HEADER_ID = SO_HEADERS.HEADER_ID
   AND SO_LINES.ITEM_TYPE_CODE IN ('KIT', 'MODEL', 'CLASS', 'STANDARD', 'SERVICE')
   AND SO_LINES.LINE_TYPE_CODE IN ('REGULAR', 'DETAIL')
   AND SO_LINES.ATO_LINE_ID IS NULL
   AND SO_LINES.OPEN_FLAG || '' = 'Y'
   AND SO_HEADERS.OPEN_FLAG = 'Y'
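A hedged sketch of the attribute-type alternative: the classification column
(ATTRIBUTE_TYPE) and its value ('SHIPPABLE') are hypothetical names introduced
here for illustration; a single classification column replaces the two long IN-lists.

```sql
SELECT DISTINCT SO_HEADERS.HEADER_ID
  FROM SO_HEADERS, SO_LINES
 WHERE SO_LINES.S2 IN (18, 5)
   AND SO_LINES.HEADER_ID = SO_HEADERS.HEADER_ID
   -- one attribute-type column instead of ITEM_TYPE_CODE/LINE_TYPE_CODE IN-lists
   AND SO_LINES.ATTRIBUTE_TYPE = 'SHIPPABLE'
   AND SO_LINES.ATO_LINE_ID IS NULL
   AND SO_LINES.OPEN_FLAG || '' = 'Y'
   AND SO_HEADERS.OPEN_FLAG = 'Y'
```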

2.10.7. Views
When converting base tables into views, review the performance implications
thoroughly. SQL will always access all tables in a view. Creating views on top of
other views is against coding standards: the level of view nesting should be 1, i.e.
views should expand directly to base tables.
2.10.7.1. Views should use UNION ALL rather than UNION
If UNION functionality is required in a view definition, always consider using
UNION ALL rather than UNION, which adds a costly sort to eliminate duplicates.
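A minimal sketch (view and table names are illustrative); UNION ALL avoids the
implicit duplicate-elimination sort that UNION performs, so use it whenever the
branches cannot overlap or duplicates are acceptable:

```sql
CREATE OR REPLACE VIEW xx_all_invoices_v AS
SELECT invoice_id, amount FROM xx_invoices_current
UNION ALL
SELECT invoice_id, amount FROM xx_invoices_history;
```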
2.10.7.2. Views should not define columns as concatenations of columns or literals
Views should avoid defining columns as concatenations of columns and/or literals
unless you can be sure these columns will not be used in the WHERE clause of SQL
statements. The CBO will not be able to use an index on concatenated columns, and
that can create a performance issue.
Consider why you have this implementation and whether it could be handled in the
API instead.
2.10.7.3. Views, in general, should not contain functions
Try to avoid defining columns using functions (e.g. DECODE, NVL) unless you can be
sure that these columns will not be used in the WHERE clause of SQL statements.
The CBO will not be able to use an index on such columns, and that can create a
performance issue.
If you have functions in view definitions, consider why you have this
implementation and whether it could be handled in APIs instead.

2.10.8. General Guidelines


One-to-one mappings should always be collapsed into a single table.
The data model specification should include specific SQL examples reflecting
typical uses.
Sequences should always be created with the cache value set to 20.
Storing pieces of code in a table to create dynamic SQL statements seems very
flexible but is actually very rigid: different scenarios create different SQL
statements, and a change to the stored code that benefits one particular statement
may hurt other scenarios in which the same piece of code takes part. It is much
better to use other means of code reusability.


2.11. Concurrent Manager Jobs

This section details the performance standards for modules that will be scheduled
and executed via the Concurrent Manager, and tips on managing the concurrent
managers without negatively impacting online E-Business Suite users.

2.11.1. Concurrent Manager Management


The Concurrent Manager is designed to control which jobs run and when. You should
limit the number of concurrent manager queues to reserve processing resources for
online users.
If queue wait times are too long, try to move less critical programs to lower
priorities and/or move these jobs to special queues so that a time-critical program
can execute immediately.
When adjusting sleep times for concurrent managers, do not set them too low, as
needless requeries of the fnd_concurrent_processes table can be expensive. A
larger cache size should help increase your throughput. The goal is to avoid jobs
waiting when resources are available, while not monopolizing resources needed by
online users just to increase batch throughput.
The objective is to tune for throughput, not for how many jobs are waiting. Monitor
and track all jobs, identify poorly performing programs, and use this information to
determine periods of high and low activity so you can reschedule jobs to periods of
lower activity. To preserve some of this information before purging it, archive it to a
data warehouse and create summary tables, which can be useful in determining
poorly performing programs. With this information available, you can continuously
monitor currently running jobs, compare them with their average and maximum run
times, and detect when a program is running well beyond its average time.

2.11.2. Queue Management


If you have a high number of small concurrent requests and short requests are
queued for long periods, add a separate manager to manage these small requests
and exclude them from the other queues. Always test any changes: increasing the
number of workers or reducing the sleep time may increase the throughput of the
concurrent managers at the cost of increasing the workload on the server. If you are
resource constrained, you will slow down online users, and throughput will not
increase as jobs take longer to run.
Measure elapsed run time of requests to identify jobs that consistently take a long
time to run and those that consistently complete in a short time. Compare the
running time of fast requests to the time they regularly spend queuing. Use the
information gathered to decide how to assign programs to queues and which ones
need tuning.
Dedicating a concurrent manager to process short running jobs can help prevent
such jobs getting stuck behind long running jobs in the queue. If many short
running, but long waiting requests are identified, consider creating a concurrent
manager that handles short running requests.
If all queues are running at maximum capacity, it may be necessary to add more
queues. You can define as many concurrent managers as your workload requires
and as resources permit.

Check long running requests to determine whether any of them should be assigned
to a queue that handles long running jobs. Consider defining separate concurrent
manager queues to process long-running, non-critical requests outside peak hours
to avoid bottlenecks.
Identify critical jobs and ensure that these are treated as special cases irrespective
of their running time. Allow any other programs that vary in running time to run in
the default Standard queue.

2.11.3. Concurrent processing Guidelines


Concurrent managers poll database tables to determine which concurrent requests
should be processed. Connection to the database can occur at the SQL*Net or IPC
level. If the managers are on the same server as the database, use an IPC or
bequeath connection.
A common guideline is to start with 1 concurrent process per CPU, and run no more
than 2 processes per CPU.
Concurrent processing performance can be improved by tuning the concurrent
manager cache size and sleep time parameters. Setting sleep time below 30
seconds may degrade the performance of your system, as the concurrent managers
continuously scan the queue to find work. If a queue is configured for medium to
long running jobs (longer than 10 minutes), there is no need to scan the queue
every minute; increasing the sleep time relieves the system of unnecessary queue
scans. Reduce the default sleep time of 60 seconds for queues running short
requests submitted as Pending/Normal. Avoid setting a manager's sleep time lower
than the CRM's (Conflict Resolution Manager's) sleep time if you have a mixed
workload of Pending/Standby and Pending/Normal requests. If the sleep time and
cache size are not optimal, the concurrent manager may sleep after processing the
requests in its cache before rescanning the fnd_concurrent_requests table, which
can cause requests to build up in the queue. Increasing the cache size (the number
of requests cached each time the queue is scanned) to at least twice the number of
target processes may help. Do not set the cache size too high if you often change
job priorities, since a high cache value reduces the number of times the request
table is scanned.
Reschedule some programs to run when the concurrent managers have excess
capacity and consider dedicating certain concurrent managers to process either
short or long running programs to avoid queue backup.
Add more queues if all queues are running at maximum capacity if resources
permit.
If you have a high number of small concurrent requests and short requests are
queued for long periods, add a separate manager to handle these small requests
and exclude them from the other queues. Dedicating a concurrent manager to
process short running jobs can help prevent such jobs getting stuck behind long
running jobs in the queue.
Consider defining a separate concurrent manager to process consistently long-running, non-critical requests outside peak hours to avoid bottlenecks.
Purge the FND tables on a regular basis using the Purge Concurrent Request and/or
Manager Data program. This program purges the FND tables
(fnd_concurrent_programs, fnd_concurrent_requests, fnd_concurrent_processes,
fnd_concurrent_queues) and purges the O/S files (*.req and *.out).
Keep statistics up to date on the FND tables to ensure optimal plans.
Constantly monitor running jobs and identify the ones that should be moved to
another concurrent manager or that are candidates for further tuning.

2.12. Who Should Tune

TBD.

2.13. Performance Measurement

TBD.

2.14. Administrative Interfaces

TBD.

2.15. Configuration Parameters
There are no new configuration parameters.
