
Oracle Technology Best Practices

A Compilation of Technical Tips from the Independent Oracle Users Group
www.ioug.org

Third Edition

CONTENTS

03  Migrating to Oracle Database 11g: Step-Ordered Approach
    By April Sims

08  Adding Time Information into Tablespace Names
    By Hüsnü Şensoy

13  Cardinality Errors Due to Relationships Between Columns
    By Bobby Durrett

18  How to Kill Inactive Forms (or Other Database) Sessions That Are
    Holding Database Exclusive Lock at Row Level
    By Vivek Awasthi

22  Loading Large XML Documents into Oracle
    By Coleman Leviter

25  How to Quickly Identify if Physical Standby Is Lagging Behind?
    By Balaji Raghavan

31  Oracle DUMP Files Housekeeping
    By Chen Rui Qing

34  Simulating Statistics in Oracle
    By Ramachandran Venkateswaran

39  Keeping Schema Passwords Secret
    By Michelle Malcher

43  Oracle Database 10g Upgrade Planning Best Practices
    By William D. Moore

Executive Editor
John Kanagaraj

Managing Editor
Theresa Rubinas

Contributing Editors
Kimberly Floss
Arup Nanda

Design
Jennifer Hanasz
Patrick Williams

Headquarters
Independent Oracle Users Group
401 North Michigan Avenue
Chicago, IL 60611-4267
USA
Phone: +1.312.245.1579
Fax: +1.312.527.6785
E-mail: ioug@ioug.org

Oracle Technology Best Practices | 

Welcome to the third edition of the IOUG SELECT Journal Tips and
Best Practices Booklet! As always, this booklet is chock full of tips and best
practices that we hope you will find useful. This year, we had an overwhelming
response to the call for articles from you, members of the IOUG from all
over the globe. This, in my humble opinion, is what the IOUG is all about:
Users helping users by sharing what they know. Both the SELECT Journal
and this booklet serve as a conduit to share technical knowledge with the
IOUG user community. We are able to publish quality content because of
authors who are willing to spend time and energy to distill their real-world
knowledge and experiences into such articles. Thank you, authors!
And that brings me to the dream team of editors with whom I have had
the good fortune to work on this project. Both Kim Floss and Arup Nanda
were of invaluable help in reviewing, choosing, editing and refining these
articles. Theresa Rubinas from the IOUG Headquarters team kept us all
focused and on track, and worked on the 101 things needed to get this out
to you. A big thank you to this team!
We at the IOUG SELECT Journal need both your feedback and your
contributions to maintain the quality of the quarterly SELECT Journal as
well as this yearly Best Practices Booklet. You can help by signing up to
review articles, as well as by writing original content. If you have an idea
for an article and don't know where to start, we are here to help as well.
Let us know by e-mailing us at select@ioug.org or visiting us on the Web at
www.ioug.org/selectjournal.
Sincerely,
John Kanagaraj, Executive Editor
IOUG SELECT Journal
Disclaimer: IOUG and SELECT Journal have relied on the expertise of the authors to make this booklet as
complete and as accurate as possible, but no warranty is implied. The information given here is provided on an
"as is" basis. The authors, contributors, editors, and publishers of SELECT Journal, IOUG, and Oracle Corporation
disclaim all warranties, express or implied, with regard to the same, including, without limitation, any implied
warranties of merchantability or fitness for a particular purpose and any implied warranties of non-infringement.
The authors, contributors, editors, and publishers of SELECT Journal, IOUG, and Oracle Corporation will not be
liable to any person for any loss or damage, including, without limitation, the loss of services, arising out of the use
of any information contained in this booklet or any program or program segments provided herewith.

 | independent oracle users group

www.ioug.org

Migrating to Oracle Database 11g: Step-Ordered Approach
By April Sims
Migrating to a newer Oracle Database version doesn't have to be confined
to a single outage period. Several interim steps can be done ahead of time
for certain compatible components, saving valuable time. In a general
sense, Oracle is backward compatible for making the transition from an
earlier version to a later one. The following components can be upgraded
to Oracle Database 11g Release 1 (11.1.0.6) while still maintaining
compatibility with earlier versions of Oracle. See the Reference section for
additional information on exact versions.

- Oracle Net Services: LISTENER.ORA, SQLNET.ORA
- Clients (SQLNET, JDBC, ODBC): see the Reference section
- RMAN catalog and database
- Grid Control repository database

Design your environment to be adaptable to change. This means that you
should be comfortable with multiple Oracle Homes. Install the Oracle
Database 11gR1 version of the database in a different Oracle Home.
Use the tools that Oracle provides, such as the oratab and oraenv files, to
dynamically set environment variables based on the ORACLE_SID. Listing 1
shows an example use of the custom code section of oraenv for Unix;
Listing 2 shows the corresponding referenced file.


Listing 1:
# Install any custom code here
#
case $ORACLE_SID in
SID)
. /u01/app/oracle/SID.ini ;;
SIDT)
. /u01/app/oracle/SIDT.ini ;;
SIDD)
. /u01/app/oracle/SIDD.ini;;
esac

Listing 2:
ORACLE_HOME=/u01/app/oracle/product/10gR2
PATH=$ORACLE_HOME/bin:$PATH
ORACLE_SID=SID
# New Recommendation for 11g Diagnostics
ORACLE_BASE=/u01/app/oracle
TNS_ADMIN=/u01/app/oracle/product/11g/network/admin
export ORACLE_SID ORACLE_HOME PATH ORACLE_BASE TNS_ADMIN

Oracle Net Services


As a DBA, controlling the environment is the key to preventing disruptive
events during migrations. Using the init.ora parameter LOCAL_LISTENER to put
each ORACLE_SID on a separate port allows the DBA to turn access to that
database on or off without affecting other ports and/or ORACLE_SIDs. If port
1521 is used, dynamic registration will happen for each instance on that
node, so that port is avoided to maintain control. Oracle recommends
having multiple listeners for multiple Oracle Homes (see Note:429074.1),
but experience has taught me that a single listener executable of the highest
version will work for almost any lower-level database installed on a single
node while preventing conflicts. The key to using a higher-level listener is
the TNS_ADMIN variable, which overrides the default Oracle Net Services
location for the LISTENER.ORA, TNSNAMES.ORA and SQLNET.ORA
configuration files. See Listing 3 below for an example LISTENER.ORA
entry; notice the ADMIN_RESTRICTIONS_LISTENER_SID setting, which allows only

the user who owns the Oracle installation directory to modify these files,
closing a security vulnerability in the listener.
Listing 3:
LISTENER_SID =
  (ADDRESS = (PROTOCOL = TCP)(HOST = NODENAME)(PORT = 1527))
SID_LIST_LISTENER_SID =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SID.DOMAIN)
      (ORACLE_HOME = /u01/app/oracle/product/10gR2)
      (SID_NAME = SID)
    )
  )
ADMIN_RESTRICTIONS_LISTENER_SID = ON
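The listener side above pairs with a database-side setting. Here is a minimal sketch of the LOCAL_LISTENER wiring mentioned earlier, assuming the alias, host name and port from Listing 3 (all hypothetical placeholders):

```sql
-- tnsnames.ora (or a literal address string) resolving the local listener alias:
--   LISTENER_SID = (ADDRESS = (PROTOCOL = TCP)(HOST = NODENAME)(PORT = 1527))

-- Point this instance at its own listener on the non-default port,
-- so it registers only there and not on port 1521
ALTER SYSTEM SET LOCAL_LISTENER = 'LISTENER_SID' SCOPE = BOTH;

-- Force immediate re-registration instead of waiting for the next cycle
ALTER SYSTEM REGISTER;
```

With this in place, stopping the dedicated listener takes only that database off the network, which is exactly the control the text describes.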

RMAN Catalog and Grid Control Database


The RMAN executable version should match the target database version,
which dictates using the same ORACLE_HOME to run RMAN scripts that
connect to a remote RMAN catalog database. The RMAN catalog schema
version must be greater than or equal to the RMAN executable version,
with backward compatibility for earlier releases. Upgrade the catalog using
RMAN commands as follows:
RMAN> upgrade catalog
RMAN-06435: recovery catalog owner is rman
RMAN-06442: enter UPGRADE CATALOG command again to confirm catalog upgrade
RMAN> upgrade catalog

This will not upgrade the database that houses the RMAN catalog to
Oracle Database 11gR1, only the RMAN schema, making it compatible with
the higher release of RMAN. Upgrading the catalog allows you to back
up any Oracle 11g databases as well as previous versions. You can also go
ahead and upgrade the RMAN database itself to Oracle Database 11g at
this point using any of the standard methods: DBUA, EXP/IMP,
EXPDP/IMPDP or a manual upgrade. Many DBAs keep their RMAN catalog
and Grid Control repository in the same database. Grid Control 10.2.0.4
is compatible with an 11.1.0.6 version of the database.
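Before relying on this mixed-version arrangement, it is worth confirming what schema version the catalog actually reports after the upgrade. A small check, assuming a hypothetical catalog connect string and the standard recovery catalog schema:

```sql
-- Connect as the catalog owner, e.g.:  sqlplus rman/password@catdb
-- RCVER records the recovery catalog schema version
SELECT version FROM rcver;
```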


Oracle Database 11g Diagnosability Framework


This new version of Oracle brings the Automatic Diagnostic Repository
(ADR), a flat-file structure containing all alert logs, core files,
trace files and incident information. ADR_BASE defaults to $ORACLE_BASE
and is controlled by the database diagnostic_dest parameter, which replaces
background_dump_dest, core_dump_dest and user_dump_dest. If you use the Oracle
Database 11g listener before creating any 11g databases, it is recommended
that $ORACLE_BASE be set (see Listing 2). This controls where the listener
log files are located and enables you to modify your scripts and maintenance
routines using the new adrci command-line utility. The default editor for
adrci is vi; because a standard Windows install will not have vi, copy
notepad.exe or wordpad.exe to a location in the Windows PATH and rename that
.exe file to vi.exe. The adrci command-line utility has a set editor command,
but this may yield unexpected results on Windows. To script automatic
purging of aged logs, use adrci with a text file that contains the
commands. See Listing 4 for an example.
Listing 4:
adrci script=adrci_commands.txt
#adrci_commands.txt
set echo on
set homepath diag/tnslsnr/nodename/listener_sid
purge -age 10080 -type alert
purge -age 10080 -type incident
purge -age 10080 -type trace
purge -age 10080 -type cdump
set homepath diag/rdbms/sid/SID
purge -age 10080 -type alert
purge -age 10080 -type incident
purge -age 10080 -type trace
purge -age 10080 -type cdump
quit

If you still want to keep the old method of logs and trace files, see Metalink
Note:454927.1.


Using Oracle Database 11g on a daily basis for accessory databases such as
Grid Control and RMAN, and for the listeners serving databases and clients,
gives you regular exposure to the new version. This should increase your skill
level and confidence in the new release while reducing the possibility of
disruptions.

Reference: Metalink Documents and Resources


1. Can You Run Different Releases Of The Oracle Database On The Same
Computer At The Same Time? Doc ID: Note:429074.1
2. Setting TNS_ADMIN Environment Variable Doc ID: Note:111942.1
3. Dynamic Registration and TNS_ADMIN Doc ID: Note:181129.1
4. Oracle ODBC version compatibility mirrors compatibility for Oracle client
software.
5. http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/
jdbc_faq.htm#02_02
6. Client / Server / Interoperability Support Between Different Oracle Versions
Doc ID: Note:207303.1
7. RMAN Compatibility Matrix Doc ID: Note:73431.1
8. Oracle Enterprise Manager 10g Grid Control Certification Checker Doc ID:
Note:412431.1
9. 11g Install : Understanding about Oracle Base, Oracle Home and Oracle
Inventory locations Doc ID: Note:454442.1
10. Using and Disabling the Automatic Diagnostic Repository (ADR) with Oracle
Net for 11g Doc ID: Note:454927.1

About the Author

April Sims is currently the database administrator at Southern Utah University and an Oracle
Certified Professional (8i, 9i and 10g) with a master's degree in business administration
from the University of Texas at Dallas. She has been an IOUG member for six years, a SELECT
article reviewer, and a presenter at Oracle OpenWorld and numerous regional Oracle-related
conferences. April is also a contributing editor for the IOUG SELECT Journal and can be reached
at sims@suu.edu.


Adding Time Information into Tablespace Names
By Hüsnü Şensoy
In VLDB databases, it is common to use a tablespace naming convention
that includes seasonal information about the data within the tablespace,
such as CHURN_2008M01 or SALE_2008W50. This has various benefits
in terms of ILM (Information Lifecycle Management). By just looking at
the name of a tablespace, the DBA can have a pretty good idea about the
tablespace and the segments within it.
Nevertheless, prior to Oracle Database 10g this convention could cause
significant degeneration in system catalog performance if data windowing
is used. In other words, if you periodically need to drop old tablespaces and
add new ones, the Oracle catalog starts to bloat.
In this paper, you will find a tricky way of solving this problem with Oracle
Database 10g.

Data Windowing and Tablespace Naming


Assume that the business requires a quarter's worth of sales data to be stored.
As a design consideration, the SALES table is partitioned monthly, and
each quarter (three months) is stored in a tablespace named according to
our convention. A graphical representation of the layout for the first three
months is below:


When April comes, the January-March partitions are dropped, and the
SALES_2008Q1 tablespace also becomes useless and is dropped.

Although querying the *_tablespaces or v$tablespace views gives the impression
that SALES_2008Q1 is removed from the database catalog, in reality it is still
in the catalog. You can see it by querying the TS$ table:

SQL> select tablespace_name from dba_tablespaces where tablespace_name='SALES_2008Q1';

no rows selected

SQL> select name, online$ from sys.ts$ where name = 'SALES_2008Q1';

NAME                              ONLINE$
------------------------------ ----------
SALES_2008Q1                            3

Therefore, what Oracle actually does on a tablespace drop is to set the
ONLINE$ column of the TS$ row from one to three (a soft delete). When
you drop SALES_2008Q1 and create SALES_2008Q2, you just add one
more row to that table. In a database consisting of hundreds of tablespaces
storing range-partitioned tables, such tablespace rotation will create lots of
inactive entries in the TS$ table.
The issue is this: TS$ is an index cluster table (the C_TS# cluster). It shares the
cluster blocks with another internal table, namely the FET$ table.


SQL> select table_name, cluster_name from dba_tables t where cluster_name = 'C_TS#';

TABLE_NAME   CLUSTER_NAME
----------   ------------
TS$          C_TS#
FET$         C_TS#

If you check the SQL behind the catalog views, you will find many views built
on these two tables. This means that as tablespaces are cycled by drop
statements, the organization of the C_TS# cluster gets worse and worse.
As a result, many people aware of this fact advise VLDB sites not to use
seasonal information in tablespace naming. Obviously, they are right.
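One way to see how much dead weight a site has already accumulated is to count the soft-deleted rows directly. A quick sketch, run as SYS, using the ONLINE$ = 3 convention described above:

```sql
-- Soft-deleted tablespace entries still occupying the C_TS# cluster
SELECT COUNT(*) AS dropped_entries
FROM   sys.ts$
WHERE  online$ = 3;
```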

Solution to Problem with 10g


In Oracle Database 10g, Oracle introduced a new SQL statement for
renaming tablespaces. The statement simply changes the name of a
tablespace in the database.
SQL> alter tablespace sales_2008q1 rename to sales_2008q3;

What it actually does in the background is important for the solution
of our problem. The RENAME statement performs just two updates on
the system catalog, updating the data in place. In other words,
it does not add new data to the cluster or delete data under the
high-water mark of the cluster segment; it updates existing rows. A 10046
trace at level 4 shows the data change activity for the execution of the
tablespace rename statement. The first recursive SQL updates no rows for
our permanent tablespace rename. The second one uses the cluster index to
access the data and updates the columns of the old row.


...
alter tablespace sales_2008q2 rename to sales_2008q4
...
update sys.props$ set value$=:1
where
name = 'DEFAULT_TEMP_TABLESPACE' and upper(value$) = upper(:2)
...
update ts$ set name=:2,online$=:3,contents$=:4,undofile#=:5,undoblock#=:6,
blocksize=:7,dflmaxext=:8,dflinit=:9,dflincr=:10,dflextpct=:11,dflminext=
:12,dflminlen=:13,owner#=:14,scnwrp=:15,scnbas=:16,pitrscnwrp=:17,
pitrscnbas=:18,dflogging=:19,bitmapped=:20,inc#=:21,flags=:22,plugged=:23,
spare1=:24,spare2=:25
where
ts#=:1

Rows   Row Source Operation
-----  ---------------------------------------------------
    0  UPDATE TS$ (cr=4 pr=0 pw=0 time=617 us)
    1   TABLE ACCESS CLUSTER TS$ (cr=4 pr=0 pw=0 time=228 us)
    1    INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=37 us)(object id 7)
...

Unlike the drop tablespace and create tablespace approach, this does nothing
catastrophic within the cluster.
Returning to our first example: until the end of June, SALES_2008Q3 is
not needed.


As March becomes obsolete, we simply rename SALES_2008Q1 to
SALES_2008Q3. Therefore, no old record is retained in the cluster to
degrade catalog performance; no addition or deletion is performed.
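Putting the pieces together, the quarterly rotation can be sketched as follows. The table, tablespace and partition names are illustrative, not from the original script:

```sql
-- Drop the partitions of the expired quarter
ALTER TABLE sales DROP PARTITION sales_2008m01;
ALTER TABLE sales DROP PARTITION sales_2008m02;
ALTER TABLE sales DROP PARTITION sales_2008m03;

-- Reuse the now-empty tablespace for the coming quarter:
-- an in-place catalog update rather than a drop/create cycle
ALTER TABLESPACE sales_2008q1 RENAME TO sales_2008q3;

-- New monthly partitions land in the renamed tablespace
ALTER TABLE sales ADD PARTITION sales_2008m07
  VALUES LESS THAN (TO_DATE('2008-08-01', 'YYYY-MM-DD'))
  TABLESPACE sales_2008q3;
```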

Conclusion
Including seasonal information in tablespace names provides a number of
advantages, such as ease of ILM management. However, prior to Oracle
Database 10g, this could cause significant catalog performance problems
at VLDB sites as the database grows. Fortunately, the RENAME
tablespace statement provides an elegant way of overcoming this problem.
About the Author

Hüsnü Şensoy has been working on Oracle technologies for three years, previously
as an OLTP system developer and recently as a DWH administrator. He has also
lectured in many seminars on Oracle, especially for university students. He particularly focuses
on ASM, backup and recovery of VLDBs, and logical/physical design of data warehouses.
He is currently working as the DBA of the Turkcell Telecommunication Services data warehouse
(the largest Oracle database in Türkiye, at 70 TB), and at the same time is pursuing his
M.Sc. in computer science at one of the most prominent universities of Türkiye
(Boğaziçi University). He can be reached at husnu.sensoy@turkcell.com.tr.


Cardinality Errors Due to Relationships Between Columns
By Bobby Durrett
The Oracle optimizer may miscalculate a query's cardinality, the number of
rows it returns, because it assumes an equal distribution of combinations of
column values. The optimizer assumes that no relationship exists between
columns. For example, we know that the zipcode column of an address table
relates to the state column, but the database doesn't. It assumes that every
combination of state and zipcode occurs with equal frequency, even ones like
the state Alaska and a zipcode in Hawaii. I've put together a simple test case
to show how these types of cardinality errors can lead to poor performance
and what to do about it. Please note that while the fix applies to Oracle
Database 10g, on account of the use of SQL Profiles, the problem does exist in
other versions, where you may have to apply your method of choice for the fix.

Combinations of Column Values Not Equally Distributed

Our example script shows a case where the optimizer assumes an equal
distribution of combinations of column values. Here is how the data is laid
out in this test case:

select a, b, count(*) from TEST3 group by a, b;

         A          B   COUNT(*)
---------- ---------- ----------
         1          1    1000000
         2          2    1000000
         1          2          1

Each column has the same number of ones and twos, but the table contains
only one row with the column combination (1,2). The values within each
column are equally distributed between the two distinct values, but the
combinations of column values are not. Our example shows that the optimizer
assumes all four possible combinations of the two column values are equally
likely, i.e., (1,1), (1,2), (2,1) and (2,2). Our query selects the one row with
the column values (1,2) out of the two million and one records.

select sum(a+b)
from TEST3
where
a=1 and b=2;

----------------------------------------------------
| Id | Operation            | Name       | Rows |
----------------------------------------------------
|  0 | SELECT STATEMENT     |            |    1 |
|  1 |  SORT AGGREGATE      |            |    1 |
|* 2 |  INDEX FAST FULL SCAN| TEST3INDEX | 500K |
----------------------------------------------------

Here are the relevant column and table statistics:

C LOW        HIGH       NUM_DISTINCT
- ---------- ---------- ------------
A          1          2            2
B          1          2            2

NUM_ROWS
----------
   2000001

Number of rows in plan =
  (number of rows in table) *
  (1 / number of distinct values in column A) *
  (1 / number of distinct values in column B)

or

500000 = 2000001 * (1/2) * (1/2)

The optimizer assumes that one fourth of the rows will have the column
values (1,2). It picks a full index scan, because it believes that a large
number of rows will be returned. The query takes about a hundred times
longer with a full scan than it does with a range scan.
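The layout described above can be reproduced along these lines. This is a sketch under assumptions: the author's original scripts are in the download referenced at the end of the article, so the exact DDL here (names aside) is illustrative:

```sql
-- 1,000,000 rows of (1,1), 1,000,000 rows of (2,2), one row of (1,2)
CREATE TABLE test3 AS
SELECT 1 AS a, 1 AS b FROM dual CONNECT BY LEVEL <= 1000000;

INSERT INTO test3 SELECT 2, 2 FROM dual CONNECT BY LEVEL <= 1000000;
INSERT INTO test3 VALUES (1, 2);
COMMIT;

CREATE INDEX test3index ON test3 (a, b);

-- Per-column statistics see two distinct values each;
-- no correlation between A and B is recorded
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEST3', cascade => TRUE);
```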

SQL Profiles

SQL profiles, new with Oracle Database 10g, let you improve the speed of
a given query by giving the optimizer the information it needs to correctly
estimate the cardinality when the data in a group of columns is unequally
distributed. You create SQL profiles using the SQL Tuning Advisor feature.
You execute the advisor's functions

DBMS_SQLTUNE.CREATE_TUNING_TASK,
DBMS_SQLTUNE.EXECUTE_TUNING_TASK, and
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE

to analyze the SQL statement or statements and put the new profile in
place. For brevity, I've just listed the names of the procedures. Here is the
output of the SQL tuning advisor for the query in the example:
DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
-----------------------------------------------------------------------
GENERAL INFORMATION SECTION
-----------------------------------------------------------------------
Tuning Task Name    : my_sql_tuning_task
Scope               : COMPREHENSIVE
Time Limit(seconds) : 600
Completion Status   : COMPLETED
Started at          : 09/14/2006 13:26:05
Completed at        : 09/14/2006 13:26:10

-----------------------------------------------------------------------
SQL ID  : 2fw0d281r0x2g
SQL Text: select sum(a+b) from TEST3 where a=1 and b=2

-----------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-----------------------------------------------------------------------
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 99.68%)
------------------------------------------
Consider accepting the recommended SQL profile.

After accepting the recommended profile, the plan for the query changes;
notice that the optimizer now knows that only one row will be returned,
and it chooses the range scan of the index.

------------------------------------------------
| Id | Operation         | Name       | Rows |
------------------------------------------------
|  0 | SELECT STATEMENT  |            |    1 |
|  1 |  SORT AGGREGATE   |            |    1 |
|* 2 |  INDEX RANGE SCAN | TEST3INDEX |    1 |
------------------------------------------------


The range scan runs the query in about one hundredth of the time of the
full scan. Unfortunately, a given SQL profile applies to only a single SQL
statement, so you have to generate a profile for every SQL that experiences
a cardinality issue. SQL Profiles can overcome bad cardinality estimates
and result in better execution plan choices, but there are cases where
this feature will not improve a plan that suffers from a wrong cardinality
calculation. Hints can be used to overcome cardinality errors due
to relationships between columns when it isn't possible to use SQL Profiles
to give the optimizer the information it needs to make the best choice.
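The advisor workflow named above can be sketched end to end. The task name and time limit are illustrative; the calls are the three DBMS_SQLTUNE routines listed earlier:

```sql
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for the problem statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'select sum(a+b) from TEST3 where a=1 and b=2',
              task_name  => 'my_sql_tuning_task',
              time_limit => 600);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Inspect the findings, then accept the recommended profile
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_sql_tuning_task') FROM dual;

BEGIN
  DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'my_sql_tuning_task');
END;
/
```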
This tech tip is a slimmed-down version of a presentation I gave at the
SCIOUG and COLLABORATE 08 user group conferences. The full
version of the paper, slides, and sample scripts and their output can be
found at www.geocities.com/bobbyandmarielle/sqltuning.zip.
About the Author

Bobby Durrett is a Senior Oracle DBA and Peoplesoft Administrator at US Foodservice


in Greenville, S.C. He has 14 years of experience with Peoplesoft and Oracle, specializing
in performance tuning. He enjoys giving presentations on tuning topics at user-group
meetings including COLLABORATE and SCIOUG (www.scioug.org). He can be reached at
bobbyandmarielle@yahoo.com.

Did You Know?

IOUG used to be called the International Oracle User Group. In March
2006, the "International" part was replaced by "Independent" in the
name, to reflect the group's geographically independent status.

How to Kill Inactive Forms (or Other Database) Sessions That Are Holding
Database Exclusive Lock at Row Level
By Vivek Awasthi
Often, user sessions are left hanging in the database when users don't
log out cleanly, forget to log out, or the mid-tier does not close
connections properly. At times, this creates a conflict when the user session
locks a row and another session wants to update the same row. In this tip,
we look at one specific case in which Forms users with a direct connection
to the database do not close the connection. Although this is specific to
Oracle Applications (E-Business Suite), the concept of detecting and killing
conflicting sessions remains the same.
When a finance user is working with the invoice workbench form and does
not log out cleanly, the row locks held by these inactive Forms sessions cause
invoice validation to fail. In this case, I use the following script to kill
inactive Forms sessions that are holding an exclusive database lock at row
level. This solution was tested on Sun Solaris 9 with Oracle Applications
11.5.9/11.5.10 and Oracle9i Database, and can be easily adapted for any
situation.
The mon_kill_form.ksh script shown below is run every 30 minutes via
crontab to kill Forms sessions that have been inactive for more than one hour
(or any predetermined limit). If you want to use this in a RAC environment,
the script should be run on all database nodes; note that the inst_id
variable will need to be changed in this case.


Content of mon_kill_form.ksh:

#!/bin/ksh
#
# File: mon_kill_form.ksh
#
# Description: Monitors the inactive Forms sessions and kills those that are holding locks.
# History:
# ------------------------------------------------------------------------------
# Vivek Awasthi  1.0  03/12/2008
#
ORACLE_SID=`/usr/ucb/whoami`
export ORASID_LOW=`echo $ORACLE_SID | cut -c1-8`
ORACLE_BASE=/oraappl/od-nbs/${ORASID_LOW}
# These are setups specific to my environment - modify as necessary
. $ORACLE_BASE/${ORASID_LOW}scr/bin/profile.${ORASID_LOW}.db  # Source database profile
. $ORACLE_BASE/${ORASID_LOW}scr/bin/.${ORASID_LOW}acc         # Password file which has apps password
MAILRECEIPIENT="<Email addresses of recipients separated by blank space>"
MAILFROM="<Email address of sender>"
MAILPRO=$SCRIPTS_TOP/mailp.ksh  # Our custom email program; mailx can be used instead.
LOGFILE=$ORACLE_BASE/${ORASID_LOW}log/kill_form_session.log
LOGFILE1=$ORACLE_BASE/${ORASID_LOW}log/kill_form_session.sql
[ -f $LOGFILE ] && rm $LOGFILE
[ -f $LOGFILE1 ] && rm $LOGFILE1
err_exit() {
  echo "$*"; exit 1
}
# The details of the information stored are specific to Oracle Apps - adapt as required
sqlplus -s apps/$APPS_PW <<EOF | grep "no rows selected"
set pages 90
set lines 150
set verify off
set feedback off
spool $LOGFILE
prompt ***************************************************************************************************
prompt                              Details for Form session killed
prompt ***************************************************************************************************
SELECT s.sid, s.serial#, p.spid, s.process, s.status, substr(s.machine,1,15) MACHINE,
       substr(to_char(s.logon_time,'mm-dd-yy hh24:mi:ss'),1,20) Logon_Time,
       s.last_call_et/3600 Last_Call_ET, s.action, s.module
FROM   GV\$SESSION s, GV\$PROCESS p
WHERE  s.paddr = p.addr
AND    s.username IS NOT NULL
AND    s.username = 'APPS'
AND    s.osuser = 'a159prod'
AND    s.last_call_et/3600 > 1
AND    s.action like 'FRM%'
AND    s.status = 'INACTIVE'
AND    s.inst_id = 1
AND    (s.sid, s.serial#) in
       ( select l.session_id, s.serial#
         from   gv\$locked_object l, dba_objects o, gv\$session s, gv\$process p
         where  l.object_id = o.object_id
         and    l.session_id = s.sid
         and    s.paddr = p.addr
         and    s.status != 'KILLED'
         and    o.object_name not like 'FND%'
         and    o.object_name not like 'WF%'
         and    l.locked_mode in (3,5)
         and    o.owner <> 'APPLSYS'
       );
spool off
EOF
cnt=`cat $LOGFILE | grep INACTIVE | wc -l`
if [ $cnt -ge 1 ]; then
  cat $LOGFILE | grep INACTIVE | grep -v "^$" | \
    awk '{print "alter system kill session '\''" $1 "," $2 "'\'';"}' > $LOGFILE1
  sqlplus "/ as sysdba" <<EOF
@$LOGFILE1
EOF
  $MAILPRO $MAILFROM "$MAILRECEIPIENT" "Form Session Killed in ${ORASID_LOW} running on `hostname`" $LOGFILE r
fi


******************************************************************
                 Details for Form session killed
******************************************************************

 SID  SERIAL#  SPID   PROCESS  STATUS    MACHINE   LOGON_TIME         LAST_CALL_ET  ACTION                         MODULE
----  -------  -----  -------  --------  --------  -----------------  ------------  -----------------------------  --------
 270    13024  16906    16182  INACTIVE  Machine2  03-24-08 10:07:05        1.4752  FRM:<USERNAME>:<Program Name>  APXPMTCH
 598     3009  24067    10179  INACTIVE  Machine1  03-24-08 11:27:17        1.5544  FRM:<USERNAME>:<Program Name>  APXWCARD

About the Author

Vivek Awasthi has 11 years of experience as an Oracle Applications DBA, Oracle DBA,
Oracle Applications system administrator and system analyst, carrying out Application DBA activities
on different platforms, including Oracle server installation, configuration and maintenance.
Awasthi is certified as an OCP 10g and an Oracle E-Business Suite 11i Applications Database
Administrator Certified Professional. He is currently working with Solution Beacon LLC and can
be contacted at vawasthi@solutionbeacon.com or vivek.awasthi@gmail.com.

Did You Know?

IOUG has regional training classes, like RAC Attack!, that allow
in-person, hands-on training from the experts who are doing
Oracle implementations every day.


Loading Large XML Documents into Oracle
By Coleman Leviter
A short time ago, using Quest's product TOAD, I had to load large XML
documents into an XMLTYPE column. The XML documents were in
excess of 128K bytes.
TOAD allows one to manipulate XML documents up to 4K bytes in size.
There are issues. According to Quest:
"...With an Oracle 9.2.0.5 client and a 9.0.1 or 9.2.0.1 server, small
XMLTypes seem to work, but trying to post an XML over 4000 bytes
to a Unicode database yields 'ORA-31011: XML parsing failed' even if
the data is all English. With an Oracle 9.2.0.1 client and a 9.2.0.4 server,
attempting to post data causes two-task conversion errors or 'ORA-31011:
XML parsing failed' errors."
I have found that storing XML documents larger than 4K bytes, and up
to (and past) 150K bytes, must be handled outside of TOAD.
One way of inserting large XML documents into Oracle is to upload the
document from the host file system using the following technique:
1) Create a directory on the host file system for use as the repository for
the XML documents to upload to Oracle.
2) Create a directory entry in Oracle for the host file system from step 1:
CREATE DIRECTORY xml_directory AS '/home/mis/hdslink';
3) Copy the XML document(s) to the XML directory on the host file
system.

4) Create an Oracle table with a NUMBER column, a DATE column and
an XMLTYPE column.
5) Create a PL/SQL script containing an insert (or update) statement for
the XML document.
6) Insert (or update) the XML document into the XMLTYPE column.
7) If the document is not a properly formatted XML document, Oracle
will raise an exception.
Example:
ERROR at line 1:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00225: end-element tag "TOTALWEIGH" does not match
start-element tag "TOTALWEIGHT"
Error at line 7
ORA-06512: at "SYS.XMLTYPE", line 254
ORA-06512: at line 1
An excerpt from the Oracle manual (PL/SQL Packages and Types
Reference, 10g Release 1 (10.1)) reveals the following:
constructor function XMLType(
xmlData IN bfile,
csid IN number,
schema IN varchar2 := NULL,
validated IN number := 0,
wellformed IN number := 0)
return self as result deterministic


This constructor function allows one to insert an XML document into
Oracle using the XMLTYPE constructor and the BFILE parameter.
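For completeness, the xml_example table referenced in the insert statement can be created with DDL along these lines (the column names match the script; the datatypes are assumptions based on step 4):

```
CREATE TABLE xml_example (
  xml_seq  NUMBER,      -- sequence number for the loaded document
  xml_date DATE,        -- load timestamp
  xml_data XMLTYPE      -- the XML document itself
);
```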
Here's the PL/SQL script I used:
--------------------------------------------------------------------------------
SET SERVEROUTPUT ON
DECLARE
  lcl_xml_name VARCHAR2(40) DEFAULT '&hdsdir'; -- XML file name in XML_DIRECTORY
  lcl_seq_num  NUMBER DEFAULT &seq;            -- seq_num
BEGIN
  INSERT INTO xml_example ( xml_seq, xml_date, xml_data)
  VALUES ( lcl_seq_num, sysdate,
           XMLTYPE(BFILENAME('XML_DIRECTORY', lcl_xml_name),
                   nls_charset_id('AL32UTF8')));
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('A problem occurred: ' || lcl_xml_name);
END;
/

About the Author

Coleman Leviter is employed as an IT Systems Software Engineer at Arrow Electronics.


He has presented at IOUG's COLLABORATE 07. He is the WEB SIG chair and sits on the
steering committee at the NY Oracle Users Group (www.nyoug.org). He has worked in
the financial services industry and the aerospace industry where he developed Navigation,
Flight Control and Reconnaissance software for the F-14D Tomcat at Grumman Aerospace.
Leviter has a BSEE from Rochester Institute of Technology, an MBA from C.W. Post and an
MSCS from New York Institute of Technology. He may be contacted at cleviter@ieee.org.


How to Quickly Identify if Physical Standby Is Lagging Behind?
By Balaji Raghavan
Oracle Standby database has been in existence since Oracle Version
7.3. Physical Standby database has matured over a period of time and
proved time and again that it is one of the more robust Disaster Recovery
solutions. Oracle captures and maintains the information related to
Archive Logs, Archive Gaps and the logs that need to be applied to
the contingency databases in various data dictionary views. I have always
been puzzled as to why Oracle has not provided a single script to check
the archive logs generated in Production and also show how far the
contingency database (physical standby) is lagging behind.
V$ARCHIVE_GAP lists the sequence numbers of the archive logs that
are known to be missing on the standby database. If the Media Recovery
Process was not started, or was stopped manually, the logs will continue to
ship; in this case, V$ARCHIVE_GAP will not be a good view to check. At
the end of this article, you will know how to check the logs lagging behind
with a single script.
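For reference, V$ARCHIVE_GAP can be queried directly on the standby (as sys, per the notes later in this article); no rows returned means no gap is currently registered:

```
-- Run on the standby database: shows any detected gap in shipped logs
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```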
We will review the scripts to find out the number of logs the Contingency
database is lagging behind (if any). For easy usage, I have two scripts:
1. To check against a single Prod/Cont database, see Listing 1.
2. To check against multiple Prod/Cont databases, see Listing 2.


Listing 1
#!/usr/bin/ksh
# Program Name : single_dr_check.ksh
# Purpose      : To Capture the archive logs that are behind Prod
#                and contingency database and report the difference
# Parameters   : $1 Prod DB Name
#                $2 Cont DB Name
# Special Note : Create a hidden file .p for password
#              : This script runs on 10g Databases and above
# Usage        : ./single_dr_check.ksh $1 $2
# Usage eg.    : ./single_dr_check.ksh chicago_prod boston_cont
#                where chicago_prod is production database
#                and boston_cont is contingency database
# Author       : Balaji Raghavan

export ORACLE_SID=$1
ORAENV_ASK=NO
. oraenv
export P=`cat .p`
$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
set head off
set verify off
set echo off
set show off
col prod_seq new_v prod_log
conn sys/$P@$1 as sysdba
set termout off
select 'Prod : ', max(sequence#) prod_seq
from V\$ARCHIVED_LOG
/
conn sys/$P@$2 as sysdba
col cont_seq new_v cont_log
select 'Cont : ',
max(sequence#) cont_seq from V\$LOG_HISTORY;
select
decode(&prod_log - &cont_log, 0, 'The DR is in Sync with Prod', 'DR is behind #'),
&prod_log - &cont_log, ' Logs'
from DUAL;
EOF
exit
Sample output:
$ ./single_dr_check.ksh chicago_prod boston_cont


Prod :                        132608
Cont :                        132608
The DR is in Sync with Prod        0 Logs

Notes:
1. The above mentioned scripts will work on Oracle Database 10g.
Starting with 10g, we can connect to the Contingency database via the sys
account. Connecting to the Contingency database using system or non-sys
accounts will display the following message:
ORA-01033: ORACLE initialization or shutdown in progress
2. oraenv should be configured and should work without any issues; $1
should be a valid database name on the server where you are running the script.
3. A hidden file .p should be created in the same directory where the script
is located and should contain the sys password.
4. Using any account other than sys will fail, due to the fact that a
non-sys account will not be able to query the v$log_history view in the
contingency database.
5. I have used v\$archived_log and v\$log_history. In some environments,
the reference v\$ will fail, and hence the \ should be removed from the
script for it to work.
6. This script should be executed only from the Prod or Contingency server,
as it sets ORACLE_SID with the $1 value.
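For example, the .p file for Listing 1 can be created like this (the password shown is a placeholder for your real sys password):

```shell
# Create the hidden password file in the script's directory
echo "MySysPwd" > .p
# Restrict it so only the current user can read it
chmod 600 .p
```
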


Listing 2
#!/usr/bin/ksh
############################################################################
# Program Name : multiple_dr_check.ksh
# Purpose      : To Capture the archive logs that are behind Prod
#                and contingency database and report the difference
# Parameters   : $1 -- Prod DB Name to set ORACLE Variables
# Special Note : Create a hidden file .p with Prod:Cont:PWD values
# Usage        : ./multiple_dr_check.ksh $1 $2
# Usage eg.    : ./multiple_dr_check.ksh chicago_prod .p
#                where .p file contains the entries for Prod, Cont
#                and password; please grant read permission on .p
#                file to the current user only; $1 should be the SID on
#                the local machine; this will set the ORACLE_HOME and
#                other variables to invoke SQL*Plus
############################################################################
# Author       : Balaji Raghavan
############################################################################
# Disclaimer   : The views expressed are my own and not necessarily
#                those of any associated employer.
############################################################################
export ORACLE_SID=$1
ORAENV_ASK=NO
. oraenv
cat $2 | while read LINE
do
export PROD=`echo $LINE | cut -d: -f1`
export CONT=`echo $LINE | cut -d: -f2`
export PWD=`echo $LINE | cut -d: -f3`
$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
conn sys/$PWD@$PROD as sysdba
set head off
set verify off
set echo off
set show off
col prod_seq new_v prod_log
set termout off
select 'Prod $PROD : ', max(sequence#) prod_seq
from V\$ARCHIVED_LOG
/
conn sys/$PWD@$CONT as sysdba
col cont_seq new_v cont_log
select 'Cont $CONT : ',
max(sequence#) cont_seq from V\$LOG_HISTORY;


select
decode(&prod_log - &cont_log, 0, 'The DR is in Sync with Prod', 'DR is behind #'),
&prod_log - &cont_log, ' Logs'
from DUAL;
EOF
done
exit
Sample Output:
Prod Chicago :                132611
Cont Boston :                 132609
DR is behind #                     2 Logs

Prod Newyork :                 81752
Cont London :                  81752
The DR is in Sync with Prod        0 Logs

Prod London :                  91291
Cont SFO :                     91243
DR is behind #                    48 Logs

Notes:
1. The above mentioned scripts will work on Oracle Database 10g.
Starting with 10g, we can connect to the Contingency database via the sys
account. Connecting to the Contingency database using system or non-sys
accounts will display the following message:
ORA-01033: ORACLE initialization or shutdown in progress
2. oraenv should be configured and should work without any issues; $1
should be a valid database name on the server where you are running the script.
3. A hidden file .p should be created in the same directory where the script
is located. The .p file should contain the prod database, the contingency
database and the sys password, separated by colons (:).
4. Using any account other than sys will fail, due to the fact that a
non-sys account will not be able to query the v$log_history view in the
contingency database.
5. I have used v\$archived_log and v\$log_history. In some environments,
the reference v\$ will fail, and the \ should be removed for the script to work.
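For example (the database names and passwords here are placeholders), a two-entry .p file for Listing 2, and the way the script splits each line with cut:

```shell
# One line per prod/cont pair: prod_db:cont_db:sys_password
cat > .p <<'EOF'
chicago_prod:boston_cont:SysPwd1
newyork_prod:london_cont:SysPwd2
EOF
chmod 600 .p

# The script cuts out each colon-separated field, e.g. field 2 (the cont DB):
cut -d: -f2 .p   # prints boston_cont and london_cont
```
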
About the Author

Balaji Raghavan has more than 13 years of IT experience with more than 12 years using
Oracle Products. He is currently working as VP Consultant II at a major financial institution.
He has worked on a wide variety of projects and environments, varying from Mainframe to
Midrange. He can be reached at balaji.raghavans@gmail.com.

Did You Know...?

Only IOUG members enjoy these benefits: the SELECT Journal, a
quarterly publication from IOUG written by Oracle users for users;
unrestricted access to LoOK (Library of Oracle Knowledge), a
compendium of technical articles, presentations, tips and techniques
from many users; ability to provide enhancement requests to Oracle;
IOUG ResearchWire, a quick summary of what's happening in the
world of Oracle; discounts and many more.


Oracle DUMP Files Housekeeping


By Chen Rui Qing
One of the regular tasks for a DBA is to periodically inspect
the alert log file and other dump and log files. For the alert log file, it may
be beneficial to truncate the file after inspection to easily identify new
issues and keep the file at a manageable size. Normally, I rename
the inspected alert log file to something like alert.log_mmdd. For other
Oracle trace files, under the directories specified by bdump and adump,
the number of files keeps growing with the database operations, and some
sort of housekeeping is necessary to keep your dump directories clean
and manageable.
To automate this housekeeping task, I developed a simple shell script to be
run monthly; usually on the first day of the month. The script is shown here:
#!/usr/bin/ksh
#
# NAME:
#   hskp_oradump.sh -- housekeeping oracle dump files
#
# USAGE:
#   $0 [oracle sid]
#
# DESCRIPTION:
#   20 5 1 * * /home/ora10g/hskp_oradump.sh
#   Schedule the script to run on the 1st day of the month; archive the
#   previous month's oracle dump files monthly, and archive the
#   listener.log as well.
#
# ASSUMPTION:
#   (1) You are using SPFILE, not PFILE
#   (2) TNS_ADMIN variable is not defined
#
# CREATED:
#   Chen Rui Qing 2007-XX-XX.
#


# REVISION: (YYYY-MM-DD)
#   Chen Rui Qing 2008-02-12 enhanced the script to skip archival of active files.
#   Chen Rui Qing 2008-05-02 added comments for IOUG Best Practices Booklet,
#   making it more readable.
#
# Load the user profile in. Normally people would export ORACLE_HOME, ORACLE_BASE,
# ORACLE_SID and so on here. But it's a good practice to define these parameters
# in the user profile, to minimize the maintenance effort.
if [ "${ORACLE_HOME}" = "" ] ; then
    . $HOME/.profile
fi
# in case you want to use a different instance, not the one in the user profile.
if [ $# -ge 1 ] ; then
    ORACLE_SID=$1
fi
APPLN=`basename $0 .sh`
APPLOG=$HOME/log/${APPLN}.`date +%Y`.log
# YYMM of previous month, TAIST-8 is my system TZ
MONTH=`TZ=TAIST+16;date +%y%m;TZ=TAIST-8`
ARCHIVE=archive/${MONTH}
echo "\n`date` ==>${ORACLE_SID}: \c" >> ${APPLOG}
echo "$0 started ..." >> ${APPLOG}
# get the dump file destinations from the spfile, e.g.
#   /u01/admin/glsdb/adump
#   /u01/admin/glsdb/bdump
#   /u01/admin/glsdb/cdump
#   /u01/admin/glsdb/udump
for DUMPDIR in `strings ${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora |grep dump |cut -d \' -f 2`
do
    if [ ! -d ${DUMPDIR} ] ; then
        echo "dump directory does not exist ..." >> ${APPLOG}
        exit 8
    fi
    cd ${DUMPDIR}
    if [ ! -d ${ARCHIVE} ] ; then
        mkdir -p ${ARCHIVE} >> ${APPLOG} 2>&1
    else
        echo "${DUMPDIR}: archiving already done for ${MONTH}." >> ${APPLOG}
        continue
    fi




    # 2008-02-12 chen: do not archive if it is in use, eg. mrp.trc file. Otherwise
    # oracle instance would append the trace message to the file in the archive directory
    DD=`date "+%b %d"`
    for FILE in `ls -lt |grep -v "${DD}" |awk '{print $9}'`
    do
        if [ -f ${FILE} ] ; then
            mv ${FILE} ${ARCHIVE}/ >> ${APPLOG} 2>&1
        fi
    done
done
# archive the listener.log as well
LSNR_YYMM=${ORACLE_HOME}/network/log/listener.log.${MONTH}
if [ ! -f ${LSNR_YYMM}.gz ] ; then
    cp ${ORACLE_HOME}/network/log/listener.log ${LSNR_YYMM}
    echo "\c" > ${ORACLE_HOME}/network/log/listener.log
    gzip ${LSNR_YYMM} > /dev/null 2>&1
    if [ $? -eq 0 ] ; then
        echo "listener.log archiving done for ${MONTH}." >> ${APPLOG}
    fi
fi
echo "`date` <== \c" >> ${APPLOG}
echo "$0 done." >> ${APPLOG}
exit 0

About the Author

Chen Rui Qing is a DBA at Grocery Logistics of Singapore Pte. Ltd. He is an OCP in Oracle
8i, 9i and 10g. In Oct. 2001, he changed his career from Mechanical Engineering to IT, joined
SingTel of Singapore as System Analyst focusing on Web application development. In Dec. 2006,
he joined Grocery Logistics. He can be contacted at chenruiqing@hotmail.com.


Simulating Statistics in Oracle


By Ramachandran Venkateswaran
This technical tip is about simulating optimizer behavior in a low-volume
development environment as if the SQL were executing in a high-volume
production environment. Normally developers do not have access to the
production environment, so they test their queries in the development
environment, which has a low volume of data. This, however, is misleading,
as the low volume results in the generation of a good execution plan. When
these queries move to the actual high-volume production environment,
they may have poor response time as the volume of data is high.
In order to avoid this issue, we can simulate the generation of the actual
production execution plan in the development environment by importing
production environment statistics into the low-volume development
environment. The example below illustrates the approach to do this.

Illustration
Set-up

Create two basic tables EMP and DEPT with a few rows in the
Development environment as shown below:
SQL> DESC EMP;
 Name                          Null?    Type
 ----------------------------- -------- ------------
 ID                            NOT NULL NUMBER
 NAME                                   VARCHAR2(10)
 SAL                                    NUMBER
 DEPT_ID                                NUMBER
SQL> SELECT * FROM EMP;

        ID NAME              SAL    DEPT_ID
---------- ---------- ---------- ----------
         1 Ram               100          1
         2 Anand             200          1
         3 Sunny             300          1
         4 Gokul             500          1
         5 Geeta             600          2
         6 Priya             700          2

6 rows selected.

SQL> DESC DEPT;
 Name                          Null?    Type
 ----------------------------- -------- ------------
 ID                                     NUMBER
 NAME                                   VARCHAR2(10)

SQL> SELECT * FROM DEPT;

        ID NAME
---------- ----------
         1 SAAS
         2 MFGD

Create an index on the column DEPT_ID present in the EMP table:
CREATE INDEX IDX_DEPT_ID ON EMP(DEPT_ID);

Gather statistics:
exec dbms_stats.gather_table_stats('INTELE','EMP');
exec dbms_stats.gather_table_stats('INTELE','DEPT');

View Explain plan to understand how the optimizer would instruct the
database engine to fetch the data:
SQL> select e.NAME,d.NAME from emp e, dept d where e.DEPT_ID=d.ID;


Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=100 Bytes=1700)
1   0    TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=1 Card=50 Bytes=450)
2   1      NESTED LOOPS (Cost=5 Card=100 Bytes=1700)
3   2        TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=3 Card=2 Bytes=16)
4   2        INDEX (RANGE SCAN) OF 'IDX_DEPT_ID' (INDEX) (Cost=0 Card=50)

Case 1:
Now simulate statistics as that of the production environment.
This can be done using the DBMS_STATS.SET_TABLE_STATS
procedure as shown below. In this example, we are setting the
row count and row size for the tables involved in the query, as well as
the size of the IDX_DEPT_ID index.
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'EMP', numrows => 10000, numblks => 1000, avgrlen => 124);
exec dbms_stats.set_index_stats(ownname => 'INTELE', indname => 'IDX_DEPT_ID', numrows => 10000, numlblks => 100);
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'DEPT', numrows => 1000, numblks => 100, avgrlen => 124);

Now, note how this changes the execution plan!

SQL> select e.NAME,d.NAME from emp e, dept d where e.DEPT_ID=d.ID;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=213 Card=5000000 Bytes=85000000)
1   0    MERGE JOIN (Cost=213 Card=5000000 Bytes=85000000)
2   1      TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=102 Card=10000 Bytes=90000)
3   2        INDEX (FULL SCAN) OF 'IDX_DEPT_ID' (INDEX) (Cost=100 Card=10000)
4   1      SORT (JOIN) (Cost=25 Card=1000 Bytes=8000)
5   4        TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=24 Card=1000 Bytes=8000)

Now you can notice a change in the explain plan. We have not changed
any data, but still the execution plan changes. So the developers themselves


can simulate statistics as that of production and test their queries in the
development environment.

Case 2:
In this case, we will determine the effect of a significant increase
in the rowcount for both the EMP and DEPT tables in order to
simulate future growth. The same SET_%_STATS calls are used with
count values that are 100 times that of the previous values:
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'EMP', numrows => 1000000, numblks => 10000, avgrlen => 124);
exec dbms_stats.set_index_stats(ownname => 'INTELE', indname => 'IDX_DEPT_ID', numrows => 1000000, numlblks => 1000);
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'DEPT', numrows => 1000000, numblks => 10000, avgrlen => 124);
SQL> select e.NAME,d.NAME from emp e, dept d where e.DEPT_ID=d.ID;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8597962 Card=500000000000 Bytes=8500000000000)
1   0    HASH JOIN (Cost=8597962 Card=500000000000 Bytes=8500000000000)
2   1      TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2230 Card=1000000 Bytes=8000000)
3   1      TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2237 Card=1000000 Bytes=9000000)

Again you can notice a change in the explain plan: the MERGE
JOIN was replaced by a HASH JOIN, since the optimizer concluded
that the latter was a better join method given the new statistics.

Conclusion
Statistics play a vital role in choosing the best plan for a query. Simulating
statistics as those of the production environment will help you identify
bottlenecks that could possibly occur in higher-volume production
environments. As well, it can help you simulate future growth. This tip
provides you an approach to do so.
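If you prefer to copy real production statistics rather than set values by hand, DBMS_STATS can stage them in a statistics table and load them in development. A sketch (the stattab name STATS_TAB is an assumption; INTELE and EMP follow the example above):

```
-- On production: stage the statistics into a regular table
exec dbms_stats.create_stat_table(ownname => 'INTELE', stattab => 'STATS_TAB');
exec dbms_stats.export_table_stats(ownname => 'INTELE', tabname => 'EMP', stattab => 'STATS_TAB');

-- Move STATS_TAB to development (e.g. via export/import), then load it:
exec dbms_stats.import_table_stats(ownname => 'INTELE', tabname => 'EMP', stattab => 'STATS_TAB');
```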


About the Author

Ramachandran Venkateswaran is an Oracle Technical Specialist with Infosys


Technologies Limited. He is also an Oracle 9i Certified Professional. He has four years of
experience in the IT industry. He has exposure to various Oracle technologies and his current
focus is on BI/Text Analysis. He can be reached at kvrkvrkvr@hotmail.com.

RANZAL & ASSOCIATES is a consulting partner
that delivers a dramatic impact to its clients.
We design, develop and implement solutions that
address organizational challenges unique to your
business. Ranzal quickly moves our clients through
the phases of system design and implementation
by matching business requirements with a focused,
step-by-step technical process. Stop by Open
World Booth #3402 to learn more.
Why clients turn to Ranzal?
- Largest NA Oracle Hyperion EPM Partner
- Experts in Oracle Hyperion Planning & Oracle Essbase Products
- Experts in Oracle Hyperion Financial Management Products
- Data Services Expertise (including MDM and Informatica)
- Program & Project Management Services
- Acclaimed Performance Lab
info@ranzal.com  www.ranzal.com

Keeping Schema Passwords Secret


By Michelle Malcher
Granting ANY privileges with alter, update, insert, delete or even select
is considered a risk when attempting to secure the database environment.
There are requirements to secure the schema owner passwords and limit
the users or applications that can actually have these passwords, because
of the unauthorized changes and access that can occur when someone has
these passwords. Also, with separation of duties and still needing to migrate
code from test environments to production, securing these permissions
and passwords has become even more challenging. Following security best
practices for databases, as well as other governance frameworks such as for
Sarbanes Oxley (SOX), are the reasons for needing to lock down schema
passwords and revoke ANY privileges. The challenge is then giving another
user just enough permissions to migrate changes to production without
giving up a secured database environment.
Schema logins have a lot of privileges to run the application or perform
tasks that most users should not have permissions to do. Several people
seem to have the schema password to be able to log in outside of the
application to check on things, make changes for production support,
development or even the DBA for migrating changes.
Of course, when you look at some of the compliance policies, these types
of highly privileged accounts should be limited, and preferably only used by
the application. And if people have access to these schema passwords, their
actions need to be audited and monitored. I'm sure none of us has to
worry about this, but consider schemas that have the passwords hard-coded
or set up in some way that makes them difficult to change, and then that
password being handed to a consultant or a temporary developer, knowing
that they might always have that password. How are those privileges going
to be revoked if that password isn't changed on a regular basis?


Access to application data using outside tools such as SQL*Plus is probably
a whole different conversation, but there are changes that can occur outside
of the application, such as redefining tables, procedures and views. These
changes seem to fall into the hands of the DBAs. Even though DBAs
probably have access to do this without the schema passwords, because of
the code and not having everything fully qualified, there would be reasons
to log in as the schema owner. Auditing the DBA access is probably
already set up, but now you have a DBA logging in with the schema outside
of the application with a password that doesn't change. Then, add on that this
is just a temporary DBA to get this current release out to production. There
are all kinds of red flags going up for compliance issues.
Procedures for setting up how to log in as a different schema could prove
useful in at least reducing the risk and really limiting who has access to
these passwords. They can also provide a way to capture who logged in to that
schema outside of the application, and triggers and auditing can be used to
capture what was done during that session.
Making the assumption that the DBAs or developers who are making these
changes have their own login to the database (even if they don't, machine
names can be captured), there could be a procedure created to alter the
CURRENT_SCHEMA for the session. And when it does that, it could
update an audit table to capture the information about the user that
executed it and the activities that might have been performed.
Here is some basic setup around this information. Again, this is just the
basic information to capture, and there are plenty of opportunities to add
information and other things to audit. But it provides some guidelines
around a procedure that can be used to set this up. (This has been verified
in Oracle 9i and 10g environments).


A schemachange_log table would need to be defined to capture the
desired information for auditing when someone changes the schema. Some
basic information has been provided in this example create table:
create table schemachange_log
(username     varchar2(30),
 sessionid    varchar2(80),
 machinename  varchar2(80),
 update_date  date,
 newschema    varchar2(30));
create or replace procedure CHANGE_SCHEMA(schema1 in varchar2) AUTHID DEFINER as
sql_text varchar2(50);
begin
insert into schemachange_log values( user, sys_context('USERENV','SESSIONID'),
  sys_context('USERENV','HOST'), sysdate, schema1);
select 'alter session set current_schema=' || schema1 into sql_text from dual;
execute immediate sql_text;
end;
/

Verification information can even be put into place in the above, besides
just capturing the details of who executed this procedure, by checking that
the user is allowed to switch to that schema. The owner of this procedure
should be a locked down owner and needs permissions to alter any session.
Also, permissions to execute this procedure would only be granted to a role
or individuals. In turn, the grants should be added to an audit list to make
sure that they are only changing using the proper procedures.
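As a sketch of that setup (the owner SECADMIN and role SCHEMA_SWITCH_ROLE are hypothetical names; MALCHER follows the example below):

```
-- The procedure owner needs ALTER SESSION granted directly
-- (roles do not apply inside definer's-rights procedures)
grant alter session to secadmin;

-- Grant execute only through a role, to the few users who migrate code
create role schema_switch_role;
grant execute on secadmin.change_schema to schema_switch_role;
grant schema_switch_role to malcher;
```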
In SQL*Plus:
connect malcher@DB1
select sys_context('USERENV','SESSION_SCHEMA') from dual;

SYS_CONTEXT('USERENV','SESSION_SCHEMA')
-------------------------------------------
MALCHER

exec change_schema('TESTING1');


To verify:
select sys_context('USERENV','SESSION_SCHEMA') from dual;

SYS_CONTEXT('USERENV','SESSION_SCHEMA')
-------------------------------------------
TESTING1

Now the script that needed to be run as that schema owner can
be executed.
This is just a simple setup to avoid needing to have the schema passwords
to migrate code from test to production, redefine tables or do other things
that might need to be executed as the schema owner outside of the
application. There are controls that can be set up around these procedures
to secure who has access to execute the procedure and capture what
happens after they change to a different schema, and even an additional
check that the user is allowed to switch to that schema. At least these
simple steps can track a user that has switched over to a different schema,
and the password is not given out to those who shouldn't have it.
About the Author

Michelle Malcher is a senior database administrator at PepsiCo with more than 10


years experience in database development, design and administration. She has expertise in
performance tuning, security, data modeling and database architecture of very large database
environments. She is currently serving on the IOUG Board of Directors, and has also served as
president of the Enterprise Best Practices SIG and is still an active leader of this group for the
focus area of security.


Oracle Database 10g Upgrade Planning Best Practices
By William D. Moore
In July 2010, Oracle 9i Database will begin sustaining support. The risk
associated with upgrading a database causes most organizations angst and
months of testing. Outlined below is a best-practices approach to planning
enterprise Oracle database upgrades. Regardless of your role, if you
are just overwhelmed with the enormous task of upgrading the databases
and applications you are responsible for, then we hope this article will help
in establishing a framework for doing just that.

Identify
1. Understand the Organization
The first step in the process is to understand the organization's roles and
responsibilities. The DBAs will have the details and the technical ins and
outs of the databases. Then, understand any configuration standards
used across all technical environments.
2. List the Databases, Operating Systems and Hardware
Begin collecting the detailed database information. Start by gathering
the list of database instances, versions, operating systems and hardware.
Then, identify the database work requests and resources with which
the DBAs interact. This will lead the conversation into who are the
Technical Application Leads and business contacts for each application.
Note: Each Oracle database instance can contain multiple schemas
and/or applications. This can lead to working with different business
areas regarding the same database.


3. List the Applications and Releases
Use the detailed database information as a foundation for having a
discussion with the Technical Application Leads. Confirm the testing
and production environments and uses. Identify the applications (and
uses), identify releases and future plans with each technical application
lead. Identify and confirm the interfaces. Then, identify the business
contacts and resources. Notify the Technical Application Leads that
you will need a statement from the application vendors regarding
support or compliance.
4. Identify the Projects in Progress
At this point, there should be a comprehensive list of DBAs,
Technical Application Leads, Business Owners, databases, versions,
applications (releases) and interfaces. Meet with the business owners
of the applications/databases and understand the projects in progress.
Understand any plans for the applications and any future business
plans that could affect the applications/databases. Utilize the PMO and
other initiative tracking resources to gather and obtain the information.

Assess
1. Review Vendor Plans
At this point, it is essential to review all of the gathered information
and confirm accuracy and comprehensiveness. The best method is
to have each DBA, Technical Application Lead and Business Owner
sign-off. It is essential that each application vendor using the Oracle
databases be evaluated for support and/or compliance. There are several
methods of investigation: Web site and product literature research
and/or vendor meetings.
2. Understand and Define the Application Future
Next, it is essential to understand the future of the (business and)
applications. The majority of the information gathered will come from
identifying the applications (and releases) and projects in progress. This

is another opportunity to synchronize with business needs and schedules.
Do not plan to upgrade databases that will be retired before database
support expires.
3. Define the Dependencies and Constraints
The dependencies and constraints should naturally surface.
Dependencies will come in the form of vendor software release/
readiness, configuration/architecture, interconnectivity/coupling
(with other databases) and projects in progress. Constraints will
follow the normal project constraints: internal/IT/business resource
availability, premier/extended/sustaining Oracle support dates, budget,
and organizational readiness. It is necessary to list dependencies by
database and application.

Plan
1. Outline Major Milestones
Begin the planning effort by outlining the internal and external
major milestones. This will draw from the database lists, applications
(releases), dependencies and constraints. This will provide the project
plan skeleton structure for further detailed planning activities. Work
with all involved resources to develop the high and low level project
plans. Receiving feedback from the business resources, technical
application leads and DBAs provides for better upgrade plans.
2. Plan the Projects
After the major milestones have been defined, it is necessary to add all
of the steps for each database upgrade. While adding the task detail,
add constraints and dependencies to the project plan.
During this activity, it is necessary to schedule upgrades with other
organization initiatives to gain testing and organization synergies. Also,
this step should synchronize database projects that need to be upgraded
simultaneously.


3. Define the Resources/Budget
The outlined milestones should be broken down by assigning resources
to each project/task. Make sure enough time is added for the new
database/version training. Once the resources have been assigned to each
task, add the cost-per-resource/hour; this will evolve the project plan
into a finalized cost estimate.
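The arithmetic behind the cost estimate is simply each task's estimated hours multiplied by the assigned resource's cost-per-hour, summed across the plan. A minimal sketch, in which the task list, resource names and rates are all invented for illustration:

```python
# Hypothetical task list for one database upgrade:
# (task, assigned resource, estimated hours).
tasks = [
    ("Backup and prerequisite checks", "dba_senior", 16),
    ("Run upgrade on test copy",       "dba_senior", 24),
    ("Application regression testing", "qa_analyst", 40),
    ("Production upgrade window",      "dba_senior", 12),
    ("New-version training",           "dba_junior", 20),
]

# Hypothetical cost-per-resource/hour, in dollars.
hourly_rate = {"dba_senior": 120.0, "dba_junior": 75.0, "qa_analyst": 90.0}

# Sum hours * rate across every task to turn the plan into a cost estimate.
total = sum(hours * hourly_rate[resource] for _, resource, hours in tasks)
print(f"Estimated upgrade cost: ${total:,.2f}")
```

In practice this rollup is usually done inside the project-planning tool itself; the sketch only shows that once hours and rates are attached to each task, the finalized estimate falls out directly.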
A diligent testing methodology and approach will uncover unforeseen
issues. The detailed upgrade activities provide a structure for the type and
categories of information that are needed for database upgrade planning.
It is important to upgrade easy non-critical low risk databases/applications
first. This builds a foundation and a channel through which database release
information and issues enter the organization. As always, share the lessons
learned across the organization.
About the Author

William D. Moore has more than 18 years of project/program management and
information technology experience. Moore is a proven business leader with a strong IT
background. He has a track record of providing business value and competitive advantage
using technologies. His experience includes leadership positions within organizations, programs
and projects within multiple industries. He has a team-based management style and excellent
interpersonal and communication skills. Moore received his undergraduate degree from Purdue
University, M.B.A. from Xavier University and Project Management Professional certificate from
the Project Management Institute. He can be reached at bmoore@fusionalliance.com or
on LinkedIn at www.linkedin.com/in/williamdmoore.

Have a Great Tip or Best Practice to Share?
IOUG is looking for new materials for the 2009 Best Practices Booklet.
Submissions should be 500-1,000 words long; due to space constraints,
we ask that your submission have a specific focus as opposed to any
overarching database principles. Tips can range from beginning- to
advanced-level skills and should include the actual code and queries used
(screenshots and other small graphics are also acceptable).
If you have any questions about this project, please contact our Best
Practices Booklet Coordinator, Theresa Rubinas, at (312) 673-5870, or
e-mail her at trubinas@ioug.org.

Submit an Article for IOUG SELECT Journal 2009
SELECT Journal is IOUG's quarterly
publication. We are always looking
for new authors and articles for 2009.
Interested in submitting an article?
Visit www.ioug.org/selectjournal/
submit.cfm for more information.
Questions? Contact SELECT Journal
Production Coordinator Theresa
Rubinas at (312) 673-5870, or e-mail
her at trubinas@ioug.org.

Hidden Treasures on the IOUG Web site
By John Kanagaraj, executive editor, IOUG SELECT Journal
You might not be aware of this, but the IOUG Web site, www.ioug.org,
contains a number of treasures just waiting for you to explore. Chief among
these is the Library of Oracle Knowledge (or LoOK as it is known to many
IOUG old-timers). Hanging off the Online Resources tab on the IOUG
main page, this little application indexes the complete set of technical
documents managed by the IOUG. Developed by a team of IOUG Staff
and volunteers, LoOK includes the vast technical repository of presentations
and papers from the past five IOUG conferences, including papers from
the latest COLLABORATE 08 conference in Denver.
In addition to the conference papers, the sources in LoOK include past
articles in the IOUG SELECT Journal as well as from Oracle Corporation.
Social networks are among the latest Web 2.0 uber-tools, promising to
connect you both socially and professionally, and the IOUG is not
lacking in having its own social network tool. Aptly named OPEN (for
Oracle Professionals Exchange Network) and (again!) developed by a team
of IOUG Staff and volunteers led by Tony Jedlinski, this is a forum to create
online relationships, exchange IOUG-related event information and share
technical tidbits. You can access OPEN via the Online Resources tab.
There are many other resources hanging off the Online Resources tab,
including the Fusion Middleware Portal, links to other related sites, a
vendor directory, a Career Center and the Member Directory. So here's
looking forward to YOU, the IOUG member, interacting with us and
your peers from around the globe using these tools.
P.S. If you are not yet an IOUG member, please explore our various
options and member benefits at www.ioug.org.

COLLABORATE 09
MAY 3-7, 2009 | ORANGE COUNTY CONVENTION CENTER WEST | ORLANDO, FLORIDA

Get Knowledge, Apply and Optimize at COLLABORATE 09 IOUG Forum
If you are a DBA, developer, architect or any other technology professional across the Oracle technology spectrum,
including Stellent, Essbase and BEA products, don't miss your chance to attend COLLABORATE 09 IOUG
Forum, presented by the Independent Oracle Users Group (IOUG). Featuring hundreds of education sessions
presented by fellow Oracle customers and experts, countless networking opportunities and numerous
top IT Solution Providers, the IOUG Forum is your ticket to enhancing your Oracle knowledge.
As a registrant of the IOUG Forum, you'll also receive access to additional education and events presented by
the COLLABORATE 09 co-hosts, the Oracle Applications Users Group (OAUG) and Quest International Users
Group (Quest). Don't miss your chance to:
COLLABORATE with thousands of Oracle users to share success stories, challenges and solutions for
Oracle technology and applications.
LEARN from experts and leaders in the Oracle community from more than 1,000 education sessions by
the IOUG, OAUG and Quest, presenting best practices and tested solutions.
DISCOVER technology solutions and strategies with access to more than 250 of the leading IT solution
providers exhibiting in one open forum.
GAIN inside information and insight from Oracle leadership and technology visionaries, and leverage our
users groups' collective voice to Oracle.

"I think this is the best conference around. Since it's by users and for users, it's less
biased than some of the other conferences...There is a wide variety of technical
material to please anyone. I highly recommend it. If you can only go to one event in a
year, this is the one to go to."* Theresa Stone, Principal Support Analyst, GlaxoSmithKline
*Information derived from 2008 post-conference evaluation.

Presented by:

www.ioug.org

www.oaug.org

www.questdirect.org

Save the date for COLLABORATE 09 IOUG Forum. Registration opens in November 2008.
