Best Practices
Third Edition
Executive Editor
John Kanagaraj

Managing Editor
Theresa Rubinas

Contributing Editors
Kimberly Floss
Arup Nanda

Design
Jennifer Hanasz
Patrick Williams
Headquarters
Independent Oracle Users Group
401 North Michigan Avenue
Chicago, IL 60611-4267
USA
Phone: +1.312.245.1579
Fax: +1.312.527.6785
E-mail: ioug@ioug.org
Welcome to the third edition of the IOUG SELECT Journal Tips and
Best Practices Booklet! As always, this booklet is chock full of tips and best
practices that we hope you will find useful. This year, we had an overwhelming
response to the call for articles from you, members of the IOUG from all
over the globe. This, in my humble opinion, is what the IOUG is all about:
Users helping users by sharing what they know. Both the SELECT Journal
and this booklet serve as a conduit to share technical knowledge with the
IOUG user community. We are able to publish quality content because of
authors who are willing to spend time and energy to distill their real-world
knowledge and experiences into such articles. Thank you, authors!
And that brings me to the dream team of editors with whom I have had
the good fortune to work on this project. Both Kim Floss and Arup Nanda
were of invaluable help in reviewing, choosing, editing and refining these
articles. Theresa Rubinas from the IOUG Headquarters team kept us all
focused and on track, and worked on the 101 things needed to get this out
to you. A big thank you to this team!
We at the IOUG SELECT Journal need both your feedback and your input to
maintain the quality of the quarterly SELECT Journal as well as this yearly
Best Practices Booklet. You can help by signing up to review articles, as
well as by writing original content. If you have an idea for an article and
don't know where to start, we are here to help as well. Let us know by
e-mailing us at select@ioug.org or visiting us on the Web at
www.ioug.org/selectjournal.
Sincerely,
John Kanagaraj, Executive Editor
IOUG SELECT Journal
Disclaimer: IOUG and SELECT Journal have relied on the expertise of the authors to make this booklet as
complete and as accurate as possible, but no warranty is implied. The information given here is provided on an
"as is" basis. The authors, contributors, editors, and publishers of SELECT Journal, IOUG, and Oracle Corporation
disclaim all warranties, express or implied, with regard to the same, including, without limitation, any implied
warranties of merchantability or fitness for a particular purpose and any implied warranties of non-infringement.
The authors, contributors, editors, and publishers of SELECT Journal, IOUG, and Oracle Corporation will not be
liable to any person for any loss or damage, including, without limitation, the loss of services, arising out of the use
of any information contained in this booklet or any program or program segments provided herewith.
www.ioug.org
Listing 1:

# Install any custom code here
#
case $ORACLE_SID in
  SID)  . /u01/app/oracle/SID.ini ;;
  SIDT) . /u01/app/oracle/SIDT.ini ;;
  SIDD) . /u01/app/oracle/SIDD.ini ;;
esac
Listing 2:

ORACLE_HOME=/u01/app/oracle/product/10gR2
PATH=$ORACLE_HOME/bin:$PATH
ORACLE_SID=SID
# New recommendation for 11g diagnostics
ORACLE_BASE=/u01/app/oracle
TNS_ADMIN=/u01/app/oracle/product/11g/network/admin
export ORACLE_SID ORACLE_HOME PATH ORACLE_BASE TNS_ADMIN
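The dispatch in Listing 1 can be exercised outside a real server. The sketch below uses hypothetical SIDs and throwaway .ini files (not the production paths) to show the same case-statement pattern sourcing a per-SID environment file:

```shell
# Sketch of the Listing 1 dispatch pattern with throwaway per-SID .ini
# files (hypothetical SIDs; the real files live under /u01/app/oracle).
DIR=$(mktemp -d)
echo 'ORA_ENV=prod' > "$DIR/SID.ini"
echo 'ORA_ENV=test' > "$DIR/SIDT.ini"

set_env() {                       # $1 = ORACLE_SID to dispatch on
  case $1 in
    SID)  . "$DIR/SID.ini"  ;;
    SIDT) . "$DIR/SIDT.ini" ;;
    *)    echo "unknown SID: $1" >&2; return 1 ;;
  esac
}

set_env SIDT
echo "$ORA_ENV"                   # prints: test
```

Keeping one .ini file per SID means a new database needs only a new file and one new case branch.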
Only the user that owns the Oracle installation directory can modify these
files, tightening a security vulnerability with the listener.
Listing 3:

LISTENER_SID =
  (ADDRESS = (PROTOCOL = TCP)(HOST = NODENAME)(PORT = 1527))
SID_LIST_LISTENER_SID =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SID.DOMAIN)
      (ORACLE_HOME = /u01/app/oracle/product/10gR2)
      (SID_NAME = SID)
    )
  )
ADMIN_RESTRICTIONS_LISTENER_SID = ON
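Because ADMIN_RESTRICTIONS forces changes through listener.ora itself, the file's ownership and mode become the control point. Here is a hedged sketch of a quick audit; the path and the expected owner "oracle" are assumptions for illustration:

```shell
# Hedged sketch: with ADMIN_RESTRICTIONS_LISTENER=ON, listener changes
# require editing listener.ora directly, so its ownership and mode are
# worth auditing. Path and expected owner below are site assumptions.
check_listener_file() {           # $1 = path, $2 = expected owner
  perms=$(ls -l "$1" | cut -c1-10)
  owner=$(ls -l "$1" | awk '{print $3}')
  [ "$owner" = "$2" ] || { echo "WARN: owned by $owner, expected $2"; return 1; }
  case $perms in
    ?????w????|????????w?) echo "WARN: group/world writable ($perms)"; return 1 ;;
  esac
  echo "OK: $1"
}

# Example call (assumed location):
# check_listener_file /u01/app/oracle/product/10gR2/network/admin/listener.ora oracle
```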
This will not upgrade the database that houses the RMAN catalog to Oracle
Database 11gR1; it upgrades only the RMAN schema to be compatible with
the higher release of RMAN. Upgrading the catalog allows you to back
up any other Oracle 11g databases as well as previous versions. You can go
ahead and also upgrade the RMAN database to Oracle Database 11g at
this point using any of the standard methods: DBUA, EXP/IMP, EXPDP/
IMPDP or a manual upgrade. Many DBAs keep their RMAN catalog
and Grid Control repository in the same database. Grid Control 10.2.0.4
is compatible with an 11.1.0.6 version of the database.
If you still want to keep the old method of logs and trace files, see
MetaLink Note 454927.1.
Using Oracle Database 11g on a daily basis for accessory databases such as
Grid Control and RMAN, and for their listeners and clients, will give you
regular exposure to the new version. This should increase your skill level and
confidence in the new release while reducing the possibility of disruptions.
April Sims is currently the Database Administrator at Southern Utah University and an Oracle
Certified Professional (8i, 9i and 10g) with a master's degree in Business Administration
from the University of Texas at Dallas. She has been an IOUG member for six years, a SELECT
article reviewer and a presenter at Oracle OpenWorld and numerous regional Oracle-related
conferences. April is also a Contributing Editor for the IOUG SELECT Journal and can be reached
at sims@suu.edu.
CLUSTER_NAME
------------
C_TS#
C_TS#
If you check the SQL of the catalog views, there are a lot of views built
on these two tables. This means that as tablespaces are cycled by drop
statements, the organization of the C_TS# cluster gets worse and worse.
As a result, many people aware of this fact advise VLDB sites not to use
seasonal information in tablespace naming. Obviously, they are right.
...
alter tablespace sales_2008q2 rename to sales_2008q4
...
update sys.props$ set value$ = :1
where
  name = 'DEFAULT_TEMP_TABLESPACE' and upper(value$) = upper(:2)
...
update ts$ set name=:2,online$=:3,contents$=:4,undofile#=:5,undoblock#=:6,
  blocksize=:7,dflmaxext=:8,dflinit=:9,dflincr=:10,dflextpct=:11,dflminext=:12,
  dflminlen=:13,owner#=:14,scnwrp=:15,scnbas=:16,pitrscnwrp=:17,pitrscnbas=:18,
  dflogging=:19,bitmapped=:20,inc#=:21,flags=:22,plugged=:23,spare1=:24,spare2=:25
where
  ts#=:1

Rows  Row Source Operation
----  ---------------------------------------------------------------
   0  UPDATE TS$ (cr=4 pr=0 pw=0 time=617 us)
   1   TABLE ACCESS CLUSTER TS$ (cr=4 pr=0 pw=0 time=228 us)
   1    INDEX UNIQUE SCAN I_TS# (cr=1 pr=0 pw=0 time=37 us)(object id 7)
...
This does nothing catastrophic within the cluster, unlike the drop
tablespace and create tablespace methods. Therefore, going back to our
first example, SALES_2008Q3 is not needed until the end of June.
Conclusion
Including seasonal information in tablespace names provides a number of
advantages, such as ease of ILM management. However, prior to Oracle
Database 10g, this could cause significant catalog performance problems
at VLDB sites as the database grows. Fortunately, the RENAME
TABLESPACE statement provides an elegant way of overcoming this problem.
About the Author
Hüsnü Şensoy has been working on Oracle technologies for three years,
previously as an OLTP system developer and recently as a DWH administrator. He has
also lectured in many seminars on Oracle, especially for university students. He particularly focuses
on ASM, backup and recovery of VLDBs, and logical/physical design of data warehouses.
He is currently working as the DBA of the Turkcell Telecommunication Services data warehouse
(the largest Oracle database in Türkiye, at 70 TB), while pursuing his
MSc in computer science at one of the most prominent universities of Türkiye, Boğaziçi
University. He can be reached at husnu.sensoy@turkcell.com.tr.
COUNT(*)
----------
 1000000
 1000000

Each column has the same number of ones and twos, but the table contains
only one row with the column combination (1,2). The values within each
column are equally distributed between the two distinct values, but the
combinations of column values are not. Our example shows that the
optimizer assumes that all four possible combinations of the two column
values are equally likely, i.e., (1,1), (1,2), (2,1) and (2,2). Our query
selects the one row with the column values (1,2) out of the two million
and one records.
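The plan's 500K estimate is exactly the independence arithmetic: with two distinct values in each column, the optimizer multiplies the single-column selectivities. A small sketch of that calculation:

```shell
# The optimizer's estimate for "a=1 and b=2" under the independence
# assumption: num_rows * selectivity(a=1) * selectivity(b=2).
# 2,000,001 rows, two distinct values per column, so each selectivity is 1/2.
EST=$(awk 'BEGIN { n = 2000001; printf "%d", n * (1/2) * (1/2) }')
echo "optimizer estimate: $EST rows"   # prints: optimizer estimate: 500000 rows
echo "actual matching rows: 1"
```

The estimate is off by five orders of magnitude precisely because the two columns are correlated, which this arithmetic cannot see.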
select sum(a+b)
from TEST3
where
a=1 and b=2;

------------------------------------------------------
| Id  | Operation             | Name       | Rows  |
------------------------------------------------------
|   0 | SELECT STATEMENT      |            |     1 |
|   1 |  SORT AGGREGATE       |            |     1 |
|*  2 |  INDEX FAST FULL SCAN | TEST3INDEX |  500K |
------------------------------------------------------
SQL Profiles
SQL profiles, new with Oracle Database 10g, let you improve the speed of
a given query by giving the optimizer the information it needs to correctly
estimate the cardinality when the data in a group of columns is unequally
distributed. You create SQL profiles using the SQL Tuning Advisor feature.
You execute the advisor's functions
DBMS_SQLTUNE.CREATE_TUNING_TASK,
DBMS_SQLTUNE.EXECUTE_TUNING_TASK, and
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE
to analyze the SQL statement or statements and put the new profile in
place. For brevity, I've just listed the names of the procedures. Here is the
output of the SQL Tuning Advisor for the query in the example:
DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
-----------------------------------------------------------------------
GENERAL INFORMATION SECTION
-----------------------------------------------------------------------
Tuning Task Name    : my_sql_tuning_task
Scope               : COMPREHENSIVE
Time Limit(seconds) : 600
                    : 09/14/2006 13:26:10
-----------------------------------------------------------------------
SQL ID   : 2fw0d281r0x2g
SQL Text : select sum(a+b) from TEST3 where a=1 and b=2
After accepting the recommended profile, the plan for the query changes;
notice that the optimizer now knows that only one row will be returned,
and it chooses the range scan of the index.

--------------------------------------------------
| Id  | Operation         | Name       | Rows  |
--------------------------------------------------
|   0 | SELECT STATEMENT  |            |     1 |
|   1 |  SORT AGGREGATE   |            |     1 |
|*  2 |  INDEX RANGE SCAN | TEST3INDEX |     1 |
--------------------------------------------------
The range scan runs the query in about one hundredth of the time of the
full scan. Unfortunately, a given SQL profile only applies to a single SQL
statement, so you have to generate a profile for every SQL that experiences
a cardinality issue. SQL profiles can overcome bad cardinality estimates
and result in better execution plan choices, but there are cases where
this feature will not improve a plan that suffers from a wrong cardinality
calculation. Hints can be used to overcome cardinality errors that are due
to relationships between columns when it isn't possible to use SQL profiles
to give the optimizer the information it needs to make the best choice.
This tech tip is a slimmed-down version of a presentation I gave at the
SCIOUG and the COLLABORATE 08 user group conference. The full
version of the paper, slides, and sample scripts and their output can be
found at www.geocities.com/bobbyandmarielle/sqltuning.zip.
About the Author
Content of mon_kill_form.ksh

#!/bin/ksh
#
# File: mon_kill_form.ksh
#
# Description: Monitors the inactive form sessions and kills those forms
#              which are holding locks.
#
# History:
# ----------------------------------------------------------------------------
# Vivek Awasthi  1.0  03/12/2008
#
ORACLE_SID=`/usr/ucb/whoami`
export ORASID_LOW=`echo $ORACLE_SID | cut -c1-8`
ORACLE_BASE=/oraappl/od-nbs/${ORASID_LOW}
# These are setups specific to my environment -- modify as necessary
. $ORACLE_BASE/${ORASID_LOW}scr/bin/profile.${ORASID_LOW}.db  # Source database profile
. $ORACLE_BASE/${ORASID_LOW}scr/bin/.${ORASID_LOW}acc         # Password file which has apps password
MAILRECEIPIENT="<Email addresses of people separated by blank space>"
MAILFROM="<Email address of the sender>"
MAILPRO=$SCRIPTS_TOP/mailp.ksh   # Our custom email program; mailx can be used instead.
LOGFILE=$ORACLE_BASE/${ORASID_LOW}log/kill_form_session.log
LOGFILE1=$ORACLE_BASE/${ORASID_LOW}log/kill_form_session.sql
[ -f $LOGFILE ] && rm $LOGFILE
[ -f $LOGFILE1 ] && rm $LOGFILE1
err_exit() {
  echo "$*"; exit 1
}
# The details of the information stored are specific to Oracle Apps -- adapt as required
sqlplus -s apps/$APPS_PW <<EOF | grep "no rows selected"
set pages 90
set lines 150
set verify off
set feedback off
spool $LOGFILE
prompt ***************************************************************************************
prompt Details for Form session killed
prompt ***************************************************************************************
SELECT s.sid, s.serial#, p.spid, s.process, s.status, substr(s.machine,1,15) MACHINE,
       substr(to_char(s.logon_time,'mm-dd-yy hh24:mi:ss'),1,20) Logon_Time,
       s.last_call_et/3600 Last_Call_ET, s.action, s.module
FROM   GV\$SESSION s, GV\$PROCESS p
WHERE  s.paddr = p.addr
AND    s.username IS NOT NULL
AND    s.username = 'APPS'
AND    s.osuser = 'a159prod'
AND    s.last_call_et/3600 > 1
AND    s.action like 'FRM%'
AND    s.status = 'INACTIVE'
AND    s.inst_id = 1
AND    (s.sid, s.serial#) IN
       ( SELECT l.session_id, s.serial#
         FROM   gv\$locked_object l, dba_objects o, gv\$session s, gv\$process p
         WHERE  l.object_id = o.object_id
         AND    l.session_id = s.sid
         AND    s.paddr = p.addr
         AND    s.status != 'KILLED'
         AND    o.object_name NOT LIKE 'FND%'
         AND    o.object_name NOT LIKE 'WF%'
         AND    l.locked_mode IN (3,5)
         AND    o.owner <> 'APPLSYS'
       );
spool off
EOF
cnt=`cat $LOGFILE | grep INACTIVE | wc -l`
if [ $cnt -ge 1 ]; then
  cat $LOGFILE | grep INACTIVE | grep -v "^$" | \
    awk '{print "alter system kill session '\''"$1","$2"'\'';"}' > $LOGFILE1
  sqlplus "/ as sysdba" <<EOF
@$LOGFILE1
EOF
  $MAILPRO $MAILFROM "$MAILRECEIPIENT" "Form Session Killed in ${ORASID_LOW} running on `hostname`" $LOGFILE r
fi
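The awk step above is the fiddliest part to quote correctly. A standalone sketch with fabricated SID/SERIAL# input shows the statements it generates:

```shell
# Fabricated sample of the spooled SID / SERIAL# columns, piped through
# the same awk quoting pattern to produce the kill statements.
KILL_SQL=$(printf '270 13024\n598 3009\n' |
  awk '{printf "alter system kill session '\''%s,%s'\'';\n", $1, $2}')
echo "$KILL_SQL"
# alter system kill session '270,13024';
# alter system kill session '598,3009';
```

The `'\''` sequences close the single-quoted awk program, emit a literal quote, and reopen it, so the generated SQL carries the quotes that ALTER SYSTEM KILL SESSION requires.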
******************************************************************
                Details for Form session killed
******************************************************************

 SID SERIAL#  SPID  PROCESS STATUS   MACHINE  LOGON_TIME        LAST_CALL_ET ACTION                        MODULE
---- ------- ----- -------- -------- -------- ----------------- ------------ ----------------------------- --------
 270   13024 16906    16182 INACTIVE Machine2 03-24-08 10:07:05       1.4752 FRM:<USERNAME>:<Program Name> APXPMTCH
 598    3009 24067    10179 INACTIVE Machine1 03-24-08 11:27:17       1.5544 FRM:<USERNAME>:<Program Name> APXWCARD
Did You Know?

IOUG has regional training classes, like RAC Attack!, that allow
in-person, hands-on training from the experts who are doing
Oracle implementations every day.
Listing 1

#!/usr/bin/ksh
############################################################################
# Program Name : single_dr_check.ksh                                       #
# Purpose      : To capture the archive logs that are behind Prod          #
#                and contingency database and report the difference        #
# Parameters   : $1 Prod DB Name                                           #
#                $2 Cont DB Name                                           #
# Special Note : Create a hidden file .p for the password                  #
#              : This script runs on 10g Databases and above               #
# Usage        : ./single_dr_check.ksh $1 $2                               #
# Usage eg.    : ./single_dr_check.ksh chicago_prod boston_cont            #
#                where chicago_prod is the production database             #
#                and boston_cont is the contingency database               #
# Author       : Balaji Raghavan                                           #
############################################################################
export ORACLE_SID=$1
ORAENV_ASK=NO
. oraenv
export P=`cat .p`
$ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
set head off
set verify off
set echo off
col prod_seq new_v prod_log
conn sys/$P@$1 as sysdba
set termout off
select 'Prod : ', max(sequence#) prod_seq
from V\$ARCHIVED_LOG
/
conn sys/$P@$2 as sysdba
col cont_seq new_v cont_log
select 'Cont : ',
max(sequence#) cont_seq from V\$LOG_HISTORY;
select
decode(&prod_log - &cont_log, 0, 'The DR is in Sync with Prod', 'DR is behind #'),
&prod_log - &cont_log, ' Logs'
from DUAL;
EOF
exit
Sample output:

$ ./single_dr_check.ksh chicago_prod boston_cont

Prod :   132608
Cont :   132608
0 Logs
Notes:
1. The above-mentioned scripts will work on Oracle Database 10g. Starting
   with 10g, we can connect to the contingency database via the sys
   account. Connecting to the contingency database using system or other
   non-sys accounts will display the following message:
   ORA-01033: ORACLE initialization or shutdown in progress
2. oraenv should be configured and should work without any issues; $1
   should be a valid database name on the server where you run the script.
3. A hidden file .p should be created in the same directory where the
   script is located and should contain the sys password.
4. Using any account other than sys will fail, because a non-sys account
   cannot query the v$log_history view in the contingency database.
5. I have used v\$archived_log and v\$log_history. In some environments
   the reference v\$ will fail, and hence the \ should be removed from
   the script for it to work.
6. This script should be executed only from the Prod or contingency
   server, as it sets ORACLE_SID with the $1 value.
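The DECODE at the end of Listing 1 is plain sequence arithmetic. The same logic as a standalone shell function (the sequence numbers below are the fabricated sample values):

```shell
# Gap arithmetic behind the DECODE in Listing 1: compare the highest
# archived sequence on Prod with the highest applied sequence on Cont.
dr_status() {                     # $1 = prod sequence, $2 = cont sequence
  gap=$(( $1 - $2 ))
  if [ "$gap" -eq 0 ]; then
    echo "The DR is in Sync with Prod 0 Logs"
  else
    echo "DR is behind # $gap Logs"
  fi
}

dr_status 132608 132608           # prints: The DR is in Sync with Prod 0 Logs
dr_status 132611 132609           # prints: DR is behind # 2 Logs
```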
Listing 2

#!/usr/bin/ksh
############################################################################
# Program Name : multiple_dr_check.ksh                                     #
# Purpose      : To capture the archive logs that are behind Prod          #
#                and contingency database and report the difference        #
# Parameters   : $1 -- Prod DB Name to set ORACLE Variables                #
# Special Note : Create a hidden file .p with Prod:Cont:PWD values        #
# Usage        : ./multiple_dr_check.ksh $1 $2                             #
# Usage eg.    : ./multiple_dr_check.ksh chicago_prod .p                   #
#                where the .p file contains the entries for Prod, Cont     #
#                and password; please grant read permission on the .p      #
#                file to the current user only. $1 should be the SID on    #
#                the local machine; this will set the ORACLE_HOME and      #
#                other variables to invoke SQL*Plus                        #
############################################################################
# Author       : Balaji Raghavan                                           #
############################################################################
# Disclaimer                                                               #
#   The views expressed are my own and not necessarily                     #
#   those of any associated employer.                                      #
############################################################################
export ORACLE_SID=$1
ORAENV_ASK=NO
. oraenv
cat $2 | while read LINE
do
  export PROD=`echo $LINE | cut -d: -f1`
  export CONT=`echo $LINE | cut -d: -f2`
  export PWD=`echo $LINE | cut -d: -f3`
  $ORACLE_HOME/bin/sqlplus -s /nolog <<EOF
conn sys/$PWD@$PROD as sysdba
set head off
set verify off
set echo off
col prod_seq new_v prod_log
set termout off
select 'Prod $PROD : ', max(sequence#) prod_seq
from V\$ARCHIVED_LOG
/
conn sys/$PWD@$CONT as sysdba
col cont_seq new_v cont_log
select 'Cont $CONT : ',
max(sequence#) cont_seq from V\$LOG_HISTORY;
select
decode(&prod_log - &cont_log, 0, 'The DR is in Sync with Prod', 'DR is behind #'),
&prod_log - &cont_log, ' Logs'
from DUAL;
EOF
done
exit
Sample Output:

Prod Chicago :  132611
Cont Boston  :  132609
DR is behind #  2 Logs

Prod Newyork :  81752
Cont London  :  81752
0 Logs

                91291
Cont SFO     :  91243
DR is behind #  48 Logs
Notes:
1. The above-mentioned scripts will work on Oracle Database 10g. Starting
   with 10g, we can connect to the contingency database via the sys
   account. Connecting to the contingency database using system or other
   non-sys accounts will display the following message:
   ORA-01033: ORACLE initialization or shutdown in progress
2. oraenv should be configured and should work without any issues; $1
   should be a valid database name on the server where you run the script.
3. A hidden file .p should be created in the same directory where the
   script is located. Each line of the .p file should contain the prod
   database, the contingency database and the sys password, separated
   by colons (:).
4. Using any account other than sys will fail, because a non-sys account
   cannot query the v$log_history view in the contingency database.
5. I have used v\$archived_log and v\$log_history. In some environments
   v\$ will fail; the \ should be removed for the script to work.
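The .p control-file format described in note 3 splits cleanly with cut, exactly as the script's read loop does. A standalone sketch with a fabricated line:

```shell
# Parsing one line of the .p control file (prod:cont:password), as
# multiple_dr_check.ksh does inside its while loop. The line is fabricated.
LINE='chicago_prod:boston_cont:secret'
PROD=$(echo "$LINE" | cut -d: -f1)
CONT=$(echo "$LINE" | cut -d: -f2)
PW=$(echo "$LINE" | cut -d: -f3)
echo "Prod=$PROD Cont=$CONT"      # prints: Prod=chicago_prod Cont=boston_cont
```

Because the password is the last field, it must not itself contain a colon; that is a constraint of this simple format.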
About the Author
Balaji Raghavan has more than 13 years of IT experience, with more than 12 years using
Oracle products. He is currently working as a VP Consultant II at a major financial institution.
He has worked on a wide variety of projects and environments, varying from mainframe to
midrange. He can be reached at balaji.raghavans@gmail.com.
# REVISION: (YYYY-MM-DD)
# Chen Rui Qing  2008-02-12  enhanced the script to skip archival of active files.
# Chen Rui Qing  2008-05-02  added comments for IOUG Best Practices Booklet,
#                            making it more readable.
#
# Load the user profile. Normally people would export ORACLE_HOME, ORACLE_BASE,
# ORACLE_SID and so on here. But it's a good practice to define these parameters
# in the user profile, to minimize the maintenance effort.
if [ "${ORACLE_HOME}" = "" ] ; then
  . $HOME/.profile
fi
# In case you want to use a different instance, not the one in the user profile.
if [ $# -ge 1 ] ; then
  ORACLE_SID=$1
fi
APPLN=`basename $0 .sh`
APPLOG=$HOME/log/${APPLN}.`date +%Y`.log
# YYMM of the previous month; TAIST-8 is my system TZ
MONTH=`TZ=TAIST+16; date +%y%m; TZ=TAIST-8`
ARCHIVE=archive/${MONTH}
echo "\n`date` ==> ${ORACLE_SID}: \c" >> ${APPLOG}
echo "$0 started ..." >> ${APPLOG}
# Get the dump file destinations from the spfile, e.g.:
#   /u01/admin/glsdb/adump
#   /u01/admin/glsdb/bdump
#   /u01/admin/glsdb/cdump
#   /u01/admin/glsdb/udump
for DUMPDIR in `strings ${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora | grep dump | cut -d\' -f2`
do
  if [ ! -d ${DUMPDIR} ] ; then
    echo "dump directory does not exist ..." >> ${APPLOG}
    exit 8
  fi
  cd ${DUMPDIR}
  if [ ! -d ${ARCHIVE} ] ; then
    mkdir -p ${ARCHIVE} >> ${APPLOG} 2>&1
  else
    echo "${DUMPDIR}: archiving already done for ${MONTH}." >> ${APPLOG}
    continue
  fi
  # 2008-02-12 chen: do not archive a file if it is in use, e.g. the mrp.trc
  # file. Otherwise the Oracle instance would append trace messages to the
  # file in the archive directory.
  DD=`date +"%b %d"`
  for FILE in `ls -lt | grep -v "${DD}" | awk '{print $9}'`
  do
    if [ -f ${FILE} ] ; then
      mv ${FILE} ${ARCHIVE}/ >> ${APPLOG} 2>&1
    fi
  done
done
# Archive the listener.log as well
LSNR_YYMM=${ORACLE_HOME}/network/log/listener.log.${MONTH}
if [ ! -f ${LSNR_YYMM}.gz ] ; then
  cp ${ORACLE_HOME}/network/log/listener.log ${LSNR_YYMM}
  echo "\c" > ${ORACLE_HOME}/network/log/listener.log
  gzip ${LSNR_YYMM} > /dev/null 2>&1
  if [ $? -eq 0 ] ; then
    echo "listener.log archiving done for ${MONTH}." >> ${APPLOG}
  fi
fi
echo "`date` <== \c" >> ${APPLOG}
echo "$0 done." >> ${APPLOG}
exit 0
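The MONTH line derives last month's YYMM by pretending to be 24 hours behind (TZ shifted from GMT+8 to GMT-16), which yields the previous month when the job runs on the first of the month. On systems with GNU date, a sketch of a more direct computation that works regardless of the run date (assumption: GNU date is available):

```shell
# Previous month's YYMM without the timezone trick. Anchoring on the 15th
# of the current month avoids end-of-month edge cases; requires GNU date.
PREV=$(date -d "$(date +%Y-%m-15) -1 month" +%y%m)
echo "archive subdirectory: archive/$PREV"
```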
Chen Rui Qing is a DBA at Grocery Logistics of Singapore Pte. Ltd. He is an OCP in Oracle
8i, 9i and 10g. In Oct. 2001, he changed his career from mechanical engineering to IT, joining
SingTel of Singapore as a system analyst focusing on Web application development. In Dec. 2006,
he joined Grocery Logistics. He can be contacted at chenruiqing@hotmail.com.
Illustration

Set-up
Create two basic tables, EMP and DEPT, with a few rows in the
development environment, as shown below:
SQL> DESC EMP;
 Name                          Null?    Type
 ----------------------------- -------- ------------
 ID                            NOT NULL NUMBER
 NAME                                   VARCHAR2(10)
 SAL                                    NUMBER
 DEPT_ID                                NUMBER

SQL> SELECT * FROM EMP;

        ID NAME              SAL    DEPT_ID
---------- ---------- ---------- ----------
         1 Ram               100          1
         2 Anand             200          1
         3 Sunny             300          1
         4 Gokul             500          1
         5 Geeta             600          2
         6 Priya             700          2

6 rows selected.

SQL> DESC DEPT;
 Name                          Null?    Type
 ----------------------------- -------- ------------
 ID                                     NUMBER
 NAME                                   VARCHAR2(10)
Gather statistics:

exec dbms_stats.gather_table_stats('INTELE','EMP');
exec dbms_stats.gather_table_stats('INTELE','DEPT');
View the explain plan to understand how the optimizer would instruct the
database engine to fetch the data:

SQL> select e.NAME, d.NAME from emp e, dept d where e.DEPT_ID = d.ID;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=100 Bytes=1700)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (TABLE) (Cost=1 Card=50 Bytes=450)
   2    1     NESTED LOOPS (Cost=5 Card=100 Bytes=1700)
   3    2       TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=3 Card=2 Bytes=16)
   4    2       INDEX (RANGE SCAN) OF 'IDX_DEPT_ID' (INDEX) (Cost=0 Card=50)
Case 1:
Now simulate statistics as those of the production environment. This can
be done using the DBMS_STATS.SET_TABLE_STATS procedure as shown
below. In this example, we are setting the row count, block count and
average row length for the tables involved in the query, as well as the
size of the IDX_DEPT_ID index.

exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'EMP', numrows => 10000, numblks => 1000, avgrlen => 124);
exec dbms_stats.set_index_stats(ownname => 'INTELE', indname => 'IDX_DEPT_ID', numrows => 10000, numlblks => 100);
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'DEPT', numrows => 1000, numblks => 100, avgrlen => 124);
Now you can notice a change in the explain plan. We have not changed
any data, but the execution plan still changes. So developers themselves
can simulate production statistics and test their queries in the
development environment.
Case 2:
In this case, we will determine the effect of a significant increase in the
row count for both the EMP and DEPT tables in order to simulate future
growth. The same SET_%_STATS calls are used, with count values that are
100 times the previous values:

exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'EMP', numrows => 1000000, numblks => 10000, avgrlen => 124);
exec dbms_stats.set_index_stats(ownname => 'INTELE', indname => 'IDX_DEPT_ID', numrows => 1000000, numlblks => 1000);
exec dbms_stats.set_table_stats(ownname => 'INTELE', tabname => 'DEPT', numrows => 1000000, numblks => 10000, avgrlen => 124);
SQL> select e.NAME, d.NAME from emp e, dept d where e.DEPT_ID = d.ID;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8597962 Card=500000000000 Bytes=8500000000000)
   1    0   HASH JOIN (Cost=8597962 Card=500000000000 Bytes=8500000000000)
   2    1     TABLE ACCESS (FULL) OF 'DEPT' (TABLE) (Cost=2230 Card=1000000 Bytes=8000000)
   3    1     TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=2237 Card=1000000 Bytes=9000000)
Again, you can notice a change in the explain plan: the NESTED LOOPS
join was replaced by a HASH JOIN, since the optimizer concluded that the
latter was a better join method given the new statistics.
Conclusion
Statistics play a vital role in choosing the best plan for a query.
Simulating statistics as those of the production environment will help you
identify bottlenecks that could occur in higher-volume production
environments. It can also help you simulate future growth. This tip
provides you with an approach to do so.
Verification can even be built into the above, besides just capturing the
details of who executed this procedure, by checking that the user is allowed
to switch to that schema. The owner of this procedure should be a
locked-down account that has permission to alter any session. Also,
permission to execute this procedure would only be granted to a role or to
individuals. In turn, those grants should be added to an audit list to make
sure that users are only changing schemas using the proper procedures.
In SQL*Plus:

connect malcher@DB1

select sys_context('USERENV','SESSION_SCHEMA') from dual;

SYS_CONTEXT('USERENV','SESSION_SCHEMA')
---------------------------------------
MALCHER

exec change_schema('TESTING1');

To verify:

select sys_context('USERENV','SESSION_SCHEMA') from dual;

SYS_CONTEXT('USERENV','SESSION_SCHEMA')
---------------------------------------
TESTING1

Now the script that needed to be run as that schema owner can be executed.
This is just a simple setup to avoid needing the schema passwords to
migrate code from test to production, redefine tables, or do other things
that might need to be executed as the schema owner outside of the
application. There are controls that can be set up around these procedures
to restrict who has access to execute them, to capture what happens after
a user changes to a different schema, and even to add a check that the
user is allowed to switch to that schema. At the very least, these simple
steps can track a user that has switched over to a different schema, and
the password is not given out to those who shouldn't have it.
About the Author
Identify
1. Understand the Organization
The first step in the process is to understand the organization's roles and
responsibilities. The DBAs will have the details and the technical ins and
outs of the databases. Then, understand any configuration standards
used across all technical environments.
2. List the Databases, Operating Systems and Hardware
Begin collecting the detailed database information. Start by gathering
the list of database instances, versions, operating systems and hardware.
Then, identify the database work requests and resources with which
the DBAs interact. This will lead the conversation into who are the
Technical Application Leads and business contacts for each application.
Note: Each Oracle database instance can contain multiple schemas
and/or applications. This can lead to working with different business
areas regarding the same database.
Assess
1. Review Vendor Plans
At this point, it is essential to review all of the gathered information
and confirm accuracy and comprehensiveness. The best method is
to have each DBA, Technical Application Lead and Business Owner
sign-off. It is essential that each application vendor using the Oracle
databases be evaluated for support and/or compliance. There are several
methods of investigation: Web site and product literature research
and/or vendor meetings.
2. Understand and Define the Application Future
Next, it is essential to understand the future of the (business and)
applications. The majority of the information gathered will come from
identifying the applications (and releases) and the projects in progress.
Plan
1. Outline Major Milestones
Begin the planning effort by outlining the internal and external
major milestones. This will draw from the database lists, applications
(releases), dependencies and constraints. This will provide the project
plan skeleton structure for further detailed planning activities. Work
with all involved resources to develop the high and low level project
plans. Receiving feedback from the business resources, technical
application leads and DBAs provides for better upgrade plans.
2. Plan the Projects
After the major milestones have been defined, it is necessary to add all
of the steps for each database upgrade. While adding the task detail,
add constraints and dependencies to the project plan.
During this activity, it is necessary to schedule upgrades with other
organization initiatives to gain testing and organization synergies. Also,
this step should synchronize database projects that need to be upgraded
simultaneously.
COLLABORATE 09
MAY 3-7, 2009 | ORANGE COUNTY CONVENTION CENTER WEST | ORLANDO, FLORIDA

"I think this is the best conference around. Since it's by users and for users, it's less
biased than some of the other conferences... There is a wide variety of technical
material to please anyone. I highly recommend it. If you can only go to one event in a
year, this is the one to go to."* Theresa Stone, Principal Support Analyst, GlaxoSmithKline

*Information derived from 2008 post-conference evaluation.
Presented by:
www.ioug.org
www.oaug.org
www.questdirect.org