
Tuning Overview

List the roles associated with the database tuning process

The DBA, the application developer, management, the system administrator, and the network administrator.

Define the steps associated with the tuning process

Based on the ROI (Return on Investment) outline, here are the tuning recommendations in order:
A. Do a proper logical design: In practice this often means more tables with fewer rows per table. In turn, this means the capability for faster searching. The fewer rows that are stored and must be searched through, regardless of the search technique, the quicker you'll be able to find what you're looking for.
B. Do a proper physical design: Separate the datafiles (tablespaces) onto different disks to avoid I/O contention; use striping.
C. Redesign if necessary: Sometimes the best way to correct the database without extensive tuning efforts is a re-analysis and redesign. Consider redesign when the initial design was hurried, or when multiple databases need to be integrated.
D. Write efficient application code: If some SQL code uses an inefficient search or sort routine, despite the best efforts of the Oracle optimizer, the application will run slowly.
E. Rewrite code if necessary: If application code efficiency comes into question and resources and management permit, re-analyze and rewrite the code.
F. Tune the database memory structures: Oracle can offer substantial improvements through the tuning of its database buffer cache. The shared pool also caches SQL code via the library cache component, and caches the data dictionary through, obviously, the data dictionary cache. The redo log buffer is a separately tunable area in the SGA.
G. Tune OS memory structures if necessary: The SWAP area can become a bottleneck, since it functions as OS temporary storage, user temporary storage, and the OS virtual memory backing store. The SA and DBA must work together so that the OS provides enough shared memory and semaphores to give the Oracle processes enough breathing room to operate efficiently.

H. Tune database I/O: Database I/O is affected by both the RDBMS and the OS, of course, but tuning the I/O means relocating logical and physical structures to reduce contention. If this point in tuning is reached, you will have already tuned the database buffer cache. Now the main focus is simply to adjust the physical design: you do more physical redesign, if necessary, with I/O exclusively in mind.
I. Tune OS I/O if necessary: The OS fulfills all read and write requests by all processes, including the Oracle background processes DBWR and LGWR. An OS typically buffers these requests, performs the reads and writes, and then returns the acknowledgment and data back to the process upon completion.

File systems are data structures that contain metadata about the files they manage, such as the location of each file's starting sector address, its sector length, its directory tree location, and its attributes (permissions, size, timestamps, etc.). In UNIX, file systems have their own logical block sizes, which correspond to something greater than or equal to the physical block size (512 bytes), usually 8KB by default. The Oracle block size should be at least 8KB or a multiple of it, such as 16KB.
Other important OS and Oracle I/O tuning issues include: read-ahead capabilities, asynchronous I/O, multiblock reads, RAID stripe sizes, disk geometry issues, controller issues, and many more.

J. Tune the network if necessary: A saturated network can cancel out improvements made by database tuning.
K. Tune the clients if necessary
L. Consider more exotic solutions: Oracle Multithreaded Server (MTS), transaction processing (TP) monitors, Oracle Parallel Query and other parallel capabilities, Oracle's clustering capabilities, Oracle's bitmapped indexing, MPP machines, solid state disks, memory-resident (RAM) disks, hardware accelerators, and queuing systems.
Identify tuning goals

There are different ways of determining the goals of a performance tuning effort: consider the application type, and also sample the database on various quantitative measures to further define the tuning goals:

a. Throughput: Work per unit time, as measured by transactions per second; higher is obviously better.
b. Response time: The time it takes for an application to respond, measured in milliseconds or seconds; lower is better.
c. Wall time: The elapsed time a program takes to run; lower is better.

In most systems throughput and response time run counter to one another as tuning goals. If response time is high (bad), throughput might be high (good). If throughput is low (bad), then response time might be low (good).

Typically, OLTP systems want low response time or high throughput, in terms of the application's transaction needs. A DSS wants low response time, and a batch system normally wants lower wall times.

Always consider the two central tuning goals:

1. Maximize your return on investment: Invest your time and effort wisely by working on the problems most likely to yield the most improvement.
2. Minimize contention: Bottlenecks are characterized by delays and waits; eliminate or reduce these whenever possible.

Also consider these general-purpose tuning goals:

1. Minimize the number of blocks that need to be accessed; review and rewrite code as necessary.
2. Use caching, buffering, and queuing whenever possible to compensate for the electromechanical disadvantage (memory is faster than disk); prefetch.
3. Minimize data transfer rates (the time it takes to read or write data); fast disks, RAID, and parallel operations help do this.
4. Schedule programs to run as noncompetitively as possible; they might run concurrently and yet still be noncompetitive for the most part.
List the strengths of different database configurations
for recoverability.

The two main configurations for running a database are ARCHIVELOG and NOARCHIVELOG. Assuming a recovery operation is needed, the following scenarios could arise. If the database is in ARCHIVELOG mode there are two possibilities:

1. Complete recovery: The database is restored and recovered through the application of ALL redo information (this includes both the online and archived redo log files) generated since the last backup. This type of recovery is normally performed when one or more data files or control files are damaged; the damaged files are recovered using all the redo information generated since the last full backup.
2. Incomplete recovery: In this scenario the database is restored and recovered through the application of only SOME of the redo information generated since the last backup. This type of recovery is normally used when an online redo log file is lost due to hardware failure, or when a user requires recovery to a certain point in time. In simpler terms, an incomplete recovery only rolls back certain transactions without the need to involve ALL the redo log files.

If the database is in NOARCHIVELOG:

You benefit from not having to save all the online redo log files; on heavily accessed systems this type of configuration alleviates disk-space usage, because in ARCHIVELOG mode every online redo log file is eventually backed up, and these archived redo log files can easily fill up a WHOLE disk (10-12 GB) in a matter of hours. However, the disadvantage of running this configuration is that in the event of a failure your only means of recovery is your last backup tape; Oracle cannot help in this case because it does not have a history of previous redo log files, the ones that are ARCHIVED in ARCHIVELOG mode.

Oracle Alert Log and Trace Files


Describe the location and usefulness of the Alert Log

The alert log records the commands and command results of major events in the life of the database, e.g., tablespace creation, redo log switches, recovery operations, and database startups.

This file is written to the directory specified by the parameter BACKGROUND_DUMP_DEST in the init.ora file.
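As a quick check, you can query the current setting from the database itself (a minimal sketch; the directory value varies by installation):

select value
from v$parameter
where name = 'background_dump_dest';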

Describe the location and usefulness of the background and user process dump files

When a background process is terminated or abnormally aborts an operation, it usually produces a trace file containing the error message causing the failure, dumps of the current process stacks, currently executing cursors, and much other information pertinent to the problem. A background dump file is saved to the directory named by the BACKGROUND_DUMP_DEST parameter in the init.ora file; these are commonly known as "detached process dumps". A user dump file is saved to the directory specified by the USER_DUMP_DEST parameter in the init.ora file; these are of course known as "user process dumps".

It is important to collect these files and forward them to Oracle Support, as they may help resolve the problem. The background processes also write an entry to the alert.log file for the database.

Utilities and Dynamic Performance Views

Collect analysis through:

1. Available dynamic troubleshooting and performance views.

All the Oracle products, such as Server Manager and Enterprise Manager, rely on the V$ dynamic performance views. These views are grouped into instance, database, memory, disk, user, session, and contention aspects of performance, and they are based on the internal X$ base tables. Query the table V$FIXED_TABLE to get a listing of the available V$ views.
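For example, a minimal sketch of such a listing query:

select name
from v$fixed_table
where name like 'V$%';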

The V$ views are called dynamic because they are populated at instance startup and are truncated at shutdown. The V$ views also form the basis of the standard Oracle tuning scripts, UTLBSTAT/UTLESTAT, which query them using SQL scripts and format the output that is returned. Therefore, if UTLBSTAT/UTLESTAT do not give you what you want, you can use Server Manager and the V$ views to either supplement or supplant those utilities.

2. The UTLBSTAT/UTLESTAT report output

This is the most commonly used diagnostic utility. The DBA runs UTLBSTAT before running his or her application or simulation. The utlbstat script builds the beginning tables necessary to collect and store the performance data. Then the DBA runs utlestat, which builds the ending tables and the difference tables, computes the performance differences (deltas) between the utlbstat run and this utlestat run, formats the output data (including comments and some explanations), and writes it to the default file, report.txt. This file must be interpreted by the DBA, either directly or indirectly (by taking some of the output values given and using them as inputs into simple formulas).

Interpretation of this data means comparing these final figures to more or less established guidelines, keeping the ROI strategy in mind, and categorizing the findings as acceptable or not for that given area of performance.
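A typical run looks like the following sketch (it assumes the standard $ORACLE_HOME/rdbms/admin script location, which can vary by platform):

SVRMGR> @$ORACLE_HOME/rdbms/admin/utlbstat.sql
-- run the application or simulation for a representative interval
SVRMGR> @$ORACLE_HOME/rdbms/admin/utlestat.sql
-- report.txt is written to the current directory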

3. Oracle wait events

4. Appropriate Enterprise Manager tuning tools

The Enterprise Manager "performance pack" is extremely useful. The components in the performance pack help analyze your logical and physical design. They also monitor locks, a variety of performance issues (throughput, redo, rollback, I/O, memory, etc.), the top user sessions with regard to resource consumption, your tablespace storage (data files, fragmentation, etc.), and application events through tracing.

Define the latch types

Latches are used to control access to shared structures. Latches are implemented using semaphores at the OS level. Latches are locks that are held for a very small amount of time.
Before a process gets access to a shared structure protected by a latch, it first has to acquire the latch. This latch could currently be free (that is, no other process is holding the latch), in which case the process gets the latch immediately. The process will hold the latch for the period of time it requires and will then relinquish it. In case the latch is already held by another process, the process has two options:

1. Acquire the latch in immediate mode: If this occurs and the latch is already being used by another process (as is the case), the process will not wait to acquire the latch; it will continue by taking a different action.
2. Acquire the latch in willing-to-wait mode: If the process fails to acquire the latch on the first try, it will wait and try again. If the system has multiple CPUs, this unsuccessful process will start spinning on the latch and trying to acquire it. The number of times the process spins on the latch is defined by the parameter spin_count in the init.ora file. With every spin it tries to acquire the latch; if it does not get it, it keeps trying until the spin_count value is reached. After that the process will go to sleep for a specified amount of time, wake up again, and repeat the aforementioned cycle.

To view latches, the following views are used:

1. V$LATCH: This view contains all the important statistics related to the performance of the various latches on the system.
2. V$LATCHHOLDER: If the system is currently having latch contention problems, this view can be used to determine which session is currently holding the latch.
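For example, to list the latches currently showing misses (a minimal sketch):

select name, gets, misses, sleeps
from v$latch
where misses > 0;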

The important latches:

There are approximately 52 types of latches in an Oracle installation. However, the following latches are of significant importance in any tuning job.

1. cache buffers lru chain latch: This latch is responsible for protecting the access paths to the db block buffers in the buffer cache. The buffer cache, whose size is defined by the parameter db_block_buffers, resides in the SGA and contains a cached copy of data read from the data files.

The buffer cache is organized into two lists: the dirty list and the LRU list. The dirty list contains the buffers that have been modified but not yet written to disk. The LRU list is comprised of the pinned buffers, the dirty buffers that have not yet been moved to the dirty list, and the free buffers. The pinned buffers are buffers that are currently being accessed by other processes. The dirty buffers are buffers that are to be written to disk; they subsequently get moved to the dirty list. The free buffers are the buffers that are available for use.

When a process needs to read data from the disk that is not already in the cache, it needs a free buffer into which to read the new data. It scans the LRU list to check for free buffers. If there are excessive requests for free buffers in the buffer cache, there will be high access to the LRU list, causing contention for the cache buffers LRU chain. Contention for this latch can be minimized with the parameter db_block_lru_latches in the init.ora file; by increasing this parameter, the contention for this latch can be reduced. The maximum value for this parameter is double the number of CPUs.

The basic reason for contention for this latch is a high request for free buffers. You can optimize the SQL statements to minimize the high demand for free buffers, or increase the db_block_buffers parameter to increase the number of free buffers available on the system. NOTE: The SGA must fit into contiguous chunks of real memory, so if the buffer cache is enlarged, you must ensure there is enough contiguous memory available on the system to service the increase.
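To check this latch directly (a minimal sketch):

select name, gets, misses, sleeps
from v$latch
where name = 'cache buffers lru chain';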

2. redo allocation AND redo copy latches: These latches control access to the redo log buffer. When a process needs to write to the redo log buffer, one of these latches must be acquired by the process. If the size of the redo information to be written to the redo log buffer is less than the log_small_entry_max_size parameter, the process uses the redo allocation latch. If the size is greater than this value, the entry is copied using the redo copy latch.

A quick way to check whether there is any contention on the redo log buffer is to check whether there are any waits associated with writing to the redo log buffer. This can be done using the V$SYSSTAT view:

select name, value
from v$sysstat
where name = 'redo log space requests';

The size of the redo log buffer will have to be increased if the number of waits is too high.

Contention for the redo allocation latch: Contention for the redo allocation latch can be reduced on a multi-CPU system by forcing the processes to use the redo copy latch instead. Because there can be multiple redo copy latches, the copy is done more efficiently. The number of redo copy latches is defined by the parameter LOG_SIMULTANEOUS_COPIES. The maximum number of available latches on the system is double the number of CPUs. For a single-CPU system, this value is 0, and the redo allocation latch is used. If there is contention for the redo allocation latch, the value of log_small_entry_max_size can be decreased from its current value so that the redo copy latch is used.

Contention for the redo copy latch: If the system is facing contention for the redo copy latch, it can be decreased by either increasing the value of log_small_entry_max_size (so that the redo allocation latch is used) or increasing the value of log_simultaneous_copies (so that it increases the number of redo copy latches available).
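Both latches can be monitored from V$LATCH as well (a minimal sketch):

select name, gets, misses, immediate_gets, immediate_misses
from v$latch
where name in ('redo allocation', 'redo copy');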

The init.ora parameter log_entry_prebuild_threshold can be increased so that the data written to the redo log buffer is grouped and written out together. By increasing this parameter, a number of write operations can be grouped so that they are written out in one operation, thereby reducing requests for these latches and thus contention.

3. library cache latch: This latch is primarily concerned with the control of access to the library cache. The library cache includes the shared SQL area, private SQL areas, PL/SQL procedures and packages, and other control structures. The shared SQL area contains SQL statements that are shared among multiple sessions. By increasing the sharing of these SQL statements, contention for this latch can be avoided. Contention for this latch occurs when there is a lot of demand for space in the library cache. Very high parsing on the system, and heavy demand to open new cursors because of low sharing among processes, are some of the common causes of contention on this latch.

Contention for this latch can be avoided by using code that can be shared by multiple sessions. This can be done by typing the code with the same conventions (all capitals for DML keywords, or just the first letter capitalized); to the parsing engine, even a one-letter difference in case will generate a different hash value. Even putting more spaces in a select statement causes the hash to be different.
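For example, the following two statements are functionally identical, yet they hash differently and occupy two separate library cache entries (the table name is illustrative):

select sal from employee;
SELECT SAL FROM EMPLOYEE;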

Using bind variables: Using bind variables prevents multiple copies of the same select statement from being created in the shared pool, e.g.:

select sal from employee where emp_id = :emp_id;
Pinning frequently used objects, like procedures and packages: The advantage is that these objects will never be flushed out of the shared pool. These objects can be identified by:

select name, executions
from v$db_object_cache
where executions > <threshold value>
order by 2 desc;
And in order to pin these objects in the shared pool:

execute dbms_shared_pool.keep('<object name>', 'P');
To check the objects in the shared pool that are not pinned:

select name, type, kept, sharable_mem
from v$db_object_cache
where kept = 'NO'
order by sharable_mem desc;

Tuning the Shared Pool

Tune the library cache and the data dictionary cache

The shared pool is a special type of buffer: a cache. Whereas a buffer is a "dumb" mechanism, simply providing temporary storage for data on its way between fast memory and slow disk, a cache is a "smart" mechanism, retaining memory as to whether it already has a piece of information, or part of it, so that it can avoid as many unnecessary trips to the disk as possible. When an I/O request is made, the cache checks to see whether it already has the data in memory. If it does, it answers the request itself, returning the requested data; this is known as a hit. If it does not, a trip to the disk is warranted; this is known as a miss. For most cache mechanisms a 90+% hit ratio is very good performance. Caches are generally managed by the LRU (Least Recently Used) algorithm, which ensures that, at any given time, the MRU (Most Recently Used) data is held in the cache, and the LRU data is aged out.

When Oracle parses a SQL statement, it allocates a SQL area in the library cache for it by applying a mathematical formula to the alphanumeric text of the SQL statement and using the result to store (and later find) it in the cache; in other words, it uses a hash function. In order for the statement to be reused by another, the statements must be identical. For example, if an extra space or a different case letter is used in a query, then the hash values will not match. Another problem with hash matching is the use of literals in the query; it is recommended that the query use bind variables in order for the cached statement to be reused. A bind variable is normally a host 3GL variable, such as a C integer. The value of the variable can then take on any specified integer, with the benefit of reusing the statement in the library cache. (In the case of DSS the use of bind variables is limited.)
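For example, the first two statements below each get their own library cache entry, while the bind-variable form is shared across values (table and column names are illustrative):

select * from orders where order_id = 1001;
select * from orders where order_id = 1002;
select * from orders where order_id = :oid;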

The shared pool is composed of the library cache and the data dictionary cache.

1. The library cache: Contains all the recently executed SQL statements and PL/SQL objects, such as stored procedures, functions, packages, triggers, and PL/SQL blocks.
2. The data dictionary cache: This portion of the shared pool is sized by the parameter SHARED_POOL_SIZE, and that is the ONLY (indirect) way to size this cache. The following are the objects held in the SYS and SYSTEM schemas: X$ tables, V$ views, DBA_ views, and USER_ views. Sizing, diagnosing, and tuning the library cache so that it performs well should have the side effect of helping the performance of the data dictionary cache, because they both coexist in the shared pool; they are not separately configurable. There are a couple of ways to measure data dictionary cache performance. One is to query the V$ROWCACHE view:

select sum(getmisses)/sum(gets) "DC_MISS_RATIO"
from v$rowcache;

The other way is to use the data dictionary section of report.txt (UTLBSTAT/UTLESTAT): compute the sum of all the GET_MISS values and divide by the sum of all the GET_REQS values to get a similar DC_MISS_RATIO.

If either of these two methods yields a DC_MISS_RATIO > .15, increase the SHARED_POOL_SIZE (and retest).
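For example (the value is purely illustrative; size it against available real memory):

# init.ora
shared_pool_size = 52428800        # 50MB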

Measure the shared pool hit ratio

select gethitratio
from v$librarycache
where namespace = 'SQL AREA';

Size the shared pool appropriately

To improve the performance of the library cache:

1. Minimize unnecessary parse calls from within applications: Parsing is CPU-intensive. Because caching and buffering are involved, it is also memory-intensive as a by-product. Cursor opening and closing should be carefully placed in the application to facilitate reuse of the private SQL area for multiple SQL statements. The parameter OPEN_CURSORS in init.ora might need to be modified in order to allow for sufficient allocation of cursor space (private SQL areas). To determine whether your application might be inefficient in this regard, run SQL TRACE/TKPROF and examine whether the count column for Parse is near the value for Execute (or Fetch). If so, the application is reparsing for almost every execute (or fetch); see the sketch below.
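A minimal sketch of gathering that evidence (the trace file name is illustrative and varies by platform and process ID):

alter session set sql_trace = true;
-- run the application code, then from the OS shell:
-- tkprof ora_1234.trc parse_report.txt sys=no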
2. Maximize reuse of those statements that must be parsed: As mentioned, SQL statements must be identical to be reused. Establish some conventions, like "always code SQL statements in uppercase". Furthermore, except for DSS applications, use bind variables when appropriate.
3. Pin frequently used program objects in memory: In Oracle, a cursor, trigger, procedure, or package can be held in memory using a special shared pool package, DBMS_SHARED_POOL. To create this package you must run the dbmspool.sql script. You might also need to run the script prvtpool.sql (check the version on your platform to see if this script needs to be run). Run the(se) script(s) as SYS.

EXECUTE DBMS_SHARED_POOL.KEEP('<object name>');
To unpin the object:

EXECUTE DBMS_SHARED_POOL.UNKEEP('<object name>');

To determine if an object was pinned:

select substr(name,1,25), kept
from v$db_object_cache;

4. Minimize fragmentation in the library cache: Unless your application is guarded against fragmentation, you will commonly receive ORA-04031 errors (not enough contiguous free space). One way to avoid this fragmentation is by pinning frequently used objects in memory. For less frequently used objects you can use the init.ora parameter SHARED_POOL_RESERVED_SIZE: you set aside a shared pool "reserved area" for your large objects. Essentially, you guarantee that your necessary large objects will find space. Set SHARED_POOL_RESERVED_SIZE to what would be the maximum number of bytes of your largest objects simultaneously loaded. To determine the size of a particular object you want to include in the reserved area, use:
select substr(name,1,25) "NAME", sharable_mem
from v$db_object_cache
where name = '<object name>';

Also, to determine the size you need to set for SHARED_POOL_RESERVED_SIZE, use:

select sum(sharable_mem)
from v$db_object_cache
where sharable_mem >= <SHARED_POOL_RESERVED_MIN_ALLOC>;
In order to do the previous you need to have an idea of what constitutes a "large object". So take the following steps:

1. Set SHARED_POOL_RESERVED_MIN_ALLOC to your specification.
2. Set SHARED_POOL_RESERVED_SIZE to the output of the last query, plus a fudge factor of 10%.
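Putting the reserved-area parameters together, a sketch with illustrative values:

# init.ora
shared_pool_size = 52428800              # 50MB total shared pool
shared_pool_reserved_size = 5242880      # ~10% reserved for large objects
shared_pool_reserved_min_alloc = 5000    # objects of 5000+ bytes may use the reserve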

You can also set CURSOR_SPACE_FOR_TIME to TRUE to prevent SQL areas associated with cursors from aging out of the library cache before they have been closed by a program. NOTE: Do not change the value of CURSOR_SPACE_FOR_TIME to TRUE if RELOADS in V$LIBRARYCACHE always shows a 0 value, if you are using Oracle Forms or SQL*Forms, or if you use dynamic SQL.

Pin objects in the shared pool

Rather than reserving space in the shared pool, you may wish to selectively "pin" packages in memory. Pinning packages in memory immediately after starting the database will increase the likelihood that a large enough section of contiguous free space is available in memory. The KEEP procedure in the DBMS_SHARED_POOL package designates the packages to pin in the shared pool.

alter procedure APPOWNER.ADD_CLIENT compile;
execute DBMS_SHARED_POOL.KEEP('APPOWNER.ADD_CLIENT','P');

Pinning of packages is more related to application management than application tuning, but it can have a performance impact.

Tune the shared pool reserved space

Rather than using the large pool, you can reserve an area within the shared pool for large objects via the SHARED_POOL_RESERVED_SIZE parameter in init.ora. The "reserved size" is set aside for the shared pool entries of large objects (such as large packages).
Describe the User Global Area (UGA) and session memory considerations
Configure the Large Pool

The large pool in Oracle is used when Oracle requests large contiguous areas of memory within the shared pool (such as during use of the multithreaded server). To create a large pool, set a value (in bytes) for the LARGE_POOL_SIZE parameter in init.ora; by default this pool is not created. You can specify the minimum allocation size for an object in the large pool via the LARGE_POOL_MIN_ALLOC parameter in init.ora. This parameter defaults to 16KB and is obsolete in Oracle 8i.
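For example (the size is illustrative):

# init.ora
large_pool_size = 20971520        # 20MB large pool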

Tuning the Buffer Cache


Describe how the buffer cache is managed

The single most important tuning change you can make to improve the performance of your Oracle system is to properly size the database buffer cache. The database buffer cache is the cache structure in the SGA that holds in-memory copies of the Most Recently Used (MRU) Oracle data blocks. The TWO parameters that determine its size are:

1. DB_BLOCK_SIZE: The size of an Oracle block. This can range from 2KB (2048 bytes) to 32KB (32768 bytes). For performance, generally the higher the better. If your database has already been created with a relatively small block size, consider rebuilding it if that is feasible for your application:
1. Do a full export of your database (if feasible)
2. Shut down the instance
3. Increase DB_BLOCK_SIZE in your init.ora
4. Re-create the database and start the instance
5. Re-import the data as SYS
2. DB_BLOCK_BUFFERS: The number of Oracle blocks to be held in memory; each buffer equals one block. This should be sufficiently high to yield an efficient hit ratio, but not so high as to cause OS paging. The last thing you want is the SGA being paged in and out of memory by the OS. Managing the buffers counted by DB_BLOCK_BUFFERS is Oracle's job (it pages them in on demand), and you don't want the OS paging underneath Oracle! Hence your database buffer cache, along with the shared pool, should fit comfortably in real (available core) memory. You must also be careful to take into consideration non-Oracle application memory requirements and the OS requirements. The size of the database buffer cache is: DB_BLOCK_BUFFERS x DB_BLOCK_SIZE.
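A quick worked example (values are illustrative):

# init.ora
db_block_size = 8192          # 8KB blocks
db_block_buffers = 10000      # 10,000 buffers
# buffer cache = 10000 x 8192 bytes = approximately 80MB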

The term database buffer cache is somewhat of a misnomer, in that the cache is a special kind of buffer; hence "buffer cache" is actually redundant, not to mention a little confusing. The real point to put across is that it caches Oracle blocks. It is different from the shared pool in that it caches data and not programs.

Oracle always reads Oracle blocks into the database buffer cache before passing them on to user processes. A user process, or application, always reads from (and writes to) the database buffer cache. The following are the steps in the buffer management of an I/O request:

a. The user selects data (requests a block).
b. The server looks in the database buffer cache for it.
c. If it finds it (through the hash function) in the LRU list, it returns it.
d. If it doesn't find it, it reads the block in from the datafile on disk and attaches it (using the hash function) to the MRU or LRU end of the LRU list as appropriate.
e. If the user does not modify it, it's finished.
f. If the user does modify it, DBWR writes the block (dirty buffer) back to its location in the datafile on disk.

Indexes are accessed one block at a time. Full table scans can have multiple blocks read with one request. Set the number of blocks (batch size) by setting: DB_FILE_MULTIBLOCK_READ_COUNT = <number of blocks>.

Buffers can be free (clean), dirty, current, or read-consistent (rollback).

A free buffer is one that has yet to be used since instance startup, or one that has been used and is now available.

A dirty buffer is one that has been used, but has not yet been flushed, or written out, by DBWR on a checkpoint.

A current buffer is one used in service of an INSERT, UPDATE, or DELETE. By their very nature, current buffers more often than not become dirty.

Read-consistent buffers serve SELECT statements and rollback. Blocks read in service of full table scans are placed at the LRU end of the LRU buffer chain. However, you can still cache whole tables on the MRU end of the chain.

Calculate and tune the buffer cache hit ratio


Because memory I/O is several orders of magnitude faster than disk I/O (nanoseconds vs. milliseconds), you want I/O requests to be satisfied by memory as often as possible. You want blocks, on average, to be fetched 90% of the time from the database buffer cache (data block buffer cache) versus the datafile. You also want to minimize latch contention. The LRU buffer chain, or list, is locked through latch mechanisms, just like those throughout the Oracle kernel and the library cache (in the shared pool). As with any latch approach, you must have enough of them, because latches (or spin locks) contain no queuing mechanisms as semaphores do.

One way to calculate the hit ratio is with:

select 1-(p.value/(d.value+c.value)) "CACHE HIT RATIO"
from v$sysstat p, v$sysstat c, v$sysstat d
where p.name = 'physical reads'
and d.name = 'db block gets'
and c.name = 'consistent gets';

Here "physical reads" is the number of blocks read from disk, "db block gets" is the number of blocks read from current copies of blocks in the cache, and "consistent gets" is the number of read-consistent (rollback) copies of blocks read in the cache. It is summed up as 1 - (physical reads / logical reads).

Tune the buffer cache hit ratio by adding or removing buffers

If the database buffer cache hit ratio is less than .90, increase db_block_buffers and rerun the query, or utlbstat.sql/utlestat.sql.

Increasing this parameter requires an instance shutdown and startup, but it is not as complicated as changing db_block_size. There is also another option that simulates the effect of adding more buffers to the database. First the instance needs to be shut down; then set the parameter DB_BLOCK_LRU_EXTENDED_STATISTICS = <number of extra buffers you would want>. Once the instance is started up again, let your application run for a reasonable amount of time, just as you would for utlbstat.sql/utlestat.sql. The table X$KCBRBH then contains the statistics showing how many additional cache hits the extra buffers would have yielded.
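For example, to estimate the hits that 1,000 additional buffers would have produced (run as SYS; a minimal sketch against this internal table):

select sum(count) "ADDITIONAL CACHE HITS"
from x$kcbrbh
where indx < 1000;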

Create multiple buffer pools


Size multiple buffer pools
Monitor buffer cache usage
Make appropriate use of table caching
Diagnose LRU latch contention
Avoid free list contention
Tuning the Redo Log Buffer
Determine if processes are waiting for space in the
redo log buffer
Size the redo log buffer appropriately
Reduce redo operations
Database configuration and I/O issues
Diagnose appropriate use of SYSTEM, RBS, TEMP, DATA and INDEX tablespaces.
Use locally managed tablespaces to avoid space
management issues
Detect I/O problems
Ensure that the files are distributed to minimize I/O contention, and use appropriate types of devices.
Use striping where appropriate
Tune checkpoints
Tune DBWn process I/O

The parameter LOG_CHECKPOINT_TIMEOUT specifies an upper bound on the time a buffer can remain dirty in the cache before DBWn must write it to disk. If you set LOG_CHECKPOINT_TIMEOUT to 60, then no buffer remains dirty in the cache for more than 60 seconds. The default value for LOG_CHECKPOINT_TIMEOUT is 1800.

Using Oracle Blocks Efficiently


Determine the appropriate block size
Optimize space usage within blocks
Detect and resolve row migration
Monitor and tune indexes

Optimizing Sort Operations


Identify the SQL operations that require sorting

Your first and best strategy is to avoid sorts when possible, because they consume a substantial amount of CPU time, memory, and disk. However, when this cannot be done, which is most often the case, sorting needs to be tuned.

Sorting can take place fully in memory, and that is the desired case. However, it is more likely to spill over to disk sorting, especially with large tables, which can be extremely time-consuming despite even the best physical design of a database.
The second best strategy is to sort in memory as much as possible and sort on disk only when absolutely necessary. Of course, this implies allocating sufficient temporary disk space (in effect, the TEMP tablespace), and separating this space physically from the rest of the Oracle datafiles, rollback segments, and redo logs.

Who generates sorts? The CREATE INDEX statement obviously requires a sort operation on the index key to enable the creation of the B*tree structure. ALTER INDEX ... REBUILD likewise requires the same sort. However, you can choose to sort the data at the OS level and then create the index with the NOSORT option. This doesn't usually buy you anything, unless you happen to already have a sorted copy of the data, because you are only trading RDBMS sorting for OS sorting, which isn't much of a trade. Other options include using a fast, third-party sorting utility, such as SyncSort, or using Oracle's Parallel Query Option (PQO) to use SQL*Loader and load the data in parallel, unsorted.

ORDER BY and GROUP BY usually require sorts. However, an ORDER BY on an indexed column uses the already sorted index in most circumstances. To verify this action, use the EXPLAIN PLAN command.
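A minimal sketch (it assumes a PLAN_TABLE created with utlxplan.sql; table and column names are illustrative):

explain plan set statement_id = 'ord1' for
select ename from emp order by ename;

select operation, options, object_name
from plan_table
where statement_id = 'ord1';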

The DISTINCT qualifier must use a sorting technique (again, unless it is used on a column that already has a unique index) to eliminate duplicate column values. Likewise, a UNION must eliminate duplicate rows. (However, a UNION ALL by definition allows duplicate rows, so because it doesn't eliminate duplicates, it doesn't require sorting. If primary keys are enforced on the two UNIONable tables, there won't be any duplicates to start with, so UNION ALL is a recommended substitute for the UNION operation; see the example after the list below.) INTERSECT and MINUS require some duplicate elimination, though they experience nowhere near the burden of a UNION operation. Similarly, IN and NOT IN can require sorting, especially if they are in support of nested subqueries. A join operation requires sorts of whatever tables do not already have existing indexes on the join key. The more usual situation, though, is for tables to be joined on primary keys (already having unique indexes), thereby negating the need for sorting any of the tables. The following list sums up the SQL commands or operators that can trigger sorts:

a. CREATE INDEX, ALTER INDEX ... REBUILD
b. ORDER BY, GROUP BY
c. DISTINCT
d. UNION, INTERSECT, MINUS
e. IN, NOT IN
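For example, the UNION ALL substitution mentioned above (table names are illustrative):

select cust_id from east_orders
union all
select cust_id from west_orders;
-- no duplicate elimination is performed, so no sort is triggered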

The primary parameters affecting sort operations are:

1. SORT_AREA_RETAINED_SIZE: The maximum amount of memory to be used for an in-memory sort.
2. SORT_AREA_SIZE: The maximum amount of memory to be used for an external disk sort operation, involving the allocation of a temporary segment.

If a sort operation requires more than SORT_AREA_RETAINED_SIZE for an in-memory sort, it attempts to perform the sort within SORT_AREA_SIZE as an external disk sort, allocating a temporary segment in the process. If the sort operation requires further memory, it splits the sort burden into multiple sort runs and allocates multiple temporary segments for that purpose. The server process sorts one segment at a time and returns the merger of the sorted segments as the result. These memory allocations are not stored in the SGA shared pool, except when using MTS; instead, they are part of the UGA. If you are using MTS, they are part of the SGA shared pool, because the UGA is relocated there anyway.

Using EXPLAIN PLAN, you can see that many SQL statements can require multiple sorts within their execution plans. The sort that is currently executing is known as the active sort. A join sort is a sort in support of a join operation. Any active sort requires SORT_AREA_SIZE. Any join sort requires SORT_AREA_RETAINED_SIZE. These settings hold true only for the dedicated server. For PQO, each parallel query server requires SORT_AREA_SIZE. However, the two sets of parallel servers can be working at once, so for PQO, set the following values:

a. SORT_AREA_SIZE x 2 x (degree of parallelism)
b. SORT_AREA_RETAINED_SIZE x (degree of parallelism) x (number of sorts > 2)

For PQO, the optimal value is 1MB; higher values haven't yielded better performance. In general, SORT_AREA_SIZE = SORT_AREA_RETAINED_SIZE, except for MTS, which requires some special considerations. For MTS, set SORT_AREA_RETAINED_SIZE much smaller than SORT_AREA_SIZE. As a guideline, you can set:

SORT_AREA_RETAINED_SIZE = (SORT_AREA_SIZE / number of expected concurrent sorts), but not less than 1/10 x SORT_AREA_SIZE
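For example, under MTS with four expected concurrent sorts (values are illustrative):

# init.ora
sort_area_size = 1048576            # 1MB per sort
sort_area_retained_size = 262144    # 1MB / 4 expected concurrent sorts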
Temporary (sort) segments must be created when a sort cannot take place fully in memory. That is, as discussed, when the sort operation's memory requirements exceed SORT_AREA_RETAINED_SIZE, it requires the allocation of a temporary segment and attempts to work within SORT_AREA_SIZE. A temporary tablespace cannot contain any permanent objects and consists solely of a single sort segment. Temporary tablespaces are created with the CREATE or ALTER TABLESPACE ... TEMPORARY syntax. Again, these temporary tablespaces are made up of one segment, created initially on the fly by the first sort requiring it. This segment grows in extents as sort concurrency and operation size increase.

A useful guideline in setting the extent parameters is INITIAL = NEXT = (max size as prescribed by the datafile or disk) / (number of expected concurrent sorts). The number of expected concurrent sorts can be calculated roughly as equal to twice the number of concurrent queries. This is a case where you don't want one large extent sized just below the datafile size, which would normally be a good recommendation for general-use (permanent) tablespaces. Also, set INITIAL = NEXT = some multiple of SORT_AREA_SIZE, plus at least one block for the overhead of the extent header, because you wouldn't want any single sort requiring more than one extent. At the same time, you can afford to have a few sorts stored in the same extents, due to the random nature of concurrent access. Set PCTINCREASE to 0, because you don't want any surprises such as increasingly large NEXT extents. Besides, because concurrency again plays a factor here, having equal-sized extents is a fair approach, barring actual sizing techniques, and it works well with random size requirements (no single sort need is too far from the average).
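A minimal sketch of such a tablespace (the file name and sizes are illustrative):

create tablespace temp
datafile '/u05/oradata/prod/temp01.dbf' size 100M
default storage (initial 2M next 2M pctincrease 0)
temporary;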

In the SGA, a memory structure known as the Sort Extent Pool (SEP) tracks the extents that make up the single sort segments belonging to the temporary tablespaces. When sort space is requested by a process, this pool offers free extents (those that have been allocated and used by an earlier running process, and are now free but not deallocated) to be reused, much like the capability of reusing buffers in the database buffer cache. Furthermore, the V$SORT_SEGMENT view contains information such as the number of users and blocks using the temporary sort segments. You can use this to determine efficiency (hits) and help size your extents properly.
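For example (a minimal sketch):

select tablespace_name, current_users, total_extents, used_extents, extent_hits
from v$sort_segment;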

Oracle offers the capability of having sorts bypass the database buffer cache; this is called sort direct writes. Of course, you will still need SORT_AREA_SIZE bytes, but each sort operation can have its own memory buffers and write them directly to disk. The size of the buffers is set by the init.ora parameter SORT_WRITE_BUFFER_SIZE (32-64KB), and the number of buffers is set by SORT_WRITE_BUFFERS (2-8). Each regular (serial) sort operation requires a sort direct writes buffer of:

(SORT_WRITE_BUFFERS x SORT_WRITE_BUFFER_SIZE) + SORT_AREA_SIZE

For PQO, each (parallel) sort requires:

((SORT_WRITE_BUFFERS x SORT_WRITE_BUFFER_SIZE) + SORT_AREA_SIZE) x 2 x (degree of parallelism)
The init.ora parameter SORT_DIRECT_WRITES determines whether or not sorting uses the database buffer cache. If set to AUTO, the default, and if SORT_AREA_SIZE >= 10 x the sort direct writes buffer, the sort direct writes buffer is used. If set to FALSE, sort writes are buffered in the database buffer cache before being written back out to disk; these are normal sort buffer writes. If set to TRUE, sort writes are always sort direct writes. VLDBs, DSSs, and data warehouses should normally have this set to TRUE (or at least left at AUTO).
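A sketch of the relevant settings together (values are illustrative):

# init.ora
sort_direct_writes = TRUE
sort_write_buffers = 4
sort_write_buffer_size = 32768    # 4 x 32KB buffers per sort, plus SORT_AREA_SIZE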

Ensure that sorting is done in memory when possible


Reduce the number of I/Os required for the sort runs
Allocate temporary space appropriately
Tuning Rollback Segments
Use the dynamic performance views to check the
rollback segment performance
Reconfigure and monitor rollback segments
Define the number and sizes of rollback segments
Appropriately allocate rollback segments to transactions.
Monitoring and Detecting Lock Contention
Define types and modes of locking
List possible causes of contention
Use Oracle utilities to detect lock contention
Resolve contention in an emergency
Prevent Locking Problems
SQL Issues and Tuning Considerations for Different
Applications
Identify the Role of the DBA in application tuning
Use optimizer modes to enhance SQL statement
performance

In general, always use the cost-based optimization approach. The rule-based approach is available for the benefit of existing applications, but new optimizer functionality uses the cost-based approach. The following features are only available with cost-based optimization:

A. Partitioned Tables
B. Index-Organized Tables
C. Reverse Indexes
D. Parallel Execution
E. Star transformations
F. Star Joins

The cost-based approach generally chooses an execution plan that is as good as or better than the plan chosen by the rule-based approach. This is especially true for large queries with multiple joins or multiple indexes. To maintain the effectiveness of the cost-based optimizer, you must keep its statistics current.

To use cost-based optimization for a statement, collect statistics for the tables accessed by the statement and enable cost-based optimization using one of these methods (see the examples after this list):
I. Make sure the OPTIMIZER_MODE init.ora parameter is set to its default value of CHOOSE.
II. To enable cost-based optimization for your session only, issue an ALTER SESSION SET OPTIMIZER_MODE statement with the ALL_ROWS or FIRST_ROWS option.
III. To enable cost-based optimization for an individual SQL statement, use any hint other than RULE.
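For example (the table name is illustrative):

analyze table emp compute statistics;
alter session set optimizer_mode = first_rows;
select /*+ ALL_ROWS */ ename from emp;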

Oracle supports rule-based optimization, but you should design new applications to use cost-based optimization. You should also use cost-based optimization for data warehousing applications, because the cost-based optimizer supports new and enhanced features for DSS. You should eventually migrate your existing applications to use the cost-based approach, because eventually the rule-based approach will not be available in the Oracle Server.

Manage stored outlines to store execution paths as a series of hints

An outline consists primarily of a set of hints that is equivalent to the optimizer's results for the execution plan generation of a particular SQL statement. When Oracle creates an outline, Plan Stability examines the optimization results using the same data used to generate the execution plan. That is, Oracle uses the input to the execution plan to generate an outline, and not the execution plan itself.

Oracle uses one of two scenarios when compiling SQL statements and matching them with outlines. The first scenario is that if you disable outline use by setting the system/session parameter USE_STORED_OUTLINES to FALSE, Oracle does not attempt to match SQL text to outlines. The second scenario involves the following two matching steps.

First, if you specify that Oracle must use a particular outline category, only outlines in that category are candidates for matching. Second, if the SQL text of the incoming statement exactly matches the SQL text in an outline in that category, Oracle considers both texts identical and uses the outline. Oracle considers any differences a mismatch.

Differences include spacing changes, carriage return


variations, embedded hints, or even differences in
comment text. These rules are identical to the rules for
cursor matching.

How Oracle Stores Outlines:

Oracle stores outline data in the OL$ table and hint data in the OL$HINTS table. Unless you remove them, Oracle retains outlines indefinitely. Oracle retains execution plans in cache and only recreates them if they become invalid or if the cache is not large enough to hold all of them. The only effect outlines have on caching execution plans is that the outline's category name is used in addition to the SQL text to identify whether the plan is in cache. This ensures Oracle does not use an execution plan compiled under one category to execute a SQL statement that Oracle should compile under a different category.

Oracle can automatically create outlines for all SQL statements, or you can create them for specific SQL statements. In either case, the outlines derive their input from the rule-based or cost-based optimizer. Oracle creates stored outlines automatically when you set the parameter CREATE_STORED_OUTLINES to TRUE; when activated, Oracle creates outlines for all executed SQL statements. You can also create stored outlines for specific statements using the CREATE OUTLINE statement.
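For example (the outline, category, and table names are illustrative):

create or replace outline region_sales
for category app_outlines
on select * from sales where region = 'WEST';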

To use stored outlines when Oracle compiles a SQL statement, set the system parameter USE_STORED_OUTLINES to TRUE or to a category name. If you set USE_STORED_OUTLINES to TRUE, Oracle uses outlines in the DEFAULT category. If you specify a category in the parameter, Oracle uses outlines in that category until you reset USE_STORED_OUTLINES to FALSE. If you specify a category name and Oracle does not find an outline in that category that matches the SQL statement, Oracle searches for an outline in the DEFAULT category. You can access information about outlines and the related hint data that Oracle stores in the USER_OUTLINES and USER_OUTLINE_HINTS views.
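For example (the category name is illustrative):

alter session set use_stored_outlines = app_outlines;

select name, category, used
from user_outlines;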

Use the available data access methods to tune the physical design of the database
Identify the demands of online transaction processing (OLTP)

OLTP applications are high-throughput, insert/update-intensive systems. These systems are characterized by growing volumes of data that several hundred users access concurrently. Typical OLTP applications are airline reservation systems and banking applications. The key goals of OLTP systems are: availability (24x7), speed (throughput), concurrency, and recoverability.

In these types of databases you must avoid excessive use of indexes and clusters, because these structures slow down insert and update activity.

The following elements are crucial for OLTP systems: rollback segments; indexes, clusters, and hashing; discrete transactions; data block size; dynamic allocation of space to tables and rollback segments; transaction processing monitors and the multithreaded server; the shared pool; well-tuned SQL statements; integrity constraints; client/server architecture; dynamically changeable initialization parameters; and procedures, packages, and functions.

Identify the demands of decision support systems (DSS)

Decision support system or data warehousing applications typically convert large amounts of information into user-defined reports. They perform queries on the large amounts of data gathered from OLTP applications. Decision makers use these applications to determine what strategies the organization should take. An example of a decision support system is a marketing tool that determines the buying patterns of consumers based on information gathered from demographic studies. The key goals of a data warehousing system (DSS) are response time, accuracy, and availability.
The key to performance in a DSS is properly tuned queries and proper use of indexes, clusters, and hashing.

The following issues are crucial in implementing and tuning a DSS: materialized views, indexes (B*tree and bitmap), clusters, hashing, data block size, parallel execution, star queries, the optimizer, using hints in queries, and PL/SQL functions in SQL statements.

One way to improve the response time in a DSS is to use parallel execution. This feature enables multiple processes to simultaneously process a single SQL statement. By spreading the processing over many processes, Oracle can execute complex statements more quickly than if only a single server process handled them. Symmetric multiprocessing (SMP), clustered, or massively parallel systems gain the largest performance benefits from parallel execution, because operations can be effectively spread among many CPUs on a single system.

Reconfigure systems on a temporary basis for particular needs

Try to apply the previous two steps on a temporary basis; give me a break!

Managing the Mixed Workload


Describe the features of Database Resource Manager
Limit the use of resources using Database Resource
Manager
Tuning with Oracle Expert
Describe the features of Oracle Expert
Multithreaded Server Tuning Issues
Identify issues associated with managing users in a multithreaded server environment

In an MTS environment, just as there is an SGA, there is also a User Global Area (UGA), which contains user session information, sort areas, and private SQL areas. Normally the default RDBMS instance (also known as a dedicated server) results in a one-to-one mapping of user processes to server processes.

With MTS, the UGA is moved up into the shared pool. The remaining process-specific memory is retained in the Process Global Area (PGA) and holds information that cannot be shared. In this way, the total amount of memory required when using MTS is not really more than with a dedicated server, just redistributed. However, you will have to increase SHARED_POOL_SIZE. To help size the UGA that is relocated to the SGA, use:

select sum(value)
from v$sesstat se, v$statname sn
where sn.name = 'max session memory'
and se.statistic# = sn.statistic#;

This yields the maximum amount of UGA session memory used since instance startup. You may wish to take samples of this result often, and then increase SHARED_POOL_SIZE by this amount. From Oracle 8 you can use the statistic 'session uga memory max' instead; see the sketch below.
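The Oracle 8 form of the same query would be (a minimal sketch):

select sum(value)
from v$sesstat se, v$statname sn
where sn.name = 'session uga memory max'
and se.statistic# = sn.statistic#;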

Diagnose and resolve performance issues with multithreaded server processes
Configure the multithreaded server environment to optimize performance

With MTS you have some control over the distribution of server versus user memory. Two init.ora parameters that affect user memory are:

1. SESSION_CACHED_CURSORS
2. CLOSE_CACHED_OPEN_CURSORS

The parameter SESSION_CACHED_CURSORS can be set to the expected maximum number of session cursors to be cached in the user's memory area, as long as reparsing is kept low. This helps offload server memory requirements at the expense of increasing user memory. Optionally, set this parameter when statements are frequently reused.

The default setting for CLOSE_CACHED_OPEN_CURSORS is FALSE, meaning that cursors are not closed on COMMIT. Optionally set this to TRUE if SQL statements are rarely reused. Make sure these two parameters do not conflict: caching session cursors while also closing cached open cursors on COMMIT works against the purpose of the cache.
