
Once you know what the problem is, the next step is investigation.

While investigating, the aim is to:
Confirm the problem.
Quantify the problem.
Note any unusual complications.

Below are the ways you can start your investigation:

1) Using the V$SQL view


This view contains cursor-level details for SQL statements. It can be used to locate the session or user responsible for parsing the cursor.

SQL_TEXT: First 1000 characters of the SQL text.
EXECUTIONS: Number of executions of this statement. This is a good metric because it reflects the actual load on the database.
DISK_READS: Number of disk reads for this statement. A high value flags SQL whose data typically exceeds the buffer cache capacity, generally non-repeating data fetches that are unlikely to be cached.
BUFFER_GETS: Number of logical reads. This measures logical I/O, which translates into CPU and memory stress.
ROWS_PROCESSED: Total number of rows this SQL statement returns.
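
As a quick orientation before the targeted queries below, here is a minimal sketch that pulls these columns for the ten statements with the most logical reads (the Top-N subquery form is one common idiom; any of the columns above can drive the ordering):

-- Top 10 statements by logical reads
SELECT *
FROM (SELECT Sql_Text, Executions, Disk_Reads, Buffer_Gets, Rows_Processed
      FROM V$SQL
      ORDER BY Buffer_Gets DESC)
WHERE ROWNUM <= 10;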

Finding SQL statements with excessive Disk Reads

The query below returns SQL statements that read at least 10,000 blocks from disk per execution.

SELECT Executions EXEC, Disk_Reads DISK, Sql_Text TEXT
FROM V$SQL
WHERE Disk_Reads / (0.01 + Executions) > 10000;
Result:

EXEC   DISK    TEXT
5      79204   SELECT Ename FROM Emp;

Explanation: The SQL retrieves every single employee name from the EMP table. The problem is that each execution required about 16,000 disk reads (79204/5).
SQL with high DISK_READS generates a load that is disk I/O intensive (high physical reads).

Note: The small decimal 0.01 is added to the denominator to prevent a divide-by-zero error. Any small number of your choice will do.
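
A variant of the same query (a sketch) that reports the per-execution ratio as its own column, so the division above does not have to be done by hand:

SELECT Executions EXEC, Disk_Reads DISK,
       ROUND(Disk_Reads / (0.01 + Executions)) PER_EXEC,
       Sql_Text TEXT
FROM V$SQL
WHERE Disk_Reads / (0.01 + Executions) > 10000;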
Besides looking for excessive disk reads, it is wise to look at the number of logical reads as well. Although not as expensive as disk reads, there is nevertheless a cost associated with each logical read.
Logical reads are reported in the BUFFER_GETS column of the V$SQL view.

Finding SQL with excessive logical reads:


SELECT Executions EXEC, Buffer_Gets LOG, Sql_text TEXT
FROM V$SQL
WHERE Buffer_Gets / (0.01 + Executions) > 20000;
Result:

EXEC   LOG      TEXT
       121305   SELECT Ename FROM Emp;

Explanation: Excessive logical reads frequently indicate a problem SQL statement. This script uses a threshold of 20,000 logical reads per execution, versus the 10,000 disk reads per execution used in the previous query.
When searching for troublesome SQL statements, be cautious with criteria based on disk reads. Once the data is cached, the number of disk reads will probably fall sharply on subsequent executions, especially if the query is run several times in quick succession. Logical reads, on the other hand, will be nearly identical on each subsequent execution.

Finding Excessive Total Disk Reads:


Not all problems are due to high I/O per execution; sometimes the cumulative effect is the problem.

SELECT Executions EXEC, Disk_Reads DISK, Sql_Text TEXT
FROM V$SQL
WHERE Disk_Reads > 1000000;

The above query restricts the result to SQL statements that have read at least one million blocks from disk, totaled over all executions.

Result:

EXEC      DISK      TEXT
1231421   3518243   UPDATE lock_table SET code= :b0;

Explanation: Here each individual execution runs fine, consuming only about 3 disk reads per execution (3518243/1231421).
The problem is not a poorly tuned SQL statement; rather, it is the huge number of executions (1231421).
One should now be curious to know why this statement has been executed over 1 million times, or why its result was not cached.
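
To hunt for such statements directly, a simple sketch that filters on the execution count alone (the one-million threshold mirrors the example above):

SELECT Executions EXEC, Sql_Text TEXT
FROM V$SQL
WHERE Executions > 1000000;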

When searching for BAD SQL, the lack of bind variables presents a slight difficulty: the query of V$SQL might return hundreds or even thousands of different SQL statements. Each statement differs only slightly; nevertheless, the shared SQL area (shared pool) in memory will consider each statement unique.
In these scenarios, the question is how to group all the similar statements together.
There are several ways to accomplish this, but one very simple way is to group the SQL statements by the amount of memory they consume. This tactic works because SQL statements that are identical except for one parameter typically consume exactly the same amount of memory. In the V$SQL view, this value is the PERSISTENT_MEM column.

Finding BAD SQL Forms:

The query below finds the SQL forms that account for at least one million blocks read from disk, totaled over all executions of the same form:
SELECT Persistent_Mem MEM, SUM(Disk_Reads) DISK
FROM V$SQL
GROUP BY Persistent_Mem
HAVING SUM(Disk_Reads) > 1000000
ORDER BY SUM(Disk_Reads);

Result:

MEM   DISK
422   1014157
516   1084906
163   2004713
682   5719359

Explanation: We see from this query that SQL statements with a memory usage of 682 bytes are responsible for over 5 million disk reads. You might think this is a single statement, but adding COUNT(*) to the query would show how many statements (identical or otherwise) use this amount of memory.
Having identified this row, the next step is to list some of the individual SQL statements having the shown value for PERSISTENT_MEM; a sketch follows below. When using this method, note that there will occasionally be other, innocent SQL statements that happen to have exactly the same value for PERSISTENT_MEM.
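
For instance, a sketch of listing a few of the statements behind the 682-byte row (the literal 682 is taken from the sample result above):

SELECT Sql_Text
FROM V$SQL
WHERE Persistent_Mem = 682
AND ROWNUM <= 5;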
That overlap is a minor inconvenience that can be avoided by using the SUBSTR function to group on only the first few characters, as given below:
SELECT SUBSTR(Sql_Text, 1, 50) Similar_Sql, COUNT(*)
FROM V$SQL
GROUP BY SUBSTR(Sql_Text, 1, 50)
HAVING COUNT(*) > 1000
ORDER BY COUNT(*);
This code lists and counts the occurrences of SQL statements that are identical for at least the first 50 characters.

V$SQL LIMITATIONS
When querying the V$SQL view, remember that the statistics are not kept in memory forever; depending on the size of the shared pool, statistics may soon age out, making them useless.
In some cases, it is also helpful to flush the shared pool prior to running the application in question. This resets the statistics in the view, so there is no confusion about which statistics came from prior operations. Of course, flushing the shared pool should be done very carefully in production, as it may further degrade performance: every SQL statement received afterwards must be hard-parsed again, because each one will be seen as new.
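
The flush itself is a single command, to be used with the caution just described:

ALTER SYSTEM FLUSH SHARED_POOL;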

Watch for Active vs. Inactive Sessions


Sometimes SQL seems to be running continuously. This can happen, for instance, if the session is blocked or if the SQL statement is badly tuned.
A blocked session can occur when one session tries to update a particular row that a different session is in the process of changing. The waiting session is blocked, and the other session is the blocker.
Note that the session being blocked still shows as active.

The vast majority of sessions are inactive; that is, they are not really doing anything. The user is still connected to the database, but no query is being run at present.
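
To separate the two cases, V$SESSION can be queried directly. A minimal sketch (the BLOCKING_SESSION column assumes Oracle 10g or later):

-- Active sessions, and the session blocking each one, if any
SELECT Sid, Serial#, Username, Status, Blocking_Session
FROM V$SESSION
WHERE Status = 'ACTIVE';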

2) Activating SQL_TRACE
SQL tracing is a very powerful tool for finding out exactly what an application is doing. You have two choices for starting the trace: trace a particular session, or trace the entire database. Both have their uses, and it is important to understand clearly how to activate each method.
If tracing is desired at the entire database level, simply change one parameter in the init.ora file and then restart the database:
SQL_TRACE = TRUE
For session-level tracing, simply issue the following command in SQL*Plus:
ALTER SESSION SET SQL_Trace = TRUE;
Similarly, to disable tracing for your own session, issue this command:
ALTER SESSION SET SQL_Trace = FALSE;
However, you might ask how an application could issue the above statements. The answer is that it probably cannot, but of course there are ways to achieve the same effect.
To activate tracing for another session, it is first necessary to obtain the SID and SERIAL# for that session. These are easily retrieved from the V$SESSION dynamic view. Once these two values are known, you can issue the following command:
EXECUTE SYS.dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
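
A sketch of the full sequence, assuming the target session belongs to a hypothetical user SCOTT (the SID and SERIAL# values shown are made up for illustration):

-- Find the session to trace
SELECT Sid, Serial# FROM V$SESSION WHERE Username = 'SCOTT';
-- Plug the returned values in, e.g. SID 42 and SERIAL# 1077
EXECUTE SYS.dbms_system.set_sql_trace_in_session(42, 1077, TRUE);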

Where is the TRACE FILE?


When SQL_TRACE is activated for a session, Oracle creates a trace file in the udump admin area. The file will not actually exist until the trace is activated.
Note that this destination is specified by an init.ora file parameter USER_DUMP_DEST
USER_DUMP_DEST = /u01/app/oracle/admin/testdb/udump
When trace is enabled, all database operations are written to the trace file in the directory specified. The file is not overwritten by each operation; instead, new entries are appended.
For very long tracing operations, keep in mind that the file size is limited by the init.ora parameter Max_Dump_File_Size.
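
If a long trace risks being truncated by that limit, it can be raised for the session; a sketch:

ALTER SESSION SET Max_Dump_File_Size = UNLIMITED;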
It is sometimes useful to place certain statements, or flags, in the trace file. This is easily accomplished when the trace process is initially activated:
ALTER SESSION /* Data Warehouse module 1 */ SET Sql_Trace = TRUE;

If many different trace files are being generated, such a flag helps the analyst identify which trace is which.
Use a SQL comment, as shown above, to place flags in the SQL trace file.
To obtain the maximum benefit from the trace file, the timing flag should be turned on. By default, timing is turned off.
Thus it is good practice to activate timing by setting the following init.ora parameter:
Timed_Statistics = True
For an individual session, timing is easily enabled with the following command:
ALTER SESSION SET Timed_Statistics = True;
For database as a whole, statistics may be activated with the following command:
ALTER SYSTEM SET Timed_Statistics = True;

Timed_Statistics produces only a very slight performance degradation; the benefits far outweigh the cost.
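
To verify the current setting in SQL*Plus:

SHOW PARAMETER Timed_Statistics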
