
Log Miner in Oracle RDBMS

Benefits

Pinpointing when a logical corruption happened in a database, such as errors made at the application level.

Determining actions for fine-grained recovery at the transaction level.

Performance tuning and capacity planning through trend analysis.

Performing post-auditing.

Log Miner Configuration

Requirements

Source Database: The database that produces all the redo log files that you want LogMiner to analyze.

Mining Database: The database that LogMiner uses when it performs the analysis.

LogMiner Dictionary: Provides table and column names, instead of internal object IDs.

Redo Log Files: Contain the changes made to the database.

Operating Log Miner

Specify a LogMiner dictionary by using DBMS_LOGMNR_D.BUILD.

Specify a list of redo log files for analysis. This can be done automatically or manually.

Start LogMiner with the DBMS_LOGMNR.START_LOGMNR procedure.

Query the required information from V$LOGMNR_CONTENTS.

End the LogMiner session with the DBMS_LOGMNR.END_LOGMNR procedure.
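Putting these steps together, a minimal session might look like the following sketch (the log file path is illustrative; because the online catalog is used as the dictionary here, no separate DBMS_LOGMNR_D.BUILD call is needed):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT sql_redo, sql_undo FROM v$logmnr_contents;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();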

Log Miner Dictionary

Online Dictionary:
This is helpful when the changes of interest took place very recently and have not been archived yet.
Directs LogMiner to use the current online database dictionary rather than a LogMiner dictionary contained in a flat file or in the redo log files being analyzed.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Extracting to Redo Log Files:
The dictionary is extracted to the redo log files when mining is done on a separate mining database.
EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
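The counterpart option at mining time is DICT_FROM_REDO_LOGS, which directs START_LOGMNR to read the dictionary back out of the redo log files added to the session (a minimal sketch; the logs containing the extracted dictionary must already be in the mining list):

EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);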

Selecting Log Files for LogMiner

Automatic Method:
Oracle recommends the automatic method for log mining. It is used by specifying a time frame or an SCN range.

Manual Method:
The log files that need to be mined have to be added manually by the DBA.

Comparison between Automatic and Manual

Automatic:
EXECUTE DBMS_LOGMNR.START_LOGMNR( STARTTIME => '01-Jan-2003 08:30:00', ENDTIME => '01-Jan-2003 08:45:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
EXECUTE DBMS_LOGMNR.START_LOGMNR( STARTSCN => 621047, ENDSCN => 625695, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);

Manual:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);

Querying V$LOGMNR_CONTENTS

Once LogMiner is run, the data gets populated in V$LOGMNR_CONTENTS.

A SELECT statement on V$LOGMNR_CONTENTS provides log history information with respect to timestamp, SCN, SQL_REDO, SQL_UNDO, and so on.

The archived redo log files are read sequentially until either the filter criteria specified at startup, such as end time or end SCN, are met or the end of the archived log file is reached.

The view also serves as a user-level and table-level audit source for capacity planning.
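For example, once LogMiner has been started, the redo and undo SQL for a single table can be pulled like this (a sketch; the schema and segment filters are illustrative):

SQL> select scn , timestamp , operation , sql_redo , sql_undo
from v$logmnr_contents
where username = 'HR'
and seg_name = 'DEPARTMENTS';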

Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS

Filtering Data by Time
Filtering Data by SCN
Showing Only Committed Transactions
Skipping Redo Corruptions
Formatting the Appearance of Returned Data for Readability

Filtering Data by Time


SQL> alter session set nls_date_format="DD-MON-YYYY HH24:MI:SS";
Session altered.
SQL> SELECT SYSDATE FROM DUAL;
SYSDATE
-------------------
07-MAY-2014 11:15:27
SQL> CONN HR/HR
Connected.
SQL> update departments set department_name='Marketing_Department' where department_name='Marketing';
1 row updated.
SQL> SELECT SYSDATE FROM DUAL;
SYSDATE
-------------------
07-MAY-2014 11:16:51
SQL> commit;
Commit complete.
SQL> conn / as sysdba
Connected.

SQL> begin
dbms_logmnr.start_logmnr (
starttime => '07-May-2014 11:15:00',
endtime => '07-May-2014 11:17:00',
options => dbms_logmnr.dict_from_online_catalog +
dbms_logmnr.continuous_mine +
dbms_logmnr.print_pretty_sql
);
end;
/
PL/SQL procedure successfully completed.

SQL> column sql_undo format a35
SQL> column sql_redo format a35
SQL> set lines 10000
SQL> set pages 200
SQL> select timestamp , sql_redo , sql_undo
from v$logmnr_contents
where username = 'HR'
and seg_name = 'DEPARTMENTS';

Filtering Data by SCN

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
5502643
SQL> conn hr/hr
Connected.

SQL> update departments set department_name='Shipping_Department' where department_name='Shipping';
1 row updated.
SQL> commit;
Commit complete.
SQL> conn / as sysdba
Connected.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
5503053

SQL> begin
dbms_logmnr.start_logmnr (
startscn => 5502643,
endscn => 5503053,
options => dbms_logmnr.dict_from_online_catalog +
dbms_logmnr.continuous_mine +
dbms_logmnr.print_pretty_sql
);
end;
/

SQL> select scn , sql_redo , sql_undo from v$logmnr_contents
where username = 'HR'
and seg_name = 'DEPARTMENTS';

Showing Only Committed Transactions
SQL> connect hr/hr
Connected.
SQL> create table test
2 (id number(10));
Table created.
SQL> insert into test values(1);
1 row created.
SQL> insert into test values(2);
1 row created.
SQL> insert into test values(3);
1 row created.
SQL> insert into test values(4);
1 row created.
SQL> commit;
Commit complete.

SQL> insert into test values(98);
1 row created.
SQL> insert into test values(99);
1 row created.
SQL> insert into test values(100);
1 row created.
SQL> begin
  2  dbms_logmnr.start_logmnr (
  3  starttime => '08-May-2014 00:00:00',
  4  endtime => '08-May-2014 09:13:00',
  5  options => dbms_logmnr.dict_from_online_catalog +
  6  DBMS_LOGMNR.COMMITTED_DATA_ONLY
  7  );
  8  end;
  9  /

SQL> column sql_undo format a70
SQL> column sql_redo format a70
SQL> set lines 10000
SQL> set pages 200
SQL> select timestamp , sql_redo , sql_undo
  2  from v$logmnr_contents
  3  where username = 'HR'
  4  and seg_name = 'TEST';

Skipping Redo Corruptions

-- Add redo log files of interest.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.log', OPTIONS => DBMS_LOGMNR.NEW);
-- Start LogMiner.
EXECUTE DBMS_LOGMNR.START_LOGMNR();
-- Select from the V$LOGMNR_CONTENTS view. This example shows that there are corruptions in the redo log files.
SELECT rbasqn, rbablk, rbabyte, operation, status, info
FROM V$LOGMNR_CONTENTS;
ERROR at line 3:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 6 change 73528 time 11/06/2002 11:30:23
ORA-00334: archived log: /usr/oracle/data/dbarch1_16_482701534.log

-- Restart LogMiner. This time, specify the SKIP_CORRUPTION option.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION);

Select from the V$LOGMNR_CONTENTS view again. The output indicates that corrupted blocks were skipped: CORRUPTED_BLOCKS is in the OPERATION column, 1343 is in the STATUS column, and the number of corrupt blocks skipped is in the INFO column.

SELECT rbasqn, rbablk, rbabyte, operation, status, info
FROM V$LOGMNR_CONTENTS;

RBASQN  RBABLK  RBABYTE  OPERATION         STATUS  INFO
    13       2       76  START                  0
    13       2       76  DELETE                 0
    13       3      100  INTERNAL               0
    13       3      380  DELETE                 0
    13       0        0  CORRUPTED_BLOCKS    1343  corrupt blocks 4 to 19 skipped
    13      20      116  UPDATE                 0

V$LOGMNR_LOGS

The V$LOGMNR_LOGS view provides the list of redo logs that LogMiner is using for analysis.
The view is cleared every time the LogMiner session ends.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
PL/SQL procedure successfully completed.
SQL> select low_time,high_time,filename,filesize,info from v$logmnr_logs;
no rows selected
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => 'E:\Oracle\product\10.2.0\db_1\flash_recovery_area\ORCL\ARCHIVELOG\2014_05_08\O1_MF_1_195_9POZP52M_.ARC', OPTIONS => DBMS_LOGMNR.NEW);
PL/SQL procedure successfully completed.

SQL> select low_time,high_time,filename,filesize,info from v$logmnr_logs;

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     194
Next log sequence to archive   196
Current log sequence           196
SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => 'E:\Oracle\product\10.2.0\db_1\flash_recovery_area\ORCL\ARCHIVELOG\2014_05_08\O1_MF_1_196_9PP1NM5K_.ARC', OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.

Restoring Old Data with LogMiner

Logical corruption happens when a user deletes or updates the wrong information. This can be rolled back with the help of the undo information gathered in V$LOGMNR_CONTENTS.
The DBA should know either the time frame or the SCN range involved; otherwise this has to be done the old-fashioned way, by adding archived logs manually.

SQL> SELECT DEPARTMENT_NAME FROM HR.DEPARTMENTS WHERE DEPARTMENT_NAME LIKE '%Marketing%';
DEPARTMENT_NAME
------------------------------
Marketing_Department
begin
dbms_logmnr.start_logmnr (
starttime => '08-May-2014 10:15:00',
endtime => '08-May-2014 10:40:00',
options => dbms_logmnr.dict_from_online_catalog +
dbms_logmnr.continuous_mine +
dbms_logmnr.no_sql_delimiter +
dbms_logmnr.print_pretty_sql
);
end;
/
PL/SQL procedure successfully completed.

SQL> set serveroutput on


SQL> declare
  2  CURSOR c1 IS
  3  select sql_undo from v$logmnr_contents
  4  where username = 'HR'
  5  and seg_name = 'DEPARTMENTS';
  6  begin
  7  for rec in c1 loop
  8  execute immediate rec.sql_undo;
  9  dbms_output.put_line(sql%rowcount||' row(s) updated.');
 10  end loop;
 11  end;
 12  /
1 row(s) updated.
PL/SQL procedure successfully completed.
SQL> SELECT DEPARTMENT_NAME FROM HR.DEPARTMENTS WHERE DEPARTMENT_NAME LIKE '%Marketing%';
DEPARTMENT_NAME
------------------------------
Marketing

SQL> exec dbms_logmnr.end_logmnr;

Logminer Options

COMMITTED_DATA_ONLY:
If set, DML statements corresponding to committed transactions are returned.
DML statements corresponding to a committed transaction are grouped together.
Transactions are returned in their commit order. Transactions that are rolled back or in progress are filtered out.

CONTINUOUS_MINE:
Directs LogMiner to automatically add redo log files, as needed, to find the data of
interest. You only need to specify the first log to start mining, or just the starting
SCN or date to indicate to LogMiner where to begin mining logs. You are not
required to specify any redo log files explicitly. LogMiner automatically adds and
mines the (archived and online) redo log files for the data of interest.

DICT_FROM_ONLINE_CATALOG:
Directs LogMiner to use the current online database dictionary rather than a
LogMiner dictionary contained in a flat file or in the redo log files being analyzed.

PRINT_PRETTY_SQL:
If set, LogMiner formats the reconstructed SQL statements for ease of
reading. These reconstructed SQL statements are not executable.

SKIP_CORRUPTION:
Directs a select operation on the GV$LOGMNR_CONTENTS view to skip any corruptions in the redo log file being analyzed and continue processing.
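The option constants above are combined by adding them together, as the earlier examples show. For instance, a start call mixing three of them (a sketch):

EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);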

Supplemental Logging

Oracle recommends enabling supplemental logging for log mining.

Minimal Supplemental Logging:
Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes. It can be enabled with the following statement:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

All Column Logging:
This option specifies that when a row is updated, all columns of that row (except for LOBs, LONGs, and ADTs) are placed in the redo log file.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

PRIMARY KEY Logging:
This option causes the database to place all columns of a row's primary key in the redo log file whenever a row containing a primary key is updated.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

UNIQUE KEY Logging:
This option causes the database to place all columns of a row's unique key in the redo log file whenever a row containing a unique key is updated.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

FOREIGN KEY Logging:
This option causes the database to place all columns of a row's foreign key in the redo log file if any column belonging to the foreign key is modified.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
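Which supplemental logging levels are currently enabled can be checked from V$DATABASE (a sketch; these columns are present in Oracle 10g and later):

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk, supplemental_log_data_all FROM v$database;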
