May 17, 2012
Sai DBA
An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.
Contents

Overview of Oracle Grid Architecture
Difference between a cluster and a grid
Responsibilities of Database Administrators
Creating the Database

2 Managing Tablespaces and Datafiles
  Creating New Tablespaces
  Bigfile Tablespaces (Introduced in Oracle Ver. 10g)
  Extending the Size of a Tablespace
  Decreasing the Size of a Tablespace
  Coalescing Tablespaces
  Taking Tablespaces Offline or Online
  Making a Tablespace Read Only
  Renaming Tablespaces
  Dropping Tablespaces
  Temporary Tablespace
  Increasing or Decreasing the Size of a Temporary Tablespace
  Tablespace Groups
    Creating a Temporary Tablespace Group
    Assigning a Tablespace Group as the Default Temporary Tablespace
  Diagnosing and Repairing Locally Managed Tablespace Problems
    Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)
    Scenario 2: Dropping a Corrupted Segment
    Scenario 3: Fixing Bitmap Where Overlap is Reported
    Scenario 4: Correcting Media Corruption of Bitmap Blocks
    Scenario 5: Migrating from a Dictionary-Managed to a Locally Managed Tablespace
  Transporting Tablespaces
    Procedure for Transporting Tablespaces
    Transporting Tablespace Example
  Viewing Information about Tablespaces and Datafiles
  Relocating or Renaming Datafiles
    Renaming or Relocating Datafiles Belonging to a Single Tablespace
    Procedure for Renaming and Relocating Datafiles in Multiple Tablespaces

3 Managing REDO Logfiles
  Adding a New Redo Logfile Group
  Adding Members to an Existing Group
  Dropping Members from a Group
  Dropping a Logfile Group
  Resizing Logfiles
  Renaming or Relocating Logfiles
  Clearing REDO LOGFILES
  Viewing Information About Logfiles

4 Managing Control Files
  Multiplexing Control File
  Changing the Name of a Database
  Creating a New Control File
  Cloning an Oracle Database

5 Managing the UNDO TABLESPACE
  Switching to Automatic Management of Undo Space
  Calculating the Space Requirements for Undo Retention
  Altering UNDO Tablespace
  Dropping an Undo Tablespace
  Switching Undo Tablespaces
  Viewing Information about Undo Tablespace

6 SQL Loader
  CASE STUDY (Loading Data from MS-ACCESS to Oracle)
  CASE STUDY (Loading Data from Fixed Length File into Oracle)
  Loading Data into Multiple Tables using WHEN Condition
  Conventional Path Load and Direct Path Load
  Direct Path
  Restrictions on Using Direct Path Loads

7 Export and Import
  Invoking Export and Import
  Command Line Parameters of Export Tool
  Example of Exporting Full Database
  Example of Exporting Schemas
  Exporting Individual Tables
  Exporting Consistent Image of the Tables
  Using Import Utility
  Example: Importing Individual Tables
  Example: Importing Tables of One User Account into Another User Account
  Example: Importing Tables Using Pattern Matching
  Migrating a Database Across Platforms

8 DATA PUMP Utility
  Using Data Pump Export Utility
  Example of Exporting a Full Database
  Example of Exporting a Schema
  Exporting Individual Tables using Data Pump Export
  Excluding and Including Objects during Export
  Using Query to Filter Rows during Export
  Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)

9 Data Pump Import Utility
  Importing Full Dump File
  Importing Objects of One Schema to Another Schema
  Loading Objects of One Tablespace to Another Tablespace
  Generating SQL File Containing DDL Commands using Data Pump Import
  Importing Objects of Only a Particular Schema
  Example: Importing Only Particular Tables
  Running Import Utility in Interactive Mode

10 Flash Back Features
  Flashback Query
  Using Flashback Version Query
  Using Flashback Table to Return Table to Past States
  Purging Objects from Recycle Bin
  Flashback Drop of Multiple Objects With the Same Original Name
  Flashback Database: Alternative to Point-In-Time Recovery
  Enabling Flash Back Database
  To What Size Should We Set the Flash Recovery Area
  How Far You Can Flashback the Database
  Example: Flashing Back Database to a Point in Time

11 Log Miner
  LogMiner Configuration
  LogMiner Dictionary Options
    Using the Online Catalog
    Extracting a LogMiner Dictionary to the Redo Log Files
    Extracting the LogMiner Dictionary to a Flat File
  Redo Log File Options
    Automatically
    Manually
  Example: Finding All Modifications in the Current Redo Log File
  Example: Mining Without Specifying the List of Redo Log Files Explicitly
  Example: Mining Redo Log Files in a Given Time Range

12 BACKUP AND RECOVERY
  Opening the Database in Archivelog Mode
  Bringing the Database Again in NoArchiveLog Mode
  Taking Offline (COLD) Backups
  Taking Online (HOT) Backups
  Recovering from the Loss of a Datafile
Select the standard database block size. This is specified at database creation by the DB_BLOCK_SIZE initialization parameter and cannot be changed after the database is created. A block size of 4K or 8K is widely used.

Before you start creating the database, it is best to write down the specification and then proceed. The examples shown in these steps create an example database my_ica_db.

Let us create a database my_ica_db with the following specification:

Database name            = myicadb
System Identifier (SID)  = myicadb

(We will have 5 tablespaces in this database, with 1 datafile in each tablespace.)
Logfile Groups and Members:

Group 1  /u01/oracle/oradata/myica/log1.ora
Group 2  /u01/oracle/oradata/myica/log2.ora

CONTROL FILE    = /u01/oracle/oradata/myica/control.ora
PARAMETER FILE  = /u01/oracle/dbs/initmyicadb.ora
(Remember, the parameter file name should be of the format init<sid>.ora, and it should be in the ORACLE_HOME/dbs directory on Unix and the ORACLE_HOME/database directory on Windows.)

Now let us start creating the database.

Step 1: Log in to the oracle account and make directories for your database.

$mkdir /u01/oracle/oradata/myica
$mkdir /u01/oracle/oradata/myica/bdump
$mkdir /u01/oracle/oradata/myica/udump
$mkdir /u01/oracle/oradata/myica/cdump
Step 2: Create the parameter file by copying the default template (init.ora) and set the required parameters.

$cd /u01/oracle/dbs
$cp init.ora initmyicadb.ora

Now open the parameter file and set the following parameters:

$vi initmyicadb.ora

DB_NAME=myicadb
DB_BLOCK_SIZE=8192
CONTROL_FILES=/u01/oracle/oradata/myica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/oradata/myica/bdump
USER_DUMP_DEST=/u01/oracle/oradata/myica/udump
CORE_DUMP_DEST=/u01/oracle/oradata/myica/cdump
UNDO_TABLESPACE=undotbs
UNDO_MANAGEMENT=AUTO

After entering the above parameters, save the file by pressing Esc and typing :wq.

Step 3: Now set the ORACLE_SID environment variable and start the instance.

$export ORACLE_SID=myicadb
$sqlplus
Enter User: / as sysdba
SQL>startup nomount

Step 4: Give the create database command. Here I am not specifying optional settings such as language, character set etc. For these settings Oracle will use the default values. I am giving the barest command to create the database to keep it simple. The command to create the database is:
SQL>create database myicadb
    datafile '/u01/oracle/oradata/myica/sys.dbf' size 250M
    sysaux datafile '/u01/oracle/oradata/myica/sysaux.dbf' size 100M
    undo tablespace undotbs datafile '/u01/oracle/oradata/myica/undo.dbf' size 100M
    default temporary tablespace temp tempfile '/u01/oracle/oradata/myica/tmp.dbf' size 100M
    logfile group 1 ('/u01/oracle/oradata/myica/log1.ora') size 50M,
            group 2 ('/u01/oracle/oradata/myica/log2.ora') size 50M;
After the command finishes you will get the following message:

Database created.

If you are getting any errors, then see the accompanying messages. If no accompanying messages are shown, then you have to see the alert_myicadb.log file located in the BACKGROUND_DUMP_DEST directory, which will show the exact reason why the command has failed. After you have rectified the error, please delete all created files in the /u01/oracle/oradata/myica directory and again give the above command.
Step 5: After the above command finishes, the database will be mounted and opened. Now create additional tablespaces.

To create the USERS tablespace:

SQL>create tablespace users datafile '/u01/oracle/oradata/myica/usr.dbf' size 100M;

To create the INDEX_DATA tablespace:

SQL>create tablespace index_data datafile '/u01/oracle/oradata/myica/indx.dbf' size 100M;
Step 6: To populate the database with data dictionaries and to install procedural options, execute the following scripts.

First execute the CATALOG.SQL script to install the data dictionaries:

SQL>@/u01/oracle/rdbms/admin/catalog.sql

The above script will take several minutes. After it is finished, run the CATPROC.SQL script to install the procedural option:

SQL>@/u01/oracle/rdbms/admin/catproc.sql

This script will also take several minutes to complete.

Step 7: Now change the passwords for the SYS and SYSTEM accounts, since the default passwords change_on_install and manager are known by everybody.

SQL>alter user sys identified by myica;
SQL>alter user system identified by myica;

Step 8: Create additional user accounts. You can create as many user accounts as you like. Let us create the popular account SCOTT.

SQL>create user scott default tablespace users identified by tiger quota 10M on users;
SQL>grant connect to scott;

Step 9: Add this database SID in the listener.ora file and restart the listener process.
$cd /u01/oracle/network/admin
$vi listener.ora

(This file will already contain sample entries. Copy and paste one sample entry and edit the SID setting.)

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
(ADDRESS =(PROTOCOL = TCP)(HOST=200.200.100.1)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/oracle)
    )
    #Add these lines
    (SID_DESC =
      (SID_NAME = myicadb)
      (ORACLE_HOME = /u01/oracle)
    )
  )
Save the file by pressing Esc and typing :wq. Now restart the listener process:

$lsnrctl stop
$lsnrctl start

Step 10: It is recommended to take a full database backup after you have just created the database. How to take a backup is dealt with in the Backup and Recovery section.
Separate user data from data dictionary data to reduce contention among dictionary objects and schema objects for the same datafiles.

Separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline.

Store the datafiles of different tablespaces on different disk drives to reduce I/O contention.

Take individual tablespaces offline while others remain online, providing better overall availability.
Concurrency and speed of space operations is improved, because space allocations and deallocations modify locally managed resources (bitmaps stored in the datafile headers) rather than requiring centrally managed resources such as enqueues.

Performance is improved, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated.
AUTOALLOCATE causes the tablespace to be system managed with a minimum extent size of 64K. The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M.

The following example creates a locally managed tablespace with a uniform extent size of 256K:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K;

To create a dictionary managed tablespace:

SQL> CREATE TABLESPACE ica_lmt DATAFILE '/u02/oracle/ica/ica01.dbf' SIZE 50M
     EXTENT MANAGEMENT DICTIONARY;
This will increase the size from 50M to 100M.

Option 2
You can also extend the size of a tablespace by adding a new datafile to the tablespace. This is useful if the existing datafile has reached the O/S file size limit, or the drive where the file exists does not have free space. To add a new datafile to an existing tablespace, give the following command:

SQL> alter tablespace ica add datafile '/u02/oracle/ica/icatbs02.dbf' size 50M;

Option 3
You can also use the autoextend feature of datafiles. In this, Oracle will automatically increase the size of a datafile whenever space is required. You can specify by how much the file should increase and the maximum size to which it should extend. To make an existing datafile autoextendable, give the following command:

SQL> alter database datafile '/u01/oracle/ica/icatbs01.dbf' autoextend ON next 5M maxsize 500M;

You can also make a datafile autoextendable while creating a new tablespace itself by giving the following command:

SQL> create tablespace ica datafile '/u01/oracle/ica/icatbs01.dbf' size 50M
     autoextend ON next 5M maxsize 500M;
Coalescing Tablespaces
A free extent in a dictionary-managed tablespace is made up of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. You can use the ALTER TABLESPACE ... COALESCE statement to manually coalesce any adjacent free extents.

To coalesce a tablespace, give the following command:

SQL> alter tablespace ica coalesce;
SQL>alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' offline;

Again, to bring it back online give the following command:

SQL> alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' online;

Note: You can't take individual datafiles offline if the database is running in NOARCHIVELOG mode. If the datafile has become corrupt or gone missing while the database is running in NOARCHIVELOG mode, then you can only drop it by giving the following command:

SQL>alter database datafile '/u01/oracle/ica/ica_tbs01.dbf' offline for drop;
Renaming Tablespaces
Using the RENAME TO clause of the ALTER TABLESPACE, you can rename a permanent or temporary tablespace. For example, the following statement renames the users tablespace:
ALTER TABLESPACE users RENAME TO usersts;
The COMPATIBLE parameter must be set to 10.0 or higher. If the tablespace being renamed is the SYSTEM tablespace or the SYSAUX tablespace, then it will not be renamed and an error is raised. If any datafile in the tablespace is offline, or if the tablespace is offline, then the tablespace is not renamed and an error is raised.
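You can check the current setting before attempting the rename; this is a minimal sketch using the standard SQL*Plus command:

SQL> SHOW PARAMETER compatible

If the value shown is below 10.0, the RENAME TO clause will not be available until COMPATIBLE is raised.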
Dropping Tablespaces
You can drop a tablespace and its contents (the segments contained in the tablespace) from the database if the tablespace and its contents are no longer required. You must have the DROP TABLESPACE system privilege to drop a tablespace.
Caution: Once a tablespace has been dropped, the data in the tablespace is not recoverable. Therefore, make sure that all data contained in a tablespace to be dropped will not be required in the future. Also, immediately before and after dropping a tablespace from a database, back up the database completely.

To drop a tablespace, give the following command:

SQL> drop tablespace ica;

This will drop the tablespace only if it is empty. If it is not empty and you want to drop it anyhow, then add the following keyword:

SQL>drop tablespace ica including contents;

This will drop the tablespace even if it is not empty. But the datafiles will not be deleted; you have to use an operating system command to delete the files. If you also include the DATAFILES keyword, then the associated datafiles will be deleted from the disk as well:

SQL>drop tablespace ica including contents and datafiles;
Temporary Tablespace
Temporary tablespaces are used for sorting large tables. Every database should have one temporary tablespace. To create a temporary tablespace, give the following command:

SQL>create temporary tablespace temp tempfile '/u01/oracle/data/ica_temp.dbf' size 100M
    extent management local uniform size 5M;

The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The AUTOALLOCATE clause is not allowed for temporary tablespaces.
The following statement drops a temporary file and deletes the operating system file:

SQL> ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;
Tablespace Groups
A tablespace group enables a user to consume temporary space from multiple tablespaces. A tablespace group has the following characteristics:
It contains at least one tablespace. There is no explicit limit on the maximum number of tablespaces that are contained in a group.

It shares the namespace of tablespaces, so its name cannot be the same as any tablespace.

You can specify a tablespace group name wherever a tablespace name would appear when you assign a default temporary tablespace for the database or a temporary tablespace for a user.
You do not explicitly create a tablespace group. Rather, it is created implicitly when you assign the first temporary tablespace to the group. The group is deleted when the last temporary tablespace it contains is removed from it. Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces. The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces.
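As a sketch (the tablespace and group names below are illustrative, not taken from the text above), the group comes into existence when you place a temporary tablespace into it, and it can then be assigned as the database default temporary tablespace:

SQL> CREATE TEMPORARY TABLESPACE temp2
     TEMPFILE '/u01/oracle/data/ica_temp2.dbf' SIZE 100M
     TABLESPACE GROUP temp_grp;

SQL> ALTER TABLESPACE temp TABLESPACE GROUP temp_grp;

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;

The first statement implicitly creates the group temp_grp, the second adds an existing temporary tablespace to it, and the third makes the whole group the database default.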
Procedure                        Description
-------------------------------  ------------------------------------------------------------------
SEGMENT_VERIFY                   Verifies the consistency of the extent map of the segment.
SEGMENT_CORRUPT                  Marks the segment corrupt or valid so that appropriate error
                                 recovery can be done. Cannot be used for a locally managed
                                 SYSTEM tablespace.
SEGMENT_DROP_CORRUPT             Drops a segment currently marked corrupt (without reclaiming
                                 space). Cannot be used for a locally managed SYSTEM tablespace.
SEGMENT_DUMP                     Dumps the segment header and extent map of a given segment.
TABLESPACE_VERIFY                Verifies that the bitmaps and extent maps for the segments in
                                 the tablespace are in sync.
TABLESPACE_REBUILD_BITMAPS       Rebuilds the appropriate bitmap. Cannot be used for a locally
                                 managed SYSTEM tablespace.
TABLESPACE_FIX_BITMAPS           Marks the appropriate data block address range (extent) as free
                                 or used in the bitmap. Cannot be used for a locally managed
                                 SYSTEM tablespace.
TABLESPACE_REBUILD_QUOTAS        Rebuilds quotas for a given tablespace.
TABLESPACE_MIGRATE_FROM_LOCAL    Cannot be used to migrate a locally managed SYSTEM tablespace
                                 to a dictionary-managed SYSTEM tablespace.
TABLESPACE_MIGRATE_TO_LOCAL      Migrates a tablespace from dictionary-managed format to locally
                                 managed format.
TABLESPACE_RELOCATE_BITMAPS      Relocates the bitmaps to the destination specified. Cannot be
                                 used for a locally managed SYSTEM tablespace.
TABLESPACE_FIX_SEGMENT_STATES    Fixes the state of the segments in a tablespace in which
                                 migration was aborted.
Be careful using the above procedures; if they are not used properly you will corrupt your database. Contact Oracle Support before using these procedures. Following are some of the scenarios where you can use the above procedures.
Scenario 1: Fixing Bitmap When Allocated Blocks are Marked Free (No Overlap)
The TABLESPACE_VERIFY procedure discovers that a segment has allocated blocks that are marked free in the bitmap, but no overlap between segments is reported.
In this scenario, perform the following tasks:

1. Call the SEGMENT_DUMP procedure to dump the ranges that the administrator allocated to the segment.
2. For each range, call the TABLESPACE_FIX_BITMAPS procedure with the TABLESPACE_EXTENT_MAKE_USED option to mark the space as used.
3. Call TABLESPACE_REBUILD_QUOTAS to fix up quotas.
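The steps above can be sketched as follows (the tablespace name USERS, file number 4, and block ranges are illustrative assumptions; use the actual ranges reported by SEGMENT_DUMP before fixing anything):

SQL> EXEC DBMS_SPACE_ADMIN.SEGMENT_DUMP('USERS', 4, 33);

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_FIX_BITMAPS('USERS', 4, 34, 83, -
         DBMS_SPACE_ADMIN.TABLESPACE_EXTENT_MAKE_USED);

SQL> EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_QUOTAS('USERS');

Here the second call marks the extent spanning blocks 34 to 83 of relative file 4 as used in the bitmap, and the last call rebuilds the tablespace quotas afterwards.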
For example, if you want to migrate a dictionary managed tablespace ICA2 to locally managed, then give the following command:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('ica2');
Transporting Tablespaces
You can use the transportable tablespaces feature to move a subset of an Oracle Database and "plug" it in to another Oracle Database, essentially moving tablespaces between the databases. The tablespaces being transported can be either dictionary managed or locally managed. Starting with Oracle9i, the transported tablespaces are not required to be of the same block size as the target database standard block size.

Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are simply copied to the destination location, and you use an import utility to transfer only the metadata of the tablespace objects to the new database.

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This functionality can be used to allow a database to be migrated from one platform to another. However, not all platforms are supported. To see which platforms are supported, give the following query:
SQL> COLUMN PLATFORM_NAME FORMAT A30
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                  ENDIAN_FORMAT
----------- ------------------------------ --------------
          1 Solaris[tm] OE (32-bit)        Big
          2 Solaris[tm] OE (64-bit)        Big
          7 Microsoft Windows NT           Little
         10 Linux IA (32-bit)              Little
          6 AIX-Based Systems (64-bit)     Big
          3 HP-UX (64-bit)                 Big
          5 HP Tru64 UNIX                  Little
          4 HP-UX IA (64-bit)              Big
         11 Linux IA (64-bit)              Little
         15 HP Open VMS                    Little

10 rows selected.
If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

Important: Before a tablespace can be transported to a different platform, the datafile header must identify the platform to which it belongs. In an Oracle Database with compatibility set to 10.0.0 or higher, you can accomplish this by making the datafile read/write at least once:

SQL> alter tablespace ica read only;

Then,

SQL> alter tablespace ica read write;
This step is only necessary if you are transporting the tablespace set to a platform different from the source platform. If ica_sales_1 and ica_sales_2 were being transported to a different platform, you can execute the following query on both platforms to determine if the platforms are supported and their endian formats:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.
Step 2: Pick a Self-Contained Set of Tablespaces
There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. That is, it should not have tables with foreign keys referring to primary keys of tables which are in other tablespaces, and it should not have tables with some partitions in other tablespaces. To find out whether the tablespace set is self-contained, do the following:
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ica_sales_1,ica_sales_2', TRUE);
After executing the above give the following query to see whether any violations are there.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
VIOLATIONS --------------------------------------------------------------------------Constraint DEPT_FK between table SAMI.EMP in tablespace ICA_SALES_1 and table SAMI.DEPT in tablespace OTHER Partitioned table SAMI.SALES is partially contained in the transportable set
These violations must be resolved before ica_sales_1 and ica_sales_2 are transportable.

Step 3: Generate a Transportable Tablespace Set

After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by performing the following actions.

Make all tablespaces in the set you are copying read-only:

SQL> ALTER TABLESPACE ica_sales_1 READ ONLY;
Tablespace altered.

SQL> ALTER TABLESPACE ica_sales_2 READ ONLY;
Tablespace altered.

Invoke the Export utility on the host system and specify which tablespaces are in the transportable set:

SQL> HOST

$ exp system/password FILE=/u01/oracle/expdat.dmp TRANSPORT_TABLESPACES=ica_sales_1,ica_sales_2
If ica_sales_1 and ica_sales_2 are being transported to a different platform, the endianness of the platforms is different, and you want to convert before transporting the tablespace set, then convert the datafiles composing the ica_sales_1 and ica_sales_2 tablespaces. You have to use the RMAN utility to convert the datafiles:
$ RMAN TARGET / Recovery Manager: Release 10.1.0.0.0 Copyright (c) 1995, 2003, Oracle Corporation.
Convert the datafiles into a temporary location on the source platform. In this example, assume that the temporary location, directory /temp, has already been created. The converted datafiles are assigned names by the system.

RMAN> CONVERT TABLESPACE ica_sales_1,ica_sales_2
      TO PLATFORM 'Microsoft Windows NT' FORMAT '/temp/%U';
Starting backup at 08-APR-03 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=11 devtype=DISK channel ORA_DISK_1: starting datafile conversion input datafile fno=00005 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf converted datafile=/temp/data_D-10_I-3295731590_TS-ADMIN_TBS_FNO-5_05ek24v5 channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:15 channel ORA_DISK_1: starting datafile conversion input datafile fno=00004 name=/u01/oracle/oradata/ica_salesdb/ica_sales_101.dbf converted datafile=/temp/data_D-10_I-3295731590_TS-EXAMPLE_FNO-4_06ek24vl channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45 Finished backup at 08-APR-07 Step 4: Transport the Tablespace Set
Transport both the datafiles and the export file of the tablespaces to a place accessible to the target database. You can use any facility for copying flat files (for example, an operating system copy utility, ftp, the DBMS_FILE_TRANSFER package, or publishing on CDs).
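For example, a copy via the DBMS_FILE_TRANSFER package might be sketched as follows (the directory objects SOURCE_DIR and DEST_DIR and the database link target_db are assumptions for illustration and must exist beforehand):

SQL> BEGIN
       DBMS_FILE_TRANSFER.PUT_FILE(
         source_directory_object      => 'SOURCE_DIR',
         source_file_name             => 'ica_sales_101.dbf',
         destination_directory_object => 'DEST_DIR',
         destination_file_name        => 'ica_sales_101.dbf',
         destination_database         => 'target_db');
     END;
     /

PUT_FILE pushes the datafile from the source database to the destination over the database link, which avoids a separate O/S-level copy step.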
Step 5: Plug In the Tablespace Set Plug in the tablespaces and integrate the structural information using the Import utility, imp:
IMP system/password FILE=expdat.dmp DATAFILES=/ica_salesdb/ica_sales_101.dbf,/ica_salesdb/ica_sales_201.dbf REMAP_SCHEMA=(smith:sami) REMAP_SCHEMA=(williams:john)
The REMAP_SCHEMA parameter changes the ownership of database objects. If you do not specify REMAP_SCHEMA, all database objects (such as tables and indexes) are created in the same user schema as in the source database, and those users must already exist in the target database. If they do not exist, then the import utility returns an error. In this example, objects in the tablespace set owned by smith in the source database will be owned by sami in the target database after the tablespace set is plugged in. Similarly, objects owned by williams in the source database will be owned by john in the target database. In this case, the target database is not required to have users smith and williams, but must have users sami and john.
After this statement executes successfully, all tablespaces in the set being copied remain in read-only mode. Check the import logs to ensure that no error has occurred.
Now, put the tablespaces into read/write mode as follows:
ALTER TABLESPACE ica_sales_1 READ WRITE; ALTER TABLESPACE ica_sales_2 READ WRITE;
For example, suppose you have a tablespace users with the following datafiles:

/u01/oracle/ica/usr01.dbf
/u01/oracle/ica/usr02.dbf

Now you want to relocate /u01/oracle/ica/usr01.dbf to /u02/oracle/ica/usr01.dbf and want to rename /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf. Then follow the given steps:

1. Take the tablespace offline:

   SQL> alter tablespace users offline;

2. Copy the file to the new location using an O/S command:

   $cp /u01/oracle/ica/usr01.dbf /u02/oracle/ica/usr01.dbf

3. Rename the file /u01/oracle/ica/usr02.dbf to /u01/oracle/ica/users02.dbf using an O/S command:

   $mv /u01/oracle/ica/usr02.dbf /u01/oracle/ica/users02.dbf

4. Now start SQL*Plus and type the following command to rename and relocate these files:

   SQL> alter tablespace users rename file
        '/u01/oracle/ica/usr01.dbf', '/u01/oracle/ica/usr02.dbf'
        to '/u02/oracle/ica/usr01.dbf', '/u01/oracle/ica/users02.dbf';

5. Now bring the tablespace online:

   SQL> alter tablespace users online;
TO '/u02/oracle/rbdb1/temp01.dbf',
   '/u02/oracle/rbdb1/users03.dbf';
Always provide complete filenames (including their paths) to properly identify the old and new datafiles. In particular, specify the old datafile names exactly as they appear in the DBA_DATA_FILES view.

4. Back up the database. After making any structural changes to a database, always perform an immediate and complete backup.

5. Start the database.
Note: When you drop logfiles, the files are not deleted from the disk. You have to use an O/S command to delete the files from disk.
Resizing Logfiles
You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with the new size and subsequently drop the old logfile group.
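As a sketch (the group numbers and path below are illustrative), the sequence is: add a group of the desired size, switch logfiles until the old group becomes inactive, then drop it:

SQL> alter database add logfile group 3 ('/u01/oracle/ica/log3.ora') size 100M;

SQL> alter system switch logfile;

SQL> alter database drop logfile group 1;

Repeat the switch until the old group shows INACTIVE in V$LOG; an ACTIVE or CURRENT group cannot be dropped.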
2. Move the logfile from the old location to the new location using an operating system command:

$mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora
This statement overcomes two situations where dropping redo logs is not possible:
There are only two log groups.

The corrupt redo log file belongs to the current group.
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived. If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. The database writes a message in the alert log describing the backups from which you cannot recover.
To see how many members there are and where they are located, give the following query:

SQL>SELECT * FROM V$LOGFILE;
GROUP# STATUS  MEMBER
------ ------- ----------------------------------
     1         /U01/ORACLE/ICA/LOG1.ORA
     2         /U01/ORACLE/ICA/LOG2.ORA
The database name

Names and locations of associated datafiles and redo log files

The timestamp of the database creation

The current log sequence number

Checkpoint information
It is strongly recommended that you multiplex control files, i.e. have at least two control files in a database, one on one hard disk and another located on a different disk. In this way, if a control file becomes corrupt on one disk, the other copy will be available and you don't have to do recovery of the control file. You can multiplex the control file at the time of creating the database, and later on also. If you have not multiplexed the control file at the time of creating the database, you can do it now by following the given procedure.
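The first step of the procedure, before the copy in step 2 below, is to shut the database down cleanly so the control file is not in use while you copy it; a minimal sketch:

1. Shut down the database:

SQL> shutdown immediate;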
2. Copy the control file from the old location to the new location using an operating system command. For example:

$cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora
3. Now open the parameter file and specify the new location. The file currently contains:

CONTROL_FILES=/u01/oracle/ica/control.ora

Change it to:

CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/control.ora
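After the parameter file is updated, the database can be started again; a minimal sketch:

4. Start the database:

SQL> startup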
Now Oracle will start updating both the control files, and if one control file is lost you can copy it from the other location.
1. First generate the create controlfile statement SQL>alter database backup controlfile to trace;
After you give this statement, Oracle writes the CREATE CONTROLFILE statement to a trace file. The trace file is randomly named, something like ORA23212.TRC, and is created in the USER_DUMP_DEST directory.
2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file contains the CREATE CONTROLFILE statement. It has two sets of statements, one with RESETLOGS and another without RESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Now copy and paste that statement into a file. Let it be c.sql.
3. Now open the c.sql file in a text editor and change the database name from ica to prod, as shown in the example below:
CREATE CONTROLFILE SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                 '/u01/oracle/ica/redo01_02.log'),
        GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                 '/u01/oracle/ica/redo02_02.log'),
        GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                 '/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
         '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
         '/u01/oracle/ica/users01.dbs' SIZE 5M,
         '/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
6. Now open the database with RESETLOGS SQL>ALTER DATABASE OPEN RESETLOGS;
PARAMETER FILE located in /u01/oracle/ica/initica.ora
CONTROL_FILES=/u01/oracle/ica/control.ora
BACKGROUND_DUMP_DEST=/u01/oracle/ica/bdump
USER_DUMP_DEST=/u01/oracle/ica/udump
CORE_DUMP_DEST=/u01/oracle/ica/cdump
LOG_ARCHIVE_DEST_1='location=/u01/oracle/ica/arc1'
/u01/oracle/ica/rbs.dbf
/u01/oracle/ica/tmp.dbf
/u01/oracle/ica/sysaux.dbf
LOGFILE=
/u01/oracle/ica/log1.ora
/u01/oracle/ica/log2.ora

Now you want to copy this database to SERVER 2, but SERVER 2 does not have a /u01 filesystem; it has a /d01 filesystem.
To clone this database on SERVER 2, follow these steps:
1. On SERVER 2, install the same version of the operating system and the same version of Oracle as on SERVER 1.
Now, go to the USER_DUMP_DEST directory and open the latest trace file. This file contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a file. Let the filename be cr.sql.
MAXDATAFILES 200 MAXINSTANCES 6 ARCHIVELOG;
3. On SERVER 2, create the following directories:
$cd /d01/oracle
$mkdir ica
$mkdir arc1
$cd ica
$mkdir bdump udump cdump
Shutdown the database on SERVER 1 and transfer all datafiles, logfiles and control file to SERVER 2 in /d01/oracle/ica directory.
Copy parameter file to SERVER 2 in /d01/oracle/dbs directory and copy all archive log files to SERVER 2 in /d01/oracle/ica/arc1 directory. Copy the cr.sql script file to /d01/oracle/ica directory.
4. Open the parameter file on SERVER 2 and change the following parameters:
5. Now, open the cr.sql file in text editor and change the locations like this
CREATE CONTROLFILE SET DATABASE prod
LOGFILE GROUP 1 ('/d01/oracle/ica/log1.ora'),
        GROUP 2 ('/d01/oracle/ica/log2.ora')
RESETLOGS
DATAFILE '/d01/oracle/ica/sys.dbf' SIZE 300M,
         '/d01/oracle/ica/rbs.dbf' SIZE 50M,
         '/d01/oracle/ica/usr.dbf' SIZE 50M,
         '/d01/oracle/ica/tmp.dbf' SIZE 50M,
         '/d01/oracle/ica/sysaux.dbf' SIZE 100M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;
On SERVER 2, export the ORACLE_SID environment variable and start the instance:
$export ORACLE_SID=ica
$sqlplus
Enter User: / as sysdba
SQL> startup nomount;
- Roll back transactions when a ROLLBACK statement is issued
- Recover the database
- Provide read consistency
- Analyze data as of an earlier point in time by using Flashback Query
- Recover from logical corruptions using Flashback features
Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends that you use undo tablespace to manage undo rather than rollback segments.
UndoSpace = (UR * UPS) + overhead

where:
- UndoSpace is the number of undo blocks
- UR is UNDO_RETENTION in seconds. This value should take into consideration long-running queries and any flashback requirements.
- UPS is the number of undo blocks generated per second
- overhead is the small overhead for metadata (transaction tables, bitmaps, and so forth)
As an example, if UNDO_RETENTION is set to 3 hours and the transaction rate (UPS) is 100 undo blocks per second, then with an 8K block size the required undo space is computed as follows:
(3 * 3600 * 100 * 8K) = 8.24 GB
To get the values for UPS and overhead, query the V$UNDOSTAT view:
SQL> SELECT * FROM V$UNDOSTAT;
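The arithmetic above can be double-checked with a short sketch (Python here purely as a calculator; the retention, rate, and block size are the example's assumed values, and the small metadata overhead is ignored):

```python
# Undo space estimate: UndoSpace = UR * UPS * block_size (+ small overhead, ignored here)
UR = 3 * 3600          # UNDO_RETENTION: 3 hours, expressed in seconds
UPS = 100              # assumed undo blocks generated per second (from V$UNDOSTAT)
BLOCK_SIZE = 8 * 1024  # assumed 8K database block size, in bytes

undo_bytes = UR * UPS * BLOCK_SIZE
print(undo_bytes)                     # 8847360000 bytes
print(round(undo_bytes / 2**30, 2))   # 8.24 (GB, matching the figure in the text)
```

In practice you would replace the assumed UPS value with the measured rate from V$UNDOSTAT before sizing the tablespace.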
SQL> ALTER TABLESPACE myundo ADD DATAFILE '/u01/oracle/ica/undo02.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;
An undo tablespace can only be dropped if it is not currently used by any instance. If the undo tablespace contains any outstanding transactions (for example, a transaction died but has not yet been recovered), the DROP TABLESPACE statement fails.
Assuming myundo is the current undo tablespace, after this command successfully executes, the instance uses myundo2 in place of myundo as its undo tablespace.
SQL Loader
The SQL*Loader utility is used to load data from other data sources into Oracle. For example, if you have a table in FoxPro, Access, Sybase, or any other third-party database, you can use SQL*Loader to load the data into Oracle tables. SQL*Loader reads data only from flat files, so if you want to load data from FoxPro or any other database, you must first convert that data into a delimited-format or fixed-length-format flat file, and then use SQL*Loader to load the data into Oracle.

The following is the procedure to load data from a third-party database into Oracle using SQL*Loader:
1. Convert the data into a flat file using a third-party database command.
2. Create the table structure in the Oracle database using appropriate datatypes.
3. Write a control file describing how to interpret the flat file and the options to load the data.
4. Execute the SQL*Loader utility, specifying the control file as a command line argument.
This table contains some 10,000 rows. Now you want to load the data from this table into an Oracle table. The Oracle database is running on Linux.

Solution

Steps:
Start MS Access and convert the table into a comma-delimited flat file (popularly known as a CSV file) by clicking on the File/Save As menu. Let the delimited file name be emp.csv.
1. Now transfer this file to the Linux server using the FTP command:
   a. Go to the Command Prompt in Windows.
   b. At the command prompt, type FTP followed by the IP address of the server running Oracle. FTP will then prompt you for a username and password to connect to the Linux server. Supply a valid username and password of an Oracle user on Linux. For example:
      C:\>ftp 200.200.100.111
      Name: oracle
      Password: oracle
      FTP>
   c. Now give the PUT command to transfer the file from the current Windows machine to the Linux machine:
      FTP>put
      Local file:C:\>emp.csv
      remote-file:/u01/oracle/emp.csv
      File transferred in 0.29 Seconds
      FTP>
   d. After the file is transferred, quit the FTP utility by typing the bye command:
      FTP>bye
      Good-Bye
2. Now come to the Linux machine and create a table in Oracle with the same structure as in MS Access, using appropriate datatypes. For example, create a table like this:
   $sqlplus scott/tiger
   SQL>CREATE TABLE emp (empno number(5), name varchar2(50), sal number(10,2), jdate date);
3. After creating the table, you have to write a control file describing the actions SQL*Loader should perform. You can use any text editor to write the control file. Now let us write a control file for our case study:
   $vi emp.ctl
   1  LOAD DATA
   2  INFILE '/u01/oracle/emp.csv'
   3  BADFILE '/u01/oracle/emp.bad'
   4  DISCARDFILE '/u01/oracle/emp.dsc'
   5  INSERT INTO TABLE emp
   6  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
   7  (empno, name, sal, jdate date 'mm/dd/yyyy')
Notes:
(Do not write the line numbers; they are meant for explanation purposes only.)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The INFILE option specifies where the input file is located.
3. Specifying BADFILE is optional. If you specify it, bad records found during loading are stored in this file.
4. Specifying DISCARDFILE is optional. If you specify it, records which do not meet a WHEN condition are written to this file.
5. You can use any of the following loading options:
   - INSERT: Loads rows only if the target table is empty.
   - APPEND: Loads rows whether the target table is empty or not.
   - REPLACE: First deletes all the rows in the existing table and then loads rows.
   - TRUNCATE: First truncates the table and then loads rows.
6. This line indicates how the fields are separated in the input file. Since in our case the fields are separated by ',', we have specified ',' as the terminating character for fields. You can replace this with any character that is used to terminate fields; some popularly used terminating characters are the semicolon ';', colon ':', and pipe '|'. TRAILING NULLCOLS means that if the last column is null it is treated as a null value; otherwise, SQL*Loader treats the whole record as bad.
7. In this line, specify the columns of the target table. Note how you specify the format for date columns.
4. After you have written the control file, save it and then call the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=emp.ctl log=emp.log

After you have executed the above command, SQL*Loader shows you output describing how many rows it has loaded. The LOG option of sqlldr specifies where the log file of this SQL*Loader session should be created. The log file contains all the actions SQL*Loader has performed, i.e. how many rows were loaded, how many were rejected, how much time was taken to load the rows, and so on. You should view this file for any errors encountered while running SQL*Loader.
CASE STUDY (Loading Data from Fixed Length file into Oracle)
Suppose we have a fixed-length format file containing employee data, as shown below, and we want to load this data into an Oracle table.
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
SOLUTION:
Steps:
1. First open the file in a text editor and count the lengths of the fields. For example, in our fixed-length file, the employee number is from position 1 to position 4, the employee name is from position 6 to position 15, and the job name is from position 17 to position 25. The other columns are located similarly.
2. Create a table in Oracle, by any name, whose columns match those specified in the fixed-length file. In our case, give the following command to create the table.
SQL> CREATE TABLE emp (empno NUMBER(5), name VARCHAR2(20), job VARCHAR2(10), mgr NUMBER(5), sal NUMBER(10,2), comm NUMBER(10,2), deptno NUMBER(3) );
3. After creating the table, write a control file using any text editor:

$vi empfix.ctl
1) LOAD DATA
2) INFILE '/u01/oracle/fix.dat'
3) INTO TABLE emp
4) (empno  POSITION(01:04) INTEGER EXTERNAL,
   name   POSITION(06:15) CHAR,
   job    POSITION(17:25) CHAR,
   mgr    POSITION(27:30) INTEGER EXTERNAL,
   sal    POSITION(32:39) DECIMAL EXTERNAL,
   comm   POSITION(41:48) DECIMAL EXTERNAL,
5) deptno POSITION(50:51) INTEGER EXTERNAL)

Notes: (Do not write the line numbers; they are meant for explanation purposes only.)
1. The LOAD DATA statement is required at the beginning of the control file.
2. The name of the file containing the data follows the INFILE parameter.
3. The INTO TABLE statement is required to identify the table to be loaded into.
4. Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded into that column. empno, name, job, and so on are names of columns in table emp. The datatypes (INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the datatypes of the data fields in the file, not of the corresponding columns in the emp table.
5. Note that the set of column specifications is enclosed in parentheses.
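Since the POSITION(m:n) ranges are 1-based and inclusive, it is easy to sanity-check them before running SQL*Loader by slicing a sample record the same way. The following Python sketch is illustrative only; the sample line and field map mirror the case study's layout:

```python
# POSITION(m:n) in the control file is 1-based and inclusive;
# Python slicing is 0-based and end-exclusive, so POSITION(m:n) -> record[m-1:n].
FIELDS = {
    "empno": (1, 4),
    "name": (6, 15),
    "job": (17, 25),
    "mgr": (27, 30),
    "sal": (32, 39),
    "comm": (41, 48),
    "deptno": (50, 51),
}

def parse(record):
    """Slice one fixed-length record into its named fields."""
    return {name: record[m - 1:n].strip() for name, (m, n) in FIELDS.items()}

# A sample line laid out to match the positions above
line = "7782 CLARK      MANAGER   7839  2572.50          10"
row = parse(line)
print(row["empno"], row["job"], row["deptno"])
```

If a field comes back empty or truncated (as comm does for CLARK, who has no commission), the position range in the control file is the first thing to re-check.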
4. After saving the control file, start the SQL*Loader utility by typing the following command:

$sqlldr userid=scott/tiger control=empfix.ctl log=empfix.log direct=y

After you have executed the above command, SQL*Loader shows you output describing how many rows it has loaded.
7566 JONES      MANAGER   7839  3123.75          20
7499 ALLEN      SALESMAN  7698  1600.00   300.00 30
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
7658 CHAN       ANALYST   7566  3450.00          20
7654 MARTIN     SALESMAN  7698  1312.50  1400.00 30
Now we want to load all the employees whose deptno is 10 into the emp1 table, and those employees whose deptno is not equal to 10 into the emp2 table. To do this, first create the tables emp1 and emp2 with appropriate columns and datatypes. Then write a control file as shown below:

$vi emp_multi.ctl
LOAD DATA
INFILE '/u01/oracle/empfix.dat'
APPEND
INTO TABLE scott.emp1
WHEN (deptno=10)
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
INTO TABLE scott.emp2
WHEN (deptno<>10)
(empno  POSITION(01:04) INTEGER EXTERNAL,
 name   POSITION(06:15) CHAR,
 job    POSITION(17:25) CHAR,
 mgr    POSITION(27:30) INTEGER EXTERNAL,
 sal    POSITION(32:39) DECIMAL EXTERNAL,
 comm   POSITION(41:48) DECIMAL EXTERNAL,
 deptno POSITION(50:51) INTEGER EXTERNAL)
After saving the file emp_multi.ctl, run sqlldr:
$sqlldr userid=scott/tiger control=emp_multi.ctl
When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL statements are generated, passed to Oracle, and executed. The Oracle database looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.

Direct Path

In direct path loading, Oracle does not use SQL INSERT statements to load rows. Instead, it writes the rows directly into fresh blocks beyond the high water mark in the datafiles, i.e. it does not scan for free blocks below the high water mark. Direct path load is very fast because:
- Partial blocks are not used, so no reads are needed to find them, and fewer writes are performed.
- SQL*Loader need not execute any SQL INSERT statements; therefore, the processing load on the Oracle database is reduced.
- A direct path load calls on Oracle to lock tables and indexes at the start of the load and releases them when the load is finished. A conventional path load calls Oracle once for each array of rows to process a SQL INSERT statement.
- A direct path load uses multiblock asynchronous I/O for writes to the database files.
- During a direct path load, processes perform their own write I/O, instead of using Oracle's buffer cache. This minimizes contention with other Oracle users.
Restrictions on Using Direct Path Loads The following conditions must be satisfied for you to use the direct path load method:
- The tables are not clustered.
- The tables to be loaded do not have any active transactions pending.
- You are not loading a parent table together with a child table.
- You are not loading BFILE columns.
When you import tables, the Import tool performs the actions in the following order:
1. New tables are created.
2. Data is imported and indexes are built.
3. Triggers are imported.
4. Integrity constraints are enabled on the new tables.
5. Any bitmap, function-based, and/or domain indexes are built.
This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data.
Keyword            Description (Default)
-----------------  ----------------------------------------------------
USERID             username/password
BUFFER             size of data buffer
FILE               output files (EXPDAT.DMP)
COMPRESS           import into one extent (Y)
GRANTS             export grants (Y)
INDEXES            export indexes (Y)
DIRECT             direct path (N)
LOG                log file of screen output
ROWS               export data rows (Y)
CONSISTENT         cross-table consistency (N)
FULL               export entire file (N)
OWNER              list of owner usernames
TABLES             list of table names
RECORDLENGTH       length of IO record
INCTYPE            incremental export type
RECORD             track incr. export (Y)
TRIGGERS           export triggers (Y)
STATISTICS         analyze objects (ESTIMATE)
PARFILE            parameter filename
CONSTRAINTS        export constraints (Y)
OBJECT_CONSISTENT  transaction set to read only during object export (N)
FEEDBACK           display progress every x rows (0)
FILESIZE           maximum size of each dump file
FLASHBACK_SCN      SCN used to set session snapshot back to
FLASHBACK_TIME     time used to get the SCN closest to the specified time
QUERY                 select clause used to export a subset of a table
RESUMABLE             suspend when a space related error is encountered (N)
RESUMABLE_NAME        text string used to identify resumable statement
RESUMABLE_TIMEOUT     wait time for RESUMABLE
TTS_FULL_CHECK        perform full or partial dependency check for TTS
TABLESPACES           list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE              template name which invokes iAS mode export
The Export and Import tools support four modes of operation:
FULL: Exports all the objects in all schemas.
OWNER: Exports only objects belonging to the given OWNER.
TABLES: Exports individual tables.
TABLESPACE: Exports all objects located in a given TABLESPACE.

Example of Exporting a Full Database
The following example shows how to export the full database:
$exp USERID=scott/tiger FULL=y FILE=myfull.dmp
In the above command, the FILE option specifies the name of the dump file, the FULL option specifies that you want to export the full database, and the USERID option specifies the user account to connect to the database. Note that to perform a full export the user should have the DBA or EXP_FULL_DATABASE privilege.

Example of Exporting Schemas
To export objects stored in particular schemas, you can run the Export utility with the following arguments:
$exp USERID=scott/tiger OWNER=(SCOTT,ALI) FILE=exp_own.dmp
The above command will export all the objects stored in SCOTT's and ALI's schemas.

Exporting Individual Tables
To export individual tables, give the following command:
$exp USERID=scott/tiger TABLES=(scott.emp,scott.sales) FILE=exp_tab.dmp
This will export SCOTT's emp and sales tables.
Exporting a Consistent Image of the Tables
If you include the CONSISTENT=Y option in the export command, the Export utility exports a consistent image of the table, i.e. changes made to the table during the export operation are not exported.
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is a partitioned table.
USERID must be the first parameter on the command line.

Keyword                Description (Default)
---------------------  -----------------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE                   input files (EXPDAT.DMP)
SHOW                   just list file contents (N)
IGNORE                 ignore create errors (N)
GRANTS                 import grants (Y)
INDEXES                import indexes (Y)
ROWS                   import data rows (Y)
LOG                    log file of screen output
FULL                   import entire file (N)
FROMUSER               list of owner usernames
TOUSER                 list of usernames
TABLES                 list of table names
RECORDLENGTH           length of IO record
INCTYPE                incremental import type
COMMIT                 commit array insert (N)
PARFILE                parameter filename
CONSTRAINTS            import constraints (Y)
DESTROY                overwrite tablespace data file (N)
INDEXFILE              write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK               display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE               maximum size of each dump file
STATISTICS             import precomputed statistics (always)
RESUMABLE              suspend when a space related error is encountered (N)
RESUMABLE_NAME         text string used to identify resumable statement
RESUMABLE_TIMEOUT      wait time for RESUMABLE
COMPILE                compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION  import streams general metadata (Y)
STREAMS_INSTANTIATION  import streams instantiation metadata (N)
Example: Importing Individual Tables
To import individual tables from a full database export dump file, give the following command:
$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(emp,dept)
This command imports only the emp and dept tables into the SCOTT user, and you will get output similar to that shown below.
Export file created by EXPORT:V10.00.00 via conventional path
import done in WE8DEC character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table                   "DEPT"          4 rows imported
. . importing table                    "EMP"         14 rows imported
Example, Importing Tables of One User account into another User account For example, suppose Ali has exported tables into a dump file mytables.dmp. Now Scott wants to import these tables. To achieve this Scott will give the following import command $imp scott/tiger FILE=mytables.dmp FROMUSER=ali TOUSER=scott
The Import utility will then give a warning that the tables in the dump file were exported by user ALI, not by you, and proceed.

Example: Importing Tables Using Pattern Matching
Suppose you want to import all tables from a dump file whose names match a particular pattern. To do so, use the % wildcard character in the TABLES option. For example, the following command imports all tables whose names start with the letter "a" and all tables whose names contain the letter "d":
$imp scott/tiger FILE=myfullexp.dmp FROMUSER=scott TABLES=(a%,%d%)
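The % wildcard behaves like SQL's LIKE operator. As a purely illustrative model (not part of the imp utility itself), the selection can be sketched in Python by translating % into a regular expression; the table names below are hypothetical:

```python
import re

def matches(pattern, name):
    """Model imp's TABLES wildcard: % matches any run of characters, case-insensitively."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("%")) + "$"
    return re.match(regex, name, re.IGNORECASE) is not None

# Hypothetical table names in the dump file
tables = ["EMP", "DEPT", "ACCOUNTS", "ORDERS", "SALGRADE"]

# TABLES=(a%,%d%): names starting with "a" or containing "d"
selected = [t for t in tables if matches("a%", t) or matches("%d%", t)]
print(selected)   # ['DEPT', 'ACCOUNTS', 'ORDERS', 'SALGRADE']
```

Note how EMP is excluded: it neither starts with "a" nor contains a "d".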
The following steps present a general overview of how to move a database between platforms.
1. As a DBA user, issue the following SQL query to get the exact names of all tablespaces. You will need this information later in the process.
   SQL> SELECT tablespace_name FROM dba_tablespaces;
2. As a DBA user, perform a full export from the source database, for example:
   > exp system/manager FULL=y FILE=myfullexp.dmp
3. Move the dump file to the target database server. If you use FTP, be sure to copy it in binary format (by entering binary at the FTP prompt) to avoid file corruption.
4. Create a database on the target server.
5. Before importing the dump file, you must first create your tablespaces, using the information obtained in Step 1. Otherwise, the import will create the corresponding datafiles in the same file structure as at the source database, which may not be compatible with the file structure on the target system.
6. As a DBA user, perform a full import with the IGNORE parameter enabled:
   > imp system/manager FULL=y IGNORE=y FILE=myfullexp.dmp
Using IGNORE=y instructs Oracle to ignore any creation errors during the import and permit the import to complete.
Most Data Pump export and import operations occur on the Oracle database server, i.e. all the dump files are created on the server even if you run the Data Pump utility from a client machine. This results in increased performance because data is not transferred over the network.
- You can stop and restart export and import jobs. This is particularly useful if you have started an export or import job and after some time you need to do other urgent work.
- You can detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations.
- You can estimate how much space an export job would consume, without actually performing the export.
- An interactive-command mode allows monitoring of, and interaction with, ongoing jobs.
The above command exports the full database and creates the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.
In some cases, where the database is in terabytes, the above command is not feasible, since the dump file size will be larger than the operating system limit, and hence the export will fail. In this situation you can create multiple dump files by typing the following command:
$expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp, and so on. The FILESIZE parameter specifies the maximum size of each dump file.
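The number of pieces such an export produces is roughly the total export size divided by FILESIZE, rounded up. A back-of-the-envelope sketch (the 12 GB figure is hypothetical, and the real %U substitution is done by Data Pump itself):

```python
import math

def dump_files(total_gb, filesize_gb, template="full%02d.dmp"):
    """Estimate the dump file names a FILESIZE-limited export would produce."""
    count = math.ceil(total_gb / filesize_gb)   # round up: a partial piece still needs a file
    return [template % i for i in range(1, count + 1)]

# Hypothetical 12 GB export split into 5 GB pieces
print(dump_files(12, 5))   # ['full01.dmp', 'full02.dmp', 'full03.dmp']
```

This is only a sizing aid; the actual count can differ slightly because Data Pump writes metadata and compressible data unevenly across the pieces.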
If you want to export tables located in a particular tablespace you can type the following command
Suspending and Resuming Export Jobs (Attaching and Re-Attaching to the Jobs)
You can suspend running export jobs and later resume or kill them using Data Pump Export. You can start a job on one client machine, suspend it if other work comes up, and later continue the job from the same client or restart it from another client machine.
For example, suppose a DBA starts a full database export at client machine CLNT1 by typing the following command:
$expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He then gets the Export> prompt, where he can type interactive commands. To stop the export job, he types the following command:
Export> STOP_JOB=IMMEDIATE Are you sure you wish to stop this job ([y]/n): y
After finishing his other work, the DBA wants to resume the export job, but the client machine from which he originally started the job is locked. So the DBA goes to another client machine and reattaches to the job by typing the following command:
$expdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill it by typing KILL_JOB if he does not want to continue with the export job.
This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter
If the SCOTT account exists in the database, the hr objects are loaded into the SCOTT schema. If the SCOTT account does not exist, the Import utility creates the SCOTT account with an unusable password, because the dump file was exported and imported by the user SYSTEM, who has DBA privileges.
The above example loads the tables stored in the users tablespace into the sales tablespace.
Generating SQL File containing DDL commands using Data Pump Import
You can generate a SQL file containing all the DDL commands that Import would have executed if you had actually run the Import utility. The following is an example of using the SQLFILE parameter.
$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.
This will import only employees and jobs tables from the DUMPFILE.
$impdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He then gets the Import> prompt, where he can type interactive commands. To stop the import job, he types the following command:
Import> STOP_JOB=IMMEDIATE Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and exits the client. After finishing his other work, the DBA wants to resume the import job, but the client machine from which he originally started the job is locked. So the DBA goes to another client machine and reattaches to the job by typing the following command:
$impdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Import> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill it by typing KILL_JOB if he does not want to continue with the import job.
Flashback Query
SQL> select * from emp as of timestamp sysdate-1/24;
or
SQL> SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('2007-06-07 10:00:00', 'YYYY-MM-DD HH:MI:SS');
To insert the accidentally deleted rows back into the table, he can type:
SQL> insert into emp (select * from emp as of timestamp sysdate-1/24);
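Note that Oracle date arithmetic works in days, so sysdate-1/24 means one hour ago. The conversion can be sanity-checked with a quick sketch:

```python
from datetime import datetime, timedelta

# Oracle date arithmetic is in days: sysdate - 1/24 means "one hour before now".
delta = timedelta(days=1 / 24)
assert delta == timedelta(hours=1)

one_hour_ago = datetime.now() - delta
print(one_hour_ago)
```

The same reasoning gives sysdate-1/1440 for one minute ago, since a day has 1440 minutes.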
To understand this, let us see the following example. Before starting this example, let us collect the timestamp:
SQL> select to_char(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS') from dual;
TO_CHAR(SYSTIMESTAMP,'YYYY-MM-DD HH:MI:SS')
-------------------------------------------
2007-06-19 20:30:43

Suppose a user creates an emp table, inserts a row into it, and commits:
SQL> create table emp (empno number(5), name varchar2(20), sal number(10,2));
SQL> insert into emp values (101,'Sami',5000);
SQL> commit;
At this time the emp table has one version of one row. Now a user sitting at another machine erroneously changes the salary from 5000 to 2000 using an UPDATE statement:
SQL> update emp set sal=sal-3000 where empno=101;
SQL> commit;
Subsequently, a new transaction updates the name of the employee from Sami to Smith:
SQL> update emp set name='Smith' where empno=101;
SQL> commit;
At this point, the DBA detects the application error and needs to diagnose the problem. The DBA issues the following query to retrieve versions of the rows in the emp table that correspond to empno 101. The query uses Flashback Version Query pseudocolumns:
SQL> connect / as sysdba
SQL> column versions_starttime format a16
SQL> column versions_endtime format a16
SQL> set linesize 120
SQL> select versions_xid, versions_starttime, versions_endtime,
     versions_operation, empno, name, sal
     from emp
     versions between timestamp
       to_timestamp('2007-06-19 20:30:00','yyyy-mm-dd hh:mi:ss')
       and to_timestamp('2007-06-19 21:00:00','yyyy-mm-dd hh:mi:ss');

VERSIONS_XID  V  STARTSCN  ENDSCN  EMPNO  NAME   SAL
------------  -  --------  ------  -----  -----  -----
0200100020D   U     12320          101    SMITH  2000
02001003C02   U     11345          101    SAMI   2000
0002302C03A   I     11323          101    SAMI   5000
The output should be read from bottom to top. From it we can see that first an insert took place, then the erroneous update, and then another update that changed the name. The DBA identifies transaction 02001003C02 as the erroneous one and issues the following query to get the SQL command to undo the change:
SQL> select operation, logon_user, undo_sql
     from flashback_transaction_query
     where xid=HEXTORAW('02001003C02');

OPERATION  LOGON_USER  UNDO_SQL
---------  ----------  ----------------------------------------------
U          SCOTT       update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA'
Now the DBA can execute this command to undo the change made by the user:

SQL> update emp set sal=5000 where ROWID = 'AAAKD2AABAAAJ29AAA';
1 row updated.
The emp table is restored to its state when the database was at the time specified by the timestamp.
Example:At 17:00 an HR administrator discovers that an employee "JOHN" is missing from the EMPLOYEE table. This employee was present at 14:00, the last time she ran a report. Someone accidentally deleted the record for "JOHN" between 14:00 and the present time. She uses Flashback Table to return the table to its state at 14:00, as shown in this example:
FLASHBACK TABLE employees TO TIMESTAMP
  TO_TIMESTAMP('2007-06-21 14:00:00','YYYY-MM-DD HH:MI:SS')
  ENABLE TRIGGERS;
You have to give the ENABLE TRIGGERS option; otherwise, by default, all database triggers on the table will be disabled.
SQL>PURGE DBA_RECYCLEBIN;
To view the contents of the Recycle Bin, give the following command:
SQL> show recyclebin;

Permanently Dropping Tables
If you want to permanently drop tables without putting them into the Recycle Bin, drop them with the PURGE option, like this:
SQL> drop table emp purge;
This will drop the table permanently, and it cannot be restored.

Flashback Drop of Multiple Objects With the Same Original Name
You can create, and then drop, several objects with the same original name, and they will all be stored in the recycle bin. For example, consider these SQL statements:
CREATE TABLE EMP ( ...columns );   -- EMP version 1
DROP TABLE EMP;
CREATE TABLE EMP ( ...columns );   -- EMP version 2
DROP TABLE EMP;
CREATE TABLE EMP ( ...columns );   -- EMP version 3
DROP TABLE EMP;
In such a case, each table EMP is assigned a unique name in the recycle bin when it is dropped. You can use a FLASHBACK TABLE... TO BEFORE DROP statement with the original name of the table, as shown in this example:
FLASHBACK TABLE EMP TO BEFORE DROP;
The most recently dropped table with that original name is retrieved from the recycle bin, with its original name. You can retrieve it and assign it a new name using a RENAME TO clause. The following example shows the retrieval from the recycle bin of all three dropped EMP tables from the previous example, with each assigned a new name:
FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_3; FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_2; FLASHBACK TABLE EMP TO BEFORE DROP RENAME TO EMP_VER_1;
Important Points:

1. There is no guarantee that objects will remain in the Recycle Bin. Oracle might empty the recycle bin whenever space pressure occurs, i.e. whenever a tablespace becomes full and a transaction requires new extents; Oracle will then delete objects from the recycle bin.

2. A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints and so on) go into the recycle bin together when you drop the table. Likewise, when you perform Flashback Drop, the objects are generally all retrieved together.

3. There is no fixed amount of space allocated to the recycle bin, and no guarantee as to how long dropped objects remain in it. Depending upon system activity, a dropped object may remain in the recycle bin for seconds, or for months.
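The recycle bin described above can also be inspected directly through the DBA_RECYCLEBIN view; a minimal sketch (column list abbreviated):

SQL> SELECT owner, object_name, original_name, type, droptime FROM dba_recyclebin;

This is useful when you need to find the system-generated name of a dropped object before purging it or flashing it back.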
(Note: db_flashback_retention_target is specified in minutes; here we have specified 3 days, i.e. 3x24x60 = 4320.)
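For reference, the retention target mentioned in the note is set along these lines (a sketch only; the recovery-area path and size shown here are hypothetical, not taken from this document):

SQL> ALTER SYSTEM SET db_recovery_file_dest='/u01/ica/flash_area' SCOPE=spfile;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=spfile;
SQL> ALTER SYSTEM SET db_flashback_retention_target=4320 SCOPE=spfile;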
Step 2. Start the instance and mount the database:

SQL> startup mount;

Step 3. Now enable Flashback Database by giving the following command:

SQL> alter database flashback on;

Oracle now starts writing flashback logs to the recovery area.

How large should the flash recovery area be? After you have enabled the Flashback Database feature and allowed the database to generate some flashback logs, run the following query:
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;
This shows approximately how large the recovery area should be.

How far back can you flash back the database? To determine the earliest SCN and earliest time to which you can flash back your database, give the following query:
SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;
Suppose a user erroneously drops a schema at 10:00AM and you, as the DBA, come to know of this at 5PM. Since you have configured the flash recovery area and set the flashback retention time to 3 days, you can flash the database back to 9:59AM by following this procedure:
1. Shut down the database and start it in MOUNT mode (Flashback Database requires the database to be mounted but not open):

$rman target /
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;

2. Run the FLASHBACK DATABASE command to return the database to 9:59AM:

RMAN> FLASHBACK DATABASE TO TIME timestamp('2007-06-21 09:59:00');

or, you can also type this command:
RMAN> FLASHBACK DATABASE TO TIME (SYSDATE-8/24);
3. When the Flashback Database operation completes, evaluate the results by opening the database read-only and running some queries to check whether the database has been returned to the desired state:
RMAN> SQL 'ALTER DATABASE OPEN READ ONLY';
Option 1:
If you are content with your result, you can open the database by performing:

RMAN> ALTER DATABASE OPEN RESETLOGS;
Option 2:
If you discover that you have chosen the wrong target time for your Flashback Database operation, you can use RECOVER DATABASE UNTIL to bring the database forward, or perform FLASHBACK DATABASE again with an SCN further in the past. You can completely undo the effects of your flashback operation by performing complete recovery of the database:

RMAN> RECOVER DATABASE;
Option 3:
If you only want to retrieve some lost data from the past time, you can open the database read-only, perform a logical export of the data using the Oracle export utility, then run RECOVER DATABASE to return the database to the present time, and re-import the data using the Oracle import utility.

4. Since in our example only a schema was dropped and the rest of the database is good, the third option is the relevant one. Come out of RMAN and run the EXPORT utility to export the whole schema:

$exp userid=system/manager file=scott.dmp owner=SCOTT
5. Start RMAN and recover the database to the present time:

$rman target /
RMAN> RECOVER DATABASE;
6. After the database is recovered, shut down and restart the database in normal mode, then import the schema by running the IMPORT utility:

$imp userid=system/manager file=scott.dmp
LogMiner

Using the LogMiner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.
LogMiner Configuration
There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest:
The source database is the database that produces all the redo log files that you want LogMiner to analyze.

The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data. For example, consider the following SQL statement:

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES ('IT_WT','Technical Writer', 4000, 11000);
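Without a dictionary, LogMiner would present the same statement with internal object IDs and raw binary values, along these lines (the object number and hex values below are illustrative, not taken from this document):

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'), HEXTORAW('546563686e6963616c20577269746572'),
 HEXTORAW('c229'), HEXTORAW('c3020b'));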
The redo log files contain the changes made to the database or database dictionary.
Using the online catalog: Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.
Extracting the dictionary to the redo log files: Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.
Extracting the dictionary to a flat file: This option is maintained for backward compatibility with previous releases. It does not guarantee transactional consistency; Oracle recommends that you use either the online catalog or extract the dictionary from the redo log files instead.

Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode, and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed; therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not). To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter UTL_FILE_DIR in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:
UTL_FILE_DIR = /oracle/database
2. Start the database:

SQL> startup

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
Step 2: Start LogMiner, specifying the dictionary to use.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
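This excerpt does not show the step of registering the redo log files to be analyzed; in a typical session they are added before starting LogMiner, along these lines (the file names here are hypothetical):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_210.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_211.dbf', OPTIONS => DBMS_LOGMNR.ADDFILE);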
Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus, statements belonging to different transactions interleave.
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO
     FROM V$LOGMNR_CONTENTS
     WHERE username IN ('HR', 'OE');
USR  XID
---  ---------
HR   1.11.1476
  SQL_REDO: set transaction read write;

HR   1.11.1476
  SQL_REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('306','Mohammed','Sami','MDSAMI','1234567890',TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss'),'HR_REP','120000','.05','105','10');
  SQL_UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '306' and "FIRST_NAME" = 'Mohammed' and "LAST_NAME" = 'Sami' and "EMAIL" = 'MDSAMI' and "PHONE_NUMBER" = '1234567890' and "HIRE_DATE" = TO_DATE('10-jan-2003 13:34:43','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'HR_REP' and "SALARY" = '120000' and "COMMISSION_PCT" = '.05' and "DEPARTMENT_ID" = '10' and ROWID = 'AAAHSkAABAAAY6rAAO';

OE   1.1.1484
  SQL_REDO: set transaction read write;

OE   1.1.1484
  SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAB';
  SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1799' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAB';

OE   1.1.1484
  SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') and ROWID = 'AAAHTKAABAAAY9mAAC';
  SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+01-00') where "PRODUCT_ID" = '1801' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and ROWID = 'AAAHTKAABAAAY9mAAC';

HR   1.11.1476
  SQL_REDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('307','John','Silver','JSILVER','5551112222',TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss'),'SH_CLERK','110000','.05','105','50');
  SQL_UNDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '307' and "FIRST_NAME" = 'John' and "LAST_NAME" = 'Silver' and "EMAIL" = 'JSILVER' and "PHONE_NUMBER" = '5551112222' and "HIRE_DATE" = TO_DATE('10-jan-2003 13:41:03','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'SH_CLERK' and "SALARY" = '110000' and "COMMISSION_PCT" = '.05' and "MANAGER_ID" = '105' and "DEPARTMENT_ID" = '50' and ROWID = 'AAAHSkAABAAAY6rAAP';

OE   1.1.1484
  SQL_REDO: commit;

HR   1.15.1481
  SQL_REDO: set transaction read write;

HR   1.15.1481
  SQL_REDO: delete from "HR"."EMPLOYEES" where "EMPLOYEE_ID" = '205' and "FIRST_NAME" = 'Shelley' and "LAST_NAME" = 'Higgins' and "EMAIL" = 'SHIGGINS' and "PHONE_NUMBER" = '515.123.8080' and "HIRE_DATE" = TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss') and "JOB_ID" = 'AC_MGR' and "SALARY" = '12000' and "COMMISSION_PCT" IS NULL and "MANAGER_ID" = '101' and "DEPARTMENT_ID" = '110' and ROWID = 'AAAHSkAABAAAY6rAAM';
  SQL_UNDO: insert into "HR"."EMPLOYEES"("EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID") values ('205','Shelley','Higgins','SHIGGINS','515.123.8080',TO_DATE('07-jun-1994 10:05:01','dd-mon-yyyy hh24:mi:ss'),'AC_MGR','12000',NULL,'101','110');

OE   1.8.1484
  SQL_REDO: set transaction read write;

OE   1.8.1484
  SQL_REDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') where "PRODUCT_ID" = '2350' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') and ROWID = 'AAAHTKAABAAAY9tAAD';
  SQL_UNDO: update "OE"."PRODUCT_INFORMATION" set "WARRANTY_PERIOD" = TO_YMINTERVAL('+20-00') where "PRODUCT_ID" = '2350' and "WARRANTY_PERIOD" = TO_YMINTERVAL('+12-06') and ROWID = 'AAAHTKAABAAAY9tAAD';

HR   1.11.1476
  SQL_REDO: commit;
Example of Mining Without Specifying the List of Redo Log Files Explicitly
The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

Example: Mining Redo Log Files in a Given Time Range

This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1: Determine the timestamp of the redo log file that contains the start of the data dictionary.
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
NAME -------------------------------------------/usr/oracle/data/db1arch_1_207_482701534.dbf
Step 2: Display all the redo log files that have been generated so far. This step is not required, but it demonstrates that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS WHERE LOW_TIME > '10-jan-2003 12:01:34';
Step 3: Start LogMiner, specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '10-jan-2003 12:01:34', ENDTIME => SYSDATE, OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.CONTINUOUS_MINE);
Step 4: Query the V$LOGMNR_LOGS view. This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;
Step 5: Query the V$LOGMNR_CONTENTS view. To reduce the number of rows returned, the query excludes all DML statements performed in the SYS or SYSTEM schema. (It also specifies a timestamp to exclude transactions that were involved in the dictionary extraction.) Note that all reconstructed SQL statements returned by the query are correctly translated.
SQL> SELECT USERNAME AS usr, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO
     FROM V$LOGMNR_CONTENTS
     WHERE (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM'))
       AND TIMESTAMP > '10-jan-2003 15:59:53';
USR  XID        SQL_REDO
---  ---------  ----------------------------------------------------------

SYS  1.2.1594   set transaction read write;

SYS  1.2.1594   create table oe.product_tracking
                  (product_id number not null,
                   modified_time date,
                   old_list_price number(8,2),
                   old_warranty_period interval year(2) to month);

SYS  1.2.1594   commit;

SYS  1.18.1602  set transaction read write;

SYS  1.18.1602  create or replace trigger oe.product_tracking_trigger
                before update on oe.product_information
                for each row
                when (new.list_price <> old.list_price or
                      new.warranty_period <> old.warranty_period)
                declare
                begin
                  insert into oe.product_tracking values
                    (:old.product_id, sysdate,
                     :old.list_price, :old.warranty_period);
                end;

SYS  1.18.1602  commit;

OE   1.9.1598   update "OE"."PRODUCT_INFORMATION"
                  set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                      "LIST_PRICE" = 100
                  where "PRODUCT_ID" = 1729
                    and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                    and "LIST_PRICE" = 80
                    and ROWID = 'AAAHTKAABAAAY9yAAA';

OE   1.9.1598   insert into "OE"."PRODUCT_TRACKING" values
                  "PRODUCT_ID" = 1729,
                  "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03','dd-mon-yyyy hh24:mi:ss'),
                  "OLD_LIST_PRICE" = 80,
                  "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE   1.9.1598   update "OE"."PRODUCT_INFORMATION"
                  set "WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),
                      "LIST_PRICE" = 92
                  where "PRODUCT_ID" = 2340
                    and "WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00')
                    and "LIST_PRICE" = 72
                    and ROWID = 'AAAHTKAABAAAY9zAAA';

OE   1.9.1598   insert into "OE"."PRODUCT_TRACKING" values
                  "PRODUCT_ID" = 2340,
                  "MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07','dd-mon-yyyy hh24:mi:ss'),
                  "OLD_LIST_PRICE" = 72,
                  "OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');

OE   1.9.1598   commit;
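When you have finished querying V$LOGMNR_CONTENTS, the LogMiner session is normally closed; a one-line sketch:

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();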
STEP 4: Give the following command:

SQL> ALTER DATABASE NOARCHIVELOG;

STEP 5: Shut down the database and take a full offline backup.
"/u01/ica/usr1.dbf". Give the following series of commands to take an online backup of the USERS tablespace:

$sqlplus
Enter User: / as sysdba
SQL> alter tablespace users begin backup;
SQL> host cp /u01/ica/usr1.dbf /u02/backup
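The hot-backup sequence above is complete only after the tablespace is taken out of backup mode; a sketch of the closing command (assuming the same USERS tablespace):

SQL> alter tablespace users end backup;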
STEP 1: Take a full backup of the current database.

STEP 2: Restore from the full database backup, i.e. copy all the files from the backup to their original locations. (UNIX) Suppose the backup is in the "/u02/backup" directory; then do the following:

$cp /u02/backup/* /u01/ica

This will copy all the files from the backup directory to their original destination. Also remember to copy the control files to all the mirrored locations.
STEP 5: Open the database and reset the logs, because you have performed an incomplete recovery:

SQL> alter database open resetlogs;

STEP 6: After the database is open, export the table to a dump file using the Export utility.

STEP 7: Restore from the full database backup which you took on Saturday.

STEP 8: Open the database and import the table.