BACKUP PROCEDURES
Backup is a critical aspect of any database and must be planned carefully, because recovery depends entirely on the backup strategy being followed. The backup strategy in turn depends on the mode of the database: different methods are used for a database running in archivelog mode and for one running in noarchivelog mode.
Criteria:
[1] When the database is running in archivelog mode.
1.Cold backup:
In init.ora, look up the control_files parameter to find the names of the control files for the database. Query the v$datafile and v$logfile views to find the names of the datafiles and redo log files associated with the database, and use operating system commands to back these files up. In init.ora, look up the log_archive_dest parameter to find the location of the archived log files, and back these up with operating system commands as well.
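The lookups above can be done from SQL*Plus before the OS copy. A minimal sketch (the parameter and view names are standard; the paths returned are whatever your init.ora defines):

```sql
-- Files a cold backup must include: control files, datafiles, redo logs,
-- plus the archived logs under log_archive_dest.
SELECT value  FROM v$parameter WHERE name = 'control_files';
SELECT name   FROM v$datafile;
SELECT member FROM v$logfile;
SELECT value  FROM v$parameter WHERE name = 'log_archive_dest';
```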
Frequency: this backup should be taken weekly.
Only Oracle database related files should be backed up. This has three benefits:
[a] Database downtime will be less.
[b] Retrieval from cartridge will take less time.
[c] Fewer cartridges will be required.
File System Backup: in general its frequency should be low. It acts as a backup for all the files (Oracle + O.S. + other) and will be needed if all the disks crash. If you are creating important files on the server, its frequency should be increased as decided by the site incharge.
2.Hot backup:
In init.ora, look up the control_files parameter to find the names of the control files for the database. Query the v$datafile view to find the names of the datafiles associated with the database, and use operating system commands to back these files up while their tablespaces are in backup mode. In init.ora, look up the log_archive_dest parameter to find the location of the archived log files, and back these up with operating system commands as well.
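For the hot backup to be usable, each tablespace must be put into backup mode around the OS copy, as discussed later in this document under End backup. A hedged sketch for one tablespace (the tablespace name users is only an example):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy this tablespace's datafiles with OS commands here
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM SWITCH LOGFILE;  -- force archiving of the redo covering the copy
```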
3.Logical Backup:
[1] Ideally a complete database export should be taken daily. This is also called the base backup.
[2] Take an incremental export daily except on weekends. On weekends take a cumulative database export; once the cumulative export is taken, the incremental exports can be removed to save disk space. At month end, take a complete database export and remove the previously stored cumulative export logical backups.
[3] Take an export of the important users daily.
Any of the above options can be implemented at the site, but the order of preference should be: first try [1]; if that is not possible use [2]; the last option should be [3].
Cartridges Strategy:
If you are taking a complete database export, use three different sets of cartridges (the Grandfather, Father and Son concept) on three different days, and then rotate these cartridges again.
For incremental backups use six different sets of cartridges on six different days, and rotate these cartridges again after successful completion of the cumulative database export backup.
For cumulative backups use a different cartridge every week, and rotate those cartridges in the next month after successful completion of the complete database export backup.
Recovery:
In day-to-day operation the most common types of failure are a dropped table, partial data loss in a table, and instance failure. Using the export dump file (expdat.dmp) one can recover from the first two types of problem.
For instance failure, simply restart the database; Oracle will recover the database automatically (instance recovery).
For more complicated problems such as a media crash (loss of a datafile, etc.), please refer to Annexure 1.
[2] When the database is running in noarchivelog mode.
Cold backup :
In init.ora, look up the control_files parameter to find the names of the control files for the database. Query the v$datafile and v$logfile views to find the names of the datafiles and redo log files associated with the database, and use operating system commands to back these files up. Ideally this backup should be taken daily.
Logical Backup:
[1] Ideally a complete database export should be taken daily. This is also called the base backup.
[2] Take an incremental export daily except on weekends. On weekends take a cumulative database export; once the cumulative export is taken, the incremental exports can be removed to save disk space. At month end, take a complete database export and remove the previously stored cumulative export logical backups.
[3] Take an export of the important users daily.
Any of the above options can be implemented at the site, but the order of preference should be: first try [1]; if that is not possible use [2]; the last option should be [3].
Cartridges Strategy :
If you are taking a complete database export, use three different sets of cartridges (the Grandfather, Father and Son concept) on three different days, and then rotate these cartridges again.
For incremental backups use six different sets of cartridges on six different days, and rotate these cartridges again after successful completion of the cumulative database export backup.
For cumulative backups use a different cartridge every week, and rotate those cartridges in the next month after successful completion of the complete database export backup.
Recovery :
Recovery is a very important process and should be done very carefully. In day-to-day operation the most common types of failure are a dropped table, partial data loss in a table, and instance failure. Using the export dump file (expdat.dmp) one can recover from the first two types of problem. For instance failure, simply restart the database; Oracle will recover the database automatically (instance recovery). For more complicated problems such as a media crash (loss of a datafile, etc.), please refer to Annexure 1.
* Commands to be used when copying file(s) to the backup device:
In Unix:
[a] ls [name of file] | cpio -ocBv > [/dev/rmt0.1|/dev/rmt0]
or
find / -name [pattern] -depth -print | cpio -ocBv > [/dev/rmt0.1|/dev/rmt0]
[b] tar -cvf [/dev/rmt0.1|/dev/rmt0] [name of file]
Complete file system backup:
In Unix:
[a] find / -depth -print | cpio -ocBv > [/dev/rmt0.1|/dev/rmt0]
or
[b] to copy all the files of the Unix system to the backup device:
tar -cvf [/dev/rmt0.1|/dev/rmt0] /
In Windows NT:
Use the backup utility to copy the necessary files.
Commands to be used when restoring file(s) from the backup device:
In Unix:
[a] cpio -icBv < [/dev/rmt0.1|/dev/rmt0]
or
cpio -icBv [pattern] < [/dev/rmt0.1|/dev/rmt0]
[b] tar -xvf [/dev/rmt0.1|/dev/rmt0]
Annexure 1
1. LOSS OF NON-ESSENTIAL DATAFILE WHEN DATABASE IS DOWN
( DATABASE CAN BE IN ARCHIVELOG MODE OR NO ARCHIVELOG
MODE )
SCENARIO
[1] Database startup fails with errors:
ORA-01157: cannot identify data file
ORA-01110: gives the name of the datafile which is missing.
REQUIREMENT
[1] The scripts which will recreate the objects in the datafile, such as the script which will create the indexes.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN (5 MIN+ TIME TAKEN TO CREATE INDEXES)
NON-ESSENTIAL DATAFILES
DATAFILE OF INDEX TABLESPACE, TEMPORARY TABLESPACE.
SOLUTION
Shut down the database (shutdown immediate).
Take a complete backup of the current database.
Startup mount.
Query the v$recover_file view joined to v$datafile on file# and note down the name of the file; say it is /prodebs/test/ind.dbf.
Alter database datafile '/prodebs/test/ind.dbf' offline;
(If the database is in noarchivelog mode the command will be
Alter database datafile '/prodebs/test/ind.dbf' offline drop;)
Alter database open;
Drop tablespace user_index including contents;
Create tablespace user_index
datafile '/prodebs/test/ind.dbf' size 1M;
Run the script which will rebuild the indexes*.
Shut down the database and take a backup if necessary.
Startup.
* NB: for a temporary tablespace skip this step.
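The solution above can be sketched as a SQL*Plus session (the file and tablespace names are the examples used in this scenario):

```sql
-- Sketch: the index datafile /prodebs/test/ind.dbf is lost.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
-- identify the file needing recovery
SELECT d.name FROM v$recover_file r, v$datafile d WHERE r.file# = d.file#;
ALTER DATABASE DATAFILE '/prodebs/test/ind.dbf' OFFLINE;  -- OFFLINE DROP in noarchivelog mode
ALTER DATABASE OPEN;
DROP TABLESPACE user_index INCLUDING CONTENTS;
CREATE TABLESPACE user_index DATAFILE '/prodebs/test/ind.dbf' SIZE 1M;
-- re-run the index creation script here
```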
2. MISSING MIRRORED ONLINE REDO LOG FILES (DATABASE IS UP/DOWN)
SCENARIO
The database opens cleanly, but two error messages are logged in the alert log (errors from LGWR; they are also written in the LGWR trace file):
ORA-00313: open failed for members ...
ORA-00312: gives the name of the missing redo log member
ORA-7360: OS error
ORA-00321: cannot update log file header
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN 5 MIN.
SOLUTION
Shut down the database (shutdown).
Startup mount.
Query the v$logfile view and find which member has become invalid.
Query the v$log view and find which group is current and the size of the group members (say it is b).
If a member of the current log group (say it is group 1) is corrupted, issue the following command:
Alter system switch logfile;
If you can add one more member to the corrupted log group, i.e. the maximum number of log members has not been reached, add one more member to that group:
Alter database add logfile member 'filespec' to group 1;
Shut down the database.
Start up the database.
If you cannot add one more member to the corrupted log group, create one more log group with the same number of members and the same size as a non-corrupted log group:
Alter database add logfile group 3 ('filespec', 'filespec') size b;
Drop the corrupted log group:
Alter database drop logfile group 1;
Manually remove the remaining members of the corrupted log group (i.e. rm in Unix).
Shut down the database.
Start up the database.
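As a minimal sketch (the group numbers, member paths and size are placeholders taken from this scenario):

```sql
-- Replace a lost mirrored member by recreating the whole group.
ALTER SYSTEM SWITCH LOGFILE;  -- make the damaged group non-current
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/prodebs/test/redo31.dbf', '/prodebs/test/redo32.dbf') SIZE 50K;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- the corrupted group
-- then remove the old member files at the OS level (rm)
```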
3. RECOVER A LOST DATAFILE WITH NO BACKUP AND ALL ARCHIVED LOG FILES
SCENARIO
Database startup fails with errors:
ORA-01157: cannot identify data file
ORA-01110: gives the name of the datafile which is missing.
REQUIREMENT
For full recovery, database should be in archivelog mode.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN A MINIMUM OF 10 MINS.
SOLUTION
Shut down the database (shutdown).
Startup mount;
Query v$recover_file.
Query v$datafile and find the name of the datafile which is missing (say it is /prodebs/test/user_odc.dbf).
Now issue the following commands in the given order:
alter database datafile '/prodebs/test/user_odc.dbf' offline;
alter database create datafile '/prodebs/test/user_odc.dbf' as '/prodebs/test/user_odc1.dbf';
(removed file) (new file)
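Sketched as a session. After the file is recreated, the redo from all the archived logs still has to be applied; this last part is implied by the scenario title rather than spelled out above:

```sql
STARTUP MOUNT;
ALTER DATABASE DATAFILE '/prodebs/test/user_odc.dbf' OFFLINE;
ALTER DATABASE CREATE DATAFILE '/prodebs/test/user_odc.dbf'
  AS '/prodebs/test/user_odc1.dbf';
RECOVER DATAFILE '/prodebs/test/user_odc1.dbf';  -- applies all archived redo
ALTER DATABASE DATAFILE '/prodebs/test/user_odc1.dbf' ONLINE;
ALTER DATABASE OPEN;
```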
4. RECOVER A LOST DATAFILE WITH BACKUP AND ALL ARCHIVED LOG FILES
SCENARIO
Database startup fails with errors:
ORA-01157: cannot identify data file
ORA-01110: gives the name of the datafile which is missing.
REQUIREMENT
For full recovery, database should be in archivelog mode.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN 5 mins.
SOLUTION
Shut down the database (shutdown).
Startup mount;
Query v$recover_file.
Query v$datafile and find the name of the datafile which is missing (say it is /prodebs/test/user_odc.dbf).
Copy the datafile back from the backup (the old one) and give the following commands:
recover database;
alter database open;
Shut down the database and take the necessary backup if required.
Start the database.
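The steps above as a sketch (the restore itself is done at the OS level before the recover):

```sql
-- After restoring /prodebs/test/user_odc.dbf from the last backup:
STARTUP MOUNT;
RECOVER DATABASE;    -- applies the archived and online redo
ALTER DATABASE OPEN;
```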
5. RECOVER A LOST DATAFILE WITH BACKUP AND MISSING ARCHIVED LOG FILES.
SCENARIO
Database startup fails with errors :
SOLUTION
Shut down the database (shutdown).
Take a complete backup of the current database.
Copy back the missing SYSTEM database file(s).
Startup mount exclusive;
recover database;
alter database open;
The database is ready for use.
9. LOSS OF NON SYSTEM DATAFILE WITHOUT ROLLBACK SEGMENTS WHEN THE DATABASE
IS IN ARCHIVE LOG MODE AND RECOVERY.
SCENARIO
Database startup fails with errors on Thursday morning:
ORA-01157: cannot identify data file
ORA-01110: gives the name of the datafile which is missing, found to be a user datafile.
A cold backup is taken once every week; here the backup is taken every Sunday and is the last activity on Sunday.
The datafile(s) associated with the user tablespace is (are) lost.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION -1 (DATABASE RECOVERY)
Shut down the database (shutdown).
Take a complete backup of the current database.
Copy back the missing database file(s).
Startup mount exclusive;
recover database;
alter database open;
The database is ready for use.
SOLUTION -2 (DATAFILE RECOVERY)*
activity on Sunday.
All the online redo log files are lost.
All the datafiles and the current control files are intact.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION
Shut down the database (shutdown).
Take a complete backup of the current database.
Copy all the database files from the latest offline or online backup.
Startup mount exclusive;
recover database until cancel;
alter database open resetlogs;
Shut down the database.
Take a cold backup; this is strongly advised.
Start the database.
The database is ready for use.
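Sketched as a session. This is incomplete recovery ending with RESETLOGS, which is why the fresh cold backup afterwards matters:

```sql
-- After restoring all datafiles from the latest backup:
STARTUP MOUNT EXCLUSIVE;
RECOVER DATABASE UNTIL CANCEL;  -- type CANCEL when the lost redo is reached
ALTER DATABASE OPEN RESETLOGS;  -- old backups and archives are now unusable
```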
12. DATABASE CRASH DURING HOT BACKUP WHEN THE DATABASE IS IN ARCHIVE LOG
MODE AND RECOVERY.
SCENARIO
While a hot backup is being taken, the database crashes.
(a) When the Oracle version is 7.2 or higher.
(b) When the Oracle version is 7.1.
TIME TAKEN IN RECOVERY
[a] DATABASE WILL BE READY FOR USE IN A MINIMUM OF 2 MINS.
[b] DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION (Oracle Version is 7.2 or more)
Shut down the database (shutdown).
Take a complete backup of the current database.
    '/prodebs/test/redo_odc11.dbf',
    '/prodebs/test/redo_odc12.dbf'
  ) SIZE 50K,
  GROUP 2 (
    '/prodebs/test/redo_odc21.dbf',
    '/prodebs/test/redo_odc22.dbf'
  ) SIZE 50K
DATAFILE
  '/prodebs/test/sys_odc.dbf',
  '/prodebs/test/sys_odc1.dbf',
  '/prodebs/test/user_odc.dbf',
  '/prodebs/test/temp_odc.dbf',
  '/prodebs/test/rbs_odc.dbf',
  '/prodebs/test/ind.dbf';
RECOVER DATABASE;
ALTER SYSTEM ARCHIVE LOG ALL;
ALTER DATABASE OPEN;
Shut down the database.
Start the database.
The database is ready for use.
SOLUTION (loss of the control file when there is a backup and it is not mirrored)
Shut down the database (shutdown).
Copy the old control file to this disk.
startup mount exclusive;
If you have any read-only tablespaces, take all the datafiles related to those tablespaces offline.
recover database using backup controlfile;
Bring the offline datafiles back to online status (alter database datafile '<name of datafile>' online;) (for read-only tablespaces).
alter database open resetlogs;
Shut down the database.
Take a cold backup; this is strongly advised.
Start the database.
The database is ready for use.
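The core of this solution as a sketch:

```sql
-- Recovery from a restored (older) control file.
STARTUP MOUNT EXCLUSIVE;
RECOVER DATABASE USING BACKUP CONTROLFILE;  -- prompts for archived logs
ALTER DATABASE OPEN RESETLOGS;              -- required after a backup control file
```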
14. DATABASE SPACE MANAGEMENT WHEN THE DATABASE IS IN ARCHIVE LOG MODE AND
RECOVERY (RESIZING DATAFILE).
SCENARIO
Space management when:
(a) the Oracle version is 7.2 or higher.
(b) the Oracle version is 7.1.
The Oracle errors are:
ORA-00376: file # cannot be read at this time
ORA-01110: gives the name of the datafile.
TIME TAKEN IN RECOVERY
[a] DATABASE WILL BE READY FOR USE IN A MINIMUM OF 5 MINS.
[b] DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION (Oracle Version is 7.2 or more)
Shut down the database (shutdown).
Take a complete backup of the current database.
Startup open;
Query the view v$datafile and get the name of the file which you want to resize.
alter database datafile '<name of datafile>' resize <n>[M|K];
Shut down the database.
Take a backup if necessary.
Start the database.
The database is ready for use.
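A minimal sketch of the resize (Oracle 7.2+); the file name and size are examples only:

```sql
SELECT name, bytes FROM v$datafile;  -- find the file to resize
ALTER DATABASE DATAFILE '/prodebs/test/user_odc.dbf' RESIZE 10M;
```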
SOLUTION (Oracle Version is 7.1 )
[a] Restore the datafile and apply recovery; resizing is not possible in 7.1.
Shut down the database (shutdown).
Take a complete backup of the current database.
Copy back the deleted datafile.
Startup mount exclusive;
recover database;
(This may take a significant amount of time if a large number of archived logs have to be applied.)
CONDITIONS :
[A] When online redo logs files are deleted.
[B] Loss of all control files.
[C] When recovery is done through old control files.
Events
[1]
a- A cold backup is taken.
b- A redo log file is lost and media recovery is performed. At this moment, a backup is taken.
c- A datafile is lost.
[2]
a- A cold backup is taken.
b- A redo log file is lost and media recovery is performed. At this moment, a backup is not taken.
c- A datafile is lost.
SOLUTION [1]
Shut down the database (shutdown).
Take a complete backup of the current database.
Copy the datafiles from the most recent cold backup.
Startup mount exclusive;
Recover database;
alter database open;
Shut down the database.
Take a cold backup.
Start the database.
The database is ready for use.
Advantage: all the data will be recovered.
SOLUTION [2]-i
Shut down the database (shutdown).
Take a complete backup of the current database.
Startup mount exclusive;
alter database datafile '<name of datafile>' offline;
on Sunday.
The datafile(s) associated with the user tablespace is (are) permanently lost.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION
Shut down the database (shutdown).
Take a complete backup of the current database.
Startup mount exclusive;
alter database create datafile '<name of datafile>';
recover datafile '<name of datafile>';
alter database open;
Shut down the database.
Start the database.
The database is ready for use.
17. SYSTEM CLOCK CHANGE AND POINT-IN-TIME RECOVERY.
SCENARIO
Database startup fails with errors on Thursday morning:
ORA-01157: cannot identify data file
ORA-01110: gives the name of the datafile which is missing, found to be a user datafile.
A cold backup is taken once every week; here the backup is taken every Sunday and is the last activity on Sunday.
The datafile(s) associated with the user tablespace is (are) permanently lost.
TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN A MINIMUM OF 30 MINS.
SOLUTION
Shut down the database (shutdown).
Take a complete backup of the current database.
Startup mount exclusive;
Comparison of the backup methods:

Method            Type      Version available  Requirements
Recovery Manager  Physical  All versions       Recovery Manager (RMAN) utility
Operating system  Physical  All versions       Operating system backup utility (for example, UNIX cp)
Export            Logical   All versions       N/A

Feature: Closed backups
  Recovery Manager: Supported
  Operating system: Supported
  Export: Not supported
Feature: Open backups
  Recovery Manager: Supported. Do not use BEGIN/END BACKUP statements.
  Operating system: Supported. Use BEGIN/END BACKUP statements.
  Export: Requires rollback or undo segments to generate consistent backups.
Feature: Incremental backups
  Recovery Manager: Supported
  Operating system: Not supported
  Export: Supported
Feature: Corrupt block detection
  Recovery Manager: Supported
  Operating system: Not supported
  Export: Supported. Identifies corrupt blocks in the export log.
Feature: Automatic backups
  Recovery Manager: Supported
  Operating system: Not supported
  Export: Not supported
Feature: Backup catalogs
  Recovery Manager: Supported. Backups are recorded in the recovery catalog and in the control file, or exclusively in the target control file.
  Operating system: Not supported
  Export: Not supported
Feature: Backup to tape
  Recovery Manager: Supported. Interfaces with a media manager.
  Operating system: Supported. Backup to tape is manual or controlled by a media manager.
  Export: Not supported
Feature: Backs up initialization parameter files and password files
  Recovery Manager: Not supported
  Operating system: Supported
  Export: Not supported
Feature: Operating-system-independent language
  Recovery Manager: Supported (uses a PL/SQL interface)
  Operating system: Not supported
  Export: Not supported
Starting an Oracle instance depends on the Oracle background processes and the system global area (SGA), apart from the Oracle kernel. The SGA is a group of memory structures that contain data and control information for one Oracle instance; Oracle allocates the SGA automatically when an instance is started. When the startup command is given to Oracle to start a particular instance pointed at a parameter file, called the init<sid>.ora file, the background processes associated with the parameters and the SGA are initialized. There are various stages in starting up an instance.
Command: Startup nomount
Reads init.ora, identifies the control files, creates and initializes the SGA, and starts the background processes (this stage is used to create a database or to re-create the control files after the loss of the current control file).
Command: Startup mount
Opens the control files, mounts the database, and acquires an instance lock.
Command: Startup open
Opens and locks the data files. If this is the first instance, it takes the startup lock and opens the online redo log files; the first instance also performs crash recovery if necessary to open the database in a consistent state.
From the above it is clear that Oracle has an internal mechanism to check consistency based on the data files, control files and online redo log files. If any of these three structures is inconsistent, it needs to be restored and/or recovered. Likewise, what happens during shutdown is important for understanding how consistency is maintained or not maintained, and when recovery is required.
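The startup stages above can be walked through explicitly in SQL*Plus; a sketch (the two ALTER DATABASE steps are equivalent to issuing STARTUP MOUNT or STARTUP OPEN directly):

```sql
STARTUP NOMOUNT;       -- read init.ora, build the SGA, start background processes
ALTER DATABASE MOUNT;  -- open the control files
ALTER DATABASE OPEN;   -- open and lock the datafiles, crash-recover if needed
```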
Shutdown normal and shutdown immediate both flush the caches (including the log caches), complete or roll back the ongoing transactions, drop the file locks and close the open threads: NO RECOVERY REQUIRED.
Shutdown abort performs none of these steps: CRASH RECOVERY REQUIRED DURING NEXT STARTUP.
Backup
A backup is a copy of the data maintained for later use. In case the database crashes, to avoid data loss, the data is backed up to another device and/or destination and protected. It is the basic material for reconstructing a database lost or crashed for any reason.
Restore
A restore is the replacement of a lost or damaged file with a backup. Files can be restored using RMAN, or using OS commands such as cp on UNIX machines and copy on Windows machines.
Recover
Recovery is the process of updating the restored data files and/or control files by applying the redo records saved in the archived redo log files generated by Oracle. This process is also called rolling forward.
What Oracle Does For You?
Oracle does crash recovery and instance recovery.
Crash Recovery:
Crash recovery is the process of applying the online redo log files to the database files and control files to bring the database to a consistent state, and then closing all threads that were open at the time of the crash. After every startup of the machine, and after every startup of an instance that was not closed with the normal or immediate option, crash recovery starts by applying the threads that were not closed at the time of the last closure; once all the open threads have been applied, SMON enables crash and transaction recovery and opens the database for use.
In crash recovery, an instance automatically recovers the database before opening it. In Oracle Real Application Clusters, the first instance to open the database after a crash or shutdown abort automatically performs crash recovery.
Instance recovery:
Crash recovery and instance recovery are considered synonymous as long as the database has only one Oracle instance to recover. In Oracle Parallel Server (OPS) or Oracle Real Application Clusters (RAC) the database is accessed by multiple instances.
In an OPS/RAC configuration, instance recovery is the application of redo data to an open database by one instance when that instance discovers that another instance has crashed. A surviving instance automatically uses the redo log to recover the data in the crashed instance's buffer cache. Oracle undoes any uncommitted transactions that were in progress on the failed instance when it crashed, and then clears any locks held by that instance after recovery is complete.
If one or more instances die or crash, they are recovered by the surviving instances following the crash recovery methodology: applying the online redo logs and closing the open redo threads to sync the database with the data that was in the cache, in redo and in rollback. This is done by Oracle itself.
If all the instances die, the database has to be restarted by starting all the instances. While opening the database, the Oracle instances perform crash recovery and then, if necessary, instance recovery to bring the other dead instances back to life.
Media Recovery:
A media failure is the result of a physical problem that arises when Oracle fails in an attempt to write or read a file that is required to operate the database. A common example is a disk head crash that causes the loss of all data on the disk drive. A disk failure can affect a variety of files, including data files, redo log files and control files. Because the database instance cannot continue to function properly, it cannot write the data in the buffer cache of the SGA to the data files.
Media recovery is then necessary. Media recovery means the application of redo or incremental backups to a restored backup datafile or individual data block to bring it to a specified time. Datafile media recovery always begins at the lowest SCN recorded in the datafile header.
When doing the media recovery:
If there is downtime for the database, we can take a consistent backup of the database. If there is no downtime for the database, the only alternative is to back the database up in inconsistent mode, and in that case also back up the archive logs, which will be needed in case of recovery. A consistent backup, also called a cold or offline backup, is taken after the database has been shut down with the NORMAL or IMMEDIATE option, but not the ABORT option. The database is not open and no user can access it; the instance is shut down, the database is closed in normal mode, and the backup takes place at the operating system level. The following files are backed up in this type of backup:
1. All datafiles
2. All controlfiles
3. All online redo log files
4. The init.ora file and config.ora file
5. The password file
Inconsistent backup
An inconsistent backup is one in which some of the files in the backup contain changes made after they were checkpointed. This type of backup needs recovery before it can be made consistent. Taking online database backups usually creates an inconsistent backup: the data files are backed up while they are open and in use. An inconsistent backup can also be taken while the database is closed. This happens in the following circumstances:
(1) A backup taken immediately after the instance crashed (or when all instances crashed in a multi-instance environment such as Oracle Parallel Server, now called Oracle Real Application Clusters)
(2) A backup taken when the instance was shut down with the shutdown abort option.
Utilities for the Backup
RMAN
Recovery Manager is a utility provided by Oracle for Backup and Recovery Procedures.
Oracle Recovery Manager User Manual says:
Recovery Manager is able to:
- Use server sessions to back up and copy the database, tablespaces, datafiles, control files, and archived redo logs.
- Compress backups of datafiles so that only those data blocks that have been written to are included in a backup.
- Store frequently executed backup and recovery operations in scripts.
- Perform incremental backups, which back up only those data blocks that have changed since a previous backup.
- Recover changes to objects created with the NOLOGGING option by applying incremental backups (recovery with redo logs does not apply these changes).
- Create a duplicate of your production database for testing purposes.
- Use third-party media management software.
- Generate a printable message log of all backup and recovery operations.
- Use the recovery catalog to automate both restore and recovery operations.
- Perform automatic parallelization of backup and restore operations.
- Restore a backup using a backup control file and automatically adjust the control file to reflect the structure of the restored datafiles.
- Find datafiles that require a backup based on user-specified limits on the amount of redo that must be applied for recovery.
- Perform crosschecks to determine whether archived materials in the media management catalog are still available.
- Test whether specified backups can be restored.
(2) Logical Backups
These backups are performed only when the database is open and available. The Export utility provided by Oracle performs the exports. An export can be done in (1) full mode, (2) user mode or (3) table mode, and the types of export are (1) complete, (2) cumulative and (3) incremental. These types can be used only when full=y is set as a parameter, together with inctype=complete, cumulative or incremental. When the export is performed using the inctype parameter, the database is recovered by:
a.
b.
c.
The database is recovered to the point in time of the last incremental export; the data after that point in time is lost, as no archive log files are used in recovering the database and they become useless.
Upside of Exports and Imports:
a.
b.
c.
d.
e.
Transportable tablespaces. These are like plug-in tablespaces, a feature of 8.1. With them, the time taken to export a tablespace is just the time needed to physically copy its datafile to another location. This facility has some limitations, such as:
1. The source and target databases should be running on the same hardware platforms.
2. The source and target databases should have the same character set.
3. The source and target databases should be using the same data block size.
4. The target database must not have a tablespace by the same name.
5. The tablespace to transport must be a self-contained set of objects.
6. Snapshots, materialized views, function-based indexes, domain indexes, scoped refs, and advanced queues with more than one recipient cannot be transported.
7. The source database must set the tablespace to READ ONLY mode for a short period of time, i.e., until the metadata of the tablespace is exported and the data file is copied.
The export is invoked with the following parameters:
exp userid="sys/password as sysdba" transport_tablespace=y tablespaces=(t1, t2, t3) file=tr_tbsp.dmp
After the export is terminated successfully without warnings,
Step 05.
At the command prompt, copy or xcopy the datafiles associated with the exported tablespaces to the destination, within the framework of the hardware, software and data block size compatibility discussed above.
Step 06.
After they are successfully copied, verify the sizes against the original files, and in Server Manager or SQL*Plus issue the following commands at the source database from which the tablespaces were exported, to re-enable all DML transactions:
Alter tablespace t1 read write;
Alter tablespace t2 read write;
Alter tablespace t3 read write;
Step 07.
Now import the tablespaces into the target database:
imp userid="sys/password as sysdba" transport_tablespace=y file=tr_tbsp.dmp
If the import client character set is different from the export client's, or from the character set of the database being imported into, there are issues, and they have to be handled carefully by setting the local environment variables, such as $NLS_LANG on UNIX boxes and the Registry entry in a Windows environment.
f. Limitations of the transportable tablespaces concept.
To improve the performance of Exports and Imports the following may be considered:
1. Set the DIRECT parameter in the export parameter file; this bypasses the SQL command-processing layer.
2.
3. While importing, set the BUFFER parameter a little high to get array inserts, and also use COMMIT=Y as an additional parameter. Too high a BUFFER parameter can result in paging or swapping, which negatively affects performance.
4. Generate an index file so the indexes can be created separately after the data is imported with no indexes on the tables: INDEXES=N and INDEXFILE=<name and path of index file>.
Setting up RMAN
RMAN setup in 8.0 is different from 8.1
RMAN in Oracle 8 and 8.1 with no catalog
Step 01 Create an RMAN backup command file for the datafiles (rman_bu.rcv)
run {
allocate channel t1 type SBT_TAPE;
setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;
# Backup the database
backup
full
tag db_full_backup
filesperset 6
format db_full_%d_t%t_s%s_p%p
(database);
# Release the tape device
release channel t1;
}
# The following commands are run outside of the run {}
# command.
# Save control file to trace file.
run {
allocate channel t1 type SBT_TAPE;
setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;
# Archive the current online log
sql 'alter system archive log current';
# save archive logs
backup
filesperset 20
format al_%d_t%t_s%s_p%p
(archivelog all delete input);
# Release the tape device
release channel t1;
}
list backupset of database;
list backupset of archivelog all;
RMAN in Oracle 8.1 with catalog
Step 01 create the tablespace for the RMAN schema and user
create tablespace rcvcat datafile '<path>\rcvcat.ora' size 100M default storage (initial 1M next 1M minextents 1 maxextents unlimited pctincrease 0);
Step 02 set auto extension on
alter database datafile '<path>\rcvcat.ora' autoextend on;
Step 03 create the user owning the RMAN schema
create user rman_<sid_name> identified by rman_<sid_name> temporary tablespace temp01 default tablespace rcvcat quota unlimited on rcvcat;
Step 04 make the grants
grant create session to rman_<sid_name>;
grant recovery_catalog_owner to rman_<sid_name>;
Step 05 at the command prompt, on a single line
rman target internal/oracle@connect_string rcvcat rman_<sid_name>/rman_<sid_name>@connect_string
Step 06 now create the catalog by issuing the following command at the RMAN prompt
create catalog;
Step 07 after the catalog is created, register the database
register database;
That is it; you are done.
A piece of advice:
Create the RMAN catalog as a separate database. Create a small database of about 500 MB, create the RMAN user, and create the catalog for that user in this database. That database can then be put on a cold backup daily, after the RMAN backup of the main database has completed. This protects you in case of disaster: the RMAN database is immediately restorable, and all the tapes or devices onto which the backups were made can then be used for recovery.
End backup
Introduction
This section explains ALTER TABLESPACE BEGIN BACKUP and ALTER TABLESPACE END BACKUP,
and why it is mandatory to use them when an online backup is done with a tool that is external
to Oracle (such as OS backups using cp, tar, BCV, etc.).
It also answers these frequent questions:
Does Oracle write to data files while in hot backup mode?
What about ALTER DATABASE BEGIN BACKUP?
Why is it not used with RMAN backups?
What if you do an online backup without setting tablespaces in backup mode?
What if the instance crashes while a tablespace is in backup mode?
How to check which datafiles are in backup mode?
What are the minimal archive logs to keep with the hot backup?
Why use OS backups instead of RMAN?
Description
Offline backup (Cold backup)
A cold OS backup is simple: the database has been cleanly shut down (not crashed, not shutdown
abort), so that:
all datafiles are consistent (same SCN) and no redo is needed in case of restore;
the datafiles are closed: they will not be updated during the copy operation.
Thus, the backup can be restored entirely and the database can be opened without any recovery.
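As a sketch, a cold backup could look like the following SQL*Plus session (paths and the file list are illustrative; in practice the list comes from V$DATAFILE, V$LOGFILE and the control_files parameter):

```sql
-- Minimal cold backup sketch: clean shutdown, OS copy, restart
SHUTDOWN IMMEDIATE;
-- While the database is down, copy datafiles, control files and
-- online redo logs with an OS tool, e.g. from the host shell:
--   cp /oracle/dbs/*.dbf /oracle/dbs/*.ctl /oracle/dbs/*.log /backup/cold/
STARTUP;
```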
Online backup (Hot backup)
A hot backup copies the files while the database is running. That means that the copy is inconsistent
and will need redo applied to be usable.
Recovery is the process of applying redo log information in order to roll-forward file modifications as
they were done in the original files.
When the copy is done with Oracle (RMAN), Oracle copies the datafile blocks to backupset so that it
will be able to restore them and recover them.
When the copy is done from the OS (i.e with a tool that is not aware of the Oracle file structure),
several issues come up:
Header inconsistency: Nothing guarantees the order in which the files are copied, so the header of
the file may reflect its state at the beginning or at the end of the copy.
Fractured blocks: Nothing guarantees that an Oracle block is read in one single I/O, so the two halves
of a block may reflect its state at two different points in time.
Backup consistency: As the copy runs while the datafile is updated, it reads blocks at different
points in time. Recovery can roll forward blocks from the past, but cannot deal with blocks
from the future; thus the copy must be recovered at least up to the SCN that was current at
the end of the copy.
So it is all about consistency in the copy: consistency between datafiles, consistency within datafiles,
and consistency within data blocks. This consistency must be kept in the current files (obviously) as
well as in the copy (as it will be needed for a restore/recovery).
Backup mode
The goal of ALTER TABLESPACE BEGIN BACKUP and ALTER TABLESPACE END BACKUP is to set
special actions in the current database files in order to make their copy usable, without affecting the
current operations.
Nothing needs to be changed in the current datafiles, but, as the copy is done by an external tool, the
only way to have something set in the copy is to do it in the current datafiles before the copy, and
revert it back at the end.
This is all about having a copy that can be recovered, with no control on the program that
does the copy, and with the minimal impact on the current database.
In order to deal with the 3 previous issues, the instance that will do the recovery of the restored
datafiles has to know:
that the files need recovery
from which SCN, and up to which SCN it has to be recovered at least
enough information to fix fractured blocks
During backup mode, for each datafile in the tablespace, here is what happens:
1- When BEGIN BACKUP is issued:
The hot backup flag in the datafile headers is set, so that the copy is identified to be a hot backup
copy.
This is to manage the backup consistency issue when the copy will be used for a recovery.
A checkpoint is done for the tablespace, so that in case of recovery, no redo generated before that
point will be applied.
The BEGIN BACKUP command completes only when the checkpoint is done.
2- During backup mode:
The datafile header is frozen so that whenever it is copied, it reflects the checkpoint SCN that was at
the beginning of the backup.
Then, when the copy will be restored, Oracle knows that it needs to start recovery at that SCN to
apply the archived redo logs.
This is to avoid the header inconsistency issue.
That means that any further checkpoints do not update the datafile header SCN (but they do update a
backup SCN)
Each first modification to a block in buffer cache will write the full block into the redo thread (in
addition to the default behaviour that writes only the change vector). This is to avoid the fractured
block issue. There may be a fractured block in the copy, but it will be overwritten during the recovery
with the full block image.
That means that everything goes as normal except for two operations:
- at checkpoint, the datafile header SCN is not updated
- when a block is updated for the first time since it came into the buffer cache,
the whole before image of the block is recorded in redo
- direct path writes do not go through the buffer cache, but they always write full
blocks, and the full block is written to the redo log (if not in NOLOGGING)
3- When END BACKUP is issued:
A record that marks the end of backup is written to the redo thread so that if the copy is restored
and recovered, it cannot be recovered earlier than that point. This is to avoid the backup consistency
issue.
The hot backup flag in the datafile headers is unset.
The header SCN is written with the current one.
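Putting the three steps together, a manual hot backup of a single tablespace could be sketched as follows (the tablespace name and file path are illustrative):

```sql
-- Freeze the datafile headers and start supplemental block logging
ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's datafiles with an OS tool, e.g. from the host shell:
--   cp /oracle/dbs/users01.dbf /backup/hot/users01.dbf
-- Unfreeze the headers and mark the end of backup in the redo stream
ALTER TABLESPACE users END BACKUP;
-- Archive the current log so the redo covering the copy is on disk
ALTER SYSTEM ARCHIVE LOG CURRENT;
```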
Remarks:
1. A fractured block is not frequent, as it happens only if the I/O for the copy is done on the same
block at the same time as the I/O for an update. But the only means to avoid the problem is to do that
full logging of the block for each block, just in case.
2. If the OS I/O size is a multiple of the Oracle block size (e.g. a backup done with dd bs=1M), that
supplemental logging is not useful, as fractured blocks cannot happen.
3. The begin backup checkpoint is mandatory to manage the fractured block issue: as Oracle writes
the whole before image of the block, it needs to ensure that it does not overwrite a change done
previously. With the checkpoint at the beginning, it is sure that no change vector preceding the begin
backup has to be applied.
4. The supplemental logging occurs when the block is modified for the first time in the buffer cache. If
the same block is reloaded into the buffer cache, supplemental logging will occur again. I haven't
seen that point documented, but a test case doing a flush buffer_cache proves it.
Consequence on the copy (the backup)
When the copy has been done between begin backup and end backup, it is fully available to be
restored and recovered using the archived log files that were generated since the begin backup.
After the files have been restored, Oracle sees that the SCN is older than the current one and says
that the database needs recovery.
The recovery must be done up to a point in time later than the end backup point in time, so that we
are sure there are no blocks in the datafile that come from the future.
Consequence on the current database
All operations can be done during the backup mode.
However, as more redo is written, it should be done during a low-activity period. And for the same
reason, it is better to put the tablespaces in backup mode one after another instead of putting all the
database tablespaces in backup mode at once.
In addition, it is not possible to shut down the database while a tablespace is in hot backup mode.
This is because, as the datafile header is frozen with a non-current SCN, the datafile would be seen
as if it required recovery. However, that cannot be avoided if the instance crashes (or shutdown
abort), and then the startup of the database will raise:
ORA-1113: file needs media recovery
This is the only case I know of where instance recovery is not automatic: you need to issue
ALTER DATABASE END BACKUP; before opening the database.
Frequent questions
Does Oracle write to data files while in hot backup mode?
Yes, of course; it would not be called an online backup otherwise.
What about ALTER DATABASE BEGIN BACKUP?
That command puts all database tablespaces in backup mode at the same time. As seen previously, it
is a bad idea to put all tablespaces in backup mode; it is better to do it one by one in order to
minimize the supplemental redo logging overhead.
Oracle introduced this shortcut for one reason only: when doing a backup with a mirror split (BCV,
Flashcopy, etc.), the copy gets all the datafiles at the same time, and the copy lasts only a few
seconds. In that case, it is easier to use that command to put all tablespaces in backup mode during
the operation.
Why is it not used with RMAN backups?
RMAN is an Oracle tool that is fully aware of the datafile structure and the way the files are
written. Thus it knows how to read the datafiles in a way that makes the copy consistent: it writes
the correct version of the datafile header, reads the blocks with an I/O size that is a multiple of the
Oracle block size so that there are no fractured blocks, and checks the head and tail of each block to
see whether it is fractured (in that case, it re-reads the block to get a consistent image). That is one
advantage, among many others, of using RMAN for backups.
What if you do an online backup without setting tablespaces in backup mode?
If you don't put the tablespace in backup mode, we can't be sure that the copy is recoverable. It may
be fine, but it may have inconsistencies. We can suppose that the copy is consistent if it is made
under the following conditions:
Header inconsistency: If the file copy is done from beginning to end, then the datafile header should
reflect the right SCN.
Fractured blocks: If the copy does I/O with a size that is a multiple of the Oracle block size, then you
should not have fractured blocks.
Backup consistency: If you take care to recover to a point later than the point in time of the end of
the copy, you should not have inconsistency.
But there may be other internal mechanisms that are not documented, so we can't be sure that this
list of issues is exhaustive. And, as it is not supported, we cannot rely on a backup done like that.
Note that you will get no error message warning you about it.
What if the instance crashes while a tablespace is in backup mode?
When you start the database after that, Oracle will say that it requires recovery. This is because the
SCN was frozen, and that is the expected behaviour: if you restore the copied file, it has to be
recovered (and the only way Oracle has to set that SCN in the copy is to set it in the current file while
it is copied). In that case you can issue:
ALTER DATABASE END BACKUP;
ALTER DATABASE OPEN;
to open the database.
But your backup is not usable; you have to do it again.
How to check which datafiles are in backup mode?
The V$BACKUP view shows the datafiles that are currently in backup mode (status ACTIVE). Some old
documentation says to check the FUZZY column of V$DATAFILE_HEADER. This is because in previous
versions (<9i) begin backup unset the online fuzzy bit in the datafile header, and set it back when end
backup was issued. Since 9i, the online fuzzy bit is unset only when the datafile is offline or
read-only, not for backup mode.
What are the minimal archive logs to keep with the hot backup?
The backup done online is unusable unless it is at least possible to restore the archived logs
- from the log that was the current redo log when the backup started,
- up to the log that was archived just after the backup (of the whole database) ended.
That is sufficient to do an incomplete media recovery up to the point of end backup. Subsequent
archived logs will be needed to recover up to the point of failure.
Why use OS backups instead of RMAN?
The best way to do online backups is RMAN, as it has tons of features that you cannot have with OS
backups. Yet OS backups are still used for very large databases, with OS tools that can copy an entire
database in seconds using a mirror split (BCV, Flashcopy, etc.).
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
SVRMGR> recover database;
Media recovery complete.
SVRMGR> alter database open;
Statement processed.
SVRMGR>
Recovering the Datafile
SVRMGR> connect internal/oracle@srinivas
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
SVRMGR>
SVRMGR> alter database datafile 'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF' offline;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR> alter tablespace users offline;
The default destination directory for control_2.ctl is %ORACLE_HOME%\dbs, and the control file is to
be created/restored in a non-default destination such as %ORACLE_HOME%\database.
01. If the instance is still running:
shutdown abort;
02. If there is any hardware problem, correct that problem and then, for UNIX platforms:
cp control_3.ctl /oracle/database/control_2.ctl
03. Edit the init.ora file for the instance.
old parameter (UNIX):
control_files=('/oracle/dbs/control_1.ctl','/oracle/dbs/control_2.ctl','/oracle/dbs/control_3.ctl')
new parameter:
control_files=('d:\oracle\dbs\control_1.ctl','e:\oracle\database\control_2.ctl','f:\oracle\dbs\control_3.ctl')
UNIX:
control_files=('/oracle/dbs/control_1.ctl','/oracle/database/control_2.ctl','/oracle/dbs/control_3.ctl')
04. Start the instance:
startup
Losing all the members of the control files when a backup of the control files is available
When a control file is inaccessible, you can start the instance but not mount the database. If you
attempt to mount the database when the control file is unavailable, you see this error message:
ORA-00205: error in identifying controlfile, check alert log for more info
If the control files are restored from a backup, the database cannot be opened without the RESETLOGS
option.
There are 4 scenarios on which the recovery procedure depends:

Status of Online Logs / Status of Datafiles / Response
Available / Current / Recover with the backup control file, then open RESETLOGS.
Unavailable / Current / Recover with the backup control file UNTIL CANCEL, then open RESETLOGS.
Available / Backup / Recover with the backup control file, then open RESETLOGS.
Unavailable / Backup / Recover with the backup control file UNTIL CANCEL, then open RESETLOGS.
If the control files are being restored to the default locations the initialization parameter file need not
be edited.
Procedure:
01. Shutdown abort; if the instance is still running.
02. Correct if there are any hardware problems
03. Restore the backup control files to the respective destinations. Use cp for UNIX platforms and copy
for the windows platforms.
04. Startup mount;
05. Issue the following command to recover the database
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
(Select AUTO while applying the redo log files; when prompted that no more redo log files can be
found for application, type CANCEL to cancel the recovery process. At the end of the process "Media
recovery complete." is displayed.)
06. Then issue the following statement at SQL or svrmgrl prompt:
ALTER DATABASE OPEN RESETLOGS;
07. Backup the database afresh and keep that copy safe.
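The whole procedure for the default-location case can be summarized in one session (a sketch; the redo log prompts are answered with AUTO and then CANCEL, as described above):

```sql
SHUTDOWN ABORT;
-- restore the backup control files to their destinations with OS commands, then:
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- type AUTO to apply the available archived logs, CANCEL when none are left
ALTER DATABASE OPEN RESETLOGS;
```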
If the control files are being restored to non-default locations, the initialization
parameter file must be edited.
Procedure:
01. Shutdown abort; if the instance is still running.
02. Correct if there are any hardware problems
03. Restore the backup control files to new locations/destinations. Use cp for UNIX platforms and copy
for the windows platforms.
04. Edit the initialization parameter file suitably to reflect the new locations for the controlfiles
parameter
05. Startup mount;
06. Issue the following command to recover the database
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
(Select AUTO while applying the redo log files; when prompted that no more redo log files can be
found for application, type CANCEL to cancel the recovery process. At the end of the process "Media
recovery complete." is displayed.)
07. Then issue the following statement at SQL or svrmgrl prompt:
ALTER DATABASE OPEN RESETLOGS;
If . . . / Then . . .

If you executed ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS after you made
the last structural change to the database, and if you have saved the SQL command trace output:
Use the CREATE CONTROLFILE statement from the trace output as-is.

If you performed your most recent execution of ALTER DATABASE BACKUP CONTROLFILE TO TRACE
before you made a structural change to the database:
Edit the output of ALTER DATABASE BACKUP CONTROLFILE TO TRACE to reflect the change. For
example, if you recently added a datafile to the database, then add this datafile to the DATAFILE
clause of the CREATE CONTROLFILE statement.

If you backed up the control file with the ALTER DATABASE BACKUP CONTROLFILE TO filename
statement (not the TO TRACE option):
Use the control file copy to obtain SQL output: copy the backup control file, execute STARTUP MOUNT,
then ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS. If the control file copy
predated a recent structural change, then edit the trace output to reflect the structural change.
MAXINSTANCES 16
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 (
'H:\ORACLE\ORADATA\SRINIVAS\REDO03.LOG',
'H:\ORACLE\ORADATA\SRINIVAS\REDO_3.LOG'
) SIZE 1M,
GROUP 2 (
'H:\ORACLE\ORADATA\SRINIVAS\REDO02.LOG',
'H:\ORACLE\ORADATA\SRINIVAS\REDO_2'
) SIZE 1M
DATAFILE
'H:\ORACLE\ORADATA\SRINIVAS\SYSTEM01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\RBS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEMP01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TOOLS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\INDX01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\DR01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USERS02.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TTS_EX2.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\OEM_REPOSITORY.ORA',
'H:\ORACLE\ORADATA\SRINIVAS\PERFSTAT.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\P1.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\P2.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USER01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_02.DBF'
CHARACTER SET WE8ISO8859P1;
03. After the control file is created, Oracle MOUNTS the database. Then issue the following statement:
RECOVER DATABASE
04. Open the database when it is recovered:
ALTER DATABASE OPEN;
Note that the RESETLOGS option is not necessary.
05. After the database is open, back up the control file to trace and to another destination with the
REUSE clause:
alter database backup controlfile to trace;
For UNIX platforms:
alter database backup controlfile to '/oracle/database/bkup_control.ctl' reuse;
Preparing for Closed Database Recovery
In this stage, you shut down the instance and inspect the media device that is causing the problem.
To prepare for closed database recovery:
If the database is open, then shut it down with the ABORT option:
SHUTDOWN ABORT
If recovering from a media error, then correct it if possible. If the hardware problem that caused the
media failure was temporary, and if the data was undamaged (for example, a disk or controller power
failure), then no media recovery is required: simply start the database and resume normal operations.
If you cannot repair the problem, then proceed to the next step.
Restoring Backups of the Damaged or Missing Files
In this stage, restore all necessary backups.
To restore the necessary files:
Determine which datafiles to recover by querying the V$RECOVER_FILE view.
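For example, the view lists the files that need media recovery and the reason why:

```sql
-- Datafiles needing media recovery, with the error and recovery start SCN
SELECT file#, online, error, change#
FROM   v$recover_file;
```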
If the files are permanently damaged, then identify the most recent backups for the damaged files.
Restore only the datafiles damaged by the media failure: do not restore any undamaged datafiles or
any online redo log files.
For example, if /oracle/dbs/tbs_10.f is the only damaged file, then you may consult your records and
determine that /oracle/backup/tbs_10.backup is the most recent backup of this file. If you do not have
a backup of a specific datafile, then you may be able to create an empty replacement file that can be
recovered.
Use an operating system utility to restore the files to their default location or to a new location.
For example, a UNIX user restoring /oracle/dbs/tbs_10.f to its default location might enter:
% cp /oracle/backup/tbs_10.backup /oracle/dbs/tbs_10.f
Follow these guidelines when determining where to restore datafile backups:

If . . . / Then . . .
The hardware problem is repaired and you can restore the datafiles to their default locations:
Restore the datafiles to their default locations and begin media recovery.
The hardware problem persists and you cannot restore the datafiles to their original locations:
Restore the datafiles to an alternative location, indicate the new locations in the control file
(ALTER DATABASE RENAME FILE), and begin media recovery.
Recovering the Database
In the final stage, you recover the datafiles that you have restored.
Connect to the database with administrator privileges, then start a new instance and mount, but do
not open, the database. For example, enter:
STARTUP MOUNT
Obtain the datafile names and statuses of all datafiles by checking the list of datafiles that normally
accompanies the current control file or querying the V$DATAFILE view. For example, enter:
SELECT NAME,STATUS FROM V$DATAFILE;
Ensure that all datafiles of the database are online. All datafiles of the database requiring recovery
must be online unless an offline tablespace was taken offline normally or is part of a read-only
tablespace. For example, to guarantee that a datafile named /oracle/dbs/tbs_10.dbf is online, enter
the following:
ALTER DATABASE DATAFILE '/oracle/dbs/tbs_10.dbf' ONLINE;
If a specified datafile is already online, then Oracle ignores the statement. If you prefer, create a script
to bring all datafiles online at once as in the following:
SPOOL onlineall.sql
SELECT 'ALTER DATABASE DATAFILE '''||name||''' ONLINE;' FROM V$DATAFILE;
SPOOL OFF
SQL> @onlineall
Issue the statement to recover the database, tablespace, or datafile. For example, enter one of the
following RECOVER commands:
RECOVER DATABASE # recovers the whole database
RECOVER TABLESPACE users # recovers a specific tablespace
RECOVER DATAFILE '/oracle/dbs/tbs_10.f'; # recovers a specific datafile
Follow these guidelines when deciding which statement to execute:
If you choose not to automate the application of archived logs, then you must accept or reject each
required redo log that Oracle prompts you for. If you automate recovery, then Oracle applies the
necessary logs automatically. Oracle continues until all required archived and online redo log files have
been applied to the restored datafiles.
Oracle notifies you when media recovery is complete:
Media recovery complete
If no archived redo log files are required for complete media recovery, then Oracle applies all
necessary online redo log files and terminates recovery.
After recovery terminates, open the database for use:
ALTER DATABASE OPEN;
For example, to recover the users and sales tablespaces, enter:
RECOVER TABLESPACE users, sales; # begins recovery on datafiles in users and sales
If the damaged datafiles are associated with one tablespace only, then:
RECOVER TABLESPACE users;
Oracle begins the roll forward phase of media recovery by applying the necessary redo log files
(archived and online) to reconstruct the restored datafiles. Unless the applying of files is automated
with RECOVER AUTOMATIC or SET AUTORECOVERY ON, Oracle prompts for each required redo log file.
Oracle continues until all required archived redo log files have been applied to the restored datafiles.
The online redo log files are then automatically applied to the restored datafiles to complete media
recovery.
If no archived redo log files are required for complete media recovery, then Oracle does not prompt for
any. Instead, all necessary online redo log files are applied, and media recovery is complete.
When the damaged tablespaces are recovered up to the moment the media failure occurred, bring
the offline tablespaces online. For example, to bring tablespaces users and sales online, issue the
following statements:
ALTER TABLESPACE users ONLINE;
ALTER TABLESPACE sales ONLINE;
Issue the statement separately for each tablespace that is to be brought online. It is not possible
to name multiple tablespaces in a single statement, as is done with the RECOVER statement.
Backup and Recovery Strategies and Procedures Part III
Loss of Online Redo Files
When a media failure has affected the online redo log files, the appropriate recovery procedure must
be adopted. The choice of procedure depends on:
(1) How the online redo log files are configured, that is, whether the online redo log files are
mirrored or not.
(2) What type of media failure is there? Temporary or permanent.
(3) Types of the online redo log file that are affected by the media failure.
To identify the configuration of the redo log files the following query is to be issued at SQL prompt
logging in as a user with administrative privileges.
select group#, members, status from v$log;
If each group has more than one member online redo log files are mirrored. The media failure is to be
identified and fixed.
After the media failure is identified and fixed the status of the online redo log files is to be examined
by accessing the v$log file. The table given below describes the status of online redo log file and their
meanings.
Status / Description
UNUSED / The online redo log has never been written to.
CURRENT / The log is active, that is, needed for instance recovery, and it is the log to
which Oracle is currently writing. The redo log can be open or closed.
ACTIVE / The log is active, that is, needed for instance recovery, but is not the log to
which Oracle is currently writing. It may be in use for block recovery, and may
or may not be archived.
CLEARING / The log is being re-created as an empty log after an ALTER DATABASE CLEAR
LOGFILE statement. After the log is cleared, the status changes to UNUSED.
CLEARING_CURRENT / The current log is being cleared of a closed thread. The log can stay in this
status if there is some failure in the switch, such as an I/O error writing the
new log header.
INACTIVE / The log is no longer needed for instance recovery. It may be in use for media
recovery, and may or may not be archived.
The status of the online redo log files in V$LOGFILE (INVALID, STALE, DELETED, or blank) should not
be mistaken for the status of the online redo log files in V$LOG (such as INACTIVE, ACTIVE, CURRENT).
When one of the multiplexed online redo log file is lost and there is at least one member in the group
then:
(1) The log group is not affected by the media failure and Oracle allows the database to function as
normal.
(2) Oracle writes error messages to the LGWR trace file and the alert log of the database.
If the hardware problem is temporary, then correct it. LGWR accesses the previously unavailable
online redo log files as if the problem never existed.
If the hardware problem is permanent the redo log group member is to be dropped and recreated.
The procedure for that is:
(1) Locate the file name from v$logfile with the status INVALID:
select group#, status, member from v$logfile where status = 'INVALID';
(2) drop the damaged member
ALTER DATABASE DROP LOGFILE MEMBER <redo log file name with path>;
(3) Add new member to that group
ALTER DATABASE ADD LOGFILE MEMBER <new redo log file name with path> TO GROUP <group
number>;
The group number is the group number obtained using the query in (1)
or
(3) Add an existing file, of the same size as the other member(s) in the group, as a member:
ALTER DATABASE ADD LOGFILE MEMBER <existing member redo log file name> REUSE TO GROUP
<group number>;
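Putting the steps together, replacing a permanently damaged member of group 2 could be sketched as follows (the file name is illustrative):

```sql
-- 1. Find the damaged member
SELECT group#, status, member FROM v$logfile WHERE status = 'INVALID';
-- 2. Drop it from the group
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/redo02b.log';
-- 3. Re-create it as a new member of the same group
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/redo02b.log' TO GROUP 2;
```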
When all the members of a group are lost, there are three scenarios:
(1) When an INACTIVE group is lost.
(2) When an ACTIVE group is lost.
(3) When the CURRENT group is lost.
(b) Reuse the currently configured log filename to re-create the redo log file because the name
itself is invalid or unusable (for example, due to media failure).
(2) When an ACTIVE group is lost, the procedures are different in NOARCHIVELOG and
ARCHIVELOG modes.
When an ACTIVE group is lost in NOARCHIVELOG mode:
(1) If the media failure is temporary and is corrected, Oracle reuses the group when the instance is
restarted, as if nothing had happened.
(2) If the media failure is permanent, the database is to be restored from the latest backup available
(in NOARCHIVELOG mode only consistent backups are possible, and no online backup can be done,
as media recovery is not enabled). If the online redo log files were also backed up, restore them to
their default locations and start the database; there is no necessity to recover the database. But if
the online redo log files were not backed up, then:
(3) Mount the database:
startup mount;
(4) Mimic recovery until cancel:
RECOVER DATABASE UNTIL CANCEL
and issue CANCEL to cancel the recovery.
(5) Open the database using the RESETLOGS option:
ALTER DATABASE OPEN RESETLOGS;
(6) Back up the database completely, including the online redo log files, after shutting down the
database with the NORMAL or IMMEDIATE option, as the database is in NOARCHIVELOG mode.
Note: The database restored contains data until the latest backup which is used to restore the
database. All the data is to be re-entered after that point of restoration.
When an ACTIVE group is lost in ARCHIVELOG mode:
If the media failure is temporary, Oracle will start writing to the redo log files when the instance is
restarted.
But, in the case of permanent failure of media the following procedure is to be adopted.
(1) do an incomplete media recovery by canceling the recovery at that point where online redo logs
files are lost. This results in loss of the data that the redo log file contains.
(2) Ensure that the current name of the lost redo log can be used for a newly created file. If not, then
rename the members of the damaged online redo log group to a new location. The procedure to add
online redo log group and member are already discussed above. But, to rename the redo log files
issue the following statement:
ALTER DATABASE RENAME FILE <old redo log file name with path> TO < new redo log file name with
path>;
This statement is to be issued to each member in that group that has been lost.
(3) Open the database with reset logs option
ALTER DATABASE OPEN RESETLOGS;
All updates executed from the endpoint of the incomplete recovery to the present must be re-executed; otherwise that data is lost.
When a CURRENT group is lost, there will be data loss, whether in ARCHIVELOG mode or
NOARCHIVELOG mode. This is an unavoidable situation.
In NOARCHIVELOG mode, if the online redo log files were also backed up, the database is restored
including the online redo log files; no further data can be recovered, and all the data entered after
the backup point must be re-entered.
In ARCHIVELOG mode, the database is restored from the latest backup and a roll forward of the
archived redo logs is done, but the recovery is incomplete, as the current redo log files are not
available and the recovery must be done on an UNTIL CANCEL basis. The loss of data is limited to
the data in the current online redo log file.
Recovering After the Loss of Archived Redo Log Files:
If the database is operating in ARCHIVELOG mode, and if the only copy of an archived redo log file is
damaged, then the damaged file does not affect the present operation of the database. The following
situations can arise, however, depending on when the redo log was written and when you backed up
the datafile.
If you backed up . . . / Then . . .
The datafile after the damaged archived log was written / The lost archived log is not needed:
recovery of the datafile can start from the newer backup.
The datafile before the damaged archived log was written / You cannot recover that datafile past
the damaged log. Back up the datafile (or the whole database) immediately, so that the lost log is
no longer needed for recovery.
A "backup" is a copy of the database. This copy could be a physical copy (the database files are
copied, so it is a physical backup) or a logical copy (the database information is copied in another
format which can be "imported" again into the database to get back the old data). A logical copy of
the database (or a part of it) is called a logical backup.
Copying the physical files (from a backup) back to their initial location is called a "restore"
operation.
When a database is restored, sometimes we need to apply the last changes (from archived log
files/log files) to bring the database data up to date. This operation is called "recovery", because
the last information is recovered.
2. How we can take a physical Backup ?
a) At the operating system level, using a copy command (cp on UNIX, Solaris,
Linux).
b) using Oracle RMAN, or other backup tools.
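Option (a) can be sketched as a small shell script. The one below is self-contained: it builds a scratch "data directory" with dummy .dbf files and copies them into a dated backup directory. In real use the source would be the datafile locations listed in V$DATAFILE, and the database would be shut down cleanly first (cold backup).

```shell
# scratch stand-ins for e.g. /oracle/oradata/db1 and /backup/db1
DATA_DIR=$(mktemp -d)
BACKUP_ROOT=$(mktemp -d)
printf 'dummy datafile' > "$DATA_DIR/system01.dbf"
printf 'dummy datafile' > "$DATA_DIR/users01.dbf"

# dated backup directory, one per run
BACKUP_DIR="$BACKUP_ROOT/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"

# the database must be cleanly shut down before this copy (cold backup)
for f in "$DATA_DIR"/*.dbf; do
    cp "$f" "$BACKUP_DIR/"
done
ls "$BACKUP_DIR"
```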
3. How can we take a logical backup ?
Feature comparison of the three backup methods:

Closed database backups
  RMAN Backup: supported.
  User-Managed Backup: supported.
  Export: not supported.
Open database backups
  RMAN Backup: supported; no need to use BEGIN/END BACKUP statements.
  User-Managed Backup: supported; must use BEGIN/END BACKUP statements.
  Export: requires rollback or undo segments to generate consistent backups.
Incremental backups
  RMAN Backup: supported.
  User-Managed Backup: not supported.
  Export: not supported.
Corrupt block detection
  RMAN Backup: supported.
  User-Managed Backup: not supported.
  Export: supported; identifies corrupt blocks in the export log.
Automatic record keeping of files in backups
  RMAN Backup: supported.
  User-Managed Backup: not supported.
  Export: supported; performs either full, user, or table backups.
Recovery catalogs
  RMAN Backup: supported.
  User-Managed Backup: not supported.
  Export: not supported.
Backups to media manager
  RMAN Backup: supported.
  User-Managed Backup: supported.
  Export: not supported.
Platform-independent language for backups
  RMAN Backup: supported.
  User-Managed Backup: not supported.
  Export: supported.
1. What is RMAN ?
RMAN (Recovery Manager) is the recommended tool for Oracle database backup,
restore and recovery operations. RMAN is an Oracle product.
The online backup doesn't put the tablespace in "backup mode", so no extra
redo is generated.
3. Where does RMAN store the metadata information about the backups ?
Recovery Catalog Database: stores the metadata for the RMAN activities (it
does not store the actual backups of the target database). The target database's
control file can also keep the RMAN metadata.
Backup Set: a backup set stores one or more physical files as backup
pieces. You cannot split a file across different backup sets or mix archived
logs and datafiles in one backup set.
The RMAN catalog must be installed in an Oracle database (new or existing). As a
rule, the RMAN catalog must not be created in the same database as the target
database, for data-safety reasons (if that database is lost, the RMAN catalog is
lost with it, and the database cannot be restored and recovered). Even if the RMAN
catalog is installed in the same database as the target database (which is strongly
discouraged), it must at least be created on a different disk.
Planning the size of the Recovery Catalog
Approximate space requirements: about 90 MB overall; Temp tablespace: 5 MB; plus
roughly another 5 MB.
This Oracle database user will own all the data/information for the RMAN Recovery
Catalog. The name of this user is not important; however, rman is the most natural
name we can use for this purpose (it tells us exactly what the schema is used for).
1. Create the tablespace for the RMAN user (the TOOLS tablespace could be used as well):
In order to connect and work with a target database, RMAN must be connected to the
catalog and to the target database.
To connect to the target database and the recovery catalog we have to use this
command (run on the RMAN database server):
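A typical invocation looks like this (the service names and the password are placeholders, assumed for illustration):

```
$ rman target sys/change_me@db1 catalog rman/rman@rcat
```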
To register the target database with the RMAN catalog the command we have to use
is REGISTER DATABASE;
Now we can see that the DB table in the RMAN schema has one database registered (in
this table we can see the database identifier of the newly registered database). To
check this, we can run (connected as RMAN): SELECT DB_ID FROM DB;
Sometimes we need to backup the database changes only from the last backup (only
the last changes are backed up). This is an incremental backup. There are 2 types of
incremental backup: DIFFERENTIAL (by default) & CUMULATIVE.
NOTE: The incremental backups are only for the DATA files.
CUMULATIVE backup = backs up all blocks changed after the most recent
incremental backup at level 0. See the picture below.
If a CUMULATIVE backup is taken each day during the week, we need to restore only
twice: first the full backup, then the last cumulative backup (for a complete
restore). If DIFFERENTIAL backups are taken instead, we have to restore every
differential backup taken up to the database crash.
So, this command restores the file from the full backup.
First we have to take a look at the export log file. We can also import the data
into another database to check that the import completes without errors.
2. Validate a physical backup (taken using RMAN "image copy" option or by copying the files
at the OS level)
In this case we have to validate the .dbf files. This is done by using DBVERIFY Utility.
If there are no errors during the RESTORE ... VALIDATE, the backup is good. This
command (RESTORE DATABASE VALIDATE) doesn't actually restore the database; only the
check is done.
To find the backup set of the backups we can run the RMAN command LIST BACKUP;
Export (exp), Import (imp) are Oracle utilities which allow you to write data in an
ORACLE-binary format from the database into operating system files and to read data
back from those operating system files.
2. Which are the Import/ Export modes ?
a) Full
The EXP_FULL_DATABASE and IMP_FULL_DATABASE roles, respectively, are needed to
perform a full export/import. Use the full export parameter for a full export.
b) Tablespace
Use the tablespaces export parameter for a tablespace export.
c) User
This mode can be used to export and import all objects that belong to a user. Use the
owner export parameter and the fromuser import parameter for a user (owner)
export-import.
d) Table
Specific tables (or partitions) can be exported/imported with table export mode. Use
the tables export parameter for a table export/ import mode.
import the table using the INDEXFILE parameter (the import is not done, but a file
containing the index-creation statements is generated)
modify this script to create the indexes in the tablespace we want
import the table using the IGNORE=y option (because the table already exists)
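The procedure above can be sketched as follows (the user, dump file and script names are illustrative placeholders):

```
$ imp scott/tiger FILE=emp.dmp TABLES=emp INDEXFILE=emp_idx.sql
# emp_idx.sql now holds the CREATE INDEX statements; edit its
# TABLESPACE clauses, then import the data without indexes and
# re-create the indexes from the edited script:
$ imp scott/tiger FILE=emp.dmp TABLES=emp IGNORE=Y INDEXES=N
$ sqlplus scott/tiger @emp_idx.sql
```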
Detect database corruption. Ensure that all the data can be read (if the data can be read
that means there is no block corruption)
Transporting tablespaces between databases
If you run multiple export sessions, ensure they write to different physical disks.
Import the table using INDEXFILE parameter (the import is not done, but a file which contains
the indexes creation is generated), import the data and recreate the indexes
Store the dump file to be imported on a separate physical disk from the oracle data files
If there are any constraints on the target table, the constraints should be disabled during
the import and enabled after import
Set the BUFFER parameter to a high value (e.g. BUFFER=30000000, about 30 MB)
together with COMMIT=y, or leave COMMIT=n (the default behavior: import commits
after each table is loaded; however, this uses a lot of rollback-segment or undo
space for huge tables)
use the direct path to import the data (DIRECT=y)
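A tuned import invocation combining several of the tips above might look like this (a sketch; the credentials and file names are placeholders):

```
$ imp scott/tiger FILE=big_table.dmp TABLES=big_table \
      IGNORE=Y BUFFER=30000000 COMMIT=Y LOG=imp_big_table.log
```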
The exp utility accepts the following parameters (defaults are shown where known):
buffer, compress, consistent, constraints, direct, feedback, file, filesize,
flashback_scn, flashback_time, full, grants, help, indexes, log, object_consistent,
owner, parfile, recordlength, resumable, resumable_name, resumable_timeout
(default: 2h), rows (default: Y), statistics (default: ESTIMATE), tables,
tablespaces, transport_tablespace, triggers, tts_full_check (default: FALSE),
userid, volsize (specifies the maximum number of bytes in an export file on each
tape volume).
The imp utility accepts the following parameters (defaults are shown where known):
buffer (specifies the size, in bytes, of the buffer (array) used to transfer the
data rows), commit, compile, constraints, datafiles (only with
transport_tablespace), destroy, feedback, file, filesize, fromuser, full, grants,
help, ignore, indexes, indexfile, log, parfile, recordlength, resumable,
resumable_name, resumable_timeout (default: 2h), rows, show, skip_unusable_indexes,
statistics (default: ALWAYS), streams_configuration, streams_instantiation, tables,
tablespaces, to_user, transport_tablespace, tts_owners, userid.
ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows.
Use IGNORE=N to skip tables that already exist (imp will give an error if the object
is re-created), or the table can be dropped/truncated and re-imported if we need to
do a table refresh.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import
parameter to ignore these errors, but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or
use parameter CONSISTENT=NO (However this option could create possible referential
problems, because the tables are not exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set
parameter COMMIT=Y (with an appropriate BUFFER parameter ) while importing.
RMAN incremental backup for an Oracle DB
1. Incremental database backup Overview
Sometimes we need to backup the database changes only from the last backup (only
the last changes are backed up). This is an incremental backup. There are 2 types of
incremental backup: DIFFERENTIAL (by default) & CUMULATIVE.
NOTE: The incremental backups are only for the DATA files.
2. Incremental DIFFERENTIAL backup
DIFFERENTIAL backup = which backs up all blocks changed after the most recent
incremental backup at level 1 or 0. (See Picture 1). The following RMAN command is used to take a
DIFFERENTIAL database backup:
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
CUMULATIVE backup = which backs up all blocks changed after the most recent incremental
backup at level 0. (See Picture 2)
The following RMAN command is used to take a CUMULATIVE database backup:
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
4. The advantages of each type of incremental backup
A DIFFERENTIAL backup copies only the blocks changed since the most recent
incremental backup at the same or a lower level, so each backup is smaller and
faster to take; the price is a longer restore, because every differential taken
since the last level 0 must be applied.
A CUMULATIVE backup copies all blocks changed since the last level 0, so the
backups are larger, but the restore is faster: only the level 0 backup and the
most recent cumulative backup are needed.
RMAN Configuration
1. Which is the default RMAN configuration ?
show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; #
default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/10gOHome/dbs/snapcf_db10.f'; # default
NOTES:
Parameters that have been modified no longer show the trailing "# default" flag.
Changes to the RMAN configuration are saved automatically in the control file/RMAN
catalog.
2. How could I restore the actual configuration to the default value ?
CONFIGURE RETENTION POLICY CLEAR;
CONFIGURE BACKUP OPTIMIZATION CLEAR;
CONFIGURE DEFAULT DEVICE TYPE CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT CLEAR;
CONFIGURE DEVICE TYPE DISK CLEAR;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK CLEAR;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT CLEAR;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK CLEAR;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT CLEAR;
CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CHANNEL DEVICE TYPE SBT CLEAR;
CONFIGURE MAXSETSIZE CLEAR;
CONFIGURE SNAPSHOT CONTROLFILE NAME CLEAR;
3. Using substitution variables
RMAN can make use of substitution variables in creating format strings to generate UNIQUE
file names. If the file names are not unique the files will be overwritten and the data will be lost.
Format
Description
%d
%u
%p
specifies the backup piece number within the backup set. This value starts at 1 for each backup
set and is incremented by 1 as each backup piece is created.
%c
Specifies the copy number of the backup piece within a set of duplexed backup pieces. If you did
not issue the set duplex command, then this variable will be 1 for regular backup sets and 0 for proxy
copies. If you issued set duplex, the variable identifies the copy number: 1, 2, 3, or 4.
%U
Specifies a convenient shorthand for %u_%p_%c that guarantees uniqueness in generated backup
filenames. If you do not specify a format, RMAN uses %U by default.
%t
specifies the backup set timestamp. The combination of %s and %t can be used to form a unique
name for the backup set.
%s
specifies the backup set number. This number is a counter in the control file that is incremented
for each backup set. The counter value starts at 1 and is unique for the lifetime of the control file. If you
restore a backup control file, then duplicate values can result. Also, CREATE CONTROLFILE initializes the
counter back to 1.
4. Configure RETENTION POLICY
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
>> After 30 days the backup will become OBSOLETE.
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
>> The latest 3 backups will NOT be OBSOLETE. The others will be.
When configuring a retention policy, RMAN will NOT cause backups to be automatically
deleted.
crosscheck backup; -> check if the backup files exist physically on the disk
crosscheck copy; -> check if the files of a copy operation exist physically on the disk
list backup;
list expired backup; -> To identify those backups that were not found during a crosscheck
DELETE EXPIRED BACKUP; -> To delete the information about the expired backups in the
RMAN repository
DELETE EXPIRED COPY; -> To delete the information about the expired copies in the RMAN
repository
5. Configure DEFAULT DEVICE TYPE
CONFIGURE DEFAULT DEVICE TYPE TO DISK; --> by default
CONFIGURE DEFAULT DEVICE TYPE TO SBT;
This is overridden by the RUN command, or by DEVICE TYPE on the BACKUP command itself.
6. Configure CONTROLFILE AUTOBACKUP (starting from 9i)
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP OFF; --> by default
RMAN writes both the CONTROLFILE and the SPFILE (if the database was started with an SPFILE) to
the same backup piece.
To set the location of the Control file backup:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'/oracle/RMAN_backup/%F';
The %F tag is essential for RMAN to be able to restore the file without a recovery catalog.
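With autobackup enabled, the control file can later be restored without a recovery catalog. A hedged sketch (the DBID shown is the example value that appears in this document's logs; yours will differ):

```
RMAN> STARTUP NOMOUNT;
RMAN> SET DBID 1244100437;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
```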
RUN {
ALLOCATE CHANNEL ch00 TYPE disk ;
ALLOCATE CHANNEL ch01 TYPE disk ;
ALLOCATE CHANNEL ch02 TYPE disk ;
BACKUP
$BACKUP_TYPE
EOF
When the RMAN_NOARCHIVELOG_backup.sh script runs, it generates the following
log information:
[oracle@PROD scripts]$ ./RMAN_NOARCHIVELOG_backup.sh
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Mar 26 22:59:41 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: DB1 (DBID=1244100437)
connected to recovery catalog database
Here is the way the database could be restored if all the files (+control files)
are lost:
Description
Log history
Records that are created whenever a log switch occurs. Note that log history records
describe an online log switch, not a log archival.
Archived logs
Records associated with archived logs that were created by archiving an online redo log,
copying an existing archived redo log, or restoring backups of archived redo logs.
Backups
Records associated with backup sets, backup pieces, proxy copies, and image copies.
Physical schema
Records associated with datafiles and tablespaces. If the target database is open, then
rollback segment information is also updated.
>> If this is done using RMAN, RMAN will recognize the new incarnation and
will use the new incarnation number for the following backups/restores.
>> If this is NOT done using RMAN, RMAN will NOT recognize the new
incarnation. To let RMAN know that this is a new incarnation we have to
reset the database using the command: RESET DATABASE;
RMAN> list expired backup; -> To identify those backups that were not found during
a crosscheck
RMAN> list backup by file;
RMAN> list archivelog all; -> List all archived log files
RMAN> list backup summary; -> Backups summary
RMAN> crosscheck backup; -> check if the backup files exist physically on the disk
Use the need backup option to identify which datafiles need a new backup:
RMAN> report need backup days = 9 database; # needs at least 9 days of logs to
recover
Use the obsolete option to identify which backups are obsolete because they are no
longer needed for recovery. The redundancy parameter specifies the minimum level
of redundancy considered necessary for a backup or copy to be obsolete (if you do
not specify the parameter, redundancy defaults to 1).
To be sure that the RMAN catalog is not lost, the RMAN database must be in
ARCHIVELOG mode and maintained as an Oracle production database (it must be backed
up daily). Logical backups of the RMAN schema can also be taken with the exp/imp
utilities.
7. Delete the database files
To test the backup, I stop the database (even though it is not mandatory :) ), delete
all the database files (control files, log files, data files) and after that restore
the files using RMAN.
If I try to start the database, I receive an error because the control files are
not present:
To restore the database (physically the files are restored, but the files are the
database) we have to connect to the target database and the recovery catalog, start
the instance in NOMOUNT state, restore the control file, and alter the database into
MOUNT state.
(We can also run RMAN> RESTORE DATABASE VALIDATE to check that everything is OK
and the database can be opened.)
After that we can open the database (in RESETLOGS mode):
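Put together, the restore sequence described above can be sketched as (commands only, output omitted; assumes the target and catalog connections are already made):

```
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```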
The new database will be registered with the RMAN catalog automatically.
ALTER SYSTEM FLUSH BUFFER_CACHE; forces Oracle to read the data from the data files
and not from the buffer cache.
>> Overwrite block 28 (the next block after the header) with zeros to
corrupt the block (at the OS level):
$ dd if=/dev/zero of=/DB1/oradata/db1/users01.dbf bs=8192 conv=notrunc seek=28
count=1
bs = block size
conv = notrunc means do not truncate the output file
seek = skip this many blocks in the output, so block 28 is the one overwritten
count = how many blocks to write
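The dd invocation can be tried safely on a scratch file rather than a real datafile. The sketch below builds a 40-block file of 0xFF bytes, zeroes block 28 exactly as above, and dumps the first bytes of the corrupted block:

```shell
# scratch file standing in for a datafile
SCRATCH=$(mktemp)
# build a 40-block (8192-byte) file filled with 0xFF bytes
dd if=/dev/zero bs=8192 count=40 2>/dev/null | tr '\0' '\377' > "$SCRATCH"
# corrupt block 28, just like the command above
dd if=/dev/zero of="$SCRATCH" bs=8192 conv=notrunc seek=28 count=1 2>/dev/null
# block 28 is now all zero bytes; the rest of the file is untouched
dd if="$SCRATCH" bs=8192 skip=28 count=1 2>/dev/null | od -An -tx1 | head -1
```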
handle=/home/oracle/Desktop/Backup_rman/backup/bk_34_1_645659824 tag=HOT_DB_BK_LEVEL0
channel ORA_DISK_1: block restore complete, elapsed time: 00:00:01
starting media recovery
archive log thread 1 sequence 3 is already on disk as file /DB1/flash_recovery_a
rea/DB1/archivelog/2008_02_02/o1_mf_1_3_3tb95vdp_.arc
archive log thread 1 sequence 4 is already on disk as file /DB1/flash_recovery_a
rea/DB1/archivelog/2008_02_02/o1_mf_1_4_3tbbkzk3_.arc
channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece
/home/oracle/Desktop/Backup_rman/backup/al_38_1_645659875
channel ORA_DISK_1: restored backup piece 1
.
.
media recovery complete, elapsed time: 00:00:01
Finished blockrecover at 03-FEB-08
select EMPNO, ENAME, JOB, MGR, HIREDATE from emp; will return the emp data:
A user-managed backup is made by performing a physical copy of data files using the
OS commands. These copies are moved to a separate location using OS commands.
The user maintains a record of the backups. For the recovery operation we have to
move back (or to the new location of the database) the files and perform the
recovery.
The user-managed backups can be taken at the following levels:
Tablespace level
Database level
copy the physical files associated with this tablespace to another location
using OS commands
Archive the unarchived redo logs so that the redo required to recover the
tablespace backups is archived
( SQL> ALTER SYSTEM ARCHIVE LOG CURRENT; )
Take a backup of all archived redo log files generated between Begin Backup
and End Backup using OS commands
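The steps above can be sketched as follows (the tablespace name and paths are illustrative):

```
SQL> ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level, e.g.
--   $ cp /DB1/oradata/db1/users01.dbf /backup/
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
-- then back up the archived logs generated in between
```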
NOTES:
When a tablespace is in backup mode, Oracle will stop updating its file
headers, but will continue to write to the data files. When in backup mode,
Oracle will write complete changed blocks to the redo log files. Normally
only deltas (change vectors) are logged to the redo logs. This is done to
enable reconstruction of a block if only half of it was backed up (split blocks).
Because of this, one should notice increased log activity and archiving during
on-line backups. To fix this problem, simply switch to RMAN backups.
If the tablespace is in READ ONLY mode, we don't need to put the tablespace
in Backup Mode.
SELECT t.NAME tablespace_name, f.NAME file_name
FROM
V$TABLESPACE t,
V$DATAFILE f
WHERE
t.TS# = f.TS#
ORDER BY t.NAME;
Back up the control file whenever the database has gone through structural changes.
5. TABLESPACE Recovery
The SYSTEM tablespace can never be recovered with the database open, because it
cannot be taken offline; to recover it, the database must be shut down and mounted.
6. DATAFILE Recovery
If the database was brought down cleanly (SHUTDOWN IMMEDIATE or NORMAL), we only
have to copy the data files, redo log files and control files back from the backup
location. This kind of backup is used for a database in NOARCHIVELOG mode which is
not used for a 24x7 business.
An Open Database Backup is a backup taken when the database is up and running.
This is done by putting the tablespace in Backup mode and copying the data files and
control files. All the latest archived log files must be copied as well. The V$BACKUP
and V$DATAFILE_HEADER views should be queried after the database backup to verify
that no data file was left in backup mode.
An OS failure occurs
The database could be stopped by using shutdown abort. See the picture below:
When the database is brought up a media recovery is needed and the database will be
in mount state:
In mount state we can query the files to see what is happening (optional):
To see the status of the files during the online user-managed backups the following
select could be used:
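A query of the following shape is commonly used for this (the exact column list is an assumption):

```
SQL> SELECT d.NAME, b.STATUS, b.TIME
     FROM   V$BACKUP b, V$DATAFILE d
     WHERE  b.FILE# = d.FILE#;
-- STATUS shows ACTIVE for datafiles whose tablespace is in backup mode
```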
target is not resynchronized and a backup controlfile is restored, the new records
must be cataloged manually (catalog archivelog <logname>;).
2.8. Snapshot Controlfile
When RMAN needs to resynchronize from a read-consistent version of the control file, it creates a
temporary snapshot control file. The default name for the snapshot control file is port-specific. Use the
set snapshot controlfile name to file_name command to change the name of the snapshot control
file; subsequent snapshot control files that RMAN creates use the name specified in the command. The
snapshot control file name can also be set to a raw device. This operation is
important for OPS databases in which more than one instance in the cluster uses
RMAN, because server sessions on each node must be able to create a snapshot
control file with the same name and location.
2.9. Resetlogs Operation
Whenever you open the database with the RESETLOGS option, all datafiles get a new RESETLOGS SCN
and timestamp. Archived redo logs also have these two values in their header. Because Oracle will not
apply an archived redo log to a datafile unless the RESETLOGS SCN and timestamps match, the
RESETLOGS operation prevents you from corrupting your datafiles with old archived logs.
2.10. Database Incarnation
Whenever you perform incomplete recovery or perform recovery using a backup control file, you must
reset the online redo logs when you open the database. The new version of the reset database is
called a new incarnation. All archived redo logs generated after the point of the RESETLOGS on the
old incarnation are invalid in the new incarnation.
2.11. Resetting the Recovery Catalog
Before you can use RMAN again with a target database that you have opened with the RESETLOGS
option, notify RMAN that you have reset the database incarnation. The reset database command
directs RMAN to create a new database incarnation record in the recovery catalog. This new
incarnation record indicates the current incarnation. RMAN associates all subsequent backups and log
archiving done by the target database with the new database incarnation. If you issue the ALTER
DATABASE OPEN RESETLOGS statement but do not reset the database, then RMAN cannot access the
recovery catalog because it cannot distinguish between a RESETLOGS command and an accidental
restore of an old control file. By resetting the database, you inform RMAN that the database has been
opened with the RESETLOGS option. In the rare situation in which you wish to undo the effects of
opening with the RESETLOGS option by restoring backups of a prior incarnation of the database, use
the reset database to incarnation key command to change the current incarnation to an older
incarnation.
3. The recovery catalog
The recovery catalog is a repository of information that is used and maintained by RMAN. RMAN uses
the information in the recovery catalog to determine how to execute requested backup and restore
actions. The recovery catalog can be in a schema of an existing Oracle8 database.
However, if RMAN is being used to back up multiple databases, it is probably worth
creating a dedicated recovery catalog database. THE RECOVERY CATALOG DATABASE CANNOT
BE USED TO CATALOG BACKUPS OF ITSELF. To set up the recovery catalog, first ensure
that catalog and catproc have been run, then execute the following:
SVRMGR> spool create_rman.log
SVRMGR> connect internal
SVRMGR> create user rman identified by rman
temporary tablespace temp
default tablespace rcvcat quota unlimited on rcvcat;
SVRMGR> grant recovery_catalog_owner to rman;
SVRMGR> grant connect, resource to rman;
Note: Following steps only apply for an Oracle8 8.0.x catalog creation.
SVRMGR> connect rman/rman
SVRMGR> @?/rdbms/admin/catrman
Check create_rman.log for errors. The above commands assume that the
TEMP and RCVCAT tablespaces have been created.
In Oracle8i the catalog is created a little differently.
Note: Following steps only apply to Oracle8i 8.1.5 and greater.
From the UNIX shell run:
% set ORACLE_SID=RCAT
% rman catalog rman/rman
RMAN> create catalog;
This will generate the recovery catalog schema in the default tablespace for
RMAN.
Also ensure that catproc has been run on the target database as SYS
(do _not_ use SYSTEM); RMAN makes extensive use of RPCs.
It is very important that the recovery catalog database is backed up
regularly and frequently.
Note: Although you are not required to use a recovery catalog with RMAN, it is
recommended. Most of the information in the recovery catalog is also available via
the target database's controlfile, and RMAN can use that information for recovery
purposes.
4. Starting RMAN
RMAN has a command line interface, or can be run from Enterprise Manager. For the purposes of this
document, only the CLI will be covered. The command line interface has following syntax:
rman target <qstring> [rcvcat <qstring> | cmdfile <qstring> |
msglog <qstring> | append | trace <qstring>]
Argument
TARGET A connect string containing a userid and password for the database on which Recovery
Manager is to operate.
rman target system/manager@target
RCVCAT A connect string that contains a userid and password for the database that contains the
recovery catalog.
rman rcvcat rman/rman@rcvcat
CMDFILE The name of a file that contains the input commands for RMAN. If this
argument is specified, RMAN operates in batch mode; otherwise, RMAN operates in
interactive line mode.
MSGLOG The name of a file where RMAN records commands and output results. If not
specified, RMAN outputs to the screen.
APPEND This parameter causes the msglog file to be opened in append mode. If this
parameter is not specified and a file with the same name as the msglog file already
exists, it is overwritten.
TRACE
Database status:
Recovery catalog: open
Target: mounted or open
The target database must be registered with the recovery catalog before using RMAN against the
database for the first time:
RMAN> register database;
6. Adding existing backups to the recovery catalog
Database status:
Recovery catalog: open
Target: mounted or open
If user-created backups existed under version 8.x prior to registering with the target database, these
can be added to the recovery catalog as follows:
RMAN> catalog datafilecopy /supp/ . /systargdb.dbf;
To view this file in the catalog, use the following command:
RMAN> list copy of database;
7. Backing up in noarchivelog mode
Database status:
Recovery catalog: open
Target: database mounted
The recovery catalog database is OPEN and the target database is started (optionally
mounted). Because the target database is not in archivelog mode, it must not be open
when performing backups of datafiles. This is the equivalent of making filesystem
copies of datafiles without putting the tablespaces into hot backup mode. If the
database is open and not in archivelog mode, RMAN will generate an error when you
attempt to perform a datafile backup.
7.1. Example of how to back up a complete database
RMAN> run {
2> # backup the complete database to disk
3> allocate channel dev1 type disk;
4> backup
5> full
6> tag full_db_sunday_night
7> format /oracle/backups/db_t%t_s%s_p%p
8> (database);
9> release channel dev1;
10> }
Line#
2: Comment line (anything after the # is a comment)
3&9: See section 15 Channels
5: Full backup (default if full or incremental not specified)
6: Meaningful string (<=30 chars)
7: Filename to use for backup pieces, including substitution variables.
8: Indicates all files including controlfiles are to be backed up
To view this backup in the catalog, use the following command:
RMAN> list backupset of database;
7.2. Example of how to back up a tablespace
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> tag tbs_users_read_only
5> format /oracle/backups/tbs_users_t%t_s%s
6> (tablespace users);
7> release channel dev1;
8> }
Line#
6: Specifying only the USERS tablespace for backup
To view this tablespace backup in the catalog, use the following command:
RMAN> list backupset of tablespace users;
If for example the USERS tablespace is going to be put READ ONLY after being backed up, subsequent
full database backups would not need to backup this tablespace. To cater for this, specify the skip
readonly option in subsequent backups.
Note that although this is a tablespace backup, the target database does NOT have
to be open, only mounted. This is because tablespace information is stored in the
controlfile in Oracle8.
7.3. Example of how to backup individual datafiles
RMAN> run {
2> allocate channel dev1 type SBT_TAPE;
3> backup
4> format %d_%u
5> (datafile /oracle/dbs/sysbigdb.dbf);
6> release channel dev1;
7> }
Line#
2: Allocates a tape drive using the media manager layer (MML)
Note that no tag was specified and is therefore null.
To view this tablespace backup in the catalog, use the following command:
RMAN> list backupset of datafile 1;
7.4. Copying datafiles
RMAN> run {
2> allocate channel dev1 type SBT_TAPE;
3> copy datafile /oracle/dbs/temp.dbf to /oracle/backups/temp.dbf;
4> release channel dev1;
5> }
To view this file copy in the catalog, use the following command:
RMAN> list copy of datafile /oracle/dbs/temp.dbf;
Copying a datafile is different from backing up a datafile: a datafile copy is an
image copy of the file, whereas a backup of the file creates a backupset.
7.5. Backing up the controlfile
RMAN> run {
2> allocate channel dev1 type SBT_TAPE;
3> backup
4> format cf_t%t_s%s_p%p
5> tag cf_monday_night
6> (current controlfile);
7> release channel dev1;
8> }
Note that a database backup will automatically back up the controlfile.
8. Backing up in archivelog mode
Database status:
Recovery catalog: open
Target: instance started, database mounted or open
The commands are identical to those in section 7 except that the target database is in archivelog
mode.
8.1. Backing up archived logs
The following script backs up all archive logs:
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> format /oracle/backups/log_t%t_s%s_p%p
5> (archivelog all);
RMAN> run {
2> allocate channel dev1 type disk;
3> sql 'alter system archive log current';
4> backup
5> format '/oracle/backups/log_t%t_s%s_p%p'
6> (archivelog from time 'sysdate-1' all delete input);
7> release channel dev1;
8> }
The above script might be run after performing a full database open backup. It would ensure that all
redo to recover the database to a consistent state would be backed up.
Note, you cannot tag archive log backupsets.
9. Incremental backups
A level N incremental backup backs up blocks that have changed since the most recent incremental
backup at level N or less.
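The level-selection rule above can be illustrated with a small model. The following Python sketch is illustrative only (a hypothetical helper, not Oracle code): it picks the blocks a level-N incremental would copy, i.e. everything changed since the most recent backup at level N or lower.

```python
# Illustrative model (not Oracle code) of the incremental rule above:
# a level-N incremental backs up blocks changed since the most recent
# backup at level N or less.

def blocks_for_incremental(history, changes, level):
    """history: list of (level, time) of past backups.
    changes: dict mapping block id -> last change time.
    Returns the block ids a new level-`level` incremental would copy."""
    # Base point: the most recent backup at this level or lower.
    base_time = max(t for lvl, t in history if lvl <= level)
    return sorted(b for b, t in changes.items() if t > base_time)

history = [(0, 100), (1, 200)]        # level 0 at t=100, level 1 at t=200
changes = {"blk1": 150, "blk2": 250}  # last-change times per block

# A new level 1 uses the level 1 taken at t=200 as its base:
print(blocks_for_incremental(history, changes, 1))  # ['blk2']
```

With only the level 0 as history, the same level 1 would also pick up blk1, since its change at t=150 postdates the level 0 base.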
9.1. Level 0 the basis of the incremental backup strategy
RMAN> run {
2> allocate channel dev1 type disk;
3> backup
4> incremental level 0
5> filesperset 4
6> format '/oracle/backups/sunday_level0_%t'
7> (database);
8> release channel dev1;
9> }
Line#
4: Level 0 backup; incremental backups of level > 0 are applied on top of this
5: Specifies the maximum number of files per backupset
A list of the database backupsets will show the above backup. The Type
column is marked Incremental; the LV column shows 0.
(Example 8.1 long-running-operation monitoring output, showing the SERIAL#, CONTEXT, % Complete, and Time Now columns.)
12. Recovery
As with backup, recovery is probably best explained with a few examples.
12.1. Database open, datafile deleted
A datafile has been deleted from a running database. There are two methods of open-database
recovery: restore the datafile, then recover either the datafile or the tablespace. The next two
examples show both methods:
(a) Datafile recovery
RMAN> run {
2> allocate channel dev1 type disk;
3> sql 'alter tablespace users offline immediate';
4> restore datafile 4;
5> recover datafile 4;
6> sql 'alter tablespace users online';
7> release channel dev1;
8> }
(b) Tablespace recovery
RMAN> run {
2> allocate channel dev1 type disk;
3> sql 'alter tablespace users offline immediate';
4> restore tablespace users;
5> recover tablespace users;
6> sql 'alter tablespace users online';
7> release channel dev1;
8> }
Set a retention policy that matches your backup and recovery strategy requirements. If not using a
catalog, ensure that the CONTROL_FILE_RECORD_KEEP_TIME instance parameter matches your retention policy.
SQL> alter system set control_file_record_keep_time=21 scope=both;
This will keep 21 days of backup records.
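The relationship between the two settings can be expressed as a simple sanity check. The helper below is hypothetical (not part of any Oracle tool); it only verifies that the controlfile record keep time, in days, covers the retention window:

```python
# Hypothetical sanity check (not an Oracle utility): controlfile backup
# records must be kept at least as long as the retention policy window,
# otherwise backup records can age out of the controlfile while the
# backups themselves are still needed.

def keep_time_covers_retention(keep_time_days, retention_days):
    return keep_time_days >= retention_days

print(keep_time_covers_retention(21, 21))  # True: matches the 21-day example
print(keep_time_covers_retention(7, 21))   # False: records could age out early
```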
Run regular catalog maintenance.
REASON: Delete obsolete will remove backups that are outside your retention policy. If obsolete
backups are not deleted, the catalog will continue to grow until performance becomes an issue.
RMAN> delete obsolete;
REASON: crosschecking checks that the catalog/controlfile matches the physical backups.
If a backup piece is missing, crosscheck marks it EXPIRED, so that when a restore is started that piece is
not eligible and an earlier backup is used instead. To remove the expired backups from the catalog/controlfile,
use the delete expired command.
RMAN> crosscheck backup;
RMAN> delete expired backup;
8. Prepare for loss of controlfiles.
Set autobackup on.
REASON: This ensures that you always have an up-to-date controlfile backup, taken at the end of
the current backup rather than during it.
RMAN> configure controlfile autobackup on;
Keep your backup logs.
REASON: The backup log contains the parameters for your tape access and the locations of controlfile
backups, which can be used if a complete loss occurs.
LSNRCTL> exit
The password file is created in the default path:
[oracle@REDDY admin]$ cd $ORACLE_HOME/dbs
[oracle@REDDY dbs]$ orapwd file=orapwtarun password=tarun entries=5
Set the following in the parameter file:
[oracle@REDDY dbs]$ vi inittarun.ora
*.remote_login_passwordfile=exclusive
:wq!
Verify that archivelog mode is enabled:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /home/oracle/tarun/archive
Oldest online log sequence 41
Next log sequence to archive 42
Current log sequence 42
Client machine (where the catalog database is placed and the RMAN configuration is done):
Create a separate tablespace for RMAN:
SQL> create tablespace rman_tbs datafile '/home/oracle/tarun4/datafiles/rman01.dbf' size 20m;
Tablespace created.
Create user rman
SQL> create user rman identified by rman
2 default tablespace rman_tbs
database mounted
Total System Global Area 343932928 bytes
Fixed Size 1219328 bytes
Variable Size 289408256 bytes
Database Buffers 50331648 bytes
Redo Buffers 2973696 bytes
RMAN> restore database;
Starting restore at 19-NOV-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=37 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /home/oracle/tarun/datafiles/system01.dbf
restoring datafile 00002 to /home/oracle/tarun/datafiles/undotbs1.dbf
restoring datafile 00003 to /home/oracle/tarun/datafiles/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/rmanbackup_07k03r5g
channel ORA_DISK_1: restored backup piece 1
piece handle=/home/oracle/rmanbackup_07k03r5g tag=TAG20081119T163248
channel ORA_DISK_1: restore complete, elapsed time: 00:01:16
Finished restore at 19-NOV-08
1 200 TEMP 200 /home/oracle/tarun/datafiles/temp01.dbf
RMAN> backup database;
Starting backup at 26-NOV-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/home/oracle/tarun/datafiles/system01.dbf
input datafile fno=00002 name=/home/oracle/tarun/datafiles/undotbs1.dbf
input datafile fno=00003 name=/home/oracle/tarun/datafiles/sysaux01.dbf
input datafile fno=00004 name=/home/oracle/tarun/datafiles/users01.dbf
channel ORA_DISK_1: starting piece 1 at 26-NOV-08
channel ORA_DISK_1: finished piece 1 at 26-NOV-08
piece handle=/home/oracle/rmanbackup_0tk0m4s0 tag=TAG20081126T150847 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:05
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current control file in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 26-NOV-08
890 Full 6.67M DISK 00:00:02 26-NOV-08
BP Key: 892 Status: AVAILABLE Compressed: NO Tag: TAG20081126T150847
Piece Name: /home/oracle/rmanbackup_0uk0m4vt
Control File Included: Ckp SCN: 343497 Ckp time: 26-NOV-08
SPFILE Included: Modification time: 08-NOV-08
On the machine where the target database is present:
SQL> show user
USER is SYS
SQL> select username from dba_users;
USERNAME
OUTLN
SYS
SYSTEM
TARUN
TSMSYS
DIP
DBSNMP
7 rows selected.
OUTLN
SYS
SYSTEM
TARUN
TSMSYS
DIP
DBSNMP
7 rows selected.
RECOVERING A TABLESPACE IF ONE OF THE DATAFILES IS LOST OR CORRUPTED:
[oracle@REDDY ~]$ rman
Recovery Manager: Release 10.2.0.1.0 Production on Sat Nov 29 15:18:50 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect target sys@REDDY;
target database Password:
connected to target database: TARUN (DBID=2431589971)
RMAN> connect catalog rman/rman@REDDY1;
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN> resync catalog;
starting full resync of recovery catalog
FILE_NAME
SYSTEM
/home/oracle/tarun/datafiles/system01.dbf
UNDOTBS1
/home/oracle/tarun/datafiles/undotbs1.dbf
SYSAUX
/home/oracle/tarun/datafiles/sysaux01.dbf
TABLESPACE_NAME
FILE_NAME
TEST
/home/oracle/tarun/datafiles/test01.dbf
TEST
/home/oracle/tarun/datafiles/test02.dbf
RMAN> backup database plus archivelog;
Starting backup at 29-NOV-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=44 recid=3 stamp=672076709
channel ORA_DISK_1: starting piece 1 at 29-NOV-08
channel ORA_DISK_1: finished piece 1 at 29-NOV-08
piece handle=/home/oracle/oracle/product/10.2.0/db_1/dbs/07k0u4t6_1_1
tag=TAG20081129T155830 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 29-NOV-08
Starting backup at 29-NOV-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/home/oracle/tarun/datafiles/system01.dbf
TABLESPACE_NAME
/home/oracle/tarun/datafiles/system01.dbf
SYSTEM
/home/oracle/tarun/datafiles/undotbs1.dbf
UNDOTBS1
/home/oracle/tarun/datafiles/sysaux01.dbf
SYSAUX
FILE_NAME
TABLESPACE_NAME
/home/oracle/tarun/datafiles/test01.dbf
TEST
/home/oracle/tarun/datafiles/test02.dbf
TEST
SQL> !
[oracle@REDDY ~]$ cd tarun/
[oracle@REDDY tarun]$ ls
archive bdump control datafiles dbc.sql udump
[oracle@REDDY tarun]$ cd datafiles/
[oracle@REDDY datafiles]$ ls
redo01.log sysaux01.dbf temp01.dbf test02.dbf
redo02.log system01.dbf test01.dbf undotbs1.dbf
RECOVERING THE LOST OR CORRUPTED DATAFILE AND RESTORING THE BACKUP DATAFILE TO
A NEW LOCATION:
[oracle@REDDY ~]$ rman
Recovery Manager: Release 10.2.0.1.0 Production on Tue Dec 2 13:58:19 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect target sys@REDDY;
target database Password:
connected to target database: TARUN (DBID=2431589971)
RMAN> connect catalog rman/rman@REDDY1;
connected to recovery catalog database
RMAN> create catalog;
recovery catalog created
RMAN> register database;
database registered in recovery catalog
SYSTEM
UNDOTBS1
SYSAUX
TEMP
TEST
SQL> select name from v$datafile;
NAME
/home/oracle/tarun/datafiles/system01.dbf
/home/oracle/tarun/datafiles/undotbs1.dbf
/home/oracle/tarun/datafiles/sysaux01.dbf
/home/oracle/test01.dbf
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /home/oracle/tarun/archive
Oldest online log sequence 14
Next log sequence to archive 15
Current log sequence 15
RMAN> list backup;
RMAN> backup database plus archivelog;
starting full resync of recovery catalog
full resync complete
Starting backup at 02-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=25 stamp=672158787
input archive log thread=1 sequence=2 recid=26 stamp=672408184
..
piece handle=/home/oracle/rmanbackup_0nk18ggt tag=TAG20081202T141758 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 02-DEC-08
Starting backup at 02-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/home/oracle/tarun/datafiles/system01.dbf
input datafile fno=00002 name=/home/oracle/tarun/datafiles/undotbs1.dbf
input datafile fno=00003 name=/home/oracle/tarun/datafiles/sysaux01.dbf
input datafile fno=00004 name=/home/oracle/test01.dbf
channel ORA_DISK_1: starting piece 1 at 02-DEC-08
channel ORA_DISK_1: finished piece 1 at 02-DEC-08
piece handle=/home/oracle/rmanbackup_0ok18ggv tag=TAG20081202T141807 comment=NONE
SYSTEM
UNDOTBS1
SYSAUX
TEMP
TEST
SQL> select name from v$datafile;
NAME
/home/oracle/tarun/datafiles/system01.dbf
/home/oracle/tarun/datafiles/undotbs1.dbf
/home/oracle/tarun/datafiles/sysaux01.dbf
/home/oracle/tarun/datafiles/test01.dbf
RESTORING THE BACKUP DATAFILE TO A NEW LOCATION WHILE THE DATABASE IS OPEN:
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
SYSTEM
UNDOTBS1
SYSAUX
TEMP
TEST
/home/oracle/tarun/datafiles/system01.dbf
/home/oracle/tarun/datafiles/undotbs1.dbf
/home/oracle/tarun/datafiles/sysaux01.dbf
/home/oracle/tarun/datafiles/test01.dbf
RMAN> run {
2> sql 'ALTER TABLESPACE TEST OFFLINE IMMEDIATE';
3> set newname for datafile '/home/oracle/tarun/datafiles/test01.dbf' to
'/home/oracle/tarun/test01.dbf';
4> restore tablespace TEST;
5> recover tablespace TEST;
6> switch datafile all;
7> recover tablespace TEST;
8> sql 'ALTER TABLESPACE TEST ONLINE';
9> }
sql statement: ALTER TABLESPACE TEST OFFLINE IMMEDIATE
executing command: SET NEWNAME
Starting restore at 02-DEC-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00004 to /home/oracle/tarun/test01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/rmanbackup_0ok18ggv
channel ORA_DISK_1: restored backup piece 1
piece handle=/home/oracle/rmanbackup_0ok18ggv tag=TAG20081202T141807
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 02-DEC-08
Starting recover at 02-DEC-08
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 02-DEC-08
datafile 4 switched to datafile copy
/home/oracle/tarun/datafiles/system01.dbf
/home/oracle/tarun/datafiles/undotbs1.dbf
/home/oracle/tarun/datafiles/sysaux01.dbf
/home/oracle/tarun/test01.dbf
RECOVERING THE DROPPED TABLESPACE:
[oracle@REDDY ~]$ rman
Recovery Manager: Release 10.2.0.1.0 Production on Fri Dec 5 11:08:28 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect target sys@REDDY;
target database Password:
connected to target database: TARUN (DBID=2432194154)
RMAN> connect catalog rman/rman@REDDY1;
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
SYSTEM
UNDOTBS1
SYSAUX
TEMP
SQL> create tablespace users datafile '/home/oracle/tarun/datafiles/users01.dbf' size 20m;
Tablespace created.
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
SQL> commit;
Commit complete.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /home/oracle/tarun/archive
Oldest online log sequence 45
Next log sequence to archive 46
Current log sequence 46
RMAN> run
2> {
3> backup database plus archivelog;
4> backup current controlfile;
5> }
2005 Full 6.55M DISK 00:00:01 05-DEC-08
BP Key: 2007 Status: AVAILABLE Compressed: NO Tag: TAG20081205T112638
Piece Name: /home/oracle/rman_05k1g3je
Control File Included: Ckp SCN: 152519 Ckp time: 05-DEC-08
SQL> set time on;
11:27:38 SQL> drop tablespace users including contents and datafiles;
Tablespace dropped.
RMAN> shutdown immediate;
starting full resync of recovery catalog
full resync complete
database closed
database dismounted
Oracle instance shut down
RMAN> startup nomount;
connected to target database (not started)
Oracle instance started
Total System Global Area 343932928 bytes
Fixed Size 1219328 bytes
Variable Size 289408256 bytes
Database Buffers 50331648 bytes
Redo Buffers 2973696 bytes
RMAN> restore controlfile;
Starting restore at 05-DEC-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: reading from backup piece /home/oracle/rman_05k1g3je
channel ORA_DISK_1: restored backup piece 1
piece handle=/home/oracle/rman_05k1g3je tag=TAG20081205T112638
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output filename=/home/oracle/tarun/control/c1.ctl
Finished restore at 05-DEC-08
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> run {
2> set until time "to_date('12/05/08 11:27:38','MM/DD/YY HH24:MI:SS')";
3> restore database;
4> recover database;
5> }
executing command: SET until clause
Starting restore at 05-DEC-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /home/oracle/tarun/datafiles/system01.dbf
restoring datafile 00002 to /home/oracle/tarun/datafiles/undotbs1.dbf
restoring datafile 00003 to /home/oracle/tarun/datafiles/sysaux01.dbf
restoring datafile 00004 to /home/oracle/tarun/datafiles/users01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/rman_02k1g3fu
channel ORA_DISK_1: restored backup piece 1
piece handle=/home/oracle/rman_02k1g3fu tag=TAG20081205T112446
channel ORA_DISK_1: restore complete, elapsed time: 00:01:16
Finished restore at 05-DEC-08
Starting recover at 05-DEC-08
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 47 is already on disk as file /home/oracle/tarun/datafiles/redo01.log
archive log thread 1 sequence 48 is already on disk as file /home/oracle/tarun/datafiles/redo02.log
archive log filename=/home/oracle/tarun/datafiles/redo01.log thread=1 sequence=47
archive log filename=/home/oracle/tarun/datafiles/redo02.log thread=1 sequence=48
media recovery complete, elapsed time: 00:00:01
Finished recover at 05-DEC-08
RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
NOTE:
1. I am using the controlfile instead of a recovery catalog for the RMAN repository.
2. Don't use the AUTOBACKUP controlfile option, because for incomplete recovery we need a backup
controlfile, not the current controlfile.
RECOVERING THE DROPPED REDOLOG GROUP AND REDO LOGFILES:
[oracle@REDDY ~]$ rman
Recovery Manager: Release 10.2.0.1.0 Production on Fri Dec 5 11:08:28 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect target sys@REDDY;
target database Password:
connected to target database: TARUN (DBID=2432194154)
RMAN> connect catalog rman/rman@REDDY1;
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN> resync catalog;
starting full resync of recovery catalog
System altered.
SQL> /
System altered.
RMAN> run
2> {
3> backup database plus archivelog;
4> backup current controlfile;
5> }
Starting backup at 05-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=41 recid=1 stamp=672664479
input archive log thread=1 sequence=42 recid=2 stamp=672664484
input archive log thread=1 sequence=43 recid=3 stamp=672664485
input archive log thread=1 sequence=44 recid=4 stamp=672664488
input archive log thread=1 sequence=45 recid=5 stamp=672664495
input archive log thread=1 sequence=46 recid=6 stamp=672665081
input archive log thread=1 sequence=47 recid=10 stamp=672665697
input archive log thread=1 sequence=48 recid=11 stamp=672665697
channel ORA_DISK_1: starting piece 1 at 05-DEC-08
channel ORA_DISK_1: finished piece 1 at 05-DEC-08
piece handle=/home/oracle/rman_0ek1g6r3 tag=TAG20081205T122154 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=12 stamp=672667321
input archive log thread=1 sequence=2 recid=13 stamp=672667326
input archive log thread=1 sequence=3 recid=14 stamp=672667332
input archive log thread=1 sequence=4 recid=15 stamp=672667334
input archive log thread=1 sequence=5 recid=16 stamp=672667340
input archive log thread=1 sequence=6 recid=17 stamp=672667439
input archive log thread=1 sequence=7 recid=22 stamp=672668106
input archive log thread=1 sequence=8 recid=21 stamp=672668106
RMAN-00571:
===========================================================
RMAN-03002: failure of startup command at 12/05/2008 14:32:43
ORA-00205: error in identifying control file, check alert log for more info
RMAN> shutdown immediate;
Oracle instance shut down
RMAN> startup nomount;
connected to target database (not started)
Oracle instance started
Total System Global Area 343932928 bytes
Fixed Size 1219328 bytes
Variable Size 289408256 bytes
Database Buffers 50331648 bytes
Redo Buffers 2973696 bytes
RMAN> restore controlfile;
Starting restore at 05-DEC-08
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: restoring control file
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
Note: Use of Resetlogs
SQL> select SEQUENCE#, RESETLOGS_TIME , RESETLOGS_CHANGE# from v$log_history;
SEQUENCE# RESETLOGS RESETLOGS_CHANGE#
45 04-DEC-08 1
46 04-DEC-08 1
47 04-DEC-08 1
48 04-DEC-08 1
49 04-DEC-08 1
1 05-DEC-08 172874
2 05-DEC-08 172874
3 05-DEC-08 172874
1 05-DEC-08 175625
2 05-DEC-08 175625
3 05-DEC-08 175625
OPEN
SQL> select name from v$controlfile;
NAME
/home/oracle/tarun/control/c1.ctl
SQL> !
[oracle@REDDY ~]$ cd tarun/control/
[oracle@REDDY control]$ ls
c1.ctl
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
RMAN> run
2> {
3> backup database plus archivelog;
4> backup current controlfile;
5> }
starting full resync of recovery catalog
full resync complete
Starting backup at 06-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
TEST
6 rows selected.
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
TEST
TBS
TC
8 rows selected.
SQL> alter system switch logfile;
System altered.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /home/oracle/tarun/archive
Oldest online log sequence 9
Next log sequence to archive 10
Current log sequence 10
RMAN> resync catalog;
starting full resync of recovery catalog
full resync complete
RMAN> run
2> {
3> backup database plus archivelog;
4> backup current controlfile;
5> }
Starting backup at 11-DEC-08
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=94 stamp=673107981
input archive log thread=1 sequence=2 recid=95 stamp=673187822
input archive log thread=1 sequence=3 recid=96 stamp=673188203
input archive log thread=1 sequence=4 recid=97 stamp=673188358
input archive log thread=1 sequence=5 recid=98 stamp=673188428
input archive log thread=1 sequence=6 recid=99 stamp=673188429
input archive log thread=1 sequence=7 recid=100 stamp=673188436
input archive log thread=1 sequence=8 recid=101 stamp=673188442
input archive log thread=1 sequence=9 recid=102 stamp=673188448
input archive log thread=1 sequence=10 recid=103 stamp=673188526
channel ORA_DISK_1: starting piece 1 at 11-DEC-08
SYSTEM
UNDOTBS1
SYSAUX
TEMP
USERS
TEST
TBS
TC
8 rows selected.
Allocating Multiple Channels Using RMAN:
Connect to the target database
[oracle@REDDY ~]$ rman
Comparison of backup types (benefits and drawbacks):

Full backup:
Benefits: Restoration is fast, since you only need one set of backup data.
Drawbacks: The backing-up process is slow; high storage requirements.

Differential backup:
Benefits: Restoration is faster than using incremental backups; not as much storage is needed as for a
full backup.
Drawbacks: Restoration is slower than using a full backup; creating a differential backup is slower than
creating an incremental backup.

Incremental backup:
Drawbacks: Restoring from incremental backups is the slowest, because it may require several sets of
data to fully restore all the data. For example, if you had a full backup and six incremental backups,
restoring the data would require you to process the full backup and all six incremental backups.
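The trade-off can be made concrete by counting how many backup sets a restore has to read. This small Python sketch (illustrative only) assumes one full backup followed by n periodic backups of the given type:

```python
# Illustrative count of the backup sets a restore must process, assuming
# one full backup followed by n periodic backups of the given type.

def restore_sets(strategy, n):
    if strategy == "full":
        return 1          # only the latest full backup
    if strategy == "differential":
        return 2          # latest full + latest differential
    if strategy == "incremental":
        return 1 + n      # full + every incremental taken since it
    raise ValueError("unknown strategy: " + strategy)

# The example from the text: a full backup plus six incrementals means
# processing all seven sets.
print(restore_sets("incremental", 6))   # 7
print(restore_sets("differential", 6))  # 2
```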
The Flashback Database is not just our database rewind button. It is a Time Machine for our
Database data that is one single command away from us.
The Flashback Database Architecture :
Flashback Database uses its own type of log files, called flashback database log files. To support this
mechanism, Oracle uses a new background process called RVWR (Recovery Writer) and a new buffer in
the SGA, called the flashback buffer. The Oracle database periodically logs before-images of data blocks
in the flashback buffer. The flashback buffer records images of all changed data blocks in the
database. This means that every time a data block in the database is altered, the database writes a
before-image of this block to the flashback buffer. This before-image can be used to reconstruct a
datafile to an earlier point in time.
The maximum allowed memory for the flashback buffer is 16 MB. We don't have direct control over its
size. The flashback buffer size depends on the size of the current redo log buffer, which is controlled by
Oracle. Starting with 10g R2, the log buffer size cannot be controlled manually by setting the
initialization parameter LOG_BUFFER.
In 10g R2, Oracle combines the fixed SGA area and the redo buffer together.
Flashback log files can be created only under the Flash Recovery Area (which must be configured before
enabling the Flashback Database functionality). RVWR creates flashback log files in a directory
named FLASHBACK under the FRA. The size of every generated flashback log file is again under Oracle's
control. In the current Oracle environment, during normal database activity, flashback log files
have a size of 8,200,192 bytes, which is very close to the current redo log buffer size. The size of a
generated flashback log file can differ during shutdown and startup activities, and during highly
write-intensive activity as well.
Flashback log files can be written only under the FRA (Flash Recovery Area). The FRA is closely related
to, and built on top of, Oracle Managed Files (OMF). OMF is a service that automates the naming, location,
creation, and deletion of database files. By using OMF and the FRA, Oracle easily manages flashback log
files. They are created with automatically generated names and the extension .FLB; for instance, one
flashback log file is named O1_MF_26ZYS69S_.FLB.
By their nature, flashback logs are similar to redo log files: LGWR writes the contents of the redo log
buffer to the online redo log files, while RVWR writes the contents of the flashback buffer to the flashback
database log files. Redo log files contain all changes performed in the database; that data is needed in
case of media or instance recovery. Flashback log files contain only the changes needed in case of a
flashback operation. The main differences between redo log files and flashback log files are:
Flashback log files are never archived - they are reused in a circular manner.
Redo log files are used to roll changes forward in case of recovery, while flashback log files are
used to roll changes backward in case of a flashback operation.
Flashback log files can be compared with UNDO data (contained in UNDO tablespaces) as well.
While UNDO data contains changes at the transaction level, flashback log files contain UNDO data
at the data block level. And while the UNDO tablespace doesn't record all operations performed on the
database (for instance, DDL operations), flashback log files record that data as well. In a few
words, flashback log files contain the UNDO data for our database.
To Summarize
UNDO data doesn't contain all changes that are performed in the database, while flashback
logs contain all altered blocks in the database.
UNDO data is used to roll back changes at the transaction level, while flashback logs are used
to roll back changes at the database level.
We can query V$FLASHBACK_DATABASE_LOGFILE to find detailed information about our flashback log
files. Although this view is not documented, it can be very useful for checking and monitoring generated
flashback logs.
There is a new record section within the control file header named FLASHBACK LOGFILE
RECORDS. It is similar to the LOG FILE RECORDS section and contains information about the lowest and
highest SCN contained in every particular flashback database log file:
***************************************************************************
FLASHBACK LOGFILE RECORDS
***************************************************************************
(size = 84, compat size = 84, section max = 2048, section in-use = 136,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 139, numrecs = 2048)
FLASHBACK LOG FILE #1:
(name #4) E:\ORACLE\FLASH_RECOVERY_AREA\ORCL102\FLASHBACK\O1_MF_26YR1CQ4_.FLB
Thread 1 flashback log links: forward: 2 backward: 26
size: 1000 seq: 1 bsz: 8192 nab: 0x3e9 flg: 0x0 magic: 3 dup: 1
Low scn: 0x0000.f5c5a505 05/20/2006 21:30:04
High scn: 0x0000.f5c5b325 05/20/2006 22:00:38
What does a Flashback Database operation do?
When we perform a flashback operation, Oracle needs all flashback logs from now back to the desired
time. They are applied consecutively, starting from the newest and going to the oldest. For instance, if we
want to flash back the database to SCN 4123376440, Oracle will read the flashback logfile section in the
control file and check the availability of all needed flashback log files. The last needed
flashback log is the one whose Low SCN and High SCN values bracket the desired SCN 4123376440.
In the current environment this is the file named O1_MF_26YSTQ6S_.FLB, with values of:
Low SCN : 4123374373
High SCN : 4123376446
Note: To perform a flashback operation successfully, we always need at least one archived (or online
redo) log file to be available. This is the particular file that contains redo information about the changes
around the desired flashback point in time (SCN 4123376440). In this case, it is the
archived redo log named ARC00097_0587681349.001, with values of:
First change#: 4123361850
Next change#: 4123380675
The flashback operation will not succeed without this particular archived redo log. The reason is that
flashback log files contain before-images of data blocks, each related to some SCN (System Change
Number). When we flash back the database to SCN 4123376440, Oracle cannot complete the operation
by applying flashback logs alone, because they contain only before-images. Oracle restores each data
block (by applying flashback log files) to its state at the closest possible point in time before SCN
4123376440. This guarantees that the subsequent redo apply will roll the database forward to exactly
SCN 4123376440, leaving it in a consistent state. After applying the flashback logs, Oracle performs
this forward operation by applying redo from the archived (or online) redo logs.
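The lookup described above amounts to a simple range search. The values below mirror the example in the text; the helper itself is hypothetical, not an Oracle interface:

```python
# Hypothetical sketch of the lookup above: find the flashback log and the
# archived redo log whose SCN/change ranges cover the target SCN.

def covering(files, scn):
    """files: list of (name, low_scn, high_scn); return the first name
    whose range contains scn, or None if no file covers it."""
    for name, low, high in files:
        if low <= scn <= high:
            return name
    return None

# SCN ranges from the example in the text.
flashback_logs = [("O1_MF_26YSTQ6S_.FLB", 4123374373, 4123376446)]
archived_logs = [("ARC00097_0587681349.001", 4123361850, 4123380675)]

target = 4123376440
print(covering(flashback_logs, target))  # O1_MF_26YSTQ6S_.FLB
print(covering(archived_logs, target))   # ARC00097_0587681349.001
```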
A flashback log is created whenever necessary to satisfy the flashback retention target, as
long as there is enough space in the flash recovery area.
A flashback log can be reused, once it is old enough that it is no longer needed to satisfy the
flashback retention target.
If the database needs to create a new flashback log and the flash recovery area is full or there
is no disk space, then the oldest flashback log is reused instead.
If the flash recovery area is full, then an archived redo log may be automatically deleted by
the flash recovery area to make space for other files. In such a case, any flashback logs that would
require that redo log file for FLASHBACK DATABASE are also deleted.
Note : Re-using the oldest flashback log shortens the flashback database window. If enough flashback
logs are reused due to a lack of disk space, the flashback retention target may not be satisfied.
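The creation and reuse rules above amount to a small decision procedure. The following Python sketch is a simplified model; the function name and its inputs are assumptions for illustration, not Oracle internals:

```python
# Simplified model of the flashback-log space rules above (illustrative,
# not Oracle internals).

def flashback_log_action(fra_has_space, oldest_log_still_needed):
    if fra_has_space:
        return "create new flashback log"
    if oldest_log_still_needed:
        # Reusing a log that is still needed to satisfy the retention
        # target shortens the flashback window.
        return "reuse oldest (window shrinks)"
    return "reuse oldest"

print(flashback_log_action(True, False))   # create new flashback log
print(flashback_log_action(False, False))  # reuse oldest
print(flashback_log_action(False, True))   # reuse oldest (window shrinks)
```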
Limitations of Flashback Database:
Since Flashback Database works by undoing changes to the datafiles that exist at the moment
we run the command, it has the following limitations:
Flashback Database can only undo changes made to a datafile by an Oracle database. It cannot be
used to repair media failures, or to recover from accidental deletion of datafiles.
We cannot use Flashback Database to undo a shrink datafile operation.
If the database control file is restored from backup or re-created, all accumulated flashback
log information is discarded. We cannot use FLASHBACK DATABASE to return to a point in time
before the restore or re-creation of a control file.
Flashback Database should be part of our backup & recovery strategy, but it does not supersede the normal physical backup & recovery strategy. It is only an additional layer of protection for our database data.
Flashback Database can take a database back to its state at any point in time within the flashback window, but only if all flashback logs and their related archived redo logs for the spanned time period are physically available and accessible.
Always ensure that archived redo logs covering the flashback window are available on either tape or disk.
We cannot perform a flashback database operation after a media failure; in that case we must use the traditional database point-in-time media recovery method.
Always write down the current SCN and/or create a restore point (10g R2) before any significant change to our database: applying patches, running batch jobs that can corrupt the data, etc. As we know, the most common cause of downtime is change.
Always write down the current SCN and/or create a restore point (10g R2) before starting a flashback operation.
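As a sketch, noting the SCN and creating a restore point before a risky change could look like this (the restore point name is an assumed example):

```sql
-- record the current SCN
select current_scn from v$database;

-- create a named restore point (10g R2)
create restore point before_patch;
```

A guaranteed restore point (`create restore point ... guarantee flashback database`) additionally forces retention of the flashback logs needed to reach it.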
Flashback Database is the only flashback operation that can undo the result of a TRUNCATE command (FLASHBACK DROP, FLASHBACK TABLE, and FLASHBACK QUERY cannot be used for this).
Dropping a tablespace cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation.
Shrinking a datafile cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation.
Resizing a datafile cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation. If we need to perform a flashback operation covering this time period, we must take the datafile offline before performing the flashback operation.
Re-creating or restoring the control file prevents the use of Flashback Database to go before that point in time.
We can flash back the database to a point in time before a RESETLOGS operation. This feature is available from 10g R2 because the flashback log files are not deleted after a RESETLOGS operation. We cannot do this in 10g R1 because old flashback logs are deleted immediately after a RESETLOGS operation.
Don't exclude the SYSTEM tablespace from flashback logging. Otherwise we will not be able to flash back the database.
Regularly monitor the size of the FRA and the generated flashback logs to ensure that there is no space pressure and that the flashback log data covers the desired flashback window.
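The space situation above can be checked with the standard FRA views (a sketch; both views exist in 10g):

```sql
-- overall FRA size, usage and reclaimable space
select name, space_limit, space_used, space_reclaimable, number_of_files
from   v$recovery_file_dest;

-- usage broken down by file type (flashback logs, archived logs, backups, ...)
select file_type, percent_space_used, percent_space_reclaimable
from   v$flash_recovery_area_usage;
```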
VALUE
---------
1440

VALUE
------------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SPFILENOIDA.ORA
shut immediate
startup mount
alter database archivelog ;
alter database open ;
SQL> alter system set db_recovery_file_dest='D:\oracle\product\10.2.0\flash_recovery_area' scope=both;
System altered.
5.) Set the recovery file destination size. This is the hard limit on the total space to be used
by target database recovery files created in the flash recovery area .
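A minimal sketch of this step (the 10G value is an assumed example; choose a limit appropriate for the site):

```sql
alter system set db_recovery_file_dest_size=10G scope=both;
```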
Flashback technology provides a set of features to view and rewind data back and forth in time. The flashback features offer the capability to query past versions of schema objects, query historical data, perform change analysis, and perform self-service repair to recover from logical corruption while the database is online. Here we will discuss some more features of Flashback.
TO_CHAR(SYSTIMESTAM
-------------------
2011-08-12 13:54:38

VERSIONS_STARTTIME     VERSIONS_ENDSCN VERSIONS_OPERATION DESCRIPTION
---------------------- --------------- ------------------ -----------
12.08.11 13:53:35.000                                     THREE
12.08.11 13:53:35.000          1366212                    TWO
12.08.11 13:53:35.000          1366209                    ONE

3 rows selected
VERSIONS_XID - ID of the transaction that created the row in its current state.
3.) Flashback Transaction Query : Flashback transaction query can be used to get extra information
about the transactions listed by flashback version queries. The VERSIONS_XID column values from a
flashback version query can be used to query the FLASHBACK_TRANSACTION_QUERY view.
OPERATION      START_SCN  COMMIT_SCN  LOGON_USER  UNDO_SQL
-------------- ---------- ----------- ----------- -------------------
UPDATE             725208      725209 SCOTT       ... where ROWID ...
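An output like the one above can be obtained with a query of roughly this shape (the XID literal is an assumed example, taken from the VERSIONS_XID column of a flashback version query):

```sql
select operation, start_scn, commit_scn, logon_user, undo_sql
from   flashback_transaction_query
where  xid = hextoraw('0600030021000000');
```

The UNDO_SQL column contains the SQL statement that would reverse the change.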
227
Flashback table lets us recover a table to a previous point in time. We don't have to take the tablespace offline during the recovery; Oracle acquires exclusive DML locks on the table or tables being recovered, but the tables remain online. When using flashback table, Oracle does not preserve ROWIDs when it restores rows in the changed data blocks, since it uses DML operations to perform its work. We must therefore enable row movement on the tables we are going to flash back (only flashback table requires row movement). If the data is no longer in the undo segments, we cannot recover the table using flashback table; however, we can use other means to recover it.
Restrictions on flashback table recovery : we cannot use flashback table on SYS objects. We cannot flash back a table that has had preceding DDL operations such as table structure changes or dropped columns. The flashback must succeed entirely or it will fail; if flashing back multiple tables, all tables must be flashed back or none. Any constraint violation will abort the flashback operation. We cannot flash back a table that has had any shrink or storage changes (PCTFREE, INITRANS and MAXTRANS). The following example creates a table, inserts some data, and flashbacks to a point prior to the data insertion. Finally it flashbacks to the time after the data insertion. Here is a demo of the Flashback Table :
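A sketch of such a demo, with assumed object names and placeholder SCN substitution variables, could look like this:

```sql
create table flashback_table_test (id number(10));
alter table flashback_table_test enable row movement;

-- note the SCN before the insert, e.g. select current_scn from v$database;
insert into flashback_table_test values (1);
commit;
-- note the SCN after the insert

flashback table flashback_table_test to scn &scn_before_insert;  -- table is empty again
flashback table flashback_table_test to scn &scn_after_insert;   -- the row is back
```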
In Oracle 10g the default action of a DROP TABLE command is to move the table to the recycle bin (or
rename it), rather than actually dropping it. The PURGE option can be used to permanently drop a
table.
The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP
privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and
purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered
from the recycle bin.
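A minimal sketch of recovering a dropped table (the table name is an assumed example):

```sql
drop table recycle_test;
flashback table recycle_test to before drop;

-- or recover it under a new name:
flashback table recycle_test to before drop rename to recycle_test_old;
```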
Recycle Bin :
Dropped objects are purged from the recycle bin automatically when there is no room in the tablespace for new rows or updates to existing rows.
We can view the dropped objects in the recycle bin from two dictionary views:
user_recyclebin
dba_recyclebin
id NUMBER(10) ) ;

OBJECT TYPE  DROPTIME
-----------  ------------------
table        201108:15:58:31EST
PURGE TABLE tablename;                  -- Specific table.
PURGE INDEX indexname;                  -- Specific index.
PURGE TABLESPACE ts_name;               -- All tables in a specific tablespace.
PURGE TABLESPACE ts_name USER username; -- All tables in a specific tablespace for a specific user.
PURGE RECYCLEBIN;                       -- The current user's entire recycle bin.
PURGE DBA_RECYCLEBIN;                   -- The whole recycle bin.
There is no fixed size for the recycle bin. The time an object remains in the recycle bin can
vary.
The objects in the recycle bin are restricted to query operations only (no DDL or DML).
Tables and all dependent objects are placed into, recovered and purged from the recycle bin at
the same time.
Tables with Fine Grained Access policies are not protected by the recycle bin.
Flashback Database
Flashback database is not enabled by default. When it is enabled, a background process (RVWR, the Recovery Writer) copies modified blocks to the flashback buffer, which is then flushed to disk as flashback logs. Remember that flashback logging is not a log of changes but a log of complete block images. Not every changed block is logged, as this would be too much for the database to cope with; only as many blocks are copied as will not impact performance. Flashback database constructs a version of the datafiles that is just before the time we want. The datafiles will probably be in an inconsistent state, as different blocks will be at different SCNs; to complete the flashback process, Oracle then uses the redo logs to recover all the blocks to the exact time requested, synchronizing all the datafiles to the same SCN. Archivelog mode must be enabled to use flashback database. An important note to remember is that flashback can only reverse changes; it cannot redo them.
The advantage of using flashback database is the speed and convenience with which we can take the database back in time. We can use RMAN, SQL, or Enterprise Manager to flash back a database. If the flash recovery area does not have enough room, the database will continue to function, but flashback operations may fail. It is not possible to flash back a single tablespace; we must flash back the whole database. If performance is being affected by flashback data collection, turn flashback logging off for some tablespaces.
We cannot undo a datafile resize to a smaller size. When using BACKUP RECOVERY AREA and BACKUP RECOVERY FILES, control files, redo logs, permanent files, and flashback logs will not be backed up.
SQL> CREATE TABLE flashback_database_test (id NUMBER(10));
Table created.
SQL> conn / as sysdba
SQL> shut immediate
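Continuing from the shutdown above, a typical Flashback Database run looks roughly like this (the target time is an assumed example):

```sql
startup mount
flashback database to timestamp systimestamp - interval '10' minute;
alter database open resetlogs;
```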
The window of time that is available for flashback is determined by the db_flashback_retention_target parameter. The maximum flashback can be determined by querying the v$flashback_database_log view. It is only possible to flashback to a point in time after flashback was enabled on the database and since the last RESETLOGS command.
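For example, the oldest point we can flash back to can be checked like this (a sketch using the standard 10g view):

```sql
select oldest_flashback_scn, oldest_flashback_time
from   v$flashback_database_log;
```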
Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again:

SQL> select salary from hr.employees
     as of timestamp to_timestamp('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss')
     where employee_id = 121 ;

SALARY
---------
23. Where does RMAN keep information about backups if you are using RMAN without a catalog?
RMAN keeps information about backups in the control file.
CATALOG vs NOCATALOG
The difference is only in who maintains the backup records (when the last successful backup was taken, incremental or differential, etc.).
In CATALOG mode a separate database (the recovery catalog database) stores all the information.
In NOCATALOG mode the control file of the target database is responsible.
24. How do you see information about backups in RMAN?
RMAN> LIST BACKUP;
Use this SQL to check the progress of a running backup, giving the SID of the session performing it:
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;
25. How does RMAN improve backup time?
RMAN backup time is much lower compared to a regular online backup because RMAN copies only modified blocks.
26. What is the advantage of RMAN utility?
Central Repository
Incremental Backup
Corruption Detection
Advantages over a traditional backup system:
1). Copies only the filled blocks, i.e. even if 1000 blocks are allocated to a datafile but only 500 are filled with data, RMAN will back up only those 500 filled blocks.
2). Incremental and cumulative backups.
3). Catalog and nocatalog options.
4). Detection of corrupted blocks during backup.
5). Can create and store backup and recovery scripts.
6). Increased performance through automatic parallelization (allocating channels) and less redo generation.
27. List the encryption options available with RMAN?
RMAN offers three encryption modes: transparent mode, password mode and dual mode.
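As a sketch, the three modes map to these RMAN commands (the password is a placeholder):

```sql
RMAN> configure encryption for database on;            -- transparent mode (uses the Oracle wallet)
RMAN> set encryption on identified by mypassword only; -- password mode (password required to restore)
RMAN> set encryption on identified by mypassword;      -- dual mode (wallet or password can decrypt)
```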
28. What steps are required in $ORACLE_HOME to enable RMAN backups with NetBackup or TSM tape library software?
I can explain the steps to take an RMAN backup with a TSM tape library as follows:
1. Install TDPO (default path /usr/tivoli/tsm/client/oracle/).
2. Once TDPO is installed, a link is automatically created from the TDPO directory to /usr/lib. Now we need to create a soft link from the OS to ORACLE_HOME:
ln -s /usr/lib/libiobk64.a $ORACLE_HOME/lib/libobk.a (very important)
3. Uncomment and modify the tdpo.opt file in /usr/tivoli/tsm/client/oracle/bin/tdpo.opt as follows:
DSMI_ORC_CONFIG /usr/Tivoli/tsm/client/oracle/bin64/dsm.opt
DSMI_LOG /home/tmp/oracle
TDPO_NODE backup
TDPO_PSWDPATH /usr/tivoli/tsm/client/oracle/bin64
4. Create a dsm.sys file in the same path and add the entries:
SErvername <Server name>
TCPPort 1500
passwordaccess prompt
nodename backup
enablelanfree yes
TCPSERVERADDRESS <Server Address>
5. Create a dsm.opt file and add an entry:
SErvername <Server name>
6. Then take the backup:
RMAN>run
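With TDPO configured as above, the run block typically looks something like this (the channel name, parms path, and script body are assumed examples):

```sql
RMAN> run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database;
  release channel t1;
}
```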