
Backup Fundamentals 2 Mins Drill :
Without backups, database recovery is not possible in the event of a database failure that destroys data.


The business requirements that affect database availability,
whether the database should be recoverable to the point in
time the database failure occurred, along with the overall
volatility of data in the database, should all be considered
when developing a backup strategy.
Disaster recovery for any computer system can have the
following impact: loss of time spent recovering the system,
loss of user productivity correcting data errors or waiting for
the system to come online again, the threat of permanent loss
of data, and the cost of replacing hardware.
The final determination of the risks an organization is willing
to take with regard to their backup strategy should be handled
by management.
Complete recovery of data is possible in the Oracle database
but depends on a good backup strategy.
Database recovery consists of two goals: the complete
recovery of lost data and the rapid completion of the recovery
operation.
Testing backup and recovery strategy has three benefits:
weaknesses in the strategy can be corrected, data corruption
in the database that is being copied into the backups can be
detected, and the DBA can improve his or her own skills and
tune the overall process to save time.
The difference between logical and physical backups is the
same as the difference between the logical and physical view
of Oracle's usage of disk resources on the machine hosting
the database.

Logical backups are used to copy the data from the Oracle
database in terms of the tables, indexes, sequences, and
other database objects that logically occupy an Oracle
database.
The EXPORT and IMPORT tools are used for logical database
object export and import.
Physical backups are used to copy Oracle database files that
are present from the perspective of the operating system.
This includes datafiles, redo log files, control files, the
password file, and the parameter file.
To know what datafiles are present in the database, use the
V$DATAFILE or the DBA_DATA_FILES dictionary views.
To know what control files are present in the database, use the
show parameters control_files command from Server Manager
or look in the V$CONTROLFILE view.
To know what redo log files are available in the database, use
the V$LOGFILE dictionary view.
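As a quick sketch, the queries below can be run from Server Manager or SQL*Plus to list the physical files just described (output columns vary by release):
    SELECT name, status FROM v$datafile;
    SELECT file_name, tablespace_name FROM dba_data_files;
    SELECT name FROM v$controlfile;
    SHOW PARAMETERS control_files
    SELECT member FROM v$logfile;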
There are two types of physical backups: offline backups and
online backups.
Offline backups are complete backups of the database taken
when the database is closed. In order to close the database,
use the shutdown normal or shutdown immediate commands.
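A minimal offline backup sketch follows; the file locations are hypothetical and the copy commands depend on the operating system:
    SHUTDOWN IMMEDIATE
    -- copy every datafile, online redo log, control file, parameter file,
    -- and password file with operating system commands, for example:
    --   cp /u01/oradata/PROD/*.dbf /backup/PROD/
    STARTUP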
Online or "hot" backups are backups of tablespaces taken
while the database is running. This option requires that Oracle
be archiving its redo logs. To start an online backup, the DBA
must issue the alter tablespace name begin backup statement
from Server Manager. When complete, the DBA must issue the
alter tablespace name end backup statement.
Archiving redo logs is crucial for providing complete data
recovery to the point in time that the database failure occurs.

Redo logs can only be used in conjunction with physical backups.
When the DBA is not archiving redo logs, recovery is only
possible to the point in time the last backup was taken.
Databases that must be available 24 hours a day generally
require online backups because they cannot afford the
database downtime required for logical backups or offline
backups.
Database recovery time consists of two factors: the amount of
time it takes to restore a backup, and the amount of time it
takes to apply database changes made after the most recent
backup.
If archiving is used, then the time spent applying the changes
made to the database since the last backup consists of
applying archived redo logs. If not, then the time spent
applying the changes made to the database since the last
backup consists of users manually reentering the changes
they made to the database since the last backup.
The more changes made after the last database backup, the
longer it generally takes to provide full recovery to the
database.
Shorter recovery time can be achieved with more frequent
backups.
Each type of backup has varied time implications. In general,
logical and offline physical database backups require
database downtime.
Only online database backups allow users to access the data
in the database while the backup takes place.
The more transactions that take place on a database, the
more redo information that is generated by the database.

An infrequently backed-up database with many archived redo logs is just as recoverable as a frequently backed-up database with few online redo logs. However, the time spent handling the recovery is longer for the first option than the second.
Read-only tablespaces need backup only once, after the
database data changes and the tablespace is set to read-only.

Logical Backups 2 mins Drill :
The types of database failure are user error, statement failure, process failure, instance failure, and media failure.
User error occurs when the user permanently changes or removes data from a database in error. Rollback segments give supplemental ability to correct uncommitted user errors.
Statement failure occurs when there is something syntactically
wrong with SQL statements issued by users in the database.
Oracle rolls back these statements automatically.
Process failure occurs when a statement running against the
database is terminated either by Oracle or by the user.
Statement rollback, release of locks, and other process cleanup
actions occur automatically by PMON.

Instance failure occurs when there is some problem with the host
system running Oracle that forces the database to shut down.
Recovery from this problem occurs when the instance is
restarted. Instance recovery is handled automatically by the
SMON process.
Media failure occurs when there is some problem with the disks
that store Oracle data that renders the data unavailable. The
DBA must manually intervene in these situations to restore lost
data using backups.
Logical backup and recovery with EXPORT and IMPORT is one
means by which the DBA can support backup and recovery.
EXPORT and IMPORT both accept certain parameters that will
determine how the processes run.
These parameters are divided according to function.
There are parameters that handle the logistics of the database
export. These parameters are USERID, FILE, CONSISTENT, and
BUFFER.
There are parameters that limit the database objects that will be
exported. These parameters are INDEXES, CONSTRAINTS,
TRIGGERS, and GRANTS.
There are parameters that determine what mode the export will
run in. These parameters are OWNER, TABLES, FULL, and
INCTYPE.
Database export can happen with the EXPORT tool in three modes: table, user, and full.
In table mode, the DBA specifies a list of tables that will be
exported by EXPORT.
In user mode, the DBA specifies a list of users whose database
objects will be exported.

In full mode, the DBA will export all database objects, depending
on certain factors.
There are three types of full exports. They are complete,
cumulative, and incremental.
The type of export depends on the value specified for the
INCTYPE parameter. Values are complete, cumulative, and
incremental.
Complete exports save all database objects in the export file.
Incremental exports save all database objects that have been
altered or added since the last export of any type was taken.
Cumulative exports save all database objects that have been
altered or added since the last complete or cumulative export
was taken.
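A hedged sketch of the three full export types from the command line (the username/password and file names are placeholders):
    exp userid=system/manager full=y inctype=complete    file=exp_complete.dmp
    exp userid=system/manager full=y inctype=cumulative  file=exp_cumulative.dmp
    exp userid=system/manager full=y inctype=incremental file=exp_incremental.dmp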
There are parameters that handle the logistics of the database
import. These parameters are USERID, FILE, CONSISTENT, and
BUFFER.
There are parameters that limit the database objects that will be
imported. These parameters are INDEXES, CONSTRAINTS,
TRIGGERS, and GRANTS.
There are parameters that determine what mode import will run
in. These parameters are FROMUSER and TOUSER for user mode,
TABLES for table mode, and FULL and INCTYPE for full mode.
Database import can happen with the IMPORT tool in three modes: table, user, and full.
In table mode, the DBA specifies a list of tables that will be
imported by IMPORT.
In user mode, the DBA specifies a list of users whose database
objects will be imported.
In full mode, the DBA will import all database objects, depending
on certain factors.

There are two types of full imports. They are system imports and
restore imports.
The type of import executed depends on the value specified for the INCTYPE parameter. Values are system and restore.
Imports with FULL=y and INCTYPE=system restore database
objects in the export file for the data dictionary. This complete
import should always be the first performed in the event of a
database recovery. The most recent export should be used.
Imports with FULL=y and INCTYPE=restore restore all other
database objects in the export file to the damaged database.
There is a particular order required for database import.
First, the last export taken should be applied using the SYSTEM
import.
Next, the most recent complete export should be applied using
the RESTORE import.
Then, all cumulative exports taken since the complete export,
starting from least to most recent, should be applied using the
RESTORE import.
Finally, all incremental exports taken after the most recent
cumulative export should be applied, from least to most recent
incremental export, using the RESTORE import.
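The import order above might look like the following sequence of commands; the file names are hypothetical and stand for the most recent export of each type:
    # step 1: SYSTEM import from the last export taken
    imp userid=system/manager full=y inctype=system  file=exp_incr_last.dmp
    # step 2: RESTORE import from the most recent complete export
    imp userid=system/manager full=y inctype=restore file=exp_complete.dmp
    # step 3: RESTORE imports of cumulative exports taken since, oldest to newest
    imp userid=system/manager full=y inctype=restore file=exp_cumulative_1.dmp
    # step 4: RESTORE imports of incremental exports taken since, oldest to newest
    imp userid=system/manager full=y inctype=restore file=exp_incr_1.dmp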
There are three dictionary tables used to track exported data.
INCEXP lists all exported database objects and the exports that
contain them.
INCFIL lists all database exports by export ID number.
INCVID lists the most recent export ID number for the purpose of
generating a new export ID number.
Read consistency is established when data doesn't change for the life of a statement or transaction.

The CONSISTENT parameter handles read consistency for the database export.
Use of the CONSISTENT parameter is only permitted with the
complete or cumulative export.
Read consistency can also be established by barring user access
to the database during the time the backup is taken. This is done
by issuing alter system enable restricted session.
Export obtains data from the database using the SQL execution
mechanism that all other user processes use.
For better performance, the export can be run with the direct
path, eliminating some of the steps required to handle standard
SQL statements while optimizing the processing of other steps.
Direct path is specified using the DIRECT parameter. When using the direct path export, the BUFFER parameter is not valid.
Logical exports do not work in conjunction with archived redo
logs. This has two implications. First, without archived redo logs,
it is not possible to recover database changes to the point in
time the database failed. Second, there is no value added by
archiving redo logs.
Exports are in the same character set as the database from
which the data came.
Data from a database can only be exported in the same
character set as the database the data came from.
Data can be imported into another database using that database's character set. If the character sets are different,
IMPORT will execute a data conversion, which will lengthen the
time required for the import.
Backup and Recovery Without Archiving 2 mins Drill :
There are two types of media failure: temporary and permanent.


Temporary media failure is usually the result of hardware failure
of something other than the actual disk drive. After it is
corrected, the database can access its data again.
Permanent media failure is usually the result of damage to the
disk drive itself. Usually, the drive will need to be replaced and
the DBA will need to recover the data on the disk from backup.
In crisis situations, it is beneficial to the DBA to have strong
communication skills to facilitate important decisions in tough
situations with input from users and managers.
There are several different types of recovery. With archiving in
place, the DBA has more options to choose from.
Two categories of recovery exist: recovery with archiving and
recovery without.
In recovery with archiving, there are two categories: complete
recovery to the point in time of the database failure, and
incomplete recovery to some point in time before the failure
occurred.
There are three types of incomplete recovery when the database runs in archivelog mode: change-based, cancel-based, and time-based.
Change-based recovery is where the DBA specifies a system
change number that Oracle should use to denote the end of
database recovery.
Time-based recovery is where the DBA specifies a date and time
that Oracle should use to determine the end of database
recovery.
Cancel-based recovery runs until the DBA issues the cancel
command, taking advantage of the interactive process that
happens as Oracle restores archived redo log information.

Automatic recovery can be used to reduce the amount of interaction required for database recovery. When enabled, Oracle will automatically apply the archived redo logs it suggests.
When the DBA opens the database after recovery, the resetlogs
option can be used to discard online redo logs and to reset the
sequence number.
Database recovery can be accomplished from full offline
backups. The DBA should ensure that all files are restored from
backup, not just damaged ones, to ensure that the database is
consistent to a single point in time.
When restoring read-only tablespaces, it is important that a backup of the control file be made after the status of the tablespace was changed to read-write or to read-only.
There are two methods for determining the archive status of the
database: the DBA can look in V$DATABASE or execute archive
log list from Server Manager.
The DBA can set archiving on or off using the archivelog or
noarchivelog options in create database or alter database
statements.
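A short sketch of checking and then changing the archiving mode; changing the mode requires the database to be mounted but not open:
    SELECT log_mode FROM v$database;
    ARCHIVE LOG LIST
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;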
There are two methods available for archiving redo logs: manual
and automatic.
Automatic archiving is started with the alter system archive log
start statement. Substitute stop for start to shut off automatic
archiving.
If manual archiving is used, the DBA must make sure to archive
redo logs before LGWR runs out of online redo logs to write
information to. If archiving is used, LGWR will not overwrite an
online redo log until it has been archived. If Oracle runs out of
online redo logs for LGWR to write redo information to, no user
can make database changes until archiving happens.

Automatic archiving needs the LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT parameters to be set in init.ora.
LOG_ARCHIVE_DEST determines where redo log archives will be
placed.
LOG_ARCHIVE_FORMAT determines the nomenclature for the
archived redo information.
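For illustration, the init.ora entries and the statements controlling automatic archiving might look like the following; the destination path and format string are examples only, and LOG_ARCHIVE_START=TRUE is an assumption for enabling automatic archiving at instance startup:
    # init.ora
    LOG_ARCHIVE_DEST   = /u02/oradata/PROD/arch/arch
    LOG_ARCHIVE_FORMAT = arch_%s.log
    LOG_ARCHIVE_START  = TRUE

    ALTER SYSTEM ARCHIVE LOG START;
    ALTER SYSTEM ARCHIVE LOG STOP;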
Information about the archived redo log files is listed in the
V$LOG_HISTORY dynamic performance view.
Selective archiving of redo information is possible with the use of
several options for manual archiving. Those options are seq,
change, current, group, logfile, next, thread, and all.
seq allows the DBA to archive redo logs according to sequence
number. Each redo log is assigned a sequence number as LGWR
fills the online redo log.
change can be used to archive a redo log that contains a certain
SCN.
current archives the redo log that is currently being written by
LGWR, which forces Oracle to perform a log switch.
group allows the DBA to specify a redo log group for archiving.
logfile allows the DBA to archive redo logs by named redo log
member files.
next allows the DBA to archive redo information based on which
redo log is next to be archived.
thread is also an option for archiving redo logs. A thread is a
number representing the redo information for a single instance in
a multi-instance parallel server setup using Oracle's Parallel
Server Option.
The thread option can be set for any of the options for manually
or automatically archiving redo log information using the alter
system archive log statement.

The all option specifies archival of all redo logs that are currently
in need of being archived.
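As a sketch, the manual archiving options map onto statements like the following; the sequence numbers, SCN, group number, and file name are placeholders:
    ALTER SYSTEM ARCHIVE LOG SEQUENCE 124;
    ALTER SYSTEM ARCHIVE LOG CHANGE 405893;
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    ALTER SYSTEM ARCHIVE LOG GROUP 2;
    ALTER SYSTEM ARCHIVE LOG LOGFILE '/u01/oradata/PROD/redo02a.log';
    ALTER SYSTEM ARCHIVE LOG NEXT;
    ALTER SYSTEM ARCHIVE LOG ALL;
    ALTER SYSTEM ARCHIVE LOG THREAD 2 SEQUENCE 124;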
Backup and Recovery with Archiving :
Offline backups usually are full backups taken with the database
offline. They are required for complete database recovery using
the recover database option and for incomplete recovery.
Online backups are usually tablespace backups taken with the
database online. They are required for complete recovery using
the recover tablespace option. Tablespace recovery is not an
option for incomplete recovery.
Online backups of tablespaces are taken in the following way:
Prepare the database for backup using the alter tablespace begin
backup statement.
Make backups of the tablespace datafiles using operating system
commands.
End the tablespace backup using the alter tablespace end
backup statement.
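A minimal online backup sketch for one tablespace (the tablespace name and paths are hypothetical):
    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the tablespace's datafiles with operating system commands, e.g.:
    --   cp /u03/oradata/PROD/users01.dbf /backup/PROD/
    ALTER TABLESPACE users END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;   -- optional: archive the redo generated during the backup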
Due to the increased archive redo information taken during
online backups, and to the increased damage caused by
database failure during a backup, it is recommended that online
backups are taken one tablespace at a time, rather than doing
them in parallel.
A control file can be backed up in two ways. The first creates an
actual usable control file for the DBA to incorporate. This backup
is created with the alter database backup controlfile to filename
statement. The second creates a script that can be run to create
the control file. This backup is created with the same statement,
replacing filename with the keyword trace.
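For example, with a hypothetical backup file name:
    ALTER DATABASE BACKUP CONTROLFILE TO '/backup/PROD/control.bkp';
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;   -- writes a control file creation script to a trace file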

Complete database recovery with archiving is when the DBA can recover the database to the point in time of a database failure.
Incomplete recovery is recovery to any point in time in the past.
There are three types of incomplete recovery: time-based,
change-based, and cancel-based. They are differentiated in the
recover database option by what follows the until clause. Cancel-based uses until cancel, change-based uses until change scn, and time-based uses until time 'yyyy-mm-dd:hh24:mi:ss'.
Information about the status of a recovery can be found in two
dynamic performance views on the database: the
V$RECOVERY_FILE_STATUS and V$RECOVERY_STATUS
performance views.
Information about system change numbers contained in each
archived redo log can be found in V$LOG_HISTORY.
Database recovery is an interactive process where Oracle
prompts the DBA to supply the names of archived redo logs to
apply while also making suggestions based on V$LOG_HISTORY
and the two parameters for automatic archiving:
LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT.
The DBA can automate this process by specifying the automatic
option in the recover database statement. This option may not
be used in conjunction with cancel-based recovery.
For complete recovery using offline backups, or for incomplete
recovery, the database cannot be available for users. For
complete recovery of a tablespace only, the undamaged or
unaffected parts of the database can be available for use.
In some cases, it may be necessary to move datafiles as part of
recovery. The control file must be modified, if this is required,
with the alter database rename file statement.

Complete recovery is accomplished with offline backups in the following way:
Have the database mounted in exclusive mode but not opened.
Restore all backup copies of datafiles.
Specify new locations of datafiles if any were moved.
Execute the recover database operation, applying appropriate
archived redo logs.
Take a complete backup of database.
Open the database using the resetlogs option to discard archives
and reset sequence number.
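A sketch of this procedure, following the drill's steps; the paths are hypothetical, and the exact clauses depend on whether the control file was also restored:
    -- restore all datafiles from the offline backup with OS commands first
    STARTUP MOUNT
    ALTER DATABASE RENAME FILE '/u03/oradata/PROD/users01.dbf'
                            TO '/u04/oradata/PROD/users01.dbf';   -- only if a file was moved
    RECOVER DATABASE;                -- apply the suggested archived redo logs
    ALTER DATABASE OPEN RESETLOGS;
    -- take a complete backup of the database afterward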
Complete recovery is accomplished with online backups in the
following way:
The database can be open for use, but the damaged tablespace
must be offline.
Restore all backup copies of datafiles.
Specify new locations of datafiles if any were moved.
Execute the recover tablespace operation, applying appropriate
archived redo logs.
Bring the tablespace online.
Take an online backup of tablespace.
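A sketch of tablespace recovery with the database open; the tablespace name is a placeholder:
    ALTER TABLESPACE users OFFLINE IMMEDIATE;
    -- restore the tablespace's datafiles from the online backup with OS commands
    RECOVER TABLESPACE users;        -- apply the suggested archived redo logs
    ALTER TABLESPACE users ONLINE;
    -- take a fresh online backup of the tablespace afterward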
Situations that require incomplete recovery include when a data change is made in error at some point in the past and many other changes are made as a result of it (in effect, batch processing of the error).
Incomplete recovery may be required when the DBA loses an archived redo log file. To illustrate, there are three archived redo logs for a database, numbered 1, 2, and 3. Each archive contains information for 10 transactions (SCN 0-9, 10-19, and 20-29), for a total of 30 transactions. If archive sequence 3 is lost, the DBA can only recover the database through SCN 19, or archive sequence 2. If 2 is lost, then the DBA can only recover the database through SCN 9, and if archive sequence 1 is lost, then no archived redo log information can be applied.
Incomplete recovery is accomplished with offline backups in the
following way:
Have the database mounted in exclusive mode but not opened.
Restore all backup copies of datafiles.
Specify new locations of datafiles if any were moved.
Execute recover database operation, applying appropriate
archived redo logs. Use the appropriate incomplete recovery
option: Cancel-based uses until cancel, change-based uses until change scn, and time-based uses until time 'yyyy-mm-dd:hh24:mi:ss'.
Take complete backup of database.
Open the database using the resetlogs option to discard archives
and reset sequence number.
Create a new control file, if required, before initiating recovery
using the create controlfile statement. Be sure to specify
resetlogs and archivelog. If available, use the control file script
created when the trace option is used in backing up the control
file.
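A sketch of incomplete recovery; only one of the three until clauses would actually be used, and the SCN and timestamp shown are placeholders:
    -- restore ALL datafiles from backup with OS commands first
    STARTUP MOUNT
    RECOVER DATABASE UNTIL CANCEL;
    RECOVER DATABASE UNTIL CHANGE 405893;
    RECOVER DATABASE UNTIL TIME '1999-06-30:18:00:00';
    ALTER DATABASE OPEN RESETLOGS;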

The database will start quickly after a system failure because of the fast transaction rollback feature of the Oracle database.
There are two general parts to database recovery: roll forward
and rollback. Fast transaction recovery allows Oracle to open the
database after roll forward is complete, executing the rollback
while users access the database.

Fast transaction recovery eliminates the wait for a database to open after system failure, minimizing downtime so the DBA can initiate recovery more quickly. However, users entering the database may still encounter delays due to rollback segments still being involved with recovery, and locks on tables and rows that are still held by dead transactions.
In some organizations, a media failure may not impact all users.
To allow the users who are not impacted to continue using the
database even when datafiles are missing, the DBA can open the
database in the following way. First, the DBA should use startup
mount from Server Manager to start the instance and mount the
database. From there, the DBA can take the tablespaces
containing missing or damaged datafiles offline with the alter
tablespace offline statement. After that, the DBA can open the
database with the alter database open statement.
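A sketch of this procedure; note that while the database is only mounted, damaged files are often taken offline at the datafile level (an assumption here; the drill describes the tablespace-level statement), and the datafile path is a placeholder:
    STARTUP MOUNT
    ALTER DATABASE DATAFILE '/u03/oradata/PROD/users01.dbf' OFFLINE;
    ALTER DATABASE OPEN;
    -- unaffected tablespaces are now available to users while recovery proceeds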
Recovery on parts of a database that are missing can be
accomplished using the methods for complete tablespace
recovery. The DBA will require an online backup and all
appropriate archived redo logs. For this recovery, the tablespace
must be offline.
When the DBA is done recovering the tablespace damaged, while
the rest of the database is used by the users, the DBA can back
up the recovered tablespace or the entire database.
Parallel recovery can be used to improve recovery time for the
Oracle database. To engage in parallel recovery, the DBA can use
the recover database parallel statement from Server Manager.
There are two clauses that must be set for parallel recovery:
degree and instances.

The degree option indicates the degree of parallelism for the recovery, or the number of processes that will be used to execute the recovery.
The instances option indicates the number of instances that will
engage in database recovery.
The total number of processes that will engage in parallel
recovery equals degree times instances. This value may not
exceed the integer set for the RECOVERY_PARALLELISM
initialization parameter set in the init.ora file.
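For example (the degree, instance count, and parameter value are illustrative only):
    # init.ora
    RECOVERY_PARALLELISM = 4

    RECOVER DATABASE PARALLEL (DEGREE 4 INSTANCES 1);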
Troubleshooting the Oracle database can be accomplished by
looking in trace files or the alert log.
Every background process writes its own trace file, containing
information about when the background process started and any
errors it may have encountered.
A special trace file exists for the entire database, called the alert
log. This file contains trace information for database startup and
shutdown, archiving, any structural database change, and errors
encountered by the database.
The V$ performance views may also be used to detect errors in
the operation of the database.
The V$DATAFILE view carries information about the datafiles of
the database. One item it contains is the status of a datafile. If
the status of a datafile is RECOVER, there may be a problem with
media failure on the disk containing that datafile.
The V$LOGFILE view carries information about the redo log files
of the database. One item it contains is the status of a log file. If
the status of the logfile is INVALID, there could be a problem with
media failure on the disk containing that redo log.
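A quick sketch of checking for these statuses:
    SELECT file#, name, status FROM v$datafile WHERE status = 'RECOVER';
    SELECT group#, member, status FROM v$logfile WHERE status = 'INVALID';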

Another cause of problems in the Oracle database is data integrity. If there is a corruption in the online redo log of the database during archiving or an archived redo log during recovery, the DBA risks having an unusable set of backups for database recovery.
To minimize risk of storing corrupt archived redo information, the
DBA can use a verification process available for redo logs. To use
this verification process, the DBA should set the
LOG_ARCHIVE_CHECKSUM parameter in the init.ora file to TRUE
and restart the database.
The redo log verification process works as follows. At a log
switch, Oracle will check every data block in the redo log as it
writes the archive. If one is corrupt, Oracle will look at the same
data block in another redo log member. If that data block is
corrupt in all members of the online redo log file, Oracle will not
archive the redo log.
If Oracle does not archive the redo log, the DBA must intervene
by clearing the log file. This step is accomplished with the alter
database clear unarchived logfile group statement. If the log file
group has been archived, the unarchived clause above can be
eliminated.
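For example, for log file group 2 (the group number is a placeholder):
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
    -- if the group has already been archived:
    ALTER DATABASE CLEAR LOGFILE GROUP 2;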
Verifying the integrity of a database can also be executed on its
datafiles using the DBVERIFY utility. DBVERIFY is a stand-alone
utility that verifies a file or files of an offline database or
tablespace.
Operation of DBVERIFY involves specifying parameters to
manage its runtime behavior. The parameters that may be
specified include FILE, START, END, BLOCKSIZE, LOGFILE,
FEEDBACK, HELP, and PARFILE.

FILE is used to identify the filename of the datafile that DBVERIFY will analyze.
START is used to identify the Oracle block where DBVERIFY will
start analysis.
END is used to identify the Oracle block where DBVERIFY will end
analysis.
BLOCKSIZE is used to identify the size of blocks in the datafile.
LOGFILE is used to identify a file that DBVERIFY will write all
execution output to. If not used, DBVERIFY writes to the screen.
FEEDBACK is a special feature whereby DBVERIFY writes out dots on the screen (or to the logfile) to show progress, based on the number of pages it has processed.
HELP is used to obtain information about the other parameters.
PARFILE is used to place all other parameters in a parameter file.
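A sketch of a DBVERIFY run from the operating system prompt; the executable name (shown here as dbv), file path, and block size vary by platform and release:
    dbv file=/u03/oradata/PROD/users01.dbf blocksize=8192 logfile=users01_verify.log feedback=100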
The standby database is used by DBAs to create and maintain a
clone database for the purpose of minimizing downtime in the
event of a disaster.
The hardware used to support the standby database should be
identical to the machine that supports the production database.
To create a standby database, the DBA must do the following.
First, take a full backup of the database, either offline or online.
Then, create a standby database control file with the alter database create standby controlfile as filename statement. Finally, the
DBA should archive the current set of redo logs using the alter
system archive log current statement and move it all to the
standby machine.
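A sketch of the statements run on the primary database; the control file name is a placeholder:
    ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/PROD/standby.ctl';
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- copy the datafile backups, the standby control file, and all archived
    -- redo logs to the standby machine with OS commands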
After creating the standby database on the other machine, the
DBA should put the standby database into perpetual recovery
mode. The first step is to use the startup nomount option to start
the database with Server Manager. Then, the DBA must issue the recover standby database statement. At this point, the database will apply archived redo logs from the primary database to the standby database.
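A sketch of placing the standby in perpetual recovery mode; the mount statement is an assumption, since the standby control file typically must be mounted before recovery even though the drill does not list that step:
    STARTUP NOMOUNT
    ALTER DATABASE MOUNT STANDBY DATABASE;
    RECOVER STANDBY DATABASE;   -- apply archived redo logs shipped from the primary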
When a disaster strikes, the DBA can recover the final
transactions made to the primary database and move data to the
standby database. Starting the standby database is
accomplished in the following way. The DBA should shut down
the standby database and restart it using startup mount. From
there, the DBA should execute a recover database statement,
being sure to omit the standby clause.
As part of this recovery, the DBA should try to apply the current
online redo logs on the production database to the standby
database in order to capture all transaction information up to the
moment of failure on the other database. After recovery of the
standby database is complete, the DBA can execute the alter
database activate standby database statement.
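A sketch of the activation sequence on the standby machine, following the drill's steps:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    RECOVER DATABASE;    -- note: no standby clause; apply the remaining archived redo
                         -- and, if available, the primary's current online redo logs
    ALTER DATABASE ACTIVATE STANDBY DATABASE;
    -- the standby can then be opened as the new production database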
From this point on, the standby database is now the production
database. There is no step later where the DBA switches it back,
unless the original database is turned into a standby for the new
production database and the new production database fails.
The standby database, though costly, is the best option for minimizing downtime and making a fast recovery.
