
Miscellaneous Topics:

1. RMAN Delete Input Tips

RMAN Delete Input


It is not necessary to keep archived redo log files once they have been backed up.  We could
delete them manually, but RMAN can delete them automatically after the backup.  To do this, use
the delete input command.  As noted above, when RMAN backs up the archived redo log files, it
reads from only one log destination even though there may be more than one.  The same holds true
for the delete input command: RMAN deletes the archived redo log files from only that one
destination.  If the delete all input command is used instead, the archived redo log files are
deleted from all destinations.  The following example illustrates this in more detail.

We have two log archive destinations, arc_1 and arc_2, which reside under C:\, each containing
two archived redo log files with sequence numbers 6 and 7, respectively.
C:\>dir arc_1, arc_2
 Directory of C:\arc_1
17.08.2009  17:35            10 752 arc00006_0695149148.001
17.08.2009  17:35             1 024 arc00007_0695149148.001
               2 File(s)         11 776 bytes
 Directory of C:\arc_2
17.08.2009  17:35            10 752 arc00006_0695149148.001
17.08.2009  17:35             1 024 arc00007_0695149148.001
               2 File(s)         11 776 bytes

First, back up archived redo log 6 and use the delete input command to delete it, but only from
one log destination:
RMAN> backup archivelog sequence 6 delete input;

With that command, after backing up that archived redo log file, RMAN deletes it from only one
destination. See both destinations as follows:
C:\>dir arc_1, arc_2
 
 Directory of C:\arc_1
17.08.2009  17:35            10 752 arc00006_0695149148.001
17.08.2009  17:35             1 024 arc00007_0695149148.001
               2 File(s)         11 776 bytes

 Directory of C:\arc_2
17.08.2009  17:35             1 024 arc00007_0695149148.001
               1 File(s)          1 024 bytes

The archived redo log file from the arc_2 folder has been deleted. Now, back up archived redo
log file 7 and use the delete all input command to delete that file from all log destinations.
RMAN> backup archivelog sequence 7 delete all input;
C:\>dir arc_1, arc_2
Directory of C:\arc_1
17.08.2009  17:35            10 752 arc00006_0695149148.001
               1 File(s)         10 752 bytes
 
 Directory of C:\arc_2
               0 File(s)              0 bytes

All archived redo log files with sequence number 7 have been deleted from both directories.
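The same pattern extends from a single sequence to all archived redo log files at once; a
minimal sketch of a typical housekeeping command:

RMAN> backup archivelog all delete all input;

This backs up every archived redo log file and then removes each one from every log
destination, which is a common way to combine the backup and cleanup in one step.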

Back Up Tablespaces

To back up a tablespace, use the backup tablespace command. Using this command, RMAN
automatically creates a backup of all datafiles in the tablespace.
RMAN> backup tablespace users, system;
input datafile fno=00001
name=c:\oracle\product\10.2.0\oradata\test\system01.dbf
input datafile fno=00004 name=c:\oracle\product\10.2.0\oradata\test\users01.dbf

To create a backup file with specific format, add the format keyword:
RMAN> backup tablespace users format 'C:\users_tbs.bkp';

Moreover, image copies of the datafiles in a tablespace may be taken. In this example, we first
get the sizes of the two datafiles from the OS.
C:\oracle\product\10.2.0\oradata\test1>dir sysaux01.dbf, users01.dbf
09.08.2009  01:00       251 666 432 sysaux01.dbf
09.08.2009  00:58         5 251 072 users01.dbf

Then back up these datafiles at the tablespace level and compare the two results:
RMAN> backup as copy tablespace users, sysaux format 'c:\backup_tbs_%t_4.bkp';
c:\>dir *.bkp
09.08.2009  00:58       251 666 432 backup_tbs_694400269_4.bkp
09.08.2009  00:58         5 251 072 backup_tbs_694400284_4.bkp

As can be seen, the size of each image copy matches its datafile. These files can be restored
with RMAN if the database crashes. RMAN does not back up temporary tablespaces; their tempfiles
never appear in the list of files shown in the RMAN backup output.  During recovery, RMAN
recreates the tempfiles automatically.  If RMAN is not able to create a temporary tablespace for
some reason, a message is recorded in the alert.log file and the database still opens
successfully.   If backing up the temporary tablespace is attempted, an error message like this
appears:
RMAN> backup tablespace temp;

Starting backup at 15-AUG-09


using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=143 devtype=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== error message stack follows ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/15/2009 11:34:05
RMAN-20202: tablespace not found in the recovery catalog
RMAN-06019: could not translate tablespace name "TEMP"

Because RMAN does not back up the temporary tablespace, it creates the temporary tablespace
automatically during the recovery process.  In this example, the database was restored at
11:43, and all datafiles were restored except the tempfile.  At 11:44, when the database was
recovered, we see that the temporary tablespace was created automatically as follows:
C:\oracle\product\10.2.0\oradata\orcl>dir /tc /od *.dbf
15.08.2009  11:43       503 324 672 system01.dbf
15.08.2009  11:43       251 666 432 sysaux01.dbf
15.08.2009  11:43        26 222 592 undotbs01.dbf
15.08.2009  11:43         5 251 072 users01.dbf
15.08.2009  11:44        20 979 712 temp01.dbf
               5 File(s)    807 444 480 bytes
               0 Dir(s)  14 827 810 816 bytes free
C:\oracle\product\10.2.0\oradata\orcl>

 
2. RMAN crosscheck Tips

The RMAN crosscheck command is used when RMAN indicates that a file is "missing":

RMAN> crosscheck backuppiece xxx;

crosschecked backup piece: found to be 'EXPIRED'

backup piece handle=xxx

crosschecked 1 objects

As we can see, the RMAN crosscheck command compares the RMAN catalog entries with the actual
OS files and reports any "expired" or "obsolete" RMAN catalog entries.

Once an RMAN crosscheck has identified expired, deleted, or obsolete catalog entries, you can run
an RMAN delete command to remove these entries and synchronize the RMAN catalog with the
real database files:
RMAN> delete noprompt obsolete;
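For expired entries specifically (records whose underlying files the crosscheck could not find
on disk), the matching cleanup is delete expired; a minimal sketch:

RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;

This removes only the catalog records whose files are gone, leaving valid backups untouched.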

The RMAN crosscheck command can also be used to validate the archived redo log entries in the catalog:
crosscheck archivelog all ;

This crosscheck command is used when you get an RMAN-06059:


oracle> oerr RMAN 06059

RMAN-06059 expected archived log not found, loss of archived log compromises recoverability

Cause: The archived log was not found. The repository thinks it does exist. If the archived log
has in fact been lost and there is no backup, then the database is no longer recoverable across
the point in time covered by the archived log. This may occur because the archived log was
removed by an outside utility without updating the repository.

Action: If the archived log has been removed with an outside utility and the archivelog has
already been backed up, then you can synchronize the repository by running CROSSCHECK ARCHIVELOG
ALL. If the archivelog has not been previously backed up, then you should take a full backup of
the database and archivelogs to preserve recoverability. Previous backups are not fully
recoverable.

 The syntax for the RMAN crosscheck command is simple:


allocate channel ch1 type disk ;
crosscheck archivelog all ;
release channel ch1 ;
3. RMAN recovery window Retention Policy Tips

The Recovery Window retention policy doesn't just keep the last N days of backups. It
calculates the point-of-recovery (POR) value, taking into account the current date and the
number of days specified in the retention policy.

The POR covers more than just the backups of the last N days; any backup needed to recover to
the POR must also be retained. For example, assume today is the 20th of April (the year is not
needed here) and you decide to go with a Recovery Window of 2 days.

In the case of non-incremental backup, this is how it would look:


Current_Date   Status                     POR
20-April       Available                  20-2 = 18th April
21-April       Available                  21-2 = 19th April
22-April       Available                  22-2 = 20th April
23-April       Backup of 20th obsoleted   23-2 = 21st April

So on the 23rd, the backup of the 20th would be marked as obsolete.

The same would be true in the case of incremental backups, with one additional layer: instead
of considering just one backup, Oracle also considers whether a backup is level 0 or level 1,
and will not mark a level 1 backup obsolete if it is still needed for recovery.

RMAN Recovery Window Retention Policy


It is always important to have backups of the database. In the same manner, it is also important
to delete those backups which are no longer required for recovery. RMAN makes identifying these
backups very easy using a retention policy, which governs which backups are safe to delete.

Using retention policy, backups are marked as obsolete, which means not required anymore.
There are two possible settings for retention policy:

 Redundancy
 Recovery window

Redundancy stands for the number of copies that must be available. Once you have the chosen
number of backup copies, any older backups are marked as obsolete.  This does not put any sort
of restriction on how long the backups can be kept. This kind of setting works best for
environments where, depending on the SLA (service level agreement), some specified number of
backup copies must be kept. By default, this is the setting on which RMAN works, using only
the most current backup copy. Depending on the requirement, the number of copies can be
changed using the configure command like the following:
RMAN> configure retention policy to redundancy 2;

In the above example, the redundancy has been set to two. This means that if you take two
backups of your database now, the oldest copy of the backup would be marked as obsolete.

The other setting, recovery window, is based not on the number of backup copies, but on the
number of days for which you want to keep the backups. It places no condition on the number of
backup copies that can be retained; you can have any number of copies within the specified
recovery window. Any backup that is older than the recovery window is marked as obsolete. 

With a recovery window, Oracle checks each backup and looks backwards in time to determine its
relevance. If, for example, the window is set to seven days, each backup must be kept for at
least seven days, ensuring that you can use it for recovery to any point in time within that
period. Any file which does not fall within this range of n days would require a new backup,
and RMAN will report this as well.  To illustrate the above, look at a hypothetical situation
where a backup is done every week and the recovery window is also set to seven days.

You can see here that the backup is taken every seventh day.  Now assume that you started on
July 1st and have taken backups on the 8th, 15th, 22nd and 29th of July. Assuming the current
date is the 25th of July, according to the recovery window of seven days, the point of
recoverability goes up to the 18th of July. This means that to ensure the recoverability, the
backup taken on 15th will be kept by Oracle so that you can recover up to that point.

Remember: To be able to recover to the 18th with a database backup of the 15th, subsequent
archived redo logs from the 15th to the 18th are required.

One interesting aspect of this type of retention policy is that RMAN may keep a backup for more
than the mentioned number of days. If the last level 0 backup is older than the recovery
window, Oracle must still ensure that it is not marked as obsolete, because the level 1 backups
alone cannot guarantee a complete recovery.

The default retention policy is set to a redundancy of 1. You can change it from its default value
by using the configure command. Note that both redundancy and recovery window retention
policies are mutually exclusive. Now see what retention policy your database is set to. You can
use the show retention policy command for this:
RMAN> show retention policy;
RMAN configuration parameters are:
configure retention policy to redundancy 1;

So you have the retention policy set to redundancy 1. Change it to the recovery window of seven
days:
RMAN> configure retention policy to recovery window of 7 days;
old RMAN configuration parameters:
configure retention policy to redundancy 1;
new RMAN configuration parameters:
configure retention policy to recovery window of 7 days;
new RMAN configuration parameters are successfully stored

So now you have the retention policy set to seven days of the recovery window. In case you want
to disable the retention policy for some reason, you can even do that. Here is how. 
configure retention policy to none;

This would disable the retention policy settings. However, doing so would stop the report obsolete and
delete obsolete commands from marking any of your previously done backups as obsolete.
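Even with the configured policy set to none, the criteria can still be supplied directly on the
command line to those commands; a minimal sketch:

RMAN> report obsolete recovery window of 7 days;
RMAN> delete noprompt obsolete recovery window of 7 days;

This reports and removes backups that fall outside a seven-day window regardless of the
persistent retention policy setting.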
4. Summary of RMAN Commands
Recovery Manager Commands
Command Purpose
@ (at sign) Run a command file.

@@ (double at sign) Run a command file in the same directory as another command file that is
currently running. The @@ command differs from the @ command only when run from within a
command file.

ADVISE FAILURE Display repair options.

ALLOCATE CHANNEL Establish a channel, which is a connection between RMAN and a database instance.

ALLOCATE CHANNEL FOR MAINTENANCE Allocate a channel in preparation for issuing maintenance
commands such as DELETE.

ALTER DATABASE Mount or open a database.

BACKUP Back up database files, copies of database files, archived logs, or backup sets.

CATALOG Add information about file copies and user-managed backups to the repository.

CHANGE Mark a backup piece, image copy, or archived redo log as having the status
UNAVAILABLE or AVAILABLE; remove the repository record for a backup or copy;
override the retention policy for a backup or copy; update the recovery catalog with the
DB_UNIQUE_NAME for the target database.

CONFIGURE Configure persistent RMAN settings. These settings apply to all RMAN sessions until
explicitly changed or disabled.

CONNECT Establish a connection between RMAN and a target, auxiliary, or recovery catalog
database.

CONVERT Convert data file formats for transporting tablespaces and databases across platforms.

CREATE CATALOG Create the schema for the recovery catalog.

CREATE SCRIPT Create a stored script and store it in the recovery catalog.

CROSSCHECK Determine whether files managed by RMAN, such as archived logs, data file copies,
and backup pieces, still exist on disk or tape.

DELETE Delete backups and copies, remove references to them from the recovery catalog, and
update their control file records to status DELETED.

DELETE SCRIPT Delete a stored script from the recovery catalog.

DROP CATALOG Remove the schema from the recovery catalog.


DROP DATABASE Delete the target database from disk and unregister it.

DUPLICATE Use backups of the target database to create a duplicate database that you can use for
testing purposes or to create a standby database.

EXECUTE SCRIPT Run an RMAN stored script.

EXIT Quit the RMAN executable.

FLASHBACK DATABASE Return the database to its state at a previous time or SCN.

GRANT Grant privileges to a recovery catalog user.

HOST Invoke an operating system command-line subshell from within RMAN or run a
specific operating system command.

IMPORT CATALOG Import the metadata from one recovery catalog into a different recovery catalog.

LIST Produce a detailed listing of backup sets or copies.

PRINT SCRIPT Display a stored script.

QUIT Exit the RMAN executable.

RECOVER Apply redo log files and incremental backups to data files or data blocks restored from
backup or datafile copies, to update them to a specified time.

REGISTER DATABASE Register the target database in the recovery catalog.

RELEASE CHANNEL Release a channel that was allocated with an ALLOCATE CHANNEL command or
ALLOCATE CHANNEL FOR MAINTENANCE command.

REPAIR FAILURE Repair one or more failures recorded in the automated diagnostic repository.

REPLACE SCRIPT Replace an existing script stored in the recovery catalog. If the script does not exist,
then REPLACE SCRIPT creates it.

REPORT Perform detailed analyses of the content of the recovery catalog.

RESET DATABASE Inform RMAN that the SQL statement ALTER DATABASE OPEN RESETLOGS has
been executed and that a new incarnation of the target database has been created, or
reset the target database to a prior incarnation.

RESTORE Restore files from backup sets or from disk copies to the default or a new location.

RESYNC CATALOG Perform a full resynchronization, which creates a snapshot control file and then copies
any new or changed information from that snapshot control file to the recovery catalog.

REVOKE Revoke privileges from a recovery catalog user.


RMAN Start RMAN from the operating system command line.

RUN Execute a sequence of one or more RMAN commands, which are one or more
statements executed within the braces of RUN.

SEND Send a vendor-specific quoted string to one or more specific channels.

SET Set the value of various attributes that affect RMAN behavior for the duration of a
RUN block or a session.

SHOW Display the current CONFIGURE settings.

SHUTDOWN Shut down the target database. This command is equivalent to the SQL*Plus
SHUTDOWN command.

SPOOL Write RMAN output to a log file.

SQL Execute a SQL statement from within Recovery Manager.

STARTUP Start up the target database. This command is equivalent to the SQL*Plus STARTUP
command.

SWITCH Specify that a data file copy is now the current data file, that is, the data file pointed to
by the control file. This command is equivalent to the SQL statement ALTER
DATABASE RENAME FILE as it applies to data files.

TRANSPORT TABLESPACE Create transportable tablespace sets from backup for one or more tablespaces.

UNREGISTER Unregister a database from the recovery catalog.

UPGRADE CATALOG Upgrade the recovery catalog schema from an older version to the version required by
the RMAN executable.

VALIDATE Examine a backup set and report whether its data is intact. RMAN scans all of the
backup pieces in the specified backup sets and looks at the checksums to verify that the
contents can be successfully restored.
5. Transparent Application Failover (TAF)

The TAF function is controlled by processes external to the Oracle RAC cluster control. The
failover types and methods can be unique for each Oracle Net client. In more complex
environments, application code may have to be altered or modified to fully support TAF, using
Oracle OCI calls. In its most basic form, TAF supports:

* Active Transactions: Any active, non-committed transaction (INSERT, UPDATE, or DELETE) is
rolled back at the time of failure, because TAF cannot preserve active transactions after
failover. The application will receive an error message until a rollback command is submitted.

* Client-Server Database Connections:  TAF automatically reestablishes the client-server
database connection using the same connect string or an alternate connect string specified when
configuring TAF in the tnsnames file.

* Executed Commands:  If a command was committed at the time of the connection failure and
the command changed the state of the database, TAF will not reissue the command. If, by
mistake, TAF reconnects in response to a command that may have changed the database, TAF
issues an error message to the application.

* Open Cursors Used for Fetching:  TAF enables applications that began fetching rows from a
cursor before the failover event to re-fetch the rows after failover. This type of failover is called
select failover. TAF accomplishes this by re-executing the cursor select statement using the same
working set, effectively retrieving the rows again. TAF verifies that the discarded rows are those
that were returned initially, or it returns an error message to the application.

* Server side Program Variables:  Server side program variables, such as PL/SQL package
states, are lost during failures. TAF cannot recover them. Making a call from the failover OCI
callback functions to the server-side processes can initialize them.

* Users' Database Sessions:  TAF logs the users in with the same user IDs that were in use prior
to failure. If multiple users were using the connection, TAF automatically logs them in as they
attempt to process database commands. Unfortunately, TAF cannot automatically restore non-
persistent session properties. These properties can, however, be restored by invoking an OCI
callback function that notifies the calling transaction of the switchover and requests the non-
persistent session properties.

In a nutshell, the above means that at the time of failover, in-progress queries are reissued and
processed from the beginning. Rows already fetched are discarded. All of this discarding of rows
and re-fetching can delay the completion of the original transaction. However, TAF can also be
configured to issue two connections at a time, one to the main instance and one to the standby
instance. This speeds the processing by eliminating the reconnection penalty. DDL operations
are not reissued. Committed transactions are not reissued.
Uncommitted INSERT, UPDATE and DELETE commands are rolled back and must be resubmitted after
reconnection. Again, the OCI callbacks can be used to have the DML operations reissued.

The Oracle Net process carries out TAF functionality. The failover is configured in the tnsnames
file. The TAF settings are placed in the net service name area, within the connect_data section of
the tnsnames, using the failover_mode and instance_role parameters.
failover_mode

Subparameter Descriptions

BACKUP

Used to set a different net service name for backup instance connections. A backup should be
specified when using preconnect to pre-establish connections.

TYPE

Used to specify the type of connection failover. There are three types of Oracle Net failover
functionality available to Oracle Call Interface (OCI) applications:

* session: Used to fail over the session. When a user's connection is lost, a new session is
automatically created for the user on the backup. This type of failover will not recover selects. 

* select: Used to enable users with open cursors (selects) to continue fetching on them after
failure. It should be noted that this mode involves some overhead on the client side during
normal select operations. 

* none: This setting is the default. With none, no failover functionality is provided. If the goal is
to prevent failover, use the none setting.

METHOD

This is used to determine how failover occurs from the primary node to the backup node:

* basic: Set this mode to establish connections only at failover time. Since no preconnection is
done, basic requires virtually no work on the backup server until failover occurs.

* preconnect: Set this mode to pre-establish connections to a backup server. The preconnect
setting provides faster failover but does require that the backup instance be capable of
supporting all connections from all supported instances.
 

RETRIES

This sets the number of times that the server will attempt to connect after a failover. With
DELAY specified but RETRIES not specified, RETRIES will default to five retry attempts.

DELAY

This specifies the number of seconds between connection attempts. When RETRIES is specified
and DELAY is not, DELAY defaults to one second.

Table 10.1: fail_over Mode Options
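Putting the failover_mode subparameters together, a tnsnames.ora entry might look like the
following sketch (the service name sales and host sales-db are illustrative):

sales =
  (description =
    (address = (protocol=tcp)(host=sales-db)(port=1521))
    (connect_data =
      (service_name = sales)
      (failover_mode =
        (type=select)(method=basic)(retries=20)(delay=3))))

Here a lost connection is retried up to 20 times, three seconds apart, and open cursors
continue fetching after the reconnect.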

Setting load_balance=YES instructs Net to progress through the list of listener addresses in a
random sequence, balancing the load on the various listeners. When set to OFF, load_balance
instructs Net to try the addresses sequentially until one succeeds. This parameter must be
correctly coded in the net service name or connect descriptor. By default, this parameter is set to
ON for description_list.  Load balancing can be specified for an address_list, associated with a
set of addresses or set descriptions. If address_list is used, the load_balance=YES should be
within the (address_list=) portion.  If address_list is not used, the load_balance=YES should be
within the DESCRIPTION clause.

failover=ON is the default for address_list, description_list and a set of descriptions;
therefore, it does not have to be specified.  This applies only to connect-time failover, not
Transparent Application Failover (TAF).  

The failover_mode parameter must be included in the connect_data portion of a net_service_name.


Note the BACKUP entry in failover_mode=. With (failover_mode=(TYPE=SELECT)(METHOD=BASIC)
(BACKUP=failover)), whenever failover occurs, the connected session fails over to the
net_service_name named failover.  A backup should be specified when using PRECONNECT
to pre-establish connections.

If it is desired to have remote instances registered with the listener, even if the listener is using
port 1521, local_listener still needs to be in the init.ora file. Otherwise, with
remote_listener="<remote_listener>" alone, the remote instances will not be registered with the
listener, and there will be no server-side listener connection load balancing.  This is due to bug
2194549 that is fixed in 10g.

If the configuration is not using the default port 1521, the local_listener parameter in the
initialization file is required. If the hostname output is the interconnect IP address as opposed to
the public Ethernet IP address, the PMON process will register the service and instance with the
hostname's listener. In this case, the local_listener parameter should be set to instruct the PMON
to register the service and instance with the public Ethernet IP address listener.
The following script demonstrates the initialization parameters that would be set for the example
server setup.  Both nodes' init.ora files would have the following parameters:

ault1.local_listener="LISTENER_ault1"
ault2.local_listener="LISTENER_ault2"
db_name='ault'
ault1.instance_name='ault1'
ault2.instance_name='ault2'
remote_listener='LISTENERS_AULT'

The TAF parameters must be manually added to the tnsnames file, since the network
configuration assistant (NETCA) cannot configure them. Once configured, Oracle Net will
failover the connection, transparently to the user in many cases, with the exceptions noted in the
list of failover objects above.

In order to configure TAF, the static service information (the SID_LIST_<listener_name> entry)
must be removed from the listener.ora, allowing the instance to self-register.  This is known as
Dynamic Service Registration and has been available since Oracle8i. In addition, the
global_dbname parameter must be removed from the listener.ora file or TAF will be disabled.

TAF WITH SCAN IN 11GR2

TRANSPARENT APPLICATION FAILOVER WITH SCAN

Transparent Application Failover is the process by which sessions automatically fail over to
another available instance in the cluster database. New connections are pointed to an available
instance on the cluster database, and any active transaction is rolled back to the last
committed transaction.

TRANSPARENT APPLICATION FAILOVER (TAF) FAILOVER METHODS

(METHOD = BASIC/PRECONNECT)

BASIC FAILOVER 

The second connection to an available instance is made only if and when the primary instance
goes down or is disconnected for some reason.
PRECONNECT FAILOVER 

The second connection to another available instance is made at the same time as the connection
to the primary instance. Having the backup connection already in place can reduce the time
needed for a failover in the event that one needs to take place, at the cost of the overhead of
keeping the sessions pre-connected.

TRANSPARENT APPLICATION FAILOVER (TAF) FAILOVER TYPE


(TYPE=SESSION/SELECT/NONE)
SESSION

When TYPE=SESSION is specified, the session fails over to an available instance in the cluster
database, but a select that was in progress does not continue; OCI starts the select process again.
SELECT

When TYPE=SELECT is specified, the session fails over to an available instance in the cluster
database, and the failover process also replays the queries that were in progress.

NONE

When TYPE=NONE is specified, no failover functionality is used.

TAF can be configured at the Oracle Client side in tnsnames.ora or at the Database Server side
using srvctl.

SRVCTL SYNTAX FOR CREATING SERVICE

srvctl add service -d <db_unique_name> -s <service_name>


-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
-g <server_pool> [-c {UNIFORM | SINGLETON}]
[-k <net_num>]
[-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]]
[-y {AUTOMATIC | MANUAL}]
[-q {TRUE|FALSE}]
[-x {TRUE|FALSE}]
[-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}]
[-e {NONE|SESSION|SELECT}]
       -- This defines the type of TAF whether SESSION or SELECT.
[-m {NONE|BASIC}]
       --This defines the method of TAF.
[-z <failover_retries>]
       -- the number of times to attempt to connect after a failover.
[-w <failover_delay>]
       -- the amount of time in seconds to wait between connect attempts.

SYNTAX

          srvctl add service -d <db_unique_name> -s <service_name>


          -r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}]
         [-e {NONE|SESSION|SELECT}]
         [-m {NONE|BASIC}]
         [-z <failover_retries>]
         [-w <failover_delay>]
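As a concrete sketch of this syntax (the database name orcl, service name oltp_taf, and
instance names are illustrative):

$ srvctl add service -d orcl -s oltp_taf -r "orcl1,orcl2" -P BASIC -e SELECT -m BASIC -z 20 -w 3
$ srvctl start service -d orcl -s oltp_taf

This creates a SELECT-type TAF service preferred on both instances, retrying a failed
connection up to 20 times with a 3-second delay between attempts.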

PRECONNECT USING SCAN NAMES

When connecting using the SCAN names with the failover method set to PRECONNECT, the service
should be configured with a preferred instance and an available instance.

Ex:

Bhuvan =

(description =

(address = (protocol=tcp)(host=bhuvanrac-scan)(port=1521))

(connect_data=

(service_name=bhuvanrac)

(failover_mode= (backup=bhuvanrac_preconnect)

(type=select) (method=preconnect)

)))

bhuvanrac_preconnect =

(description =

(address = (protocol=tcp)(host=bhuvanrac-scan)(port=1521))
(connect_data=(service_name=bhuvanrac_preconnect)))

BASIC USING SCAN NAMES

SCAN provides load balancing and failover for client connections to the database. The SCAN works as a
cluster alias for databases in the cluster.

bhuvanrac =

(description =

(address = (protocol=tcp)(host=bhuvanrac-scan)(port=1521))

(connect_data=(service_name=bhuvanrac)))

TRADITIONAL METHOD

JDBC connection

BHUVAN=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=bhuora01)(Port=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=bhuora02)(PORT = 1521))

(CONNECT_DATA=(SERVICE_NAME=BHUVANRAC)(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))))

TNS ENTRY

BHUVAN=

(DESCRIPTION=

(ADDRESS=(PROTOCOL=TCP)(Host=bhuora01)(Port=1521))

(ADDRESS=(PROTOCOL=TCP)(HOST=bhuora02)(PORT=1521))

(CONNECT_DATA=(SERVICE_NAME=BHUVANRAC)
(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC))))

Start the service


$ srvctl start service -d <DB_NAME> -s <SERVICE_NAME>

Stop the service


$ srvctl stop service -d <DB_NAME> -s <SERVICE_NAME>

To Modify the service

$ srvctl modify service -d <DB_NAME> -s <SERVICE_NAME>

To see the configuration of the service

$ srvctl config service -d <DB_NAME> -s <SERVICE_NAME>


6. Difference between ASMM and AMM tips

Oracle has implemented many new features and much functionality within its flagship product over
the years. In fact, looking at how the product has evolved over even the last 10 years, you'll
notice a shift from the hands-on DBA to more of a hands-off DBA (so to speak). Memory
management has definitely been one of the areas within Oracle that has improved as the years
have gone on.

Back in 8i and earlier, all the specific memory regions within the SGA and PGA had to be given
sizes via parameters: for example, db_cache_size, java_pool_size, large_pool_size,
streams_pool_size, sort_area_size, hash_area_size, etc. Particular care was needed to make sure
that the database instance, i.e. the database's memory structures, was sized correctly. This was
typically trial and error, usually over several weeks or even months, before things were set
optimally. Even then, these settings required constant monitoring as the database grew, both in
physical size and in customer base. In other words, leaving the instance configuration static
usually led to degrading performance over time if these parameters weren't kept in check.

Since 8i, we've seen the following memory management improvements...

Program Global Area


9i brought us the concept of the PGA as a global resource, configurable with the following parameters:
1 - PGA_AGGREGATE_TARGET
2 - WORKAREA_SIZE_POLICY

When PGA_AGGREGATE_TARGET is set to a nonzero value and WORKAREA_SIZE_POLICY is set
to AUTO, Oracle automatically manages PGA memory for all sessions, database-wide, ignoring
the *_AREA_SIZE parameters (SORT, HASH, etc.). This allows the DBA to (a) use less memory
overall for PGA space, while (b) covering nearly all session PGA requirements within the database.
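To make the "global resource" idea concrete, here is a tiny sketch. The 5% figure for a single serial work area is a commonly cited rule of thumb, not a documented constant, and this is in no way Oracle's internal code:

```python
# Rough illustration (not Oracle's internal algorithm): a commonly cited
# rule of thumb is that one serial work area (a sort or hash join) may use
# up to about 5% of PGA_AGGREGATE_TARGET before it spills to disk.

def serial_workarea_limit_mb(pga_aggregate_target_mb, pct=5):
    """Rule-of-thumb cap, in MB, for a single serial work area."""
    return pga_aggregate_target_mb * pct // 100

# With a 2 GB PGA target, one sort gets roughly 102 MB of memory.
print(serial_workarea_limit_mb(2048))
```

The point is that one shared target drives every session's work areas, instead of each session reserving its own static sort_area_size.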

Automatic Shared Memory Management


10g brought us another very important concept with regard to the database instance:
Automatic Shared Memory Management (ASMM). To configure ASMM, you need to set the
following parameters:
1 - SGA_MAX_SIZE
2 - SGA_TARGET

SGA_MAX_SIZE sets the maximum allowable SGA size (not dynamic), and SGA_TARGET sets
the current SGA size (dynamic, up to a maximum of SGA_MAX_SIZE). Watch out for this:
Oracle will always reserve SGA_MAX_SIZE of memory, but only ever use SGA_TARGET of it
(the same principle applies to the AMM parameters, MEMORY_MAX_TARGET and
MEMORY_TARGET). This is so the SGA, or the total memory in AMM's case, can be adjusted on
the fly without the need for a database bounce. I've always found this rather pointless, because
unless you have a massively growing instance requirement, why would any DBA waste available
RAM? You'd let the database use what it needs, and let the OS use the rest to alleviate any
kernel performance issues due to lack of available memory.
The more important internal change was that SGA_TARGET, which sets the current size of the
entire SGA within the instance, encompasses the following memory regions:
* DB_CACHE_SIZE (default buffer pool)
* SHARED_POOL_SIZE (shared pool)
* JAVA_POOL_SIZE (java pool)
* LARGE_POOL_SIZE (large pool)

Oracle has the ability to move memory between those regions on the fly, depending on
workload. For example, the workload during a normal business day might mean that more
memory is needed in the default buffer pool, so Oracle may shrink the other areas to the bare
minimum to meet this requirement; yet at night, when the RMAN backups run, Oracle might
take some from the buffer pool and move it to the large pool to help those jobs.
Oracle also sizes these regions automatically, which allows the above parameters to be set to 0.
However, Oracle did something reasonably cool: it didn't deprecate those parameters, and still
allows their use. If the above parameters are set with SGA_TARGET greater than zero, those
regions of memory within the SGA are given those values as minimums. E.g. if SGA_TARGET is
set to 12G and DB_CACHE_SIZE is set to 6G, then when Oracle needs to move memory
between regions, the default buffer pool can never drop below 6G in size.
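The floor behaviour described above can be sketched as a toy model (this is an illustration of the rule, not Oracle's actual resize algorithm):

```python
# Toy model of an ASMM shrink with a user-specified minimum (not Oracle's
# actual algorithm). An explicitly set pool parameter acts as a floor when
# memory is moved between SGA components.

def shrink_pool(current_mb, requested_shrink_mb, user_floor_mb=0):
    """Shrink a pool, but never below the user-specified floor.
    Returns (new_size_mb, freed_mb)."""
    new_size = max(current_mb - requested_shrink_mb, user_floor_mb)
    freed = current_mb - new_size
    return new_size, freed

# SGA_TARGET=12G with DB_CACHE_SIZE=6G: an 8 GB buffer cache asked to give
# up 4 GB stops at its 6 GB floor and frees only 2 GB.
print(shrink_pool(current_mb=8192, requested_shrink_mb=4096, user_floor_mb=6144))
```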

Huge Pages
10gR2 gave us Huge Pages (or Large Pages). This was introduced as a result of it being a new
feature of Linux/Unix OS's at the time (kernel 2.6 and higher). Huge pages are typically 2 MB in
size, but in some cases can be as large as 256 MB. For Oracle databases, using HugePages
reduces the operating system's maintenance of page states and increases the Translation
Lookaside Buffer (TLB) hit ratio.

The advantages of using HugePages are:
- Increased performance through increased TLB hits.
- Pages are locked in memory and are never swapped out, which guarantees that shared
memory like the SGA remains in RAM.
- Contiguous pages are preallocated and cannot be used for anything else but System V shared
memory (for example, the SGA).
- Less bookkeeping work for the kernel for that part of virtual memory, due to the larger page
size.

In the database, the USE_LARGE_PAGES parameter tells Oracle to use the OS's HugePages
implementation for the SGA portion of the instance. This means that instead of using 4 KB
pages, Oracle utilises the OS's huge pages. This allows the kernel to maintain far fewer memory
pages, providing a performance boost to instance maintenance. This parameter can have 3
values:
- TRUE: use large pages; when those run out, use regular page sizes. Potential ORA-4030
errors may result in Oracle versions 11.2.0.2 and prior.
- FALSE: use regular memory pages.
- ONLY: fail to start the instance if not enough huge pages are configured at the OS level.
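The bookkeeping saving is easy to quantify. A minimal sketch comparing how many pages the kernel must track for a 12 GB SGA with 4 KB pages versus 2 MB HugePages:

```python
# Pages the kernel must track for a 12 GB SGA: regular 4 KB pages
# versus 2 MB HugePages.

GB = 1024 ** 3
sga_bytes = 12 * GB

regular_pages = sga_bytes // (4 * 1024)      # 4 KB pages
huge_pages = sga_bytes // (2 * 1024 ** 2)    # 2 MB HugePages

print(regular_pages)                 # 3145728
print(huge_pages)                    # 6144
print(regular_pages // huge_pages)   # 512x fewer entries to manage
```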

Automatic Memory Management


11g gave us the final concept for instance memory management: AMM, or Automatic Memory
Management. AMM takes the ASMM concept one step further, in that when you set
MEMORY_MAX_TARGET and MEMORY_TARGET, these encompass both the SGA and the PGA
memory areas. The same memory shifting that occurs in ASMM happens here, but across the
entire memory area: not only between internal SGA areas, but also between the SGA and the
PGA. One thing to remember about AMM, though, is that it is NOT compatible with Huge Pages.
Memory Advisors
Along with the above improvements to how Oracle manages and configures the instance
(memory structures), Oracle has also implemented advisors to help the DBA determine their
optimal size, based on workload, over time. These include advisors for the SGA and the PGA.
Be careful though: these are Oracle EE add-ons and are separately licensable, so make sure this
is in place before you start using them.

AMM or ASMM
It's important to know when to use which memory implementation, as both have a place. However, please
note that ASM instances will/should always use AMM, as ASM instances are configured using the
MEMORY_TARGET and MEMORY_MAX_TARGET parameters; ASM instances therefore cannot be
managed with ASMM. Both implementations can co-exist on the same server.

When using AMM you have to consider the interaction of these parameters:

- sga_target (pre-11g): if sga_target is set to some value, then Automatic Shared Memory
Management (ASMM) is enabled; the sga_target value can be adjusted upward, but not beyond
the sga_max_size parameter value.
  - sga_max_size: sets the overall amount of memory the SGA can consume and is not
dynamic. It is the maximum allowable size to which the SGA memory area parameters can be
resized.
- memory_target (starting in 11g): if memory_target is set to a nonzero value, then AMM is
enabled, and:
  - if sga_target, sga_max_size and pga_aggregate_target are set to 0, then 60% of the memory
named in memory_target is allocated to the SGA and the remaining 40% is kept for the PGA;
  - if sga_target and pga_aggregate_target are set to nonzero values, those values are treated
as minimums;
  - if sga_target is set to a nonzero value and pga_aggregate_target is not set, both values are
still auto-tuned and pga_aggregate_target is initialised to (memory_target - sga_target);
  - if pga_aggregate_target is set and sga_target is not set, both parameters are still
auto-tuned and sga_target is initialised to (memory_target - pga_aggregate_target).

memory_target  sga_target  sga_max_size  pga_aggregate_target  Behavior
non-zero       0           0             0                     60% of memory_target to SGA, 40% to PGA
non-zero       non-zero    (any)         non-zero              set values act as minimum values
non-zero       non-zero    (any)         un-set                pga_aggregate_target = memory_target - sga_target
non-zero       un-set      (any)         non-zero              sga_target = memory_target - pga_aggregate_target
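The sizing rules above can be sketched as a small function (an illustration of the documented behaviour, not Oracle's internal code; sizes in MB, with 0 meaning "unset"):

```python
def amm_initial_split(memory_target, sga_target=0, pga_aggregate_target=0):
    """Initial SGA/PGA split under AMM, following the rules above.
    A value of 0 means the parameter is unset. Returns (sga, pga)."""
    if sga_target and pga_aggregate_target:
        return sga_target, pga_aggregate_target          # treated as minimums
    if sga_target:
        return sga_target, memory_target - sga_target    # PGA gets the rest
    if pga_aggregate_target:
        return memory_target - pga_aggregate_target, pga_aggregate_target
    sga = memory_target * 60 // 100                      # 60/40 default split
    return sga, memory_target - sga

# memory_target = 10 GB and nothing else set: 6 GB SGA, 4 GB PGA.
print(amm_initial_split(10240))                          # (6144, 4096)
# sga_target = 7 GB: pga_aggregate_target is initialised to the remaining 3 GB.
print(amm_initial_split(10240, sga_target=7168))         # (7168, 3072)
```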

AMM and the memory_target parameter

In Oracle 11g and beyond, automatic memory management is enabled by using the memory_target  and
memory_max_target initialization parameters.  The memory_target parameter specifies the amount of
shared memory available for Oracle to use when dynamically controlling the SGA and PGA. The
memory_max_target AMM parameter specifies the max size that memory_target may take.  The Oracle
documentation notes that the memory_target parameter specifies a target value for system-wide RAM
memory, both PGA and SGA RAM:

  "MEMORY_TARGET specifies the Oracle system-wide usable memory.

The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA
and PGA as needed."

Disabling Automatic Shared Memory Management

Simply reset sga_target to zero.  Be aware that the data buffer cache (db_cache_size) will take on its
current value from the spfile.

Using Automatic Shared Memory Management

You need to note that Automatic Shared Memory Management does not manage the buffer pools for
the multiple block sizes or the KEEP pool.  In an article titled Automatic Shared Memory Management we see:

Some pools in SGA are not subject to dynamic resizing, and must be specified explicitly. Notable among
them are the buffer pools for nonstandard block sizes and the non-default ones for KEEP or RECYCLE. If
your database has a block size of 8K, and you want to configure 2K, 4K, 16K, and 32K block-size pools,
you must set them manually.

Their sizes will remain constant; they will not shrink or expand based on load. You should consider this
factor when using multiple-size buffer, KEEP, and RECYCLE pools.

In addition, log buffer is not subject to the memory adjustment—the value set in the parameter
log_buffer is constant, regardless of the workload. ( In 10g, a new type of pool can also be defined in
the SGA: Streams pool, set with parameter streams_pool_size. This pool is also not subject to
automatic memory tuning.)

From the bestselling book "Oracle 10g New Features", we see how Oracle monitors the pool workloads:

A new background process named Memory Manager (MMAN) manages the automatic shared memory.
MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The
SGA Memory Broker keeps track of the sizes of the components and pending resize operations.

Monitoring Automatic Shared Memory Management

MOSC Note 295626.1, How To Use Automatic Shared Memory Management (ASMM) In Oracle10g, has
some scripts for monitoring Oracle Automatic Shared Memory Management:
 

select
   component,
   oper_type,
   oper_mode,
   initial_size/1024/1024 "Initial",
   TARGET_SIZE/1024/1024  "Target",
   FINAL_SIZE/1024/1024   "Final",
   status
from
   v$sga_resize_ops;

COMPONENT                      OPER_TYPE     OPER_MODE INITIAL    TARGET     FINAL      STATUS


------------------------------ ------------- --------- ---------- ---------- ---------- ---------
DEFAULT buffer cache           SHRINK        MANUAL    160         148       148        COMPLETE
streams pool                   GROW          MANUAL      0          12        12        COMPLETE

select
   component,
   current_size/1024/1024 "CURRENT_SIZE",
   min_size/1024/1024 "MIN_SIZE",
   user_specified_size/1024/1024 "USER_SPECIFIED_SIZE",
   last_oper_type "TYPE"
from
   v$sga_dynamic_components;

COMPONENT                      CURRENT_SIZE MIN_SIZE   USER_SPECIFIED_SIZE TYPE


------------------------------ ------------ ---------- ------------------- -------------
shared pool                    80           80         80                  STATIC
large pool                      8            8          8                  STATIC
java pool                      48           48         48                  STATIC
streams pool                   12            0         12                  GROW
DEFAULT buffer cache           48           24         24                  SHRINK
KEEP buffer cache               0            0          0                  STATIC
RECYCLE buffer cache            0            0          0                  STATIC
DEFAULT 2K buffer cache         0            0          0                  STATIC
DEFAULT 4K buffer cache         0            0          0                  STATIC
DEFAULT 8K buffer cache         0            0          0                  STATIC
DEFAULT 16K buffer cache        0            0          0                  STATIC
DEFAULT 32K buffer cache        0            0          0                  STATIC
OSM Buffer Cache                0            0         24                  STATIC
 

There are also scripts for monitoring the shared pool for free RAM chunks.  You can issue these
queries to see free RAM in Oracle10g:

 select sum(value) from v$sga;


 select sum(bytes) from v$sgastat;
 select sum(current_size) from v$sga_dynamic_components;
 select * from v$sga_dynamic_free_memory;
 

Automatic Shared Memory Management Internals

With the advent of the advisory utilities in Oracle9i (v$db_cache_advice, v$shared_pool_advice, and
v$pga_target_advice), we see how Oracle plots the marginal benefit of different pool sizes:

Robert Freeman notes the default behavior with a blank init.ora file:

I did shutdown my local 10g database and brought it up with a blank init.ora (only had the db_name
parameter in it). The actual default in XP 10g is db_cache_size=48m, shared_pool_size=32m.

MOSC Note:257643.1, Oracle Database 10g Automated SGA Memory Tuning gives hints about how
MMAN determines current workloads.  The documentation shows that Oracle uses the memory
advisories from Oracle9i and applies heuristics (rules of thumb) to determine the best shift in RAM pool
sizes.  These heuristics consist of hypothesis testing with "what if" scenarios, computing the ratio of the
marginal reduction in physical disk reads, and choosing the size with the greatest overall marginal
benefit:

The SGA Memory Broker observes the system and workload in order to determine the ideal distribution
of memory. It is never complacent and performs this check every few minutes so that memory can
always be present where needed.
Based on workload information, automatic shared memory tuning:

- Captures statistics periodically in the background
- Uses the different memory advisories
- Performs "what-if" analyses to determine the best distribution of memory
- Moves memory to where it is most needed
- Has no need to configure parameters for the worst-case scenario
- Resurrects component sizes from the last shutdown if an SPFILE is used

Automatic Shared Memory Management Errors

ASMM can precipitate these errors:

ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

alter system set sga_target = 330M


*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00827: could not shrink sga_target to specified value

alter system set sga_target=160M;


alter system set sga_target=160M

*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: Specified value of sga_target greater than sga_max_size

SQL> startup

ORA-00824: cannot set sga_target due to existing internal settings

Cause: if you enable automatic SGA management by setting sga_target > 0 while the obsolete
db_block_buffers parameter is still present in your init.ora, database startup fails with the
ORA-00824 error.

Potential issues with ASMM

The Oracle 10g Automatic Shared Memory Management (ASMM) feature should anticipate high
update activity and allocate additional data buffers during high update periods.

For example, here is an actual output from an Oracle10g database where it appears that ASMM is not
allocating enough free blocks to accommodate concurrent updates:

STATUS  NUMBER_BUFFERS
------- --------------
cr                 616
free                 1
xcur             14790
 
Here we see the double-underscore ASMM hidden parms:
 
 316     6 __db_cache_size                       130,023,424
  96     6 __java_pool_size                       12,582,912
  94     6 __large_pool_size                       4,194,304
  92     6 __shared_pool_size                    142,606,336
 

How to disable AMM (Automatic Memory Management): simply set memory_target (and
memory_max_target) to zero.

 
7. Bitmap and B-tree indexes:

In tests comparing the two, with the B-tree index the optimizer opted for a full table scan, whereas in the
case of the bitmap index it used the index to answer the query. You can gauge the performance difference
by the number of I/Os required to fetch the result.

In summary, bitmap indexes are best suited for DSS regardless of cardinality for these reasons:

 With bitmap indexes, the optimizer can efficiently answer queries that include AND, OR, or XOR. (Oracle
supports dynamic B-tree-to-bitmap conversion, but it can be inefficient.)

 With bitmaps, the optimizer can answer queries when searching for or counting nulls. Null values are
also indexed in bitmap indexes (unlike B-tree indexes).

 Most important, bitmap indexes in DSS systems support ad hoc queries, whereas B-tree indexes do not.
More specifically, if you have a table with 50 columns and users frequently query on 10 of them—either the
combination of all 10 columns or sometimes a single column—creating a B-tree index will be very difficult. If
you create 10 bitmap indexes on all these columns, all the queries can be answered by these indexes,
whether they are queries on all 10 columns, on 4 or 6 columns out of the 10, or on a single column. The
AND_EQUAL hint provides this functionality for B-tree indexes, but no more than five indexes can be used
by a query. This limit is not imposed with bitmap indexes.
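To see why AND/OR combinations are so cheap with bitmaps, here is a toy bitmap index in Python (plain integers used as bit vectors; real Oracle bitmaps are compressed on disk, so this is only an illustration of the principle):

```python
# Toy bitmap index: one bit vector per distinct column value; row i is
# represented by bit i. Combining predicates is pure bitwise arithmetic.

def build_bitmap_index(column_values):
    """Map each distinct value (None included) to a bit vector of rows."""
    index = {}
    for row, value in enumerate(column_values):
        index[value] = index.get(value, 0) | (1 << row)
    return index

regions  = ["EAST", "WEST", None, "EAST", "WEST"]
statuses = ["OPEN", "OPEN", "CLOSED", "CLOSED", "OPEN"]

region_ix = build_bitmap_index(regions)
status_ix = build_bitmap_index(statuses)

# WHERE region = 'EAST' AND status = 'OPEN'  -> one bitwise AND
hits = region_ix["EAST"] & status_ix["OPEN"]
print(bin(hits))                       # 0b1: only row 0 qualifies

# WHERE region IS NULL -> nulls are indexed too, unlike B-tree indexes
print(bin(region_ix[None]))            # 0b100: row 2

# SELECT COUNT(*) WHERE region = 'WEST' OR status = 'CLOSED'
combined = region_ix["WEST"] | status_ix["CLOSED"]
print(bin(combined).count("1"))        # 4 matching rows, no table access
```

Each additional predicate is just another AND/OR against the existing bit vectors, which is why bitmap indexes handle arbitrary ad hoc column combinations that a single composite B-tree cannot.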

In contrast, B-tree indexes are well suited for OLTP applications in which users' queries are relatively routine (and
well tuned before deployment in production), as opposed to ad hoc queries, which are much less frequent and
executed during nonpeak business hours. Because data is frequently updated in and deleted from OLTP
applications, bitmap indexes can cause a serious locking problem in these situations.

The data here is fairly clear. Both indexes have a similar purpose: to return results as fast as possible. But your
choice of which one to use should depend purely on the type of application, not on the level of cardinality.

8. Internals of Oracle Indexing:
