
Oracle Database 11g: New Features for Administrators
Volume II - Student Guide

D50081GC10
Edition 1.0
July 2007
D51901


Authors
Christian Bauwens, Maria Billings, Christine Jeal, Srinivas Putrevu, James Spiller,
Kesavan Srinivasan, Jenny Tsai, Jean-Francois Verrier, James Womack

Technical Contributors and Reviewers
Maqsood Alam, Kalyan Bitra, Harald Van Breederode, Edward Choi, Al Flournoy, Andy Fortunak,
Gerlinde Frenzen, Greg Gagnon, Joel Goodman, Hansen Han, Uwe Hesse, Sunil Hingorani,
Magnus Isaksson, Susan Jang, Martin Jensen, Pete Jones, Yash Kapani, Pierre Labrousse,
Richard W. Lewis, Hakan Lindfors, Russ Lowenthal, Kurt Lysy, Silvia Marrone, Heejin Park,
Jagannath Poosarla, Eric Siglin, Ranbir Singh, Jeff Skochil, George Spears, Birgitte Taagholt,
Glenn Tripp, Anthony Woodell

Editors
Raj Kumar, Daniel Milne, Vijayalakshmi Narasimhan, Atanu Raychaudhuri, Richard Wallis

Graphic Designers
Rajiv Chandrabhanu, Samir Mozumdar

Publishers
Sujatha Nagendra, Srividya Rameshkumar, Michael Sebastian, Jobi Varghese

Copyright 2007, Oracle. All rights reserved.

Disclaimer

This course provides an overview of features and enhancements planned in release 11g. It is
intended solely to help you assess the business benefits of upgrading to 11g and to plan your
IT projects.

This course in any form, including its course labs and printed matter, contains proprietary
information that is the exclusive property of Oracle. This course and the information contained
herein may not be disclosed, copied, reproduced, or distributed to anyone outside Oracle without
prior written consent of Oracle. This course and its contents are not part of your license
agreement nor can they be incorporated into any contractual agreement with Oracle or its
subsidiaries or affiliates.

This course is for informational purposes only and is intended solely to assist you in planning
for the implementation and upgrade of the product features described. It is not a commitment to
deliver any material, code, or functionality, and should not be relied upon in making purchasing
decisions. The development, release, and timing of any features or functionality described in
this document remain at the sole discretion of Oracle.

This document contains proprietary information and is protected by copyright and other
intellectual property laws. You may copy and print this document solely for your own use in an
Oracle training course. The document may not be modified or altered in any way. Except where
your use constitutes "fair use" under copyright law, you may not use, share, download, upload,
copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this
document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any
problems in the document, please report them in writing to: Oracle University, 500 Oracle
Parkway, Redwood Shores, California 94065 USA.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using the
documentation on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS
The U.S. Government's rights to use, modify, reproduce, release, perform, display, or disclose
these training materials are restricted by the terms of the applicable Oracle license agreement
and/or the applicable U.S. Government contract.

Trademark Notice

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be
trademarks of their respective owners.

This document is not warranted to be error-free.
Using Flashback and LogMiner

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:
- Describe new features for flashback and LogMiner
- Use Flashback Data Archive to create, protect, and use history data
- Prepare your database
- Create, change, and drop a flashback data archive
- View flashback data archive metadata
- Use Flashback Transaction Backout
- Set up Flashback Transaction prerequisites
- Query transactions with and without dependencies
- Choose backout options and flash back transactions
- Use EM LogMiner

11 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 11 - 2


New and Enhanced Features for
Flashback and LogMiner

Ease of use in Oracle Database 11g:
- Flashback Data Archive for automatic tracking and secure storing of all transactional
  changes to a record during its lifetime (instead of application logic)
- Flashback Transaction (or Job) Backout, including dependent transactions, for increased
  agility to undo logical errors
- Browser-based Enterprise Manager (EM) LogMiner interface for integration with
  Flashback Transaction

11 - 3 Copyright 2007, Oracle. All rights reserved.

New and Enhanced Features for Flashback and LogMiner


Organizations often have the requirement to track and store all transactional changes to a record for
the duration of its lifetime. You no longer need to build this intelligence into the application.
Flashback Data Archive satisfies long-retention requirements (that exceed the undo retention) in a
secure way.
Oracle Database 11g allows you to flash back selected transactions and all the dependent
transactions. This recovery operation uses undo data to create and execute the corresponding
compensating transactions that revert the affected data back to its original state. Flashback
Transaction or Job Backout increases availability during logical recovery by easily and quickly
backing out a specific transaction or a set of transactions, and their dependent transactions, with one
command, while the database remains online.
In prior releases, administrators were required to install and use the stand-alone Java Console for
LogMiner. With the Enterprise Manager interface, administrators have one installation task less and
an integrated interface with Flashback Transaction.
These enhancements improve ease of use and save time because they provide a task-based,
intuitive approach (via the EM graphical user interface) and reduce the complexity of applications.

Oracle Database 11g: New Features for Administrators 11 - 3


Flashback Data Archive Overview:
Oracle Total Recall

- Transparently tracks historical changes to all Oracle data in a highly secure and
  efficient manner
- Secure:
  - No possibility to modify historical data
  - Retained according to your specifications
  - Automatically purged based on your retention policy
- Efficient:
  - Special kernel optimizations to minimize the performance overhead of capturing
    historical data
  - Stored in compressed form in tablespaces to minimize storage requirements
- Completely transparent to applications
- Easy to set up

11 - 4 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Overview


A flashback data archive is a new database object: a logical container for storing historical
information. It is stored in one or more tablespaces and tracks the history for one or more tables. You
specify a retention duration for each flashback data archive. You can group the historical table data
by your retention requirements in a flashback data archive. Multiple tables can share the same
retention and purge policies.
With the Oracle Total Recall option, Oracle Database 11g has been specifically enhanced to track
history with minimal performance impact and to store historical data in compressed form. This
efficiency cannot be duplicated by your own triggers, which also cost time and effort to set up and
maintain.
Operations that invalidate history or prevent historical capture are not allowed, for example,
dropping or truncating a table.

Oracle Database 11g: New Features for Administrators 11 - 4


Flashback Data Archive Comparison

                      Flashback Data Archive               Flashback Database
Main benefit          Access to data at any point in       Physically moves the entire
                      time without changing the            database back in time
                      current data
Operation             Online operation; tracking           Offline operation; requires
                      enabled; minimal resource usage      preconfiguration and resources
Granularity           Table                                Database
Access point-in-time  Any number per table                 One per database

11 - 5 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive Comparison


How the Flashback Data Archive technology compares with Flashback Database:
Flashback Data Archive offers the ability to access the data as of any point in time without
actually changing the current data. This is in contrast with Flashback Database, which takes the
database physically back in time.
Tracking has to be enabled for historical access, while Flashback Database requires
preconfiguration. Flashback Database is an offline operation, which requires resources.
Flashback Data Archive is an online operation (historical access seamlessly coexists with
current access). Because a new background process is used, it has almost no effect on the
existing processes.
Flashback Data Archive is enabled at the granularity of a table, whereas Flashback Database
works only at the database level.
With Flashback Data Archive, you can go back to different points in time for different rows of a
table or for different tables, whereas with Flashback Database, you can go back to only one
point in time for a particular invocation.

Oracle Database 11g: New Features for Administrators 11 - 5


Flashback Data Archive: Overview

For long-retention requirements exceeding undo

[Diagram: DML operations produce original data in the buffer cache and undo data; the FBDA
background process archives the changed data into flashback data archives stored in tablespaces.
Example: three flashback data archives with retention of 1 year, 2 years, and 5 years.]

11 - 6 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Overview


A flashback data archive is a historical data store. Oracle Database 11g automatically tracks and
archives the data in tables enabled for Flashback Data Archive with a new Flashback Data Archive
background process, FBDA. You use this feature to satisfy long-retention requirements that exceed
the undo retention. Flashback data archives ensure that flashback queries obtain SQL-level access to
the versions of database objects without getting a snapshot-too-old error.
A flashback data archive consists of one or more tablespaces or parts thereof. You can have multiple
flashback data archives. Each is configured with a specific retention duration. Based on your
retention duration requirements, you should create different flashback data archives: for example,
one for all records that must be kept for one year, another for all records that must be kept for two
years, and so on.
FBDA asynchronously collects and writes original data to a flashback data archive. It does not
include the original indexes, because your retrieval pattern of historical information might be quite
different than your retrieval pattern of current information.
Note: You might want to create appropriate indexes just for the duration of historical queries.
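The following is a minimal sketch of the three-archive layout described above; the archive and
tablespace names and the quotas (fda_1yr, fda_2yr, fda_5yr, fda_ts) are assumptions, not part of
the examples in this guide:

CREATE FLASHBACK ARCHIVE fda_1yr TABLESPACE fda_ts QUOTA 5G RETENTION 1 YEAR;
CREATE FLASHBACK ARCHIVE fda_2yr TABLESPACE fda_ts QUOTA 5G RETENTION 2 YEAR;
CREATE FLASHBACK ARCHIVE fda_5yr TABLESPACE fda_ts QUOTA 5G RETENTION 5 YEAR;

Each table is then assigned to the archive whose retention matches its requirement.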

Oracle Database 11g: New Features for Administrators 11 - 6


Flashback Data Archive: Architecture

[Diagram: DML changes generate undo that is used by FBDA. (1) FBDA reads the old values from the
undo in the buffer cache; (2) if the undo has already left the buffer cache, FBDA reads the
required values from the undo segments; (3) FBDA writes the rows into the history (archive)
tables of the flashback data archives, which use compressed storage and automatic digital
shredding.]

11 - 7 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Architecture


The Flashback Data Archive background process, FBDA, starts with the database.
1. FBDA operates first on the undo in the buffer cache.
2. In case the undo has already left the buffer cache, FBDA could also read the required values
from the undo segments.
3. FBDA consolidates the modified rows of flashback archive-enabled tables and writes them into
the appropriate history tables, which make up the flashback data archive.
You can find the internally assigned names of the history tables by querying the
*_FLASHBACK_ARCHIVE_TABLES view. History tables are compressed and internally partitioned.
The database automatically purges all historical information on the day after the retention period
expires. (It deletes the data, but does not destroy the flashback data archive.) For example, if the
retention period is 10 days, then every day after the tenth day, the oldest information is deleted; thus
leaving only 10 days of information in the archive. This is a way to implement digital shredding.

Oracle Database 11g: New Features for Administrators 11 - 7


Preparing Your Database

To satisfy long-retention requirements, use flashback data archives. Begin with the following
steps:
- For your archive administrator:
  - Create one or more tablespaces for data archives and grant QUOTA on the tablespaces.
  - Grant the ARCHIVE ADMINISTER system privilege to create and maintain flashback archives.
- For archive users:
  - Grant the FLASHBACK ARCHIVE object privilege (to enable history tracking for specific
    tables in the given flashback archives).
  - Grant FLASHBACK and SELECT privileges to query specific objects.

11 - 8 Copyright 2007, Oracle. All rights reserved.

Preparing Your Database


To enable Flashback Data Archive features, ensure that the following tasks are performed:
Create one or more tablespaces for the data archives and grant access and the appropriate quota to
your archive administrator.
Also, grant the ARCHIVE ADMINISTER system privilege to your archive administrator, to allow
execution of the following statements:
CREATE FLASHBACK ARCHIVE
ALTER FLASHBACK ARCHIVE
DROP FLASHBACK ARCHIVE

To allow a specific user to use a specific flashback data archive, grant the FLASHBACK ARCHIVE
object privilege on that flashback data archive to the archive user. The archive user can then
enable flashback archiving on tables by using that specific flashback data archive.
Example executed as archive administrator:
GRANT FLASHBACK ARCHIVE ON FLA1 TO HR;
Most likely, your users will use other Flashback functionality. To allow access to specific objects
during queries, grant the FLASHBACK and SELECT privileges on all objects involved in the query.
If your users need access to the DBMS_FLASHBACK package, then you need to grant them the
EXECUTE privilege on this package. Users can then use the DBMS_FLASHBACK.ENABLE and
DBMS_FLASHBACK.DISABLE procedures to enable and disable flashback.
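As an illustration only, a privilege setup might look like the following sketch; the tablespace,
datafile, and user names (fda_ts, fda_admin, report_user) are assumptions and not part of this
course's examples:

CREATE TABLESPACE fda_ts DATAFILE '/u01/app/oracle/oradata/orcl/fda_ts01.dbf' SIZE 1G;
ALTER USER fda_admin QUOTA UNLIMITED ON fda_ts;
GRANT FLASHBACK ARCHIVE ADMINISTER TO fda_admin;
  -- system privilege to create and maintain flashback archives
  -- (referred to as ARCHIVE ADMINISTER earlier in this lesson)

-- As the archive administrator, allow HR to track tables in archive FLA1:
GRANT FLASHBACK ARCHIVE ON fla1 TO hr;

-- To let a reporting user run AS OF queries against HR.EMPLOYEES:
GRANT FLASHBACK, SELECT ON hr.employees TO report_user;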
Oracle Database 11g: New Features for Administrators 11 - 8
Preparing Your Database

Configuring undo:
- Creating an undo tablespace (default: automatically extensible tablespace)
- Enabling Automatic Undo Management (11g default)
- Understanding automatic tuning of undo:
  - Fixed-size tablespace: automatic tuning for best retention
  - Automatically extensible undo tablespace: automatic tuning for the longest-running query
- Recommendation for Flashback: fixed-size undo tablespace

11 - 9 Copyright 2007, Oracle. All rights reserved.

Preparing Your Database (continued)


Oracle Database 11g uses the following default database initialization parameter settings:
UNDO_MANAGEMENT='AUTO'
UNDO_TABLESPACE='UNDOTBS1'
UNDO_RETENTION=900

In other words, Automatic Undo Management is now enabled by default. If needed, enable
Automatic Undo Management as explained in the Oracle Database Administrator's Guide.
An automatically extensible undo tablespace is created upon database installation.
For a fixed-size undo tablespace, the Oracle database automatically tunes the system to give the
undo tablespace the best possible undo retention.
For an automatically extensible undo tablespace (the default), the Oracle database retains undo
data to satisfy, at a minimum, the retention period needed by the longest-running query and the
undo retention threshold specified by the UNDO_RETENTION parameter.
Automatic tuning of undo retention generally achieves better results with a fixed-size undo
tablespace. If you want to change the undo tablespace to fixed size for this or other reasons, the
Undo Advisor can help you determine the proper fixed size to allocate.
If you are uncertain about your space requirements and you do not have access to the Undo Advisor,
follow these steps:
1. You can start with an automatically extensible undo tablespace.
2. Observe it through one business cycle (for example, this could be 1 or 2 days, or longer).
Oracle Database 11g: New Features for Administrators 11 - 9
Preparing Your Database (continued)
3. Collect undo block information with the V$UNDOSTAT view, calculate your space
requirements, and use them to create an appropriately sized fixed undo tablespace; see the
sketch after these steps. (The calculation formula is given in the Oracle Database
Administrator's Guide.)
4. You can query V$UNDOSTAT.TUNED_UNDORETENTION to determine the amount of time for
which undo is retained for the current undo tablespace. Setting the UNDO_RETENTION
parameter does not guarantee that unexpired undo data is not overwritten. If the system needs
more space, the Oracle database can overwrite unexpired undo with more recently generated
undo data.
- Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that
unexpired undo data is not discarded.
- To satisfy long-retention requirements that exceed the undo retention, create a flashback
data archive.
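A minimal sketch of the steps above; the tablespace name, file specification, and size are
assumptions that you would replace with values derived from your own V$UNDOSTAT data:

SELECT MAX(tuned_undoretention) FROM v$undostat;      -- currently achieved retention (seconds)

CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/app/oracle/oradata/orcl/undotbs02.dbf' SIZE 10G;  -- fixed size, no AUTOEXTEND
ALTER SYSTEM SET undo_tablespace = undotbs2;
ALTER TABLESPACE undotbs2 RETENTION GUARANTEE;        -- unexpired undo is never overwritten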

Oracle Database 11g: New Features for Administrators 11 - 10


Flashback Data Archive: Workflow

1. Create the flashback data archive.
2. Optionally, specify the default flashback data archive.
3. Enable the flashback data archive.
4. View flashback data archive data.

11 - 11 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Workflow


The first step is to create a flashback data archive. A flashback data archive consists of one or more
tablespaces. You can have multiple flashback data archives.
Second, you can optionally specify a default flashback data archive for the system. A flashback data
archive is configured with retention time. Data archived in the flashback data archive is retained for
the retention time.
Third, you can enable flashback archiving (and then disable it again) for a table. While flashback
archiving is enabled for a table, some DDL statements are not allowed on that table. By default,
flashback archiving is off for any table.
Fourth, when you query data past your possible undo retention, your query is transparently rewritten
to use historical tables in the flashback data archive.

Oracle Database 11g: New Features for Administrators 11 - 11


Using Flashback Data Archive

Basic workflow to access historical data:

1. Create the flashback data archive:

   CREATE FLASHBACK ARCHIVE fla1
     TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;

2. Enable history tracking for a table in the FLA1 archive:

   ALTER TABLE inventory FLASHBACK ARCHIVE fla1;

3. View the historical data:

   SELECT product_number, product_name, count
   FROM   inventory AS OF TIMESTAMP
          TO_TIMESTAMP('2007-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

11 - 12 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Scenario


You create a flashback data archive with the CREATE FLASHBACK ARCHIVE statement.
You can optionally specify the default flashback data archive for the system. If you omit this
option, you can still make this flashback data archive the default later.
You need to provide the name of the flashback data archive.
You need to provide the name of the first tablespace of the flashback data archive.
You can identify the maximum amount of space that the flashback data archive can use in the
tablespace. The default is unlimited. Unless your space quota on the first tablespace is unlimited,
you must specify this value; otherwise, error ORA-55621 is raised.
You need to provide the retention time (number of days that flashback data archive data for the
table is guaranteed to be stored).
The basic workflow to create and use a flashback data archive has only three steps:
1. The archive administrator creates a flashback data archive named fla1, which uses up to 10
GB of the tbs1 tablespace, and whose data will be retained for five years.
2. In the second step, the archive user enables the Flashback Data Archive. If Automatic Undo
Management is disabled, you receive the error ORA-55614 when you try to modify the table.
3. The third step shows the access of historical data with an AS OF query.

Oracle Database 11g: New Features for Administrators 11 - 12


Configuring a Default Flashback Data Archive

Using a default flashback archive:

1. Create a default flashback data archive:

   CREATE FLASHBACK ARCHIVE DEFAULT fla2
     TABLESPACE tbs1 QUOTA 10G RETENTION 2 YEAR;

2. Enable history tracking for a table:

   ALTER TABLE stock_data FLASHBACK ARCHIVE;

   Note: The name of the flashback data archive is not needed because the default one is used.

3. Disable history tracking:

   ALTER TABLE stock_data NO FLASHBACK ARCHIVE;

11 - 13 Copyright 2007, Oracle. All rights reserved.

Configuring a Default Flashback Data Archive


In the FLASHBACK ARCHIVE clause, you can specify the flashback data archive where the
historical data for the table will be stored. By default, the system has no flashback data archive. In
the preceding example, the default flashback data archive is specified for the system.
You can create a default flashback archive in one of two ways:
1. Specify the name of an existing flashback data archive in the SET DEFAULT clause of the
ALTER FLASHBACK ARCHIVE statement.
2. Include DEFAULT in the CREATE FLASHBACK ARCHIVE statement when you create a
flashback data archive.
You enable and disable flashback archiving for a table with the ALTER TABLE command. You can
assign the internal archive table to a specific flashback data archive by specifying the flashback data
archive name. If the name is omitted, the default flashback data archive is used. Specify NO
FLASHBACK ARCHIVE to disable archiving of a table.
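For illustration, making an existing archive the system default and then relying on it when
enabling tracking might look like the following sketch (run by a suitably privileged
administrator; HR.JOB_HISTORY is used here only as an example table):

ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;
ALTER TABLE hr.job_history FLASHBACK ARCHIVE;   -- no archive name: the default archive is used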

Oracle Database 11g: New Features for Administrators 11 - 13


Filling the Flashback Data Archive Space

What happens when your flashback data archive gets full?
- 90% space usage
- Raising of errors:
  - ORA-55623 when the flashback archive tablespace has run out of space
  - ORA-55617 when the flashback archive tablespace quota is running out
- Generating an alert log entry
- Suspending tracking

11 - 14 Copyright 2007, Oracle. All rights reserved.

Filling the Flashback Data Archive Space


When you are out of space in a flashback data archive, the FBDA process and all foreground
processes that generate tracked undo raise either an ORA-55617 or an ORA-55623 error. An alert
log entry is added, stating that "Flashback archive fla1 is full, and archiving is suspended."
By default, this occurs when 90% of the assigned space has been used.
Examples:
55623, 00000, "Flashback Archive \"%s\" is blocking and tracking on all tables is suspended"
// *Cause: Flashback archive tablespace has run out of space.
// *Action: Add tablespace or increase tablespace quota for the flashback archive.
//
55617, 00000, "Flashback Archive \"%s\" is blocking and tracking on all tables is suspended"
// *Cause: Flashback archive tablespace quota is running out.
// *Action: Add tablespace or increase tablespace quota for the flashback archive.
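The corrective actions listed above might translate into statements like the following sketch
(the archive, tablespace names, and quota values are assumptions):

ALTER FLASHBACK ARCHIVE fla1 MODIFY TABLESPACE tbs1 QUOTA 20G;   -- raise the existing quota
ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs2 QUOTA 10G;      -- or add another tablespace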

Oracle Database 11g: New Features for Administrators 11 - 14


Maintaining Flashback Data Archives

1. Adding space:

   ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;

2. Changing retention time:

   ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

3. Purging data:

   ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE
     TIMESTAMP(SYSTIMESTAMP - INTERVAL '1' DAY);

4. Dropping a flashback data archive:

   DROP FLASHBACK ARCHIVE fla1;

11 - 15 Copyright 2007, Oracle. All rights reserved.

Maintaining Flashback Data Archives


1. Example 1 adds up to 5 GB of the TBS3 tablespace to the FLA1 flashback data archive. (The
archive administrator cannot exceed tablespace quota granted by the DBA.)
2. Example 2 changes the retention time for the FLA1 flashback data archive to two years.
3. Example 3 purges all historical data older than one day from the FLA1 flashback data archive.
Normally, purging is done automatically, on the day after your retention time expires. You can
also override this for ad hoc clean-up.
4. Example 4 drops the FLA1 flashback data archive and historical data, but not its tablespaces.
With the ALTER FLASHBACK ARCHIVE command, you can:
- Change the retention time of a flashback data archive
- Purge some or all of its data
- Add, modify, and remove tablespaces
Note: Removing all tablespaces of a flashback data archive causes an error.

Oracle Database 11g: New Features for Administrators 11 - 15


Flashback Data Archive: Examples

1. To enforce digital shredding:

   CREATE FLASHBACK ARCHIVE tax7_archive
     TABLESPACE tbs1 RETENTION 7 YEAR;

2. To access historical data:

   SELECT symbol, stock_price FROM stock_data
   AS OF TIMESTAMP TO_TIMESTAMP('2006-12-31 23:59:00', 'YYYY-MM-DD HH24:MI:SS');

3. To recover data:

   INSERT INTO employees
   SELECT * FROM employees AS OF TIMESTAMP
     TO_TIMESTAMP('2007-06-12 11:30:00','YYYY-MM-DD HH24:MI:SS')
   WHERE name = 'JOE';

11 - 16 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: Examples


Organizations require historical data stores for several purposes. Flashback Data Archive provides
seamless access to historical data with AS OF queries. You can use Flashback Data Archive for
compliance reporting, audit reports, data analysis, and decision support.
You want to set up your database so that information in TAX7_ARCHIVE is automatically deleted
the day after the seven-year retention period is complete. To do this, you just specify a command
as shown in example 1.
To retrieve the stock price at the close of business on December 31, 2006, use a query as shown
in example 2.
You discover that JOE's employee record was deleted in error, and that it still existed at 11:30
on June 12, 2007. You can insert it again as shown in example 3.

Oracle Database 11g: New Features for Administrators 11 - 16


Flashback Data Archive: DDL Restrictions

Using any of the following DDL statements on a table enabled for Flashback Data Archive causes
error ORA-55610:
- An ALTER TABLE statement that does any of the following:
  - Drops, renames, or modifies a column
  - Performs partition or subpartition operations
  - Converts a LONG column to a LOB column
  - Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
- DROP TABLE statement
- RENAME TABLE statement
- TRUNCATE TABLE statement

11 - 17 Copyright 2007, Oracle. All rights reserved.

Flashback Data Archive: DDL Restrictions


For the sake of security and legal compliance, the preceding restrictions ensure that data in a
flashback data archive cannot be invalidated.
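For illustration, a sketch of what this means in practice (INVENTORY and FLA1 are the example
table and archive used earlier in this lesson):

TRUNCATE TABLE inventory;                        -- fails with ORA-55610 while tracking is enabled
ALTER TABLE inventory NO FLASHBACK ARCHIVE;      -- disabling tracking discards the table's history
TRUNCATE TABLE inventory;                        -- now allowed
ALTER TABLE inventory FLASHBACK ARCHIVE fla1;    -- tracking starts again from this point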

Oracle Database 11g: New Features for Administrators 11 - 17


Viewing Flashback Data Archives

Viewing the results:

View Name                        Description
*_FLASHBACK_ARCHIVE              Displays information about flashback data archives
*_FLASHBACK_ARCHIVE_TS           Displays tablespaces of flashback data archives
*_FLASHBACK_ARCHIVE_TABLES       Displays information about tables that are enabled for
                                 flashback archiving

11 - 18 Copyright 2007, Oracle. All rights reserved.

Viewing Flashback Data Archives


You can use the dynamic data dictionary views to view tracked tables and flashback data archive
metadata. To access the USER_FLASHBACK views, you need table ownership privileges. For the
others, you need SYSDBA privileges.
Examples:
Query the time when the flashback data archive(s) have been created:
SELECT FLASHBACK_ARCHIVE_NAME, CREATE_TIME, STATUS
FROM DBA_FLASHBACK_ARCHIVE;
To list the tablespace(s), which are used for flashback data archives:
SELECT *
FROM DBA_FLASHBACK_ARCHIVE_TS;
To list the archive table name for a specific table:
SELECT ARCHIVE_TABLE_NAME
FROM USER_FLASHBACK_ARCHIVE_TABLES
WHERE TABLE_NAME = 'EMPLOYEES';
You cannot retrieve past data from a dynamic performance (V$) view. A query on such a view
always returns current data. However, you can perform queries on past data in static data dictionary
views, such as *_TABLES.

Oracle Database 11g: New Features for Administrators 11 - 18


Guidelines and Usage Tips

- COMMIT or ROLLBACK before querying past data.
- Current session settings are used.
- Obtain the SCN with the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
- Compute a past time with:
  (SYSTIMESTAMP - INTERVAL '10' MINUTE)
- Use the System Change Number (SCN) where precision is needed (time stamps have a
  three-second granularity).

11 - 19 Copyright 2007, Oracle. All rights reserved.

Guidelines and Usage Tips


To ensure database consistency, always perform a COMMIT or ROLLBACK operation before
querying past data.
Remember that all flashback processing uses the current session settings, such as national
language and character set, not the settings that were in effect at the time being queried.
To obtain an SCN to use later with a flashback feature, you can use the
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
To compute or retrieve a past time to use in a query, use a function return value as a time stamp
or an SCN argument. For example, add or subtract an INTERVAL value to the value of the
SYSTIMESTAMP function.
To query past data at a precise time, use an SCN. If you use a time stamp, the actual time
queried might be up to 3 seconds earlier than the time you specify. The Oracle database uses
SCNs internally and maps them to time stamps at a granularity of 3 seconds.
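A short sketch combining these tips; the SCN value shown is a placeholder and HR.REGIONS is used
only as an example table:

SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;     -- capture an SCN for later use

SELECT * FROM hr.regions AS OF SCN 1234567;                   -- precise: replace with your SCN

SELECT * FROM hr.regions
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);        -- approximate (3-second granularity)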

Oracle Database 11g: New Features for Administrators 11 - 19


Flashback Transaction Backout

- Logical recovery option to roll back a specific transaction and all its dependent
  transactions:
  - Uses undo, redo logs, and supplemental logging
  - Creates and executes compensating transactions
  - You finalize the changes with a commit, or you roll back.
- Faster and easier than the laborious manual approach
- Dependent transactions include write-after-write (WAW) and primary key constraint
  dependencies, but not foreign key constraints.

11 - 20 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction Backout


Flashback Transaction Backout is a logical recovery option to roll back a specific transaction and
dependent transactions while the database remains online. A dependent transaction is related by
either a write-after-write (WAW) relationship, in which a transaction modifies the same data that was
changed by the target transaction, or a primary key constraint relationship, in which a transaction
reinserts the same primary key value that was deleted by the target transaction. Flashback
Transaction utilizes the undo and the redo generated for undo blocks to create and execute a
compensating transaction for reverting the affected data back to its original state.

Oracle Database 11g: New Features for Administrators 11 - 20


Flashback Transaction

- Setting up Flashback Transaction prerequisites
- Stepping through a possible workflow
- Using the Flashback Transaction Wizard
- Querying transactions with and without dependencies
- Choosing backout options and flashing back transactions
- Reviewing the results

11 - 21 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction
You can use the Flashback Transaction functionality from within Enterprise Manager or with the
DBMS_FLASHBACK.TRANSACTION_BACKOUT PL/SQL procedure.

Oracle Database 11g: New Features for Administrators 11 - 21


Prerequisites

Supplemental logging must be enabled, the appropriate privileges must be granted,
and the database must be in ARCHIVELOG mode.

11 - 22 Copyright 2007, Oracle. All rights reserved.

Prerequisites
In order to use this functionality, supplemental logging must be enabled and the correct privileges
established. For example, the HR user in the HR schema decides to use Flashback Transaction for the
REGIONS table. The SYSDBA ensures that the database is in ARCHIVELOG mode and performs the
following setup steps in SQL*Plus:
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
The HR user needs to either own the tables (as is the case in the preceding example) or have the
SELECT, UPDATE, DELETE, and INSERT privileges, to allow execution of the compensating undo
SQL code.
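As a quick sanity check (a sketch; these columns exist in V$DATABASE), the DBA can verify the
prerequisites before handing the task over to HR:

SELECT log_mode, supplemental_log_data_min, supplemental_log_data_pk
FROM   v$database;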

Oracle Database 11g: New Features for Administrators 11 - 22


Flashing Back a Transaction

- You can flash back a transaction with Enterprise Manager or the command line.
- EM uses the Flashback Transaction Wizard, which calls the
  DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option.
- If the PL/SQL call finishes successfully, it means that the transaction does not have any
  dependencies and a single transaction is backed out successfully.

11 - 23 Copyright 2007, Oracle. All rights reserved.

Flashing Back a Transaction


Security Privileges
To flash back or back out a transaction, that is, to create a compensating transaction, you must
have the SELECT, FLASHBACK, and DML privileges on all affected tables.
Conditions of Use
Transaction backout is not supported across conflicting DDL.
Transaction backout inherits data type support from LogMiner. See the Oracle Database 11g
documentation for supported data types.
Recommendations
When you discover the need for a transaction backout, performance is better if you start the
backout operation sooner. Large redo logs and high transaction rates result in slower transaction
backout operations.
Provide a transaction name for the backout operation to facilitate later auditing. If you do not
provide a transaction name, one is generated automatically for you.

Oracle Database 11g: New Features for Administrators 11 - 23


Possible Workflow

- Viewing data in a table
- Discovering a logical problem
- Using Flashback Transaction
- Performing a query
- Selecting a transaction
- Flashing back a transaction (with no conflicts)
- Choosing other backout options (if conflicts exist)
- Reviewing Flashback Transaction results

11 - 24 Copyright 2007, Oracle. All rights reserved.

Possible Workflow
Assume that several transactions occurred as indicated below:
connect hr/hr
INSERT INTO hr.regions VALUES (5,'Pole');
COMMIT;
UPDATE hr.regions SET region_name='Poles' WHERE region_id = 5;
UPDATE hr.regions SET region_name='North and South Poles' WHERE region_id = 5;
COMMIT;
INSERT INTO hr.countries VALUES ('TT','Test Country',5);
COMMIT;
connect sys/<password> as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;

Oracle Database 11g: New Features for Administrators 11 - 24


Viewing Data

11 - 25 Copyright 2007, Oracle. All rights reserved.

Viewing Data
To view the data in a table in Enterprise Manager, select Schema > Tables.
While viewing the content of the HR.REGIONS table, you discover a logical problem. Region 20 is
misnamed. You decide to immediately address this issue.

Oracle Database 11g: New Features for Administrators 11 - 25


Flashback Transaction Wizard

11 - 26 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction Wizard


In Enterprise Manager, select Schema > Tables > HR.REGIONS, then select Flashback
Transaction from the Actions drop-down list, and click Go. This invokes the Flashback Transaction
Wizard for your selected table. The Flashback Transaction: Perform Query page is displayed.
Select the appropriate time range and add query parameters. (The more specific you are, the
shorter the Flashback Transaction Wizard's search will be.)
In Enterprise Manager, Flashback Transaction and LogMiner are seamlessly integrated (as this page
demonstrates).
Without Enterprise Manager, use the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure,
which is described in the PL/SQL Packages and Types Reference. Essentially, you take an array of
transaction IDs as the starting point of your dependency search. For example:
CREATE TYPE XID_ARRAY AS VARRAY(100) OF RAW(8);

CREATE OR REPLACE PROCEDURE TRANSACTION_BACKOUT(
  numberOfXIDs NUMBER,                    -- number of transactions passed as input
  xids         XID_ARRAY,                 -- the list of transaction ids
  options      NUMBER default NOCASCADE,  -- back out dependent txn
  timeHint     TIMESTAMP default MINTIME  -- time hint on the txn start
);
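For illustration, a minimal command-line invocation might look like the following sketch; the
transaction ID is a placeholder that you would obtain from FLASHBACK_TRANSACTION_QUERY or from
the LogMiner views:

DECLARE
  txns sys.xid_array := sys.xid_array(HEXTORAW('05001500690500AB'));  -- placeholder XID
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    1,                           -- number of transactions
    txns,                        -- their transaction IDs
    DBMS_FLASHBACK.NOCASCADE);   -- fail if dependent transactions are found
END;
/
-- The compensating transaction is left pending: COMMIT makes it permanent, ROLLBACK discards it.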

Oracle Database 11g: New Features for Administrators 11 - 26


Flashback Transaction Wizard

11 - 27 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction Wizard (continued)


The Flashback Transaction: Select Transaction page displays the transactions according to your
previously entered specifications. First, display the transaction details to confirm that you are
flashing back the correct transaction. Then select the offending transaction and continue with the
wizard.

Oracle Database 11g: New Features for Administrators 11 - 27


Flashback Transaction Wizard

11 - 28 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction Wizard (continued)


The Flashback Transaction Wizard now generates the undo script and flashes back the transaction,
but it gives you control to COMMIT this flashback. Click the Transaction ID to review its
compensating SQL statements.

Oracle Database 11g: New Features for Administrators 11 - 28


Flashback Transaction Wizard

11 - 29 Copyright 2007, Oracle. All rights reserved.

Flashback Transaction Wizard (continued)


Before you commit the transaction, you can use the Execute SQL area at the bottom of the Flashback
Transaction: Review page, to view what the result of your COMMIT will be.

Oracle Database 11g: New Features for Administrators 11 - 29


Flashback Transaction Wizard

COMMIT

11 - 30 Copyright 2007, Oracle. All rights reserved.

Finishing Up
On the Flashback Transaction: Review page, click the Show Undo SQL Script button to view the
compensating SQL commands. Click Finish to commit your compensating transaction.

Oracle Database 11g: New Features for Administrators 11 - 30


Choosing Other Backout Options

11 - 31 Copyright 2007, Oracle. All rights reserved.

Choosing Other Backout Options


The TRANSACTION_BACKOUT procedure checks dependencies, such as:
Write-after-write (WAW)
Primary and unique constraints
A transaction can have a WAW dependency, which means a transaction updates or deletes a row that
has been inserted or updated by a dependent transaction. This can occur, for example, in a
master/detail relationship of primary (or unique) and mandatory foreign key constraints.
To understand the difference between the NONCONFLICT_ONLY and the NOCASCADE_FORCE
options, assume that the T1 transaction changes rows R1, R2, and R3 and the T2 transaction changes
rows R1, R4, and R5. In this scenario, both transactions update row R1, so it is a conflicting row.
The T2 transaction has a WAW dependency on the T1 transaction. With the NONCONFLICT_ONLY
option, R2 and R3 are backed out, because there is no conflict and it is assumed that you know what
to do with the R1 row. With the NOCASCADE_FORCE option, all three rows (R1, R2, and R3) are
backed out.
Note: This screenshot is not part of the workflow example, but shows additional details of a more
complex situation.

Oracle Database 11g: New Features for Administrators 11 - 31


Choosing Other Backout Options

11 - 32 Copyright 2007, Oracle. All rights reserved.

Choosing Other Backout Options (continued)


The Flashback Transaction Wizard works as follows:
If the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE
option fails (because there are dependent transactions), you can change the recovery options.
With the NONCONFLICT_ONLY option, nonconflicting rows within a transaction are backed
out, which implies that database consistency is maintained (although the transaction atomicity is
broken for the sake of data repair).
If you want to forcibly back out the given transactions, without paying attention to the
dependent transactions, use the NOCASCADE_FORCE option. The server just executes the
compensating DML commands for the given transactions in reverse order of their commit times.
If no constraints break, you can proceed to commit the changes; otherwise, roll back.
To initiate the complete removal of the given transactions and all their dependents in
post-order fashion, use the CASCADE option.

Oracle Database 11g: New Features for Administrators 11 - 32


Final Steps Without EM

- After choosing your backout option, the dependency report is generated in the
  DBA_FLASHBACK_TXN_STATE and DBA_FLASHBACK_TXN_REPORT views.
- Review the dependency report that shows all transactions that were backed out.
- Commit the changes to make them permanent.
- Roll back to discard the changes.

11 - 33 Copyright 2007, Oracle. All rights reserved.

Final Steps Without EM


The DBA_FLASHBACK_TXN_STATE view contains the current state of a transaction: whether it is
alive in the system or effectively backed out. This view is atomically maintained with the
compensating transaction. For each compensating transaction, there could be multiple rows, where
each row provides the dependency relation between the transactions that have been compensated by
the compensating transaction.
The DBA_FLASHBACK_TXN_REPORT view provides detailed information about all compensating
transactions that have been committed in the database. Each row in this view is associated with one
compensating transaction.
For a detailed description of these views, see the Oracle Database Reference.

Oracle Database 11g: New Features for Administrators 11 - 33


Viewing Flashback Transaction Metadata

View Name                   Description
*_FLASHBACK_TXN_REPORT      Displays related XML information
*_FLASHBACK_TXN_STATE       Displays the transaction identifiers for backed-out transactions

SQL> SELECT * FROM DBA_FLASHBACK_TXN_STATE;

COMPENSATING_XID XID              BACKOUT_MODE DEPENDENT_XID       USER#
---------------- ---------------- ------------ ---------------- --------
0500150069050000 03000000A9050000 4                                    0
0500150069050000 05001E0063050000 4            03000000A9050000        0
11 - 34 Copyright 2007, Oracle. All rights reserved.

Viewing Flashback Transaction Metadata


You can use the data dictionary views to view information about Flashback Transaction Backouts.
Sample content of DBA_FLASHBACK_TXN_REPORT:
COMPENSATING_XID
----------------
COMPENSATING_TXN_NAME
-----------------------------------------------------------------------------
COMMIT_TI
---------
XID_REPORT
-----------------------------------------------------------------------------
USER#
----------
0500150069050000

26-JUN-07
<?xml version="1.0" encoding="ISO-8859-1"?>
<COMP_XID_REPORT XID="05001500690500
0

Oracle Database 11g: New Features for Administrators 11 - 34


Using LogMiner

- Powerful audit tool for Oracle databases
- Direct access to redo logs
- User interfaces:
  - SQL command line
  - Graphical user interface (GUI) integrated with Enterprise Manager

11 - 35 Copyright 2007, Oracle. All rights reserved.

Using LogMiner
What you already know: LogMiner is a powerful audit tool for Oracle databases, which allows you
to easily locate changes in the database, enabling sophisticated data analyses, and providing undo
capabilities to roll back logical data corruptions or user errors. LogMiner directly accesses the Oracle
redo logs, which are complete records of all activities performed on the database, and the associated
data dictionary. The tool offers two interfaces: a SQL command line and a GUI.
What is new: Enterprise Manager Database Control now has an interface for LogMiner. In prior
releases, administrators were required to install and use the stand-alone Java Console for LogMiner.
With this new interface, administrators have a task-based, intuitive approach to using LogMiner. This
improves the manageability of LogMiner. In Enterprise Manager, select Availability > View and
Manage Transactions.
LogMiner supports the following activities:
Specifying query parameters
Stopping the query and showing partial results, if the query takes a long time
Partial querying, then showing the estimated complete query time
Saving the query result
Re-mining or refining the query based on initial results
Showing transaction details, dependencies, and compensating undo SQL script
Flashing back and committing the transaction
For more details see the High-Availability eStudy and documentation.
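For reference, the SQL command-line interface mentioned above is driven by the DBMS_LOGMNR
package; a minimal session might look like the following sketch (the archived log file name is a
placeholder):

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/arch/arch_1_400_123456789.arc');
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

SELECT sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'HR' AND table_name = 'REGIONS';

EXECUTE DBMS_LOGMNR.END_LOGMNR;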
Oracle Database 11g: New Features for Administrators 11 - 35
Summary

In this lesson, you should have learned how to:
- Describe new and enhanced features for Flashback and LogMiner
- Prepare your database for flashback
- Create, change, and drop a flashback data archive
- View flashback data archive metadata
- Set up Flashback Transaction prerequisites
- Query transactions with and without dependencies
- Choose backout options and flash back transactions
- Use EM LogMiner

11 - 36 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 11 - 36


Practice 11: Overview
Using Flashback Technology

This practice covers the following topics:
- Using Flashback Data Archive
- Using Flashback Transaction Backout

11 - 37 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 11 - 37


Diagnosability Enhancements

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:
- Set up Automatic Diagnostic Repository
- Use Support Workbench
- Run health checks
- Use SQL Repair Advisor

12 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 12 - 2


Oracle Database 11g R1 Fault Management

Goal: Reduce Time to Resolution

[Diagram: the fault management life cycle spans Prevention (change assurance and automatic health
checks), Diagnostic (automatic diagnostic workflow), and Resolution (intelligent resolution,
proactive patching, and solution delivery).]

12 - 3 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g R1 Fault Management


The goals of the fault diagnosability infrastructure are the following:
Detecting problems proactively
Limiting damage and interruptions after a problem is detected
Reducing problem diagnostic time
Reducing problem resolution time
Simplifying customer interaction with Oracle Support

Oracle Database 11g: New Features for Administrators 12 - 3


Ease Diagnosis: Automatic Diagnostic Workflow
[Diagram: (1) a critical error is captured on first failure, recorded in the Automatic Diagnostic
Repository, and raises an alert; (2) the DBA views the alert in EM and drills down to incident
and problem details, assisted by auto incident creation and targeted health checks; (3) if it is
not a known bug, the incident information is packaged with the EM Support Workbench (assisted SR
filing) and sent to Oracle Support; (4) the DBA applies a patch or performs data repair with the
EM Support Workbench.]

12 - 4 Copyright 2007, Oracle. All rights reserved.

Ease Diagnosis: Automatic Diagnostic Workflow


An always-on, in-memory tracing facility enables database components to capture diagnostic data
upon first failure for critical errors. A special repository, called Automatic Diagnostic Repository, is
automatically maintained to hold diagnostic information about critical error events. This information
can be used to create incident packages to be sent to Oracle Support Services for investigation.
Here is a possible workflow for a diagnostic session:
1. Incident causes an alert to be raised in Enterprise Manager (EM).
2. The DBA can view the alert via the EM Alert page.
3. The DBA can drill down to incident and problem details.
4. The DBA, or Oracle Support Services on request, can have that information packaged and sent
to Oracle Support Services via MetaLink. The DBA can also add files to the data to be packaged
automatically.

Oracle Database 11g: New Features for Administrators 12 - 4


Automatic Diagnostic Repository
[Diagram: the DIAGNOSTIC_DEST initialization parameter replaces BACKGROUND_DUMP_DEST,
CORE_DUMP_DEST, and USER_DUMP_DEST and points to the ADR base ($ORACLE_BASE, or
$ORACLE_HOME/log if ORACLE_BASE is not set). Within the ADR base:

  diag/rdbms/<DB name>/<SID>        <- ADR home (with ADR metadata)
    alert/      (log.xml)
    cdump/
    incpkg/
    incident/   (incdir_1 ... incdir_n)
    hm/
    trace/      (alert_SID.log)
    (others)

The repository is accessed through the Support Workbench, the ADRCI utility, and the
V$DIAG_INFO view.]

12 - 5 Copyright 2007, Oracle. All rights reserved.

Automatic Diagnostic Repository (ADR)


ADR is a file-based repository for database diagnostic data such as traces, incident dumps and
packages, the alert log, Health Monitor reports, core dumps, and more. It has a unified directory
structure across multiple instances and multiple products stored outside of any database. It is,
therefore, available for problem diagnosis when the database is down. Beginning with Oracle
Database 11g R1, the database, Automatic Storage Management (ASM), Cluster Ready Services
(CRS), and other Oracle products or components store all diagnostic data in ADR. Each instance of
each product stores diagnostic data underneath its own ADR home directory. For example, in a Real
Application Clusters environment with shared storage and ASM, each database instance and each
ASM instance have a home directory within ADR. ADR's unified directory structure uses consistent
diagnostic data formats across products and instances, and a unified set of tools enable customers and
Oracle Support to correlate and analyze diagnostic data across multiple instances.
Starting with Oracle Database 11g R1, the traditional _DUMP_DEST initialization parameters are
ignored. The ADR root directory is known as the ADR base. Its location is set by the
DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null, the database
sets DIAGNOSTIC_DEST upon startup as follows: If the environment variable ORACLE_BASE is
set, DIAGNOSTIC_DEST is set to $ORACLE_BASE. If the environment variable ORACLE_BASE is
not set, DIAGNOSTIC_DEST is set to $ORACLE_HOME/log.
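For example, you can confirm where ADR is rooted for your instance with a simple check (a sketch;
the value returned depends on your environment):

SQL> SHOW PARAMETER diagnostic_dest
SQL> SELECT value FROM v$diag_info WHERE name = 'ADR Base';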

Oracle Database 11g: New Features for Administrators 12 - 5


Automatic Diagnostic Repository (ADR) (continued)
Within the ADR base, there can be multiple ADR homes, where each ADR home is the root
directory for all diagnostic data for a particular instance of a particular Oracle product or component.
The location of an ADR home for a database is shown in the graphic given in the preceding slide.
Also, two alert files are now generated. One is textual, exactly like the alert file used with previous
releases of the Oracle database and is located under the TRACE directory of each ADR home. In
addition, an alert message file conforming to the XML standard is stored in the ALERT subdirectory
inside the ADR home. You can view the alert log in text format (with the XML tags stripped) with
Enterprise Manager and with the ADRCI utility.
The graphic in the slide shows you the directory structure of an ADR home. The INCIDENT
directory contains multiple subdirectories, where each subdirectory is named for a particular
incident, and where each contains dumps pertaining only to that incident.
The HM directory contains the checker run reports generated by the Health Monitor.
There is also a METADATA directory that contains important files for the repository itself. You can
compare this to a database dictionary. This dictionary can be queried using ADRCI.
The ADR Command Interpreter (ADRCI) is a utility that you can use to perform all of the tasks
permitted by the Support Workbench, but in a command-line environment. The ADRCI utility also
enables you to view the names of the trace files in ADR, and to view the alert log with XML tags
stripped, with and without content filtering.
In addition, you can use V$DIAG_INFO to list some important ADR locations.

Oracle Database 11g: New Features for Administrators 12 - 6


ADRCI: The ADR Command-Line Tool

- Allows interaction with ADR from the OS prompt
- Can invoke IPS from the command line instead of EM
- DBAs should use the EM Support Workbench:
  - Leverages the same toolkit/libraries that ADRCI is built upon
  - Easy-to-follow GUI

ADRCI> show incident

ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orcl/orcl:
*****************************************************************************
INCIDENT_ID  PROBLEM_KEY                             CREATE_TIME
------------ --------------------------------------- ---------------------------------
1681         ORA-600_dbgris01:1,_addr=0xa9876541     17-JAN-07 09.17.44.843125000
1682         ORA-600_dbgris01:12,_addr=0xa9876542    18-JAN-07 09.18.59.434775000
2 incident info records fetched
ADRCI>

12 - 7 Copyright 2007, Oracle. All rights reserved.

ADRCI: The ADR Command-Line Tool


ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in
Oracle Database Release 11g. ADRCI enables you to:
View diagnostic data within Automatic Diagnostic Repository (ADR).
Package incident and problem information into a zip file for transmission to Oracle Support.
This is done using a service called Incident Package Service (IPS).
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition,
ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of
SQL and PL/SQL commands.
There is no need to log in to ADRCI, because the data in ADR is not intended to be secure. ADR data
is secured only by operating system permissions on the ADR directories.
The easiest way to package and otherwise manage diagnostic data is with the Support Workbench of
Oracle Enterprise Manager. ADRCI provides a command-line alternative to most of the functionality
of Support Workbench, and adds capabilities such as listing and querying trace files.
The slide example shows you an ADRCI session where you are listing all open incidents stored in
ADR.
Note: For more information about ADRCI, refer to the Oracle Database Utilities guide.
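For illustration only, a possible ADRCI sequence for packaging one of the incidents listed above
with IPS might look like this sketch (the incident number and target directory are placeholders;
check HELP IPS for the exact syntax in your release):

ADRCI> ips create package incident 1681
ADRCI> ips generate package 1 in /tmp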

Oracle Database 11g: New Features for Administrators 12 - 7


V$DIAG_INFO

SQL> SELECT * FROM V$DIAG_INFO;

NAME VALUE
------------------- ---------------------------------------------------------------
Diag Enabled TRUE
ADR Base /u01/app/oracle
ADR Home /u01/app/oracle/diag/rdbms/orcl/orcl
Diag Trace /u01/app/oracle/diag/rdbms/orcl/orcl/trace
Diag Alert /u01/app/oracle/diag/rdbms/orcl/orcl/alert
Diag Incident /u01/app/oracle/diag/rdbms/orcl/orcl/incident
Diag Cdump /u01/app/oracle/diag/rdbms/orcl/orcl/cdump
Health Monitor /u01/app/oracle/diag/rdbms/orcl/orcl/hm
Default Trace File /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_11424.trc
Active Problem Count 3
Active Incident Count 8

12 - 8 Copyright 2007, Oracle. All rights reserved.

V$DIAG_INFO
The V$DIAG_INFO view lists all important ADR locations:
ADR Base: Path of ADR base
ADR Home: Path of ADR home for the current database instance
Diag Trace: Location of the text alert log and background/foreground process trace files
Diag Alert: Location of an XML version of the alert log

Default Trace File: Path to the trace file for your session. SQL Trace files are written here.
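For example, to retrieve a single location (a small sketch using one of the rows listed above):

SELECT value FROM v$diag_info WHERE name = 'Default Trace File';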

Oracle Database 11g: New Features for Administrators 12 - 8


Location for Diagnostic Traces

Diagnostic Data             Previous Location             ADR Location
Foreground process traces   USER_DUMP_DEST                $ADR_HOME/trace
Background process traces   BACKGROUND_DUMP_DEST          $ADR_HOME/trace
Alert log data              BACKGROUND_DUMP_DEST          $ADR_HOME/alert & trace
Core dumps                  CORE_DUMP_DEST                $ADR_HOME/cdump
Incident dumps              USER|BACKGROUND_DUMP_DEST     $ADR_HOME/incident/incdir_n

ADR trace = Oracle Database 10g trace - critical error trace

12 - 9 Copyright 2007, Oracle. All rights reserved.

Location for Diagnostic Traces


The table shown in the slide describes the different classes of trace data and dumps that reside both in
Oracle Database 10g and in Oracle Database 11g.
With Oracle Database 11g, there is no distinction between foreground and background trace files.
Both types of files go into the $ADR_HOME/trace directory.
All nonincident traces are stored inside the TRACE subdirectory. This is the main difference
compared with previous releases where critical error information is dumped into the corresponding
process trace files instead of incident dumps. Incident dumps are placed in files separated from the
normal process trace files starting with Oracle Database 11g.
Note: The main difference between a trace and a dump is that a trace is more of a continuous output
such as when SQL tracing is turned on, and a dump is a one-time output in response to an event such
as an incident. Also, a core is a binary memory dump that is port specific.
In the slide, $ADR_HOME is used to denote the ADR home directory. However, there is no official
environment variable called ADR_HOME.

Oracle Database 11g: New Features for Administrators 12 - 9


Viewing the Alert Log Using Enterprise Manager

12 - 10 Copyright 2007, Oracle. All rights reserved.

Viewing the Alert Log Using Enterprise Manager


You can view the alert log with a text editor, with Enterprise Manager, or with the ADRCI utility. To
view the alert log with Enterprise Manager:
1. Access the Database Home page in Enterprise Manager.
2. Under Related Links, click Alert Log Contents.
The View Alert Log Contents page appears.
3. Select the number of entries to view, and then click Go.

Oracle Database 11g: New Features for Administrators 12 - 10


Viewing the Alert Log Using ADRCI
adrci>>show alert tail

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:


*************************************************************************
2007-04-16 22:10:50.756000 -07:00
ORA-1654: unable to extend index SYS.I_H_OBJ#_COL# by 128 in tablespace
SYSTEM
2007-04-16 22:21:20.920000 -07:00
Thread 1 advanced to log sequence 400
Current log# 3 seq# 400 mem# 0: +DATA/orcl/onlinelog/group_3.266.618805031
Current log# 3 seq# 400 mem# 1: +DATA/orcl/onlinelog/group_3.267.618805047

Thread 1 advanced to log sequence 401
Current log# 1 seq# 401 mem# 0: +DATA/orcl/onlinelog/group_1.262.618804977
Current log# 1 seq# 401 mem# 1: +DATA/orcl/onlinelog/group_1.263.618804993
DIA-48223: Interrupt Requested - Fetch Aborted - Return Code [1]

adrci>>

adrci>>SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:


*************************************************************************
adrci>>

12 - 11 Copyright 2007, Oracle. All rights reserved.

Viewing the Alert Log Using ADRCI


You can also use ADRCI to view the content of your alert log file. Optionally, you can change the
current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET
HOMEPATH command to change the current ADR home.
Ensure that operating system environment variables such as ORACLE_HOME are set properly, and
then enter the following command at the operating system command prompt: adrci.
The utility starts and displays its prompt as shown in the slide.
Then use the SHOW ALERT command. To limit the output, you can look at the last records using the
TAIL option. This displays the last portion of the alert log (about 20 to 30 messages), and then
waits for more messages to arrive in the alert log. As each message arrives, it is appended to the
display. This command enables you to perform live monitoring of the alert log. Press CTRL-C to
stop waiting and return to the ADRCI prompt. You can also specify the amount of lines to be printed
if you want.
You can also filter the output of the SHOW ALERT command as shown in the bottom example in the
slide, where you want to display only those alert log messages that contain the string ORA-600.
Note: ADRCI allows you to spool the output to a file exactly like in SQL*Plus.
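For example (a short sketch; the spool file name and line count are arbitrary):

adrci> spool /tmp/alert_extract.txt
adrci> show alert -tail 50
adrci> spool off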

Oracle Database 11g: New Features for Administrators 12 - 11


Problems and Incidents
[Slide diagram: A critical error automatically creates incidents (subject to flood control); the DBA can also create incidents manually, typically for non-critical errors. Each incident has an incident ID and maps, through a problem key, to a problem identified by a problem ID. Incident status transitions through Collecting, Ready, Tracking, Data-Purged, and Closed. Traces are stored in ADR, MMON auto-purges expired data, and incidents are packaged to be sent to Oracle Support.]

12 - 12 Copyright 2007, Oracle. All rights reserved.

Problems and Incidents


To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure
introduces two concepts for the Oracle database: problems and incidents.
A problem is a critical error in the database. Problems are tracked in ADR. Each problem is
identified by a unique problem ID and has a problem key, which is a set of attributes that
describe the problem. The problem key includes the ORA error number, error parameter values,
and other information. Here is a possible list of critical errors:
- All internal errors (ORA-60x errors)
- All system access violations (SEGV, SIGBUS)
- ORA-4020 (Deadlock on library object), ORA-8103 (Object no longer exists), ORA-1410
(Invalid ROWID), ORA-1578 (Data block corrupted), ORA-29740 (Node eviction), ORA-
255 (Database is not mounted), ORA-376 (File cannot be read at this time), ORA-4030
(Out of process memory), ORA-4031 (Unable to allocate more bytes of shared memory),
ORA-355 (The change numbers are out of order), ORA-356 (Inconsistent lengths in change
description), ORA-353 (Log corruption), ORA-7445 (Operating System exception)
An incident is a single occurrence of a problem. When a problem occurs multiple times, as is
often the case, an incident is created for each occurrence. Incidents are tracked in ADR. Each
incident is identified by a numeric incident ID, which is unique within an ADR home.
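For example, you can list the problems and incidents recorded in the current ADR home with ADRCI (a quick sketch; the incident ID shown is hypothetical and output depends on your system):

adrci> show problem
adrci> show incident
adrci> show incident -mode detail -p "incident_id=6589"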

Oracle Database 11g: New Features for Administrators 12 - 12


Problems and Incidents (continued)
When an incident occurs, the database makes an entry in the alert log, gathers diagnostic data
about the incident (a stack trace, the process state dump, and other dumps of important data
structures), tags the diagnostic data with the incident ID, and stores the data in an ADR subdirectory
created for that incident. Each incident has a problem key and is mapped to a single problem. Two
incidents are considered to have the same root cause if their problem keys match. Large amounts of
diagnostic information can be created very quickly if a large number of sessions stumble across the
same critical error. Having the diagnostic information for more than a small number of the incidents
is not required. That is why ADR provides flood control so that only a certain number of incidents
under the same problem can be dumped in a given time interval. Note that flood-controlled incidents
are still recorded in ADR; they only skip the dump actions. By default, only five dumps per hour for a
given problem are allowed.
You can view a problem as a set of incidents that are perceived to have the same symptoms. The
main reason to introduce this concept is to make it easier for users to manage errors on their systems.
For example, a symptom that occurs 20 times should be reported to Oracle only once. Mostly, you
will manage problems instead of incidents, using IPS to package a problem to be sent to Oracle
Support. Most commonly, incidents are automatically created when a critical error occurs.
However, you are also allowed to create an incident manually, via the GUI provided by the EM
Support Workbench. Manual incident creation is mostly done when you want to report problems that
are not accompanied by critical errors raised inside the Oracle code.
As time goes by, more and more incidents will be accumulated in ADR. A retention policy allows
you to specify how long to keep the diagnostic data. ADR incidents are controlled by two different
policies:
The incident metadata retention policy controls how long the metadata is kept around. This
policy has a default setting of one year.
The incident files and dumps retention policy controls how long generated dump files are kept
around. This policy has a default setting of one month.
You can change these settings by using the Incident Package Configuration link on the EM Support
Workbench page. Inside the RDBMS component, MMON is responsible for automatically purging
expired ADR data.

Oracle Database 11g: New Features for Administrators 12 - 13


Problems and Incidents (continued)
The status of an incident reflects the state of the incident. An incident can be in any one of the
following states:
Collecting: The incident has been newly created and is in the process of collecting diagnostic
information. In this state, the incident data can be incomplete and should not be packaged, and
should be viewed with discretion.
Ready: The data collection phase has completed. The incident is now ready to be used for
analysis, or to be packaged to be sent to Oracle Support.
Tracking: The DBA is working on the incident, and prefers the incident to be kept in the
repository indefinitely. You have to manually change the incident status to this value.
Closed: The incident is now in a done state. In this state, ADR can elect the incident to be
purged after it passes its retention policy.
Data-Purged: The associated files have been removed from the incident. In some cases, even if
the incident files may still be physically around, it is not advisable for users to look at them
because they can be in an inconsistent state. Note that the incident metadata itself for the
incident is still valid for viewing.
You can view an incident status by using either ADRCI (show incident -mode detail), or
directly in Support Workbench.
If an incident has been in either the Collecting or the Ready state for over twice its retention length,
the incident automatically moves to the Closed state. You can also manually purge incident files.
For simplicity, problem metadata is internally maintained by ADR. Problems are automatically
created when the first incident (of the problem key) occurs. The problem metadata is removed after
its last incident is removed from the repository.
Note: It is not possible to disable automatic incident creation for critical errors.

Oracle Database 11g: New Features for Administrators 12 - 14


Incident Packaging Service (IPS)

Uses rules to correlate all relevant dumps and traces from ADR for a given problem, and allows you
to package them to ship to Oracle Support
Rules can involve files that were generated around the
same time, and associated with the same client, same
error codes, and so on.
DBAs can explicitly add/edit or remove files before
packaging.
Access IPS through either EM or ADRCI.

12 - 15 Copyright 2007, Oracle. All rights reserved.

Incident Packaging Service


With the Incident Packaging Service (IPS), you can automatically and easily gather all diagnostic
data (traces, dumps, health check reports, SQL test cases, and more) pertaining to a critical error and
package the data into a zip file suitable for transmission to Oracle Support. Because all diagnostic
data relating to a critical error is tagged with that error's incident number, you do not have to search
through trace files, dump files, and so on to determine the files that are required for analysis; the
Incident Packaging Service identifies all required files automatically and adds them to the package.

Oracle Database 11g: New Features for Administrators 12 - 15


Incident Packages
Pkg_database_ORA_600__qksdie_-_feature_QKSFM_CVM__021207074555_COM_1.zip

An incident package is a logical structure inside ADR representing one or more problems.
A package is a zip file containing dump information related to an incident package.
By default, only the first three and last three incidents of each problem are included in an incident package.
You can generate complete or incremental zip files.

[Slide diagram: the ADR directory tree - ADR Base > diag > rdbms > database name > SID (the ADR Home), which contains the ADR metadata plus the alert, cdump, incpkg (pkg_1 ... pkg_n), incident, hm, and trace subdirectories.]

12 - 16 Copyright 2007, Oracle. All rights reserved.

Incident Packages
To upload diagnostic data to Oracle Support Services, you first collect the data in an incident
package. When you create an incident package, you select one or more problems to add to the
incident package. The Support Workbench then automatically adds to the incident package the
incident information, trace files, and dump files associated with the selected problems. Because a
problem can have many incidents (many occurrences of the same problem), by default only the first
three and last three incidents for each problem are added to the incident package. You can change
this default number on the Incident Packaging Configuration page accessible from the Support
Workbench page.
After the incident package is created, you can add any type of external file to the incident package,
remove selected files from the incident package, or edit selected files in the incident package to
remove sensitive data.
An incident package is a logical construct only, until you create a physical file from the incident
package contents. That is, an incident package starts out as a collection of metadata in ADR. As you
add and remove incident package contents, only the metadata is modified. When you are ready to
upload the data to Oracle Support Services, you invoke either a Support Workbench or an ADRCI
function that gathers all the files referenced by the metadata, places them into a zip file, and then
uploads the zip to MetaLink.

Oracle Database 11g: New Features for Administrators 12 - 16


EM Support Workbench: Overview

Wizard that guides you through the process of handling problems
You can perform the following tasks with the Support Workbench:
View details on problems and incidents.
Run health checks.
Generate additional diagnostic data.
Run advisors to help resolve problems.
Create and track service requests through MetaLink.
Generate incident packages.
Close problems when resolved.

12 - 17 Copyright 2007, Oracle. All rights reserved.

EM Support Workbench: Overview


The Support Workbench is an Enterprise Manager wizard that helps you through the process of
handling critical errors. It displays incident notifications, presents incident details, and enables you to
select incidents for further processing. Further processing includes running additional health checks,
invoking the IPS to package all diagnostic data about the incidents, adding SQL test cases and
selected user files to the package, filing a technical assistance request (TAR) with Oracle Support,
shipping the packaged incident information to Oracle Support, and tracking the TAR through its life
cycle.
You can perform the following tasks with the Support Workbench:
View details on problems and incidents.
Manually run health checks to gather additional diagnostic data for a problem.
Generate additional dumps and SQL test cases to add to the diagnostic data for a problem.
Run advisors to help resolve problems.
Create and track a service request through MetaLink, and add the service request number to the
problem data.
Collect all diagnostic data relating to one or more problems into an incident package and then
upload the incident package to Oracle Support Services.
Close the problem when the problem is resolved.

Oracle Database 11g: New Features for Administrators 12 - 17


Oracle Configuration Manager

12 - 18 Copyright 2007, Oracle. All rights reserved.

Oracle Configuration Manager


Enterprise Manager Support Workbench uses Oracle Configuration Manager to upload the physical
files generated by IPS to MetaLink. If Oracle Configuration Manager is not installed or properly
configured, the upload may fail. In this case, a message is displayed with a path to the incident
package zip file and a request that you upload the file to Oracle Support manually. You can upload
manually with MetaLink.
During an Oracle Database 11g installation, the Oracle Universal Installer has a special Oracle
Configuration Manager Registration screen shown in the slide. On that screen, you need to select the
Enable Oracle Configuration Manager check box and accept the license agreement before you can enter
your Customer Support Identifier (CSI), your MetaLink account username, and your country
code.
If you do not configure Oracle Configuration Manager, you will still be able to manually upload
incident packages to MetaLink.
Note: For more information about Oracle Configuration Manager, see the Oracle Configuration
Manager Installation and Administration Guide, available at the following URL:
http://www.oracle.com/technology/documentation/oem.html

Oracle Database 11g: New Features for Administrators 12 - 18


EM Support Workbench Roadmap

[Slide diagram: the roadmap cycle]
1. View critical error alerts in Enterprise Manager.
2. View problem details.
3. Gather additional diagnostic information.
4. Create a service request.
5. Package and upload diagnostic data to Oracle Support.
6. Track the SR and implement repairs.
7. Close incidents.

12 - 19 Copyright 2007, Oracle. All rights reserved.

EM Support Workbench Roadmap


The graphic gives a summary of the tasks that you complete to investigate, report, and in some cases,
resolve a problem using Enterprise Manager Support Workbench:
1. Start by accessing the Database Home page in Enterprise Manager and reviewing critical error
alerts. Select an alert for which to view details.
2. Examine the problem details and view a list of all incidents that were recorded for the problem.
Display findings from any health checks that were automatically run.
3. Optionally, run additional health checks and invoke the SQL Test Case Builder, which gathers
all required data related to a SQL problem and packages the information in a way that enables
the problem to be reproduced by Oracle Support. The type of information that the SQL Test
Case Builder gathers includes the query being executed, table and index definitions (but no data),
optimizer statistics, and initialization parameter settings.
4. Create a service request with MetaLink and optionally record the service request number with
the problem information.
5. Invoke a wizard that automatically packages all gathered diagnostic data for a problem and
uploads the data to Oracle Support. Optionally, edit the data to remove sensitive information
before uploading.
6. Optionally, maintain an activity log for the service request in the Support Workbench. Run
Oracle advisors to help repair SQL failures or corrupted data.
7. Set status for one, some, or all incidents for the problem to Closed.
Oracle Database 11g: New Features for Administrators 12 - 19
View Critical Error Alerts in Enterprise Manager

12 - 20 Copyright 2007, Oracle. All rights reserved.

View Critical Error Alerts in Enterprise Manager


You begin the process of investigating problems (critical errors) by reviewing critical error alerts on
the Database Home page. To view critical error alerts, access the Database Home page in Enterprise
Manager. From the Home page, you can look at the Diagnostic Summary section from where you
can click the Active Incidents link if there are incidents. You can also use the Alerts section and look
for critical alerts flagged as Incidents.
When you click the Active Incidents link, you access the Support Workbench page on which you can
retrieve details about all problems and corresponding incidents. From there, you can also retrieve all
Health Monitor checker runs and created packages.
Note: The tasks described in this section are all Enterprise Manager based. You can also accomplish
all of these tasks with the ADRCI command-line utility. See Oracle Database Utilities for more
information about the ADRCI utility.

Oracle Database 11g: New Features for Administrators 12 - 20


View Problem Details

12 - 21 Copyright 2007, Oracle. All rights reserved.

View Problem Details


On the Problems subpage on the Support Workbench page, click the ID of the problem you want to
investigate. This takes you to the corresponding Problem Details page.
On this page, you can see all incidents that are related to your problem. You can associate your
problem with a MetaLink service request and bug number. In the Investigate and Resolve section of
the page, you have a Self Service subpage that has direct links to the operation you can perform on
this problem. In the same section, the Oracle Support subpage has direct links to MetaLink.
The Activity Log subpage shows you the system-generated operations that have occurred on your
problem so far. This subpage allows you to add your own comments while investigating your
problem.
From the Incidents subpage, you can click a related incident ID to get to the corresponding Incident
Details page.

Oracle Database 11g: New Features for Administrators 12 - 21


View Incident Details

12 - 22 Copyright 2007, Oracle. All rights reserved.

View Incident Details


After the Incident Details page opens, the Dump Files subpage appears and lists all corresponding
dump files. You can then click the eyeglass icon for a particular dump file to visualize the file
content with its various sections.

Oracle Database 11g: New Features for Administrators 12 - 22


View Incident Details

12 - 23 Copyright 2007, Oracle. All rights reserved.

View Incident Details (continued)


On the Incident Details page, click Checker Findings to view the Checker Findings subpage. This
page displays findings from any health checks that were automatically run when the critical error was
detected. Most of the time, you have the option to select one or more findings, and invoke an advisor
to fix the issue.

Oracle Database 11g: New Features for Administrators 12 - 23


Create a Service Request

12 - 24 Copyright 2007, Oracle. All rights reserved.

Create a Service Request


Before you can package and upload diagnostic information for the problem to Oracle Support, you
must create a service request. To create a service request, you need to go to MetaLink first. MetaLink
can be accessed directly from the Problem Details page when you click the Go to Metalink button in
the Investigate and Resolve section of the page. When MetaLink opens, log in and create a service
request in the usual manner.
When done, you have the option to enter that service request number for your problem. This is entirely
optional and is for your reference only.
In the Summary section, click the Edit button that is adjacent to the SR# label, and in the window
that opens, enter the SR#, and then click OK.

Oracle Database 11g: New Features for Administrators 12 - 24


Package and Upload Diagnostic Data to
Oracle Support

12 - 25 Copyright 2007, Oracle. All rights reserved.

Package and Upload Diagnostic Data to Oracle Support


The Support Workbench provides two methods for creating and uploading an incident package: the
Quick Packaging method and the Custom Packaging method. The example in the slide shows you
how to use Quick Packaging.
Quick Packaging is a more automated method with a minimum of steps. You select a single problem,
provide an incident package name and description, and then schedule the incident package upload,
either immediately or at a specified date and time. The Support Workbench automatically places
diagnostic data related to the problem into the incident package, finalizes the incident package,
creates the zip file, and then uploads the file. With this method, you do not have the opportunity to
add, edit, or remove incident package files or add other diagnostic data such as SQL test cases. To
package and upload diagnostic data to Oracle Support:
1. On the Problem Details page, in the Investigate and Resolve section, click Quick Package. The
Create New Package page of the Quick Packaging wizard appears.
2. Enter a package name and description.
3. Enter the service request number to identify your problem.
4. Click Next, and then proceed with the remaining pages of the Quick Packaging wizard. Click
Submit on the Review page to upload the package.

Oracle Database 11g: New Features for Administrators 12 - 25


Track the SR and Implement Repairs

12 - 26 Copyright 2007, Oracle. All rights reserved.

Track the SR and Implement Repairs


After uploading diagnostic information to Oracle Support, you may perform various activities to
track the service request and implement repairs. Among these activities are the following:
Add an Oracle bug number to the problem information. To do so, on the Problem Details page, click
the Edit button that is adjacent to the Bug# label. This is for your reference only.
Add comments to the problem activity log. To do so, complete the following steps:
1. Access the Problem Details page for the problem.
2. Click Activity Log to display the Activity Log subpage.
3. In the Comment field, enter a comment, and then click Add Comment. Your comment is
recorded in the activity log.
Respond to a request by Oracle Support to provide additional diagnostics. Your Oracle Support
representative may provide instructions for gathering and uploading additional diagnostics.

Oracle Database 11g: New Features for Administrators 12 - 26


Track the SR and Implement Repairs

12 - 27 Copyright 2007, Oracle. All rights reserved.

Track the SR and Implement Repairs (continued)


On the Incident Details page, you can run an Oracle advisor to implement repairs. Access the
suggested advisor in one of the following ways:
In the Self-Service tab of the Investigate and Resolve section of the Problem Details page
On the Checker Findings subpage of the Incident Details page as shown in the slide
The advisors that help you repair critical errors are:
Data Recovery Advisor: Corrupted blocks, corrupted or missing files, and other data failures
SQL Repair Advisor: SQL statement failures

Oracle Database 11g: New Features for Administrators 12 - 27


Close Incidents and Problems

12 - 28 Copyright 2007, Oracle. All rights reserved.

Close Incidents and Problems


When a particular incident is no longer of interest, you can close it. By default, closed incidents are
not displayed on the Problem Details page. All incidents, whether closed or not, are purged after 30
days. You can disable purging for an incident on the Incident Details page.
To close incidents:
1. Access the Support Workbench home page.
2. Select the desired problem, and then click View. The Problem Details page appears.
3. Select the incidents to close and then click Close. A Confirmation page appears.
4. Click Yes on the Confirmation page to close your incident.

Oracle Database 11g: New Features for Administrators 12 - 28


Incident Packaging Configuration

12 - 29 Copyright 2007, Oracle. All rights reserved.

Incident Packaging Configuration


As already seen, you can configure various aspects of retention rules and packaging generation.
Using the Support Workbench, you can access the Incident Packaging Configuration page from the
Related Links section of the Support Workbench page by clicking the Incident Packaging
Configuration link. Here are the parameters that you can change:
Incident Metadata Retention Period: Metadata is basically information about the data. As for
incidents, it is the incident time, ID, size, problem, and so forth. Data is the actual contents of an
incident, such as traces.
Cutoff Age for Incident Inclusion: This value limits the incidents included for packaging to those
that occurred within the specified number of days before now. If the cutoff age is 90, for instance,
the system includes only the incidents that occurred within the last 90 days.
Leading Incidents Count: For every problem included in a package, the system selects a
certain number of incidents from the problem from the beginning (leading) and the end
(trailing). For example, if the problem has 30 incidents, and the leading incident count is 5 and
the trailing incident count is 4, the system includes the first 5 incidents and the last 4 incidents.
Trailing Incidents Count: See above.

Oracle Database 11g: New Features for Administrators 12 - 29


Incident Packaging Configuration (continued)
Correlation Time Proximity: This parameter is the exact time interval that defines "happened
at the same time." There is a concept of incidents/problems correlated to a given
incident/problem, that is, problems that seem to have a connection with that problem. One
criterion for correlation is time correlation: find the incidents that happened at the same time as
the incidents in a problem.
Time Window for Package Content: The time window for content inclusion extends from x hours
before the first included incident to x hours after the last included incident (where x is the number
specified in that field).
Note: You have access to more parameters if you are using the ADRCI interface. For a complete
description of all possible configurable parameters, issue the ips show configuration
command in ADRCI.

Oracle Database 11g: New Features for Administrators 12 - 30


Custom Packaging: Create New Package

12 - 31 Copyright 2007, Oracle. All rights reserved.

Custom Packaging: Create New Package


Custom Packaging is a more manual method than Quick Packaging, but gives you greater control
over the incident package contents. You can create a new incident package with one or more
problems, or you can add one or more problems to an existing incident package. You can then
perform a variety of operations on the new or updated incident package, including:
Adding or removing problems or incidents
Adding, editing, or removing trace files in the incident package
Adding or removing external files of any type
Adding other diagnostic data such as SQL test cases
Manually finalizing the incident package and then viewing incident package contents to
determine whether you must edit or remove sensitive data or remove files to reduce incident
package size.
With the Custom Packaging method, you create the zip file and request upload to Oracle Support as
two separate steps. Each of these steps can be performed immediately or scheduled for a future date
and time.
To package and upload a problem with Custom Packaging:
1. In the Problems subpage at the bottom of the Support Workbench home page, select the first
problem that you want to package, and then click Package.
2. On the Package: Select packaging mode subpage, select the Custom Packaging option, and
then click Continue.
Oracle Database 11g: New Features for Administrators 12 - 31
Custom Packaging: Create New Package (continued)
3. The Custom Packaging: Select Package page appears. To create a new incident package, select
the Create New Package option, enter an incident package name and description, and then click
OK. To add the selected problems to an existing incident package, select the Select from
Existing Packages option, select the incident package to update, and then click OK.
In the example given in the preceding slide, you decide to create a new package.

Oracle Database 11g: New Features for Administrators 12 - 32


Custom Packaging: Manipulate Incident Package

12 - 33 Copyright 2007, Oracle. All rights reserved.

Custom Packaging: Manipulate Incident Package


On the Customize Package page, you get the confirmation that your new package has been created.
This page displays the incidents that are contained in the incident package, plus a selection of
packaging tasks to choose from. You run these tasks against the new incident package or the updated
existing incident package.
As you can see from the slide, you can exclude/include incidents or files as well as many other
possible tasks.

Oracle Database 11g: New Features for Administrators 12 - 33


Custom Packaging: Finalize Incident Package

12 - 34 Copyright 2007, Oracle. All rights reserved.

Custom Packaging: Finalize Incident Package


Finalizing an incident package is used to add correlated files from other components, such as Health
Monitor, to the package. Recent trace files and log files are also included in the package.
You can finalize a package by clicking the Finish Contents Preparation link in the Packaging Tasks
section as shown in the slide. A confirmation page is displayed that lists all files that will be part of
the physical package.

Oracle Database 11g: New Features for Administrators 12 - 34


Custom Packaging: Generate Package

12 - 35 Copyright 2007, Oracle. All rights reserved.

Custom Packaging: Generate Package


After your incident package has been finalized, you can generate the package file. You need to go
back to the corresponding package page and click Generate Upload File.
The Generate Upload File page appears. On this page, select the Full or Incremental option to
generate a full incident package zip file or an incremental incident package zip file.
For a full incident package zip file, all the contents of the incident package (original contents and all
correlated data) are always added to the zip file.
For an incremental incident package zip file, only the diagnostic information that is new or modified
since the last time that you created a zip file for the same incident package is added to the zip file.
When done, select the Schedule and click Submit. If you scheduled the generation immediately, a
Processing page appears until packaging is finished. This is followed by the Confirmation page,
where you can click OK.
Note: The Incremental option is unavailable if a physical file was never created for the incident
package.

Oracle Database 11g: New Features for Administrators 12 - 35


Custom Packaging: Upload Package

12 - 36 Copyright 2007, Oracle. All rights reserved.

Custom Packaging: Upload Package


After you have generated the physical package, you can go back to the Customize Package page on
which you can click the View/Send Uploaded Files link in the Packaging Tasks section.
This takes you to the View/Send Upload Files page from where you can select your package, and
click the Send to Oracle button.
The Send to Oracle page appears. There, you can enter the service request number for your
problem and choose a Schedule. You can then click Submit.

Oracle Database 11g: New Features for Administrators 12 - 36


Viewing and Modifying Incident Packages

12 - 37 Copyright 2007, Oracle. All rights reserved.

Viewing and Modifying Incident Packages


After a package is created, you can always modify it through customization.
For example, go to the Support Workbench page and click the Packages tab. This takes you to the
Packages subpage. From this page, you can select a package and delete it, or click the package link to
go to the Package Details page. There, you can click Customize to go to the Customize Package page
from where you can manipulate your package by adding/removing problems, incidents, or files.

Oracle Database 11g: New Features for Administrators 12 - 37


Creating User-Reported Problems

12 - 38 Copyright 2007, Oracle. All rights reserved.

Creating User-Reported Problems


Critical errors generated internally to the database are automatically added to Automatic Diagnostic
Repository (ADR) and tracked in the Support Workbench. However, there may be a situation in
which you want to manually add a problem that you noticed to the ADR so that you can put that
problem through the Support Workbench workflow. An example of such a situation would be if the
performance of the database or of a particular query suddenly noticeably degraded. The Support
Workbench includes a mechanism for you to create and work with such a user-reported problem.
To create a user-reported problem, open the Support Workbench page and click the Create User-
Reported Problem link in the Related Links section. This takes you to the Create User-Reported
Problem page from where you are asked to run a corresponding advisor before continuing. This is
necessary only if you are not sure about your problem. However, if you already know exactly what is
going on, select the issue that best describes the type of problem you are encountering and click
Continue with Creation of Problem.
By clicking this button, you basically create a pseudo-problem inside the Support Workbench. This
allows you to manipulate this problem using the previously seen Support Workbench workflow for
handling critical errors. So, you end up on a Problem Details page for your issue. Note that at first the
problem does not have any diagnostic data associated with it. At this point, you need to create a
package and upload necessary trace files by customizing that package. This has already been
described previously.

Oracle Database 11g: New Features for Administrators 12 - 38


Invoking IPS Using ADRCI

[Slide diagram: the main ADRCI IPS commands and their options]

IPS CREATE PACKAGE [INCIDENT | PROBLEM | PROBLEMKEY | SECONDS | TIME]
IPS ADD [INCIDENT | NEW INCIDENTS | FILE]
IPS COPY [IN FILE | OUT FILE]
IPS REMOVE [INCIDENT | FILE]
IPS FINALIZE PACKAGE
IPS GENERATE PACKAGE
IPS SET CONFIGURATION

12 - 39 Copyright 2007, Oracle. All rights reserved.

Invoking IPS Using ADRCI


Creating a package is a two-step process: you first create the logical package, and then generate the
physical package as a zip file. Both steps can be performed using ADRCI commands. To create a
logical package, the IPS CREATE PACKAGE command is used. There are several variants of this
command that allow you to choose the contents:
IPS CREATE PACKAGE creates an empty package.
IPS CREATE PACKAGE PROBLEMKEY creates a package based on problem key.
IPS CREATE PACKAGE PROBLEM creates a package based on problem ID.
IPS CREATE PACKAGE INCIDENT creates a package based on incident ID.
IPS CREATE PACKAGE SECONDS creates a package containing all incidents generated
from the specified number of seconds ago until now.
IPS CREATE PACKAGE TIME creates a package based on the specified time range.

It is also possible to add contents to an existing package. For instance:


IPS ADD INCIDENT PACKAGE adds an incident to an existing package.
IPS ADD FILE PACKAGE adds a file inside ADR to an existing package.
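As a minimal sketch of this two-step flow (the problem ID, incident ID, package number, and output path below are hypothetical; the package number is assigned when the package is created):

adrci> ips create package problem 3
adrci> ips add incident 6589 package 1
adrci> ips generate package 1 in /home/oracle/support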

Oracle Database 11g: New Features for Administrators 12 - 39


Invoking IPS Using ADRCI (continued)
IPS COPY copies files between ADR and the external file system. It has two forms:
IN FILE, to copy an external file into ADR, associating it with an existing package, and
optionally an incident
OUT FILE, to copy a file from ADR to a location outside ADR.

IPS COPY is essentially used to copy a file out of ADR, edit it, and copy it back into ADR.
IPS FINALIZE is used to finalize a package for delivery, which means that other components,
such as the Health Monitor, are called to add their correlated files to the package. Recent trace files
and log files are also included in the package. If required, this step is run automatically when a
package is generated.
To generate the physical file, the IPS GENERATE PACKAGE command is used. The syntax is:
IPS GENERATE PACKAGE <package_number> IN <path> [COMPLETE | INCREMENTAL]
It generates a physical zip file for an existing logical package. The file name contains either COM for
complete or INC for incremental, followed by a sequence number that is incremented each time a zip
file is generated.
IPS SET CONFIGURATION is used to set IPS rules.
Note: Refer to the Oracle Database Utilities guide for more information about ADRCI.

Oracle Database 11g: New Features for Administrators 12 - 40


Health Monitor: Overview
[Slide diagram: Health Monitor checks (listed in V$HM_CHECK) are run either reactively, in response to a critical error, or manually by the DBA through EM or DBMS_HM. DB-offline checks: Redo Check, Database Cross Check. DB-online checks: Logical Block Check, Undo Segment Check, Table Row Check, Data Block Check, Transaction Check, Table Check, Table-Index Row Mismatch, Database Dictionary Check, Table-Index Cross Check. Runs are recorded in V$HM_RUN and reports are stored in the hm directory of ADR, viewable through ADRCI, EM, or DBMS_HM.]

12 - 41 Copyright 2007, Oracle. All rights reserved.

Health Monitor: Overview


Beginning with Release 11g, the Oracle database includes a framework called Health Monitor for
running diagnostic checks on various components of the database.
Health Monitor checkers examine various components of the database, including files, memory,
transaction integrity, metadata, and process usage. These checkers generate reports of their findings
as well as recommendations for resolving problems. Health Monitor checks can be run in two ways:
Reactive: The fault diagnosability infrastructure can run Health Monitor checks automatically
in response to critical errors.
Manual: As a DBA, you can manually run Health Monitor checks by using either the DBMS_HM
PL/SQL package or the Enterprise Manager interface.
In the slide, you can see some of the checks that Health Monitor can run. For a complete description
of all possible checks, look at V$HM_CHECK. These health checks fall into one of two categories:
DB-online: These checks can be run while the database is open (that is, in OPEN mode or
MOUNT mode).
DB-offline: In addition to being runnable while the database is open, these checks can also be
run when the instance is available and the database itself is closed (that is, in NOMOUNT mode).
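For example, the following query lists the available checks and whether they can run offline (a small sketch; the OFFLINE_CAPABLE and INTERNAL_CHECK column names are assumptions about V$HM_CHECK on your release):

SQL> SELECT name, offline_capable FROM v$hm_check WHERE internal_check = 'N';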

Oracle Database 11g: New Features for Administrators 12 - 41


Health Monitor: Overview (continued)
After a checker has run, it generates a report of its execution. This report contains information about
the checkers findings, including the priorities (low, high, or critical) of the findings, descriptions of
the findings and their consequences, and basic statistics about the execution. Health Monitor
generates reports in XML and stores the reports in ADR. You can view these reports by using
V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager.
Note: Redo Check and Database Cross Check are DB-offline checks. All other checks are DB-online
checks. There are around 25 checks you can run.

Oracle Database 11g: New Features for Administrators 12 - 42


Running Health Checks Manually: EM Example

12 - 43 Copyright 2007, Oracle. All rights reserved.

Running Health Checks Manually: EM Example


Enterprise Manager provides an interface for running Health Monitor checkers. You can find this
interface in the Checkers tab on the Advisor Central page. The page lists each checker type, and you
can run a checker by clicking it and then OK on the corresponding checker page after you have
entered the parameters for the run. The slide shows how you can run the Data Block Checker
manually.
After a check is completed, you can view the corresponding checker run details by selecting the
checker run from the Results table and clicking Details. Checker runs can be reactive or manual.
On the Findings subpage you can see the various findings and corresponding recommendations
extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION.
If you click View XML Report on the Runs subpage, you can view the run report in XML format.
Viewing the XML report in Enterprise Manager generates the report for the first time if it is not yet
generated in your ADR. You can then view the report using ADRCI without needing to generate it.
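You can also query the findings directly from SQL; for example (a sketch only: the RUN_ID, NAME, PRIORITY, and STATUS column names are assumptions based on the report fields):

SQL> SELECT run_id, name, priority, status FROM v$hm_finding ORDER BY run_id;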

Oracle Database 11g: New Features for Administrators 12 - 43


Running Health Checks Manually:
PL/SQL Example
SQL> exec dbms_hm.run_check('Dictionary Integrity Check',
'DicoCheck',0,'TABLE_NAME=tab$');

SQL> set long 100000


SQL> select dbms_hm.get_run_report('DicoCheck') from dual;

DBMS_HM.GET_RUN_REPORT('DICOCHECK')
--------------------------------------------------------------------------------
Basic Run Information (Run Name,Run Id,Check Name,Mode,Status)
Input Paramters for the Run
TABLE_NAME=tab$
CHECK_MASK=ALL
Run Findings And Recommendations
Finding
Finding Name : Dictionary Inconsistency
Finding ID : 22
Type : FAILURE
Status : OPEN
Priority : CRITICAL
Message : SQL dictionary health check: invalid column number 8 on
object TAB$ failed
Message : Damaged rowid is AAAAACAABAAAS7PAAB - description: Object
SCOTT.TABJFV is referenced

12 - 44 Copyright 2007, Oracle. All rights reserved.

Running Health Checks Manually: PL/SQL Example


You can use the DBMS_HM.RUN_CHECK procedure for running a health check. To call
RUN_CHECK, supply the name of the check found in V$HM_CHECK, the name for the run (this is
just a label used to retrieve reports later), and the corresponding set of input parameters for
controlling its execution. You can view these parameters by using V$HM_CHECK_PARAM.
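For instance, a query along these lines lists the input parameters of a given check (a sketch; the ID, CHECK_ID, NAME, and DEFAULT_VALUE column names are assumptions):

SQL> SELECT p.name, p.default_value
     FROM v$hm_check c, v$hm_check_param p
     WHERE p.check_id = c.id AND c.name = 'Dictionary Integrity Check';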
In the example in the slide, you want to run a Dictionary Integrity Check for the TAB$ table. You
call this run DICOCHECK, and you do not want to set any timeout for this check.
After DICOCHECK is executed, you execute the DBMS_HM.GET_RUN_REPORT function to get the
report extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. The output
clearly shows you that a critical error was found in TAB$. This table contains an entry for a table
with an invalid number of columns. Furthermore, the report gives you the name of the damaged table
in TAB$.
When you call the GET_RUN_REPORT function, it generates the XML report file in the HM directory
of your ADR. For this example, the file is called HMREPORT_DicoCheck.hm.
Note: Refer to the Oracle Database PL/SQL Packages and Types Reference for more information
about DBMS_HM.

Oracle Database 11g: New Features for Administrators 12 - 44


Viewing HM Reports Using the ADRCI Utility
adrci> show hm_run

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
HM RUN RECORD 1
**********************************************************
RUN_ID 1
RUN_NAME HM_RUN_1
CHECK_NAME DB Structure Integrity Check
NAME_ID 2
MODE 2
START_TIME 2007-07-02 17:31:54.271917 +07:00
RESUME_TIME <NULL>
END_TIME 2007-07-02 17:31:57.579834 +07:00
MODIFIED_TIME 2007-07-02 17:31:57.579834 +07:00
TIMEOUT 0
FLAGS 0
STATUS 5
SRC_INCIDENT_ID 0
NUM_INCIDENTS 0
ERR_NUMBER 0
REPORT_FILE <NULL>

adrci> create report hm_run HM_RUN_1
adrci> show report hm_run HM_RUN_1

12 - 45 Copyright 2007, Oracle. All rights reserved.

Viewing HM Reports Using the ADRCI Utility


You can create and view Health Monitor checker reports using the ADRCI utility. To do that, ensure
that operating system environment variables such as ORACLE_HOME are set properly, and then enter
the following command at the operating system command prompt: adrci.
The utility starts and displays its prompt as shown in the slide. Optionally, you can change the
current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET
HOMEPATH command to change the current ADR home.
You can then enter the SHOW HM_RUN command to list all the checker runs registered in ADR and
visible from V$HM_RUN. Locate the checker run for which you want to create a report and note the
checker run name using the corresponding RUN_NAME field. The REPORT_FILE field contains a
file name if a report already exists for this checker run. Otherwise, you can generate the report using
the CREATE REPORT HM_RUN command as shown in the slide. To view the report, use the SHOW
REPORT HM_RUN command.

Oracle Database 11g: New Features for Administrators 12 - 45


SQL Repair Advisor: Overview
[Slide diagram: A SQL statement crashes and an incident, with trace files, is automatically generated in ADR; the DBA is alerted, investigates, and runs the SQL Repair Advisor; a SQL patch is generated; the DBA accepts the SQL patch and executes the statement again; the patched SQL statement now executes successfully.]

12 - 46 Copyright 2007, Oracle. All rights reserved.

SQL Repair Advisor: Overview


You run the SQL Repair Advisor after a SQL statement fails with a critical error that generates a
problem in ADR. The advisor analyzes the statement and in many cases recommends a patch to
repair the statement. If you implement the recommendation, the applied SQL patch circumvents the
failure by causing the query optimizer to choose an alternate execution plan for future executions.
This is done without changing the SQL statement itself.
Note: In case no workaround is found by the SQL Repair Advisor, you are still able to package the
incident files and send the corresponding diagnostic data to Oracle Support.

Oracle Database 11g: New Features for Administrators 12 - 46


Accessing the SQL Repair Advisor Using EM

12 - 47 Copyright 2007, Oracle. All rights reserved.

Accessing the SQL Repair Advisor Using EM


There are basically two ways to access the SQL Repair Advisor from Enterprise Manager.
The first and the easiest way is when you get alerted in the Diagnostic Summary section of the
database home page. Following a SQL statement crash that generates an incident in ADR, you are
automatically alerted through the Active Incidents field. You can click the corresponding link to get
to the Support Workbench Problems page from where you can click the corresponding problem ID
link. This takes you to the Problem Details page from where you can click the SQL Repair Advisor
link in the Investigate and Resolve section of the page.

Oracle Database 11g: New Features for Administrators 12 - 47


Accessing the SQL Repair Advisor Using EM

12 - 48 Copyright 2007, Oracle. All rights reserved.

Accessing the SQL Repair Advisor Using EM (continued)


If the SQL statement crash incident is no longer active, you can always go to the Advisor Central
page, where you can click the SQL Advisors link and choose the Click here to go to Support
Workbench link in the SQL Advisor section of the SQL Advisors page. This takes you directly to
the Problem Details page, where you can click the SQL Repair Advisor link in the Investigate and
Resolve section of the page.
Note: To access the SQL Repair Advisor in case of nonincident SQL failures, you can go either to
the SQL Details page or to SQL Worksheet.

Oracle Database 11g: New Features for Administrators 12 - 48


Using the SQL Repair Advisor from EM

12 - 49 Copyright 2007, Oracle. All rights reserved.

Using the SQL Repair Advisor from EM


On the SQL Repair Advisor: SQL Incident Analysis page, specify a Task Name, a Task Description,
and a Schedule. When done, click Submit to schedule a SQL diagnostic analysis task. If you specify
Immediately, you end up on the Processing: SQL Repair Advisor Task page that shows you the
various steps of the task execution.

Oracle Database 11g: New Features for Administrators 12 - 49


Using the SQL Repair Advisor from EM

12 - 50 Copyright 2007, Oracle. All rights reserved.

Using the SQL Repair Advisor from EM (continued)


After the SQL Repair Advisor task executes, you are sent to the SQL Repair Results page for that task.
On this page, you can see a corresponding Recommendations section, especially if a SQL patch was
generated to fix your problem. As shown in the slide, you can select the statement for which you
want to apply the generated SQL Patch and click View. This takes you to the Repair
Recommendations for SQL ID page from where you can ask the system to implement the SQL
Patch by clicking Implement after selecting the corresponding Findings. You then get a confirmation
for the implementation and you can execute your SQL statement again.

Oracle Database 11g: New Features for Administrators 12 - 50


Using SQL Repair Advisor from PL/SQL: Example

declare
rep_out clob;
t_id varchar2(50);
begin
t_id := dbms_sqldiag.create_diagnosis_task(
sql_text => 'delete from t t1 where t1.a = ''a'' and rowid <> (select max(rowid)
from t t2 where t1.a= t2.a and t1.b = t2.b and t1.d=t2.d)',
task_name => 'sqldiag_bug_5869490',
problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR);

dbms_sqltune.set_tuning_task_parameter(t_id,'_SQLDIAG_FINDING_MODE',
dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS);
dbms_sqldiag.execute_diagnosis_task (t_id);
rep_out := dbms_sqldiag.report_diagnosis_task (t_id, DBMS_SQLDIAG.TYPE_TEXT);
dbms_output.put_line ('Report : ' || rep_out);
end;
/

execute dbms_sqldiag.accept_sql_patch(task_name => 'sqldiag_bug_5869490',


task_owner => 'SCOTT', replace => TRUE);

12 - 51 Copyright 2007, Oracle. All rights reserved.

Using the SQL Repair Advisor from PL/SQL: Example


You can also invoke the SQL Repair Advisor directly from PL/SQL.
After you get alerted about an incident SQL failure, you can execute a SQL Repair Advisor task by
using the DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK function as illustrated in the slide. You
need to specify the SQL statement for which you want the analysis to be done, as well as a task name
and a problem type you want to analyze (possible values are
PROBLEM_TYPE_COMPILATION_ERROR and PROBLEM_TYPE_EXECUTION_ERROR).
You can then set parameters for the created task by using the
DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure.
When you are ready, you can execute the task by using the
DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK procedure.
Finally, you can get the task report by using the DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK
function.
In the example given in the slide, it is assumed that the report asks you to implement a SQL Patch to
fix the problem. You can then use the DBMS_SQLDIAG.ACCEPT_SQL_PATCH procedure to
implement the SQL Patch.

Oracle Database 11g: New Features for Administrators 12 - 51


Viewing, Disabling, or Removing a SQL Patch

12 - 52 Copyright 2007, Oracle. All rights reserved.

Viewing, Disabling, or Removing a SQL Patch


After you apply a SQL patch with the SQL Repair Advisor, you may want to view it to confirm its
presence, disable it, or remove it. One reason to remove a patch is if you install a later release of the
Oracle database that fixes the problem that caused the failure in the nonpatched SQL statement.
To view, disable/enable, or remove a SQL Patch, access the Server page in Enterprise Manager and
click the SQL Plan Control link in the Query Optimizer section of the page. This takes you to the
SQL Plan Control page. On this page, click the SQL Patch tab.
From the resulting SQL Patch subpage, locate the desired patch by examining the associated SQL
statement. Select it, and perform the corresponding task: Disable, Enable, or Delete.
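From the command line, a sketch like the following can list existing patches and remove one (this assumes the DBA_SQL_PATCHES view and the DBMS_SQLDIAG.DROP_SQL_PATCH procedure; the patch name shown is hypothetical):

SQL> SELECT name, status, created FROM dba_sql_patches;
SQL> EXEC dbms_sqldiag.drop_sql_patch(name => 'SYS_SQLPTCH_012345');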

Oracle Database 11g: New Features for Administrators 12 - 52


Using the SQL Test Case Builder

12 - 53 Copyright 2007, Oracle. All rights reserved.

Using the SQL Test Case Builder


The SQL Test Case Builder automates the somewhat difficult and time-consuming process of
gathering as much information as possible about a SQL-related problem and the environment in
which it occurred, so that the problem can be reproduced and tested by Oracle Support Services. The
information gathered by the SQL Test Case Builder includes the query being executed, table and
index definitions (but not the actual data), PL/SQL functions, procedures and packages, optimizer
statistics, and initialization parameter settings.
From the Support Workbench page, to access the SQL Test Case Builder:
1. Click the corresponding Problem ID to open the problem details page.
2. Click the Oracle Support tab.
3. Click Generate Additional Dumps and Test Cases.
4. On the Additional Dumps and Test Cases page, click the icon in the Go To Task column to
run the SQL Test Case Builder against your particular Incident ID.
The output of the SQL Test Case Builder is a SQL script that contains the commands required to re-
create all the necessary objects and the environment.
Note: You can also invoke the SQL Test Case Builder by using the DBMS_SQLDIAG.
EXPORT_SQL_TESTCASE_DIR_BY_INC function. This function takes the incident ID as well as
a directory object. It generates its output for the corresponding incident in the specified directory.

Oracle Database 11g: New Features for Administrators 12 - 53


Data Recovery Advisor

The Oracle database provides outstanding tools for repairing problems: lost files, corrupt blocks, and so on.
Analyzing the underlying problem and choosing the right solution is often the biggest component of down time.
The advisor analyzes failures based on symptoms (for example, "Open failed because data files are missing").
It intelligently determines repair strategies:
Aggregates failures for efficient repair (for example, for many bad blocks, restore the entire file)
Presents only feasible repair options (Are there backups? Is there a standby database?)
Ranked by repair time and data loss
The advisor can automatically perform repairs.

12 - 54 Copyright 2007, Oracle. All rights reserved.

Intelligent Resolution: Data Recovery Advisor


Data Recovery Advisor: Enterprise Manager integrates with database health checks and RMAN to
display data corruption problems, assess the extent of the problem (critical, high priority, or low
priority), describe the impact of the problem, recommend repair options, conduct a feasibility check
of the customer-chosen option, and automate the repair process.
Note: For more information about the Data Recovery Advisor, refer to the corresponding lesson in
this course.

Oracle Database 11g: New Features for Administrators 12 - 54


Summary

In this lesson, you should have learned how to:


Set up Automatic Diagnostic Repository
Use the Support Workbench
Run health checks
Use the SQL Repair Advisor

12 - 55 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 12 - 55


Practice 12: Overview

This practice covers the following topics:


Using the Health Monitor and the Support Workbench
Using the SQL Repair Advisor

12 - 56 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 12 - 56


Using the Data Recovery Advisor

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Describe your options for repairing data failure
Use the new RMAN data repair commands to:
List failures
Receive a repair advice
Repair failures
Perform proactive failure checks
Query the Data Recovery Advisor views

13 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 13 - 2


Repairing Data Failures

Data Guard provides failover to a standby database, so that your operations are not affected by down time.
Data Recovery Advisor, a new feature in Oracle
Database 11g, analyzes failures based on symptoms
and determines repair strategies:
Aggregating multiple failures for efficient repair
Presenting a single, recommended repair option
Performing automatic repairs at your request
The Flashback technology protects the life cycle of a
row and assists in repairing logical problems.

13 - 3 Copyright 2007, Oracle. All rights reserved.

Repairing Data Failures


A data failure is a missing, corrupted, or inconsistent data, log, control, or other file, whose content
the Oracle instance cannot access. When your database has a problem, analyzing the underlying
cause and choosing the correct solution is often the biggest component of down time. Oracle
Database 11g offers several new and enhanced tools for analyzing and repairing database problems.
Data Guard, by allowing you to fail over to a standby database (that has its own copy of the
data), enables you to continue operation if the primary database gets a data failure. Then, after
failing over to the standby, you can take the time to repair the failed database (old primary)
without worrying about the impact on your applications. There are many enhancements to Data
Guard.
Data Recovery Advisor is a built-in tool that automatically diagnoses data failures and reports
the appropriate repair option. If, for example, the Data Recovery Advisor discovers many bad
blocks, it recommends restoring the entire file, rather than repairing individual blocks.
Therefore, it assists you in performing the correct repair for a failure. You can either repair a data
failure manually or request the Data Recovery Advisor to execute the repair for you. This
decreases the amount of time to recover from a failure.

Oracle Database 11g: New Features for Administrators 13 - 3


Repairing Data Failures (continued)
You can use the Flashback technology to repair logical problems.
Flashback Archive maintains persistent changes of table data for a specified period of time,
allowing you to access the archived data.
Flashback Transaction allows you to back out of a transaction and all conflicting
transactions with a single click. For more details, see the lesson titled Using Flashback and
LogMiner.
What you already know:
RMAN automates data file media recovery (a common form of recovery that protects against
logical and physical failures) and block media recovery (that recovers individual blocks rather
than a whole data file). For more details, see the lesson titled Using RMAN Enhancements.
Automatic Storage Management (ASM) protects against storage failures.

Oracle Database 11g: New Features for Administrators 13 - 4


Data Recovery Advisor

Fast detection, analysis, and repair of failures


Minimizing disruptions for users
Down-time and run-time failures
User interfaces:
EM GUI interface
(several paths)
RMAN command line
Supported database configurations:
Single-instance
Not RAC
Supporting failover to standby, but not analysis and
repair of standby databases

13 - 5 Copyright 2007, Oracle. All rights reserved.

Functionality of the Data Recovery Advisor


The Data Recovery Advisor automatically gathers data failure information when an error is
encountered. In addition, it can proactively check for failures. In this mode, it can potentially detect
and analyze data failures before a database process discovers the corruption and signals an error.
(Note that repairs are always under human control.)
Data failures can be very serious. For example, if your current log files are missing, you cannot start
your database. Some data failures (such as block corruptions in data files) are not catastrophic, in that
they do not take the database down or prevent you from starting the Oracle instance. The Data
Recovery Advisor handles both cases: the one when you cannot start up the database (because some
required database files are missing, inconsistent, or corrupted) and the one when file corruptions are
discovered during run time.
The preferred way to address serious data failures is to first fail over to a standby database, if you are
in a Data Guard configuration. This allows users to come back online as soon as possible. Then you
need to repair the primary cause of the data failure, but fortunately, this does not impact your users.

Oracle Database 11g: New Features for Administrators 13 - 5


User Interfaces
The Data Recovery Advisor is available from Enterprise Manager (EM) Database Control and
Grid Control. When failures exist, there are several ways to access the Data Recovery Advisor.
The following examples all begin on the Database Instance home page:
Availability tabbed page > Perform Recovery > Advise and Recover
Active Incidents link > on the Support Workbench Problems page: Checker Findings
tabbed page > Launch Recovery Advisor
Database Instance Health > click the specific link, for example, ORA 1578 in the Incidents
section > Support Workbench, Problems Detail page > Data Recovery Advisor
Database Instance Health > Related Links section: Support Workbench > Checker Findings
tabbed page: Launch Recovery Advisor
Related Link: Advisor Central > Advisors tabbed page: Data Recovery Advisor
Related Link: Advisor Central > Checkers tabbed page: Details > Run Detail tabbed page:
Launch Recovery Advisor
You can also use it via the RMAN command-line. For example:
rman target / nocatalog
RMAN> list failure all;
Supported Database Configurations
In the current release, the Data Recovery Advisor supports single-instance databases. Oracle Real
Application Clusters (RAC) databases are not supported.
The Data Recovery Advisor cannot use blocks or files transferred from a standby database to
repair failures on a primary database. Also, you cannot use the Data Recovery Advisor to diagnose
and repair failures on a standby database. However, the Data Recovery Advisor does support
failover to a standby database as a repair option (as mentioned above).

Oracle Database 11g: New Features for Administrators 13 - 6


Data Recovery Advisor

Reducing down time by eliminating confusion:

1. Assess data failures.          (Health Monitor)
2. List failures by severity.
3. Advise on repair.              (Data Recovery Advisor)
4. Choose and execute repair.
5. Perform proactive checks.      (DBA)

13 - 7 Copyright 2007, Oracle. All rights reserved.

Data Recovery Advisor


The automatic diagnostic workflow in Oracle Database 11g performs workflow steps for you. With
the Data Recovery Advisor, you only need to initiate an advice and a repair.
1. The Health Monitor automatically executes checks and logs failures and their symptoms as
findings into Automatic Diagnostic Repository (ADR). For more details about the Health
Monitor, see the eStudy titled Diagnosability.
2. The Data Recovery Advisor consolidates findings into failures. It lists the results of previously
executed assessments with failure severity (critical or high).
3. When you ask for repair advice on a failure, the Data Recovery Advisor maps failures to
automatic and manual repair options, checks basic feasibility, and presents you with the repair
advice.
4. You can choose to manually execute a repair or request the Data Recovery Advisor to do it for
you.
5. In addition to the automatic, primarily reactive checks of the Health Monitor and Data
Recovery Advisor, Oracle recommends that you also use the VALIDATE command as a
proactive check.

Oracle Database 11g: New Features for Administrators 13 - 7


Assessing Data Failures

[Screenshots: (1) Database Instance Health page, (2) error link, (3) Problem Details page]

13 - 8 Copyright 2007, Oracle. All rights reserved.

Assessing Data Failures


This slide illustrates different access routes, which you can use to navigate between the Health
Monitor and the Data Recovery Advisor. It also demonstrates the interaction of Health Monitor and
Data Recovery Advisor.

Oracle Database 11g: New Features for Administrators 13 - 8


Data Failures

13 - 9 Copyright 2007, Oracle. All rights reserved.

Data Failures
Data failures are detected by checks, which are diagnostic procedures that assess the health of the
database or its components. Each check can diagnose one or more failures, which are mapped to a
repair.
Checks can be reactive or proactive. When an error occurs in the database, reactive checks are
automatically executed. You can also initiate proactive checks, for example, by executing the
VALIDATE DATABASE command.
In Enterprise Manager, select Availability > Perform Recovery, or click the Perform Recovery
button, if you find your database in a down or mounted state.

Oracle Database 11g: New Features for Administrators 13 - 9


Data Failure: Examples

Inaccessible components, for example:


Missing data files at the OS level
Incorrect access permissions
Offline tablespace, and so on
Physical corruptions, such as block checksum failures
or invalid block header field values
Logical corruptions, such as inconsistent dictionary,
corrupt row piece, corrupt index entry, or corrupt
transaction
Inconsistencies, such as a control file that is older or newer
than the data files and online redo logs
I/O failures, such as the limit on the number of open files being
exceeded, inaccessible channels, or network or I/O errors
13 - 10 Copyright 2007, Oracle. All rights reserved.

Data Failure: Examples


The Data Recovery Advisor can analyze failures and suggest repair options for issues, as outlined in
the slide.

Oracle Database 11g: New Features for Administrators 13 - 10


Listing Data Failures

13 - 11 Copyright 2007, Oracle. All rights reserved.

Listing Data Failures


On the Perform Recovery page, click Advise and Repair.
The View and Manage Failures page is the home page of the Data Recovery Advisor. The example
in the screenshot shows how the Data Recovery Advisor lists data failures and details. Activities that
you can initiate include advising, setting priorities, and closing failures.
The underlying RMAN LIST FAILURE command can also display data failures and details.
Failure assessments are not initiated here; they are previously executed and stored in ADR.
Failures are listed in decreasing priority order: CRITICAL, HIGH, and LOW. Failures with the same
priority are listed in increasing time-stamp order.

Oracle Database 11g: New Features for Administrators 13 - 11


Advising on Repair

[Screenshots: (1) after a manual repair; (2) automatic repair, steps 2a and 2b]

13 - 12 Copyright 2007, Oracle. All rights reserved.

Advising on Repair
On the View and Manage Failures page, after you click the Advise button, the Data Recovery
Advisor generates a manual checklist. Two types of failures could appear:
Failures that require human intervention. An example is a connectivity failure, when a disk cable
is not plugged in.
Failures that are repaired faster if you can undo a previous erroneous action. For example, if you
renamed a data file by mistake, it is faster to rename it back than to initiate RMAN restoration
from backup.
You can initiate the following actions:
Click Re-assess Failures after you have performed a manual repair. Failures that are
resolved are implicitly closed; any remaining ones are displayed on the View and Manage
Failures page.
Click Continue with Advise to initiate an automated repair. When the Data Recovery Advisor
generates an automated repair option, it generates a script that shows you how RMAN plans to
repair the failure. Click Continue, if you want to execute the automated repair. If you do not
want the Data Recovery Advisor to automatically repair the failure, then you can use this script
as a starting point for your manual repair.

Oracle Database 11g: New Features for Administrators 13 - 12


Executing Repairs

. . . In less than
one second

13 - 13 Copyright 2007, Oracle. All rights reserved.

Executing Repairs
In the preceding example, the Data Recovery Advisor executes a successful repair in less than one
second.

Oracle Database 11g: New Features for Administrators 13 - 13


Data Recovery Advisor
RMAN Command-Line Interface

RMAN Command     Action
---------------  -------------------------------------------------
LIST FAILURE     Lists previously executed failure assessments
ADVISE FAILURE   Displays the recommended repair option
REPAIR FAILURE   Repairs and closes failures (after ADVISE in the
                 same RMAN session)
CHANGE FAILURE   Changes or closes one or more failures

13 - 14 Copyright 2007, Oracle. All rights reserved.

Data Recovery Advisor: RMAN Command-Line Interface


If you suspect or know that a database failure has occurred, then use the LIST FAILURE command
to obtain information about these failures. You can list all or a subset of failures and restrict output in
various ways. Failures are uniquely identified by failure numbers. Note that these numbers are not
consecutive, so gaps between failure numbers have no significance.
The ADVISE FAILURE command displays a recommended repair option for the specified failures.
It prints a summary of the input failure and implicitly closes all open failures that are already fixed.
The default behavior when no option is used is to advise on all the CRITICAL and HIGH priority
failures that are recorded in ADR.
The REPAIR FAILURE command is used after an ADVISE FAILURE command within the same
RMAN session. By default, the command uses the single, recommended repair option of the last
ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE
command initiates an implicit ADVISE FAILURE command. After completing the repair, the
command closes the failure.
The CHANGE FAILURE command changes the failure priority or closes one or more failures. You
can change a failure priority only for HIGH or LOW priorities. Open failures are closed implicitly
when a failure is repaired. However, you can also explicitly close a failure.

Oracle Database 11g: New Features for Administrators 13 - 14


Listing Data Failures

The RMAN LIST FAILURE command lists previously


executed failure assessments.
Including newly diagnosed failures
Removing closed failures (by default)

Syntax:
LIST FAILURE
[ ALL | CRITICAL | HIGH | LOW | CLOSED |
failnum[,failnum,] ]
[ EXCLUDE FAILURE failnum[,failnum,] ]
[ DETAIL ]

13 - 15 Copyright 2007, Oracle. All rights reserved.

Listing Data Failures


The RMAN LIST FAILURE command lists failures. If the target instance uses a recovery catalog,
it can be in STARTED mode; otherwise, it must be in MOUNTED mode. The LIST FAILURE
command does not initiate checks to diagnose new failures; rather, it lists the results of previously
executed assessments. Repeatedly executing the LIST FAILURE command revalidates all existing
failures. If the database diagnoses new ones (between command executions), they are displayed. If a
user manually fixes failures, or if transient failures disappear, then the Data Recovery Advisor
removes these failures from the LIST FAILURE output.
The following is a description of the syntax:
failnum: Number of the failure to display repair options for
ALL: List failures of all priorities.
CRITICAL: List failures of CRITICAL priority and OPEN status. These failures require
immediate attention, because they make the whole database unavailable (for example, a missing
control file).
HIGH: List failures of HIGH priority and OPEN status. These failures make a database partly
unavailable or unrecoverable; so they should be repaired quickly (for example, missing archived
redo logs).
LOW: List failures of LOW priority and OPEN status. Failures of a low priority can wait, until
more important failures are fixed.
CLOSED: List only closed failures.

Oracle Database 11g: New Features for Administrators 13 - 15


Listing Data Failures (continued)
EXCLUDE FAILURE: Exclude the specified list of failure numbers from the list.
DETAIL: List failures by expanding the consolidated failure. For example, if there are
multiple block corruptions in a file, the DETAIL option lists each one of them.
See the Oracle Database Backup and Recovery Reference for details of the command syntax.
Example of Listing Data Failures
[oracle1@stbbv06 orcl]$ rman
Recovery Manager: Release 11.1.0.5.0 - Beta on Thu Jun 21 13:33:52 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> connect target sys/oracle@orcl


connected to target database: ORCL (DBID=1153427045)

RMAN>
RMAN> LIST FAILURE;

List of Database Failures


=========================

Failure ID Priority Status Time Detected Summary


---------- -------- --------- ------------- -------
142 HIGH OPEN 21-JUN-07 One or more non-system
datafiles are missing

RMAN> LIST FAILURE DETAIL;

List of Database Failures


=========================

Failure ID Priority Status Time Detected Summary


---------- -------- --------- ------------- -------
142 HIGH OPEN 21-JUN-07 One or more non-system
datafiles are missing
List of child failures for parent failure ID 142
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
306 HIGH OPEN 21-JUN-07 Datafile 5:
'/u01/app/oracle/oradata/orcl/example01.dbf' is missing
Impact: Some objects in tablespace EXAMPLE might be unavailable
300 HIGH OPEN 21-JUN-07 Datafile 4:
'/u01/app/oracle/oradata/orcl/users01.dbf' is missing
Impact: Some objects in tablespace USERS might be unavailable

RMAN>

Oracle Database 11g: New Features for Administrators 13 - 16


Advising on Repair

The RMAN ADVISE FAILURE command:


Displays a summary of input failure list
Includes a warning, if new failures appeared in ADR
Displays a manual checklist
Lists a single recommended repair option
Generates a repair script (for automatic or manual
repair)
. . .
Repair script:
/u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2979128860.hm
RMAN>

13 - 17 Copyright 2007, Oracle. All rights reserved.

Advising on Repair
The RMAN ADVISE FAILURE command displays a recommended repair option for the specified
failures. If this command is executed from within Enterprise Manager, then Data Guard is presented
as a repair option. (This is not the case if the command is executed directly from the RMAN
command line). The ADVISE FAILURE command prints a summary of the input failure. The
command implicitly closes all open failures that are already fixed.
The default behavior (when no option is used) is to advise on all the CRITICAL and HIGH priority
failures that are recorded in Automatic Diagnostic Repository (ADR). If a new failure has been
recorded in ADR since the last LIST FAILURE command, this command includes a WARNING
before advising on all CRITICAL and HIGH failures.
Two general repair options are implemented: no-data-loss and data-loss repairs.
When the Data Recovery Advisor generates an automated repair option, it generates a script that
shows you how RMAN plans to repair the failure. If you do not want the Data Recovery Advisor to
automatically repair the failure, then you can use this script as a starting point for your manual repair.
The operating system (OS) location of the script is printed at the end of the command output. You
can examine this script, customize it (if needed), and also execute it manually if, for example, your
audit trail requirements recommend such an action.

Oracle Database 11g: New Features for Administrators 13 - 17


Advising on Repair (continued)
Syntax
ADVISE FAILURE
[ ALL | CRITICAL | HIGH | LOW | failnum[,failnum,] ]
[ EXCLUDE FAILURE failnum [,failnum,] ]

Command Line Example


RMAN> ADVISE FAILURE;

List of Database Failures


=========================

Failure ID Priority Status Time Detected Summary


---------- -------- --------- ------------- -------
142 HIGH OPEN 21-JUN-07 One or more non-system
datafiles are missing
List of child failures for parent failure ID 142
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
306 HIGH OPEN 21-JUN-07 Datafile 5:
'/u01/app/oracle/oradata/orcl/example01.dbf' is missing
Impact: Some objects in tablespace EXAMPLE might be unavailable
300 HIGH OPEN 21-JUN-07 Datafile 4:
'/u01/app/oracle/oradata/orcl/users01.dbf' is missing
Impact: Some objects in tablespace USERS might be unavailable

analyzing automatic repair options; this may take some time


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions


========================
no manual actions available

Optional Manual Actions


=======================
1. If file /u01/app/oracle/oradata/orcl/users01.dbf was unintentionally
renamed or moved, restore it
2. If file /u01/app/oracle/oradata/orcl/example01.dbf was unintentionally
renamed or moved, restore it

Automated Repair Options


========================
Option Repair Description
------ ------------------
1 Restore and recover datafile 4; Restore and recover datafile 5
Strategy: The repair includes complete media recovery with no data loss
Repair script:
/u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_3909424189.hm

RMAN>

Oracle Database 11g: New Features for Administrators 13 - 18


Executing Repairs

The RMAN REPAIR FAILURE command:


Follows the ADVISE FAILURE command
Repairs the specified failure
Closes the repaired failure

Syntax:
REPAIR FAILURE
[PREVIEW]
[NOPROMPT]

Example:
RMAN> repair failure;

13 - 19 Copyright 2007, Oracle. All rights reserved.

Executing Repairs
This command should be used after an ADVISE FAILURE command in the same RMAN session.
By default (with no option), the command uses the single, recommended repair option of the last
ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE
command initiates an implicit ADVISE FAILURE command.
By default, you are asked to confirm the command execution, because you may be requesting
substantial changes that take time to complete. During execution of a repair, the output of the
command indicates what phase of the repair is being executed.
After completing the repair, the command closes the failure.
You cannot run multiple concurrent repair sessions. However, concurrent REPAIR PREVIEW
sessions are allowed.
PREVIEW means: Do not execute the repair(s); instead, display the previously generated RMAN
script with all repair actions and comments.
NOPROMPT means: Do not ask for confirmation.

Oracle Database 11g: New Features for Administrators 13 - 19


Example of Repairing a Failure
RMAN> REPAIR FAILURE PREVIEW;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2101176755.hm

contents of repair script:


# restore and recover datafile
restore datafile 4;
recover datafile 4;

RMAN> REPAIR FAILURE;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2101176755.hm

contents of repair script:


# restore and recover datafile
restore datafile 4;
recover datafile 4;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting restore at 21-JUN-07


using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to
/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: reading from backup piece
/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_nnnd
f_TAG20070621T043615_37m7gpfp_.bkp
channel ORA_DISK_1: piece
handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_
mf_nnndf_TAG20070621T043615_37m7gpfp_.bkp tag=TAG20070621T043615
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-JUN-07
Starting recover at 21-JUN-07
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 20 is already on disk as file
/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_2
0_37m7lhgx_.arc
archived log for thread 1 with sequence 21 is already on disk as file
/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_2
1_37m7llgp_.arc
archived log for thread 1 with sequence 22 is already on disk as file
/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_2
2_37m7logv_.arc
archived log for thread 1 with sequence 23 is already on disk as file
/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_mf_1_2
3_37n046y3_.arc

Oracle Database 11g: New Features for Administrators 13 - 20


Example of Repairing a Failure (continued)
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=16
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=17
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=18
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=19
channel ORA_DISK_1: reading from backup piece
/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_mf_annn
n_TAG20070621T043805_37m7l46t_.bkp
channel ORA_DISK_1: piece
handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2007_06_21/o1_
mf_annnn_TAG20070621T043805_37m7l46t_.bkp tag=TAG20070621T043805
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_16_37n7ptq0_.arc thread=1 sequence=16
channel default: deleting archived log(s)
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_16_37n7ptq0_.arc RECID=20 STAMP=625844810
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_17_37n7ptrv_.arc thread=1 sequence=17
channel default: deleting archived log(s)
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_17_37n7ptrv_.arc RECID=22 STAMP=625844810
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_18_37n7ptqo_.arc thread=1 sequence=18
channel default: deleting archived log(s)
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_18_37n7ptqo_.arc RECID=21 STAMP=625844810
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_19_37n7ptsh_.arc thread=1 sequence=19
channel default: deleting archived log(s)
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_19_37n7ptsh_.arc RECID=23 STAMP=625844810
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_20_37m7lhgx_.arc thread=1 sequence=20
archived log file
name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2007_06_21/o1_m
f_1_21_37m7llgp_.arc thread=1 sequence=21
media recovery complete, elapsed time: 00:00:01
Finished recover at 21-JUN-07
repair failure complete
Do you want to open the database (enter YES or NO)? YES
database opened
RMAN>
Oracle Database 11g: New Features for Administrators 13 - 21
Classifying (and Closing) Failures

The RMAN CHANGE FAILURE command:


Changes the failure priority (except for CRITICAL)
Closes one or more failures
Example:
RMAN> change failure 5 priority low;
List of Database Failures
=========================
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
5 HIGH OPEN 20-DEC-06 one or more
datafiles are missing
Do you really want to change the above failures (enter YES or
NO)? yes
changed 1 failures to LOW priority

13 - 22 Copyright 2007, Oracle. All rights reserved.

Classifying (and Closing) Failures


The CHANGE FAILURE command is used to change the failure priority or close one or more
failures.
Syntax
CHANGE FAILURE
{ ALL | CRITICAL | HIGH | LOW | failnum[,failnum,] }
[ EXCLUDE FAILURE failnum[,failnum,] ]
{ PRIORITY {CRITICAL | HIGH | LOW} |
  CLOSE }        -- change the status of the failure(s) to CLOSED
[ NOPROMPT ]     -- do not ask the user for confirmation
A failure priority can be changed only from HIGH to LOW and from LOW to HIGH. It is an error to
change the priority level of CRITICAL. (One reason why you may want to change a failure from
HIGH to LOW is to avoid seeing it on the default output list of the LIST FAILURE command. For
example, if a block corruption has HIGH priority, you may want to temporarily change it to LOW if
the block is in a little-used tablespace.)
Open failures are closed implicitly when a failure is repaired. However, you can also explicitly close
a failure. This involves a reevaluation of all other open failures, because some of them may become
irrelevant as the result of the closure of the failure.
By default, the command asks the user to confirm a requested change.

Oracle Database 11g: New Features for Administrators 13 - 22


Data Recovery Advisor Views

Querying dynamic data dictionary views:


V$IR_FAILURE: List of all failures, including closed
ones (result of the LIST FAILURE command)
V$IR_MANUAL_CHECKLIST: List of manual advice
(result of the ADVISE FAILURE command)
V$IR_REPAIR: List of repairs (result of the ADVISE
FAILURE command)
V$IR_REPAIR_SET: Cross-reference of failure and
advice identifiers

13 - 23 Copyright 2007, Oracle. All rights reserved.

Data Recovery Advisor Views


A usage example: Assume that you need to display all failures that were detected on June 21, 2007.
SELECT * FROM v$ir_failure
WHERE trunc (time_detected) = '21-JUN-2007';
(Output formatted to fit page)
FAILURE_ID PARENT_ID CHILD_COUNT CLASS_NAME TIME_DETE MODIFIED
DESCRIPTION
IMPACTS PRIORITY STATUS
142 0 0 PERSISTENT_DATA 21-JUN-07 21-JUN-07 One
or more non-system datafiles are missing
See impact for individual child failures HIGH CLOSED
145 142 0 PERSISTENT_DATA 21-JUN-07 21-JUN-07
Datafile 4: '/u01/app/oracle/oradata/orcl/users01.dbf' is missing
Some objects in tablespace USERS might be unavailable HIGH CLOSED
151 142 0 PERSISTENT_DATA 21-JUN-07 21-JUN-07
Datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing
Some objects in tablespace EXAMPLE might be unavailable HIGH CLOSED
See the Oracle Database Reference for details of the dynamic data dictionary views that the Data
Recovery Advisor uses.
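As a further sketch (using only columns shown in the output above), you could list just the failures
that are still open, ordered by priority:
SQL> SELECT failure_id, priority, time_detected, description
2> FROM v$ir_failure
3> WHERE status = 'OPEN'
4> ORDER BY priority, time_detected;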

Oracle Database 11g: New Features for Administrators 13 - 23


Best Practice: Proactive Checks

Invoking proactive health check of the database and its


components:
Health Monitor or RMAN VALIDATE DATABASE
command
Checking for logical and physical corruption
Findings logged in ADR

13 - 24 Copyright 2007, Oracle. All rights reserved.

Best Practice: Proactive Checks


For very important databases, you may want to execute additional proactive checks (for example,
daily during off-peak periods). You can schedule periodic health checks through the Health
Monitor or by using the RMAN VALIDATE command. In general, when a reactive check detects
failure(s) in a database component, you may want to execute a more complete check of the affected
component.
The RMAN VALIDATE DATABASE command is used to invoke health checks for the database and
its components. It extends the existing VALIDATE BACKUPSET command. Any problem detected
during validation is displayed to you. Problems initiate the execution of a failure assessment. If a
failure is detected, it is logged into ADR as a finding. You can use the LIST FAILURE command
to view all failures recorded in the repository.
The VALIDATE command supports validation of individual backup sets and data blocks. In a
physical corruption, the database does not recognize the block at all. In a logical corruption, the
contents of the block are logically inconsistent. By default, the VALIDATE command checks for
physical corruption only. You can specify CHECK LOGICAL to check for logical corruption as well.
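For example, a sketch of a proactive check that also looks for logical corruption (narrow the scope to
a single data file, such as the illustrative data file 4, if a full database check takes too long):
RMAN> VALIDATE CHECK LOGICAL DATABASE;
RMAN> VALIDATE CHECK LOGICAL DATAFILE 4;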

Oracle Database 11g: New Features for Administrators 13 - 24


Best Practice: Proactive Checks (continued)
Block corruptions can be divided into interblock corruption and intrablock corruption. In
intrablock corruption, the corruption occurs within the block itself and can be either physical or
logical corruption. In interblock corruption, the corruption occurs between blocks and can be only
logical corruption. The VALIDATE command checks for intrablock corruptions only.
Example
RMAN> validate database;

Starting validate at 21-DEC-06


using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005
name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003
name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:15
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1 OK 0 13168 85760 981642
File Name: /u01/app/oracle/oradata/orcl/system01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 60619
Index 0 9558
Other 0 2415

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2 OK 0 22892 66720 981662
File Name: /u01/app/oracle/oradata/orcl/sysaux01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 10529
Index 0 9465
Other 0 23834

Oracle Database 11g: New Features for Administrators 13 - 25


Best Practice: Proactive Checks (continued)
Example (continued)
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
3 OK 0 104 7680 981662
File Name: /u01/app/oracle/oradata/orcl/undotbs01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 7576

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 24 640 963835
File Name: /u01/app/oracle/oradata/orcl/users01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 43
Index 0 63
Other 0 510

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 0 1732 12800 745885
File Name: /u01/app/oracle/oradata/orcl/example01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 4416
Index 0 1303
Other 0 5349
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================
File Type Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE OK 0 2
Control File OK 0 594
Finished validate at 21-DEC-06
RMAN>

Oracle Database 11g: New Features for Administrators 13 - 26


Setting Parameters to Detect Corruption

Prevent memory and data corruption
Detect I/O, storage, and disk corruption
Detect non-persistent writes on a physical standby
Specify defaults for corruption detection (new)
[Screenshot: EM > Server > Initialization Parameters]

13 - 27 Copyright 2007, Oracle. All rights reserved.

Setting Parameters to Detect Corruption


You can use the DB_ULTRA_SAFE parameter for easy manageability. It affects the default values of
the following parameters:
DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often
prevent memory and data corruption. (Default: FALSE, recommended: FULL)
DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache
header of every data block when writing it to disk. Checksums assist in detecting corruption
caused by underlying disks, storage systems, or I/O systems. (Default: TYPICAL,
recommended: TYPICAL)
DB_LOST_WRITE_PROTECT, which initiates checking for lost writes. A data block lost
write is detected on a physical standby database when the I/O subsystem has signaled the
completion of a block write that has not yet been completely written to persistent storage,
even though the write operation has completed on the primary database. (Default: TYPICAL,
recommended: TYPICAL)
If you set any of these parameters explicitly, then your values remain in effect. The
DB_ULTRA_SAFE parameter (which is new in Oracle Database 11g) changes only the default
values for these parameters.
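Because DB_ULTRA_SAFE is static, a minimal sketch of enabling it is to set it in the server parameter
file and then restart the instance:
SQL> ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP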

Oracle Database 11g: New Features for Administrators 13 - 27


Setting Parameters to Detect Corruption

DB_ULTRA_SAFE          OFF            DATA_ONLY   DATA_AND_INDEX

DB_BLOCK_CHECKING      OFF or FALSE   MEDIUM      FULL or TRUE
DB_BLOCK_CHECKSUM      TYPICAL        FULL        FULL
DB_LOST_WRITE_PROTECT  TYPICAL        TYPICAL     TYPICAL

13 - 28 Copyright 2007, Oracle. All rights reserved.

Setting Parameters to Detect Corruption (continued)


Depending on your system's tolerance for block corruption, you can intensify the checking for block
corruption. Enabling the DB_ULTRA_SAFE parameter (default: OFF) results in increased system
overhead, because of these more intensive checks. The amount of overhead is related to the number
of blocks changed per second; so it cannot be easily quantified. For a high-update application, you
can expect a significant increase in CPU, likely in the ten to twenty percent range, but possibly
higher. This overhead can be alleviated by allocating additional CPUs.
When the DB_ULTRA_SAFE parameter is set to DATA_ONLY, then the
DB_BLOCK_CHECKING parameter is set to MEDIUM. This checks that the data in a block is
logically self-consistent. Basic block header checks are performed after block contents change in
memory (for example, after UPDATE or INSERT commands, on-disk reads, or inter-instance
block transfers in Oracle RAC). This level of checks includes semantic block checking for all
non-index-organized table blocks.
When the DB_ULTRA_SAFE parameter is set to DATA_AND_INDEX, then the
DB_BLOCK_CHECKING parameter is set to FULL. In addition to the preceding checks,
semantic checks are executed for index blocks (that is, blocks of subordinate objects that can
actually be dropped and reconstructed when faced with corruption).
When the DB_ULTRA_SAFE parameter is set to DATA_ONLY or DATA_AND_INDEX, then the
DB_BLOCK_CHECKSUM parameter is set to FULL and the DB_LOST_WRITE_PROTECT
parameter is set to TYPICAL.
Oracle Database 11g: New Features for Administrators 13 - 28
Summary

In this lesson, you should have learned how to:


Describe your options for repairing data failure
Use the new RMAN data repair commands to:
List failures
Receive a repair advice
Repair failures
Perform proactive failure checks
Query the Data Recovery Advisor views

13 - 29 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 13 - 29


Practice 13: Overview
Repairing Failures

This practice covers the following topics:


Repairing a down database with Enterprise Manager
Repairing block corruption with Enterprise Manager
Repairing a down database with the RMAN command
line

13 - 30 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 13 - 30


Security: New Features

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Configure the password file to use case-sensitive
passwords
Encrypt a tablespace
Configure fine-grained access to network services

14 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 14 - 2


Secure Password Support

Passwords in Oracle Database 11g:


Are case-sensitive
Contain more characters
Use more secure hash algorithm
Use salt in the hash algorithm
Usernames are still Oracle identifiers (up to 30 characters,
non-case-sensitive).

14 - 3 Copyright 2007, Oracle. All rights reserved.

Secure Password Support


You must use more secure passwords to meet the demands of compliance to various security and
privacy regulations. Passwords that are very short and passwords that are formed from a limited set
of characters are susceptible to brute force attacks. Longer passwords drawn from a larger set of
characters are much more difficult to guess or find. In Oracle Database 11g, the
password is handled differently than in previous versions:
Passwords are case-sensitive. Uppercase and lowercase characters are now different characters
when used in a password.
A password may contain multibyte characters without it being enclosed in quotation marks. A
password must be enclosed in quotation marks if it contains any special characters apart from $,
_, or #.
Passwords are always passed through a hash algorithm, and then stored as a user credential.
When the user presents a password, it is hashed and then compared to the stored credential. In
Oracle Database 11g, the hash algorithm is the public SHA-1 algorithm, rather than the one used in
previous versions of the database. SHA-1 is a stronger algorithm, using a 160-bit key.
Passwords always use salt. A hash function always produces the same output, given the same
input. Salt is a unique (random) value that is added to the input to ensure that the output
credential is unique.

Oracle Database 11g: New Features for Administrators 14 - 3


Automatic Secure Configuration

Default password profile


Default auditing
Built-in password complexity checking

14 - 4 Copyright 2007, Oracle. All rights reserved.

Automatic Secure Configuration


Oracle Database 11g installs and creates the database with certain security features recommended by
the Center for Internet Security (CIS) benchmark. The CIS recommended configuration is more
secure than the 10gR2 default installation; yet open enough to allow the majority of applications to
be successful. Many customers have adopted this benchmark already. There are some
recommendations of the CIS benchmark that may be incompatible with some applications.

Oracle Database 11g: New Features for Administrators 14 - 4


Password Configuration

By default:
Default password profile is enabled
Account is locked after 10 failed login attempts
In upgrade:
Passwords are not case-sensitive until changed
Passwords become case-sensitive when the ALTER
USER command is used
On creation:
Passwords are case-sensitive

14 - 5 Copyright 2007, Oracle. All rights reserved.

Secure Default Configuration


When creating a custom database using the Database Configuration Assistant (DBCA), you can
specify the Oracle Database 11g default security configuration. By default, if a user tries to connect
to an Oracle instance multiple times using an incorrect password, the instance delays each login after
the third try. This protection applies for attempts made from different IP addresses or multiple client
connections. Later, it gradually increases the time before the user can try another password, up to a
maximum of about ten seconds.
The default password profile is enabled with these settings at database creation:
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords are not case-sensitive until the
ALTER USER command is used to change the password.
When the database is created, the passwords will be case-sensitive by default.
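You can verify these defaults with a query such as the following sketch against DBA_PROFILES:
SQL> SELECT resource_name, limit
2> FROM dba_profiles
3> WHERE profile = 'DEFAULT'
4> AND resource_type = 'PASSWORD';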

Oracle Database 11g: New Features for Administrators 14 - 5


Enable Built-in Password Complexity Checker

Execute the utlpwdmg.sql script to create the password


verify function:

SQL> CONNECT / as SYSDBA


SQL> @?/rdbms/admin/utlpwdmg.sql

Alter the default profile:

ALTER PROFILE DEFAULT


LIMIT
PASSWORD_VERIFY_FUNCTION verify_function_11g;

14 - 6 Copyright 2007, Oracle. All rights reserved.

Enable Built-in Password Complexity Checker


verify_function_11g is a sample PL/SQL function that can be easily modified to enforce the
password complexity policies at your site. This function does not require special characters to be
embedded in the password. Both verify_function_11g and the older verify_function
are included in the utlpwdmg.sql file.
To enable the password complexity checking, create a verification function owned by SYS. Use one
of the supplied functions or modify one of them to meet your requirements. The example shows how
to use the utlpwdmg.sql script. If there is an error in the password complexity check function
named in the profile or it does not exist, you cannot change passwords nor can you create users. The
solution is to set the PASSWORD_VERIFY_FUNCTION to NULL in the profile, until the problem is
solved.
The verify_function_11g function checks that the password contains at least eight characters;
contains at least one number and one alphabetic character; and differs from the previous password by
at least three characters. The function also checks that the password is not a username or a username
appended with any number from 1 to 100; a username reversed; a server name or a server name
appended with 1 to 100; or one of a set of well-known and common passwords such as welcome1,
database1, oracle123, or oracle (appended with 1 to 100), and so on.
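For example, a minimal sketch of temporarily disabling the verification function while you correct it,
and then re-enabling it:
SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION NULL;
SQL> -- correct or re-create the function, then re-enable it:
SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11g;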

Oracle Database 11g: New Features for Administrators 14 - 6


Managing Default Audits

Review audit logs: Default audit options cover important


security privileges.
Archive audit records:
Export
Copy to another table
Remove archived audit records.

14 - 7 Copyright 2007, Oracle. All rights reserved.

Managing Default Audits


Review the audit logs. By default, auditing is enabled in Oracle Database 11g for certain privileges
that are very important to security. The audit trail is recorded in the database AUD$ table by default;
the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database
performance, for most sites. Oracle recommends the use of OS audit trail files.
Archive audit records. To retain audit records, export them using Oracle Data Pump Export, or use
the SELECT statement to capture a set of audit records into a separate table.
Remove archived audit records. Remove audit records from the SYS.AUD$ table after reviewing
and archiving them. Audit records take up space in the SYSTEM tablespace. If the SYSTEM
tablespace cannot grow, and there is no more space for audit records, errors will be generated for
each audited statement. Because CREATE SESSION is one of the audited privileges, no new
sessions may be created except by a user connected as SYSDBA. Archive the audit table with the
export utility, using the QUERY option to specify the WHERE clause with a range of dates or SCNs.
Then delete the records from the audit table by using the same WHERE clause.
When AUDIT_TRAIL=OS, separate files are created for each audit record in the directory specified
by AUDIT_FILE_DEST. All files as of a certain time can be copied, and then removed.
Note: The SYSTEM tablespace is created with the autoextend on option. So the SYSTEM
tablespace grows as needed until there is no more space available on the disk.
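A minimal sketch of capturing older audit records into a separate table and then removing them (the
cutoff date and the archive table name are illustrative, and the NTIMESTAMP# column is assumed to
be the audit timestamp column of SYS.AUD$):
SQL> -- archive table name, cutoff date, and NTIMESTAMP# column are assumptions
SQL> CREATE TABLE aud_archive_2007h1 AS
2> SELECT * FROM sys.aud$
3> WHERE ntimestamp# < TO_TIMESTAMP('01-JUL-2007','DD-MON-YYYY');
SQL> DELETE FROM sys.aud$
2> WHERE ntimestamp# < TO_TIMESTAMP('01-JUL-2007','DD-MON-YYYY');
SQL> COMMIT;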

Oracle Database 11g: New Features for Administrators 14 - 7


Managing Default Audits (continued)
The following privileges are audited for all users on success and failure, and by access:
CREATE EXTERNAL JOB
CREATE ANY JOB
GRANT ANY OBJECT PRIVILEGE
EXEMPT ACCESS POLICY
CREATE ANY LIBRARY
GRANT ANY PRIVILEGE
DROP PROFILE
ALTER PROFILE
DROP ANY PROCEDURE
ALTER ANY PROCEDURE
CREATE ANY PROCEDURE
ALTER DATABASE
GRANT ANY ROLE
CREATE PUBLIC DATABASE LINK
DROP ANY TABLE
ALTER ANY TABLE
CREATE ANY TABLE
DROP USER
ALTER USER
CREATE USER
CREATE SESSION
AUDIT SYSTEM
ALTER SYSTEM

Oracle Database 11g: New Features for Administrators 14 - 8


Adjust Security Settings

14 - 9 Copyright 2007, Oracle. All rights reserved.

Adjust Security Settings


When you create a database using the DBCA tool, you are offered a choice of security settings:
Keep the enhanced 11g default security settings (recommended). These settings include enabling
auditing and the new default password profile.
Revert to pre-11g default security settings. To disable a particular category of enhanced settings
for compatibility purposes, choose from the following:
- Revert audit settings to pre-11g defaults
- Revert password profile settings to pre-11g defaults
These settings can also be changed after the database is created using the DBCA. Some applications
may not work properly under the 11g default security settings.
Secure permissions on the software are always set; they are not affected by a user's choice for the
Security Settings option.

Oracle Database 11g: New Features for Administrators 14 - 9


Setting Security Parameters

Use case-sensitive passwords:


SEC_CASE_SENSITIVE_LOGON
Protect against DoS attacks:
SEC_PROTOCOL_ERROR_FURTHER_ACTION
SEC_PROTOCOL_ERROR_TRACE_ACTION
Protect against brute force attacks:
SEC_MAX_FAILED_LOGIN_ATTEMPTS

14 - 10 Copyright 2007, Oracle. All rights reserved.

Setting Security Parameters


A set of new parameters has been added to Oracle Database 11g to enhance the default security of
the database. These parameters are systemwide and static.
Use Case-Sensitive Passwords to Improve Security
A new parameter, SEC_CASE_SENSITIVE_LOGON, allows you to set the case-sensitivity of user
passwords. Oracle recommends that you retain the default setting of TRUE. You can specify non-
case-sensitive passwords for backward compatibility by setting this parameter to FALSE:
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;
Note: Disabling case-sensitivity increases vulnerability to brute force attacks.
Protect Against Denial of Service (DoS) Attacks
The two parameters listed in the slide specify the actions to be taken when the database receives bad
packets from a client. The assumption is that the bad packets are from a possible malicious client.
The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is to be taken
with the client connection: continue, drop the connection, or delay accepting requests. The other
parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE,
TRACE, LOG, or ALERT.

Oracle Database 11g: New Features for Administrators 14 - 10


Setting Security Parameters (continued)
Protect Against Brute Force Attacks
A new initialization parameter SEC_MAX_FAILED_LOGIN_ATTEMPTS, which has a default
setting of 10, causes a connection to be automatically dropped after the specified number of attempts.
This parameter is enforced even when the password profile is not enabled.
This parameter prevents a program from making a database connection and then attempting to
authenticate by trying hundreds or thousands of passwords.

Oracle Database 11g: New Features for Administrators 14 - 11


Setting Database Administrator Authentication

Use password file with case-sensitive passwords.


Enable strong authentication for administrator roles:
Grant the administrator role in OID.
Use Kerberos tickets.
Use certificates with SSL.

14 - 12 Copyright 2007, Oracle. All rights reserved.

Setting Database Administrator Authentication


The database administrator must always be authenticated. In Oracle Database 11g, there are new
methods that make administrator authentication more secure and centralize the administration of
these privileged users. Case-sensitive passwords have also been extended to remote connections for
privileged users. You can override this default behavior with the following command:
orapwd file=orapworcl entries=5 ignorecase=Y
If your concern is that the password file might be vulnerable or that the maintenance of many
password files is a burden, then strong authentication can be implemented:
Grant the SYSDBA or SYSOPER enterprise role in Oracle Internet Directory (OID).
Use Kerberos tickets.
Use certificates over SSL.
To use any of the strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization
parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication
methods. Authentication through OID or through Kerberos also can provide centralized
administration or single sign-on.
If the password file is configured, it is checked first. The user may also be authenticated by the local
OS by being a member of the OSDBA or OSOPER groups.
For more information, see the Oracle Database Advanced Security Administrator's Guide 11g
Release 1.
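For example, a sketch of enabling strong authentication for administrative users (set in the server
parameter file, assuming the parameter cannot be changed dynamically, so an instance restart is
required):
SQL> ALTER SYSTEM SET ldap_directory_sysauth = YES SCOPE=SPFILE;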

Oracle Database 11g: New Features for Administrators 14 - 12


Transparent Data Encryption

New features in TDE include:


Tablespace Encryption
Support for LogMiner
Support for Logical Standby
Support for Streams
Support for Asynchronous Change Data Capture
Hardware-based master key protection

14 - 13 Copyright 2007, Oracle. All rights reserved.

Transparent Data Encryption


Several new features enhance the capabilities of Transparent Data Encryption (TDE), and build on
the same infrastructure.
The changes in LogMiner to support TDE provide the infrastructure for change capture engines used
for Logical Standby, Streams, and Asynchronous Change Data Capture. For LogMiner to support
TDE, it must be able to access the encryption wallet. To access the wallet, the instance must be
mounted and the wallet open. LogMiner does not support Hardware Security Module (HSM) or user-
held keys.
For Logical Standby, the logs may be mined either on the source or the target database, thus the
wallet must be the same for both databases.
Encrypted columns are handled the same way in both Streams and the Streams-based Change Data
Capture. The redo records are mined at the source, where the wallet exists. The data is transmitted
unencrypted to the target and encrypted using the wallet at the target. The data can be encrypted in
transit by using Advanced Security Option to provide network encryption.
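As noted above, the instance must be mounted and the wallet open before LogMiner-based capture can
use TDE. A minimal sketch (the wallet password is illustrative):
SQL> STARTUP MOUNT
SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "welcome1";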

Oracle Database 11g: New Features for Administrators 14 - 13


Using Tablespace Encryption

Create an encrypted tablespace.


1. Create or open the encryption wallet:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY


"welcome1";

2. Create a tablespace with the encryption keywords:

SQL> CREATE TABLESPACE encrypt_ts


2> DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
3> ENCRYPTION USING '3DES168'
4> DEFAULT STORAGE (ENCRYPT);

14 - 14 Copyright 2007, Oracle. All rights reserved.

Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on
read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The
SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the
encryption wallet must be open.
The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption
properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify
USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid
algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the
properties in the V$ENCRYPTED_TABLESPACES view.
The encrypted data is protected during operations such as JOIN and SORT. This means that the data
is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet.
Restrictions
Temporary and undo tablespaces cannot be encrypted. (Selected blocks are encrypted.)
Bfiles and external tables are not encrypted.
Transportable tablespaces across different endian platforms are not supported.
The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a
tablespace with the desired properties and move all objects to the new tablespace.
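A sketch of checking which tablespaces are encrypted, and with which algorithm, by joining
V$ENCRYPTED_TABLESPACES to V$TABLESPACE:
SQL> SELECT t.name, e.encryptionalg, e.encryptedts
2> FROM v$tablespace t, v$encrypted_tablespaces e
3> WHERE t.ts# = e.ts#;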
Oracle Database 11g: New Features for Administrators 14 - 14
Hardware Security Module

Encrypt and decrypt operations are performed on the hardware security module.
[Diagram: Client - Database server - Hardware Security Module; encrypted data]

14 - 15 Copyright 2007, Oracle. All rights reserved.

Hardware Security Module


A Hardware Security Module (HSM) is a physical device that provides secure storage for encryption
keys. It also provides secure computational space (memory) to perform encryption and decryption
operations. HSM is a more secure alternative to the Oracle wallet.
Transparent Data Encryption (TDE) can use HSM to provide enhanced security for sensitive data. An
HSM is used to store the master encryption key used for TDE. The key is secure from unauthorized
access attempts because the HSM is a physical device and not an operating system file. All
encryption and decryption operations that use the master encryption key are performed inside the
HSM. This means that the master encryption key is never exposed in insecure memory.
There are several vendors that provide Hardware Security Modules. The vendor must also supply the
appropriate libraries.

Oracle Database 11g: New Features for Administrators 14 - 15


Encryption for LOB Columns

CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128')


LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING );

LOB encryption is allowed only for SECUREFILE LOBs.


All LOBs in the LOB column are encrypted.
LOBs can be encrypted on per-column or per-partition
basis.
Allows for the coexistence of SECUREFILE and
BASICFILE LOBs

14 - 16 Copyright 2007, Oracle. All rights reserved.

Encryption for LOB Columns


Oracle Database 11g introduces a completely reengineered large object (LOB) data type that
dramatically improves performance, manageability, and ease of application development. This
Secure Files implementation (of LOBs) offers advanced, next-generation functionality such as
intelligent compression and transparent encryption. The encrypted data in Secure Files is stored in-
place and is available for random reads and writes.
You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or
disabled (DECRYPT, the default) on the LOB column. The current TDE syntax is used for
extending encryption to LOB data types.
LOB implementation from earlier versions is still supported for backward compatibility and is now
referred to as Basic Files. If you add a LOB column to a table, you can specify whether it should be
created as SECUREFILES or BASICFILES. To ensure backward compatibility, the default LOB
type is BASICFILES.
Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192.
Note: For further discussion on Secure Files, see the lesson titled Oracle SecureFiles.

Oracle Database 11g: New Features for Administrators 14 - 16


Enterprise Manager Security Management

Manage security through EM.
Policy Manager replaced for:
  Virtual Private Database
  Application Context
  Oracle Label Security
Enterprise User Security pages added
TDE pages added

14 - 17 Copyright 2007, Oracle. All rights reserved.

Enterprise Manager Security Management


Security management has been integrated into Enterprise Manager.
The Policy Manager Java console-based tool has been superseded. Oracle Label Security, Application Context, and Virtual Private Database, previously administered through the Oracle Policy Manager tool, are now managed through Enterprise Manager. The Oracle Policy Manager tool is still available.
The Enterprise Manager Security tool has been superseded by Enterprise Manager features. Enterprise User Security is also now managed through Enterprise Manager. The menu item for Enterprise User Security appears as soon as the ldap.ora file is configured. See the Enterprise User Administrator's Guide for configuration details. The Enterprise Security Manager tool is still available.
TDE can now be managed through Enterprise Manager, including wallet management. You can
create, open, and close the wallet from Enterprise Manager pages.

Oracle Database 11g: New Features for Administrators 14 - 17


Using RMAN Security Enhancements

Configure backup shredding:


RMAN> CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON;

Use backup shredding:

RMAN> DELETE FORCE;

14 - 18 Copyright 2007, Oracle. All rights reserved.

Using RMAN Security Enhancements


Backup shredding is a key management feature that allows the DBA to delete the encryption key of
transparent encrypted backups, without physical access to the backup media. The encrypted backups
are rendered inaccessible if the encryption key is destroyed. This does not apply to password-
protected backups.
Configure backup shredding with:
CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON;
Or
SET ENCRYPTION EXTERNAL KEY STORAGE ON;
The default setting is OFF, meaning backup shredding is not enabled. To shred a backup, no new
command is needed; simply use:
DELETE FORCE;

Oracle Database 11g: New Features for Administrators 14 - 18


Managing Fine-Grained Access to External
Network Services

1. Create an ACL and its privileges:

BEGIN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
acl => 'us-oracle-com-permissions.xml',
description => 'Permissions for oracle network',
principal => 'SCOTT',
is_grant => TRUE,
privilege => 'connect');
END;

14 - 19 Copyright 2007, Oracle. All rights reserved.

Managing Fine-Grained Access to External Network Services


The network utility family of PL/SQL packages, such as UTL_TCP, UTL_INADDR, UTL_HTTP,
UTL_SMTP, and UTL_MAIL, allow Oracle users to make network callouts from the database using
raw TCP or higher-level protocols built on top of raw TCP. In earlier releases, a user either had or did not have the EXECUTE privilege on these packages, and there was no control over which network hosts were accessed. The new DBMS_NETWORK_ACL_ADMIN package allows fine-grained control using
access control lists (ACL) implemented by XML DB.
1. Create an access control list (ACL). The ACL is a list of users and privileges held in an XML
file. The XML document named in the acl parameter is relative to the /sys/acl/ folder in XML
DB. In the example given in the slide, SCOTT is granted connect. The username is case-sensitive
in the ACL and must match the username of the session. There are only resolve and connect
privileges. The connect privilege implies resolve. Optional parameters can specify a start and
end time stamp for these privileges. To add more users and privileges to this ACL, use the
ADD_PRIVILEGE procedure.
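For example, the following hedged sketch adds another principal to the same ACL with ADD_PRIVILEGE; the HR user and the resolve privilege are illustrative, and DBMS_NETWORK_ACL_ADMIN changes must be committed:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'us-oracle-com-permissions.xml',
    principal => 'HR',
    is_grant  => TRUE,
    privilege => 'resolve');
  COMMIT;
END;
/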

Oracle Database 11g: New Features for Administrators 14 - 19


Managing Fine-Grained Access to External
Network Services

2. Assign an ACL to one or more network hosts:


BEGIN
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
acl => 'us-oracle-com-permissions.xml',
host => '*.us.oracle.com',
lower_port => 80,
upper_port => null);
END;

14 - 20 Copyright 2007, Oracle. All rights reserved.

Managing Fine-Grained Access to External Network Services (continued)


2. Assign an ACL to one or more network hosts. The ASSIGN_ACL procedure associates the ACL
with a network host and, optionally, a port or range of ports. In the example, the host parameter
allows wildcard characters for the host name to assign the ACL to all the hosts of a domain. The use
of wildcard characters affects the order of precedence for the evaluation of the ACL. Fully qualified host names with ports are evaluated before fully qualified host names without ports. Fully qualified host names are evaluated before partial domain names, and subdomains are evaluated before the top-level domains.
Multiple hosts can be assigned to the same ACL and multiple users can be added to the same ACL in
any order after the ACL has been created.
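To verify which hosts an ACL covers and whether a given user holds a privilege, a query such as the following can be used (a sketch; CHECK_PRIVILEGE returns 1 when the privilege is granted, 0 when it is denied, and NULL when the user is not named in the ACL):

SELECT host, lower_port, upper_port, acl,
       DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(acl, 'SCOTT', 'connect') AS scott_connect
  FROM dba_network_acls;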

Oracle Database 11g: New Features for Administrators 14 - 20


Summary

In this lesson, you should have learned how to:


Configure the password file to use case-sensitive
passwords
Encrypt a tablespace
Configure fine-grained access to network services

14 - 21 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 14 - 21


Practice 14: Overview

This practice covers the following topics:


Changing the use of case-sensitive passwords
Implementing a password complexity function
Encrypting a tablespace

14 - 22 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 14 - 22


Oracle SecureFiles

Consolidated Secure Management of Data

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Describe how SecureFiles enhances the performance of
large object (LOB) data types
Use SQL and PL/SQL APIs to access SecureFiles

15 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 15 - 2


Managing Enterprise Information

Organizations need to efficiently and securely manage many types of data:
  Structured: Simple data, object-relational data
  Semi-structured: XML documents, word-processing documents
  Unstructured: Media, medical data, imaging

[Slide diagram: examples of structured, semistructured, and unstructured content, such as a PDF document.]

15 - 3 Copyright 2007, Oracle. All rights reserved.

Managing Enterprise Information


Today, applications must deal with many kinds of data, broadly classified as structured, semi-
structured, and unstructured data. The features of large objects (LOBs) allow you to store all these
kinds of data in the database as well as in operating system (OS) files that are accessed from the
database. The simplicity and performance of file systems have made it attractive to store file data in
file systems, while keeping object-relational data in a relational database.

Oracle Database 11g: New Features for Administrators 15 - 3


Problems with Existing LOB Implementation

Limitations in LOB sizing


Considered mostly write once, read many times data
Offered low concurrency of DMLs
User-defined version control
Uniform CHUNK size
Affecting fragmentation
Upper size limit
Scalability issues with Oracle Real Application Clusters
(RAC)

15 - 4 Copyright 2007, Oracle. All rights reserved.

Problems with Existing LOB Implementation


In Oracle8i, LOB design decisions were made with the following assumptions:
LOB instantiation was expected to be several megabytes in size.
LOBs were considered mostly write once, read many times type of data. Updates would be
rare; therefore, you could version entire chunks for all kinds of updates, large or small.
Few batch processes were expected to stream data. An online transaction processing (OLTP)
kind of workload was not anticipated.
The amount of undo retained is user-controlled with two parameters PCTVERSION and
RETENTION. This is an additional management burden.
The CHUNK size is a static parameter under the assumption that LOB sizes are typically uniform.
There is an upper limit of 32 KB on CHUNK size.
High-concurrency writes in Oracle RAC were not anticipated.
Since their initial implementation, business requirements have dramatically changed. LOBs are now
being used in a manner similar to that of relational data, storing semi-structured and unstructured
data of all possible sizes. The size of the data can vary from a few kilobytes for an HTML link to
several terabytes for streaming video. Oracle file systems that store all the file system data in LOBs
experience OLTP-like high concurrency access. As Oracle RAC is being more widely adopted, the
scalability issues of Oracle RAC must be addressed. The existing design of LOB space structures
does not cater to these new requirements.

Oracle Database 11g: New Features for Administrators 15 - 4


Oracle SecureFiles

Oracle SecureFiles rearchitects the handling of


unstructured (file) data, offering entirely new:
Disk format
Variable chunk size
Network protocol
Improved I/O
Versioning and sharing mechanisms
Redo and undo algorithms
No user configuration
Space and memory enhancements

15 - 5 Copyright 2007, Oracle. All rights reserved.

Oracle SecureFiles
Oracle Database 11g completely reengineers the LOB data type as Oracle SecureFiles, dramatically
improving the performance, manageability, and ease of application development. The new
implementation also offers advanced, next-generation functionality such as intelligent compression
and transparent encryption.
With SecureFiles, chunks vary in size from Oracle data block size up to 64 MB. The Oracle database
attempts to colocate data in physically adjacent locations on disk, thereby minimizing internal
fragmentation. By using variable chunk sizes, SecureFiles avoids versioning of large, unnecessary
blocks of LOB data.
SecureFiles also offers a new client/server network layer that allows high-speed data transfer between the client and the server, supporting significantly higher read and write performance. SecureFiles automatically determines the most efficient way of generating redo and undo, eliminating user-defined parameters. SecureFiles automatically determines whether to generate redo and undo for only the change, or to create a new version by generating a full redo record.
SecureFiles is designed to be intelligent and self-adaptable: it maintains different in-memory statistics that help in efficient memory and space allocation. This provides easier manageability because there are fewer tunable parameters, which are otherwise hard to tune under unpredictable loads.

Oracle Database 11g: New Features for Administrators 15 - 5


Enabling SecureFiles Storage

SecureFiles storage can be enabled:


Using the DB_SECUREFILE initialization parameter,
which can have the following values:
ALWAYS | FORCE | PERMITTED | NEVER | IGNORE
Using Enterprise Manager:

Using the ALTER SESSION | SYSTEM command:

SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS';

15 - 6 Copyright 2007, Oracle. All rights reserved.

Enabling SecureFiles Storage


The DB_SECUREFILE initialization parameter allows database administrators (DBAs) to determine
the usage of SecureFiles, where valid values are:
ALWAYS: Attempts to create all LOBs as SecureFile LOBs but creates any LOBs not in
Automatic Segment Space Management (ASSM) tablespaces as BasicFile LOBs
FORCE: Forces all LOBs created going forward to be SecureFile LOBs
PERMITTED: Allows SecureFiles to be created (default)
NEVER: Disallows SecureFiles from being created going forward
IGNORE: Disallows SecureFiles and ignores any errors that would otherwise be caused by
forcing BasicFiles with SecureFiles options
If NEVER is specified, any LOBs that are specified as SecureFiles are created as BasicFiles. All
SecureFiles-specific storage options and features (for example, compression, encryption, and
deduplication) cause an exception if used against BasicFiles. BasicFiles defaults are used for any
storage options not specified. If ALWAYS is specified, all LOBs created in the system are created as
SecureFiles. The LOB must be created in an ASSM tablespace, otherwise an error occurs. Any
BasicFiles storage options specified are ignored. The SecureFiles defaults for all storage can be
changed using the ALTER SYSTEM command as shown in the slide.
You can also use Enterprise Manager to set the parameter from the Server tab > Initialization
Parameters link.

Oracle Database 11g: New Features for Administrators 15 - 6


SecureFiles: Advanced Features

Oracle SecureFiles offers the following advanced


capabilities:
Intelligent LOB compression
Deduplication
Transparent encryption
These capabilities leverage the security, reliability, and
scalability of the database.

15 - 7 Copyright 2007, Oracle. All rights reserved.

SecureFiles: Advanced Features


Oracle SecureFiles implementation also offers advanced, next-generation functionality such as
intelligent compression and transparent encryption. Compression enables you to explicitly compress
SecureFiles. SecureFiles transparently uncompresses only the required set of data blocks for random
read or write access, automatically maintaining the mapping between uncompressed and compressed
offsets. If the compression level is changed from MEDIUM to HIGH, the mapping is automatically
updated to reflect the new compression algorithm. Deduplication automatically detects duplicate
SecureFile LOB data and conserves space by storing only one copy, thereby saving disk storage, I/O, and redo logging. Deduplication can be specified at the table level or partition level and does
not span across partitioned LOBs. Deduplication requires the Advanced Compression Option.
Encrypted LOB data is now stored in place and is available for random reads and writes offering
enhanced data security. SecureFile LOBs can be encrypted only on a per-column basis (same as
Transparent Data Encryption). All partitions within a LOB column are encrypted using the same
encryption algorithm. BasicFiles data cannot be encrypted. SecureFiles supports the industry-
standard encryption algorithms: 3DES168, AES128, AES192 (default), and AES256. Encryption is
part of the Advanced Security Option.
Note: The COMPATIBLE initialization parameter must be set to 11.0.0.0.0 or later to use
SecureFiles. The BasicFiles (previous LOB) format is still supported under 11.1.0.0.0 compatibility.
There is no downgrade capability after 11.0.0.0.0 is set.

Oracle Database 11g: New Features for Administrators 15 - 7


SecureFiles: Storage Options

MAXSIZE: Specifies the maximum LOB segment size


RETENTION: Specifies the retention policy to use
MAX: Keep old versions until MAXSIZE is reached.
MIN: Keep old versions at least MIN seconds.
AUTO: Default
NONE: Reuse old versions as much as possible.
The following storage clauses do not apply to
SecureFiles:
CHUNK, PCTVERSION, FREEPOOLS, FREELISTS, and
FREELIST GROUPS

15 - 8 Copyright 2007, Oracle. All rights reserved.

SecureFiles: Storage Options


MAXSIZE is a new storage clause governing the physical storage attribute for SecureFiles.
MAXSIZE specifies the maximum segment size related to the storage clause level.
RETENTION signifies the following for SecureFiles:
MAX is used to start reclaiming old versions after segment MAXSIZE is reached.
MIN keeps old versions for the specified least amount of time.
AUTO is the default setting, which is basically a trade-off between space and time. This is
automatically determined.
NONE reuses old versions as much as possible.

Altering the RETENTION with the ALTER TABLE statement affects the space created only after the
statement is executed.
For SecureFiles, you no longer need to specify CHUNK, PCTVERSION, FREEPOOLS, FREELISTS,
and FREELIST GROUPS. For compatibility with existing scripts, these clauses are parsed but not
interpreted.
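As an illustrative sketch of setting a retention policy at creation time (the table, segment, and tablespace names are assumptions, and the tablespace must be ASSM-managed):

CREATE TABLE spec_docs (id NUMBER, doc CLOB)
  LOB(doc) STORE AS SECUREFILE doc_seg (
    TABLESPACE secf_tbs
    RETENTION AUTO
    CACHE);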

Oracle Database 11g: New Features for Administrators 15 - 8


Creating SecureFiles

CREATE TABLE func_spec(
  id number, doc CLOB ENCRYPT USING 'AES128' )
  LOB(doc) STORE AS SECUREFILE
  (DEDUPLICATE LOB CACHE NOLOGGING);

CREATE TABLE test_spec (
  id number, doc CLOB)
  LOB(doc) STORE AS SECUREFILE
  (COMPRESS HIGH KEEP_DUPLICATES CACHE NOLOGGING);

CREATE TABLE design_spec (id number, doc CLOB)
  LOB(doc) STORE AS SECUREFILE (ENCRYPT);

CREATE TABLE design_spec (id number,
  doc CLOB ENCRYPT)
  LOB(doc) STORE AS SECUREFILE;

15 - 9 Copyright 2007, Oracle. All rights reserved.

Creating SecureFiles
You create SecureFiles with the storage keyword SECUREFILE in the CREATE TABLE statement
with a LOB column. The LOB implementation available in prior database versions is still supported
for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table,
you can specify whether it should be created as SecureFiles or BasicFiles. If you do not specify the
storage type, the LOB is created as BasicFiles to ensure backward compatibility.
In the first example in the slide, you create a table called FUNC_SPEC to store documents as
SecureFiles. Here you are specifying that you do not want duplicates stored for the LOB, that the
LOB should be cached when read, and that redo should not be generated when updates are performed
to the LOB. In addition, you are specifying that the documents stored in the doc column should be
encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is the opposite of DEDUPLICATE, and can be used in an ALTER statement.
In the second example in the slide, you are creating a table called TEST_SPEC that stores documents as SecureFiles. For this table you have specified that duplicates may be stored, and that the LOBs should be stored in compressed format and should be cached but not logged. When no level is specified, the default compression is MEDIUM. The compression algorithm is implemented on the server side, which allows for random reads and writes to LOB data. These properties can also be changed via ALTER statements.

Oracle Database 11g: New Features for Administrators 15 - 9


Creating SecureFiles Using Enterprise Manager

15 - 10 Copyright 2007, Oracle. All rights reserved.

Creating SecureFiles Using Enterprise Manager


You can use Enterprise Manager to create SecureFiles from the Schema tab > Tables link. After you
click the Create button, you can click the Advanced Attributes button against the column you are
storing as a SecureFile, to enter any SecureFiles options.
The LOB implementation available in prior versions is still supported for backward compatibility
reasons and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify
whether it should be created as a SecureFile or a BasicFile. If you do not specify the storage type, the
LOB is created as a BasicFile to ensure backward compatibility.
You can select the following as values for the Cache option:
CACHE: Oracle places LOB pages in the buffer cache for faster access.
NOCACHE: As a parameter in the STORE AS clause, NOCACHE specifies that LOB values are
not brought into the buffer cache.
CACHE READS: LOB values are brought into the buffer cache only during read and not during
write operations.
NOCACHE is the default for both SecureFile and BasicFile LOBs.

Oracle Database 11g: New Features for Administrators 15 - 10


Shared I/O Pool

[Slide diagram: the LOB cache performs direct I/O through the Shared I/O Pool; memory is allocated both from block-size buffers in the buffer cache and from Shared I/O Pool buffers.]

15 - 11 Copyright 2007, Oracle. All rights reserved.

Shared I/O Pool


The Shared I/O Pool memory component is added in Oracle Database 11g to support large I/Os from
shared memory, as opposed to Program Global Area (PGA), for direct path access. This is only when
SecureFiles are created as NOCACHE (the default). The Shared I/O Pool defaults to zero in size and,
only if there is SecureFiles NOCACHE workload, the system increases its size to 4% of cache.
Because this is a shared resource, it may get used by large concurrent SecureFiles workloads. Unlike
other pools, such as the large pool or shared pool, the user process does not receive an ORA-04031 error but temporarily falls back to the PGA until more Shared I/O Pool buffers are freed.
The LOB Cache is a new component in the SecureFiles architecture, improving LOB access
performance by gathering and batching data as well as overlapping network and disk I/O. The LOB
Cache borrows memory from the buffer cacheeither regular buffers or memory from the Shared
I/O Pool. Because memory borrowed from buffer cache buffers is naturally suitable for doing
database I/Os as well as suitable for injecting back into the buffer cache after I/Os have been done,
unnecessary copying of memory can be avoided.
In multi-instance Oracle Real Application Clusters, the LOB cache holds a single lock for each LOB accessed.

Oracle Database 11g: New Features for Administrators 15 - 11


Altering SecureFiles

Disable deduplication:
  ALTER TABLE t1 MODIFY LOB(a) ( KEEP_DUPLICATES );
Enable deduplication:
  ALTER TABLE t1 MODIFY LOB(a) ( DEDUPLICATE LOB );
Enable partition deduplication:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( DEDUPLICATE LOB );
Disable compression:
  ALTER TABLE t1 MODIFY LOB(a) ( NOCOMPRESS );
Enable compression:
  ALTER TABLE t1 MODIFY LOB(a) ( COMPRESS HIGH );
Enable compression on SecureFiles within a single partition:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( COMPRESS HIGH );
Enable encryption using 3DES168:
  ALTER TABLE t1 MODIFY ( a CLOB ENCRYPT USING '3DES168' );
Enable encryption on a partition:
  ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) ( ENCRYPT );
Enable encryption and build the encryption key using a password:
  ALTER TABLE t1 MODIFY ( a CLOB ENCRYPT IDENTIFIED BY ghYtp );

15 - 12 Copyright 2007, Oracle. All rights reserved.

Altering SecureFiles
Using the DEDUPLICATE option, you can specify that LOB data that is identical in two or more
rows in a LOB column should share the same data blocks. The opposite of this is
KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs
with identical content into a single copy, reducing storage and simplifying storage management. The
LOB keyword is optional and is for syntactic clarity only.
The COMPRESS or NOCOMPRESS keywords enable or disable LOB compression, respectively. All
LOBs in the LOB segment are altered with the new compression setting.
The ENCRYPT or DECRYPT keyword turns on or off LOB encryption using Transparent Data
Encryption (TDE). All LOBs in the LOB segment are altered with the new setting. A LOB segment
can be altered only to enable or disable LOB encryption. That is, ALTER cannot be used to update
the encryption algorithm or the encryption key. The encryption algorithm or encryption key can be
updated using the ALTER TABLE REKEY syntax. Encryption is done at the block level allowing
for better performance (smallest encryption amount possible) when combined with other options.
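For instance, a hedged sketch of rekeying the encrypted column shown in the slide (the wallet must be open, and the algorithm name is illustrative):

ALTER TABLE t1 REKEY USING 'AES256';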
Note: For a full description of the options available for the ALTER TABLE statement, see the
Oracle Database SQL Reference.

Oracle Database 11g: New Features for Administrators 15 - 12


Accessing SecureFiles Metadata

The data layer interface is the same as with BasicFiles.

DBMS_LOB
SecureFiles
GETOPTIONS()
SETOPTIONS
GET_DEDUPLICATE_REGIONS

DBMS_SPACE.SPACE_USAGE

15 - 13 Copyright 2007, Oracle. All rights reserved.

Accessing SecureFiles Metadata


DBMS_LOB package: LOBs inherit the LOB column settings for deduplication, encryption, and
compression, which can also be configured on a per-LOB level using the LOB locator API. However,
the LONG API cannot be used to configure these LOB settings. You must use the following
DBMS_LOB package additions for these features:
DBMS_LOB.GETOPTIONS: Settings can be obtained using this function. An integer
corresponding to a predefined constant based on the option type is returned.
DBMS_LOB.SETOPTIONS: This procedure sets features and allows the features to be set on a
per-LOB basis, overriding the default LOB settings. It incurs a round-trip to the server to make
the changes persistent.
DBMS_LOB.GET_DEDUPLICATE_REGIONS: This procedure outputs a collection of records
identifying the deduplicated regions in a LOB. LOB-level deduplication contains only a single
deduplicated region.
DBMS_SPACE.SPACE_USAGE: The existing SPACE_USAGE procedure is overloaded to return
information about LOB space usage. It returns the amount of disk space in blocks used by all the
LOBs in the LOB segment. This procedure can be used only on tablespaces created with ASSM and
does not treat LOB chunks belonging to BasicFiles as used space.
Note: For further details, see the Oracle Database PL/SQL Packages and Types Reference.
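The following is a minimal sketch of reading a LOB-level setting with DBMS_LOB.GETOPTIONS; it assumes the FUNC_SPEC table from the earlier example contains a row with ID = 1, and checks only the compression option type:

SET SERVEROUTPUT ON
DECLARE
  l_doc        CLOB;
  l_compressed PLS_INTEGER;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1;
  -- Check whether the compression option is enabled for this LOB
  l_compressed := DBMS_LOB.GETOPTIONS(l_doc, DBMS_LOB.OPT_COMPRESS);
  DBMS_OUTPUT.PUT_LINE('Compression option value: ' || l_compressed);
END;
/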

Oracle Database 11g: New Features for Administrators 15 - 13


Migrating to SecureFiles

Use online redefinition

SecureFiles

15 - 14 Copyright 2007, Oracle. All rights reserved.

Migrating to SecureFiles
A superset of LOB interfaces allows easy migration from BasicFile LOBs. The two recommended
methods for migration to SecureFiles are partition exchange and online redefinition.
Partition Exchange
Needs additional space equal to the largest of the partitions in the table
Can maintain indexes during the exchange
Can spread the workload out over several smaller maintenance windows
Requires the table or partition to be offline while performing the exchange
Online Redefinition (recommended practice)
No need to take the table or partition offline
Can be done in parallel
Requires additional storage equal to the entire table and all LOB segments to be available
Requires that any global indexes be rebuilt
These solutions generally mean using twice the disk space used by the data in the input LOB column.
However, using partitioning and taking these actions on a partition-by-partition basis may help lower
the disk space required.

Oracle Database 11g: New Features for Administrators 15 - 14


SecureFiles Migration: Example

create table tab1 (id number not null, c clob)


partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as lobp1,
partition p2 values less than (200) tablespace tbs2 lob(c) store as lobp2,
partition p3 values less than (300) tablespace tbs3 lob(c) store as lobp3);

Insert your data.

create table tab1_tmp (id number not null, c clob)
partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as securefile lobp1_tmp,
partition p2 values less than (200) tablespace tbs2 lob(c) store as securefile lobp2_tmp,
partition p3 values less than (300) tablespace tbs3 lob(c) store as securefile lobp3_tmp);

begin
dbms_redefinition.start_redef_table('scott','tab1','tab1_tmp','id id, c c');
dbms_redefinition.copy_table_dependents('scott','tab1','tab1_tmp',1,
true,true,true,false,error_count);
dbms_redefinition.finish_redef_table('scott','tab1','tab1_tmp');
end;

15 - 15 Copyright 2007, Oracle. All rights reserved.

SecureFiles Migration: Example


The example in the slide can be used to migrate BasicFile LOBs to SecureFile LOBs.
First, you create your table using BasicFiles. The example uses a partitioned table.
Then, you insert data in your table.
Following this, you create a transient table that has the same number of partitions but this time using
SecureFiles. Note that this transient table has the same columns and types.
The last section demonstrates the redefinition of your table using the previously created transient
table with the DBMS_REDEFINITION procedure.

Oracle Database 11g: New Features for Administrators 15 - 15


SecureFiles Monitoring

The following views have been modified to show


SecureFiles usage:
*_SEGMENTS
*_LOBS
*_LOB_PARTITIONS
*_PART_LOBS
SQL> SELECT segment_name, segment_type, segment_subtype
2 FROM dba_segments
3 WHERE tablespace_name = 'SECF_TBS2'
4 AND segment_type = 'LOBSEGMENT'
5 /

SEGMENT_NAME SEGMENT_TYPE SEGMENT_SU


---------------------------- ------------------ ----------
SYS_LOB0000071583C00004$$ LOBSEGMENT SECUREFILE

15 - 16 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 15 - 16


Summary

In this lesson, you should have learned how to use:


SecureFiles to improve LOB performance
SQL and PL/SQL APIs to access SecureFiles

15 - 17 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 15 - 17


Practice 15: Overview

This practice covers exploring the advantages of using


SecureFiles for compression, data encryption, and
performance.

15 - 18 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 15 - 18


Miscellaneous New Features

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Describe enhancements to locking mechanisms
Use the SQL query result cache
Use the enhanced PL/SQL recompilation mechanism
Create and use invisible indexes
Describe Adaptive Cursor Sharing
Manage your SPFILE

16 - 2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 16 - 2


Foreground Statistics

New columns report foreground-only statistics:


V$SYSTEM_EVENT:
TOTAL_WAITS_FG
TOTAL_TIMEOUTS_FG
TIME_WAITED_FG
AVERAGE_WAIT_FG
TIME_WAITED_MICRO_FG
V$SYSTEM_WAIT_CLASS:
TOTAL_WAITS_FG
TIME_WAITED_FG

16 - 3 Copyright 2007, Oracle. All rights reserved.

Foreground Statistics
New columns have been added to the V$SYSTEM_EVENT and the V$SYSTEM_WAIT_CLASS
views that allow you to easily identify events that are caused by foreground or background processes.
V$SYSTEM_EVENT has five new NUMBER columns that represent the statistics from purely
foreground sessions:
TOTAL_WAITS_FG
TOTAL_TIMEOUTS_FG
TIME_WAITED_FG
AVERAGE_WAIT_FG
TIME_WAITED_MICRO_FG

V$SYSTEM_WAIT_CLASS has two new NUMBER columns that represent the statistics from purely
foreground sessions:
TOTAL_WAITS_FG
TIME_WAITED_FG
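For example, to focus on the waits experienced by user (foreground) sessions only, a query along these lines can be used (the Idle wait-class filter is just an illustration):

SELECT event, total_waits, total_waits_fg, time_waited_fg
  FROM v$system_event
 WHERE wait_class <> 'Idle'
 ORDER BY time_waited_fg DESC;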

Oracle Database 11g: New Features for Administrators 16 - 3


Online Redefinition Enhancements

Online table redefinition supports the following:


Tables with materialized views and view logs
Triggers with ordering dependency
Online redefinition does not systematically invalidate
dependent objects.

16 - 4 Copyright 2007, Oracle. All rights reserved.

Online Redefinition Enhancements


Oracle Database 11g supports online redefinition for tables with materialized views and view logs. In
addition, online redefinition supports triggers with the FOLLOWS or PRECEDES clause, which
establishes an ordering dependency between the triggers.
In previous database versions, all directly and indirectly dependent views and PL/SQL packages
would be invalidated after an online redefinition or other DDL operations. These views and PL/SQL
packages would automatically be recompiled whenever they are next invoked. If there are a lot of
dependent PL/SQL packages and views, the cost of the revalidation or recompilation can be
significant.
In Oracle Database 11g, views, synonyms, and other table-dependent objects (with the exception of
triggers) that are not logically affected by the redefinition, are not invalidated. So, for example, if
referenced column names and types are the same after the redefinition, then they are not invalidated.
This optimization is transparent, that is, it is turned on by default.
Another example: If the redefinition drops a column, only those procedures and views that reference
the column are invalidated. The other dependent procedures and views remain valid. Note that all
triggers on a table being redefined are invalidated (as the redefinition can potentially change the
internal column numbers and data types), but they are automatically revalidated with the next DML
execution to the table.

Oracle Database 11g: New Features for Administrators 16 - 4


Minimizing Dependent Recompilations

Adding a column to a table does not invalidate its


dependent objects.
Adding a PL/SQL unit to a package does not invalidate
dependent objects.
Fine-grain dependencies are tracked automatically.
No configuration is required.

16 - 5 Copyright 2007, Oracle. All rights reserved.

Minimizing Dependent Recompilations


Starting with Oracle Database 11g, you have access to records that describe more precise dependency
metadata. This is called fine-grain dependencies and it is automatically on.
Earlier Oracle Database releases record dependency metadata (for example, that PL/SQL unit P depends on PL/SQL unit F, or that the view V depends on the table T) with the precision of the whole object. This means that dependent objects are sometimes invalidated without logical
requirement. For example, if the V view depends only on the A and B columns in the T table, and
column D is added to the T table, the validity of the V view is not logically affected. Nevertheless,
before Oracle Database Release 11.1, the V view is invalidated by the addition of the D column to the
T table. With Oracle Database Release 11.1, adding the D column to the T table does not invalidate
the V view. Similarly, if procedure P depends only on elements E1 and E2 within a package, adding
the E99 element (to the end of a package to avoid changing slot numbers or entry point numbers of
existing top-level elements) to the package does not invalidate the P procedure.
Reducing the invalidation of dependent objects in response to changes to the objects on which they
depend increases application availability, both in the development environment and during online
application upgrade.

Oracle Database 11g: New Features for Administrators 16 - 5


Locking Enhancements

DDL commands can now wait for DML locks to be


released:
DDL_LOCK_TIMEOUT initialization parameter
New WAIT [<timeout>] clause for the LOCK TABLE
command
The following commands will no longer acquire
exclusive locks (X), but shared exclusive locks (SX):
CREATE INDEX ONLINE
CREATE MATERIALIZED VIEW LOG
ALTER TABLE ENABLE CONSTRAINT NOVALIDATE

16 - 6 Copyright 2007, Oracle. All rights reserved.

Locking Enhancements
You can limit the time that DDL commands wait for DML locks before failing by setting the
DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter is
set by default to 0, that is NOWAIT, which ensures backward compatibility. The range of values
is 0100,000 (in seconds).
The LOCK TABLE command has new syntax that you can use to specify the maximum number
of seconds the statement should wait to obtain a DML lock on the table. Use the WAIT clause to
indicate that the LOCK TABLE statement should wait up to the specified number of seconds to
acquire a DML lock. There is no limit on the value of the integer.
In highly concurrent environments, the requirement of acquiring an exclusive lock (for example, at the end of an online index creation and rebuild) could lead to a spike of waiting DML operations and, therefore, a short drop and spike of system usage. While this is not an
overall problem for the database, this anomaly in system usage could trigger operating system
alarm levels. The commands listed in the slide no longer require exclusive locks.
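As a brief sketch of the first two mechanisms (the timeout values are illustrative, and the added column on HR.EMPLOYEES is hypothetical):

ALTER SESSION SET ddl_lock_timeout = 30;
ALTER TABLE hr.employees ADD (notes VARCHAR2(100));   -- DDL now waits up to 30 seconds for DML locks

LOCK TABLE hr.employees IN EXCLUSIVE MODE WAIT 60;    -- waits up to 60 seconds to acquire the lock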

Oracle Database 11g: New Features for Administrators 16 - 6


Invisible Index: Overview

[Slide diagram: from the optimizer's viewpoint, a VISIBLE index is used and an INVISIBLE index is not used when OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE; from the data viewpoint, both indexes are maintained when the table is updated.]

16 - 7 Copyright 2007, Oracle. All rights reserved.

Invisible Index: Overview


Beginning with Release 11g, you can create invisible indexes. An invisible index is an index that is
ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES
initialization parameter to TRUE at the session or system level. The default value for this parameter is
FALSE.
Making an index invisible is an alternative to making it unusable or dropping it. Using invisible
indexes, you can do the following:
Test the removal of an index before dropping it.
Use temporary index structures for certain operations or modules of an application without
affecting the overall application.
Unlike unusable indexes, an invisible index is maintained during DML statements.

Oracle Database 11g: New Features for Administrators 16 - 7


Invisible Indexes: Examples

Index is altered as not visible to the optimizer:


ALTER INDEX ind1 INVISIBLE;

Optimizer does not consider this index:


SELECT /*+ index(TAB1 IND1) */ COL1 FROM TAB1 WHERE ...;

Optimizer will always consider the index:


ALTER INDEX ind1 VISIBLE;

Creating an index as invisible initially:


CREATE INDEX IND1 ON TAB1(COL1) INVISIBLE;

16 - 8 Copyright 2007, Oracle. All rights reserved.

Invisible Indexes: Examples


When an index is invisible, the optimizer generates plans that do not use the index. If there is no
discernible drop in performance, you can then drop the index. You can also create an index initially
as invisible, perform testing, and then determine whether to make the index visible.
You can query the VISIBILITY column of the *_INDEXES data dictionary views to determine
whether the index is VISIBLE or INVISIBLE.
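For example (a sketch using the IND1 index from the slide):

SELECT index_name, visibility
  FROM user_indexes
 WHERE index_name = 'IND1';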
Note: For all the statements given in the slide, it is assumed that
OPTIMIZER_USE_INVISIBLE_INDEXES is set to FALSE.

Oracle Database 11g: New Features for Administrators 16 - 8


SQL Query Result Cache: Overview

Cache the result of a query or query block for future reuse.
Cache is used across statements and sessions unless it is stale.
Benefits:
  Scalability
  Reduction of memory usage
Good candidate statements:
  Access many rows
  Return few rows

[Slide diagram: session 1 runs a SELECT (1) and stores its result in the SQL Query Result Cache (2); session 2 running the same SELECT is served from the cache (3).]

16 - 9 Copyright 2007, Oracle. All rights reserved.

SQL Query Result Cache: Overview


The SQL query result cache enables explicit caching of query result sets and query fragments in
database memory. A dedicated memory buffer stored in the shared pool can be used for storing and
retrieving the cached results. The query results stored in this cache become invalid when data in the
database objects being accessed by the query is modified.
Although the SQL query cache can be used for any query, good candidate statements are the ones
that need to access a very high number of rows to return only a fraction of them. This is mostly the
case for data warehousing applications.
In the graphic shown in the slide, if the first session executes a query, it retrieves the data from the
database and then caches the result in the SQL query result cache. If a second session executes the
exact same query, it retrieves the result directly from the cache instead of using the disks.
Note
Each node in a RAC configuration has a private result cache. Results cached on one instance
cannot be used by another instance. However, invalidations work across instances. To handle all
synchronization operations between RAC instances related to the SQL query result cache, the
special RCBG process is used on each instance.
With parallel query, the entire result can be cached (in RAC, it is cached on the query coordinator instance), but individual parallel query processes cannot use the cache.

Oracle Database 11g: New Features for Administrators 16 - 9


Setting Up SQL Query Result Cache

Set at database level using the RESULT_CACHE_MODE


initialization parameter. Values:
AUTO: The optimizer determines the results that need to
be stored in the cache based on repetitive executions.
MANUAL: Use the result_cache hint to specify results
to be stored in the cache.
FORCE: All results are stored in the cache.

16 - 10 Copyright 2007, Oracle. All rights reserved.

Setting Up SQL Query Result Cache


The query optimizer manages the result cache mechanism depending on the settings of the
RESULT_CACHE_MODE parameter in the initialization parameter file.
You can use this parameter to determine whether or not the optimizer automatically sends the results
of queries to the result cache. You can set the RESULT_CACHE_MODE parameter at the system,
session, and table level. The possible parameter values are AUTO, MANUAL, and FORCE:
When set to AUTO, the optimizer determines which results are to be stored in the cache based on
repetitive executions.
When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a
particular result is to be stored in the cache.
When set to FORCE, all results are stored in the cache.
Note: For both the AUTO and FORCE settings, if the statement contains a [NO_]RESULT_CACHE
hint, then the hint takes precedence over the parameter setting.

Oracle Database 11g: New Features for Administrators 16 - 10


Managing the SQL Query Result Cache

Use the following initialization parameters:


RESULT_CACHE_MAX_SIZE
It sets the memory allocated to the result cache.
Result cache is disabled if you set the value to 0.
Default is dependent on other memory settings (0.25% of
memory_target or 0.5% of sga_target or 1% of
shared_pool_size)
Cannot be greater than 75% of shared pool
RESULT_CACHE_MAX_RESULT
Sets maximum cache memory for a single result
Defaults to 5%
RESULT_CACHE_REMOTE_EXPIRATION
Sets the expiry time for cached results depending on remote
database objects
Defaults to 0

16 - 11 Copyright 2007, Oracle. All rights reserved.

Managing SQL Query Results Cache


You can alter various parameter settings in the initialization parameter file to manage the SQL query
result cache of your database.
By default, the database allocates memory for the result cache in the Shared Pool inside the SGA.
The memory size allocated to the result cache depends on the memory size of the SGA as well as the
memory management system. You can change the memory allocated to the result cache by setting
the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set its value to 0.
The value of this parameter is rounded to the largest multiple of 32 KB that is not greater than the
specified value. If the rounded value is 0, then the feature is disabled.
Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache
memory that can be used by any single result. The default value is 5%, but you can specify any
percentage value between 1 and 100. This parameter can be implemented at the system and session
level.
Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in number of
minutes) for which a result that depends on remote database objects remains valid. The default value
is 0, which implies that results using remote objects should not be cached. Setting this parameter to a
nonzero value can produce stale answers: for example, if the remote table used by a result is
modified at the remote database.
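A hedged sketch of adjusting these parameters; the values shown are purely illustrative:

ALTER SYSTEM SET result_cache_max_size = 64M SCOPE = BOTH;
ALTER SYSTEM SET result_cache_max_result = 10;         -- percent of the cache one result may use
ALTER SYSTEM SET result_cache_remote_expiration = 15;  -- minutes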

Oracle Database 11g: New Features for Administrators 16 - 11


Using the RESULT_CACHE Hint
EXPLAIN PLAN FOR
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;
--------------------------------------------------------------
| Id | Operation | Name |Rows
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11
| 1 | RESULT CACHE | 8fpza04gtwsfr6n595au15yj4y |
| 2 | HASH GROUP BY | | 11
| 3 | TABLE ACCESS FULL| EMPLOYEES | 107
--------------------------------------------------------------

SELECT /*+ NO_RESULT_CACHE */ department_id, AVG(salary)


FROM employees
GROUP BY department_id;

16 - 12 Copyright 2007, Oracle. All rights reserved.

Using the RESULT_CACHE Hint


If you want to use the query result cache and the RESULT_CACHE_MODE initialization parameter is
set to MANUAL, you must explicitly specify the RESULT_CACHE hint in your query. This introduces
the ResultCache operator into the execution plan for the query. When you execute the query, the
ResultCache operator looks up the result cache memory to check whether the result for the query
already exists in the cache. If it exists, then the result is retrieved directly out of the cache. If it does
not yet exist in the cache, then the query is executed, the result is returned as output, and is also
stored in the result cache memory.
If the RESULT_CACHE_MODE initialization parameter is set to AUTO or FORCE, and you do not
want to store the result of a query in the result cache, you must then use the NO_RESULT_CACHE
hint in your query. For example, when the RESULT_CACHE_MODE value equals FORCE in the
initialization parameter file, and you do not want to use the result cache for the EMPLOYEES table,
then use the NO_RESULT_CACHE hint.
Note: Use of the [NO_]RESULT_CACHE hint takes precedence over the parameter settings.

Oracle Database 11g: New Features for Administrators 16 - 12


In-Line View: Example

SELECT prod_subcategory, revenue


FROM (SELECT /*+ RESULT_CACHE */ p.prod_category,
p.prod_subcategory,
sum(s.amount_sold) revenue
FROM products p, sales s
WHERE s.prod_id = p.prod_id and
s.time_id BETWEEN to_date('01-JAN-2006','dd-MON-yyyy')
and
to_date('31-DEC-2006','dd-MON-yyyy')
GROUP BY ROLLUP(p.prod_category, p.prod_subcategory))
WHERE prod_category = 'Women';

16 - 13 Copyright 2007, Oracle. All rights reserved.

In-Line View: Example


In the example given in the slide, the RESULT_CACHE hint is used in the in-line view. In this case,
the following optimizations are disabled: view merging, predicate push-down, and column
projection. This is at the expense of the initial query, which might take a longer time to execute.
However, subsequent executions will be much faster because of the SQL query cache. The other
benefit in this case is that similar queries (queries using a different predicate value for
prod_category in the last WHERE clause) will also be much faster.

Oracle Database 11g: New Features for Administrators 16 - 13


Using the DBMS_RESULT_CACHE Package

Use the DBMS_RESULT_CACHE package to:


Manage memory allocation for the query result cache
View the status of the cache:
SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;
Retrieve statistics on the cache memory usage:
EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT;
Remove all existing results and clear cache memory:
EXECUTE DBMS_RESULT_CACHE.FLUSH;
Invalidate cached results depending on specified
object:
EXEC DBMS_RESULT_CACHE.INVALIDATE('JFV','MYTAB');

16 - 14 Copyright 2007, Oracle. All rights reserved.

Using the DBMS_RESULT_CACHE Package


The DBMS_RESULT_CACHE package provides statistics, information, and operators that enable you
to manage memory allocation for the query result cache. You can use the DBMS_RESULT_CACHE
package to perform various operations such as viewing the status of the cache (OPEN or CLOSED),
retrieving statistics on the cache memory usage, and flushing the cache. For example, to view the
memory allocation statistics, use the following SQL procedure:
SQL> set serveroutput on
SQL> execute dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1024 bytes
Maximum Cache Size = 720896 bytes (704 blocks)
Maximum Result Size = 35840 bytes (35 blocks)
[Memory]
Total Memory = 46284 bytes [0.036% of the Shared Pool]
... Fixed Memory = 10640 bytes [0.008% of the Shared Pool]
... State Object Pool = 2852 bytes [0.002% of the Shared Pool]
... Cache Memory = 32792 bytes (32 blocks) [0.025% of the Shared Pool]
....... Unused Memory = 30 blocks
....... Used Memory = 2 blocks
........... Dependencies = 1 blocks
........... Results = 1 blocks
............... SQL = 1 blocks

Note: For more information, refer to the PL/SQL Packages and Types Reference Guide.
Oracle Database 11g: New Features for Administrators 16 - 14
Viewing SQL Result Cache Dictionary Information

The following views provide information about the query


result cache:
(G)V$RESULT_CACHE_STATISTICS: Lists the various cache settings and memory usage statistics
(G)V$RESULT_CACHE_MEMORY: Lists all the memory blocks and the corresponding statistics
(G)V$RESULT_CACHE_OBJECTS: Lists all the objects (cached results and dependencies) along with their attributes
(G)V$RESULT_CACHE_DEPENDENCY: Lists the dependency details between the cached results and dependencies
16 - 15 Copyright 2007, Oracle. All rights reserved.

Viewing SQL Result Cache Dictionary Information


Note: For further information, see the Oracle Database Reference guide.

Oracle Database 11g: New Features for Administrators 16 - 15


SQL Query Result Cache: Considerations

Result cache is disabled for queries containing:


Temporary or dictionary tables
Nondeterministic PL/SQL functions
Sequence CURRVAL and NEXTVAL
SQL functions current_date, sysdate, sys_guid, and
so on
DML/DDL on remote database does not expire cached
results.
Flashback queries can be cached.

16 - 16 Copyright 2007, Oracle. All rights reserved.

SQL Query Result Cache: Considerations


Note: Any user-written function used in a function-based index must have been declared with the
DETERMINISTIC keyword to indicate that the function will always return the same output value
for any given set of input argument values.

Oracle Database 11g: New Features for Administrators 16 - 16


SQL Query Result Cache: Considerations

Result cache does not automatically release memory.


It grows until maximum size is reached.
DBMS_RESULT_CACHE.FLUSH purges memory.
Bind variables
Cached result is parameterized with variable values.
Cached results can only be found for the same variable
values.
Cached result will not be built if:
Query is built on a noncurrent version of data (read
consistency enforcement)
Current session has outstanding transaction on table(s)
in query

16 - 17 Copyright 2007, Oracle. All rights reserved.

SQL Query Result Cache: Considerations (continued)


Note
The purge works only if the cache is not in use; disable (close) the cache for flush to succeed.
With bind variables, the cached result is parameterized with the variable values. Cached results can be found only for the same variable values; that is, different values or different bind variable names cause a cache miss.

Oracle Database 11g: New Features for Administrators 16 - 17


OCI Client Query Cache

Extends server-side query caching to client-side


memory
Ensures better performance by eliminating round-trips
to the server
Leverages client-side memory
Improves server scalability by saving server CPU
resources
Result cache automatically refreshed if the result set is
changed on the server
Particularly good for lookup tables

16 - 18 Copyright 2007, Oracle. All rights reserved.

OCI Client Query Cache


You can enable caching of query result sets in client memory with Oracle Call Interface (OCI) Client
Query Cache in Oracle Database 11g.
The cached result set data is transparently kept consistent with any changes done on the server side.
Applications leveraging this feature see improved performance for queries that have a cache hit.
Additionally, a query serviced by the cache avoids round trips to the server for sending the query and
fetching the results. Server CPU, which would have been consumed for processing the query, is
reduced thus improving server scalability.
Before using client-side query cache, determine whether your application will benefit from this
feature. Client-side caching is useful when you have applications that produce repeatable result sets,
small result sets, static result sets, or frequently executed queries.
Client and server result caches are autonomous; each can be enabled/disabled independently.
Note: You can monitor the client query cache using the client_result_cache_stats$ view
or v$client_result_cache_stats view.

Oracle Database 11g: New Features for Administrators 16 - 18


Using Client-Side Query Cache

You can use client-side query caching by:


Setting initialization parameters
CLIENT_RESULT_CACHE_SIZE
CLIENT_RESULT_CACHE_LAG
Using the client configuration file
OCI_RESULT_CACHE_MAX_SIZE
OCI_RESULT_CACHE_MAX_RSET_SIZE
OCI_RESULT_CACHE_MAX_RSET_ROWS
Client result cache is then used depending on:
The table's result cache mode
RESULT_CACHE hints in your SQL statements

16 - 19 Copyright 2007, Oracle. All rights reserved.

Using Client-Side Query Cache


The following two parameters can be set in your initialization parameter file:
CLIENT_RESULT_CACHE_SIZE: A nonzero value enables the client result cache. This is the
maximum size of the client per-process result set cache in bytes. All OCI client processes get
this maximum size and can be overridden by the OCI_RESULT_CACHE_MAX_SIZE
parameter.
CLIENT_RESULT_CACHE_LAG: Maximum time (in milliseconds) since the last round-trip to
the server, before which the OCI client query executes a round-trip to get any database changes
related to the queries cached on client.
A client configuration file is optional and overrides the cache parameters set in the server
initialization parameter file. Parameter values can be part of a sqlnet.ora file. When parameter
values shown in the slide are specified, OCI client caching is enabled for OCI client processes using
the configuration file. OCI_RESULT_CACHE_MAX_RSET_SIZE/ROWS denotes the maximum
size of any result set in bytes/rows in the per-process query cache. OCI applications can use
application hints to force result cache storage. This overrides the deployment time settings of ALTER
TABLE/ALTER VIEW. The application hints can be:
SQL hints /*+ result_cache */, and /*+ no_result_cache */
OCIStmtExecute() modes. These override SQL hints.

Note: To use this feature, your applications must be relinked with Release 11.1 or higher client
libraries and be connected to a Release 11.1 or higher server.

Oracle Database 11g: New Features for Administrators 16 - 19


PL/SQL Function Cache

Stores function results in cache, making them available to other sessions.
Uses the Query Result Cache.

[Slide diagram: the first query executes Calculate(), whose body (a SELECT ... FROM table inside a BEGIN ... END block) stores its result in the cache; subsequent queries calling Calculate() return the cached result.]

16 - 20 Copyright 2007, Oracle. All rights reserved.

PL/SQL Function Cache


Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result caching
mechanism. This caching mechanism provides you with a language-supported and system-managed
means for storing the results of PL/SQL functions in a shared global area (SGA), which is available
to every session that runs your application. The caching mechanism is both efficient and easy to use,
and it relieves you of the burden of designing and developing your own caches and cache
management policies.
Oracle Database 11g provides the ability to mark a PL/SQL function to indicate that its result should
be cached to allow lookup, rather than recalculation, on the next access when the same parameter
values are called. This function result cache saves significant space and time. This is done
transparently using the input parameters as the lookup key. The cache is instancewide so that all
distinct sessions invoking the function benefit. If the result for a given set of parameters changes, you
can use constructs to invalidate the cache entry so that it will be properly recalculated on the next
access. This feature is especially useful when the function returns a value that is calculated from data
selected from schema-level tables. For such uses, the invalidation constructs are simple and
declarative. You can include syntax in the source text of a PL/SQL function to request that its results
be cached and, to ensure correctness, that the cache be purged when any of a list of tables
experiences DML. When a particular invocation of the result-cached function is a cache hit, then the
function body is not executed; instead, the cached value is returned immediately.

Oracle Database 11g: New Features for Administrators 16 - 20


Using PL/SQL Function Cache

Include the RESULT_CACHE option in the function


declaration section of a package or function definition.
Optionally include the RELIES_ON clause to specify any
tables or views on which the function results depend.
CREATE OR REPLACE FUNCTION productName
(prod_id NUMBER, lang_id VARCHAR2)
RETURN NVARCHAR2
RESULT_CACHE RELIES_ON (product_descriptions)
IS
result VARCHAR2(50);
BEGIN
SELECT translated_name INTO result
FROM product_descriptions
WHERE product_id = prod_id AND language_id = lang_id;
RETURN result;
END;

16 - 21 Copyright 2007, Oracle. All rights reserved.

Using PL/SQL Function Cache


In the example shown in the slide, the productName function has result caching enabled through
the RESULT_CACHE option in the function declaration. In this example, the RELIES_ON clause is
used to identify the PRODUCT_DESCRIPTIONS table on which the function results depend.
Usage Notes
If function execution results in an unhandled exception, the exception result is not stored in the
cache.
The body of a result-cached function executes:
- The first time a session on this database instance calls the function with these parameter
values
- When the cached result for these parameter values is invalid. A cached result becomes
invalid when any database object specified in the RELIES_ON clause of the function
definition changes.
- When the cached result for these parameter values has aged out. If the system needs
memory, it might discard the oldest cached values.
- When the function bypasses the cache
The function should not have any side effects.
The function should not depend on session-specific settings.
The function should not depend on session-specific application contexts.
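The following is a minimal sketch (not part of the lesson example) of one way to verify that results are actually being served from the cache. It assumes the productName function shown earlier and a row with product_id 1 and language_id 'US'; it uses the documented V$RESULT_CACHE_OBJECTS view and DBMS_RESULT_CACHE package:
SQL> -- Call the result-cached function twice with the same parameter values;
SQL> -- the second call should be a cache hit, so the function body is not executed.
SQL> SELECT productName(1, 'US') FROM dual;
SQL> SELECT productName(1, 'US') FROM dual;
SQL> -- List the cached PL/SQL function results
SQL> SELECT type, status, name
  2  FROM   v$result_cache_objects
  3  WHERE  namespace = 'PLSQL';
SQL> -- Report overall result cache memory usage
SQL> SET SERVEROUTPUT ON
SQL> EXEC DBMS_RESULT_CACHE.MEMORY_REPORT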

Oracle Database 11g: New Features for Administrators 16 - 21


PL/SQL Function Cache: Considerations

PL/SQL Function Cache cannot be used when:


The function is defined in a module that has invoker's rights or in an anonymous block.
The function is a pipelined table function.
The function has OUT or IN OUT parameters.
The function has an IN parameter of one of the following types: BLOB, CLOB, NCLOB, REF CURSOR, collection, object, or record.
The function's return type is BLOB, CLOB, NCLOB, REF CURSOR, object, record, or a collection with one of the preceding unsupported return types.

16 - 22 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 16 - 22


PL/SQL and Java Native Compilation
Enhancements

100+% faster for pure PL/SQL or Java code


10-30% faster for typical transactions with SQL
PL/SQL
Just one parameter: On/Off
No need for C compiler
No file system DLLs
Java
Just one parameter: On/Off
JIT on-the-fly compilation
Transparent to user (asynchronous, in background)
Code stored to avoid recompilations

16 - 23 Copyright 2007, Oracle. All rights reserved.

PL/SQL and Java Native Compilation Enhancements


PL/SQL Native Compilation: The Oracle executable generates native dynamically linked libraries (DLLs)
directly from the PL/SQL source code without needing to use a third-party C compiler. In Oracle
Database 11g, the DLL is stored canonically in the database catalog and, when it is needed, the Oracle
executable loads it directly from the catalog without needing to stage it first on the file system.
The execution speed of natively compiled PL/SQL programs will never be slower than in Oracle
Database 10g and may be improved in some cases by as much as an order of magnitude. The
PL/SQL native compilation is automatically available with Oracle Database 11g. No third-party
software (neither a C compiler nor a DLL loader) is needed.
Java Native Compilation: Enabled by default (JAVA_JIT_ENABLED initialization parameter) and
similar to the Java Development Kit JIT (just-in-time), this feature compiles Java in the database
natively and transparently without the need of a C compiler.
The JIT runs as an independent session in a dedicated Oracle server process. There is at most one
compiler session per database instance; it is Oracle RAC-aware and amortized over all Java sessions.
This feature brings two major benefits to Java in the database: increased performance of pure Java
execution in the database and ease of use as it is activated transparently, without the need of an
explicit command, when Java is executed in the database.
As this feature removes the need for a C compiler, there are cost and license savings.

Oracle Database 11g: New Features for Administrators 16 - 23


Setting Up and Testing PL/SQL
Native Compilation

1. Set PLSQL_CODE_TYPE to NATIVE:


ALTER SYSTEM | ALTER SESSION | ALTER COMPILE
2. Compile your PL/SQL units (example):
CREATE OR REPLACE PROCEDURE hello_native AS
BEGIN
DBMS_OUTPUT.PUT_LINE('Hello world.');
END hello_native;
/

ALTER PROCEDURE hello_native COMPILE PLSQL_CODE_TYPE=NATIVE;

3. Make sure you succeeded:


SELECT plsql_code_type
FROM all_plsql_object_settings
WHERE name = 'HELLO_NATIVE';

16 - 24 Copyright 2007, Oracle. All rights reserved.

Setting Up and Testing PL/SQL Native Compilation


To set up and test one or more program units through native compilation:
1. Set up the PLSQL_CODE_TYPE initialization parameter. This parameter determines whether
PL/SQL code is natively compiled or interpreted. The default setting is INTERPRETED, which
is recommended during development. To enable PL/SQL native compilation, set the value of
PLSQL_CODE_TYPE to NATIVE. Make sure that the PLSQL_OPTIMIZE_LEVEL
initialization parameter is not less than 2 (which is the default). You can set
PLSQL_CODE_TYPE at the system, session, or unit level. A package specification and its body
can have different PLSQL_CODE_TYPE settings.
2. Compile one or more program units, using one of these methods:
- Use CREATE OR REPLACE to create or recompile the program unit.
- Use the various ALTER <PL/SQL unit type> COMPILE commands as shown in the slide
example.
- Run one of the SQL*Plus scripts that creates a set of Oracle-supplied packages.
- Create a database using a preconfigured initialization file with
PLSQL_CODE_TYPE=NATIVE.
3. To be sure that the process worked, query the data dictionary to see that a program unit is
compiled for native execution. You can use ALL|USER_PLSQL_OBJECT_SETTINGS views.
The PLSQL_CODE_TYPE column has a value of NATIVE for program units that are compiled
for native execution, and INTERPRETED otherwise.
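For example, a minimal sketch of steps 1 and 3 at the session level might look like the following (my_proc is a hypothetical procedure name used only for illustration):
SQL> ALTER SESSION SET PLSQL_CODE_TYPE = NATIVE;
SQL> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 2;
SQL> ALTER PROCEDURE my_proc COMPILE;
SQL> SELECT name, plsql_code_type
  2  FROM   user_plsql_object_settings
  3  WHERE  name = 'MY_PROC';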

Oracle Database 11g: New Features for Administrators 16 - 24


Recompiling the Entire Database for
PL/SQL Native Compilation

1. Shut down the database.


2. Set PLSQL_CODE_TYPE to NATIVE.
3. Start up the database in UPGRADE mode.
4. Execute the dbmsupgnv.sql script.
5. Shut down/start up your database in restricted mode.
6. Execute the utlrp.sql script.
7. Disable restricted mode.

16 - 25 Copyright 2007, Oracle. All rights reserved.

Recompiling the Entire Database for PL/SQL Native Compilation


If you have DBA privileges, you can recompile all PL/SQL modules in an existing database to
NATIVE or INTERPRETED, using the dbmsupgnv.sql and dbmsupgin.sql scripts,
respectively. To recompile all PL/SQL modules to NATIVE, perform the following steps:
1. Shut down application services, the listener, and the database in normal or immediate mode. The
first two are used to make sure that all of the connections to the database have been terminated.
2. Set PLSQL_CODE_TYPE to NATIVE in the initialization parameter file. The value of
PLSQL_CODE_TYPE does not affect the conversion of the PL/SQL units in these steps.
However, it does affect all subsequently compiled units and it should be explicitly set to the
compilation type that you want.
3. Start up the database in UPGRADE mode, using the UPGRADE option. It is assumed that there
are no invalid objects at this point.
4. Run the $ORACLE_HOME/rdbms/admin/dbmsupgnv.sql script as the SYS user to
update the plsql_code_type setting to NATIVE in the dictionary tables for all PL/SQL
units. This process also invalidates the units. Use TRUE with the script to exclude package
specifications; FALSE to include the package specifications. The script is guaranteed to
complete successfully or roll back all the changes. Package specifications seldom contain
executable code, so the run-time benefits of compiling to NATIVE are not measurable.

Oracle Database 11g: New Features for Administrators 16 - 25


Recompiling the Entire Database for PL/SQL Native Compilation (continued)
5. Shut down the database and restart in NORMAL mode. Oracle recommends that no other sessions
be connected to avoid possible problems. You can ensure this with the following statement:
ALTER SYSTEM ENABLE RESTRICTED SESSION;
6. Run the $ORACLE_HOME/rdbms/admin/utlrp.sql script as the SYS user. This script
recompiles all the PL/SQL modules using a default degree of parallelism.
7. Disable the restricted session mode for the database, and then start the services that you
previously shut down. To disable restricted session mode, use the following statement: ALTER
SYSTEM DISABLE RESTRICTED SESSION;
Note: During the conversion to native compilation, TYPE specifications are not recompiled to
NATIVE because these specifications do not contain executable code.
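As a sketch only, and assuming the instance uses an spfile, the full sequence of steps run as SYS from SQL*Plus might look like this (the TRUE argument to dbmsupgnv.sql excludes package specifications, as noted above; adapt paths and options to your environment):
SQL> ALTER SYSTEM SET PLSQL_CODE_TYPE = NATIVE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/dbmsupgnv.sql TRUE
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> @?/rdbms/admin/utlrp.sql
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;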

Oracle Database 11g: New Features for Administrators 16 - 26


Adaptive Cursor Sharing: Overview

Adaptive Cursor Sharing allows for intelligent cursor


sharing only for statements that use bind variables.
Adaptive Cursor Sharing is used to compromise
between cursor sharing and optimization.
Adaptive Cursor Sharing benefits:
Automatically detects when different executions would
benefit from different execution plans
Limits the number of generated child cursors to a
minimum
Automated mechanism that cannot be turned off
One plan not always appropriate for all bind values

16 - 27 Copyright 2007, Oracle. All rights reserved.

Adaptive Cursor Sharing: Overview


Bind variables were designed to allow the Oracle database to share a single cursor for multiple SQL
statements to reduce the amount of shared memory used to parse SQL statements. However, cursor
sharing and SQL optimization are two conflicting goals. Writing a SQL statement with literals
provides more information for the optimizer and naturally leads to better execution plans, while
increasing memory and CPU overhead caused by excessive hard parses. Oracle9i Database was the
first attempt to introduce a compromising solution by allowing similar SQL statements using
different literal values to be shared. For statements using bind variables, Oracle9i also introduced the
concept of bind peeking. Using bind peeking, the optimizer looks at the bind values the first time the
statement is executed. It then uses these values to determine an execution plan that will be shared by
all other executions of that statement. To benefit from bind peeking, it is assumed that cursor sharing
is intended and that different invocations of the statement are supposed to use the same execution
plan. If different invocations of the statement would significantly benefit from different execution
plans, then bind peeking is of no use in generating good execution plans.
To address this issue as much as possible, Oracle Database 11g introduces Adaptive Cursor Sharing.
This feature is a more sophisticated strategy designed to not share the cursor blindly, but generate
multiple plans per SQL statement with bind variables if the benefit of using multiple execution plans
outweighs the parse time and memory usage overhead. However, because the purpose of using bind
variables is to share cursors in memory, a compromise must be found regarding the number of child
cursors that need to be generated.
Oracle Database 11g: New Features for Administrators 16 - 27
Adaptive Cursor Sharing: Architecture
[Slide diagram: a cursor for SELECT * FROM emp WHERE sal = :1 AND dept = :2 starts as a bind-sensitive cursor; the system observes the statement for a while before marking it bind-aware. Each execution is plotted as a selectivity cube: the initial hard parse stores a cube around (0.15, 0.0025) together with the initial plan; a soft parse whose selectivity (0.18, 0.003) falls within that cube needs no new plan; a hard parse at (0.3, 0.009) creates a second selectivity cube and a new plan; a later hard parse at (0.28, 0.004) produces the same plan as the first child cursor, so the two cubes are merged.]
16 - 28 Copyright 2007, Oracle. All rights reserved.

Adaptive Cursor Sharing: Architecture


Using Adaptive Cursor Sharing, the following steps take place in the scenario illustrated in the slide:
1. The cursor starts its life with a hard parse, as usual. If bind peeking takes place, and a histogram
is used to compute selectivity of the predicate containing the bind variable, then the cursor is
marked as a bind-sensitive cursor. In addition, some information is stored about the predicate
containing the bind variables, including the predicate selectivity. In the slide example, the
predicate selectivity that would be stored is a cube centered around (0.15,0.0025). Because of
the initial hard parse, an initial execution plan is determined using the peeked binds. After the
cursor is executed, the bind values and the execution statistics of the cursor are stored in that
cursor.
During the next execution of the statement when a new set of bind values is used, the system
performs a usual soft parse, and finds the matching cursor for execution. At the end of
execution, execution statistics are compared with the ones currently stored in the cursor. The
system then observes the pattern of the statistics over all previous runs (see V$SQL_CS_
views on next slide) and decides whether or not to mark the cursor as bind-aware.
2. On the next soft parse of this query, if the cursor is now bind-aware, bind-aware cursor matching
is used. Suppose that the selectivity of the predicate with the new set of bind values is now
(0.18,0.003). Because selectivity is used as part of bind-aware cursor matching, and because the
selectivity is within an existing cube, the statement uses the existing child cursor's execution
plan to run.
Oracle Database 11g: New Features for Administrators 16 - 28
Adaptive Cursor Sharing: Architecture (continued)
3. On the next soft parse of this query, suppose that the selectivity of the predicate with the new set
of bind values is now (0.3,0.009). Because that selectivity is not within an existing cube, no
child cursor match is found. So the system does a hard parse, which generates a new child cursor
with a second execution plan in that case. In addition, the new selectivity cube is stored as part
of the new child cursor. After the new child cursor executes, the system stores the bind values
and execution statistics in the cursor.
4. On the next soft parse of this query, suppose that the selectivity of the predicate with the new set
of bind values is now (0.28,0.004). Because that selectivity is not within one of the existing
cubes, the system does a hard parse. Suppose that this time, the hard parse generates the same
execution plan as the first one. Because the plan is the same as the first child cursor, both child
cursors are merged. That is, both cubes are merged into a new bigger cube, and one of the child
cursors is deleted. The next time there is a soft parse, if the selectivity falls within the new cube,
the child cursor will match.

Oracle Database 11g: New Features for Administrators 16 - 29


Adaptive Cursor Sharing Views

The following views provide information about Adaptive Cursor Sharing usage:

V$SQL: Two new columns show whether a cursor is bind-sensitive or bind-aware.

V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the execution history histogram.

V$SQL_CS_SELECTIVITY: Shows the selectivity cubes stored for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks.

V$SQL_CS_STATISTICS: Shows execution statistics of a cursor using different bind sets.

16 - 30 Copyright 2007, Oracle. All rights reserved.

Adaptive Cursor Sharing Views


Whether a query is bind-aware is determined automatically, without any user input. However,
information about what is going on is exposed through V$ views so that the DBA can diagnose any
problems. Two new columns have been added to V$SQL:
IS_BIND_SENSITIVE: Indicates if a cursor is bind-sensitive; value YES | NO. A query for
which the optimizer peeked at bind variable values when computing predicate selectivities and
where a change in a bind variable value may lead to a different plan is called bind-sensitive.
IS_BIND_AWARE: Indicates if a cursor is bind-aware; value YES | NO. A cursor in the cursor
cache that has been marked to use bind-aware cursor sharing is called bind-aware.
V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across a three-bucket
execution history histogram.
V$SQL_CS_SELECTIVITY: Shows the selectivity cubes or ranges stored in a cursor for every
predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. It
contains the text of the predicates and the selectivity range low and high values.
V$SQL_CS_STATISTICS: Adaptive Cursor Sharing monitors execution of a query and collects
information about it for a while, and uses this information to decide whether to switch to using bind-
aware cursor sharing for the query. This view summarizes the information that it collects to make this
decision: for a sample of executions, it keeps track of rows processed, buffer gets, and CPU time.
The PEEKED column has the value YES if the bind set was used to build the cursor, and NO
otherwise.
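For example, a quick way to see how the database has classified the cursors for a given statement is to query the two new V$SQL columns directly (the WHERE clause below is only illustrative):
SQL> SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  2  FROM   v$sql
  3  WHERE  sql_text LIKE 'SELECT * FROM emp%';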
Oracle Database 11g: New Features for Administrators 16 - 30
Interacting with Adaptive Cursor Sharing

CURSOR_SHARING:
If CURSOR_SHARING <> EXACT, then statements
containing literals may be rewritten using bind variables.
If statements are rewritten, Adaptive Cursor Sharing may
apply to them.
SQL Plan Management (SPM):
If OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to
TRUE, then only the first generated plan is used.
As a workaround, set this parameter to FALSE, and run
your application until all plans are loaded in the cursor
cache.
Manually load the cursor cache into the corresponding
plan baseline.

16 - 31 Copyright 2007, Oracle. All rights reserved.

Interacting with Adaptive Cursor Sharing


Adaptive Cursor Sharing is independent of the CURSOR_SHARING parameter. The setting of
this parameter determines whether literals are replaced by system-generated bind variables. If
they are, then Adaptive Cursor Sharing behaves just as it would if the user supplied binds to
begin with.
When using SPM automatic plan capture, the first plan captured for a SQL statement with
bind variables is marked as the corresponding SQL plan baseline. If another plan is found for
that same SQL statement (which may be the case with Adaptive Cursor Sharing), it is added to
the SQL statement's plan history and marked for verification: It will not be used. So even though
Adaptive Cursor Sharing has come up with a new plan based on a new set of bind values, SPM
does not let it be used until the plan has been verified. Thus, reverting to the 10g behavior, only
the plan generated for the first set of bind values is used by all subsequent executions
of the statement. One possible workaround is to run the system for some time with automatic
plan capture set to FALSE and, after the cursor cache has been populated with all of the plans that a
SQL statement with binds will have, load those plans directly from the cursor cache into the
corresponding SQL plan baseline. By doing this, all the plans for a single SQL statement are
marked as SQL baseline plans by default.
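A minimal sketch of that manual load, assuming you have already identified the SQL_ID of the statement in the cursor cache (supplied here through a substitution variable rather than a real value), uses the DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE function:
SQL> VARIABLE cnt NUMBER
SQL> EXEC :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id')
SQL> PRINT cnt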

Oracle Database 11g: New Features for Administrators 16 - 31


Temporary Tablespace Shrink

Sort segment extents are managed in memory after


being physically allocated.
This method can be an issue after big sorts are done.
To release physical space from your disks, you can
shrink temporary tablespaces:
Locally managed temporary tablespaces
Online operation
CREATE TEMPORARY TABLESPACE temp
TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE
UNLIMITED
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m;

ALTER TABLESPACE temp SHRINK SPACE [KEEP 200m];

ALTER TABLESPACE temp SHRINK TEMPFILE 'tbs_temp.dbf';

16 - 32 Copyright 2007, Oracle. All rights reserved.

Temporary Tablespace Shrink


Huge sorting operations can cause the temporary tablespace to grow considerably. For performance reasons, after
a sort extent is physically allocated, it is managed in memory to avoid physical deallocation later. As
a result, you can end up with a huge tempfile that stays on disk until it is dropped. One possible
workaround is to create a new, smaller temporary tablespace, set this new tablespace as the
default temporary tablespace for users, and then drop the old tablespace. The disadvantage is that
this procedure requires that no active sort operations be running at the time the old temporary
tablespace is dropped.
Starting with Oracle Database 11g Release 1, you can use the ALTER TABLESPACE SHRINK
SPACE command to shrink a temporary tablespace, or you can use the ALTER TABLESPACE
SHRINK TEMPFILE command to shrink one tempfile. For both commands, you can specify the
optional KEEP clause that defines the lower bound that the tablespace/tempfile can be shrunk to. If
you omit the KEEP clause, then the database attempts to shrink the tablespace/tempfile as much as
possible (total space of all currently used extents) as long as other storage attributes are satisfied.
This operation is done online. However, if some currently used extents are allocated above the shrink
estimation, the system waits until they are released to finish the shrink operation.
Note: The ALTER DATABASE TEMPFILE RESIZE command generally fails with ORA-03297
because the tempfile contains used data beyond requested RESIZE value. As opposed to ALTER
TABLESPACE SHRINK, the ALTER DATABASE command does not try to deallocate sort extents
after they are allocated.
Oracle Database 11g: New Features for Administrators 16 - 32
DBA_TEMP_FREE_SPACE

Lists temporary space usage information


Central point for temporary tablespace space usage

Column name and description:

TABLESPACE_NAME: Name of the tablespace

TABLESPACE_SIZE: Total size of the tablespace, in bytes

ALLOCATED_SPACE: Total allocated space, in bytes, including space that is currently allocated and used and space that is currently allocated and available for reuse

FREE_SPACE: Total free space available, in bytes, including space that is currently allocated and available for reuse and space that is currently unallocated

16 - 33 Copyright 2007, Oracle. All rights reserved.

DBA_TEMP_FREE_SPACE
This dictionary view reports temporary space usage information at the tablespace level. The
information is derived from various existing views.
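For example, you might query the view before and after a shrink operation to verify the released space (a simple sketch):
SQL> SELECT tablespace_name, tablespace_size, allocated_space, free_space
  2  FROM   dba_temp_free_space;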

Oracle Database 11g: New Features for Administrators 16 - 33


Tablespace Option for Creating Temporary Table

Specify which temporary tablespace to use for your


global temporary tables.
Decide a proper temporary extent size.

CREATE TEMPORARY TABLESPACE temp


TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE
UNLIMITED
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m;

CREATE GLOBAL TEMPORARY TABLE temp_table (c varchar2(10))


ON COMMIT DELETE ROWS TABLESPACE temp;

16 - 34 Copyright 2007, Oracle. All rights reserved.

Tablespace Option for Creating Temporary Table


Starting with Oracle Database 11g Release 1, you can specify a TABLESPACE clause
when you create a global temporary table. If no tablespace is specified, the global temporary table is
created in your default temporary tablespace. In addition, indexes created on the temporary table are
also created in the same temporary tablespace as the temporary table.
This allows you to choose a proper extent size that reflects your sort-specific usage,
especially when you have several types of temporary space usage.
Note: You can query DBA_TABLES to find out which tablespace is used to store your global temporary tables.
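For example, a simple sketch of such a query (filter further as needed for your schemas):
SQL> SELECT owner, table_name, tablespace_name
  2  FROM   dba_tables
  3  WHERE  temporary = 'Y';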

Oracle Database 11g: New Features for Administrators 16 - 34


Easier Recovery from Loss of SPFILE

The FROM MEMORY clause allows the creation of current


systemwide parameter settings.

CREATE PFILE [= 'pfile_name' ]


FROM { { SPFILE [= 'spfile_name'] } | MEMORY } ;

CREATE SPFILE [= 'spfile_name' ]


FROM { { PFILE [= 'pfile_name' ] } | MEMORY } ;

16 - 35 Copyright 2007, Oracle. All rights reserved.

Easier Recovery from Loss of SPFILE


In Oracle Database 11g, the FROM MEMORY clause creates a pfile or spfile using the current
systemwide parameter settings. In a RAC environment, the created file contains the parameter
settings from each instance.
During instance startup, all parameter settings are logged to the alert.log file. As of Oracle
Database 11g, the alert.log parameter dump text is written in valid parameter syntax. This
facilitates cutting and pasting of parameters into a separate file, and then using it as a pfile for a
subsequent instance. The name of the pfile or spfile is written to the alert.log at instance
startup time. In cases when an unknown client-side pfile is used, the alert log indicates this as
well.
To support this additional functionality, the COMPATIBLE initialization parameter must be set to
11.0.0.0 or higher.
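For example, if the spfile is lost while the instance is still running, the following commands (sketched with an illustrative file name) re-create the parameter files from the in-memory settings:
SQL> CREATE PFILE = '/tmp/init_recovered.ora' FROM MEMORY;
SQL> CREATE SPFILE FROM MEMORY;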

Oracle Database 11g: New Features for Administrators 16 - 35


Summary

In this lesson, you should have learned how to:


Describe enhancements to locking mechanisms
Use the SQL query result cache
Use the enhanced PL/SQL recompilation mechanism
Create and use invisible indexes
Describe Adaptive Cursor Sharing
Manage your SPFILE

16 - 36 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 16 - 36


Practice 16: Overview

This practice covers the following topics:


Using the client result cache
Using the PL/SQL result cache

16 - 37 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators 16 - 37


Installation and Upgrade
Enhancements

Copyright 2007, Oracle. All rights reserved.


Manual Upgrade

B-2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 2


Performing a Manual Upgrade: 1

1. Install Oracle Database 11g, Release 1 in a new


ORACLE_HOME.
2. Analyze the existing database:
Use rdbms/admin/utlu111i.sql with the existing
server.
SQL> spool pre_upgrade.log
SQL> @utlu111i
3. Adjust redo logs and tablespace sizes if necessary.
4. Copy existing initialization files to the new
ORACLE_HOME and make recommended adjustments.
5. Shut down immediately, back up, and then switch to
the new ORACLE_HOME.

B-3 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 3


Performing a Manual Upgrade: 2

6. Start up using the Oracle Database 11g, Release 1


server:
SQL> startup upgrade
7. If you are upgrading from 9.2, create a SYSAUX
tablespace:
SQL> create tablespace SYSAUX datafile
'e:\oracle\oradata\empdb\sysaux01.dbf'
size 500M reuse
extent management local
segment space management auto
online;
8. Run the upgrade (automatically shuts down database):
SQL> spool upgrade.log
SQL> @catupgrd.sql

B-4 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 4


Performing a Manual Upgrade: 3

9. Restart the database instance in normal mode:


SQL> startup
10. Run the Post-Upgrade Status Tool to display the
results of the upgrade:
SQL> @utlu111s.sql
11. Run post-upgrade actions:
SQL> @catuppst.sql
12. Recompile and revalidate any remaining application
objects:
SQL> @utlrp (parallel compile on multiprocessor
system)

B-5 Copyright 2007, Oracle. All rights reserved.

Note
catuppst.sql is the post-upgrade script that performs the remaining upgrade actions that do not
require the database to be open in UPGRADE mode. It can be run at the same time that utlrp.sql
is being run.

Oracle Database 11g: New Features for Administrators B - 5


Downgrading a Database

B-6 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 6


Downgrading a Database: 1

1. Major release downgrades are supported back to 10.2 and


10.1.
2. Downgrade to only the release from which you upgraded.
3. Shut down and start up the instance in DOWNGRADE mode:
SQL> startup downgrade
4. Run the downgrade script, which automatically determines
the version of the database and calls the specific component
scripts:
SQL> SPOOL downgrade.log
SQL> @catdwgrd.sql
5. Shut down the database immediately after the downgrade
script ends:
SQL> shutdown immediate;

B-7 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 7


Downgrading a Database: 2

6. Move to the old ORACLE_HOME environment and start up the


database in the upgrade mode:
SQL> startup upgrade
7. Reload the old packages and views:
SQL> SPOOL reload.log
SQL> @catrelod.sql
8. Shut down and restart the instance for normal operation:
SQL> shutdown immediate;
SQL> startup
9. Run utlrp.sql to recompile all existing packages,
procedures, and types that were previously in an INVALID
state:
SQL> @utlrp
10. Perform any necessary post-downgrade tasks.

B-8 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators B - 8


Best Practices: 1

The three Ts: Test, Test, Test


Test the upgrade.
Test the application(s).
Test the recovery strategy.
Functional testing
Clone your production database on a machine with
similar resources.
Use DBUA for your upgrade.
Run your application and tools to ensure that they work.

B-9 Copyright 2007, Oracle. All rights reserved.

Best Practices: 1
Perform the planned tests on the current database and on the test database that you upgraded to
Oracle Database 11g, Release 1 (11.1). Compare the results and note anomalies. Repeat the test
upgrade as many times as necessary.
Test the newly upgraded test database with the existing applications to verify that they operate
properly with a new Oracle database. You might also want to test enhanced functions by adding the
available Oracle Database features. However, first make sure that the applications operate in the
same manner as they did in the current database.
Functional testing is a set of tests in which new and existing features and functions of the system are
tested after the upgrade. Functional testing includes all database, networking, and application
components. The objective of functional testing is to verify that each component of the system
functions as it did before upgrading and to verify that the new functions are working properly.
Create a test environment that does not interfere with the current production database.
Practice upgrading the database using the test environment. The best upgrade test, if possible, is
performed on an exact copy of the database to be upgraded, rather than on a downsized copy or test
data.
Do not upgrade the actual production database until after you successfully upgrade a test subset of
this database and test it with applications (as described in the next step).
The ultimate success of your upgrade depends heavily on the design and execution of an appropriate
backup strategy.
Oracle Database 11g: New Features for Administrators B - 9
Best Practices: 2

Performance analysis
Gather performance metrics prior to upgrade:
Gather AWR or Statspack baselines during various
workloads.
Gather sample performance metrics after upgrade:
Compare metrics before and after upgrade to catch issues.
Upgrade production systems only after performance and
functional goals have been met.
Pre-upgrade analysis
You can run DBUA without clicking Finish, or run
utlu111i.sql, to get a pre-upgrade analysis.
Read general and platform-specific release notes to catch
special cases.

B - 10 Copyright 2007, Oracle. All rights reserved.

Best Practices: 2
Performance testing of the new Oracle database compares the performance of various SQL
statements in the new Oracle database with the statements' performance in the current database.
Before upgrading, you should understand the performance profile of the application under the current
database. Specifically, you should understand the calls that the application makes to the database
server.
For example, if you are using Oracle Real Application Clusters and you want to measure the
performance gains realized from using cache fusion when you upgrade to Oracle Database 11g,
Release 1 (11.1), then make sure that you record your system's statistics before upgrading.
For that, you can use various V$ views or AWR/Statspack reports.

Oracle Database 11g: New Features for Administrators B - 10


Best Practices: 3

Automate your upgrade:


Use DBUA in command-line mode to automate your
upgrade.
Useful for upgrading a large number of databases
Logging
For manual upgrade, spool upgrade results and check
logs for possible issues.
DBUA can also do this for you.
Automatic conversion from 32-bit to 64-bit database
software
Check for sufficient space in SYSTEM, UNDO, TEMP, and
redo log files.

B - 11 Copyright 2007, Oracle. All rights reserved.

Best Practices: 3
If you are installing the 64-bit Oracle Database 11g, Release 1 (11.1) software but were previously
using a 32-bit Oracle Database installation, then the database is automatically converted to 64-bit
during a patch release or major release upgrade to Oracle Database 11g, Release 1 (11.1).
However, you must increase the initialization parameters affecting the system global area, such as
sga_target and shared_pool_size, to support the 64-bit operation.

Oracle Database 11g: New Features for Administrators B - 11


Best Practices: 4

Use Optimal Flexible Architecture (OFA)


Offers best practices for locating your database files,
configuration files, and ORACLE_HOME
Use new features:
Migrate to CBO from RBO.
Automatic management features for SGA, Undo, PGA,
and so on.
Use AWR/ADDM to diagnose performance issues.
Consider using SQL Tuning Advisor.
Change COMPATIBLE and OPTIMIZER_FEATURES_ENABLE
parameters to enable new optimizer features.

B - 12 Copyright 2007, Oracle. All rights reserved.

Best Practices: 4
Oracle recommends the Optimal Flexible Architecture (OFA) standard for your Oracle Database
installations. The OFA standard is a set of configuration guidelines for efficient and reliable Oracle
databases that require little maintenance.
OFA provides the following benefits:
Organizes large amounts of complicated software and data on disk to avoid device bottlenecks
and poor performance
Facilitates routine administrative tasks, such as software and data backup functions, which are
often vulnerable to data corruption
Alleviates switching among multiple Oracle databases
Adequately manages and administers database growth
Helps to eliminate fragmentation of free space in the data dictionary, isolates other
fragmentation, and minimizes resource contention
If you are not currently using the OFA standard, switching to the OFA standard involves modifying
your directory structure and relocating your database files.

Oracle Database 11g: New Features for Administrators B - 12


Best Practices: 5

Use Enterprise Manager Grid Control to manage your


enterprise:
Use EM to set up new features and try them out.
EM provides complete manageability solution for
databases, applications, storage, security, and networks.
Collect object and system statistics to improve plans
generated by the CBO.
Check for invalid objects in the database before
upgrading:
SQL> select owner, object_name, object_type,
status
from dba_objects where status<>'VALID';

B - 13 Copyright 2007, Oracle. All rights reserved.

Best Practices: 5
When you upgrade to Oracle Database 11g, Release 1 (11.1), optimizer statistics are collected for
dictionary tables that lack statistics. This statistics collection can be time consuming for databases
with a large number of dictionary tables, but statistics gathering occurs only for those tables that lack
statistics or are significantly changed during the upgrade.
To decrease the amount of down time incurred when collecting statistics, you can collect statistics
prior to performing the actual database upgrade. As of Oracle Database 10g, Release 1 (10.1), Oracle
recommends that you also use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to
gather dictionary statistics in addition to database component statistics such as SYS, SYSMAN, XDB,
and so on using the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
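For example, the pre-upgrade statistics collection mentioned above can be run from SQL*Plus as a DBA (a sketch only; add other component schemas as appropriate):
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');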

Oracle Database 11g: New Features for Administrators B - 13


Best Practices: 6

Avoid upgrading in a crisis:


Keep up with security alerts.
Keep up with critical patches needed for your
applications.
Keep track of the de-support schedules.
Always upgrade to the latest supported version of the
RDBMS.
Make sure patchset is available for all your platforms.
Data Vault option needs to be turned off for upgrade.

B - 14 Copyright 2007, Oracle. All rights reserved.

Best Practices: 6
If you have enabled Oracle Database Vault, you must disable it before upgrading the database. Then
enable it again when the upgrade is finished.

Oracle Database 11g: New Features for Administrators B - 14


Security New Features

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Configure the password file to use case-sensitive
passwords
Encrypt a tablespace
Configure fine-grained access to network services

C-2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators C - 2


Secure Password Support

Passwords in Oracle Database 11g:


Are case-sensitive
Contain more characters
Use more secure hash algorithm
Use salt in the hash algorithm
Usernames are still Oracle identifiers (up to 30
characters, non-case-sensitive)

C-3 Copyright 2007, Oracle. All rights reserved.

Secure Password Support


You must use more secure passwords to meet the demands of compliance to various security and
privacy regulations. Passwords that are very short and passwords that are formed from a limited set
of characters are susceptible to brute force attacks. Longer passwords with more different characters
allowed make the password much more difficult to guess or find. In Oracle Database 11g, the
password is handled differently than in previous versions:
Passwords are case-sensitive. Uppercase and lowercase characters are now different characters
when used in a password.
A password may contain multibyte characters without it being enclosed in quotation marks. A
password must be enclosed in quotation marks if it contains any special characters apart from $,
_, or #.
Passwords are always passed through a hash algorithm, and then stored as a user credential.
When the user presents a password, it is hashed and then compared to the stored credential. In
Oracle Database 11g, the hash algorithm is the public SHA-1 algorithm, instead of the proprietary
algorithm used in previous versions of the database. SHA-1 is a stronger algorithm using a 160-bit key.
Passwords always use salt. A hash function always produces the same output, given the same
input. Salt is a unique (random) value that is added to the input to ensure that the output
credential is unique.

Oracle Database 11g: New Features for Administrators C - 3


Automatic Secure Configuration

Default password profile


Default auditing
Built-in password complexity checking

C-4 Copyright 2007, Oracle. All rights reserved.

Automatic Secure Configuration


Oracle Database 11g installs and creates the database with certain security features recommended by
the Center for Internet Security (CIS) benchmark. The CIS recommended configuration is more
secure than the 10gR2 default installation; yet open enough to allow the majority of applications to
be successful. Many customers have adopted this benchmark already. There are some
recommendations of the CIS benchmark that may be incompatible with some applications.

Oracle Database 11g: New Features for Administrators C - 4


Password Configuration

By default:
Default password profile is enabled
Account is locked after 10 failed login attempts
In upgrade:
Passwords are non-case-sensitive until changed
Passwords become case-sensitive when the ALTER USER
command is used
On creation:
Passwords are case-sensitive

C-5 Copyright 2007, Oracle. All rights reserved.

Secure Default Configuration


When creating a custom database using the Database Configuration Assistant (DBCA), you can
specify the Oracle Database 11g default security configuration. By default, if a user tries to connect
to an Oracle instance multiple times using an incorrect password, the instance delays each login after
the third try. This protection applies for attempts made from different IP addresses or multiple client
connections. Later, it gradually increases the time before the user can try another password, up to a
maximum of about ten seconds.
The default password profile is enabled with these settings at database creation:
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords are non-case-sensitive until the
ALTER USER command is used to change the password.
When the database is created, the passwords will be case-sensitive by default.

Oracle Database 11g: New Features for Administrators C - 5


Enable Built-in Password Complexity Checker

Execute the utlpwdmg.sql script to create the password


verify function:

SQL> CONNECT / as SYSDBA


SQL> @?/rdbms/admin/utlpwdmg.sql

Alter the default profile:

ALTER PROFILE DEFAULT


LIMIT
PASSWORD_VERIFY_FUNCTION verify_function_11g;

C-6 Copyright 2007, Oracle. All rights reserved.

Enable Built-in Password Complexity Checker


The verify_function_11g is a sample PL/SQL function that can be easily modified to enforce
the password complexity policies at your site. This function does not require special characters to be
embedded in the password. Both verify_function_11g and the older verify_function
are included in the utlpwdmg.sql file.
To enable the password complexity checking, create a verification function owned by SYS. Use one
of the supplied functions or modify one of them to meet your requirements. The example shows
using the utlpwdmg.sql script. If there is an error in the password complexity check function
named in the profile or it does not exist, you cannot change passwords nor create users. The solution
is to set the PASSWORD_VERIFY_FUNCTION to NULL in the profile, until the problem is solved.
The verify_function_11g function checks that the password contains at least eight characters,
contains at least one number and one alphabetic character, and differs from the previous password by
at least three characters. The function also checks that the password is not a username or a username
appended with any number 1-100; a username reversed; a server name or a server name appended with
1-100; or one of a set of well-known and common passwords such as welcome1, database1,
oracle123, or oracle (appended with 1-100), and so on.

Oracle Database 11g: New Features for Administrators C - 6


Managing Default Audits

Review audit logs:


Default audit options cover important security privileges
Archive audit records
Export
Copy to another table
Remove archived audit records

C-7 Copyright 2007, Oracle. All rights reserved.

Managing Default Audits


Review the audit logs: By default, auditing is enabled in Oracle Database 11g for certain privileges
that are very important to security. The audit trail is recorded in the database AUD$ table by default;
the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database
performance, for most sites. Oracle recommends the use of OS audit trail files.
Archive audit records: To retain audit records, export them using Oracle Data Pump Export, or use a
SELECT statement to capture a set of audit records into a separate table.
Remove archived audit records: Remove audit records from the SYS.AUD$ table after reviewing
and archiving them. Audit records take up space in the SYSTEM tablespace. If the SYSTEM
tablespace cannot grow, and there is no more space for audit records, errors will be generated for
each audited statement. Because CREATE SESSION is one of the audited privileges, no new
sessions may be created except by a user connected as SYSDBA. Archive the audit table with the
Export utility using the QUERY option to specify the WHERE clause with a range of dates or SCNs.
Then delete the records from the audit table using the same WHERE clause.
When AUDIT_TRAIL=OS, separate files are created for each audit record in the directory specified
by AUDIT_FILE_DEST. All files as of a certain time can be copied, and then removed.
Note: The SYSTEM tablespace is created with the autoextend on option. So the SYSTEM
tablespace will grow as needed until there is no more space available on the disk.
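A minimal sketch of the capture-and-purge approach described above (the archive table name, date range, and retention policy are purely illustrative):
SQL> -- Capture audit records older than a cutoff date into an archive table
SQL> CREATE TABLE aud_archive AS
  2    SELECT * FROM sys.aud$
  3    WHERE  ntimestamp# < TO_TIMESTAMP('2007-07-01', 'YYYY-MM-DD');
SQL> -- Remove the archived records using the same WHERE clause
SQL> DELETE FROM sys.aud$
  2  WHERE  ntimestamp# < TO_TIMESTAMP('2007-07-01', 'YYYY-MM-DD');
SQL> COMMIT;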

Oracle Database 11g: New Features for Administrators C - 7


Managing Default Audits (continued)
The following privileges are audited for all users on success and failure, and by access:
CREATE EXTERNAL JOB
CREATE ANY JOB
GRANT ANY OBJECT PRIVILEGE
EXEMPT ACCESS POLICY
CREATE ANY LIBRARY
GRANT ANY PRIVILEGE
DROP PROFILE
ALTER PROFILE
DROP ANY PROCEDURE
ALTER ANY PROCEDURE
CREATE ANY PROCEDURE
ALTER DATABASE
GRANT ANY ROLE
CREATE PUBLIC DATABASE LINK
DROP ANY TABLE
ALTER ANY TABLE
CREATE ANY TABLE
DROP USER
ALTER USER
CREATE USER
CREATE SESSION
AUDIT SYSTEM
ALTER SYSTEM

Oracle Database 11g: New Features for Administrators C - 8


Adjust Security Settings


C-9 Copyright 2007, Oracle. All rights reserved.

Adjust Security Settings


When you create a database using the DBCA tool, you are offered a choice of security settings:
Keep the enhanced 11g default security settings (recommended). These settings include enabling
auditing and new default password profile.
Revert to pre-11g default security settings. To disable a particular category of enhanced settings
for compatibility purposes, choose from the following:
- Revert audit settings to pre-11g defaults
- Revert password profile settings to pre-11g defaults.
These settings can also be changed after the database is created using the DBCA. Some applications
may not work properly under the 11g default security settings.
Secure permissions on software are always set. They are not impacted by a user's choice for the Security
Settings option.

Oracle Database 11g: New Features for Administrators C - 9


Setting Security Parameters

Use case-sensitive passwords


SEC_CASE_SENSITIVE_LOGON
Protect against DoS attacks
SEC_PROTOCOL_ERROR_FURTHER_ACTION
SEC_PROTOCOL_ERROR_TRACE_ACTION
Protect against brute force attacks
SEC_MAX_FAILED_LOGIN_ATTEMPTS

C - 10 Copyright 2007, Oracle. All rights reserved.

Setting Security Parameters


A set of new parameters have been added to the Oracle Database 11g to enhance the default security
of the database. These parameters are systemwide and static.
Use Case-Sensitive Passwords to Improve Security
A new parameter, SEC_CASE_SENSITIVE_LOGON, allows you to set the case-sensitivity of user
passwords. Oracle recommends that you retain the default setting of TRUE. You can specify case
insensitive passwords for backward compatibility by setting this parameter to FALSE:
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE
Note: Disabling case sensitivity increases vulnerability to brute force attacks.
Protect Against Denial of Service (DoS) Attacks
The two parameters shown specify the actions to be taken when the database receives bad packets
from a client. The assumption is that the bad packets are from a possible malicious client. The
SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is to be taken with
the client connection: continue, drop the connection, or delay accepting requests. The other
parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE,
TRACE, LOG, or ALERT.

Oracle Database 11g: New Features for Administrators C - 10


Setting Security Parameters (continued)
Protect Against Brute Force Attacks
A new initialization parameter, SEC_MAX_FAILED_LOGIN_ATTEMPTS, that has a default setting
of 10 causes a connection to be automatically dropped after the specified number of attempts. This
parameter is enforced even when the password profile is not enabled.
This parameter prevents a program from making a database connection and then attempting to
authenticate by trying hundreds or thousands of passwords.
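For example, these parameters might be adjusted as follows (values are illustrative; SEC_MAX_FAILED_LOGIN_ATTEMPTS is static, so the new value takes effect only after the instance is restarted from the spfile):
SQL> ALTER SYSTEM SET SEC_MAX_FAILED_LOGIN_ATTEMPTS = 5 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET SEC_PROTOCOL_ERROR_TRACE_ACTION = LOG SCOPE=SPFILE;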

Oracle Database 11g: New Features for Administrators C - 11


Setting Database Administrator Authentication

Use password file with case-sensitive passwords.


Enable strong authentication for administrator roles:
Grant the administrator role in OID.
Use Kerberos tickets.
Use certificates with SSL.

C - 12 Copyright 2007, Oracle. All rights reserved.

Setting Database Administrator Authentication


The database administrator must always be authenticated. In Oracle Database 11g, there are new
methods that make administrator authentication more secure and centralize the administration of
these privileged users. Case-sensitive passwords have also been extended to remote connections for
privileged users. You can override this default behavior with the following command:
orapwd file=orapworcl entries=5 ignorecase=Y
If your concern is that the password file might be vulnerable or that the maintenance of many
password files is a burden, then strong authentication can be implemented:
Grant the SYSDBA or SYSOPER enterprise role in Oracle Internet Directory (OID).
Use Kerberos tickets.
Use certificates over SSL.
To use any of the strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization
parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication
methods. Authentication through OID or through Kerberos also can provide centralized
administration or single sign-on.
If the password file is configured, it is checked first. The user may also be authenticated by the local
OS by being a member of the OSDBA or OSOPER groups.
For more information, see the Oracle Database Advanced Security Administrator's Guide 11g
Release 1.

Oracle Database 11g: New Features for Administrators C - 12


Set Up Directory Authentication
for Administrative Users

1. Create the user in the directory.


2. Grant the SYSDBA or SYSOPER enterprise role to user.
3. Set the LDAP_DIRECTORY_SYSAUTH parameter in the
database.
4. Check whether the LDAP_DIRECTORY_ACCESS
parameter is set to PASSWORD or SSL.
5. Test the connection.

$sqlplus fred/t%3eEGQ@orcl AS SYSDBA

C - 13 Copyright 2007, Oracle. All rights reserved.

Set Up Directory Authentication for Administrative Users


To enable Oracle Internet Directory (OID) server to authorize SYSDBA and SYSOPER connections:
1. Configure the administrative user by using the same procedures you would use to configure a
typical user.
2. In OID, grant the SYSDBA or SYSOPER enterprise role to the user for the database the user will
administer.
3. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the
LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to
authenticate to the database, by a strong authentication method.
4. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. The
possible values are PASSWORD or SSL.
5. Later, the administrative user can log in by including the net service name in the CONNECT
statement. For example, for Fred to log in as SYSDBA if the net service name is orcl:
CONNECT fred/t%3eEGQ@orcl AS SYSDBA
Note: If the database is configured to use a password file for remote authentication, the password file
will be checked first.

Oracle Database 11g: New Features for Administrators C - 13


Set Up Kerberos Authentication
for Administrative Users

1. Create the user in the Kerberos domain.


2. Configure OID for Kerberos authentication.
3. Grant the SYSDBA or SYSOPER enterprise role to the
user in OID.
4. Set the LDAP_DIRECTORY_SYSAUTH parameter in the
database.
5. Set the LDAP_DIRECTORY_ACCESS parameter.
6. Test the connection.

$sqlplus /@orcl AS SYSDBA

C - 14 Copyright 2007, Oracle. All rights reserved.

Set Up Kerberos Authentication for Administrative Users


To enable Kerberos to authorize SYSDBA and SYSOPER connections:
1. Configure the administrative user by using the same procedures you would use to configure a
typical user. For more information about configuring Kerberos authentication, see the Oracle
Database Advanced Security Administrator's Guide 11g.
2. Configure OID for Kerberos authentication. See the Oracle Database Enterprise User
Administrator's Guide 11g Release 1.
3. In OID, grant the SYSDBA or SYSOPER enterprise role to the user for the database the user will
administer.
4. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the
LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to
authenticate to the database, by a strong authentication method.
5. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. This
will be set to either PASSWORD or SSL.
6. Later, the administrative user can log in by including the net service name in the CONNECT
statement. For example, to log in as SYSDBA if the net service name is orcl:
CONNECT /@orcl AS SYSDBA

Oracle Database 11g: New Features for Administrators C - 14


Set Up SSL Authentication
for Administrative Users

1. Configure client to use SSL.


2. Configure server to use SSL.
3. Configure OID for SSL user authentication.
4. Grant SYSOPER or SYSDBA to the user.
5. Set the LDAP_DIRECTORY_SYSAUTH parameter in the
database.
6. Test the connection.

$sqlplus /@orcl AS SYSDBA

C - 15 Copyright 2007, Oracle. All rights reserved.

Set Up SSL Authentication for Administrative Users


To enable SYSDBA and SYSOPER connections using certificates and SSL (for more information
about configuring SSL authentication, see the Oracle Database Advanced Security Administrator's
Guide 11g):
1. Configure the client to use SSL
- Set up client wallet and user certificate. Update wallet location in sqlnet.ora.
- Configure Oracle net service name to include server-distinguished names and use TCP/IP
with SSL in tnsnames.ora.
- Configure TCP/IP with SSL in listener.ora.
- Set the client SSL cipher suites and required SSL version; set SSL as an authentication
service in sqlnet.ora.
2. Configure the server to use SSL:
- Enable SSL for your database listener on TCPS and provide a corresponding TNS name.
- Store your database PKI credentials in the database wallet.
- Set the LDAP_DIRECTORY_ACCESS initialization parameter to SSL.
3. Configure OID for SSL user authentication. See the Oracle Database Enterprise User
Administrator's Guide 11g Release 1.
4. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer.

Oracle Database 11g: New Features for Administrators C - 15


Set Up SSL Authentication for Administrative Users (continued)
5. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the
LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to
authenticate to the database, by a strong authentication method.
6. Later, the administrative user can log in by including the net service name in the CONNECT
statement. For example, to log in as SYSDBA if the net service name is orcl:
CONNECT /@orcl AS SYSDBA

Oracle Database 11g: New Features for Administrators C - 16


Transparent Data Encryption

New features in TDE include:


Tablespace Encryption
Support for LogMiner
Support for Logical Standby
Support for Streams
Support for Asynchronous Change Data Capture
Hardware-based master key protection

C - 17 Copyright 2007, Oracle. All rights reserved.

Transparent Data Encryption


Several new features enhance the capabilities of Transparent Data Encryption (TDE), and build on
the same infrastructure.
The changes in LogMiner to support TDE provide the infrastructure for change capture engines used
for Logical Standby, Streams, and Asynchronous Change Data Capture. For LogMiner to support
TDE, it must be able to access the encryption wallet. To access the wallet, the instance must be
mounted and the wallet open. LogMiner does not support Hardware Security Module (HSM) or user-
held keys.
For Logical Standby, the logs may be mined either on the source or the target database, thus the
wallet must be the same for both databases.
Encrypted columns are handled the same way in both Streams and the Streams-based Change Data
Capture. The redo records are mined at the source, where the wallet exists. The data is transmitted
unencrypted to the target and encrypted using the wallet at the target. The data can be encrypted in
transit by using Advanced Security Option to provide network encryption.

Oracle Database 11g: New Features for Administrators C - 17


Using Tablespace Encryption

Create an encrypted tablespace.


1. Create or open the encryption wallet:

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY


"welcome1";

2. Create a tablespace with the encryption keywords:

SQL> CREATE TABLESPACE encrypt_ts


2> DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
3> ENCRYPTION USING '3DES168'
4> DEFAULT STORAGE (ENCRYPT);

C - 18 Copyright 2007, Oracle. All rights reserved.

Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on
read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The
SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the
encryption wallet must be open.
The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption
properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify
USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid
algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the
properties in the V$ENCRYPTED_TABLESPACES view.
The encrypted data is protected during operations such as JOIN and SORT. This means that the data
is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet.
Restrictions
Temporary and undo tablespaces cannot be encrypted. (Selected blocks are encrypted.)
Bfiles and external tables are not encrypted.
Transportable tablespaces across different endian platforms are not supported.
The key for an encrypted tablespace cannot be changed at this time. As a workaround, create a new
tablespace with the desired properties and move all objects to the new tablespace, as sketched below.
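A hedged sketch of the workaround (the tablespace, data file, and table names are illustrative):
CREATE TABLESPACE encrypt_ts2
  DATAFILE '$ORACLE_HOME/dbs/encrypt2.dat' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

ALTER TABLE hr.employees MOVE TABLESPACE encrypt_ts2;

-- Confirm the encryption properties of the tablespaces
SELECT * FROM v$encrypted_tablespaces;
Note that indexes on moved tables must be rebuilt afterward.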
Oracle Database 11g: New Features for Administrators C - 18
TDE and LogMiner

LogMiner supports TDE-encrypted columns.


Restrictions:
The wallet holding the TDE master keys must be open.
Hardware Security Modules are not supported.
User-held keys are not supported.

C - 19 Copyright 2007, Oracle. All rights reserved.

TDE and LogMiner


With Transparent Data Encryption (TDE), the encrypted column data is encrypted in the data files,
the undo segments, and the redo logs. Oracle Logical Standby depends on LogMiner's ability to
transform redo logs into SQL statements for SQL Apply. LogMiner has been enhanced to support
TDE. This enhancement provides the ability to support TDE on a logical standby database.
The wallet containing the master keys for TDE must be open for LogMiner to decrypt the encrypted
columns. The database instance must be mounted to open the wallet; therefore, LogMiner cannot
populate V$LOGMNR_CONTENTS to support TDE if the database instance is not mounted.
LogMiner populates V$LOGMNR_CONTENTS for tables with encrypted columns, displaying the
column data unencrypted for rows involved in DML statements. Note that this is not a security
violation: TDE is a file-level encryption feature and not an access control feature. It does not prohibit
DBAs from looking at encrypted data.
In Oracle Database 11g, LogMiner does not support TDE with a hardware security module (HSM) for
key storage. User-held keys for TDE are PKI public and private keys supplied by the user for TDE
master keys. User-held keys are not supported by LogMiner.

Oracle Database 11g: New Features for Administrators C - 19


TDE and Logical Standby

Logical standby database with TDE:


Wallet on the standby is a copy of the wallet on the
primary.
Master key may be changed only on the primary.
Wallet open and close commands are not replicated.
Table key may be changed on the standby.
Table encryption algorithm may be changed on the
standby.

C - 20 Copyright 2007, Oracle. All rights reserved.

TDE and Logical Standby


The same wallet is required for both databases. The wallet must be copied from the primary database
to the standby database every time the master key has been changed using ALTER SYSTEM SET
ENCRYPTION KEY IDENTIFIED BY <wallet_password>. An error is raised if the DBA
attempts to change the master key on the standby database.
If auto-login wallet is not used, the wallet must be opened on the standby. Wallet open and close
commands are not replicated on standby. A different password can be used to open the wallet on the
standby. The wallet owner can change the password to be used for the copy of the wallet on the
standby.
The DBA has the ability to change the encryption key or the encryption algorithm of a replicated
table at the logical standby. This does not require a change to the master key or wallet. This operation
is performed with:
ALTER TABLE table_name REKEY USING '3DES168';
There can be only one algorithm per table. Changing the algorithm at the table changes the algorithm
for all the columns. A column on the standby can have a different algorithm than the primary or no
encryption. To change the table key, the guard setting must be lowered to NONE.
TDE can be used on local tables in the logical standby independent of the primary, if encrypted
columns are not replicated into the standby.
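A hedged sketch of changing the key of a replicated table at the logical standby (the table name,
algorithm, and original guard setting are illustrative):
ALTER DATABASE GUARD NONE;                   -- lower the guard so the table can be modified
ALTER TABLE scott.emp REKEY USING 'AES256';  -- change the table key and algorithm
ALTER DATABASE GUARD ALL;                    -- restore the previous guard setting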

Oracle Database 11g: New Features for Administrators C - 20


TDE and Streams

Oracle Streams now provides the ability to transparently:


Decrypt values protected by TDE for filtering and
processing
Reencrypt values so that they are never in clear text
while on disk

Capture Staging Apply

C - 21 Copyright 2007, Oracle. All rights reserved.

TDE and Streams


In Oracle Database 11g, Oracle Streams supports TDE. Oracle Streams now provides the ability to
transparently:
Decrypt values protected by TDE for filtering, processing and so on
Reencrypt values so that they are never in clear text while on disk (as opposed to memory)
If the corresponding column in the apply database has TDE support, the applied data is transparently
reencrypted using the local database's keys. If the column value was encrypted at the source, and the
corresponding column in the apply database is not encrypted, the apply process raises an error unless
the apply parameter ENFORCE_ENCRYPTION is set to FALSE. Whenever logical change records
(LCRs) are stored on disk, such as due to queue or apply spilling and apply error creation, the data is
encrypted if the local database supports TDE. This is performed transparently without any user
intervention. LCR message tracing does not display clear text of encrypted column values.
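A hedged sketch of relaxing that check at the apply site (the apply process name is illustrative, and
the exact value literal for the parameter should be verified against the DBMS_APPLY_ADM documentation):
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_hr',             -- illustrative apply process name
    parameter  => 'enforce_encryption',
    value      => 'N');                   -- assumption: a Y/N literal disables the check
END;
/

-- Review the current apply parameter settings
SELECT apply_name, parameter, value
FROM   dba_apply_parameters
WHERE  parameter = 'ENFORCE_ENCRYPTION';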

Oracle Database 11g: New Features for Administrators C - 21


Hardware Security Module

Encrypt and decrypt operations are performed on the hardware security module.

[Slide diagram: a client, the database server, the Hardware Security Module, and encrypted data.]

C - 22 Copyright 2007, Oracle. All rights reserved.

Hardware Security Module


A hardware security module (HSM) is a physical device that provides secure storage for encryption
keys. It also provides secure computational space (memory) to perform encryption and decryption
operations. HSM is a more secure alternative to the Oracle wallet.
Transparent data encryption can use an HSM to provide enhanced security for sensitive data. An
HSM is used to store the master encryption key used for transparent data encryption. The key is
secure from unauthorized access attempts because the HSM is a physical device and not an operating
system file. All encryption and decryption operations that use the master encryption key are
performed inside the HSM. This means that the master encryption key is never exposed in insecure
memory.
There are several vendors that provide Hardware Security Modules. The vendor must supply the
appropriate libraries.

Oracle Database 11g: New Features for Administrators C - 22


Using a Hardware Security Module
with TDE

1. Configure sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)
(METHOD_DATA=
(DIRECTORY=/app/oracle/admin/SID1/wallet)))

2. Copy the PKCS#11 library to the correct path.


3. Set up the HSM.
4. Generate a master encryption key for HSM-based
encryption:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY
user_Id:password

5. Ensure that the HSM is accessible.


6. Encrypt and decrypt data.

C - 23 Copyright 2007, Oracle. All rights reserved.

Using a Hardware Security Module with TDE


Using HSM involves an initial setup of the HSM device. You also need to configure transparent data
encryption to use HSM. After the initial setup is done, HSM can be used just like an Oracle software
wallet. The following steps discuss configuring and using hardware security modules:
1. Set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)
(METHOD_DATA=(DIRECTORY=/app/oracle/admin/SID1/wallet)))
The directory is required to find the old wallet when migrating from a software-based wallet.
2. Copy the PKCS#11 library to its correct path.
3. Set up the HSM per the instructions provided by the HSM vendor. A user account is required for
the database to interact with the HSM.
4. Generate a master encryption key for HSM-based encryption:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password
[MIGRATE USING wallet_password]
user_Id:password refers to the user account in step 3. The MIGRATE clause is used when
the TDE is already in place. MIGRATE decrypts the existing column encryption keys and then
encrypts them with the newly created, HSM-based, master encryption key.
5. Ensure that the HSM is accessible:
ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY user_Id:password
6. Encrypt and decrypt data as you would with a software wallet.

Oracle Database 11g: New Features for Administrators C - 23


Encryption for LOB Columns

CREATE TABLE test1 (doc CLOB
  ENCRYPT USING 'AES128')
  LOB(doc) STORE AS SECUREFILE
  (CACHE NOLOGGING);

LOB encryption is allowed only for SECUREFILE LOBs.


All LOBs in the LOB column are encrypted.
LOBs can be encrypted on per-column or per-partition
basis.
Allows for the coexistence of SECUREFILE and
BASICFILE LOBs

C - 24 Copyright 2007, Oracle. All rights reserved.

Encryption for LOB Columns


Oracle Database 11g introduces a completely reengineered large object (LOB) data type that
dramatically improves performance, manageability, and ease of application development. This
SecureFiles implementation (of LOBs) offers advanced, next-generation functionality such as
intelligent compression and transparent encryption. The encrypted data in SecureFiles is stored in-
place and is available for random reads and writes.
You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or
disabled (DECRYPT, the default) on the LOB column. The current TDE syntax is used for
extending encryption to LOB data types.
LOB implementation from prior versions is still supported for backward compatibility and is now
referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be
created as SECUREFILES or BASICFILES. The default LOB type is BASICFILES to ensure
backward compatibility.
Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192.
Note: For further discussion on SecureFiles, please see the lesson titled Oracle SecureFiles.
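A hedged sketch of adding an encrypted SecureFiles LOB column to an existing table (the table, column,
and storage options are illustrative):
ALTER TABLE test1 ADD (summary CLOB ENCRYPT USING 'AES192')
  LOB(summary) STORE AS SECUREFILE (CACHE);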

Oracle Database 11g: New Features for Administrators C - 24


Using Kerberos Enhancements

Use stronger encryption algorithms (no action


required).
Interoperability between MS KDC and MIT KDC (no
action required)
Longer principal name:
CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS
'KerberosUser@SOMEORGANIZATION.COM';

Convert a DB user to Kerberos user:

ALTER USER DBUSER IDENTIFIED EXTERNALLY AS


'KerberosUser@SOMEORGANIZATION.COM';

C - 25 Copyright 2007, Oracle. All rights reserved.

Kerberos Enhancements
The Oracle client Kerberos implementation now makes use of secure encryption algorithms such as
3DES and AES in place of DES. This makes using Kerberos more secure. The Kerberos
authentication mechanism in the Oracle database now supports the following encryption types:
DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum)
RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum)
AES128-CTS (AES algorithm with 128-bit key in CTS mode with HMAC-SHA1 as checksum)
AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum)
The Kerberos implementation has been enhanced to interoperate smoothly with Microsoft and MIT
Key Distribution Centers.
The Kerberos principal name can now contain more than 30 characters. It is no longer restricted by
the number of characters allowed in a database username. If the Kerberos principal name is longer
than 30 characters, use:
CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS
'KerberosUser@SOMEORGANIZATION.COM';
Database users can be converted to Kerberos users without requiring a new user to be created using
the ALTER USER syntax:
ALTER USER DBUSER IDENTIFIED EXTERNALLY AS
'KerberosUser@SOMEORGANIZATION.COM';

Oracle Database 11g: New Features for Administrators C - 25


Enterprise Manager Security Management

Manage security through EM.


Policy Manager replaced for:
Virtual Private Database
Application Context
Oracle Label Security
Enterprise User Security pages
added
TDE pages added

C - 26 Copyright 2007, Oracle. All rights reserved.

Enterprise Manager Security Management


Security management has been integrated into Enterprise Manager.
The Policy Manager Java console-based tool has been superseded. Oracle Label Security,
Application Contexts, and Virtual Private Database, previously administered through the Oracle
Policy Manager tool, are managed through Enterprise Manager. The Oracle Policy Manager tool is
still available.
The Enterprise Security Manager tool has been superseded by Enterprise Manager features.
Enterprise User Security is also now managed through Enterprise Manager. The Enterprise User
Security menu item appears as soon as the ldap.ora file is configured. See the Enterprise User
Administrator's Guide for configuration details. The Enterprise Security Manager tool is still
available.
Transparent Data Encryption can now be managed through Enterprise Manager, including wallet
management. You can create, open, and close the wallet from Enterprise Manager pages.

Oracle Database 11g: New Features for Administrators C - 26


Managing TDE with Enterprise Manager

C - 27 Copyright 2007, Oracle. All rights reserved.

Managing TDE with Enterprise Manager


The administrator using Enterprise Manager can open and close the wallet, move the location of the
wallet, and generate a new master key.
The example in the slide shows that TDE options are part of the Create or Edit Table processes.
Table encryption options allow you to choose the encryption algorithm and salt.
The table key can also be reset.
The other place where TDE changed the management pages is Export and Import Data. If TDE is
configured, the wallet is open, and the table to export has encrypted columns, then the export wizard
will offer data encryption. The same arbitrary key (password) that was used on export must be
provided on import in order to import any encrypted columns. A partial import that does not include
tables that contain encrypted columns does not require the password.

Oracle Database 11g: New Features for Administrators C - 27


Managing Tablespace Encryption
with Enterprise Manager

C - 28 Copyright 2007, Oracle. All rights reserved.

Managing Tablespace Encryption with Enterprise Manager


You can manage tablespace encryption from the same console that you use to manage Transparent Data
Encryption. After encryption has been enabled for the database, the DBA can set the encryption
property of a tablespace on the Create Tablespace page.

Oracle Database 11g: New Features for Administrators C - 28


Managing Virtual Private Database

C - 29 Copyright 2007, Oracle. All rights reserved.

Managing Virtual Private Database


With Enterprise Manager 11g, you can now manage the Virtual Private Database policies from the
console. You can enable, disable, add, and drop policies. The console also allows you to manage
application contexts. The application context page is not shown.

Oracle Database 11g: New Features for Administrators C - 29


Managing Label Security
with Enterprise Manager

C - 30 Copyright 2007, Oracle. All rights reserved.

Managing Label Security with Database Control


Oracle Label Security (OLS) Management is integrated with Enterprise Manager Database Control.
The Database Administrator can manage OLS from the same console that is used for managing the
database instances, listeners, and host. The differences between Database Control and Grid Control are
minimal.
Oracle Label Security (OLS) Management is also integrated with Enterprise Manager Grid Control. The
Database Administrator can manage OLS from the same console that is used for managing the
database instances, listeners, and other targets.

Oracle Database 11g: New Features for Administrators C - 30


Managing Label Security
with Oracle Internet Directory

C - 31 Copyright 2007, Oracle. All rights reserved.

Label Security with OID


Oracle Label Security policies can now be created and stored in Oracle Internet Directory, and then
applied to one or more databases. A database subscribes to a policy, making the policy available
to the database, and the policy can then be applied to tables and schemas in the database.
Label authorizations can be assigned to enterprise users in the form of profiles.

Oracle Database 11g: New Features for Administrators C - 31


Managing Enterprise Users
with Enterprise Manager

C - 32 Copyright 2007, Oracle. All rights reserved.

Enterprise Users/Enterprise Manager


The functionality of the Enterprise Security Manager has been integrated into Enterprise Manager.
Enterprise Manager allows you to create and configure enterprise domains, enterprise roles, user
schema mappings and proxy permissions. Databases can be configured for enterprise user security
after they have been registered with OID. The registration is performed through the DBCA tool.
Enterprise users and groups can also be configured for enterprise user security. The creation of
enterprise users and groups can be done through Delegated Administration Service (DAS).
Administrators for the database can be created and given the appropriate roles in OID through
Enterprise Manager.
Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain
mappings, and proxy users.

Oracle Database 11g: New Features for Administrators C - 32


Enterprise Manager
Policy Trend

C - 33 Copyright 2007, Oracle. All rights reserved.

Enterprise Manager Policy Trend


With Enterprise Manager Policy Trend, you can view the compliance of your database configuration
against a set of Oracle Security best practices.

Oracle Database 11g: New Features for Administrators C - 33


Managing Enterprise Users
with Enterprise Manager

C - 34 Copyright 2007, Oracle. All rights reserved.

Managing Enterprise Users with Enterprise Manager


The functionality of the Enterprise Security Manager has been integrated into Enterprise Manager.
Enterprise Users can be created and configured. Databases can be configured for enterprise user
security after they have been registered with OID. The registration is performed through the DBCA
tool.
Administrators for the database can be created and given the appropriate roles in OID through
Enterprise Manager.
Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain
mappings, and proxy users.

Oracle Database 11g: New Features for Administrators C - 34


Oracle Audit Vault Enhancements

Audit Vault enhancements to Streams:


Harden Streams configuration
DML/DDL capture on SYS and SYSTEM schemas
Capture changes to SYS.AUD$ and SYS.FGA_LOG$

C - 35 Copyright 2007, Oracle. All rights reserved.

Oracle Audit Vault Enhancements


Oracle Audit Vault provides auditing in a heterogeneous environment. Audit Vault consists of a
secure database to store and analyze audit information from various sources such as databases, OS
audit trails, and so on. Oracle Streams is an asynchronous information-sharing infrastructure that
facilitates sharing of events within a database or from one database to another. Events could be DML
or DDL changes happening in a database. These events are captured by Streams implicit capture and
are propagated to a queue in a remote database where they are consumed by a subscriber, which is
typically the Streams apply process. Oracle Streams has been enhanced to support Audit Vault.
The Streams configurations are controlled from the Audit Vault location. After the initial
configuration has been completed, the Streams setup at both the Audit Source and Audit Vault will
be completely driven from the Audit Vault. This prevents configurations from being changed at the
Audit Source.
Oracle Streams has been enhanced to allow capture of changes to the SYS and SYSTEM schemas.
For user schemas, Oracle Streams already captures all DML on participating tables and all DDL in the
database. Streams has been enhanced to capture the events that change the database audit trail, forwarding
that information to Audit Vault.

Oracle Database 11g: New Features for Administrators C - 35


Using RMAN Security Enhancements

Configure backup shredding:


RMAN> CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON;

Using backup shredding:

RMAN> DELETE FORCE;

C - 36 Copyright 2007, Oracle. All rights reserved.

Using RMAN Security Enhancements


Backup shredding is a key management feature that allows the DBA to delete the encryption key of
transparent encrypted backups, without physical access to the backup media. The encrypted backups
are rendered inaccessible if the encryption key is destroyed. This does not apply to password-
protected backups.
Configure backup shredding with:
CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON;
Or
SET ENCRYPTION EXTERNAL KEY STORAGE ON;
The default setting is OFF, and backup shredding is not enabled. To shred a backup, no new
command is needed; simply use:
DELETE FORCE;
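For example, to shred a single encrypted backup set, a hedged sketch (the backup set key is illustrative):
RMAN> DELETE FORCE NOPROMPT BACKUPSET 101;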

Oracle Database 11g: New Features for Administrators C - 36


Managing Fine-Grained Access to External
Network Services

1. Create an ACL and its privileges:

BEGIN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
acl => 'us-oracle-com-permissions.xml',
description => 'Permissions for oracle network',
principal => 'SCOTT',
is_grant => TRUE,
privilege => 'connect');
END;

C - 37 Copyright 2007, Oracle. All rights reserved.

Managing Fine-Grained Access to External Network Services


The network utility family of PL/SQL packages such as UTL_TCP, UTL_INADDR, UTL_HTTP,
UTL_SMTP, and UTL_MAIL allow Oracle users to make network callouts from the database using
raw TCP or using higher-level protocols built on raw TCP. A user either did or did not have the
EXECUTE privilege on these packages and there was no control over which network hosts were
accessed. The new package DBMS_NETWORK_ACL_ADMIN allows fine-grained control using
access control lists (ACL) implemented by XML DB.
1. Create an access control list (ACL): The ACL is a list of users and privileges held in an XML
file. The XML document named in the acl parameter is relative to the /sys/acl/ folder in
the XML DB. In the example, SCOTT is granted connect. The username is case-sensitive in
the ACL and must match the username of the session. There are only resolve and connect
privileges. The connect privilege implies resolve. Optional parameters can specify a start
and end time stamp for these privileges. To add more users and privileges to this ACL, use the
ADD_PRIVILEGE procedure.
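A hedged sketch of adding another user to the same ACL (the principal is illustrative):
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'us-oracle-com-permissions.xml',
    principal => 'HR',          -- illustrative additional user; case-sensitive
    is_grant  => TRUE,
    privilege => 'resolve');
END;
/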

Oracle Database 11g: New Features for Administrators C - 37


Managing Fine-Grained Access to External
Network Services

2. Assign an ACL to one or more network hosts:

BEGIN
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
acl => 'us-oracle-com-permissions.xml',
host => '*.us.oracle.com',
lower_port => 80,
upper_port => null);
END;

C - 38 Copyright 2007, Oracle. All rights reserved.

Managing Fine-Grained Access to External Network Services (continued)


2. Assign an ACL to one or more network hosts: The ASSIGN_ACL procedure associates the
ACL with a network host and, optionally, a port or range of ports. In the example, the host
parameter uses a wildcard character in the host name to assign the ACL to all the hosts of a
domain. The use of wildcards affects the order of precedence for the evaluation of the ACL.
Fully qualified host names with ports are evaluated before hosts with ports. Fully qualified host
names are evaluated before partial domain names, and subdomains are evaluated before the
top-level domain.
Multiple hosts can be assigned to the same ACL and multiple users can be added to the same
ACL in any order after the ACL has been created.
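To check how an ACL resolves for a given user, a hedged sketch (the function returns 1 when the
privilege is granted, 0 when it is denied, and NULL when no matching entry exists):
SELECT DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(
         'us-oracle-com-permissions.xml', 'SCOTT', 'connect')
FROM   dual;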

Oracle Database 11g: New Features for Administrators C - 38


Summary

In this lesson, you should have learned how to:


Configure the password file to use case-sensitive
passwords
Encrypt a tablespace
Configure fine-grained access to network services

C - 39 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators C - 39


Practice 14: Overview

This practice covers the following topics:


Changing the use of case-sensitive passwords
Implementing a password complexity function
Encrypting a tablespace

C - 40 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators C - 40


Remote Jobs

Copyright 2007, Oracle. All rights reserved.


Objectives

After completing this lesson, you should be able to:


Configure remote jobs
Create remote jobs

D-2 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators D - 2


New Scheduler Features

Remote jobs:
External jobs (OS based)
Database jobs

D-3 Copyright 2007, Oracle. All rights reserved.

New Scheduler Features


The Scheduler in Oracle Database 11g has been enhanced with the goal of unifying all scheduling
and jobs functionality into one facility. This has the effect of reducing the administration of jobs
(with fewer places to look for scheduled jobs) and reducing the number of background processes that
start, stop, and monitor scheduled jobs.
A DBA responsible for more than a few databases on multiple servers often needs to be familiar with
the operating system (OS) scheduling tools to do everything required. In Oracle Database 11g, a
Scheduler Agent allows the Scheduler to create jobs not only on machines where a database resides,
but also on any machine where a Scheduler Agent is installed. These jobs can be external OS-based
jobs or database jobs. The DBA can now administer jobs across the network from one location.

Oracle Database 11g: New Features for Administrators D - 3


Remote Jobs

Remote jobs:
Operating system-level jobs
Scripts, binaries, and so on
No Oracle database required.
Agent starts and manages jobs.

[Slide diagram: the source database schedules jobs; the Scheduler Agent (SA) on the remote machine
executes the OS job or the DB job.]

D-4 Copyright 2007, Oracle. All rights reserved.

Remote Jobs
The Oracle Scheduler can now create and run remote jobs. The ability to run a job from a centralized
scheduler on remote hosts or databases gives the DBA the tools to manage many more machines. The
Oracle Scheduler Agent provides the ability to run a job against remote databases or on hosts without
a database.
The agent must register with one or more databases that are acting as the Scheduler source. The
Scheduler source database must have the XMLDB features installed. The Scheduler must be
configured to communicate with the agent. A port must be allocated and it must be unused. A
password must be created for the agent to register.
The DBMS_SCHEDULER.SET_ATTRIBUTE procedure enables you to specify the destination
host or database by providing the host:port of the Scheduler Agent.

Oracle Database 11g: New Features for Administrators D - 4


Configuring the Source Database

1. Confirm that the XMLDB is installed and get the HTTP


port:

SQL> DESC RESOURCE_VIEW


SQL> SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL;

2. Run the prvtsch.plb script:


SQL> @$ORACLE_HOME/rdbms/admin/prvtsch.plb

3. Set an agent registration password:

SQL> EXEC DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS( -


> registration_password => 'my_password')

D-5 Copyright 2007, Oracle. All rights reserved.

Configuring the Source Database


Before a database can be used as a source of remote jobs, it must have the following configuration
steps completed.
1. Confirm that the XMLDB is installed. If the XMLDB is installed, the RESOURCE_VIEW view
will exist. If XMLDB is not installed, use Oracle Universal Installer to install it.
SQL> DESC RESOURCE_VIEW
Find the HTTP port that is configured for the XML Database:
SQL> SELECT DBMS_XDB.GETHTTPPORT() FROM DUAL;
2. Run the prvtsch.plb script on the source database. Connect as the SYS user. The
prvtsch.plb script will be in the $ORACLE_HOME/rdbms/admin directory.
SQL> CONNECT / AS SYSDBA
SQL> @$ORACLE_HOME/rdbms/admin/prvtsch.plb
3. Set a registration password for the agent registration. You can limit the lifetime of this password
and the number of registrations that use this password. The user who sets the password must
have the MANAGE SCHEDULER privilege. The following example allows the password to be
used 10 times in the next 24 hours. Oracle recommends that the password be limited to a short
period of time.
SQL> EXEC DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS ( -
> registration_password => 'my_password',-
> expiration_date => SYSTIMESTAMP + INTERVAL '1' DAY,-
> max_uses => 10 )
Oracle Database 11g: New Features for Administrators D - 5
Installing the Agent

D-6 Copyright 2007, Oracle. All rights reserved.

Installing the Agent


The agent is a separately installable component that may be installed from the Oracle Transparent
Gateway media. During installation of the agent, an additional step is necessary to register with the
source database and to start the agent in the background.
During installation, the agent should be registered with at least one database. It is possible to
automate this registration if the user is willing to include the database registration password in the
installer file. This enables silent automated installs.
Optional information includes:
Path to install the agent into
Whether to automatically start the agent
Whether to set up the agent to automatically start on every computer startup
If, after installation of the agent, another database is required to run jobs on the agent, the agent must
be registered with that database.

Oracle Database 11g: New Features for Administrators D - 6


Registering the Scheduler Agent

On the remote machine:


Review the schagent.conf file.
Execute the command to register the agent.
$ schagent -registerdatabase database_host
database_xmldb_http_port

Start the Scheduler Agent


On Unix and Linux
$ schagent -start

On Windows (install and start service)


C:\> schagent -installagentservice

D-7 Copyright 2007, Oracle. All rights reserved.

Registering the Scheduler Agent


After installation, the Scheduler Agent must be registered with one or more source databases. The
source database is where the remote jobs will be created and where the job status information will be
sent. The Scheduler Agent uses a specified port to communicate with the source database. This is the
port used by the XMLDB HTTP listener.
In the command to register the database, provide the host name of the machine where the source
database (the scheduler source) is running and the HTTP port that you found with the SELECT
DBMS_XDB.GETHTTPPORT() FROM DUAL; command.
schagent -registerdatabase database_host database_xmldb_http_port
On Unix or Linux, start the Scheduler Agent:
schagent -start
On Windows, install and start the service:
schagent -installagentservice

Oracle Database 11g: New Features for Administrators D - 7


Scheduler APIs to
Support Remote Jobs

New DBMS_SCHEDULER procedures


CREATE_CREDENTIAL
DROP_CREDENTIAL
SET_AGENT_REGISTRATION_PASS
GET_FILE
PUT_FILE
Modified DBMS_SCHEDULER procedures
STOP_JOB
RUN_JOB

D-8 Copyright 2007, Oracle. All rights reserved.

Scheduler APIs to Support Remote Jobs


These procedures are part of the DBMS_SCHEDULER package. The procedures in the slide are new
or modified to support the remote job features.
CREATE_CREDENTIAL and DROP_CREDENTIAL: These procedures are used to create or drop a
stored username/password pair called a credential. Credentials reside in a particular schema and can
be created by any user with the CREATE JOB system privilege. To drop a public credential, the SYS
schema must be explicitly given. Only a user with the MANAGE SCHEDULER system privilege is
able to drop a public credential. For a regular credential, only the owner of the credential or a user
with the CREATE ANY JOB system privilege is able to drop the credential.
SET_AGENT_REGISTRATION_PASS: This procedure is used to set the agent registration
password for a database. Remote agents must register with the database before the database can
submit jobs to the agent. To prevent abuse, this password can be set to expire after a given date or
after a maximum number of successful registrations. This procedure overwrites any password already
set. Setting the password to NULL prevents any agent registrations. This requires the MANAGE
SCHEDULER system privilege. By default, max_uses is set to 1, which means that this password
can be used for only a single agent registration. Oracle recommends that an agent registration
password be reset after every agent registration or after every known set of agent registrations.
Oracle further recommends that this password be set to NULL if no new agents are being registered.

Oracle Database 11g: New Features for Administrators D - 8


Scheduler APIs to Support Remote Jobs (continued)
GET_FILE and PUT_FILE: These procedures retrieve a file from a particular host or save a file to
a particular host. They differ from the equivalent UTL_FILE procedure in that they use a specified
credential and can retrieve files from remote hosts that have an execution agent installed. The caller
must have the CREATE EXTERNAL JOB system privilege and have EXECUTE privileges on the
credential.
STOP_JOB and RUN_JOB: These procedures have been modified to stop or run remote jobs.
For more information about the Scheduler APIs, see the Oracle Database PL/SQL Packages and
Types Reference.
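A hedged end-to-end sketch of creating a remote external job with these APIs (the credential values,
job name, command, and agent host:port are illustrative):
BEGIN
  -- Store an OS credential for the remote host
  DBMS_SCHEDULER.CREATE_CREDENTIAL(
    credential_name => 'host_cred',
    username        => 'oracle',
    password        => 'secret');

  -- Create the job disabled, then point it at the remote agent
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'remote_ls_job',
    job_type   => 'EXECUTABLE',
    job_action => '/bin/ls',
    enabled    => FALSE);

  DBMS_SCHEDULER.SET_ATTRIBUTE('remote_ls_job', 'credential_name', 'host_cred');
  DBMS_SCHEDULER.SET_ATTRIBUTE('remote_ls_job', 'destination', 'remotehost.example.com:1500');

  DBMS_SCHEDULER.ENABLE('remote_ls_job');
END;
/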

Oracle Database 11g: New Features for Administrators D - 9


Dictionary Views for Remote Jobs

New views
*_SCHEDULER_CREDENTIALS
*_SCHEDULER_REMOTE_JOBSTATE
Modified views to support remote jobs
*_SCHEDULER_JOBS
*_SCHEDULER_JOB_RUN_DETAILS
Job_subname

D - 10 Copyright 2007, Oracle. All rights reserved.

Dictionary Views for Remote Jobs


The following dictionary views have been added:
*_SCHEDULER_CREDENTIALS: Lists all regular credentials in the current user's schema
The *_SCHEDULER_JOBS views have been modified with new columns to support remote jobs:
source: Global database ID of the scheduler source database
destination: Global database ID of the destination database for remote database jobs
credential_name: Name of the credential to be used for an external job
credential_owner: Owner of the credential to be used for an external job
deferred_drop: Indicates whether the job will be dropped when completed following user
request (TRUE) or not (FALSE)
instance_id: Instance name of the preferred instance to run the job (for jobs running on a
RAC database)

Oracle Database 11g: New Features for Administrators D - 10


Dictionary Views for Remote Jobs (continued)
input VARCHAR2(4000): String to be provided as a standard input to an external job
environment_variables VARCHAR2(4000): Semicolon-separated list of name-value
pairs to be set as environment variables for an external job
login_script VARCHAR2(1000): Path to and name of the script to be executed prior to
an external job
*_SCHEDULER_REMOTE_JOBSTATE: Views added to show the state of enabled remote and
distributed jobs on each of the destinations
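A minimal sketch that queries the new columns described above to list jobs targeted at a remote
destination:
SELECT job_name, destination, credential_name
FROM   user_scheduler_jobs
WHERE  destination IS NOT NULL;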

Oracle Database 11g: New Features for Administrators D - 11


Summary

In this lesson, you should have learned how to:


Configure remote jobs
Create remote jobs

D - 12 Copyright 2007, Oracle. All rights reserved.

Oracle Database 11g: New Features for Administrators D - 12
