D50081GC10
Edition 1.0
July 2007
D51901

Authors
James Spiller
Kesavan Srinivasan
Jenny Tsai
Jean-Francois Verrier
James Womack

Technical Contributors and Reviewers
Maqsood Alam
Kalyan Bitra
Harald Van Breederode
Edward Choi
Al Flournoy
Andy Fortunak
Gerlinde Frenzen
Greg Gagnon
Joel Goodman
Hansen Han
Uwe Hesse
Sunil Hingorani
Magnus Isaksson
Susan Jang
Martin Jensen
Pete Jones
Yash Kapani
Pierre Labrousse
Richard W. Lewis
Hakan Lindfors
Russ Lowenthal
Kurt Lysy
Silvia Marrone
Heejin Park
Jagannath Poosarla
Eric Siglin
Ranbir Singh
Jeff Skochil
George Spears
Birgitte Taagholt
Glenn Tripp
Anthony Woodell

Editors
Raj Kumar
Daniel Milne
Vijayalakshmi Narasimhan
Atanu Raychaudhuri
Richard Wallis

Graphic Designers
Rajiv Chandrabhanu
Samir Mozumdar

Publishers
Sujatha Nagendra
Srividya Rameshkumar
Michael Sebastian
Jobi Varghese

Copyright 2007, Oracle. All rights reserved.

This course in any form, including its course labs and printed matter, contains proprietary information that is the exclusive property of Oracle. This course and the information contained herein may not be disclosed, copied, reproduced, or distributed to anyone outside Oracle without prior written consent of Oracle. This course and its contents are not part of your license agreement, nor can they be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.

This course is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remain at the sole discretion of Oracle.

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS
The U.S. Government's rights to use, modify, reproduce, release, perform, display, or disclose these training materials are restricted by the terms of the applicable Oracle license agreement and/or the applicable U.S. Government contract.

Trademark Notice

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Using Flashback and LogMiner
[Slide diagram: Flashback Data Archive architecture. (1) DML operations change the original data in the buffer cache, generating undo. (2) The Flashback Data Archiver process (FBDA) reads the old values from the undo in the buffer cache (the DML changes used by FBDA). (3) FBDA writes this history to history or archive tables in flashback data archives, which provide compressed storage with automatic digital shredding.]
To allow a specific user to use a specific flashback data archive, grant the FLASHBACK ARCHIVE
object privilege on that flashback data archive to the archive user. The archive user can then enable
flashback archiving on tables by using that specific flashback data archive.
Example executed as archive administrator:
GRANT FLASHBACK ARCHIVE ON FLA1 TO HR;
Most likely, your users will use other Flashback functionality. To allow access to specific objects
during queries, grant the FLASHBACK and SELECT privileges on all objects involved in the query.
If your users need access to the DBMS_FLASHBACK package, then you need to grant them the
EXECUTE privilege on this package. Users can then use the DBMS_FLASHBACK.ENABLE_AT_TIME and
DBMS_FLASHBACK.DISABLE procedures to enable and disable session-level Flashback.
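For example, the grants for Flashback Query against another schema's table might look like this (the target table OE.ORDERS is illustrative):

```sql
-- Executed as the object owner or a DBA; OE.ORDERS is a hypothetical table.
GRANT FLASHBACK, SELECT ON oe.orders TO hr;

-- If HR also calls the DBMS_FLASHBACK package directly:
GRANT EXECUTE ON dbms_flashback TO hr;
```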
Oracle Database 11g: New Features for Administrators 11 - 8
Preparing Your Database
Configuring undo:
- Creating an undo tablespace (default: automatically extensible tablespace)
- Enabling Automatic Undo Management (11g default)
Understanding automatic tuning of undo:
- Fixed-size tablespace: automatic tuning for best retention
- Automatically extensible undo tablespace: automatic tuning for the longest-running query
Recommendation for Flashback: fixed-size undo tablespace
Automatic Undo Management is now enabled by default. If needed, enable Automatic Undo
Management as explained in the Oracle Database Administrator's Guide.
An automatically extensible undo tablespace is created upon database installation.
For a fixed-size undo tablespace, the Oracle database automatically tunes the system to give the
undo tablespace the best possible undo retention.
For an automatically extensible undo tablespace (the default), the Oracle database retains undo data
to satisfy, at a minimum, the retention period needed by the longest-running query and the
undo retention threshold specified by the UNDO_RETENTION parameter.
Automatic tuning of undo retention generally achieves better results with a fixed-size undo
tablespace. If you want to change the undo tablespace to fixed size for this or other reasons, the
Undo Advisor can help you determine the proper fixed size to allocate.
If you are uncertain about your space requirements and you do not have access to the Undo Advisor,
follow these steps:
1. You can start with an automatically extensible undo tablespace.
2. Observe it through one business cycle (for example, this could be 1 or 2 days, or longer).
Preparing Your Database (continued)
3. Collect undo block information with the V$UNDOSTAT view, calculate your space
requirements, and use them to create an appropriately sized fixed undo tablespace. (The
calculation formula is given in the Oracle Database Administrator's Guide.)
4. You can query V$UNDOSTAT.TUNED_UNDORETENTION to determine the amount of time for
which undo is retained for the current undo tablespace. Setting the UNDO_RETENTION
parameter does not guarantee that unexpired undo data is not overwritten. If the system needs
more space, the Oracle database can overwrite unexpired undo with more recently generated
undo data.
- Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that
unexpired undo data is not discarded.
- To satisfy long-retention requirements that exceed the undo retention, create a flashback
data archive.
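The query and guarantee steps above can be sketched in SQL (the tablespace name is illustrative):

```sql
-- Step 4: how long is undo currently being retained (in seconds)?
SELECT MAX(tuned_undoretention) AS tuned_retention_secs
FROM   v$undostat;

-- Ensure unexpired undo data is never overwritten (illustrative name):
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```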
1. Adding space:
ALTER FLASHBACK ARCHIVE fla1
ADD TABLESPACE tbs3 QUOTA 5G;
2. Changing retention time:
ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;
3. Purging data:
ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE
TIMESTAMP(SYSTIMESTAMP - INTERVAL '1' day);
4. Recovering data:
INSERT INTO employees
SELECT * FROM employees AS OF TIMESTAMP
TO_TIMESTAMP('2007-06-12 11:30:00','YYYY-MM-DD HH24:MI:SS')
WHERE name = 'JOE';
Flashback Transaction
You can use the Flashback Transaction functionality from within Enterprise Manager or with
PL/SQL packages.
DBMS_FLASHBACK.TRANSACTION_BACKOUT
Prerequisites
To use this functionality, supplemental logging must be enabled and the correct privileges
established. For example, the HR user decides to use Flashback Transaction for the
REGIONS table in the HR schema. The SYSDBA ensures that the database is in ARCHIVELOG mode and
performs the following setup steps in SQL*Plus:
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
The HR user needs to either own the tables (as is the case in the preceding example) or have the
SELECT, UPDATE, DELETE, and INSERT privileges on them, to allow execution of the compensating undo
SQL code.
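With those privileges in place, a transaction can be backed out by its XID through the PL/SQL interface. A sketch (the XID placeholder would normally come from FLASHBACK_TRANSACTION_QUERY; it is not filled in here):

```sql
DECLARE
  xids sys.xid_array;
BEGIN
  -- Supply the transaction identifier(s) to back out.
  xids := sys.xid_array('<transaction_id>');
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.NOCASCADE);  -- fail if dependent transactions exist
END;
/
```

The backout is executed but not committed; you review the compensating changes and then COMMIT or ROLLBACK.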
Possible Workflow
Assume that several transactions occurred as indicated below:
connect hr/hr
INSERT INTO hr.regions VALUES (5,'Pole');
COMMIT;
UPDATE hr.regions SET region_name='Poles' WHERE region_id = 5;
UPDATE hr.regions SET region_name='North and South Poles' WHERE region_id
= 5;
COMMIT;
INSERT INTO hr.countries VALUES ('TT','Test Country',5);
COMMIT;
connect sys/<password> as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
Viewing Data
To view the data in a table in Enterprise Manager, select Schema > Tables.
While viewing the content of the HR.REGIONS table, you discover a logical problem. Region 20 is
misnamed. You decide to immediately address this issue.
Finishing Up
On the Flashback Transaction: Review page, click the Show Undo SQL Script button to view the
compensating SQL commands. Click Finish to commit your compensating transaction.
[Slide screenshot (26-JUN-07): the compensating transaction report is displayed as an XML document (COMP_XID_REPORT), keyed by the XID of the backed-out transaction.]
Using LogMiner
What you already know: LogMiner is a powerful audit tool for Oracle databases that allows you
to easily locate changes in the database, enabling sophisticated data analyses and providing undo
capabilities to roll back logical data corruptions or user errors. LogMiner directly accesses the Oracle
redo logs, which are complete records of all activities performed on the database, and the associated
data dictionary. The tool offers two interfaces: a SQL command line and a GUI.
What is new: Enterprise Manager Database Control now has an interface for LogMiner. In prior
releases, administrators were required to install and use the stand-alone Java Console for LogMiner.
With this new interface, administrators have a task-based, intuitive approach to using LogMiner. This
improves the manageability of LogMiner. In Enterprise Manager, select Availability > View and
Manage Transactions.
LogMiner supports the following activities:
Specifying query parameters
Stopping the query and showing partial results, if the query takes a long time
Partial querying, then showing the estimated complete query time
Saving the query result
Re-mining or refining the query based on initial results
Showing transaction details, dependencies, and compensating undo SQL script
Flashing back and committing the transaction
For more details see the High-Availability eStudy and documentation.
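From the SQL command line, a minimal LogMiner session looks like this (the log file path and queried segment name are illustrative):

```sql
BEGIN
  -- Register a redo log file to mine (path is illustrative).
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/app/oracle/oradata/orcl/redo01.log',
    options     => DBMS_LOGMNR.NEW);
  -- Use the online data dictionary to translate object names.
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Inspect the mined changes, including the compensating undo SQL.
SELECT username, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_name = 'REGIONS';

EXECUTE DBMS_LOGMNR.END_LOGMNR
```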
Summary

[Slide: change assurance, automatic health checks, intelligent resolution, and proactive patching, organized as a diagnostic workflow from prevention to resolution.]

[Slide diagram: Support Workbench workflow. (1) First-failure capture with automatic incident creation alerts the DBA. (2) Targeted health checks run. If the problem is not a known bug, the workflow continues with (3) assisted SR filing and (4) packaging of incident information through EM Support Workbench, ending with the DBA applying a patch or performing a data repair.]
[Slide diagram: ADR directory structure. Under the ADR base: diag/rdbms/<DB name>/<SID> (the ADR home), containing among others the ADR metadata, alert (log.xml), trace (text alert_SID.log and trace files), and incident (incdir_1 ... incdir_n) directories. The structure is accessed with the ADRCI utility and exposed through V$DIAG_INFO.]
NAME VALUE
------------------- ---------------------------------------------------------------
Diag Enabled TRUE
ADR Base /u01/app/oracle
ADR Home /u01/app/oracle/diag/rdbms/orcl/orcl
Diag Trace /u01/app/oracle/diag/rdbms/orcl/orcl/trace
Diag Alert /u01/app/oracle/diag/rdbms/orcl/orcl/alert
Diag Incident /u01/app/oracle/diag/rdbms/orcl/orcl/incident
Diag Cdump /u01/app/oracle/diag/rdbms/orcl/orcl/cdump
Health Monitor /u01/app/oracle/diag/rdbms/orcl/orcl/hm
Default Trace File /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_11424.trc
Active Problem Count 3
Active Incident Count 8
V$DIAG_INFO
The V$DIAG_INFO view lists all important ADR locations:
ADR Base: Path of ADR base
ADR Home: Path of ADR home for the current database instance
Diag Trace: Location of the text alert log and background/foreground process trace files
Diag Alert: Location of an XML version of the alert log
Default Trace File: Path to the trace file for your session. SQL Trace files are written here.
[Slide diagram: incident lifecycle in ADR. An incident, identified by its incident ID and grouped under a problem key, moves through the statuses Collecting, Ready, Tracking, Data-Purged, and Closed. Status transitions are mostly automatic, although the DBA can make some manually. Flood control limits the traces dumped for recurring incidents, non-critical errors are tracked the same way, and MMON auto-purges aged incident data from ADR.]
[Slide diagram: an incident package, assembled from ADR metadata (per DB name, SID, and home), is the unit sent to Oracle Support. By default, only the first three and last three incidents of each problem are included in the incident package.]
Incident Packages
To upload diagnostic data to Oracle Support Services, you first collect the data in an incident
package. When you create an incident package, you select one or more problems to add to the
incident package. The Support Workbench then automatically adds to the incident package the
incident information, trace files, and dump files associated with the selected problems. Because a
problem can have many incidents (many occurrences of the same problem), by default only the first
three and last three incidents for each problem are added to the incident package. You can change
this default number on the Incident Packaging Configuration page accessible from the Support
Workbench page.
After the incident package is created, you can add any type of external file to the incident package,
remove selected files from the incident package, or edit selected files in the incident package to
remove sensitive data.
An incident package is a logical construct only, until you create a physical file from the incident
package contents. That is, an incident package starts out as a collection of metadata in ADR. As you
add and remove incident package contents, only the metadata is modified. When you are ready to
upload the data to Oracle Support Services, you invoke either a Support Workbench or an ADRCI
function that gathers all the files referenced by the metadata, places them into a zip file, and then
uploads the zip to MetaLink.
[Slide: Support Workbench roadmap. 1. View critical error alerts in Enterprise Manager. ... 3. Gather additional diagnostic information. ... 6. Track the SR and implement repairs.]
[Slide diagram: ADRCI IPS commands for manipulating a package. A package is built from a problem or problem key, and new incidents are picked up automatically; IPS ADD FILE and IPS COPY IN FILE bring files into the package, IPS COPY OUT FILE extracts a file, and IPS REMOVE FILE takes an incident's file out of the package.]
IPS COPY is essentially used to COPY OUT a file, edit it, and COPY IN the edited version back into ADR.
IPS FINALIZE is used to finalize a package for delivery, which means that other components,
such as the Health Monitor, are called to add their correlated files to the package. Recent trace files
and log files are also included in the package. If required, this step is run automatically when a
package is generated.
To generate the physical file, the IPS GENERATE PACKAGE command is used. The syntax is:
IPS GENERATE PACKAGE package_number IN path [COMPLETE | INCREMENTAL]
It generates a physical zip file for an existing logical package. The file name contains either COM for
complete or INC for incremental, followed by a sequence number that is incremented each time a zip
file is generated.
IPS SET CONFIGURATION is used to set IPS rules.
Note: Refer to the Oracle Database Utilities guide for more information about ADRCI.
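A packaging session in ADRCI might look like this (the problem number and path are illustrative, and the example assumes the new package is assigned number 1):

```
adrci> ips create package problem 3 correlate basic
adrci> ips add file /tmp/dba_notes.txt package 1
adrci> ips generate package 1 in /tmp complete
```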
[Slide diagram: the Health Monitor runs checks against the online database, either reactively (on error) or manually through EM or the DBMS_HM package. Available checks are listed in V$HM_CHECK and include: Logical Block Check, Undo Segment Check, Table Row Check, Data Block Check, Transaction Check, Table Check, Table-Index Row Mismatch, Table-Index Cross Check, and Database Dictionary Check. Reports are stored in the hm directory of ADR.]
DBMS_HM.GET_RUN_REPORT('DICOCHECK')
--------------------------------------------------------------------------------
Basic Run Information (Run Name,Run Id,Check Name,Mode,Status)
Input Parameters for the Run
TABLE_NAME=tab$
CHECK_MASK=ALL
Run Findings And Recommendations
Finding
Finding Name : Dictionary Inconsistency
Finding ID : 22
Type : FAILURE
Status : OPEN
Priority : CRITICAL
Message : SQL dictionary health check: invalid column number 8 on
object TAB$ failed
Message : Damaged rowid is AAAAACAABAAAS7PAAB - description: Object
SCOTT.TABJFV is referenced
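A report like the one above could be produced by a run such as this (the run name matches the report; the check name and input parameters are as described in the Health Monitor documentation and should be treated as illustrative):

```sql
BEGIN
  -- Run a dictionary integrity check and name the run DICOCHECK.
  DBMS_HM.RUN_CHECK(
    check_name   => 'Dictionary Integrity Check',
    run_name     => 'DICOCHECK',
    input_params => 'TABLE_NAME=tab$;CHECK_MASK=ALL');
END;
/

-- Retrieve the text report for the run:
SELECT DBMS_HM.GET_RUN_REPORT('DICOCHECK') FROM dual;
```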
[Slide diagram: SQL Repair Advisor workflow. A SQL statement fails and trace files are generated; the DBA runs the SQL Repair Advisor against the statement, a SQL patch is generated, the DBA accepts the SQL patch, and the statement then executes successfully again with the patched plan.]
declare
rep_out clob;
t_id varchar2(50);
begin
t_id := dbms_sqldiag.create_diagnosis_task(
sql_text => 'delete from t t1 where t1.a = ''a'' and rowid <> (select max(rowid)
from t t2 where t1.a= t2.a and t1.b = t2.b and t1.d=t2.d)',
task_name => 'sqldiag_bug_5869490',
problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR);
dbms_sqltune.set_tuning_task_parameter(t_id,'_SQLDIAG_FINDING_MODE',
dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS);
dbms_sqldiag.execute_diagnosis_task (t_id);
rep_out := dbms_sqldiag.report_diagnosis_task (t_id, DBMS_SQLDIAG.TYPE_TEXT);
dbms_output.put_line ('Report : ' || rep_out);
end;
/
Data Failures
Data failures are detected by checks, which are diagnostic procedures that assess the health of the
database or its components. Each check can diagnose one or more failures, which are mapped to a
repair.
Checks can be reactive or proactive. When an error occurs in the database, reactive checks are
automatically executed. You can also initiate proactive checks, for example, by executing the
VALIDATE DATABASE command.
In Enterprise Manager, select Availability > Perform Recovery, or click the Perform Recovery
button, if you find your database in a down or mounted state.
Advising on Repair
On the View and Manage Failures page, after you click the Advise button, the Data Recovery
Advisor generates a manual checklist. Two types of failures could appear:
- Failures that require human intervention: for example, a connectivity failure when a disk cable
is not plugged in.
- Failures that are repaired faster if you can undo a previous erroneous action: for example, if you
renamed a data file by mistake, it is faster to rename it back than to initiate an RMAN restoration
from backup.
You can initiate the following actions:
Click Re-assess Failures after you have performed a manual repair. Failures that are
resolved are implicitly closed; any remaining ones are displayed on the View and Manage
Failures page.
Click Continue with Advise to initiate an automated repair. When the Data Recovery Advisor
generates an automated repair option, it generates a script that shows you how RMAN plans to
repair the failure. Click Continue, if you want to execute the automated repair. If you do not
want the Data Recovery Advisor to automatically repair the failure, then you can use this script
as a starting point for your manual repair.
Executing Repairs
In the preceding example, the Data Recovery Advisor executes a successful repair in less than one
second.
Syntax:
LIST FAILURE
  [ ALL | CRITICAL | HIGH | LOW | CLOSED | failnum [, failnum]... ]
  [ EXCLUDE FAILURE failnum [, failnum]... ]
  [ DETAIL ]

Example:
RMAN> LIST FAILURE;
Advising on Repair
The RMAN ADVISE FAILURE command displays a recommended repair option for the specified
failures. If this command is executed from within Enterprise Manager, then Data Guard is presented
as a repair option. (This is not the case if the command is executed directly from the RMAN
command line.) The ADVISE FAILURE command prints a summary of the input failure. The
command implicitly closes all open failures that are already fixed.
The default behavior (when no option is used) is to advise on all the CRITICAL and HIGH priority
failures that are recorded in Automatic Diagnostic Repository (ADR). If a new failure has been
recorded in ADR since the last LIST FAILURE command, this command includes a WARNING
before advising on all CRITICAL and HIGH failures.
Two general repair options are implemented: no-data-loss and data-loss repairs.
When the Data Recovery Advisor generates an automated repair option, it generates a script that
shows you how RMAN plans to repair the failure. If you do not want the Data Recovery Advisor to
automatically repair the failure, then you can use this script as a starting point for your manual repair.
The operating system (OS) location of the script is printed at the end of the command output. You
can examine this script, customize it (if needed), and also execute it manually if, for example, your
audit trail requirements recommend such an action.
Syntax:
REPAIR FAILURE [PREVIEW] [NOPROMPT]

Example:
RMAN> REPAIR FAILURE;
Executing Repairs
This command should be used after an ADVISE FAILURE command in the same RMAN session.
By default (with no option), the command uses the single, recommended repair option of the last
ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE
command initiates an implicit ADVISE FAILURE command.
By default, you are asked to confirm the command execution, because you may be requesting
substantial changes that take time to complete. During the execution of a repair, the output of the
command indicates what phase of the repair is being executed.
After completing the repair, the command closes the failure.
You cannot run multiple concurrent repair sessions. However, concurrent REPAIR PREVIEW
sessions are allowed.
PREVIEW means: Do not execute the repair(s); instead, display the previously generated RMAN
script with all repair actions and comments.
NOPROMPT means: Do not ask for confirmation.
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2101176755.hm
Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2 OK 0 22892 66720 981662
File Name: /u01/app/oracle/oradata/orcl/sysaux01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 10529
Index 0 9465
Other 0 23834
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 24 640 963835
File Name: /u01/app/oracle/oradata/orcl/users01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 43
Index 0 63
Other 0 510
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 0 1732 12800 745885
File Name: /u01/app/oracle/oradata/orcl/example01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 4416
Index 0 1303
Other 0 5349
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================
File Type Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE OK 0 2
Control File OK 0 594
Finished validate at 21-DEC-06
RMAN>
[Slide: corruption-detection settings (EM > Server > Initialization Parameters). Detect I/O storage and disk corruption; detect non-persistent writes on a physical standby (new); specify defaults for corruption detection.]
By default:
- The default password profile is enabled
- An account is locked after 10 failed login attempts
In upgrade:
- Passwords are not case-sensitive until changed
- Passwords become case-sensitive when the ALTER USER command is used
On creation:
- Passwords are case-sensitive
Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on
read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The
SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the
encryption wallet must be open.
The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption
properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify
USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid
algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the
properties in the V$ENCRYPTED_TABLESPACES view.
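A sketch of the clause described above (the tablespace name, file path, and algorithm choice are illustrative):

```sql
CREATE TABLESPACE securespace
  DATAFILE '/u01/app/oracle/oradata/orcl/secure01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'     -- encryption properties
  DEFAULT STORAGE (ENCRYPT);    -- actually encrypt the tablespace

-- Verify the encryption properties:
SELECT ts.name, et.encryptionalg
FROM   v$tablespace ts
JOIN   v$encrypted_tablespaces et ON ts.ts# = et.ts#;
```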
The encrypted data is protected during operations such as JOIN and SORT. This means that the data
is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet.
Restrictions
- Temporary and undo tablespaces cannot be encrypted. (Blocks within them that belong to
encrypted tablespaces are nevertheless encrypted.)
- Bfiles and external tables are not encrypted.
- Transporting tablespaces across platforms of different endianness is not supported.
- The key of an encrypted tablespace cannot be changed at this time. A workaround is to create a
new tablespace with the desired properties and move all objects to it.
Hardware Security Module
[Slide diagram: encrypted data in the database, with the master key held in a hardware security module.]
BEGIN
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
acl => 'us-oracle-com-permissions.xml',
description => 'Permissions for oracle network',
principal => 'SCOTT',
is_grant => TRUE,
privilege => 'connect');
END;
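An ACL takes effect only once it is assigned to a host. A sketch of the follow-up call (the host pattern is illustrative):

```sql
BEGIN
  -- Assign the ACL created above to a host pattern (illustrative).
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 'us-oracle-com-permissions.xml',
    host => '*.us.oracle.com');
  COMMIT;
END;
/
```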
Oracle SecureFiles
Oracle Database 11g completely reengineers the LOB data type as Oracle SecureFiles, dramatically
improving the performance, manageability, and ease of application development. The new
implementation also offers advanced, next-generation functionality such as intelligent compression
and transparent encryption.
With SecureFiles, chunks vary in size from Oracle data block size up to 64 MB. The Oracle database
attempts to colocate data in physically adjacent locations on disk, thereby minimizing internal
fragmentation. By using variable chunk sizes, SecureFiles avoids versioning of large, unnecessary
blocks of LOB data.
SecureFiles also offers a new client/server network layer that allows high-speed data transfer
between the client and server, supporting significantly higher read and write performance. SecureFiles
automatically determines the most efficient way of generating redo and undo, eliminating user-
defined parameters: it decides whether to generate redo and undo for only the change, or to create a
new version by generating a full redo record.
SecureFiles is designed to be intelligent and self-adapting: it maintains different in-memory
statistics that help in efficient memory and space allocation. This results in easier manageability,
because there are fewer tunable parameters, which would be hard to tune under unpredictable workloads.
Altering the RETENTION with the ALTER TABLE statement affects the space created only after the
statement is executed.
For SecureFiles, you no longer need to specify CHUNK, PCTVERSION, FREEPOOLS, FREELISTS,
and FREELIST GROUPS. For compatibility with existing scripts, these clauses are parsed but not
interpreted.
Creating SecureFiles
You create SecureFiles with the storage keyword SECUREFILE in the CREATE TABLE statement
with a LOB column. The LOB implementation available in prior database versions is still supported
for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table,
you can specify whether it should be created as SecureFiles or BasicFiles. If you do not specify the
storage type, the LOB is created as BasicFiles to ensure backward compatibility.
In the first example in the slide, you create a table called FUNC_SPEC to store documents as
SecureFiles. Here you are specifying that you do not want duplicates stored for the LOB, that the
LOB should be cached when read, and that redo should not be generated when updates are performed
to the LOB. In addition, you are specifying that the documents stored in the doc column should be
encrypted using the AES128 encryption algorithm. KEEP_DUPLICATE is the opposite of
DEDUPLICATE, and can be used in an ALTER statement.
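The slide statement itself is not reproduced in these notes; a sketch of the first example as described in the text (table and column names from the text, storage options per the description) might look like this:

```sql
CREATE TABLE func_spec (
  id  NUMBER,
  doc CLOB)
  LOB(doc) STORE AS SECUREFILE (
    DEDUPLICATE               -- do not store duplicate LOB data
    CACHE READS               -- cache the LOB on reads only
    NOLOGGING                 -- no redo for LOB updates
    ENCRYPT USING 'AES128');  -- encrypt the stored documents
```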
In the third example in the slide, you create a table called DESIGN_SPEC that stores
documents as SecureFiles. For this table, you specify that duplicates may be stored, and that
the LOBs should be stored in compressed format and should be cached but not logged. The default
compression level is MEDIUM. The compression algorithm is implemented on the
server side, which allows random reads and writes to LOB data. These properties can also be
changed later via ALTER statements.
Altering SecureFiles
Using the DEDUPLICATE option, you can specify that LOB data that is identical in two or more
rows in a LOB column should share the same data blocks. The opposite of this is
KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs
with identical content into a single copy, reducing storage and simplifying storage management. The
LOB keyword is optional and is for syntactic clarity only.
The COMPRESS or NOCOMPRESS keywords enable or disable LOB compression, respectively. All
LOBs in the LOB segment are altered with the new compression setting.
The ENCRYPT or DECRYPT keyword turns on or off LOB encryption using Transparent Data
Encryption (TDE). All LOBs in the LOB segment are altered with the new setting. A LOB segment
can be altered only to enable or disable LOB encryption. That is, ALTER cannot be used to update
the encryption algorithm or the encryption key. The encryption algorithm or encryption key can be
updated using the ALTER TABLE REKEY syntax. Encryption is done at the block level allowing
for better performance (smallest encryption amount possible) when combined with other options.
Note: For a full description of the options available for the ALTER TABLE statement, see the
Oracle Database SQL Reference.
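A sketch of such ALTER statements (the table and column names follow the earlier hypothetical example):

```sql
-- Share identical LOB data across rows:
ALTER TABLE func_spec MODIFY LOB(doc) (DEDUPLICATE);

-- Enable compression for all LOBs in the segment:
ALTER TABLE func_spec MODIFY LOB(doc) (COMPRESS);

-- Change the encryption key/algorithm (not possible via MODIFY LOB):
ALTER TABLE func_spec REKEY USING 'AES192';
```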
[Slide: SecureFiles APIs. DBMS_LOB.GETOPTIONS and DBMS_LOB.SETOPTIONS read and set per-LOB settings, DBMS_LOB.GET_DEDUPLICATE_REGIONS reports deduplicated regions, and DBMS_SPACE.SPACE_USAGE is overloaded for SecureFiles segments.]
Migrating to SecureFiles
A superset of LOB interfaces allows easy migration from BasicFile LOBs. The two recommended
methods for migration to SecureFiles are partition exchange and online redefinition.
Partition Exchange
- Needs additional space equal to the largest of the partitions in the table
- Can maintain indexes during the exchange
- Can spread the workload out over several smaller maintenance windows
- Requires the table or partition to be offline to perform the exchange
Online Redefinition (recommended practice)
- No need to take the table or partition offline
- Can be done in parallel
- Requires additional storage equal to the entire table and all LOB segments to be available
- Requires that any global indexes be rebuilt
These solutions generally mean using twice the disk space used by the data in the input LOB column.
However, using partitioning and taking these actions on a partition-by-partition basis may help lower
the disk space required.
DECLARE
  error_count PLS_INTEGER := 0;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('scott', 'tab1', 'tab1_tmp', 'id id, c c');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('scott', 'tab1', 'tab1_tmp',
    1, TRUE, TRUE, TRUE, FALSE, error_count);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('scott', 'tab1', 'tab1_tmp');
END;
/
Foreground Statistics
New columns have been added to the V$SYSTEM_EVENT and the V$SYSTEM_WAIT_CLASS
views that allow you to easily identify events that are caused by foreground or background processes.
V$SYSTEM_EVENT has five new NUMBER columns that represent the statistics from purely
foreground sessions:
TOTAL_WAITS_FG
TOTAL_TIMEOUTS_FG
TIME_WAITED_FG
AVERAGE_WAIT_FG
TIME_WAITED_MICRO_FG
V$SYSTEM_WAIT_CLASS has two new NUMBER columns that represent the statistics from purely
foreground sessions:
TOTAL_WAITS_FG
TIME_WAITED_FG
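For example, to find the events with the most foreground wait time (a sketch):

```sql
SELECT event,
       total_waits_fg,
       time_waited_fg
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited_fg DESC;
```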
Locking Enhancements
You can limit the time that DDL commands wait for DML locks before failing by setting the
DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter is
set by default to 0, that is, NOWAIT, which ensures backward compatibility. The range of values
is 0 to 1,000,000 (in seconds).
The LOCK TABLE command has new syntax that you can use to specify the maximum number
of seconds the statement should wait to obtain a DML lock on the table. Use the WAIT clause to
indicate that the LOCK TABLE statement should wait up to the specified number of seconds to
acquire a DML lock. There is no limit on the value of the integer.
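Both mechanisms can be sketched as follows (the table name is illustrative):

```sql
-- Allow DDL statements in this session to wait up to 60 seconds
-- for the DML locks they need, instead of failing immediately
ALTER SESSION SET DDL_LOCK_TIMEOUT = 60;

-- Wait up to 10 seconds to acquire a DML lock on the table
LOCK TABLE scott.emp IN EXCLUSIVE MODE WAIT 10;
```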
In highly concurrent environments, the requirement of acquiring an exclusive lock (for
example, at the end of an online index creation and rebuild) could lead to a spike of waiting
DML operations and, therefore, a short drop and spike of system usage. While this is not an
overall problem for the database, this anomaly in system usage could trigger operating system
alarm levels. The commands listed in the slide no longer require exclusive locks.
[Slide graphic: a VISIBLE index and an INVISIBLE index on the same table; with
OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE, the optimizer ignores the invisible index.]
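A minimal sketch of working with invisible indexes (the index and table names are illustrative):

```sql
-- Create an index that the optimizer ignores by default
CREATE INDEX emp_ename_ix ON scott.emp (ename) INVISIBLE;

-- Toggle visibility of an existing index
ALTER INDEX emp_ename_ix VISIBLE;
ALTER INDEX emp_ename_ix INVISIBLE;

-- Let the current session consider invisible indexes
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;
```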
Note: For more information, refer to the PL/SQL Packages and Types Reference Guide.
Oracle Database 11g: New Features for Administrators 16 - 14
Viewing SQL Result Cache Dictionary Information
Note: To use this feature, your applications must be relinked with Release 11.1 or higher client
libraries and be connected to a Release 11.1 or higher server.
[Slide graphic: Adaptive Cursor Sharing selectivity cubes. In one scenario, a new bind set
(:1=C & :2=D with S(:1)=0.18, S(:2)=0.003) falls outside the existing selectivity cube
(:1=A & :2=B with S(:1)=0.15, S(:2)=0.0025), so a second selectivity cube is created and a
new plan is needed. In the other scenario, a hard parse for the bind set (:1=E & :2=F with
S(:1)=0.3, S(:2)=0.009) produces the same plan as an existing cube (:1=G & :2=H with
S(:1)=0.28, S(:2)=0.004), so the cubes are merged and no new plan is needed.]
CURSOR_SHARING:
If CURSOR_SHARING <> EXACT, then statements
containing literals may be rewritten using bind variables.
If statements are rewritten, Adaptive Cursor Sharing may
apply to them.
SQL Plan Management (SPM):
If OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to
TRUE, then only the first generated plan is used.
As a workaround, set this parameter to FALSE, and run
your application until all plans are loaded in the cursor
cache.
Manually load the cursor cache into the corresponding
plan baseline.
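The manual load step can be sketched with the DBMS_SPM package (the SQL_ID value is a placeholder for the statement you want to capture):

```sql
-- Load plans for one statement from the cursor cache
-- into its SQL plan baseline
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id => '&sql_id');  -- placeholder SQL_ID
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/
```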
DBA_TEMP_FREE_SPACE
This dictionary view reports temporary space usage information at the tablespace level. The
information is derived from various existing views.
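For example:

```sql
-- Temporary space usage per tablespace (sizes in bytes)
SELECT tablespace_name, tablespace_size, allocated_space, free_space
FROM   dba_temp_free_space;
```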
Note
catuppst.sql is the post-upgrade script that performs the remaining upgrade actions that do not
require the database to be open in UPGRADE mode. It can be run at the same time that utlrp.sql
is being run.
Best Practices: 1
Perform the planned tests on the current database and on the test database that you upgraded to
Oracle Database 11g, Release 1 (11.1). Compare the results and note anomalies. Repeat the test
upgrade as many times as necessary.
Test the newly upgraded test database with the existing applications to verify that they operate
properly with a new Oracle database. You might also want to test enhanced functions by adding the
available Oracle Database features. However, first make sure that the applications operate in the
same manner as they did in the current database.
Functional testing is a set of tests in which new and existing features and functions of the system are
tested after the upgrade. Functional testing includes all database, networking, and application
components. The objective of functional testing is to verify that each component of the system
functions as it did before upgrading and to verify that the new functions are working properly.
Create a test environment that does not interfere with the current production database.
Practice upgrading the database using the test environment. The best upgrade test, if possible, is
performed on an exact copy of the database to be upgraded, rather than on a downsized copy or test
data.
Do not upgrade the actual production database until after you successfully upgrade a test subset of
this database and test it with applications (as described in the next step).
The ultimate success of your upgrade depends heavily on the design and execution of an appropriate
backup strategy.
Best Practices: 2
Performance analysis
Gather performance metrics prior to upgrade:
Gather AWR or Statspack baselines during various
workloads.
Gather sample performance metrics after upgrade:
Compare metrics before and after upgrade to catch issues.
Upgrade production systems only after performance and
functional goals have been met.
Pre-upgrade analysis
You can run DBUA (without clicking Finish) or the utlu111i.sql script to get a
pre-upgrade analysis.
Read general and platform-specific release notes to catch
special cases.
Best Practices: 2
Performance testing of the new Oracle database compares the performance of various SQL
statements in the new Oracle database with those statements' performance in the current database.
Before upgrading, you should understand the performance profile of the application under the current
database. Specifically, you should understand the calls that the application makes to the database
server.
For example, if you are using Oracle Real Application Clusters and you want to measure the
performance gains realized from using cache fusion when you upgrade to Oracle Database 11g,
Release 1 (11.1), then make sure that you record your system's statistics before upgrading.
For that, you can use various V$ views or AWR/Statspack reports.
Best Practices: 3
If you are installing the 64-bit Oracle Database 11g, Release 1 (11.1) software but were previously
using a 32-bit Oracle Database installation, then the database is automatically converted to 64-bit
during a patch release or major release upgrade to Oracle Database 11g, Release 1 (11.1).
However, you must increase the initialization parameters affecting the system global area, such as
sga_target and shared_pool_size, to support the 64-bit operation.
Best Practices: 4
Oracle recommends the Optimal Flexible Architecture (OFA) standard for your Oracle Database
installations. The OFA standard is a set of configuration guidelines for efficient and reliable Oracle
databases that require little maintenance.
OFA provides the following benefits:
Organizes large amounts of complicated software and data on disk to avoid device bottlenecks
and poor performance
Facilitates routine administrative tasks, such as software and data backup functions, which are
often vulnerable to data corruption
Alleviates switching among multiple Oracle databases
Adequately manages and administers database growth
Helps to eliminate fragmentation of free space in the data dictionary, isolates other
fragmentation, and minimizes resource contention
If you are not currently using the OFA standard, switching to the OFA standard involves modifying
your directory structure and relocating your database files.
Best Practices: 5
When you upgrade to Oracle Database 11g, Release 1 (11.1), optimizer statistics are collected for
dictionary tables that lack statistics. This statistics collection can be time consuming for databases
with a large number of dictionary tables, but statistics gathering occurs only for those tables that lack
statistics or are significantly changed during the upgrade.
To decrease the amount of down time incurred when collecting statistics, you can collect statistics
prior to performing the actual database upgrade. As of Oracle Database 10g, Release 1 (10.1), Oracle
recommends that you use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather
dictionary statistics, in addition to gathering statistics for the database component schemas (SYS,
SYSMAN, XDB, and so on) with the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
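For example:

```sql
-- Gather dictionary statistics before the upgrade
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- Or gather statistics for an individual component schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
```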
Best Practices: 6
If you have enabled Oracle Database Vault, you must disable it before upgrading the database. Then
enable it again when the upgrade is finished.
By default:
Default password profile is enabled
Account is locked after 10 failed login attempts
In upgrade:
Passwords are non-case-sensitive until changed
Passwords become case-sensitive when the ALTER USER
command is used
On creation:
Passwords are case-sensitive
Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on
read. The data is not encrypted in memory. The only encryption penalty is associated with I/O. The
SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the
encryption wallet must be open.
The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption
properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify
USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid
algorithms are 3DES168, AES128, AES192, and AES256. The default is AES128. You can view the
properties in the V$ENCRYPTED_TABLESPACES view.
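For example (a sketch; the tablespace name and file path are illustrative):

```sql
-- Create an encrypted tablespace using AES with a 256-bit key
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/app/oracle/oradata/orcl/secure_ts01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- View encrypted tablespaces and their algorithms
SELECT t.name, e.encryptionalg
FROM   v$tablespace t, v$encrypted_tablespaces e
WHERE  t.ts# = e.ts#;
```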
The encrypted data is protected during operations such as JOIN and SORT. This means that the data
is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Encrypted tablespaces are transportable if the platforms have the same endianness and the same wallet.
Restrictions
Temporary and undo tablespaces cannot be encrypted. (Selected blocks are encrypted.)
Bfiles and external tables are not encrypted.
Transportable tablespaces across different endian platforms are not supported.
The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a new
tablespace with the desired properties and move all objects to it.
TDE and LogMiner
Encrypted data
1. Configure sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM)
(METHOD_DATA=
(DIRECTORY=/app/oracle/admin/SID1/wallet)))
Kerberos Enhancements
The Oracle client Kerberos implementation now makes use of secure encryption algorithms such as
3DES and AES in place of DES. This makes using Kerberos more secure. The Kerberos
authentication mechanism in the Oracle database now supports the following encryption types:
DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum)
RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum)
AES128-CTS (AES algorithm with 128-bit key in CTS mode with HMAC-SHA1 as checksum)
AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum)
The Kerberos implementation has been enhanced to interoperate smoothly with Microsoft and MIT
Key Distribution Centers.
The Kerberos principal name can now contain more than 30 characters. It is no longer restricted by
the number of characters allowed in a database username. If the Kerberos principal name is longer
than 30 characters, use:
CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS
'KerberosUser@SOMEORGANIZATION.COM';
Database users can be converted to Kerberos users, without requiring a new user to be created, using
the ALTER USER syntax:
ALTER USER DBUSER IDENTIFIED EXTERNALLY AS
'KerberosUser@SOMEORGANIZATION.COM';
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'us-oracle-com-permissions.xml',
    description => 'Permissions for oracle network',
    principal   => 'SCOTT',
    is_grant    => TRUE,
    privilege   => 'connect');
END;
/
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl        => 'us-oracle-com-permissions.xml',
    host       => '*.us.oracle.com',
    lower_port => 80,
    upper_port => NULL);
END;
/
[Slide graphic: a central database Scheduler dispatching remote jobs. Remote jobs can be external
(operating system-level) jobs running scripts, binaries, and so on, on hosts with no Oracle database
required, or database jobs on remote databases. The Scheduler Agent (SA) on each remote host
starts and executes the jobs; the central Scheduler manages them.]
Remote Jobs
The Oracle Scheduler can now create and run remote jobs. The ability to run a job from a centralized
scheduler on remote hosts or databases gives the DBA the tools to manage many more machines. The
Oracle Scheduler Agent provides the ability to run a job against remote databases or on hosts without
a database.
The agent must register with one or more databases that are acting as the Scheduler source. The
Scheduler source database must have the XMLDB features installed. The Scheduler must be
configured to communicate with the agent. A port must be allocated and it must be unused. A
password must be created for the agent to register.
The DBMS_SCHEDULER.SET_ATTRIBUTE procedure enables you to specify the destination
host or database by providing the host:port of the Scheduler agent.
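A minimal sketch of creating a remote external job (the credential, host, port, and command are illustrative):

```sql
BEGIN
  -- Credential used to run the job on the remote host
  DBMS_SCHEDULER.CREATE_CREDENTIAL(
    credential_name => 'rem_cred',
    username        => 'oracle',
    password        => 'secret');

  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'remote_ls',
    job_type   => 'EXECUTABLE',
    job_action => '/bin/ls',
    enabled    => FALSE);

  -- Point the job at the remote Scheduler agent (host:port)
  DBMS_SCHEDULER.SET_ATTRIBUTE('remote_ls', 'destination', 'rhost1:1500');
  DBMS_SCHEDULER.SET_ATTRIBUTE('remote_ls', 'credential_name', 'rem_cred');

  DBMS_SCHEDULER.ENABLE('remote_ls');
END;
/
```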
New views
*_SCHEDULER_CREDENTIALS
*_SCHEDULER_REMOTE_JOBSTATE
Modified views to support remote jobs
*_SCHEDULER_JOBS
*_SCHEDULER_JOB_RUN_DETAILS
New column: JOB_SUBNAME