In this Oracle Database 12c new features article series, I shall be extensively exploring some of the
very important new additions and enhancements introduced in the areas of Database Administration, RMAN,
High Availability and Performance Tuning.
Part I covers:
1. Online migration of an active data file
2. Online table partition or sub-partition migration
3. Invisible column
4. Multiple indexes on the same column
5. DDL logging
6. The ins and outs of temporary undo
7. New backup user privilege
8. How to execute SQL statement in RMAN
9. Table level recovery in RMAN
10. Restricting PGA size
Overwrite the data file with the same name, if it exists at the new location:
Copy the file to a new location whilst retaining the old copy in the old location:
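The commands for these two cases appear to have been lost from this copy; a minimal sketch of the 12c ALTER DATABASE MOVE DATAFILE syntax (the file paths are illustrative):

```sql
-- Overwrite a data file of the same name, if it exists at the new location:
SQL> ALTER DATABASE MOVE DATAFILE '/u01/data/users01.dbf'
     TO '/u02/data/users01.dbf' REUSE;

-- Copy the file to the new location while keeping the old copy:
SQL> ALTER DATABASE MOVE DATAFILE '/u01/data/users01.dbf'
     TO '/u02/data/users01.dbf' KEEP;
```

The move happens online: the data file remains available for reads and writes while it is being relocated.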
The first example moves a table partition or sub-partition to a new tablespace offline. The second
example moves a table partition or sub-partition online, maintaining any local/global indexes on the table.
Additionally, no DML operation is interrupted when the ONLINE clause is specified.
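A minimal sketch of both forms (the table, partition and tablespace names are illustrative):

```sql
-- Offline move; DML against the partition is blocked while it runs:
SQL> ALTER TABLE emp_part MOVE PARTITION p1 TABLESPACE users;

-- Online move, keeping local/global indexes usable and allowing concurrent DML:
SQL> ALTER TABLE emp_part MOVE PARTITION p1 TABLESPACE users
     ONLINE UPDATE INDEXES;
```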
Important notes:
o The UPDATE INDEXES clause prevents any local/global indexes on the table from becoming unusable.
o The restrictions that apply to online table migration apply here too.
o A locking mechanism is involved to complete the procedure; it might also lead to performance
degradation and can generate huge redo, depending upon the size of the partition or sub-partition.
3. Invisible columns
In Oracle 11g R1, Oracle introduced a couple of good enhancements in the form of invisible indexes and
virtual columns. Taking the legacy forward, the invisible column concept has been introduced in Oracle 12c
R1. I still remember that in previous releases, to hide important data columns from being displayed in
generic queries, we used to create a view hiding the required information or apply some sort of security
conditions.
In 12c R1, you can now have an invisible column in a table. When a column is defined as invisible, it won't
appear in generic queries or when you DESCRIBE the table definition, unless the column is explicitly
referred to in the SQL statement or condition. It is pretty easy to add or modify a column to be invisible
and vice versa:
SQL> CREATE TABLE emp (eno number(6), ename varchar2(40), sal number(9) INVISIBLE);
SQL> ALTER TABLE emp MODIFY (sal visible);
You must explicitly refer to the invisible column by name in the INSERT statement to insert data into
invisible columns. A virtual column or partition column can be defined as invisible too. However, temporary
tables, external tables and cluster tables don't support invisible columns.
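A short sketch using the emp table defined above (the values are illustrative):

```sql
-- The invisible column sal must be named explicitly, both in INSERTs and queries:
SQL> INSERT INTO emp (eno, ename, sal) VALUES (101, 'SCOTT', 9000);

SQL> SELECT eno, ename FROM emp;      -- sal is not returned by SELECT *
SQL> SELECT eno, ename, sal FROM emp; -- sal appears only when named explicitly
```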
4. Multiple indexes on the same column
Before Oracle 12c, you couldn't create multiple indexes on the same column or set of columns in any form.
For example, if you have an index on column {a} or columns {a,b}, you can't create another index on the
same column or set of columns in the same order. In 12c, you can have multiple indexes on the same
column or set of columns as long as the index type is different. However, only one type of index is
usable/visible at a given time. In order to test the invisible indexes, you need to set
optimizer_use_invisible_indexes=true.
Here's an example:
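A minimal sketch (index names are illustrative); the second index must be created INVISIBLE, since only one index on the column set can be visible at a time:

```sql
-- A visible b-tree index:
SQL> CREATE INDEX emp_eno_idx ON emp (eno);

-- A second, differently typed (bitmap) index on the same column, created invisible:
SQL> CREATE BITMAP INDEX emp_eno_bidx ON emp (eno) INVISIBLE;
```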
5. DDL logging
There was no direct option available to log DDL actions in the previous releases. In 12c R1, you can
now log DDL actions into xml and log files. This is very useful for finding out when a drop or create
command was executed and by whom. The ENABLE_DDL_LOGGING initialization parameter must be
configured in order to turn on this feature. The parameter can be set at the database or session level.
When this parameter is enabled, all DDL commands are logged in an xml and a log file under
the $ORACLE_BASE/diag/rdbms/DBNAME/log/ddl location. The xml file contains information such as the
DDL command, IP address, timestamp etc. This helps to identify when a user or a table was dropped or when a
DDL statement was triggered.
To enable DDL logging
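The feature is switched on through the initialization parameter mentioned above:

```sql
-- Database-wide:
SQL> ALTER SYSTEM SET enable_ddl_logging = TRUE;

-- Or for the current session only:
SQL> ALTER SESSION SET enable_ddl_logging = TRUE;
```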
The following DDL statements are likely to be recorded in the xml/log file:
o CREATE|ALTER|DROP|TRUNCATE TABLE
o DROP USER
o CREATE|ALTER|DROP PACKAGE|FUNCTION|VIEW|SYNONYM|SEQUENCE
6. Temporary Undo
Each Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO and
TEMP, each used for a different purpose within the Oracle database. Before Oracle 12c R1, undo
records generated by temporary tables used to be stored in the undo tablespace, much like the undo
records of general/persistent tables. With the temporary undo feature in 12c R1, however, temporary
undo records can now be stored in the temporary tablespace instead of the undo tablespace. The prime
benefits of temporary undo include reduced undo tablespace usage and less redo generation, as the
information won't be logged in the redo logs. You have the flexibility to enable the temporary undo option
either at the session level or at the database level.
Enabling temporary undo
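The feature is controlled by the TEMP_UNDO_ENABLED initialization parameter:

```sql
-- At the session level:
SQL> ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

-- Or database-wide:
SQL> ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;
```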
Important notes:
o Ensure sufficient free space is available under the /u01 filesystem for the auxiliary database and also to
keep the data pump file
o A full database backup must exist, or at least backups of the SYSTEM-related tablespaces
The following limitations/restrictions are applied on table/partition recovery in RMAN:
Important notes:
When the current PGA limit is exceeded, Oracle will automatically terminate/abort the session or process
that holds the most untunable PGA memory.
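The limit in question is the new PGA_AGGREGATE_LIMIT initialization parameter; a sketch (the 2G value is illustrative):

```sql
-- Cap total PGA usage for the instance at 2 GB:
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2G;

-- Setting it to 0 disables the hard limit:
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 0;
```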
In Part 2, you will learn more about new changes in the Clusterware, ASM, RMAN and database administration areas.
Oracle Database 12c New Features Part 2
During this Oracle Database 12c new features series, I shall be extensively exploring some of the
miscellaneous, yet very useful, new additions and enhancements introduced in the areas of Database
Administration, RMAN, Data Guard and Performance Tuning.
Part 2 covers:
1. Table partition maintenance enhancements
2. Database upgrade improvements
3. Restore/Recover data file over the network
4. Data Pump enhancements
5. Real-time ADDM
6. Concurrent statistics gathering
In Part I, I explained how to move a table partition or sub-partition to a different tablespace either offline or
online. In this section, you will learn other enhancements relating to table partitioning.
In the same way, you can add multiple new partitions to a list- or system-partitioned table, provided that
the MAXVALUE partition doesn't exist.
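A sketch of adding multiple partitions to a list-partitioned table in one statement (table and values are illustrative):

```sql
SQL> ALTER TABLE sales_part
     ADD PARTITION p_east VALUES ('NY','NJ'),
         PARTITION p_west VALUES ('CA','WA');
```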
How to drop and truncate multiple partitions/sub-partitions
As part of data maintenance, you typically use either a drop or a truncate partition maintenance task on a
partitioned table. Before 12c R1, it was only possible to drop or truncate one partition at a time on an existing
partitioned table. With Oracle 12c, multiple partitions or sub-partitions can be dropped or truncated using a
single ALTER TABLE table_name {DROP|TRUNCATE} PARTITIONS command.
The following example explains how to drop or truncate multiple partitions on an existing partitioned table:
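A minimal sketch, assuming an emp_part table with partitions p4 and p5:

```sql
SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5;
```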
To keep indexes up-to-date, use the UPDATE INDEXES or UPDATE GLOBAL INDEXES clause, shown
below:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4,p5 UPDATE GLOBAL INDEXES;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4,p5 UPDATE GLOBAL INDEXES;
If you truncate or drop a partition without the UPDATE GLOBAL INDEXES clause, you can query the
ORPHANED_ENTRIES column in the USER_INDEXES or USER_IND_PARTITIONS dictionary views to
find out whether the index contains any stale entries.
The new enhanced SPLIT PARTITION clause in 12c will let you split a particular partition or sub-partition
into multiple new partitions using a single command. The following example explains how to split a partition
into multiple new partitions:
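A sketch of a one-command split into several partitions (partition names and boundaries are illustrative):

```sql
SQL> ALTER TABLE emp_part SPLIT PARTITION p_max INTO
     (PARTITION p4 VALUES LESS THAN (40000),
      PARTITION p5 VALUES LESS THAN (50000),
      PARTITION p_max);
```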
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3,p4,p5 INTO PARTITION p_merge;
If the range falls in the sequence, you can use the following example:
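For partitions that form a consecutive range, the documentation also shows a TO form (a sketch; verify availability in your exact release):

```sql
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION p_merge;
```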
Pre-upgrade script
A new and much improved pre-upgrade information script, preupgrd.sql, replaces the
legacy utlu[121]s.sql script in 12c R1. Apart from the pre-upgrade check verification, the script can
address various issues, in the form of fixup scripts, that are raised during the pre- and post-upgrade
process.
The fixup scripts that are generated can be executed to resolve the problems at different levels, for
example, pre-upgrade and post upgrade. When upgrading the database manually, the script must be
executed manually before initiating the actual upgrade procedure. However, when the Database Upgrade
Assistant (DBUA) tool is used to perform a database upgrade, it automatically executes the pre-upgrade
scripts as part of the upgrade procedure and will prompt you to execute the fixup scripts in case of any
errors that are reported.
SQL> @$ORACLE_12GHOME/rdbms/admin/preupgrd.sql
The above script generates a log file and [pre/post]upgrade_fixup.sql scripts. All these files are located
under the $ORACLE_BASE/cfgtoollogs directory. Before you continue with the real upgrade procedure,
you should go through the recommendations mentioned in the log file and execute the scripts to fix any
issues.
Note: Ensure you copy the preupgrd.sql and utluppkg.sql scripts from the 12c Oracle home/rdbms/admin
directory to the current Oracle database/rdbms/admin location.
Parallel-upgrade utility
The database upgrade duration is directly proportional to the number of components that are configured on
the database, rather than the database size. In previous releases, there was no direct option or workaround
available to run the upgrade process in parallel to quickly complete the overall upgrade procedure.
The catctl.pl (parallel-upgrade utility), which replaces the legacy catupgrd.sql script in 12c R1, comes with
an option to run the upgrade procedure in parallel mode to reduce the overall time required to complete
the procedure.
The following procedure explains how to initiate the parallel (with 3 processes) upgrade utility; you need to
run this after you STARTUP the database in UPGRADE mode:
$ cd $ORACLE_12_HOME/perl/bin
$ ./perl catctl.pl -n 3 catupgrd.sql
The above two steps need to be run explicitly when a database is upgraded manually. However, the DBUA
inherits both new changes.
When a pretty long gap is found between the primary and standby database, you no longer require the
complex roll-forward procedure to fill it. RMAN is able to perform standby recovery by getting the
incremental backups through the network and applying them to the physical standby database. Likewise,
you can directly copy the required data files from the standby site to the primary site using the SERVICE
name, e.g. in the case of a data file or tablespace loss on the primary database, without actually restoring
the data files from a backup set.
The following procedure demonstrates how to perform a roll forward using the new features to synchronize
the standby database with its primary database:
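The roll-forward boils down to a single command run with RMAN connected to the standby as target (a sketch; primary_db_tns is a connect string you must define yourself):

```sql
RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns USING COMPRESSED BACKUPSET;
```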
The above example uses the primary_db_tns connect string defined on the standby database, connects to
the primary database, performs an incremental backup, transfers the incremental backups to the standby
destination, and then applies these files to the standby database to synchronize it. However, you need
to ensure you have configured primary_db_tns on the standby database side to point to the primary
database.
In the following example, I will demonstrate a scenario to restore a lost data file on the primary database by
fetching the data file from the standby database:
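A sketch of the sequence, run on the primary (the file number and the standby_db_tns service name are illustrative):

```sql
RMAN> ALTER DATABASE DATAFILE 4 OFFLINE;
RMAN> RESTORE DATAFILE 4 FROM SERVICE standby_db_tns;
RMAN> RECOVER DATAFILE 4;
RMAN> ALTER DATABASE DATAFILE 4 ONLINE;
```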
When you encounter an unresponsive database or a hung state, and if you have configured Oracle Enterprise
Manager 12c Cloud Control, you can diagnose serious performance issues. This gives you a good
picture of what's currently going on in the database, and might also provide a remedy to resolve the
issue.
The following step-by-step procedure demonstrates how to analyze the situation in Oracle EM 12c
Cloud Control:
o Select the Emergency Monitoring option from the Performance menu on the Database Home page.
This will show the top blocking sessions in the Hang Analysis table.
o Select the Real-Time ADDM option from the Performance menu to perform Real-Time ADDM analysis.
o After collecting the performance data, click on the Findings tab to get an interactive summary of all the
findings.
During this Oracle Database 12c new features article series, I shall be extensively exploring some of the
miscellaneous, yet very useful, new additions and enhancements introduced in the areas of Clusterware,
ASM and RAC database.
Part 3 covers:
1. Additions/Enhancements in ASM
2. Additions/Enhancements in Grid Infrastructure
3. Additions/Enhancements in Real Application Cluster (database)
Flex ASM
In a typical Grid Infrastructure installation, each node has its own ASM instance running, which acts as
the storage container for the databases running on that node. This setup carries a single-point-of-failure
threat: if the ASM instance on a node fails, all the databases and instances running on that node are
impacted. To avoid ASM instance single-point-of-failure, Oracle 12c provides the Flex ASM feature. Flex
ASM is a different concept and architecture altogether: only a small number of ASM instances need to run
on a group of servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware
automatically starts a surviving (replacement) ASM instance on a different node to maintain availability. In
addition, this setup also provides ASM instance load balancing capabilities for the instances running on the
nodes. Another advantage of Flex ASM is that it can be configured on a separate node.
When you choose the Flex Cluster option as part of the cluster installation, Flex ASM configuration is
automatically selected, as it is required by the Flex Cluster. You can also have a traditional cluster over Flex
ASM. When you decide to use Flex ASM, you must ensure the required networks are available. You can
choose the Flex ASM storage option as part of the cluster installation, or use ASMCA to enable Flex ASM
in a standard cluster environment.
$ ./asmcmd showclustermode
$ ./srvctl config asm
Or connect to the ASM instance and query the INSTANCE_TYPE parameter. If the output value
is ASMPROX, then Flex ASM is configured.
SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE;
You can adjust the POWER limit based on the output you get from the dynamic view to improve the
rebalancing operations.
Flex Clusters
Oracle 12c supports two types of cluster configuration at the time of Clusterware installation: Traditional
Standard Cluster and Flex Cluster. In a traditional standard cluster, all nodes in the cluster are tightly
integrated with each other, interact through a private network, and can access the storage directly. The
Flex Cluster, on the other hand, introduces two types of nodes arranged in a Hub and Leaf architecture.
The nodes in the Hub category are similar to those of a traditional standard cluster, i.e. they are
interconnected through a private network and have direct read/write access to the storage. The Leaf
nodes are different from the Hub nodes: they don't need direct access to the underlying storage; rather,
they access the storage/data through the Hub nodes.
You can configure up to 64 Hub nodes, while Leaf nodes can be many more. In an Oracle Flex Cluster, you
can have Hub nodes without Leaf nodes configured, but no Leaf nodes can exist without Hub nodes. You
can configure multiple Leaf nodes against a single Hub node. In an Oracle Flex Cluster, only the Hub nodes
have direct access to the OCR/voting disks. When you plan large-scale cluster environments, this is a great
feature to use. This sort of setup greatly reduces interconnect traffic and provides room to scale up the
cluster beyond a traditional standard cluster.
The following steps are required to convert a standard cluster mode to Flex Cluster mode:
1. Get the current status of the cluster using the following command:
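The command referred to in step 1 is likely the following, run as the Grid Infrastructure owner:

```shell
$ crsctl get cluster mode status
```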
With 12c, the OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup
files across all nodes. In case of an OCR restore, you don't need to worry about which node the latest OCR
backup is on. You can simply identify the latest backup stored in ASM from any node and perform the
restore easily.
The following example demonstrates how to set the ASM disk group as OCR backup location:
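A sketch using ocrconfig (run as root; the disk group name is illustrative):

```shell
# ocrconfig -backuploc +OCR_BKP
```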
IPv6 support
With Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configuration on the same network.
You can now configure public network (Public/VIP) either on IPv4, IPv6 or combination protocol
configuration. However, ensure you use the same set of IP protocol configuration across all nodes in a
cluster.
3. Additions/Enhancements in RAC (database)
Parts 1, 2 & 3 focused on the most useful improvements and enhancements in database
administration: Performance Tuning, RMAN, Data Guard, ASM and Clusterware. This part of the series will
mainly focus on some of the new features that are useful to developers.
Part 4 covers:
o How to truncate a master table while child tables contain data
o Limiting ROWS for Top-N query results
o Miscellaneous SQL*Plus enhancements
o Session level sequences
o WITH clause improvements
o Extended data types
An ORA-14705 error will be thrown if no ON DELETE CASCADE option is defined with the foreign keys of
the child tables.
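The new 12c capability here is TRUNCATE ... CASCADE; a sketch (table name is illustrative), which succeeds only when the child tables' foreign keys are defined with ON DELETE CASCADE:

```sql
SQL> TRUNCATE TABLE master_tab CASCADE;
```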
The following example limits the fetch to 10 per cent from the top salaries in the EMP table:
The following example offsets the first 5 rows and will display the next 5 rows from the table:
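Both examples use the new 12c row-limiting clause; a sketch against the emp table:

```sql
-- Top 10 per cent of salaries:
SQL> SELECT ename, sal FROM emp ORDER BY sal DESC
     FETCH FIRST 10 PERCENT ROWS ONLY;

-- Skip the first 5 rows, then return the next 5:
SQL> SELECT ename, sal FROM emp ORDER BY sal DESC
     OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
```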
All these limits can be very well used within the PL/SQL block too.
DECLARE
  TYPE sal_tab_t IS TABLE OF emp.sal%TYPE;
  sal_v sal_tab_t;
BEGIN
  SELECT sal BULK COLLECT INTO sal_v FROM emp
  FETCH FIRST 100 ROWS ONLY;
END;
/
When the procedure is executed, it returns the formatted rows in SQL*Plus.
Display invisible columns: In Part 1 of this series, I explained and demonstrated the invisible columns
new feature. When columns are defined as invisible, they won't be displayed when you describe the table
structure. However, you can display information about invisible columns by setting the following at the
SQL*Plus prompt:
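The setting in question is COLINVISIBLE:

```sql
SQL> SET COLINVISIBLE ON
SQL> DESC emp
```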
The above setting is only valid for the DESCRIBE command. It has no effect on the results of SELECT
statements on the invisible columns.
The CACHE, NOCACHE, ORDER or NOORDER clauses are ignored for SESSION level sequences.
WITH
PROCEDURE|FUNCTION test1 ()
BEGIN
<logic>
END;
SELECT <reference_your_function|procedure_here> FROM table_name;
/
Although you can't use the WITH clause directly in a PL/SQL unit, it can be referenced through dynamic
SQL within that PL/SQL unit.
Note: Once modified, you can't change the setting back to STANDARD.
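The setting in question is the MAX_STRING_SIZE initialization parameter; switching to extended (32k) VARCHAR2 support is roughly as follows, a sketch for a non-CDB (the change is one-way):

```sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> ALTER SYSTEM SET max_string_size = EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
```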
So I am back from OpenWorld and finally caught up on work. I plan to follow this post with several posts
about things I saw and/or learned at OOW this year, but first I thought I would cover the new 12c features
that were talked about.
Of course, every presentation had a caveat: nothing discussed was guaranteed to be in the final product, so
no business decisions should be made based on these discussions. Since 12c has not been announced yet, that is
still true. Anything you read on the internet might be false.
Having said that, some of the things coming are pretty cool. Here is my top 10 list. (I provide a link where I
can find a decent one.)
1. Pluggable Databases - Pluggable databases are a neat feature. Basically, you create a container database
(CDB) that contains all of the oracle level data and data dictionary. You then create pluggable databases
(PDB) that contain user data and the user portion of the data dictionary. Since the PDB files contain
everything about the user data, you can unplug a PDB from a CDB and plug it into a different CDB and be
up in seconds. All that needs to happen is a quick data dictionary update in the CDB.
2. Duplicate Indexes - Create duplicate indexes on the same set of columns. In 11.2 and below, if you try to
create an index using the same columns, in the same order, as an existing index, you get an error. In some
cases, you might want two different types of index on the same data (such as in a data warehouse where
you might want a bitmap index on the leading edge of a set of columns that exists in a btree index).
3. Implicit Result Sets - create a procedure, open a ref cursor, return the results. No types, no muss, no mess.
Streamlined data access (kind of a catch up to other databases).
4. PL/SQL Unit Security - A role can now be granted to a code unit. That means you can determine, at a very
fine grain, who can access a specific unit of code.
5. MapReduce in the Database - MapReduce can be run from PL/SQL directly in the database. I don't have
much more info than that.
6. Interval-Ref Partitions - Can now create a ref partition (to relate several tables with the same partitions) as a
sub-partition to the interval type. Ease of use feature.
7. SQL WITH Clause Enhancement - I want to see some examples of this one. In 12c, you can declare
PL/SQL functions in the WITH Clause of a select statement.
8. Catch up with MySQL - Some catch-up features: IDENTITY columns (auto-sequence on a PK), and you can
now use a sequence as a DEFAULT column value (there's another that I cannot remember right now).
9. 32k VARCHAR2 Support - Yes, 32k varchar2 in the database. Stored like a CLOB.
10. Yeah - Booleans in SQL (sort of) - You can use boolean values in dynamic PL/SQL. Still no booleans as
database types.
That's about it for my top 10. There was a lot more info at OOW and, like I said above, I plan to blog some
more on these topics.
If anyone finds a good link with more information about these topics, please leave a comment. I'll update the
post. I would love to see some real examples of the PL/SQL improvements.
In my test environment, running Oracle Linux 5 update 7, I quickly installed Grid Infrastructure, configured
my ASM storage and installed the Database Software, and followed this by creating a new database. I did all
of this using two separate users, grid user owning the Grid Infrastructure home and oracle as the
Database software owner. All was done in less than an hour and I was impressed with the new installer and
pleased to see that it is kind of similar to the 11g installer. I found it easy to use and all the steps I performed
just worked, apart from the pre-requisite check stating I do not have enough swap space, which did not
bother me too much in my test lab and I ignored and completed the rest of the steps.
I noticed there are already a few installation guides and detailed steps posted on the web, and if you are
looking for more details on the installation process, I do recommend that you have a look at these
excellent guides from Tim Hall and Yury Velikanov. And, as always, make sure you review the Oracle
installation guides and ensure you follow all the required pre-requisite steps.
Over the next few weeks I, along with many other DBAs, will be testing out the new 12c Database and all its
new features. But I would like to share my initial testing of three new features introduced in RMAN which
is one of my favorite utilities:
One of the new features for RMAN introduced in 12c is the ability to run SQL commands without the SQL
keyword. I even found SQL code block execution worked, which surprised me a little. Below is a basic
example:
oracle@dbvlin603[/home/oracle]: rman

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 3 17:37:57 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target /

connected to target database: TESTDB (DBID=2602403303)
using target database control file instead of recovery catalog

RMAN> create table test (id number);
Statement processed

RMAN> select * from test;
no rows selected

RMAN> insert into test values (1);
Statement processed

RMAN> select * from test;
        ID
----------
         1

RMAN> begin
2> for c1 in 1..20 loop
3> insert into test values (c1);
4> end loop;
5> end;
6> /
Statement processed

RMAN> select count(1) from test;
  COUNT(1)
----------
        21

RMAN> rollback;
Statement processed

RMAN> select * from test;
no rows selected

RMAN> drop table test purge;
Statement processed

RMAN> select * from test;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 07/03/2013 19:07:24
ORA-00942: table or view does not exist

RMAN>
As you can see above, using SQL in RMAN can be useful and will open up many possibilities.
2. Refresh a single datafile on the primary from the standby (or standby from primary)
The second option, which I think is an excellent new feature, makes restoring specific datafiles from a
standby database easy. By using the new FROM SERVICE clause in the RESTORE DATAFILE
command, in effect your standby database is your backup and the restore is done via the network. This
method can also make use of the SECTION SIZE clause as well as encryption and compressed backup sets.
Below is an example I ran using 12c Standard Edition. My primary and standby database is called testdb
and I am using a service name called testdbdr which is pointing to my standby database. In this example I
am restoring datafile 6 from the standby database.
oracle@dbvlin603[/home/oracle]: rman

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Jul 3 23:41:44 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target /

connected to target database: TESTDB (DBID=2602403303)

RMAN> select file#, name from v$datafile;

using target database control file instead of recovery catalog
FILE# NAME
1     +DATA/TESTDB/DATAFILE/system.258.819075077
3     +DATA/TESTDB/DATAFILE/sysaux.257.819075011
4     +DATA/TESTDB/DATAFILE/undotbs1.260.819075143
6     +DATA/TESTDB/DATAFILE/users.259.819075141

RMAN> alter database datafile 6 offline;
Statement processed

RMAN> restore datafile '+DATA/TESTDB/DATAFILE/users.259.819075141' from service testdbdr using compressed backupset;

Starting restore at 03/07/2013:23:46:38
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service testdbdr
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to +DATA/TESTDB/DATAFILE/users.259.819075141
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 03/07/2013:23:46:42

RMAN> select name, status from v$datafile;

NAME                                           STATUS
+DATA/TESTDB/DATAFILE/system.258.819075077     SYSTEM
+DATA/TESTDB/DATAFILE/sysaux.257.819075011     ONLINE
+DATA/TESTDB/DATAFILE/undotbs1.260.819075143   ONLINE
+DATA/TESTDB/DATAFILE/users.259.819075141      RECOVER

RMAN> recover datafile 6;

Starting recover at 03/07/2013:23:47:14
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 5 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_5.257.819151251
archived log for thread 1 with sequence 6 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_6.258.819151417
archived log for thread 1 with sequence 7 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_7.259.819156941
archived log for thread 1 with sequence 8 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_28/thread_1_seq_8.260.819244859
archived log for thread 1 with sequence 9 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_29/thread_1_seq_9.261.819352823
archived log for thread 1 with sequence 10 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_29/thread_1_seq_10.262.819411105
archived log for thread 1 with sequence 11 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_06_30/thread_1_seq_11.263.819468251
archived log for thread 1 with sequence 12 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_07_01/thread_1_seq_12.264.819656061
archived log for thread 1 with sequence 13 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_07_02/thread_1_seq_13.265.819756027
archived log for thread 1 with sequence 14 is already on disk as file +FRA/TESTDB/ARCHIVELOG/2013_07_03/thread_1_seq_14.266.819842455
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_5.257.819151251 thread=1 sequence=5
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_6.258.819151417 thread=1 sequence=6
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_26/thread_1_seq_7.259.819156941 thread=1 sequence=7
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_28/thread_1_seq_8.260.819244859 thread=1 sequence=8
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_29/thread_1_seq_9.261.819352823 thread=1 sequence=9
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_29/thread_1_seq_10.262.819411105 thread=1 sequence=10
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_06_30/thread_1_seq_11.263.819468251 thread=1 sequence=11
archived log file name=+FRA/TESTDB/ARCHIVELOG/2013_07_01/thread_1_seq_12.264.819656061 thread=1 sequence=12
media recovery complete, elapsed time: 00:00:15
Finished recover at 03/07/2013:23:47:36

RMAN> alter database datafile 6 online;
Statement processed
I now have a fully recovered datafile, and all by using my standby database as source for the restore.
The third new RMAN option I would like to highlight is the rolling forward of a standby database by making
use of incremental backups directly from the primary database. This used to be a long manual process but
can now be done via a quick and easy command. This option is especially useful if you are running into an
unrecoverable archive log gap. Instead of rebuilding the standby, you can make use of this recovery
command that will use incremental backups from the primary to update the standby. This method also
makes use of the FROM SERVICE command, and as with the restoring of files across the network, the
section size, encryption and compressed backupsets can be specified. Below is an example using this feature
in the same Standard Edition environment. In this I am connecting to my Standby database with RMAN and
then executing the recover command using the primary database service testdb_primary:
oracle@dbvlin604[/usr/local/dbvisit/standby]: rman

Recovery Manager: Release 12.1.0.1.0 - Production on Thu Jul 4 00:39:18 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target /

connected to target database: TESTDB (DBID=2602403303, not open)

RMAN> recover database from service testdb_primary using compressed backupset;

Starting recover at 04/07/2013:00:40:30
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=14 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service testdb_primary
destination for restore of datafile 00001: +DATA/TESTDB/DATAFILE/system.268.819081207
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service testdb_primary
destination for restore of datafile 00003: +DATA/TESTDB/DATAFILE/sysaux.267.819081257
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service testdb_primary
destination for restore of datafile 00004: +DATA/TESTDB/DATAFILE/undotbs1.266.819081299
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service testdb_primary
destination for restore of datafile 00006: +DATA/TESTDB/DATAFILE/users.265.819081319
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 04/07/2013:00:41:20
But the interesting point to make with the above is that when I executed the recover standby database
command, it still requested an old archive log (sequence# 15):
SQL> recover standby database;
ORA-00279: change 2097958 generated at 07/03/2013 22:00:51 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-15173: entry 'ARCHIVELOG' does not exist in directory 'TESTDB'
ORA-00280: change 2097958 for thread 1 is in sequence #15
That archived log did not exist in this case, as it was the missing or unrecoverable archive log in the
example. Investigation showed that my datafiles on the standby server were up to date with the latest
change, but the standby controlfile was still showing an old checkpoint change value. So I recreated the
standby controlfile, and the recover standby database command then requested the expected archive log:
SQL> recover standby database;
ORA-00279: change 2103689 generated at 07/04/2013 00:40:38 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-15173: entry 'ARCHIVELOG' does not exist in directory 'TESTDB'
ORA-00280: change 2103689 for thread 1 is in sequence #21
I was now able to send and apply logs again to the standby database. I am using Standard Edition and will
run this test later in Enterprise Edition as well, but it seems you still need to recreate the standby
controlfile after using this incremental backup option to update the standby database.
In summary, some of these options are truly powerful and can save the DBA a lot of time, especially when
working with standby databases.
Hope you are all enjoying playing with 12c and all its new features!