Configure the downstream database to receive and write shipped foreign redo data
Set up Extract1 to capture changes from archived logs sent by DBMS1
Set up Extract2 to capture changes from redo logs sent by DBMS2
Caution
This document touches briefly on many important and complex concepts and does not provide
a detailed explanation of any one topic, since the intent is to present the material in the most
expedient manner. The goal is simply to help the reader become familiar enough with the
product to successfully design and implement an Oracle GoldenGate environment. To that end,
it is important to note that the activities of design, unit testing, and integration testing, which
are crucial to a successful implementation, have been intentionally left out of this guide. All
examples are provided as-is. Oracle consulting services are highly recommended for any
customized implementation.
Executive Overview
This document is an introduction to Oracle GoldenGate's best practices and guidelines for
configuring a downstream mining database for Integrated Extract. It is intended for Oracle
Database Administrators (DBAs) and Oracle Developers with some basic knowledge of the
Oracle GoldenGate software product, and is meant to supplement the existing series of
documentation available from Oracle.
The following assumptions have been made during the writing of this document:
The reader has basic knowledge of Oracle GoldenGate products and concepts
Referencing Oracle Data Guard Concepts and Administration 11g Release 2 (11.2)
Referencing Oracle Database Net Services Administrator's Guide 11g Release 2 (11.2)
Introduction
Oracle GoldenGate provides data capture from transaction logs and delivery for
heterogeneous databases and messaging systems. Oracle GoldenGate (OGG) provides a
flexible, de-coupled architecture that can be used to implement virtually all replication
scenarios.
Release 11.2.1 of OGG introduced a feature that enables downstream capture of data from a
single source or multiple sources. This feature is specific to Oracle databases. It helps
customers meet the IT requirement of limiting the number of new processes installed on
their production source system. The feature requires some configuration of log transport
to the downstream database on the source system, as well as an open downstream
database, which is where the Integrated Extract will be installed.
Downstream Integrated Extract can be configured to process either real-time or archive logs.
The real-time mining can be configured for only one source database at a given downstream
database server at one time. If the implementation requires more than a single source
database be replicated, the archive log mode of downstream Integrated Extract must be
configured.
[Figure: Oracle GoldenGate downstream architecture - on the source side, Capture writes to
Trail Files read by a Pump; on the target side, Trail Files are consumed by Delivery.]
This document is a supplement to the OGG 11.2.1 Oracle Installation Guide. It provides more
details and examples on how to configure the downstream Integrated Extract feature, which
was introduced in Oracle GoldenGate version 11.2.1.
Prerequisites for Implementing Downstream Capture:
11.2.0.3 database-specific bundle patch for Integrated Extract 11.2.x (Doc ID 1411356.1).
For more information about the parameters used in this procedure, see the Oracle Database
Reference 11g Release 2 (11.2) and Oracle Data Guard Concepts and Administration 11g
Release 2 (11.2).
1. Configure Oracle Net so that each source database can communicate with the mining
database. The example below assumes the following and should be modified for your
environment:
DBMSCAP.EXAMPLE.COM is the downstream mining database
coe-02 is the hostname of the downstream mining server
1521 is the DBA assigned port to this downstream mining database
dmscap is the DBA assigned service name
For instructions, see Oracle Database Net Services Administrator's Guide 11g Release 2
(11.2).
Example: (TNSNAMES.ORA)
dbmscap =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = coe-02)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = dbmscap.example.com)
)
)
2. Configure authentication at each source database and the downstream mining database
to support the transfer of redo data. Redo transport sessions are authenticated using
either the Secure Sockets Layer (SSL) protocol or a remote login password file. If a source
database has a remote login password file, copy it to the appropriate directory of the
mining database system. The password file must be the same at all source databases as
well as the mining database. For more information about authentication requirements for
redo transport, see Preparing the Primary Database for Standby Database Creation in
Oracle Data Guard Concepts and Administration 11g Release 2 (11.2).
3. At each source database, configure at least one LOG_ARCHIVE_DEST_n initialization
parameter to transmit redo/archive data to the downstream mining database. Set the
attributes of this parameter as shown in one of the following examples, depending on
whether real-time or archive-log capture mode is to be used.
Example for a real-time downstream logmining server:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM
ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10
DB_UNIQUE_NAME=dbmscap'
NOTE When using an archived-log-only downstream mining database, specify a value for the
TEMPLATE attribute that keeps log files from all remote source databases separate from the
local database log files, and from each other.
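As a sketch, the destination setting for an archived-log-only configuration might look like the
following (the service name, TEMPLATE path, and DB_UNIQUE_NAME mirror the examples later
in this document; adjust them for your environment):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM
ASYNC OPTIONAL NOREGISTER
TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap'
The per-source TEMPLATE directory keeps each source's foreign archive logs separate.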
4. At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that
corresponds to the LOG_ARCHIVE_DEST_n parameter pointing to the downstream mining
database to ENABLE, as shown in the following example.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
5. At both the source database and the downstream mining database for real-time mine, set
the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to
include the DB_UNIQUE_NAME of the source database and the downstream database.
For archive log mine, DG_CONFIG attribute only needs to be set at the source. The
syntax is shown in the following example.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dbms1,dbmscap)'
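To confirm the setting took effect, the current value can be checked from SQL*Plus with a
standard command, for example:
SQL> SHOW PARAMETER LOG_ARCHIVE_CONFIG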
Configure the database to receive and write shipped redo data locally
1. At the downstream mining database, set at least one local archive log destination in the
LOG_ARCHIVE_DEST_n initialization parameter as shown in the following example:
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_1='LOCATION=/home/arc_dest/local_rl_dbmscap
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
NOTE Log files from all remote source databases must be kept separate from local mining
database log files. For information about configuring a fast recovery area, see Oracle
Database Backup and Recovery User's Guide 11g Release 2 (11.2).
2. Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the
LOG_ARCHIVE_DEST_n parameter to ENABLE, as shown in the following example.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE
1. At the downstream mining database, set at least one archive log destination for the
standby log groups in the LOG_ARCHIVE_DEST_n initialization parameter as shown in the
following example.
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbms1
VALID_FOR=(STANDBY_LOGFILE,ALL_ROLES)'
NOTE Log files from all remote source databases must be kept separate from local mining
database log files, and from each other. For information about configuring a fast recovery
area, see Oracle Database Backup and Recovery User's Guide 11g Release 2 (11.2).
2. Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the
LOG_ARCHIVE_DEST_n parameter to ENABLE as shown in the following example.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE
For more information about these initialization parameters, see Oracle Data Guard Concepts
and Administration 11g Release 2 (11.2).
Create the standby redo log files for real-time downstream mining
The following steps outline the procedure for adding standby redo log files to the
downstream mining database. The specific steps and SQL statements that are required to add
standby redo log files depend on your environment. See Oracle Data Guard Concepts and
Administration 11g Release 2 (11.2) for detailed instructions about adding standby redo log
files to a database.
1. In SQL*Plus, connect to the source database as an administrative user.
2. Determine the size of the source log file. Make note of the results.
SQL> SELECT BYTES FROM GV$LOG;
3. Determine the number of online log file groups that are configured on the source
database. Make note of the results.
SQL> SELECT COUNT(GROUP#) FROM GV$LOG;
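For a RAC source, steps 2 and 3 can be combined into a single per-thread query; this sketch
groups V$LOG by thread so each thread's log-group count and maximum log size feed directly
into the sizing formula in the next step:
SQL> SELECT THREAD#, COUNT(*), MAX(BYTES) FROM V$LOG GROUP BY THREAD#;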
5. Add the standby log file groups to the mining database. The standby log file size must be
at least the size of the source log file size. To determine the recommended number of
standby redo logs, use the following formula:
(maximum # of logfile groups +1) * maximum # of threads
For example, if a primary database has two instances (threads) and each thread has two
online log groups, then there should be six standby redo logs ((2 + 1) * 2 = 6). Having one
more standby log group for each thread (that is, one greater than the number of the online
redo log groups for the primary database) reduces the likelihood that the logwriter process
for the primary instance is blocked because a standby redo log cannot be allocated on the
standby database. The following example shows three standby log groups.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
('/oracle/dbs/slog3a.rdo', '/oracle/dbs/slog3b.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
('/oracle/dbs/slog4.rdo', '/oracle/dbs/slog4b.rdo') SIZE 500M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
('/oracle/dbs/slog5.rdo', '/oracle/dbs/slog5b.rdo') SIZE 500M;
6. Confirm that the standby log file groups were added successfully.
SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM
V$STANDBY_LOG;
(The output shows ARCHIVED=YES and STATUS=UNASSIGNED for each of the three standby
log groups.)
7. Ensure that log files from the source database are appearing in the location that is
specified in the LOCATION attribute of the local LOG_ARCHIVE_DEST_n that you set. You
might need to switch the log file at the source database to see files in the directory.
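If no files have appeared yet, forcing a log switch at the source database typically triggers the
transfer; for example:
SQL> ALTER SYSTEM SWITCH LOGFILE;
For a RAC source, ALTER SYSTEM ARCHIVE LOG CURRENT switches the logs on all threads.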
2. Register Extract with the downstream mining database on the downstream server. Make
sure the ORACLE_SID is set to the name of the downstream database.
GGSCI> DBLOGIN USERID ggadm1@dbms1 PASSWORD ggs123
GGSCI> MININGDBLOGIN USERID ggadmcap PASSWORD ggs123
GGSCI> REGISTER EXTRACT ext1 DATABASE
4. Edit the Extract parameter file. The following lines must be present to take advantage of
downstream Integrated Extract.
USERID ggadm1@dbms1 PASSWORD ggs123
TRANLOGOPTIONS MININGUSER ggadmcap MININGPASSWORD ggs123
--For Real-Time Integrated Extract (DEFAULT VALUE)
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
Or
-- For Archive Log Integrated Extract
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
5. Start Extract.
GGSCI> START EXTRACT ext1
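Once Extract is started, its status and checkpoint progress can be verified with standard GGSCI
commands, for example:
GGSCI> INFO EXTRACT ext1, DETAIL
GGSCI> VIEW REPORT ext1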
The downstream logmining server consumes two types of resources: 1) a given number of
processes and 2) a given amount of shared memory. These resource values depend on your
available system memory and CPU. In most cases the default values are sufficient; however,
if your system is low on resources, you may need to decrease these values. You can control
these resources by specifying the following parameters in the INTEGRATEDPARAMS list of
the Extract TRANLOGOPTIONS parameter:
To control the amount of shared memory used by the logmining server, specify the
max_sga_size parameter with a value in megabytes. The default value is 1 GB if
streams_pool_size is greater than 1 GB; otherwise, it is 75% of streams_pool_size.
To control the number of processes used by the logmining server, specify the
parallelism parameter. The default value is 2.
For example, you can specify that the logmining server uses 200 MB of memory with a
parallelism of 3 by specifying the following in the Extract parameter file:
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 200, parallelism 3)
[Figure: Example 1 - real-time downstream capture from a single source. The source database
dbms1 (LGWR, online redo logs) ships redo via LNSn to the RFS process on the mining database
dbmscap, which writes it to the standby redo logs (LOG_ARCHIVE_DEST_2,
/home/arc_dest/srl_dbms1) and archives foreign archived redo logs; local archived redo logs go
to LOG_ARCHIVE_DEST_1 (/home/arc_dest/local_rl_dbmscap). Integrated Capture ext1 mines
the standby redo logs and writes to the local trail.]
User GGADM1 in DBMS1: Extract uses this user's credentials to fetch data and metadata from
DBMS1. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee=>'GGADM1', privilege_type=>'capture',
grant_select_privileges=>true, do_grants=>TRUE);
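This document assumes the GGADM1 user already exists. As a sketch only (the password here
is a placeholder, not a value prescribed by this guide), creating such a user might look like:
SQL> CREATE USER ggadm1 IDENTIFIED BY ggs123;
SQL> GRANT CREATE SESSION, RESOURCE TO ggadm1;
The GRANT_ADMIN_PRIVILEGE call shown earlier then grants the capture-specific privileges.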
2. Confirm that the standby log file groups were added successfully.
SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM
V$STANDBY_LOG;
2. Register Extract with the downstream mining database. Make sure the ORACLE_SID for
the mining database is set in the environment.
GGSCI> DBLOGIN USERID ggadm1@dbms1 PASSWORD ggs123
GGSCI> MININGDBLOGIN USERID ggadmcap PASSWORD ggs123
GGSCI> REGISTER EXTRACT ext1 DATABASE
4. Edit Extract parameter file ext1.prm. The following lines must be present to take
advantage of real-time capture.
USERID ggadm1@dbms1 PASSWORD ggs123
TRANLOGOPTIONS MININGUSER ggadmcap MININGPASSWORD ggs123
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
5. Start Extract.
GGSCI> START EXTRACT ext1
NOTE Oracle does not allow real-time mining of the redo logs for more than one source database. When
you need to configure multiple source databases, only one source database can be configured in real-time
mode. The others have to be in archive-log mode.
[Figure: Example 2 - real-time downstream capture from a RAC source. Both nodes of the
dbms1 RAC source set LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC
OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=dbmscap' and ship redo via LNSn to RFS on the mining database dbmscap,
which writes the standby redo logs (/home/arc_dest/srl_dbms1) and local archived redo logs
(/home/arc_dest/local_rl_dbmscap). Integrated Capture ext1 mines the redo and writes to the
local trail.]
[Figure: Example 3 - archive-log downstream capture from a single source. The primary source
database dbms1 ships redo with LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM
ASYNC OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap'. RFS on the
mining database dbmscap writes the foreign archived redo logs to /usr/orcl/arc_dest/dbms1/,
where Integrated Capture ext1 mines them and writes to the local trail; local archived redo logs
go to LOG_ARCHIVE_DEST_1 (/home/arc_dest/local_rl_dbmscap).]
User GGADM1 in DBMS1: Extract uses this user's credentials to fetch data and metadata from
DBMS1. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee=>'GGADM1', privilege_type=>'capture',
grant_select_privileges=>true, do_grants=>TRUE);
4. Edit the Extract parameter file ext1.prm. The following lines must be present to take
advantage of archive log capture.
USERID ggadm1@dbms1 PASSWORD ggadm1pw
TRANLOGOPTIONS MININGUSER ggadmcap, MININGPASSWORD ggadmcappw
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
5. Start Extract.
GGSCI> START EXTRACT ext1
[Figure: Example 4 - archive-log downstream capture from a RAC source. Both nodes of the
dbms1 RAC source set LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM ASYNC
OPTIONAL NOREGISTER TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap'. RFS on the
mining database dbmscap writes the foreign archived redo logs to /usr/orcl/arc_dest/dbms1/,
where Integrated Capture ext1 mines them and writes to the local trail; local archived redo logs
go to LOG_ARCHIVE_DEST_1 (/home/arc_dest/local_rl_dbmscap).]
The steps for configuring an archive-log downstream mining database for a RAC source are
exactly the same as in the previous example. The diagram shows the differences between a
single instance and a RAC instance from an architecture perspective, but the actual steps to
set up the downstream capture are exactly the same as in example 3. LOG_ARCHIVE_DEST_2
is a global parameter; once it is set, it is valid for both threads.
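Because the parameter is global, it can be set once for all RAC instances; with the SID='*'
clause this scope is explicit (standard ALTER SYSTEM syntax, shown here as a sketch using the
same attributes as example 3):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DBMSCAP.EXAMPLE.COM
ASYNC OPTIONAL NOREGISTER
TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap' SID='*';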
[Figure: Example 5 - archive-log downstream capture from multiple sources. Source Database 1
(dbms1) and Source Database 2 (dbms2) each ship redo to the mining database dbmscap via
LOG_ARCHIVE_DEST_2, using TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log
and TEMPLATE=/usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log respectively to keep the
foreign archived redo logs separate. On dbmscap, Integrated Capture ext1 mines the logs from
dbms1 and ext2 mines the logs from dbms2, each writing to its own local trail; local archived
redo logs go to LOG_ARCHIVE_DEST_1 (/home/arc_dest/local_rl_dbmscap).]
User GGADM1 in DBMS1: Extract uses this user's credentials to fetch data and metadata from
DBMS1. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
User GGADM2 in DBMS2: Extract uses this user's credentials to fetch data and metadata from
DBMS2. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee=>'GGADM2', privilege_type=>'capture',
grant_select_privileges=>true, do_grants=>TRUE);
NOTE Oracle does not allow real-time mining of the redo logs for more than one source database. When
you need to configure multiple source databases, only one source database can be configured in real-time
mode. The others have to be in archive-log mode.
4. Edit the Extract parameter file ext1.prm. The following lines must be present to take
advantage of archive log capture.
USERID ggadm1@dbms1 PASSWORD ggadm1pw
TRANLOGOPTIONS MININGUSER ggadmcap, MININGPASSWORD ggadmcappw
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
5. Start Extract.
GGSCI> START EXTRACT ext1
2. Register Extract with the mining database for source database DBMS2.
GGSCI> DBLOGIN USERID ggadm2@dbms2, PASSWORD ggadm2pw
GGSCI> MININGDBLOGIN USERID ggadmcap, PASSWORD ggadmcappw
GGSCI> REGISTER EXTRACT ext2 DATABASE
4. Edit the Extract parameter file ext2.prm. The following lines must be present to take
advantage of archive log capture.
5. Start Extract.
GGSCI> START EXTRACT ext2
[Figure: Example 6 - multiple sources with mixed capture modes. Source Database 1 (dbms1)
ships redo to the mining database dbmscap as foreign archived redo logs
(LOG_ARCHIVE_DEST_2 with TEMPLATE=/usr/orcl/arc_dest/dbms1/dbms1_arch_%t_%s_%r.log),
mined by Integrated Capture ext1. Source Database 2 (dbms2) ships redo into standby redo logs
on dbmscap for real-time mining by Integrated Capture ext2, with foreign archived redo logs
landing under /usr/orcl/arc_dest/dbms2/dbms2_arch_%t_%s_%r.log. Each Extract writes to its
own local trail; local archived redo logs go to LOG_ARCHIVE_DEST_1
(/home/arc_dest/local_rl_dbmscap).]
User GGADM1 in DBMS1: Extract uses this user's credentials to fetch data and metadata from
DBMS1. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
User GGADM2 in DBMS2: Extract uses this user's credentials to fetch data and metadata from
DBMS2. This user needs the DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE()
procedure executed to grant the appropriate permissions. For Oracle 10.2, the user must
additionally be granted SELECT on v_$database.
SQL> exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee=>'GGADM2', privilege_type=>'capture',
grant_select_privileges=>true, do_grants=>TRUE);
2. Confirm that the standby log file groups were added successfully.
SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM
V$STANDBY_LOG;
4. Edit the Extract parameter file ext1.prm. The following lines must be present to take
advantage of archive log capture.
USERID ggadm1@dbms1 PASSWORD ggadm1pw
TRANLOGOPTIONS MININGUSER ggadmcap, MININGPASSWORD ggadmcappw
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine N)
5. Start Extract.
GGSCI> START EXTRACT ext1
2. Register Extract with the mining database for source database DBMS2.
GGSCI> DBLOGIN USERID ggadm2@dbms2, PASSWORD ggadm2pw
GGSCI> MININGDBLOGIN USERID ggadmcap, PASSWORD ggadmcappw
4. Edit the Extract parameter file ext2.prm. The following lines must be present to take
advantage of archive log capture.
USERID ggadm2@dbms2 PASSWORD ggadm2pw
TRANLOGOPTIONS MININGUSER ggadmcap, MININGPASSWORD ggadmcappw
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
5. Start Extract.
GGSCI> START EXTRACT ext2
Conclusion
This paper has presented many different approaches to implementing downstream
Integrated Extract. With this information you can now determine which approach is best for
your environment.
Copyright 2012, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel
and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are
trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open
Company, Ltd. 0410
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com