Contents

Executive Overview
Configuration Overview
  Oracle GoldenGate
  Oracle Exadata Database Machine
  Oracle Database File System
  Oracle Clusterware
Migrating to Oracle Exadata Database Machine
Configuration Best Practices
  Step 1: Set Up DBFS on Oracle Exadata Database Machine
  Step 2: Install Oracle GoldenGate
  Step 3: Configure GoldenGate and Database Parameters
  Step 4: Set Up Checkpoint Files and Trail Files in DBFS
  Step 5: Set Up Page Files on the Local File System
  Step 6: Configure Replicat Commit Behavior
  Step 7: Configure Autostart of Extract, Data Pump and Replicat Processes
  Step 8: Oracle Clusterware Configuration
Appendix A: Creating GoldenGate Clusterware Resource
  Recommendations When Deploying on Oracle RAC
Appendix B: Example Agent Script
References
Executive Overview
The strategic integration of Oracle Exadata Database Machine and Oracle Maximum Availability Architecture (MAA) best practices (Exadata MAA) provides the best and most comprehensive Oracle Database availability solution. This white paper describes best practices for configuring Oracle GoldenGate to work with Oracle Exadata Database Machine and Exadata storage. Oracle GoldenGate is instrumental for many reasons, including the following:

- To migrate to an Oracle Exadata Database Machine with minimal downtime
- As part of an application architecture that requires Oracle Exadata Database Machine plus the flexible availability features provided by Oracle GoldenGate, such as active-active databases for data distribution and continuous availability, and zero or minimal downtime during planned outages for system migrations, upgrades, and maintenance
- To implement a near real-time data warehouse or consolidated database on Oracle Exadata Database Machine, sourced from various, possibly heterogeneous, source databases and populated by Oracle GoldenGate
- To capture changes from an OLTP application running on Oracle Exadata Database Machine to support further downstream consumption, such as a SOA-type integration

This paper focuses on configuring Oracle GoldenGate to run on Oracle Exadata Database Machine. Oracle Exadata Database Machine can act as the source database, as the target database, or in some cases as both source and target databases for Oracle GoldenGate processing.
In addition, this paper covers the Oracle GoldenGate regular mode of continuously extracting logical changes from either online redo log files or archived redo log files.
Configuration Overview
This section introduces Oracle GoldenGate, Oracle Exadata Database Machine, and Oracle Database File System (DBFS). For more information about these features, see the References section at the end of this white paper.
Oracle GoldenGate
Oracle GoldenGate provides real-time, log-based change data capture and delivery between heterogeneous systems. This technology enables a cost-effective, low-impact, real-time data integration and continuous availability solution. Oracle GoldenGate moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. The architecture supports multiple data replication topologies, such as one-to-many, many-to-many, cascading, and bidirectional. Its wide variety of use cases includes real-time business intelligence; query offloading; zero-downtime upgrades and migrations; and active-active databases for data distribution, data synchronization, and high availability. Figure 1 shows the Oracle GoldenGate architecture.
[Figure 1. Oracle GoldenGate architecture: Extract captures changes from a datasource (transaction log or Vendor Access Module) and writes them to a trail or file; an optional Data Pump sends the trail over the network to the Collector on the target, where Replicat performs change synchronization.]
Oracle Clusterware
Oracle Clusterware enables servers to communicate with each other, so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are standalone servers, each server has additional processes that communicate with other servers. In this way the separate servers appear as if they are one system to applications and end users. Oracle Clusterware provides the infrastructure necessary to run Oracle Real Application Clusters (Oracle RAC). Oracle Clusterware also manages resources, such as virtual IP (VIP) addresses, databases, listeners, services, and so on. There are APIs to register an application and instruct Oracle Clusterware regarding the way an application is managed in a clustered environment. You use the APIs to register the Oracle GoldenGate Manager process as an application managed through Oracle Clusterware. The Manager process should then be configured to automatically start or restart other Oracle GoldenGate processes.
[Figure 2. Oracle GoldenGate capture and delivery: the Capture process writes changes to a trail that is routed over the LAN/WAN/Web/IP to the Delivery process on the standby system.]
Oracle GoldenGate supports an active-passive bidirectional configuration, where Oracle GoldenGate replicates data from an active primary database to a full replica database on a live standby system that is ready for failover during planned and unplanned outages. This provides the ability to migrate to Oracle Exadata Database Machine, allowing the new system to work in tandem until testing is completed and a switchover is performed. Using Oracle GoldenGate for database migration is most applicable when reduced downtime is a requirement and Oracle Data Guard cannot be used for the migration. Refer to the Exadata MAA paper Best Practices for Migrating to Exadata Database Machine to determine which migration option is best for your specific case. This paper includes instructions for configuring a target system on Oracle Exadata Database Machine that will act as the standby database shown in Figure 2.
Substitute the size parameters with your required trail file storage size.
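As an illustrative sketch only (the tablespace name matches the examples in this paper, but the datafile location and sizes are assumptions to adapt to your environment), a trail file system might be created as follows; see the DBFS documentation for the full procedure:

```sql
-- Tablespace to hold the DBFS file system for trail files (sizes are examples)
SQL> CREATE BIGFILE TABLESPACE dbfs_gg_source_tbs
       DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G MAXSIZE 300G;

-- Create the DBFS file system using the script shipped with the database
SQL> connect dbfs_user/<passwd>@<dbfs_tns_alias>
SQL> @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql DBFS_GG_SOURCE_TBS goldengate
```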
The LOB segment used by DBFS should be configured with the storage options NOCACHE LOGGING, which is the default:
-- Connect to the DBFS database
SQL> connect system/<passwd>@<dbfs_tns_alias>

-- View current LOB storage:
SQL> SELECT table_name, segment_name, logging, cache
     FROM dba_lobs
     WHERE tablespace_name='DBFS_GG_SOURCE_TBS';

-- More than likely it will be something like this:

TABLE_NAME         SEGMENT_NAME           LOGGING CACHE
------------------ ---------------------- ------- ----------
T_GOLDENGATE       LOB_SFS$_FST_73        YES     NO
Follow instructions in My Oracle Support note 1054431.1 for configuring the newly created DBFS file system so that the DBFS instance and mount point resources are automatically started by Cluster Ready Services (CRS) after a node failure. When registering the resource with Oracle Clusterware, be sure to create it as a cluster_resource instead of a local_resource as specified in the My Oracle Support note:
crsctl add resource $RESNAME \ -type cluster_resource \
Once the file system is mounted, create directories in the newly created file system for storing the Oracle GoldenGate files. For example:
% cd /mnt/dbfs_source/goldengate
% mkdir dirchk
% mkdir dirpcs
% mkdir dirprm
% mkdir dirdat
% mkdir BR
Create symbolic links for the directories that are not controlled by Oracle GoldenGate parameters:
% ln -s /mnt/dbfs_source/goldengate/dirprm $GG_HOME/dirprm
% ln -s /mnt/dbfs_source/goldengate/dirchk $GG_HOME/dirchk
% ln -s /mnt/dbfs_source/goldengate/dirpcs $GG_HOME/dirpcs
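The directory and symbolic link setup above can be collected into a small idempotent helper. This is a sketch, not part of the paper's procedure; the setup_gg_dbfs_dirs name is ours, and the example paths are the ones used above:

```shell
# setup_gg_dbfs_dirs <dbfs_goldengate_dir> <goldengate_home>
# Creates the GoldenGate working directories on DBFS and links the ones
# not controlled by GoldenGate location parameters back into the GG home.
setup_gg_dbfs_dirs() {
  dbfs_gg=$1
  gg_home=$2
  mkdir -p "${gg_home}"
  # Directories GoldenGate expects to find on the shared file system
  for d in dirchk dirpcs dirprm dirdat BR; do
    mkdir -p "${dbfs_gg}/${d}"
  done
  # Symlink only the directories that have no GoldenGate location parameter
  for d in dirprm dirchk dirpcs; do
    if [ -L "${gg_home}/${d}" ]; then
      rm "${gg_home}/${d}"   # replace a stale link, never a real directory
    fi
    ln -s "${dbfs_gg}/${d}" "${gg_home}/${d}"
  done
}

# Example (paths as used in this paper):
# setup_gg_dbfs_dirs /mnt/dbfs_source/goldengate "$GG_HOME"
```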
The Bounded Recovery (BR) feature was added to Extract in Oracle GoldenGate version 11.1.1. This feature guarantees an efficient recovery after Extract stops for any reason, planned or unplanned, no matter how many open (uncommitted) transactions there were at the time Extract stopped, nor how old they were. Bounded Recovery sets an upper boundary for the maximum amount of time that it would take for Extract to recover to the point where it stopped and then resume normal processing. The Bounded Recovery checkpoint files should be placed on a shared file system so that, in the event of a failover when there are open long-running transactions, Extract can use Bounded Recovery to reduce the time taken to perform recovery. Starting with Oracle GoldenGate version 11.2.1, the Bounded Recovery files are supported on DBFS, and it is recommended that you place them there. With earlier releases, the Bounded Recovery files need to be stored on NFS storage, such as a ZFS Storage Appliance connected to Exadata. It is possible to store the checkpoint files on the local file system, but when Extract performs recovery after a node failure, the standard checkpoint mechanism will be used until new local Bounded Recovery checkpoint files are subsequently created. This will only be noticeable if there are long-running transactions at the time of the failure. To set the Bounded Recovery file directory, use the following Extract parameter:
BR BRDIR /mnt/dbfs_source/goldengate/BR
For more information on Bounded Recovery, refer to the Oracle GoldenGate Reference Guide: http://docs.oracle.com/cd/E35209_01/doc.1121/e29399.pdf

The location of the Extract/Data Pump trail file directory is specified during process creation. For Extract it is also specified in the parameter file with the EXTTRAIL parameter.

Target Environment (Replicat)

On the target environment, where the Replicat processes read the trail files and apply the data to the target database, two separate DBFS file systems are required in order to separate the different I/O requirements of the trail and checkpoint files.

Trail files are written to by the Collector (server) process on the target host using consecutive serial I/Os from the start to the end of the file, sized according to your Data Pump configuration. The same trail files are read by each Replicat process, also using consecutive serial I/O requests. Once a portion of the trail is read by a Replicat process, it will not normally be read a second time by the same process. When multiple Replicat processes read from the same trail files, it is rare that they remain in sync, reading from the same portion of the trail file at the same time. Because of this, the best configuration for DBFS is with the NOCACHE LOGGING storage options, as described above for the source environment.

The checkpoint files are small (approximately 4KB) but are written to frequently, overwriting previous data. The file doesn't grow in size and is only read during process startup to determine the proper starting point for recovery or initiation. Because the checkpoint file is written to over and over, performance is best when the file is stored in DBFS with the CACHE LOGGING storage option. Setting the CACHE option causes the small amount of data being written to the checkpoint files to be written into the buffer cache of the DBFS instance, rather than issuing direct writes to disk, which would cause higher waits on I/O.
In testing, this has been shown to increase checkpoint performance by a factor of 2 to 5 compared to using the NOCACHE configuration with DBFS. Create the second DBFS file system for the checkpoint files in much the same way as the file system on the source environment (above). Some important notes:
- The file system is only for checkpoint files, so it can be sized at less than 100MB.
- Create the file system using the same user as the first file system; it is important that the same user creates both file systems.
- Change the LOB storage parameters to CACHE LOGGING:
-- Connect to the DBFS database
SQL> connect system/<passwd>@<dbfs_tns_alias>

-- View current LOB storage:
SQL> SELECT table_name, segment_name, logging, cache
     FROM dba_lobs
     WHERE tablespace_name='DBFS_GG_CKPT_TBS';

-- Likely it will be something like this:

TABLE_NAME         SEGMENT_NAME           LOGGING CACHE
------------------ ---------------------- ------- ----------
T_GOLDENGATE2      LOB_SFS$_FST_75        YES     NO

SQL> ALTER TABLE DBFS.<TABLE_NAME> MODIFY LOB (FILEDATA) (CACHE LOGGING);

-- View the new LOB storage:
SQL> SELECT table_name, segment_name, logging, cache
     FROM dba_lobs
     WHERE tablespace_name='DBFS_GG_CKPT_TBS';

TABLE_NAME         SEGMENT_NAME           LOGGING CACHE
------------------ ---------------------- ------- ----------
T_GOLDENGATE2      LOB_SFS$_FST_75        YES     YES
Note: If you are using an Oracle GoldenGate Data Pump process to transfer the trail files from a source host on the database machine using DBFS, then contact Oracle GoldenGate Support to obtain the fix to Bug 10146318. This bug fix improves trail file creation performance on DBFS by the Oracle GoldenGate server/collector process. This only affects Oracle GoldenGate versions before 11.1.1.0.5.
Extract can still be configured to capture directly from the redo logs for any supported Oracle version. This configuration is now called classic capture mode.

1. Extract using Integrated Capture mode.

a. Set the database initialization parameter STREAMS_POOL_SIZE = 1.25GB x the number of Integrated Capture processes.

For further details about configuring Extract in Integrated Capture mode, refer to the Oracle GoldenGate Installation and Setup Guide: http://docs.oracle.com/cd/E35209_01/doc.1121/e35957.pdf

2. Extract in Classic Capture mode.

a. Use the default Oracle Automatic Storage Management (Oracle ASM) naming convention for the archived redo log files.
b. Configure the Oracle GoldenGate Extract parameter for the newer Oracle ASM log read API. Oracle GoldenGate release 11.1.1 introduces a new method of reading log files stored in Oracle ASM. This new method uses the database server to access the redo and archived redo log files, instead of connecting directly to the Oracle ASM instance. The database must contain the libraries with the API modules. The libraries are currently included with Oracle Database release 10.2.0.5, 11.2.0.2 and 11.2.0.3. To successfully mine the Oracle archived redo log files located on the storage cells that are managed by Oracle ASM, configure the Oracle GoldenGate Extract parameter as follows: Set the TRANLOGOPTIONS parameter to specify use of the new log read API. For example:
TRANLOGOPTIONS DBLOGREADER
3. Configure Data Pump. Configure the Data Pump with the PASSTHRU parameter if the process is not carrying out any mappings or conversions. Using PASSTHRU reduces the CPU used by the Data Pump because it does not have to look up table definitions, either from the database or from a data-definitions file. For further details on Extract configuration or Data Pump with PASSTHRU, refer to the Oracle GoldenGate Windows and UNIX Reference Guide: http://docs.oracle.com/cd/E35209_01/doc.1121/e29399.pdf
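The sizing rule in step 1a above can be sketched numerically; streams_pool_mb is a hypothetical helper, and 1280 MB is simply 1.25 GB expressed in megabytes:

```shell
# Suggested STREAMS_POOL_SIZE in MB: 1.25 GB (1280 MB) per Integrated
# Capture process, per the guideline in step 1a.
streams_pool_mb() {
  echo $(( $1 * 1280 ))
}

# For example, two Integrated Capture Extracts:
streams_pool_mb 2    # prints 2560, i.e. set STREAMS_POOL_SIZE=2560M
```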
After creating the Extract, use the same EXTTRAIL parameter value to add the local trail:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa, EXTRACT ext_db, MEGABYTES 500
Further instructions about creating the Extract are available in the Oracle GoldenGate Administration Guide at http://docs.oracle.com/cd/E35209_01/doc.1121/e29397.pdf

To configure Oracle GoldenGate trail files on DBFS for the target database:

1. Make sure the DBFS directory is already created on the target environment.

2. Set the EXTTRAIL Replicat parameter, as follows:
EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
3. When adding the Replicat, use the same EXTTRAIL parameter value:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD REPLICAT rep_db1, EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
Do not place trail files on the local file system: doing so lengthens restart times in the event of a node failure, reducing availability.

To configure Data Pump between a source and target database outside the same Exadata Database Machine:

1. Make sure Extract and Replicat are configured.

2. Set the RMTHOST Data Pump parameter to the IP address or host name that will be used for connecting to the target. In Step 7 below, the Application Virtual IP address is created with Cluster Ready Services (CRS) so that a single IP address can be moved between compute nodes, allowing Data Pump to continue to connect to the target host when it moves from a failed node to a surviving node:
RMTHOST gg_dbmachine, MGRPORT 8901
3. Set the RMTTRAIL Data Pump parameter to the trail file location on the target host:
RMTTRAIL /mnt/dbfs/goldengate/dirdat/aa
4. Create a Data Pump process using the local trail file location on the source host:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD EXTRACT dpump_1, EXTTRAILSOURCE /mnt/dbfs/goldengate/dirdat/aa
5. Use the ADD RMTTRAIL command to specify the remote trail file location on the target host:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD RMTTRAIL /mnt/dbfs/goldengate/dirdat/aa, EXTRACT dpump_1, MEGABYTES 500
Further instructions about creating the Data Pump process are available in the Oracle GoldenGate Administration Guide at http://docs.oracle.com/cd/E35209_01/doc.1121/e29397.pdf
If required, instead of using the Oracle GoldenGate process name wildcard (*), explicitly name the processes you want to be restarted automatically. Example:
AUTOSTART EXTRACT EXT_1A AUTOSTART EXTRACT DPUMP_1A AUTORESTART EXTRACT EXT_1A AUTORESTART EXTRACT DPUMP_1A
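Putting the pieces together, a Manager parameter file (dirprm/mgr.prm) using these settings might look like the following sketch; the port, process names, and retry settings are illustrative, not prescribed by this paper:

```
PORT 8901
AUTOSTART EXTRACT EXT_1A
AUTOSTART EXTRACT DPUMP_1A
AUTORESTART EXTRACT EXT_1A, RETRIES 5, WAITMINUTES 2
AUTORESTART EXTRACT DPUMP_1A, RETRIES 5, WAITMINUTES 2
```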
a. To create the application VIP, run the following as the root user:
$GRID_HOME/bin/appvipcfg create -network=1 \ -ip=10.1.41.93 \ -vipname=gg_vip_source \ -user=root
In the example: $GRID_HOME is the Oracle home in which Oracle 11g Release 2 Grid infrastructure components have been installed (for example: /u01/app/grid). network is the network number that you want to use. With Oracle Clusterware release 11.2.0.1, you can find the network number using the following command:
crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet
net1 in NAME=ora.net1.network indicates this is network 1, and the second line indicates the subnet on which the VIP will be created. ip is the IP address provided by your system administrator for the new Application VIP. This IP address must be in the same subnet as determined above. gg_vip_source is the name of the application VIP that you will create. b. Run the following command to give the Oracle Database installation owner permission to start the VIP:
$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x
c. Start the VIP (as the Oracle Database installation owner):

$GRID_HOME/bin/crsctl start resource gg_vip_source

d. To validate whether the VIP is running and on which node it is running, execute:
$GRID_HOME/bin/crsctl status resource gg_vip_source
See the Oracle Clusterware documentation for further details about creating an Application VIP: http://docs.oracle.com/cd/E11882_01/rac.112/e16794/toc.htm
2. Configure the Oracle Grid Infrastructure Bundled Agent. Introduced in 11.2.0.3 for 64-bit Linux, the Oracle Grid Infrastructure Bundled Agents provide predefined Clusterware resources for Oracle GoldenGate, Siebel, and Apache applications. Using the bundled agent for Oracle GoldenGate, it is simple to create dependencies on the source/target database, the application VIP, and the DBFS mount point. The agent command-line utility (agctl) is used to start and stop Oracle GoldenGate, and can also be used to relocate Oracle GoldenGate between the nodes in the cluster. The bundled agent for Oracle GoldenGate fully supports use of Extract running in classic or integrated capture modes.

The current version certification matrix:

Grid Infrastructure 11.2.0.3 and later
Oracle GoldenGate 11.2.1 and later
Database 10.2.0.5, 11.2.0.2 and later
The bundled agent software should be downloaded from the following location: http://www.oracle.com/technetwork/products/clusterware/downloads/index.html

Follow the installation instructions provided in the readme.txt file. Example configuration of the bundled agent:

a. Configure DBFS as detailed in My Oracle Support note 1054431.1 and test to make sure crsctl can be used to mount and unmount the file system. Mounting the file system should also start the DBFS instance if it is not already running.

b. Use agctl to create the Clusterware resource. On the source environment:
% agctl add goldengate GG_Source --gg_home /home/oracle/goldengate \ --instance_type source \ --nodes dbm01db05,dbm01db06 \ --vip_name gg_vip_source \ --filesystems dbfs_mount --databases ggs \ --oracle_home /u01/app/oracle/product/11.2.0.3/dbhome_1 \ --monitor_extracts ext_1a,dpump_1a
Once the Oracle GoldenGate processes have been added to a bundled agent resource, they should only be started and stopped using agctl. The bundled agent starts the Oracle GoldenGate processes by starting Manager, which in turn will automatically start the processes (Extract, Data Pump, Replicat) that have been configured to autostart. If an Oracle GoldenGate process aborts due to a problem, as long as the Manager process is still running it is okay to use ggsci to restart the failed process. To check the status of Oracle GoldenGate:
% agctl status goldengate GG_Source
Goldengate instance 'GG_Source' is running on dbm01db06
ggsci can be used to check the status of individual Oracle GoldenGate processes. To start Oracle GoldenGate manager, and all processes that have autostart enabled:
% agctl start goldengate GG_Target [--serverpool serverpool_name | --node node_name]
Note: Oracle GoldenGate will start up on the node you issue the command from, unless a node name or serverpool name is specified. To stop all Oracle GoldenGate processes:
% agctl stop goldengate GG_Target [--serverpool serverpool_name | --node node_name]
Note: The Oracle GoldenGate resource MUST be running before relocating it. To view the configuration parameters for the Oracle GoldenGate resource:
% agctl config goldengate GG_Target
GoldenGate location is: /home/oracle/goldengate
GoldenGate instance type is: source
Configured to run on Nodes: dbm01db05 dbm01db06
ORACLE_HOME location is: /u01/app/oracle/product/11.2.0.3/dbhome_1
Databases needed: ggs
File System resources needed: dbfs_mount
See Appendix B for an example agent script that starts and stops the Oracle GoldenGate Manager, Extract, Data Pump, and Replicat processes. It is important to manually test that the agent script can start and stop the Oracle GoldenGate processes before moving on to the next step.

2. Register a resource in Oracle Clusterware. Register Oracle GoldenGate as a resource in Oracle Clusterware using the crsctl utility. When using DBFS to store Oracle GoldenGate files, it is recommended to follow the configuration guidelines provided in My Oracle Support note 1054431.1 (detailed above). Mounting of DBFS is carried out by a Clusterware resource, which in turn has a start dependency on the DBFS instance. If the DBFS resource is started and the instance is not running, the instance will be started automatically. The DBFS mount resource is named as a start dependency for the Oracle GoldenGate resource so that the required file systems are mounted before the Oracle GoldenGate processes are started. It is also recommended to list the source or target database as a start dependency for the Oracle GoldenGate resource so that the Extract or Replicat processes won't fail when they can't connect to the database.
1. Determine the name of the DBFS or source/target database resource for the start dependency:
% crsctl status resource | grep -i <dbfs or source/target DB resource name>
2. Use the Oracle Grid Infrastructure user (oracle, in this example) to execute the following:
$GRID_HOME/bin/crsctl add resource GG_Source \ -type cluster_resource \ -attr "ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/11gr2_gg_action.scr, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(gg_vip_source,dbfs_mount,ora.ggs.db) pullup(gg_vip_source)', STOP_DEPENDENCIES=hard(gg_vip_source), SCRIPT_TIMEOUT=300"
This paper assumes a single Oracle Exadata Database Machine is used for either a source (Extract) or target (Replicat) host. If the database machine is split into separate clusters such that the source and target run within the same database machine, then make sure the Extract and Replicat are restricted to the designated cluster nodes:
$GRID_HOME/bin/crsctl add resource GG_Source \ -type cluster_resource \ -attr "ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/11gr2_gg_action.scr, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(gg_vip_source,dbfs_mount) pullup(gg_vip_source)', STOP_DEPENDENCIES=hard(gg_vip_source), HOSTING_MEMBERS=dbm01db05 dbm01db06, PLACEMENT=restricted, SCRIPT_TIMEOUT=300"
For more information about the crsctl add resource command and its options, see the Oracle Clusterware Administration and Deployment Guide at http://docs.oracle.com/cd/E11882_01/rac.112/e16794/toc.htm

3. Start the resource. Once the resource has been added, you should always use Oracle Clusterware to start Oracle GoldenGate. Log in as the Oracle Grid Infrastructure user (oracle) and execute the following:
% $GRID_HOME/bin/crsctl start resource GG_Source
To check the status of the resource, for example:
% crsctl status resource GG_Source
NAME=GG_Source
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on dbm01db05
4. Manage the application. To relocate Oracle GoldenGate onto a different cluster node, use the $GRID_HOME/bin/crsctl relocate resource command with the force (-f) option. This command can be run on any node in the cluster as the Grid Infrastructure user (oracle). If there is a dependency on an Application VIP, you need to relocate the VIP resource, which in turn will stop Oracle GoldenGate, then relocate and restart it on the new node. For example:
[oracle@dbm01db05 ~]$ crsctl relocate resource gg_vip_source -f
CRS-2673: Attempting to stop 'GG_Source' on 'dbm01db05'
CRS-2677: Stop of 'GG_Source' on 'dbm01db05' succeeded
CRS-2673: Attempting to stop 'gg_vip_source' on 'dbm01db05'
CRS-2677: Stop of 'gg_vip_source' on 'dbm01db05' succeeded
CRS-2672: Attempting to start 'gg_vip_source' on 'dbm01db06'
CRS-2676: Start of 'gg_vip_source' on 'dbm01db06' succeeded
CRS-2672: Attempting to start 'dbfs_mount' on 'dbm01db06'
CRS-2676: Start of 'dbfs_mount' on 'dbm01db06' succeeded
CRS-2672: Attempting to start 'GG_Source' on 'dbm01db06'
CRS-2676: Start of 'GG_Source' on 'dbm01db06' succeeded
5. CRS cleanup. To remove Oracle GoldenGate from Oracle Clusterware management, perform the following tasks:

a) Stop Oracle GoldenGate (log in as the Oracle Grid Infrastructure (oracle) user):
$GRID_HOME/bin/crsctl stop resource GG_Source
b) Delete the Oracle GoldenGate resource from Oracle Clusterware:

$GRID_HOME/bin/crsctl delete resource GG_Source
c) If no longer needed, delete the agent action script: 11gr2_gg_action.scr. This does not delete the Oracle GoldenGate or DBFS configuration, only the Clusterware resource.
Ensure that the DBFS database has instances running on all the database nodes involved in the Oracle RAC configuration. This action provides access to Oracle GoldenGate if it is restarted after a node failure.
Ensure that the DBFS file system is mountable on all database nodes in the Oracle RAC configuration. Mount DBFS using a Clusterware resource and, to prevent the Extract or Replicat processes from being started on multiple nodes concurrently, mount the file system only on the node where Oracle GoldenGate is running. Use the same mount point names on all the nodes to ensure seamless failover.
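The "mounted before start" condition above can be checked from a script with a small helper like the following sketch (Linux-specific; the is_mounted name is ours, not part of GoldenGate or the bundled agent):

```shell
# is_mounted <mount_point>: succeed only if the path is a mounted file system.
# Reads /proc/mounts (Linux), whose second field is the mount point.
is_mounted() {
  awk -v mp="$1" '$2 == mp { found=1 } END { exit !found }' /proc/mounts
}

# Example guard before starting GoldenGate (mount point as used in this paper):
# is_mounted /mnt/dbfs_source || { echo "DBFS not mounted"; exit 1; }
```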
### Changed default from local3 to user for Solaris default support on 17-FEB-2012
### This will allow us to log messages to the syslog
### (/var/log/messages on Linux, /var/adm/messages on Solaris)
LOGGER_FACILITY=user

###########################################
### No editing is required below this point
###########################################

### determine platform
UNAME_S=`uname -s`
if [ $UNAME_S = 'Linux' ]; then LINUX=1; SOLARIS=0;
elif [ $UNAME_S = 'SunOS' ]; then LINUX=0; SOLARIS=1;
fi

LOGGER="/bin/logger -t GoldenGate"
logit () {
  ### type: info, error, debug
  type=$1
  msg=$2
  if [ "$type" = "info" ]; then
    echo $msg
    $LOGGER -p ${LOGGER_FACILITY}.info "$msg"
  elif [ "$type" = "error" ]; then
    echo $msg
    $LOGGER -p ${LOGGER_FACILITY}.error "$msg"
  elif [ "$type" = "debug" ]; then
    echo $msg
    $LOGGER -p ${LOGGER_FACILITY}.debug "$msg"
  fi
}
# check_process validates that a Manager/Extract/Replicat process is
# running at the PID that GoldenGate specifies.
check_process () {
  PROCESS=$1
  if [ ${PROCESS} = mgr ]
  then
    PFILE=MGR.pcm
  elif [ ${PROCESS} = ext ]
  then
    PFILE=${EXTRACT}*.pce
  elif [ ${PROCESS} = rep ]
  then
    PFILE=${REPLICAT}*.pcr
  else
    PFILE=${DATAPUMP}*.pce
  fi
  if ( [ -f "${GGS_HOME}/dirpcs/${PFILE}" ] )
  then
    pid=`cut -f8 "${GGS_HOME}/dirpcs/${PFILE}"`
    if [ ${pid} = `ps -e |grep ${pid} |grep ${PROCESS} |cut -d " " -f2` ]
    then
      logit info "${SCRIPTNAME}(check_process) - Process(es) ${PROCESS} IS running"
      exit 0
    else
      if [ ${pid} = `ps -e |grep ${pid} |grep ${PROCESS} |cut -d " " -f1` ]
      then
        logit info "${SCRIPTNAME}(check_process) - Process(es) ${PROCESS} IS running"
        exit 0
      else
        logit error "${SCRIPTNAME}(check_process) - Process(es) ${PROCESS} is NOT running"
        exit 1
      fi
    fi
  else
    logit error "${SCRIPTNAME}(check_process) - Process(es) ${PROCESS} is NOT running - no pid file"
    exit 1
  fi
}

# call_ggsci is a generic routine that executes a ggsci command
call_ggsci () {
  ggsci_command=$1
  ggsci_output=`${GGS_HOME}/ggsci << EOF
${ggsci_command}
exit
EOF`
}

stop_everything () {
  # Before starting, make sure everything is shut down and process files are removed
  # attempt a clean stop for all non-manager processes
  logit info "${SCRIPTNAME}(stop_everything) - Stopping all processes"
  call_ggsci 'stop er *'
  # ensure everything is stopped
  call_ggsci 'stop er *!'
  # in case there are lingering processes
  call_ggsci 'kill er *'
  # stop Manager without (y/n) confirmation
  call_ggsci 'stop manager!'
  # Remove the process files:
  rm -f $GGS_HOME/dirpcs/MGR.pcm
  rm -f $GGS_HOME/dirpcs/*.pce
  rm -f $GGS_HOME/dirpcs/*.pcr
}
case $1 in
'start')
  # stop all GG processes and remove process files
  logit info "${SCRIPTNAME} - start - Starting all processes"
  stop_everything
  sleep ${start_delay_secs}
  # Now can start everything...
  # start Manager
  logit info "${SCRIPTNAME} - start - Starting Manager, autostarting processes"
  call_ggsci 'start manager'
  # there is a small delay between issuing the start manager command
  # and the process being spawned on the OS - wait before checking
  sleep ${start_delay_secs}
  # start Extracts or Replicats
  call_ggsci 'start er *'
  # check whether Manager is running and exit accordingly
  logit info "${SCRIPTNAME} - start - Checking Manager status"
  check_process mgr
  sleep ${start_delay_secs}
  # Check whether Extract is running
  logit info "${SCRIPTNAME} - start - Checking GoldenGate statuses"
  check_process ext
  check_process dpump
  ;;
'stop')
  # stop all GG processes and remove process files
  logit info "${SCRIPTNAME} - stop - Stopping all processes"
  stop_everything
  # exit success
  exit 0
  ;;
'check')
  logit info "${SCRIPTNAME} - check - Checking all processes"
  check_process mgr
  check_process ext
  check_process dpump
  check_process rep
  ;;
'clean')
  # stop all GG processes and remove process files
  logit info "${SCRIPTNAME} - clean - Stopping all processes"
  stop_everything
  # exit success
  exit 0
  ;;
'abort')
  # stop all GG processes and remove process files
  logit info "${SCRIPTNAME} - abort - Stopping all processes"
  stop_everything
  # exit success
  exit 0
  ;;
esac
References
- Oracle GoldenGate Administration Guide, version 11.2.1.0.1
- Oracle GoldenGate Oracle Installation and Setup Guide, version 11.2.1.0.1
- Oracle GoldenGate Reference Guide, version 11.2.1.0.1
- Oracle Database SecureFiles and Large Objects Developer's Guide (DBFS)
- Oracle Clusterware Administration and Deployment Guide
- Oracle Maximum Availability Architecture Web site: http://www.otn.oracle.com/goto/maa
Oracle GoldenGate on Oracle Exadata Database Machine Configuration
January 2013
Author: Stephan Haisley
Contributing Authors: MAA team

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com
Copyright © 2013, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd.