
Configuring HP Serviceguard Toolkit for Oracle Data Guard

Technical white paper

Table of contents
Introduction
Terms and definitions
Oracle Data Guard overview
    Physical and logical standby databases
    Active Data Guard
    Data Guard Broker
    Role transitions
Serviceguard support for Oracle Data Guard
    Dependencies
    Supported configurations
    Continentalclusters environment
    Metrocluster and EDC configurations
    Metrocluster in Continentalclusters environment
    Supporting multiple instances of single-instance ODG configuration
    HA for Data Guard Broker
Installation and configuration of the toolkit
    Setting up the application
    Setting up the toolkit
    ODG package configuration example
    Adding the package to the cluster
    ODG maintenance
Troubleshooting
Known problems and workarounds
For more information
Appendix A
    Configuring ODG example
    Preparing the primary database
    Creating a physical standby database

Introduction
This document describes how the HP Serviceguard Toolkit for Oracle Data Guard assists in easy integration of Oracle Data Guard (ODG) with HP Serviceguard. ODG is the host-based data-replication software for Oracle Database. It provides management, monitoring, and automation features to create and maintain one or more standby databases to protect data from failures, disasters, human error, and data corruption. To provide High Availability (HA) so that data replication continues in the face of failures, ODG can be deployed in a Serviceguard cluster.

Terms and definitions


Term    Definition
ASM     Automatic Storage Management
ECMT    Enterprise Cluster Master Toolkit
EDC     Extended Distance Cluster
LVM     Logical Volume Manager
MNP     Multi-node Package
ODG     Oracle Data Guard
RAC     Real Application Clusters

Oracle Data Guard overview


Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruption. Data Guard maintains these standby databases as transactionally consistent copies of the production database. If the production database becomes unavailable because of a planned or unplanned outage, Data Guard can switch any standby database to the production role, reducing the downtime associated with the outage. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability. A Data Guard configuration consists of one production database, known as the primary database, and up to nine standby databases.

Physical and logical standby databases


A standby database can be either a physical standby database or a logical standby database.

A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same. A physical standby database is kept synchronized with the primary database through Redo Apply, which recovers the redo data received from the primary database and applies it to the physical standby database.

A logical standby database contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the redo data received from the primary database into SQL statements and then executes those statements on the standby database. A logical standby database can be used for other business purposes in addition to disaster recovery; users can access a logical standby database for queries and reporting at any time.

The standby databases in a Data Guard configuration can be a combination of physical and logical standby databases.

Active Data Guard


Active Data Guard is a feature introduced in Oracle Database 11g. Oracle Active Data Guard enhances quality of service by offloading resource-intensive activities from a production database to one or more synchronized standby databases. It enables read-only access to a physical standby database for queries, sorting, reporting, Web-based access, and so on, while changes received from the production database are continuously applied. Oracle Active Data Guard also enables the use of fast incremental backups when offloading backups to a standby database, and can provide additional benefits of high availability and disaster protection against planned or unplanned outages at the production site.

Data Guard Broker


The Data Guard broker is a distributed management framework that automates the creation, maintenance, and monitoring of Data Guard configurations. The broker's interfaces improve usability and centralize management and monitoring of the Data Guard configuration.

Role transitions
An Oracle database can operate in one of two roles: primary or standby. Using Data Guard, the role of a database can be changed with either a switchover or a failover operation.

Switchover: This operation allows the primary database to switch roles with one of its standby databases. There is no data loss during a switchover. After a switchover, each database continues to participate in the Data Guard configuration in its new role.

Failover: This operation changes a standby database to the primary role in response to a primary database failure. If the primary database was not operating in either maximum protection mode [1] or maximum availability mode [2] before the failure, some data loss may occur. If Flashback Database [3] is enabled on the primary database, it can be reinstated as a standby for the new primary database once the cause of the failure is corrected.
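For orientation, the SQL*Plus statements underlying these role transitions for a physical standby look roughly as follows. This is a hedged sketch of the standard Oracle commands only; the toolkit described in this paper automates role management only in the Continentalclusters scenarios discussed later.

-- Switchover: first on the primary, then on the old standby
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Failover: on the standby, apply all available redo, then assume the primary role
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;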

Serviceguard support for Oracle Data Guard


Providing HA for ODG is essential for mission-critical businesses that deploy ODG in environments with a wide range of applications. Integrating ODG with Serviceguard using the toolkit has the following additional advantages:

1. It provides HA for the Data Guard processes and the Data Guard broker (if used) for both the primary and the standby databases. Serviceguard has built-in monitoring capabilities to monitor system resources such as networks, volume groups, and file systems, and to initiate failover when a failure is detected.

2. Data Guard provides disaster protection and recovery only for Oracle databases. Data Guard integration with Serviceguard Extended Distance Cluster (EDC), Metrocluster, or Continentalclusters provides disaster protection and recovery for Oracle databases as well as for the applications that use the databases.
Serviceguard has extensive dependency capabilities that allow these application interdependencies to be built in for a complete environment or deployment. This is advantageous for the complete application stack, as opposed to accounting for the Oracle database alone.

3. Serviceguard's robust failover mechanism can be extended to provide reliable automatic Data Guard role management (failover and switchover) operations.

4. Serviceguard is integrated with Insight Dynamics - VSE for Integrity servers. This enables workload management, Instant Capacity, and other VSE technologies to be controlled in concert with Serviceguard. The ODG Toolkit integration similarly allows users to take advantage of these capabilities for failure scenarios that affect ODG replication.

[1] This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits.
[2] This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database.
[3] Flashback Database removes the need to re-create the primary database after a failover. Flashback Database is similar to conventional point-in-time recovery in its effects, enabling you to return a database to its state at a time in the recent past.

The Serviceguard Toolkit for Oracle Data Guard consists of a set of shell scripts that are used to start, stop, and monitor ODG primary and standby database instances. In the case of a single-instance database, this toolkit leverages scripts from the ECMT Oracle Toolkit to start, stop, and monitor the database, the listener, and the ASM instances.

Dependencies
The ODG Toolkit requires the ECMT Oracle Toolkit; together they provide HA for a single-instance ODG configuration.

NOTE: For information about supportability and compatibility with various versions of Serviceguard and HP-UX, refer to the supportability matrix available at http://www.hp.com/go/hpux-serviceguard-docs

Supported configurations
Single-instance Oracle database

Figure 1: ODG Toolkit in a single-instance Oracle database environment

[The figure shows two two-node Serviceguard clusters: a package for the primary DB running in SG Cluster 1 (Node 1/Node 2), a package for the standby DB running in SG Cluster 2 (Node 1/Node 2), and Data Guard replication flowing from the primary DB to the standby DB.]

The ODG Toolkit can be implemented as a combinational package with the ECMT Oracle Toolkit. A combinational package is one in which two applications are packaged together by combining their respective Serviceguard modules into one package. The ODG Toolkit has a hard dependency on the ECMT Oracle Toolkit. Hence, customers who do not already have the ECMT product must purchase it along with the ODG Toolkit.

As a combinational package, the Oracle database is brought up first using the ECMT Oracle Toolkit; the Data Guard processes are then started by the ODG Toolkit, after which the application is monitored. Because the Oracle database and Data Guard are packaged together, the package fails over if either the Oracle database or any of the Data Guard processes fail. Note that because the primary database and all the standby databases are configured in separate clusters, a separate combinational package must be created for each primary database and each standby database. Configuring an ODG Toolkit package involves two scenarios:
1. If a customer already has an ECMT Oracle Toolkit package running and wants to convert the Oracle database into a Data Guard setup, then:

- If the ECMT Oracle package is a legacy-style package, it must be migrated to modular style, because the ODG Toolkit does not support legacy-style packaging. You can achieve this using the cmmigratepkg tool and then create a new combinational package. However, migrating the legacy package to modular style is not the recommended approach; the recommended approach is to discard the legacy package and create the Data Guard package afresh in modular style.

- If the ECMT Oracle package is a modular-style package, the Data Guard module can be inserted into it using the following command:

  cmmakepkg -i <pkg_ascii_file> -m <module_file_name> <output_file_name>

  where:
  pkg_ascii_file is the package file of the existing ECMT Oracle package. It can be generated using the command: cmgetconf -p <pkg_name> <output_filename>
  module_file_name is the name of the module to be included in the running package. For the Data Guard package, its value is tkit/dataguard/dataguard.
  output_file_name is the template file that is generated with the values of the ECMT Oracle database module populated in it. Edit this file, enter values for the Data Guard specific package attributes, and then apply the package using the cmapplyconf command.
2. If the customer does not have an ECMT Oracle package running and wants to create the Data Guard package afresh, the command to create a combinational package is:

  cmmakepkg -m ecmt/oracle/oracle -m tkit/dataguard/dataguard <pkg_file_name>

  where:
  ecmt/oracle/oracle is the Oracle toolkit module shipped with the ECMT Oracle Toolkit.
  tkit/dataguard/dataguard is the Data Guard toolkit module shipped with the ODG Toolkit.
  pkg_file_name is the template file that is generated. Edit this file, enter values for both the ECMT Oracle specific and the Data Guard specific package attributes, and then apply the file using the cmapplyconf command to create the package.
NOTE: The package parameter START_MODE must be set to "mount" when the ECMT Oracle Toolkit is used in combination with the ODG Toolkit.

NOTE: In the case of Active Data Guard, the standby database is started up to the open state. Set the ACTIVE_STANDBY parameter to "yes" to use this feature. Active Data Guard is supported with Oracle Database version 11gR1 or later.
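For illustration, a minimal end-to-end flow for scenario 1 with a modular-style ECMT package might look like the following. The package name oracpkg and the file names are assumptions for this sketch, not values mandated by the toolkit:

# Capture the configuration of the running ECMT Oracle package
cmgetconf -p oracpkg oracpkg.conf
# Regenerate the template with the Data Guard module added; the existing
# ECMT Oracle attribute values are carried over into the new template
cmmakepkg -i oracpkg.conf -m tkit/dataguard/dataguard dgpkg.conf
# Edit dgpkg.conf (set START_MODE to mount, fill in the Data Guard
# attributes), then validate and apply the package
cmcheckconf -P dgpkg.conf
cmapplyconf -P dgpkg.conf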

Continentalclusters environment
Figure 2: ODG Toolkit in a Continentalclusters environment

[The figure shows a Continentalcluster comprising SG Cluster 1 (the Primary Cluster), where the Primary package runs on Node 1/Node 2 with the primary DB, and SG Cluster 2 (the Recovery Cluster), where the Data Receiver package and the Recovery package run on Node 1/Node 2 with the standby DB; the three packages together form a Recovery group.]

Figure 2 depicts a typical Continentalclusters environment, which consists of a Primary Cluster (SG Cluster 1) and a Recovery Cluster (SG Cluster 2). Although both clusters can have more than two nodes, this example uses two nodes in each cluster. In the Primary Cluster, Data Guard is configured using the ODG Toolkit, and the Oracle database is placed on a disk that is shared between the nodes in the Primary Cluster. The standby database is created in the Recovery Cluster and is also placed on a shared disk. A Recovery group is created in the Continentalclusters environment with the following three packages in it:
1. Primary package: This package is created using the ODG Toolkit on the Primary Cluster. It brings up the Oracle database on the Primary Cluster as a primary database and starts monitoring the primary database processes. If the primary database fails on Node 1, the Primary package fails over to another node within the Primary Cluster.

2. Data Receiver package: This package is created using the ODG Toolkit on the Recovery Cluster. It brings up the Oracle database on the Recovery Cluster in standby mode and starts monitoring the standby database processes. If the standby database fails, this package halts the Oracle database and fails over to another node within the Recovery Cluster.

3. Recovery package: This package is also created using the ODG Toolkit on the Recovery Cluster. It is configured to bring up the Oracle database on the Recovery Cluster in primary mode. Initially, this package is in the halted state.

All three packages are configured as failover packages.

Initially, the Primary package and the Data Receiver package are up and running and the Recovery package is halted. The Primary package brings up the Oracle database on the Primary Cluster in primary mode, and the Data Receiver package brings up the Oracle database on the Recovery Cluster in standby mode. Thus, a typical Data Guard environment is set up, with data replicated from the primary database on the Primary Cluster to the standby database on the Recovery Cluster.

When the primary database fails, Serviceguard on the Primary Cluster fails the database over to another node within the Primary Cluster, providing HA for the primary database. Similarly, if the standby database fails, Serviceguard on the Recovery Cluster fails the database over to another node within the Recovery Cluster.

When the Primary Cluster goes down, the administrator must run the cmrecovercl command on the Recovery Cluster to bring up the Recovery package. This command first halts the Data Receiver package, which halts the standby database, and then starts the Recovery package, which brings up the database as a primary database. Note that in this case role management is handled by the ODG Toolkit. Restoring a Continentalclusters configuration to its original state is a manual process; perform the following steps:

1. Halt the Recovery package.
2. Start the Primary package: this brings up the database on the Primary Cluster as a primary database.
3. Start the Data Receiver package: this brings up the database on the Recovery Cluster as a standby database.

In a Continentalclusters environment, the AUTO_RUN attribute of the packages in the Recovery group must be disabled, because Continentalclusters requires that these packages not start automatically when the cluster comes up. However, with this attribute disabled, local failover of a package within the cluster will not occur on package failure. In that case, package switching can be enabled manually using the cmmodpkg command after the packages in the Recovery group are started (see the sketch below). Multiple standby databases cannot be supported in a Continentalclusters setup, because there can be only one Data Receiver package and only one Recovery package within a Recovery group. However, the ODG Toolkit in Continentalclusters does not prevent users from configuring other standby databases that are placed outside the Continentalclusters setup.
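A hedged command sketch of these operational steps follows; the package names primary_pkg and datarcv_pkg are assumptions for illustration:

# Re-enable local package switching after the Recovery group has started
cmmodpkg -e primary_pkg      # run on the Primary Cluster
cmmodpkg -e datarcv_pkg      # run on the Recovery Cluster
# If the Primary Cluster is lost, fail over the Recovery group
cmrecovercl                  # run on the Recovery Cluster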

Metrocluster and EDC configurations


Metrocluster is an HP high availability product for Serviceguard customers who require integrated disaster recovery solutions. HP Metrocluster uses Serviceguard clustering technology to form a single cluster of systems located at different data centers separated by metropolitan distances. While Metrocluster provides storage-based replication of data across metropolitan distances, organizations often want to build in additional data replicas, either locally or at other data centers. Having both storage replication and ODG replication helps safeguard customer data, and the integration of ODG with Metrocluster or EDC gives organizations this choice.

Single-instance Data Guard configuration in Metrocluster

Figure 3: Single-instance Data Guard configuration in Metrocluster with standby database residing outside Metrocluster

[The figure shows a Metrocluster with Data Center 1 (primary site, Nodes 1-2) running the Primary package and primary DB, Data Center 2 (recovery site, Nodes 3-4) holding a copy of the primary DB maintained by Metrocluster data replication, and a third location configured as a separate two-node Serviceguard cluster running the Standby package and standby DB.]

In Figure 3, Metrocluster is configured with two data centers: one as the primary site and the other as the recovery site. The Data Guard primary database instance is configured in Data Center 1 such that the archived logs and the primary database reside on a disk that is shared locally between the nodes in that data center. The primary database instance is brought up on any one node in Data Center 1, while the other nodes in the data center serve as backup nodes for that instance. Array-based replication technology enables data written to the storage array in Data Center 1 to be replicated to the storage array in Data Center 2. It is not necessary to configure a standby database in Data Center 2, because data replication is taken care of by the Metrocluster solution.

The standby database instance is configured at a third location (say, site 3) that is not part of the Metrocluster environment. The third location is a separate Serviceguard cluster with two nodes and a shared disk. The standby database instance is brought up on any one node at the third location, and the database and the archived logs are located on the shared disk. If high availability is not desired for the standby database, the third location need not be configured as a Serviceguard cluster; in that situation, there is only one server at the third location, and the standby database instance has to be brought up manually.

In case of a disaster, if Data Center 1 goes down, the primary database instance fails over to Data Center 2 and continues to function as a primary database. Because Metrocluster has replicated the data from Data Center 1 to Data Center 2, the primary database starts at Data Center 2 without problems, using the replicated data on the shared disk. Once the primary database instance comes up at Data Center 2, it continues to send archived logs to the standby database at the third location.

Data Guard in an EDC environment

Figure 4: ODG in EDC environment in which both the primary and the standby are single-instance databases

[The figure shows an EDC spanning Site A (Nodes 1-2), which runs the package for the primary DB, and Site B (Nodes 3-4), which runs the package for the standby DB.]

Figure 4 shows a Data Guard setup in an EDC environment. Both the primary and standby databases are configured as single-instance Serviceguard failover packages inside the EDC. All four nodes are in the same Serviceguard cluster. Failover from Site A to Site B and vice versa is not allowed. When Site A goes down, the role of the standby on Site B has to be changed to primary manually, since role transitions are not supported in the first release.

Metrocluster in Continentalclusters environment


Figure 5: Single-instance Data Guard setup in a Continentalclusters environment where the Primary Cluster is configured as a Metrocluster
[The figure shows a Continentalcluster in which the Primary Cluster is a Metrocluster spanning Data Center 1 (primary site, Nodes 1-2, Primary package and primary DB) and Data Center 2 (recovery site, Nodes 3-4), with Metrocluster data replication between the two data centers; the Recovery Cluster on a third site (Nodes 1-2) runs the Data Receiver package and the Recovery package with the standby DB.]

Figure 5 shows a Continentalclusters setup with two clusters spread over a total of three sites. The Primary Cluster is configured as a Metrocluster spread over two sites that are geographically dispersed within the confines of a metropolitan area. The Recovery Cluster is configured on the third site as a Serviceguard cluster. The primary database of the Data Guard setup is configured as a Primary package in the Primary Cluster (the Metrocluster instance). The Primary Cluster has two data centers (sites), each of which has a storage array for storing the data. Metrocluster enables data written at the primary site to be replicated to the recovery site, so the primary database instance can fail over to any node within the Primary Cluster.


The standby database is configured as a Data Receiver package in the Recovery Cluster. It receives the archived redo logs sent by the primary database instance, irrespective of the node on which the primary database is running. Cross-subnet failover is allowed across the sites. When the Primary Cluster fails, the user must run the cmrecovercl command to halt the Data Receiver package and start the Recovery package. Once this command is run, the Recovery package starts and brings up the Oracle database as a primary database; thereafter no standby database instance is running, because the role of the standby database has been changed to primary. Multiple standby configurations are not supported within a Continentalclusters setup. However, the ODG Toolkit in Continentalclusters does not prevent customers from configuring standby databases outside the Continentalclusters environment.

Supporting multiple instances of single-instance ODG configuration


Figure 6: Multiple Data Guard instances in one Serviceguard cluster
[The figure shows two two-node Serviceguard clusters: SG Cluster 1 runs Package 1 and Package 2, each with its own primary DB, and SG Cluster 2 runs Package 3 and Package 4, each with the corresponding standby DB.]

To support multiple Data Guard instances in one Serviceguard cluster, all the instances must be configured so that they function independently. This means they should have different volume groups for storing the database and redo logs, and they should use different Oracle database listeners and different IP addresses or ports (an illustrative listener.ora fragment follows). Figure 6 shows a supported configuration in which two two-node Serviceguard clusters provide HA for the Data Guard configurations. Two single-instance Data Guard primary databases are configured in Cluster 1, and one standby database is configured in Cluster 2 for each of the primary databases. Note that the two clusters are independent of each other, and there is no package failover between them. The primary databases in Cluster 1, the standby databases in Cluster 2, and their corresponding redo logs are located on shared disks so that each can be accessed from any node in its respective cluster. This enables failover of the primary and standby packages within each cluster, providing HA for both.
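For example, two independent Data Guard instances in the same cluster might register separate listeners on distinct relocatable package IP addresses and ports. The following listener.ora fragment is purely illustrative; the listener names, addresses, and ports are assumptions:

LISTENER_ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.0.2.10)(PORT = 1521)))

LISTENER_ORCL2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.0.2.11)(PORT = 1522)))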

HA for Data Guard Broker


HA for Data Guard Broker is not supported on a single-instance database.


Installation and configuration of the toolkit


Setting up the application
ODG is included with the Enterprise Edition of the Oracle Database software. Oracle must be installed on all the nodes of the cluster by the user "oracle", and shared storage has to be configured. For information on configuring ODG, refer to Appendix A.

NOTE: In the event of a package failover to the adoptive node, make sure that the database instance on the adoptive node can communicate with the rest of the Data Guard configuration. This requires appropriate configuration of the listener and network services, which can be done using Oracle's netmgr utility.
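For instance, when the primary package uses a relocatable IP address, the standby site's tnsnames.ora should resolve the primary's net service name through that relocatable address, so that redo transport keeps working after a local failover of the primary package. A minimal illustrative entry follows; the net service name PRIMDB, the address, and the port are assumptions:

PRIMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.0.2.20)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = primdb))
  )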

Setting up the toolkit


This toolkit has to be used in combination with the ECMT Oracle module for a single-instance Oracle database; in this case, ensure that ECMT B.06.00 has been installed. After installing the ODG Toolkit, two scripts (hadg.sh and hadg.conf) and a README file are installed in the /opt/cmcluster/toolkit/dataguard directory. Two more scripts (tkit_module.sh and tkit_gen.sh) and one file (dataguard.1), which are used for modular packaging, are installed in the /etc/cmcluster/scripts/tkit/dataguard and /etc/cmcluster/modules/tkit/dataguard directories, respectively. These scripts are:

- hadg.conf (user configuration file)
This file contains a list of predefined variables that the user must customize for a particular database instance. It is a configuration file that is read by the toolkit script hadg.sh. The following variables are contained in hadg.conf:

TKIT_DIR: This directory is synonymous with the package directory and holds the toolkit configuration file. This parameter directs cmapplyconf to generate the hadg.conf file under this directory. To put the toolkit into maintenance mode, create the dataguard.debug file under this directory.

ORACLE_HOME: The base directory where Oracle is installed.

ORACLE_ADMIN: User name of the Oracle database administrator. This is used for starting and stopping the database. For example: ORACLE_ADMIN=oracle

SID_NAME: The Oracle session name. This uniquely identifies an Oracle database instance.

START_MODE: The startup mode for the Oracle database. The default value is "open"; the possible values are "nomount", "mount", and "open".

NOTE: For the ODG Toolkit, always specify the value of this parameter as "mount".

MAINTENANCE_FLAG: Used to bring the toolkit into maintenance mode. If set to "yes", the maintenance feature is enabled in the toolkit. The Serviceguard Toolkit for ODG then looks for a file named dataguard.debug in the package directory. If the file exists and the maintenance feature is enabled, monitoring is paused and the database instance may be brought down for maintenance. The package is not failed over to the adoptive node even though the instance has been brought down for maintenance. After the maintenance work, make sure that the instance is brought up properly, then delete the dataguard.debug file from the package directory; the toolkit resumes monitoring the database server application.

NOTE: If the maintenance flag is set to "no", this feature is not available and the toolkit cannot be brought into maintenance mode.

MONITOR_INTERVAL: The time interval, in seconds, that the monitor script waits between checks that the Oracle instance is running. The default value is 30 seconds.

TIME_OUT: The time for which the toolkit waits for completion of a normal shutdown before initiating a forceful halt of the application. The TIME_OUT variable protects against a worst-case scenario in which a hung database or ASM instance prevents the halt script from completing, thereby preventing the standby node from starting the instance. TIME_OUT has no effect on package failover times. The default value is 30 seconds.

ACTIVE_STANDBY: Determines whether the database instance is an active standby. The Active Data Guard option available with Oracle Database 11g Enterprise Edition enables you to open a physical standby database for read-only access. This parameter can be set to "yes" or "no". The default value is "no".

DG_BROKER: Specifies whether the ODG broker is to be used. ODG broker management is not supported in this release, so this parameter must be set to "no". Possible values are "yes" and "no"; the default is "no".

START_STANDBY_AS_PRIMARY: Specifies whether the standby database has to be started as the primary database by failing over the primary database. This parameter can be used in a Continentalclusters environment; its value must be "yes" in the package configuration file of the Recovery package.

ALERT_MAIL_ID: Specifies the email address to which alerts are sent.

- Main script (hadg.sh)
This script contains a list of internally used variables and functions that support starting and stopping an ODG instance. It is called by tkit_module.sh to perform the following:
  - On package startup, it starts the Data Guard instance (primary/standby) and launches the monitor processes.
  - On package halt, it stops the Data Guard instance and the monitor process.

NOTE: The following three files are used only with the modular method of packaging.

- Attribute Definition File (dataguard.1)
The ADF is used to generate a package ASCII template file.

- Module script (tkit_module.sh)
This script is called by the Master Control Script and acts as an interface between the Master Control Script and the toolkit interface script (hadg.sh). It is also responsible for calling the toolkit configuration file generator script (described below).

- Toolkit configuration file generator script (tkit_gen.sh)
This script is called by the module script when the package configuration is applied using cmapplyconf, to generate the user configuration file (hadg.conf) in the package directory (TKIT_DIR).
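For reference, a generated hadg.conf might look like the following sketch. The values shown here (paths, SID, and mail address) are assumptions modeled on the example configuration later in this paper, not defaults shipped with the toolkit:

# hadg.conf - generated in TKIT_DIR by cmapplyconf
TKIT_DIR=/etc/cmcluster/dataguard
ORACLE_HOME=/var/orahome
ORACLE_ADMIN=oracle
SID_NAME=ORCL
START_MODE=mount             # always "mount" for the ODG Toolkit
MAINTENANCE_FLAG=yes
MONITOR_INTERVAL=30
TIME_OUT=30
ACTIVE_STANDBY=no
DG_BROKER=no
START_STANDBY_AS_PRIMARY=no
ALERT_MAIL_ID=dba@example.com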


ODG package configuration example


This section explains an ODG package configuration example. Follow the instructions in the chapter "Building an HA Cluster Configuration" in the manual Managing HP Serviceguard to create the logical volume infrastructure on shared disks. The disk must be available to all clustered nodes that will be configured to run the ODG Toolkit. Create file systems on all logical volumes in the volume groups.

The ODG Toolkit can be configured in one of the following ways:
a) Install directory method: the scripts remain in the installation directory.
b) Configuration directory method: the user copies the scripts (including the contents of subdirectories) from the installation directory to a configuration directory and defines this location in the TKIT_DIR parameter in the package configuration file. The user can modify the scripts in the configuration directory to add any specific requirements. Serviceguard first tries to use the hadg.sh script in the configuration directory; if the script is not found there, it takes it from the installation directory.

This example uses the install directory method and illustrates the creation of a package for an ODG configuration using LVM in a single-instance environment.

a. Creating a package configuration
Create two packages: one for the primary database on the Primary Cluster and the other for the standby database on the Standby Cluster. Create a directory in /etc/cmcluster, for example "dataguard" (this directory will become TKIT_DIR), and cd to this directory. Run the following command (once per package) to create the package configuration file templates. Note that the ODG Toolkit has to be used in combination with the ECMT Toolkit (for a single-instance database); a combinational package is one in which two applications are packaged together by combining their respective Serviceguard modules into one package. In the example below, the ODG Toolkit module is used with the ECMT Oracle module:

# cmmakepkg -m ecmt/oracle/oracle -m tkit/dataguard/dataguard dgpkg.conf

b. Specifying configuration parameters in the package
Once the package configuration file has been created, the user must specify various parameter values. Only the parameters that are to be modified in dgpkg.conf for this configuration are shown here. Note that the package configuration file below contains attributes of both the ECMT Oracle Toolkit and the ODG Toolkit.

NOTE: The following attributes are specific to the Oracle Toolkit in ECMT.


# package_name is the name that is used to identify the package.
# Package names must be unique within a cluster.
package_name                            dgpkg

# package_description specifies the application that the package runs.
package_description                     "Serviceguard Package"

# package_type specifies the behavior for this package.
package_type                            failover

# run_script_timeout is the number of seconds allowed for the package to start.
# halt_script_timeout is the number of seconds allowed for the package to halt.
run_script_timeout                      300
halt_script_timeout                     610

# script_log_file is the full path name for the package control script log file.
script_log_file                         /etc/cmcluster/dataguard/pkg.log

# Define the package configuration directory
ecmt/oracle/oracle/TKIT_DIR             /etc/cmcluster/dataguard

# Define the instance type
ecmt/oracle/oracle/INSTANCE_TYPE        database

# Define the Oracle home
ecmt/oracle/oracle/ORACLE_HOME          /var/orahome

# Define the user name of the Oracle database administrator
ecmt/oracle/oracle/ORACLE_ADMIN         oracle

# Define the Oracle session name
ecmt/oracle/oracle/SID_NAME             ORCL

# Define the Oracle database startup mode
ecmt/oracle/oracle/START_MODE           mount

# Define whether the database instance uses ASM or not
ecmt/oracle/oracle/ASM                  no

# Define the ASM disk groups used by the database instance
#ecmt/oracle/oracle/ASM_DISKGROUP

# Define the volume groups used in the ASM disk groups for the
# database instance
#ecmt/oracle/oracle/ASM_VOLUME_GROUP

# The ASM home
#ecmt/oracle/oracle/ASM_HOME

# Define the user name of the Oracle ASM administrator
ecmt/oracle/oracle/ASM_USER             oracle

# The ASM session name
#ecmt/oracle/oracle/ASM_SID

# Define whether the configured listener has to be started with the server
ecmt/oracle/oracle/LISTENER             yes

# Define the Oracle listener name(s)
ecmt/oracle/oracle/LISTENER_NAME        LISTENER_ORCL

# Define the listener password(s)
#ecmt/oracle/oracle/LISTENER_PASS

#ecmt/oracle/oracle/LISTENER_RESTART

NOTE: The following are the service commands for the combinational package.

service_name                            oracle_service_test
service_cmd                             "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor"
service_restart                         none
service_fail_fast_enabled               no
service_halt_timeout                    300

service_name                            oracle_listener_service_test
service_cmd                             "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor_listener"
service_restart                         none
service_fail_fast_enabled               no
service_halt_timeout                    300

service_name                            oracle_hang_service_test
service_cmd                             "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_hang_monitor 30 failover"
service_restart                         none
service_fail_fast_enabled               no
service_halt_timeout                    300

service_name                            dataguard_service_test
service_cmd                             "$SGCONF/scripts/tkit/dataguard/tkit_module.sh dataguard_monitor"
service_restart                         none
service_fail_fast_enabled               no
service_halt_timeout                    300

NOTE: The following attributes are specific to the ODG Toolkit.

# Define ACTIVE_STANDBY
tkit/dataguard/dataguard/ACTIVE_STANDBY             no

# Define DG_BROKER
tkit/dataguard/dataguard/DG_BROKER                  no

# Define START_STANDBY_AS_PRIMARY
tkit/dataguard/dataguard/START_STANDBY_AS_PRIMARY  no

# Define the email address for sending alerts
#tkit/dataguard/dataguard/ALERT_MAIL_ID

# vg specifies which volume groups are used by this package.
vg                                      vgora

# fs_name, fs_directory, fs_mount_opt, fs_umount_opt, fs_fsck_opt,
# and fs_type specify the file systems that are used by this package.
fs_name                                 /dev/vgora/lvol1
fs_directory                            /oradb
fs_type                                 vxfs
fs_mount_opt                            "-o rw"
#fs_umount_opt
#fs_fsck_opt

Adding the package to the cluster


After the setup is complete, add the package to the Serviceguard cluster and start it up:

$ cmapplyconf -P dgpkg.conf
$ cmmodpkg -e -n <node1> -n <node2> dgpkg
$ cmmodpkg -e dgpkg

For more information on adding a package to the cluster, see the Managing HP Serviceguard document available at: http://www.hp.com/go/hpux-serviceguard-docs
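The package state can then be checked; for example (a hedged sketch, using the package name from this example):

$ cmviewcl -v -p dgpkg     # verify the package and its services are up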

ODG maintenance
There might be situations where the ODG database instances have to be taken down for maintenance (for example, to change the configuration) without having the package migrate to the standby node. The toolkit maintenance feature is enabled only when the configuration variable MAINTENANCE_FLAG is set to "yes" in the toolkit configuration file. Note that the toolkit maintenance feature is different from Serviceguard's package maintenance feature. The following procedure should be followed:


Note: This example assumes that the package name is dgpkg, the package directory is /etc/cmcluster/pkg/dgpkg, and ORACLE_HOME is configured as /orahome.

1. Disable failover of the package with the cmmodpkg command:
   $ cmmodpkg -d dgpkg

2. Pause the monitor script by creating an empty file /etc/cmcluster/pkg/dgpkg/dataguard.debug:
   $ touch /etc/cmcluster/pkg/dgpkg/dataguard.debug
   The toolkit monitor scripts (both the database instance and listener monitoring scripts), which continuously monitor the ODG daemon processes, now stop monitoring those processes. The message "Serviceguard Toolkit for ODG pausing Data Guard monitoring and entering maintenance mode" appears in the Serviceguard package log file.

3. If required, stop the database instance(s):
   $ cd /etc/cmcluster/pkg/dgpkg/
   $ $PWD/hadg.sh stop

4. Perform maintenance actions (for example, database maintenance).

5. Restart the Oracle database instance if you manually stopped it before maintenance:
   $ cd /etc/cmcluster/pkg/dgpkg/
   $ $PWD/hadg.sh start

6. Allow the monitoring scripts to continue normally:
   $ rm -f /etc/cmcluster/pkg/dgpkg/dataguard.debug
   The message "Starting Oracle Data Guard monitoring again after maintenance" appears in the Serviceguard package control script log.

7. Enable package failover:
   $ cmmodpkg -e dgpkg

Note: If a package failure occurs during maintenance operations, the package does not automatically fail over to an adoptive node; you must manually start the package on the adoptive node. For more information, see the Managing HP Serviceguard guide available at http://www.hp.com/go/hpux-serviceguard-docs

Troubleshooting
This section provides guidelines to verify that the ODG package has been configured properly. The following steps help the user troubleshoot possible causes of a package failure:

1. Verify the ODG setup:
To verify a Data Guard configuration, check whether the standby database is receiving the redo logs and whether the RFS process is running on the standby database, by querying the V$MANAGED_STANDBY view (see the sketch below). If the standby site is not receiving the logs, obtain information about the archiving status of the primary database by querying the V$ARCHIVE_DEST view; check especially for error messages.
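For example, a query along the following lines on the standby database shows the managed-recovery processes; the view and columns are standard, though the exact output depends on your system:

SQL> SELECT process, status, thread#, sequence#
  2> FROM v$managed_standby;

An RFS row (with a status such as RECEIVING or IDLE) indicates that the standby is receiving redo from the primary.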


For example, on the primary database, enter:

SQL> SELECT dest_id "ID",
  2> status "DB_status",
  3> destination "Archive_dest",
  4> error "Error"
  5> FROM v$archive_dest;

ID DB_status Archive_dest                   Error
-- --------- ------------------------------ -----------------------------------
 1 VALID     /vobs/oracle/work/arc_dest/arc
 2 ERROR     standby1                       ORA-16012: Archivelog standby
                                            database identifier mismatch
 3 INACTIVE
 4 INACTIVE
 5 INACTIVE
5 rows selected.

If the output of the query does not help, check the following list of possible issues:
- The service name for the standby instance is not configured correctly in the tnsnames.ora file at the primary site.
- The service name listed in the LOG_ARCHIVE_DEST_n parameter of the primary initialization parameter file is incorrect.
- The LOG_ARCHIVE_DEST_STATE_n parameter specifying the state of the standby archiving destination has the value DEFER.
- The listener.ora file has not been configured correctly at the standby site.
- The listener is not started.
- The standby instance is not started.
- You have added a standby archiving destination to the primary initialization parameter file but have not yet enabled the change.
- You used an invalid backup as the basis for the standby database (for example, you used a backup from the wrong database, or did not create the standby control file using the correct method).
Also, check the Oracle alert log for errors.

2. Verify the toolkit setup:
Check whether the package configuration attributes specified in the package configuration file are valid.

Known problems and workarounds


1. If replication does not happen from the primary database to the standby database, check whether the online redo log files are configured on both the primary and the standby.

2. In a Metrocluster environment where only the primary database is configured within the Metrocluster and the standby database is configured on a third site that is not part of the Metrocluster, data replication can happen only from the site where the primary is running. Even if the other site has better proximity, data replication cannot happen from that site. This is a restriction imposed by Data Guard, not by the ODG Toolkit.

3. The Data Guard broker should not be configured for automatic role transitions. In Continentalclusters scenarios, role management is handled by the ODG Toolkit; in other cases, if the role of any database is changed, the ODG Toolkit package will fail.

4. There are use cases in which only the primary database is configured in a Serviceguard cluster and the standby is not part of any cluster. This is similar to configuring the ECMT Toolkit, which can be run directly by the user. Legacy-style support would be required for this feature, and the ODG Toolkit does not support legacy-style configuration.

5. Multiple standby database instances cannot be supported in a Continentalclusters setup, since there can be only one Recovery package and only one Data Receiver package within the Recovery group. Other standby instances can be configured outside the Continentalclusters environment.

For more information


To learn more, see:
- HP Serviceguard Solutions: http://www.hp.com/go/serviceguardsolutions
- Technical documentation: www.hp.com/go/hpux-serviceguard-docs

Appendix A
Configuring ODG example
ODG is included with the Enterprise Edition and Personal Edition of the Oracle database software. The standby database can be created in two ways:
- Physical standby database
- Logical standby database

Configuring a physical standby database for a single-instance Oracle database is described in this section. Details about creating a logical standby database can be found on the Oracle Web site (http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/create_ls.htm). If a customer already has a database and wants to use Data Guard for replication to a standby site, it is the customer's responsibility to set up the standby in the same state as the primary; customers have to use third-party or Oracle-supplied tools to back up and copy the existing database to the standby site, because Data Guard replicates only updates to the database. The following steps describe, generically, how to configure Data Guard; detailed information about configuring Data Guard is available on the Oracle Web site (http://www.oracle.com).

Preparing the primary database


Before creating the standby database, the primary database must be configured first. The following steps should be followed:

1. Enable force logging:
Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:
SQL> ALTER DATABASE FORCE LOGGING;


2. Create a password file:
Create a password file if one does not already exist. Every database in a Data Guard configuration must use a password file, and the password for the SYS user must be identical on every system for redo data transmission to succeed.

3. Set the primary database initialization parameters:
On the primary database, define the initialization parameters that control log transport services while the database is in the primary role (an illustrative example follows this list).

4. Enable archiving:
If archiving is not enabled, issue the following statements to put the primary database in ARCHIVELOG mode and enable automatic archiving:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

5. It is recommended that the online redo log files be created on the shared disk, so that when a local failover of the primary database instance occurs, the online redo logs are also accessible to the new instance on the adoptive node. Here is a sample SQL statement that adds a new group of redo logs to the database:
SQL> ALTER DATABASE ADD LOGFILE ('<oracle_redo_file_1>.rdo', '<oracle_redo_file_2>.rdo') SIZE 500K;
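As an illustration of step 3, redo transport is controlled by initialization parameters along the following lines. This is a hedged sketch: the DB_UNIQUE_NAME values primdb and stbydb, the archive location, and the ASYNC transport mode are assumptions, not values required by the toolkit:

LOG_ARCHIVE_CONFIG='DG_CONFIG=(primdb,stbydb)'
LOG_ARCHIVE_DEST_1='LOCATION=/oradb/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primdb'
LOG_ARCHIVE_DEST_2='SERVICE=stbydb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stbydb'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE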

Creating a physical standby database


1. Create a backup copy of the primary database data files:
You can use any backup copy of the primary database to create the physical standby database, as long as you have the necessary archived redo log files to completely recover the database. Oracle recommends using the Recovery Manager utility (RMAN). If the backup procedure required you to shut down the primary database, issue the following SQL*Plus statement to start the primary database:
SQL> STARTUP MOUNT;

2. Create a control file for the standby database:
Create the control file for the standby database, and open the primary database to user access, as shown in the following example:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
SQL> ALTER DATABASE OPEN;
3. Prepare an initialization parameter file for the standby database.

4. Copy files from the primary system to the standby system:
Use an operating system copy utility to copy the following files from the primary system to the standby system:
- Backup data files
- Standby control file
- Initialization parameter file

5. Set up the environment to support the standby database:

a. Create a password file. Set the password for the SYS user to the same password used by the SYS user on the primary database. The password for the SYS user on every database in a Data Guard configuration must be identical for redo transmission to succeed.

b. Configure listeners for the primary and standby databases. On both the primary and standby sites, use Oracle Net Manager to configure a listener for the respective databases. To restart the listeners (to pick up the new definitions), enter the following LSNRCTL utility commands on both the primary and standby systems:
% lsnrctl stop
% lsnrctl start
Ensure that the tnsnames.ora file on a node contains entries for all the nodes it needs to communicate with.

c. Enable broken-connection detection on the standby system. Do this by setting the SQLNET.EXPIRE_TIME parameter to 2 minutes in the sqlnet.ora parameter file on the standby system. For example:
SQLNET.EXPIRE_TIME=2

d. Create Oracle Net service names. On both the primary and standby systems, use Oracle Net Manager to create a network service name for the primary and standby databases that will be used by log transport services. The Oracle Net service name must resolve to a connect descriptor that uses the same protocol, host address, port, and SID that you specified when you configured the listeners for the primary and standby databases. The connect descriptor must also specify that a dedicated server be used.

e. Create a server parameter file for the standby database. On an idle standby database, use the SQL CREATE statement to create a server parameter file from the text initialization parameter file:
SQL> CREATE SPFILE FROM PFILE='initstandby.ora';
6. Start the physical standby database:

Perform the following steps to start the physical standby database and Redo Apply.

a. Start the physical standby database. On the standby database, issue the following SQL statement to start the database in read-only mode:
SQL> STARTUP OPEN READ ONLY;

b. Create a new temporary file for the physical standby database. Creating a new temporary file on the physical standby database at this point, rather than later, is beneficial: temporary files enable disk sorting while the database is open in read-only mode and prepare the database for future role transitions. To add temporary files to the physical standby database, perform the following tasks:

1. Identify the table spaces that should contain temporary files by entering the following command on the standby database:
SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES
  2> WHERE CONTENTS = 'TEMPORARY';

TABLESPACE_NAME
--------------------------------
TEMP1
TEMP2

2. Add new temporary files to the standby database. For each table space identified in the previous query, add a new temporary file to the standby database. The following example adds a new temporary file to the TEMP1 table space, with size and reuse characteristics that match the primary database temporary files:
SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE
  2> '/arch1/standby/temp01.dbf'
  3> SIZE 40M REUSE;

c. Start Redo Apply. On the standby database, issue the following command to start Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
This statement automatically mounts the database. It also includes the DISCONNECT FROM SESSION option so that Redo Apply runs in a background session.

d. Test archival operations to the physical standby database. The transmission of redo data to the remote standby location does not occur until after a log switch, which by default occurs when an online redo log file becomes full. To force a log switch so that redo data is transmitted immediately, use the following ALTER SYSTEM statement on the primary database:
SQL> ALTER SYSTEM SWITCH LOGFILE;
7. Verify that the physical standby database is performing properly:
To confirm that redo data is being received on the standby database, first identify the existing archived redo log files on the standby database, then force a log switch and archive a few online redo log files on the primary database, and then check the standby database again. The following steps show how to perform these tasks.

a. Identify the existing archived redo log files. On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

b. Force a log switch to archive the current online redo log file. On the primary database, issue the ALTER SYSTEM ARCHIVE LOG CURRENT statement to force a log switch and archive the current online redo log file group:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

c. Verify that the new redo data was archived on the standby database. On the standby database, query the V$ARCHIVED_LOG view to verify that the redo data was received and archived:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

d. Verify that the new archived redo log files were applied. On the standby database, query the V$ARCHIVED_LOG view to verify that the archived redo log files were applied:
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;


Further preparations
At this point, the physical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can make on the physical standby database:

1. Upgrade the data protection mode:
The Data Guard configuration is initially set up in maximum performance mode (the default). This can be changed to either maximum protection mode or maximum availability mode.

2. Configure standby redo logs:
Standby redo logs are required for standby databases running in maximum protection mode or maximum availability mode. However, configuring standby redo logs is recommended on all standby databases, because during a failover Data Guard can recover and apply more redo data from standby redo log files than from archived redo log files alone. The standby redo logs should exist on both the primary and standby databases and have the same size and names (see the sketch after this list).

3. Enable Flashback Database:
Flashback Database removes the need to re-create the primary database after a failover. It is similar to conventional point-in-time recovery in its effects, enabling you to return a database to its state at a time in the recent past, but it is faster than point-in-time recovery because it does not require restoring data files from backup or the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both.
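A minimal hedged sketch of steps 2 and 3 follows, assuming a flash recovery area has already been configured; the log file path, group number, and size are assumptions:

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  2> ('/oradb/stby_redo01.log') SIZE 500K;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;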

HP welcomes your input. Please give us comments about this white paper, or suggestions for related documentation, through our technical documentation feedback website: http://www.hp.com/bizsupport/feedback/ww/webfeedback.html To learn more about the HP Serviceguard Toolkit for Oracle Data Guard, visit: http://www.hp.com/go/sgoracledg


Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. 4AA2-7717ENW, Created September 2010
