Using Oracle 11g Release 1 (11.1.0.6) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12 [ID 783044.1]
Modified: 15-FEB-2010   Type: WHITE PAPER   Status: PUBLISHED

Using Oracle 11g Release 1 (11.1.0.6) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
Last Updated: July 21, 2009
Oracle Applications Release 12 (12.0.0) has numerous configuration options that can be chosen to suit particular business scenarios, hardware capability, uptime, and availability requirements. This document describes how to migrate Oracle Applications Release 12 (12.0.0) running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 1 (11.1.0.6) with Automatic Storage Management (ASM).

Note: At present, this document applies to UNIX and Linux platforms only. If you are using Windows and want to migrate to Oracle RAC or ASM, you must follow the procedures described in the Real Application Clusters Setup and Configuration Guide 11g Release 1 (11.1), and the Oracle Database Administrator's Guide 11g Release 1 (11.1).

The most current version of this document can be obtained in My Oracle Support (formerly OracleMetaLink) Knowledge Document 783044.1. There is a change log at the end of this document.

A number of conventions are used in describing the Oracle Applications architecture:

Convention: Meaning
Application tier: Machines (nodes) running Forms, Web, and other services (servers). Sometimes called the middle tier.
Database tier: Machines (nodes) running the Oracle Applications database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME: The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE: Full path to the Applications context file on the application tier or database tier. The default locations are: application tier context file, $INST_TOP/appl/admin/<CONTEXT_NAME>.xml; database tier context file, <RDBMS_ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
APPSpwd: Oracle Applications database user password.
Monospace Text: Represents command line text. Type such a command exactly as shown.
<>: Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\: The backslash character is entered at the end of a command line to indicate continuation of the command on the next line.


This document is divided into the following sections:

Section 1: Overview
Section 2: Environment
Section 3: Configuration Steps
Section 4: References
Appendix A: Oracle Net Files
Appendix B: Sample rconfig XML file

Section 1: Overview
You should be familiar with Oracle Database 11g, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC). When planning to set up Real Application Clusters and shared devices, refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) as required.

1.1 Cluster Terminology


You should understand the terminology used in a cluster environment. Key terms include the following.

Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.

Cluster Ready Services (CRS) is the primary program that manages high availability operations in an RAC environment. The crs process manages designated cluster resources, such as databases, services, and listeners.

Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an RAC environment, maximizing throughput and providing resilience to node failure.

Real Application Clusters (RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

1.2 Configuration Prerequisites


The prerequisites for using Oracle RAC with Oracle Applications Release 12 are:

1. If you do not already have an existing single instance environment, perform an installation of Oracle Applications with Rapid Install.

Note: If you have an existing single instance environment, and your data files, control files, and redo log files are currently on a local disk, move all the files to a shared disk and recreate the control files. Refer to Oracle Database Administrator's Guide 11g Release 1 (11.1) for more information on recreating the control files.

2. Set up the required cluster hardware and interconnect medium. You must apply the following patches before you start to configure your environment. Oracle Applications Patches


1. Oracle E-Business Suite 12.0.4 Release Update Pack (RUP4) - patch 6435000.
2. Apply the latest AutoConfig patches by following the relevant instructions in My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12.

Note: Ensure you apply patch 6636108, which delivers the adbldxml utility. This is used to create a new context file on the database tier. Refer to Section 6 of Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12 for more details.

Section 2: Environment
The logical configuration used for creating this document is illustrated in the figure below. The Release 12.0.0 Rapid Install, which includes Oracle Database 10g Release 2 (10.2.0.2), was used as a starting point.

2.1 Software and Hardware Configuration


The following hardware and software components were used for this example installation. The architecture used here is not the only possible architecture.

Component: Version
Oracle Applications Release 12: 12.0.4
Oracle Database: 11.1.0.6
Oracle Cluster Ready Services: 11.1.0.6
Operating System: OEL 4.0
Storage Device: NetApp 880 filer with Data ONTAP 6.1.2R3

You can obtain the latest 11.1.0.6 database software from: http://www.oracle.com/technology/software/products/database/index.html

2.2 ORACLE_HOME Nomenclature


This document refers to various ORACLE_HOMEs:

ORACLE_HOME: Purpose
10g ORACLE_HOME: Database ORACLE_HOME installed by Oracle Applications Release 12.0.4
11g ORACLE_HOME: Database ORACLE_HOME installed for Oracle 11g RAC
11g CRS_ORACLE_HOME: ORACLE_HOME installed for 11g Cluster Ready Services (CRS)
11g ASM ORACLE_HOME: ORACLE_HOME used for creation of ASM instances
10.1.2 ORACLE_HOME: ORACLE_HOME installed on the application tier for Forms and Reports
10.1.3 ORACLE_HOME: ORACLE_HOME installed on the application tier for the HTTP server

Section 3: Configuration Steps


The configuration steps you must carry out are divided into a number of stages:

1. Install Oracle Clusterware 11.1.0.6
2. Install Oracle Database Software 11.1.0.6
3. Install Oracle Database 11g Examples (formerly Companion)
4. Install Oracle Database Software 11.1.0.6 and Upgrade Applications Database to 11.1.0.6
5. Listener Configuration
6. Create ASM Instances/Diskgroups (Optional)
7. Convert Database 11g to RAC using rconfig
8. Enable AutoConfig on Applications Database Tier
9. Establish Applications Environment for RAC
10. Configure Parallel Concurrent Processing

Note: You should take complete backups of your environment before executing these procedures, and take further backups after each stage of the migration. These procedures should be fully tested in suitable environments before being performed in a production environment. Users must be logged off the system during the upgrade.


3.1 Install Oracle Clusterware 11.1.0.6

3.1.1 Check Network Requirements


All nodes must be configured as follows:

- Each node must have at least two network adapters: one for the public network interface, and one for the private network interface (interconnect).
- For the public network, each network adapter must support the TCP/IP protocol. For the private network, the interconnect must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better is recommended).
- To improve availability, backup public and private network adapters can be configured for each node.
- The interface names associated with the network adapter(s) for each network must be the same on all nodes.
- Each node must have the following IP addresses specified:
  - An IP address and associated host name for each public network interface, registered in DNS.
  - An unused virtual IP address (VIP) and associated virtual host name, registered in DNS or resolved in the hosts file (or both), that you will configure for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or the virtual IP address. If a node fails, its virtual IP address fails over to another node.
  - A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.

3.1.2 Verify Kernel Parameters


Check the kernel parameters as follows: refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1), and verify that the kernel parameter settings required for Oracle Clusterware installation are in place.
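As a quick sanity check, you can display the current values of the kernel parameters most commonly covered by that guide. The parameter names below are the usual Linux ones (the required values are platform-specific and listed in the guide):

# As root, show the current semaphore, shared memory, file handle, and network settings
/sbin/sysctl kernel.sem kernel.shmall kernel.shmmax kernel.shmmni fs.file-max \
    net.ipv4.ip_local_port_range net.core.rmem_default net.core.rmem_max \
    net.core.wmem_default net.core.wmem_max

Compare each reported value against the required minimum before proceeding.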

3.1.3 Check rsh and Host Equivalence


Verify that the rsh (remote shell) package is installed on all your hosts, by executing the command:

# rpm -qa | grep -i rsh

Test host equivalence by using the rcp command to copy some dummy files between host2 and host3, as follows:

On host2:

# touch /u01/test
# rcp /u01/test host3:/u01/test1
# rcp /u01/test int-host3:/u01/test2

On host3:

# touch /u01/test
# rcp /u01/test host2:/u01/test1
# rcp /u01/test int-host2:/u01/test2
# ls /u01/test*

-- Returns /u01/test /u01/test1 /u01/test2

On host2:

# ls /u01/test*

-- Returns /u01/test /u01/test1 /u01/test2

3.1.4 Set up Shared Storage


If your platform supports a cluster file system, set up the cluster file system on shared storage. If your platform does not support a cluster file system, or you want to use raw devices for database files for performance reasons, you will need to install the vendor-specific logical volume manager. Also see storage vendor-specific documentation for details of setting up the shared disk subsystem, and how to mirror and stripe these disks. Refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) for more information about database storage.

3.1.5 Check Account Setup


Configure the oracle account's environment for Oracle Clusterware and Oracle Database 11g, as per Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1).

3.1.6 Configure Secure Shell


Configure Secure Shell (SSH) on all cluster nodes as follows:

1. Log in as the oracle user.
2. Run the following commands to create the .ssh directory in the oracle user's home directory with suitable permissions:

$ mkdir ~/.ssh
$ chmod 755 ~/.ssh

3. Generate an RSA key for version 2 of the SSH protocol using the following command:

$ /usr/bin/ssh-keygen -t rsa

Accept the default location for the key file. Enter and confirm a passphrase that is different from the oracle user's password. This command writes the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone. 4. Enter the following command to generate a DSA key for version 2 of the SSH protocol:
$ /usr/bin/ssh-keygen -t dsa

Accept the default location for the key file at the prompt.

5. Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the ~/.ssh/authorized_keys file on this node, and to the same file on all other cluster nodes.
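A minimal way to build the combined file on a two-node cluster, using host2 and host3 from the earlier examples (you will be prompted for the oracle password until the keys are in place):

$ cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh host3 cat .ssh/id_rsa.pub .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys host3:.ssh/authorized_keys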


Note: The ~/.ssh/authorized_keys file on every node must contain the contents from all the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.

6. Run the following command to change the permissions on the ~/.ssh/authorized_keys file on all cluster nodes:
$ chmod 644 ~/.ssh/authorized_keys

7. To enable the Installer to use the ssh and scp commands without being prompted for a passphrase, follow these steps: 1. On the system where you want to install the software, log in as the oracle user and run the commands:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

2. At the prompts, enter the passphrase for each key that you generated.

3.1.7 Using Cluster Verification Utility (CVU)


1. Install the cvuqdisk package for Linux:

a. Locate the cvuqdisk RPM package, which is in the directory clusterware/rpm on the installation medium.
b. Copy the cvuqdisk package to each node on the cluster (each node must be running the same version of Linux).
c. Log in as root.
d. Use the following command to see if you have an existing version of the cvuqdisk package:
# rpm -qi cvuqdisk

e. If you have an existing version, enter the following command to de-install the existing version:
# rpm -e cvuqdisk

f. Use the following command to install the cvuqdisk package:


# rpm -iv cvuqdisk-1.0.1-1.rpm

2. Log in as the oracle user, and run the following command to determine which pre-installation steps have been completed, and which steps remain to be performed:
$ <11g Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

3. Use the following command to check the networking setup with CVU:

$ <11g Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

4. Use the following command to check the operating system requirement with CVU:

$ <11g Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
  -osdba osdba_group -orainv orainv_group -verbose

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.
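For example, if the Clusterware software is staged under /stage/11g/clusterware (an illustrative path) and the cluster nodes are host2 and host3, the pre-installation check would be run as:

$ /stage/11g/clusterware/runcluvfy.sh stage -pre crsinst -n host2,host3 -verbose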

3.1.8 Install Oracle Clusterware 11.1.0.6


1. Use the same oraInventory location that was created during the installation of Oracle Applications Release 12 with 10g.

Note: You should take a backup of the oraInventory directory before starting this step.

2. Start runInstaller from the Oracle Clusterware 11.1.0.6 staging area.
3. In the File Locations Window, enter the name and path of the CRS ORACLE_HOME and click Next.
4. Select the language option for your installation from the available list and click Next.
5. In the Cluster Configuration Window, enter the name of the Cluster Configuration. For the public node, enter the public alias specified in /etc/hosts, for example host2, host3. Enter the corresponding private node names for the public nodes, for example host2-vlan2, host3-vlan2. Enter the corresponding virtual host names for the public host names and click Next.
6. Assign the network interface with the interface type; for example, assign network interface eth1 with interface type private and its corresponding subnet mask. Refer to the cluster verification utility output generated earlier for more details on this network interface usage. Click Next.
7. Enter the location for the Oracle Cluster Registry (OCR) and click Next.

Note: The OCR must be located on a shared file system that is accessible by all nodes. If you want to use OCR mirroring, select "Normal Redundancy". In such a case, you will have to specify two locations for the OCR.

8. Enter the location for Voting Disk. Click Next.

Note: The voting disk must be on a shared file system that is accessible by all nodes. If you want to use mirroring of the voting disk, select "Normal Redundancy". In such a case, you will have to specify three locations for the voting disks.


9. Verify the installation Summary Window and click Install.
10. At the end of the installation, the installer will prompt for executing root.sh from both the nodes. Execute root.sh from the CRS ORACLE_HOME specified, after logging in as root. This will also start the CRS services on both the cluster nodes.
11. Execute <CRS ORACLE_HOME>/bin/olsnodes. If this returns all the cluster node names, then your CRS installation was successful.
12. Confirm Oracle Clusterware function:

1. After installation, log in as root, and use the following command to confirm that your Oracle Clusterware installation is installed and running correctly:

# <CRS ORACLE_HOME>/bin/crs_stat -t -v
Name           Type        Target  State   Host
------------------------------------------------------
ora....dbs.gsd application ONLINE  ONLINE  ap614dbs
ora....dbs.ons application ONLINE  ONLINE  ap614dbs
ora....dbs.vip application ONLINE  ONLINE  ap614dbs
ora....dbs.gsd application ONLINE  ONLINE  ap615dbs
ora....dbs.ons application ONLINE  ONLINE  ap615dbs
ora....dbs.vip application ONLINE  ONLINE  ap615dbs

2. As an alternative, the Oracle Clusterware installation can be verified using the following command:

# <CRS ORACLE_HOME>/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

3.2 Install Oracle Database Software 11.1.0.6 and Upgrade Applications Database to 11.1.0.6
Note: You should make a full backup of the oraInventory directory before starting this stage.

Run OUI (runInstaller) to perform an Oracle Database installation with Oracle RAC. In the Cluster Nodes Window, verify the cluster nodes shown for the installation, and select all nodes included in your RAC.

To install the Oracle Database 11g (11.1.0.6) software and upgrade an existing database to 11.1.0.6, refer to My Oracle Support Knowledge Document 735276.1, Interoperability Notes E-Business Suite R12 with Oracle Database 11g R1 (11.1.0). Follow all the instructions and steps listed there except the following:

16. Start the new database listener (conditional)
20. Implement and run AutoConfig
25. Restart Applications server processes

3.3 Listener Configuration


Create the listener using NetCA:

- Execute netca from $ORACLE_HOME/bin.
- Choose the "Cluster Configuration" option in the NetCA wizard.
- Choose the current nodename from the nodes list.
- Choose the option "Listener Configuration" and then the option "Add".
- Create the listener with the name "LISTENER", which is the default in the wizard.
- Use the same port number as defined in the existing Oracle E-Business Suite Release 12 configuration.
- Repeat these steps on all nodes in the cluster.

Note: Ensure these steps are carried out on all nodes in the cluster, confirming that the listener has been successfully created and registered with CRS.
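To confirm that the listener is running and registered with CRS on each node, checks along the following lines can be used (the resource name reported by crs_stat will vary with your node and listener names):

$ $ORACLE_HOME/bin/lsnrctl status LISTENER
$ <CRS ORACLE_HOME>/bin/crs_stat -t | grep -i lsnr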

3.4 Create ASM Instances/Diskgroups (Optional)


This section applies only to customers who want to use ASM as a storage option for the database.

3.4.1 Pre-Creation Task


To include devices in a diskgroup, you can specify either whole-drive device names or partitions. Depending on the redundancy level required, you may need additional devices (or partitions). ASM disk group device creation details depend on the operating system and the type of storage used. Refer to My Oracle Support Knowledge Document 265633.1, Automatic Storage Management Technical Best Practices, for details of different operating system and storage system configurations. If you are using Network Attached Storage (NAS) devices for creating the ASM disk groups, refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) and My Oracle Support Knowledge Document 266028.1, ASM Using Files Instead of Real Devices on Linux.

3.4.2 Create ASM Instances/Diskgroups


Note: While you can create the ASM instances using the same Oracle Home location used for the database, Oracle recommends creating a separate Oracle Home for ASM instances.

There are two alternative methods you can use for creating ASM instances and diskgroups:

Option 1: Create ASM instances and diskgroups using dbca
Option 2: Create ASM instances and diskgroups manually (without using dbca)

Note: Oracle recommends using dbca to create ASM instances (option 1).

3.4.2.1 Creating ASM instances and diskgroups using dbca


1. As the oracle user, run dbca from $ORACLE_HOME/bin.
2. Select the "Real Application Clusters database" option. Click Next.


3. Select the "Configure Automatic Storage Management" option. Click Next.
4. Select all the nodes in the cluster. Click Next.
5. Enter the SYS password, and provide the spfile location on the shared disk.
6. ASM instances will now be created.
7. On the ASM Disk Groups page, click Create New.
8. On the "Create Disk Group" screen, specify the:
   - Disk group name
   - Desired redundancy option (High, Normal, or External)
   - Members/devices of the disk group
9. The disk group has now been created and mounted on all the instances. Click Finish.
10. As the owner of the CRS_ORACLE_HOME, issue the crs_stat command to verify that the ASM instances are registered with CRS. The instances will be named using the format specified with dbca during creation (for example, ora.myhost.+ASM1.asm).
11. Using NetCA, create remote listener TNS aliases with the name LISTENERS_<DB_NAME>.
12. Update the ASM init parameter file with the remote listener value:

$ sqlplus / as sysdba
SQL> alter system set remote_listener=LISTENERS_<DB_NAME> scope=both;

13. Restart the ASM instances on all nodes.

Note: Verify that all the ASM instances are registered with all the nodes. Refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) and Oracle Database Administrator's Guide 11g Release 1 (11.1) for further details.

3.4.2.2 Creating ASM instances and diskgroups manually without dbca


1. Using NetCA, create local and remote listener TNS aliases for the ASM instances. Use LISTENER_<node> as the alias name for the local listener, and LISTENERS_<DB_NAME> for the remote listener alias. Ensure that these aliases are created on all nodes in the cluster.
2. Create ASM instances on all nodes in the cluster. Refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) for information on creating the ASM instances. For ASM best practices, refer to My Oracle Support Knowledge Document 265633.1, Automatic Storage Management Technical Best Practices.
3. Start up all the ASM instances in the cluster.
4. Create the disk groups and mount them on all the ASM instances. Refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1).
5. Add the disk group entry against the "asm_diskgroups" parameter in the init*.ora files of all the ASM instances.
6. Add the ASM instances to CRS using the command:

$ srvctl add asm -n <node_name> -i <asm_instance_name> -o <oracle_home> [-p <spfile>]
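For example, for a two-node cluster with the ASM Oracle Home under /u01/app/oracle/product/11.1.0/asm (the node names, instance names, and path here are illustrative):

$ srvctl add asm -n host2 -i +ASM1 -o /u01/app/oracle/product/11.1.0/asm
$ srvctl add asm -n host3 -i +ASM2 -o /u01/app/oracle/product/11.1.0/asm
$ srvctl start asm -n host2
$ srvctl start asm -n host3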

3.5 Convert 11g Database to Oracle RAC using rconfig


1. Download patch 6265373 from My Oracle Support and apply it to the new 11g ORACLE_HOME.
2. As the oracle user, navigate to the directory $ORACLE_HOME/assistants/rconfig/sampleXMLs, and open the sample file ConvertToRAC.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file to suit your site's specific needs.
3. Make a copy of the sample ConvertToRAC.xml file, and modify the parameters as required for your system. Keep a note of the name of your modified copy.

Note: Study the example file (and associated notes) in Appendix B before you edit your own file and run rconfig. To use the same listener defined in Step 3.3, do not specify a port number for the listener in the rconfig XML file.

4. Run rconfig using the option Convert verify="ONLY" before carrying out the actual conversion. This optional but recommended step will perform a test run to validate parameters, and identify any issues that need to be corrected before the actual conversion takes place.

Note: Specify the 'SourceDBHome' variable in ConvertToRAC.xml as the non-RAC Oracle Home ($OLD_ORACLE_HOME path). If you wish to specify a NEW_ORACLE_HOME, start the database from the new Oracle Home using the command:

SQL> startup pfile=<NEW_ORACLE_HOME>/dbs/init<SID>.ora

5. Shut down the database.
6. Create an spfile from the pfile using the command:

SQL> create spfile from pfile;

7. Move the $ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location.
8. Take a backup of the existing $ORACLE_HOME/dbs/init<SID>.ora, and create a new $ORACLE_HOME/dbs/init<SID>.ora with the following parameter:

spfile=<shared location>/spfile<SID>.ora

9. Start up the instance.
10. Using NetCA, create local and remote listener TNS aliases for the database instances. Use LISTENER_<node> as the alias name for the local listener, and LISTENERS_<node> for the remote listener alias.
1. Execute netca from $ORACLE_HOME/bin.
2. Choose the "Cluster Configuration" option in the NetCA wizard.
3. Choose the current nodename from the nodes list.
4. Choose the "Local Net Service Name Configuration" option and click Next.
5. Select "Add", and in the next screen enter the service name, then click Next.
6. Enter the current node as the Server Name, and the port defined in Step 3.3.


7. Select "Do not perform Test" and click Next.
8. Enter a listener TNS alias name like LISTENER_<node> for the local listener.
9. Repeat the above steps for the remote listener, with the server name in step 6 as the secondary node, and the listener name LISTENERS_<node>.

Note: Ensure that local and remote aliases are created on all nodes in the cluster.

11. Navigate to $ORACLE_HOME/bin, and use the following syntax to run the rconfig command:

$ ./rconfig <modified ConvertToRAC.xml>

12. This rconfig run will:
1. Migrate the database to ASM storage (only if ASM is specified as the storage option in the configuration XML file).
2. Create database instances on all nodes in the cluster.
3. Configure listener and NetService entries.
4. Configure and register CRS resources.
5. Start the instances on all nodes in the cluster.
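Once rconfig completes, the conversion can be confirmed with srvctl; the database name (VISRAC), instance names, and node names below are illustrative:

$ srvctl status database -d VISRAC
Instance vis1 is running on node host2
Instance vis2 is running on node host3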

3.6 Post RAC Migration Steps


3.6.1 Back Out Archive Log Mode Changes (Conditional)
The rconfig utility may put the database into archive log mode. If you do not want the database to be in archive log mode, you can disable it using the following steps:

1. Shut down the instances on all database nodes.
2. On any one node, set cluster_database=false in the $ORACLE_HOME/dbs/init<SID>.ora file.
3. Set the following environment variables:

ORACLE_HOME=<11g NEW_ORACLE_HOME>
ORACLE_SID=<instance SID>
PATH=$PATH:$ORACLE_HOME/bin

4. Start up the instance using the 'mount' option.
5. Disable archive logging using the following SQL command:

$ sqlplus / as sysdba
SQL> alter database noarchivelog;

6. Shut down the database normally.
7. Set cluster_database=true in the $ORACLE_HOME/dbs/init<SID>.ora file.
8. Start up all the instances.
9. Check the archive log setting using the following SQL command:

$ sqlplus / as sysdba
SQL> archive log list

This should show the value of 'Database log mode' as 'No Archive Mode'.

3.6.2 Shut down the Listeners


Use the following command to shut down the listeners with the name LISTENER_<node>, which were created in Step 3.3:

$ srvctl stop listener -n <node>

3.7 Enable AutoConfig on Applications Database Tier


Note: For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12.

1. Ensure that you have applied the Oracle Applications patches listed in the prerequisites section above.
2. To generate appsutil.zip for the database tier, run the following command:
$ $AD_TOP/bin/admkappsutil.pl

3. Copy the appsutil.zip file to the 11g NEW_ORACLE_HOME on the database tier, for example using ftp.
4. Unzip the appsutil.zip file to create the appsutil directory in the 11g NEW_ORACLE_HOME.
5. Copy the jre directory from <OLD_ORACLE_HOME>/appsutil to <11g NEW_ORACLE_HOME>/appsutil.
6. Create a <CONTEXT_NAME> directory under $ORACLE_HOME/network/admin. Use the new instance name while creating the context directory. Append the instance number to the instance prefix you are going to put in the rconfig XML file. For example, if your database name is VISRAC, and you want to use "vis" as the instance prefix, create the context_name directory as vis1_<hostname>.
7. Set the following environment variables:

ORACLE_HOME=<11g ORACLE_HOME>
LD_LIBRARY_PATH=<11g ORACLE_HOME>/lib:<11g ORACLE_HOME>/ctx/lib
ORACLE_SID=<instance SID>
PATH=$PATH:$ORACLE_HOME/bin
TNS_ADMIN=$ORACLE_HOME/network/admin/<CONTEXT_NAME>

8. De-register the current configuration using the Apps schema package FND_CONC_CLONE.SETUP_CLEAN:

SQL> exec fnd_conc_clone.setup_clean;

9. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to $TNS_ADMIN/tnsnames.ora, and edit it to change the aliases to use SID=<new instance name>.


10. To preserve the ASM TNS aliases (LISTENERS_<node> and LISTENER_<node>), create a file <context_name>_ifile.ora under $TNS_ADMIN, and copy those entries to that file.
11. Create listener.ora as per the sample file in Appendix A. Change the instance name and Oracle home to match this environment.
12. Start the listener.
13. From the 11g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:

$ adbldxml.pl tier=db appsuser=<APPS user> appspasswd=<APPS password>

14. Set the value of s_virtual_host_name to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<SID>_<hostname>.xml.
15. Rename $ORACLE_HOME/dbs/init<SID>.ora to a new name (for example, init<SID>.ora.old), in order to allow AutoConfig to regenerate the file using the RAC-specific parameters.
16. From the 11g ORACLE_HOME/nls/data/old directory, execute cr9idata.pl on the database tier.
17. Ensure that the following context variable parameters are correctly specified:

s_jdktop=<11g ORACLE_HOME_PATH>/appsutil/jre
s_jretop=<11g ORACLE_HOME_PATH>/appsutil/jre
s_adjvaprg=<11g ORACLE_HOME_PATH>/appsutil/jre/bin/java

18. From the 11g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.
19. Check the AutoConfig log file, located in 11g ORACLE_HOME/appsutil/log/<CONTEXT_NAME>/<timestamp>.
20. Perform the above steps [1-19] on all other database nodes in the cluster.
21. Execute AutoConfig on all database nodes in the cluster by running the command:
$ $ORACLE_HOME/appsutil/scripts/adconfig.sh

22. Shut down the instances and listeners.
23. Edit the $ORACLE_HOME/dbs/<CONTEXT_NAME>_APPS_BASE.ora file on all nodes. If ASM is being used, change the following parameter:

control_files = <ASM control file location>

24. Create the SPFILE from the PFILE on all nodes as follows:

1. Create an spfile from the pfile, and then create a pfile in a temporary location from the new spfile, with commands as shown in the following example:

SQL> create spfile=<shared location>/spfile<SID>.ora from pfile;
SQL> create pfile=/tmp/init<SID>.ora from spfile;

Repeat this step on all nodes.

2. Combine the initialization parameter files for all instances into one init<db_name>.ora file by copying all existing shared contents. All shared parameters defined in your init<db_name>.ora file must be global, with the format *.parameter=value.

3. Modify all instance-specific parameter definitions in the init<db_name>.ora file using the following syntax, where the variable <instance_name> is the system identifier of the instance: <instance_name>.parameter=value. Ensure that the parameters LOCAL_LISTENER, diagnostic_dest, undo_tablespace, thread, instance_number, and instance_name are in <instance_name>.parameter format; for example, <instance_name>.LOCAL_LISTENER=<listener alias>. These parameters must have one entry per instance.

4. Create the spfile, in the shared location where rconfig created the spfile, from the pfile produced in step 3 above:

SQL> create spfile=<shared location>/spfile<db_name>.ora from pfile;
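For illustration, assuming a database named VISRAC with instances vis1 and vis2 (all names and values below are examples only), the combined parameter file might contain entries such as:

*.db_name='VISRAC'
*.control_files='+DATA/visrac/controlfile/control01.ctl'
vis1.instance_name='vis1'
vis2.instance_name='vis2'
vis1.instance_number=1
vis2.instance_number=2
vis1.thread=1
vis2.thread=2
vis1.undo_tablespace='APPS_UNDOTS1'
vis2.undo_tablespace='APPS_UNDOTS2'
vis1.local_listener='VIS1_LOCAL'
vis2.local_listener='VIS2_LOCAL'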

25. Ensure that listener.ora and tnsnames.ora are generated as per the format shown in Appendix A.
26. As AutoConfig creates the listener.ora and tnsnames.ora files in a context directory, and not in the $ORACLE_HOME/network/admin directory, the TNS_ADMIN path must be updated in CRS. Use the command:

$ srvctl setenv instance -d <database> -i <instance> -t TNS_ADMIN=$ORACLE_HOME/network/admin/<CONTEXT_NAME>

27. Start up the database instances and listeners on all nodes.
28. Run AutoConfig on all nodes, to ensure that each instance registers with all remote listeners.
29. Restart the database instances and listeners on all nodes.

3.9 Establish Applications Environment for Oracle RAC


3.9.1 Preparatory Steps
Carry out the following steps on all application tier nodes:

1. Source the Applications environment.
2. Edit SID=<instance name> and PORT=<port> in the $TNS_ADMIN/tnsnames.ora file, to set up a connection to one of the instances in the Oracle RAC environment.
3. Confirm that you are able to connect to one of the instances in the RAC environment.
4. Edit the context variable jdbc_url, adding the instance name to the connect_data parameter.
5. Execute AutoConfig by running the command:

$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml

For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12.

6. Check the AutoConfig log file (under $INST_TOP/admin/log/) for errors.


7. Source the environment by using the latest environment file generated.
8. Verify the tnsnames.ora and listener.ora files. Copies of both are located in the $INST_TOP/ora/10.1.2/network/admin and $INST_TOP/ora/10.1.3/network/admin directories. In these files, ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames.
9. Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.

3.9.2 Load Balancing


Implement load balancing for the Applications database connections as follows:

1. Run the Context Editor (through the Oracle Applications Manager interface) and set the values of "Tools OH TWO_TASK" (s_tools_two_task), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
2. To load balance the Forms-based applications database connections, set the value of "Tools OH TWO_TASK" to point to the <database_name>_balance alias generated in the tnsnames.ora file.
3. To load balance the self-service applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file.
4. Execute AutoConfig by running the command:

$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml

5. Restart the Applications processes, using the new scripts generated by AutoConfig.
6. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat the above steps 1-6 to set up load balancing on the new application tier node.
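For reference, an AutoConfig-generated load balancing alias typically has the following shape; the database name (VIS), virtual hostnames, and port below are illustrative:

VIS_BALANCE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = tcp)(HOST = host2-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = tcp)(HOST = host3-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = VIS)
    )
  )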

3.10 Configure Parallel Concurrent Processing


3.10.1 Check prerequisites for setting up Parallel Concurrent Processing
To set up Parallel Concurrent Processing (PCP), you must have more than one Concurrent Processing node in your environment. If you do not have this, follow the appropriate instructions in My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.

Note: If you are planning to implement a shared Application tier file system, refer to My Oracle Support Knowledge Document 384248.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for configuration steps. If you are adding a new Concurrent Processing node to the application tier, you will need to set up load balancing on the new application tier node by repeating steps 1-6 in Section 3.9.2.

3.10.2 Set Up PCP


1. Execute AutoConfig by running the following command on all concurrent processing nodes:
$ $INST_TOP/admin/scripts/adautocfg.sh

2. Source the Applications environment.
3. Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
4. Restart the Applications listener processes on each application tier node.
5. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to the Install > Nodes screen, and ensure that each node in the cluster is registered.
6. Verify that the Internal Monitor for each node is defined properly, with the correct primary and secondary node specification, and work shift details. For example, Internal Monitor: Host2 must have primary node host2 and secondary node host3. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administer.
7. Set the $APPLCSF environment variable on all the Concurrent Processing nodes to point to a log directory on a shared file system.
8. Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should point to a directory on a shared file system.)
9. Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. By setting it to 'ON', a concurrent manager will fail over to a secondary Application tier node if the database instance to which it is connected becomes unavailable for some reason.
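To cross-check node registration from the database side, a query along these lines can be run as the APPS user (these columns exist in the standard FND_NODES table):

$ sqlplus apps/<APPS password>
SQL> select node_name, support_cp, support_forms, support_web from fnd_nodes;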

3.10.3 Set Up Transaction Managers


1. Shut down the application services (servers) on all nodes.
2. Shut down all the database instances cleanly in the Oracle RAC environment, using the command:
SQL>shutdown immediate;

3. Edit $ORACLE_HOME/dbs/<CONTEXT_NAME>_ifile.ora, and add the following parameters:

_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE

4. Start the instances on all database nodes, one by one.
5. Start up the application services (servers) on all nodes.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction managers work across the RAC instances.
7. Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers.
8. Restart the concurrent managers.
9. If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administer.

3.10.4 Set Up Load Balancing on Concurrent Processing Nodes


1. Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<database_name>_balance).
2. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.


Section 4: References
My Oracle Support Knowledge Document 745759.1: Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
My Oracle Support Knowledge Document 384248.1: Sharing the Application Tier File System in Oracle E-Business Suite Release 12
My Oracle Support Knowledge Document 387859.1: Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12
My Oracle Support Knowledge Document 406982.1: Cloning Oracle Applications Release 12 with Rapid Clone
My Oracle Support Knowledge Document 240575.1: RAC on Linux Best Practices
My Oracle Support Knowledge Document 265633.1: Automatic Storage Management Technical Best Practices
Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1)
Oracle Database Administrator's Guide 11g Release 1 (11.1)
Oracle Database Backup and Recovery Advanced User's Guide 11g Release 1 (11.1)
Oracle Applications System Administrator's Guide - Configuration
Migration to ASM Technical White Paper

Appendix A
Sample LISTENER.ORA file for database node (without virtual host name)
<SID> =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = <db_port>))
  )

SID_LIST_<SID> =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = <11g oracle home path>)
      (SID_NAME = <SID>)
    )
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = <11g oracle home path>)
      (PROGRAM = extproc)
    )
  )

STARTUP_WAIT_TIME_<SID> = 0
CONNECT_TIMEOUT_<SID> = 10
TRACE_LEVEL_<SID> = OFF
LOG_DIRECTORY_<SID> = <11g oracle home path>/network/admin
LOG_FILE_<SID> = <SID>
TRACE_DIRECTORY_<SID> = <11g oracle home path>/network/admin
TRACE_FILE_<SID> = <SID>
ADMIN_RESTRICTIONS_<SID> = OFF

Sample LISTENER.ORA file for database nodes (with virtual host name)
LISTENER_<hostname> =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual hostname>)(PORT = <port>)(IP = FIRST)))
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = <physical hostname>)(PORT = <port>)(IP = FIRST)))
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)))
    )
  )

SID_LIST_LISTENER_<hostname> =
  (SID_LIST =
    (SID_DESC = (ORACLE_HOME = <11g ORACLE_HOME>)(SID_NAME = <SID>))
    (SID_DESC = (SID_NAME = PLSExtProc)(ORACLE_HOME = <11g ORACLE_HOME>)(PROGRAM = extproc))
  )

STARTUP_WAIT_TIME_LISTENER_<hostname> = 0
CONNECT_TIMEOUT_LISTENER_<hostname> = 10
TRACE_LEVEL_LISTENER_<hostname> = OFF
LOG_DIRECTORY_LISTENER_<hostname> = <11g ORACLE_HOME>/network/admin
LOG_FILE_LISTENER_<hostname> = <SID>
TRACE_DIRECTORY_LISTENER_<hostname> = <11g ORACLE_HOME>/network/admin
TRACE_FILE_LISTENER_<hostname> = <SID>
ADMIN_RESTRICTIONS_LISTENER_<hostname> = OFF
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER_<hostname> = OFF
IFILE = <11g ORACLE_HOME>/network/admin/<CONTEXT_NAME>/listener_ifile.ora

Sample TNSNAMES.ORA file for database nodes (with virtual host name)
<alias> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = <virtual hostname>)(PORT = <port>))
    (CONNECT_DATA =
      (SERVICE_NAME = <service name>)
      (INSTANCE_NAME = <instance name>)
    )
  )

Appendix B
Example of an rconfig XML input file:


Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values:

Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC conversion have been met before it starts conversion.
Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion.
Convert verify="ONLY": rconfig only performs prerequisite checks; it does not start conversion after completing the prerequisite checks.

In order to validate and test the settings specified for converting to RAC with rconfig, it is advisable to execute rconfig using Convert verify="ONLY" before carrying out the actual conversion.

<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <n:Convert verify="YES">
      <!-- Oracle Home of the current single-instance database -->
      <n:SourceDBHome>/oracle/product/10.2.0/db_1</n:SourceDBHome>
      <!-- Oracle Home in which the RAC database will be configured -->
      <n:TargetDBHome>/oracle/product/10.2.0/db_1</n:TargetDBHome>
      <!-- SID and sysdba credentials of the single-instance database -->
      <n:SourceDBInfo SID="sales">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- Required only if the database uses, or is being migrated to, ASM storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>welcome</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Nodes that will run instances of the RAC database (names are placeholders) -->
      <n:NodeList>
        <n:Node name="node1"/>
        <n:Node name="node2"/>
      </n:NodeList>
      <n:InstancePrefix>sales</n:InstancePrefix>
      <!-- Leave the port empty (NULL) to reuse the listener defined in Step 3.3 -->
      <n:Listener port=""/>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>

Note: The ASM instance name specified above is the local node's ASM instance, from which rconfig is executed to perform the RAC conversion. Before starting the actual conversion, ensure that the ASM instances on all the nodes are running, and that the required diskgroups are mounted on each instance.

Note: In order to use the existing listener definition and port assignment, you must specify a NULL entry for the Listener port.

Note: rconfig can also migrate the single instance database to ASM storage. If you want to use this option, specify the ASM parameters as per your environment in the above xml file. The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:

select name, state, total_mb, free_mb from v$asm_diskgroup;

Note: If you are using CFS for your current database files, specify "NULL" for TargetDatabaseArea to use the same location, unless you want to switch to another CFS location. If you specify a path for TargetDatabaseArea, rconfig will convert the files to Oracle Managed Files nomenclature.


Change Log
Date: Description
July 21, 2009: Added "11.1.0.6" to title.
May 11, 2009: Corrected some links to Oracle RAC Administration and Deployment Guide 11g Release 1 (11.1). Changed all occurrences of "OracleMetaLink Note" to "My Oracle Support Knowledge Document".
Feb 16, 2009: Created this note as a holder for the 11.1.0.6 content that used to be in Note 466649.1, which now holds 11.1.0.7 content.
Nov 13, 2008: Added Note 745759.1 to References.
Sep 16, 2008: Updated Section 3.2.
Sep 15, 2008: Initial publication.

Note 783044.1 by Oracle E-Business Suite Development
Copyright 2008, 2009 Oracle
Related Products: Oracle E-Business Suite > Applications Technology > Technology Components > Oracle Applications Technology Stack
