https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doc...
Using Oracle 11g Release 1 (11.1.0.6) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12 [ID 783044.1] Modified 15-FEB-2010 Type WHITE PAPER Status PUBLISHED
Last Updated: July 21, 2009
Oracle Applications Release 12 (12.0.0) has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle Applications Release 12 (Release 12.0.0) running on a single database instance to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 1 (11.1.0.6) with Automatic Storage Management (ASM).
Note: At present, this document applies to UNIX and Linux platforms only. If you are using Windows and want to migrate to Oracle RAC or ASM, you must follow the procedures described in the Real Application Clusters Setup and Configuration Guide 11g Release 1 (11.1), and the Oracle Database Administrator's Guide 11g Release 1 (11.1).
The most current version of this document can be obtained in My Oracle Support (formerly OracleMetaLink) Knowledge Document 783044.1. There is a change log at the end of this document.

A number of conventions are used in describing the Oracle Applications architecture:

Application tier: Machines (nodes) running Forms, Web, and other services (servers). Sometimes called middle tier.
Database tier: Machines (nodes) running the Oracle Applications database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME: The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE: Full path to the Applications context file on the application tier or database tier. The default locations are as follows. Application tier context file: /admin/<CONTEXT_NAME>.xml. Database tier context file: /appsutil/<CONTEXT_NAME>.xml.
APPS password: Oracle Applications database user password.
Monospace text: Represents command line text. Type such a command exactly as shown.
< >: Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\: The backslash character is entered at the end of a command line to indicate continuation of the command on the next line.
This document is divided into the following sections: Section 1: Overview Section 2: Environment Section 3: Database Installation and Oracle RAC Migration Section 4: References Appendix A: Oracle Net Files Appendix B: Sample rconfig xml file
Section 1: Overview
You should be familiar with Oracle Database 11g, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC). When planning to set up Oracle RAC and shared devices, refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) as required.

Set up the required cluster hardware and interconnect medium. You must apply the following patches before you start to configure your environment.

Oracle Applications Patches
1. Oracle E-Business Suite 12.0.4 Release Update Pack (RUP4) - patch 6435000.
2. Apply the latest AutoConfig patches by following the relevant instructions in My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12.

Note: Ensure you apply patch 6636108, which delivers the adbldxml utility. This is used to create a new context file on the database tier. Refer to Section 6 of Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12 for more details.
Section 2: Environment
The logical configuration used for creating this document is illustrated in the figure below. The Release 12.0.0 Rapid Install, which includes Oracle Database 10g Release 2 (10.2.0.2), was used as a starting point.
Oracle Applications Release 12: 12.0.4
Oracle Database: 11.1.0.6
Oracle Cluster Ready Services: 11.1.0.6
Operating System: OEL 4.0
Storage Device: NetApp 880 filer with Data ONTAP 6.1.2R3
You can obtain the latest 11.1.0.6 database software from: http://www.oracle.com/technology/software/products/database/index.html
Note: You should take complete backups of your environment before executing these procedures, and take further backups after each stage of the migration. These procedures should be fully tested in suitable environments before being performed in a production environment. Users must be logged off the system during the upgrade.
Test host equivalence by using the rcp command to copy some dummy files between host2 and host3, as follows: On host2:
# touch /u01/test # rcp /u01/test host3:/u01/test1 # rcp /u01/test int-host3:/u01/test2
On host3:
# touch /u01/test
# rcp /u01/test host2:/u01/test1
# rcp /u01/test int-host2:/u01/test2
# ls /u01/test*
3. Generate an RSA key for version 2 of the SSH protocol using the following command:
$ /usr/bin/ssh-keygen -t rsa
Accept the default location for the key file. Enter and confirm a passphrase that is different from the oracle user's password. This command writes the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.

4. Enter the following command to generate a DSA key for version 2 of the SSH protocol:
$ /usr/bin/ssh-keygen -t dsa
Accept the default location for the key file at the prompt.

5. Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files to the ~/.ssh/authorized_keys file on this node, and to the same file on all other cluster nodes.
Note: The ~/.ssh/authorized_keys file on every node must contain the contents from all the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
6. Run the following command to change the permissions on the ~/.ssh/authorized_keys file on all cluster nodes:
$ chmod 644 ~/.ssh/authorized_keys
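Conceptually, steps 5 and 6 just concatenate every node's id_rsa.pub and id_dsa.pub into a single authorized_keys file and set its mode to 644. A minimal sketch of that merge, using a helper function of our own (merge_keys is not an Oracle tool); copying the key files between nodes is still done with rcp/scp as shown above:

```shell
#!/bin/sh
# merge_keys: concatenate the public key files passed as arguments into the
# target authorized_keys file (first argument) and set mode 644.
# Run it once per node, passing the id_rsa.pub and id_dsa.pub files
# collected from every cluster node.
merge_keys() {
    target=$1; shift
    cat "$@" > "$target"
    chmod 644 "$target"
}
```

The same merged file must end up on every node, so either run this on each node with the same set of input files, or build it once and scp it to the other nodes.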
7. To enable the Installer to use the ssh and scp commands without being prompted for a passphrase, follow these steps: 1. On the system where you want to install the software, log in as the oracle user and run the commands:
$ exec /usr/bin/ssh-agent $SHELL $ /usr/bin/ssh-add
2. At the prompts, enter the passphrase for each key that you generated.
e. If you have an existing version of cvuqdisk, enter the following command to de-install it:
# rpm -e cvuqdisk
2. Log in as the oracle user, and run the following command to determine which pre-installation steps have been completed, and which steps remain to be performed:
$ <11g Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

3. Use the following command to check the networking setup with CVU:

$ <11g Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

4. Use the following command to check the operating system requirement with CVU:

$ <11g Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
  -osdba osdba_group -orainv orainv_group -verbose

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.
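The three CVU checks above can be chained in a small wrapper so that a failed prerequisite stops the run early. A sketch only: CVU_STAGE, NODES, and the osdba/orainv group names are assumptions to adapt to your environment.

```shell
#!/bin/sh
# Run the three CVU pre-installation checks back to back, stopping at the
# first failure. Point CVU_STAGE at your 11g software staging area and
# NODES at your comma-separated cluster node list.
CVU_STAGE=${CVU_STAGE:-/stage/11g}
NODES=${NODES:-host2,host3}

run_cvu_checks() {
    "$CVU_STAGE/runcluvfy.sh" stage -pre crsinst -n "$NODES" || return 1
    "$CVU_STAGE/runcluvfy.sh" comp nodecon -n "$NODES" -verbose || return 1
    "$CVU_STAGE/runcluvfy.sh" comp sys -n "$NODES" -p crs \
        -osdba dba -orainv oinstall -verbose || return 1
}
```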
Note: You should take a backup of the oraInventory directory before starting this step.
2. Start runInstaller from the Oracle Clusterware 11.1.0.6 staging area.
3. In the File Locations window, enter the name and path of the CRS ORACLE_HOME and click Next.
4. Select the language option for your installation from the available list and click Next.
5. In the Cluster Configuration window, enter the name of the cluster configuration. For the public nodes, enter the public aliases specified in /etc/hosts, for example host2, host3. Enter the corresponding private node names for the public nodes, for example host2-vlan2, host3-vlan2. Enter the corresponding virtual host names for the public host names and click Next.
6. Assign each network interface its interface type; for example, assign network interface eth1 the interface type private, with its corresponding subnet mask. Refer to the cluster verification utility output generated earlier for details on network interface usage. Click Next.
7. Enter the location for the Oracle Cluster Registry (OCR) and click Next.
Note: The OCR must be located on a shared file system that is accessible by all nodes. If you want to use OCR mirroring, select "Normal Redundancy". In such a case, you will have to specify two locations for the OCR.
Note: The voting disk must be on a shared file system that is accessible by all nodes. If you want to use mirroring of the voting disk, select "Normal Redundancy". In such a case, you will have to specify three locations for the voting disks.
8. Enter the location for the voting disk and click Next.
9. Verify the installation Summary window and click Install.
10. At the end of the installation, the installer will prompt you to execute root.sh on both nodes. Log in as root and execute root.sh from the specified CRS ORACLE_HOME. This also starts the CRS services on both cluster nodes.
11. Execute CRS ORACLE_HOME/bin/olsnodes. If this returns all the cluster node names, your CRS installation was successful.
12. Confirm Oracle Clusterware function:
   1. After installation, log in as root, and use the following command to confirm that your Oracle Clusterware installation is installed and running correctly:
# <CRS ORACLE_HOME>/bin/crs_stat -t -v
Name            Type         Target    State     Host
------------------------------------------------------
ora....dbs.gsd  application  ONLINE    ONLINE    ap614dbs
ora....dbs.ons  application  ONLINE    ONLINE    ap614dbs
ora....dbs.vip  application  ONLINE    ONLINE    ap614dbs
ora....dbs.gsd  application  ONLINE    ONLINE    ap615dbs
ora....dbs.ons  application  ONLINE    ONLINE    ap615dbs
ora....dbs.vip  application  ONLINE    ONLINE    ap615dbs
2. As an alternative, Oracle Clusterware installation can be verified using the following command:
# <CRS ORACLE_HOME>/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
3.2 Install Oracle Database Software 11.1.0.6 and Upgrade Applications Database to 11.1.0.6
Note: You should make a full backup of the oraInventory directory before starting this stage.

Run OUI (runInstaller) to perform an Oracle Database installation with Oracle RAC. In the Cluster Nodes window, verify the cluster nodes shown for the installation, and select all nodes included in your RAC cluster. To install the Oracle Database 11g (11.1.0.6) software and upgrade an existing database to 11.1.0.6, refer to My Oracle Support Knowledge Document 735276.1, Interoperability Notes E-Business Suite R12 with Oracle Database 11g R1 (11.1.0). Follow all the instructions and steps listed there except the following:

16. Start the new database listener (conditional)
20. Implement and run AutoConfig
25. Restart Applications server processes
3. Select the "Configure Automatic Storage Management" option. Click Next.
4. Select all the nodes in the cluster. Click Next.
5. Enter the SYS password, and provide the spfile location on the shared disk.
6. The ASM instances will now be created.
7. On the ASM Disk Groups page, click Create New.
8. On the "Create Disk Group" screen, specify:
   - The disk group name
   - The desired redundancy option (High, Normal, or External)
   - The members/devices of the disk group
9. The disk group has now been created and mounted on all the instances. Click Finish.
10. As the owner of the CRS_ORACLE_HOME, issue the crs_stat command to verify that the ASM instances are registered with CRS. The instances will be named using the format specified with DBCA during creation (for example, ora.myhost.+ASM1.asm).
11. Using NetCA, create remote listener TNS aliases with the name LISTENERS_.
12. Update the ASM init parameter file with the remote listener value.
$ sqlplus / as sysdba
SQL> alter system set remote_listener=LISTENERS_ scope=both;
13. Restart the ASM instances on all nodes.

Note: Verify that all the ASM instances are registered with all the nodes. Refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1) and Oracle Database Administrator's Guide 11g Release 1 (11.1) for further details.
5. Shut down the database.
6. Create an spfile from the pfile using the command:

SQL> create spfile from pfile;

7. Move the $ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location.
8. Take a backup of the existing $ORACLE_HOME/dbs/init<SID>.ora, and create a new $ORACLE_HOME/dbs/init<SID>.ora with the following parameter:

spfile=<shared_location>/spfile<SID>.ora

9. Start up the instance.
10. Using NetCA, create local and remote listener TNS aliases for the database instances. Use listener_ as the alias name for the local listener, and listeners_ for the remote listener alias.
   1. Execute netca from $ORACLE_HOME/bin.
   2. Choose the "Cluster Configuration" option in the NetCA wizard.
   3. Choose the current node name from the nodes list.
   4. Choose the "Local Net Service Name Configuration" option and click Next.
   5. Select "Add", and on the next screen enter the service name and click Next.
   6. Enter the current node as the server name, and the port defined in Step 3.3.
   7. Select "Do not perform Test" and click Next.
   8. Enter the listener TNS alias name, such as LISTENER_ for the local listener.
   9. Repeat the above steps for the remote listener, with the server name in step 6 as the secondary node and the listener name LISTENERS_.

Note: Ensure that local and remote aliases are created on all nodes in the cluster.

11. Navigate to $ORACLE_HOME/bin, and use the following syntax to run the rconfig command:
$ ./rconfig <ConvertToRAC.xml>
12. This rconfig run will:
   1. Migrate the database to ASM storage (only if ASM is specified as the storage option in the configuration XML file).
   2. Create database instances on all nodes in the cluster.
   3. Configure listener and net service entries.
   4. Configure and register CRS resources.
   5. Start the instances on all nodes in the cluster.
4. Start up the instance using the 'mount' option. 5. Disable archive logging using the following SQL command:
$ sqlplus / as sysdba
SQL> alter database noarchivelog;
6. Shut down the database normally. 7. Set cluster_database=true in the $ORACLE_HOME/dbs/init.ora file. 8. Start up all the instances. 9. Check the archive log setting using the following SQL command:
$ sqlplus / as sysdba
SQL> archive log list
This should show the value of 'Database log mode' as 'No Archive Mode'.
3. Copy the appsutil.zip file to the 11g NEW_ORACLE_HOME on the database tier, for example using ftp.
4. Unzip the appsutil.zip file to create the appsutil directory in the 11g NEW_ORACLE_HOME.
5. Copy the jre directory from <OLD_ORACLE_HOME>/appsutil to <11g NEW_ORACLE_HOME>/appsutil.
6. Create a context directory under $ORACLE_HOME/network/admin. Use the new instance name while creating the context directory: append the instance number to the instance prefix you are going to put in the rconfig XML file. For example, if your database name is VISRAC and you want to use "vis" as the instance prefix, create the context_name directory as vis1_<hostname>.
7. Set the following environment variables:
ORACLE_HOME=<11g ORACLE_HOME>
LD_LIBRARY_PATH=<11g ORACLE_HOME>/lib:<11g ORACLE_HOME>/ctx/lib
ORACLE_SID=<new_instance_sid>
PATH=$PATH:$ORACLE_HOME/bin
TNS_ADMIN=$ORACLE_HOME/network/admin/<context_name>
8. De-register the current configuration using the Apps schema package FND_CONC_CLONE.SETUP_CLEAN.
SQL>exec fnd_conc_clone.setup_clean;
9. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to $TNS_ADMIN/tnsnames.ora, and edit it, changing the aliases to use SID=<new_instance_sid>.
10. To preserve the TNS aliases (LISTENERS_ and LISTENER_) of ASM, create a file <context_name>_ifile.ora under $TNS_ADMIN, and copy those entries to that file.
11. Create listener.ora as per the sample file in Appendix A. Change the instance name and Oracle home to match this environment.
12. Start the listener.
13. From the 11g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
$ adbldxml.pl tier=db appsuser=<APPS user> appspasswd=<APPS password>
14. Set the value of s_virtual_hostname to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<context_name>.xml.
15. Rename $ORACLE_HOME/dbs/init<SID>.ora to a new name (for example, init<SID>.ora.old), to allow AutoConfig to regenerate the file using the RAC-specific parameters.
16. From the 11g ORACLE_HOME/nls/data/old directory, execute cr9idata.pl on the database tier.
17. Ensure that the following context variable parameters are correctly specified:
s_jdktop=<11g ORACLE_HOME_PATH>/appsutil/jre s_jretop=<11g ORACLE_HOME_PATH>/appsutil/jre s_adjvaprg=<11g ORACLE_HOME_PATH>/appsutil/jre/bin/java
18. From the 11g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.
19. Check the AutoConfig log file, located under the 11g ORACLE_HOME/appsutil/log/<context_name>/ directory.
20. Perform the above steps [1-19] on all other database nodes in the cluster.
21. Execute AutoConfig on all database nodes in the cluster by running the command:
$ $ORACLE_HOME/appsutil/scripts/adconfig.sh
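Step 21 can be driven from a single node once ssh equivalence is in place. A sketch only: the NODES list is an assumption, and the context-named subdirectory under appsutil/scripts is matched with a glob rather than hard-coded.

```shell
#!/bin/sh
# Run AutoConfig on every database node from one place. NODES is an
# assumption -- replace with your cluster node names. RUN_REMOTE defaults
# to ssh; it is a variable so the loop can be exercised locally.
NODES=${NODES:-host2 host3}
RUN_REMOTE=${RUN_REMOTE:-ssh}

run_autoconfig_everywhere() {
    for node in $NODES; do
        # The single quotes defer expansion to the remote shell, so each
        # node resolves its own ORACLE_HOME and context directory.
        $RUN_REMOTE "$node" '$ORACLE_HOME/appsutil/scripts/*/adconfig.sh' || return 1
    done
}
```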
22. Shut down the instances and listeners.
23. Edit the $ORACLE_HOME/dbs/<context_name>_APPS_BASE.ora file on all nodes. If ASM is being used, change the following parameter to point to the control file location in the ASM disk group:

control_files = <ASM control file location>
24. Create the SPFILE from the PFILE on all nodes, as follows:
1. Create an spfile from the pfile, and then create a pfile in a temporary location from the new spfile, with commands as shown in the following example:

SQL> create spfile='<shared_location>/spfile<SID>.ora' from pfile;
SQL> create pfile='/tmp/init<SID>.ora' from spfile;
Repeat this step on all nodes.

2. Combine the initialization parameter files for all instances into one init<db_name>.ora file by copying all existing shared contents. All shared parameters defined in your init<db_name>.ora file must be global, with the format:

*.parameter=value

3. Modify all instance-specific parameter definitions in the init<db_name>.ora file using the following syntax, where the variable <sid> is the system identifier of the instance:

<sid>.parameter=value

Ensure that the parameters LOCAL_LISTENER, diagnostic_dest, undo_tablespace, thread, instance_number, and instance_name are in <sid>.parameter format; for example, <sid>.LOCAL_LISTENER=<value>. These parameters must have one entry per instance.

4. Create the spfile in the shared location where rconfig created the spfile from the pfile in step 3 above.
SQL> create spfile='<shared_location>/spfile<db_name>.ora' from pfile;
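As a sketch of steps 2 and 3 above, assuming a two-node database named VIS with instances vis1 and vis2 (all names and values here are illustrative, not taken from this note), the combined initVIS.ora takes this shape:

```
# Shared parameters: one global entry, applied to every instance
*.db_name='VIS'
*.cluster_database=true
*.cluster_database_instances=2
*.remote_listener='LISTENERS_VIS'

# Instance-specific parameters: one <sid>.parameter entry per instance
vis1.instance_name='vis1'
vis1.instance_number=1
vis1.thread=1
vis1.undo_tablespace='APPS_UNDOTS1'
vis1.local_listener='VIS1_LOCAL'
vis2.instance_name='vis2'
vis2.instance_number=2
vis2.thread=2
vis2.undo_tablespace='APPS_UNDOTS2'
vis2.local_listener='VIS2_LOCAL'
```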
25. Ensure that listener.ora and tnsnames.ora are generated as per the format shown in Appendix A.
26. As AutoConfig creates the listener.ora and tnsnames.ora files in a context directory, and not in the $ORACLE_HOME/network/admin directory, the TNS_ADMIN path must be updated in CRS. Use the command:

$ srvctl setenv instance -d <db_name> -i <instance_name> -t TNS_ADMIN=<ORACLE_HOME>/network/admin/<context_name>
27. Start up the database instances and listeners on all nodes.
28. Run AutoConfig on all nodes, to ensure that each instance registers with all remote listeners.
29. Restart the database instances and listeners on all nodes.
For more information on AutoConfig, see My Oracle Support Knowledge Document 387859.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12. 6. Check the $INST_TOP/admin/log/ AutoConfig log file for errors.
7. Source the environment by using the latest environment file generated. 8. Verify the tnsnames.ora and listener.ora files. Copies of both are located in the $INST_TOP/ora/10.1.2/network/admin directory and $INST_TOP/ora/10.1.3/network/admin directory. In these files, ensure that the correct TNS aliases have been generated for load balance and failover, and that all the aliases are defined using the virtual hostnames. 9. Verify the dbc file located at $FND_SECURE. Ensure that the parameter APPS_JDBC_URL is configured with all instances in the environment, and that load_balance is set to YES.
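For illustration, an APPS_JDBC_URL entry in the dbc file configured for two RAC instances might look roughly like the following; the hostnames, port, and service name are hypothetical, and the essential points are LOAD_BALANCE=YES, FAILOVER=YES, and one ADDRESS per instance using the virtual hostname:

```
APPS_JDBC_URL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=host2-vip.example.com)(PORT=1521))(ADDRESS=(PROTOCOL=tcp)(HOST=host3-vip.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=VIS)))
```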
5. Restart the Applications processes, using the new scripts generated by AutoConfig.
6. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat steps 1-6 above to set up load balancing on the new application tier node.
2. Source the Applications environment.
3. Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
4. Restart the Applications listener processes on each application tier node.
5. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to the Install > Nodes screen, and ensure that each node in the cluster is registered.
6. Verify that the Internal Monitor for each node is defined properly, with the correct primary and secondary node specification and work shift details. For example, Internal Monitor: Host2 must have host2 as its primary node and host3 as its secondary node. Also ensure that the Internal Monitor manager is activated; this can be done from Concurrent > Manager > Administer.
7. Set the $APPLCSF environment variable on all Concurrent Processing nodes to point to a log directory on a shared file system.
8. Set the $APPLPTMP environment variable on all CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should point to a directory on a shared file system.)
9. Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. If it is set to ON, a concurrent manager will fail over to a secondary application tier node if the database instance to which it is connected becomes unavailable.
3. Edit the $ORACLE_HOME/dbs/<context_name>_ifile.ora file, and add the following parameters:

_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE

4. Start the instances on all database nodes, one by one.
5. Start up the application services (servers) on all nodes.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction managers work across the RAC instances.
7. Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers.
8. Restart the concurrent managers.
9. If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administer.
Section 4: References
My Oracle Support Knowledge Document 745759.1: Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
My Oracle Support Knowledge Document 384248.1: Sharing the Application Tier File System in Oracle E-Business Suite R12
My Oracle Support Knowledge Document 387859.1: Using AutoConfig to Manage System Configurations with Oracle E-Business Suite R12
My Oracle Support Knowledge Document 406982.1: Cloning Oracle Applications Release 12 with Rapid Clone
My Oracle Support Knowledge Document 240575.1: RAC on Linux Best Practices
My Oracle Support Knowledge Document 265633.1: Automatic Storage Management Technical Best Practices
Oracle Real Application Clusters Administration and Deployment Guide 11g Release 1 (11.1)
Oracle Database Administrator's Guide 11g Release 1 (11.1)
Oracle Database Backup and Recovery Advanced User's Guide 11g Release 1 (11.1)
Oracle Applications System Administrator's Guide - Configuration
Migration to ASM Technical White Paper
Appendix A
Sample LISTENER.ORA file for database node (without virtual host name)
<listener_name> =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    (ADDRESS = (PROTOCOL = TCP)(Host = host2)(Port = <db_port>))
  )
SID_LIST_<listener_name> =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = <11g oracle home path>)
      (SID_NAME = <sid>)
    )
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = <11g oracle home path>)
      (PROGRAM = extproc)
    )
  )
STARTUP_WAIT_TIME_<listener_name> = 0
CONNECT_TIMEOUT_<listener_name> = 10
TRACE_LEVEL_<listener_name> = OFF
LOG_DIRECTORY_<listener_name> = <11g oracle home path>/network/admin
LOG_FILE_<listener_name> = <listener_name>
TRACE_DIRECTORY_<listener_name> = <11g oracle home path>/network/admin
TRACE_FILE_<listener_name> = <listener_name>
ADMIN_RESTRICTIONS_<listener_name> = OFF
Sample LISTENER.ORA file for database nodes (with virtual host name)
LISTENER_<hostname> =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual_hostname>)(PORT = <port>)(IP = FIRST)))
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname>)(PORT = <port>)(IP = FIRST)))
      (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC)))
    )
  )
SID_LIST_LISTENER_<hostname> =
  (SID_LIST =
    (SID_DESC = (ORACLE_HOME = <11g ORACLE_HOME>)(SID_NAME = <sid>))
    (SID_DESC = (SID_NAME = PLSExtProc)(ORACLE_HOME = <11g ORACLE_HOME>)(PROGRAM = extproc))
  )
STARTUP_WAIT_TIME_LISTENER_<hostname> = 0
CONNECT_TIMEOUT_LISTENER_<hostname> = 10
TRACE_LEVEL_LISTENER_<hostname> = OFF
LOG_DIRECTORY_LISTENER_<hostname> = <11g ORACLE_HOME>/network/admin
LOG_FILE_LISTENER_<hostname> = LISTENER_<hostname>
TRACE_DIRECTORY_LISTENER_<hostname> = <11g ORACLE_HOME>/network/admin
TRACE_FILE_LISTENER_<hostname> = LISTENER_<hostname>
ADMIN_RESTRICTIONS_LISTENER_<hostname> = OFF
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER_<hostname> = OFF
IFILE = <11g ORACLE_HOME>/network/admin/<context_name>/listener_ifile.ora
Sample TNSNAMES.ORA file for database nodes (with virtual host name)
<alias> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = <virtual_hostname>)(PORT = <port>))
    (CONNECT_DATA =
      (SERVICE_NAME = <service_name>)
      (INSTANCE_NAME = <instance_name>)
    )
  )
Appendix B
Example of an rconfig XML input file:
Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values:

Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC conversion have been met before it starts conversion.
Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion immediately.
Convert verify="ONLY": rconfig only performs prerequisite checks; it does not start conversion after completing them.

To validate and test the settings specified for converting to RAC with rconfig, it is advisable to execute rconfig with Convert verify="ONLY" before carrying out the actual conversion.
Note: The ASM instance name specified above is the local node's ASM instance, where rconfig is executed from to perform the RAC conversion. Before starting the actual conversion, ensure that ASM instances on all the nodes are running, and the required diskgroups are mounted on each instance.
Note: In order to use the existing listener definition and port assignment, you must specify a NULL entry for Listener port.
Note: rconfig can also migrate the single-instance database to ASM storage. If you want to use this option, specify the ASM parameters as per your environment in the above XML file. The ASM instance name specified there is the local node's ASM instance only. Ensure that the ASM instances on all nodes are running, and that the required disk groups are mounted on each of them. The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:
select name, state, total_mb, free_mb from v$asm_diskgroup;
Note: rconfig can also migrate the single-instance database to ASM storage. If you want to use this path, specify the ASM parameters as per your environment in the above XML file. If you are using CFS for your current database files, specify "NULL" to keep the same location, unless you want to switch to another CFS location. If you specify a path for TargetDatabaseArea, rconfig will convert the files to Oracle Managed Files nomenclature.
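As an illustration of the structure the notes above refer to, a minimal ConvertToRAC.xml might look as follows. This is a sketch, not the exact file from this note: all paths, hostnames, passwords, and the "sales" SID are placeholders to adapt, and the element names follow the sample XML shipped with the database under $ORACLE_HOME/assistants/rconfig/sampleXMLs.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- verify="ONLY" performs prerequisite checks without converting -->
    <n:Convert verify="ONLY">
      <n:SourceDBHome>/u01/app/oracle/product/11.1.0/db_1</n:SourceDBHome>
      <n:TargetDBHome>/u01/app/oracle/product/11.1.0/db_1</n:TargetDBHome>
      <n:SourceDBInfo SID="sales">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <n:NodeList>
        <n:Node name="host2"/>
        <n:Node name="host3"/>
      </n:NodeList>
      <n:InstancePrefix>sales</n:InstancePrefix>
      <!-- An empty port keeps the existing listener definition and port -->
      <n:Listener port=""/>
      <!-- Local ASM instance used during the conversion -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>
```

Run rconfig once with verify="ONLY" to check prerequisites, then change it to "YES" for the actual conversion.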
Change Log
Date: July 21, 2009
Description:
Added "11.1.0.6" to the title.
Corrected some links to Oracle RAC Administration and Deployment Guide 11g Release 1 (11.1).
Changed all occurrences of "OracleMetaLink Note" to "My Oracle Support Knowledge Document".
Created this note as a holder for the 11.1.0.6 content that used to be in Note 466649.1, which now holds 11.1.0.7 content.
Initial publication.
Note 783044.1, Note 388577.1, by Oracle E-Business Suite Development. Copyright 2008, 2009 Oracle.

Related Products: Oracle E-Business Suite > Applications Technology > Technology Components > Oracle Applications Technology Stack