Oracle Real Application Clusters on Sun Cluster v3
Note: This note was created for 9i RAC. The 10g Oracle documentation provides installation
instructions for 10g RAC. These instructions can be found on OTN:
Oracle Real Application Clusters Installation and Configuration Guide
10g Release 1 (10.1) for AIX-Based Systems, hp HP-UX PA-RISC (64-bit), hp Tru64 UNIX, Linux,
Solaris Operating System (SPARC 64-bit)
Purpose
This document provides step-by-step instructions for installing a cluster, installing Oracle Real
Application Clusters (RAC), and starting a cluster database on Sun Cluster v3. For additional
explanation or information on any of these steps, please see the references listed at the end of
this document.
Disclaimer: If there are any errors or issues prior to step 3.3, please contact Sun Support.
The information contained here is as accurate as possible at the time of writing.
1.1.1 Hardware
As with any other system running the Solaris Operating System (SPARC) environment, you can configure
the root (/), /var, /usr, and /opt directories as separate file systems, or you can include all the
directories in the root (/) file system. The following describes the software contents of the root
(/), /var, /usr, and /opt directories in a Sun Cluster configuration. Consider this information when
you plan your partitioning scheme.
root (/) - The Sun Cluster software itself occupies less than 40 Mbytes of space in the root (/)
file system. For best results, you need to configure ample additional space and inode capacity for
the creation of both block special devices and character special devices used by VxVM software,
especially if a large number of shared disks are in the cluster. Therefore, add at least 100 Mbytes
to the amount of space you would normally allocate for your root (/) filesystem.
/var - The Sun Cluster software occupies a negligible amount of space in the /var file system at
installation time. However, you need to set aside ample space for log files. Also, more messages
might be logged on a clustered node than would be found on a typical standalone server. Therefore,
allow at least 100 Mbytes for the /var file system.
/usr - Sun Cluster software occupies less than 25 Mbytes of space in the /usr file system. VxVM
software requires less than 15 Mbytes.
/opt - Sun Cluster framework software uses less than 2 Mbytes in the /opt file system. However,
each Sun Cluster data service might use between 1 Mbyte and 5 Mbytes. VxVM software can use over 40
Mbytes if all of its packages and tools are installed. In addition, most database and applications
software is installed in the /opt file system. If you use Sun Management Center software to monitor
the cluster, you need an additional 25 Mbytes of space on each node to support the Sun Management
Center agent and Sun Cluster module packages.
An example system disk layout is as follows:-

Contents          Allocation (in Mbytes)
/                 1168
swap              750
overlap           2028
/globaldevices    100
unused            -
unused            -
unused            -
volume manager    10
1.1.2 Software
For Solaris Operating System (SPARC), Sun Cluster, Volume Manager, and File System support,
consult the operating system vendor and see the RAC/Sun certification matrix. Sun Cluster provides
scalable services with Global File Systems (GFS) based on the Proxy File System (PXFS).
PXFS makes file access location-transparent and is Sun's implementation of a Cluster File System.
Currently, the GFS is supported for Oracle binaries and archive logs only, not for database files.
1.1.3 Patches
The Sun Cluster nodes might require patches. List the patches currently installed with:
$ showrev -p
For the latest Sun Cluster 3.0 required patches see SunSolve document id 24617 Sun Cluster 3.0 Early Notifier.
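To confirm whether a particular required patch is already present, filter the installed-patch list
(the patch ID below is a placeholder for one taken from the Early Notifier document):
$ showrev -p | grep <patch-id>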
If not already installed, install host adapters in your cluster nodes. For the procedure on
installing host adapters, see the documentation that shipped with your host adapters and node
hardware. Install the transport cables (and optionally, transport junctions), depending on how many
nodes are in your cluster:
Sun Cluster supports, with two nodes only, the use of a point-to-point (crossover)
connection, requiring no cluster transport junctions. However, check the RAC/Sun certification
matrix for Oracle's position (since the use of a switch even in a 2-node cluster environment ensures
higher availability). At the time of writing, the RAC Technologies Compatibility Matrix (RTCM) for
Unix platforms/ RAC Technologies Compatibility Matrix (RTCM) for Solaris Clusters states that
crossover cables are not supported as an interconnect with 9iRAC/10gRAC on any platform.
You install the cluster software and configure the interconnect after you have installed all other
hardware.
2. Creating a Cluster
2.1 Sun Cluster Software Installation
The Sun Cluster v3 host system (node) installation process is completed in several major steps. The
general process is:
# cd /cdrom/suncluster_3_0_u2/SunCluster_3.0/Tools
# ./scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** Establishing a New Cluster ***
...
Do you want to continue (yes/no) [yes]? yes
When prompted whether to continue to install Sun Cluster software packages, type yes.
By default, Sun Cluster software permits a node to connect to the cluster only if the node is
physically connected to the private interconnect and if the node name was specified. However, the
node actually communicates with the sponsoring node over the public network, since the private
interconnect is not yet fully configured. DES authentication provides an additional level of
security at installation time by enabling the sponsoring node to more reliably authenticate nodes
that attempt to contact it to update the cluster configuration.
If you choose to use DES authentication for additional security, you must configure all necessary
encryption keys before any node can join the cluster. See
the keyserv(1M)and publickey(4) man pages for details.
Use the default port name for the "adapter" connection (yes/no) [yes]? no
What is the name of the port you want to use? 0
Choose the second cluster interconnect transport adapter.
Type help to list all transport adapters available to the node.
What is the name of the second cluster transport adapter (help) [adapter]?
You can configure up to two adapters by using the scinstall command. You can configure additional
adapters after Sun Cluster software is installed by using the scsetup utility.
If your cluster uses transport junctions, specify the name of the second transport junction and its
port.
Use the default port name for the "adapter" connection (yes/no) [yes]? no
What is the name of the port you want to use? 0
Specify the global devices file system name.
As the installation on each new node completes, each node reboots and comes up in install mode
without a quorum vote. If you reboot the first node at this point, all the other nodes would panic
because they cannot obtain a quorum. You can, however, reboot the second or later nodes freely. They
should come up and join the cluster without errors.
Cluster nodes remain in install mode until you use the scsetup command to reset the install mode.
You must perform postinstallation configuration to take the nodes out of install mode and also to
establish quorum disk(s).
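For example, as root on one node (a sketch assuming the SC3.0 interactive utility; on its first run
after installation, scsetup prompts for quorum configuration and then resets install mode):
# /usr/cluster/bin/scsetup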
Ensure that the first-installed node is successfully installed with Sun Cluster software and
that the cluster is established.
If you are adding a new node to an existing, fully installed cluster, ensure that you have
performed the following tasks.
Prepare the cluster to accept a new node.
Install Solaris Operating System (SPARC) software on the new node.
Become superuser on the cluster node to be installed.
Start the scinstall utility.
# ./scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
3) Configure a cluster to be JumpStarted from this install server
4) Add support for new data services to this cluster node
5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 2
*** Adding a Node to an Established Cluster ***
...
Do you want to continue (yes/no) [yes]? yes
When prompted whether to continue to install Sun Cluster software packages, type yes.
Specify the name of any existing cluster node, referred to as the sponsoring node.
Did you specify that the cluster will use transport junctions? If yes, specify the transport
junctions.
Specify what the first transport adapter connects to. If the transport adapter uses a
transport junction, specify the name of the junction and its port.
What is the name of the second cluster transport adapter (help)? adapter
Specify what the second transport adapter connects to. If the transport adapter uses a
transport junction, specify the name of the junction and its port.
# scdidadm -L
The list on each node should be the same. Output resembles the following.
1  phys-schost-1:/dev/rdsk/c0t0d0  /dev/did/rdsk/d1
2  phys-schost-1:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
2  phys-schost-2:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
3  phys-schost-1:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
3  phys-schost-2:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
...
The scstat utility displays the current status of various cluster components. You can use it to
display the following information:
# /usr/cluster/bin/scstat -q
Cluster configuration information is stored in the CCR on each node. You should verify that the
basic CCR values are correct. The scconf -p command displays general cluster information along
with detailed information about each node in the cluster.
$ /usr/cluster/bin/scconf -p
$ /usr/cluster/bin/scstat -W -h
-- Cluster Transport Paths --
                  Endpoint   Endpoint   Status
                  --------   --------   ------
  Transport path: :ge1       :ge1       Path online
  Transport path: :ge0       :ge0       Path online
Checking Status Using the sccheck Command
The sccheck command verifies that all of the basic global device structure is correct on all
nodes. Run the sccheck command after installing and configuring a cluster, as well as after
performing any administration procedures that might result in changes to the devices, volume
manager, or Sun Cluster configuration.
You can run the command without any options or direct it to a single node. You can run it from any
active cluster member. There is no output from the command unless errors are encountered.
Typical sccheck command variations follow (as root):-
# /usr/cluster/bin/sccheck
# /usr/cluster/bin/sccheck -h
Checking Status Using the scinstall Command
During the Sun Cluster software installation, the scinstall utility is copied to
the /usr/cluster/bin directory. You can run the scinstall utility with options that display
the Sun Cluster revision and/or the names and revision of installed packages. The displayed
information is for the local node only. A typical scinstall status output follows:-
SUNWscvm: 3.0.0,REV=2000.10.01.01.00
SUNWmdm: 4.2.1,REV=2000.08.08.10.01
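This output can be produced with the print options of scinstall (a sketch assuming the Sun Cluster
3.0 option letters for printing release and package information):
# /usr/cluster/bin/scinstall -pv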
Starting & Stopping Cluster NodesThe Sun Cluster software starts automatically during a system
boot operation. Use the init command to shut down a single node. You use the scshutdown command
to shut down all nodes in the cluster.
Before shutting down a node, you should switch resource groups to the next preferred node and then
run init 0 on the node
You can shut down the entire cluster with the scshutdown command from any active cluster node. A
typical cluster shutdown example follows:-
# /usr/cluster/bin/scshutdown -y -g 30
Broadcast Message from root on ...
The cluster will be shutdown in 30 seconds
....
The system is down.
syncing file systems... done
Program terminated
ok
Log Files for Sun Cluster
The log files for Sun Cluster are stored in /var/cluster/logs; installation logs are stored
in /var/cluster/logs/install. Both Solaris
Operating System (SPARC) and Sun Cluster software write error messages to the /var/adm/messages
file, which over time can fill the /var file system. If a cluster node's /var file system fills
up, Sun Cluster might not be able to restart on that node. Additionally, you might not be able to
log in to the node.
Check the RAC/Sun certification matrix for RAC currently supported Volume Managers.
The Real Application Clusters installation process includes four major tasks:
1. Install the operating system-dependent (OSD) clusterware.
2. Configure the shared disks and complete the UNIX preinstallation tasks.
3. Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the
Oracle9i Real Application Clusters software.
4. Create and configure your database.
Install Sun's UDLM support package, SUNWudlm, as superuser:-
# pkgadd -d . SUNWudlm
Once SUNWudlm is installed, Oracle's interface to it, the Oracle UDLM (ORCLudlm), can be installed.
Note:- for the UDLM to work properly, the range of ports indicated
in /opt/SUNWudlm/etc/udlm.conf must be free. The defaults on Solaris 8 64-bit with Sun Cluster 3 are:
udlm.port : 6000
udlm.num_ports : 32
netstat, lsof, and the file /etc/services can help in performing this check, as shown below.
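A minimal sketch of such a check, assuming the default range of 32 ports starting at 6000
(that is, 6000-6031); no output means the range is free:
$ netstat -an | egrep '\.(600[0-9]|601[0-9]|602[0-9]|603[01]) '
$ egrep '[ 	](600[0-9]|601[0-9]|602[0-9]|603[01])/' /etc/services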
To install Sun Cluster Support for RAC with VxVM, the required Sun Cluster 3 Agents data services
packages need to be installed as superuser (see Sun's Sun Cluster 3 Data Services Installation and
Configuration Guide for the package list):-
Shut down all existing clients of the Oracle UNIX Distributed Lock Manager (including all Oracle
Parallel Server/RAC instances).
Become superuser.
Reboot the cluster node in non-cluster mode (replace <nodename> with your cluster node name):-
# scswitch -S -h <nodename>
# shutdown -g 0 -y
... wait for the ok prompt
ok boot -x
Assuming the ORCLudlm distribution has been copied to /tmp:
# cd /tmp
# uncompress ORCLudlm.tar.Z
# tar xvf ORCLudlm.tar
# pkgadd -d . ORCLudlm
The udlm configuration files in SC2.X and SC3.0 are the following:
SC2.X: /etc/opt/SUNWcluster/conf/.ora_cdb
SC3.0: /etc/opt/SUNWcluster/conf/udlm.conf
The udlm log files in SC2.X and SC3.0 are the following:
SC2.X: /var/opt/SUNWcluster/dlm_/logs/dlm.log
SC3.0: /var/cluster/ucmm/dlm_/logs/dlm.log
Now that udlm (also referred to as the "Cluster Membership Monitor") is installed, you can
start it up by rebooting the cluster node in cluster mode:-
# shutdown -g 0 -y -i 6
The following raw volumes are required for the database (one undo tablespace and one set of redo
logs per instance):

Raw Volume                        File Size    File Name
SYSTEM tablespace                 400 Mb       db_name_raw_system_400m
USERS tablespace                  120 Mb       db_name_raw_users_120m
TEMP tablespace                   100 Mb       db_name_raw_temp_100m
UNDOTBS tablespace per instance   312 Mb       db_name_raw_undotbsx_312m
CWMLITE tablespace                100 Mb       db_name_raw_cwmlite_100m
EXAMPLE tablespace                160 Mb       db_name_raw_example_160m
OEMREPO tablespace                20 Mb        db_name_raw_oemrepo_20m
INDX tablespace                   70 Mb        db_name_raw_indx_70m
TOOLS tablespace                  12 Mb        db_name_raw_tools_12m
DRSYS tablespace                  90 Mb        db_name_raw_drsys_90m
First control file                110 Mb       db_name_raw_controlfile1_110m
Second control file               110 Mb       db_name_raw_controlfile2_110m
Redo log files per instance       120 Mb x 2   db_name_thread_lognumber_120m
spfile.ora                        5 Mb         db_name_raw_spfile_5m
srvmconfig                        100 Mb       db_name_raw_srvmconf_100m
Note: Automatic Undo Management requires an undo tablespace per instance; for a two-instance
cluster you would therefore require a minimum of two undo tablespaces, as described above. By
following the naming convention described in the table above, raw partitions are identified with
the database and the raw volume type (the data contained in the raw volume). Raw volume size is
also identified using this method.
Note: In the sample names listed in the table, the string db_name should be replaced with the
actual database name, thread is the thread number of the instance, and lognumber is the log number
within a thread.
On the node from which you run the Oracle Universal Installer, create an ASCII file identifying the
raw volume objects as shown above. The DBCA requires that these objects exist during installation
and database creation. When creating the ASCII file content for the objects, name them using the
format:
database_object=raw_device_file_path
When you create the ASCII file, separate the database objects from the paths with equals (=) signs
as shown in the example below:-
system=/dev/vx/rdsk/oracle_dg/db_name_raw_system_400m
spfile=/dev/vx/rdsk/oracle_dg/db_name_raw_spfile_5m
users=/dev/vx/rdsk/oracle_dg/db_name_raw_users_120m
temp=/dev/vx/rdsk/oracle_dg/db_name_raw_temp_100m
undotbs1=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs1_312m
undotbs2=/dev/vx/rdsk/oracle_dg/db_name_raw_undotbs2_312m
example=/dev/vx/rdsk/oracle_dg/db_name_raw_example_160m
cwmlite=/dev/vx/rdsk/oracle_dg/db_name_raw_cwmlite_100m
indx=/dev/vx/rdsk/oracle_dg/db_name_raw_indx_70m
tools=/dev/vx/rdsk/oracle_dg/db_name_raw_tools_12m
drsys=/dev/vx/rdsk/oracle_dg/db_name_raw_drsys_90m
control1=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile1_110m
control2=/dev/vx/rdsk/oracle_dg/db_name_raw_controlfile2_110m
redo1_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log11_120m
redo1_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log12_120m
redo2_1=/dev/vx/rdsk/oracle_dg/db_name_raw_log21_120m
redo2_2=/dev/vx/rdsk/oracle_dg/db_name_raw_log22_120m
You must specify that Oracle should use this file to determine the raw device volume names by
setting the DBCA_RAW_CONFIG environment variable (see section 3.2.1) to the name of the ASCII file
that contains the entries shown in the example above.
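For example, in Bourne shell (the file path is a placeholder; use the location where you saved the
mapping file):
$ DBCA_RAW_CONFIG=/var/opt/oracle/db_name_raw.conf
$ export DBCA_RAW_CONFIG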
Make sure you have an osdba group defined in the /etc/group file on all nodes of your
cluster. To designate an osdba group name and group number and osoper group during installation,
these group names must be identical on all nodes of your UNIX cluster that will be part of the Real
Application Clusters database. The default UNIX group name for the osdba and osoper groups is dba.
A typical entry would therefore look like the following:
dba::101:oracle
oinstall::102:root,oracle
Create a mount point directory on each node to serve as the top of your Oracle
software directory structure so that:
The name of the mount point on each node is identical to that on the initial node
Set the following UNIX kernel parameters (values are minimum recommendations):

Setting        Value         Purpose
SHMMAX         4294967295    Maximum allowable size of one shared memory segment (4 Gb).
SHMMIN         1             Minimum allowable size of a single shared memory segment.
SHMMNI         100           Maximum number of shared memory segments in the entire system.
SHMSEG         10            Maximum number of shared memory segments one process can attach.
SEMMNI         1024          Maximum number of semaphore sets in the entire system.
SEMMSL         100           Minimum recommended value. SEMMSL should be 10 plus the largest
                             PROCESSES parameter of any Oracle database on the system.
SEMMNS         1024          Maximum semaphores on the system. This setting is a minimum
                             recommended value. SEMMNS should be set to the sum of the PROCESSES
                             parameter for each Oracle database, add the largest one twice, plus
                             add an additional 10 for each database.
SEMOPM         100           Maximum number of operations per semop call.
SEMVMX         32767         Maximum value of a semaphore.
(swap space)   750 MB        Two to four times your system's physical memory size.
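On Solaris these parameters are set in /etc/system and take effect after a reboot. A sketch
matching the values above, assuming the standard shmsys/semsys tunable names:
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767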
Set a local bin directory, such as /usr/local/bin or /opt/bin, in the user's PATH. You must
have execute permission on this directory.
Set the DISPLAY variable to point to the IP address or name, X server, and screen of the
system from which you will run the OUI.
Set a temporary directory path for TMPDIR, with at least 20 Mb of free space, to which the OUI
has write permission.
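In Bourne shell these might be set as follows (the workstation name and temporary path are
placeholders):
$ DISPLAY=workstation1:0.0; export DISPLAY
$ TMPDIR=/u01/tmp; export TMPDIR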
Establish Oracle environment variables: Set the following Oracle environment variables:

Environment Variable   Suggested Value
ORACLE_BASE            e.g. /u01/app/oracle
ORACLE_HOME            e.g. /u01/app/oracle/product/9201
ORACLE_TERM            xterm
NLS_LANG               AMERICAN_AMERICA.UTF8, for example
ORA_NLS33              $ORACLE_HOME/ocommon/nls/admin/data
PATH                   Should contain $ORACLE_HOME/bin
CLASSPATH              $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
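A Bourne-shell sketch of these settings (paths follow the suggested values above; adjust for your
installation):
$ ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
$ ORACLE_HOME=$ORACLE_BASE/product/9201; export ORACLE_HOME
$ ORACLE_TERM=xterm; export ORACLE_TERM
$ NLS_LANG=AMERICAN_AMERICA.UTF8; export NLS_LANG
$ ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data; export ORA_NLS33
$ PATH=$ORACLE_HOME/bin:$PATH; export PATH
$ CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
$ export CLASSPATH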
Create the directory /var/opt/oracle and set ownership to the oracle user.
Verify the existence of the file /opt/SUNWcluster/bin/lkmgr. This is used by the
OUI to indicate that the installation is being performed on a cluster.
Note: There is a verification script InstallPrep.sh available which may be downloaded and run prior
to the installation of Oracle Real Application Clusters. This script verifies that the system is
configured correctly according to the Installation Guide. The output of the script will report any
further tasks that need to be performed before successfully installing Oracle 9.x DataServer
(RDBMS). A sample run of the script, showing the verifications it performs, follows:
. ./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? y or n
y
Enter the unix group that will be used during the installation
Default: dba
dba
Enter Location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0.1
Your Operating System is SunOS
Gathering information... Please wait
Checking unix user ...
user test passed
Checking unix umask ...
umask test passed
Checking unix group ...
Unix Group test passed
Checking Memory & Swap...
Memory test passed
/tmp test passed
Checking for a cluster...
SunOS Cluster test
3.x has been detected
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters... Please wait
Running Kernel Parameter Report...
Check the report for Kernel parameter verification
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before attempting to
install the Oracle Database Software
Run lsnodes (staged with the Oracle Installer temporary files) to see which nodes are
available.
Check that the file /etc/opt/SUNWcluster/conf/SC30.cdb has the correct host names.
Ensure that the udlm is running correctly on all nodes.
To install the Oracle Software, perform the following (no license key or registration is required
for RAC):-
$ /<cdrom_mount_point>/runInstaller
At the OUI Welcome screen, click Next.
A prompt will appear for the Inventory Location (if this is the first time that OUI has been
run on this system). This is the base directory into which OUI will install files. The Oracle
Inventory definition can be found in the file /var/opt/oracle/oraInst.loc. Click OK.
Verify the UNIX group name of the user who controls the installation of the
Oracle9i software. If an instruction to run /tmp/orainstRoot.sh appears, the pre-installation
steps were not completed successfully. Typically, the /var/opt/oracle directory does not exist
or is not writeable by oracle. Run /tmp/orainstRoot.sh to correct this, forcing Oracle
Inventory files, and others, to be written to the ORACLE_HOME directory. Once again this screen
only appears the first time Oracle9i products are installed on the system. Click Next.
The File Location window will appear. Do NOT change the Source field. The Destination field
defaults to the ORACLE_HOME environment variable. Click Next.
Select the Products to install. In this example, select the Oracle 9i Server then
click Next.
Select the installation type. Choose the Enterprise Edition option. The selection on this
screen refers to the installation operation, not the database configuration. The next screen allows
for a customized database configuration to be chosen. Click Next.
Select the configuration type. In this example you choose the Advanced Configuration as this
option provides a database that you can customize, and configures the selected server products.
Select Customized and click Next.
Select the other nodes onto which the Oracle RDBMS software will be installed. It is not
necessary to select the node on which the OUI is currently running. Click Next.
Identify the raw partition into which the Oracle9i Real Application Clusters (RAC)
configuration information will be written. It is recommended that this raw partition be a minimum
of 100MB in size.
An option to Upgrade or Migrate an existing database is presented. Do NOT select the radio
button. The Oracle Migration utility is not able to upgrade a RAC database and will fail with an
error if asked to do so.
The Summary screen will be presented. Confirm that the RAC database software will be
installed and then click Install. The OUI installs the Oracle9i RAC software onto the local node
and then copies the software to the other nodes selected earlier. This will take some time. During
the installation process, the OUI does not display messages indicating that components are being
installed on other nodes; I/O activity may be the only indication that the process is continuing.
Creating the database then involves the following tasks:
Verify that you correctly configured the shared disks for each tablespace (for non-cluster
file system platforms).
Create the database.
Configure the Oracle network services.
Start the database instances and listeners.
Oracle Corporation recommends that you use the DBCA to create your database. This is because the
DBCA preconfigured databases optimize your environment to take advantage of Oracle9i features such
as the server parameter file and automatic undo management. The DBCA also enables you to define
arbitrary tablespaces as part of the database creation process. So even if you have datafile
requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also
execute user-specified scripts as part of the database creation process.
The DBCA and the Oracle Net Configuration Assistant (NETCA) also accurately configure your Real
Application Clusters environment for various Oracle high availability features and cluster
administration tools.
Note: Prior to running the DBCA, it may be necessary to run the NETCA tool or to manually set up
your network files. To run the NETCA tool, execute the command netca from
the $ORACLE_HOME/bin directory. This will configure the necessary listener names and protocol
addresses, client naming methods, Net service names, and Directory server usage. Also, it is
recommended that the Global Services Daemon (GSD) is started on all nodes prior to running the
DBCA. To start the GSD, execute the command gsd from the $ORACLE_HOME/bin directory.
DBCA will launch as part of the installation process, but can be run manually by executing
the command dbca from the $ORACLE_HOME/bin directory on UNIX platforms. The RAC Welcome Page
displays. Choose Oracle Cluster Database option and select Next.
The Operations page is displayed. Choose the option Create a Database and click Next.
The Node Selection page appears. Select the nodes that you want to configure as part of the
RAC database and click Next. If nodes are missing from the Node Selection then perform clusterware
diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output.
Refer to your vendor's clusterware documentation if the output indicates that your clusterware is
not properly installed. Resolve the problem and then restart the DBCA.
The Database Templates page is displayed. The templates other than New Database include
datafiles. Choose New Database and then click Next.
The Show Details button provides information on the database template selected.
DBCA now displays the Database Identification page. Enter the Global Database
Name and Oracle System Identifier (SID). The Global Database Name is typically of the
form name.domain, for example mydb.us.oracle.com while the SID is used to uniquely identify an
instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the
Database Name field). In the RAC case the SID specified will be used as a prefix for the instance
number. For example, MYDB would become MYDB1 and MYDB2 for instances 1 and 2 respectively.
The Database Options page is displayed. Select the options you wish to configure and then
choose Next. Note: If you did not choose New Database from the Database Template page, you will not
see this screen.
The Additional database Configurations button displays additional database features. Make
sure both are checked and click OK.
Select the connection options desired from the Database Connection Options page. Note: If
you did not choose New Database from the Database Template page, you will not see this screen.
Click Next.
DBCA now displays the Initialization Parameters page. This page comprises a number of Tab
fields. Modify the Memory settings if desired and then select the File Locations tab to update
information on the Initialization Parameters filename and location. Then click Next.
The option Create persistent initialization parameter file is selected by default. If
you have a cluster file system, then enter a file system name, otherwise a raw device name for
the location of the server parameter file (spfile) must be entered. Then click Next.
The button File Location Variables displays variable information. Click OK.
The button All Initialization Parameters displays the Initialization Parameters dialog
box. This box presents values for all initialization parameters and indicates whether they are to be
included in the spfile to be created through the check box, included (Y/N). Instance specific
parameters have an instance value in the instance column. Complete entries in the All
Initialization Parameters page and select Close. Note: There are a few exceptions to what can be
altered via this screen. Ensure all entries in the Initialization Parameters page are complete and
select Next.
DBCA now displays the Database Storage Window. This page allows you to enter file names
for each tablespace in your database.
The file names are displayed in the Datafiles folder, but are entered by selecting
the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any names
displayed here can be changed. A configuration file, pointed to by the environment variable
DBCA_RAW_CONFIG, can be used to supply these names (see section 3.2.1). Complete the database
storage information and click Next.
The Database Creation Options page is displayed. Ensure that the option Create
Database is checked and click Finish.
The DBCA Summary window is displayed. Review this information and then click OK.
Once the Summary screen is closed using the OK option, DBCA begins to create the database
according to the values specified.
A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to
work with an Oracle RAC database.
$ vi /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/vx/rdsk/datadg/rac_srvconfig_10m
Then execute the following command to initialize this raw volume (Note: This cannot be run while
the gsd is running. Prior to 9i Release 2 you will need to kill
the .../jre/1.1.8/bin/... process to stop the gsd from running. From 9i Release 2 use
the gsdctl stop command):-
$ srvconfig -init
The first time you use the SRVCTL Utility to create the configuration, start the Global Services
Daemon (GSD) on all nodes so that SRVCTL can access your cluster's configuration information:-
$ gsdctl start
Successfully started the daemon on the local node.
Then execute the srvctl add command so that Real Application Clusters knows what instances belong
to your cluster, using the following syntax:-
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]
Then for each instance enter the command (replace db_name, instance_name, and node_name as
appropriate):
$ srvctl add instance -d db_name -i instance_name -n node_name
The resulting configuration can then be verified:-
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1
$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
An example of stopping a database follows:-
$ srvctl stop -p racdb2
successfully stopped on node: racnode2
successfully stopped on node: racnode1
successfully stopped on node: racnode2
successfully stopped on node: racnode1
Examples of starting and stopping RAC follow:-
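(A sketch assuming the 9i Release 2 srvctl syntax, with the database and instance names from the
examples above.)
$ srvctl start database -d racdb1
$ srvctl stop database -d racdb1
$ srvctl start instance -d racdb1 -i racinst1
$ srvctl stop instance -d racdb1 -i racinst1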
5.0 References
<<Note 148673.1>> - SOLARIS: Quick Start Guide - 9.0.x RDBMS Installation
<<Note:178644.1>> - Veritas Volume Manager on Solaris & Real Application Clusters
<<Note:160121.1>> - Introduction to Sun Cluster v3
<<Note:160120.1>> - Oracle Real Application Clusters on Sun Cluster v3
<<Note:137288.1>> - Database Creation in Oracle9i RAC
<<Note:263699.1>> - The use of RSM with Oracle RAC
Increase Scalability through Real Application Clusters (RAC) on Solaris
RAC/Sun certification matrix
Stripe And Mirror Everything, Optimal Storage Configuration Made Easy, Juan Loaiza, Vice
President Systems Technology Group
Oracle9i Real Application Clusters Administration
Oracle9i Real Application Clusters Concepts
Oracle9i Real Application Clusters Deployment and Performance
Oracle9i Real Application Clusters Documentation Online Roadmap
Oracle9i Real Application Clusters Real Application Clusters Guard I - Concepts and
Administration
Oracle9i Real Application Clusters Real Applications Clusters Guard I Configuration Guide
Release 2 (9.2.0.1.0) for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX,
and Sun Solaris
Oracle9i Real Application Clusters Setup and Configuration
Oracle9i Release Notes for Sun Solaris (32-bit)
Oracle9i Release Notes for Sun Solaris (64-bit)
Oracle9i Installation Guide Release 2 for UNIX Systems: AIX-Based Systems, Compaq Tru64
UNIX, HP 9000 Series HP-UX, Linux Intel, and Sun Solaris
Sun Cluster 3.0 Release Notes
Sun Cluster 3.0 Installation Guide, Part No. 806-1419-10
Sun Cluster 3.0 System Administration Guide, Part No. 806-1423-10
Sun Cluster 3.0 Hardware Guide, Part No. 806-1420-10
Sun Cluster 3.0 Concepts, Part No. 806-1424-1
Sun Cluster 3.0 Error Messages Manual, Part No. 806-1426-10
Sun Cluster 3.0 Data Services Installation and Configuration Guide, Part No. 806-1421-10
Sun Cluster 3.0 Administration Training Course ES-333, February 2001, Revision A
Designing Enterprise Solutions with Sun Cluster 3.0 (Ch 6 - Database Cluster), Richard
Elling & Tim Read, Sun Microsystems Press, Prentice Hall, 2002