Contents
1. Overview
2. Oracle Real Application Cluster (RAC) 10g Introduction
3. FAQs about Oracle10g Standard Edition and RAC
4. Configuring the Operating System and Hardware
5. Downloading Oracle RAC 10g Software
6. Installing Oracle Cluster Ready Services (CRS) Software
7. Stamp the Logical Drives for ASM
8. Installing Oracle10g Database Software with RAC
9. Ensure Valid Environment Variables on Both Nodes
10. Creating / Altering Tablespaces
11. Applying the Oracle10g Release 1 (10.1.0.4) Patch Set 2 for Microsoft
Windows
12. Verifying TNS Networking Files
13. Verifying the RAC Cluster / Database Configuration
14. Starting & Stopping the Cluster
15. Creating Second Disk Group for Flash Recovery Area
16. Enterprise Manager - Database Console
17. Transparent Application Failover - (TAF)
18. About the Author
Overview
Oracle extended its high availability product offerings by licensing Oracle
RAC 10g with Standard Edition. This has been a welcome move for many customers,
who can now take advantage of an active-active database clustering solution without
having to move to the more expensive Enterprise Edition.
This article provides a detailed guide to the tasks required to install and configure
Oracle RAC 10g using Standard Edition on the Microsoft Windows Server 2003
operating environment. The configuration will consist of a two-node cluster, using
Oracle Real Application Clusters (RAC) and Automated Storage Management (ASM)
for the physical database files. (Using ASM is a requirement when configuring Oracle
RAC 10g Standard Edition!)
Before discussing the details of the installation, let's take a conceptual look at what
the environment will look like:
[Conceptual diagram of the two-node Oracle RAC 10g environment]
The complete installation will consist of two phases. The first phase will be the
installation and configuration of the Cluster Ready Services (CRS) software. The
second phase will consist of installing the Oracle Database / RAC software. (All
software components are available from the Oracle Technology Network (OTN).) At
the end of the installation, we will then create a general purpose clustered database
with all sample schemas.
Oracle Real Application Cluster (RAC) 10g Introduction
Oracle Real Application Cluster (RAC) is the successor to Oracle Parallel Server
(OPS) and was first introduced in Oracle9i. RAC allows multiple instances to access
the same database (storage) simultaneously. RAC provides fault tolerance, load
balancing, and performance benefits by allowing the system to scale out, and at the
same time since both nodes access the same database, the failure of one instance will
not cause the loss of access to the database.
At the heart of Oracle RAC 10g is a shared disk subsystem. Both nodes in the cluster
must be able to access all of the data, redo log files, control files and parameter files.
The data disks must be globally available in order to allow both nodes to access the
database. Each node has its own redo log and control files, but the other nodes must
be able to access them in order to recover that node in the event of a system failure.
Not all clustering solutions use shared storage. Some vendors use an approach known
as a federated cluster, in which data is spread across several machines rather than
shared by all. With Oracle RAC 10g, however, multiple nodes use the same set of
disks for storing data. With Oracle RAC 10g, the data files, redo log files, control
files, and archived log files reside on shared storage on raw-disk devices, a NAS,
ASM, or on a clustered file system. Oracle's approach to clustering leverages the
collective processing power of all the nodes in the cluster and at the same time
provides failover security.
The biggest difference between Oracle RAC and OPS is the addition of Cache Fusion.
With OPS, a request for data from one node to another required the data to be written
to disk first; only then could the requesting node read it. With Cache Fusion, data is
passed across the interconnect along with locks.
Pre-configured Oracle RAC 10g solutions are available from vendors such as Dell,
IBM and HP for production environments.
FAQs about Oracle10g Standard Edition and RAC
One of the key reasons for writing this article is the popularity RAC is
enjoying now that it is packaged free with Oracle10g Standard Edition. I have, however,
been fielding many questions regarding the constraints and supported configurations for
this type of install. In this section, I attempt to lay out some of the more popular
questions and answers that I have seen. I have made every attempt, and taken great care,
to ensure that the answers I provide here are accurate. While some of them rely on
official Oracle documentation, others were obtained after talking with Oracle
Corporation's technical support staff. Please let me know if any of the questions and
answers appear to be incorrect.
1. Can I use the Oracle Cluster File System (OCFS) with 10g
Standard Edition RAC?
No. OCFS is not supported with Standard Edition RAC. All database
files (redo logs, recovery area, datafiles, control files, etc.) must
use ASM. It is recommended that the binaries and trace/log files (files
that ASM does not support) be replicated on both nodes rather than placed
on any shared disk. This is performed automatically by the installer.
5. Can both nodes in the cluster share the same binaries for CRS and
for the Oracle10g install?
The CRS and database software installs must have separate homes, but they can
be on a shared drive. Simply specify a shared drive at installation time.
However if you place them on shared drives then all instances of your
database will have to be stopped if you upgrade the software. The
database can be more available during software upgrades if the homes
are on local drives. Whether this is of consequence to you, only you
can decide.
7. I believe that in OCFS V1 for Windows, you could not use it for
sharing the ORACLE_HOME for either the CRS binaries or the
10g ORACLE_HOME binaries.
In Windows you can use the latest OCFS versions delivered with
V9205 and 10g to place the ORACLE_HOME on OCFS. In neither
OCFS V1 nor V2 can you place the CRS home on an OCFS
drive.
It rules out OCFS, not shared file systems in general. Standard Edition
does however assume that the files are RAW. Any shared disks
supported by your hardware vendor, other than Network Attached
Storage (NAS), can be used for Windows.
9. The OUI does not seem to "prevent" us from using the OCFS as a
storage mechanism for the Voting Disk and Cluster Registry File
when installing Standard Edition. (Not that we would do that - we
would always honor the license agreement we have - just trying to
better understand our options!)
True, OUI does not prevent you from selecting OCFS but it won't work
with Standard Edition.
10. Is it possible to put the Voting Disk and Cluster Registry File on
OCFS in Enterprise Edition?
Yes, in Enterprise Edition you can use OCFS for the OCR and voting
disks.
13. I have been following the activities on OCFS V2 for Linux, but
where do we look to find information on OCFS V2 for Windows?
(i.e. Beta versions, release dates?)
The Linux versions of OCFS are open source and the progress can be
followed via http://oss.oracle.com.
The hardware used to build our Oracle RAC 10g environment consists of two
Windows servers and components that can be purchased over the Internet.
Server 1 - (windows1)
Dimension 2400 Series
- Intel Pentium 4 Processor at 2.80GHz
- 1GB DDR SDRAM (at 333MHz)
- 40GB 7200 RPM Internal Hard Drive
- Integrated Intel 3D AGP Graphics
- Integrated 10/100 Ethernet
- CDROM (48X Max Variable)
- 3.5" Floppy
- No monitor (Already had one)
- USB Mouse and Keyboard $620
1 - SCSI Card
- Dual Differential Ultra/Wide SCSI (PCI) [595-4414] (X6541A) [Manf# 375-0006]
Note that you will need to choose a host adapter that is compatible with your shared storage
subsystem. $195
Server 2 - (windows2)
Dimension 2400 Series $620
- Intel Pentium 4 Processor at 2.80GHz
- 1GB DDR SDRAM (at 333MHz)
- 40GB 7200 RPM Internal Hard Drive
- Integrated Intel 3D AGP Graphics
- Integrated 10/100 Ethernet
- CDROM (48X Max Variable)
- 3.5" Floppy
- No monitor (Already had one)
- USB Mouse and Keyboard
1 - SCSI Card
- Dual Differential Ultra/Wide SCSI (PCI) [595-4414] (X6541A) [Manf# 375-0006]
Note that you will need to choose a host adapter that is compatible with your shared storage
subsystem. $195
Miscellaneous Components
Shared Storage / Disk Array
- Sun StorEdge D1000 Disk Array (JBOD)
I am using a Sun StorEdge D1000 Disk Array in a JBOD configuration with 12 x 9GB 10000
RPM UltraSCSI hard drives. $499
2 - SCSI Cables
- 2 Meter External SCSI Cable - [530-2453] [HD68 to VHDCI68] (X3832A) $40
- 2 Meter External SCSI Cable - [530-2453] [HD68 to VHDCI68] (X3832A) $40
Note that you will need to choose cables that are compatible with your shared storage
subsystem and I/O host adapter.
4 - Network Cables
- Category 5e patch cable - (Connect windows1 to public network) $5
- Category 5e patch cable - (Connect windows2 to public network) $5
- Category 5e patch cable - (Connect windows1 to interconnect ethernet switch) $5
- Category 5e patch cable - (Connect windows2 to interconnect ethernet switch) $5
Total $2,299
A question I often receive is about substituting the Ethernet switch (used for
interconnect int-windows1 / int-windows2) with a crossover CAT5 cable. I would
not recommend this. I have found that when using a crossover CAT5 cable for the
interconnect, whenever I took one of the PCs down, the other PC would detect a
"cable unplugged" error, and thus the Cache Fusion network would become
unavailable.
In this section, we look at the operating system and talk a little about the hardware
requirements for installing Oracle RAC 10g (Standard Edition) on the Microsoft
Windows Server 2003 platform. All topics discussed in this section need to be
performed on both nodes in the cluster.
Memory
Verify that both nodes are equipped with a minimum of 512MB of RAM (for 32-bit
systems) and 1GB (for 64-bit systems). If either node fails to meet the memory
requirement, you will need to install more RAM before continuing.
Verify that the size of the configured swap space (also known as the paging file size)
is at least twice the amount of physical RAM. From the Control Panel, open the
System applet, select the Advanced tab, and then click the Performance Settings
button.
For Windows 2003 users, click on the Advanced tab from the Performance Options
dialog.
Hard Drives
Each of the nodes in the cluster will need to have a local hard drive installed as well
as access to a set of disks that can be shared between both nodes in the cluster. Any
shared disks that can be supported by your vendor can be used with the exception of a
Network Attached Storage (NAS) device. The shared disk subsystem must be
connected to both nodes in the cluster and all nodes must be able to read and write to
the disks.
The following table describes where the different software components will be stored:
Network
We now look at the network configuration for both nodes. The nodes in the cluster
need to be able to communicate with each other (known as the interconnect) and to
the public network where external clients can establish connections to Oracle through
using TCP/IP. Although not a strict requirement, it is highly recommended that two
NIC interfaces be installed in both nodes: one for the public network and another
(preferably on a different / private subnet) for the interconnect.
A virtual IP address (VIP) will be configured on both nodes in the cluster to provide
for high availability and failover. This VIP address can be moved between the nodes
in a cluster in case of failure. The VIP addresses are managed by the Cluster Ready
Services (CRS) software component. In order to support the VIP address feature, each
of the nodes will require an unused IP address that is compatible with the public
network's subnet and netmask. Like the publicly accessible IP address for the nodes in
the cluster, the VIP address and hostname should be stored in the domain name
system (DNS).
Communication between the two nodes (or all of the nodes in the cluster) requires a
private network. The private network should only allow traffic for the interconnect
and should not be allowed access from outside of the cluster. Both nodes should have
a separate network adapter configured for this private network. While the public
network (and VIP address) should be entered into DNS, the private network should
not be. The network configuration (host name and IP address) for the private network
should be done on each node in the cluster using their hosts file:
%SystemRoot%\system32\drivers\etc\hosts. The following table displays my
network configuration used for this article:
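To make this concrete, here is a sketch of hosts file entries matching the configuration described above. The host names follow the ones used in this article (windows1, windows2, int-windows1, int-windows2); the IP addresses themselves are examples and will differ in your environment:

```
# %SystemRoot%\system32\drivers\etc\hosts  (same entries on both nodes)

# Public network - these names should also be registered in DNS
192.168.1.100    windows1
192.168.1.101    windows2

# Virtual IPs (VIPs) - unused addresses on the public subnet, also in DNS
192.168.1.200    vip-windows1
192.168.1.201    vip-windows2

# Private interconnect - hosts file only, never registered in DNS
192.168.2.100    int-windows1
192.168.2.101    int-windows2
```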
10. Verify that you have administrative privileges on the other node in the
cluster. To do this, from the node you will be performing the
installation from, enter the following command for the second node:
We now look at one of the more critical steps in the process and that is to configure
the shared disk subsystem for use with RAC Standard Edition.
1. Disable Write Caching
The first step is to disable write caching on all shared disks that
are intended to be used for database files. This needs to be
performed on both nodes in the cluster:
The last step is to configure the shared disks for use with
Automatic Storage Management (ASM). Oracle's ASM storage
product consists of one or more disk groups, each of which can
span multiple disks. I will be using Normal Redundancy when
creating my ASM disk group. This type of configuration
requires at least two logical drives. For the purpose of this
example, I will be using a total of four disks for 1 ASM disk
group to store all physical database files named
ORCL_DATA1. I will then be creating another ASM disk
group consisting of four disks for the flash recovery area named
ORCL_FRA1. So, I will need to initialize a total of 8 drives
from the array. To prepare each logical drive, follow the steps
below for each drive.
Keep in mind that Oracle highly recommends that each disk in the disk group be the
same size. When configuring disks for an ASM disk group, it is best practice to
create extended partitions of the same size on each of the disks. There should be only
one extended partition on each disk, and it should take up the entire disk.
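As a sketch of what that partitioning looks like from the command line, the diskpart utility can create the extended partition and the logical drive. The disk number below is an example only; the same steps can equally be done through the Disk Management GUI:

```
C:\> diskpart

DISKPART> list disk
DISKPART> select disk 2
DISKPART> rem Create one extended partition spanning the entire disk
DISKPART> create partition extended
DISKPART> rem Create the logical drive inside it
DISKPART> create partition logical
DISKPART> rem Make sure no drive letter is assigned to the logical drive
DISKPART> remove noerr
DISKPART> exit
```

Repeat the sequence for each of the shared drives that will hold ASM files.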
Downloading Oracle RAC 10g Software
Overview
The next logical step is to install Oracle Cluster Ready Services and the Oracle
Database 10g software. However, we must first download and extract the required
Oracle software packages from the Oracle Technology Network (OTN).
If you do not currently have an account with Oracle OTN, you will need to create
one. This is a FREE account!
In this section, we will be downloading and extracting the required software from
Oracle to only one of the Windows nodes in the RAC cluster - namely windows1. This
is the machine where I will be performing all of the installs from. The Oracle installer
will copy the required software packages to all other nodes in the RAC configuration.
Verify that you have administrative privileges on the second node. To do this,
from the node you will be installing from, enter the following command for each
other node that is part of the cluster:
C:\> net use \\<node_name>\C$
where node_name is the node name. For example, from windows1, enter:
C:\> net use \\windows2\C$
Also, the password for the account (the local Administrator account in my
example) should be the same on both nodes in the cluster!
Login to one of the nodes in the Windows RAC cluster as the user that will be
performing the installs. In this example, I will be downloading the required Oracle
software to windows1 and saving them to "C:\orainstall\crs" and
"C:\orainstall\db".
First, download Oracle Database 10g Release 1 (10.1.0.2) for Microsoft Windows
(32-bit).
Next, download Oracle Cluster Ready Services Release 1 (10.1.0.2) for Microsoft
Windows (32-bit). This can be downloaded from the same page used to download the
Oracle Database Server.
Extract the two packages you downloaded to a temporary directory. In this example, I
will use "C:\orainstall\crs" and "C:\orainstall\db".
Installing Oracle Cluster Ready Services (CRS) Software
This section describes the first phase of the installation of Oracle RAC 10g - installing
the Cluster Ready Services (CRS).
1. We start by running the setup.exe command from within the staging
directory we downloaded the software to (or from the CD-ROM, if that is
how you are installing). This will start the Oracle Universal Installer (OUI).

cd \orainstall\crs
setup.exe
Next, enter the public and private node name for both nodes in the
cluster. Neither of the node names should include the domain qualifier.
The values I supplied are as follows:
7. On the next page, "Specify Network Interface Usage", the OUI displays a
list of cluster-wide interfaces. The default value for each of the
interfaces is "Do Not Use". For each of the Interface Names, select
one to be Public and the other to be Private. The values I supplied are as follows:

Interface Name             Subnet         Interface Type
Local Area Connection      192.168.1.0    Public
Local Area Connection 2    192.168.2.0    Private
8. Click [Next] to continue.
9. On the next screen, "Select Disk Formatting Option", you MUST select
the Do not format any logical drives option. Oracle does not support
using the Oracle Cluster File System (OCFS) in Standard Edition.
Click [Next] to continue.
Do not select either of the options that require a formatted drive; those
options are implemented only in Oracle Database 10g Enterprise Edition.
10. On the next page, "Disk Configuration - Oracle Cluster Registry
(OCR)", locate the partition that we created to hold the OCR (100MB)
file and select that partition's disk number and partition number from
the list. Click [Next] to continue.
11. On the next page, "Disk Configuration - Voting Disk", locate the
partition that we created to hold the Voting disk (20MB) file and select
that partition's disk number and partition number from the list. Click
[Next] to continue.
12. On the Summary page, click [Install] to start the installation
process. The OUI displays the Install page with an installation progress
bar.
13. At the end of the installation phase, the OUI runs a series of
configuration tools, during which it displays the "Configuration
Assistants" page.
14. After the configuration tools complete their processing, (which the user
can monitor on the "Configuration Assistants" page), the OUI displays
the "End of Installation" page.
15. Click the [Exit] button to exit the OUI.
Four new services should now be running on both nodes in the cluster.
Verify this by opening the Windows "Services" application or running the net start
command from a Command window on both nodes in the cluster.
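For example, the four services can be confirmed from a Command window by filtering the service list; after a successful CRS install you should see something like:

```
C:\> net start | find "Oracle"
Oracle Object Service
OracleCRService
OracleCSService
OracleEVMService
```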
Stamp the Logical Drives for ASM
With Oracle RAC 10g Standard Edition, Oracle requires you to use ASM for all
database (and flash recovery) files. To enable disk discovery during the Oracle
database installation, the logical drives to be used to store the database files (and those
to be used for the flash recovery area) must be stamped with an ASM header using a
GUI tool called asmtoolg. All disk names created by this tool begin with the prefix
ORCLDISK for identification purposes.
The following procedure should be used to stamp all logical drives that will be used
to store database and flash recovery files. These actions should only be performed on
one of the nodes in the cluster, preferably the node from which you performed the CRS
installation.
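For reference, asmtoolg has a command-line companion, asmtool, that performs the same stamping (it ships alongside asmtoolg on the database installation media). The device path and label below are examples only, so check the drive numbers on your own system first:

```
:: List the logical drives and any existing ASM stamps
C:\> asmtool -list

:: Stamp one logical drive (device path and label are examples)
C:\> asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0
```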
Installing Oracle10g Database Software with RAC
We now get to start the second and final stage for a fully configured Oracle RAC 10g
environment - installing the Oracle10g database software with RAC.
1. As with installing the CRS software, navigate to the staging directory
we downloaded the Oracle database software to and run the setup.exe
file (or run it from the CD-ROM). This will start the Oracle
Universal Installer (OUI).

cd \orainstall\db\Disk1
setup.exe
And that's it!!!! The second and final phase of the installation is complete. The
remainder of this article is dedicated to final configuration steps that should be
performed.
Ensure Valid Environment Variables on Both Nodes
1. Navigate to [Start] -> [Settings] -> [Control Panel] -> [System] ->
[Advanced] -> [Environment Variables]
2. In the "System variables" dialog, select the Path variable and ensure
that the value for the Path variable contains %ORACLE_HOME%\bin,
where %ORACLE_HOME% is the new Oracle home for the Oracle10g
database software. If the variable does not contain this value (and the
following values), then click Edit and add this value to the start of the
Path variable definition in the "Edit System Variable" dialog. Click OK
when complete. Here is a list of other values that I have defined at the
start of my Path environment variable for each node:
C:\oracle\product\10.1.0\db_1\bin;
C:\oracle\product\10.1.0\db_1\jre\1.4.2\bin\client;
C:\oracle\product\10.1.0\db_1\jre\1.4.2\bin;
C:\oracle\product\10.1.0\crs\jre\1.4.2\bin\client;
C:\oracle\product\10.1.0\crs\jre\1.4.2\bin;
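A quick sanity check on each node is to confirm that the Oracle executables now resolve from the new database home:

```
:: Display the current Path to verify the ordering of the Oracle home entries
C:\> echo %PATH%

:: Verify that SQL*Plus resolves and reports the expected release
C:\> sqlplus -V
```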
Creating / Altering Tablespaces
When creating the clustered database, we left all tablespaces set to their default size.
Since I am using a fairly large disk group for the shared storage, I like to make a
sizable testing database.
This section provides several optional SQL commands I used to modify and create all
tablespaces for my testing database. Please keep in mind that the database file names
(OMF files) I used in this example may differ from what Oracle creates for your
environment.
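As an illustration, the commands follow this general pattern. The OMF file name in the ALTER DATABASE example is a placeholder (query V$DATAFILE for the names Oracle actually generated), the database name orcl is an assumption for my environment, and the sizes are simply the ones I chose for testing:

```sql
-- Resize an existing datafile (the OMF file name shown is a placeholder)
ALTER DATABASE DATAFILE '+ORCL_DATA1/orcl/datafile/users.264.1' RESIZE 1024M;

-- Create an additional tablespace; naming only the disk group
-- lets Oracle Managed Files choose the file name
CREATE TABLESPACE indx DATAFILE '+ORCL_DATA1' SIZE 1024M
  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```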
Here is a snapshot of the tablespaces I have defined for my test database environment:
7 rows selected.
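The snapshot above can be reproduced with a dictionary query along these lines (my exact query may have differed):

```sql
-- List permanent tablespaces with their total allocated size, in MB
SELECT t.tablespace_name, ROUND(SUM(d.bytes)/1024/1024) AS size_mb
  FROM dba_tablespaces t
  JOIN dba_data_files d ON d.tablespace_name = t.tablespace_name
 GROUP BY t.tablespace_name
 ORDER BY t.tablespace_name;
```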
Applying the Oracle10g Release 1 (10.1.0.4) Patch Set 2 for Microsoft Windows
Overview
At the time of this writing, the latest patchset for Oracle10g running on Microsoft
Windows (32-bit) is 10.1.0.4 (also known as patch 4163362). This is an important
patchset that fixes many bugs related to 10g RAC. In particular, there is a major bug
named "TAF Connections to a Standard Edition Database are Incorrectly Rejected".
This is documented in bug 3549731 and was fixed in 10.1.0.3.0. Here is the error you
will get when attempting a TAF connection using Oracle10g Standard Edition:
C:\> sqlplus scott/tiger@orcltest
ERROR:
ORA-01012: not logged on
Connected to:
Oracle Database 10g Release 10.1.0.2.0 - Production
With the Real Application Clusters option
SQL>
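For reference, a TAF-enabled net service name like the orcltest entry used above typically looks like the following in tnsnames.ora. The VIP host names are assumptions based on my configuration, and the retry values are only common starting points:

```
ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
```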
It is also important to note that we will be applying the 10.1.0.4 patchset to both the
Oracle Cluster Ready Services (CRS) and the database software. The CRS software
must be at the same or newer level as the Oracle database software in a RAC
configuration. Therefore, you should always upgrade the CRS software before
upgrading the Oracle database software. Finally, before installing either of the patches
(CRS and Oracle database software), we will need to download and install the
"Oracle Database 10g Companion CD Release 1 (10.1.0.2) for Microsoft Windows
(32-bit)".
This 10.1.0.4 patch will need to be downloaded from Oracle Metalink, while the
Oracle Database 10g Companion CD can be downloaded from OTN.
As the Administrator user account (or the account you installed the Oracle software
as), extract the patch file to a temporary directory:
mkdir C:\orainstall\patches\10.1.0.4
move p4163362_10104_WINNT.zip C:\orainstall\patches\10.1.0.4
cd C:\orainstall\patches\10.1.0.4
unzip p4163362_10104_WINNT.zip
The steps in this section are only required if the database being upgraded uses the Java
Virtual Machine (Java VM) or Oracle interMedia. For the purpose of this article, my
database does make use of the Java Virtual Machine (Java VM) and Oracle interMedia
and will therefore require the installation of the Oracle Database 10g Companion CD.
The type of installation to perform will be the Oracle Database 10g Products
installation type.
This installation type includes the Natively Compiled Java Libraries (NCOMP) files
to improve Java performance. If you do not install the NCOMP files, the
"ORA-29558: JAccelerator (NCOMP) not installed" error occurs when a database that uses
Java VM is upgraded to the patch release.
In this section, we will be downloading and extracting the required software from
Oracle to only one of the Windows nodes in the RAC cluster - namely windows1. This
is the machine where I will be performing the install from. The Oracle installer will
copy the required software packages to all other nodes in the RAC configuration.
Login to the node in the Windows RAC cluster where you performed the CRS and
Oracle database software installs. For me, that would be windows1. In this example, I
will be downloading the Oracle Database 10g Companion CD software to windows1
and saving them to "C:\orainstall\comp".
mkdir C:\orainstall\comp
move 10g_win32_companion.zip C:\orainstall\comp
cd C:\orainstall\comp
unzip 10g_win32_companion.zip
Installing the Oracle Database 10g Companion CD
The next step is to install the Oracle Database 10g Companion CD software from
windows1.
cd C:\orainstall\comp\Disk1
setup.exe
Oracle Database 10g Companion CD
Screen Name Response
Welcome Screen Click <Next>
Specify File Locations Leave the default value for the Source directory. By default, it
should be pointing to the products.xml file from the stage directory where you
unpacked the installation files. In most cases, the OUI will also select the correct
destination name and ORACLE_HOME that you want to update.
Here are the settings I used for this article:
Source Path: C:\orainstall\comp\Disk1\stage\products.xml
Destination Name: OraDb10g_home1 (the one in which the DB software is installed)
Destination Path: C:\oracle\product\10.1.0\db_1
Click <Next>
Selected Nodes This screen lists the existing RAC release 10.1.0.2 nodes; there is
nothing to edit here. The first node in the list is the node from where the release
10.1.0.2 software was installed. You must install the patch set software from this
node. If this is not the node where you are running Oracle Universal Installer, exit
and install the patch set software from the first node in the list.
Before attempting to apply the patchset, we need to stop all Oracle related services.
9. Shut down all ASM services (ASM instances) on all the nodes.
From one of the nodes in the cluster, run:

srvctl stop asm -n windows1
srvctl stop asm -n windows2
13. Shut down CRS services using the Services Control Panel on
all nodes. The following is a list of services that will need to be
shut down on all nodes:
o Oracle Object Service
o OracleCRService
o OracleCSService
o OracleEVMService
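Putting the shutdown steps together, the full sequence run from one node looks roughly like this. The database name orcl is an assumption for my environment, and the exact order in which the CRS services stop may vary with their dependencies:

```
:: Stop the database (all instances on all nodes)
C:\> srvctl stop database -d orcl

:: Stop the ASM instances on both nodes
C:\> srvctl stop asm -n windows1
C:\> srvctl stop asm -n windows2

:: Stop the node applications (VIP, GSD, listener, ONS) on both nodes
C:\> srvctl stop nodeapps -n windows1
C:\> srvctl stop nodeapps -n windows2

:: Then stop the CRS services on each node (Services Control Panel, or:)
C:\> net stop OracleCRService
C:\> net stop OracleEVMService
C:\> net stop OracleCSService
C:\> net stop "Oracle Object Service"
```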
Once all services running for CRS and the Oracle database software are stopped, we
can now start the patch installation process. To do this, navigate to the directory where
you extracted the patch set to and run the OUI installer:
You must install the patch set software from the node from where the release
10.1.0.2 software was installed!
For example, if the RAC 10g release 1 (10.1.x) software was installed from
node1, the patch set must also be installed from node1, not node2.
cd C:\orainstall\patches\10.1.0.4
setup.exe
Oracle 10.1.0.4.0 Patchset Installation Screen Responses - CRS
Screen Name Response
Welcome Screen Click <Next>
Leave the default value for the Source directory. By default, it should be
pointing to the products.xml file from the stage directory where you unpacked
the patch set files.
In most cases, the OUI will also select the correct destination name and
ORACLE_HOME that you want to update with this patch set.
Specify File Locations
Here are the settings I used for this article:
Source Path: C:\orainstall\patches\10.1.0.4\stage\products.xml
Destination Name: OraCr10g_home
(The one in which CRS is installed)
Destination Path: C:\oracle\product\10.1.0\crs
Click <Next>
Selected Nodes This screen lists the existing RAC release 10.1.0.2 nodes; there is
nothing to edit here. The first node in the list is the node from where the release
10.1.0.2 software was installed. You must install the patch set software from this
node. If this is not the node where you are running Oracle Universal Installer, exit
and install the patch set software from the first node in the list.
At the end of the patch set installation, you will be prompted with a reminder to patch
your CRS installation in a rolling manner, one node at a time. You can now exit from
the OUI by clicking the [Exit] button then click Yes on the confirmation dialog. The
remainder of this section contains the steps that will need to be performed on both
nodes in the cluster to complete the 10.1.0.4 CRS patchset install.
We already performed this on all nodes at the beginning of this section, so let's move
on.
<FROM WINDOWS1>
C:\orainstall\patches\10.1.0.4>
C:\oracle\product\10.1.0\crs\install\patch10104.bat
Successful validation check of Oracle CRS services status
Successful binary patch of the C:\oracle\product\10.1.0\crs
Successful cleanup of patch subdirectory
Successful startup of OracleCSService
Successful startup of OracleEvmService
Successful startup of OracleCRService
Successful upgrade of this node to Oracle Cluster Ready Services
10.1.0.4
<FROM WINDOWS2>
C:\orainstall\patches\10.1.0.4>
C:\oracle\product\10.1.0\crs\install\patch10104.bat
Successful validation check of Oracle CRS services status
Successful binary patch of the C:\oracle\product\10.1.0\crs
Successful cleanup of patch subdirectory
Successful startup of OracleCSService
Successful startup of OracleEvmService
Successful startup of OracleCRService
Successful upgrade of this node to Oracle Cluster Ready Services
10.1.0.4
The Oracle Database 10g Patch Set 2 has now been successfully applied on both
nodes for the CRS software! We now need to patch the Oracle database software.
Applying the 10.1.0.4 patchset and running the post-installation batch file for the
CRS software restarts all services (CRS and Oracle database services). Before
applying the patchset to the Oracle database software, we will need to stop all related
services (yes, again):
3. Shut down all ASM services (ASM instances) on all the nodes.
From one of the nodes in the cluster, run:

srvctl stop asm -n windows1
srvctl stop asm -n windows2
Once all services running for the Oracle database software are stopped, we can now
start the patch installation process. To do this, navigate to the directory where you
extracted the patch set to and run the OUI installer:
You must install the patch set software from the node from where the release
10.1.0.2 software was installed!
For example, if the RAC 10g release 1 (10.1.x) software was installed from
node1, the patch set must also be installed from node1, not node2.
cd C:\orainstall\patches\10.1.0.4
setup.exe
Oracle 10.1.0.4.0 Patchset Installation Screen Responses - Database Software
Screen Name Response
Welcome Screen Click <Next>
Specify File Locations Leave the default value for the Source directory. By default, it should be
pointing to the products.xml file from the stage directory where you unpacked
the patch set files.
In most cases, the OUI will also select the correct destination name and
ORACLE_HOME that you want to update with this patch set.
Selected Nodes The Selected Nodes screen lists the existing RAC release 10.1.0.2
nodes; there is nothing to edit here. The first node in the list is the node from where
the release 10.1.0.2 software was installed. You must install the patch set software
from this node. If this is not the node where you are running Oracle Universal
Installer, exit and install the patch set software from the first node in the list.
For example, if the RAC 10g release 1 (10.1.x) software was installed from node1,
the patch set must also be installed from node1, not node2.
Click <Next>
Summary On the Summary screen, click <Install> to start the installation process.
The Oracle Database 10g Patch Set 2 has now been successfully applied on both
nodes for the Oracle database software! You can now exit from the OUI. We now
need to perform several postinstallation tasks.
We are almost there! The patchset is now installed and we now have the task of
upgrading the database.
The first step is to start all services that were stopped for the patchset installation.
Perform the following:
Start the database instance on the local node only. In my case, I am on windows1:

Review the patch.log file for errors and inspect the list of components displayed at the end of the catpatch.sql script.

SQL> @?\rdbms\admin\utlrp.sql
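The catpatch.sql step referenced above is not shown in full; for reference, the database upgrade sequence for a 10.1.0.x patch set typically looks like the following sketch. Always follow the exact steps in the patch set README:

```sql
SQL> STARTUP MIGRATE
SQL> SPOOL patch.log
SQL> @?\rdbms\admin\catpatch.sql
SQL> SPOOL OFF
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```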
When the 10.1.0.4 patch set is applied to an Oracle Database Standard Edition or
Standard Edition One database, there may be 42 invalid objects after the utlrp.sql
script runs. These objects belong to the unsupported components and do not affect
the database operation.
Ignore any messages indicating that the database contains invalid recycle bin
objects similar to the following:
BIN$4lzljWIt9gfgMFeM2hVSoA==$0
Reset the CLUSTER_DATABASE initialization parameter to TRUE:
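The command for this step is not shown above; it is typically of this form (a sketch; run it from one instance, then restart the database so the SPFILE change takes effect):

```sql
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
```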
These instructions do not apply to RAC installations where the nodes of the cluster
share the same Oracle home. In this article, I am not sharing the Oracle home and
will therefore need to perform this section.
The configPatch.pl script updates the Oracle Enterprise Manager Database Control
files. Although Oracle Universal Installer copies the configPatch.pl script to all of
the Oracle homes on the cluster, it only runs the script on the node running Oracle
Universal Installer.
If you install this patch set on a RAC installation that does not use a shared Oracle
home directory, then you must manually run
%ORACLE_HOME%\sysman\install\configPatch.pl on each node of the cluster,
except the node from which you ran Oracle Universal Installer.
FROM WINDOWS2
cd C:\oracle\product\10.1.0\db_1\perl\5.6.1\bin\MSWin32-x86
perl C:\oracle\product\10.1.0\db_1\sysman\install\configPatch.pl
For Oracle Database 10g release 1 (both 10.1.0.2 and 10.1.0.3) installations, the
Oracle Notification Service (ONS) AUTO_START parameter is set to 0 on each node
of the cluster. This bug seems to exist for all UNIX platforms (Solaris, Linux, etc.)
and MS Windows. For this reason, CRS does not automatically start the ONS
component when the node is restarted. This issue is documented and being tracked
with Oracle bug 4011834.
%ORA_CRS_HOME%\bin\crs_stat
...
NAME=ora.windows1.ons
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
...
NAME=ora.windows2.ons
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
...
To work around this issue, perform the following steps as the CRS owner for each
ONS resource. For the purpose of this article, I will also be giving the commands I ran
on one of the nodes in my cluster, windows1. The same tasks need to be run on the
second node windows2:
cd %ORA_CRS_HOME%\crs\profile

crs_home\bin\crs_profile -update ora.nodename.ons -o as=1

For example,

%ORA_CRS_HOME%\bin\crs_profile -update ora.windows1.ons -o as=1

crs_home\bin\crs_register -u ora.nodename.ons

For example,

%ORA_CRS_HOME%\bin\crs_register -u ora.windows1.ons
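After the next restart of the node (or of the CRS stack), the change can be verified by checking the ONS resource again; TARGET and STATE should now report ONLINE. For example:

```
%ORA_CRS_HOME%\bin\crs_stat ora.windows1.ons
```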
listener.ora
Let's first take a look at the listener.ora file that was created during the install. The
listener.ora file should be properly configured on both nodes and no modifications
should be needed.
For clarity, I included a copy of the listener.ora file from my node windows1:
listener.ora
# listener.ora.windows1 Network Configuration File:
# C:\oracle\product\10.1.0\db_1\network\admin\listener.ora.windows1
# Generated by Oracle configuration tools.
LISTENER_WINDOWS1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521)(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.121)(PORT = 1521)(IP = FIRST))
)
)
)
SID_LIST_LISTENER_WINDOWS1 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\oracle\product\10.1.0\db_1)
(PROGRAM = extproc)
)
)
tnsnames.ora
Here is a copy of my tnsnames.ora file that was configured by Oracle and can be
used for testing Transparent Application Failover (TAF). This file should already
be configured on both nodes, but you will want to add the new ORCLTEST entry.
You can include any of these entries on other client machines that need access to the
clustered database.
tnsnames.ora
# tnsnames.ora Network Configuration File:
# C:\oracle\product\10.1.0\db_1\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
LISTENERS_ORCL =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
)
ORCL2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.idevelopment.info)
(INSTANCE_NAME = orcl2)
)
)
ORCL1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.idevelopment.info)
(INSTANCE_NAME = orcl1)
)
)
ORCLTEST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.idevelopment.info)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.idevelopment.info)
)
)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
If the only service defined is for orcl.idevelopment.info, then you will need to
manually add the service to both instances:
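The commands for adding the service are not shown above; one way to do it (a sketch, and the orcltest service name is an assumption here) is to append the new service to the SERVICE_NAMES parameter of each instance:

```sql
ALTER SYSTEM SET service_names='orcl.idevelopment.info,orcltest.idevelopment.info'
  SCOPE=BOTH SID='orcl1';
ALTER SYSTEM SET service_names='orcl.idevelopment.info,orcltest.idevelopment.info'
  SCOPE=BOTH SID='orcl2';
```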
This is an optional step, but I like to perform it to verify that my TNS files are
configured correctly. Use another machine (e.g., a Windows machine connected to the
network) that has Oracle installed (either 9i or 10g) and add the TNS entries (in
tnsnames.ora) from either of the nodes in the cluster that were created for the
clustered database.
Then try to connect to the clustered database using all available service names defined
in the tnsnames.ora file:
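For example (a sketch; "manager" stands in for your actual SYSTEM password):

```
C:\> sqlplus system/manager@orcl1
C:\> sqlplus system/manager@orcl2
C:\> sqlplus system/manager@orcl
C:\> sqlplus system/manager@orcltest
```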
The following RAC verification checks should be performed on all nodes in the
cluster! For this article, I will only be performing checks from windows1.
Overview
This section provides several srvctl commands and SQL queries that can be used to
validate your Oracle RAC 10g configuration.
There are five node-level tasks defined for SRVCTL:
Display the configuration for node applications - (VIP, GSD, ONS, Listener)
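For example, the configuration and status of the node applications can be displayed with the following commands (in 10g, the -a, -g, -s, and -l flags select the VIP, GSD, ONS, and listener respectively):

```
srvctl config nodeapps -n windows1 -a -g -s -l
srvctl status nodeapps -n windows1
```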
SELECT
inst_id
, instance_number inst_no
, instance_name inst_name
, parallel
, status
, database_status db_status
, active_state state
, host_name host
FROM gv$instance
ORDER BY inst_id;
NAME
-------------------------------------------
+ORCL_DATA1/orcl/controlfile/current.260.3
+ORCL_DATA1/orcl/controlfile/current.261.3
+ORCL_DATA1/orcl/datafile/example.267.1
+ORCL_DATA1/orcl/datafile/sysaux.257.1
+ORCL_DATA1/orcl/datafile/system.256.1
+ORCL_DATA1/orcl/datafile/undotbs1.258.1
+ORCL_DATA1/orcl/datafile/undotbs1.275.1
+ORCL_DATA1/orcl/datafile/undotbs2.268.1
+ORCL_DATA1/orcl/datafile/undotbs2.276.1
+ORCL_DATA1/orcl/datafile/users.259.1
+ORCL_DATA1/orcl/datafile/users.274.1
+ORCL_DATA1/orcl/onlinelog/group_1.262.1
+ORCL_DATA1/orcl/onlinelog/group_1.263.1
+ORCL_DATA1/orcl/onlinelog/group_2.264.1
+ORCL_DATA1/orcl/onlinelog/group_2.265.1
+ORCL_DATA1/orcl/onlinelog/group_3.269.1
+ORCL_DATA1/orcl/onlinelog/group_3.270.1
+ORCL_DATA1/orcl/onlinelog/group_4.271.1
+ORCL_DATA1/orcl/onlinelog/group_4.272.1
+ORCL_DATA1/orcl/tempfile/temp.266.1
20 rows selected.
SELECT path
FROM v$asm_disk
WHERE group_number IN (select group_number
from v$asm_diskgroup
where name = 'ORCL_DATA1');
PATH
----------------------------------
\\.\ORCLDISKDATA0
\\.\ORCLDISKDATA1
\\.\ORCLDISKDATA2
\\.\ORCLDISKDATA3
Starting & Stopping the Cluster
At this point, everything has been installed and configured for Oracle RAC 10g. We
have all of the required software installed and configured plus we have a fully
functional clustered database.
With all of the work we have done up to this point, a popular question might be,
"How do we start and stop services?". If you have followed the instructions in this
article, all services should start automatically on each reboot of the Windows nodes.
This would include CRS, all Oracle instances, Enterprise Manager Database Console,
etc.
There are times, however, when you might want to shutdown a node and manually
start it back up. Or you may find that Enterprise Manager is not running and need to
start it. This section provides the commands (using SRVCTL) responsible for starting
and stopping the cluster environment.
Ensure that you are logged in as the "administrator" user. I will be running all of
the commands in this section from windows1:
The first step is to stop the Oracle instance. Once the instance (and related services) is
down, then bring down the ASM instance. Finally, shutdown the node applications
(Virtual IP, GSD, TNS Listener, and ONS).
set oracle_sid=orcl1
emctl stop dbconsole
srvctl stop instance -d orcl -i orcl1
srvctl stop asm -n windows1
srvctl stop nodeapps -n windows1
The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and
ONS). Once the node applications are successfully started, then bring up the ASM
instance. Finally, bring up the Oracle instance (and related services) and the
Enterprise Manager Database console.
set oracle_sid=orcl1
srvctl start nodeapps -n windows1
srvctl start asm -n windows1
srvctl start instance -d orcl -i orcl1
emctl start dbconsole
Start / stop all of the instances and their enabled services. I just included this for fun
as a way to bring all of the instances down (or up) at once!
srvctl start database -d orcl
srvctl stop database -d orcl
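The current state of all instances in the clustered database can be checked at any time with:

```
srvctl status database -d orcl
```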
At the start of this article, we mentioned that we wanted to create two disk groups:
one for the actual physical database files named ORCL_DATA1 and another disk
group for the flash recovery area named ORCL_FRA1. During the creation of the
clustered database, we only had an option to create one disk group for the physical
database files. In this section, I will be manually creating another disk group using
SQL.
All of the SQL commands for adding a disk group need to be performed from one
of the ASM instances: +ASM1 or +ASM2.
set oracle_sid=+ASM1
sqlplus "/ as sysdba"
8 rows selected.
Diskgroup created.
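The CREATE DISKGROUP statement itself is not shown above; here is a minimal sketch, assuming two additional logical drives were stamped for the flash recovery area as \\.\ORCLDISKFRA0 and \\.\ORCLDISKFRA1 (hypothetical device names):

```sql
CREATE DISKGROUP ORCL_FRA1 NORMAL REDUNDANCY
  FAILGROUP fra_fg1 DISK '\\.\ORCLDISKFRA0'
  FAILGROUP fra_fg2 DISK '\\.\ORCLDISKFRA1';
```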
The above commands created a new disk group named ORCL_FRA1. We now want
to assign the database flash recovery area to this new disk group and also adjust the
amount of space allowed for this area. To perform these actions, I will need to
update the SPFILE that is being used for each of the instances. To do this, I will need
to be logged into one of the instances: ORCL1 or ORCL2.
set oracle_sid=orcl1
sqlplus "/ as sysdba"
System altered.
System altered.
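The two "System altered." messages above correspond to commands of roughly this form (a sketch; the 2G size is an assumption, and note that the destination size must be set before the destination itself):

```sql
ALTER SYSTEM SET db_recovery_file_dest_size=2G SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest='+ORCL_FRA1' SCOPE=BOTH SID='*';
```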
The DBCA creates a fully functional Oracle Database Console configuration with
support for RAC. The DBCA creates an instance of the Oracle Database Console on
both nodes in the cluster. Simply point a web browser to either of the machines using
the following URL:
http://<rac_node>:5500/em
For my example, I can navigate to:
http://windows1:5500/em
Overview
A major component of Oracle RAC 10g that is responsible for failover processing is
the Transparent Application Failover (TAF) option. All database connections (and
processes) that lose their connection are reconnected to another node within the
cluster. The failover is completely transparent to the user.
This final section provides a short demonstration of how automatic failover works in
Oracle RAC 10g. Please note that a complete discussion of failover in Oracle10g
RAC would be an article in its own right. My intention here is to present a brief
overview and an example of how it works.
One important note before continuing is that TAF happens automatically within the
OCI libraries. This means that your application (client) code does not need to change
in order to take advantage of TAF. Certain configuration steps, however, will need to
be done on the Oracle TNS file tnsnames.ora.
Keep in mind that, at the time of this article, applications using the Java thin client
cannot participate in TAF since the thin driver never reads the tnsnames.ora file.
Before demonstrating TAF, we need to verify that a valid entry exists in the
tnsnames.ora file on a non-RAC client machine (if you have a Windows machine
lying around). Ensure that you have Oracle RDBMS software installed. (Actually, you
only need a client install of the Oracle software.)
During the creation of the clustered database in this article, I created a new service
that will be used for testing TAF named ORCLTEST. It provides all of the necessary
configuration parameters for load balancing and failover. You can copy the contents
of this entry to the %ORACLE_HOME%\network\admin\tnsnames.ora file on the client
machine (my Windows laptop is being used in this example) in order to connect to the
new Oracle clustered database:
SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';
From a Windows machine (or other non-RAC client machine), login to the clustered
database using the orcltest service as the SYSTEM user:
C:\> sqlplus system/manager@orcltest
SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';
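To trigger the failover between the two runs of the query above, abort the instance the session is connected to from one of the cluster nodes (a sketch, assuming the session landed on orcl1), then re-run the query from the still-connected client session; the failed_over column should now report YES:

```
srvctl stop instance -d orcl -i orcl1 -o abort
```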
SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';
SQL> exit