Using Serviceguard Extension for RAC
Contents
1. Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster? . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Group Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Using Packages in a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Serviceguard Extension for RAC Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Group Membership Daemon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM) . . . . . . . . . 20
Package Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Storage Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Overview of SGeRAC and Oracle 10g RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Overview of SGeRAC and Oracle 9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
How Serviceguard Works with Oracle 9i RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Group Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Configuring Packages for Oracle RAC Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Configuring Packages for Oracle Listeners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Node Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Larger Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Up to Four Nodes with SCSI Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Point to Point Connections to Storage Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Extended Distance Cluster Using Serviceguard Extension for RAC . . . . . . . . . . . . . . 32
Storage Planning with CFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Volume Planning with CVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Installing Serviceguard Extension for RAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Configuration File Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Creating a Storage Infrastructure with LVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Building Volume Groups for RAC on Mirrored Disks. . . . . . . . . . . . . . . . . . . . . . . . . 48
Building Mirrored Logical Volumes for RAC with LVM Commands . . . . . . . . . . . . . 50
Creating RAC Volume Groups on Disk Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Creating Logical Volumes for RAC on Disk Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Oracle Demo Database Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Displaying the Logical Volume Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Exporting the Logical Volume Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Installing Oracle Real Application Clusters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Cluster Configuration ASCII File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Creating a Storage Infrastructure with CFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Creating a SGeRAC Cluster with CFS 4.1 for Oracle 10g . . . . . . . . . . . . . . . . . . . . . 65
Initializing the VERITAS Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Deleting CFS from the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Creating a Storage Infrastructure with CVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Initializing the VERITAS Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Using CVM 4.x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Using CVM 3.x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Creating Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Oracle Demo Database Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Adding Disk Groups to the Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Prerequisites for Oracle 10g (Sample Installation) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Installing Oracle 10g Cluster Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Installing on Local File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Installing Oracle 10g RAC Binaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Installing RAC Binaries on a Local File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Installing RAC Binaries on Cluster File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Creating a RAC Demo Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Creating a RAC Demo Database on SLVM or CVM . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Creating a RAC Demo Database on CFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Verify that Oracle Disk Manager is Configured. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Configuring Oracle to Use Oracle Disk Manager Library . . . . . . . . . . . . . . . . . . . . . . . 93
Verify that Oracle Disk Manager is Running. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Configuring Oracle to Stop Using Oracle Disk Manager Library . . . . . . . . . . . . . . . . . 96
Using Serviceguard Packages to Synchronize with Oracle 10g RAC . . . . . . . . . . . . . . 97
Preparing Oracle Cluster Software for Serviceguard Packages. . . . . . . . . . . . . . . . . 97
Configure Serviceguard Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Create Database with Oracle Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Verify that Oracle Disk Manager is Configured. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Configure Oracle to use Oracle Disk Manager Library . . . . . . . . . . . . . . . . . . . . . . . . 152
Verify Oracle Disk Manager is Running . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Configuring Oracle to Stop using Oracle Disk Manager Library . . . . . . . . . . . . . . . . 155
Using Packages to Configure Startup and Shutdown of RAC Instances . . . . . . . . . . 156
Starting Oracle Instances. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Creating Packages to Launch Oracle RAC Instances . . . . . . . . . . . . . . . . . . . . . . . . 157
Configuring Packages that Access the Oracle RAC Database . . . . . . . . . . . . . . . . . 158
Adding or Removing Packages on a Running Cluster . . . . . . . . . . . . . . . . . . . . . . . 159
Writing the Package Control Script. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Replacing a Lock Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
On-line Hardware Maintenance with In-line SCSI Terminator . . . . . . . . . . . . . . . 198
Replacement of I/O Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Replacement of LAN Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Off-Line Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
On-Line Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
After Replacing the Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Monitoring RAC Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
A. Software Upgrades
Rolling Software Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Steps for Rolling Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Example of Rolling Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Limitations of Rolling Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Non-Rolling Software Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Steps for Non-Rolling Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Limitations of Non-Rolling Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Migrating a SGeRAC Cluster with Cold Install . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Printing History
Table 1 Document Edition and Printing Date
The last printing date and part number indicate the current edition,
which applies to the 11.14, 11.15, 11.16, and 11.17 versions of
Serviceguard Extension for RAC (Oracle Real Application Cluster).
Changes in the May 2006 update include software upgrade procedures
for SGeRAC clusters.
The printing date changes when a new edition is printed. (Minor
corrections and updates which are incorporated at reprint do not cause
the date to change.) The part number is revised when extensive technical
changes are incorporated.
New editions of this manual will incorporate all material updated since
the previous edition. To ensure that you receive the new editions, you
should subscribe to the appropriate product support service. See your HP
sales representative for details.
HP Printing Division:
Business Critical Computing
Hewlett-Packard Co.
19111 Pruneridge Ave.
Cupertino, CA 95014
Preface
The May 2006 update includes a new appendix on software upgrade
procedures for SGeRAC clusters. This guide describes how to use
the Serviceguard Extension for RAC (Oracle Real Application Cluster) to
configure Serviceguard clusters for use with Oracle Real Application
Cluster software on HP High Availability clusters running the HP-UX
operating system.
Related publications include the following:
• Using High Availability Monitors (B5736-90046)
• Using the Event Monitoring Service (B7612-90015)
• Using Advanced Tape Services (B3936-90032)
• Designing Disaster Tolerant High Availability Clusters
(B7660-90017)
• Managing Serviceguard Extension for SAP (T2803-90002)
• Managing Systems and Workgroups (5990-8172)
• Managing Serviceguard NFS (B5140-90017)
• HP Auto Port Aggregation Release Notes
Before attempting to use VxVM storage with Serviceguard, please refer
to the following:
• http://www.hp.com/go/ha
Use the following URL for access to a wide variety of HP-UX
documentation:
• http://docs.hp.com/hpux
Problem Reporting If you have any problems with the software or documentation, please
contact your local Hewlett-Packard Sales Office or Customer Service
Center.
Conventions We use the following typographical conventions.
audit (5) An HP-UX manpage. audit is the name and 5 is the
section in the HP-UX Reference. On the web and on the
Instant Information CD, it may be a hot link to the
manpage itself. From the HP-UX command line, you
can enter “man audit” or “man 5 audit” to view the
manpage. See man (1).
Book Title The title of a book. On the web and on the Instant
Information CD, it may be a hot link to the book itself.
KeyCap The name of a keyboard key. Note that Return and Enter
both refer to the same key.
Emphasis Text that is emphasized.
Emphasis Text that is strongly emphasized.
Term The defined use of an important word or phrase.
ComputerOut Text displayed by the computer.
UserInput Commands and other text that you type.
Command A command name or qualified command phrase.
Variable The name of a variable that you may replace in a
command or function or information in a display that
represents several possible values.
[ ] The contents are optional in formats and command
descriptions. If the contents are a list separated by |,
you must choose one of the items.
{ } The contents are required in formats and command
descriptions. If the contents are a list separated by |,
you must choose one of the items.
... The preceding element may be repeated an arbitrary
number of times.
| Separates items in a list of choices.
1 Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster?
In the figure, two loosely coupled systems (each one known as a node)
are running separate instances of Oracle software that read data from
and write data to a shared set of disks. Clients connect to one node or the
other via LAN.
Group Membership
Oracle RAC systems implement the concept of group membership,
which allows multiple instances of RAC to run on each node. Related
processes are configured into groups. Groups allow processes in
different instances to choose which other processes to interact with. This
allows the support of multiple databases within one RAC cluster.
A Group Membership Service (GMS) component provides a process
monitoring facility to monitor group membership status. GMS is
provided by the cmgmsd daemon, which is an HP component installed
with Serviceguard Extension for RAC.
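A quick way to confirm that the daemon is present on a node is to check
the process list; a minimal sketch (the exact output format varies by
HP-UX release):
# ps -ef | grep cmgmsd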
Figure 1-2 shows how group membership works. Nodes 1 through 4 of
the cluster share the Sales database, but only Nodes 3 and 4 share the
HR database. Consequently, there is one instance of RAC each on Node 1
and Node 2, and there are two instances of RAC each on Node 3 and
Node 4. The RAC processes accessing the Sales database constitute one
group, and the RAC processes accessing the HR database constitute
another group.
NOTE In RAC clusters, you create packages to start and stop RAC itself as well
as to run applications that access the database instances. For details on
the use of packages with RAC, refer to the section “Using Packages to
Configure Startup and Shutdown of RAC Instances” on page 156 in
Chapter 3.
Serviceguard Extension for RAC Architecture
• Oracle Components
• Serviceguard Components
— Package Manager
— Cluster Manager
— Network Manager
• Operating System
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
Package Dependencies
When CFS is used as shared storage, the applications and software
using the CFS storage should be configured to start and stop through
Serviceguard packages. These application packages should be
configured with a package dependency on the underlying multi-node
packages, which manage the CFS and CVM storage resources.
Configuring the application to start and stop through a Serviceguard
package ensures that storage activation and deactivation are
synchronized with application startup and shutdown.
With CVM configurations using multi-node packages, packages that use
CVM shared storage should likewise be configured with package
dependencies.
Refer to the Managing Serviceguard Twelfth Edition user’s guide for
detailed information on multi-node packages.
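For illustration only, an application package that uses CFS storage
might declare a dependency such as the following in its package ASCII
file (the dependency and package names here are hypothetical;
SG-CFS-MP-1 stands for a mount point multi-node package):
DEPENDENCY_NAME mp1
DEPENDENCY_CONDITION SG-CFS-MP-1=UP
DEPENDENCY_LOCATION SAME_NODE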
Oracle RAC data files can be created on a CFS, allowing the database
administrator or Oracle software to create additional data files without
the need for root system administrator privileges. The archive area can
now be on a CFS. Oracle instances on any cluster node can access the
archive area when database recovery requires the archive logs.
Overview of SGeRAC and Oracle 10g RAC
NOTE In this document, the generic terms “CRS” and “Oracle Clusterware”
are subsequently referred to as “Oracle Cluster Software”. The term
CRS is still used when referring to a sub-component of Oracle Cluster
Software.
Overview of SGeRAC and Oracle 9i RAC
Group Membership
The group membership service (GMS) is the means by which Oracle
instances communicate with the Serviceguard cluster software. GMS
runs as a separate daemon process that communicates with the cluster
manager. This daemon is an HP component known as cmgmsd.
The cluster manager starts up, monitors, and shuts down the cmgmsd.
When an Oracle instance starts, the instance registers itself with
cmgmsd; thereafter, if an Oracle instance fails, cmgmsd notifies other
members of the same group to perform recovery. If cmgmsd dies
unexpectedly, Serviceguard will fail the node with a TOC (Transfer of
Control).
Configuring Packages for Oracle RAC Instances
NOTE Packages that start and halt Oracle instances (called instance
packages) do not fail over from one node to another; they are
single-node packages. You should include only one NODE_NAME in the
package ASCII configuration file. The AUTO_RUN setting in the package
configuration file will determine whether the RAC instance will start up
as the node joins the cluster. A cluster may include both RAC and
non-RAC packages.
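As a hedged illustration, the relevant entries in an instance package
ASCII file might look like the following (the package and node names
are hypothetical):
PACKAGE_NAME rac_inst1
NODE_NAME node1
AUTO_RUN YES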
Configuring Packages for Oracle Listeners
Node Failure
RAC cluster configuration is designed so that in the event of a node
failure, another node with a separate instance of Oracle can continue
processing transactions. Figure 1-3 shows a typical cluster with
instances running on both nodes.
Figure 1-4 shows the condition where Node 1 has failed and Package 1
has been transferred to Node 2. Oracle instance 1 is no longer operating,
but it does not fail over to Node 2. Package 1’s IP address was
transferred to Node 2 along with the package. Package 1 continues to be
available and is now running on Node 2. Also note that Node 2 can now
access both Package 1’s disk and Package 2’s disk. Oracle instance 2 now
handles all database access, since instance 1 has gone down.
In the above figure, pkg1 and pkg2 are not instance packages. They are
shown to illustrate the movement of packages in general.
Larger Clusters
Serviceguard Extension for RAC supports clusters of up to 16 nodes. The
actual cluster size is limited by the type of storage and the type of volume
manager used.
Extended Distance Cluster Using Serviceguard Extension for RAC
2 Serviceguard Configuration for Oracle 10g RAC
• Interface Areas
• Oracle Cluster Software
• Planning Storage for Oracle Cluster Software
• Planning Storage for Oracle 10g RAC
• Installing Serviceguard Extension for RAC
• Installing Oracle Real Application Clusters
• Creating a Storage Infrastructure with CFS
• Creating a Storage Infrastructure with CVM
• Installing Oracle 10g Cluster Software
Interface Areas
This section documents interface areas where there is expected
interaction between SGeRAC and Oracle 10g Cluster Software and RAC.
SGeRAC Detection
When Oracle 10g Cluster Software is installed on a SGeRAC cluster,
Oracle Cluster Software detects the existence of SGeRAC and CSS uses
SGeRAC group membership.
Cluster Timeouts
SGeRAC uses heartbeat timeouts to determine when any SGeRAC
cluster member has failed or when any cluster member is unable to
communicate with the other cluster members. CSS uses a similar
mechanism for CSS memberships. Each RAC instance group
membership also has a timeout mechanism, which triggers Instance
Membership Recovery (IMR).
CSS Timeout
When SGeRAC is on the same cluster as Oracle Cluster Software, the
CSS timeout is set to a default value of 600 seconds (10 minutes) at
Oracle software installation.
This timeout is configurable with Oracle tools and should not be
changed without ensuring that the CSS timeout allows enough time for
Serviceguard Extension for RAC (SGeRAC) reconfiguration and for
multipath reconfiguration (if configured) to complete.
After a single point of failure (for example, a node failure), Serviceguard
reconfigures first and SGeRAC delivers the new group membership to
CSS via NMAPI2. If there is a change in group membership, SGeRAC
updates the members with the new membership. After receiving the
new group membership, CSS in turn initiates its own recovery action as
needed and propagates the new group membership to the RAC
instances.
Monitoring
Oracle Cluster Software daemon monitoring is performed through
programs initiated by the HP-UX init process. SGeRAC monitors Oracle
Cluster Software to the extent that CSS is an NMAPI2 group membership
client and group member. SGeRAC provides group membership
notification to the remaining group members when CSS enters and
leaves the group membership.
Shared Storage
SGeRAC supports shared storage using HP Shared Logical Volume
Manager (SLVM), Veritas Cluster File System (CFS) and Veritas Cluster
Volume Manager (CVM). The file /var/opt/oracle/oravg.conf must
not be present so Oracle Cluster Software will not activate or deactivate
any shared storage.
Multipath
Multipath is supported through either SLVM pvlinks or CVM Dynamic
Multipath (DMP). In some configurations, SLVM or CVM does not need
to be configured for multipath as the multipath is provided by the
storage array. Since Oracle Cluster Software checks availability of the
shared device for the vote disk through periodic monitoring, the
multipath detection and failover time must be less than CRS's timeout
specified by the Cluster Synchronization Service (CSS) MISSCOUNT. On
SGeRAC configurations, the CSS MISSCOUNT value is set to 600 seconds.
Multipath failover time is typically between 30 and 120 seconds.
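To check the configured value on Oracle 10g R2, for example, Oracle’s
crsctl utility can be used; a minimal sketch, assuming ORA_CRS_HOME
points to the Oracle Cluster Software home:
# $ORA_CRS_HOME/bin/crsctl get css misscount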
NOTE If Oracle resilvering is not available, the mirror recovery policy should be
set to full mirror resynchronization (NOMWC) of all control, redo, and
datafiles.
Listener
Network Monitoring
The SGeRAC cluster provides network monitoring. For networks that
are redundant and monitored by the Serviceguard cluster, Serviceguard
provides local failover between local network interfaces (LANs) that is
transparent to applications using User Datagram Protocol (UDP) and
Transmission Control Protocol (TCP).
For virtual IP addresses (floating or package IP addresses),
Serviceguard also provides remote failover of network connection
endpoints between cluster nodes, as well as transparent local failover of
connection endpoints between redundant local network interfaces.
RAC Instances
Shared Storage
The shared storage must be available when the RAC instance is
started, so ensure that it is activated: for SLVM, the shared volume
groups must be activated; for CVM, the disk groups must be activated;
and for CFS, the cluster file systems must be mounted.
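A minimal sketch of the activation commands, assuming a shared
volume group /dev/vg_ops, a disk group ops_dg, and a mount point
/cfs/mnt1 (all names illustrative):
For SLVM:
# vgchange -a s /dev/vg_ops
For CVM:
# vxdg -g ops_dg set activation=sw
For CFS:
# cfsmount /cfs/mnt1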
Planning Storage for Oracle Cluster Software
Planning Storage for Oracle 10g RAC
Installing Serviceguard Extension for RAC
NOTE For up-to-date version compatibility information for Serviceguard and
HP-UX, see the SGeRAC release notes.
To install Serviceguard Extension for RAC, use the following steps on
each node:
1. Mount the distribution media in the tape drive, CD, or DVD reader.
2. Run Software Distributor, using the swinstall command.
3. Specify the correct input device.
4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC
5. After choosing the bundle, select OK to install the software.
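For example, a non-interactive installation from a mounted depot might
look like the following (the depot path and bundle name are illustrative;
check the SGeRAC release notes for the exact bundle name):
# swinstall -s /dvdrom T1859BA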
Configuration File Parameters
NOTE CVM 4.x with CFS does not use the STORAGE_GROUP parameter because
the disk group activation is performed by the multi-node package. CVM
3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII
package configuration file in order to activate the disk group.
Do not enter the names of LVM volume groups or VxVM disk groups in
the package ASCII configuration file.
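As an illustration, a package ASCII file for CVM without CFS might
contain an entry such as the following (the disk group name is
hypothetical):
STORAGE_GROUP ops_dg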
Creating a Storage Infrastructure with LVM
Selecting Disks for the Volume Group Obtain a list of the disks on
both nodes and identify which device files are used for the same disk on
both. Use the following command on each node to list available disks as
they are known to each system:
# lssf /dev/dsk/*
In the following examples, we use /dev/rdsk/c1t2d0 and
/dev/rdsk/c0t2d0, which happen to be the device names for the same
disks on both ftsys9 and ftsys10. In the event that the device file
names are different on the different nodes, make a careful note of the
correspondences.
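The command this text refers to creates the volume group directory and
its group control file on each node; a minimal sketch, assuming a volume
group named /dev/vg_ops:
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000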
where hh must be unique to the volume group you are creating. Use
the next hexadecimal number that is available on your system, after
the volume groups that are already configured. Use the following
command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Create the volume group and add physical volumes to it with the
following commands:
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0
The first command creates the volume group and adds a physical
volume to it in a physical volume group called bus0. The second
command adds the second drive to the volume group, locating it in a
different physical volume group named bus1. The use of physical
volume groups allows the use of PVG-strict mirroring of disks and
PV links.
4. Repeat this procedure for additional volume groups.
NOTE It is important to use the -M n and -c y options for both redo logs and
control files. These options allow the redo log files to be resynchronized
by SLVM following a system crash before Oracle recovery proceeds. If
these options are not set correctly, you may not be able to continue with
database recovery.
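A sketch of such an lvcreate invocation for a redo log file (the logical
volume name and size are illustrative):
# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops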
If the command is successful, the system will display messages like the
following:
Logical volume “/dev/vg_ops/redo1.log” has been successfully created
with character device “/dev/vg_ops/rredo1.log”
Logical volume “/dev/vg_ops/redo1.log” has been successfully extended
Note that the character device file name (also called the raw logical
volume name) is used by the Oracle DBA in building the RAC database.
If Oracle performs resilvering of RAC data files that are mirrored logical
volumes, choose a mirror consistency policy of “NONE” by disabling both
mirror write caching and mirror consistency recovery. With a mirror
consistency policy of “NONE”, SLVM does not perform the
resynchronization.
Create logical volumes for use as Oracle data files by using the same
options as in the following example:
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 \
/dev/vg_ops
The -m 1 option specifies single mirroring; the -M n option ensures that
mirror write cache recovery is set off; the -c n means that mirror
consistency recovery is disabled; the -s g means that mirroring is
PVG-strict, that is, it occurs between different physical volume groups;
the -n system.dbf option lets you specify the name of the logical
volume; and the -L 408 option allocates 408 megabytes.
If the command is successful, the system will display messages like the
following:
Logical volume “/dev/vg_ops/system.dbf” has been successfully created
with character device “/dev/vg_ops/rsystem.dbf”
Logical volume “/dev/vg_ops/system.dbf” has been successfully extended
Note that the character device file name (also called the raw logical
volume name) is used by the Oracle DBA in building the RAC database.
On your disk arrays, you should use redundant I/O channels from each
node, connecting them to separate controllers on the array. Then you can
define alternate links to the LUNs or logical disks you have defined on
the array. If you are using SAM, choose the type of disk array you wish to
configure, and follow the menus to define alternate links. If you are using
LVM commands, specify the links on the command line.
The following example shows how to configure alternate links using LVM
commands. The following disk configuration is assumed:
8/0.15.0 /dev/dsk/c0t15d0 /* I/O Channel 0 (8/0) SCSI address 15 LUN 0 */
8/0.15.1 /dev/dsk/c0t15d1 /* I/O Channel 0 (8/0) SCSI address 15 LUN 1 */
8/0.15.2 /dev/dsk/c0t15d2 /* I/O Channel 0 (8/0) SCSI address 15 LUN 2 */
8/0.15.3 /dev/dsk/c0t15d3 /* I/O Channel 0 (8/0) SCSI address 15 LUN 3 */
8/0.15.4 /dev/dsk/c0t15d4 /* I/O Channel 0 (8/0) SCSI address 15 LUN 4 */
8/0.15.5 /dev/dsk/c0t15d5 /* I/O Channel 0 (8/0) SCSI address 15 LUN 5 */
Assume that the disk array has been configured, and that both the
following device files appear for the same LUN (logical disk) when you
run the ioscan command:
/dev/dsk/c0t15d0
/dev/dsk/c1t3d0
Use the following procedure to configure a volume group for this logical
disk:
where hh must be unique to the volume group you are creating. Use
the next hexadecimal number that is available on your system, after
the volume groups that are already configured. Use the following
command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Use the pvcreate command on one of the device files associated with
the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0
It is only necessary to do this with one of the device file names for the
LUN. The -f option is only necessary if the physical volume was
previously used in some other volume group.
4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0
LVM will now recognize the I/O channel represented by
/dev/dsk/c0t15d0 as the primary link to the disk; if the primary link
fails, LVM will automatically switch to the alternate I/O channel
represented by /dev/dsk/c1t3d0. Use the vgextend command to add
additional disks to the volume group, specifying the appropriate physical
volume name for each PV link.
Repeat the entire procedure for each distinct volume group you wish to
create. For ease of system administration, you may wish to use different
volume groups to separate logs from data and control files.
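For example, adding a second LUN together with its alternate path
might look like the following (device file names are illustrative):
# vgextend /dev/vg_ops /dev/dsk/c0t15d1 /dev/dsk/c1t3d1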
NOTE The default maximum number of volume groups in HP-UX is 10. If you
intend to create enough new volume groups that the total exceeds ten,
you must increase the maxvgs system parameter and then re-build the
HP-UX kernel. Use SAM and select the Kernel Configuration area,
then choose Configurable Parameters. Maxvgs appears on the list.
Table 2-1 Required Oracle File Names for Demo Database
Logical Volume Name    LV Size (MB)    Raw Logical Volume Path Name    Oracle File Size (MB)*
Table 2-1 Required Oracle File Names for Demo Database (Continued)
Logical Volume Name    LV Size (MB)    Raw Logical Volume Path Name    Oracle File Size (MB)*
opsspfile1.ora 5 /dev/vg_ops/ropsspfile1.ora 5
pwdfile.ora 5 /dev/vg_ops/rpwdfile.ora 5
The size of the logical volume is larger than the Oracle file size because
Oracle needs extra space to allocate a header in addition to the file's
actual data capacity.
Create these files if you wish to build the demo database. The three
logical volumes at the bottom of the table are included as additional data
files, which you can create as needed, supplying the appropriate sizes. If
your naming conventions require, you can include the Oracle SID and/or
the database name to distinguish files for different instances and
different databases. If you are using the ORACLE_BASE directory
structure, create symbolic links to the ORACLE_BASE files from the
appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \
/u01/ORACLE/db001/ctrl01_1.ctl
After creating these files, set the owner to oracle and the group to dba
with a file mode of 660. The logical volumes are now available on the
primary node, and the raw logical volume names can now be used by the
Oracle DBA.
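A minimal sketch of the ownership and permission changes (the path
pattern is illustrative and assumes the raw device names begin with r):
# chown oracle:dba /dev/vg_ops/r*
# chmod 660 /dev/vg_ops/r*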
Displaying the Logical Volume Infrastructure
Installing Oracle Real Application Clusters
Cluster Configuration ASCII File
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.
CLUSTER_NAME cluster1
NODE_NAME ever3a
NETWORK_INTERFACE lan0
STATIONARY_IP 15.244.64.140
NETWORK_INTERFACE lan1
HEARTBEAT_IP 192.77.1.1
NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
Creating a Storage Infrastructure with CFS
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
6. Converting Disks from LVM to CVM
You can use the vxvmconvert utility to convert LVM volume groups
into CVM disk groups. Before you can do this, the volume group must
be deactivated, which means that any package that uses the volume
group must be halted. This procedure is described in Appendix G of the
Managing Serviceguard Twelfth Edition user’s guide.
7. Initializing Disks for CVM/CFS
You need to initialize the physical disks that will be employed in
CVM disk groups. If a physical disk has been previously used with
LVM, you should use the pvremove command to delete the LVM
header data from all the disks in the volume group (this is not
necessary if you have not previously used the disk with LVM).
To initialize a disk for CVM, log on to the master node, then use the
vxdiskadm program to initialize multiple disks, or use the
vxdisksetup command to initialize one disk at a time, as in the
following example:
# /etc/vx/bin/vxdisksetup -i c4t4d0
8. Create the Disk Group for RAC
Use the vxdg command to create disk groups. Use the -s option to
specify shared mode, as in the following example:
# vxdg -s init cfsdg1 c4t4d0
9. Create the Disk Group Multi-Node package. Use the following
command to add the disk group to the cluster:
# cfsdgadm add cfsdg1 all=sw
CAUTION Once you create the disk group and mount point packages, it is critical
that you administer the cluster with the cfs commands, including
cfsdgadm, cfsmntadm, cfsmount, and cfsumount. Using general
commands such as mount and umount could cause serious problems,
such as writing to the local file system instead of the cluster file system.
Any form of the mount command (for example, mount -o cluster,
dbed_chkptmount, or sfrac_chkptmount) other than cfsmount or
cfsumount in an HP Serviceguard Storage Management Suite
environment with CFS should be used with caution. These non-cfs
commands could cause conflicts with subsequent command operations
on the file system or Serviceguard packages. Use of these other forms of
mount will not create an appropriate multi-node package, which means
that the cluster packages are not aware of the file system changes.
# cfsumount /cfs/mnt3
2. Delete Mount Point Multi-node Package
# cfsmntadm delete /cfs/mnt1
The following output will be generated:
Mount point “/cfs/mnt1” was disassociated from the
cluster
# cfsmntadm delete /cfs/mnt2
The following output will be generated:
Mount point “/cfs/mnt2” was disassociated from the
cluster
# cfsmntadm delete /cfs/mnt3
The following output will be generated:
Mount point “/cfs/mnt3” was disassociated from the cluster
Cleaning up resource controlling shared disk group “cfsdg1”
Shared disk group “cfsdg1” was disassociated from the cluster.
4. De-configure CVM
# cfscluster stop
Creating a Storage Infrastructure with CVM
IMPORTANT Creating a rootdg disk group is only necessary the first time you use the
Volume Manager. CVM 4.1 does not require a rootdg.
# cmviewcl
CLUSTER STATUS
ever3_cluster up
MULTI_NODE_PACKAGES
IMPORTANT After creating these files, use the vxedit command to change the
ownership of the raw volume files to oracle and the group membership
to dba, and to change the permissions to 660. Example:
# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *
The logical volumes are now available on the primary node, and the raw
logical volume names can now be used by the Oracle DBA.
NOTE The specific commands for creating mirrored and multi-path storage
using CVM are described in the HP-UX documentation for the VERITAS
Volume Manager.
To prepare the cluster for CVM disk group configuration, you need to
ensure that only one heartbeat subnet is configured. Then use the
following command, which creates the special package that
communicates cluster information to CVM:
# cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf
After the above command completes, start the cluster and create disk
groups for shared use as described in the following sections.
Starting the Cluster and Identifying the Master Node Run the
cluster, which will activate the special CVM package:
# cmruncl
After the cluster is started, it will now run with a special system
multi-node package named VxVM-CVM-pkg, which is on all nodes. This
package is shown in the following output of the cmviewcl -v command:
CLUSTER STATUS
bowls up
SYSTEM_MULTI_NODE_PACKAGES:
When CVM starts up, it selects a master node, and this is the node from
which you must issue the disk group configuration commands. To
determine the master node, issue the following command from each node
in the cluster:
# vxdctl -c mode
One node will identify itself as the master. Create disk groups from this
node.
Converting Disks from LVM to CVM You can use the vxvmconvert
utility to convert LVM volume groups into CVM disk groups. Before you
can do this, the volume group must be deactivated, which means that
any package that uses the volume group must be halted. This procedure
is described in Appendix G of the Managing Serviceguard Thirteenth
Edition user’s guide.
Initializing Disks for CVM You need to initialize the physical disks
that will be employed in CVM disk groups. If a physical disk has been
previously used with LVM, you should use the pvremove command to
delete the LVM header data from all the disks in the volume group (this
is not necessary if you have not previously used the disk with LVM).
To initialize a disk for CVM, log on to the master node, then use the
vxdiskadm program to initialize multiple disks, or use the vxdisksetup
command to initialize one disk at a time, as in the following example:
# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
Creating Disk Groups for RAC Use the vxdg command to create disk
groups. Use the -s option to specify shared mode, as in the following
example:
# vxdg -s init ops_dg c0t3d2
Verify the configuration with the following command:
# vxdg list
NAME STATE ID
Creating Volumes
Use the vxassist command to create logical volumes. The following is
an example:
# vxassist -g ops_dg make log_files 1024m
This command creates a 1024 MB volume named log_files in a disk
group named ops_dg. The volume can be referenced with the block device
file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file
/dev/vx/rdsk/ops_dg/log_files.
Verify the configuration with the following command:
# vxdg list
IMPORTANT After creating these files, use the vxedit command to change the
ownership of the raw volume files to oracle and the group membership
to dba, and to change the permissions to 660. Example:
# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *
The logical volumes are now available on the primary node, and the raw
logical volume names can now be used by the Oracle DBA.
NOTE The specific commands for creating mirrored and multi-path storage
using CVM are described in the HP-UX documentation for the VERITAS
Volume Manager.
Table 2-2 Required Oracle File Names for Demo Database
Volume Name    Size (MB)    Raw Device File Name    Oracle File Size (MB)
Table 2-2 Required Oracle File Names for Demo Database (Continued)
Volume Name    Size (MB)    Raw Device File Name    Oracle File Size (MB)
Create these files if you wish to build the demo database. The three
logical volumes at the bottom of the table are included as additional data
files, which you can create as needed, supplying the appropriate sizes. If
your naming conventions require, you can include the Oracle SID and/or
the database name to distinguish files for different instances and
different databases. If you are using the ORACLE_BASE directory
structure, create symbolic links to the ORACLE_BASE files from the
appropriate directory.
Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \
/u01/ORACLE/db001/ctrl01_1.ctl
Example:
1. Create an ASCII file, and define the path for each database object.
control1=/u01/ORACLE/db001/ctrl01_1.ctl
2. Set the following environment variable where filename is the name
of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename
Prerequisites for Oracle 10g (Sample Installation)
Installing Oracle 10g Cluster Software
Installing Oracle 10g RAC Binaries
Creating a RAC Demo Database
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/usr/local/bin:
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0
a. In this sample, the database name and SID prefix are ver10.
b. Select the storage option for raw devices.
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.0/crs
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/usr/local/bin:
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0
a. In this sample, the database name and SID prefix are ver10.
b. Select the storage option for Cluster File System.
c. Enter /cfs/mnt2/oradata as the common directory.
Verify that Oracle Disk Manager is Configured
Configuring Oracle to Use Oracle Disk Manager Library
Verify that Oracle Disk Manager is Running
1. Start the cluster and Oracle database (if not already started)
2. Check that the Oracle instance is using the Oracle Disk Manager
function:
# cat /dev/odm/stats
abort: 0
cancel: 0
commit: 18
create: 18
delete: 0
identify: 349
io: 12350590
reidentify: 78
resize: 0
unidentify: 203
mname: 0
vxctl: 0
vxvers: 10
io req: 9102431
io calls: 6911030
comp req: 73480659
comp calls: 5439560
io mor cmp: 461063
io zro cmp: 2330
cl receive: 66145
cl ident: 18
cl reserve: 8
cl delete: 1
cl resize: 0
cl same op: 0
cl opt idn: 0
cl opt rsv: 332
**********: 17
3. Verify that the Oracle Disk Manager is loaded:
Configuring Oracle to Stop Using Oracle Disk Manager Library
Using Serviceguard Packages to Synchronize with Oracle 10g RAC
DEPENDENCY_NAME mp2
DEPENDENCY_CONDITION SG-CFS-MP-2=UP
DEPENDENCY_LOCATION SAME_NODE
DEPENDENCY_NAME mp3
DEPENDENCY_CONDITION SG-CFS-MP-3=UP
DEPENDENCY_LOCATION SAME_NODE
• Starting and Stopping Oracle Cluster Software
In the Serviceguard package control script, configure the Oracle
Cluster Software start in the customer_defined_run_cmds function
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs start
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl start crs
In the Serviceguard package control script, configure the Oracle
Cluster Software stop in the customer_defined_halt_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs stop
For 10g 10.2.0.01 or later:
<CRS HOME>/bin/crsctl stop crs
When stopping Oracle Cluster Software in a Serviceguard package, it
may be necessary to verify that the Oracle processes have stopped and
exited before deactivating storage or halting the CFS multi-node
package. The verification can be done with a script that loops and
checks for the successful stop message in the Oracle Cluster Software
logs or for the existence of Oracle processes that need to be stopped,
specifically the CSS daemon (ocssd.bin). For example, this script could
be called by the Serviceguard package control script after the command
to halt Oracle Cluster Software and before storage deactivation.
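A minimal sketch of such a check, assuming the CSS daemon process is
named ocssd.bin and using a 600-second ceiling (both assumptions;
adjust to the installed Oracle version):
# Wait for the CSS daemon to exit, giving up after 600 seconds.
i=0
while ps -ef | grep -v grep | grep -q ocssd.bin
do
    sleep 5
    i=`expr $i + 5`
    if [ $i -ge 600 ]
    then
        break
    fi
done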
3 Serviceguard Configuration for Oracle 9i RAC
Planning Database Storage
(Table: storage configuration options. Option 1: CFS / CFS. Option 3: Local FS / CFS.)
NOTE Mixing CFS database files and raw volumes is allowed, but not
recommended. RAC datafiles on CFS require Oracle Disk Manager
(ODM).
Installing Serviceguard Extension for RAC
NOTE For the most current version compatibility for Serviceguard and HP-UX,
see the SGeRAC release notes.
To install Serviceguard Extension for RAC, use the following steps for
each node:
1. Mount the distribution media in the tape drive, CD, or DVD reader.
2. Run Software Distributor, using the swinstall command.
3. Specify the correct input device.
4. Choose the following bundle from the displayed list:
Serviceguard Extension for RAC
5. After choosing the bundle, select OK. The software is loaded.
Configuration File Parameters
NOTE CVM 4.x with CFS does not use the STORAGE_GROUP parameter because
the disk group activation is performed by the multi-node package. CVM
3.x or 4.x without CFS uses the STORAGE_GROUP parameter in the ASCII
package configuration file in order to activate the disk group.
Do not enter the names of LVM volume groups or VxVM disk groups in
the package ASCII configuration file.
Operating System Parameters
Creating a Storage Infrastructure with LVM
Selecting Disks for the Volume Group Obtain a list of the disks on
both nodes and identify which device files are used for the same disk on
both. Use the following command on each node to list available disks as
they are known to each system:
# lssf /dev/dsk/*
In the following examples, we use /dev/rdsk/c1t2d0 and
/dev/rdsk/c0t2d0, which happen to be the device names for the same
disks on both ftsys9 and ftsys10. In the event that the device file
names are different on the different nodes, make a careful note of the
correspondences.
where hh must be unique to the volume group you are creating. Use
the next hexadecimal number that is available on your system, after
the volume groups that are already configured. Use the following
command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Create the volume group and add physical volumes to it with the
following commands:
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c0t2d0
The first command creates the volume group and adds a physical
volume to it in a physical volume group called bus0. The second
command adds the second drive to the volume group, locating it in a
different physical volume group named bus1. The use of physical
volume groups allows the use of PVG-strict mirroring of disks and
PV links.
4. Repeat this procedure for additional volume groups.
NOTE It is important to use the -M n and -c y options for both redo logs and
control files. These options allow the redo log files to be resynchronized
by SLVM following a system crash before Oracle recovery proceeds. If
these options are not set correctly, you may not be able to continue with
database recovery.
If the command is successful, the system will display messages like the
following:
Logical volume “/dev/vg_ops/redo1.log” has been successfully created
with character device “/dev/vg_ops/rredo1.log”
Logical volume “/dev/vg_ops/redo1.log” has been successfully extended
Note that the character device file name (also called the raw logical
volume name) is used by the Oracle DBA in building the RAC database.
If Oracle performs the resilvering of RAC data files that are mirrored
logical volumes, choose a mirror consistency policy of “NONE” by
disabling both mirror write caching and mirror consistency recovery.
With a mirror consistency policy of “NONE”, SLVM does not
perform the resynchronization.
Create logical volumes for use as Oracle data files by using the same
options as in the following example:
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops
The -m 1 option specifies single mirroring; the -M n option ensures that
mirror write cache recovery is set off; the -c n means that mirror
consistency recovery is disabled; the -s g means that mirroring is
PVG-strict, that is, it occurs between different physical volume groups;
the -n system.dbf option lets you specify the name of the logical
volume; and the -L 408 option allocates 408 megabytes.
If the command is successful, the system will display messages like the
following:
Logical volume “/dev/vg_ops/system.dbf” has been successfully created with character device “/dev/vg_ops/rsystem.dbf”
Logical volume “/dev/vg_ops/system.dbf” has been successfully extended
Note that the character device file name (also called the raw logical
volume name) is used by the Oracle DBA in building the OPS database.
Creating RAC Volume Groups on Disk Arrays
If you are using SAM, choose the type of disk array you wish to
configure, and follow the menus to define alternate links to the array. If
you are using LVM commands, specify the links on the command line.
The following example shows how to configure alternate links using
LVM commands. The following disk configuration is assumed:
8/0.15.0 /dev/dsk/c0t15d0 /* I/O Channel 0 (8/0) SCSI address 15 LUN 0 */
8/0.15.1 /dev/dsk/c0t15d1 /* I/O Channel 0 (8/0) SCSI address 15 LUN 1 */
8/0.15.2 /dev/dsk/c0t15d2 /* I/O Channel 0 (8/0) SCSI address 15 LUN 2 */
8/0.15.3 /dev/dsk/c0t15d3 /* I/O Channel 0 (8/0) SCSI address 15 LUN 3 */
8/0.15.4 /dev/dsk/c0t15d4 /* I/O Channel 0 (8/0) SCSI address 15 LUN 4 */
8/0.15.5 /dev/dsk/c0t15d5 /* I/O Channel 0 (8/0) SCSI address 15 LUN 5 */
Assume that the disk array has been configured, and that both the
following device files appear for the same LUN (logical disk) when you
run the ioscan command:
/dev/dsk/c0t15d0
/dev/dsk/c1t3d0
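For example, you can produce a full listing of disk device files with a command such as:
# ioscan -fnC disk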
Use the following procedure to configure a volume group for this logical
disk:
1. Set up the group directory for the volume group:
# mkdir /dev/vg_ops
2. Create a control file named group in the directory, using a minor
number 0xhh0000 in which hh is unique among the volume groups on
the system. Use the following command to display a list of existing
volume groups:
# ls -l /dev/*/group
3. Use the pvcreate command on one of the device files associated with
the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0
It is only necessary to do this with one of the device file names for the
LUN. The -f option is only necessary if the physical volume was
previously used in some other volume group.
4. Use the following to create the volume group with the two links:
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0 /dev/dsk/c1t3d0
LVM will now recognize the I/O channel represented by
/dev/dsk/c0t15d0 as the primary link to the disk; if the primary link
fails, LVM will automatically switch to the alternate I/O channel
represented by /dev/dsk/c1t3d0. Use the vgextend command to add
additional disks to the volume group, specifying the appropriate physical
volume name for each PV link.
Repeat the entire procedure for each distinct volume group you wish to
create. For ease of system administration, you may wish to use different
volume groups to separate logs from data and control files.
NOTE The default maximum number of volume groups in HP-UX is 10. If you
intend to create enough new volume groups that the total exceeds ten,
you must increase the maxvgs system parameter and then re-build the
HP-UX kernel. Use SAM and select the Kernel Configuration area,
then choose Configurable Parameters. Maxvgs appears on the list.
Table 3-2 Required Oracle File Names for Demo Database

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name    Oracle File Size (MB)*
ops1log1.log          28             /dev/vg_ops/rops1log1.log       20
ops1log2.log          28             /dev/vg_ops/rops1log2.log       20
ops1log3.log          28             /dev/vg_ops/rops1log3.log       20
ops2log1.log          28             /dev/vg_ops/rops2log1.log       20
ops2log2.log          28             /dev/vg_ops/rops2log2.log       20
ops2log3.log          28             /dev/vg_ops/rops2log3.log       20
opstools.dbf          24             /dev/vg_ops/ropstools.dbf       15
opsspfile1.ora        5              /dev/vg_ops/ropsspfile1.ora     5
opsindx1.dbf          78             /dev/vg_ops/ropsindx1.dbf       70
opsdrsys1.dbf         98             /dev/vg_ops/ropsdrsys1.dbf      90

* The size of the logical volume is larger than the Oracle file size
because Oracle needs extra space to allocate a header in addition to the
file's actual data capacity.
Create these files if you wish to build the demo database. The three
logical volumes at the bottom of the table are included as additional data
files, which you can create as needed, supplying the appropriate sizes. If
your naming conventions require, you can include the Oracle SID and/or
the database name to distinguish files for different instances and
different databases. If you are using the ORACLE_BASE directory
structure, create symbolic links to the ORACLE_BASE files from the
appropriate directory. Example:
# ln -s /dev/vg_ops/ropsctl1.ctl \
/u01/ORACLE/db001/ctrl01_1.ctl
After creating these files, set the owner to oracle and the group to dba
with a file mode of 660. The logical volumes are now available on the
primary node, and the raw logical volume names can now be used by the
Oracle DBA.
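A sketch of those ownership and permission settings, assuming all of the raw logical volumes for the database live in /dev/vg_ops:
# chown oracle:dba /dev/vg_ops/r*
# chmod 660 /dev/vg_ops/r*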
Installing Oracle Real Application Clusters
NOTE If you do not wish to install the demo database, select install software
only.
Cluster Configuration ASCII File
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.
CLUSTER_NAME cluster 1
NODE_NAME ever3a
NETWORK_INTERFACE lan0
STATIONARY_IP 15.244.64.140
NETWORK_INTERFACE lan1
HEARTBEAT_IP 192.77.1.1
NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
Creating a Storage Infrastructure with CFS
To determine the CVM master node, issue the following command from
each node in the cluster:
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
6. Converting Disks from LVM to CVM
Use the vxvmconvert utility to convert LVM volume groups into
CVM disk groups. Before you can do this, the volume group must be
deactivated, which means that any package that uses the volume
group must be halted. This procedure is described in Appendix G of the
Managing Serviceguard Twelfth Edition user’s guide.
7. Initializing Disks for CVM/CFS
You need to initialize the physical disks that will be employed in
CVM disk groups. If a physical disk has been previously used with
LVM, you should use the pvremove command to delete the LVM
header data from all the disks in the volume group (this is not
necessary if you have not previously used the disk with LVM).
To initialize a disk for CVM, log on to the master node, then use the
vxdiskadm program to initialize multiple disks, or use the
vxdisksetup command to initialize one disk at a time, as in the
following example:
# /etc/vx/bin/vxdisksetup -i c4t4d0
8. Create the Disk Group for RAC
Use the vxdg command to create disk groups. Use the -s option to
specify shared mode, as in the following example:
# vxdg -s init cfsdg1 c4t4d0
9. Create the Disk Group Multi-Node package. Use the following
command to add the disk group to the cluster:
# cfsdgadm add cfsdg1 all=sw
Use steps like the following to remove the CFS infrastructure from the
cluster:
3. Delete the disk group multi-node package (DG MNP):
# cfsdgadm delete cfsdg1
The following output will be generated:
Shared disk group “cfsdg1” was disassociated from the cluster.
4. De-configure CVM
# cfscluster stop
The following output will be generated:
Stopping CVM...CVM is stopped
# cfscluster unconfig
The following output will be generated:
CVM is now unconfigured
Creating a Storage Infrastructure with CVM
IMPORTANT Creating a rootdg disk group is only necessary the first time you use the
Volume Manager. CVM 4.1 does not require a rootdg.
For more detailed information on how to configure CVM 4.x, refer to the
Managing Serviceguard Twelfth Edition user’s guide.
NOTE To prepare the cluster for CVM configuration, make sure that
MAX_CONFIGURED_PACKAGES is set to a minimum of 3 in the cluster
configuration file (the default value for MAX_CONFIGURED_PACKAGES in
Serviceguard A.11.17 is 150). In the sample, the value is set to 10.
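In the cluster configuration ASCII file, the entry would look like the following:
MAX_CONFIGURED_PACKAGES 10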
# cmrunpkg SG-CFS-pkg
When CVM starts up, it selects a master node, which is the node
from which you must issue the disk group configuration commands.
To determine the master node, issue the following command from
each node in the cluster:
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
NOTE The specific commands for creating mirrored and multi-path storage
using CVM are described in the HP-UX documentation for the VERITAS
Volume Manager.
To prepare the cluster for CVM disk group configuration, you need to
ensure that only one heartbeat subnet is configured. Then use the
following command, which creates the special package that
communicates cluster information to CVM:
# cmapplyconf -P /etc/cmcluster/cvm/VxVM-CVM-pkg.conf
After the above command completes, start the cluster and create disk
groups for shared use as described in the following sections.
Starting the Cluster and Identifying the Master Node Run the
cluster, which will activate the special CVM package:
# cmruncl
After the cluster is started, it will now run with a special system
multi-node package named VxVM-CVM-pkg, which is on all nodes. This
package is shown in the following output of the cmviewcl -v command:
CLUSTER STATUS
bowls up
SYSTEM_MULTI_NODE_PACKAGES:
When CVM starts up, it selects a master node, and this is the node from
which you must issue the disk group configuration commands. To
determine the master node, issue the following command from each node
in the cluster:
# vxdctl -c mode
One node will identify itself as the master. Create disk groups from this
node.
Initializing Disks for CVM Initialize the physical disks that will be
employed in CVM disk groups. If a physical disk has been previously
used with LVM, you should use the pvremove command to delete the
LVM header data from all the disks in the volume group (this is not
necessary if you have not previously used the disk with LVM).
To initialize a disk for CVM, log on to the master node, then use the
vxdiskadm program to initialize multiple disks, or use the vxdisksetup
command to initialize one disk at a time, as in the following example:
# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
Creating Disk Groups for RAC Use the vxdg command to create disk
groups. Use the -s option to specify shared mode, as in the following
example:
# vxdg -s init ops_dg c0t3d2
Verify the configuration with the following command:
# vxdg list
NAME STATE ID
Creating Volumes
Use the vxassist command to create logical volumes. The following is
an example:
# vxassist -g ops_dg make log_files 1024m
This command creates a 1024 MB volume named log_files in a disk
group named ops_dg. The volume can be referenced with the block device
file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file
/dev/vx/rdsk/ops_dg/log_files.
Verify the configuration with the following command:
# vxdg list
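As a further sketch (the volume name and size here are hypothetical), a mirrored volume could be requested by specifying a mirror layout; see the VERITAS Volume Manager documentation for the authoritative options:
# vxassist -g ops_dg make opsdata1 500m layout=mirror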
IMPORTANT After creating these files, use the vxedit command to change the
ownership of the raw volume files to oracle and the group membership
to dba, and to change the permissions to 660. Example:
# cd /dev/vx/rdsk/ops_dg
# vxedit -g ops_dg set user=oracle *
# vxedit -g ops_dg set group=dba *
# vxedit -g ops_dg set mode=660 *
The logical volumes are now available on the primary node, and the raw
logical volume names can now be used by the Oracle DBA.
Oracle Demo Database Files
Table 3-3 Required Oracle File Names for Demo Database

Volume Name      Size (MB)   Raw Device File Name                 Oracle File Size (MB)
ops1log1.log     28          /dev/vx/rdsk/ops_dg/ops1log1.log     20
ops1log2.log     28          /dev/vx/rdsk/ops_dg/ops1log2.log     20
ops1log3.log     28          /dev/vx/rdsk/ops_dg/ops1log3.log     20
ops2log1.log     28          /dev/vx/rdsk/ops_dg/ops2log1.log     20
ops2log2.log     28          /dev/vx/rdsk/ops_dg/ops2log2.log     20
ops2log3.log     28          /dev/vx/rdsk/ops_dg/ops2log3.log     20
opstools.dbf     24          /dev/vx/rdsk/ops_dg/opstools.dbf     15
opsspfile1.ora   5           /dev/vx/rdsk/ops_dg/opsspfile1.ora   5
opsindx1.dbf     78          /dev/vx/rdsk/ops_dg/opsindx1.dbf     70
Create these files if you wish to build the demo database. The three
logical volumes at the bottom of the table are included as additional data
files, which you can create as needed, supplying the appropriate sizes. If
your naming conventions require, you can include the Oracle SID and/or
the database name to distinguish files for different instances and
different databases. If you are using the ORACLE_BASE directory
structure, create symbolic links to the ORACLE_BASE files from the
appropriate directory.
Example:
# ln -s /dev/vx/rdsk/ops_dg/opsctl1.ctl \
/u01/ORACLE/db001/ctrl01_1.ctl
Example, Oracle9:
1. Create an ASCII file, and define the path for each database object.
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl
or
control1=/u01/ORACLE/db001/ctrl01_1.ctl
2. Set the following environment variable where filename is the name
of the ASCII file created.
# export DBCA_RAW_CONFIG=<full path>/filename
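A fuller mapping file might look like the following sketch (the object names and paths are illustrative only):
control1=/dev/vx/rdsk/ops_dg/opsctl1.ctl
spfile1=/dev/vx/rdsk/ops_dg/opsspfile1.ora
system1=/dev/vx/rdsk/ops_dg/opssystem.dbf
indx1=/dev/vx/rdsk/ops_dg/opsindx1.dbf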
Installing Oracle 9i RAC
# ll
total 0
drwxr-xr-x 2 root root 96 Jun 3 11:43 lost+found
drwxr-xr-x 2 oracle dba 96 Jun 3 13:45 oradat
d. Set up CFS directory for Server Management.
Preallocate space for srvm (200MB)
# prealloc /cfs/cfssrvm/ora_srvm 209715200
# chown oracle:dba /cfs/cfssrvm/ora_srvm
2. Install Oracle RAC Software
1. Set up the Oracle environment variables. For example:
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
CLASSPATH=/opt/java1.3/lib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0
2. Set up Listeners with Oracle Network Configuration
Assistant
$ netca
3. Start GSD on all Nodes
$ gsdctl start
Output: Successfully started GSD on local node
Verify Oracle Disk Manager is Running
1. Start the cluster and Oracle database (if not already started)
2. Check that the Oracle instance is using the Oracle Disk Manager
function with the following command:
# cat /dev/odm/stats
abort: 0
cancel: 0
commit: 18
create: 18
delete: 0
identify: 349
io: 12350590
reidentify: 78
resize: 0
unidentify: 203
mname: 0
vxctl: 0
vxvers: 10
io req: 9102431
io calls: 6911030
comp req: 73480659
comp calls: 5439560
io mor cmp: 461063
io zro cmp: 2330
cl receive: 66145
cl ident: 18
cl reserve: 8
cl delete: 1
cl resize: 0
cl same op: 0
cl opt idn: 0
cl opt rsv: 332
**********: 17
Using Packages to Configure Startup and Shutdown of RAC Instances
NOTE The maximum number of RAC instances for Oracle 9i is 127 per cluster.
For Oracle 10g refer to Oracle’s requirements.
NOTE You must create the RAC instance package with a PACKAGE_TYPE of
FAILOVER, but the fact that you are entering only one node ensures that
the instance will only run on that node.
To simplify the creation of RAC instance packages, you can use the
Oracle template provided with the separately purchasable ECM Toolkits
product (T1909BA). Use the special toolkit scripts that are provided, and
follow the instructions that appear in the README file. Also refer to the
section “Customizing the Control Script for RAC Instances” below for
more information.
To create the package with Serviceguard Manager, select the cluster, go
to the Actions menu, and choose Configure Package. To modify a
package, select the package. For an instance package, create one
package for each instance. On each node, supply the SID name for the
package name.
To create a package on the command line, use the cmmakepkg command
to get an editable configuration file.
Set the AUTO_RUN parameter to YES, if you want the instance to start up
as soon as the node joins the cluster. In addition, you should set the
NODE_FAILFAST_ENABLED parameter to NO.
If you are using CVM disk groups for the RAC database, be sure to
include the name of each disk group on a separate STORAGE_GROUP line in
the configuration file.
If you are using CFS or CVM for RAC shared storage with multi-node
packages, the package containing the RAC instance should be configured
with package dependency to depend on the multi-node packages.
The following is a sample of the setup dependency conditions in
application package configuration file:
DEPENDENCY_NAME mp1
DEPENDENCY_CONDITION SG-CFS-MP-1=UP
DEPENDENCY_LOCATION SAME_NODE
DEPENDENCY_NAME mp2
DEPENDENCY_CONDITION SG-CFS-MP-2=UP
DEPENDENCY_LOCATION SAME_NODE
DEPENDENCY_NAME mp3
DEPENDENCY_CONDITION SG-CFS-MP-3=UP
DEPENDENCY_LOCATION SAME_NODE
Alternatively, you can create your own script and copy it to all nodes
that can run the package. This script should contain the cmmodpkg -e
command and activate the package after RAC and the cluster manager
have started.
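A minimal sketch of such a script (the package name rac_pkg1 is hypothetical):
#!/usr/bin/sh
# Enable package switching once RAC and the cluster manager
# are up; run this after instance startup completes.
cmmodpkg -e rac_pkg1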
NOTE Use care in defining service run commands; each run command is
executed directly by the control script.
If you need to define a set of run and halt operations in addition to the
defaults, create functions for them in the sections under the heading
CUSTOMER DEFINED FUNCTIONS.
Enter the names of the CVM disk groups you wish to activate in shared
mode in the CVM_DG[] array. Use a different array element for each RAC
disk group. (Remember that CVM disk groups must also be coded in the
package ASCII configuration file using STORAGE_GROUP parameters.) Be
sure to select an appropriate type of shared activation with the CVM
activation command. For example:
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"
Do not define the RAC instance as a package service. Instead, include the
commands that start up an RAC instance in the
customer_defined_run_commands section of the package control script.
Similarly, you should include the commands that halt an RAC instance
in the customer_defined_halt_commands section of the package control
script.
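The following is only an illustrative sketch of what those sections might contain; the ECM Toolkit scripts provide the tested startup and shutdown logic, and the SQL script paths shown here are hypothetical:
function customer_defined_run_cmds
{
# Start the RAC instance as the oracle user (illustrative)
su - oracle -c "$ORACLE_HOME/bin/sqlplus /nolog @/etc/cmcluster/pkg/start_inst.sql"
}
function customer_defined_halt_cmds
{
# Shut down the RAC instance as the oracle user (illustrative)
su - oracle -c "$ORACLE_HOME/bin/sqlplus /nolog @/etc/cmcluster/pkg/stop_inst.sql"
}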
Define the Oracle monitoring command as a service command, or else
use the special Oracle script provided with the ECM Toolkit.
4. Create a package directory for the instance:
/etc/cmcluster/pkg/${SID_NAME}
Example: /etc/cmcluster/pkg/ORACLE_TEST0
5. Copy the Oracle shell script templates from the ECMT default source
directory to the package directory:
# cd /etc/cmcluster/pkg/${SID_NAME}
# cp -p /opt/cmcluster/toolkit/oracle/* .
Example:
# cd /etc/cmcluster/pkg/ORACLE_TEST0
# cp -p /opt/cmcluster/toolkit/oracle/* .
Edit haoracle.conf as described in the README file.
6. Gather the package service name for monitoring Oracle instance
processes. In Serviceguard Manager, this information can be found
under the “Services” tab.
SERVICE_NAME[0]=${SID_NAME}
SERVICE_CMD[0]="/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh"
SERVICE_RESTART[0]="-r 2"
Example:
SERVICE_NAME[0]=ORACLE_TEST0
SERVICE_CMD[0]="/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh"
SERVICE_RESTART[0]="-r 2"
7. Gather how to start the database using an ECMT script. In
Serviceguard Manager, enter this filename for the control script
start command.
/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh start
Example: /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh start
8. Gather how to stop the database using an ECMT script. In
Serviceguard Manager, enter this filename for the control script stop
command.
/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh stop
Example: /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh stop
4 Maintenance and Troubleshooting

Reviewing Cluster and Package States with the cmviewcl Command
You can also specify that the output should be formatted as it was in a
specific earlier release by using the -r option indicating the release
format you wish. Example:
# cmviewcl -r A.11.16
See the man page for a detailed description of other cmviewcl options.
Quorum_Server_Status:
NAME STATUS STATE
white up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0/0 lan0
PRIMARY up 0/8/0/0/4/0 lan1
STANDBY up 0/8/0/0/6/0 lan3
Quorum_Server_Status:
NAME STATUS STATE
white up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 0/0/0/0 lan0
PRIMARY up 0/8/0/0/4/0 lan1
STANDBY up 0/8/0/0/6/0 lan3
MULTI_NODE_PACKAGES
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 SG-CFS-vxconfigd
Service up 5 0 SG-CFS-sgcvmd
Service up 5 0 SG-CFS-vxfsckd
Service up 0 0 SG-CFS-cmvxd
Service up 0 0 SG-CFS-cmvxpingd
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 SG-CFS-vxconfigd
Service up 5 0 SG-CFS-sgcvmd
Service up 5 0 SG-CFS-vxfsckd
Service up 0 0 SG-CFS-cmvxd
Service up 0 0 SG-CFS-cmvxpingd
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-pkg yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-pkg yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Dependency_Parameters:
DEPENDENCY_NAME SATISFIED
SG-CFS-DG-1 yes
Cluster Status
The status of a cluster may be one of the following:
• Up. At least one node has a running cluster daemon, and
reconfiguration is not taking place.
• Down. No cluster daemons are running on any cluster node.
• Starting. The cluster is in the process of determining its active
membership. At least one cluster daemon is running.
• Unknown. The node on which the cmviewcl command is issued
cannot report on the status of the cluster.
Node Status and State
The status of a node is either up (an active member of the cluster) or
down (inactive in the cluster). A node may also be in one of the following
states:
• Failed. A node never sees itself in this state. Other active members
of the cluster will see a node in this state if that node was in an
active cluster, but is no longer, and is not halted.
• Reforming. A node is in this state when the cluster is re-forming.
The node is currently running the protocols which ensure that all
nodes agree to the new membership of an active cluster. If agreement
is reached, the status database is updated to reflect the new cluster
membership.
• Running. A node in this state has completed all required activity for
the last re-formation and is operating normally.
• Halted. A node never sees itself in this state. Other nodes will see it
in this state after the node has gracefully left the active cluster, for
instance with a cmhaltnode command.
• Unknown. A node never sees itself in this state. Other nodes assign a
node this state if it has never been an active cluster member.
Package Status and State
The state of a package may be one of the following:
• Starting. The start instructions in the control script are being run.
• Running. Services are active and being monitored.
• Halting. The halt instructions in the control script are being run.
Service Status
Services have only status, as follows:
• Up.
• Down.
• Unknown.
Network Status
The network interfaces have only status, as follows:
• Up.
• Down.
• Unknown. We cannot determine whether the interface is up or down.
This can happen when the cluster is down. A standby interface has
this status.
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 56/36.1 lan0
STANDBY up 60/6 lan1
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Start configured_node
Failback manual
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled ftsys9 (current)
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 28.1 lan0
STANDBY up 32.1 lan1
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Start configured_node
Failback manual
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled ftsys10 (current)
Alternate up enabled ftsys9
SYSTEM_MULTI_NODE_PACKAGES:
When you use the -v option, the display shows the system multi-node
package associated with each active node in the cluster, as in the
following:
SYSTEM_MULTI_NODE_PACKAGES:
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 56/36.1 lan0
STANDBY up 60/6 lan1
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover min_package_node
Failback manual
Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 0 0 service1
Subnet up 0 0 15.13.168.0
Resource up /example/float
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled ftsys9 (current)
Alternate up enabled ftsys10
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover min_package_node
Failback manual
Script_Parameters:
ITEM STATUS NAME MAX_RESTARTS RESTARTS
Service up service2.1 0 0
Subnet up 15.13.168.0 0 0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled ftsys10
Alternate up enabled ftsys9 (current)
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 28.1 lan0
STANDBY up 32.1 lan1
Now pkg2 is running on node ftsys9. Note that it is still disabled from
switching.
Both packages are now running on ftsys9 and pkg2 is enabled for
switching. Ftsys10 is running the daemon and no packages are running
on ftsys10.
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 56/36.1 lan0
Serial_Heartbeat:
DEVICE_FILE_NAME STATUS CONNECTED_TO:
/dev/tty0p0 up ftsys10 /dev/tty0p0
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 28.1 lan0
Serial_Heartbeat:
DEVICE_FILE_NAME STATUS CONNECTED_TO:
/dev/tty0p0 up ftsys9 /dev/tty0p0
The following shows status when the serial line is not working:
CLUSTER STATUS
example up
NODE STATUS STATE
ftsys9 up running
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 56/36.1 lan0
Serial_Heartbeat:
DEVICE_FILE_NAME STATUS CONNECTED_TO:
/dev/tty0p0 down ftsys10 /dev/tty0p0
Network_Parameters:
INTERFACE STATUS PATH NAME
PRIMARY up 28.1 lan0
Serial_Heartbeat:
DEVICE_FILE_NAME STATUS CONNECTED_TO:
/dev/tty0p0 down ftsys9 /dev/tty0p0
UNOWNED_PACKAGES
Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover min_package_node
Failback automatic
Script_Parameters:
ITEM STATUS NODE_NAME NAME
Resource up manx /resource/random
Subnet up manx 192.8.15.0
Resource up burmese /resource/random
Subnet up burmese 192.8.15.0
Resource up tabby /resource/random
Subnet up tabby 192.8.15.0
Resource up persian /resource/random
Subnet up persian 192.8.15.0
Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled manx
Alternate up enabled burmese
Alternate up enabled tabby
Alternate up enabled persian
Online Reconfiguration
The online reconfiguration feature provides a method for making
configuration changes online to a Serviceguard Extension for RAC
(SGeRAC) cluster. Specifically, it provides the ability to add and/or
delete nodes from a running SGeRAC cluster, and to reconfigure an
SLVM volume group while it is being accessed by only one node.
Use the following steps for adding a node using online node
reconfiguration:
1. Export the mapfile for the volume groups that needs to be visible in
the new node (vgexport -s -m mapfile -p <sharedvg>).
2. Copy the mapfile to the new node.
3. Import the volume groups into the new node. (vgimport -s -m
mapfile <sharedvg>).
4. Add the node to the cluster online: edit the cluster configuration file
to add the node details and run cmapplyconf.
5. Make the new node join the cluster (cmrunnode) and run the
services.
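Putting the steps together, a worked sketch might look like this (the node name newnode and the map file path are illustrative):
On an existing cluster node:
# vgexport -s -p -m /tmp/vg_ops.map vg_ops
# rcp /tmp/vg_ops.map newnode:/tmp/vg_ops.map
On the new node:
# vgimport -s -m /tmp/vg_ops.map vg_ops
Then, after editing the cluster configuration file to add the node:
# cmapplyconf -C /etc/cmcluster/cluster.ascii
# cmrunnode newnode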
Use the following steps for deleting a node using online node
reconfiguration:
Managing the Shared Storage
4. Activate the volume group in exclusive mode on one node:
# vgchange -a e -x vg_shared
NOTE Ensure that none of the mirrored logical volumes in this volume
group have Consistency Recovery set to MWC (refer to lvdisplay(1M)).
Changing the mode back to “shared” will not be allowed in that case,
since Mirror Write Cache consistency recovery (MWC) is not valid in
volume groups activated in shared mode.
5. Make the desired configuration change for the volume group: on the
node where the volume group is active, run the required command to
change the configuration. For example, to add a mirror copy, use the
following command:
# lvextend -m 2 /dev/vg_shared/lvol1
6. Export the changes to other cluster nodes if required.
If the configuration change required the creation or deletion of a
logical or physical volume (that is, if any of the following commands
were used: lvcreate(1M), lvreduce(1M), vgextend(1M),
vgreduce(1M), lvsplit(1M), or lvmerge(1M)), then the following
sequence of steps is required.
a. From the same node, export the mapfile for vg_shared. For
example
# vgexport -s -p -m /tmp/vg_shared.map vg_shared
b. Copy the mapfile thus obtained to all the other nodes of the
cluster.
c. On the other cluster nodes, export vg_shared and re-import it
using the new map file. For example,
# vgexport vg_shared
# mkdir /dev/vg_shared
# mknod /dev/vg_shared/group c 64 0xhh0000
# vgimport -s -m /tmp/vg_shared.map vg_shared
1. Use the vgchange command on each node to ensure that the volume
group to be shared is currently inactive on all nodes. Example:
# vgchange -a n /dev/vg_ops
2. On the configuration node, use the vgchange command to make the
volume group shareable by members of the cluster:
# vgchange -S y -c y /dev/vg_ops
This command is issued from the configuration node only, and the
cluster must be running on all nodes for the command to succeed.
Note that both the -S and the -c options are specified. The -S y
option makes the volume group shareable, and the -c y option
causes the cluster id to be written out to all the disks in the volume
group. In effect, this command specifies the cluster to which a node
must belong in order to obtain shared access to the volume group.
To reverse the operation, use the -S n and -c n options; for example:
# vgchange -S n -c n /dev/vg_ops
This marks the volume group as non-shared and not associated with a
cluster.
NOTE Do not share volume groups that are not part of the RAC configuration
unless shared access is controlled.
NOTE If you wish to change the capacity of a volume group at a later time, you
must deactivate and unshare the volume group first. If you add disks,
you must specify the appropriate physical volume group name and make
sure the /etc/lvmpvg file is correctly updated on both nodes.
1. Ensure that the Oracle RAC database is not active on either node.
2. From node 2, use the vgchange command to deactivate the volume
group:
# vgchange -a n /dev/vg_ops
3. From node 2, use the vgexport command to export the volume
group:
# vgexport -m /tmp/vg_ops.map.old /dev/vg_ops
This dissociates the volume group from node 2.
13. Use the vgimport command, specifying the map file you copied from
the configuration node. In the following example, the vgimport
command is issued on the second node for the same volume group
that was modified on the first node:
# vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0
14. Activate the volume group in shared mode by issuing the following
command on both nodes:
# vgchange -a s -p /dev/vg_ops
Skip this step if you use a package control script to activate and
deactivate the shared volume group as a part of RAC startup and
shutdown.
One node will identify itself as the master. Create disk groups from this
node.
Similarly, you can delete VxVM or CVM disk groups provided they are
not being used by a cluster node at the time.
NOTE For CVM without CFS, if you are adding a disk group to the cluster
configuration, make sure you also modify any package or create the
package control script that imports and deports this disk group. If you
are adding a CVM disk group, be sure to add the STORAGE_GROUP entry
for the disk group to the package ASCII file.
For CVM with CFS, if you are adding a disk group to the cluster
configuration, make sure you also create the corresponding multi-node
package. If you are adding a CVM disk group, be sure to add the
necessary package dependency to the packages that depend on the CVM
disk group.
If you are removing a disk group from the cluster configuration, make
sure that you also modify or delete any package control script that
imports and deports this disk group. If you are removing a CVM disk
group, be sure to remove the STORAGE_GROUP entries for the disk group
from the package ASCII file.
When removing a disk group that is activated and deactivated through a
multi-node package, make sure to modify or remove any configured
package dependencies on the multi-node package.
Removing Serviceguard Extension for RAC from a System
• The cluster service should not be running on the node from which
you will be deleting Serviceguard Extension for RAC.
• The node from which you are deleting Serviceguard Extension for
RAC should not be in the cluster configuration.
• If you are removing Serviceguard Extension for RAC from more than
one node, swremove should be issued on one node at a time.
NOTE After removing Serviceguard Extension for RAC, your cluster will still
have Serviceguard installed. For information about removing
Serviceguard, refer to the Managing Serviceguard user’s guide for your
version of the product.
Monitoring Hardware
Good standard practice in handling a high availability system includes
careful fault monitoring so as to prevent failures if possible or at least to
react to them swiftly when they occur. The following should be monitored
for errors or warnings of all kinds:
• Disks
• CPUs
• Memory
• LAN cards
• Power sources
• All cables
• Disk interface cards
Some monitoring can be done through simple physical inspection, but for
the most comprehensive monitoring, you should examine the system log
file (/var/adm/syslog/syslog.log) periodically for reports on all
configured HA devices. The presence of errors relating to a device will
show the need for maintenance.
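For example, a quick scan of the log for error and warning reports might be done as follows (the search pattern is illustrative):
# grep -i -e error -e warning /var/adm/syslog/syslog.log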
Adding Disk Hardware
1. Halt packages.
2. Ensure that the Oracle database is not active on either node.
3. Deactivate and mark as unshareable any shared volume groups.
4. Halt the cluster.
5. Deactivate automatic cluster startup.
6. Shut down and power off the system before installing new hardware.
7. Install the new disk hardware with connections on all nodes.
8. Reboot all nodes.
9. On the configuration node, add the new physical volumes to existing
volume groups, or create new volume groups as needed.
10. Start up the cluster.
11. Make the volume groups shareable, then import each shareable
volume group onto the other nodes in the cluster.
12. Activate the volume groups in shared mode on all nodes.
13. Start up the Oracle RAC instances on all nodes.
14. Activate automatic cluster startup.
NOTE As you add new disks to the system, update the planning worksheets
(described in Appendix B, “Blank Planning Worksheets”) so as to record
the exact configuration you are using.
Replacing Disks
The procedure for replacing a faulty disk mechanism depends on the
type of disk configuration you are using and on the type of Volume
Manager software. For a description of replacement procedures using
VERITAS VxVM or CVM, refer to the chapter on “Administering
Hot-Relocation” in the VERITAS Volume Manager Administrator’s
Guide. Additional information is found in the VERITAS Volume
Manager Troubleshooting Guide.
The following paragraphs describe how to replace disks that are
configured with LVM. Separate descriptions are provided for replacing a
disk in an array and replacing a disk in a high availability enclosure.
1. Identify the physical volume name of the failed disk and the name of
the volume group in which it was configured. In the following
examples, the volume group name is shown as /dev/vg_sg01 and
the physical volume name is shown as /dev/dsk/c2t3d0. Substitute the
volume group and physical volume names that are correct for your
system.
2. Identify the names of any logical volumes that have extents defined
on the failed physical volume.
3. On the node on which the volume group is currently activated, use
the following command for each logical volume that has extents on the
failed physical volume:
# lvreduce -m 0 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0
4. At this point, remove the failed disk and insert a new one. The new
disk will have the same HP-UX device name as the old one.
5. On the node from which you issued the lvreduce command, issue
the following command to restore the volume group configuration
data to the newly inserted disk:
# vgcfgrestore /dev/vg_sg01 /dev/dsk/c2t3d0
6. Issue the following command to extend the logical volume to the
newly inserted disk:
# lvextend -m 1 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0
7. Finally, use the lvsync command for each logical volume that has
extents on the failed physical volume. This synchronizes the extents
of the new disk with the extents of the other mirror.
# lvsync /dev/vg_sg01/lvolname
2. Halt all the applications using the SLVM VG on all the nodes but
one.
3. Re-activate the volume group in exclusive mode on the remaining
node:
# vgchange -a e -x <slvm vg>
4. Reconfigure the volume (vgextend, lvextend, disk addition, and so
on).
5. Activate the volume group back in shared mode:
# vgchange -a s -x <slvm vg>
This will synchronize the stale logical volume mirrors. This step can
be time-consuming, depending on hardware characteristics and the
amount of data.
6. Deactivate the volume group:
# vgchange -a n vg_ops
7. Activate the volume group on all the nodes in shared mode using
vgchange -a s:
# vgchange -a s vg_ops
NOTE You cannot use inline terminators with internal FW/SCSI buses on D
and K series systems, and you cannot use the inline terminator with
single-ended SCSI buses. You must not use an inline terminator to
connect a node to a Y cable.
Figure 4-1 shows a three-node cluster with two F/W SCSI buses. The
solid line and the dotted line represent different buses, both of which
have inline terminators attached to nodes 1 and 3. Y cables are also
shown attached to node 2.
Replacement of LAN Cards
Off-Line Replacement
The following steps show how to replace a LAN card off-line; the
procedure applies to both HP-UX 11.0 and 11i.
On-Line Replacement
If your system hardware supports hotswap I/O cards, and if the system is
running HP-UX 11i (B.11.11 or later), you have the option of replacing
the defective LAN card on-line. This will significantly improve the
overall availability of the system. To do this, follow the steps provided in
the section “How to On-line Replace (OLR) a PCI Card Using SAM” in
the document Configuring HP-UX for Peripherals. The OLR procedure
also requires that the new card must be exactly the same card type as
the card you removed to avoid improper operation of the network driver.
Serviceguard will automatically recover the LAN card once it has been
replaced and reconnected to the network.
A Software Upgrades
Serviceguard Extension for RAC can be upgraded using either of the
following methods:
• rolling upgrade
• non-rolling upgrade
Instead of an upgrade, moving to a new version can also be done with a
cold install.
Rolling Software Upgrades
NOTE It is optional to set this parameter to “1”. If you want the node to join
the cluster at boot time, set this parameter to “1”, otherwise set it to
“0”.
6. Restart the cluster on the upgraded node (if desired). You can do this
in Serviceguard Manager, or from the command line, issue the
Serviceguard cmrunnode command.
7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on the local
node.
8. Repeat steps 1-7 on the other nodes, one node at a time until all
nodes have been upgraded.
NOTE While you are performing a rolling upgrade, warning messages may
appear while the node is determining what version of software is
running. This is a normal occurrence and not a cause for concern.
Step 1.
Halt Serviceguard on node 1 (for example, with cmhaltnode -f node1).
This will cause the failover package to be halted cleanly and moved to
node 2. The Serviceguard daemon on node 1 is halted, and the result is
shown in Figure A-2.
Step 2.
Upgrade node 1 and install the new version of Serviceguard and
SGeRAC (A.11.16), as shown in Figure A-3.
Step 3.
1. Restart the cluster on the upgraded node (node 1) (if desired). You
can do this in Serviceguard Manager, or from the command line issue
the following:
# cmrunnode node1
2. At this point, different versions of the Serviceguard daemon (cmcld)
are running on the two nodes, as shown in Figure A-4.
3. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 1.
Step 4.
Repeat the upgrade process on node 2: halt the node, install the new
versions of Serviceguard and SGeRAC, then restart the cluster on the
node and start the Oracle software.
Step 5.
Move PKG2 back to its original node. Use the following commands:
# cmhaltpkg pkg2
# cmrunpkg -n node2 pkg2
# cmmodpkg -e pkg2
The cmmodpkg command re-enables switching of the package, which is
disabled by the cmhaltpkg command. The final running cluster is shown
in Figure A-6.
Non-Rolling Software Upgrades
CAUTION The cold install process erases the pre-existing software, operating
system, and data. If you want to retain any existing software, make sure
to back up that software before migrating.
B Blank Planning Worksheets
LVM Volume Group and Physical Volume Worksheet
==========================================================================
PV Link 1    PV Link 2
VxVM Disk Group and Disk Worksheet
Oracle Logical Volume Worksheet