
DRAFT

COOKBOOK

Installation Guide

Oracle 11g RAC Release 1
with Oracle Automatic
Storage Management (ASM)
on IBM System p
and i Running AIX 5L
with SAN Storage

Version 1.0
April 2008

ORACLE / IBM
Joint Solutions Center

IBM – Thierry Plumeau


Oracle/IBM Joint Solutions Center

Oracle - Frederic Michiara


Oracle/IBM Joint Solutions Center



This document is based on our experiences.
It is not an official Oracle or IBM document.
It will be updated regularly,
and we welcome any additions
or feedback from your own experiences,
on the same or a different storage solution !!!

Document history :
Version   Date           Update     Who                      Validated by
1.0       January 2008   Creation   Frederic Michiara        Alain Roy
                                    Thierry Plumeau          Paul Bramy
          April 2008     Review     Didier Wojciechowski     Paul Bramy

Contributors :
o Paul Bramy : Oracle Corp. (ORACLE/IBM Joint Solutions Center)
o Alain Roy : IBM France (ORACLE/IBM Joint Solutions Center)
o Fabienne Lepetit : Oracle Corp. (ORACLE/IBM Joint Solutions Center)

Special thanks to the participants of the 11gRAC workshop in Switzerland (Geneva) who used our last
draft cookbook release, helped us to find some typos, and validated the content of the cookbook.

Contact :
o ORACLE / IBM Joint Solutions Center oraclibm@fr.ibm.com



1 The aim of this document............................................... .........................6
2 About Oracle clusterware................................................. .......................8
3 About Oracle Automatic storage management........................................15
4 About Real Application Cluster............................................................. ..17
5 About 11G RAC / ASM on AIX.......................................................... ........19
6 What’s new with 11g RAC implementation on AIX...................................20
7 Infrastructure requirements....................................................................20
7.1 General Requirements..................................................................................................... ...............22
7.1.1 About Servers and processors................................................................................. ................22
7.1.2 About RAC on IBM System p............................................................................ ........................23
7.1.3 About Network........................................................................................................... ..............24
7.1.4 About SAN Storage................................................................................................. .................26
7.1.5 Proposed infrastructure with 2 servers................................................................... .................28
7.1.6 What do we protect ?....................................................................................................... ........29
7.1.7 About IBM Advanced Power Virtualization and RAC.................................................... .............30
7.2 Cookbook infrastructure...................................................................................... ...........................35
7.2.1 IBM Sytem p servers.................................................................................................... ............36
7.2.2 Operating System............................................................................................................ ........44
7.2.3 Multi-pathing and ASM............................................................................ ................................45
7.2.4 IBM storage and multi-pathing........................................................................ ........................46
7.2.5 EMC storage and multi-pathing........................................................................... ....................53
7.2.6 HITACHI storage and multi-pathing........................................................... ..............................55
7.2.7 Others, StorageTek, HP EVA storage and multi-pathing....................................................... ....57
8 Specific considerations for RAC/ASM setup with HACMP installed ...........59
9 Installation steps........................................................ ..........................62
10 Preparing the system............................................... ...........................64
10.1 Network configuration........................................................................................ ..........................65
10.1.1 Define Networks layout, Public, Virtual and Private Hostnames............................................65
10.1.2 Identify Network Interfaces cards...................................................................... ....................68
10.1.3 Update hosts file................................................................................................................... .71
10.1.4 Defining Default gateway on public network interface...................................................... ....72
10.1.5 Configure Network Tuning Parameters................................................... ...............................74
10.2 AIX Operating system level, required APAR’s and filesets...................................................76
10.2.1 Filesets Requirements for 11g RAC R1 / ASM (NO HACMP)....................................................77
10.2.2 APAR’s Requirements for 11g RAC R1 / ASM (NO HACMP).....................................................78
10.3 System Requirements (Swap, temp, memory, internal disks)......................................................79
10.4 Users And Groups....................................................................................................... ..................81
10.5 Kernel and Shell Limits......................................................................................... ........................85
10.5.1 Configure Shell Limits.................................................................................................... ........85
10.5.2 Set crs, asm and rdbms users capabilities................................................ ............................87
10.5.3 Set NCARGS parameter................................................................................... ......................87
10.5.4 Configure System Configuration Parameters.................................................................. .......88
10.5.5 Lru_file_repage setting.................................................................................. ........................89
10.5.6 Asynchronous I/O setting................................................................................................ .......91
10.6 User equivalences.............................................................................................. ..........................92
10.6.1 RSH implementation.......................................................................................................... ....93
10.6.2 SSH implementation...................................................................................................... ........94
10.7 Time Server Synchronization............................................................................................ ............98
10.8 crs environment setup......................................................................................................... .........99
10.9 Oracle Software Requirements......................................................................... ..........................100
11 Preparing Storage...................................................... .......................101
11.1 Required local disks (Oracle Clusterware, ASM and RAC software).................................. ...........103
11.2 Oracle Clusterware Disks (OCR and Voting Disks)........................................... ...........................105
11.2.1 Required LUN’s......................................................................................... ...........................106
11.2.2 How to Identify if a LUN is used or not ?........................................................... ...................107
11.2.3 Register LUN’s at AIX level.............................................................................................. .....110
11.2.4 Identify LUN’s and corresponding hdisk on each node...................................... ..................111
11.2.5 Removing reserve lock policy on hdisks from each node ................................ ...................114
11.2.6 Identify Major and Minor number of hdisk on each node.................................................... .117
11.2.7 Create Unique Virtual Device to access same LUN from each node....................................119
11.2.8 Set Ownership / Permissions on Virtual Devices........................................................ ..........120
11.2.9 Formatting the virtual devices (zeroing)...............................................................122
11.3 ASM disks.................................................................................................... ...............................123
11.3.1 Required LUN’s......................................................................................... ...........................125
11.3.2 How to Identify if a LUN is used or not ?........................................................... ...................125
11.3.3 Register LUN’s at AIX level.............................................................................................. .....127
11.3.4 Identify LUN’s and corresponding hdisk on each node...................................... ..................129
11.3.5 Removing reserve lock policy on hdisks from each node................................. ...................132
11.3.6 Identify Major and Minor number of hdisk on each node.................................................... .134
11.3.7 Create Unique Virtual Device to access same LUN from each node....................................136
11.3.8 Set Ownership / Permissions on Virtual Devices........................................................ ..........138
11.3.9 Formatting the virtual devices (zeroing)...............................................................141
11.3.10 Removing assigned PVID on hdisk................................................................................. ....142
11.4 Checking Shared Devices..................................................................................... ......................143
11.5 Recommendations, hints and tips.........................................................................145
11.5.1 OCR / Voting disks.................................................................................. .............................145
11.5.2 ASM disks................................................................................................ ............................148
12 Oracle Clusterware (CRS) Installation................................................151
12.1 Cluster Verification Utility.....................................................................................152
12.1.1 Understanding and Using Cluster Verification Utility................................................... ........152
12.1.2 Using CVU to Determine if Installation Prerequisites are Complete.....................................152
12.2 Installation....................................................................................................................... ...........163
12.3 Post Installation operations............................................................................. ...............................4
12.3.1 Update the Clusterware unix user .profile........................................................ .......................4
12.3.2 Verify parameter CSS misscount.................................................................. ...........................5
12.3.3 Cluster Ready Services Health Check....................................................................... ...............6
12.3.4 Adding enhanced crsstat script ....................................................................... .......................9
12.3.5 Interconnect Network configuration Checkup................................................................ ........11
12.3.6 Oracle Cluster Registry content Check and Backup..............................................12
12.4 Some useful commands.........................................................................................19
12.5 Accessing CRS logs............................................................................................................. ..........21
12.6 Clusterware Basic Testing........................................................................................... ..................22
12.7 What Has Been Done ?............................................................................................. ....................23
12.8 VIP and CRS Troubleshooting.................................................................................23
12.9 How to clean a failed CRS installation.................................................................. ........................24
13 Install Automated Storage Management software.................................25
13.1 Installation........................................................................................................................ ............26
13.2 Update the ASM unix user .profile........................................................................ ........................36
13.3 Checking Oracle Clusterware................................................................................................ ........37
13.4 Configure default node listeners.................................................................. ................................38
13.5 Create ASM Instances................................................................................................. ..................48
13.5.1 Thru DBCA...................................................................................................... .......................49
13.5.2 Manual Steps........................................................................................................... ..............69
13.6 Configure ASM local and remote listeners.................................................................... ................74
13.7 Check Oracle Clusterware, and manage ASM resources...........................................80
13.8 About ASM instances Tuning..................................................................................................... ....83
13.9 About ASM CMD (Command Utility).............................................................................. ................85
13.10 Useful Metalink notes ...........................................................................................91
13.11 What has been done ?................................................................................................ ................91
14 Installing Oracle 11g R1 software.................................................... .....92
14.1 11g R1 RDBMS Installation..................................................................................... ......................94
14.2 Symbolic links creation for listener.ora, tnsnames.ora and sqlnet.ora.......................................102
14.3 Update the RDBMS unix user .profile...................................................................................... ....103
15 Database Creation On ASM ...................... .........................................104
15.1 Thru Oracle DBCA...................................................................................................... .................105
15.2 Manual Database Creation.......................................................................... ...............................117
15.3 Post Operations............................................................................................. .............................125
15.3.1 Update the RDBMS unix user .profile.................................................................................. .125
15.3.2 Administer and Check........................................................................................................ ..125
15.4 Setting Database Local and remote listeners.......................................................... ...................131
15.5 Creating Oracle Services................................................................................ ............................133
15.5.1 Creation thru srvctl command............................................................................ .................133
15.6 Transaction Application Failover.......................................................................... .......................159



15.7 About DBCONSOLE........................................................................................... ..........................167
15.7.1 Checking DB CONSOLE........................................................................................... .............167
15.7.2 Moving from dbconsole to Grid Control.......................................................... .....................171
16 ASM advanced tools.............................................................................172
16.1 ftp and http access.................................................................................................. ...................172
17 Some useful commands.......................................................................173
17.1 Oracle Cluster Registry content Check and Backup..................................................174
18 Appendix A : Oracle / IBM technical documents...................................175
19 Appendix B : Oracle technical notes.................................................. ..176
19.1 CRS and 10g Real Application Clusters................................................................................... ....176
19.2 About RAC ....................................................................................................................... ...........186
19.3 About CRS .......................................................................................................................... ........186
19.4 About VIP ............................................................................................................. ......................187
19.5 About manual database creation ............................................................................187
19.6 About Grid Control .................................................................................................. ...................187
19.7 About TAF .................................................................................................... ..............................187
19.8 About Adding/Removing Node ..................................................................................... ..............187
19.9 About ASM .......................................................................................................... .......................187
19.10 Metalink note to use in case of problem with CRS .................................................... ..............188
20 Appendix C : Useful commands...........................................................189
21 Appendix D : Empty tables to use for installation..............................199
21.1 Network document to ease your installation......................................................... .....................199
21.2 Steps Checkout.................................................................................................................. .........201
21.3 Disks document to ease disks preparation in your implementation...........................................202
22 Documents, BOOKS to look at .......................................................... ..203



1 THE AIM OF THIS DOCUMENT

This document is written to provide help installing
Oracle 11g Real Application Clusters (11.1) Release 1 with
Oracle Automatic Storage Management on IBM System p
and i servers running AIX.

Within the Oracle/IBM Joint Solutions Center, we receive a lot of requests about RAC and ASM, which is
the reason why we decided to deliver this first 11gRAC / AIX cookbook with Oracle ASM (Automatic
Storage Management). AIX 6 is not covered, as 11gRAC is not yet supported on AIX 6, but certification
should be available soon. Check Metalink for updates; the cookbook will be updated for AIX 6 when the
official certification is released.

We will describe, step by step, the architecture with Oracle CRS (Cluster Ready Services) on raw disks and the database on ASM
(Automatic Storage Management).

For the architecture using Oracle 11gRAC with IBM GPFS as the cluster file system, a cookbook named “Installation
Guide Oracle 11g RAC Release 1 and IBM GPFS on IBM eServer System p and i running AIX with SAN Storage“
will be released soon.

Metalink (http://metalink.oracle.com/metalink/plsql/ml2_gui.startup)
Title (reference) and source :

• Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide, 11g Release 1 (11.1) for AIX (B28252)
  http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides

• 2 Day + Real Application Clusters Guide, 11g Release 1 (11.1) (B28252)
  http://www.oracle.com/pls/db111/homepage?remark=tahiti

• Oracle Database Release Notes, 11g Release 1 (11.1) for AIX 5L Based Systems (64-bit) (B32075)
  http://www.oracle.com/pls/db111/portal.portal_db?selected=11&frame=#aix_installation_guides

• Oracle Universal Installer Concepts Guide Release 2.2 (A96697-01)
  http://download-east.oracle.com/docs/cd/B10501_01/em.920/a96697/preface.htm

• Oracle® Database on AIX®, HP-UX®, Linux®, Mac OS® X, Solaris®, Tru64 Unix® Operating Systems Installation
  and Configuration Requirements Quick Reference (8.0.5 to 11.1) (Metalink note 169706.1)
  Oracle Metalink : http://metalink.oracle.com/

• ASM FAQ (Frequently Asked Questions) (Metalink note 370915.1)
  Oracle Metalink : http://metalink.oracle.com/

• Oracle ASM and Multi-Pathing Technologies (Metalink note 294869.1)
  Oracle Metalink : http://metalink.oracle.com/



The information contained in this paper resulted from :
• Oracle and IBM documentation
• Workshop experiences in the Oracle/IBM Joint Solutions Center
• Benchmarks and POC implementations for customers performed by PSSC Montpellier
This documentation is a joint effort of Oracle and IBM specialists.

Please also refer to Oracle & IBM online documentation for more information :
• http://docs.oracle.com
o Oracle Database 11g Release 1 (11.1) Documentation
http://www.oracle.com/technology/documentation/database.html

• http://tahiti.oracle.com

• Oracle RAC home page : http://www.oracle.com/database/rac_home.html

• For more information on IBM System p : http://www-03.ibm.com/systems/p/

Your comments are important to us, and we thank those who sent us feedback about the previous
release, and about how this document helped them in their implementation. We want our technical papers to
be as helpful as possible.

Please send us your comments about this document to the Oracle/IBM Joint Solutions Center.

Use our email address : oraclibm@fr.ibm.com

Or our phone number : +33 (0)4 67 34 67 49



2 ABOUT ORACLE CLUSTERWARE
Extract from : Oracle® Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti

Oracle Real Application Clusters (Oracle RAC) uses Oracle Clusterware as the infrastructure that binds together
multiple nodes that then operate as a single server. Oracle Clusterware is a portable cluster management solution that
is integrated with Oracle Database. In an Oracle RAC environment, Oracle Clusterware monitors all Oracle
components (such as instances and listeners). If a failure occurs, Oracle Clusterware automatically attempts to restart
the failed component and also redirects operations to a surviving component.
Oracle Clusterware includes a high availability framework for managing any application that runs on your cluster.
Oracle Clusterware manages applications to ensure they start when the system starts. Oracle Clusterware also
monitors the applications to make sure that they are always available. For example, if an application process fails, then
Oracle Clusterware attempts to restart the process based on scripts that you customize. If a node in the cluster fails,
then you can program application processes that typically run on the failed node to restart on another node in the
cluster.
Oracle Clusterware includes two important components: the voting disk and the OCR. The voting disk is a file that
manages information about node membership, and the OCR is a file that manages cluster and Oracle RAC database
configuration information.
The Oracle Clusterware installation process creates the voting disk and the OCR on shared storage. If you select the
option for normal redundant copies during the installation process, then Oracle Clusterware automatically maintains
redundant copies of these files to prevent the files from becoming single points of failure. The normal redundancy
feature also eliminates the need for third-party storage redundancy solutions. When you use normal redundancy,
Oracle Clusterware automatically maintains two copies of the OCR file and three copies of the voting disk file.
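
As a quick point of reference, once Oracle Clusterware is installed, the location and integrity of these two components can be checked from any node with the standard clusterware utilities. A minimal sketch, assuming $CRS_HOME/bin is in the PATH (run ocrcheck as root):

# Report the OCR device(s), used/free space and integrity status
ocrcheck

# List the configured voting disk(s)
crsctl query css votedisk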

You will learn more about the operation of the Oracle clusterware, how to build the cluster, and the structure of an
Oracle RAC database in other sections of this guide.
See Also:
• Oracle Clusterware Administration and Deployment Guide

And :
DBA Essentials At http://www.oracle.com/pls/db111/homepage?remark=tahiti
Manage all aspects of your Oracle databases with the Enterprise Manager GUI.
2 Day DBA HTML PDF
2 Day + Performance Tuning Guide HTML PDF
2 Day + Real Application Clusters Guide HTML PDF



Oracle Clusterware, Group Membership and Heartbeats …

• The cluster needs to know who is a member at all times.

• Oracle Clusterware has 2 heartbeats :

   • Network heartbeat è if a node does not send a heartbeat within MissCount (time in seconds), then the node is evicted from the cluster.

   • Disk heartbeat è if the disk heartbeat is not updated within the I/O timeout, then the node is evicted from the cluster.
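
A minimal sketch to display the current heartbeat thresholds on an installed cluster (assuming $CRS_HOME/bin is in the PATH; the disktimeout parameter exists in recent 10gR2 patchsets and in 11g):

# Network heartbeat threshold, in seconds
crsctl get css misscount

# Disk heartbeat (voting disk I/O) timeout, in seconds
crsctl get css disktimeout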

Oracle Clusterware, OCR and Voting disks …

• The Oracle Cluster Registry (OCR) contains all the information on the cluster.
   • Since 10gRAC R2, the OCR can be mirrored.

• The Voting disk is the cluster heartbeat.
   • Since 10gRAC R2, voting can have multiple copies, in an odd number, for majority.



Oracle Clusterware and its virtual IP’s …

-1- Each node participating in the Oracle cluster has :
   • 1 static IP address (10.3.25.81)
   • 1 virtual IP address (10.3.25.181); each virtual address has its home node

-2- If one node fails :
   • The static IP (10.3.25.81) of the failed node will no longer be reachable.
   • The virtual IP (10.3.25.181) of the failed node will switch to one of the remaining nodes, and will remain reachable.

-3- When the failed node returns to normal operation :
   • The static IP (10.3.25.81) will be back and reachable.
   • The virtual IP (10.3.25.181) of the failed node, hosted on one of the remaining nodes, will switch back to its home node.
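
A minimal sketch to see where the node VIP of a given node is currently running; node1 is the example hostname used throughout this cookbook, and $CRS_HOME/bin is assumed to be in the PATH:

# Show the state of the node applications (VIP, ONS, GSD, listener) of node1
srvctl status nodeapps -n node1

# Or list all clusterware resources and the node currently hosting each of them
crs_stat -t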



Protecting a Single Instance Database and a third-party application tier thru Oracle Clusterware …

-1- Using Oracle Clusterware, we can have cluster databases, but also non-cluster databases. In our example, we have one single
instance database on one node.
Each node has its own node VIP, and we can create an application VIP (APPS VIP) on which different resources are
dependent :
• ASM single instance,
• HR single database instance,
• Listener,
• HR application tier,
• /apps and /oracle local file systems.

-2- If the first node, where the APPS VIP is hosted, fails, THEN :
• The node VIP (10.3.25.81) will switch to another node.
• The APPS VIP (10.3.25.200) will switch to a preferred node (if configured; in our case the third node). If the third node is not available, the APPS VIP will switch to one of the available nodes.



-3- The APPS VIP has switched to the preferred third node. THEN, all resources
dependent on the APPS VIP will be restarted on the third node.

-4- When node 1 comes back to normal operation, the node VIP (10.3.25.81) will switch back
automatically to its home node, meaning the first node. BUT the APPS VIP will not switch
back !

-5- To switch the APPS VIP and its dependent resources back to the first node, the administrator will have to relocate the APPS VIP to the first node thru clusterware commands, as sketched below.
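
A minimal sketch of such a relocation, assuming the application VIP was registered as a clusterware resource named apps.vip (this resource name is hypothetical; use the name chosen when the resource was created with crs_profile/crs_register):

# Check where the application VIP currently runs
crs_stat -t apps.vip

# Relocate it (and, with -f, its dependent resources) back to the first node
crs_relocate apps.vip -c node1 -f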
Sending commands on cluster events thru Oracle Clusterware …



-1- With Oracle Clusterware, all cluster events (node, database, instance, etc …) and their state
information (up, down, etc …) can be obtained and used to associate an action.

In our example case, we have :
• 3 nodes, with 3 LPAR’s each (RAC, APPS, TEST/DEV)
• An IBM HMC to administer the IBM POWER System p and its LPAR’s (cores, memory)

In normal operation, all LPAR’s are using their defined cores and memory, but
could also use micro-partitioning between LPAR’s.

-2- In our example case, if the first node fails, THEN :
• The node VIP (10.3.25.81) of the first node will switch to one available node.
• Oracle Clusterware will raise a FAN (Fast Application Notification) event : node1 down.
• A unix shell script (callout script, sketched after this example) will analyze the information and send a request to the IBM HMC :
   o Request to remove resources from the TEST/DEV LPAR’s of the second and third nodes.
   o Request to assign new resources to the APPS and RAC LPAR’s.

-3- THEN, the HMC will execute the requests, and will resize the LPAR’s as requested on each available node.
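
As an illustration of the callout mechanism described above, here is a minimal sketch of a server-side callout script. Oracle Clusterware runs every executable placed in $CRS_HOME/racg/usrco/ when a FAN event is posted, passing the event type followed by name=value pairs. The HMC hostname (hmc1), the managed system and LPAR names, and the amount of processing units moved are assumptions for this example and must be adapted to your own configuration.

#!/bin/ksh
# Minimal FAN callout sketch: copy it, executable, to $CRS_HOME/racg/usrco/ on each node.

EVENT_TYPE=$1            # e.g. NODE, SERVICE, INSTANCE
shift
LOG=/tmp/fan_callout.log
echo "$(date) $EVENT_TYPE $*" >> $LOG

# Keep only the fields we are interested in
for ARG in "$@" ; do
  case $ARG in
    host=*)   NODE=${ARG#host=}     ;;
    status=*) STATUS=${ARG#status=} ;;
  esac
done

# On a node-down event, ask the HMC (hypothetical hostname hmc1) to move
# processing units from the TEST/DEV partition to the surviving RAC partition
if [ "$EVENT_TYPE" = "NODE" ] && [ "$STATUS" = "nodedown" ] ; then
  ssh hscroot@hmc1 chhwres -m Server-9117-MMA -r proc -o m \
      -p JSC-TESTDEV -t JSC-11g-node2 --procunits 0.5 >> $LOG 2>&1
fi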



For more information :

On http://otn.oracle.com
http://www.oracle.com/technology/products/database/clusterware/index.html

Oracle Clusterware

Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a
single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC).
In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a
cluster. In any case Oracle Clusterware is the intelligence in those systems that ensures the required cooperation
between the cluster nodes.

Oracle Clusterware
• Oracle Clusterware Whitepaper (PDF)

Oracle Clusterware Technical Articles


• Using standard NFS to support a third voting disk on an Extended Distance
cluster configuration on Linux, AIX, HP-UX, or Solaris (PDF) February 2008
• Oracle Homes in an Oracle Real Application Clusters Environment (PDF) February 2008

Using Oracle Clusterware to protect any kind of application


• Using Oracle Clusterware to Protect 3rd Party Applications (PDF) February 2008
• Using Oracle Clusterware to Protect Oracle Application Server (PDF) November 2005
• Using Oracle Clusterware to Protect an Oracle Database 10g with
Oracle Enterprise Manager Grid Control Integration (PDF) February 2008
• Using Oracle Clusterware to Protect A Single Instance Oracle Database 11g (PDF) February 2008

Oracle applications protected by Oracle Clusterware


• Siebel CRM Applications protected by Oracle Clusterware January 2008

Pre-configured agents for Oracle Clusterware


• Providing High Availability for SAP Resources (PDF) March 2007



3 ABOUT ORACLE AUTOMATIC STORAGE MANAGEMENT
Extract from : Oracle® Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti

With Oracle RAC, each instance must have access to the datafiles and recovery files for the Oracle RAC database.
Using Automatic Storage Management (ASM) is an easy way to satisfy this requirement.
ASM is an integrated, high-performance database file system and disk manager. ASM is based on the principle that the
database should manage storage instead of requiring an administrator to do it. ASM eliminates the need for you to
directly manage potentially thousands of Oracle database files.
ASM groups the disks in your storage system into one or more disk groups. You manage a small set of disk groups and
ASM automates the placement of the database files within those disk groups.
ASM provides the following benefits:
• Striping—ASM spreads data evenly across all disks in a disk group to optimize performance and utilization.
This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
• Mirroring—ASM can increase data availability by optionally mirroring any file. ASM mirrors at the file level,
unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies,
or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy
of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to
access affected files by accessing mirrored copies on the surviving disks in the disk group.
• Online storage reconfiguration and dynamic rebalancing—ASM permits you to add or remove disks from
your disk storage system while the database is operating. When you add a disk to a disk group, ASM
automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the
new disk. The process of redistributing data so that it is also spread across the newly added disks is known as
rebalancing. It is done in the background and with minimal impact to database performance.
• Managed file creation and deletion—ASM further reduces administration tasks by enabling files stored in
ASM disk groups to be managed by Oracle Database. ASM automatically assigns file names when files are
created, and automatically deletes files when they are no longer needed by the database.
ASM is implemented as a special kind of Oracle instance, with its own System Global Area and background processes.
The ASM instance is tightly integrated with the database instance. Every server running one or more database
instances that use ASM for storage has an ASM instance. In an Oracle RAC environment, there is one ASM instance
for each node, and the ASM instances communicate with each other on a peer-to-peer basis. Only one ASM instance is
required for each node regardless of the number of database instances on the node.
Oracle recommends that you use ASM for your database file storage, instead of raw devices or the operating system
file system. However, databases can have a mixture of ASM files and non-ASM files.
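
As a concrete illustration on a running system, the ASM instance of each node can be queried from the shell with the asmcmd utility shipped with the ASM software (a minimal sketch; the ASM_HOME path and the +ASM1 instance name follow the usual conventions and must be adapted to your environment):

# Point the environment at the local ASM instance (conventionally +ASM<n> on node n)
export ORACLE_HOME=/oracle/asm        # hypothetical ASM_HOME used for this example
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH

# List the disk groups with their redundancy, total and free space
asmcmd lsdg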
See Also:
• Oracle Database 2 Day DBA
• Oracle Database Storage Administrator's Guide
And :
DBA Essentials At http://www.oracle.com/pls/db111/homepage?remark=tahiti
Manage all aspects of your Oracle databases with the Enterprise Manager GUI.
2 Day DBA HTML PDF
2 Day + Performance Tuning Guide HTML PDF
2 Day + Real Application Clusters Guide HTML PDF



For more information :

On http://otn.oracle.com

http://www.oracle.com/technology/products/database/asm/index.html

Automatic Storage Management

Automatic Storage Management (ASM) is a feature in Oracle Database 10g/11g that provides the database
administrator with a simple storage management interface that is consistent across all server and storage platforms. As
a vertically integrated file system and volume manager, purpose-built for Oracle database files, ASM provides the
performance of async I/O with the easy management of a file system. ASM provides capability that saves the DBAs
time and provides flexibility to manage a dynamic database environment with increased efficiency.

What's New for ASM in Oracle Database 11g


• 11g ASM New Features Technical White Paper
Oracle Database 11g ASM new features technical white paper

ASM Overview and Technical Papers


• ASM and Multipathing Best Practices and Information Matrix (PDF)
Multipathing software and ASM information matrix and best practices guide
• Oracle Database 10g Release 2 ASM - New Features White Paper (PDF)
This paper discusses the enhancements to ASM which are new in Release 2 of Oracle Database 10g.
• ASM Overview and Technical Best Practices White Paper (PDF)
This paper discusses the basics of ASM such as the steps to add disks, create a diskgroup, and create
a database within ASM, emphasizing best practices.
• Take the Guesswork Out of Database IO Tuning White Paper (PDF)
Oracle Database layout and storage configurations do not have to be complicated any more. Learn
more about best practices with ASM that reduce complexity and simplify storage management.
• Database Storage Consolidation with ASM White Paper (PDF)
Database storage consolidation empowered by ASM for RAC and single instance database
environments
• Automatic Storage Management Overview (PDF)
This short paper is a summary of current storage challenges and how ASM addresses them in Oracle
Database 10g.
• Automatic Storage Management White Paper (PDF)
This technical overview describes the ASM architecture and its benefits.
• Migration to ASM with Minimal Down Time, Technical White Paper (PDF)
Migrate your database to ASM using Oracle Data Guard and RMAN



4 ABOUT REAL APPLICATION CLUSTER
Extract from : Oracle® Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1)
Part Number B28252-02
http://www.oracle.com/pls/db111/homepage?remark=tahiti

Oracle RAC extends Oracle Database so that you can store, update, and efficiently retrieve data using multiple
database instances on different servers at the same time. Oracle RAC provides the software that facilitates servers
working together in what is called a cluster. The data files that make up the database must reside on shared storage
that is accessible from all servers that are part of the cluster. Each server in the cluster runs the Oracle RAC software.
An Oracle Database database has a one-to-one relationship between datafiles and the instance. An Oracle RAC
database, however, has a one-to-many relationship between datafiles and instances. In an Oracle RAC database,
multiple instances access a single set of database files. The instances can be on different servers, referred to as hosts
or nodes. The combined processing power of the multiple servers provides greater availability, throughput, and
scalability than is available from a single server.
Each database instance in an Oracle RAC database uses its own memory structures and background processes.
Oracle RAC uses Cache Fusion to synchronize the data stored in the buffer cache of each database instance. Cache
Fusion moves current data blocks (which reside in memory) between database instances, rather than having one
database instance write the data blocks to disk and requiring another database instance to reread the data blocks from
disk. When a data block located in the buffer cache of one instance is required by another instance, Cache Fusion
transfers the data block directly between the instances using the interconnect, enabling the Oracle RAC database to
access and modify data as if the data resided in a single buffer cache.
Oracle RAC is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple
database instances accessing a single set of datafiles prevents the server from being a single point of failure. Any
packaged or custom application that ran well on an Oracle Database will perform well on Oracle RAC without requiring
code changes.
You will learn more about the operation of the Oracle RAC database in a cluster, how to build the cluster, and the
structure of an Oracle RAC database in other sections of this guide.
See Also:
• Oracle Real Application Clusters Administration and Deployment Guide

And :
DBA Essentials At http://www.oracle.com/pls/db111/homepage?remark=tahiti
Manage all aspects of your Oracle databases with the Enterprise Manager GUI.
2 Day DBA HTML PDF
2 Day + Performance Tuning Guide HTML PDF
2 Day + Real Application Clusters Guide HTML PDF



For more information :

On http://otn.oracle.com

http://www.oracle.com/technology/products/database/clustering/index.html

Oracle Real Application Clusters

Oracle Real Application Clusters (RAC) is an option to the award-winning Oracle Database Enterprise Edition. Oracle
RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing
and shared-disk approaches to provide highly scalable and available database solutions for all your business
applications.
Oracle Database Standard Edition includes Real Application Clusters support for higher levels of system uptime.

Real Application Clusters 11g


• Oracle Real Application Clusters Datasheet (PDF) July 2007
• Oracle Real Application Clusters 11g Technical Overview(PDF) July 2007
• Workload Management with Oracle Real Application Clusters (FAN, FCF, Load Balancing)(PDF) July
2007
• NEW Oracle Homes in an Oracle Real Application Clusters Environment(PDF) January 2008
• UPDATED Using standard NFS to support a third voting disk on an Extended Distance cluster
configuration on Linux,AIX, HP, or Solaris (PDF) February 2008



5 ABOUT 11G RAC / ASM ON AIX
11g RAC with the OCR (Oracle Cluster Registry) disk(s) and Voting (heartbeat) disk(s) on raw disks, and the database
on Oracle ASM (Automatic Storage Management).

Oracle Automatic Storage Management (ASM) Solution :

ORACLE CLUSTERWARE MANDATORY !!!

è No need for HACMP

è No need for GPFS

è Oracle Clusterware files (OCR and Voting) are placed on raw disks.

è Only Oracle database files (datafiles, redo logs, archive logs, flash recovery area, …) are stored on the disks
managed by Oracle ASM. No binaries.

è ASM is provided with the Oracle software.

• IBM VIOS Virtual SCSI for ASM data storage and associated raw hdisk based Voting and
OCR (NEW April 2008).
• IBM VIOS Virtual LAN for all public network and private network interconnect for supported
data storage options (NEW April 2008).



6 WHAT’S NEW WITH 11G RAC IMPLEMENTATION ON AIX

With Oracle 11g RAC on IBM POWER System p, i running AIX :


• HACMP is no longer necessary as the clusterware software; this has been
the case since 10gRAC Release 1.
• Oracle provides with 11gRAC its own clusterware, named Oracle
Clusterware, or CRS (Oracle Cluster Ready Services).
• Oracle Clusterware can cohabit with HACMP under some
conditions.
• With 11gRAC, Oracle Clusterware is mandatory as the clusterware, even
if other vendors' clusterware software is installed on the same
systems. You MUST check that the third-party clusterware can cohabit
with Oracle Clusterware.
Subject: Using Oracle Clusterware with Vendor Clusterware FAQ   

Doc ID: Note:332257.1

• With Oracle Clusterware 11g Release 1 :


o Oracle VIPs (Virtual IPs) are now configured at the end of
the Clusterware installation (since 10gRAC Release 2).
• With Oracle Real Application Cluster 11g Release 1 :
o Oracle Real Application Clusters 11g Technical Overview(PDF) July 2007

• With ASM 11g Release 1 :


o The ASM software can be, and should be, installed in its own
ORACLE_HOME (ASM_HOME) directory, which will be
different from the database software ORACLE_HOME.
Oracle Homes in an Oracle Real Application Clusters Environment(PDF) January 2008

o New features such as “Fast Mirror Resync”, “Preferred
Mirror Read” and “Fast Rebalance”.
o New ASMCMD commands such as “lsdsk”, “md_backup”,
“md_restore”, “remap” and “cp”.
o Etc … Check 11g ASM New Features Technical White Paper
(Oracle Database 11g ASM new features technical white paper)

7 INFRASTRUCTURE REQUIREMENTS



Oracle Real Application Clusters is intended to provide :
• High availability of user services, to maintain continuity of the business
• Scale-in within each node, adding resources when possible, as your business grows
• Scale-out when scale-in is no longer possible, responding to your business growth
• Workload balancing and affinity
• Etc …

Oracle Real Application Clusters protects the availability of your databases, but all hardware components of the cluster, such as :
• SAN storage attachments from the nodes to the SAN, including the SAN switches and HBA’s
• The cluster network interconnect used for the Oracle Clusterware network heartbeat and the Oracle RAC Cache Fusion
mechanism (private network), including the network interfaces
• Storage
• Etc …
must be protected as well !!!



7.1 General Requirements

This chapter lists the general requirements to look at for an Oracle RAC implementation on IBM System p.

Topics covered are :

• Servers and processors


• Network
• Storage
• Storage attachments
• VIO Server

7.1.1 About Servers and processors



7.1.2 About RAC on IBM System p

Oracle Real Application Clusters can be implemented on LPAR’s/DLPAR’s from separate physical IBM System p
servers …

For production or testing, to achieve :

• High availability (protecting against the loss of a physical server)

• Scalability (adding cores and memory, or adding a new server)

• Database workload affinity across servers

OR implemented between LPAR’s/DLPAR’s within the same physical IBM System p server …

For production, if high availability is not the focus but rather separating the database workload;

or for testing or development purposes.



7.1.3 About Network

Protect the private network with an EtherChannel implementation; the following diagram shows the implementation with 2
servers :



For the PRIVATE network (cluster/RAC interconnect) !!!
A gigabit switch is mandatory for a production
implementation, even for a 2-node architecture.

(A cross-over cable can be used only for test purposes, and it is not supported by Oracle Support;
please read the RAC FAQ on http://metalink.oracle.com.)
A second gigabit Ethernet interconnect, with a different network mask, can be set up for security
purposes or to address performance issues.

• Network cards for the public network must have the same name on each participating node in the RAC cluster
(for example en0 on all nodes).
• Network cards for the interconnect (private) network must have the same name on each participating node in the
RAC cluster (for example en1 on all nodes).
• 1 virtual IP per node must be reserved, and not used on the network prior to the Oracle Clusterware
installation. Don’t set an IP alias at AIX level; Oracle Clusterware will take charge of it. (A quick verification sketch is shown after this list.)
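
A minimal sketch of the corresponding checks, to be run on every node; the VIP hostname node1-vip is an assumed naming convention, and 10.3.25.181 is the example VIP address used earlier in this cookbook:

# Interface names and addresses: the public and private interfaces must carry
# the same names (for example en0 and en1) on all nodes
netstat -in
ifconfig -a

# The VIP must resolve (in /etc/hosts or DNS) but must NOT answer yet
host node1-vip
ping -c 2 10.3.25.181      # expect 100% packet loss before Oracle Clusterware is installed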

With an AIX EtherChannel implemented across 2 network switches, we cover the loss of an interconnect network card (for
example en2 or en3) and of the corresponding interconnect network switch; a minimal creation sketch follows below.
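
A minimal sketch of creating such an EtherChannel in Network Interface Backup mode on AIX 5L (the same can be done interactively with smitty etherchannel). The adapter names ent2/ent3, the address pinged to detect a failed path and the private addresses are assumptions to adapt to your environment:

# Create the EtherChannel pseudo-adapter: ent2 primary, ent3 backup;
# netaddr is the address pinged to detect a failed path.
# A new entX adapter (and its enX interface) is created.
mkdev -c adapter -s pseudo -t ibm_ech \
      -a adapter_names=ent2 \
      -a backup_adapter=ent3 \
      -a netaddr=10.10.25.254

# Configure the private interconnect address on the new interface (en4 here)
chdev -l en4 -a netaddr=10.10.25.81 -a netmask=255.255.255.0 -a state=up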



7.1.4 About SAN Storage

When implementing RAC, you must be careful about the SAN storage to use. The SAN storage must be
capable, thru its drivers, of concurrent read/write access at the same time from any member of the RAC cluster, which
means that the “reserve_policy” attribute of the discovered disks (hdisk, hdiskpower, dlmfdrv, etc …) must be able
to be set to the “no_reserve” or “no_lock” values.

For other storage such as EMC, HDS, HP and so on, you'll have to check that the selected storage is compatible with
and supported for RAC. Oracle does not support or certify storage; the storage vendors will say whether they support it or
not, and with which multi-pathing tool (IBM SDDPCM, EMC PowerPath, HDLM, etc). A quick way to check and change the
reserve attribute is sketched below.
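
A minimal sketch for an MPIO hdisk, to be run on each node and for each shared disk (hdisk2 is only an example; for EMC PowerPath or Hitachi HDLM devices the attribute and value names differ, for instance reserve_lock=no on hdiskpower devices):

# Display the current reservation attribute of the shared disk
lsattr -El hdisk2 | grep reserve

# Disable the SCSI reservation so that every RAC node can open the disk concurrently
chdev -l hdisk2 -a reserve_policy=no_reserve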

Some documents to look at for Oracle on IBM storage :

7.1.5 Proposed infrastructure with 2 servers

Example of infrastructure as it should be with 2 servers, for network and storage.



7.1.6 What do we protect ?

Oracle RAC protects against node failures; the infrastructure should cover a maximum of failure cases, such as :

For the storage

• loss of 1 HBA on one node


• loss of 1 SAN switch

For the Network

• loss of 1 interconnect Network card on one node


• loss of 1 Interconnect Network switch

All components of the infrastructure are protected for high availability, except the storage itself if the full storage is lost.
The only way to extend high availability to the storage level is to introduce a second storage unit and implement RAC
as a stretched or extended cluster.



7.1.7 About IBM Advanced Power Virtualization and RAC

Extract from “Certify : RAC for Unix On IBM AIX based Systems (RAC only)” (27/02/2008, check for last update)
https://metalink.oracle.com/metalink/plsql/f?p=140:1:12020374767153922780:::::
• Oracle products are certified for AIX5L on all servers that IBM supports with AIX5L. This includes IBM System i
and System p models that use POWER5 and POWER6 processors. Please visit IBM's website at this URL for
more information on AIX5L support for System i details.

• IBM power based systems that support AIX5L includes machines branded as RS6000, pSeries, iSeries, System
p and System i.

• The minimum AIX levels for POWER 6 based models are AIX5L 5.2 TL10 and AIX5L 5.3 TL06

• AIX5L certifications include AIX5L versions 5.2 and 5.3. 5.1 was desupported on 16-JUN-2006.

o Customers should review MetaLink Note 282036.1

• 64-bit hardware is required to run Real Application Clusters.

• AIX 64-bit kernel is required for 10gR2 RAC. AIX 32-bit kernel and AIX 64-bit kernel are supported with 9.2 and
10g.
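
A minimal sketch to verify the AIX level and kernel bit mode on each node:

# AIX release and maintenance level
oslevel -r

# Running kernel bit mode: must return 64 for RAC
bootinfo -K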

Extract from “RAC Technologies Matrix for UNIX Platforms” (11/04/2008, check for last update)
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html

Platform : IBM AIX
Technology Category : Server/Processor Architecture
Technology : IBM System p, i, BladeCenter JS20, JS21
Notes :
   • Advanced Power Virtualization hardware LPAR's are supported, including Dynamic or shared pool with micro partitions.
   • VIOS Virtual SCSI for ASM data storage and associated raw hdisk based Voting and OCR (NEW April 2008).
   • VIOS Virtual LAN for all public network and private network interconnect for supported data storage options (NEW April 2008).



7.1.7.1 Network and VIO Server 

Implementing EtherChannel links thru VIO servers for the public and private networks between nodes :

Example between 2 LPAR’s, one for a RAC node and one for an APPS node.



Implementing EtherChannel and VLAN (Virtual LAN) links thru VIO servers for the public and private networks
between 2 RAC nodes :



7.1.7.2 Storage and VIO Server

The following implementation is now supported !!!

Certification for this architecture is done, and it is now supported.



The following implementation is also supported :

The hdisks hosting the following components are accessed thru VIO servers :

• AIX operating system


• Oracle clusterware ($CRS_HOME)
• ASM software ($ASM_HOME)
• RAC Software ($ORACLE_HOME)
• Etc …

And the OCR, voting and ASM disks are accessed thru HBA’s directly attached to the RAC LPAR’s.



7.2 Cookbook infrastructure

For our infrastructure, we used a cluster composed of three partitions (IBM LPAR’s) on an IBM System
p 570 running AIX 5L.
BUT in the real world, to achieve true high availability, it is necessary to have at least two IBM System p / i servers, as
shown below :

Each component of the infrastructure must be protected :

• Disk access path (2 HBA’s and multi-pathing software)


• Interconnect network
• Public network
• …



7.2.1 IBM System p servers

This is the IBM System p server we used for our installation …

http://www-03.ibm.com/servers/eserver/pseries/hardware/highend/
http://www-03.ibm.com/systems/p/
??????

THEN you’ll need one AIX 5L LPAR on each server for a real RAC implementation, with the necessary memory and POWER6
CPU assigned to each LPAR.



Commands to print the config for IBM System p on AIX5L (node1) :



{node1:root}/ # prtconf

System Model: IBM,9117-MMA


Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 1 JSC-11g-node1
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false

Network Information
Host Name: node1
IP Address: 10.3.25.81
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:

Paging Space Information


Total Paging Space: 512MB
Percent Used: 7%

Volume Groups Information


==============================================================================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 273 152 44..00..00..53..55
==============================================================================

INSTALLED RESOURCE LIST

The following resources are installed on the machine.


+/- = Added or deleted from Resource List.
* = Diagnostic support not available.

Model Architecture: chrp


Model Implementation: Multiple Processor, PCI bus

+ sys0 System Object


+ sysplanar0 System Planar
* vio0 Virtual I/O Bus
* vscsi0 U9117.570.65D7D3E-V1-C5-T1 Virtual SCSI Client Adapter
* hdisk0 U9117.570.65D7D3E-V1-C5-T1-L810000000000 Virtual SCSI Disk Drive
* vsa0 U9117.570.65D7D3E-V1-C0 LPAR Virtual Serial Adapter
* vty0 U9117.570.65D7D3E-V1-C0-L0 Asynchronous Terminal
* ent2 U9117.570.65D7D3E-V1-C4-T1 Virtual I/O Ethernet Adapter (l-lan)
* ent1 U9117.570.65D7D3E-V1-C3-T1 Virtual I/O Ethernet Adapter (l-lan)
* ent0 U9117.570.65D7D3E-V1-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
* pci1 U7879.001.DQD17GX-P1 PCI Bus
* pci6 U7879.001.DQD17GX-P1 PCI Bus
+ fcs0 U7879.001.DQD17GX-P1-C2-T1 FC Adapter
* fcnet0 U7879.001.DQD17GX-P1-C2-T1 Fibre Channel Network Protocol Device
* fscsi0 U7879.001.DQD17GX-P1-C2-T1 FC SCSI I/O Controller Protocol Device
* dac0 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31 1722-600 (600) Disk Array Controller
* dac1 U7879.001.DQD17GX-P1-C2-T1-W200900A0B812AB31 1722-600 (600) Disk Array Controller
* dac2 U7879.001.DQD17GX-P1-C2-T1-W200700A0B80FD42F 1722-600 (600) Disk Array Controller
* dac3 U7879.001.DQD17GX-P1-C2-T1-W200600A0B80FD42F 1722-600 (600) Disk Array Controller
+ L2cache0 L2 Cache
+ mem0 Memory

+ proc0 Processor
+ hdisk1 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L0 1722-600 (600) Disk Array Device
+ hdisk2 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000 1722-600 (600) Disk Array Device
+ hdisk3 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000 1722-600 (600) Disk Array Device
Commands to print the config for IBM System p on AIX5L (node2) :



{node2:root}/ # prtconf

System Model: IBM,9117-MMA


Machine Serial Number: 651A260
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Processor Clock Speed: 3504 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 JSC-11g-node2
Memory Size: 3072 MB
Good Memory Size: 3072 MB
Platform Firmware level: EM310_048
Firmware Version: IBM,EM310_048
Console Login: enable
Auto Restart: true
Full Core: false

Network Information
Host Name: node2
IP Address: 10.3.25.82
Sub Netmask: 255.255.255.0
Gateway: 10.3.25.254
Name Server:
Domain Name:

Paging Space Information


Total Paging Space: 512MB
Percent Used: 2%

Volume Groups Information


==============================================================================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 273 149 54..20..00..20..55
==============================================================================

INSTALLED RESOURCE LIST

The following resources are installed on the machine.


+/- = Added or deleted from Resource List.
* = Diagnostic support not available.

Model Architecture: chrp


Model Implementation: Multiple Processor, PCI bus

+ sys0 System Object


+ sysplanar0 System Planar
* pci9 U7879.001.DQD17GX-P1 PCI Bus

* pci10 U7879.001.DQD17GX-P1 PCI Bus


+ fcs0 U7879.001.DQD17GX-P1-C5-T1 FC Adapter
* fcnet0 U7879.001.DQD17GX-P1-C5-T1 Fibre Channel Network Protocol Device
* fscsi0 U7879.001.DQD17GX-P1-C5-T1 FC SCSI I/O Controller Protocol Device
* dac0 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31 1722-600 (600) Disk Array Controller
* dac1 U7879.001.DQD17GX-P1-C5-T1-W200900A0B812AB31 1722-600 (600) Disk Array Controller
* dac2 U7879.001.DQD17GX-P1-C5-T1-W200700A0B80FD42F 1722-600 (600) Disk Array Controller
* dac3 U7879.001.DQD17GX-P1-C5-T1-W200600A0B80FD42F 1722-600 (600) Disk Array Controller
* vio0 Virtual I/O Bus
* vscsi0 U9117.570.65D7D3E-V2-C5-T1 Virtual SCSI Client Adapter
* hdisk1 U9117.570.65D7D3E-V2-C5-T1-L810000000000 Virtual SCSI Disk Drive
* vsa0 U9117.570.65D7D3E-V2-C0 LPAR Virtual Serial Adapter
* vty0 U9117.570.65D7D3E-V2-C0-L0 Asynchronous Terminal
* ent2 U9117.570.65D7D3E-V2-C4-T1 Virtual I/O Ethernet Adapter (l-lan)
* ent1 U9117.570.65D7D3E-V2-C3-T1 Virtual I/O Ethernet Adapter (l-lan)
* ent0 U9117.570.65D7D3E-V2-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
+ L2cache0 L2 Cache
+ mem0 Memory
+ proc0 Processor
+ hdisk0 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L0 1722-600 (600) Disk Array Device
+ hdisk2 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L1000000000000 1722-600 (600) Disk Array Device
+ hdisk3 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L2000000000000 1722-600 (600) Disk Array Device
+ hdisk4 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L3000000000000 1722-600 (600) Disk Array Device
Command to get information on the LPAR :

{node1:root}/ # lparstat -i
Node Name : node1
Partition Name : JSC-11g-node1
Partition Number : 1
Type : Shared-SMT
Mode : Uncapped
Entitled Capacity : 0.50
Partition Group-ID : 32769
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 1
Minimum Virtual CPUs : 1
Online Memory : 3072 MB
Maximum Memory : 6144 MB
Minimum Memory : 1024 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 1.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 16
Active Physical CPUs in system : 16
Active CPUs in Pool : 16
Shared Physical CPUs in system : -
Maximum Capacity of Pool : -
Entitled Capacity of Pool : -
Unallocated Capacity : 0.00
Physical CPU Percentage : 50.00%
Unallocated Weight : 0
{node1:root}/ #

{node2:root}/home # lparstat -i
Node Name : node2
Partition Name : JSC-11g-node2
Partition Number : 2
Type : Shared-SMT
Mode : Uncapped
Entitled Capacity : 0.50
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 1
Minimum Virtual CPUs : 1
Online Memory : 3072 MB
Maximum Memory : 6144 MB
Minimum Memory : 1024 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 1.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 16
Active Physical CPUs in system : 16
Active CPUs in Pool : 16
Shared Physical CPUs in system : -
Maximum Capacity of Pool : -
Entitled Capacity of Pool : -
Unallocated Capacity : 0.00
Physical CPU Percentage : 50.00%
Unallocated Weight : 0
{node2:root}/home #

7.2.2 Operating System
Operating system must be installed the same way on each LPAR, with the same maintenance level, same APAR
and FILESETS level.

è Check “PREPARING THE SYSTEM” chapter for Operating System requirements on AIX5L

• AIX5.1 is not supported


• AIX5.2 / 5.3 are supported and certified

 The IBM AIX clustering layer, HACMP filesets, MUST NOT be installed if
you have chosen an implementation without HACMP. If this layer is implemented for
another purpose, the disk resources necessary to install and run the CRS data will have to be part of an
HACMP volume group resource.
If you have previously installed HACMP, you must remove :
 HACMP filesets (cluster.es.*)
 rsct.hacmp.rte
 rsct.compat.basic.hacmp.rte
 rsct.compat.clients.hacmp.rte
If you did run a first installation of the Oracle Clusterware (CRS) with HACMP
installed,
è Check if the /opt/ORCLcluster directory exists and, if so, remove it on
all nodes.

THEN REBOOT ALL NODES ...
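
For reference, a minimal sketch of how to check for and deinstall these HACMP-related filesets on one node (run as root on every node; adapt it to what lslpp actually reports on your system) :

{node1:root}/ # lslpp -l "cluster.es.*" rsct.hacmp.rte rsct.compat.basic.hacmp.rte rsct.compat.clients.hacmp.rte
{node1:root}/ # installp -ug cluster.es rsct.hacmp.rte rsct.compat.basic.hacmp.rte rsct.compat.clients.hacmp.rte
{node1:root}/ # ls -d /opt/ORCLcluster && rm -r /opt/ORCLcluster     # only if a previous CRS install was done with HACMP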



7.2.3 Multi-pathing and ASM

Please check Metalink note “Oracle ASM and Multi-Pathing Technologies” Doc ID: Note:294869.1

Note that Oracle Corporation does not certify ASM against multi-pathing utilities. The MP utilities listed below are ones
that are known working solutions. As more testing is done, additional MP utilities will be listed here; thus, this document is an
active document.

Multi-pathing allows SAN access failover, and load balancing across the SAN Fibre Channel attachments.

Multi-pathing tools on AIX, ASM device usage and notes :

• EMC PowerPath
  ASM device usage : raw partitions thru the pseudo device /dev/rhdiskpowerx

• IBM SDD (vpath) : NOT SUPPORTED for RAC/ASM on AIX !!!
  ASM device usage : /dev/vpathx
  Notes : As of this writing, SDD on AIX is known to cause discovery and device handling problems for ASM, and thus is not a
  viable solution. ASM needs to access the disks/vpath thru a non-root user, which is not allowed by SDD as of today,
  27 March 2007. See the SDDPCM section below for an alternative solution to SDD for AIX.

• IBM SDDPCM
  ASM device usage : use the /dev/rhdiskx device
  Notes : You must install the SDDPCM filesets and enable SDDPCM. SDDPCM cannot co-exist with SDD.
  SDDPCM only works with the following IBM storage components : DS8000, DS6000, Enterprise Storage Server (ESS).
  SDDPCM also works on top of IBM SVC (SAN Volume Controller), and on top of other supported storages like HDS, EMC, etc …

• IBM RDAC (Redundant Disk Array Controller)
  ASM device usage : use the /dev/rhdiskx device
  Notes : RDAC is installed by default and must be used with the IBM storage DS4000, and the former FAStT series.

• Hitachi Dynamic Link Manager (HDLM)
  ASM device usage : use the /dev/rdsk/cxtydz device generated by HDLM, or /dev/dlmfdrvx
  Notes : HDLM generates a scsi (cxtydzx) address where the controller is the highest unused controller number.
  HDLM no longer requires HACMP. /dev/dlmfdrvx can be used outside of an HDLM Volume Group;
  or, if using an HDLM Volume Group, the logical volumes must be created with the "mklv" command using the "-T O" option.

• Fujitsu ETERNUS GR Multipath Driver
  ASM device usage : use the /dev/rhdisk device


7.2.4 IBM storage and multi-pathing
With IBM, please refer to IBM to confirm which IBM storage is supported with RAC, if not specified in this document.

IBM TotalStorage products for IBM System p

IBM DS4000, DS6000 and DS8000 series are supported with 10gRAC.

IBM Storage DS300 and DS400 are not, and will not be supported with 10gRAC.
As for today March 27, 2007 IBM Storage DS3200 and DS3400 are not yet
supported with 10gRAC.

IBM System Storage and TotalStorage products


è http://www-03.ibm.com/servers/storage/product/products_pseries.html

There are 2 cases when using IBM storage :

• IBM MPIO (Multi-Path I/O).


MPIO driver is supported with IBM Total Storage ESS, DS6000 and DS8000 series only
And with IBM SVC (SAN Volume Controler).

• IBM RDAC (Redundant Disk Array Controller) for IBM Total Storage DS4000.
RDAC driver is supported with IBM Total Storage DS4000 series only, and former FasTt.

è You MUST use one or the other, depending on the storage used.

• case 1: Lun’s provided by the IBM storage with IBM MPIO installed as multi-pathing driver.
Disks (LUN’s) will be seen as hdisk at AIX level using lspv command.

On node 1 … {node1:root}/ # lspv


hdisk0 00ced22cf79098ff rootvg active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None

• case 2: Lun’s provided by the IBM DS4000 storage with IBM RDAC installed as multi-pathing
driver.
Disks (LUN’s) will be seen as hdisk at AIX level using lspv command.

On node 1 … {node1:root}/ # lspv


hdisk0 00ced22cf79098ff rootvg active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None



7.2.4.1 IBM MPIO (Multi Path I/O) Setup Procedure

AIX packages needed on all nodes :
• devices.sddpcm.53.rte (for example devices.sddpcm.53.2.1.0.7.bff)
• devices.fcp.disk.ibm.mpio.rte

devices.fcp.disk.ibm.mpio.rte download page :
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D410&q1=host+scripts&uid=ssg1S4000203&loc=en_US&cs=utf-8&lang=en

MPIO (SDDPCM) for AIX 5.3 download page :
http://www-1.ibm.com/support/docview.wss?uid=ssg1S4000201

On node 1 and node 2, install the filesets :

smitty install
    Install and Update Software
        Install Software

* INPUT device / directory for software      [/mydir_with_my_filesets]
  SOFTWARE to install                        []        (Press F4)

Select devices.fcp.disk.ibm.mpio

Install Software

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* INPUT device / directory for software .
* SOFTWARE to install
[devices.fcp.disk.ibm.> +
PREVIEW only? (install operation will NOT occur) no
+
COMMIT software updates? yes
+
SAVE replaced files? no
+
AUTOMATICALLY install requisite software? yes
+
EXTEND file systems if space needed? yes
+
OVERWRITE same or newer versions? no
+
VERIFY install and check file sizes? no
+
Include corresponding LANGUAGE filesets? yes
+
DETAILED output? no
+
Process multiple volumes? yes
+
ACCEPT new license agreements? no
+
Preview new LICENSE agreements? no
+



Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
devices.fcp.disk.ibm.mpio.r 1.0.0.0 USR APPLY SUCCESS

Install devices.sddpcm.53

Select :
¦ devices.sddpcm.53
¦   ALL
¦     + 2.1.0.0 IBM SDD PCM for AIX V53
¦     + 2.1.0.7 IBM SDD PCM for AIX V53

Then run the installation, and check in the installation summary (shown below the screen) that the installation succeeded :
Install Software

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

* INPUT device / directory for software .


* SOFTWARE to install [devices.sddpcm.53
> +
PREVIEW only? (install operation will NOT occur) no
+
COMMIT software updates? yes
+
SAVE replaced files? no
+
AUTOMATICALLY install requisite software? yes
+
EXTEND file systems if space needed? yes
+
OVERWRITE same or newer versions? no
+
VERIFY install and check file sizes? no
+
Include corresponding LANGUAGE filesets? yes
+
DETAILED output? no
+
Process multiple volumes? yes
+
ACCEPT new license agreements? no
+
Preview new LICENSE agreements? no
+

Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
devices.sddpcm.53.rte 2.1.0.0 USR APPLY SUCCESS
devices.sddpcm.53.rte 2.1.0.0 ROOT APPLY SUCCESS
devices.sddpcm.53.rte 2.1.0.7 USR APPLY SUCCESS
devices.sddpcm.53.rte 2.1.0.7 ROOT APPLY SUCCESS
devices.sddpcm.53.rte 2.1.0.7 USR COMMIT SUCCESS
devices.sddpcm.53.rte 2.1.0.7 ROOT COMMIT SUCCESS

Now you need to reboot all AIX nodes !!!



Commands to know the AIX WWPN :

{node1:root}/ # pcmpath query wwpn
         Adapter Name     PortWWN
         fscsi0           10000000C935A7E7
         fscsi1           10000000C93A1BF3

Commands to check disks :

{node1:root}/ # lsdev -Cc disk -t 2107
hdisk2 Available 0A-08-02 IBM MPIO FC 2107
hdisk3 Available 0A-08-02 IBM MPIO FC 2107
hdisk4 Available 0A-08-02 IBM MPIO FC 2107
hdisk5 Available 0A-08-02 IBM MPIO FC 2107
hdisk6 Available 0A-08-02 IBM MPIO FC 2107
hdisk7 Available 0A-08-02 IBM MPIO FC 2107
hdisk8 Available 0A-08-02 IBM MPIO FC 2107
hdisk9 Available 0A-08-02 IBM MPIO FC 2107
hdisk10 Available 0A-08-02 IBM MPIO FC 2107
hdisk11 Available 0A-08-02 IBM MPIO FC 2107

Commands to check disks :

{node1:root}/ # pcmpath query device

DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2107900 ALGORITHM: Load Balance
SERIAL: 75271812000
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 CLOSE NORMAL 0 0
1 fscsi1/path1 CLOSE NORMAL 0 0

DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2107900 ALGORITHM: Load Balance


SERIAL: 75271812001
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 CLOSE NORMAL 0 0
1 fscsi1/path1 CLOSE NORMAL 0 0

DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2107900 ALGORITHM: Load Balance


SERIAL: 75271812002
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 CLOSE NORMAL 0 0
1 fscsi1/path1 CLOSE NORMAL 0 0
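
Before these MPIO hdisks can be used for OCR, voting or ASM, the SCSI reservation must be released so that all nodes can open them concurrently. A minimal sketch, assuming hdisk2 to hdisk11 are the shared LUNs shown above (to be run as root on all nodes) :

{node1:root}/ # for i in 2 3 4 5 6 7 8 9 10 11
> do
>   chdev -l hdisk$i -a reserve_policy=no_reserve     # release the SCSI reserve
>   lsattr -El hdisk$i -a reserve_policy               # verify the new value
> done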



7.2.4.2 IBM AIX RDAC (FCP.ARRAY filesets) Setup Procedure

This ONLY applies to the DS4000 storage series, NOT to DS6000, DS8000 and ESS 800.

RDAC is installed by default on AIX5L.

Each node must have 2 HBA cards for multi-pathing. With ONLY 1 HBA per node it will work, but the path to the SAN will not
be protected. Therefore, in production, 2 HBAs per node must be used.

All AIX hosts in your storage subsystem must have the RDAC multipath driver installed.
In a single server environment, AIX allows load sharing (also called load balancing): you can set the load balancing
parameter to yes. In case of heavy workload on one path, the driver will move LUNs to the controller with less
workload, and move them back to the preferred controller when the workload decreases. A problem that can occur is disk
thrashing: the driver moves a LUN back and forth from one controller to the other, so the controller is more
occupied by moving disks around than by servicing I/O. The recommendation is NOT to load balance on an AIX system:
the performance increase is minimal (or performance could actually get worse).

RDAC (fcp.array filesets) for AIX supports round-robin load-balancing.
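
As a hedged illustration (the attribute name may vary with the RDAC driver level, so check the lsattr output on your system first), the load balancing setting can be inspected and disabled on the dar device as follows :

{node1:root}/ # lsattr -El dar0 | grep load_balancing
load_balancing  no  Dynamic Load Balancing  True

{node1:root}/ # chdev -l dar0 -a load_balancing=no
dar0 changed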

Setting the attributes of the RDAC driver for AIX


The AIX RDAC driver files are not included on the DS4000 installation CD.
Either install them from the AIX Operating Systems CD, if the correct version is included, or download them from the
following Web site: http://techsupport.services.ibm.com/server/fixes
or http://www-304.ibm.com/jct01004c/systems/support/

{node1:root}/ # lslpp -L devices.fcp.disk.array.rte


Fileset Level State Type Description (Uninstaller)
Commands ----------------------------------------------------------------------------
devices.fcp.disk.array.rte
to check that 5.3.0.52 A F FC SCSI RAIDiant Array Device
necessary filesets Support Software
are present for
RDAC …
State codes:
A -- Applied.
B -- Broken.
C -- Committed.
E -- EFIX Locked.
O -- Obsolete. (partially migrated to newer version)
? -- Inconsistent State...Run lppchk -v.

Type codes:
F -- Installp Fileset
P -- Product
C -- Component
T -- Feature
R -- RPM Package
{node1:root}/ #

{node1:root}/ # lslpp -L devices.common.IBM.fc.rte


Fileset Level State Type Description (Uninstaller)
----------------------------------------------------------------------------
devices.common.IBM.fc.rte
5.3.0.50 C F Common IBM FC Software

State codes:
A -- Applied.
B -- Broken.
C -- Committed.
E -- EFIX Locked.
O -- Obsolete. (partially migrated to newer version)
? -- Inconsistent State...Run lppchk -v.

Type codes:
F -- Installp Fileset
P -- Product
C -- Component
T -- Feature
R -- RPM Package
{node1:root}/ #



Commands
to check the RDAC configuration and HBA path to hdisk …

On node1 … On node2 …

{node1:root}/ # fget_config -v -A {node2:root}/ # fget_config -v -A

---dar0--- ---dar0---

User array name = 'DS4000_JSC' User array name = 'DS4000_JSC'


dac0 ACTIVE dac3 ACTIVE dac0 ACTIVE dac3 ACTIVE

Disk DAC LUN Logical Drive Disk DAC LUN Logical Drive
hdisk1 dac0 0 G8_spfile hdisk0 dac0 0 G8_spfile
hdisk2 dac3 1 G8_OCR1 hdisk1 dac3 1 G8_OCR1
hdisk3 dac0 2 G8_OCR2 hdisk2 dac0 2 G8_OCR2
hdisk4 dac3 3 G8_Vote1 hdisk3 dac3 3 G8_Vote1
hdisk5 dac0 4 G8_Vote2 hdisk4 dac0 4 G8_Vote2
hdisk6 dac3 5 G8_Vote3 hdisk5 dac3 5 G8_Vote3
hdisk7 dac0 6 G8_Data1 hdisk6 dac0 6 G8_Data1
hdisk8 dac3 7 G8_Data2 hdisk7 dac3 7 G8_Data2
hdisk9 dac0 8 G8_Data3 hdisk8 dac0 8 G8_Data3
hdisk10 dac3 9 G8_Data4 hdisk9 dac3 9 G8_Data4
hdisk11 dac0 10 G8_Data5 hdisk10 dac0 10 G8_Data5
hdisk12 dac3 11 G8_Data6 hdisk12 dac3 11 G8_Data6
hdisk13 dac0 12 G8_tie hdisk13 dac0 12 G8_tie
{node1:root}/ # {node2:root}/ #

Commands
to check the RDAC configuration and HBA path to hdisk for one specific “dar” …

{node1:root}/ # fget_config -l dar0


dac0 ACTIVE dac3 ACTIVE
hdisk1 dac0
hdisk2 dac3
hdisk3 dac0
hdisk4 dac3
hdisk5 dac0
hdisk6 dac3
hdisk7 dac0
hdisk8 dac3
hdisk9 dac0
hdisk10 dac3
hdisk11 dac0
hdisk12 dac3
hdisk13 dac0
{node1:root}/ #



Commands to see the HBA fibre channel statistics (fcs0 is one of the HBAs) :

{node1:root}/ # fcstat fcs0

FIBRE CHANNEL STATISTICS REPORT: fcs0

Device Type: FC Adapter (df1080f9)
Serial Number: 1F41709923
Option ROM Version: 02E01871
Firmware Version: H1D1.81X1
World Wide Node Name: 0x20000000C93F8E29
World Wide Port Name: 0x10000000C93F8E29

FC-4 TYPES:
Supported:
0x0000012000000000000000000000000000000000000000000000000000000000
Active:
0x0000010000000000000000000000000000000000000000000000000000000000
Class of Service: 3
Port Speed (supported): 2 GBIT
Port Speed (running): 2 GBIT
Port FC ID: 0x650B00
Port Type: Fabric

Seconds Since Last Reset: 2795

Transmit Statistics Receive Statistics


------------------- ------------------
Frames: 41615 96207
Words: 1537024 12497408

LIP Count: 0
NOS Count: 0
Error Frames: 0
Dumped Frames: 0
Link Failure Count: 269
Loss of Sync Count: 469
Loss of Signal: 466
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 51
Invalid CRC Count: 0

IP over FC Adapter Driver Information


No DMA Resource Count: 0
No Adapter Elements Count: 0

FC SCSI Adapter Driver Information


No DMA Resource Count: 0
No Adapter Elements Count: 0
No Command Resource Count: 0

IP over FC Traffic Statistics


Input Requests: 0
Output Requests: 0
Control Requests: 0
Input Bytes: 0
Output Bytes: 0

FC SCSI Traffic Statistics


Input Requests: 24721
Output Requests: 8204
Control Requests: 252
Input Bytes: 46814436
Output Bytes: 4207616
{node1:root}/ #



7.2.5 EMC storage and multi-pathing

With EMC, please refer to EMC to see which EMC storage is supported with RAC.
There are 2 cases when using EMC storage :

• case 1: Lun’s provided by the EMC storage with IBM MPIO installed as multi-pathing driver.
Disks (LUN’s) will be seen as hdisk at AIX level using lspv command.

On node 1 … {node1:root}/ # lspv


hdisk0 00ced22cf79098ff rootvg active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None

Then, for the disks to be used for ASM, and on all nodes (a consolidated sketch follows this list) :

1. Install MPIO on all nodes, attach the LUNs to each node, discover the LUNs with "cfgmgr".
2. Identify the hdisk names on each node, for a given LUN ID.
3. Remove the PVID from the hdisk and change the reserve policy to no_reserve using :
   chdev -l hdisk… -a pv=clear
   chdev -l hdisk… -a reserve_policy=no_reserve
4. Set ownership to oracle:dba on the /dev/rhdisk… devices.
5. Set read/write permissions to 660 on the /dev/rhdisk… devices.
6. Access the disks thru /dev/rhdisk… for the ASM diskgroup configuration.
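
The steps above can be scripted. A minimal sketch, assuming hdisk1 to hdisk4 are the shared LUNs seen on this node (adapt the disk list, and run it as root on every node) :

{node1:root}/ # for d in hdisk1 hdisk2 hdisk3 hdisk4
> do
>   chdev -l $d -a pv=clear                      # remove any PVID
>   chdev -l $d -a reserve_policy=no_reserve     # allow concurrent access from all nodes
>   chown oracle:dba /dev/r$d                    # character device used by ASM
>   chmod 660 /dev/r$d
> done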

• case 2 : Lun’s provided by the EMC storage with EMC PowerPath installed as multi-pathing
driver.
Disks (LUN’s) will be seen as hdiskpower at AIX level using lspv command.

{node1:root}/ # lspv
On node 1 … hdiskpower0 00ced22cf79098ff rootvg active
hdiskpower1 none None
hdiskpower2 none None
hdiskpower3 none None
hdiskpower4 none None

Then, for the disks to be used for ASM, and on all nodes :

1. Install PowerPath on all nodes, attach the LUNs to each node, discover the LUNs with "cfgmgr".
2. Identify the hdiskpower names on each node, for a given LUN ID.
3. Remove the PVID from the hdiskpower and change the reserve policy to no reserve using :
   chdev -l hdiskpower… -a pv=clear
   chdev -l hdiskpower… -a reserve_lock=no
4. Set ownership to oracle:dba on the /dev/rhdiskpower… devices.
5. Set read/write permissions to 660 on the /dev/rhdiskpower… devices.
6. Access the disks thru /dev/rhdiskpower… for the ASM diskgroup configuration.



7.2.5.1 EMC PowerPath Setup Procedure

See PowerPath for AIX version 4.3 Installation & Administration Guide, P/N 300-001-683 for details

On node 1, and node 2 …

1. Install EMC ODM drivers and necessary filesets …


• 5.2.0.1 from ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS/EMC.AIX.5.2.0.1.tar.Z
• install using smit install

2. remove any existing devices attached to the EMC


{node1:root}/ # rmdev -dl hdiskX

3. run /usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr to detect devices

4. Install PowerPath version 4.3.0 minimum using smit install

5. register PowerPath
{node1:root}/ # emcpreg -install

6. initialize PowerPath devices


{node1:root}/ # powermt config

7. verify that all PowerPath devices are named consistently across all cluster nodes
{node1:root}/ # /usr/lpp/EMC/Symmetrix/bin/inq.aix64 | grep hdiskpower

Compare the results. Consistent naming is not required for ASM devices, but the LUNs used
for the OCR and VOTE functions must have the same device names on all RAC systems.

Identify two small LUNs to be used for OCR and voting.

If the hdiskpowerX names for the OCR and VOTE devices are different, create a new
device for each of these functions as follows:
{node1:root}/ # mknod /dev/ocr c <major # of OCR LUN> <minor # of OCR LUN>
{node1:root}/ # mknod /dev/vote c <major # of VOTE LUN> <minor # of VOTE LUN>

Major and minor numbers can be seen using the command 'ls -al /dev/hdiskpower*'
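
For example (the device numbers below are hypothetical, shown only to illustrate where to read them), the major and minor numbers appear just before the date in the ls output, and are then re-used by mknod :

{node1:root}/ # ls -al /dev/rhdiskpower4
crw-------   1 root     system       47,  4 Mar 27 10:12 /dev/rhdiskpower4

{node1:root}/ # mknod /dev/ocr c 47 4          # same major/minor as the chosen OCR LUN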

8. On all hdiskpower devices to be used by Oracle for ASM, voting, or the OCR, the reserve_lock
attribute must be set to "no"
{node1:root}/ # chdev -l hdiskpowerX -a reserve_lock=no

9. Verify the attribute is set


{node1:root}/ # lsattr -El hdiskpowerX

10. Set permissions on all hdiskpower drives to be used for ASM, voting, or the OCR as follows :
{node1:root}/ # chown oracle:dba /dev/rhdiskpowerX
{node1:root}/ # chmod 660 /dev/rhdiskpowerX

The Oracle Installer will change these permissions and ownership as necessary during the CRS install
process.



7.2.6 HITACHI storage and multi-pathing
With Hitachi, please refer to Hitachi to see which HDS storage is supported with RAC.
There are 3 cases when using Hitachi HDS storage :

• case 1: Lun’s provided by the HDS storage with IBM MPIO installed as multi-pathing driver.

*** NOT SUPPORTED with all HDS storage, check with Hitachi ***

Disks (LUN’s) will be seen as hdisk at AIX level using lspv command.

On node 1 … {node1:root}/ # lspv


hdisk0 00ced22cf79098ff rootvg active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None

Then, for the disks to be used for ASM, and on all nodes :

1. Install MPIO on all nodes, attach the LUNs to each node, discover the LUNs with "cfgmgr".
2. Identify the hdisk names on each node, for a given LUN ID.
3. Remove the PVID from the hdisk and change the reserve policy to no_reserve using :
   chdev -l hdisk… -a pv=clear
   chdev -l hdisk… -a reserve_policy=no_reserve
4. Set ownership to oracle:dba on the /dev/rhdisk devices.
5. Set read/write permissions to 660 on the /dev/rhdisk devices.
6. Access the disks thru /dev/rhdisk for the ASM diskgroup configuration.

• case 2 : Lun’s provided by the HDS storage with HDLM installed as multi-pathing driver.

*** NOT SUPPORTED ***


Release Notes 10g Release 2 (10.2) for AIX 5L Based Systems (64­Bit)
B19074-04 / April 2006

Hitachi HDLM for Storage

If you use Hitachi HDLM (dmlf devices) for storage, then automatic storage
instances do not automatically identify the physical disk. Instead, the instances
identify only the logical volume manager (LVM). This is because the physical
disks can only be opened by programs running as root.

Physical disks have path names similar to the following:

/dev/rdlmfdrv8
/dev/rdlmfdrv9



Case 3 : Lun’s provided by the HDS storage with HDLM as multi-pathing driver.
Disks will be seen as dlmfdrv at AIX level using lspv command, and part of a HDLM VG
(volume groups).

On node 1 … {node1:root}/ # lspv


dlmfdrv0 00ced22cf79098ff rootvg active
dlmfdrv1 none vg_asm active
dlmfdrv2 none vg_asm active
dlmfdrv3 none vg_asm active
dlmfdrv4 none vg_asm active
dlmfdrv5 none vg_asm active

or

{node1:root}/ # lspv
dlmfdrv0 00ced22cf79098ff rootvg active
dlmfdrv1 none vg_ocr_disk1 active
dlmfdrv2 none vg_voting_disk1 active
dlmfdrv3 none vg_asm_disk1 active
dlmfdrv4 none vg_asm_disk2 active
dlmfdrv5 none vg_asm_disk3 active

Then, for the disks to be used for ASM, and on all nodes :

1. Install HDLM on all nodes, attach the LUNs to each node, discover the LUNs with "cfgmgr".
2. Turn off reserve locking on all nodes è dlnkmgr set -rsv on 0 -s
3. Create VGs (Volume Groups) and LVs (Logical Volumes)
   2 options :

Option 1 : Create one VG with all the dlmfdrv devices, and create the LV's out of it.

On node 1 …
1) create the volume group
   dlmmkvg -y vg_asm -B -s 256 -V 101 dlmfdrv1 dlmfdrv2 dlmfdrv3 dlmfdrv4 dlmfdrv5
2) enable the volume group
   dlmvaryonvg vg_asm
3) create the logical volumes …
   mklv -y lv_ocr_disk1 -T O -w n -s n -r n vg_asm 2
   mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk1 -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk2 -T O -w n -s n -r n vg_asm 2
   mklv -y lv_asm_disk3 -T O -w n -s n -r n vg_asm 2
   …
4) change permissions, owner and group of the raw devices … (B10811-05 Oracle DB Installation Guide p2-70)
   chown oracle:dba /dev/rlv_ocr_disk1
   chown oracle:dba /dev/rlv_voting_disk1
   chown oracle:dba /dev/rlv_asm_disk1
   chown oracle:dba /dev/rlv_asm_disk2
   chown oracle:dba /dev/rlv_asm_disk3
   …
   chmod 660 /dev/rlv_ocr_disk1
   chmod 660 /dev/rlv_voting_disk1
   chmod 660 /dev/rlv_asm_disk1
   chmod 660 /dev/rlv_asm_disk2
   chmod 660 /dev/rlv_asm_disk3
   …
5) disable the volume group
   dlmvaryoffvg vg_asm

On node 2 …
6) identify which dlmfdrv corresponds to dlmfdrv1 from node1
   dlmfdrv1 on node1 è dlmfdrv0 on node2
7) import the volume group on node2 (this will copy the vg/lv configuration made on node1), then enable it
   dlmimportvg -V 101 -y vg_asm dlmfdrv0
   dlmvaryonvg vg_asm
8) ensure the vg will not get varyon'd at boot
   dlmchvg -a n vg_asm

On node 1 …
9) enable the volume group on node1
   dlmvaryonvg vg_asm

Option 2 : Create one VG per dlmfdrv, and one LV per VG. (Recommended, for ease of administration and to avoid downtime.)

On node 1 …
1) create the volume groups
   dlmmkvg -y vg_ocr1 -B -s 256 -V 101 dlmfdrv1
   dlmmkvg -y vg_vot1 -B -s 256 -V 101 dlmfdrv2
   dlmmkvg -y vg_asm1 -B -s 256 -V 101 dlmfdrv3
   dlmmkvg -y vg_asm2 -B -s 256 -V 101 dlmfdrv4
   dlmmkvg -y vg_asm3 -B -s 256 -V 101 dlmfdrv5
2) enable the volume groups
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3
3) create the logical volumes
   mklv -y lv_ocr_disk1 -T O -w n -s n -r n vg_ocr1 2
   mklv -y lv_voting_disk1 -T O -w n -s n -r n vg_vot1 2
   mklv -y lv_asm_disk1 -T O -w n -s n -r n vg_asm1 2
   mklv -y lv_asm_disk2 -T O -w n -s n -r n vg_asm2 2
   mklv -y lv_asm_disk3 -T O -w n -s n -r n vg_asm3 2
   …
4) change permissions, owner and group of the raw devices (B10811-05 Oracle DB Installation Guide p2-70)
   chown oracle:dba /dev/rlv_ocr_disk1
   chown oracle:dba /dev/rlv_voting_disk1
   chown oracle:dba /dev/rlv_asm_disk1
   chown oracle:dba /dev/rlv_asm_disk2
   chown oracle:dba /dev/rlv_asm_disk3
   …
   chmod 660 /dev/rlv_ocr_disk1
   chmod 660 /dev/rlv_voting_disk1
   chmod 660 /dev/rlv_asm_disk1
   chmod 660 /dev/rlv_asm_disk2
   chmod 660 /dev/rlv_asm_disk3
   …
5) disable the volume groups
   dlmvaryoffvg vg_ocr1
   dlmvaryoffvg vg_vot1
   dlmvaryoffvg vg_asm1
   dlmvaryoffvg vg_asm2
   dlmvaryoffvg vg_asm3

On node 2 …
6) identify which dlmfdrv corresponds to dlmfdrv1 from node1, and so on with the other dlmfdrv devices
   dlmfdrv1 on node1 è dlmfdrv0 on node2
   dlmfdrv2 on node1 è dlmfdrv1 on node2
   dlmfdrv3 on node1 è dlmfdrv2 on node2
   dlmfdrv4 on node1 è dlmfdrv3 on node2
   dlmfdrv5 on node1 è dlmfdrv4 on node2
7) import the volume groups on node2 (this will copy the vg/lv configuration made on node1), then enable them
   dlmimportvg -V 101 -y vg_ocr1 dlmfdrv0
   dlmimportvg -V 101 -y vg_vot1 dlmfdrv1
   dlmimportvg -V 101 -y vg_asm1 dlmfdrv2
   dlmimportvg -V 101 -y vg_asm2 dlmfdrv3
   dlmimportvg -V 101 -y vg_asm3 dlmfdrv4
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3
8) ensure the vgs will not get varyon'd at boot
   dlmchvg -a n vg_ocr1
   dlmchvg -a n vg_vot1
   dlmchvg -a n vg_asm1
   dlmchvg -a n vg_asm2
   dlmchvg -a n vg_asm3

On node 1 …
9) enable the volume groups on node1
   dlmvaryonvg vg_ocr1
   dlmvaryonvg vg_vot1
   dlmvaryonvg vg_asm1
   dlmvaryonvg vg_asm2
   dlmvaryonvg vg_asm3

Check for document : Hitachi Dynamic Link Manager (HDLM) for IBM AIX Systems User's Guide :
http://sys-admin.net/uploads/SAN/hldm_admin_guide.pdf

4. Remove any PVID from the dlmfdrv… devices.

5. Access the disks thru /dev/rlv_asm_disk… for the ASM diskgroup configuration.

7.2.7 Others, StorageTek, HP EVA storage and multi-pathing

For most other storage solutions, please contact the storage vendor for the supported configurations, as concurrent
read/write access from all RAC nodes must be possible to implement a RAC solution. That means it must be possible to set
the disk reserve_policy to no_reserve, or equivalent.
8 SPECIFIC CONSIDERATIONS FOR RAC/ASM SETUP WITH HACMP
INSTALLED
Oracle Clusterware does not require HACMP to work, but some customers may need to have HACMP
installed on the RAC cluster nodes to protect third-party products or resources. Oracle Clusterware can
replace HACMP for most operations, such as cold failover for 11g, 10g and 9i single-instance databases, application servers or any
other applications.

Please check following documents for details on


http://www.oracle.com/technology/products/database/clustering/index.html :

• Comparing Oracle Real Application Clusters to Failover Clusters for Oracle Database (PDF) December 2006
• Workload Management with Oracle Real Application Clusters (FAN, FCF, Load Balancing) (PDF) May 2005
• Using Oracle Clusterware to Protect 3rd Party Applications (PDF) May 2005
• Using Oracle Clusterware to Protect a Single Instance Oracle Database (PDF) November 2006
• Using Oracle Clusterware to protect Oracle Application Server (PDF) November 2005
• How to Build End to End Recovery and Workload Balancing for Your Applications 10g Release 1(PDF) Dec
2004
• Oracle Database 10g Services (PDF) Nov 2003
• Oracle Database 10g Release 2 Best Practices: Optimizing Availability During Unplanned Outages Using
Oracle Clusterware and RAC New!

HOWEVER, if customer still need to have HACMP, Oracle Clusterware can cohabitate with HACMP.

Please check for last status of certification on Oracle Metalink (certify)

Extract from metalink in date of March 3rd, 2008.

11g R1 64-bit Certification Summary

OS Product Certified With Version Status

AIX (53) 11gR1 64-bit Oracle Clusterware 11g Certified

AIX (53) 11gR1 64-bit HACMP 5.4.1 Certified

AIX (53) 11gR1 64-bit HACMP 5.3 Certified



Detailed information for HACMP 5.3 and 11gRAC :

Certify - Additional Info RAC for Unix Version 11gR1 64-bit On IBM AIX based Systems (RAC only)
Operating System: IBM AIX based Systems (RAC only) Version 5.3
RAC for Unix Version 11gR1 64-bit
HACMP Version 5.3
Status: Certified

Product Version Note:


None available for this product.

Certification Note:
Use of HACMP 5.3 requires minimum service levels of the following:
o AIX5L 5.3 TL 6 or later, specifically bos.rte.lvm must be at least 5.3.0.60

o HACMP V5.3 with PTF5, cluster.es.clvm installed and ifix for APAR IZ01809

o RSCT (rsct.basic.rte) version 2.4.7.3 and ifix for APAR IZ01838

Detailed information for HACMP 5.4.1 and 11gRAC :

Certify - Additional Info RAC for Unix Version 11gR1 64-bit On IBM AIX based Systems (RAC only)
Operating System: IBM AIX based Systems (RAC only) Version 5.3
RAC for Unix Version 11gR1 64-bit
HACMP Version 5.4.1
Status: Certified

Product Version Note:


None available for this product.

Certification Note:
Use of HACMP 5.4 requires minimum service levels of the following:
o AIX5L 5.3 TL 6 or later, specifically bos.rte.lvm must be at least 5.3.0.60

o HACMP V5.4.1 (Available in media or APAR IZ02620)

o RSCT (rsct.basic.rte) version 2.4.7.3 and ifix for APAR IZ01838 This APAR is integrated into V2.4.8.1



The following rules have to be applied :

1. HACMP must not take over / fail over the Oracle Clusterware resources (VIP, database, etc …)

2. The HACMP VIP must not be configured on the IP of the Public node name used by RAC (hostname), or on the Oracle
Clusterware VIP

3. With 11g, it is not necessary to declare the RAC interconnect in HACMP

4. It is not mandatory to declare the hdisks used for ASM in HACMP as logical volumes (LV) from Volume Groups
(VG). In this case, follow the cookbook to prepare the disks for OCR, Voting and ASM.

5. If the choice is to declare the hdisks used by ASM in HACMP Volume Groups, THEN you will have to prepare the
disks for OCR, Voting, the ASM spfile and the ASM disks as described in the official Oracle documentation available on
http://tahiti.oracle.com

Please check :

Oracle® Real Application Clusters Installation Guide


11g Release 1 (11.1) for Linux and UNIX
Part Number B28264-03
http://download.oracle.com/docs/cd/B28359_01/install.111/b28264/toc.htm

And Metalink note 404474.1

Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4


STATUS of IBM HACMP 5.3, 5.4 Certifications with Oracle RAC 10g : What do you need to do? Configuring HACMP 5.3 or
HACMP 5.4.1, and 10gR2 with Multi-Node Disk Heartbeat (MNDHB)

Even though this note is written for 10g RAC R2, it is also applicable to 11g RAC R1.

Check for the following chapters :

3.3 Configuring Storage for Oracle Clusterware Files on Raw Devices


The following subsections describe how to configure Oracle Clusterware files on raw partitions.

Configuring Raw Logical Volumes for Oracle Clusterware


Creating a Volume Group for Oracle Clusterware
Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group
Importing the Volume Group on the Other Cluster Nodes
Activating the Volume Group in Concurrent Mode on All Cluster Nodes

For OCR/voting disks, do create a volume group (VG), and create logical volumes (lv) with names as
/dev/Ocr_Disk1, /dev/Ocr_Disk2, /dev/Voting_Disk1, etc
And don’t forget to remove the reserve policy on all hdisks

3.6 Configuring Database File Storage on Raw Devices


The following subsections describe how to configure raw partitions for database files.

Configuring Raw Logical Volumes for Database File Storage


Creating a Volume Group for Database Files
Creating Database File Raw Logical Volumes in the New Volume Group
Importing the Database File Volume Group on the Other Cluster Nodes
Activating the Database File Volume Group in Concurrent Mode on All Cluster Nodes

For ASM disks, do create a volume group (VG), and create logical volumes (LV) with names such as
/dev/ASM_Disk1, /dev/ASM_Disk2, /dev/ASM_Disk3, etc.
And don't forget to remove the reserve policy on all hdisks (a short sketch follows below).



9 INSTALLATION STEPS
Prior to installing and using Oracle 11g Real Application Clusters, you must :

1. Prepare the infrastructure


a. Hardware
b. Storage
c. Network
d. San and Network connectivity
e. Operating system


2. Prepare the systems
a. Defining Network Layout, Public, Virtual and
Private hostnames
b. AIX Operating system level, required APAR’s
and filsets
c. System Requirements (Swap, temp, memory,
internal disks)
d. Users And Groups
e. Kernel and Shell Limits
f. User equivalences
g. Time Server Synchronization


3. Prepare the storage
a. Create LUNs for the OCR and Voting disks, and
prepare them at AIX level
b. Create a LUN for the ASM instances spfile, and
prepare it at AIX level
c. Create LUNs for the ASM disks, and prepare them
at AIX level

4. Check that all pre-installation requirements are
fulfilled
a. Using the Oracle cluster verification utility (CVU)
b. Using in-house shell scripts



When all that is done !!!, you can process the installation in the following order :

1. Install Oracle 11g Clusterware


a. Apply necessary patchset and patches
b. Check that Oracle clusterware is working
properly


2. Install Oracle 11g Automated Storage Management
a. Apply necessary patchset and patches
b. Create default nodes listeners
c. Create ASM instances on each node
d. Configure ASM instances Local and remote
listeners if required
e. Change necessary ASM instances parameters
(process, etc …)
f. Create ASM diskgroup(s)


3. Install Oracle 11g Real Application Cluster or/and
Oracle 10g Real Application cluster
a. Apply necessary patchset and patches


4. Create database(s)
a. Configure Databases instances Local and
remote listeners if required
b. Change necessary Databases instances
parameters (process, etc …)
c. Create database(s) associated cluster services,
and configure TAF (Transactions Applications
Failover) as required for your needs
d.


5. Test your cluster with crash scenarios



10 PREPARING THE SYSTEM
Preparing the system is to be done on all servers which are planned to participate in the Oracle cluster.
All servers MUST be clones, with ONLY different HOSTNAMES and IP addresses !!!

IMPORTANT !!!

For ALL servers, you MUST apply the Oracle pre-requisites; those prerequisites are not
optional, BUT MANDATORY !!!

For some parameters, such as tuning settings, the values specified are the minimum required, and
might need to be increased depending on your needs !!!

PLEASE check Oracle documentation for last update, and MOSTLY :


Oracle METALINK Note

Preparing the system is about :

• Defining Network Layout, Public, Virtual and Private hostnames


o Hostname and RAC Public Node name
o Network interface identification
o Default Gateway
o Network Tuning Parameters

• AIX Operating system level, required APAR's and filesets

• System Requirements (Swap, temp, memory, internal disks)

• Users And Groups

• Kernel and Shell Limits

• User equivalences

• Time Server Synchronization

• Etc …



10.1 Network configuration

10.1.1 Define Networks layout, Public, Virtual and Private Hostnames

We need 2 differents networks, with associated network interfaces on each node participating to the RAC cluster :

• Public Network to be used as Users network or reserved network for Application and Databases servers.
• Private Network to be used as a Reserved network for Oracle clusterware, and RAC.

MANDATORY : Network interfaces must have same name, same subnet and same usage !!!

For each node, We have to define :

• Public hostname and corresponding IP address on the “Public Network”


• Virtual hostname and corresponding IP address, also on the “Public Network”.
The IP address must be in the same range as the Public hostname, but not used on the network prior to the Oracle
Clusterware installation.
• Private hostname and corresponding IP address on the “Private Network”.



Let identify hostnames, and network interfaces …

 Please make a table as follow to have a clear view of your RAC network architecture :
 PUBLIC NODE NAME MUST BE NAME RETURNED BY “hostname” AIX COMMAND 

Public Hostname           VIP Hostname              Private Hostname            Not Used
en ? (Public Network)     (Virtual IP, en ?)        (RAC Interconnect, en ?)    en ?
Node Name / IP            Node Name / IP            Node Name / IP              Node Name / IP

Issue the AIX command “hostname” on each node to identify default node name :

For first server ...

{node1:root}/ # hostname
node1

{node1:root}/ # ping node1


PING node1: (10.3.25.81): 56 data bytes
64 bytes from 10.3.25.81: icmp_seq=0 ttl=255 time=0 ms
64 bytes from 10.3.25.81: icmp_seq=1 ttl=255 time=0 ms
64 bytes from 10.3.25.81: icmp_seq=2 ttl=255 time=0 ms
^C
----node1 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
{node1:root}/ #



For second server ...

{node2:root}/ # hostname
node2

{node2:root}/ # ping node2


PING node2: (10.3.25.82): 56 data bytes
64 bytes from 10.3.25.82: icmp_seq=0 ttl=255 time=0 ms
64 bytes from 10.3.25.82: icmp_seq=1 ttl=255 time=0 ms
64 bytes from 10.3.25.82: icmp_seq=2 ttl=255 time=0 ms
^C
----node2 PING Statistics----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0/0/0 ms
{node2:root}/ #

Other method to check the default hostname :

{node1:root}/crs/11.1.0/bin # lsattr -El inet0


authm 65536 Authentication Methods True
bootup_option no Use BSD-style Network Configuration True
gateway Gateway True
hostname node1 Host Name True
rout6 IPv6 Route True
route net,-hopcount,0,,0,10.3.25.254 Route True
{node1:root}/crs/11.1.0/bin #

To change the default hostname :

{node1:root}/crs/11.1.0/bin # chdev -l inet0 -a hostname=node1


inet0 changed
{node1:root}/crs/11.1.0/bin #

Now, we have Public hostnames and corresponding IP’s in our table :

Public Hostname           VIP Hostname              Private Hostname            Not Used
en ? (Public Network)     (Virtual IP, en ?)        (RAC Interconnect, en ?)    en ?
node1 / 10.3.25.81        ... / ...                 ... / ...                   ... / ...
node2 / 10.3.25.82        ... / ...                 ... / ...                   ... / ...



The Oracle Clusterware VIP IP addresses and corresponding node names must not be used on the network prior to the
Oracle Clusterware installation. Don't create any AIX alias on the public network interface; the Clusterware installation
will do it. Just reserve 1 VIP and its hostname per RAC node.

Public Hostname           VIP Hostname              Private Hostname            Not Used
en ? (Public Network)     (Virtual IP, en ?)        (RAC Interconnect, en ?)    en ?
node1 / 10.3.25.81        node1-vip / 10.3.25.181   ... / ...                   ... / ...
node2 / 10.3.25.82        node2-vip / 10.3.25.182   ... / ...                   ... / ...

Oracle Clusterware VIP’s IP and corresponding nodes names can be declared in the DNS, or at minimum in the local
hosts file.

10.1.2 Identify Network Interfaces cards

As root, Issue the AIX command “ifconfig –l” to list network card on each node :

Result from node1 : Result from node2 :

{node1:root}/ # ifconfig -l                            {node2:root}/ # ifconfig -l


en0 en1 en2 lo0 en0 en1 en2 lo0
{node1:root}/ # {node2:root}/ #

{node1:root}/crs/11.1.0/bin # lsdev |grep en


en0 Available Standard Ethernet Network Interface
en1 Available Standard Ethernet Network Interface
en2 Available Standard Ethernet Network Interface
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
inet0 Available Internet Network Extension
rcm0 Defined Rendering Context Manager Subsystem
vscsi0 Available Virtual SCSI Client Adapter
{node1:root}/crs/11.1.0/bin #



As root, Issue the following shell to get necessary information from network interfaces on each node :

Result from node1 : Result from node2 :

{node1:root}/ # for i in en0 en1 en2 {node2:root}/ # for i in en0 en1 en2
do do
echo $i echo $i
for attribut in netaddr netmask broadcast for attribut in netaddr netmask broadcast
state state
do do
lsattr -El $i -a $attribut lsattr -El $i -a $attribut
done done
done done
en0 en0
netaddr 10.3.25.81 Internet Address True netaddr 10.3.25.82 Internet Address True
netmask 255.255.255.0 Subnet Mask True netmask 255.255.255.0 Subnet Mask True
broadcast Broadcast Address True broadcast Broadcast Address True
state up Current Interface Status True state up Current Interface Status True
en1 en1
netaddr 10.10.25.81 Internet Address True netaddr 10.10.25.82 Internet Address True
netmask 255.255.255.0 Subnet Mask True netmask 255.255.255.0 Subnet Mask True
broadcast Broadcast Address True broadcast Broadcast Address True
state up Current Interface Status True state up Current Interface Status True
en2 en2
netaddr 20.20.25.81 Internet Address True netaddr 20.20.25.82 Internet Address True
netmask 255.255.255.0 Subnet Mask True netmask 255.255.255.0 Subnet Mask True
broadcast Broadcast Address True broadcast Broadcast Address True
state up Current Interface Status True state up Current Interface Status True
{node1:root}/ # {node2:root}/ #

As root, Issue the AIX command “ifconfig –a” to list network card on each node :

Result example from node1 :

{node1:root}/ # ifconfig -a
en0:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en1:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255
tcp_sendspace 131072 tcp_recvspace 65536
en2:
flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFF
LOAD,CHAIN>
inet 20.20.25.81 netmask 0xffffff00 broadcast 20.20.25.255
tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node1:root}/ #



Now, we have identified the following network interfaces :

• en0 is set to be the “Public Network Interface” on all nodes.


• en0 is set to be the “VIP Network Interface” on all nodes.
• en1 is set to be the “Private Network Interface”, also named as “RAC Interconnect” on all nodes.
• en2 is set as not used

THEN, the table looks as follow :

Public Hostname           VIP Hostname              Private Hostname            Not Used
en0 (Public Network)      (Virtual IP, en0)         (RAC Interconnect, en1)     en2
node1 / 10.3.25.81        node1-vip / 10.3.25.181   ... / ...                   ... / ...
node2 / 10.3.25.82        node2-vip / 10.3.25.182   ... / ...                   ... / ...

To see details on the network interface :

{node1:root}/oracle -> lsattr -El en0
alias4 IPv4 Alias including Subnet Mask True
alias6 IPv6 Alias including Prefix Length True
arp on Address Resolution Protocol (ARP) True
authority Authorized Users True
broadcast Broadcast Address True
mtu 1500 Maximum IP Packet Size for This Device True
netaddr 10.3.25.81 Internet Address True
netaddr6 IPv6 Internet Address True
netmask 255.255.255.0 Subnet Mask True
prefixlen Prefix Length for IPv6 Internet Address True
remmtu 576 Maximum IP Packet Size for REMOTE Networks True
rfc1323 Enable/Disable TCP RFC 1323 Window Scaling True
security none Security Level True
state up Current Interface Status True
tcp_mssdflt Set TCP Maximum Segment Size True
tcp_nodelay Enable/Disable TCP_NODELAY Option True
tcp_recvspace Set Socket Buffer Space for Receiving True
tcp_sendspace Set Socket Buffer Space for Sending True
{node1:root}/oracle ->



THEN, we will get the following table with our system :

Public Hostname           VIP Hostname              RAC Interconnect            Not Used
en0 (Public Network)      (Virtual IP, en0)         (Private Network, en1)      en2
node1 / 10.3.25.81        node1-vip / 10.3.25.181   node1-rac / 10.10.25.81     ... / ...
node2 / 10.3.25.82        node2-vip / 10.3.25.182   node2-rac / 10.10.25.82     ... / ...

Within our infrastructure for the cookbook, we have the layout summarized in the table above.

10.1.3 Update hosts file
You should have the following entries on each node for “/etc/hosts”
Update/check entries in hosts file on each node

{node1:root}/ # cat /etc/hosts

# Oracle Clusterware Public nodes list
10.3.25.81 node1
10.3.25.82 node2

# Oracle Clusterware Virtual IP nodes list
10.3.25.181 node1-vip
10.3.25.182 node2-vip

# Oracle Clusterware and RAC Interconnect
10.10.25.81 node1-rac
10.10.25.82 node2-rac

NOTA : the Oracle Clusterware Virtual IP's are just free IP's available, and ONLY declared in the hosts
file of each node prior to the Oracle Clusterware installation.
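
Once the hosts file is in place on both nodes, a quick sketch to verify name resolution and reachability of the public and private names (the VIPs are left out on purpose, as they must not answer before the Oracle Clusterware installation) :

{node1:root}/ # for h in node1 node2 node1-rac node2-rac
> do
>   ping -c 2 $h
> done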



10.1.4 Defining Default gateway on public network interface

  Now, if you want to set the default gateway on the public network interface, be careful about the impact it
may have if you already have a default gateway set on a different network interface, and multiple network
interfaces not used for Oracle Clusterware and RAC purposes.

First check if default gateway is set :


Use “netstat -r” On both nodes.

{node1:root}/ # netstat -r
Routing tables
Destination Gateway Flags Refs Use If Exp Groups

Route Tree for Protocol Family 2 (Internet):


default 10.3.25.254 UG 1 73348 en0 - - =>
default 9.212.131.254 UG 0 435 en0 - -
10.3.25.0 node1 UHSb 0 0 en0 - - =>
10.3.25/24 node1 U 11 6113590 en0 - -
node1 loopback UGHS 37 1034401 lo0 - -
node1-vip loopback UGHS 8 80831 lo0 - -
10.3.25.255 node1 UHSb 0 4 en0 - -
10.10.25.0 node1-rac UHSb 0 0 en1 - - =>
10.10.25/24 node1-rac U 25 350557 en1 - -
node1-rac loopback UGHS 16 481 lo0 - -
10.10.25.255 node1-rac UHSb 0 4 en1 - -
20.20.25.0 node1-rac-b UHSb 0 0 en2 - - =>
20.20.25/24 node1-rac-b U 16 176379 en2 - -
node1-rac-b loopback UGHS 5 392 lo0 - -
20.20.25.255 node1-rac-b UHSb 0 4 en2 - -
127/8 loopback U 49 187105 lo0 - -

Route Tree for Protocol Family 24 (Internet v6):


::1 ::1 UH 0 32 lo0 - -
{node1:root}/ #

If not set :
Using “route add”, do set the default gateway on public network interface, en0 in our case, and on all nodes :
 MUST BE DONE on each node !!!

To establish a default gateway (10.3.25.254 in our case), type:

{node1:root}/ # route add 0 10.3.25.254

{node1:root}/crs/11.1.0/bin # netstat -r | grep 10.3.25.254


default 10.3.25.254 UG 2 1050 en0 - -
{node1:root}/crs/11.1.0/bin #

{node2:root}/ # route add 0 10.3.25.254

{node2:root}/ # netstat -r | grep 10.3.25.254


default 10.3.25.254 UG 1 7324 en0 - -
{node2:root}/ #

The value 0 or the default keyword for the Destination parameter means that any packets sent to destinations not
previously defined and not on a directly connected network go through the default gateway. The 10.3.25.254 address
is that of the gateway chosen to be the default.
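
Note that a route added with "route add" is not persistent across reboots. One way to make it persistent, consistent with the route attribute shown earlier by "lsattr -El inet0", is to store it in the ODM with chdev :

{node1:root}/ # chdev -l inet0 -a route=net,-hopcount,0,,0,10.3.25.254
inet0 changed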



10.1.5 Configure Network Tuning Parameters

Verify that the network tuning parameters shown in the following table are set to the values shown or higher values.
The procedure following the table describes how to verify and set the values.

Network Tuning
Parameter Recommended minimum Value on all nodes
ipqmaxlen 512
rfc1323 1
sb_max 1310720
tcp_recvspace 65536
tcp_sendspace 65536
udp_recvspace 655360
Note: The recommended value of this parameter is 10 times the value of the udp_sendspace
parameter. The value must be less than the value of the sb_max parameter.

udp_sendspace 65536
Note: This value is suitable for a default database installation. For production databases, the
minimum value for this parameter is 4 KB plus the value of the database DB_BLOCK_SIZE
initialization parameter multiplied by the value of the DB_MULTIBLOCK_READ_COUNT
initialization parameter:

(DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
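For example, with hypothetical values DB_BLOCK_SIZE = 8192 and DB_MULTIBLOCK_READ_COUNT = 16, the minimum
would be (8192 * 16) + 4096 = 135168 bytes, so the default value of 65536 would have to be raised.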

To check values …

{node1:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace


udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node1:root}/ #

{node2:root}/ # for i in ipqmaxlen rfc1323 sb_max tcp_recvspace tcp_sendspace


udp_recvspace udp_sendspace
do
no -a |grep $i
done
ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65535
tcp_sendspace = 65535
udp_recvspace = 655350
udp_sendspace = 65535
{node2:root}/ #



To change the current values to the required ones, if necessary, follow these steps :

1. If you must change the value of any parameter, enter the following command to
determine whether the system is running in compatibility mode:

# /usr/sbin/lsattr -E -l sys0 -a pre520tune

If the system is running in compatibility mode, the output is similar to the
following, showing that the value of the pre520tune attribute is enable:

pre520tune enable Pre-520 tuning compatibility mode True

 By default, with AIX5L, compatibility mode is set to false !!!
Change it to true ONLY if necessary !!!

** if you want to enable the compatibility mode, issue the following command :
# chdev -l sys0 -a pre520tune=enable

2. If the system IS running in compatibility mode, follow these steps to change the parameter values.

Enter commands similar to the following to change the value of each parameter:

# /usr/sbin/no -o parameter_name=value

For example:
# /usr/sbin/no -o udp_recvspace=655360

Add entries similar to the following to the /etc/rc.net file for each parameter that you
changed in the previous step:

if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o udp_sendspace=65536
/usr/sbin/no -o udp_recvspace=655360
/usr/sbin/no -o tcp_sendspace=65536
/usr/sbin/no -o tcp_recvspace=65536
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o sb_max=1310720
/usr/sbin/no -o ipqmaxlen=512
fi

By adding these lines to the /etc/rc.net file, the values persist when the system restarts.
THEN you need to modify and REBOOT all nodes !

If the system is NOT running in compatibility mode, the command

# /usr/sbin/lsattr -E -l sys0 -a pre520tune

will give the following:

pre520tune disable Pre-520 tuning compatibility mode True

In that case, enter commands similar to the following to change the parameter values.

To change the value of the ipqmaxlen parameter:
/usr/sbin/no -r -o ipqmaxlen=512

To change the value of each of the other parameters:
/usr/sbin/no -p -o udp_sendspace=65536
/usr/sbin/no -p -o udp_recvspace=655360
/usr/sbin/no -p -o tcp_sendspace=65536
/usr/sbin/no -p -o tcp_recvspace=65536
/usr/sbin/no -p -o rfc1323=1
/usr/sbin/no -p -o sb_max=1310720

Note: If you modify the ipqmaxlen parameter, you must restart the system.

These commands modify the /etc/tunables/nextboot file, causing the attribute values to persist when the
system restarts.



10.2 AIX Operating system level, required APAR's and filesets

To have the latest information, please refer to Metalink Note 282036.1 on
http://metalink.oracle.com, as this note includes the latest updates. Check also the certification
status on metalink.oracle.com or otn.oracle.com

AIX release supported with Oracle 11g RAC R1 as of February 3rd, 2008 :
• AIX 5L version 5.3, Maintenance Level 5 or later

To determine which version of AIX is installed, enter the following command :

{node1:root}/ # oslevel -s
5300-07-01-0748   è gives the level of TL and service pack, which means AIX 5L TL7 SP1

If the operating system version is lower than the minimum required, upgrade your operating system to this level. AIX 5L
maintenance packages are available from the following Web site :
http://www-912.ibm.com/eserver/support/fixes/



10.2.1 Filesets Requirements for 11g RAC R1 / ASM (NO HACMP)

AIX filesets required on ALL nodes for 11g RAC Release 1 implementation with
ASM !!!

Check that the required filesets are installed on the system.

AIX 5L version 5.3, Maintenance Level 6 or later :

Filesets :
• bos.adt.base
• bos.adt.lib
• bos.adt.libm
• bos.perf.libperfstat
• bos.perf.perfstat
• bos.perf.proctools
• rsct.basic.rte
• rsct.compat.clients.rte
• xlC.aix50.rte 8.0.0.5
• xlC.rte 8.0.0.5

(Note: If the PTF is not downloadable, customers should request an efix through AIX customer support.)

Specific Filesets :

For EMC Symmetrix :
• EMC.Symmetrix.aix.rte.5.2.0.1 or higher

For EMC CLARiiON :
• EMC.CLARiiON.fcp.rte.5.2.0.1 or higher

AIX 6 (Maintenance Level ... or later) : NOT YET SUPPORTED. Check Metalink and certify for the last update on
certification status.

Depending on the AIX Level that you intend to install, verify that the required filesets are installed on the system. The
following procedure describes how to check these requirements.

To ensure that the system meets these requirements, check that the required filesets are all installed,
and that xlC.aix50.rte and xlC.rte are at a minimum release of 8.0.0.4 and 8.0.0.0 :

{node1:root}/ # lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix50.rte xlC.rte

  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.adt.base               5.3.7.0  COMMITTED  Base Application Development Toolkit
  bos.adt.lib               5.3.0.60  COMMITTED  Base Application Development Libraries
  bos.adt.libm               5.3.7.0  COMMITTED  Base Application Development Math Library
  bos.perf.libperfstat       5.3.7.0  COMMITTED  Performance Statistics Library Interface
  bos.perf.perfstat          5.3.7.0  COMMITTED  Performance Statistics Interface
  bos.perf.proctools         5.3.7.0  COMMITTED  Proc Filesystem Tools
  rsct.basic.rte             2.4.8.0  COMMITTED  RSCT Basic Function
  rsct.compat.clients.rte    2.4.8.0  COMMITTED  RSCT Event Management Client Function
  xlC.aix50.rte              9.0.0.1  COMMITTED  XL C/C++ Runtime for AIX 5.2
  xlC.rte                    9.0.0.1  COMMITTED  XL C/C++ Runtime

Path: /etc/objrepos
  bos.perf.libperfstat       5.3.0.0  COMMITTED  Performance Statistics Library Interface
  bos.perf.perfstat          5.3.7.0  COMMITTED  Performance Statistics Interface
  rsct.basic.rte             2.4.8.0  COMMITTED  RSCT Basic Function
{node1:root}/ #

If a fileset is not installed and committed, then install it. Refer to your operating system or software documentation for
information about installing filesets. If fileset is only “APPLIED”, it’s not mandatory to commit it.
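If a fileset is missing, it can be installed with installp from the AIX installation media; the sketch below is only an example to adapt to your environment (the device name /dev/cd0 and the fileset list are assumptions) :

{node1:root}/ # installp -aXY -d /dev/cd0 bos.adt.base bos.adt.lib bos.adt.libm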

11gRAC/ASM/AIX oraclibm@fr.ibm.com 77 of 393


10.2.2 APAR’s Requirements for 11g RAC R1 / ASM (NO HACMP)

Check that the required APAR’s are installed on the system.

AIX Patches (APAR) required on ALL nodes for 11g RAC R1 implementation with
ASM !!!
(Note: If the PTF is not downloadable, customers should request an efix through AIX customer support.)

AIX 5L version 5.3, Maintenance Level 6 or later

APAR’s :
• IY89080
• IY92037
• IY94343
• IZ01060 or efix for IZ01060
• IZ03260 or efix for IZ03260

JDK (not mandatory for the installation; these APAR’s are required only if you are using the associated JDK version) :
• IY58350 Patch for SDK 1.3.1.16 (32-bit)
• IY63533 Patch for SDK 1.4.2.1 (64-bit)
• IY65305 Patch for SDK 1.4.2.2 (32-bit)
• IBM JDK 1.5.0.06 (IA64 - mixed mode) is installed

AIX 6 version ... : NOT YET SUPPORTED
Check Metalink and Certify for the latest update on certification status.

To ensure that the system meets these requirements, follow these steps:

This is an example for AIX5L 5.3, with TL07 :


To determine whether an APAR is installed, enter a command similar to the following :

{node1:root}/ # /usr/sbin/instfix -i -k "IY89080 IY92037 IY94343 IZ01060 IZ03260"
    All filesets for IY89080 were found.
    All filesets for IY92037 were found.
    All filesets for IY94343 were found.
    All filesets for IZ01060 were found.
    All filesets for IZ03260 were found.
{node1:root}/ #

If an APAR is not installed, download it from the following Web site and install it:
http://www-912.ibm.com/eserver/support/fixes/
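As a sketch of the installation step (the directory /tmp/fixes is only an example; fixes are usually downloaded as filesets into a local directory), an APAR can be applied with instfix from that directory :

{node1:root}/ # /usr/sbin/instfix -k IZ01060 -d /tmp/fixes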

11gRAC/ASM/AIX oraclibm@fr.ibm.com 78 of 393


10.3 System Requirements (Swap, temp, memory, internal disks)

Requirements to meet on ALL nodes !!!

RAM >= 512 MB minimum


è. Command to check the physical memory : lsattr –El sys0 –a realmem

{node1:root}/ # lsattr -El sys0 -a realmem


realmem 3145728 Amount of usable physical memory in Kbytes False
{node1:root}/ #

{node2:root}/home # lsattr -El sys0 -a realmem


realmem 3145728 Amount of usable physical memory in Kbytes False
{node2:root}/home #

Internal disk >= 12 GB for the oracle code (CRS_HOME, ASM_HOME, ORACLE_HOME)

This part will be detailed in chapter “Required local disks (Oracle Clusterware, ASM and RAC software)” !!!

{node1:root}/ # df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 262144 207816 21% 13591 23% /
/dev/hd2 4718592 2697520 43% 46201 7% /usr
/dev/hd9var 262144 233768 11% 565 2% /var
/dev/hd3 1310720 1248576 5% 255 1% /tmp
/dev/hd1 262144 261760 1% 5 1% /home
/proc - - - - - /proc
/dev/hd10opt 524288 283400 46% 5663 9% /opt
/dev/oraclelv 15728640 15506524 2% 73 1% /oracle
/dev/crslv 5242880 5124692 3% 41 1% /crs
fas960c2:/vol/VolDistribJSC 276824064 71068384 75% 144273 2% /distrib
{node1:root}/ #

{node2:root}/home # df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 262144 208532 21% 13564 23% /
/dev/hd2 4718592 2697360 43% 46201 7% /usr
/dev/hd9var 262144 236060 10% 485 1% /var
/dev/hd3 2097152 29636 99% 4812 35% /tmp
/dev/hd1 262144 261760 1% 5 1% /home
/proc - - - - - /proc
/dev/hd10opt 524288 283436 46% 5663 9% /opt
/dev/oraclelv 15728640 15540012 2% 64 1% /oracle
/dev/crslv 5242880 5119460 3% 43 1% /crs
fas960c2:/vol/VolDistribJSC 276824064 71068392 75% 144273 2% /distrib
{node2:root}/home #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 79 of 393


Paging space = 2 x RAM, with a minimum of 400 MB and a maximum of 2 GB.
è To check the paging space configured : lsps –a

{node1:root}/ # lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd6 hdisk0 rootvg 512MB 7 yes yes lv
{node1:root}/ #

{node2:root}/home # lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd6 hdisk1 rootvg 512MB 2 yes yes lv
{node2:root}/home #

Temporary Disk Space : The Oracle Universal Installer requires up to 400 MB of free space in the /tmp directory.
è To check the free temporary space available: df –k /tmp

{node1:root}/ # df -k /tmp
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd3 1310720 1248576 5% 255 1% /tmp
{node1:root}/ #

{node2:root}/home # df -k /tmp
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd3 2097152 29636 99% 4812 35% /tmp
{node2:root}/home #

è You can use another filesystem instead of /tmp.


Set the TEMP environment variable (used by crs, asm, rdbms users and/or oracle user)
and the TMPDIR environment variable to the new location.

For example :
export TEMP=/new_tmp
export TMPDIR=/new_tmp
export TMP=/new_tmp

11gRAC/ASM/AIX oraclibm@fr.ibm.com 80 of 393


10.4 Users And Groups

Two options are possible :

OPTION 1 : one unix user for each installation, for example :
• unix crs user for the CRS_HOME installation
• unix asm user for the ASM_HOME installation
• unix rdbms user for the ORACLE_HOME installation

OPTION 2 : one unix user for all installations, for example the oracle unix user for :
• CRS_HOME installation
• ASM_HOME installation
• ORACLE_HOME installation

For the cookbook purpose, and for ease of administration, we’ll implement the first option with crs, asm and
rdbms users.

We’ll also create an oracle user to own /oracle ($ORACLE_BASE), which will be shared by asm (/oracle/asm) and
rdbms (/oracle/rdbms).

This oracle user (UID=500) will be part of the following groups : oinstall, dba, oper, asm, asmdba.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 81 of 393


This setup has to be done on all the nodes of the cluster.
Default user home for crs, asm and rdbms users MUST be in /home, otherwise you may have trouble
with ssh setup (user equivalence).

To create the following groups, and users :


Be sure that all the groups and user numbers are identical thru the nodes.

On node1 …

mkgroup -'A' id='500' adms='root' oinstall


mkgroup -'A' id='501' adms='root' crs
mkgroup -'A' id='502' adms='root' dba
mkgroup -'A' id='503' adms='root' oper
mkgroup -'A' id='504' adms='root' asm
mkgroup -'A' id='505' adms='root' asmdba
mkuser id='500' pgrp='oinstall' groups='crs,dba,oper,asm,asmdba,staff'
home='/home/oracle' oracle
mkuser id='501' pgrp='oinstall' groups='crs,dba,oper,staff' home='/home/crs' crs
mkuser id='502' pgrp='oinstall' groups='dba,oper,asm,asmdba,staff' home='/home/asm'
asm
mkuser id='503' pgrp='oinstall' groups='dba,oper,staff' home='/home/rdbms' rdbms

THEN On node2 …
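Once the users exist on all nodes, a quick way to verify that the numeric IDs and group memberships really are identical on every node (a simple check, not part of the original procedure) is :

{node1:root}/ # for u in oracle crs asm rdbms
> do
> lsuser -a id pgrp groups $u
> done

Run the same loop on node2 and compare the output line by line.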

11gRAC/ASM/AIX oraclibm@fr.ibm.com 82 of 393


The crs, asm and rdbms users must have oinstall as their primary group and dba as a secondary group.

Verification: check if the file /etc/group contains lines such as :


(the numbers could be different)

{node1:root}/ # grep -E "oinstall|crs|dba|oper|asm" /etc/group


staff:!:1:ipsec,sshd,oracle,crs,asm,rdbms
oinstall:!:500:oracle,crs,asm,rdbms
crs:!:501:oracle,crs
dba:!:502:oracle,crs,asm,rdbms
oper:!:503:oracle,crs,asm,rdbms
asm:!:504:oracle,asm
asmdba:!:505:oracle,asm
{node1:root}/ #

{node1:root}/ # su - crs
{node1:crs}/crs # id
uid=501(crs) gid=500(oinstall) groups=1(staff),501(crs),502(dba),503(oper)
{node1:crs}/crs #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # id
uid=502(asm) gid=500(oinstall) groups=1(staff),502(dba),503(oper),504(asm),505(asmdba)
{node1:asm}/oracle/asm #

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms # id
uid=503(rdbms) gid=500(oinstall) groups=1(staff),502(dba),503(oper)
{node1:rdbms}/oracle/rdbms #

{node1:root}/ # su - oracle
{node1:oracle}/oracle # id
uid=500(oracle) gid=500(oinstall)
groups=1(staff),501(crs),502(dba),503(oper),504(asm),505(asmdba)
{node1:oracle}/oracle #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 83 of 393


Set a password for the crs, asm and rdbms users. Use the same password on all the nodes of the cluster,
with the following command (shown here on node1; repeat the same commands on node2) :

{node1:root}/ # passwd crs
Changing password for "crs"
crs's New password:
Enter the new password again:
{node1:root}/ #

{node1:root}/ # passwd asm
Changing password for "asm"
asm's New password:
Enter the new password again:
{node1:root}/ #

{node1:root}/ # passwd rdbms
Changing password for "rdbms"
rdbms's New password:
Enter the new password again:
{node1:root}/ #

Then connect at least once to each node with each user (crs, asm, rdbms) to validate the password,
as the first connection will ask to change the password :

{node1:root}/ # telnet node1
…….
User : crs
Password : *****
……

Do the same with each user on node2 ({node2:root}/ # telnet node2).

11gRAC/ASM/AIX oraclibm@fr.ibm.com 84 of 393


10.5 Kernel and Shell Limits

Configuring Shell Limits, and System Configuration (Extract from Oracle Documentation)

Note:

The parameter and shell limit values shown in this section are minimum recommended
values only. For production database systems, Oracle recommends that you tune these values
to optimize the performance of the system. Refer to your operating system documentation for
more information about tuning kernel parameters.

Oracle recommends that you :

• set shell limits for root and oracle users,


• set system configuration parameters as described in this section on all cluster nodes for root users.

10.5.1 Configure Shell Limits

Verify that the shell limits shown in the following table are set to the values shown. The procedure following the table
describes how to verify and set the values.

Shell Limit (as shown in smit)   Recommended value for the oracle user           Recommended value for root user
                                 (in our case : crs, asm and rdbms users)

Soft FILE size                   -1 (Unlimited)                                  -1 (Unlimited)
Soft CPU time                    -1 (Unlimited)                                  -1 (Unlimited)
                                 Note: This is the default value.                Note: This is the default value.
Soft DATA segment                -1 (Unlimited)                                  -1 (Unlimited)
Soft STACK size                  -1 (Unlimited)                                  -1 (Unlimited)

To view the current value specified for these shell limits, and to change them if necessary, follow these steps :

1. Enter the following command:
   # smit chuser

2. In the User NAME field, enter the user name of the Oracle software owner, for example crs, asm or rdbms.

3. Scroll down the list and verify that the value shown for the soft limits listed in the previous table is -1.
   If necessary, edit the existing value.

4. When you have finished making changes, press F10 to exit.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 85 of 393


OR for root and oracle user on each node, to check thru ulimit command :

{node1:root}/ # ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
{node1:root}/

{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
{node1:crs}/crs/11.1.0 #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm/11.1.0 # ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
{node1:asm}/oracle/asm/11.1.0 #

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms/11.1.0 # ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
{node1:rdbms}/oracle/rdbms/11.1.0 #

Add the following lines to the /etc/security/limits file on each node (unlimited is set with “-1”) :

default:
fsize = -1
core = -1
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

Or set at minimum the Oracle specified values, as follows :

default:
fsize = -1
core = -1
cpu = -1
data = 512000
rss = 512000
stack = 512000
nofiles = 2000
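Instead of editing /etc/security/limits by hand, the same limits can be set per user with chuser; this is only a sketch using the unlimited values above (adapt the user list and values to your environment) :

{node1:root}/ # for u in oracle crs asm rdbms
> do
> chuser fsize=-1 cpu=-1 data=-1 stack=-1 rss=-1 nofiles=-1 $u
> done

The change takes effect at the next login of each user.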

11gRAC/ASM/AIX oraclibm@fr.ibm.com 86 of 393


10.5.2 Set crs, asm and rdbms users capabilities

Add the capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM and CAP_PROPAGATE to the crs, asm, rdbms
and oracle users.

To set crs, asm and rdbms user “capabilities” on each node :

{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs


{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node1:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs


{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE asm
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE rdbms
{node2:root}/ # chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

To check user “capabilities” on each node, with crs user for example :

{node1:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities


capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node1:root}/crs/11.1.0/bin #

{node2:root}/crs/11.1.0/bin # lsuser -f crs | grep capabilities


capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{node2:root}/crs/11.1.0/bin #

10.5.3 Set NCARGS parameter

Change the “ncargs” parameter to 128

To set “ncargs” attribute on each node :

{node1:root}/ # chdev -l sys0 -a ncargs=128


sys0 changed
{node1:root}/

{node2:root}/ # chdev -l sys0 -a ncargs=128


sys0 changed
{node2:root}/

To check “ncargs” attribute on each node :

{node1:root}/crs/11.1.0/bin # lsattr -El sys0 -a ncargs


ncargs 128 ARG/ENV list size in 4K byte blocks True
{node1:root}/crs/11.1.0/bin #

{node2:root}/crs/11.1.0/bin # lsattr -El sys0 -a ncargs


ncargs 128 ARG/ENV list size in 4K byte blocks True
{node2:root}/crs/11.1.0/bin #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 87 of 393


10.5.4 Configure System Configuration Parameters

Verify that the maximum number of processes allowed per user is set to 16384 or greater :

Note:
For production systems, this value should be at least 128 plus the sum of the PROCESSES and
PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

To check the value :

{node1:root}/ # lsattr -El sys0 -a maxuproc
maxuproc 1024 Maximum number of PROCESSES allowed per user True
{node1:root}/ #

To edit and modify the value (setting it to 16384) :

{node1:root}/ # chdev -l sys0 -a maxuproc='16384'
{node1:root}/ #

{node1:root}/ # lsattr -El sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
{node1:root}/ #

OR

1. Enter the following command:

# smit chgsys

2. Verify that the value shown for Maximum number of PROCESSES allowed per user
is greater than or equal to 16384.

If necessary, edit the existing value.

3. When you have finished making changes, press F10 to exit.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 88 of 393


10.5.5 Lru_file_repage setting

Verify that the lru_file_repage parameter is set to 0 :

The default value is "1", but it is recommended to set this to "0". This setting hints to the VMM to only steal file
pages (from the AIX file buffer cache) and leave the computational pages (from the SGA) alone.
By setting "lru_file_repage=0", AIX only frees file cache memory. This guarantees working storage stays in memory,
and allows file cache to grow.

So in the past you might have set maxclient at 20% for database servers. Today you could set maxclient at 90% and
"lru_file_repage=0". The exact setting will vary based on your application and amount of memory. Contact IBM Support
if you need help determining the optimum setting.

This new lru_file_repage parameter is only available on AIX 5.2 ML04+ and AIX 5.3 ML01+

To check the value on each node …

{node1:root}/ # vmo -L lru_file_repage


NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
{node1:root}/ #

Change the value to 0 on each node :

{node1:root}/ # vmo -p -o lru_file_repage=0


Setting lru_file_repage to 0 in nextboot file
Setting lru_file_repage to 0
{node1:root}/ #

{node1:root}/ # vmo -L lru_file_repage


NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
lru_file_repage 0 1 0 0 1 boolean D
--------------------------------------------------------------------------------
{node1:root}/ #

Typical vmo settings for Oracle :

– lru_file_repage=0 (default=1) (AIX 5.2 ML04 or later)


• Forces file pages to be repaged before computational pages

– minperm%=5 (default 20)


• Target for minimum % of physical memory to be used for file system cache

- maxperm%=90 ( default 80)


- strict_maxperm = 1
- strict_maxclient = 1
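To apply the typical settings listed above in one pass, a minimal sketch is shown below (the exact values must be validated for your workload; strict_maxperm and strict_maxclient are deliberately left out here because the nextboot example that follows uses different values than the bullets above) :

{node1:root}/ # vmo -p -o lru_file_repage=0 -o minperm%=5 -o maxperm%=90 -o maxclient%=90

The -p flag changes the value immediately and records it in /etc/tunables/nextboot so it survives a reboot.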

11gRAC/ASM/AIX oraclibm@fr.ibm.com 89 of 393


Content of {node1:root}/ # cat /etc/tunables/nextboot
/etc/tunables/nextboot "/etc/tunables/nextboot" 47 lines, 1001 characters
# IBM_PROLOG_BEGIN_TAG
For node1 : # This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/perf/tune/nextboot 1.1
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 2002
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG

vmo:
strict_maxperm = "0"
minperm% = "5"
maxperm% = "90"
maxclient% = "90"
lru_file_repage = "0"
strict_maxclient = "1"
minfree = "3000"
maxfree = "4000"

nfso:
nfs_rfc1323 = "1"

no:
ipqmaxlen = "512"
udp_sendspace = "65536"
udp_recvspace = "655360"
tcp_sendspace = "262144"
tcp_recvspace = "262144"
sb_max = "1310720"
rfc1323 = "1"

ioo:
pv_min_pbuf = "512"
numfsbufs = "2048"
maxrandwrt = "32"
maxpgahead = "16"
lvm_bufcnt = "64"
j2_nBufferPerPagerDevice = "1024"
j2_maxRandomWrite = "32"
j2_maxPageReadAhead = "128"
j2_dynamicBufferPreallocation = "256"

{node1:root}/ #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 90 of 393


10.5.6 Asynchronous I/O setting

Setting Asynchronous I/O :

To check the value on each node …

{node1:root}/ # smitty aio

Select "Change / Show Characteristics of Asynchronous I/O"

Set STATE to “available”

OR

{node1:root}/ # lsattr -El aio0


autoconfig available STATE to be configured at system restart True
fastpath enable State of fast path True
kprocprio 39 Server PRIORITY True
maxreqs 16384 Maximum number of REQUESTS True
maxservers 300 MAXIMUM number of servers per cpu True
minservers 150 MINIMUM number of servers True
{node1:root}/ #
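The same change can be made from the command line instead of smitty; this is a sketch assuming the legacy aio0 device of AIX 5L (on AIX 6.1 AIO is always available and this step is not needed) :

{node1:root}/ # chdev -l aio0 -a autoconfig='available'
{node1:root}/ # mkdev -l aio0

Setting autoconfig to available also activates the subsystem automatically at the next reboot.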

11gRAC/ASM/AIX oraclibm@fr.ibm.com 91 of 393


10.6 User equivalences

Before installing Oracle Clusterware, the ASM software and the Real Application Clusters software, you must configure
user equivalence for the crs, asm and rdbms users on all cluster nodes.

You have two type of User Equivalence implementation :

• RSH (Remote shell)


• SSH (Secured Shell)

When SSH is not available, the Installer uses the rsh and rcp commands instead of ssh and scp.

You have to choose one or the other, but don’t implement both at the same time. 
è Usually, customers implement SSH. If SSH is installed and started, do configure SSH.

On the public node name returned by the AIX command “hostname” :

node1 must ssh or rsh to node1, as crs, asm and rdbms user.
node1 must ssh or rsh to node2, as crs, asm and rdbms user.
node2 must ssh or rsh to node1, as crs, asm and rdbms user.
node2 must ssh or rsh to node2, as crs, asm and rdbms user.
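A quick way to check all of these combinations at once (a simple sketch, to be run once SSH or RSH has been configured as described in the next sections) is a loop like the following, run as crs, asm and then rdbms on each node :

{node1:crs}/home/crs # for n in node1 node2
> do
> ssh $n date        # or: rsh $n date
> done

Each command must return only the date, with no password prompt and no extra output.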

11gRAC/ASM/AIX oraclibm@fr.ibm.com 92 of 393


10.6.1 RSH implementation

Set up user equivalence for the root, crs, asm and rdbms accounts, to enable the rsh, rcp and rlogin commands.
You should have the entries on each node in /etc/hosts, /etc/hosts.equiv and in each user’s home directory
($HOME/.rhosts).

/etc/hosts.equiv

Update/check entries in the hosts.equiv file on each node :

{node1:root}/ # pg /etc/hosts.equiv
....
node1 root
node2 root
node1 crs
node2 crs
....
{node1:root}/ #

$HOME/.rhosts

Update/check entries in the .rhosts file on each node, for the root user :

{node1:root}/ # su - root
{node1:root}/ # cd
{node1:root}/ # pg $HOME/.rhosts
node1 root
node2 root
{node1:root}/ #

Update/check entries in the .rhosts file on each node, for the crs, asm and rdbms users :

{node1:root}/ # su - crs
{node1:crs}/crs # pg .rhosts
node1 root
node2 root
node1 crs
node2 crs
{node1:crs}/crs #

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # pg .rhosts
node1 root
node2 root
node1 asm
node2 asm
{node1:asm}/oracle/asm #

{node1:root}/ # su - rdbms
{node1:rdbms}/oracle/rdbms # pg .rhosts
node1 root
node2 root
node1 rdbms
node2 rdbms
{node1:rdbms}/oracle/rdbms #

Note : It is possible, but not advised because of security reasons, to put a “+” in hosts.equiv and .rhosts files.

Test if the user equivalence is correctly set up (node2 is the second cluster machine).
You are logged on node1 as root. Test for the crs, asm and rdbms users as well !!!

{node1:root}/ # rsh node2             (=> no password prompted)
{node2:root}/ # rcp /tmp/toto node1:/tmp/toto
{node2:root}/ # su - crs
{node2:crs}/crs/11.1.0 # rsh node1 date
Mon Apr 23 17:26:27 DFT 2007
{node2:crs}/crs/11.1.0 #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 93 of 393


10.6.2 SSH implementation

Before you install and use Oracle Real Application Clusters, you must configure secure shell (SSH) for the crs, asm
and rdbms users on all cluster nodes. Oracle Universal Installer uses the ssh and scp commands during installation to
run remote commands on and copy files to the other cluster nodes. You must configure SSH so that these commands
do not prompt for a password.

Note:

This section describes how to configure OpenSSH version 3. If SSH is not available, then
Oracle Universal Installer attempts to use rsh and rcp instead.
To determine if SSH is running, enter the following command:
$ ps -ef | grep sshd

If SSH is running, then the response to this command is process ID numbers. To find out more
about SSH, enter the following command:
$ man ssh

 For each user, crs, asm and rdbms repeat the next steps :
Configuring SSH on Cluster Member Nodes

To configure SSH, you must first create RSA and DSA keys on each cluster node, and then copy the keys from all
cluster node members into an authorized keys file on each node. To do this task, complete the following steps:

Create RSA and DSA keys on each node. Complete the following steps on each node :

1. Log in as the oracle user (in our case : crs, asm or rdbms).

2. If necessary, create the .ssh directory in the user's home directory and set
   the correct permissions on it:
   $ mkdir ~/.ssh
   $ chmod 700 ~/.ssh

3. Enter the following commands to generate an RSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t rsa

4. At the prompts:
Accept the default location for the key file.
Enter and confirm a pass phrase that is different from the oracle user's password.
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to
anyone.

Enter the following commands to generate a DSA key for version 2 of the SSH
protocol:
$ /usr/bin/ssh-keygen -t dsa

5. At the prompts:
Accept the default location for the key file
Enter and confirm a pass phrase that is different from the oracle user's password
This command writes the public key to the ~/.ssh/id_dsa.pub file and the private
key to the ~/.ssh/id_dsa file. Never distribute the private key to anyone.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 94 of 393


Add keys to an authorized key file. Complete the following steps :

1. On the local node, determine if you have an authorized key file
   (~/.ssh/authorized_keys). If the authorized key file already exists, then proceed
   to step 2. Otherwise, enter the following commands:
   $ touch ~/.ssh/authorized_keys
   $ cd ~/.ssh
   $ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.

2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub
   files to the file ~/.ssh/authorized_keys, and provide the Oracle user password
   as prompted. This process is illustrated in the following syntax example with a
   two-node cluster, with nodes node1 and node2, where the Oracle user path is
   /home/oracle:

   Note: Repeat this process for each node in the cluster !!!
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1's password:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@node1 .ssh$ ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node2's password:
[oracle@node1 .ssh$ ssh node2 cat /home/oracle/.ssh/id_dsa.pub >>authorized_keys
oracle@node2's password:

3. Use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file
to the Oracle user .ssh directory on a remote node. The following example is with
SCP, on a node called node2, where the Oracle user path is /home/oracle:
[oracle@node1 .ssh]scp authorized_keys node2:/home/oracle/.ssh/

4. Repeat step 2 and 3 for each cluster node member. When you have added keys
from each cluster node member to the authorized_keys file on the last node you
want to have as a cluster node member, then use SCP to copy the complete
authorized_keys file back to each cluster node member

Note:

The Oracle user's /.ssh/authorized_keys file on every node must


contain the contents from all of the /.ssh/id_rsa.pub and
/.ssh/id_dsa.pub files that you generated on all cluster nodes.
5. Change the permissions on the Oracle user's /.ssh/authorized_keys file on all
cluster nodes:

$ chmod 600 ~/.ssh/authorized_keys

At this point, if you use ssh to log in to or run a command on another node, you are prompted for the pass
phrase that you specified when you created the DSA key.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 95 of 393


Enabling SSH User Equivalency on Cluster Member Nodes

To enable Oracle Universal Installer to use the ssh and scp commands without being prompted for a
pass phrase, follow these steps :

1. On the system where you want to run Oracle Universal Installer, log in as the oracle user.

2. Enter the following commands:
   $ exec /usr/bin/ssh-agent $SHELL
   $ /usr/bin/ssh-add

3. At the prompts, enter the pass phrase for each key that you generated.
   If you have configured SSH correctly, then you can now use the ssh or scp
   commands without being prompted for a password or a pass phrase.

4. If you are on a remote terminal, and the local node has only one visual (which is
typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells
$ export DISPLAY=hostname:0

C shell:
$ setenv DISPLAY hostname:0

For example, if you are using the Bash shell, and if your hostname is node1, then enter
the following command:
$ export DISPLAY=node1:0

5. To test the SSH configuration, enter the following commands from the same
terminal session, testing the configuration of each cluster node, where nodename1,
nodename2, and so on, are the names of nodes in the cluster:
$ ssh nodename1 date
$ ssh nodename2 date

These commands should display the date set on each node.


If any node prompts for a password or pass phrase, then verify that the
~/.ssh/authorized_keys file on that node contains the correct public keys.
If you are using a remote client to connect to the local node, and you see a message
similar to "Warning: No xauth data; using fake authentication data for X11 forwarding,"
then this means that your authorized keys file is configured correctly, but your ssh
configuration has X11 forwarding enabled. To correct this, proceed to step 6.

Note:
The first time you use SSH to connect to a node from a particular system,
you may see a message similar to the following:

The authenticity of host 'node1 (140.87.152.153)' can't be established.


RSA key fingerprint is 7z:ez:e7:f6:f4:f2:4f:8f:9z:79:85:62:20:90:92:z9.
Are you sure you want to continue connecting (yes/no)?

Enter yes at the prompt to continue. You should not see this message again
when you connect from this system to that node.
If you see any other messages or text, apart from the date, then the
installation can fail. Make any changes required to ensure that only the date
is displayed when you enter these commands.
You should ensure that any parts of login scripts that generate any output, or
ask any questions, are modified so that they act only when the shell is an
interactive shell.

6. To ensure that X11 forwarding will not cause the installation to fail, create a
user-level SSH client configuration file for the Oracle software owner user, as
follows:

a. Using any text editor, edit or create the ~oracle/.ssh/config file.


11gRAC/ASM/AIX oraclibm@fr.ibm.com 96 of 393
b. Make sure that the ForwardX11 attribute is set to no. For example:
Host *
ForwardX11 no

7. You must run Oracle Universal Installer from this session or remember to
repeat steps 2 and 3 before you start Oracle Universal Installer from a
different terminal session.

Preventing Oracle Clusterware Installation Errors Caused by stty Commands

During an Oracle Clusterware installation, Oracle Universal Installer uses SSH (if available) to run commands and copy
files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause
installation errors if they contain stty commands.

To avoid this problem, you must modify these files to suppress all output on STDERR, as in the
following examples :

• Bourne, Bash, or Korn shell:

  if [ -t 0 ]; then
    stty intr ^C
  fi

• C shell:

  test -t 0
  if ($status == 0) then
    stty intr ^C
  endif

11gRAC/ASM/AIX oraclibm@fr.ibm.com 97 of 393


10.7 Time Server Synchronization

Time Synchronisation :  MANDATORY 

To ensure that RAC operates efficiently, you must synchronize the system time on all cluster nodes.

Oracle recommends that you use xntpd for this purpose. xntpd is a complete implementation of the
Network Time Protocol (NTP) version 3 standard and is more accurate than timed.

To configure xntpd, follow these steps on each cluster node :

1 / Enter the following command to create the required files, if necessary :

# touch /etc/ntp.drift /etc/ntp.trace /etc/ntp.conf

2 / Using any text editor, edit the /etc/ntp.conf file :

# vi /etc/ntp.conf

3 / Add entries similar to the following to the file :

server ip_address1
server ip_address2
server ip_address3

For example :

# Sample NTP Configuration file
# Specify the IP Addresses of three clock server systems.
timeserver1 10.3.25.101
timeserver2 10.3.25.102
timeserver3 10.3.25.103

# Most of the routers are broadcasting NTP time information. If your
# router is broadcasting, then the following line enables xntpd
# to listen for broadcasts.
broadcastclient

# Write clock drift parameters to a file. This enables the system
# clock to quickly synchronize to the true time on restart.
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace

4 / To start xntpd, follow these steps :

A - Enter the following command: # /usr/bin/smitty xntpd

B - Choose Start Using the xntpd Subsystem, then choose BOTH.
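Once xntpd is started on all nodes, you can verify that it is running and actually synchronizing (a simple check; the peers listed depend on your own ntp.conf) :

{node1:root}/ # lssrc -s xntpd
{node1:root}/ # ntpq -p

lssrc should report the subsystem as active, and ntpq -p should list your time servers with a non-zero reach value.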

11gRAC/ASM/AIX oraclibm@fr.ibm.com 98 of 393


10.8 crs environment setup

Oracle environment : $HOME/.profile file in Oracle’s home directory

To be done on each node :

export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S        # (S for system-wide thread scope)
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022

if [ -t 0 ]; then
stty intr ^C
fi

Notes:

On AIX, when using multithreaded applications or LAN-free, especially when running on machines with multiple
CPUs, we strongly recommend setting AIXTHREAD_SCOPE=S in the environment before starting the application,
for better performance and more solid scheduling.

For example:

export AIXTHREAD_SCOPE=S

Setting AIXTHREAD_SCOPE=S means that user threads created with default attributes will be placed into
system-wide contention scope. If a user thread is created with system-wide contention scope, it is bound to a
kernel thread and it is scheduled by the kernel. The underlying kernel thread is not shared with any other user
thread.

AIXTHREAD_SCOPE (AIX 4.3.1 and later)

Purpose:
Controls contention scope. P signifies process-based contention scope (M:N). S signifies system-based
contention scope (1:1).

Values:
Possible Values: P or S. The default is P.

Display:
echo $AIXTHREAD_SCOPE (this is turned on internally, so the initial default value will not be seen with the
echo command)

Change:
AIXTHREAD_SCOPE={P|S}export AIXTHREAD_SCOPE Change takes effect immediately in this shell.
Change is effective until logging out of this shell. Permanent change is made by adding the
AIXTHREAD_SCOPE={P|S} command to the /etc/environment file.

Diagnosis:
If fewer threads are being dispatched than expected, then system scope should be tried.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 99 of 393


10.9 Oracle Software Requirements

Link to download code from : http://otn.oracle.com/


http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201aixsoft.html

Oracle CD’s needed for the RAC installation :

Oracle Database 11g Release 1 (11.1.0.6.0) Enterprise/Standard Edition for AIX5L - Disk1
http://download.oracle.com/otn/aix/oracle11g/aix.ppc64_11gR1_database_disk1.zip
(650,429,692 bytes) (cksum - 2885574993)

Oracle Database 11g Release 1 (11.1.0.6.0) Enterprise/Standard Edition for AIX5L-Disk2


http://download.oracle.com/otn/aix/oracle11g/aix.ppc64_11gR1_database_disk2.zip
(1,867,281,765 bytes) (cksum - 2533343969)

Clusterware and Database Patchset needed :
Application of a patchset to the database software depends on whether your application has been
tested with it; it’s your choice, your decision.
Best is to implement the last patchset for the Oracle Clusterware and ASM software.
If available, and as needed for your project ...

Extra Clusterware Patches needed :
If available, and as needed for your project ...

Extra ASM Patches needed :
• 6468666 Oracle Database Family: Patch
  HIGH REDUNDANCY DISKGROUPS DO NOT TOLERATE LOSS OF DISKS IN TWO FAILGROUPS
• 6494961 Oracle Database Family: Patch
  ASM DISKGROUP MOUNT INCURS ORA_600 KFCEMK10
If available, and as needed for your project ...

Extra Database Patches needed :
If available, and as needed for your project ...

Directions to extract contents of .gz files :

1. Unzip the file: gunzip <filename>


2. Extract the file: cpio -idcmv < <filename>
3. Installation guides and general Oracle Database 11g documentation can be found on OTN (http://otn.oracle.com/).
4. Review the certification matrix for this product here.
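Note that the 11g distribution files listed above are .zip archives; assuming the unzip utility is available on the node, they can be extracted directly (the staging directory /distrib/11gR1 is just an example) :

{node1:root}/ # cd /distrib/11gR1
{node1:root}/distrib/11gR1 # unzip /distrib/aix.ppc64_11gR1_database_disk1.zip

The gunzip/cpio steps above apply to the .cpio.gz distribution format.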

11gRAC/ASM/AIX oraclibm@fr.ibm.com 100 of 393


11 PREPARING STORAGE
Storage is to be prepared for :

• Local disks on each node to install the Oracle Clusterware, ASM and RAC software è MANDATORY

• Shared disks for Oracle clusterware disks è MANDATORY

11gRAC/ASM/AIX oraclibm@fr.ibm.com 101 of 393


• Shared disk for Oracle ASM Instances parameters (SPFILE) è Recommended but not MANDATORY

• Shared disks for Oracle ASM disks è MANDATORY

11gRAC/ASM/AIX oraclibm@fr.ibm.com 102 of 393


11.1 Required local disks (Oracle Clusterware, ASM and RAC software)

The Oracle code (Clusterware, ASM and RAC) can be located on an internal disk and propagated to the other machines
of the cluster. The Oracle Universal Installer manages the cluster-wide installation, which is done only once. Regular file
systems are used for the Oracle code.

NOTA : You can also use virtual I/O disks for :

• AIX5L operating system


• Oracle clusterware ($CRS_HOME)
• Oracle ASM ($ASM_HOME)
• Oracle RAC Software ($ORACLE_HOME)

11gRAC/ASM/AIX oraclibm@fr.ibm.com 103 of 393


On each node, create a volume group “oraclevg”, or use available space from ‘rootvg’ to create the
Logical Volumes …

To list the internal disks :


{node1:root}/ # lsdev -Cc disk | grep SCSI
hdisk0 Available Virtual SCSI Disk Drive

Create a volume group called oraclevg :


{node1:root}/ # mkvg -f -y'oraclevg' -S hdisk0

On each node “/crs” must be created to host :

• Oracle Clusterware Software

“/crs” must be about 4 to 6 GB size.


“/crs” must be owned by oracle:oinstall or by crs:oinstall

On node 1 …

Create a 6GB file system /crs in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size='6G' -m'/crs' -A'yes' -p'rw'


{node1:root}/ # mount /crs
{node1:root}/ # chown -R crs:oinstall /crs

THEN On node 2 …

On each node “/oracle” must be created to host :

• Oracle Automated Storage Management (ASM) on


“/oracle/asm”
• Oracle RAC software on “/oracle/rdbms”

“/oracle” must be about 10 to 12 GB size.


“/oracle” must be owned by oracle:oinstall
“/oracle/asm” must be owned by asm:oinstall or oracle:oinstall
“/oracle/rdbms” must be owned by rdbms:oinstall or oracle:oinstall

On node 1 …

Create a 12GB file system /oracle in the previous volume group (large file enabled) :

{node1:root}/ # crfs -v jfs2 -g'oraclevg' -a size='12G' -m'/oracle' -A'yes' -p'rw'


{node1:root}/ # mount /oracle
{node1:root}/ # chown oracle:oinstall /oracle
{node1:root}/ # mkdir /oracle/asm /oracle/rdbms
{node1:root}/ # chown -R asm:oinstall /oracle/asm
{node1:root}/ # chown -R rdbms:oinstall /oracle/rdbms

THEN On node 2 …  THEN do the same for each extra node.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 104 of 393


11.2 Oracle Clusterware Disks (OCR and Voting Disks)

The Oracle Cluster Registry (OCR) stores cluster and database configuration information. You must have a shared
raw device containing at least 256 MB of free space that is accessible from all of the nodes in the cluster.

Either the OCR is protected by an external mechanism (storage mirroring), OR the OCR is mirrored by Oracle Clusterware.

The Oracle Clusterware voting disk contains cluster membership information and arbitrates cluster ownership
among the nodes of your cluster in the event of network failures. You must have a shared raw device containing at
least 256 MB of free space that is accessible from all of the nodes in the cluster.

Either the voting disk is protected by an external mechanism, OR the voting disks are protected by Oracle Clusterware.
In that case there is always an odd number of copies : 1, 3, 5 and so on … 3 is sufficient in most cases.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 105 of 393


11.2.1 Required LUN’s

Using the storage administration console, you have to create :

Either
• 2 LUNs for OCR disks (300MB size)
• 3 LUNs for Voting disks (300MB size)
to implement normal redundancy of Oracle clusterware, to protect OCR and Voting disks.

or
• 1 LUN for the OCR disk (300MB size)
• 1 LUN for the Voting disk (300MB size)
to implement external redundancy of Oracle Clusterware, and let the storage mirroring mechanism protect the OCR
and Voting disks.

The following table shows the LUN mapping for the nodes used in our cluster. The LUNs for the OCR and Voting disks
have ids 1 to 5. These ID’s will help us to identify which hdisk will be used.

Disks LUN’s ID Number LUN’s Size


OCR1 L1 300 MB
OCR2 L2 300 MB
Voting1 L3 300 MB
Voting2 L4 300 MB
Voting3 L5 300 MB

11gRAC/ASM/AIX oraclibm@fr.ibm.com 106 of 393


11.2.2 How to Identify if a LUN is used or not ?

Get the List of the hdisks on node1, for example :

On node 1 …

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0 00cd7d3e6d2fa8db rootvg active
hdisk1 none None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none Nsd
hdisk8 none Nsd1
hdisk9 none Nsd2
hdisk10 none Nsd3
hdisk11 none Nsd4
hdisk12 none None
hdisk13 none None
hdisk14 none None
{node1:root}/ #

è NO PVID is assigned apart from the rootvg hdisk
11gRAC/ASM/AIX oraclibm@fr.ibm.com 107 of 393


When an hdisk is marked rootvg, the header of the disk might look like this.

Use the following command to read the hdisk header, on node 1 … for hdisk0 :

{node1:root}/ # lspv | grep rootvg
hdisk0 00cd7d3e6d2fa8db rootvg active
{node1:root}/ #
{node1:root}/ # lquerypv -h /dev/rhdisk0
00000000 C9C2D4C1 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00CD7D3E 6D2FA8DB 00000000 00000000 |..}>m/..........|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #

On node 2 …

11gRAC/ASM/AIX oraclibm@fr.ibm.com 108 of 393


When an hdisk is marked None instead of rootvg, it’s important to check that it’s not used at all.

Use the following command to read the hdisk header.

If all lines are ONLY full of “0”, THEN the hdisk is free to be used.

If it’s not the case, first check which LUN is mapped to the hdisk (next pages); if it is the LUN you
should use, you must then “dd” (zero) the corresponding rhdisk device (/dev/rhdisk2 in this example).

On node 1 … for hdisk2 :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #
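A minimal sketch of the zeroing step mentioned above, assuming hdisk2 really is the intended, unused LUN (double-check before running, as this destroys any data on the disk) :

{node1:root}/ # dd if=/dev/zero of=/dev/rhdisk2 bs=1024k count=10

Clearing the first few megabytes is enough to wipe the old header shown by lquerypv.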

On node 2 …

BUT BE CAREFUL !!! An hdisk that is not marked “None” in the lspv output (an hdisk used by GPFS, for example)
may also show a blank header, so make sure the hdisk has the “None” value in the lspv output before considering it free.

List of all hdisks not marked “rootvg” or “None” on node 1 :

On node 1 …

{node1:root}/ # lspv
...
hdisk7 none Nsd
hdisk8 none Nsd1
hdisk9 none Nsd2
hdisk10 none Nsd3
hdisk11 none Nsd4
...
{node1:root}/ #

Use the following command to read the hdisk header, on node 1 … for hdisk7 :

All lines are ONLY full of “0”, BUT the hdisk is NOT free to be used, because it is used by IBM GPFS in our example.

{node1:root}/ # lquerypv -h /dev/rhdisk7
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 109 of 393

11.2.3 Register LUN’s at AIX level

Before registration of the LUN’s :

On node 1 …

{node1:root}/ # lspv
hdisk0 00cd7d3e6d2fa8db rootvg active
{node1:root}/ #

On node 2 …

{node2:root}/ # lspv
hdisk1 00cd7d3e7349e441 rootvg active
{node2:root}/ #

As root on each node, update the ODM repository using the following command : "cfgmgr"

You need to register and identify LUN's at AIX level, and LUN's will be mapped to hdisk and registered in the AIX ODM.

After registration of the LUN’s in the ODM thru the cfgmgr command :

On node 1 …

{node1:root}/ # lspv
hdisk0 00cd7d3e6d2fa8db rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
{node1:root}/ #

On node 2 …
{node2:root}/ # lspv
hdisk1 00cd7d3e7349e441 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
{node2:root}/ #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 110 of 393


11.2.4 Identify LUN’s and corresponding hdisk on each node

We know the LUN’s available for the OCR and Voting disks are L1 to L5 :

Disks        LUN’s ID Number
OCR 1        L1
OCR 2        L2
Voting 1     L3
Voting 2     L4
Voting 3     L5

Identify the disks available for OCR and Voting disks, on each node, knowing the LUN’s numbers.

Knowing the LUN’s numbers to use, we now need to identify the corresponding hdisks on each node of the cluster,
as detailed in the following table :

Disks        LUN’s ID Number    Node 1                Node 2                Node ….
                                Corresponding hdisk   Corresponding hdisk   Corresponding hdisk
OCR 1        L1
OCR 2        L2
Voting 1     L3
Voting 2     L4
Voting 3     L5

There are two methods to identify the corresponding hdisks :

• Identify the LUN ID assigned to each hdisk, using the “lscfg -vl hdisk?” command

• Identify hdisks by temporarily assigning a PVID to each hdisk not having one

We strongly recommend not using PVIDs to identify hdisks.

If some hdisks are already in use, setting a PVID on a used hdisk can
corrupt the hdisk header and generate issues such as losing the data
stored on the hdisk.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 111 of 393


Get the List of the hdisks on node1 :

On node 1 …

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0 00cd7d3e6d2fa8db rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
{node1:root}/ #

è NO PVID is assigned apart from the rootvg hdisk

Using the lscfg command, identify the LUN ID assigned to each hdisk in the list generated by lspv on node1
(“lscfg -vl hdisk?”) :

On node 1 …
{node1:root}/ # for i in 2 3 4 5 6
do
lscfg -vl hdisk$i
done

hdisk2 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L1000000000000 1722-600 (600) Disk Array Device


hdisk3 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L2000000000000 1722-600 (600) Disk Array Device
hdisk4 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L3000000000000 1722-600 (600) Disk Array Device
hdisk5 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L4000000000000 1722-600 (600) Disk Array Device
hdisk6 U7879.001.DQD17GX-P1-C2-T1-W200800A0B812AB31-L5000000000000 1722-600 (600) Disk Array Device
{node1:root}/ #

THEN, We get the following table :

Disks        LUN’s ID Number    Node 1                Node 2                Node ….
                                Corresponding hdisk   Corresponding hdisk   Corresponding hdisk
OCR 1        L1                 hdisk2                hdisk?
OCR 2        L2                 hdisk3                hdisk?
Voting 1     L3                 hdisk4                hdisk?
Voting 2     L4                 hdisk5                hdisk?
Voting 3     L5                 hdisk6                hdisk?

è No need to assign PVID when using this method.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 112 of 393


Get the List of the hdisks on node2

On node 2 …

List of available hdisks on node 2 :

{node2:root}/ # lspv
hdisk1 00cd7d3e7349e441 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
{node2:root}/ #

è NO PVID is assigned apart from the rootvg hdisk

Note that rootvg is hdisk1 here, which is not the same hdisk as on node 1; the same can happen for any
hdisk: on both nodes, the same hdisk name might not be attached to the same LUN.

Using the lscfg command, identify the LUN ID assigned to each hdisk in the list generated by lspv on node2
(“lscfg -vl hdisk?”) :

 Be careful : hdisk2 on node1 is not necessarily hdisk2 on node2.

On node 2 …
{node2:root}/ # for i in 2 3 4 5 6
do
lscfg -vl hdisk$i
done

hdisk2 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L1000000000000 1722-600 (600) Disk Array Device


hdisk3 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L2000000000000 1722-600 (600) Disk Array Device
hdisk4 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L3000000000000 1722-600 (600) Disk Array Device
hdisk5 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L4000000000000 1722-600 (600) Disk Array Device
hdisk6 U7879.001.DQD17GX-P1-C5-T1-W200800A0B812AB31-L5000000000000 1722-600 (600) Disk Array Device
{node2:root}/ #

THEN, We get the following table :

Disks        LUN’s ID Number    Node 1                Node 2                Node ….
                                Corresponding hdisk   Corresponding hdisk   Corresponding hdisk
OCR 1        L1                 hdisk2                hdisk2                hdisk?
OCR 2        L2                 hdisk3                hdisk3                hdisk?
Voting 1     L3                 hdisk4                hdisk4                hdisk?
Voting 2     L4                 hdisk5                hdisk5                hdisk?
Voting 3     L5                 hdisk6                hdisk6                hdisk?

Do the same again for any extra node …

è No need to assign PVID when using this method.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 113 of 393


11.2.5 Removing reserve lock policy on hdisks from each node  
Why is it MANDATORY to change the reserve policy on the disks from each node accessing the same LUN’s ?

By default, AIX places a SCSI reservation on a disk when it is opened (reserve_policy=single_path), which
would prevent the other nodes of the cluster from accessing the same LUN concurrently. The reservation
must therefore be removed on every node so that the OCR, Voting and ASM disks can be shared.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 114 of 393


 Setup reserve_policy on ocr and voting hdisks, on each node :
Example for one hdisk :
Issue the command “lsattr -E -l hdisk2” to visualize all attributes of hdisk2 :

{node1:root}/ # lsattr -El hdisk2


PR_key_value none Persistant Reserve Key Value True
cache_method fast_write Write Caching method False
ieee_volname 600A0B800012AB30000017F64787273E IEEE Unique volume name False
lun_id 0x0001000000000000 Logical Unit Number False
max_transfer 0x100000 Maximum TRANSFER Size True
prefetch_mult 1 Multiple of blocks to prefetch on read False
pvid none Physical volume identifier False
q_type simple Queuing Type False
queue_depth 10 Queue Depth True
raid_level 5 RAID Level False
reassign_to 120 Reassign Timeout value True
reserve_policy single_path Reserve Policy True
rw_timeout 30 Read/Write Timeout value True
scsi_id 0x660500 SCSI ID False
size 300 Size in Mbytes False
write_cache yes Write Caching enabled False
{node1:root}/ #
Or only lsattr –E –l hdisk2 | grep reserve
{node1:root}/ # lsattr -El hdisk2 | grep reserve
reserve_policy single_path Reserve Policy True
{node1:root}/ #

o On IBM storage (ESS, FasTt, DSXXXX) : Change the “reserve_policy” attribute to “no_reserve”
chdev -l hdisk? -a reserve_policy=no_reserve
o On EMC storage : Change the “reserve_lock” attribute to “no”
chdev -l hdisk? -a reserve_lock=no
o On HDS storage with HDLM driver, and no disks in Volume Group : Change the “dlmrsvlevel”
attribute to “no_reserve”
chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

Change the “reserve_policy” attribute of each disk dedicated to the OCR and Voting disks, on each node
of the cluster (in our case, we have an IBM storage !!!) :

On node 1 …

{node1:root}/ # for i in 2 3 4 5 6
do
chdev -l hdisk$i -a reserve_policy=no_reserve
done
changed
changed
changed
changed
changed
{node1:root}/ #

On node 2 …

{node2:root}/ # for i in 2 3 4 5 6
do
chdev -l hdisk$i -a reserve_policy=no_reserve
done
changed
changed
changed
changed
changed
{node2:root}/ #

Example for one hdisk on node1 :


Issue the command “lsattr -E -l hdisk2 | grep reserve” to visualize the modified attribute for hdisk2 :

{node1:root}/ # lsattr -El hdisk2 | grep reserve
reserve_policy no_reserve Reserve Policy True
{node1:root}/ #

11gRAC/ASM/AIX oraclibm@fr.ibm.com 115 of 393

11gRAC/ASM/AIX oraclibm@fr.ibm.com 116 of 393


11.2.6 Identify Major and Minor number of hdisk on each node

As described before, disks might have different names from one node to another for example
hdisk2 on node1 might be hdisk3 on node2, etc…

Disks      LUN’s ID   Device Name         Node 1                Major   Minor   Node 2                Major   Minor
           Number                         Corresponding hdisk   Num.    Num.    Corresponding hdisk   Num.    Num.
OCR 1      L1         /dev/ocr_disk1      hdisk2                                hdisk2
OCR 2      L2         /dev/ocr_disk2      hdisk3                                hdisk3
Voting 1   L3         /dev/voting_disk1   hdisk4                                hdisk4
Voting 2   L4         /dev/voting_disk2   hdisk5                                hdisk5
Voting 3   L5         /dev/voting_disk3   hdisk6                                hdisk6

Next steps will explain procedure to identify Major and Minor number necessary to create the
virtual device pointing to the right hdisk on each node.

11gRAC/ASM/AIX oraclibm@fr.ibm.com 117 of 393


Identify the minor and major number for each hdisk, on each node …

On node1 :

{node1:root}/ # for i in 2 3 4 5 6
do
ls -la /dev/*hdisk$i
done
brw------- 1 root system 21, 6 Jan 11 15:58 /dev/hdisk2
crw------- 1 root system 21, 6 Jan 11 15:58 /dev/rhdisk2
brw------- 1 root system 21, 7 Jan 11 15:58 /dev/hdisk3
crw------- 1 root system 21, 7 Jan 11 15:58 /dev/rhdisk3
brw------- 1 root system 21, 8 Jan 11 15:58 /dev/hdisk4
crw------- 1 root system 21, 8 Jan 11 15:58 /dev/rhdisk4
brw------- 1 root system 21, 9 Jan 11 15:58 /dev/hdisk5
crw------- 1 root system 21, 9 Jan 11 15:58 /dev/rhdisk5
brw------- 1 root system 21, 10 Jan 14 21:18 /dev/hdisk6
crw------- 1 root system 21, 10 Jan 14 21:18 /dev/rhdisk6
{node1:root}/ #

Values as 21, 6 are the major and minor number for hdisk2 on node1.

21 is the major number, corresponding to a type of disk.


6 is the minor number, corresponding to an order number.

Disks      LUN’s ID   Device Name         Node 1                Major   Minor   Node 2                Major   Minor
           Number                         Corresponding hdisk   Num.    Num.    Corresponding hdisk   Num.    Num.
OCR 1      L1         /dev/ocr_disk1      hdisk2                21      6       hdisk2
OCR 2      L2         /dev/ocr_disk2      hdisk3                21      7       hdisk3
Voting 1   L3         /dev/voting_disk1   hdisk4                21      8       hdisk4
Voting 2   L4         /dev/voting_disk2   hdisk5                21      9       hdisk5
Voting 3   L5         /dev/voting_disk3   hdisk6                21      10      hdisk6

Identify the minor and major number for each hdisk, on node2 :

{node2:root}/ # for i in 2 3 4 5 6
do
ls -la /dev/*hdisk$i
done
brw------- 1 root system 21, 6 Jan 11 16:12 /dev/hdisk2
crw------- 1 root system 21, 6 Jan 11 16:12 /dev/rhdisk2
brw------- 1 root system 21, 7 Jan 11 16:12 /dev/hdisk3
crw------- 1 root system 21, 7 Jan 11 16:12 /dev/rhdisk3
brw------- 1 root system 21, 8 Jan 11 16:12 /dev/hdisk4
crw------- 1 root system 21, 8 Jan 11 16:12 /dev/rhdisk4
brw------- 1 root system 21, 9 Jan 11 16:12 /dev/hdisk5
crw------- 1 root system 21, 9 Jan 11 16:12 /dev/rhdisk5
brw------- 1 root system 21, 10 Jan 14 22:20 /dev/hdisk6
crw------- 1 root system 21, 10 Jan 14 22:20 /dev/rhdisk6
{node2:root}/ #

Disks      LUN’s ID   Device Name         Node 1                Major   Minor   Node 2                Major   Minor
           Number                         Corresponding hdisk   Num.    Num.    Corresponding hdisk   Num.    Num.
OCR 1      L1         /dev/ocr_disk1      hdisk2                21      6       hdisk2                21      6
OCR 2      L2         /dev/ocr_disk2      hdisk3                21      7       hdisk3                21      7
Voting 1   L3         /dev/voting_disk1   hdisk4                21      8       hdisk4                21      8
Voting 2   L4         /dev/voting_disk2   hdisk5                21      9       hdisk5                21      9
Voting 3   L5         /dev/voting_disk3   hdisk6                21      10      hdisk6                21      10
Remember that disks may have different names from one node to another: for example, L1 could correspond to
hdisk2 on node1 and hdisk1 on node2, etc…

11gRAC/ASM/AIX oraclibm@fr.ibm.com 118 of 393


11.2.7 Create Unique Virtual Device to access same LUN from each node

THEN from the table, for node1 we need to create virtual devices :

Disks      LUN’s ID   Device Name         Node 1                Major   Minor   Node 2                Major   Minor
           Number                         Corresponding hdisk   Num.    Num.    Corresponding hdisk   Num.    Num.
OCR 1      L1         /dev/ocr_disk1      hdisk2                21      6       hdisk2                21      6
OCR 2      L2         /dev/ocr_disk2      hdisk3                21      7       hdisk3                21      7
Voting 1   L3         /dev/voting_disk1   hdisk4                21      8       hdisk4                21      8
Voting 2   L4         /dev/voting_disk2   hdisk5                21      9       hdisk5                21      9
Voting 3   L5         /dev/voting_disk3   hdisk6                21      10      hdisk6                21      10

To create the same virtual devices on each node, called /dev/ocr_disk1, /dev/ocr_disk2, /dev/voting_disk1, /dev/voting_disk2 and /dev/voting_disk3, we use the major and minor numbers of the hdisks, which link the virtual devices to the hdisks.

Using the command : mknod Device_Name c MajNum MinNum

For the first node, as root user :

{node1:root}/ # mknod /dev/ocr_disk1 c 21 6
{node1:root}/ # mknod /dev/ocr_disk2 c 21 7
{node1:root}/ # mknod /dev/voting_disk1 c 21 8
{node1:root}/ # mknod /dev/voting_disk2 c 21 9
{node1:root}/ # mknod /dev/voting_disk3 c 21 10

From the table, we also need to create the virtual devices on node2.

 In our case the major and minor numbers happen to be the same on both nodes for the corresponding hdisks, but they could be different.

To create the same virtual devices on node2, we again use the major and minor numbers of the corresponding hdisks.

Using the command : mknod Device_Name c MajNum MinNum

For the second node, as root user :

{node2:root}/ # mknod /dev/ocr_disk1 c 21 6
{node2:root}/ # mknod /dev/ocr_disk2 c 21 7
{node2:root}/ # mknod /dev/voting_disk1 c 21 8
{node2:root}/ # mknod /dev/voting_disk2 c 21 9
{node2:root}/ # mknod /dev/voting_disk3 c 21 10
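If you prefer not to read the major and minor numbers by hand, a small loop can print the mknod commands for you. This is only a sketch : the device-name/hdisk pairs below are the ones from our table and must be adapted to the mapping of each node before running it.

# Run as root; prints (does not execute) the mknod commands for this node
for pair in "ocr_disk1 hdisk2" "ocr_disk2 hdisk3" "voting_disk1 hdisk4" "voting_disk2 hdisk5" "voting_disk3 hdisk6"
do
set -- $pair
# ls -la shows "major, minor" in columns 5 and 6 : strip the comma and build the command
ls -la /dev/$2 | awk -v dev=$1 '{ gsub(",", "", $5); print "mknod /dev/" dev " c " $5 " " $6 }'
done

Review the printed commands against your table before executing them.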



11.2.8 Set Ownership / Permissions on Virtual Devices

For each node in the cluster :

Set ownership of the created virtual devices :

{node1:root}/ # chown root:oinstall /dev/ocr_disk1
{node1:root}/ # chown root:oinstall /dev/ocr_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk1
{node1:root}/ # chown crs:dba /dev/voting_disk2
{node1:root}/ # chown crs:dba /dev/voting_disk3

Then on node2 …

THEN set read/write permissions on the created virtual devices :

{node1:root}/ # chmod 640 /dev/ocr_disk1
{node1:root}/ # chmod 640 /dev/ocr_disk2
{node1:root}/ # chmod 660 /dev/voting_disk1
{node1:root}/ # chmod 660 /dev/voting_disk2
{node1:root}/ # chmod 660 /dev/voting_disk3

Then on node2 …
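As a convenience, the same ownership and permission settings can be applied on node2 (and on any additional node) with a short loop. This is just a sketch equivalent to the commands above :

# Run as root on each node; same devices, same owners and modes as above
for d in ocr_disk1 ocr_disk2
do
chown root:oinstall /dev/$d
chmod 640 /dev/$d
done
for d in voting_disk1 voting_disk2 voting_disk3
do
chown crs:dba /dev/$d
chmod 660 /dev/$d
done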



THEN check that the settings are applied on node1 :

Checking the modifications.

Note : after the Oracle Clusterware installation, ownership and permissions of the virtual devices may change.

{node1:root}/ # ls -la /dev/* | grep "21, 6"
brw------- 1 root system 21, 6 Jan 11 15:58 /dev/hdisk2
crw-r----- 1 root oinstall 21, 6 Feb 06 17:43 /dev/ocr_disk1
crw------- 1 root system 21, 6 Jan 11 15:58 /dev/rhdisk2
{node1:root}/ #

Check each disk using the following command :

{node1:root}/ # for i in 6 7 8 9 10
> do
> ls -la /dev/* | grep "21, "$i
> done
brw------- 1 root system 21, 6 Jan 11 15:58 /dev/hdisk2
crw-r----- 1 root oinstall 21, 6 Feb 06 17:43 /dev/ocr_disk1
crw------- 1 root system 21, 6 Jan 11 15:58 /dev/rhdisk2
brw------- 1 root system 21, 7 Jan 11 15:58 /dev/hdisk3
crw-r----- 1 root oinstall 21, 7 Feb 06 17:43 /dev/ocr_disk2
crw------- 1 root system 21, 7 Jan 11 15:58 /dev/rhdisk3
brw------- 1 root system 21, 8 Jan 11 15:58 /dev/hdisk4
crw------- 1 root system 21, 8 Jan 11 15:58 /dev/rhdisk4
crw-rw---- 1 crs dba 21, 8 Feb 06 17:43 /dev/voting_disk1
brw------- 1 root system 21, 9 Jan 11 15:58 /dev/hdisk5
crw------- 1 root system 21, 9 Jan 11 15:58 /dev/rhdisk5
crw-rw---- 1 crs dba 21, 9 Feb 06 17:43 /dev/voting_disk2
brw------- 1 root system 21, 10 Jan 14 21:18 /dev/hdisk6
crw------- 1 root system 21, 10 Jan 14 21:18 /dev/rhdisk6
crw-rw---- 1 crs dba 21, 10 Feb 06 17:43 /dev/voting_disk3
{node1:root}/ #

Then on node2 :

Checking the modifications.

Note : after the Oracle Clusterware installation, ownership and permissions of the virtual devices may change.

{node2:root}/ # ls -la /dev/* | grep "21, 6"
brw------- 1 root system 21, 6 Jan 11 15:58 /dev/hdisk2
crw-r----- 1 root oinstall 21, 6 Feb 06 17:43 /dev/ocr_disk1
crw------- 1 root system 21, 6 Jan 11 15:58 /dev/rhdisk2
{node2:root}/ #

Check each disk using the following command :

{node2:root}/ # for i in 6 7 8 9 10
> do
> ls -la /dev/* | grep "21, "$i
> done
brw------- 1 root system 21, 6 Jan 11 15:58 /dev/hdisk2
crw-r----- 1 root oinstall 21, 6 Feb 06 17:43 /dev/ocr_disk1
crw------- 1 root system 21, 6 Jan 11 15:58 /dev/rhdisk2
brw------- 1 root system 21, 7 Jan 11 15:58 /dev/hdisk3
crw-r----- 1 root oinstall 21, 7 Feb 06 17:43 /dev/ocr_disk2
crw------- 1 root system 21, 7 Jan 11 15:58 /dev/rhdisk3
brw------- 1 root system 21, 8 Jan 11 15:58 /dev/hdisk4
crw------- 1 root system 21, 8 Jan 11 15:58 /dev/rhdisk4
crw-rw---- 1 crs dba 21, 8 Feb 06 17:43 /dev/voting_disk1
brw------- 1 root system 21, 9 Jan 11 15:58 /dev/hdisk5
crw------- 1 root system 21, 9 Jan 11 15:58 /dev/rhdisk5
crw-rw---- 1 crs dba 21, 9 Feb 06 17:43 /dev/voting_disk2
brw------- 1 root system 21, 10 Jan 14 22:20 /dev/hdisk6
crw------- 1 root system 21, 10 Jan 14 22:20 /dev/rhdisk6
crw-rw---- 1 crs dba 21, 10 Feb 06 17:43 /dev/voting_disk3
{node2:root}/ #



11.2.9 Formatting the virtual devices (zeroing)

Now, we format (zero) the virtual devices :

Format (zero) the devices and verify that the disks can be accessed from each node.

On node 1 …

{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #

{node1:root}/ # for i in 1 2 3
> do
> dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.
{node1:root}/ #

Verify concurrent read/write access to the devices by running the dd command from each node at the same time :

 At the same time, on node1 and on node2 :

On node 1 …

{node1:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.

On node 2 …

{node2:root}/ # for i in 1 2
> do
> dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
> done
300+0 records in.
300+0 records out.
300+0 records in.
300+0 records out.

Same for voting disks …
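For example, the same concurrent check on the voting devices might look like the following (run the loop on node1 and node2 at the same time; the "records in/out" messages should appear on both nodes as in the OCR example above) :

{node1:root}/ # for i in 1 2 3
> do
> dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300 &
> done

{node2:root}/ # for i in 1 2 3
> do
> dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300 &
> done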



11.3 ASM disks
About ASM instance parameter files : pfile or spfile ?



About ASM disks, using rhdisk’s or virtual devices ?



11.3.1 Required LUN’s

Using the storage administration console, you have to create :

• 1 LUN for the ASM spfile (100 MB size)

o This disk will contain the shared ASM instance spfile, meaning all the ASM instance parameters that you will be able to modify dynamically (only possible with a single-storage implementation; with 2 storages you will have to use a local ASM instance pfile).

• A bunch of LUNs for disks to be used with ASM.


o Disks/LUN’s for ASM can be either :
 /dev/rhdisk? (raw disks)
or
 /dev/ASM_Disk? (virtual devices), for ease of administration.

The following table shows the LUN mapping for the nodes used in our cluster.
These ID’s will help us identify which hdisk will be used.

Disks LUN’s ID Number LUN’s Size


ASM spfile L0 100 MB
Disk 1 for ASM L6 4 GB
Disk 2 for ASM L7 4 GB
Disk 3 for ASM L8 4 GB
Disk 4 for ASM L9 4 GB
Disk 5 for ASM LA (meaning Lun 10) 4 GB
Disk 6 for ASM LB (meaning Lun 11) 4 GB
Disk 7 for ASM LC (meaning Lun 12) 4 GB
Disk 8 for ASM LD (meaning Lun 13) 4 GB

11.3.2 How to Identify if a LUN is used or not ?



When an hdisk is marked rootvg, the header of the disk might look like this :

Use the following command to read the hdisk header.

On node 1 … for hdisk0 :

{node1:root}/ # lspv | grep rootvg
hdisk0          00cd7d3e6d2fa8db          rootvg          active
{node1:root}/ #
{node1:root}/ # lquerypv -h /dev/rhdisk0
00000000 C9C2D4C1 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00CD7D3E 6D2FA8DB 00000000 00000000 |..}>m/..........|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #

On node 2 …

When an hdisk is not marked rootvg but None, it is important to check that it is not used at all.

Use the following command to read the hdisk header.

If all lines contain ONLY "0", THEN the hdisk is free to be used.

If it is not the case, first check which LUN is mapped to the hdisk (next pages); if it is the LUN you should use, you must then "dd" (zero) the /dev/rhdisk2 raw disk.

On node 1 … for hdisk2 :

{node1:root}/ #lquerypv -h /dev/rhdisk2
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #

On node 2 …
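To check several hdisks in one pass on a node, a small loop over lquerypv can flag any header that is not empty. This is only a sketch; the hdisk list is an example to adapt to each node :

for i in 2 3 4 5 6 7 8 9 10
do
# count the lines whose four data columns are not all zeros
nz=`lquerypv -h /dev/rhdisk$i | awk '($2 $3 $4 $5) != "00000000000000000000000000000000" { n++ } END { print n+0 }'`
if [ "$nz" -gt 0 ]
then
echo "hdisk$i : header is NOT empty, check it before using it"
else
echo "hdisk$i : header is empty, free to be used"
fi
done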



11.3.3 Register LUN’s at AIX level

Before registration of the new LUN’s for the ASM disks :

On node 1 …

{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
{node1:root}/ #

On node 2 …

{node2:root}/ # lspv
hdisk1 00cd7d3e7349e441 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
{node2:root}/ #

As root on each node, update the ODM repository using the following command : "cfgmgr"



You need to register and identify the LUN's at AIX level; the LUN's will be mapped to hdisks and registered in the AIX ODM.

After registration of the LUN’s in the ODM through the cfgmgr command :

On node 1 …

{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node1:root}/ #

On node 2 …
{node2:root}/ # lspv
hdisk0 none None
hdisk1 00cd7d3e7349e441 rootvg active
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
{node2:root}/ #



11.3.4 Identify LUN’s and corresponding hdisk on each node

We know the LUN's to use for :
• the ASM spfile disk
• the ASM disks

Disks            LUN’s Number   Node 1   Node 2   Node ….
ASM Spfile Disk  L0
Disk 1 for ASM   L6
Disk 2 for ASM   L7
Disk 3 for ASM   L8
Disk 4 for ASM   L9
Disk 5 for ASM   LA
Disk 6 for ASM   LB
Disk 7 for ASM   LC
Disk 8 for ASM   LD             …        …        …

Identify the disks available for ASM and the ASM spfile disk, on each node, knowing the LUN numbers.

There are two methods to identify the corresponding hdisks :

• Identify the LUN ID assigned to each hdisk, using the "lscfg –l hdisk?" command

• Identify the hdisks by temporarily assigning a PVID to each hdisk not having one

We strongly recommend NOT using a PVID to identify the hdisks.

If some hdisks are already in use, setting a PVID on a used hdisk can corrupt the hdisk header and cause issues such as losing the data stored on the hdisk.



Get the list of the hdisks on node1.

On node 1 …

List of available hdisks on node 1 (rootvg is hdisk0 !!!) :

{node1:root}/ # lspv
hdisk0          00cd7d3e6d2fa8db          rootvg          active
hdisk1          none                      None
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node1:root}/ #

è NO PVID is assigned apart from the rootvg hdisk.

Using the lscfg command, try to identify the hdisks in the list generated by lspv on node1.

Identify the LUN ID assigned to each hdisk, using the "lscfg –vl hdisk?" command :

On node 1 …

{node1:root}/ # for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lscfg -vl hdisk$i
done
hdisk1 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L0 3552 (500) Disk Array Device
hdisk2 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L1000000000000 3552 (500) Disk Array Device
hdisk3 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L2000000000000 3552 (500) Disk Array Device
hdisk4 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L3000000000000 3552 (500) Disk Array Device
hdisk5 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L4000000000000 3552 (500) Disk Array Device
hdisk6 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L5000000000000 3552 (500) Disk Array Device
hdisk7 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L6000000000000 3552 (500) Disk Array Device
hdisk8 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L7000000000000 3552 (500) Disk Array Device
hdisk9 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L8000000000000 3552 (500) Disk Array Device
hdisk10 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-L9000000000000 3552 (500) Disk Array Device
hdisk11 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LA000000000000 3552 (500) Disk Array Device
hdisk12 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LB000000000000 3552 (500) Disk Array Device
hdisk13 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LC000000000000 3552 (500) Disk Array Device
hdisk14 U7879.001.DQD01JK-P1-C2-T1-W200200A0B80C5404-LD000000000000 3552 (500) Disk Array Device
{node1:root}/ #

THEN we get the following table :

Disks            LUN’s Number   Node 1    Node 2    Node ….
ASM Spfile Disk  L0             hdisk1    hdisk?
Disk 1 for ASM   L6             hdisk7    hdisk?
Disk 2 for ASM   L7             hdisk8    …
Disk 3 for ASM   L8             hdisk9
Disk 4 for ASM   L9             hdisk10
Disk 5 for ASM   LA             hdisk11
Disk 6 for ASM   LB             hdisk12
Disk 7 for ASM   LC             hdisk13
Disk 8 for ASM   LD             hdisk14

è No need to assign a PVID when using this method.



Get the list of the hdisks on node2.

On node 2 …

List of available hdisks on node 2 :

{node2:root}/ # lspv
hdisk0          none                      None
hdisk1          00cd7d3e7349e441          rootvg          active
hdisk2          none                      None
hdisk3          none                      None
hdisk4          none                      None
hdisk5          none                      None
hdisk6          none                      None
hdisk7          none                      None
hdisk8          none                      None
hdisk9          none                      None
hdisk10         none                      None
hdisk11         none                      None
hdisk12         none                      None
hdisk13         none                      None
hdisk14         none                      None
{node2:root}/ #

è NO PVID is assigned apart from the rootvg hdisk.

rootvg is hdisk1, which is not the same hdisk as on node 1; the same can happen for any hdisk. On both nodes, identical hdisk names might not be attached to the same LUN.

Using the lscfg command, try to identify the hdisks in the list generated by lspv on node2.

 Be careful : hdisk7 on node1 is not necessarily hdisk7 on node2.

Identify the LUN ID assigned to each hdisk, using the "lscfg –vl hdisk?" command :

On node 2 …
{node2:root}/ # for i in 0 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lscfg -vl hdisk$i
done
hdisk0 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L0 3552 (500) Disk Array Device
hdisk1 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L1000000000000 3552 (500) Disk Array Device
hdisk2 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L2000000000000 3552 (500) Disk Array Device
hdisk3 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L3000000000000 3552 (500) Disk Array Device
hdisk4 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L4000000000000 3552 (500) Disk Array Device
hdisk5 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L5000000000000 3552 (500) Disk Array Device
hdisk6 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L6000000000000 3552 (500) Disk Array Device
hdisk7 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L7000000000000 3552 (500) Disk Array Device
hdisk8 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L8000000000000 3552 (500) Disk Array Device
hdisk9 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-L9000000000000 3552 (500) Disk Array Device
hdisk10 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LA000000000000 3552 (500) Disk Array Device
hdisk12 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LB000000000000 3552 (500) Disk Array Device
hdisk13 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LC000000000000 3552 (500) Disk Array Device
hdisk14 U7879.001.DQD01JK-P1-C6-T1-W200200A0B80C5404-LD000000000000 3552 (500) Disk Array Device
{node2:root}/ #

THEN we get the following table :

Disks            LUN’s Number   Node 1    Node 2    Node ….
ASM Spfile Disk  L0             hdisk1    hdisk0    hdisk?
Disk 1 for ASM   L6             hdisk7    hdisk6    hdisk?
Disk 2 for ASM   L7             hdisk8    hdisk7    …
Disk 3 for ASM   L8             hdisk9    hdisk8
Disk 4 for ASM   L9             hdisk10   hdisk9
Disk 5 for ASM   LA             hdisk11   hdisk10
Disk 6 for ASM   LB             hdisk12   hdisk12
Disk 7 for ASM   LC             hdisk13   hdisk13
Disk 8 for ASM   LD             hdisk14   hdisk14

è No need to assign a PVID when using this method.
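On a cluster with many disks, a small loop can print the hdisk-to-LUN mapping directly and help to fill in the table. This is a sketch; it assumes the location code ends with the -L… LUN identifier, as in the lscfg output above :

for h in `lspv | awk '{ print $1 }'`
do
# print "hdisk -> LUN id" using the last field of the location code
lscfg -l $h | awk -v h=$h '$1 == h { n = split($2, a, "-"); print h, "->", a[n] }'
done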



11.3.5 Removing reserve lock policy on hdisks from each node

 As for the OCR and voting disks, set the reserve_policy on the ASM spfile and ASM hdisks, on each node :

Example for one hdisk :

Issue the command "lsattr –E –l hdisk7" to display all the attributes of hdisk7 :
{node1:root}/ # lsattr -El hdisk7
PR_key_value none Persistant Reserve Key Value True
cache_method fast_write Write Caching method False
ieee_volname 600A0B80000C54180000026345EE3CCC IEEE Unique volume name False
lun_id 0x0006000000000000 Logical Unit Number False
max_transfer 0x100000 Maximum TRANSFER Size True
prefetch_mult 1 Multiple of blocks to prefetch on read False
pvid none Physical volume identifier False
q_type simple Queuing Type False
queue_depth 10 Queue Depth True
raid_level 0 RAID Level False
reassign_to 120 Reassign Timeout value True
reserve_policy singl_path Reserve Policy True
rw_timeout 30 Read/Write Timeout value True
scsi_id 0x690600 SCSI ID False
size 4096 Size in Mbytes False
write_cache yes Write Caching enabled False
{node1:root}/ #
Or only : lsattr –E –l hdisk7 | grep reserve
{node1:root}/ # lsattr -El hdisk7 | grep reserve
reserve_policy single_path Reserve Policy True
{node1:root}/ #

o On IBM storage (ESS, FAStT, DSxxxx) : change the “reserve_policy” attribute to “no_reserve”
chdev -l hdisk? -a reserve_policy=no_reserve

o On EMC storage : Change the “reserve_lock” attribute to “no”


chdev -l hdisk? -a reserve_lock=no

o On HDS storage with HDLM driver, and no disks in Volume Group : Change the “dlmrsvlevel”
attribute to “no_reserve”
chdev -l dlmfdrv? -a dlmrsvlevel=no_reserve

In our case, we have an IBM storage !!!

Change the “reserve_policy” attribute of each disk dedicated to ASM, on each node of the cluster.

On node 1 …

{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
do
chdev -l hdisk$i -a reserve_policy=no_reserve
done
changed
changed
…
{node1:root}/ #

On node 2 …

{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
do
chdev -l hdisk$i -a reserve_policy=no_reserve
done
changed
changed
changed
…
{node2:root}/ #

Example for one hdisk on node1 :

Issue the command "lsattr –E –l hdisk7 | grep reserve" to display the modified attribute of hdisk7 :

{node1:root}/ # lsattr -El hdisk7 | grep reserve


reserve_policy no_reserve Reserve Policy True
{node1:root}/ #
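To verify the whole set of disks in one pass on a node, a loop such as the following can be used (a sketch; the hdisk list is the node1 list from our example and must be adapted for each node). Every disk should report no_reserve :

{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
> do
> lsattr -El hdisk$i -a reserve_policy
> done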

As described before, disks might have different names from one node to another; for example, hdisk7 on node1 might be hdisk8 on node2, etc…



11.3.6 Identify Major and Minor number of hdisk on each node

Disks            LUN’s ID   Device Name seen on        Node 1    Major   Minor   Node 2    Major   Minor
                 Number     node1 (1) and node2 (2)    hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk           hdisk1                    hdisk0
Disk 1 for ASM   L6         /dev/rhdisk?               hdisk7                    hdisk6
Disk 2 for ASM   L7         /dev/rhdisk?               hdisk8                    hdisk7
Disk 3 for ASM   L8         /dev/rhdisk?               hdisk9                    hdisk8
Disk 4 for ASM   L9         /dev/rhdisk?               hdisk10                   hdisk9
Disk 5 for ASM   LA         /dev/rhdisk?               hdisk11                   hdisk10
Disk 6 for ASM   LB         /dev/rhdisk?               hdisk12                   hdisk12
Disk 7 for ASM   LC         /dev/rhdisk?               hdisk13                   hdisk13
Disk 8 for ASM   LD         /dev/rhdisk?               hdisk14                   hdisk14

We need to get the major and minor numbers for each hdisk of node1 :

To obtain the minor and major numbers of each hdisk on node1, we issue the ls -la command for each device.

On node1 :

{node1:root}/ # for i in 1 7 8 9 10 11 12 13 14
do
ls -la /dev/*hdisk$i
done
brw------- 1 root system 21, 5 Mar 12 12:18 /dev/hdisk1
crw------- 1 root system 21, 5 Mar 12 12:18 /dev/rhdisk1
brw------- 1 root system 21, 11 Mar 07 10:31 /dev/hdisk7
crw------- 1 root system 21, 11 Mar 07 10:31 /dev/rhdisk7
brw------- 1 root system 21, 12 Mar 07 10:31 /dev/hdisk8
crw------- 1 root system 21, 12 Mar 07 10:31 /dev/rhdisk8
brw------- 1 root system 21, 13 Mar 07 10:31 /dev/hdisk9
crw------- 1 root system 21, 13 Mar 07 10:31 /dev/rhdisk9
brw------- 1 root system 21, 14 Mar 07 10:31 /dev/hdisk10
crw------- 1 root system 21, 14 Mar 07 10:31 /dev/rhdisk10
brw------- 1 root system 21, 15 Mar 07 10:31 /dev/hdisk11
crw------- 1 root system 21, 15 Mar 07 10:31 /dev/rhdisk11
brw------- 1 root system 21, 16 Mar 07 10:31 /dev/hdisk12
crw------- 1 root system 21, 16 Mar 07 10:31 /dev/rhdisk12
brw------- 1 root system 21, 17 Mar 27 16:13 /dev/hdisk13
crw------- 1 root system 21, 17 Mar 27 16:13 /dev/rhdisk13
brw------- 1 root system 21, 18 Mar 27 16:13 /dev/hdisk14
crw------- 1 root system 21, 18 Mar 27 16:13 /dev/rhdisk14
{node1:root}/ #

To fill in the following table : taking the result of the command for hdisk1, we get “21, 5”, which gives us 21 as the major number and 5 as the minor number.

Disks            LUN’s ID   Device Name seen on        Node 1    Major   Minor   Node 2    Major   Minor
                 Number     node1 (1) and node2 (2)    hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk           hdisk1    21      5       hdisk0
Disk 1 for ASM   L6         /dev/rhdisk?               hdisk7    21      11      hdisk6
Disk 2 for ASM   L7         /dev/rhdisk?               hdisk8    21      12      hdisk7
Disk 3 for ASM   L8         /dev/rhdisk?               hdisk9    21      13      hdisk8
Disk 4 for ASM   L9         /dev/rhdisk?               hdisk10   21      14      hdisk9
Disk 5 for ASM   LA         /dev/rhdisk?               hdisk11   21      15      hdisk10
Disk 6 for ASM   LB         /dev/rhdisk?               hdisk12   21      16      hdisk12
Disk 7 for ASM   LC         /dev/rhdisk?               hdisk13   21      17      hdisk13
Disk 8 for ASM   LD         /dev/rhdisk?               hdisk14   21      18      hdisk14



We need to get the major and minor numbers for each hdisk of node2 :

To obtain the minor and major numbers of each hdisk on node2, we issue the ls -la command for each device.

On node2 :

{node2:root}/ # for i in 0 6 7 8 9 10 12 13 14
do
ls -la /dev/*hdisk$i
done
brw------- 1 root system 21, 5 Mar 12 12:17 /dev/hdisk0
crw------- 1 root system 21, 5 Mar 12 12:17 /dev/rhdisk0
brw------- 1 root system 21, 11 Mar 07 10:32 /dev/hdisk6
crw------- 1 root system 21, 11 Mar 07 10:32 /dev/rhdisk6
brw------- 1 root system 21, 12 Mar 07 10:32 /dev/hdisk7
crw------- 1 root system 21, 12 Mar 07 10:32 /dev/rhdisk7
brw------- 1 root system 21, 13 Mar 07 10:32 /dev/hdisk8
crw------- 1 root system 21, 13 Mar 07 10:32 /dev/rhdisk8
brw------- 1 root system 21, 14 Mar 07 10:32 /dev/hdisk9
crw------- 1 root system 21, 14 Mar 12 13:55 /dev/rhdisk9
brw------- 1 root system 21, 15 Mar 07 10:32 /dev/hdisk10
crw------- 1 root system 21, 15 Mar 07 10:32 /dev/rhdisk10
brw------- 1 root system 21, 16 Mar 07 10:32 /dev/hdisk12
crw------- 1 root system 21, 16 Mar 07 10:32 /dev/rhdisk12
brw------- 1 root system 21, 17 Mar 27 16:13 /dev/hdisk13
crw------- 1 root system 21, 17 Mar 27 16:13 /dev/rhdisk13
brw------- 1 root system 21, 18 Mar 27 16:13 /dev/hdisk14
crw------- 1 root system 21, 18 Mar 27 16:13 /dev/rhdisk14
{node2:root}/ #

To fill in the following table : taking the result of the command for hdisk0, we get “21, 5”, which gives us 21 as the major number and 5 as the minor number.

Disks            LUN’s ID   Device Name seen on        Node 1    Major   Minor   Node 2    Major   Minor
                 Number     node1 (1) and node2 (2)    hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk           hdisk1    21      5       hdisk0    21      5
Disk 1 for ASM   L6         /dev/rhdisk?               hdisk7    21      11      hdisk6    21      11
Disk 2 for ASM   L7         /dev/rhdisk?               hdisk8    21      12      hdisk7    21      12
Disk 3 for ASM   L8         /dev/rhdisk?               hdisk9    21      13      hdisk8    21      13
Disk 4 for ASM   L9         /dev/rhdisk?               hdisk10   21      14      hdisk9    21      14
Disk 5 for ASM   LA         /dev/rhdisk?               hdisk11   21      15      hdisk10   21      15
Disk 6 for ASM   LB         /dev/rhdisk?               hdisk12   21      16      hdisk12   21      16
Disk 7 for ASM   LC         /dev/rhdisk?               hdisk13   21      17      hdisk13   21      17
Disk 8 for ASM   LD         /dev/rhdisk?               hdisk14   21      18      hdisk14   21      18



11.3.7 Create Unique Virtual Device to access same LUN from each node

Disks could have different names from one node to another; for example, L0 corresponds to hdisk1 on node1 and could have been hdisk0 or something else on node2, etc…

For the ASM spfile disk, it is mandatory to create a virtual device if the hdisk name is different on each node. Even if the hdisk name is the same on the 2 nodes, adding an extra node could introduce a different hdisk name for the same LUN on the new node, so it is best practice to create a virtual device for the ASM spfile disk !!!

THEN from the following table, for the ASM spfile disk :

Disks            LUN’s ID   Device Name        Node 1    Major   Minor   Node 2    Major   Minor
                 Number                        hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk   hdisk1    21      5       hdisk0    21      5
Disk 1 for ASM   L6         /dev/rhdisk?       hdisk7    21      11      hdisk6    21      11
Disk 2 for ASM   L7         /dev/rhdisk?       hdisk8    21      12      hdisk7    21      12
Disk 3 for ASM   L8         /dev/rhdisk?       hdisk9    21      13      hdisk8    21      13
Disk 4 for ASM   L9         /dev/rhdisk?       hdisk10   21      14      hdisk9    21      14
Disk 5 for ASM   LA         /dev/rhdisk?       hdisk11   21      15      hdisk10   21      15
Disk 6 for ASM   LB         /dev/rhdisk?       hdisk12   21      16      hdisk12   21      16
Disk 7 for ASM   LC         /dev/rhdisk?       hdisk13   21      17      hdisk13   21      17
Disk 8 for ASM   LD         /dev/rhdisk?       hdisk14   21      18      hdisk14   21      18

We need to create a virtual device for the ASM spfile disk.

To create the same virtual device on each node, called /dev/ASMspf_disk, we use the major and minor numbers of the hdisks, which link the virtual device to the hdisks.

Using the command : mknod Device_Name c MajNum MinNum

For the first node, as root user :

{node1:root}/ # mknod /dev/ASMspf_disk c 21 5

For the second node, as root user :

{node2:root}/ # mknod /dev/ASMspf_disk c 21 5



NOW, for the ASM disks, we have 2 possible options :

• OPTION 1 : use the /dev/rhdisk? devices directly; in that case we do not need to create virtual devices. We just have to set the right user ownership and UNIX read/write permissions.
For this option, move to the next chapter and follow option 1.
Or
• OPTION 2 : create virtual devices such as /dev/ASM_Disk?, if wanted for the administrator’s convenience.

With option 2, we’ll have the following table :

Disks            LUN’s ID   Device Name        Node 1    Major   Minor   Node 2    Major   Minor
                 Number                        hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk   hdisk1    21      5       hdisk0    21      5
Disk 1 for ASM   L6         /dev/ASM_Disk1     hdisk7    21      11      hdisk6    21      11
Disk 2 for ASM   L7         /dev/ASM_Disk2     hdisk8    21      12      hdisk7    21      12
Disk 3 for ASM   L8         /dev/ASM_Disk3     hdisk9    21      13      hdisk8    21      13
Disk 4 for ASM   L9         /dev/ASM_Disk4     hdisk10   21      14      hdisk9    21      14
Disk 5 for ASM   LA         /dev/ASM_Disk5     hdisk11   21      15      hdisk10   21      15
Disk 6 for ASM   LB         /dev/ASM_Disk6     hdisk12   21      16      hdisk12   21      16
Disk 7 for ASM   LC         /dev/ASM_Disk7     hdisk13   21      17      hdisk13   21      17
Disk 8 for ASM   LD         /dev/ASM_Disk8     hdisk14   21      18      hdisk14   21      18

We need to create a virtual device for each ASM disk.

 In our case the major and minor numbers happen to be the same on both nodes, for the corresponding hdisks, but they could be different.

To create the same virtual devices on each node, called /dev/ASM_Disk…, we use the major and minor numbers of the hdisks, which link the virtual devices to the hdisks.

Using the command : mknod Device_Name c MajNum MinNum

For the first node, as root user :

{node1:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node1:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node1:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node1:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....

For the second node, as root user :

{node2:root}/ # mknod /dev/ASM_Disk1 c 21 11
{node2:root}/ # mknod /dev/ASM_Disk2 c 21 12
{node2:root}/ # mknod /dev/ASM_Disk3 c 21 13
{node2:root}/ # mknod /dev/ASM_Disk4 c 21 14
.....
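Instead of typing each mknod by hand, the eight devices can be created with a small loop. This is a sketch : the major number (21) and the list of minor numbers are the ones from our table, and must be checked and adapted on each node before running it.

# Run as root on each node, after verifying the major/minor numbers for that node
i=1
for minor in 11 12 13 14 15 16 17 18
do
mknod /dev/ASM_Disk$i c 21 $minor
i=$(( i + 1 ))
done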



11.3.8 Set Ownership / Permissions on Virtual Devices



For the ASM spfile :

THEN set ownership of the created virtual device.

For the first node, as root user :
{node1:root}/ # chown asm:oinstall /dev/ASMspf_disk

For the second node, as root user :
{node2:root}/ # chown asm:oinstall /dev/ASMspf_disk

THEN set read/write permissions of the created virtual device to 660.

For the first node, as root user :
{node1:root}/ # chmod 660 /dev/ASMspf_disk

For the second node, as root user :
{node2:root}/ # chmod 660 /dev/ASMspf_disk

Checking the modifications (note : after the Oracle Clusterware installation, ownership and permissions of the virtual devices may change).

For the first node, as root user :

{node1:root}/ # ls -la /dev/* | grep "21, 5"
crw-rw---- 1 asm dba 21, 5 Mar 12 15:06 /dev/ASMspf_disk
brw------- 1 root system 21, 5 Mar 12 12:18 /dev/hdisk1
crw------- 1 root system 21, 5 Mar 12 12:18 /dev/rhdisk1
{node1:root}/ #

For the second node, as root user :

{node2:root}/ # ls -la /dev/* | grep "21, 5"
crw-rw---- 1 asm dba 21, 5 Mar 12 15:06 /dev/ASMspf_disk
brw------- 1 root system 21, 5 Mar 12 12:17 /dev/hdisk0
crw------- 1 root system 21, 5 Mar 12 12:17 /dev/rhdisk0
{node2:root}/ #

For ASM disks :

With option 1, we’ll have the following table, BUT we will not use the major and minor numbers :

Disks            LUN’s ID   Device Name seen on node1 (1) and node2 (2)   Node 1    Node 2
                 Number                                                   hdisk     hdisk
ASM Spfile Disk  L0         /dev/ASMspf_disk                              hdisk1    hdisk0
Disk 1 for ASM   L6         /dev/rhdisk7 on 1 --- /dev/rhdisk6 on 2       hdisk7    hdisk6
Disk 2 for ASM   L7         /dev/rhdisk8 on 1 --- /dev/rhdisk7 on 2       hdisk8    hdisk7
Disk 3 for ASM   L8         /dev/rhdisk9 on 1 --- /dev/rhdisk8 on 2       hdisk9    hdisk8
Disk 4 for ASM   L9         /dev/rhdisk10 on 1 --- /dev/rhdisk9 on 2      hdisk10   hdisk9
Disk 5 for ASM   LA         /dev/rhdisk11 on 1 --- /dev/rhdisk10 on 2     hdisk11   hdisk10
Disk 6 for ASM   LB         /dev/rhdisk12 on 1 --- /dev/rhdisk12 on 2     hdisk12   hdisk12
Disk 7 for ASM   LC         /dev/rhdisk13 on 1 --- /dev/rhdisk13 on 2     hdisk13   hdisk13
Disk 8 for ASM   LD         /dev/rhdisk14 on 1 --- /dev/rhdisk14 on 2     hdisk14   hdisk14

We just need to set the right ownership on the /dev/rhdisk? devices mapped to the ASM LUN’s,
and to set 660 read/write permissions on these rhdisk? devices :

THEN set ownership of the devices to asm:dba.

For the first node, as root user :

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
>do
>chown asm:dba /dev/rhdisk$i
>done
{node1:root}/ #

For the second node, as root user :

{node2:root}/ # for i in 6 7 8 9 10 12 13 14
>do
>chown asm:dba /dev/rhdisk$i
>done
{node2:root}/ #

THEN set read/write permissions of the created virtual devices to 660



For the first node, as root user :

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
>do
>chmod 660 /dev/rhdisk$i
>done
{node1:root}/ #

For the second node, as root user :

{node2:root}/ # for i in 6 7 8 9 10 12 13 14
>do
>chmod 660 /dev/rhdisk$i
>done
{node2:root}/ #

Checking the modifications.

For the first node, as root user :

{node1:root}/ # ls -la /dev/rhdisk* | grep asm
crw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw---- 1 asm dba 21, 12 Mar 07 10:31 /dev/rhdisk8
crw-rw---- 1 asm dba 21, 13 Mar 07 10:31 /dev/rhdisk9
crw-rw---- 1 asm dba 21, 14 Mar 07 10:31 /dev/rhdisk10
crw-rw---- 1 asm dba 21, 15 Mar 07 10:31 /dev/rhdisk11
crw-rw---- 1 asm dba 21, 16 Mar 07 10:31 /dev/rhdisk12
crw-rw---- 1 asm dba 21, 17 Mar 07 10:31 /dev/rhdisk13
crw-rw---- 1 asm dba 21, 18 Mar 27 16:13 /dev/rhdisk14
{node1:root}/ #

Check also on the second node ...

With option 2, we’ll have the following table :

Disks            LUN’s ID   Device Name        Node 1    Major   Minor   Node 2    Major   Minor
                 Number                        hdisk     Num.    Num.    hdisk     Num.    Num.
ASM Spfile Disk  L0         /dev/ASMspf_disk   hdisk1    21      5       hdisk0    21      5
Disk 1 for ASM   L6         /dev/ASM_Disk1     hdisk7    21      11      hdisk6    21      11
Disk 2 for ASM   L7         /dev/ASM_Disk2     hdisk8    21      12      hdisk7    21      12
Disk 3 for ASM   L8         /dev/ASM_Disk3     hdisk9    21      13      hdisk8    21      13
Disk 4 for ASM   L9         /dev/ASM_Disk4     hdisk10   21      14      hdisk9    21      14
Disk 5 for ASM   LA         /dev/ASM_Disk5     hdisk11   21      15      hdisk10   21      15
Disk 6 for ASM   LB         /dev/ASM_Disk6     hdisk12   21      16      hdisk12   21      16
Disk 7 for ASM   LC         /dev/ASM_Disk7     hdisk13   21      17      hdisk13   21      17
Disk 8 for ASM   LD         /dev/ASM_Disk8     hdisk14   21      18      hdisk14   21      18

THEN set ownership of the created virtual devices to asm:dba.

For the first node, as root user :

{node1:root}/ # for i in 1 2 3 4 5 …
>do
>chown asm:dba /dev/ASM_Disk$i
>done
{node1:root}/ #

For the second node, as root user :

{node2:root}/ # for i in 1 2 3 4 5 …
>do
>chown asm:dba /dev/ASM_Disk$i
>done
{node2:root}/ #

THEN set read/write permissions of the created virtual devices to 660.

For the first node, as root user :

{node1:root}/ # for i in 1 2 3 4 5 …
>do
>chmod 660 /dev/ASM_Disk$i
>done
{node1:root}/ #

For the second node, as root user :

{node2:root}/ # for i in 1 2 3 4 5 …
>do
>chmod 660 /dev/ASM_Disk$i
>done
{node2:root}/ #

Checking the modifications.

For the first node, as root user :

{node1:root}/ # ls -la /dev/* | grep "21, 11"
brw------- 1 root system 21, 11 Mar 07 10:31 /dev/hdisk7
crw------- 1 root system 21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw-r-- 1 asm dba 21, 11 Apr 03 14:24 /dev/ASM_Disk1

Check also for all disks, and on second node ...
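To review all the virtual devices at once, rather than one major/minor pair at a time, a simple loop is enough (a sketch, to be run on each node) :

{node1:root}/ # for i in 1 2 3 4 5 6 7 8
> do
> ls -la /dev/ASM_Disk$i
> done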



11.3.9 Formatting the virtual devices (zeroing)

Now, we need to format the virtual devices, or rhdisks. With both options, zeroing the rhdisk is sufficient for :

• the ASM spfile disk
• the ASM disks

Format (zero) the devices and verify that the disks can be accessed from each node.

On node 1 …

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
>do
>dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
>done
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
...

On node 2 …

{node2:root}/ # for i in 6 7 8 9 10 12 13 14
>do
>dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
>done
...

Verify concurrent read/write access to the devices by running the dd command from each node at the same time :

 At the same time, on node1 and on node2 :

On node 1 …

{node1:root}/ # for i in 7 8 9 10 11 12 13 14
>do
>dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
>done
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
...
On node 2 …

{node2:root}/ # for i in 6 7 8 9 10 12 13 14
>do
>dd if=/dev/zero of=/dev/rhdisk$i bs=8192 count=25000 &
>done
25000+0 records in.
25000+0 records out.
25000+0 records in.
25000+0 records out.
...



11.3.10 Removing assigned PVID on hdisk

 A PVID MUST NOT BE SET on the hdisks used for ASM, and should not be set for the OCR, voting and ASM spfile disks !!!!

From the following table, make sure that no PVID is assigned, on any node, to the hdisks mapped to the LUN’s.

Disks            LUN’s ID   Device Name seen on        Node 1    Major   Minor   Node 2    Major   Minor
                 Number     node1 (1) and node2 (2)    hdisk     Num.    Num.    hdisk     Num.    Num.
OCR 1            L1         /dev/ocr_disk1             hdisk2    21      6       hdisk1    21      6
OCR 2            L2         /dev/ocr_disk2             hdisk3    21      7       hdisk2    21      7
Voting 1         L3         /dev/voting_disk1          hdisk4    21      8       hdisk3    21      8
Voting 2         L4         /dev/voting_disk2          hdisk5    21      9       hdisk4    21      9
Voting 3         L5         /dev/voting_disk3          hdisk6    21      10      hdisk5    21      10
ASM Spfile Disk  L0         /dev/asmspf_disk           hdisk1    21      5       hdisk0    21      5
Disk 1 for ASM   L6         /dev/rhdisk?               hdisk7    21      11      hdisk6    21      11
Disk 2 for ASM   L7         /dev/rhdisk?               hdisk8    21      12      hdisk7    21      12
Disk 3 for ASM   L8         /dev/rhdisk?               hdisk9    21      13      hdisk8    21      13
Disk 4 for ASM   L9         /dev/rhdisk?               hdisk10   21      14      hdisk9    21      14
Disk 5 for ASM   LA         /dev/rhdisk?               hdisk11   21      15      hdisk10   21      15
Disk 6 for ASM   LB         /dev/rhdisk?               hdisk12   21      16      hdisk12   21      16
Disk 7 for ASM   LC         /dev/rhdisk?               hdisk13   21      17      hdisk13   21      17
Disk 8 for ASM   LD         /dev/rhdisk?               hdisk14   21      18      hdisk14   21      18

To remove the PVID from an hdisk, we use the chdev command : chdev –l hdisk? –a pv=clear

The PVID must be removed from the hdisks on each node.

IMPORTANT !!!!! Do not remove the PVID of hdisks which are not yours !!!!

For the first node, as root user :

{node1:root}/ # for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14
>do
>chdev –l hdisk$i –a pv=clear
>done
hdisk2 changed
hdisk3 changed
hdisk4 changed
hdisk5 changed
...

For the second node, as root user :

{node2:root}/ # for i in 0 2 3 4 5 6 7 8 9 10 12 13 14
>do
>chdev –l hdisk$i –a pv=clear
>done
hdisk2 changed
hdisk3 changed
hdisk4 changed
hdisk5 changed
...

Check with the lspv command, as root on each node, whether any PVID is still assigned !!!
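For example, the following one-liner (a sketch) lists any hdisk that still carries a PVID while not belonging to rootvg; once the PVIDs have been cleared, it should return nothing on each node :

{node1:root}/ # lspv | awk '$2 != "none" && $3 != "rootvg" { print $1, "still has PVID", $2 }'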



11.4 Checking Shared Devices

Checking for reserve_policy, and PVID settings.

Checking that no PVID is assigned to the hdisks, and that no single_path reserve policy is set :

As root user, using the command : lsattr -El hdisk1

{node1:root}/ # lsattr -El hdisk1


PR_key_value none Persistant Reserve Key Value True
cache_method fast_write Write Caching method False
ieee_volname 600A0B80000C54030000022D45F4DF5F IEEE Unique volume name False
lun_id 0x0000000000000000 Logical Unit Number False
max_transfer 0x100000 Maximum TRANSFER Size True
prefetch_mult 1 Multiple of blocks to prefetch on read False
pvid none Physical volume identifier False
q_type simple Queuing Type False
queue_depth 10 Queue Depth True
raid_level 0 RAID Level False
reassign_to 120 Reassign Timeout value True
reserve_policy no_reserve Reserve Policy True
rw_timeout 30 Read/Write Timeout value True
scsi_id 0x690600 SCSI ID False
size 100 Size in Mbytes False
write_cache yes Write Caching enabled False
{node1:root}/ #

To check all hdisks in one shot, use the following shell script :

for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done



For the first node, as root user, run the shell script :

{node1:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done
hdisk1 -> lun_id = 0x0000000000000000 / reserve_policy = no_reserve / pvid = none
hdisk2 -> lun_id = 0x0001000000000000 / reserve_policy = no_reserve / pvid = none
hdisk3 -> lun_id = 0x0002000000000000 / reserve_policy = no_reserve / pvid = none
hdisk4 -> lun_id = 0x0003000000000000 / reserve_policy = no_reserve / pvid = none
hdisk5 -> lun_id = 0x0004000000000000 / reserve_policy = no_reserve / pvid = none
hdisk6 -> lun_id = 0x0005000000000000 / reserve_policy = no_reserve / pvid = none
hdisk7 -> lun_id = 0x0006000000000000 / reserve_policy = no_reserve / pvid = none
hdisk8 -> lun_id = 0x0007000000000000 / reserve_policy = no_reserve / pvid = none
hdisk9 -> lun_id = 0x0008000000000000 / reserve_policy = no_reserve / pvid = none
hdisk10 -> lun_id = 0x0009000000000000 / reserve_policy = no_reserve / pvid = none
hdisk11 -> lun_id = 0x000a000000000000 / reserve_policy = no_reserve / pvid = none
hdisk12 -> lun_id = 0x000b000000000000 / reserve_policy = no_reserve / pvid = none
hdisk13 -> lun_id = 0x000c000000000000 / reserve_policy = no_reserve / pvid = none
{node1:root}/ #

For the second node, as root user, run the same shell script :

{node2:root}/ # for i in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
do
lsattr -El hdisk$i | grep reserve_policy | awk '{print $1,$2 }'| read rp1 rp2
lsattr -El hdisk$i | grep pvid | awk '{print $1,$2 }'| read pv1 pv2
lsattr -El hdisk$i | grep lun_id | awk '{print $1,$2 }'| read li1 li2
if [ "$li1" != "" ]
then
echo hdisk$i' -> '$li1' = '$li2' / '$rp1' = '$rp2' / '$pv1' = '$pv2
fi
done
hdisk0 -> lun_id = 0x0000000000000000 / reserve_policy = no_reserve / pvid = none
hdisk1 -> lun_id = 0x0001000000000000 / reserve_policy = no_reserve / pvid = none
hdisk2 -> lun_id = 0x0002000000000000 / reserve_policy = no_reserve / pvid = none
hdisk3 -> lun_id = 0x0003000000000000 / reserve_policy = no_reserve / pvid = none
hdisk4 -> lun_id = 0x0004000000000000 / reserve_policy = no_reserve / pvid = none
hdisk5 -> lun_id = 0x0005000000000000 / reserve_policy = no_reserve / pvid = none
hdisk6 -> lun_id = 0x0006000000000000 / reserve_policy = no_reserve / pvid = none
hdisk7 -> lun_id = 0x0007000000000000 / reserve_policy = no_reserve / pvid = none
hdisk8 -> lun_id = 0x0008000000000000 / reserve_policy = no_reserve / pvid = none
hdisk9 -> lun_id = 0x0009000000000000 / reserve_policy = no_reserve / pvid = none
hdisk10 -> lun_id = 0x000a000000000000 / reserve_policy = no_reserve / pvid = none
hdisk12 -> lun_id = 0x000b000000000000 / reserve_policy = no_reserve / pvid = none
hdisk13 -> lun_id = 0x000c000000000000 / reserve_policy = no_reserve / pvid = none
{node2:root}/ #



11.5 Recommendations, hints and tips

11.5.1 OCR / Voting disks

!!! IN ANY case, DON’T assign a PVID to the OCR / voting disks once Oracle
Clusterware has been installed, whether in test or in production !!!
Assigning a PVID will erase the hdisk header, with the risk of
losing its content.

AFTER the CRS installation : how to identify the hdisks used as OCR and voting disks.

All hdisks prepared for OCR and voting disks have the dba or oinstall group assigned.

For OCR :

{node1:root}/ # ls -la /dev/ocr*disk*
crw-r----- 1 root dba 20, 6 Mar 12 15:03 /dev/ocr_disk1
crw-r----- 1 root dba 20, 7 Mar 12 15:03 /dev/ocr_disk2
{node1:root}/ #

Then :

{node1:root}/ # ls -la /dev/* |grep "20, 6"
brw------- 1 root system 20, 6 Mar 07 10:31 /dev/hdisk2
crw-r----- 1 root dba 20, 6 Mar 12 15:03 /dev/ocr_disk1
crw------- 1 root system 20, 6 Mar 07 10:31 /dev/rhdisk2
{node1:root}/ #

And using the AIX lquerypv command.

Example with the OCR and its corresponding rhdisk2 on node1 :

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk2|grep 'z{|}'
00000010 C2BA0000 00001000 00012BFF 7A7B7C7D |..........+.z{|}|
{node1:root}/ #
{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk2|grep '00820000 FFC00000 00000000 00000000'
00000000 00820000 FFC00000 00000000 00000000 |................|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk2


00000000 00820000 FFC00000 00000000 00000000 |................|
00000010 C2BA0000 00001000 00012BFF 7A7B7C7D |..........+.z{|}|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #



For Voting :

{node1:root}/ # ls -la /dev/vot*_disk*
crw-r--r-- 1 crs dba 21, 8 Apr 05 14:12 /dev/voting_disk1
crw-r--r-- 1 crs dba 21, 9 Apr 05 14:12 /dev/voting_disk2
crw-r--r-- 1 crs dba 21, 10 Apr 05 14:12 /dev/voting_disk3
{node1:root}/ #

Then :

{node1:root}/ # ls -la /dev/* |grep "21, 8"
brw------- 1 root system 21, 8 Mar 07 10:31 /dev/hdisk4
crw------- 1 root system 21, 8 Mar 07 10:31 /dev/rhdisk4
crw-r--r-- 1 crs dba 21, 8 Apr 05 14:13 /dev/voting_disk1
{node1:root}/ #

And using the AIX lquerypv command.

Example with a voting disk and its corresponding rhdisk4 on node1 :

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk4|grep 'z{|}'
00000010 A4120000 00000200 00095FFF 7A7B7C7D |.........._.z{|}|
{node1:root}/ #
{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk4|grep '00220000 FFC00000 00000000 00000000'
00000000 00220000 FFC00000 00000000 00000000 |."..............|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk4


00000000 00220000 FFC00000 00000000 00000000 |."..............|
00000010 A4120000 00000200 00095FFF 7A7B7C7D |.........._.z{|}|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
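To scan all candidate rhdisks for an OCR or voting disk signature in one pass, a loop like the following can be used. This is only a sketch; the hdisk list and the 'z{|}' pattern are the ones shown in the examples above, and must be adapted to your configuration :

for i in 2 3 4 5 6
do
if lquerypv -h /dev/rhdisk$i | grep 'z{|}' > /dev/null
then
echo "rhdisk$i : carries an OCR or voting disk stamp"
fi
done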



How to free an hdisk at AIX level, when the hdisk is no longer used for the OCR or a voting disk, or needs to be reset after a failed CRS installation ?

You must reset the hdisk header carrying the OCR or voting disk stamp :

{node1:root}/ #dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000 &
25000+0 records in.
25000+0 records out.

THEN a query on the hdisk header will return nothing more than lines full of '0' :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #



11.5.2 ASM disks

!!! IN ANY case, DON’T assign a PVID to the ASM disks once they are in use,
whether in test or in production !!!
Assigning a PVID will erase the hdisk header, with the risk of
losing its content.

AFTER the ASM installation and ASM diskgroup creation : how to identify the hdisks used by ASM.

All hdisks prepared for ASM are owned by the asm user and the dba group :

{node1:root}/ # ls -la /dev/*hdisk* | grep asm
brw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/hdisk7
crw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/rhdisk7
brw-rw---- 1 asm dba 21, 12 Mar 07 10:31 /dev/hdisk8
crw-rw---- 1 asm dba 21, 12 Mar 07 10:31 /dev/rhdisk8
brw-rw---- 1 asm dba 21, 13 Mar 07 10:31 /dev/hdisk9
crw-rw---- 1 asm dba 21, 13 Mar 07 10:31 /dev/rhdisk9
brw-rw---- 1 asm dba 21, 14 Mar 07 10:31 /dev/hdisk10
crw-rw---- 1 asm dba 21, 14 Mar 07 10:31 /dev/rhdisk10
brw-rw---- 1 asm dba 21, 15 Mar 07 10:31 /dev/hdisk11
crw-rw---- 1 asm dba 21, 15 Mar 07 10:31 /dev/rhdisk11
brw-rw---- 1 asm dba 21, 16 Mar 07 10:31 /dev/hdisk12
crw-rw---- 1 asm dba 21, 16 Mar 07 10:31 /dev/rhdisk12
brw-rw---- 1 asm dba 21, 17 Mar 27 16:13 /dev/hdisk13
crw-rw---- 1 asm dba 21, 17 Mar 27 16:13 /dev/rhdisk13
brw-rw---- 1 asm dba 21, 18 Mar 27 16:13 /dev/hdisk14
crw-rw---- 1 asm dba 21, 18 Mar 27 16:13 /dev/rhdisk14
{node1:root}/ #

Or if using virtual devices :

{node1:root}/ # ls -la /dev/ASM*Disk* | grep asm


crw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/ASM_Disk1
crw-rw---- 1 asm dba 21, 12 Mar 07 10:31 /dev/ASM_Disk2
crw-rw---- 1 asm dba 21, 13 Mar 07 10:31 /dev/ASM_Disk3
crw-rw---- 1 asm dba 21, 14 Mar 07 10:31 /dev/ASM_Disk4
crw-rw---- 1 asm dba 21, 15 Mar 07 10:31 /dev/ASM_Disk5
crw-rw---- 1 asm dba 21, 16 Mar 07 10:31 /dev/ASM_Disk6
crw-rw---- 1 asm dba 21, 17 Mar 27 16:13 /dev/ASM_Disk7
crw-rw---- 1 asm dba 21, 18 Mar 27 16:13 /dev/ASM_Disk8
{node1:root}/ #

THEN, for example with /dev/ASM_Disk1, we use the major/minor numbers to identify the equivalent rhdisk :

{node1:root}/ # ls -la /dev/* | grep "21, 11"


brw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/hdisk7
crw-rw---- 1 asm dba 21, 11 Mar 07 10:31 /dev/rhdisk7
crw-rw---- 1 oracle dba 21, 11 Mar 07 10:31 /dev/ASM_Disk1
{node1:root}/ #



And using the AIX lquerypv command.

Example with ASM_Disk1 and its corresponding rhdisk7 on node1 :

ORCLDISK stands for an Oracle ASM disk. ASMDB_GROUP stands for the ASM disk group used for the ASMDB database we have created in our example (the one you will create later …).

{node1:crs}/crs/11.1.0/bin ->lquerypv -h /dev/rhdisk7|grep ORCLDISK
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
{node1:root}/ #

OR

{node1:root}/ # lquerypv -h /dev/rhdisk7
00000000 00820101 00000000 80000001 D12A3D5B |.............@|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0A100000 00010203 41534D44 425F4752 |........ASMDB_GR|
00000050 4F55505F 30303031 00000000 00000000 |OUP_0001........|
00000060 00000000 00000000 41534D44 425F4752 |........ASMDB_GR|
00000070 4F555000 00000000 00000000 00000000 |OUP.............|
00000080 00000000 00000000 41534D44 425F4752 |........ASMDB_GR|
00000090 4F55505F 30303031 00000000 00000000 |OUP_0001........|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F5874B ED6CE000 |...........K.l..|
000000D0 01F588CA 150BA800 02001000 00100000 |................|
000000E0 0001BC80 00001400 00000002 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|
{node1:root}/ #

For an unused ASM disk, a query on the hdisk header will return nothing more than lines full of '0' :

{node1:root}/ # lquerypv -h /dev/rhdisk2


00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #



Example of output with another rhdisk :

ORCLDISK stands for an Oracle ASM disk. ASMDB_FLASHRECOVERY stands for the ASM disk group used for the ASMDB database Flash Recovery Area we have created in our example (the one you will create later …).

THEN the query on the hdisk header will return :

{node1:root}/ #lquerypv -h /dev/rhdisk8
00000000 00820101 00000000 80000000 DEC8B940 |...............@|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 4F52434C 4449534B 00000000 00000000 |ORCLDISK........|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 0A100000 00000103 41534D44 425F464C |........ASMDB_FL|
00000050 41534852 45434F56 4552595F 30303030 |ASHRECOVERY_0000|
00000060 00000000 00000000 41534D44 425F464C |........ASMDB_FL|
00000070 41534852 45434F56 45525900 00000000 |ASHRECOVERY.....|
00000080 00000000 00000000 41534D44 425F464C |........ASMDB_FL|
00000090 41534852 45434F56 4552595F 30303030 |ASHRECOVERY_0000|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 01F5874E 4E47F800 |...........NNG..|
000000D0 01F588CA 14FA8C00 02001000 00100000 |................|
000000E0 0001BC80 00001400 00000002 00000001 |................|
000000F0 00000002 00000002 00000000 00000000 |................|
{node1:root}/ #
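The same kind of loop can report which rhdisks already carry an ASM header (a sketch; adapt the hdisk list to your node) :

for i in 7 8 9 10 11 12 13 14
do
lquerypv -h /dev/rhdisk$i | grep ORCLDISK > /dev/null && echo "rhdisk$i : ORCLDISK header found (ASM disk in use)"
done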

How to free an hdisk at AIX level, when the hdisk is no longer used by ASM, or needs to be reset after a failed installation ?

You must reset the hdisk header carrying the ASM disk stamp :

{node1:root}/ #dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000 &
25000+0 records in.
25000+0 records out.

THEN a query on the hdisk header will return nothing more than lines full of '0' :

{node1:root}/ # lquerypv -h /dev/rhdisk2
00000000 00000000 00000000 00000000 00000000 |................|
00000010 00000000 00000000 00000000 00000000 |................|
00000020 00000000 00000000 00000000 00000000 |................|
00000030 00000000 00000000 00000000 00000000 |................|
00000040 00000000 00000000 00000000 00000000 |................|
00000050 00000000 00000000 00000000 00000000 |................|
00000060 00000000 00000000 00000000 00000000 |................|
00000070 00000000 00000000 00000000 00000000 |................|
00000080 00000000 00000000 00000000 00000000 |................|
00000090 00000000 00000000 00000000 00000000 |................|
000000A0 00000000 00000000 00000000 00000000 |................|
000000B0 00000000 00000000 00000000 00000000 |................|
000000C0 00000000 00000000 00000000 00000000 |................|
000000D0 00000000 00000000 00000000 00000000 |................|
000000E0 00000000 00000000 00000000 00000000 |................|
000000F0 00000000 00000000 00000000 00000000 |................|
{node1:root}/ #

Assigning a PVID to an hdisk used by ASM will have the same effect on the disk header !!!!



12 ORACLE CLUSTERWARE (CRS) INSTALLATION
The Oracle Cluster Ready Services (CRS) installation is mandatory.

Oracle Clusterware will be installed in /crs/11.1.0 ($CRS_HOME, $ORA_CRS_HOME) on each node.

On each node :

• Oracle Clusterware (CRS) will be started, with :
o Public, private and virtual hostnames defined.
o Public and private networks defined.
o OCR and voting disks configured.
o Cluster interconnect configured (private network).
o Virtual IP (VIP) configured and started.
o Oracle Notification Service (ONS) started.
o Global Services Daemon (GSD) started.



12.1 Cluster Verification Utility

12.1.1 Understanding and Using Cluster Verification Utility
NEW since 10gRAC R2 !!!!

Cluster Verification Utility (CVU) is a tool that performs system checks. CVU commands assist you with
confirming that your system is properly configured for :

• Oracle Clusterware
• Oracle Real Application Clusters installation.

Introduction to Installing and Configuring Oracle Clusterware and Oracle Real Application Clusters
http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/intro.htm#i1026198

Oracle Clusterware and Oracle Real Application Clusters Pre-Installation Procedures


http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14201/part2.htm

12.1.2 Using CVU to Determine if Installation Prerequisites are Complete

On both nodes, as the root user, DO run the following script from the Oracle Clusterware Disk1 media :

…/clusterware/Disk1/rootpre/rootpre.sh

→ CVU needs the libraries installed by the rootpre.sh script to run properly.

On node1 :

{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:14:39
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Unconfigured the kernel extension successfully
Unloaded the kernel extension successfully
Saving the original files in /etc/ora_save_08-02-07.10:14:39....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x4245600
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x4245600
The kernel extension was successfuly loaded.

Configuring Asynchronous I/O....
Asynchronous I/O is already defined

Configuring POSIX Asynchronous I/O....
Posix Asynchronous I/O is already defined

Checking if group services should be configured....
Nothing to configure.
{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #

On node2 :

{node2:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_08-02-07.10:15:22
Kernel extension /etc/pw-syscall.64bit_kernel is loaded.
Unloading the existing extension: /etc/pw-syscall.64bit_kernel....

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Unconfigured the kernel extension successfully
Unloaded the kernel extension successfully
Saving the original files in /etc/ora_save_08-02-07.10:15:22....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc

Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation

Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x419f800
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x419f800
The kernel extension was successfuly loaded.

Configuring Asynchronous I/O....
Asynchronous I/O is already defined

Configuring POSIX Asynchronous I/O....
Posix Asynchronous I/O is already defined

Checking if group services should be configured....
Nothing to configure.
{node2:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #



IMPORTANT Extract from :
Oracle® Database Release Notes
10g Release 2 (10.2) for AIX 5L Based Systems (64-Bit)
Part Number B19074-03

This 10g release note applies also to CVU from 11g !!!

→ http://download-uk.oracle.com/docs/cd/B19306_01/relnotes.102/b19074/toc.htm

Third Party Clusterware

• If your deployment environment does not use HACMP, ignore the HACMP version and patches
errors reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2, the expected patch for
HACMP v5.2 is IY60759. On AIX 5L version 5.3, the expected patches for HACMP v5.2 are
IY60759, IY61034, IY61770, and IY62191.

• If your deployment environment does not use GPFS, ignore the GPFS version and patches errors
reported by Cluster Verification Utility (CVU). On AIX 5L version 5.2 and version 5.3, the expected
patches for GPFS 2.3.0.3 are IY63969, IY69911, and IY70276.

Check Kernel Parameter Settings

CVU does not check kernel parameter settings.


→ This issue is tracked with Oracle bug 4565046.

Missing Patch Error Message

→ When CVU finds a missing patch, it reports an "xxxx patch is unknown" error. This should be read as "xxxx patch is missing".
This issue is tracked with Oracle bug 4566437.

Verify GPFS is Installed

Use the following commands to check for GPFS :

cluvfy stage -pre cfs -n node_list -s storageID_list [-verbose]


cluvfy stage -post cfs -n node_list -f file_system [-verbose]

This issue is tracked with Oracle bug 456039.

Oracle Cluster Verification Utility

Cluster Verification Utility (CVU) is a utility that is distributed with Oracle Clusterware 10g. It was developed to assist in
the installation and configuration of Oracle Clusterware as well as Oracle Real Application Clusters 10g (RAC). The
wide domain of deployment of CVU ranges from initial hardware setup through fully operational cluster for RAC
deployment and covers all the intermediate stages of installation and configuration of various components.
Cluster Verification Utility (CVU) download for all supported RAC 10g platforms will be posted as they become
available.

Cluster Verification Utility Frequently Asked Questions (PDF)


http://www.oracle.com/technology/products/database/clustering/cvu/faq/cvu_faq.pdf
Download CVU for Oracle RAC 10g :
http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html
Download CVU readme file for Oracle RAC 10g on AIX5L :
http://www.oracle.com/technology/products/database/clustering/cvu/readme/AIX_readme.pdf
Subject: Shared disk check with the Cluster Verification Utility Doc ID: Note:372358.1



MAKE SURE you have the "unzip" tool, or a symbolic link to unzip, in /usr/bin on both nodes (see the sketch below).
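A minimal sketch of that check (the /opt/freeware path is only an example, point the link at wherever your unzip binary really is) :

# Check whether unzip is reachable from /usr/bin on each node
ls -l /usr/bin/unzip

# If it is missing, create the symbolic link to the installed binary (example path)
ln -s /opt/freeware/bin/unzip /usr/bin/unzip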
On node1, execute the following as the oracle user :

Set up and export your TMP, TEMP and TMPDIR variables, with /tmp (or another destination) having enough free space :

export TMP=/tmp
export TEMP=/tmp
export TMPDIR=/tmp

{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware/cluvfy # ls
cvupack.zip jrepack.zip
{node1:root}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware/cluvfy #

./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

If you want the result in a text file, do the following :

./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose > /tmp/cluvfy_oracle.txt

And analyze the results :

At this stage, node reachability and user equivalence are checked !!!

Performing pre-checks for cluster services setup

Checking node reachability...

→ this is equivalent to a ping on each hostname …

Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1".

Checking user equivalence...

→ this is the ssh or rsh test for the oracle user (tested from node1 to node2, node1 to node1, node2 to node1, and node2 to node2) …

Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "oracle".
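If this check fails, a quick manual sketch of the same test, run as the oracle user on each node (it assumes ssh user equivalence was configured as described earlier in this cookbook) :

# Each command must return the remote date without prompting for a password or passphrase
ssh node1 date
ssh node2 date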
Checking administrative privileges...

→ these are the user and group existence tests on each node …

Check: Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
node1 yes passed
node2 yes passed
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
node1 exists 501
node2 exists 501
Result: Group existence check passed for "oinstall".

The following messages are not a big issue if you get them :

Result: Group existence check failed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
tbas11b yes yes yes no failed
tbas11a yes yes yes no failed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] failed.

→ In that case, just create the oinstall group at AIX level (using smitty as root) and make it the primary group of the oracle user, as in the sketch below.

Administrative privileges check failed
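A minimal sketch of that fix, run as root on each node (the group id 501 matches the one used in this cookbook, adapt it to your environment) :

# Create the oinstall group if it does not exist yet
mkgroup id=501 oinstall

# Make oinstall the primary group of the oracle user, keeping dba as a secondary group
chuser pgrp=oinstall groups=dba,oinstall oracle

# Check the result
lsuser -a pgrp groups oracle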

At this stage, node connectivity is checked !!!

Checking node connectivity...


→ these are the network interface tests on node1 and node2 …

Interface information for node "node1"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
en0 10.3.25.81 10.3.25.0
en1 10.10.25.81 10.10.25.0
en2 20.20.25.81 20.20.25.0

Interface information for node "node2"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
en0 10.3.25.82 10.3.25.0
en1 10.10.25.82 10.10.25.0
en2 20.20.25.82 20.20.25.0

→ node connectivity on each subnet is then checked between node1 and node2, for each network interface …

Check: Node connectivity of subnet "10.3.25.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:en0 node2:en0 yes
Result: Node connectivity check passed for subnet "10.3.25.0" with node(s) node1,node2.

Check: Node connectivity of subnet "10.10.25.0"


Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:en1 node2:en1 yes
Result: Node connectivity check passed for subnet "10.10.25.0" with node(s) node1,node2.

Check: Node connectivity of subnet "20.20.25.0"


Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:en2 node2:en2 yes
Result: Node connectivity check passed for subnet "20.20.25.0" with node(s) node1,node2.
→ suitable interfaces for the VIP (public network) and for the private interconnect are then tested …

Suitable interfaces for VIP on subnet "20.20.25.0":
node1 en2:20.20.25.81
node2 en2:20.20.25.82

Suitable interfaces for the private interconnect on subnet "10.3.25.0":
node1 en0:10.3.25.81
node2 en0:10.3.25.82

Suitable interfaces for the private interconnect on subnet "10.10.25.0":
node1 en1:10.10.25.81
node2 en1:10.10.25.82

Result: Node connectivity check passed.

Don't worry if you get messages such as :

ERROR: Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.

Just carry on (it will not affect the installation).

→ This is due to a CVU issue, as explained below :

Metalink Note ID 338924.1
CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs
Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address that
begins with any of the following octets is non-routable and hence may not be fit for being used as a
VIP: 172.16.x.x, 192.168.x.x, 10.x.x.x.



At this stage, the node system requirements for 'crs' are checked !!!

Checking system requirements for 'crs'...

→ Kernel version test (detecting the AIX release) …

Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 AIX 5.3 AIX 5.2 passed
node2 AIX 5.3 AIX 5.2 passed
Result: Kernel version check passed.

→ System architecture test (detecting the Power processor) …

Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 powerpc powerpc passed
node2 powerpc powerpc passed
Result: System architecture check passed.

→ Checking memory requirements …

Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 2GB (2097152KB) 512MB (524288KB) passed
node2 2GB (2097152KB) 512MB (524288KB) passed
Result: Total memory check passed.

→ Checking swap space requirements …

Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 1024GB (1073741824KB) 1GB (1048576KB) passed
node2 1024GB (1073741824KB) 1GB (1048576KB) passed
Result: Swap space check passed.

→ Checking /tmp free space requirements …

Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 400.35MB (409960KB) 400MB (409600KB) passed
node2 400.35MB (409960KB) 400MB (409600KB) passed
Result: Free disk space check passed.

→ Checking free space requirements in the oracle user home directory …

Check: Free disk space in "/oracle" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node1 4.63GB (4860044KB) 4GB (4194304KB) passed
node2 4.63GB (4860044KB) 4GB (4194304KB) passed
Result: Free disk space check passed.

Checking system requirements for 'crs'... (Continued …)

CVU tests the existence of all the prerequisites needed for all possible implementations :
• RAC implementation
• ASM implementation
• GPFS implementation
• HACMP implementation (concurrent raw devices, or cohabitation of ASM or GPFS with HACMP)

→ Check ONLY the requirements that are necessary for ASM, as explained in chapter 7 of this cookbook.

Check: Package existence for "vacpp.cmp.core:7.0.0.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 vacpp.cmp.core:6.0.0.0 failed
node2 vacpp.cmp.core:6.0.0.0 failed
Result: Package existence check failed for "vacpp.cmp.core:7.0.0.2".

Check: Operating system patch for "IY65361 "
Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 unknown IY65361 failed
node2 unknown IY65361 failed
Result: Operating system patch check failed for "IY65361 ".

Check: Package existence for "vac.C:7.0.0.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 vac.C:7.0.0.2 passed
node2 vac.C:7.0.0.2 passed
Result: Package existence check passed for "vac.C:7.0.0.2".

→ xlC must be at least release 7.0.0.1 : with release 6, Oracle Clusterware will not start …

Check: Package existence for "xlC.aix50.rte:7.0.0.4"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 xlC.aix50.rte:7.0.0.4 passed
node2 xlC.aix50.rte:7.0.0.4 passed
Result: Package existence check passed for "xlC.aix50.rte:7.0.0.4".

Check: Package existence for "xlC.rte:7.0.0.1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 xlC.rte:7.0.0.1 passed
node2 xlC.rte:7.0.0.1 passed
Result: Package existence check passed for "xlC.rte:7.0.0.1".

Checking system requirements for 'crs'... (Continued …)

Check: Package existence for "gpfs.base:2.3.0.3"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 gpfs.base:2.3.0.5 passed
node2 gpfs.base:2.3.0.5 passed
Result: Package existence check passed for "gpfs.base:2.3.0.3".

Check: Operating system patch for "IY63969"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY63969:gpfs.baseIY63969:gpfs.docs.dataIY63969:gpfs.msg.en_US IY63969 passed
node2 IY63969:gpfs.baseIY63969:gpfs.docs.dataIY63969:gpfs.msg.en_US IY63969 passed
Result: Operating system patch check passed for "IY63969".

Check: Operating system patch for "IY69911"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY69911:gpfs.base IY69911 passed
node2 IY69911:gpfs.base IY69911 passed
Result: Operating system patch check passed for "IY69911".

Check: Operating system patch for "IY70276"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY70276:gpfs.base IY70276 passed
node2 IY70276:gpfs.base IY70276 passed
Result: Operating system patch check passed for "IY70276".

Check: Package existence for "cluster.license:5.2.0.0"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "cluster.license:5.2.0.0".

Check: Operating system patch for "IY60759"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 unknown IY60759 failed
node2 unknown IY60759 failed
Result: Operating system patch check failed for "IY60759".

Check: Operating system patch for "IY61034"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY61034:bos.mpIY61034:bos.mp64 IY61034 passed
node2 IY61034:bos.mpIY61034:bos.mp64 IY61034 passed
Result: Operating system patch check passed for "IY61034".

Checking system requirements for 'crs'... (Continued …)

Check: Operating system patch for "IY61770"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY61770:rsct.basic.rteIY61770:rsct.core.errmIY61770:rsct.core.hostrm
IY61770:rsct.core.rmcIY61770:rsct.core.sec
IY61770:rsct.core.sensorrmIY61770:rsct.core.utils IY61770 passed
Node2 IY61770:rsct.basic.rteIY61770:rsct.core.errmIY61770:rsct.core.hostrm
IY61770:rsct.core.rmcIY61770:rsct.core.sec
IY61770:rsct.core.sensorrmIY61770:rsct.core.utils IY61770 passed
Result: Operating system patch check passed for "IY61770".

Check: Operating system patch for "IY62191"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY62191:bos.adt.profIY62191:bos.rte.libpthreads IY62191 passed
node2 IY62191:bos.adt.profIY62191:bos.rte.libpthreads IY62191 passed
Result: Operating system patch check passed for "IY62191".

Check: Package existence for "ElectricFence-2.2.2-1:2.2.2"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "ElectricFence-2.2.2-1:2.2.2".

Check: Package existence for "xlfrte:9.1"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 xlfrte:8.1.1.4 failed
node2 xlfrte:8.1.1.4 failed
Result: Package existence check failed for "xlfrte:9.1".

Check: Package existence for "gdb-6.0-1:6.0"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "gdb-6.0-1:6.0".

Check: Package existence for "make-3.80-1:3.80"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "make-3.80-1:3.80".

Check: Package existence for "freeware.gnu.tar.rte:1.13.0.0"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "freeware.gnu.tar.rte:1.13.0.0".

Check: Package existence for "Java14_64.sdk:1.4.2.1"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "Java14_64.sdk:1.4.2.1".

Check: Package existence for "Java131.rte.bin:1.3.1.16"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "Java131.rte.bin:1.3.1.16".

Checking system requirements for 'crs'... (Continued …)

Check: Package existence for "Java14.sdk:1.4.2.2"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 Java14.sdk:1.4.2.10 failed
node2 Java14.sdk:1.4.2.10 failed
Result: Package existence check failed for "Java14.sdk:1.4.2.2".



Check: Operating system patch for "IY65305"
Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY65305:Java14.sdk IY65305 passed
node2 IY65305:Java14.sdk IY65305 passed
Result: Operating system patch check passed for "IY65305".

Check: Operating system patch for "IY58350"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 unknown IY58350 failed
node2 unknown IY58350 failed
Result: Operating system patch check failed for "IY58350".

Check: Operating system patch for "IY63533"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 unknown IY63533 failed
node2 unknown IY63533 failed
Result: Operating system patch check failed for "IY63533".

Check: Package existence for "mqm.server.rte:5.3"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "mqm.server.rte:5.3".

Check: Package existence for "mqm.client.rte:5.3"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "mqm.client.rte:5.3".

Check: Package existence for "sna.rte:6.1.0.4"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "sna.rte:6.1.0.4".

Check: Package existence for "bos.net.tcp.server"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.net.tcp.server:5.3.0.30 passed
node2 bos.net.tcp.server:5.3.0.30 passed
Result: Package existence check passed for "bos.net.tcp.server".

Check: Operating system patch for "IY44599"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 unknown IY44599 failed
node2 unknown IY44599 failed
Result: Operating system patch check failed for "IY44599".

Checking system requirements for 'crs'... (Continued …)

Check: Operating system patch for "IY60930"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY60930:bos.mpIY60930:bos.mp64 IY60930 passed
node2 IY60930:bos.mpIY60930:bos.mp64 IY60930 passed
Result: Operating system patch check passed for "IY60930".

Check: Operating system patch for "IY58143"


Node Name Applied Required Comment



------------ ------------------------ ------------------------ ----------
node1 IY58143:X11.Dt.libIY58143:X11.base.rteIY58143:bos.acctIY58143:bos.adt.includeI
Y58143:bos.adt.libmIY58143:bos.adt.profIY58143:bos.alt_disk_install.rteIY58143:bos.diag.com
IY58143:bos.mp64IY58143:bos.mpIY58143:bos.net.ewlm.rteIY58143:bos.net.ipsec.keymgt
IY58143:bos.net.ipsec.rteIY58143:bos.net.nfs.clientIY58143:bos.net.nfs.serverIY58143:bos.net.tcp.client
IY58143:bos.net.tcp.serverIY58143:bos.net.tcp.smitIY58143:bos.perf.libperfstatIY58143:bos.perf.perfstat
IY58143:bos.perf.toolsIY58143:bos.rte.archiveIY58143:bos.rte.bind_cmdsIY58143:bos.rte.boot
IY58143:bos.rte.controlIY58143:bos.rte.filesystemIY58143:bos.rte.installIY58143:bos.rte.libc
IY58143:bos.rte.lvmIY58143:bos.rte.manIY58143:bos.rte.methodsIY58143:bos.rte.security
IY58143:bos.rte.serv_aidIY58143:bos.sysmgt.nim.clientIY58143:bos.sysmgt.quotaIY58143:bos.sysmgt.serv_aid
IY58143:bos.sysmgt.sysbrIY58143:devices.chrp.base.rteIY58143:devices.chrp.pci.rteIY58143:devices.chrp.vdevice.rte
IY58143:devices.common.IBM.atm.rteIY58143:devices.common.IBM.ethernet.rteIY58143:devices.common.IBM.fc.rte
IY58143:devices.common.IBM.fda.diagIY58143:devices.common.IBM.mpio.rteIY58143:devices.fcp.disk.rte
IY58143:devices.pci.00100f00.rteIY58143:devices.pci.14100401.diagIY58143:devices.pci.14103302.rte
IY58143:devices.pci.14106602.rteIY58143:devices.pci.14106902.rteIY58143:devices.pci.14107802.rte
IY58143:devices.pci.1410ff01.rteIY58143:devices.pci.22106474.rteIY58143:devices.pci.33103500.rte
IY58143:devices.pci.4f111100.comIY58143:devices.pci.77101223.comIY58143:devices.pci.99172704.rte
IY58143:devices.pci.c1110358.rteIY58143:devices.pci.df1000f7.comIY58143:devices.pci.df1000f7.diag
IY58143:devices.pci.df1000fa.rteIY58143:devices.pci.e414a816.rteIY58143:devices.scsi.disk.rte
IY58143:devices.vdevice.IBM.l-lan.rteIY58143:devices.vdevice.IBM.vscsi.rteIY58143:devices.vdevice.hvterm1.rte
IY58143:devices.vtdev.scsi.rteIY58143:sysmgt.websm.appsIY58143:sysmgt.websm.framework
IY58143:sysmgt.websm.rteIY58143:sysmgt.websm.webaccess IY58143 passed
Node2 IY58143:X11.Dt.libIY58143:X11.base.rteIY58143:bos.acctIY58143:bos.adt.includeI
Y58143:bos.adt.libmIY58143:bos.adt.profIY58143:bos.alt_disk_install.rteIY58143:bos.diag.com
IY58143:bos.mp64IY58143:bos.mpIY58143:bos.net.ewlm.rteIY58143:bos.net.ipsec.keymgt
IY58143:bos.net.ipsec.rteIY58143:bos.net.nfs.clientIY58143:bos.net.nfs.serverIY58143:bos.net.tcp.client
IY58143:bos.net.tcp.serverIY58143:bos.net.tcp.smitIY58143:bos.perf.libperfstatIY58143:bos.perf.perfstat
IY58143:bos.perf.toolsIY58143:bos.rte.archiveIY58143:bos.rte.bind_cmdsIY58143:bos.rte.boot
IY58143:bos.rte.controlIY58143:bos.rte.filesystemIY58143:bos.rte.installIY58143:bos.rte.libc
IY58143:bos.rte.lvmIY58143:bos.rte.manIY58143:bos.rte.methodsIY58143:bos.rte.security
IY58143:bos.rte.serv_aidIY58143:bos.sysmgt.nim.clientIY58143:bos.sysmgt.quotaIY58143:bos.sysmgt.serv_aid
IY58143:bos.sysmgt.sysbrIY58143:devices.chrp.base.rteIY58143:devices.chrp.pci.rteIY58143:devices.chrp.vdevice.rte
IY58143:devices.common.IBM.atm.rteIY58143:devices.common.IBM.ethernet.rteIY58143:devices.common.IBM.fc.rte
IY58143:devices.common.IBM.fda.diagIY58143:devices.common.IBM.mpio.rteIY58143:devices.fcp.disk.rte
IY58143:devices.pci.00100f00.rteIY58143:devices.pci.14100401.diagIY58143:devices.pci.14103302.rte
IY58143:devices.pci.14106602.rteIY58143:devices.pci.14106902.rteIY58143:devices.pci.14107802.rte
IY58143:devices.pci.1410ff01.rteIY58143:devices.pci.22106474.rteIY58143:devices.pci.33103500.rte
IY58143:devices.pci.4f111100.comIY58143:devices.pci.77101223.comIY58143:devices.pci.99172704.rte
IY58143:devices.pci.c1110358.rteIY58143:devices.pci.df1000f7.comIY58143:devices.pci.df1000f7.diag
IY58143:devices.pci.df1000fa.rteIY58143:devices.pci.e414a816.rteIY58143:devices.scsi.disk.rte
IY58143:devices.vdevice.IBM.l-lan.rteIY58143:devices.vdevice.IBM.vscsi.rteIY58143:devices.vdevice.hvterm1.rte
IY58143:devices.vtdev.scsi.rteIY58143:sysmgt.websm.appsIY58143:sysmgt.websm.framework
IY58143:sysmgt.websm.rteIY58143:sysmgt.websm.webaccess IY58143 passed
Result: Operating system patch check passed for "IY58143".

Check: Operating system patch for "IY66513"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY66513:bos.mpIY66513:bos.mp64 IY66513 passed
node2 IY66513:bos.mpIY66513:bos.mp64 IY66513 passed
Result: Operating system patch check passed for "IY66513".

Check: Operating system patch for "IY70159"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY70159:bos.mpIY70159:bos.mp64 IY70159 passed
node2 IY70159:bos.mpIY70159:bos.mp64 IY70159 passed
Result: Operating system patch check passed for "IY70159".

Checking system requirements for 'crs'... (Continued …)

Check: Operating system patch for "IY59386"


Node Name Applied Required Comment
------------ ------------------------ ------------------------ ----------
node1 IY59386:bos.rte.bind_cmds IY59386 passed
node2 IY59386:bos.rte.bind_cmds IY59386 passed
Result: Operating system patch check passed for "IY59386".

Check: Package existence for "bos.adt.base"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.adt.base:5.3.0.30 passed
node2 bos.adt.base:5.3.0.30 passed



Result: Package existence check passed for "bos.adt.base".

Check: Package existence for "bos.adt.lib"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.adt.lib:5.3.0.30 passed
node2 bos.adt.lib:5.3.0.30 passed
Result: Package existence check passed for "bos.adt.lib".

Check: Package existence for "bos.adt.libm"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.adt.libm:5.3.0.30 passed
node2 bos.adt.libm:5.3.0.30 passed
Result: Package existence check passed for "bos.adt.libm".

Check: Package existence for "bos.perf.libperfstat"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.perf.libperfstat:5.3.0.30 passed
node2 bos.perf.libperfstat:5.3.0.30 passed
Result: Package existence check passed for "bos.perf.libperfstat".

Check: Package existence for "bos.perf.perfstat"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.perf.perfstat:5.3.0.30 passed
node2 bos.perf.perfstat:5.3.0.30 passed
Result: Package existence check passed for "bos.perf.perfstat".

Check: Package existence for "bos.perf.proctools"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 bos.perf.proctools:5.3.0.30 passed
node2 bos.perf.proctools:5.3.0.30 passed
Result: Package existence check passed for "bos.perf.proctools".

Check: Package existence for "rsct.basic.rte"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 rsct.basic.rte:2.4.3.0 passed
node2 rsct.basic.rte:2.4.3.0 passed
Result: Package existence check passed for "rsct.basic.rte".

Check: Package existence for "perl.rte:5.0005"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 perl.rte:5.8.2.30 passed
node2 perl.rte:5.8.2.30 passed
Result: Package existence check passed for "perl.rte:5.0005".

Check: Package existence for "perl.rte:5.6"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 perl.rte:5.8.2.30 passed
node2 perl.rte:5.8.2.30 passed
Result: Package existence check passed for "perl.rte:5.6".

Checking system requirements for 'crs'... (Continued …)

Check: Package existence for "perl.rte:5.8"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 perl.rte:5.8.2.30 passed
node2 perl.rte:5.8.2.30 passed
Result: Package existence check passed for "perl.rte:5.8".

Check: Package existence for "python-2.2-4:2.2"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "python-2.2-4:2.2".

Check: Package existence for "freeware.zip.rte:2.3"



Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "freeware.zip.rte:2.3".

Check: Package existence for "freeware.gcc.rte:3.3.2.0"


Node Name Status Comment
------------------------------ ------------------------------ ----------------
node1 missing failed
node2 missing failed
Result: Package existence check failed for "freeware.gcc.rte:3.3.2.0".

Check: Group existence for "dba"


Node Name Status Comment
------------ ------------------------ ------------------------
node1 exists passed
node2 exists passed
Result: Group existence check passed for "dba".

Check: User existence for "nobody"


Node Name Status Comment
------------ ------------------------ ------------------------
node1 exists passed
node2 exists passed
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

Don't worry about the message "Pre-check for cluster services setup was unsuccessful on all the nodes."
→ this is a normal result here, as we do not need all of the APARs and filesets checked by CVU to be installed (only those required for our ASM implementation, see chapter 7).
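Once the prerequisites that really matter for your chosen implementation are fixed, you can simply re-run the check ; a minimal sketch (the hardware/OS stage shown here is an additional, optional check) :

# Re-run the Oracle Clusterware pre-installation check
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

# Optionally, check the basic hardware and operating system setup as well
./runcluvfy.sh stage -post hwos -n node1,node2 -verbose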



12.2 Installation

This installation has to be started from one node only. Once the first node is installed, the Oracle OUI
automatically copies the mandatory files to the other nodes, using the rcp command. This step should not last
long, but in any case do not assume the OUI has stalled : look at the network traffic before cancelling the
installation !

As root user on each node, DO create a symbolic link from /usr/sbin/lsattr to /etc/lsattr :

ln -s /usr/sbin/lsattr /etc/lsattr

"/etc/lsattr" is used by the VIP check action.

On each node :
Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced
libraries from memory !!!

{node1:root}/ # /usr/sbin/slibclean
{node1:root}/ #

Then do the same on node2.

From the first node, as root user, under a VNC client session (or another graphical interface), execute :

{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directories :

{node1:root}/ # chown crs:oinstall /crs
{node1:root}/ # chmod 665 /crs
{node1:root}/ #
{node1:root}/ # chown oracle:oinstall /oracle
{node1:root}/ # chmod 665 /oracle

Login as crs or oracle user (crs in our case) and follow the procedure hereunder…

Set up and export your DISPLAY, TMP and TEMP variables, with /tmp (or another destination) having enough free space, about 500 MB on each node :

{node1:crs}/ # export DISPLAY=node1:1
{node1:crs}/ # export TMP=/tmp
{node1:crs}/ # export TEMP=/tmp
{node1:crs}/ # export TMPDIR=/tmp

IF AIX 5L release 5.3 is used, modify the files oraparam.ini and cluster.ini in Disk1/install, updating the AIX5200 entries to AIX5300 in both files (a sketch follows), and then execute :

/<cdrom_mount_point>/runInstaller

Or execute : ./runInstaller -ignoreSysPrereqs


Make sure to execute rootpre.sh on each node before you go to the next step (if not done yet with CVU).

→ Make an NFS mount of the CRS Disk1 on the other nodes, or remote copy the files to the other nodes, THEN run rootpre.sh on each node !!!

{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ls
cluvfy install rootpre rpm runcluvfy.sh upgrade
doc response rootpre.sh runInstaller stage welcome.html
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware #
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # ./runInstaller
********************************************************************************

Your platform requires the root user to perform certain pre-installation
OS preparation. The root user should run the shell script 'rootpre.sh' before
you proceed with Oracle installation. rootpre.sh can be found at the top level
of the CD or the stage area.

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

********************************************************************************

Has 'rootpre.sh' been run by root? [y/n] (n)

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 190 MB. Actual 1219 MB Passed
Checking swap space: must be greater than 150 MB. Actual 512 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-07_11-04-35AM. Please wait ...
{node1:crs}/distrib/SoftwareOracle/rdbms11gr1/aix/clusterware # Oracle Universal Installer, Version 11.1.0.6.0 Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.

At the OUI Welcome screen :

You can check whether any Oracle product is already installed by clicking on "Installed Products".

Just click Next ...



By default, the Oracle Universal Installer may try to create the /oraInventory directory, but OUI has no rights to do so.

THEN you may get this message. Just click "OK" to modify the Inventory location destination.

Specify Inventory directory and credentials :

• Where you want to create the inventory directory :

/oracle/oraInventory

• Operating system group name should be set to oinstall

Then click Next ... to Continue ...

Specify Home Details :

Specify an ORACLE_HOME name and destination directory for the CRS installation.

The destination directory should be outside of $ORACLE_BASE.

Add Product Languages if necessary.

Then click Next ...



Product-Specific Prerequisite Checks :

All checks should be in status "Succeeded".

Then click Next ...

This step will :

• Check operating system requirements ...
• Check operating system package requirements ...
• Check recommended operating system patches …
• Check physical memory requirements ...
• Check for Oracle Home incompatibilities ....
• Check Oracle Home path for spaces...
• Check Cluster files...
• Check kernel...
• Check uid/gid...
• Check maximum command line length argument, ncargs...
• Check local Cluster Synchronization Services (CSS) status ...
• Check whether Oracle 9.2 RAC is available on all selected nodes
• Check Oracle 9i OCR partition size ...



Details of the prerequisite checks done by runInstaller :

Checking operating system requirements ...
Expected result: One of 5300.05,6100.00
Actual Result: 5300.07
Check complete. The overall result of this check is: Passed
========================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.7.0). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.60). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.7.0). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.7.0). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.7.0). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.7.0). Passed
Checking for rsct.basic.rte(0.0); found rsct.basic.rte(2.4.8.0). Passed
Checking for rsct.compat.clients.rte(0.0); found rsct.compat.clients.rte(2.4.8.0). Passed
Checking for bos.mp64(5.3.0.56); found bos.mp64(5.3.7.1). Passed
Checking for bos.rte.libc(5.3.0.55); found bos.rte.libc(5.3.7.1). Passed
Checking for xlC.aix50.rte(8.0.0.7); found xlC.aix50.rte(9.0.0.1). Passed
Checking for xlC.rte(8.0.0.7); found xlC.rte(9.0.0.1). Passed
Check complete. The overall result of this check is: Passed
========================================================

Checking recommended operating system patches


Checking for IY89080(bos.rte.aio,5.3.0.51); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY92037(bos.rte.aio,5.3.0.52); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY94343(bos.rte.lvm,5.3.0.55); found (bos.rte.lvm,5.3.7.0). Passed
Check complete. The overall result of this check is: Passed
========================================================

Checking physical memory requirements ...


Expected result: 922MB
Actual Result: 2048MB
Check complete. The overall result of this check is: Passed
========================================================

Checking for Oracle Home incompatibilities ....


Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
========================================================

Checking Oracle Home path for spaces...


Check complete. The overall result of this check is: Passed
========================================================

Checking Cluster files...


Check complete. The overall result of this check is: Passed
========================================================

Checking kernel...
Check complete. The overall result of this check is: Passed
========================================================

Checking uid/gid...
Check complete. The overall result of this check is: Passed
========================================================

Checking maxmimum command line length argument, ncarg...


Check complete. The overall result of this check is: Passed
========================================================

Checking local Cluster Synchronization Services (CSS) status ...


Check complete. The overall result of this check is: Passed
========================================================

Checking whether Oracle 9.2 RAC is available on all selected nodes


Check complete. The overall result of this check is: Passed
========================================================

Checking Oracle 9i OCR partition size ...


Check complete. The overall result of this check is: Passed
========================================================



Just to remember !!!

Public, Private, and Virtual Host Name layout :

Network Interface       Host Type          Defined Name   Assigned IP    Observation
en0 (Public Network)    Public Hostname    node1          10.3.25.81     RAC Public node name
en0 (Public Network)    Virtual Hostname   node1-vip      10.3.25.181    RAC VIP node name
en1 (Private Network)   Private Hostname   node1-rac      10.10.25.81    RAC Interconnect node name
en0 (Public Network)    Public Hostname    node2          10.3.25.82     RAC Public node name
en0 (Public Network)    Virtual Hostname   node2-vip      10.3.25.182    RAC VIP node name
en1 (Private Network)   Private Hostname   node2-rac      10.10.25.82    RAC Interconnect node name


Specify Cluster Configuration :

You must :

• Give a unique name to the cluster : crs_cluster

• Edit the default node to validate its entries

• Add the entries for any new node (node2 in our case)

For each cluster node, you must specify, one by one :

• Public Node Name
Public corresponds to the IP address linked to the public network, usually the IP linked to the hostname.
Public Node Name = node1
Public Node Name = node2

• Private Node Name
Private corresponds to the IP address linked to the RAC interconnect.
Private Node Name = node1-rac
Private Node Name = node2-rac

• Virtual Host Name
The Virtual Host Name corresponds to a free, reserved IP address on the public network, to be used for the Oracle Clusterware VIP.
Virtual Host Name = node1-vip
Virtual Host Name = node2-vip

When done !!! click Next ...

If you have problems at this stage when clicking on Next (error messages)
→ check your network configuration and user equivalence.


Just to remember !!!

Public, Private, and Virtual Host Name layout :

Network Interface       Host Type          Defined Name   Assigned IP    Subnet        Observation
en0 (Public Network)    Public Hostname    node1          10.3.25.81     10.3.25.0     RAC Public node name
en0 (Public Network)    Virtual Hostname   node1-vip      10.3.25.181    10.3.25.0     RAC VIP node name
en1 (Private Network)   Private Hostname   node1-rac      10.10.25.81    10.10.25.0    RAC Interconnect node name
en0 (Public Network)    Public Hostname    node2          10.3.25.82     10.3.25.0     RAC Public node name
en0 (Public Network)    Virtual Hostname   node2-vip      10.3.25.182    10.3.25.0     RAC VIP node name
en1 (Private Network)   Private Hostname   node2-rac      10.10.25.82    10.10.25.0    RAC Interconnect node name

 Specify Network Interface Usage


For each entry (en0, en1, en2), click "Edit" to specify the "Interface Type" : which network card corresponds to the
public network, and which one corresponds to the private network (RAC Interconnect).

In our example, with or without a RAC Interconnect backup implementation :

• en0 (10.3.25.0) → must exist as "Public" on each node.

• en1 (10.10.25.0) → must exist as "Private" on each node.

• en2 (20.20.25.0) → "Do Not Use".

• Any other en? … → "Do Not Use".

Check, and then click Next ...



At this stage, you must have already configured the shared storage, at least for :

• 1 Oracle Cluster Registry Disk
• 1 Voting Disk

No Oracle Clusterware redundancy mechanism (OCR and voting disks protected by an external mechanism) :
• /dev/ocr_disk1
• /dev/voting_disk1

OR

• 2 Oracle Cluster Registry Disks (OCR mirroring)
• 3 Voting Disks (voting redundancy)

Oracle Clusterware redundancy mechanism for OCR and voting disks :
• /dev/ocr_disk1
• /dev/ocr_disk2
• /dev/voting_disk1
• /dev/voting_disk2
• /dev/voting_disk3

In our example, we will implement 2 Oracle Cluster Registry Disks and 3 Voting Disks.

ALL these devices must be reachable, in concurrent mode, by all nodes participating in the RAC cluster. You can check their presence and permissions as in the sketch below.
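A minimal verification sketch, run as root on every node (device names as used in this cookbook, adapt them to your own naming) :

# The OCR and voting devices must exist on every node, with the ownership
# and permissions defined earlier in this cookbook
ls -l /dev/ocr_disk1 /dev/ocr_disk2
ls -l /dev/voting_disk1 /dev/voting_disk2 /dev/voting_disk3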



Specify OCR Configuration, by selecting :

• External Redundancy (no OCR mirroring by Oracle Clusterware : protection should be provided by other means, disk subsystem mirroring, etc …)

If "External Redundancy" is selected, specify the raw disk location as follows :
• /dev/ocr_disk1

OR

• Normal Redundancy (OCR mirroring by Oracle Clusterware)

If "Normal Redundancy" is selected, specify the raw disk locations as follows :
• /dev/ocr_disk1
• /dev/ocr_disk2

In our case, we will specify "Normal Redundancy" as described above.

Specify the OCR location : this must be a shared location on the shared storage, reachable from all nodes.
And you must have read/write permissions on this shared location from all nodes.

Then click Next ...

If problems happen at this stage, do verify that the location specified exists, is reachable from each AIX node, and has the right read/write access and user/group owner.



Specify Voting Disk Configuration, by selecting :

• External Redundancy (no voting disk copies managed by Oracle Clusterware : protection should be provided by other means, disk subsystem mirroring, etc …)

If "External Redundancy" is selected, specify the raw disk location as follows :
• /dev/voting_disk1

OR

• Normal Redundancy (voting disk copies managed by Oracle Clusterware)

If "Normal Redundancy" is selected, specify the raw disk locations as follows :
• /dev/voting_disk1
• /dev/voting_disk2
• /dev/voting_disk3

In our case, we will specify "Normal Redundancy" as described above.

Specify the Voting Disk location : this must be a shared location on the shared storage, reachable from all nodes.
And you must have read/write permissions on this shared location from all nodes.

Then click Next ...

If problems happen at this stage, do verify that the location specified exists, is reachable from each AIX node, and has the right read/write access and user/group owner.
Summary :

Check the Cluster Nodes and Remote Nodes lists.

The OUI will install the Oracle CRS software onto the local node, and then copy this information to the other selected nodes.

Check that all nodes are listed in "Cluster Nodes".

Then click Install ...

Install :

The Oracle Universal Installer will proceed with the installation on the first node, then will automatically copy the code to the other selected node(s).

Just wait for the next screen ...



Execute Configuration Scripts :

KEEP THIS WINDOW OPEN

AND do execute the scripts in the following order, waiting for each one to succeed before running the next one !!!

AS root :

1 On node1, execute orainstRoot.sh

2 On node2, execute orainstRoot.sh

3 On node1, execute root.sh

4 On node2, execute root.sh

orainstRoot.sh :

Execute orainstRoot.sh on all nodes.
The file is located in $ORACLE_BASE/oraInventory (oraInventory home) on each node, /oracle/oraInventory in our case.

On node1 as root, execute /oracle/oraInventory/orainstRoot.sh :

{node1:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node1:root}/oracle/oraInventory #

THEN on node2 as root, execute /oracle/oraInventory/orainstRoot.sh :

{node2:root}/oracle/oraInventory # ./orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete
{node2:root}/oracle/oraInventory #



Before running root.sh script on each node, please check the following :

Check if your public network card (en0 in our case) is a standard network adapter or a virtual Ethernet adapter.

Issue the following command as root :

{node1:root} / -> entstat -d en0

You should get the following if en0 is a normal (physical) network interface on each node (node1, node2) :

-------------------------------------------------------------
ETHERNET STATISTICS (en0) :
Device Type: 2-Port Gigabit Ethernet-SX PCI-X Adapter (14108802)
Hardware Address: 00:09:6b:ee:61:fc
Elapsed Time: 0 days 20 hours 3 minutes 49 seconds

Transmit Statistics: Receive Statistics:
-------------------- -------------------
Packets: 0 Packets: 0
Bytes: 0 Bytes: 0
Interrupts: 0 Interrupts: 0
Transmit Errors: 0 Receive Errors: 0
Packets Dropped: 0 Packets Dropped: 0
Bad Packets: 0
Max Packets on S/W Transmit Queue: 1
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 1

Broadcast Packets: 0 Broadcast Packets: 0


Multicast Packets: 0 Multicast Packets: 0
No Carrier Sense: 0 CRC Errors: 0
DMA Underrun: 0 DMA Overrun: 0
Lost CTS Errors: 0 Alignment Errors: 0
Max Collision Errors: 0 No Resource Errors: 0
Late Collision Errors: 0 Receive Collision Errors: 0
Deferred: 0 Packet Too Short Errors: 0
SQE Test: 0 Packet Too Long Errors: 0
Timeout Errors: 0 Packets Discarded by Adapter: 0
Single Collision Count: 0 Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 1

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 2000
Driver Flags: Up Broadcast Simplex
Limbo 64BitSupport ChecksumOffload
PrivateSegment LargeSend DataRateSet

2-Port Gigabit Ethernet-SX PCI-X Adapter (14108802) Specific Statistics:


--------------------------------------------------------------------
Link Status : Up
Media Speed Selected: Auto negotiation
Media Speed Running: Unknown
PCI Mode: PCI-X (100-133)
PCI Bus Width: 64-bit
Latency Timer: 144
Cache Line Size: 128
Jumbo Frames: Disabled
TCP Segmentation Offload: Enabled
TCP Segmentation Offload Packets Transmitted: 0
TCP Segmentation Offload Packet Errors: 0
Transmit and Receive Flow Control Status: Disabled
Transmit and Receive Flow Control Threshold (High): 45056
Transmit and Receive Flow Control Threshold (Low): 24576
Transmit and Receive Storage Allocation (TX/RX): 16/48

You should get the following if en0 (the public network interface) is a normal network interface :

{node1:root}/ # entstat -d en0 | grep -iE ".*link.*status.*:.*up.*"
Link Status : Up
{node1:root}/ #

IF so, go to the next step and execute the root.sh script.



OR you should get the following if en0 is a virtual network interface (Virtual I/O Ethernet Adapter) :

{node1:root} / -> entstat -d en0
ETHERNET STATISTICS (en0) :
Device Type: Virtual I/O Ethernet Adapter (l-lan)
Hardware Address: ee:51:60:00:10:02
Elapsed Time: 0 days 20 hours 3 minutes 24 seconds

Transmit Statistics: Receive Statistics:
-------------------- -------------------
Packets: 136156 Packets: 319492
Bytes: 19505561 Bytes: 142069339
Interrupts: 0 Interrupts: 285222
Transmit Errors: 0 Receive Errors: 0
Packets Dropped: 0 Packets Dropped: 0
Bad Packets: 0
Max Packets on S/W Transmit Queue: 0
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 0

Broadcast Packets: 208 Broadcast Packets: 241831


Multicast Packets: 2 Multicast Packets: 0
No Carrier Sense: 0 CRC Errors: 0
DMA Underrun: 0 DMA Overrun: 0
Lost CTS Errors: 0 Alignment Errors: 0
Max Collision Errors: 0 No Resource Errors: 0
Late Collision Errors: 0 Receive Collision Errors: 0
Deferred: 0 Packet Too Short Errors: 0
SQE Test: 0 Packet Too Long Errors: 0
Timeout Errors: 0 Packets Discarded by Adapter: 0
Single Collision Count: 0 Receiver Start Count: 0
Multiple Collision Count: 0
Current HW Transmit Queue Length: 0

General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 0
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex 64BitSupport DataRateSet

Virtual I/O Ethernet Adapter (l-lan) Specific Statistics:


---------------------------------------------------------
RQ Length: 4481
No Copy Buffers: 0
Trunk Adapter: False
Filter MCast Mode: False
Filters: 255 Enabled: 1 Queued: 0 Overflow: 0
LAN State: Operational

Buffers Reg Alloc Min Max MaxA LowReg


tiny 512 512 512 2048 512 509
small 512 512 512 2048 553 502
medium 128 128 128 256 128 128
large 24 24 24 64 24 24
huge 24 24 24 64 24 24

OR you should get the following if en0 is a virtual network interface :

{node1:root}/ # entstat -d en0 | grep -iE ".*lan state:.*operational.*"
LAN State: Operational
{node1:root}/ #

IF so, don't worry : the 11g racgvip script has been updated to take care of this case.

If your copy of the script still contains the old test, you will have to modify the following in the $CRS_HOME/bin/racgvip script (see the sketch below to check which pattern your copy uses) :

$ENTSTAT -d $_IF | $GREP -iEq ".*link.*status.*:.*up.*"

to be replaced by :

$ENTSTAT -d $_IF | $GREP -iEq ".*lan state:.*operational.*"

With 10g RAC this was known as "Bug 4437469: RACGVIP NOT WORKING ON SERVER WITH ETHERNET
VIRTUALISATION" (the change was only required when using AIX Virtual Interfaces for the Oracle Database 10.2.0.1 RAC
public network) → it is fixed since 10g RAC release 2 patchset 10.2.0.3 …

At this stage, you should execute the "root.sh" script :

Start with node1 and wait for the result before executing it on node2.

This file is located in the $CRS_HOME directory on each node (/crs/11.1.0 in our case ; the rootconfig.sh sub-script it calls is under /crs/11.1.0/install).

ONLY for information : the root.sh script executes two sub-scripts, one of which is the rootconfig.sh script, which contains interesting information worth having a look at.

DO NOT modify the file before running the root.sh script unless it is necessary !!!
If you have to make a modification, do it on all nodes !!!

What the rootconfig.sh script (executed by root.sh) does :

# rootconfig.sh for Oracle CRS homes


#
# This is run once per node during the Oracle CRS install.
# This script does the following:
# 1) Stop if any GSDs are running from 9.x oracle homes
# 2) Initialize new OCR device or upgrade the existing OCR device
# 3) Setup OCR for running CRS stack
# 4) Copy the CRS init script to init.d for init process to start
# 5) Start the CRS stack
# 6) Configure NodeApps if CRS is up and running on all nodes

Variables used by the root.sh script : the values are the result of your inputs in the Oracle Universal Installer for Oracle Clusterware.

You can check the values to see if they are OK.

SILENT=false
ORA_CRS_HOME=/crs/11.1.0
CRS_ORACLE_OWNER=crs
CRS_DBA_GROUP=dba
CRS_VNDR_CLUSTER=false
CRS_OCR_LOCATIONS=/dev/ocr_disk1,/dev/ocr_disk2
CRS_CLUSTER_NAME=crs
CRS_HOST_NAME_LIST=node1,1,node2,2
CRS_NODE_NAME_LIST=node1,1,node2,2
CRS_PRIVATE_NAME_LIST=node1-rac,1,node2-rac,2
CRS_LANGUAGE_ID='AMERICAN_AMERICA.WE8ISO8859P1'
CRS_VOTING_DISKS=/dev/voting_disk1,/dev/voting_disk2,/dev/voting_disk3
CRS_NODELIST=node1,node2
CRS_NODEVIPS='node1/node1-vip/255.255.255.0/en0,node2/node2-vip/255.255.255.0/en0'



FIRST, on node1 :
As root, execute /crs/11.1.0/root.sh

When finished, the CSS daemon should be active on node1.

Check for the line "Cluster Synchronization Services is active on these nodes." followed by :

• node1

{node1:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory


Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
Now formatting voting device: /dev/voting_disk1
Now formatting voting device: /dev/voting_disk2
Now formatting voting device: /dev/voting_disk3
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
Cluster Synchronization Services is inactive on these nodes.
node2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
{node1:root}/crs/11.1.0 #

Don't worry about "WARNING: directory '/crs' is not owned by root"
è This message can safely be ignored.

IF CSS is not active at the end of the root.sh script :

è Check your network and shared disks configuration, and the owner and access permissions (read/write) on the OCR and Voting disks from each participating node. Then execute the root.sh script again on the node having the problem.
è If CSS starts on one node, but not on the others, check the shared disks (OCR/Voting) for concurrent read/write access from all nodes, using the unix dd command (see the sketch after this list).
è If ASM or GPFS is implemented with HACMP installed and configured for purposes other than having the database on concurrent raw devices, you must declare the disk resources in HACMP to be able to start the CRS (CSS).
è If ASM or GPFS is implemented, and HACMP is installed but not used at all, THEN remove HACMP or declare the disk resources in HACMP to be able to start the CRS (CSS).
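A minimal sketch of such a concurrent-access check with dd (device names are the ones used in this cookbook; run the same commands from every node, they should all complete without error). The zeroing commands shown further down in this chapter also act as a write test, but only use them when you intend to wipe the OCR/Voting contents :

{node1:root}/ # dd if=/dev/ocr_disk1 of=/dev/null bs=1024 count=100
{node1:root}/ # dd if=/dev/ocr_disk2 of=/dev/null bs=1024 count=100
{node1:root}/ # dd if=/dev/voting_disk1 of=/dev/null bs=1024 count=100
(repeat the same read test as root on node2)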



 The IBM AIX clustering layer, HACMP filesets, MUST NOT be installed if you’ve chosen an
implementation without HACMP. If this layer is implemented for another purpose, the disk resources necessary to
install and run CRS data will have to be part of an HACMP volume group resource.
If you have previously installed HACMP, you must remove :
 HACMP filesets (cluster.es.*)
 rsct.hacmp.rte
 rsct.compat.basic.hacmp.rte
 rsct.compat.clients.hacmp.rte
If you already ran a first installation of the Oracle Clusterware (CRS) with HACMP installed,
è check whether the /opt/ORCLcluster directory exists and, if so, remove it on all nodes.

TO BE ABLE TO RUN AGAIN the root.sh script on the node, you must :

Either
• Clean the failed CRS installation, and start again the CRS installation procedure.
è Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install (same procedure for 10g and
11g)
è This is the only method supported by Oracle.

OR
• Do the following just to find and solve the problem without reinstalling at each try. When the problem is solved, follow Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install to clean the system properly, and start the installation again as supported by Oracle.

As root user on each node :

Do execute :
{node1:root}/ # $CRS_HOME/bin/crsctl stop crs
è to clean any remaining CRS daemons
{node1:root}/ # rmitab h1
è this will remove the Oracle CRS entries in /etc/inittab
{node1:root}/ # rmitab h2
{node1:root}/ # rmitab h3
{node1:root}/ # rm -Rf /opt/ORCL*

Kill the remaining processes reported by : ps -ef | grep crs and ps -ef | grep d.bin

{node1:root}/ # rm -R /etc/oracle/*

OR execute {node1:root}/ # $CRS_HOME/install/rootdelete.sh è on each node

Change owner, group (oracle:dba) and permission (660) for the /dev/ocr_disk* and /dev/voting_disk* devices,
THEN, on each node of the cluster (node1 and node2) …

For all OCR and Voting disks : erase the OCR and Voting disks content è format (zero) the disks from one node :

{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk1 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/ocr_disk2 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk1 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk2 bs=1024 count=300 &
{node1:root}/ # dd if=/dev/zero of=/dev/voting_disk3 bs=1024 count=300 &



 THEN, on node2 :
As root, execute /crs/11.1.0/root.sh

When finished, the CSS daemon should be active on nodes 1 and 2. You should have the following final result :

CSS is active on these nodes.
• node1
• node2

 If CSS is not active on all nodes, or on one of the nodes, this means that you
could have a problem with the network configuration, or the shared disks configuration for
accessing OCR and Voting Disks.

è Check your network and shared disks configuration, and the owner and access permissions (read/write)
on the OCR and Voting disks from each participating node. Then execute the root.sh script again on the
node having the problem.

Also check the following command as the crs user from each node :

{node1:root}/ # su - crs
{node1:crs}/crs/11.1.0 # /crs/11.1.0/bin/olsnodes
node1
node2
{node1:crs}/crs/11.1.0 # rsh node2
{node2:crs}/crs/11.1.0 # olsnodes
node1
node2
{node2:crs}/crs/11.1.0 #



 THEN On node2
As root, execute /crs/11.1.0/root.sh

When finished, the CSS daemon should be active on nodes 1 and 2.

Check for the lines "CSS is active on these nodes."

• node1
• node2

{node2:root}/crs/11.1.0 # ./root.sh
WARNING: directory '/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory


Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/crs' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-rac node1
node 2: node2 node2-rac node2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.


-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
node1
node2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...


Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...

Done.
{node2:root}/crs/11.1.0 #

Don't worry about "WARNING: directory '/crs' is not owned by root"
è This message can safely be ignored.



 On the second node, at he end of the root.sh script :

You should have the following lines :

...

Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...


Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...

Done.

{node2:root}/crs/11.1.0 #

IF NOT, Check for the line :

...

Running vipca(silent) for configuring nodeapps

"en0 is not public. Public interfaces should be used to configure virtual IPs”,

{node2:root}/crs/11.1.0 #

If you have :  

"en0 is not public. Public interfaces should be used to configure virtual IPs”

THEN
• Read the next pages to understand and solve it !!!
• RUN VIPCA manually as explained on the next page !!!

OTHERWISE, skip the next VIPCA pages …



At this stage, if you get "en0 is not public. Public interfaces should be used to configure virtual IPs" following the execution of root.sh on the last node (node2 in our case), THEN read the following :

Note 316583.1 – VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC

(The VIP should have been configured in silent mode by the root.sh script executed on node2 è this is not the case here.)

Symptoms
During CRS install, while running root.sh, the following messages are displayed :
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "en0" is not public. Public interfaces should be used to configure virtual IPs.

Cause
When verifying the IP addresses, VIP uses calls to determine if an IP address is valid or not. In this case, VIP finds that the IPs are non-routable (for example IP addresses like 192.168.* and 10.10.*). Oracle is aware that these IPs can be made public, but since such IPs are mostly used for private networks, it displays this error message.

Solution
The workaround is to re-run vipca manually as root
# ./vipca
or add the VIP using srvctl add nodeapps
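For reference, the srvctl alternative mentioned in the note would look like the sketch below, built from the VIP definitions used in this cookbook (run as root, one command per node); this cookbook follows the vipca path described next :

{node1:root}/crs/11.1.0/bin # srvctl add nodeapps -n node1 -A node1-vip/255.255.255.0/en0
{node1:root}/crs/11.1.0/bin # srvctl add nodeapps -n node2 -A node2-vip/255.255.255.0/en0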

è YOU MUST CONFIGURE THE VIP by running the vipca script as root user.

On the first or second node, as root user, you must set up the DISPLAY before running the vipca script located in /crs/11.1.0/bin :

{node1:root}/ # export DISPLAY
{node1:root}/ # cd $CRS_HOME/bin
{node1:root}/crs/11.1.0/bin # ./vipca

The VIPCA "Welcome" graphical screen will appear.

Then click Next ...



Just to remember !!! Public, Private, and Virtual Host Name layout

                             Public   VIP    RAC Interconnect   RAC Interconnect Backup
Network card on each node    en0      en0    en1                en2

1 of 2 : Select one and only one network interface.

Select the network interface corresponding to the Public Network.
Remember that each public network card on each node must have the same name, "en0" for example in our case.

en1 is the RAC Interconnect, or private network.

Please check with "ifconfig -a" on each node as root.

Select "en0" in our case.

Then click Next ...

Just to remember !!! Public, Private, and Virtual Host Name layout

Public (en0)               VIP (en0)                    RAC Interconnect / Private Network (en1)
Node Name   IP             Node Name   IP               Node Name   IP
node1       10.3.25.81     node1-vip   10.3.25.181      node1-rac   10.10.25.81
node2       10.3.25.82     node2-vip   10.3.25.182      node2-rac   10.10.25.82

2 of 2 :

In the "Virtual IPs for cluster nodes" screen, you must provide the VIP node name for node1 and press the TAB key to automatically fill in the rest.

Check the validity of the entries before proceeding.

Then click Next ...



The Summary screen will appear; please validate the entries, or go back to modify them.

Then click Finish ...

The VIP Configuration Assistant will proceed with the creation, configuration and startup of all application resources on all selected nodes.

VIP, GSD and ONS will be the application resources to be created.

Wait while progressing ...

If you don't get any errors, you'll be prompted to click OK once the configuration is 100% completed.

Then click OK ...

Check the configuration results.

Then click Exit ...



 Using “ifconfig –a” on each node, check that each network card configured for Public network is mapping a virtual IP.

On node 1 :

Oracle Clusterware VIP for node1 is assigned/mapped to network interface en0, corresponding to the public network. The VIP from node1 is 10.3.25.181.

{node1:root}/ # ifconfig -a
en0: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.3.25.81 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en1: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.10.25.81 netmask 0xffffff00 broadcast 10.10.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en2: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 20.20.25.81 netmask 0xffffff00 broadcast 20.20.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
        tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node1:root}/ #



On node 2 :

Oracle Clusterware VIP for node2 is assigned/mapped to network interface en0, corresponding to the public network. The VIP from node2 is 10.3.25.182.

{node2:root}/ # ifconfig -a
en0: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en1: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en2: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
        tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #

 Using “ifconfig –a” on each node, check that each network card configured for Public network is mapping a virtual IP.



If node 1 is rebooted :

On node1 failure or reboot, the VIP from node1 remains reachable: on node2 we will see the VIP from node1.

Oracle Clusterware VIP for node1 will be switched to node2, and will still be assigned/mapped to network interface en0 corresponding to the public network :
VIP from node1 is 10.3.25.181
VIP from node2 is 10.3.25.182

When node1 comes back to normal operations, the VIP from node1, temporarily hosted on node2, will switch back to its home node, node1.

THEN on node 2 :

{node2:root}/ # ifconfig -a
en0: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.3.25.82 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.182 netmask 0xffffff00 broadcast 10.3.25.255
        inet 10.3.25.181 netmask 0xffffff00 broadcast 10.3.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en1: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 10.10.25.82 netmask 0xffffff00 broadcast 10.10.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
en2: flags=1e080863,80<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD,CHAIN>
        inet 20.20.25.82 netmask 0xffffff00 broadcast 20.20.25.255
        tcp_sendspace 131072 tcp_recvspace 65536
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
        tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{node2:root}/ #
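A simple way to confirm from the network side that the node1 VIP remains reachable while node1 is down (hostnames and addresses are those used in this cookbook; run from the surviving node or any client that resolves the VIP names) :

{node2:root}/ # ping -c 3 node1-vip
{node2:root}/ # ping -c 3 10.3.25.181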



Coming back to this
previous screen.

Just click OK to
continue ...

Configuration Assistants :

3 configuration assistants will be automatically executed :

• "Oracle Notification Server Configuration Assistant"
• "Oracle Private Interconnect Configuration Assistant"
• "Oracle Cluster Verification Utility"

Then click Next ...

If successful, the next screen "End of Installation" will appear automatically !!!
If not, check the results and make sure each assistant completed successfully.


Output generated from configuration assistants …

Output generated from configuration assistant "Oracle Notification Server Configuration Assistant":
Command = /crs/11.1.0/install/onsconfig add_config node1:6251 node2:6251
The ONS configuration is created successfully
Stopping ONS resource 'ora.node1.ons'
Attempting to stop `ora.node1.ons` on member `node1`
Stop of `ora.node1.ons` on member `node1` succeeded.
The resource ora.node1.ons stopped successfully for restart
Attempting to start `ora.node1.ons` on member `node1`
Start of `ora.node1.ons` on member `node1` succeeded.
The resource ora.node1.ons restarted successfully
Stopping ONS resource 'ora.node2.ons'
Attempting to stop `ora.node2.ons` on member `node2`
Stop of `ora.node2.ons` on member `node2` succeeded.
The resource ora.node2.ons stopped successfully for restart
Attempting to start `ora.node2.ons` on member `node2`
Start of `ora.node2.ons` on member `node2` succeeded.
The resource ora.node2.ons restarted successfully

Configuration assistant "Oracle Notification Server Configuration Assistant" succeeded


-----------------------------------------------------------------------------
Output generated from configuration assistant "Oracle Private Interconnect Configuration Assistant":
Command = /crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect

Configuration assistant "Oracle Private Interconnect Configuration Assistant" succeeded


-----------------------------------------------------------------------------
Output generated from configuration assistant "Oracle Cluster Verification Utility":
Command = /crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2

Performing post-checks for cluster services setup

Checking node reachability...


Node reachability check passed from node "node1".

Checking user equivalence...


User equivalence check passed for user "crs".

Checking Cluster manager integrity...

Checking CSS daemon...


Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...

Cluster integrity check passed

Checking OCR integrity...

Checking the absence of a non-clustered configuration...


All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...


OCR of correct Version "2" exists.

Checking data integrity of OCR...


Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...


Liveness check passed for "CRS daemon".

Checking daemon liveness...


Liveness check passed for "CSS daemon".

Checking daemon liveness...


Liveness check passed for "EVM daemon".

Checking CRS health...


CRS health check passed.

CRS integrity check passed.

Checking node application existence...

Checking existence of VIP node application (required)


Check passed.

Checking existence of ONS node application (optional)


Check passed.

Checking existence of GSD node application (optional)


Check passed.

Post-check for cluster services setup was successful.

Configuration assistant "Oracle Cluster Verification Utility" succeeded


-----------------------------------------------------------------------------
The "/crs/11.1.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the
configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note
that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------

If all or some of the assistants failed or were not executed, check for problems in the log
/oracle/oraInventory/logs/installActions?????.log (as shown in the runInstaller window) and solve them.

Check also /crs/11.1.0/cfgtoollogs/configToolFailedCommands

{node1:crs}/crs/11.1.0/cfgtoollogs # cat configToolAllCommands


# Copyright (c) 1999, 2005, Oracle. All rights reserved.
/crs/11.1.0/bin/racgons add_config node1:6200 node2:6200
/crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect
/crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2
{node1:crs}/crs/11.1.0/cfgtoollogs #

If you missed those steps or closed the previous runInstaller screen, you will have to run them manually before
moving to the next step. Just adapt the lines to your own settings (node names, public/private network).

On one node as crs user :

For "Oracle Notification Server Configuration Assistant" è
/crs/11.1.0/bin/racgons add_config node1:6200 node2:6200

For "Oracle Private Interconnect Configuration Assistant" è
/crs/11.1.0/bin/oifcfg setif -global en0/10.3.25.0:public en1/10.10.25.0:cluster_interconnect

For "Oracle Cluster Verification Utility" è
/crs/11.1.0/bin/cluvfy stage -post crsinst -n node1,node2

End of Installation

Then click Exit ...



12.3 Post Installation operations

12.3.1 Update the Clusterware unix user .profile

 To be done on each node for crs (in our case) or oracle unix user.

vi the $HOME/.profile file in the crs user's home directory. Add the entries shown in bold blue color :

PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.

export PATH

if [ -s "$MAIL" ] # This is at Shell startup. In normal


then echo "$MAILMSG" # operation, the Shell checks
fi # periodically.

ENV=$HOME/.kshrc
export ENV

#The following line is added by License Use Management installation


export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin

export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ORACLE_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORA_CRS_HOME
export LD_LIBRARY_PATH=$CRS_HOME/lib:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$CRS_HOME/bin:$PATH

if [ -t 0 ]; then
stty intr ^C
fi

Disconnect from the crs user, and reconnect to load the modified $HOME/.profile

Content of the .kshrc file :

{node1:crs}/home/crs # cat .kshrc
export VISUAL=vi
export PS1='{'$(hostname)':'$LOGIN'}$PWD # '
{node1:crs}/home/crs #
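Once reconnected, a quick check that the new environment has been picked up (a simple sketch; the variables are those set in the .profile above) :

{node1:crs}/home/crs # env | grep -E "CRS_HOME|ORACLE_HOME|ORACLE_BASE"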



12.3.2 Verify parameter CSS misscount

MISSCOUNT DEFINITION AND DEFAULT VALUES

The CSS misscount parameter represents the maximum time, in seconds, that a heartbeat can be missed before
entering into a cluster reconfiguration to evict the node. The following are the default values for the misscount
parameter and their respective versions when using Oracle Clusterware*:

10gR1 & 10gR2 :
   Linux     60 seconds
   Unix      30 seconds
   VMS       30 seconds
   Windows   30 seconds

* The CSS misscount default value when using vendor (non-Oracle) clusterware is 600 seconds. This is to allow the vendor clusterware ample time to resolve any possible split brain scenarios.

Subject: CSS Timeout Computation in RAC 10g (10g Release 1 and 10g Release 2)   Doc ID: Note:294430.1

Check css misscount :

{node1:root}/crs/11.1.0 # crsctl get css misscount
Configuration parameter misscount is not defined.
è we should have a defined value

Check css disktimeout :

{node1:root}/crs/11.1.0 # crsctl get css disktimeout
200

Check css reboottime :

{node1:root}/crs/11.1.0 # crsctl get css reboottime
3

To compute the right values, do read Metalink note 294430.1, and use the following note to change the value :
Subject: 10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout   Doc ID: Note:284752.1

Set css misscount :

Keep only one node up and running, stop the others.
Backup the content of your OCR !!!

Modify the CSS parameter with the crsctl command as root user, and check it :

{node1:root}/crs/11.1.0 # crsctl set css misscount 30
Configuration parameter misscount is now set to 30.

{node1:root}/crs/11.1.0 # crsctl get css misscount
30

Restart all other nodes !!!
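If, after reading the two notes above, you also need to adjust disktimeout or reboottime, the same crsctl syntax applies; this is only a sketch reusing the default values read earlier (not a recommendation), to be done with only one node up, as for misscount :

{node1:root}/crs/11.1.0 # crsctl set css disktimeout 200
{node1:root}/crs/11.1.0 # crsctl set css reboottime 3
{node1:root}/crs/11.1.0 # crsctl get css disktimeout
{node1:root}/crs/11.1.0 # crsctl get css reboottime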



12.3.3 Cluster Ready Services Health Check

Check CRS processes on each node :

{node1:crs}/crs # ps -ef|grep crs


crs 90224 393346 1 Mar 07 - 25:17 /crs/11.1.0/bin/ocssd.bin
crs 188644 237604 0 Mar 07 - 0:05 /crs/11.1.0/bin/evmlogger.bin -o
/crs/11.1.0/evm/log/evmlogger.info -l /crs/11.1.0/evm/log/evmlogger.log
crs 225408 246014 0 Mar 07 - 1:57 /crs/11.1.0/bin/oclsomon.bin
crs 237604 327796 0 Mar 07 - 1:22 /crs/11.1.0/bin/evmd.bin
crs 246014 262190 0 Mar 07 - 0:00 /bin/sh -c cd
/crs/11.1.0/log/node1/cssd/oclsomon; ulimit -c unlimited; /crs/11.1.0/bin/oclsomon || exit $?
root 282844 290996 5 Mar 07 - 251:00 /crs/11.1.0/bin/crsd.bin reboot
root 290996 1 0 Mar 07 - 0:00 /bin/sh /etc/init.crsd run
root 315392 331980 0 Mar 07 - 0:21 /crs/11.1.0/bin/oprocd run -t 1000 -m 500
-f
crs 393346 319628 0 Mar 07 - 0:00 /bin/sh -c ulimit -c unlimited; cd
/crs/11.1.0/log/node1/cssd; /crs/11.1.0/bin/ocssd || exit $?
crs 397508 1 0 0:00 <defunct>
asm 544778 1 0 Mar 10 - 0:00 /crs/11.1.0/bin/oclskd.bin
crs 622672 1 0 Mar 15 - 0:00 /crs/11.1.0/opmn/bin/ons -d
crs 626764 622672 0 Mar 15 - 0:01 /crs/11.1.0/opmn/bin/ons -d
rdbms 4673770 1 0 Mar 15 - 0:00 /crs/11.1.0/bin/oclskd.bin
crs 4681948 4767816 0 16:34:59 pts/0 0:00 grep crs
crs 4767816 675942 0 16:06:06 pts/0 0:00 -ksh
crs 4796524 4767816 7 16:34:59 pts/0 0:00 ps -ef
{node1:crs}/crs # ps -ef|grep d.bin
crs 90224 393346 1 Mar 07 - 25:17 /crs/11.1.0/bin/ocssd.bin
crs 237604 327796 0 Mar 07 - 1:22 /crs/11.1.0/bin/evmd.bin
root 282844 290996 7 Mar 07 - 251:00 /crs/11.1.0/bin/crsd.bin reboot
asm 544778 1 0 Mar 10 - 0:00 /crs/11.1.0/bin/oclskd.bin
rdbms 4673770 1 0 Mar 15 - 0:00 /crs/11.1.0/bin/oclskd.bin
crs 4796528 4767816 0 16:35:11 pts/0 0:00 grep d.bin
{node1:crs}/crs #

You have completed the CRS install. Now you want to verify if the install is valid.
To Ensure that the CRS install on all the nodes is valid, the following should be checked on all the nodes.

1. Ensure that you have successfully completed running root.sh on all nodes during the install. (Please
do not re-run root.sh, this is very dangerous and might corrupt your installation; the object of this step is only to
confirm that root.sh ran successfully after the install.)

2. Run the command $ORA_CRS_HOME/bin/crs_stat. Please ensure that this command does not error out but
dumps the information for each resource. It does not matter what CRS stat returns for each resource. If the
crs_stat exits after printing information about each resource then it means that the CRSD daemon is up and
the client crs_stat utility can communicate with it.

• This will also indicate that the CRSD can read the OCR.

• If crs_stat errors out with CRS-0202: No resources are registered, this means that no resources are
registered yet. This is not an error: at this stage it mostly means that the VIP configuration has not been
done yet.

• If crs_stat errors out with CRS-0184: Cannot communicate with the CRS daemon, this means
the CRS daemons are not started.



Execute crs_stat -t on one node as the crs user :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application  ONLINE    ONLINE    node1
ora.node1.ons  application  ONLINE    ONLINE    node1
ora.node1.vip  application  ONLINE    ONLINE    node1
ora.node2.gsd  application  ONLINE    ONLINE    node2
ora.node2.ons  application  ONLINE    ONLINE    node2
ora.node2.vip  application  ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

Execute crs_stat -ls on one node as the crs user :

{node2:crs}/crs/11.1.0/bin # crs_stat -ls
Name           Owner   Primary PrivGrp   Permission
-----------------------------------------------------------------
ora.node1.gsd  crs     oinstall          rwxr-xr--
ora.node1.ons  crs     oinstall          rwxr-xr--
ora.node1.vip  root    oinstall          rwxr-xr--
ora.node2.gsd  crs     oinstall          rwxr-xr--
ora.node2.ons  crs     oinstall          rwxr-xr--
ora.node2.vip  root    oinstall          rwxr-xr--
{node2:crs}/crs/11.1.0/bin #

{node1:crs}/crs # crs_stat -help


Usage: crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
crs_stat -p [resource_name [...]] [-q]
crs_stat [-a] application -g
crs_stat [-a] application -r [-c cluster_member]
crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
crs_stat -ls [resource_name [...]] [-q]

{node1:crs}/crs #

3. Run the command $ORA_CRS_HOME/bin/olsnodes. This should return all the nodes of the cluster.
Successful run of this command would mean that the css is up and running. Also the CSS from each node can
talk to the CSS of other nodes.

Execute olsnodes on both nodes as the crs user :

{node1:root}/ # su - crs
{node1:crs}/crs # olsnodes
node1
node2
{node1:crs}/crs # rsh node2
{node2:crs}/crs # olsnodes
node1
node2
{node2:crs}/crs #



{node1:crs}/crs # olsnodes -help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
where
-n print node number with the node name
-p print private interconnect name with the node name
-i print virtual IP name with the node name
<node> print information for the specified node
-l print information for the local node
-g turn on logging
-v run in verbose mode

{node1:crs}/crs #
{node1:crs}/crs # olsnodes -n -p -i
node1 1 node1-rac node1-vip
node2 2 node2-rac node2-vip
{node1:crs}/crs #

4. Output of crsctl check crs / cssd / crsd / evmd returns "…. daemon appears healthy"

CRS health check :

{node1:crs}/crs # crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
{node1:crs}/crs #
{node2:crs}/crs # crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
{node2:crs}/crs #

cssd, crsd, evmd health check :

{node1:crs}/crs # crsctl check cssd
Cluster Synchronization Services appears healthy
{node1:crs}/crs #
{node1:crs}/crs # crsctl check crsd
Cluster Ready Services appears healthy
{node1:crs}/crs #
{node1:crs}/crs # crsctl check evmd
Event Manager appears healthy
{node1:crs}/crs #
{node2:crs}/crs # crsctl check cssd
Cluster Synchronization Services appears healthy
{node2:crs}/crs #
{node2:crs}/crs # crsctl check crsd
Cluster Ready Services appears healthy
{node2:crs}/crs #
{node2:crs}/crs # crsctl check evmd
Event Manager appears healthy
{node2:crs}/crs #

CRS software version query :

{node1:root}/crs # crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
{node1:root}/crs #
{node2:crs}/crs # crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
{node2:crs}/crs #


12.3.4 Adding enhanced crsstat script 

Add the following shell script in $ORA_CRS_HOME/bin directory

As crs user :
{node1:crs}/crs # cd $ORA_CRS_HOME/bin
{node1:crs}/crs/11.1.0/bin #
{node1:crs}/crs/11.1.0/bin # vi crsstat

Add the following lines :

#--------------------------- Begin Shell Script ----------------------------

#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
# - Returns formatted version of crs_stat -t, in tabular
# format, with the complete rsc names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk    # adjust the path if awk is located elsewhere on your system

# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'

#--------------------------- End Shell Script ------------------------------

Check that “AWK=/usr/bin/awk” is the right path on your system !!!


Check that ORA_CRS_HOME is set in your crs .profile file on each node !!!
Save the file and exit
{node1:crs}/crs/11.1.0/bin # chmod +x crsstat
{node1:crs}/crs/11.1.0/bin #
Remote copy the file on all nodes :
{node1:crs}/crs/11.1.0/bin # rcp crsstat node2:/crs/11.1.0/bin

{node1:crs}/crs/11.1.0/bin # crsstat
HA Resource Target State
----------- ------ -----
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node1:crs}/crs/11.1.0/bin #
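Since the script uses its first argument as a filter key ($RSC_KEY above), you can also restrict the output to matching resources; for example, only the VIP resources (a quick sketch, the exact column spacing may differ) :

{node1:crs}/crs/11.1.0/bin # crsstat vip
HA Resource                                   Target     State
-----------                                   ------     -----
ora.node1.vip                                 ONLINE     ONLINE on node1
ora.node2.vip                                 ONLINE     ONLINE on node2
{node1:crs}/crs/11.1.0/bin #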

12.3.5 Interconnect Network configuration Checkup

After CRS installation is completed, verify that the public and cluster interconnect have been set to the desired
values by entering the following commands as root:
Note : oifcfg is found in <CRS_HOME>/bin

{node1:crs}/crs # oifcfg

Name:
oifcfg - Oracle Interface Configuration Tool.

Usage: oifcfg iflist [-p [-n]]


oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>]
]
oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
oifcfg [-help]

<nodename> - name of the host, as known to a communications network


<if_name> - name by which the interface is configured in the system
<subnet> - subnet address of the interface
<if_type> - type of the interface { cluster_interconnect | public | storage }

{node1:crs}/crs #

oifcfg getif
Subject: How to Change Interconnect/Public Interface IP Subnet in a 10g Cluster   Doc ID: Note:283684.1

This command should return values for global “public” and global “cluster_interconnect”; for example:
{node1:root}/crs/11.1.0/bin # oifcfg getif
en0 10.3.25.0 global public
en1 10.10.25.0 global cluster_interconnect
{node1:root}/crs/11.1.0/bin #

If the command does not return a value for global cluster_interconnect, enter the following
commands:
{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
# oifcfg setif -global <interface name>/<subnet>:public
# oifcfg setif –global <interface name>/<subnet>:cluster_interconnect

For example:

{node1:crs}/crs/11.1.0/bin # oifcfg delif -global
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en0/10.3.25.0:public
{node1:crs}/crs/11.1.0/bin # oifcfg setif -global en1/10.10.25.0:cluster_interconnect

• If necessary and only for troubleshooting purpose, disable the automatic reboot of AIX nodes when node
fail to communicate with CRS daemons, or fail to access OCR and Voting disk.

Subject: 10g RAC: Stopping Reboot Loops When CRS Problems Occur Doc ID: Note:239989.1
Subject: 10g RAC: Troubleshooting CRS Reboots Doc ID: Note:265769.1

If one node crashed after running the dbca or netca tools, with a CRS core dump and an Authentication OSD error,
check the "crsd.log" file for a missing $CRS_HOME/crs/crs/auth directory.
è THEN you need to re-create the missing auth directory manually, with the correct owner, group, and
permissions, using the following Metalink note :
Subject: Crs Crashed With Authentication Osd Error   Doc ID: Note:358400.1



12.3.6 Oracle CLuster Registry content Check and Backup

Check the Oracle Cluster Registry integrity. As the crs user, execute ocrcheck :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       5716
         Available space (kbytes) :     301256
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

{node1:crs}/crs #

Check the OCR disks locations :

{node1:crs}/crs # cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/ocr_disk1
ocrmirrorconfig_loc=/dev/ocr_disk2
local_only=FALSE
{node1:crs}/crs #

Check the Voting disks. As the crs user, execute :

{node1:crs}/crs # crsctl query css votedisk
0.     0    /dev/voting_disk1
1.     0    /dev/voting_disk2
2.     0    /dev/voting_disk3
Located 3 voting disk(s).
{node1:crs}/crs #



Using “ocrconfig” crs tool to export OCR content:



{node1:crs}/crs # ocrconfig
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.

Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online] - Export cluster register contents to a file
-import <filename> - Import cluster registry contents from a file
-upgrade [<user> [<group>]] - Upgrade cluster registry from previous version
-downgrade [-version <version string>]
- Downgrade cluster registry to the specified version
-backuploc <dirname> - Configure periodic backup location
-showbackup [auto|manual] - Show backup information
-manualbackup - Perform OCR backup
-restore <filename> - Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename> - Repair local OCR configuration
-help - Print out this help information

Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.

{node1:crs}/crs #

Export content of OCR :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r-- 1 root system 106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:root}/crs/11.1.0/bin #

è you must not edit/modify this exported file
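For reference only (a hedged sketch; do not run this on a healthy cluster), such an export can later be reloaded with the -import option listed in the ocrconfig usage above, with Oracle Clusterware stopped on all nodes first :

{node1:root}/crs/11.1.0/bin # crsctl stop crs          (run on every node)
{node1:root}/crs/11.1.0/bin # ocrconfig -import /oracle/ocr_export.dmp
{node1:root}/crs/11.1.0/bin # crsctl start crs         (run on every node)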



Using “ocrconfig” crs tool to backup OCR content :

{node1:crs}/crs # ocrconfig -showbackup

{node1:crs}/crs #
{node1:crs}/crs # ocrconfig -manualbackup
PROT-20: Insufficient permission to proceed. Require privileged user
{node1:crs}/crs # su
root's Password:
{node1:root}/crs # ocrconfig -manualbackup

node1 2008/02/24 08:08:48 /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr


{node1:root}/crs #

After few hours, weeks, months, “ocrconfig –showbackup” could display the following :

View OCR automatic periodic backup managed by Oracle Clusterware

{node1:crs}/crs # ocrconfig -showbackup

node2 2008/03/16 14:45:48 /crs/11.1.0/cdata/crs_cluster/backup00.ocr

node2 2008/03/16 10:45:47 /crs/11.1.0/cdata/crs_cluster/backup01.ocr

node2 2008/03/16 06:45:47 /crs/11.1.0/cdata/crs_cluster/backup02.ocr

node2 2008/03/15 06:45:46 /crs/11.1.0/cdata/crs_cluster/day.ocr

node2 2008/03/07 02:52:46 /crs/11.1.0/cdata/crs_cluster/week.ocr

node1 2008/02/24 08:09:21 /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr

node1 2008/02/24 08:08:48 /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr


{node1:crs}/crs #
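If you prefer these automatic backups to be written outside of $CRS_HOME (for example to a dedicated file system), the -backuploc option from the ocrconfig usage above can be used; the directory below is only an example and must exist and be writable by root :

{node1:root}/crs # ocrconfig -backuploc /oracle/ocr_backups
{node1:root}/crs # ocrconfig -showbackup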



List of options for srvctl command :

{node1:root}/crs/11.1.0/bin # srvctl -h
Usage: srvctl [-V]

Usage: srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY
| PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl add instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl add service -d <name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"] [-P <TAF_policy>]
Usage: srvctl add service -d <name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl add nodeapps -n <node_name> -A <name|ip>/netmask[/if1[|if2|...]]
Usage: srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
Usage: srvctl add listener -n <node_name> -o <oracle_home> [-l <listener_name>]

Usage: srvctl config database


Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>

Usage: srvctl disable database -d <name>


Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]

Usage: srvctl enable database -d <name>


Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]

Usage: srvctl getenv database -d <name> [-t "<name_list>"]


Usage: srvctl getenv instance -d <name> -i <inst_name> [-t "<name_list>"]
Usage: srvctl getenv service -d <name> -s <service_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps -n <node_name> [-t "<name_list>"]

Usage: srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY |
PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]

Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]

Usage: srvctl remove database -d <name> [-f]


Usage: srvctl remove instance -d <name> -i <inst_name> [-f]
Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps -n "<node_name_list>" [-f]
Usage: srvctl remove asm -n <node_name> [-i <asm_inst_name>] [-f]
Usage: srvctl remove listener -n <node_name> [-l <listener_name>]

Usage: srvctl setenv database -d <name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}


Usage: srvctl setenv instance -d <name> [-i <inst_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv service -d <name> [-s <service_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv nodeapps -n <node_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}

Usage: srvctl start database -d <name> [-o <start_options>]


Usage: srvctl start instance -d <name> -i "<inst_name_list>" [-o <start_options>]
Usage: srvctl start service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>]
Usage: srvctl start nodeapps -n <node_name>
Usage: srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
Usage: srvctl start listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl status database -d <name> [-f] [-v] [-S <level>]


Usage: srvctl status instance -d <name> -i "<inst_name_list>" [-f] [-v] [-S <level>]
Usage: srvctl status service -d <name> [-s "<service_name_list>"] [-f] [-v] [-S <level>]
Usage: srvctl status nodeapps -n <node_name>
Usage: srvctl status asm -n <node_name>

Usage: srvctl stop database -d <name> [-o <stop_options>]


Usage: srvctl stop instance -d <name> -i "<inst_name_list>" [-o <stop_options>]
Usage: srvctl stop service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-f]
Usage: srvctl stop nodeapps -n <node_name> [-r]
Usage: srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]
Usage: srvctl stop listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl unsetenv database -d <name> -t "<name_list>"


Usage: srvctl unsetenv instance -d <name> [-i <inst_name>] -t "<name_list>"
Usage: srvctl unsetenv service -d <name> [-s <service_name>] -t "<name_list>"
Usage: srvctl unsetenv nodeapps -n <node_name> -t "<name_list>"
{node1:root}/crs/11.1.0/bin #
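As a quick illustration of this syntax (commands only; the output depends on what is configured at this stage of the installation), the nodeapps created earlier by vipca can be checked as follows :

{node1:crs}/crs # srvctl status nodeapps -n node1
{node1:crs}/crs # srvctl status nodeapps -n node2
{node1:crs}/crs # srvctl config nodeapps -n node1 -a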
12.4 Some useful commands

Commands to start/stop the CRS daemons, as root user :

To start the CRS :

{node1:root}/crs/11.1.0/bin # crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
{node1:root}/crs/11.1.0/bin #

To stop the CRS :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/ # cd /crs/11.1.0/bin
{node1:root}/crs/11.1.0/bin # crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
{node1:root}/crs/11.1.0/bin #

All crsctl command available :


{node1:root}/crs/11.1.0/bin # crsctl
Usage: crsctl check crs - checks the viability of the CRS stack
crsctl check cssd - checks the viability of CSS
crsctl check crsd - checks the viability of CRS
crsctl check evmd - checks the viability of EVM
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a CSS parameter
crsctl unset css <parameter> - sets CSS parameter to its default
crsctl query css votedisk - lists the voting disks used by CSS
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
crsctl start crs - starts all CRS daemons.
crsctl stop crs - stops all CRS daemons. Stops CRS resources in case of cluster.
crsctl start resources - starts CRS resources.
crsctl stop resources - stops CRS resources.
crsctl debug statedump evm - dumps state info for evm objects
crsctl debug statedump crs - dumps state info for crs objects
crsctl debug statedump css - dumps state info for css objects
crsctl debug log css [module:level]{,module:level} ...
- Turns on debugging for CSS
crsctl debug trace css - dumps CSS in-memory tracing cache
crsctl debug log crs [module:level]{,module:level} ...
- Turns on debugging for CRS
crsctl debug trace crs - dumps CRS in-memory tracing cache
crsctl debug log evm [module:level]{,module:level} ...
- Turns on debugging for EVM
crsctl debug trace evm - dumps EVM in-memory tracing cache
crsctl debug log res <resname:level> turns on debugging for resources
crsctl query crs softwareversion [<nodename>] - lists the version of CRS software
installed
crsctl query crs activeversion - lists the CRS software operating version
crsctl lsmodules css - lists the CSS modules that can be used for debugging
crsctl lsmodules crs - lists the CRS modules that can be used for debugging
crsctl lsmodules evm - lists the EVM modules that can be used for debugging

If necessary, any of these commands can be run with additional tracing by adding a "trace" argument at the very front.

Example: crsctl trace check css
{node1:root}/crs/11.1.0/bin #



12.5 Accessing CRS logs

To view CRS logs :

cd /crs/11.1.0/log/<nodename>/….

In our case nodename will be node1 for CRS logs on node1 :
cd /crs/11.1.0/log/node1

And nodename will be node2 for CRS logs on node2 :
cd /crs/11.1.0/log/node2

Contents example of ../crs/log/node1 with node1 :

{node1:root}/crs/11.1.0/log/node1 # ls -la
total 256
drwxr-xr-t 8 root dba 256 Mar 12 16:21 .
drwxr-xr-x 4 oracle dba 256 Mar 12 16:21 ..
drwxr-x--- 2 oracle dba 256 Mar 12 16:21 admin
-rw-rw-r-- 1 root dba 24441 Apr 17 12:27 alertnode1.log
drwxr-x--- 2 oracle dba 98304 Apr 17 12:27 client
drwxr-x--- 2 root dba 256 Apr 16 13:56 crsd
drwxr-x--- 4 oracle dba 256 Mar 22 21:56 cssd
drwxr-x--- 2 oracle dba 256 Mar 12 16:22 evmd
drwxrwxr-t 5 oracle dba 4096 Apr 16 12:48 racg
{node1:root}/crs/11.1.0/log/node1 #
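For day-to-day monitoring, the cluster alert log listed above is usually the first file to look at; a simple sketch using the layout shown above :

{node1:root}/crs/11.1.0/log/node1 # tail -f alertnode1.log
{node1:root}/crs/11.1.0/log/node1 # ls -ltr crsd cssd evmd     (daemon-specific logs, most recent last)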

Look at Metalink Note :

Subject: Oracle Clusterware consolidated logging in 10gR2 Doc ID: Note:331168.1

Extract from the note "Oracle Clusterware consolidated logging in 10gR2" (!!! Same for 11g !!!) :

• CRS logs are in $ORA_CRS_HOME/log/<hostname>/crsd/
• CSS logs are in $ORA_CRS_HOME/log/<hostname>/cssd/
• EVM logs are in $ORA_CRS_HOME/log/<hostname>/evmd and $ORA_CRS_HOME/evm/log/
• Resource specific logs are in $ORA_CRS_HOME/log/<hostname>/racg and the $ORACLE_HOME/log/<hostname>/racg
• SRVM logs are in $ORA_CRS_HOME/log/<hostname>/client and the $ORACLE_HOME/log/<hostname>/client
• Cluster Network Communication logs are in the $ORA_CRS_HOME/log directory
• OPMN logs are in the $ORA_CRS_HOME/opmn/logs
• New in 10g Release 2 is an alert<nodename>.log present in the $ORA_CRS_HOME/log/<hostname>

Look at following Metalink Note to get all logs necessary for needed support diagnostics :

Subject: CRS 10g R2 Diagnostic Collection Guide Doc ID: Note:330358.1



12.6 Clusterware Basic Testing
Before going further, you should run a few simple tests of how the clusterware behaves, to validate your Oracle
Clusterware installation.

Action to be done : What should happen !!!!

( 1 ) Node 1 and Node 2 are UP and Running è Reboot node1

Before reboot of node1 :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application  ONLINE    ONLINE    node1
ora.node1.ons  application  ONLINE    ONLINE    node1
ora.node1.vip  application  ONLINE    ONLINE    node1
ora.node2.gsd  application  ONLINE    ONLINE    node2
ora.node2.ons  application  ONLINE    ONLINE    node2
ora.node2.vip  application  ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

While node1 is rebooting :
Check on node2; the VIP from node1 should appear on node2 while node1 is out of order.

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application  ONLINE    OFFLINE
ora.node1.ons  application  ONLINE    OFFLINE
ora.node1.vip  application  ONLINE    ONLINE    node2
ora.node2.gsd  application  ONLINE    ONLINE    node2
ora.node2.ons  application  ONLINE    ONLINE    node2
ora.node2.vip  application  ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

When node1 is back with CRS up and running, the VIP will come back to its original position on node1.

After reboot of node1 :

{node2:crs}/crs/11.1.0/bin # crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application  ONLINE    ONLINE    node1
ora.node1.ons  application  ONLINE    ONLINE    node1
ora.node1.vip  application  ONLINE    ONLINE    node1
ora.node2.gsd  application  ONLINE    ONLINE    node2
ora.node2.ons  application  ONLINE    ONLINE    node2
ora.node2.vip  application  ONLINE    ONLINE    node2
{node2:crs}/crs/11.1.0/bin #

( 2 ) Node 1 and Node 2 are UP and Running è Reboot node2

Check on node1; the VIP from node2 should appear on node1 while node2 is out of order.
When node2 is back with CRS up and running, the VIP will come back to its original position on node2.

( 3 ) Node 1 and Node 2 are UP and Running è Reboot node1 and node2

After reboot, both nodes will come back with CRS up and running, with the VIP from both nodes in their
respective positions: VIP (node1) on node1 and VIP (node2) on node2.


12.7 What Has Been Done ?

At this stage :
• The Oracle Cluster Registry and Voting Disk are created and configured
• The Oracle Cluster Ready Services is installed, and started on all nodes.
• The VIP (Virtual IP), GSD and ONS application resources are configured on all
nodes.

12.8 VIP and CRS Troubleshooting

If problems occur with the VIP configuration assistant, please use the Metalink notes specified in this chapter.

Metalink Note 296856.1- Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP

Metalink Note 294336.1- Changing the check interval for the Oracle 10g VIP

Metalink Note 276434.1- Modifying the VIP of a Cluster Node

srvctl modify nodeapps -n <node_name> [-o <oracle_home>] [-A <new_vip_address>]

Options Description:
-n <node_name>          Node name.
-o <oracle_home>        Oracle home for the cluster database.
-A <new_vip_address>    The node level VIP address (<name|ip>/netmask[/if1[|if2|...]]).

An example of the 'modify nodeapps' command is as follows:

$ srvctl stop nodeapps -n node1
$ srvctl modify nodeapps -n node1 -A 10.3.25.181/255.255.255.0/en0
$ srvctl start nodeapps -n node1

Note: This command should be run as root.

Metalink Note 298895.1- Modifying the default gateway address used by the Oracle 10g VIP

Metalink Note 264847.1- How to Configure Virtual IPs for 10g RAC

How to delete the VIP IP alias on the public network card, if it is persistent even after the CRS shutdown.
Example for our case :

On node1 as root è ifconfig en0 delete 10.3.25.181
On node2 as root è ifconfig en0 delete 10.3.25.182



12.9 How to clean a failed CRS installation

Metalink Note 239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install

On both Nodes :

As root user on each node :

$ORA_CRS_HOME/bin/crsctl stop crs    (to clean any remaining CRS daemons)

rmitab h1    è this will remove the Oracle CRS entries in /etc/inittab
rmitab h2
rmitab h3
rm -Rf /opt/ORCL*
Kill the remaining processes reported by : ps -ef | grep crs and ps -ef | grep d.bin
rm -R /etc/oracle/*

You should now remove the CRS installation. 2 options :

1/ You want to keep the oraInventory as it is used for other Oracle products which are installed and
used; THEN run runInstaller as the oracle user to uninstall the CRS installation. When done, remove the
content of the CRS directory on both nodes : rm -Rf /crs/11.1.0/*

OR

2/ You don't care about the oraInventory, and there are no other Oracle products installed on
the nodes; THEN remove the full content of $ORACLE_BASE including the oraInventory
directory : rm -Rf /oracle/*

Change the owner, group and permissions for the /dev/ocr_disk* and /dev/voting_disk* devices on each node of the cluster :

On node1 …
{node1:root}/ # chown root.oinstall /dev/ocr_disk*
{node1:root}/ # chown crs.oinstall /dev/voting_disk*
{node1:root}/ # chmod 660 /dev/ocr_disk*
{node1:root}/ # chmod 660 /dev/voting_disk*

On node2 …
{node1:root}/ # rsh node2
{node2:root}/ # chown root.oinstall /dev/ocr_disk*
{node2:root}/ # chown crs.oinstall /dev/voting_disk*
{node2:root}/ # chmod 660 /dev/ocr_disk*
{node2:root}/ # chmod 660 /dev/voting_disk*

To erase ALL OCR and Voting disks content è format (zero) the disks from one node :

for i in 1 2
do
dd if=/dev/zero of=/dev/ocr_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.
……………….

for i in 1 2 3
do
dd if=/dev/zero of=/dev/voting_disk$i bs=1024 count=300 &
done
300+0 records in.
300+0 records out.



13 INSTALL AUTOMATIC STORAGE MANAGEMENT SOFTWARE

At this stage, Oracle Clusterware is installed and MUST be started on all nodes.
ASM (Oracle Automatic Storage Management) acts as an Oracle cluster file system and, as such, can be
independent from the database software and upgraded to a later release independently from the database software.
That is the reason why we will install the ASM software in its own ORACLE_HOME directory, which we will define as
ASM_HOME ( /oracle/asm/11.1.0 ).

Now, in order to make ASM cluster file systems available to the Oracle RAC database, we need to :

• Install the ASM software in its own ORACLE_HOME

• Create and configure a Listener on each node
    Either through NETCA (Oracle Network Configuration Assistant)
    Or manually

• Create and configure an ASM instance on each node
    Either through DBCA (Oracle Database Configuration Assistant)
    Or manually

• Prepare the LUNs / disks for ASM (done in the previous Storage chapter)

• Create and configure an ASM diskgroup
    Either through DBCA (Oracle Database Configuration Assistant)
    Or with SQL commands, using SQL*Plus.


13.1 Installation
The Oracle ASM installation only has to be started from one node. Once the first node is installed, the Oracle
OUI automatically copies the mandatory files to the second node, using the rcp or scp command. This step
can take a long time, depending on the network speed (up to an hour…), without any message. So don't assume the OUI is
stalled; look at the network traffic before cancelling the installation !
You can also create a staging area. The names of the subdirectories are in the format "Disk1" to "Disk3".

On each node :
Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced
libraries from memory !!!
{node1:root}/ # /usr/sbin/slibclean
{node2:root}/ # /usr/sbin/slibclean

From the first node, as root user, under a VNC client session or other graphical interface, execute :

{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directories :

{node1:root}/ # chown crs:oinstall /oracle/asm
{node1:root}/ # chmod 665 /oracle/asm
{node1:root}/ #

Login as asm (in our case), or oracle user and follow the procedure hereunder…

 Setup and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having enough free space, about 500 MB, on each node.

{node1:asm}/ # export DISPLAY=node1:0.0

If not set in the asm user’s .profile, do :

{node1:asm}/ # export TMP=/tmp
{node1:asm}/ # export TEMP=/tmp
{node1:asm}/ # export TMPDIR=/tmp

 Check that Oracle Clusterware (including VIP, ONS and GSD) is started on each node !!!

As asm user from node1 :

{node1:asm}/oracle/asm # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #

{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ls
doc install response runInstaller stage welcome.html
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 190 MB. Actual 1950 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-13_03-47-34PM. Please wait …
{node1:asm}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.

At the OUI Welcome screen


You can check the installed Products :

Just click Next …

Select the installation type :

You have the option to choose Enterprise,


Standard Edition, or Custom to proceed.

Choose the “Custom” option to avoid creating


a database by default.

Then click Next ...



Specify File Locations :
Do not change the Source field

Specify a different ORACLE_HOME Name with


its own directory for the Oracle software
installation.

 This ORACLE_HOME must be different from the CRS ORACLE_HOME.
OraAsm11g_home1
/oracle/asm/11.1.0

Then click Next ...

If you don’t see the following screen with


Node selection, it might be that your CRS is
down on one or all nodes. => Please check if
CRS is up and running on all nodes.

Specify Hardware Cluster Installation Mode :

Select Cluster Installation

AND the other nodes onto which the Oracle


software will be installed. It is not
necessary to select the node on which the OUI is
currently running.

Then click Next ...

The installer will check some product-


specific prerequisites.

Don’t worry about the lines with a check at


status “Not executed”; these are just warnings
because the AIX maintenance level might be higher
than 5300, which is the case in our example
(ML03).

Then click Next ...



Details of the prerequisite checks done by runInstaller :

Checking operating system requirements ...
Expected result: One of 5200.004,5300.002
Actual Result: 5300.002
Check complete. The overall result of this check is: Passed
=======================================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.0.51). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.50). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.0.40). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.0.50). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.0.50). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.0.50). Passed
Check complete. The overall result of this check is: Passed
=======================================================================

Checking recommended operating system patches


Checking for IY59386(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.51). Passed
Checking for IY60930(bos.mp,5.3.0.1); found (bos.mp,5.3.0.54). Passed
Checking for IY60930(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.54). Passed
Checking for IY66513(bos.mp64,5.3.0.20); found (bos.mp64,5.3.0.54). Passed
Checking for IY66513(bos.mp,5.3.0.20); found (bos.mp,5.3.0.54). Passed
Checking for IY70159(bos.mp,5.3.0.22); found (bos.mp,5.3.0.54). Passed
Checking for IY70159(bos.mp64,5.3.0.22); found (bos.mp64,5.3.0.54). Passed
Checking for IY58143(bos.mp64,5.3.0.1); found (bos.mp64,5.3.0.54). Passed
Checking for IY58143(bos.acct,5.3.0.1); found (bos.acct,5.3.0.51). Passed
Checking for IY58143(bos.adt.include,5.3.0.1); found (bos.adt.include,5.3.0.53). Passed
Checking for IY58143(bos.adt.libm,5.3.0.1); found (bos.adt.libm,5.3.0.40). Passed
Checking for IY58143(bos.adt.prof,5.3.0.1); found (bos.adt.prof,5.3.0.53). Passed
Checking for IY58143(bos.alt_disk_install.rte,5.3.0.1); found (bos.alt_disk_install.rte,5.3.0.51). Passed
Checking for IY58143(bos.cifs_fs.rte,5.3.0.1); found (bos.cifs_fs.rte,5.3.0.50). Passed
Checking for IY58143(bos.diag.com,5.3.0.1); found (bos.diag.com,5.3.0.51). Passed
Checking for IY58143(bos.perf.libperfstat,5.3.0.1); found (bos.perf.libperfstat,5.3.0.50). Passed
Checking for IY58143(bos.perf.perfstat,5.3.0.1); found (bos.perf.perfstat,5.3.0.50). Passed
Checking for IY58143(bos.perf.tools,5.3.0.1); found (bos.perf.tools,5.3.0.52). Passed
Checking for IY58143(bos.rte.boot,5.3.0.1); found (bos.rte.boot,5.3.0.51). Passed
Checking for IY58143(bos.rte.archive,5.3.0.1); found (bos.rte.archive,5.3.0.51). Passed
Checking for IY58143(bos.rte.bind_cmds,5.3.0.1); found (bos.rte.bind_cmds,5.3.0.51). Passed
Checking for IY58143(bos.rte.control,5.3.0.1); found (bos.rte.control,5.3.0.50). Passed
Checking for IY58143(bos.rte.filesystem,5.3.0.1); found (bos.rte.filesystem,5.3.0.51). Passed
Checking for IY58143(bos.rte.install,5.3.0.1); found (bos.rte.install,5.3.0.54). Passed
Checking for IY58143(bos.rte.libc,5.3.0.1); found (bos.rte.libc,5.3.0.53). Passed
Checking for IY58143(bos.rte.lvm,5.3.0.1); found (bos.rte.lvm,5.3.0.53). Passed
Checking for IY58143(bos.rte.man,5.3.0.1); found (bos.rte.man,5.3.0.50). Passed
Checking for IY58143(bos.rte.methods,5.3.0.1); found (bos.rte.methods,5.3.0.51). Passed
Checking for IY58143(bos.rte.security,5.3.0.1); found (bos.rte.security,5.3.0.53). Passed
Checking for IY58143(bos.rte.serv_aid,5.3.0.1); found (bos.rte.serv_aid,5.3.0.52). Passed
Check complete. The overall result of this check is: Passed
=======================================================================

Validating ORACLE_BASE location (if set) ...


Check complete. The overall result of this check is: Passed
=======================================================================

Checking for proper system clean-up....


Check complete. The overall result of this check is: Passed
=======================================================================

Checking for Oracle Home incompatibilities ....


Actual Result: NEW_HOME
Check complete. The overall result of this check is: Passed
=======================================================================

Checking Oracle Clusterware version ...


Check complete. The overall result of this check is: Passed
=======================================================================



Available Product Components :

Select the product components for Oracle 11g


ASM software that you want to install.

Then click Next ...

Privileged Operating Systems Groups :

Verify the UNIX primary group name of the user


which controls the installation of the Oracle11g
ASM software. (Use unix command id to find out)

And specify the Privileged Operating System


Groups to the value found.
In our example, this must be :
• “dba” for Database Administrator (OSDBA)
group
• “oper” for Database Operator (OSOPER)
group
• “asm” for administrator (SYSASM) group

(Primary group of unix asm user) to be set for


both entries.

Then click Next ...

Create Database :

Choose “Install database Software Only”

Choose “Configure Automatic Storage


Management (ASM)”, we want to install the
ASM software in its own ORACLE_HOME at this
stage.

Then click Next ...



Summary :

The Summary screen will be presented.

Check Cluster Nodes and Remote Nodes lists.


The OUI will install the Oracle 11g ASM
software on to the local node, and then copy this
information to the other selected nodes.

Then click Install ...

Install :

The Oracle Universal Installer will perform the installation on the first node, then automatically copy the code to the other selected nodes.

It may take some time to go beyond 50%; don’t be afraid, the installation is still progressing, running a test script on the remote nodes.

Just wait for the next screen ...

 At this stage, you could hit a message about “Failed Attached Home” !!!
If a similar screen message appears, just run the specified command on the specified node as asm user.
Do execute the command as below (check and adapt it to your message). When done, click on “OK”
:

From node2 :

{node2:root}/oracle/asm/11.1.0 # su – asm
{node2:asm}/oracle/asm -> /oracle/asm/11.1.0/oui/bin/runInstaller -attachHome -
noClusterEnabled ORACLE_HOME=/oracle/asm/11.1.0 ORACLE_HOME_NAME=OraAsm11g_home1
CLUSTER_NODES=node1,node2 "INVENTORY_LOCATION=/oracle/oraInventory" LOCAL_NODE=node2
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be
executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
‘AttachHome’ was successful.
{node2:asm}/oracle/asm/11.1.0/bin #



Execute Configuration
Scripts :

/oracle/asm/11.1.0/root.sh to
execute on each node as root
user, on node1, then on node2.

When Done, come back on


this screen and click on
“OK”... !!!



On node1 :

{node1:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node1:root}/oracle/asm/11.1.0 #

On node2 :

{node2:root}/oracle/asm/11.1.0 # ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= asm
ORACLE_HOME= /oracle/asm/11.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
{node2:root}/oracle/asm/11.1.0 #

End of Installation :

Click on “Installed Products” to check with the


oraInventory that ASM is installed on each
node.



Just click on “Close”, then “Exit” ...



13.2 Update the ASM unix user .profile

 To be done on each node for asm (in our case) or oracle unix user.

vi $HOME/.profile file in asm’s home directory. Add the entries in bold blue color

PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.

export PATH

if [ -s "$MAIL" ] # This is at Shell startup. In normal


then echo "$MAILMSG" # operation, the Shell checks
fi # periodically.

ENV=$HOME/.kshrc
export ENV

#The following line is added by License Use Management installation


export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin

export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORACLE_CRS_HOME=$CRS_HOME
export ORACLE_HOME=$ORACLE_BASE/asm/11.1.0
export
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_SID=+ASM1

if [ -t 0 ]; then
stty intr ^C
fi

On node2, ORACLE_SID will be set as below :

...
export ORACLE_SID=+ASM2

Do disconnect from the asm user, and reconnect to load the modified $HOME/.profile
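A quick way to check that the new environment is loaded (the values shown are the ones expected from the .profile above) :

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # env | grep ORACLE
ORACLE_BASE=/oracle
ORACLE_CRS_HOME=/crs/11.1.0
ORACLE_HOME=/oracle/asm/11.1.0
ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # which sqlplus
/oracle/asm/11.1.0/bin/sqlplus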



13.3 Checking Oracle Clusterware

At this stage, Oracle ASM software is installed in its own ORACLE_HOME directory that we’ll define as
ASM_HOME ( /oracle/asm/11.1.0 ).

Now, we need to :

• Create and configure a Listener on each node

• Create and configure ASM instance on each node

• Create and configure an ASM Diskgroup
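Before going further, you can confirm that the clusterware stack is healthy on each node, reusing the commands seen earlier (the output shown is indicative) :

{node1:root}/ # crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
{node1:root}/ # su - asm
{node1:asm}/oracle/asm # crs_stat -t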



13.4 Configure default node listeners

In order to configure ASM instances, and then create ASM Disk Groups, we need to configure and start a default
listener on each node.

The listener on each node must be :

• Configured with ORACLE_HOME set to /oracle/asm/11.1.0

• Listening on the static IP and the VIP

• Using the TCP and IPC protocols

Starting the listener configuration !!!

{node1:asm}/oracle/asm/11.1.0/bin # id
uid=502(asm) gid=500(oinstall)
groups=1(staff),501(crs),502(dba),503(oper),504(asm),505(asmdba)
{node1:asm}/oracle/asm/11.1.0/bin # pwd
/oracle/asm/11.1.0/bin
{node1:asm}/oracle/asm/11.1.0/bin # ./netca



Oracle Net Configuration Assistant,
Configuration :

• Select “Cluster Configuration” as


type of Oracle Net Services.

Click “Next” …

Oracle Net Configuration Assistant,


Real Application Clusters, Active Nodes
:

• “Select All nodes” to configure.

Click “Next” …

Oracle Net Configuration Assistant,


Welcome :

• Select “Listener Configuration”

Click “Next” …



Oracle Net Configuration Assistant,
Listener Configuration, Listener :

• Select “Add”

Click “Next” …

Oracle Net Configuration Assistant,


Listener Name :

• Keep by default name as


“LISTENER”.

Click “Next” …

Oracle Net Configuration Assistant,


Listener Configuration, Select
Protocols :

• Select Protocols “TCP”, and “IPC”

Click “Next” …



Oracle Net Configuration Assistant,
Listener Configuration, TCP / IP
Protocol :

• Use the standard port number of


“1521”, or other port for your
project.

Port 1521 means automatic registering


of ASM or database instances with
the listeners.

Click “Next” …

Using another port means that configuring “LOCAL” and “REMOTE” listeners will be MANDATORY.
Later, if you need to configure more than one listener, for example one for each database in the cluster,
you should :

• Have a listener for ASM on each node :


• Keep a default listener in the $ASM_HOME (/oracle/asm/11.1.0).
• Use a port different from 1521 to avoid automatic registering of all the resources on this listener.
• Configure local and remote listeners for the ASM instances.

• Have listener(s) for the RDBMS on each node :


• Set your extra listeners out of the $ASM_HOME, in their own ORACLE_HOME.
• Use a port different from 1521 and different from the one used by the ASM listeners.
• Configure local and remote listeners for each database instance.

In case of extra listeners, they should be configured from their own $ORACLE_HOME directory, and not from the
$ASM_HOME directory. So the creation of extra listeners will have to be done when the RDBMS is installed; a
hypothetical example of such a listener entry is sketched below ...
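As an illustration only (listener name, port and paths below are hypothetical, not part of this cookbook’s configuration), an extra RDBMS listener on a non-default port could look like this in its own listener.ora, with a matching tnsnames.ora alias to be used as LOCAL_LISTENER for the database instance on node1 :

# Hypothetical extra listener on node1, port 1530 (listener.ora of the RDBMS ORACLE_HOME)
LISTENER_DB1_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1530)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.81)(PORT = 1530)(IP = FIRST))
    )
  )

# Matching tnsnames.ora entry, referenced by the LOCAL_LISTENER parameter of the instance on node1
LOCAL_LISTENER_DB1_NODE1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1530))
  )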

Oracle Net Configuration Assistant,


Listener Configuration,
IPC Protocol :

• Enter IPC Key value, “EXTPROC”


for example.

Click “Next” …



Oracle Net Configuration Assistant,
Listener Configuration, More
Listeners ? :

• Select “No”

Click “Next” …

THEN

On the telnet session, we can see :

Oracle Net Services


Configuration:
Configuring Listener:LISTENER
node1...
node2...
Listener configuration complete.

Oracle Net Configuration Assistant,


Listener Configuration, Listener
Configuration Done :

• Listener Configuration complete !

Click “Next” …

Oracle Net Configuration Assistant,


Listener Configuration, Welcome :

Click on “Finish” to exit !!!

Following message will appear on


exit :

Oracle Net Services configuration


successful. The exit code is 0



Checking if Listeners are registered in Oracle Clusterware :
1 listener on each node must be registered !!!

To stop the default node listener on a specified node (node1 for example) :

{node1:asm}/oracle/asm/11.1.0/bin # srvctl stop listener -n node1
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application OFFLINE OFFLINE
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm/11.1.0/bin #

To start the default node listener on a specified node (node1 for example) :

{node1:asm}/oracle/asm/11.1.0/bin # srvctl start listener -n node1
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm/11.1.0/bin #



To get information about the listener resource on one node (node1 for example), from the Oracle Clusterware Registry :

{node1:asm}/oracle/asm/11.1.0/bin # crs_stat | grep lsnr
NAME=ora.node1.LISTENER_NODE1.lsnr
NAME=ora.node2.LISTENER_NODE2.lsnr
{node1:asm}/oracle/asm/11.1.0/bin #
{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -p ora.node1.LISTENER_NODE1.lsnr
NAME=ora.node1.LISTENER_NODE1.lsnr
TYPE=application
ACTION_SCRIPT=/oracle/asm/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for listener on node
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.node1.vip
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=600
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

{node1:asm}/oracle/asm/11.1.0/bin #

{node1:asm}/oracle/asm/11.1.0/bin # crs_stat ora.node1.LISTENER_NODE1.lsnr


NAME=ora.node1.LISTENER_NODE1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on node1

{node1:asm}/oracle/asm/11.1.0/bin #

{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -t ora.node1.LISTENER_NODE1.lsnr


Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE ONLINE node1
{node1:asm}/oracle/asm/11.1.0/bin #

{node1:asm}/oracle/asm/11.1.0/bin # crs_stat -ls ora.node1.LISTENER_NODE1.lsnr


Name Owner Primary PrivGrp Permission
-----------------------------------------------------------------
ora....E1.lsnr asm oinstall rwxrwxr--
{node1:asm}/oracle/asm/11.1.0/bin #



{node1:asm}/oracle/asm/11.1.0/bin # srvctl config listener -n node1
node1 LISTENER_NODE1
{node1:asm}/oracle/asm/11.1.0/bin #

 Check that the entry is OK, showing LISTENER_NODE1, with the VIP from node1 and the static IP from node1.

{node1:asm}/oracle/asm/11.1.0/network/admin # cat listener.ora
# listener.ora.node1 Network Configuration File:
/oracle/asm/11.1.0/network/admin/listener.ora.node1
# Generated by Oracle configuration tools.

LISTENER_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.81)(PORT = 1521)(IP = FIRST))
    )
  )

{node1:asm}/oracle/asm/11.1.0/network/admin #

 Check that entry is OK, showing LISTENER_NODE2, and with vip from node2, and static IP from node2.

It may happen that on remote nodes the listener.ora file does not get properly configured, and may show the same content as the one from node1. Check on extra nodes, if any …

{node2:asm}/oracle/asm/11.1.0/network/admin # cat listener.ora
# listener.ora.node2 Network Configuration File:
/oracle/asm/11.1.0/network/admin/listener.ora.node2
# Generated by Oracle configuration tools.

LISTENER_NODE2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.82)(PORT = 1521)(IP = FIRST))
    )
  )

{node2:asm}/oracle/asm/11.1.0/network/admin #

To get a status from the listener :

export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl status listener

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008 21:52:26

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_NODE1
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 -
Production
Start Date 13-FEB-2008 21:39:08
Uptime 0 days 0 hr. 13 min. 20 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml



Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
The listener supports no services
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #



To get a status of the services managed by the listener :

{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl services listener

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008 21:53:49

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The listener supports no services
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #

To get help on the listener’s commands :

{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl help

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008 21:55:05

Copyright (c) 1991, 2007, Oracle. All rights reserved.

The following operations are available


An asterisk (*) denotes a modifier or extended command:

start stop status


services version reload
save_config trace spawn
change_password quit exit
set* show*

{node1:asm}/oracle/asm/11.1.0/bin #

To get the version release of the listener software :

{node1:asm}/oracle/asm/11.1.0/bin # ./lsnrctl version

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 13-FEB-2008


21:58:10

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
TNS for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Unix Domain Socket IPC NT Protocol Adaptor for IBM/AIX RISC System/6000:
Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.1.0.6.0 -
Production
Oracle Bequeath NT Protocol Adapter for IBM/AIX RISC System/6000: Version
11.1.0.6.0 - Production,,
The command completed successfully
{node1:asm}/oracle/asm/11.1.0/bin #



13.5 Create ASM Instances

Create, configure and start the ASM instances …

And create the ASM diskgroups.



13.5.1 Thru DBCA

As asm unix user, from one node :

{node1:asm}/oracle/asm # export DISPLAY=node1:0.0


{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # cd $ORACLE_HOME/bin
{node1:asm}/oracle/asm/11.1.0/bin # dbca &

Database Configuration
Assistant : Welcome

• Select “Real Application


Clusters Database ”

THEN click “Next”


for next screen …

Database Configuration
Assistant : Operations

• Select “Configure
Automatic Storage
Management ”

THEN click “Next”


for next screen …

Database Configuration
Assistant : Node Selection

• Click on “Select ALL”,


Node1 and node2 in our
case.

THEN click “Next”


for next screen …



Database Configuration
Assistant : Create ASM Instance

On this screen, you can :

• Modify/Set ASM instances


parameters.

• Enter ASM instance


password for “SYS” user.

• Choose the type of ASM


instances parameter file,
and specify the file name or
disk location.

Click on “ ”:

More details on next page …

Set the password for “SYS“ user


administering ASM instances :

Choose the type of parameter


file for ASM instances :

More details on next page …



About ASM Parameters :

ASM_DISKGROUPS *
Description: This value is the list of the Disk Group names to be mounted by the ASM at startup or when ALTER DISKGROUP ALL MOUNT
command is used.

ASM_DISKSTRING *
Description: A comma separated list of paths used by the ASM to limit the set of disks considered for discovery when a new disk is added to a
Disk Group. The disk string should match the path of the disk, not the directory containing the disk. For example: /dev/rdsk/*.

ASM_POWER_LIMIT *
Description: This value is the maximum power on the ASM instance for disk rebalancing.
Range of Values: 1 to 11
Default Value: 1

CLUSTER_DATABASE
Description : Set CLUSTER_DATABASE to TRUE to enable Real Application Clusters option.
Range of Values: TRUE | FALSE
Default Value : FALSE

DB_UNIQUE_NAME

DIAGNOSTIC_DEST
Replaces USER_DUMP_DEST, CORE_DUMP_DEST and BACKGROUND_DUMP_DEST.
Specifies the pathname for a directory where the server will write debugging trace files on behalf of a user process.
Specifies the pathname (directory or disc) where trace files are written for the background processes (LGWR, DBWn, and so on) during
Oracle operations. It also defines the location of the database alert file which logs significant events and messages.
The directory name specifying the core dump location (for UNIX).



INSTANCE_TYPE

LARGE_POOL_SIZE
Description : Specifies the size of the large pool allocation heap, which is used by Shared Server for session memory, parallel execution for
message buffers, and RMAN backup and recovery for disk I/O buffers.
Range of Values: 600K (minimum); >= 20000M (maximum is operating system specific).
Default Value : 0, unless parallel execution or DBWR_IO_SLAVES are configured

LOCAL_LISTENER
Description : A Oracle Net address list which identifies database instances on the same machine as the Oracle Net listeners. Each instance
and dispatcher registers with the listener to enable client connections. This parameter overrides MTS_LISTENER_ADDRESS and
MTS_MULTIPLE_LISTENERS parameters that were obsolete as 8.1.
Range of Values: A valid Oracle Net address list.
Default Value : (ADDRESS_LIST=(Address=(Protocol=TCP)(Host=localhost)(Port=1521)) (Address=(Protocol=IPC)(Key=DBname)))

SHARED_POOL_SIZE
Description : Specifies the size of the shared pool in bytes. The shared pool contains objects such as shared cursors, stored procedures,
control structures, and Parallel Execution message buffers. Larger values can improve performance in multi-user systems.
Range of Values: 300 Kbytes - operating system dependent.
Default Value : If 64 bit, 64MB, else 16MB

SPFILE
Description: Specifies the name of the current server parameter file in use.
Range of Values: static parameter
Default Value: The SPFILE parameter can be defined in a client side PFILE to indicate the name of the server parameter file to use. When the
default server parameter file is used by the server, the value of SPFILE will be internally set by the server.

INSTANCE_NUMBER
Description : A Cluster Database parameter that assigns a unique number for mapping the instance to one free list group owned by a
database object created with storage parameter FREELIST GROUPS. Use this value in the INSTANCE clause of the ALTER TABLE ...
ALLOCATE EXTENT statement to dynamically allocate extents to this instance.
Range of Values: 1 to MAX_INSTANCES (specified at database creation).
Default Value : Lowest available number (depends on instance startup order and on the INSTANCE_NUMBER values assigned to other
instances)


Please read following Metalink note :

Subject: SGA sizing for ASM instances and databases that use ASM   Doc ID: Note:282777.1
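As a quick illustration (the values below are examples only; adapt them to your environment), the ASM-specific parameters can be checked and changed online from an ASM instance with SQL*Plus :

SQL> show parameter asm

-- restrict disk discovery to the devices prepared for ASM (example value)
SQL> ALTER SYSTEM SET asm_diskstring='/dev/ASM_disk*' SCOPE=BOTH SID='*';

-- temporarily raise the rebalance power (range 1 to 11) to speed up a rebalance
SQL> ALTER SYSTEM SET asm_power_limit=4 SCOPE=BOTH SID='*';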



• Choose the type of parameter file that you would like to use for the new ASM instances

OPTION 1

“Create initialization
parameter file (IFILE)”

And specify the filename


..

OPTION 2

Select “Create server


parameter file (SPFILE)”,

Replace
{ORACLE_HOME}/dbs/s
pfile+ASM.ora by
/dev/ASMspf_disk

THEN Click on “Next” …



Database Configuration
Assistant :

Click on “OK”, DBCA will create


an ASM Instance on each node …

Database Configuration
Assistant :

DBCA is creating the ASM


instances.

When done, check the content of the init+ASM<n>.ora file on each node.
You should also find a “+ASM” directory in the $ORACLE_BASE.

{node1:asm}/oracle/asm/11.1.0/dbs # cat init+ASM1.ora
SPFILE='/dev/ASMspf_disk'
{node1:asm}/oracle/asm/11.1.0/dbs #

{node2:asm}/oracle/asm/11.1.0/dbs # cat init+ASM2.ora
SPFILE='/dev/ASMspf_disk'
{node2:asm}/oracle/asm/11.1.0/dbs #

Database Configuration
Assistant :

ASM instances are created,

Click on « Create New » to create


an ASM Disk Group.



 If you don’t see any
candidate disks,

Do click on “Change Disk


Discovery Path” …

… and change to the right path.

If you still don’t see any


candidate disks, do check
the disk preparation.



About ASM EXTERNAL Redundancy …

Redundancy will be done by external


mechanism, and not by ASM.

Thru SQLPlus on node1, it will be the following syntax :

su - asm
export ORACLE_SID=+ASM1
export ORACLE_HOME=/oracle/asm/11.1.0
sqlplus /nolog
connect /as sysdba

Then execute following SQL :

CREATE DISKGROUP DATA_DISKGROUP
EXTERNAL REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';



About ASM NORMAL Redundancy
without Failure groups …

NORMAL Redundancy stand for 2 copies of


the block (1 original, and 1 copy).

Redundancy will be done by ASM.


Block will be mirrored between the disks,
ensuring that original and copy are not on the
same disk.

If one copy is lost, ASM will ensure to re-


create the copy, making sure that always 2
copies are available on different disks.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';



About ASM HIGH Redundancy without
Failure groups …

HIGH Redundancy stand for 3 copies of the


block (1 original, and 2 copies).

Redundancy will be done by ASM.


Block will be mirrored between the disks,
ensuring that original and copies are not on
the same disk.

If one copy is lost, ASM will ensure to re-


create the copy, making sure that always 3
copies are available on different disks.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
HIGH REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';



About ASM NORMAL Redundancy with
2 Failure groups …

NORMAL Redundancy stand for 2 copies


of the block (1 original, and 1 copy).

Redundancy will be done by ASM.


Block will be mirrored between the disks,
ensuring that original and copies are not
on the same disk.

If one copy is lost, ASM will ensure to re-


create the copy, making sure that always 2
copies are available on different disks.

Failure groups will ensure that the original and


copy are in different groups within the
diskgroup, meaning that if the 2 failure
groups are made of disks from 2 different
storage units, the original and the copy will be on
different storage units.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk4',
'/dev/ASM_disk5',
'/dev/ASM_disk6';



About ASM HIGH Redundancy with 3
Failure groups …

HIGH Redundancy stands for 3 copies


of the block (1 original, and 2 copies).

Redundancy will be done by ASM.


Block will be mirrored between the disks,
ensuring that original and copies are not
on the same disk.

If one copy is lost, ASM will re-


create it, making sure that 3 copies
are always available on different disks.

Failure groups will ensure that the original and


copies are in different groups within the
diskgroup, meaning that if the 3 failure
groups are made of disks from 3 different
storage units, the original and copies will be on
different storage units.

Thru SQLPlus on node1, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
HIGH REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk3',
'/dev/ASM_disk4'
FAILGROUP GROUP3 DISK
'/dev/ASM_disk5',
'/dev/ASM_disk6';



About Setting or NOT ASM Name to disks …

Syntax with DISK naming …

In this case, we force the name of each ASM disk to ease administration.
If one disk is dropped, and the same disk is to be re-added to the same or a new Disk Group, the ASM disk name has to be different from the previous one !!!

Thru SQLPlus, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
DISK
'/dev/ASM_disk1' NAME ASM_disk1,
'/dev/ASM_disk2' NAME ASM_disk2,
'/dev/ASM_disk3' NAME ASM_disk3;

Syntax with automatic DISK naming …

The ASM instance will automatically generate an ASM disk name for each disk; the ASM disk name will be unique. And if a disk is dropped and added again in the same or a new diskgroup, a new ASM disk name will be generated !!!

Thru SQLPlus, it will be the following syntax :

CREATE DISKGROUP DATA_DISKGROUP
NORMAL REDUNDANCY
DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2',
'/dev/ASM_disk3';



For cookbook purpose, we will create an ASM DISKGROUP called “DATA_DG1”

With 2 Failure Groups called :

• “GROUP1” with 2 disks


o /dev/ASM_disk1
o /dev/ASM_disk2

and

• “GROUP2”
o /dev/ASM_disk4
o /dev/ASM_disk5

Click on “ ”, then ASM disk group creation will start …

Thru SQLPlus, it would be the following syntax :

CREATE DISKGROUP DATA_DG1
NORMAL REDUNDANCY
FAILGROUP GROUP1 DISK
'/dev/ASM_disk1',
'/dev/ASM_disk2'
FAILGROUP GROUP2 DISK
'/dev/ASM_disk4',
'/dev/ASM_disk5';

ASM alert log history with command executed by DBCA :

Thu Feb 14 11:53:42 2008


SQL> CREATE DISKGROUP DATA_DG1 Normal REDUNDANCY FAILGROUP GROUP2 DISK '/dev/ASM_disk4' SIZE 4096M ,
'/dev/ASM_disk5' SIZE 4096M FAILGROUP GROUP1 DISK '/dev/ASM_disk1' SIZE 4096M ,
'/dev/ASM_disk2' SIZE 4096M
NOTE: Assigning number (1,0) to disk (/dev/ASM_disk4)
NOTE: Assigning number (1,1) to disk (/dev/ASM_disk5)
NOTE: Assigning number (1,2) to disk (/dev/ASM_disk1)
NOTE: Assigning number (1,3) to disk (/dev/ASM_disk2)
NOTE: initializing header on grp 1 disk DATA_DG1_0000
NOTE: initializing header on grp 1 disk DATA_DG1_0001
NOTE: initializing header on grp 1 disk DATA_DG1_0002
NOTE: initializing header on grp 1 disk DATA_DG1_0003
NOTE: initiating PST update: grp = 1
kfdp_update(): 1
Thu Feb 14 11:53:42 2008
kfdp_updateBg(): 1
NOTE: group DATA_DG1: initial PST location: disk 0000 (PST copy 0)
NOTE: group DATA_DG1: initial PST location: disk 0002 (PST copy 1)
NOTE: PST update grp = 1 completed successfully
NOTE: cache registered group DATA_DG1 number=1 incarn=0x966b9ace
NOTE: cache opening disk 0 of grp 1: DATA_DG1_0000 path:/dev/ASM_disk4
NOTE: cache opening disk 1 of grp 1: DATA_DG1_0001 path:/dev/ASM_disk5
NOTE: cache opening disk 2 of grp 1: DATA_DG1_0002 path:/dev/ASM_disk1
NOTE: cache opening disk 3 of grp 1: DATA_DG1_0003 path:/dev/ASM_disk2
Thu Feb 14 11:53:42 2008



* allocate domain 1, invalid = TRUE
kjbdomatt send to node 1
Thu Feb 14 11:53:42 2008
NOTE: attached to recovery domain 1
NOTE: cache creating group 1/0x966B9ACE (DATA_DG1)
NOTE: cache mounting group 1/0x966B9ACE (DATA_DG1) succeeded
NOTE: allocating F1X0 on grp 1 disk DATA_DG1_0000
NOTE: allocating F1X0 on grp 1 disk DATA_DG1_0002
NOTE: diskgroup must now be re-mounted prior to first use
NOTE: cache dismounting group 1/0x966B9ACE (DATA_DG1)
NOTE: lgwr not being msg'd to dismount
kjbdomdet send to node 1
detach from dom 1, sending detach message to node 1
freeing rdom 1
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x966B9ACE (DATA_DG1)
kfdp_dismount(): 2
kfdp_dismountBg(): 2
kfdp_dismount(): 3
kfdp_dismountBg(): 3
NOTE: De-assigning number (1,0) from disk (/dev/ASM_disk4)
NOTE: De-assigning number (1,1) from disk (/dev/ASM_disk5)
NOTE: De-assigning number (1,2) from disk (/dev/ASM_disk1)
NOTE: De-assigning number (1,3) from disk (/dev/ASM_disk2)
SUCCESS: diskgroup DATA_DG1 was created
NOTE: cache registered group DATA_DG1 number=1 incarn=0xba2b9ad0
NOTE: Assigning number (1,3) to disk (/dev/ASM_disk2)
NOTE: Assigning number (1,0) to disk (/dev/ASM_disk4)
NOTE: Assigning number (1,1) to disk (/dev/ASM_disk5)
NOTE: Assigning number (1,2) to disk (/dev/ASM_disk1)
NOTE: start heartbeating (grp 1)
Thu Feb 14 11:53:52 2008
kfdp_query(): 6
Thu Feb 14 11:53:52 2008
kfdp_queryBg(): 6
NOTE: cache opening disk 0 of grp 1: DATA_DG1_0000 path:/dev/ASM_disk4
NOTE: F1X0 found on disk 0 fcn 0.0
NOTE: cache opening disk 1 of grp 1: DATA_DG1_0001 path:/dev/ASM_disk5
NOTE: cache opening disk 2 of grp 1: DATA_DG1_0002 path:/dev/ASM_disk1
NOTE: F1X0 found on disk 2 fcn 0.0
NOTE: cache opening disk 3 of grp 1: DATA_DG1_0003 path:/dev/ASM_disk2
NOTE: cache mounting (first) group 1/0xBA2B9AD0 (DATA_DG1)
* allocate domain 1, invalid = TRUE
kjbdomatt send to node 1
NOTE: attached to recovery domain 1
NOTE: cache recovered group 1 to fcn 0.0
Thu Feb 14 11:53:53 2008
NOTE: opening chunk 1 at fcn 0.0 ABA
NOTE: seq=2 blk=0
NOTE: cache mounting group 1/0xBA2B9AD0 (DATA_DG1) succeeded
Thu Feb 14 11:53:53 2008
kfdp_query(): 7
kfdp_queryBg(): 7
NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 1
SUCCESS: diskgroup DATA_DG1 was mounted
SUCCESS: CREATE DISKGROUP DATA_DG1 Normal REDUNDANCY FAILGROUP GROUP2 DISK '/dev/ASM_disk4' SIZE 4096M ,
'/dev/ASM_disk5' SIZE 4096M FAILGROUP GROUP1 DISK '/dev/ASM_disk1' SIZE 4096M ,
'/dev/ASM_disk2' SIZE 4096M
ALTER SYSTEM SET asm_diskgroups='DATA_DG1' SCOPE=BOTH SID='+ASM1';
ALTER SYSTEM SET asm_diskgroups='DATA_DG1' SCOPE=BOTH SID='+ASM2';
NOTE: enlarging ACD for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 11:54:05 2008
SUCCESS: ACD enlarged for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:25:39 2008



We now have an ASM disk
group called “DATA_DG1”

With 2 failure groups :

• GROUP1 with
o /dev/ASM_disk1
o /dev/ASM_disk2

• GROUP2 with
o /dev/ASM_disk4
o /dev/ASM_disk5

Database Configuration
Assistant : ASM Disk Groups

ASM disk group is then


displayed with information as :

• Disk Group Name


• Size (MB), total capacity
• Free (MB), Available space
• Redundancy type
• Mount status

From the menu, it’s possible to (equivalent SQL commands are sketched below) :

• Create a new ASM disk group


• Add disk(s) to an existing ASM disk group
• Manage ASM templates
• Mount a selected ASM disk group
• Mount all ASM disk groups
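For reference, a minimal SQL*Plus sketch of the equivalent operations (diskgroup, failure group and disk names are the ones used in this cookbook; adapt them to your configuration) :

-- mount / dismount one diskgroup from an ASM instance
ALTER DISKGROUP DATA_DG1 MOUNT;
ALTER DISKGROUP DATA_DG1 DISMOUNT;

-- mount all diskgroups listed in the ASM_DISKGROUPS parameter
ALTER DISKGROUP ALL MOUNT;

-- add a disk to an existing diskgroup (a rebalance starts automatically)
ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3';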



From

If you want to create a new ASM diskgroup, click on “ ”, and ONLY candidate disks will be displayed :
/dev/ASM_disk1, /dev/ASM_disk2, /dev/ASM_disk4 and /dev/ASM_disk5 are no longer “Candidate” !!!

If you select “Show All”, All disks will be shown :

o /dev/ASM_disk1 and /dev/ASM_disk2 as “Member”
(members of an ASM DiskGroup, within Failure Group GROUP1).
ASM disk name “DATA_DG1_0002” has been assigned by the ASM instance to device /dev/ASM_disk1
ASM disk name “DATA_DG1_0003” has been assigned by the ASM instance to device /dev/ASM_disk2

o /dev/ASM_disk4 and /dev/ASM_disk5 as “Member”
(members of an ASM DiskGroup, within Failure Group GROUP2).
ASM disk name “DATA_DG1_0000” has been assigned by the ASM instance to device /dev/ASM_disk4
ASM disk name “DATA_DG1_0001” has been assigned by the ASM instance to device /dev/ASM_disk5

o /dev/ASM_disk3, /dev/ASM_disk6 and /dev/ASM_disk7 as “Candidate”



From

If you want to add disks in an ASM diskGroup, Select the DiskGroup “DATA_DG1”, and click on “ ”.

ONLY “Candidate” disks will


be displayed,

and ASM REDUNDANCY


type can not be modified !!!

• add in “GROUP1” disk


o /dev/ASM_disk3

and

• add in “GROUP2” disk


o /dev/ASM_disk6

Thru SQLPlus, it will be the following syntax :

ALTER DISKGROUP DATA_DG1


ADD
FAILGROUP GROUP1 DISK '/dev/ASM_disk3'
FAILGROUP GROUP2 DISK '/dev/ASM_disk6';

ASM alert log history with command executed by DBCA :

SQL> ALTER DISKGROUP ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3' SIZE 4096M FAILGROUP
GROUP2 DISK '/dev/ASM_disk6' SIZE 4096M;
NOTE: Assigning number (1,4) to disk (/dev/ASM_disk6)
NOTE: Assigning number (1,5) to disk (/dev/ASM_disk3)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DATA_DG1_0004
NOTE: initializing header on grp 1 disk DATA_DG1_0005
NOTE: cache opening disk 4 of grp 1: DATA_DG1_0004 path:/dev/ASM_disk6
NOTE: cache opening disk 5 of grp 1: DATA_DG1_0005 path:/dev/ASM_disk3
NOTE: requesting all-instance disk validation for group=1
Thu Feb 14 12:25:42 2008
NOTE: disk validation pending for group 1/0xba2b9ad0 (DATA_DG1)
SUCCESS: validated disks for 1/0xba2b9ad0 (DATA_DG1)
NOTE: initiating PST update: grp = 1
kfdp_update(): 8
Thu Feb 14 12:25:45 2008
kfdp_updateBg(): 8
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xba2b9ad0 (DATA_DG1)
kfdp_query(): 9



kfdp_queryBg(): 9
kfdp_query(): 10
kfdp_queryBg(): 10
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
SUCCESS: ALTER DISKGROUP ALTER DISKGROUP DATA_DG1 ADD FAILGROUP GROUP1 DISK '/dev/ASM_disk3' SIZE 4096M FAILGROUP
GROUP2 DISK '/dev/ASM_disk6' SIZE 4096M;
NOTE: starting rebalance of group 1/0xba2b9ad0 (DATA_DG1) at power 1
Starting background process ARB0
Thu Feb 14 12:25:49 2008
ARB0 started with pid=20, OS id=606380
NOTE: assigning ARB0 to group 1/0xba2b9ad0 (DATA_DG1)
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 4:2
Thu Feb 14 12:26:13 2008
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:26:16 2008
NOTE: requesting all-instance membership refresh for group=1
NOTE: initiating PST update: grp = 1
kfdp_update(): 11
Thu Feb 14 12:26:19 2008
kfdp_updateBg(): 11
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1
kfdp_update(): 12
kfdp_updateBg(): 12
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0xba2b9ad0 (DATA_DG1)
kfdp_query(): 13
kfdp_queryBg(): 13
SUCCESS: refreshed membership for 1/0xba2b9ad0 (DATA_DG1)
Thu Feb 14 12:44:31 2008

Adding new disks …

We now have the ASM disk


group “DATA_DG1”

With 2 failure groups :

• GROUP1 with
o /dev/ASM_disk1
o /dev/ASM_disk2
o /dev/ASM_disk3

• GROUP2 with
o /dev/ASM_disk4
o /dev/ASM_disk5
o /dev/ASM_disk6



From

Click on “ ” to :

• define/manage the type of striping and the


level of redundancy for each type of object
to be stored on ASM disk groups.

• add new type of object with its own


attributes.

From

Click on “ ” to mount the selected ASM disk group,

or click on “ ” to mount all ASM disk groups

Database Configuration
Assistant :

Click on “Finish”, then


“Yes” to exit …



13.5.2 Manual Steps

As asm unix user, from one node :

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # cd $ORACLE_HOME/dbs
{node1:asm}/oracle/asm/11.1.0/dbs #

{node1:asm}/oracle/asm/11.1.0/dbs # vi init+ASM1.ora

Add the line and save :

SPFILE='/dev/ASMspf_disk'

{node1:asm}/oracle/asm/11.1.0/dbs # rcp init+ASM1.ora node2:


/oracle/asm/11.1.0/dbs/init+ASM2.ora

On node1, do create a pfile initASM.ora in /oracle/asm/11.1.0/dbs with the specified values :

+ASM1.__oracle_base='/oracle'#ORACLE_BASE set from environment
+ASM2.__oracle_base='/oracle'#ORACLE_BASE set from environment
+ASM1.asm_diskgroups=''
+ASM2.asm_diskgroups=''
+ASM1.asm_diskstring='/dev/ASM_disk*'
+ASM2.asm_diskstring='/dev/ASM_disk*'
*.cluster_database=true
*.diagnostic_dest='/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.processes=100

On each node, node1 then node2, as asm user, do create the following directories :

On node1

{node1:asm}/oracle/asm # cd ..
{node1:asm}/oracle # mkdir admin
{node1:asm}/oracle # mkdir admin/+ASM
{node1:asm}/oracle # mkdir admin/+ASM/hdump
{node1:asm}/oracle # mkdir admin/+ASM/pfile

On node2

{node2:asm}/oracle/asm # cd ..
{node2:asm}/oracle # mkdir admin
{node2:asm}/oracle # mkdir admin/+ASM
{node2:asm}/oracle # mkdir admin/+ASM/hdump
{node2:asm}/oracle # mkdir admin/+ASM/pfile



On node1, let create the ASM instance

On node1

{node1:asm}/oracle/asm # cd 11.1.0/dbs
{node1:asm}/oracle/asm/11.1.0/dbs # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm/11.1.0/dbs # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm/11.1.0/dbs # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 22:31:09 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected to an idle instance.
SQL>
SQL> startup nomount pfile=/oracle/asm/11.1.0/dbs/initASM.ora
ASM instance started

Total System Global Area 125829120 bytes


Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
SQL>

SQL> create spfile='/dev/ASMspf_disk' from pfile='/oracle/asm/11.1.0/dbs/initASM.ora';

File created.
SQL>
SQL> shutdown
ASM instance shutdown
SQL>

SQL> startup nomount;


ASM instance started

Total System Global Area 125829120 bytes


Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
SQL>
SQL> set linesize 1000
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from
v$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STATUS


--------------- ---------------- -------------------- ----------------- ------------
1 +ASM1 node1 11.1.0.6.0 STARTED

SQL>



THEN on node2, let’s start the ASM instance

On node2

{node2:asm}/oracle/asm # cd 11.1.0/dbs
{node2:asm}/oracle/asm/11.1.0/dbs # export ORACLE_HOME=/oracle/asm/11.1.0
{node2:asm}/oracle/asm/11.1.0/dbs # export ORACLE_SID=+ASM2
{node2:asm}/oracle/asm/11.1.0/dbs # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 22:31:09 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected to an idle instance.
SQL>
SQL> startup nomount
ASM instance started

Total System Global Area 125829120 bytes


Fixed Size 1301456 bytes
Variable Size 124527664 bytes
Database Buffers 0 bytes
Redo Buffers 0 bytes
SQL>
SQL> set linesize 1000
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from v$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STATUS


--------------- ---------------- -------------------- ----------------- ------------
2 +ASM2 node2 11.1.0.6.0 STARTED

SQL>

Do query the global view or cluster view, know as gv$instance

SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from


gv$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STATUS


--------------- ---------------- -------------------- ----------------- ------------
2 +ASM2 node2 11.1.0.6.0 STARTED
1 +ASM1 node1 11.1.0.6.0 STARTED

SQL>



On node1, then node2
Do create an Oracle password file in /oracle/asm/11.1.0/dbs

{node1:asm}/oracle/asm/11.1.0/dbs # orapwd
Usage: orapwd file=<fname> password=<password> entries=<users> force=<y/n>
ignorecase=<y/n> nosysdba=<y/n>

where
file - name of password file (required),
password - password for SYS (optional),
entries - maximum number of distinct DBA (required),
force - whether to overwrite existing file (optional),
ignorecase - passwords are case-insensitive (optional),
nosysdba - whether to shut out the SYSDBA logon (optional Database Vault only).

There must be no spaces around the equal-to (=) character.


{node1:asm}/oracle/asm/11.1.0/dbs #

On node1

{node1:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM1


password=oracle entries=20
{node1:asm}/oracle/asm/11.1.0/dbs #
{node1:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r----- 1 asm oinstall 1536 Feb 14 10:53 orapw+ASM1
{node1:asm}/oracle/asm/11.1.0/dbs #

On node2

{node2:asm}/oracle/asm/11.1.0/dbs # orapwd file=/oracle/asm/11.1.0/dbs/orapw+ASM2


password=oracle entries=20
{node2:asm}/oracle/asm/11.1.0/dbs #
{node2:asm}/oracle/asm/11.1.0/dbs # ls -la orapw*
-rw-r----- 1 asm oinstall 1536 Feb 14 10:53 orapw+ASM2
{node2:asm}/oracle/asm/11.1.0/dbs #

Checking Oracle Clusterware :

{node1:asm}/oracle/asm # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #



We need to register the ASM instances in Oracle Clusterware Registry (OCR).

Registering ASM instances within Oracle Clusterware :

For node1

{node1:asm}/oracle/asm # srvctl add asm -n node1 -o /oracle/asm/11.1.0
{node1:asm}/oracle/asm #

For node2

{node1:asm}/oracle/asm # srvctl add asm -n node2 -o /oracle/asm/11.1.0


{node1:asm}/oracle/asm #

Checking Oracle Clusterware after ASM resources registration :

{node1:asm}/oracle/asm # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #

Start and stop the ASM instance resources :

Stop ASM on node1, and node2

{node1:asm}/oracle/asm # srvctl stop asm -n node1
{node1:asm}/oracle/asm # srvctl stop asm -n node2

Start ASM on node1, and node2

{node1:asm}/oracle/asm # srvctl start asm -n node1
{node1:asm}/oracle/asm # srvctl start asm -n node2

Add an entry in /etc/oratab on each node (+ASM1:/oracle/asm/11.1.0:N on node1, +ASM2:/oracle/asm/11.1.0:N on node2) :

On node1 :

{node1:asm}/oracle/asm # cat /etc/oratab
+ASM1:/oracle/asm/11.1.0:N
{node1:asm}/oracle/asm #

On node2 :

{node2:asm}/oracle/asm # cat /etc/oratab


+ASM2:/oracle/asm/11.1.0:N
{node2:asm}/oracle/asm #
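Optionally (a small usage illustration, not a required step), the oratab entries allow the asm user environment to be set with the standard oraenv script :

{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # . oraenv
ORACLE_SID = [+ASM1] ?
{node1:asm}/oracle/asm # echo $ORACLE_HOME
/oracle/asm/11.1.0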



13.6 Configure ASM local and remote listeners

Create and Add entry for local and remote listeners in $ASM_HOME/network/admin/tnsnames.ora
(/oracle/asm/11.1.0/network/admin/tnsnames.ora)

On node1 :

{node1:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora
# tnsnames.ora Network Configuration File:
# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)

LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

{node1:asm}/oracle/asm #

On node2 :

{node2:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora
# tnsnames.ora Network Configuration File:
# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)

LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

{node2:asm}/oracle/asm #
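To quickly check that these aliases resolve from each node, you can use tnsping (the output below is indicative only) :

{node1:asm}/oracle/asm # tnsping LISTENER_+ASM1
...
Attempting to contact (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
OK (10 msec)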



Set “LOCAL_LISTENER”, “REMOTE_LISTENER” and “SERVICES” at ASM instance level

For node1, and node2, from node1 :

{node1:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle # export ORACLE_SID=+ASM1
{node1:asm}/oracle # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string +ASM
SQL> show parameter listener

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
local_listener string
remote_listener string
SQL>

Let's set local_listener and remote_listener :

• remote_listener from node1 and node2 MUST BE THE SAME, and its ENTRY MUST BE
PRESENT in the tnsnames.ora of each node.
• local_listener from node1 and node2 are different, and the ENTRIES MUST BE PRESENT
in the tnsnames.ora of each node.
• local_listener from node1 and node2 are not the listeners defined in the listener.ora files of
each node.

SQL> ALTER SYSTEM SET remote_listener='LISTENERS_+ASM' SCOPE=BOTH SID='*';

System altered.

SQL> ALTER SYSTEM SET local_listener='LISTENER_+ASM1' SCOPE=BOTH SID='+ASM1';

System altered.

SQL> ALTER SYSTEM SET local_listener='LISTENER_+ASM2' SCOPE=BOTH SID='+ASM2';

System altered.

SQL> show parameters listener

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
local_listener string LISTENER_+ASM1
remote_listener string LISTENERS_+ASM
SQL>



Check for node2, from node2 :

{node2:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0
{node2:asm}/oracle # export ORACLE_SID=+ASM2
{node2:asm}/oracle # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string +ASM
SQL> show parameters listener

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
local_listener string LISTENER_+ASM2
remote_listener string LISTENERS_+ASM
SQL>

Checking the listener status on each node, we will see 2 instances registered, +ASM1 and +ASM2

{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008


00:01:57

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_NODE1
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 -
Production
Start Date 13-FEB-2008 22:04:55
Uptime 5 days 1 hr. 57 min. 2 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
The command completed successfully

{node1:asm}/oracle/asm/11.1.0/network/admin #

Checking the listener services on each node, we will see 2 instances registered, +ASM1 and +ASM2

{node1:asm}/oracle/asm/11.1.0/network/admin # lsnrctl services

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 19-FEB-2008


00:02:47

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
The command completed successfully

{node1:asm}/oracle/asm/11.1.0/network/admin #



For remote connection to ASM instances :
Add entry in $ASM_HOME/network/admin/tnsnames.ora
(/oracle/asm/11.1.0/network/admin/tnsnames.ora)

On node1, and node2 :

{node1:asm}/oracle/asm/11.1.0/network/admin # cat tnsnames.ora

# tnsnames.ora Network Configuration File:
/oracle/products/asm/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

LISTENERS_+ASM =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

LISTENER_+ASM1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)

LISTENER_+ASM2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

ASM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = +ASM)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)

{node1:asm}/oracle/asm #



13.7 Check Oracle Clusterware, and manage ASM resources
Checking if ASM instances are registered in Oracle Clusterware :
1 ASM instance on each node must be registered !!!

To stop the ASM instance on a specified node (node1 for example) :

{node1:asm}/oracle/asm # srvctl stop asm -n node1
{node1:asm}/oracle/asm # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #

To start the ASM instance on a specified node (node1 for example) :

{node1:asm}/oracle/asm # srvctl start asm -n node1
{node1:asm}/oracle/asm # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #



To check the status and configuration of the ASM cluster resources on node1 and node2 :

{node1:asm}/oracle/asm # srvctl status asm -n node1
ASM instance +ASM1 is running on node node1.
{node1:asm}/oracle/asm # srvctl status asm -n node2
ASM instance +ASM2 is running on node node2.
{node1:asm}/oracle/asm # srvctl config asm -n node1
+ASM1 /oracle/asm/11.1.0
{node1:asm}/oracle/asm # srvctl config asm -n node2
+ASM2 /oracle/asm/11.1.0
{node1:asm}/oracle/asm #

To list all attributes of the ASM resource on node1 :

{node1:asm}/oracle/asm/11.1.0/network/admin # crs_stat -p ora.node1.ASM1.asm
NAME=ora.node1.ASM1.asm
TYPE=application
ACTION_SCRIPT=/oracle/asm/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for ASM instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=180
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysasm
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=mount
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

{node1:asm}/oracle/asm/11.1.0/network/admin #



To see resources read/write permissions and ownership within the clusterware :

{node1:asm}/oracle/asm/11.1.0 # crs_stat -ls
Name Owner Primary PrivGrp Permission
-----------------------------------------------------------------
ora....SM1.asm asm oinstall rwxrwxr--
ora....E1.lsnr asm oinstall rwxrwxr--
ora.node1.gsd crs oinstall rwxr-xr--
ora.node1.ons crs oinstall rwxr-xr--
ora.node1.vip root oinstall rwxr-xr--
ora....SM2.asm asm oinstall rwxrwxr--
ora....E2.lsnr asm oinstall rwxrwxr--
ora.node2.gsd crs oinstall rwxr-xr--
ora.node2.ons crs oinstall rwxr-xr--
ora.node2.vip root oinstall rwxr-xr--
{node1:asm}/oracle/asm/11.1.0/network/admin #

Check the ASM daemons, for example on node1 :

{node1:root}/ # su - asm
{node1:asm}/oracle/asm # ps -ef|grep ASM
asm 163874 1 0 Feb 14 - 10:07 asm_lmon_+ASM1
asm 180340 1 0 Feb 14 - 0:18 asm_gmon_+ASM1
asm 221216 1 0 Feb 14 - 2:19 asm_rbal_+ASM1
asm 229550 1 0 Feb 14 - 0:12 asm_lgwr_+ASM1
asm 237684 1 0 Feb 14 - 11:14 asm_lmd0_+ASM1
asm 364670 1 0 Feb 14 - 0:08 asm_mman_+ASM1
asm 417982 1 0 Feb 14 - 17:15 asm_dia0_+ASM1
asm 434260 1 0 Feb 14 - 0:36
/oracle/asm/11.1.0/bin/racgimon daemon ora.node1.ASM1.asm
asm 458862 1 0 Feb 14 - 0:15 asm_ckpt_+ASM1
asm 463014 1 0 Feb 14 - 0:36 asm_lck0_+ASM1
asm 540876 1 0 Feb 14 - 0:07 asm_psp0_+ASM1
asm 544906 1 0 Feb 14 - 1:24 asm_pmon_+ASM1
asm 577626 1 0 Feb 14 - 0:26 asm_ping_+ASM1
asm 598146 1 0 Feb 14 - 2:15 asm_diag_+ASM1
asm 606318 1 0 Feb 14 - 0:10 asm_dbw0_+ASM1
asm 630896 1 0 Feb 14 - 12:30 asm_lms0_+ASM1
asm 635060 1 0 Feb 14 - 0:07 asm_smon_+ASM1
asm 639072 1 0 Feb 14 - 2:27 asm_vktm_+ASM1
asm 913652 954608 0 09:52:02 pts/5 0:00 grep ASM
{node1:asm}/oracle/asm #



13.8 About ASM instances Tuning

You should increase the ASM instance "processes" parameter on each node,

moving from the default value of 100 to 200, or 400, depending on your I/O activity.
A "processes" value that is too low may generate read/write I/O access errors from the database instances to the ASM disk
groups.

{node1:asm}/oracle # export ORACLE_HOME=/oracle/asm/11.1.0


{node1:asm}/oracle # export ORACLE_SID=+ASM1
{node1:asm}/oracle # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 18 23:50:24 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected

SQL> show parameter processes

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
processes integer 100
SQL>

The “processes” parameter cannot be changed dynamically (on the fly).

You must modify the parameter for both ASM instances, permanently in the ASM
spfile, then stop and restart the ASM instances to activate the change :

SQL> alter system set processes=200 scope=spfile SID='*';

System altered.

SQL> show parameter processes

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
processes integer 100

SQL> exit

Let's stop and restart the ASM instances through the clusterware commands :

{node1:asm}/oracle/asm # srvctl stop asm -n node1
{node1:asm}/oracle/asm # srvctl stop asm -n node2

{node1:asm}/oracle/asm # crs_stat -t | grep asm
ora....SM1.asm application OFFLINE OFFLINE
ora....SM2.asm application OFFLINE OFFLINE
{node1:asm}/oracle/asm #

{node1:asm}/oracle/asm # srvctl start asm -n node1
{node1:asm}/oracle/asm # srvctl start asm -n node2

{node1:asm}/oracle/asm # crs_stat -t | grep asm


ora....SM1.asm application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
{node1:asm}/oracle/asm #



Then, under SQL*Plus :

SQL> show parameter processes

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
processes integer 200

SQL>

Please read the following Metalink note :

Subject: SGA sizing for ASM instances and databases that use ASM   Doc ID: Note:282777.1
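As a quick check before applying the recommendations of this note, you can display the current memory settings of each ASM instance under SQL*Plus (a sketch only; your values will differ) :

SQL> show parameter memory_target
SQL> show parameter sga_max_size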



13.9 About ASMCMD (ASM Command Line Utility)
Access to document :

• «Commanding ASM » on http://www.oracle.com/oramag


http://www.oracle.com/technology/oramag/oracle/06-mar/o26asm.html
Access, transfer, and administer ASM files without SQL commands.
By Arup Nanda

• «Add Storage, Not Projects » on http://www.oracle.com/oramag


http://www.oracle.com/technology/oramag/oracle/04-may/o34tech_management.html
Automatic Storage Management lets DBAs do their jobs and keep their weekends.
By Jonathan Gennick

To access the content of the ASM Disk Groups, from node1 :

{node1:asm}/oracle/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # export ORACLE_SID=+ASM1
{node1:asm}/oracle/asm # asmcmd -a sysasm -p
ASMCMD [+] > help
asmcmd [-v] [-a <sysasm|sysdba>] [-p] [command]

The environment variables ORACLE_HOME and ORACLE_SID determine the


instance to which the program connects, and ASMCMD establishes a
bequeath connection to it, in the same manner as a SQLPLUS / AS
SYSDBA. The user must be a member of the SYSDBA group.

Specifying the -v option prints the asmcmd version number and


exits immediately.

Specify the -a option to choose the type of connection. There are


only two possibilities: connecting as "sysasm" or as "sysdba".
The default value if this option is unspecified is "sysasm".

Specifying the -p option allows the current directory to be displayed


in the command prompt, like so:

ASMCMD [+DATAFILE/ORCL/CONTROLFILE] >

[command] specifies one of the following commands, along with its


parameters.

Type "help [command]" to get help on a specific ASMCMD command.

commands:
--------
help

cd
cp
du
find
ls
lsct
lsdg
mkalias
mkdir
pwd
rm
rmalias

md_backup (New with 11G)


md_restore (New with 11G)

lsdsk (New with 11G)


remap (New with 11G)

ASMCMD [+] >

Subject: ASMCMD ­ ASM command line utility   Doc ID: Note:332180.1
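As an illustration of one of the new 11g commands listed above, md_backup dumps the disk group metadata (disks, attributes, directories, aliases) to a flat file that md_restore can later replay. A minimal sketch, where the backup file name is only an example (run "help md_backup" to confirm the exact options of your release) :

ASMCMD [+] > md_backup -b /tmp/data_dg1_md.backup -g data_dg1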



Using the ASMCMD “ls” command :

ASMCMD [+] > help ls


ls [-lsdrtLacgH] [name]

List [name] or its contents alphabetically if [name] refers to a


directory. [name] can contain the wildcard "*" and is the current
directory if unspecified. Directory names in the display list
have the "/" suffix to clarify their identity. The first two optional
flags specify how much information is displayed for each file, in the
following manner:

(no flag) V$ASM_ALIAS.NAME

-l V$ASM_ALIAS.NAME, V$ASM_ALIAS.SYSTEM_CREATED;
V$ASM_FILE.TYPE, V$ASM_FILE.REDUNDANCY,
V$ASM_FILE.STRIPED, V$ASM_FILE.MODIFICATION_DATE

-s V$ASM_ALIAS.NAME;
V$ASM_FILE.BLOCK_SIZE, V$ASM_FILE.BLOCKS,
V$ASM_FILE.BYTES, V$ASM_FILE.SPACE

If the user specifies both flags, then the command shows an union of
their respective columns, with duplicates removed.

If an entry in the list is an user-defined alias or a directory,


then -l displays only the V$ASM_ALIAS columns, and -s shows only
the alias name and its size, which is zero because it is negligible.
Moreover, the displayed name contains a suffix that is in the form of
an arrow pointing to the absolute path of the system-created filename
it references:

t_db1.f => +diskgroupName/DBName/DATAFILE/SYSTEM.256.1

See the -L option below for an exception to this rule.

For disk group information, this command queries V$ASM_DISKGROUP_STAT


by default, which can be modified by the -c and -g flags.

The remaining flags have the following meanings:

-d If an argument is a directory, list only its name


(not its contents).

-r Reverse the sorting order.

-t Sort by time stamp (latest first) instead of by name.

-L If an argument is an user alias, display information on


the file it references.

-a If an argument is a system-created filename, show the


location of its user-defined alias, if any.

-c For disk group information, select from V$ASM_DISKGROUP,


or GV$ASM_DISKGROUP, if the -g flag is also active.
If the ASM software version is 10gR1, then the effects
of this flag is always active, whether or not it is set.

-g For disk group information, select from


GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP, if the -c
flag is also active; GV$ASM_DISKGROUP.INST_ID is
included in the output.

-H Suppress the column header information, so that


scripting is easier.

Note that "ls +" would return information on all diskgroups, including
whether they are mounted.



Not all possible file columns or disk group columns are included.
To view the complete set of columns for a file or a disk group,
query the V$ASM_FILE and V$ASM_DISKGROUP views.
ASMCMD [+] >

Using the ASMCMD “lsct” command :

ASMCMD [+] > help lsct


lsct [-cgH] [group]

List all clients and their information from V$ASM_CLIENT. If group is


specified, then return only information on that group.

-g Select from GV$ASM_CLIENT.

-H Suppress the column headers from the output.


ASMCMD [+] >

Using the ASMCMD “du” command :

ASMCMD [+] > help du


du [-H] [dir]

Display total space used for files located recursively under [dir],
similar to “du –s” under UNIX; default is the current directory. Two
values are returned, both in units of megabytes. The first value does
not take into account mirroring of the diskgroup while the second does.
For instance, if a file occupies 100 MB of space, then it actually
takes up 200 MB of space on a normal redundancy diskgroup and 300 MB
of space on a high redundancy diskgroup.

[dir] can also contain wildcards.

The –H flag suppresses the column headers from the output.


ASMCMD [+] >

Using the ASMCMD “lsdg” command :

ASMCMD [+] > help lsdg


lsdg [-cgH] [group]

List all diskgroups and their information from V$ASM_DISKGROUP. If


[group] is specified, then return only information on that group. The
command also informs the user if a rebalance is currently under way
for a diskgroup.

This command queries V$ASM_DISKGROUP_STAT by default, which can be


modified by the -c and -g flags.

-c Select from V$ASM_DISKGROUP, or GV$ASM_DISKGROUP,


if the -g flag is also active. If the ASM software
version is 10gR1, then the effects of this flag is
always active, whether or not it is set.

-g Select from GV$ASM_DISKGROUP_STAT, or GV$ASM_DISKGROUP,


if the -c flag is also active;
GV$ASM_DISKGROUP.INST_ID is included in the output.

-H Suppress the column headers from the output.

Not all possible disk group columns are included. To view the
complete set of columns for a disk group, query the V$ASM_DISKGROUP

view.
ASMCMD [+] >



Running ASMCMD commands (lsdg, du) :

From node1 :

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/asm/11.1.0


{node1:oracle}/oracle -> export ORACLE_SID=+ASM1
{node1:oracle}/oracle -> asmcmd -a sysasm -p
ASMCMD [+] >

ASMCMD [+] > lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED NORMAL N 512 4096 1048576 24576 24382 4096 10143 0 DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > lsdg -g
Inst_ID State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
1 MOUNTED NORMAL N 512 4096 1048576 24576 24382 4096 10143 0 DATA_DG1/
2 MOUNTED NORMAL N 512 4096 1048576 24576 24382 4096 10143 0 DATA_DG1/
ASMCMD [+] >

ASMCMD [+] > du


Used_MB Mirror_used_MB
2467 4952
ASMCMD [+] >



Running ASMCMD commands (ls, du, lsct) :

From node1

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/asm/11.1.0


{node1:oracle}/oracle -> export ORACLE_SID=+ASM1
{node1:oracle}/oracle -> asmcmd -a sysasm -p
ASMCMD [+] >

ASMCMD [+] > ls


DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -la
State Type Rebal Name
MOUNTED NORMAL N DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -s
Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
512 4096 1048576 24576 24382 4096 10143 0 DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > ls -g
Inst_ID Name
1 DATA_DG1/
2 DATA_DG1/
ASMCMD [+] >
ASMCMD [+] > du
“No database created at this stage, so no result”
ASMCMD [+] >
ASMCMD [+] > lsct
“No database created at this stage, so no result”
ASMCMD [+] >
ASMCMD [+] > lsct
DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group
JSC1DB CONNECTED 11.1.0.6.0 11.1.0.0.0 JSC1DB1 DATA_DG1
ASMCMD [+] > lsct -g
Instance_ID DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group
1 JSC1DB CONNECTED 11.1.0.6.0 11.1.0.0.0 JSC1DB1 DATA_DG1
2 JSC1DB CONNECTED 11.1.0.6.0 11.1.0.0.0 JSC1DB2 DATA_DG1
ASMCMD [+] >



13.10 Useful Metalink notes

Subject: How to cleanup ASM installation (RAC and Non-RAC)   Doc ID: Note:311350.1

Subject: Backing Up an ASM Instance   Doc ID: Note:333257.1

Subject: How to Re-configure Asm Disk Group?   Doc ID: Note:331661.1

Subject: Assigning a Physical Volume ID (PVID) To An Existing ASM Disk Corrupts the ASM Disk Header
Doc ID: Note:353761.1

Subject: How To Reclaim Asm Disk Space?   Doc ID: Note:351866.1

Subject: Re-creating ASM Instances and Diskgroups   Doc ID: Note:268481.1

Subject: How To Connect To Asm Instance Remotely   Doc ID: Note:340277.1

13.11 What has been done ?

At this stage :
• The Oracle Cluster Registry and Voting Disk are created and configured
• Oracle Cluster Ready Services is installed and started on all nodes.
• The VIP (Virtual IP), GSD and ONS application resources are configured on all
nodes.
• ASM Home is installed
• Default node listeners for ASM are created
• ASM instances are created and started
• ASM Diskgroup is created



14 INSTALLING ORACLE 11G R1 SOFTWARE

This cookbook deals with 11g RAC database(s), but if you also have 10g database(s) to run in cluster mode, you
can use 11g Oracle Clusterware and 11g ASM while keeping the 10g database(s).

Each RAC database software release will have its own ORACLE_HOME !!!

For all new RAC projects, we advise using Oracle 11g Clusterware and Oracle 11g ASM.

For 11g RAC database(s), all new features from 11g ASM will be available for use.
For 10g RAC database(s), not all 11g ASM new features are available for use; you have to use different ASM
diskgroups for 10g and 11g RAC database(s), and set the proper ASM diskgroup “compatible” attributes.
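As a sketch of how such attributes can be set (connected to the ASM instance as sysasm; DATA_DG1 is our example disk group, and the values must match the oldest database release that will use the group; note that these attributes can only be raised, never lowered) :

SQL> ALTER DISKGROUP DATA_DG1 SET ATTRIBUTE 'compatible.rdbms' = '10.1';
SQL> ALTER DISKGROUP DATA_DG1 SET ATTRIBUTE 'compatible.asm' = '11.1';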



At this stage :

• Oracle clusterware is installed in $ORA_CRS_HOME (/crs/11.1.0)


• Oracle ASM software is installed in $ASM_HOME (/oracle/asm/11.1.0)
• Default Listeners for ASM instances are configured within the $ASM_HOME
• ASM instances are up and running
• At least one ASM Disk Group is created and mounted on each node.

In our case, we have a Diskgroup called “DATA_DG1” in which to create an Oracle
cluster database.

BUT for now, we need to install the Oracle 11g Database Software in its own $ORACLE_HOME
(/oracle/rdbms/11.1.0) using the rdbms unix user.



14.1 11g R1 RDBMS Installation
The Oracle RAC option installation only has to be started from one node. Once the first node is installed,
the Oracle OUI automatically starts copying the mandatory files to the second node, using the rcp command. This step
can take a long time, depending on the network speed (one hour…), without any message. So don’t assume the OUI is
stalled; look at the network traffic before canceling the installation !
You can also create a staging area. The names of the subdirectories are in the format “Disk1” to “Disk3”.
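One simple way to confirm that the remote copy is still progressing is to watch the packet counters of the public interface on the target node (a sketch; en0 is an assumption, use the interface that actually carries the copy) :

{node2:root}/ # netstat -I en0 5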

On each node :
Run the AIX command "/usr/sbin/slibclean" as "root" to clean all unreferenced
libraries from memory !!!
{node1:root}/ # /usr/sbin/slibclean
{node2:root}/ # /usr/sbin/slibclean

From first node As Under VNC Client session, or other graphical interface, execute :
root user, execute :
{node1:root}/ # xhost +
access control disabled, clients can connect from any hosts
{node1:root}/ #

On each node, set the right ownership and permissions on the following directory :

{node1:root}/ # chown rdbms:oinstall /oracle/rdbms
{node1:root}/ # chmod 665 /oracle/rdbms
{node1:root}/ #

Login as rdbms (in our case), or oracle user and follow the procedure hereunder…

 Setup and export your DISPLAY, TMP and TEMP variables, with /tmp or another destination having enough free space (about 500Mb on each node).

{node1:rdbms}/ # export DISPLAY=node1:0.0

If not already set in the rdbms user's .profile, do :

{node1:rdbms}/ # export TMP=/tmp
{node1:rdbms}/ # export TEMP=/tmp
{node1:rdbms}/ # export TMPDIR=/tmp

 Check that Oracle Clusterware (including VIP, ONS, GSD, Listener and ASM resources) is started on
each node !!! ASM instances and listeners are not mandatory for the RDBMS installation.

As rdbms user from node1 :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:asm}/oracle/asm #



{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 190 MB. Actual 1933 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3584 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-02-14_02-41-49PM. Please wait
...{node1:rdbms}/distrib/SoftwareOracle/rdbms11gr1/aix/database # Oracle Universal Installer, Version 11.1.0.6.0
Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.

At the OUI Welcome screen

Just click Next ...

Select the installation type :

You have the option to choose Enterprise,


Standard Edition, or Custom to proceed.

Choose the “Custom” option to avoid


creating a database by default.

Then click Next ...



Specify File Locations :
Do not change the Source field

Specify a different ORACLE_HOME Name


with its own directory for the Oracle
software installation.

 This ORACLE_HOME must be


different than the CRS and ASM
ORACLE_HOME.

OraDb11g_home1
/oracle/rdbms/11.1.0

Then click Next ...

If you don’t see the following screen


with Node selection, it might be that
your CRS is down on one or all nodes.
→ Please check if CRS is up and
running on all nodes.

Specify Hardware Cluster Installation


Mode :

Select Cluster Installation

AND the other nodes on to which the


Oracle RDBMS software will be installed.
It is not necessary to select the node on
which the OUI is currently running. Click
Next.

Then click Next ...

The installer will check some product-specific prerequisites.

Don’t worry about the lines with a check
at status “Not executed”. These are just
warnings because the AIX maintenance level
might be higher than the one expected by the
installer, which is the case in our example (5300.07).

Then click Next ...



Details of the prerequisite checks done by runInstaller :

Checking operating system requirements ...
Expected result: One of 5300.05,6100.00
Actual Result: 5300.07
Check complete. The overall result of this check is: Passed
========================================================
Checking operating system package requirements ...
Checking for bos.adt.base(0.0); found bos.adt.base(5.3.7.0). Passed
Checking for bos.adt.lib(0.0); found bos.adt.lib(5.3.0.60). Passed
Checking for bos.adt.libm(0.0); found bos.adt.libm(5.3.7.0). Passed
Checking for bos.perf.libperfstat(0.0); found bos.perf.libperfstat(5.3.7.0). Passed
Checking for bos.perf.perfstat(0.0); found bos.perf.perfstat(5.3.7.0). Passed
Checking for bos.perf.proctools(0.0); found bos.perf.proctools(5.3.7.0). Passed
Checking for rsct.basic.rte(0.0); found rsct.basic.rte(2.4.8.0). Passed
Checking for rsct.compat.clients.rte(0.0); found rsct.compat.clients.rte(2.4.8.0). Passed
Checking for bos.mp64(5.3.0.56); found bos.mp64(5.3.7.1). Passed
Checking for bos.rte.libc(5.3.0.55); found bos.rte.libc(5.3.7.1). Passed
Checking for xlC.aix50.rte(8.0.0.7); found xlC.aix50.rte(9.0.0.1). Passed
Checking for xlC.rte(8.0.0.7); found xlC.rte(9.0.0.1). Passed
Check complete. The overall result of this check is: Passed
========================================================

Checking recommended operating system patches


Checking for IY89080(bos.rte.aio,5.3.0.51); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY92037(bos.rte.aio,5.3.0.52); found (bos.rte.aio,5.3.7.0). Passed
Checking for IY94343(bos.rte.lvm,5.3.0.55); found (bos.rte.lvm,5.3.7.0). Passed
Check complete. The overall result of this check is: Passed
========================================================

Checking kernel parameters


Check complete. The overall result of this check is: Not executed <<<<
OUI-18001: The operating system 'AIX Version 5300.07' is not supported.
Recommendation: Perform operating system specific instructions to update the kernel parameters.
========================================================

Checking physical memory requirements ...


Expected result: 922MB
Actual Result: 2048MB
Check complete. The overall result of this check is: Passed
========================================================

Checking available swap space requirements ...


Expected result: 3072MB
Actual Result: 3548MB
Check complete. The overall result of this check is: Passed
========================================================

Validating ORACLE_BASE location (if set) ...


Check complete. The overall result of this check is: Passed
========================================================

Checking maxmimum command line length argument, ncarg...


Check complete. The overall result of this check is: Passed
========================================================

Checking for proper system clean-up....


Check complete. The overall result of this check is: Passed
========================================================

Checking Oracle Clusterware version ...


Check complete. The overall result of this check is: Passed
========================================================

Checking uid/gid...
Check complete. The overall result of this check is: Passed
========================================================



Available Product Components :

Select the product components for Oracle


Database 11g that you want to install.

INFO : Compared to 10gRAC R1 installation,


there is no “Real Application Cluster” option to
select.

Then click Next ...

Privileged Operating Systems Groups :

Verify the UNIX primary group name of the user


which controls the installation of the Oracle11g
database software. (Use unix command id to find
out)
And specify the Privileged Operating System
Groups to the value found.
In our example, this must be :
• “dba” for Database Administrator (OSDBA)
group
• “oper” for Database Operator (OSOPER)
group
• “asm” for administrator (SYSASM) group

(Primary group of unix asm user) to be set for


both entries.

Then click Next ...

Create Database :

Choose “Install database Software only”, we


don’t want to create a database at this stage.

Then click Next ...



Summary :

The Summary screen will be presented.


Confirm that the RAC database software and
other selected options will be installed.
Check Cluster Nodes and Remote Nodes lists.
The OUI will install the Oracle 11g software onto
the local node, and then copy it to
the other selected nodes.

Then click Install ...

Install :

The Oracle Universal Installer will proceed the


installation on the first node, then will copy
automatically the code on the others selected
nodes.

At 50%, it may take some time to get past the 50% mark;
don’t be afraid, the installation is still processing, running
a test script on the remote nodes.

Just wait for the next screen ...

 At this stage, you could hit a message about “Failed Attached Home” !!!
If a similar screen message appears, just run the specified command on the specified node as the rdbms user.
Do execute the script as below (check and adapt the script to your message). When done, click on “OK” :

From node2 :

{node2:root}/oracle/rdbms/11.1.0 # su - rdbms
{node2:rdbms}/oracle/rdbms/11.1.0 -> /oracle/rdbms/11.1.0/oui/bin/runInstaller -attachHome
-noClusterEnabled ORACLE_HOME=/oracle/rdbms/11.1.0 ORACLE_HOME_NAME=OraDb11g_home1
CLUSTER_NODES=node1,node2 "INVENTORY_LOCATION=/oracle/oraInventory" LOCAL_NODE=node2
Starting Oracle Universal Installer…
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
‘AttachHome’ was successful.
{node2:rdbms}/oracle/rdbms/11.1.0/bin #

The “Execute Configuration Scripts” window will pop-up :

As root, execute root.sh on each node.

For our case, this script is located in
/oracle/rdbms/11.1.0

Just click OK ...

{node1:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node1:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

{node1:root}/ #

---------------------------------------------------------------

{node2:root}/ # id
uid=0(root) gid=0(system) groups=2(bin),3(sys),7(security),8(cron),10(audit),11(lp)
{node2:root}/ # /oracle/rdbms/11.1.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= rdbms
ORACLE_HOME= /oracle/rdbms/11.1.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.



Finished product-specific root actions.
{node2:root}/ #

Coming back to
this previous
screen,

Just click OK

End of Installation :

This screen will automatically appear.

Check that it is successful and write down the


URL list of the J2EE applications that have
been deployed (isqlplus, …).

Then click Exit ...
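A quick way to confirm that the new ORACLE_HOME is usable on both nodes is to ask the freshly installed binaries for their version (a sketch, to run on each node; the banner should report release 11.1.0.6.0) :

{node1:rdbms}/ # /oracle/rdbms/11.1.0/bin/sqlplus -V

SQL*Plus: Release 11.1.0.6.0 - Production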



14.2 Symbolic links creation for listener.ora, tnsnames.ora and sqlnet.ora

In order to create the symbolic links, make sure that the files “listener.ora”, “tnsnames.ora” and
“sqlnet.ora” do exist in /oracle/asm/11.1.0/network/admin/ on both nodes.

As rdbms user on node1 :

{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/listener.ora
/oracle/rdbms/11.1.0/network/admin/listener.ora
{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/tnsnames.ora
/oracle/rdbms/11.1.0/network/admin/tnsnames.ora
{node1:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/sqlnet.ora
/oracle/rdbms/11.1.0/network/admin/sqlnet.ora

{node1:rdbms}/ # ls –la /oracle/rdbms/11.1.0/network/admin/*.ora


lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 listener.ora -> /oracle/
asm/11.1.0/network/admin/listener.ora
lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 tnsnames.ora ->
/oracle/asm/11.1.0/network/admin/tnsnames.ora
lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 sqlnet.ora ->
/oracle/asm/11.1.0/network/admin/sqlnet.ora
{node1:rdbms}/ #

→ Doing this avoids dealing with multiple listener.ora, tnsnames.ora and sqlnet.ora files, as setting the TNS_ADMIN
variable will not be enough: some Oracle assistant tools do not take this variable into account.

As rdbms user on node2 :

{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/listener.ora
/oracle/rdbms/11.1.0/network/admin/listener.ora
{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/tnsnames.ora
/oracle/rdbms/11.1.0/network/admin/tnsnames.ora
{node2:rdbms}/ # ln -s /oracle/asm/11.1.0/network/admin/sqlnet.ora
/oracle/rdbms/11.1.0/network/admin/sqlnet.ora

{node2:rdbms}/ # ls –la /oracle/rdbms/11.1.0/network/admin/*.ora


lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 listener.ora -> /oracle/
asm/11.1.0/network/admin/listener.ora
lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 tnsnames.ora ->
/oracle/asm/11.1.0/network/admin/tnsnames.ora
lrwxrwxrwx 1 oracle dba 47 Apr 23 10:19 sqlnet.ora ->
/oracle/asm/11.1.0/network/admin/sqlnet.ora
{node2:rdbms}/ #



14.3 Update the RDBMS unix user .profile

 To be done on each node for rdbms (in our case) or oracle unix user.

vi the $HOME/.profile file in the rdbms user's home directory, and add the entries shown in bold blue color :

PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.

export PATH

if [ -s "$MAIL" ] # This is at Shell startup. In normal


then echo "$MAILMSG" # operation, the Shell checks
fi # periodically.

ENV=$HOME/.kshrc
export ENV

#The following line is added by License Use Management installation


export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin

export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=

if [ -t 0 ]; then
stty intr ^C
fi

Do disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile
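After reconnecting, a quick check that the new environment is in place (a sketch; output trimmed to the Oracle related variables) :

{node1:rdbms}/home/rdbms # env | grep ORACLE
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # which sqlplus
/oracle/rdbms/11.1.0/bin/sqlplus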



15 DATABASE CREATION ON ASM

In our case, we have a Diskgroup called “DATA_DG1” in which to create an Oracle
cluster database.

MANDATORY :  Change permissions to allow the rdbms user to write on directories owned by the asm user.

{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag



15.1 Through Oracle DBCA

Connect as the rdbms unix user from the first node, and setup your DISPLAY.
Execute dbca & to launch the Database Configuration Assistant.

From node1 :

{node1:rdbms}/ # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/ # export ORACLE_SID=
{node1:rdbms}/ # cd $ORACLE_HOME/bin
{node1:rdbms}/oracle/rdbms/11.1.0/bin # ./dbca

Check that all resources from Oracle Clusterware are started on their home node.

From node1 or node2 :

{node2:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node2:rdbms}/oracle/rdbms #

DBCA Welcome Screen :

Select the “Oracle Real Application Cluster


Database” option.

Then click Next ...



Operations :

Select the “Create a Database” option.

Then click Next ...

Node Selection :

Make sure to select all RAC nodes.

Then click Next ...

Database Templates :

Select “General Purpose”

Or “Custom Database” if you want to generate


the creation scripts.

Then click Next ...



Database Identification :

Specify the “Global Database Name”

The “SID Prefix” will be automatically


updated. (by default it is the Global Database
Name)

For our example : JSC1DB

Then click Next ...

Management Options :

Check “Configure the database with


Enterprise Manager” if you want to use the
Database Control (local administration).

Or Don’t check if you plan to administrate the


database using the Grid Control (global
network administration)

You can also set Alert and backup :

Then click Next ...

Database Credentials :

Specify same password for all


administrator users,

or specify individual password for each user.

Then click Next ...



Storage Options :

Choose Automatic Storage


Management (ASM)

Then click Next ...

You’ll be prompted to enter ASM sys


password :

Then Click “OK”

Create Disk Group

Now the ASM Disks Group is created

Select DiskGroup to be used !!!

DATA_DG1 in our example

Then click Ok ...

Database File Locations :

Select Use Oracle-Managed Files


AND Select DiskGroup to use for the
Database Files.

Then click Next ...



Recovery Configuration :

Select the DiskGroup to be used for the


Flash Recovery Area.

In our case :

+DATA_DG1
and size of 4096

Enable Archiving, and click on “Edit
Archive Mode Parameters” to specify
another ASM Disk Group as a different
Archive Log Destination.

You can also check the File Location


Variables …

Click “Ok” then “Next” ...



Database Content :

Select the options needed

Then click Next ...

Initialization Parameters :

Select the parameters needed

You can click on “All Initialization


Parameters” to view or modify …

You can also select the folders

• “Sizing”
• “Character Sets”
• “Connection Mode”

to validate/modify/specify settings at your


convenience.

Then click Next ...

 About the Oracle cluster database instance parameters, we'll have to look at the following parameters :



Instance   Name                                   Value

*          asm_preferred_read_failure_groups      ONLY usable if you created an ASM diskgroup in normal or
                                                  high redundancy with a minimum of 2 failure groups.
                                                  The value will be set to the name of the preferred failure group.

           If you want to set a different location than the default chosen ASM disk group for the control files, you
           should modify the value of the control_files parameter.

*          control_files                          (“+DATA_DG1/{DB_NAME}/control01.ctl”,
                                                  “+DATA_DG1/{DB_NAME}/control02.ctl”,
                                                  “+DATA_DG1/{DB_NAME}/control03.ctl”)

*          db_create_file_dest                    +DATA_DG1
                                                  (By default, all files from the database will be created on the
                                                  specified ASM diskgroup)
*          db_recovery_file_dest                  +DATA_DG1
                                                  (Default location for backups and archive logs)
*          remote_listener                        LISTENERS_JSC1DB
JSC1DB1    undo_tablespace                        UNDOTBS1
JSC1DB2    undo_tablespace                        UNDOTBS2
JSC1DB1    local_listener                         LISTENER_JSC1DB1
JSC1DB2    local_listener                         LISTENER_JSC1DB2
*          log_archive_dest_1                     'LOCATION=+DATA_DG1/'
JSC1DB1    instance_number                        1
JSC1DB2    instance_number                        2
JSC1DB1    thread                                 1
JSC1DB2    thread                                 2
*          db_name                                JSC1DB
*          db_recovery_file_dest_size             4294967296
*          diagnostic_dest                        (ORACLE_BASE)
                                                  (Default location for the database trace and log files; it
                                                  replaces the USER, BACKGROUND and CORE dump destinations)

 In $ORACLE_HOME/network/admin, do edit the tnsnames.ora file on each node, and add the following lines
(LISTENERS_JSC1DB could be automatically added by DBCA in the tnsnames.ora) :

LISTENERS_JSC1DB =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)

LISTENER_JSC1DB1 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
)

LISTENER_JSC1DB2 =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
)
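If DBCA does not set these listener parameters for you, they can be applied afterwards exactly as was done for the ASM instances. A sketch, to run as the rdbms user once the JSC1DB instances are up :

SQL> ALTER SYSTEM SET remote_listener='LISTENERS_JSC1DB' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB1' SCOPE=BOTH SID='JSC1DB1';
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB2' SCOPE=BOTH SID='JSC1DB2';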



Security Settings :

Select the options


needed

By default è

“Keep the enhanced 11g


default security
settings.”

Then click Next ...

Either (By default)

Or

Automatic Maintenance Tasks :

By default, “Enable Automatic


Maintenance Tasks”.

Then click Next ...



Database Storage :

Check the datafiles organization

Add one extra log member on each thread


now with the assistant,

or later using SQL commands as :

ALTER DATABASE
ADD LOGFILE THREAD 1 GROUP 5 SIZE
51200K

ALTER DATABASE
ADD LOGFILE THREAD 2 GROUP 6 SIZE
51200K

Then click Next ...

REMINDER :  Change permissions to allow the rdbms user to write on directories owned by the asm user.

{node2:root}/ # rsh node1 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node2 chmod -R g+w /oracle/asm/11.1.0/network
{node2:root}/ # rsh node1 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node2 chmod -R g+w /oracle/cfgtoollogs
{node2:root}/ # rsh node1 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node2 chmod -R g+w /oracle/admin
{node2:root}/ # rsh node1 chmod -R g+w /oracle/diag
{node2:root}/ # rsh node2 chmod -R g+w /oracle/diag



Creation Options :

Select the options needed

• Create Database

• Generate Database Creation Scripts

Then click Finish ...

Summary :

Check the description

Save the HTML summary file if needed

Then click Ok ...

Database Creation script generation …

When finished :

Click “Ok” ...



Database creation on progress :

Just wait while processing ...

Check /oracle/cfgtoollogs/dbca/JSC1DB
for the logs in case of failure to create the
database.

Passwords Management

Enter password management if you
need to change passwords, and unlock
some user accounts that are locked by
default (for security purposes).

Then click Exit ...



Execute crsstat (or crs_stat -t) on one node as the rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.JSC1DB1.inst OFFLINE OFFLINE
ora.JSC1DB.JSC1DB2.inst OFFLINE OFFLINE
ora.JSC1DB.db OFFLINE OFFLINE
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

Starting the cluster database :
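The database and its instances are started through Oracle Clusterware with srvctl (a minimal sketch; the status lines shown are indicative) :

{node1:rdbms}/oracle/rdbms # srvctl start database -d JSC1DB
{node1:rdbms}/oracle/rdbms # srvctl status database -d JSC1DB
Instance JSC1DB1 is running on node node1
Instance JSC1DB2 is running on node node2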

Execute crsstat (or crs_stat -t) on one node as the rdbms user :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE ONLINE on node2
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat JSC1DB


HA Resource Target State
----------- ------ -----
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE ONLINE on node2
ora.JSC1DB.db ONLINE ONLINE on node1
{node1:rdbms}/oracle/rdbms #



15.2 Manual Database Creation

Here are the steps to be followed to create a Real Application Clusters database :

Let’s imagine we want to create an 11g cluster database called JSC2DB on our 2-node RAC cluster, with instances
JSC2DB1 on node1 and JSC2DB2 on node2.

1. Make the necessary sub directories on each node :

On node1 :

{node1:root}/ # su - rdbms
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node1:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile

{node1:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db


{node1:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db/JSC2DB1

On node2 :

{node2:root}/ # su - rdbms


{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/scripts
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/adump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/hdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/dpdump
{node2:rdbms}/home/rdbms # mkdir $ORACLE_BASE/admin/JSC2DB/pfile

{node2:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db


{node2:rdbms}/home/rdbms # mkdir /oracle/diag/rdbms/jsc2db/JSC2DB2

2. Create any ASM diskgroup(s) if needed :

From node1 :

{node1:root}/ # su - asm
{node1:asm}/home/asm #
{node1:asm}/home/asm # export ORACLE_SID=+ASM1
{node1:asm}/home/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/oracle/asm # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 10:28:54 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected.
SQL> set linesize 1000
SQL> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
DATA_DG1 MOUNTED

SQL>
SQL> select HEADER_STATUS,PATH from v$asm_disk order by PATH;



HEADER_STATU PATH
------------ ------------------------------------------------------------
MEMBER /dev/ASM_disk1
MEMBER /dev/ASM_disk2
MEMBER /dev/ASM_disk3
MEMBER /dev/ASM_disk4
MEMBER /dev/ASM_disk5
MEMBER /dev/ASM_disk6
CANDIDATE /dev/ASM_disk7
CANDIDATE /dev/ASM_disk8
CANDIDATE /dev/ASM_disk9
CANDIDATE /dev/ASM_disk10
CANDIDATE /dev/ASM_disk11
CANDIDATE /dev/ASM_disk12

12 rows selected.

SQL>

You can use the disks that are in disk header status “CANDIDATE” or “FORMER”

SQL> CREATE DISKGROUP DATA_DG2 NORMAL REDUNDANCY


FAILGROUP GROUP1 DISK
'/dev/ASM_disk7',
'/dev/ASM_disk8',
FAILGROUP GROUP2 DISK
'/dev/ASM_disk9',
'/dev/ASM_disk10';

Diskgroup created.

SQL> select HEADER_STATUS,PATH from v$asm_disk order by PATH;

HEADER_STATU PATH
------------ ------------------------------------------------------------
MEMBER /dev/ASM_disk1
MEMBER /dev/ASM_disk2
MEMBER /dev/ASM_disk3
MEMBER /dev/ASM_disk4
MEMBER /dev/ASM_disk5
MEMBER /dev/ASM_disk6
MEMBER /dev/ASM_disk7
MEMBER /dev/ASM_disk8
MEMBER /dev/ASM_disk9
MEMBER /dev/ASM_disk10
MEMBER /dev/ASM_disk11
CANDIDATE /dev/ASM_disk12

12 rows selected.

SQL>
SQL> select INST_ID, NAME, STATE, OFFLINE_DISKS from gv$asm_diskgroup;

INST_ID NAME STATE OFFLINE_DISKS


---------- ------------------------------ ----------- -------------
2 DATA_DG1 MOUNTED 0



2 DATA_DG2 DISMOUNTED 0
1 DATA_DG1 MOUNTED 0
1 DATA_DG2 MOUNTED 0

6 rows selected.

SQL>

We need to mount DATA_DG2 on the second node !!!

On node2 :

{node2:root}/oracle/diag # su - asm
{node2:asm}/oracle/asm # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Mar 28 13:41:23 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysasm


Connected.
SQL>

SQL> alter diskgroup DATA_DG2 mount;

Diskgroup altered.

SQL> select INST_ID, NAME, STATE, OFFLINE_DISKS from gv$asm_diskgroup;

INST_ID NAME STATE OFFLINE_DISKS


---------- ------------------------------ ----------- -------------
1 DATA_DG1 MOUNTED 0
1 DATA_DG2 MOUNTED 0
2 DATA_DG1 MOUNTED 0
2 DATA_DG2 MOUNTED 0

6 rows selected.

SQL>

DATA_DG2 is now mounted on all nodes !!!

Then with asmcmd command, we will see :

{node1:asm}/oracle/asm # asmcmd -p
ASMCMD [+] > ls
DATA_DG1/
DATA_DG2/
ASMCMD [+] >

3. Create the necessary sub-directories for the JSC2DB database within the chosen ASM diskgroup :

From node1 :

{node1:root}/ # su - asm
{node1:asm}/home/asm #
{node1:asm}/home/asm # export ORACLE_SID=+ASM1
{node1:asm}/home/asm # export ORACLE_HOME=/oracle/asm/11.1.0
{node1:asm}/home/asm # asmcmd -p
ASMCMD [+] > ls
DATA_DG1/
ASMCMD [+] > mkdir …
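For illustration only (the directory name below is hypothetical, and with Oracle-Managed Files the database sub-directories are in fact created automatically at database creation time), a dedicated directory for the new database could be created like this :

ASMCMD [+] > mkdir +DATA_DG2/JSC2DB
ASMCMD [+] > ls +DATA_DG2
JSC2DB/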



4. Make an init<SID>.ora in your $ORACLE_HOME/dbs directory.

Cluster-wide parameters for database "JSC2DB" :

##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
##############################################################################

###########################################
# Archive
###########################################
log_archive_dest_1='LOCATION=+DATA_DG2/'
log_archive_format=%t_%s_%r.dbf

###########################################
# Cache and I/O
###########################################
db_block_size=8192

###########################################
# Cluster Database
###########################################
cluster_database_instances=2
#remote_listener=LISTENERS_JSC2DB

###########################################
# Cursors and Library Cache
###########################################
open_cursors=300

###########################################
# Database Identification
###########################################
db_domain=""
db_name=JSC2DB

###########################################
# File Configuration
###########################################
db_create_file_dest=+DATA_DG2
db_recovery_file_dest=+DATA_DG2
db_recovery_file_dest_size=4294967296

###########################################
# Miscellaneous
###########################################
asm_preferred_read_failure_groups=""
compatible=11.1.0.0.0
diagnostic_dest=/oracle
memory_target=262144000

###########################################
# Processes and Sessions
###########################################
processes=150

###########################################
# Security and Auditing
###########################################
audit_file_dest=/oracle/admin/JSC2DB/adump
audit_trail=db
remote_login_passwordfile=exclusive



###########################################
# Shared Server
###########################################
dispatchers="(PROTOCOL=TCP) (SERVICE=JSC2DBXDB)"

###########################################
# Cluster Database
###########################################

db_unique_name=JSC2DB
control_files=/oracle/admin/JSC2DB/scripts/tempControl.ctl
JSC2DB1.instance_number=1
JSC2DB2.instance_number=2
JSC2DB2.thread=2
JSC2DB1.thread=1
JSC2DB1.undo_tablespace=UNDOTBS1
JSC2DB2.undo_tablespace=UNDOTBS2

5. Run the following sqlplus command to connect to the database:

From node1 : {node1:root}/ # su - rdbms


{node1:rdbms}/home/rdbms #
{node1:rdbms}/home/rdbms # export ORACLE_SID=JSC2DB1
{node1:rdbms}/home/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/home/rdbms # sqlplus /nolog

connect / as sysdba

6. Startup up the database in NOMOUNT mode:

From node1 : SQL> startup nomount pfile="/oracle/admin/JSC2DB/scripts/init.ora";

7. Create the Database (ASM diskgroup(s) must be created and mounted on all nodes) :

From node1 :

SQL> CREATE DATABASE "JSC2DB"


MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K
MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K
MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P15
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 51200K, GROUP 2 SIZE 51200K;



8. Create a Temporary Tablespace:

From node1 : SQL> CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE


'+DATA_DG2' SIZE 40M REUSE;

9. Create a 2nd Undo Tablespace:

From node1 : SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE


'+DATA_DG2' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;

10. Run the necessary scripts to build views, synonyms, etc.:

The primary scripts that you must run are:


From node1 : @/oracle/rdbms/11.1.0/rdbms/admin/catalog.sql
-- creates the views of data dictionary tables and the dynamic performance views
@/oracle/rdbms/11.1.0/rdbms/admin/catproc.sql
-- establishes the usage of PL/SQL functionality and creates many of the PL/SQL Oracle
supplied packages
@/oracle/rdbms/11.1.0/rdbms/admin/catclust.sql
-- create the cluster views

Run any other necessary catalog scripts that you require …
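
For example, you may also want to run the SQL*Plus product user profile script as the SYSTEM user (an optional but common step; the path follows the $ORACLE_HOME used in this cookbook) :

@/oracle/rdbms/11.1.0/sqlplus/admin/pupbld.sql
-- creates the SQL*Plus PRODUCT_USER_PROFILE tables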

11. Edit init<SID>.ora and set appropriate values for the 2nd instance on the 2nd Node:

instance_name=JSC2DB2
instance_number=2
local_listener=LISTENER_JSC2DB2
thread=2
undo_tablespace=UNDOTBS2

12. From the first instance, run the following command:

From node1 :

SQL> connect SYS/oracle as SYSDBA


SQL> shutdown immediate;
SQL> startup mount pfile="/oracle/admin/JSC2DB/scripts/init.ora";
SQL> alter database archivelog;
SQL> alter database open;
SQL> select group# from v$log where group# =3;
SQL> select group# from v$log where group# =4;
SQL> select group# from v$log where group# =6;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 51200K,
GROUP 4 SIZE 51200K,
GROUP 6 SIZE 51200K;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;

SQL> create spfile='+DATA_DG2/JSC2DB/spfileJSC2DB.ora' FROM


pfile='/oracle/admin/JSC2DB/scripts/init.ora';
SQL> shutdown immediate;

From node1 :



{node1:rdbms}/home/rdbms # echo SPFILE='+DATA_DG2/JSC2DB/spfileJSC2DB.ora' >
/oracle/rdbms/11.1.0/dbs/initJSC2DB1.ora

From node2 :

{node2:rdbms}/home/rdbms # echo SPFILE='+DATA_DG2/JSC2DB/spfileJSC2DB.ora' >


/oracle/rdbms/11.1.0/dbs/initJSC2DB2.ora

13. Create the oracle instance password files

From node1 :

{node1:rdbms}/oracle/rdbms/11.1.0/bin # orapwd
file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB1 password=oracle force=y

From node2 :

{node2:rdbms}/oracle/rdbms/11.1.0/bin # orapwd
file=/oracle/rdbms/11.1.0/dbs/orapwJSC2DB2 password=oracle force=y

14. Start the second Instance. (Assuming that your cluster configuration is up and running)
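
For example, a minimal sketch from node2 (assuming the initJSC2DB2.ora pointer and the password file created above) :

{node2:rdbms}/home/rdbms # export ORACLE_SID=JSC2DB2
{node2:rdbms}/home/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node2:rdbms}/home/rdbms # sqlplus /nolog

SQL> connect / as sysdba
SQL> startup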

15. Configure listener.ora / sqlnet.ora / tnsnames.ora,


Use netca and/or netmgr to check the configuration of the listener and to configure Oracle Net services (by
default the Net service name may be equal to the global database name; see the instance parameter
service_names).
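
As an illustrative sketch only (the alias names are assumptions to be adapted to your naming; see section 15.4 for the JSC1DB equivalents), the tnsnames.ora on each node could contain :

LISTENERS_JSC2DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

JSC2DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC2DB)
    )
  )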

16. Configure Oracle Enterprise Manager (dbconsole or grid control agent)


Then start the Grid agent :
$emctl start agent
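
If you use the local Database Control instead of the Grid Control agent, a quick sketch (assuming Database Control has already been configured for this database) :

$ emctl status dbconsole
$ emctl start dbconsole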

17. Check /etc/oratab


The file should contain a reference to the database name, not to the instance name.
The last field should always be "N" in a RAC environment, to prevent two instances of the same name from
being started.

{node1:rdbms}/oracle/rdbms # cat /etc/oratab


……
+ASM1:/oracle/asm/11.1.0:N
JSC1DB:/oracle/rdbms/11.1.0:N
JSC2DB:/oracle/rdbms/11.1.0:N
{node1:rdbms}/oracle/rdbms #



18. Register the database resources with cluster srvctl command :

From node1 :

{node1:rdbms}/home/rdbms # srvctl add database -d JSC2DB -o /oracle/rdbms/11.1.0

{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB1 -n node1
{node1:rdbms}/home/rdbms # srvctl add instance -d JSC2DB -i JSC2DB2 -n node2

{node1:rdbms}/home/rdbms #
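
Optionally, you can also declare the dependency between each database instance and its node ASM instance, so that Oracle Clusterware starts ASM first (a hedged sketch using the srvctl syntax documented for 10g/11gR1; the ASM instance names are the ones used in this cookbook) :

{node1:rdbms}/home/rdbms # srvctl modify instance -d JSC2DB -i JSC2DB1 -s +ASM1
{node1:rdbms}/home/rdbms # srvctl modify instance -d JSC2DB -i JSC2DB2 -s +ASM2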

{node1:rdbms}/oracle/rdbms # crsstat JSC2DB


HA Resource Target State
----------- ------ -----
ora.JSC2DB.JSC2DB1.inst ONLINE ONLINE on node1
ora.JSC2DB.JSC2DB2.inst ONLINE ONLINE on node2
ora.JSC2DB.db ONLINE ONLINE on node1
{node1:rdbms}/oracle/rdbms #



15.3 Post Operations

15.3.1 Update the RDBMS unix user .profile

 To be done on each node for rdbms (in our case) or oracle unix user.

vi the $HOME/.profile file in the rdbms user's home directory and add the entries shown below.

PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.

export PATH

if [ -s "$MAIL" ] # This is at Shell startup. In normal


then echo "$MAILMSG" # operation, the Shell checks
fi # periodically.

ENV=$HOME/.kshrc
export ENV

#The following line is added by License Use Management installation


export PATH=$PATH:/usr/opt/ifor/ls/os/aix/bin
export PATH=$PATH:/usr/java14/bin

export MANPATH=$MANPATH:/usr/local/man
export ORACLE_BASE=/oracle
export AIXTHREAD_SCOPE=S
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export CRS_HOME=/crs/11.1.0
export ORA_CRS_HOME=$CRS_HOME
export ASM_HOME=$ORACLE_BASE/asm/11.1.0
export ORA_ASM_HOME=$ASM_HOME
export ORACLE_HOME=$ORACLE_BASE/rdbms/11.1.0
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib:$ORACLE_HOME/lib32:$CRS_HOME/lib32
export LIBPATH=$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
export TNS_ADMIN=$ASM_HOME/network/admin
export ORACLE_SID=JSC1DB1

if [ -t 0 ]; then
stty intr ^C
fi

Disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile


THEN on node2 …

……..
export ORACLE_SID=JSC1DB2
……..

Disconnect from the rdbms user, and reconnect to load the modified $HOME/.profile

15.3.2 Administer and Check



To start / stop the database, stopping all instances :

{node2:rdbms}/oracle/rdbms # srvctl stop database -d JSC1DB

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db


ora.JSC1DB.db application OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application OFFLINE OFFLINE
ora....B2.inst application OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # srvctl start database -d JSC1DB

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db


ora.JSC1DB.db application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #

To start / stop selected database instance :

{node2:rdbms}/oracle/rdbms # srvctl stop instance -d JSC1DB -i JSC1DB1

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db


ora.JSC1DB.db application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst
ora....B1.inst application OFFLINE OFFLINE
ora....B2.inst application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # srvctl stop instance -d JSC1DB -i JSC1DB2

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst


ora....B1.inst application OFFLINE OFFLINE
ora....B2.inst application OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db application OFFLINE OFFLINE

{node2:rdbms}/oracle/rdbms # srvctl start instance -d JSC1DB -i JSC1DB1

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst


ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db application OFFLINE OFFLINE

{node2:rdbms}/oracle/rdbms # srvctl start instance -d JSC1DB -i JSC1DB2

{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .inst


ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #
{node1:rdbms}/oracle/rdbms # crs_stat -t |grep .db
ora.JSC1DB.db application ONLINE ONLINE node2



Getting profile from the ora.JSC1DB.JSC1DB1.inst resource within the clusterware :

{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.JSC1DB1.inst
NAME=ora.JSC1DB.JSC1DB1.inst
TYPE=application
ACTION_SCRIPT=/oracle/rdbms/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=300
DESCRIPTION=CRS application for Instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=node1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.node1.ASM1.asm
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=900
STOP_TIMEOUT=300
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

{node1:rdbms}/oracle/rdbms #

Getting profile from the ora.JSC1DB.db resource within the clusterware :

{node1:rdbms}/oracle/rdbms # crs_stat -p ora.JSC1DB.db
NAME=ora.JSC1DB.db
TYPE=application
ACTION_SCRIPT=/crs/11.1.0/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for the Database
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
HOSTING_MEMBERS=
OPTIONAL_RESOURCES=
PLACEMENT=balanced
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=600
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

{node1:rdbms}/oracle/rdbms #



Check the JSC1DB spfile or init file content. For example, from node1 :
{node1:rdbms}/oracle/rdbms # cat 11.1.0/dbs/initJSC1DB1.ora
SPFILE='+DATA_DG1/JSC1DB/spfileJSC1DB.ora'
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Wed Feb 20 19:33:47 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> create pfile='/oracle/rdbms/JSC1DB_pfile.ora' from
spfile='+DATA_DG1/JSC1DB/spfileJSC1DB.ora';

File created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit
Production
With the Partitioning, Real Application Clusters and Real Application Testing options
{node1:rdbms}/oracle/rdbms # ls
11.1.0 JSC1DB JSC1DB_pfile.ora
{node1:rdbms}/oracle/rdbms # cat JSC1DB_pfile.ora
JSC1DB1.__db_cache_size=8388608
JSC1DB2.__db_cache_size=16777216
JSC1DB1.__java_pool_size=12582912
JSC1DB2.__java_pool_size=4194304
JSC1DB1.__large_pool_size=4194304
JSC1DB2.__large_pool_size=4194304
JSC1DB1.__oracle_base='/oracle'#ORACLE_BASE set from environment
JSC1DB2.__oracle_base='/oracle'#ORACLE_BASE set from environment
JSC1DB1.__pga_aggregate_target=83886080
JSC1DB2.__pga_aggregate_target=83886080
JSC1DB1.__sga_target=180355072
JSC1DB2.__sga_target=180355072
JSC1DB1.__shared_io_pool_size=0
JSC1DB2.__shared_io_pool_size=0
JSC1DB1.__shared_pool_size=146800640
JSC1DB2.__shared_pool_size=146800640
JSC1DB1.__streams_pool_size=0
JSC1DB2.__streams_pool_size=0
*.audit_file_dest='/oracle/admin/JSC1DB/adump'
*.audit_trail='db'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='11.1.0.0.0'
*.control_files='+DATA_DG1/jsc1db/controlfile/current.261.647193795','+DATA_DG1/jsc1db/contro
lfile/current.260.6471
93795'
*.db_block_size=8192
*.db_create_file_dest='+DATA_DG1'
*.db_domain=''
*.db_name='JSC1DB'
*.db_recovery_file_dest='+DATA_DG1'
*.db_recovery_file_dest_size=4294967296
*.diagnostic_dest='/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JSC1DBXDB)'
JSC1DB1.instance_number=1
JSC1DB2.instance_number=2
*.log_archive_dest_1='LOCATION=+DATA_DG1/'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=262144000
*.open_cursors=300
*.processes=150
*.remote_listener='LISTENERS_JSC1DB'
*.remote_login_passwordfile='exclusive'
JSC1DB2.thread=2



JSC1DB1.thread=1
JSC1DB1.undo_tablespace='UNDOTBS1'
JSC1DB2.undo_tablespace='UNDOTBS2'
{node1:rdbms}/oracle/rdbms #



Checking listener status on each node :

{node1:rdbms}/oracle/rdbms # lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 20-FEB-2008 21:15:01

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_NODE1
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 -
Production
Start Date 19-FEB-2008 13:59:26
Uptime 1 days 7 hr. 15 min. 36 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #

Check also :
{node1:rdbms}/oracle/rdbms # lsnrctl service



15.4 Setting Database Local and remote listeners

 About the Oracle cluster database instance parameters, we'll have to look at these parameters :

Instance    Name               Value
*           remote_listener    LISTENERS_JSC1DB
JSC1DB1     local_listener     LISTENER_JSC1DB1
JSC1DB2     local_listener     LISTENER_JSC1DB2

• remote_listener on node1 and node2 MUST BE THE SAME, and the ENTRIES MUST BE PRESENT in the
tnsnames.ora of each node.
• local_listener on node1 and node2 are different, and the ENTRIES MUST BE PRESENT in the
tnsnames.ora of each node.
• The local_listener values on node1 and node2 are not the listener names defined in the listener.ora files of
each node; they are TNS aliases resolved through tnsnames.ora.

At the database level, you should see :

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter listener

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
local_listener string
remote_listener string LISTENERS_JSC1DB
SQL>

From node2 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB2
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter listener

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
local_listener string
remote_listener string LISTENERS_JSC1DB
SQL>

ONLY the "remote_listener" parameter is set; the "local_listener" parameter for each instance is not mandatory, as we are using
port 1521 (automatic registration) for the default node listeners. But if you are not using port 1521, it is mandatory to set them
in order to have proper load balancing and failover of user connections.



$ORACLE_HOME/network/admin/tnsnames.ora
Which will be in /oracle/asm/11.1.0/network/admin

MANDATORY !!!

On all nodes, for the JSC1DB database, you should have or add the following lines :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

If remote_listener and local_listener are not set, empty, or incorrectly set, you should issue the following
commands :

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 15:19:04 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB1' SCOPE=BOTH SID='JSC1DB1';
System Altered.
SQL> ALTER SYSTEM SET local_listener='LISTENER_JSC1DB2' SCOPE=BOTH SID='JSC1DB2';
System Altered.
SQL>

If you need to set 'remote_listener', the command is as follows :

SQL> ALTER SYSTEM SET remote_listener='LISTENERS_JSC1DB' SCOPE=BOTH SID='*';


System Altered.

From node1 :

SQL> show parameter listener


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string LISTENER_JSC1DB1
remote_listener string LISTENERS_JSC1DB
SQL>

From node2 :

SQL> show parameter listener


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string LISTENER_JSC1DB2
remote_listener string LISTENERS_JSC1DB
SQL>



15.5 Creating Oracle Services
With 11g, it is no longer possible to create services through the DBCA tool; this option has been removed.
The 2 options are :

• thru srvctl tool from Oracle Clusterware


• thru Oracle Grid Control, or DB Console

15.5.1 Creation thru srvctl command

 NEVER create a service with the same name as the database !!!

Subject: Cannot manage service with srvctl, when it's created with same name as Database: PRKO-2120
Doc ID: Note:362645.1

Details of the command to add Oracle services thru srvctl :

srvctl add service -d <name> -s <service_name> -r <preferred_list> [-a <available_list>] [-P <TAF_policy>]
[-u]

-d Database name
-s Service name
-a for services, list of available instances, this list cannot include preferred instances
-P for services, TAF preconnect policy – NONE, BASIC, PRECONNECT
-r for services, list of preferred instances, this list cannot include available instances.
-u updates the preferred or available list for the service to support the specified instance.
Only one instance may be specified with the
-u switch. Instances that already support the service should not be included.

Examples for a 4-node RAC cluster.

With a cluster database named ORA.
4 instances named ORA1, ORA2, ORA3 and ORA4.

Add a STD_BATCH service to an existing database with preferred instances (-r) and available instances (-a).
Use basic failover to the available instances.
srvctl add service -d ORA -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4

Add a STD_BATCH service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORA -s STD_BATCH -r ORA1,ORA2 -a ORA3,ORA4 -P PRECONNECT



In our case, we want to :

Add an OLTP service to an existing JSC1DB database


with JSC1DB1 and JSC1DB2 as preferred instances (-r)
Using basic failover to the available instances.

As rdbms user, from one node :

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s OLTP -r JSC1DB1,JSC1DB2

Add a BATCH service to an existing JSC1DB database


with JSC1DB2 as preferred instances (-r)
and JSC1DB1 as available instances (-a).
Using basic failover to the available instances.

As rdbms user, from one node :

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s BATCH -r JSC1DB2 -a JSC1DB1

Add a DISCO service to an existing JSC1DB database


with JSC1DB2 as available instances (-a)
and JSC1DB1 as preferred instances (-r).
Using basic failover to the available instances.

As rdbms user, from one node :

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s DISCO -r JSC1DB1 -a JSC1DB2

Query the cluster before creating any service :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
ora.JSC1DB.db application ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #



Create and manage OLTP service :

Let's see it in detail. As rdbms user, from one node …

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service OLTP

Add an OLTP service to an existing JSC1DB database


with JSC1DB1 and JSC1DB2 as preferred instances (-r)
Using basic failover to the available instances.

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s OLTP -r JSC1DB1,JSC1DB2

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s OLTP


OLTP PREF: JSC1DB1 JSC1DB2 AVAIL:
{node1:rdbms}/oracle/rdbms #

To query after OLTP service creation :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
ora....DB1.srv application OFFLINE OFFLINE
ora....DB2.srv application OFFLINE OFFLINE
ora....OLTP.cs application OFFLINE OFFLINE
ora.JSC1DB.db application ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #

To query and get the full name of the OLTP service resource :

{node1:rdbms}/oracle/rdbms # crs_stat |grep OLTP
NAME=ora.JSC1DB.OLTP.JSC1DB1.srv
NAME=ora.JSC1DB.OLTP.JSC1DB2.srv
NAME=ora.JSC1DB.OLTP.cs
{node1:rdbms}/oracle/rdbms #

Entries added by the creation of Oracle services at Oracle Clusterware level :

{node1:rdbms}/oracle/rdbms # crsstat OLTP
HA Resource Target State
----------- ------ -----
ora.JSC1DB.OLTP.JSC1DB1.srv OFFLINE OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv OFFLINE OFFLINE
ora.JSC1DB.OLTP.cs OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #



To start the service OLTP

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s OLTP


Service OLTP is running on instance(s) JSC1DB1, JSC1DB2
{node1:rdbms}/oracle/rdbms #

To query after OLTP service startup :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
ora....DB1.srv application ONLINE ONLINE node1
ora....DB2.srv application ONLINE ONLINE node2
ora....OLTP.cs application ONLINE ONLINE node1
ora.JSC1DB.db application ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #

To stop OLTP service :

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP

To query after stopping the OLTP service :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
ora....DB1.srv application OFFLINE OFFLINE
ora....DB2.srv application OFFLINE OFFLINE
ora....OLTP.cs application OFFLINE OFFLINE
ora.JSC1DB.db application ONLINE ONLINE node1
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #



THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes

The srvctl tool does not add it when creating the service.

Looking at the description, we can see :
• All VIPs declared
• Load balancing configured
• Failover configured for "SESSION" and "SELECT"

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, and then remote copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.
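
For example, a minimal copy sketch (the TNS_ADMIN path below is the one used in this cookbook; adapt it if yours differs) :

{node1:asm}/oracle/asm # scp /oracle/asm/11.1.0/network/admin/tnsnames.ora \
node2:/oracle/asm/11.1.0/network/admin/tnsnames.ora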

Test connection using the OLTP connection string from node 2 …

{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 08:48:38 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB1

SQL>

Test connection using the OLTP connection string from node 1 …

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'



SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2

SQL>



Let's stop the OLTP service to see if we can still connect thru SQL*Plus …

From node2 :

{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s OLTP


{node2:rdbms}/oracle/rdbms # crsstat |grep OLTP
ora.JSC1DB.OLTP.JSC1DB1.srv OFFLINE OFFLINE
ora.JSC1DB.OLTP.JSC1DB2.srv OFFLINE OFFLINE
ora.JSC1DB.OLTP.cs OFFLINE OFFLINE
{node2:rdbms}/oracle/rdbms #
{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:10:18 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

{node2:rdbms}/oracle/rdbms #

From node1 :

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@oltp as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:09:35 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

{node1:rdbms}/oracle/rdbms #

Conclusion : if the OLTP service is stopped, no new connections will be allowed on any database
instance, even though the node listeners are still up and running !!!

Restarting OLTP service will enable the connections …

{node2:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s OLTP



Create and manage BATCH service :

Let's see it in detail. As rdbms user, from one node …

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service BATCH

Add a BATCH service to an existing JSC1DB database


with JSC1DB2 as preferred instances (-r)
and JSC1DB1 as available instances (-a).
Using basic failover to the available instances.

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s BATCH -r JSC1DB2 -a JSC1DB1

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH


BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node1:rdbms}/oracle/rdbms #

To query after BATCH service creation :

{node1:rdbms}/oracle/rdbms # crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....DB2.srv application OFFLINE OFFLINE
ora....ATCH.cs application OFFLINE OFFLINE
ora....B1.inst application ONLINE ONLINE node1
ora....B2.inst application ONLINE ONLINE node2
ora....DB1.srv application ONLINE ONLINE node1
ora....DB2.srv application ONLINE ONLINE node2
ora....OLTP.cs application ONLINE ONLINE node1
ora.JSC1DB.db application ONLINE ONLINE node2
ora....SM1.asm application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....SM2.asm application ONLINE ONLINE node2
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
{node1:rdbms}/oracle/rdbms #

To query and get the full name of the BATCH service resource :

{node1:rdbms}/oracle/rdbms # crs_stat |grep BATCH
NAME=ora.JSC1DB.BATCH.JSC1DB2.srv
NAME=ora.JSC1DB.BATCH.cs
{node1:rdbms}/oracle/rdbms #

Entries added by the creation of Oracle services at Oracle Clusterware level :

{node1:rdbms}/oracle/rdbms # crsstat BATCH
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv OFFLINE OFFLINE
ora.JSC1DB.BATCH.cs OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #



To start the service BATCH

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s BATCH

To query after BATCH service startup

{node1:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH


Service BATCH is running on instance(s) JSC1DB2
{node1:rdbms}/oracle/rdbms #

To stop BATCH service :

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s BATCH

To query after stopping BATCH service

{node1:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv OFFLINE OFFLINE
ora.JSC1DB.BATCH.cs OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #

THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes

The srvctl tool does not add it when creating the service.

Looking at the description, we can see :
• All VIPs declared
• Load balancing configured
• Failover configured for "SESSION" and "SELECT"

BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BATCH)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, and then remote copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.



BATCH Service Testing …

{node1:rdbms}/oracle/rdbms # tnsping BATCH

TNS Ping Utility for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-
FEB-2008 21:15:14

Copyright (c) 1997, 2007, Oracle. All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias


Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1-
vip)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = BATCH)
(FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))
OK (10 msec)
{node1:rdbms}/oracle/rdbms #

Test connection using the BATCH connection string from node 2 …

BATCH service MUST BE started !!!

{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 08:48:38 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2 => We're connected on instance JSC1DB2.

SQL>

Test connection using the BATCH connection string from node 1 …

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'



SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2 => We're connected on instance JSC1DB2.

SQL>

As you can see, connections will only go to instance JSC1DB2 (node2), as this is the preferred instance
for the BATCH service; no connections will go to instance JSC1DB1 (node1) unless node2 fails, in which case
BATCH will use the available instance as set in the configuration.
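
To observe this behaviour from the database side, a hedged sketch using the standard gv$session failover columns (the username filter is just an example) :

SQL> select inst_id, username, failover_type, failover_method, failed_over
  2  from gv$session
  3  where username = 'SYS';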



Let's stop the BATCH service to see if we can still connect thru SQL*Plus …

From node2 :

{node2:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s BATCH

{node2:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv OFFLINE OFFLINE
ora.JSC1DB.BATCH.cs OFFLINE OFFLINE
{node2:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:10:18 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

{node2:rdbms}/oracle/rdbms #

From node1 :

{node1:rdbms}/oracle/rdbms # sqlplus 'sys/password@batch as sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 09:09:35 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

{node1:rdbms}/oracle/rdbms #

Conclusion : if the BATCH service is stopped, no new connections will be allowed on any database
instance, even though the node listeners are still up and running !!!

Re-starting BATCH service will enable the connections …

{node2:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s BATCH



Let's crash node2, which hosts the BATCH service's preferred instance JSC1DB2, to see what will
happen to the existing connection …

BATCH service MUST BE started !!!

From node2 :

{node2:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node2
{node2:rdbms}/oracle/rdbms #

Let's connect to the service from node1 …

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # sqlplus 'sys/password@batch as


sysdba'

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Feb 22 11:04:47 2008


Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
JSC1DB2 => We're connected on instance JSC1DB2.

SQL>

From node2 :

{node2:root}/ # reboot
Rebooting . . .

From node1 (same unix and SQLPlus session as before) :

SQL> /

INSTANCE_NAME
----------------
JSC1DB1 => We're still connected, but on instance JSC1DB1

SQL>



From node1 (new unix session) :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node1
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE OFFLINE
ora.JSC1DB.OLTP.cs ONLINE ONLINE on node1
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE OFFLINE
ora.node2.LISTENER_NODE2.lsnr ONLINE OFFLINE
ora.node2.gsd ONLINE OFFLINE
ora.node2.ons ONLINE OFFLINE
ora.node2.vip ONLINE ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat vip


HA Resource Target State
----------- ------ -----
ora.node1.vip ONLINE ONLINE on node1
ora.node2.vip ONLINE ONLINE on node1

{node1:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node1
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH


BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH


Service BATCH is running on instance(s) JSC1DB1
{node1:rdbms}/oracle/rdbms #



When node2 is back ONLINE, Oracle Clusterware will start; the node2 VIP that failed over to node1 will go back
to its home node (node2), and the ASM instance, the database instance and the other resources linked to
node2 will restart.

{node2:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node1
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.OLTP.cs ONLINE ONLINE on node1
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node2:rdbms}/oracle/rdbms #

BUT the service status will not change unless we decide to switch back to JSC1DB2, the preferred instance for the
BATCH service.

To relocate the service to its preferred instance JSC1DB2 …

{node2:rdbms}/oracle/rdbms # srvctl relocate service -d JSC1DB -s BATCH -i JSC1DB1 -t


JSC1DB2 -f

{node2:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s BATCH


Service BATCH is running on instance(s) JSC1DB2

{node2:rdbms}/oracle/rdbms # crsstat BATCH


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1

{node2:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s BATCH


BATCH PREF: JSC1DB2 AVAIL: JSC1DB1
{node2:rdbms}/oracle/rdbms #

From node1 (same unix and SQLPlus session as before) :

SQL> /

INSTANCE_NAME
----------------
JSC1DB2



SQL>

=> We switched back to the preferred instance



Create and manage DISCO service :

Let's see it in detail. As rdbms user, from one node …

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

To create the service DISCO

Add a DISCO service to an existing JSC1DB database


with JSC1DB1 as preferred instances (-r)
and JSC1DB2 as available instances (-a).
Using basic failover to the available instances.

{node1:rdbms}/oracle/rdbms # srvctl add service -d JSC1DB -s DISCO -r JSC1DB1 -a JSC1DB2

To query after DISCO service creation :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv OFFLINE OFFLINE
ora.JSC1DB.DISCO.cs OFFLINE OFFLINE
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.OLTP.cs ONLINE ONLINE on node1
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat DISCO


HA Resource Target State
----------- ------ -----
ora.JSC1DB.DISCO.JSC1DB1.srv OFFLINE OFFLINE
ora.JSC1DB.DISCO.cs OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl config service -d JSC1DB -s DISCO


DISCO PREF: JSC1DB1 AVAIL: JSC1DB2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s DISCO


Service DISCO is not running.
{node1:rdbms}/oracle/rdbms #



To start the service DISCO

{node1:rdbms}/oracle/rdbms # srvctl start service -d JSC1DB -s DISCO

To query after DISCO service startup

{node1:rdbms}/oracle/rdbms # crsstat DISCO


HA Resource Target State
----------- ------ -----
ora.JSC1DB.DISCO.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.DISCO.cs ONLINE ONLINE on node1
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s DISCO


Service DISCO is running on instance(s) JSC1DB1
{node1:rdbms}/oracle/rdbms #

To stop DISCO service :

{node1:rdbms}/oracle/rdbms # srvctl stop service -d JSC1DB -s DISCO

To query after stopping DISCO service

{node1:rdbms}/oracle/rdbms # crsstat DISCO


HA Resource Target State
----------- ------ -----
ora.JSC1DB.DISCO.JSC1DB1.srv OFFLINE OFFLINE
ora.JSC1DB.DISCO.cs OFFLINE OFFLINE
{node1:rdbms}/oracle/rdbms #

THEN YOU MUST add the following lines in the tnsnames.ora file on all nodes

The srvctl tool does not add it when creating the service.

Looking at the description, we can see :
• All VIPs declared
• Load balancing configured
• Failover configured for "SESSION" and "SELECT"

DISCO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DISCO)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

A good method is to modify the tnsnames.ora on one node, and then remote copy the
modified tnsnames.ora file to all other nodes, making sure that all nodes have the
same content.



We now have 3 services for the JSC1DB cluster database :

• OLTP running on preferred database instance JSC1DB1 on node1 and JSC1DB2 on node2.
• BATCH running on preferred database instance JSC1DB2 on node2.
• DISCO running on preferred database instance JSC1DB1 on node1.

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1
ora.JSC1DB.DISCO.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.DISCO.cs ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE ONLINE on node2
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.OLTP.cs ONLINE ONLINE on node1
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE ONLINE on node2
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
ora.node2.gsd ONLINE ONLINE on node2
ora.node2.ons ONLINE ONLINE on node2
ora.node2.vip ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # crsstat srv


HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.DISCO.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # for service in OLTP BATCH DISCO


> do
> srvctl config service -d JSC1DB -s $service
> srvctl status service -d JSC1DB -s $service
> echo ''
> done

OLTP PREF: JSC1DB1 JSC1DB2 AVAIL:


Service OLTP is running on instance(s) JSC1DB1, JSC1DB2

BATCH PREF: JSC1DB2 AVAIL: JSC1DB1


Service BATCH is running on instance(s) JSC1DB2

DISCO PREF: JSC1DB1 AVAIL: JSC1DB2


Service DISCO is running on instance(s) JSC1DB1

{node1:rdbms}/oracle/rdbms #



At the database level, you should check the database parameter "service_names", ensuring that the created services
are listed :

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1

{node1:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:18:12 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string OLTP, JSC1DB, DISCO
SQL>

On node1, we'll see OLTP and DISCO as they are preferred on JSC1DB1,
BUT not BATCH, as it is only available on JSC1DB1.
JSC1DB MUST also be declared in the service_names list.

From node2 :

{node2:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node2:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB2
{node2:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:20:40 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string OLTP, JSC1DB, BATCH
SQL>

On node2, we'll see OLTP and BATCH as they are preferred on JSC1DB2,
BUT not DISCO, as it is only available on JSC1DB2.
JSC1DB MUST also be declared in the service_names list.



If service_names is not set, empty, or incorrectly set, you should issue the following commands :

From node1 :

{node1:oracle}/oracle -> export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:oracle}/oracle -> export ORACLE_SID=JSC1DB1

{node1:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 21:18:12 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> ALTER SYSTEM SET service_names='JSC1DB','OLTP','DISCO' SCOPE=BOTH SID='JSC1DB1';
System Altered.
SQL> ALTER SYSTEM SET service_names='JSC1DB','OLTP','BATCH' SCOPE=BOTH SID='JSC1DB2';
System Altered.
SQL>

Check that services are registered with command : lsnrctl status …. On node1 with listener_node1
{node1:rdbms}/oracle/rdbms # lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:45:43

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_NODE1
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production
Start Date 22-FEB-2008 07:57:23
Uptime 1 days 13 hr. 48 min. 21 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File /oracle/diag/tnslsnr/node1/listener_node1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.181)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.81)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 2 handler(s) for this service...
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
The command completed successfully
{node1:rdbms}/oracle/rdbms #



Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...

This is normal as the BATCH service is :
• Available on instance JSC1DB1, node1
• Preferred on instance JSC1DB2, node2

Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...

This is normal as the DISCO service is :
• Preferred on instance JSC1DB1, node1
• Available on instance JSC1DB2, node2

Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 2 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...

This is normal as the OLTP service is :
• Preferred on instance JSC1DB1, node1
• Preferred on instance JSC1DB2, node2

Check that services are registered with command : lsnrctl status …. On node2 with listener_node2
{node2:rdbms}/oracle/rdbms # lsnrctl status

LSNRCTL for IBM/AIX RISC System/6000: Version 11.1.0.6.0 - Production on 23-FEB-2008 21:55:48

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_NODE2
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.1.0.6.0 -
Production
Start Date 22-FEB-2008 11:11:32
Uptime 1 days 10 hr. 44 min. 17 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/asm/11.1.0/network/admin/listener.ora
Listener Log File /oracle/diag/tnslsnr/node2/listener_node2/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.182)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.3.25.82)(PORT=1521)))
Services Summary...
Service "+ASM" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "+ASM_XPT" has 2 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Instance "+ASM2", status READY, has 2 handler(s) for this service...
Service "BATCH" has 1 instance(s).
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "DISCO" has 1 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Service "JSC1DB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "JSC1DBXDB" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 1 handler(s) for this service...
Service "JSC1DB_XPT" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...
Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
Service "OLTP" has 2 instance(s).
Instance "JSC1DB1", status READY, has 1 handler(s) for this service...



Instance "JSC1DB2", status READY, has 2 handler(s) for this service...
The command completed successfully
{node2:rdbms}/oracle/rdbms #



Let's check what will happen at the database level when node2 fails.
Before the failure of one node, crs_stat -t will show :

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node2
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node2
{node1:rdbms}/oracle/rdbms #

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 22:14:50 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string OLTP, JSC1DB, DISCO
SQL>

If node2 fails or reboots for any reason, what should we see on node1 ?

After the failure, on node1 : as the BATCH service was preferred on JSC1DB2 and available on JSC1DB1,
after node2 fails the BATCH service is switched to the available instance JSC1DB1.

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0


{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 23 22:14:50 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> connect /as sysdba


Connected.
SQL> show parameter service

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
service_names string OLTP, JSC1DB, DISCO, BATCH
SQL>
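
Besides “show parameter service”, the placement of each service can be checked from any node with srvctl,
before or after a failure (a minimal example, using the database and service names of this cookbook) :

{node1:rdbms}/oracle/rdbms # srvctl status database -d JSC1DB -v
{node1:rdbms}/oracle/rdbms # srvctl status service -d JSC1DB -s "OLTP,BATCH,DISCO"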



After failure of node2, “crs_stat -t” will show :

• The VIP from node2 switches to node1.

• ONS, GSD, the listener, the ASM2 instance and the JSC1DB2 instance are switched to the OFFLINE state.

• The OLTP service is switched to the OFFLINE state on node2, but stays ONLINE on node1.

• The BATCH service is switched from node2 to node1 and stays ONLINE.

THEN the BATCH service is still available through node1.

{node1:rdbms}/oracle/rdbms # crsstat
HA Resource Target State
----------- ------ -----
ora.JSC1DB.BATCH.JSC1DB2.srv ONLINE ONLINE on node1
ora.JSC1DB.BATCH.cs ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB1.inst ONLINE ONLINE on node1
ora.JSC1DB.JSC1DB2.inst ONLINE OFFLINE
ora.JSC1DB.OLTP.JSC1DB1.srv ONLINE ONLINE on node1
ora.JSC1DB.OLTP.JSC1DB2.srv ONLINE OFFLINE
ora.JSC1DB.OLTP.cs ONLINE ONLINE on node1
ora.JSC1DB.db ONLINE ONLINE on node1
ora.node1.ASM1.asm ONLINE ONLINE on node1
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node1.gsd ONLINE ONLINE on node1
ora.node1.ons ONLINE ONLINE on node1
ora.node1.vip ONLINE ONLINE on node1
ora.node2.ASM2.asm ONLINE OFFLINE
ora.node2.LISTENER_NODE2.lsnr ONLINE OFFLINE
ora.node2.gsd ONLINE OFFLINE
ora.node2.ons ONLINE OFFLINE
ora.node2.vip ONLINE ONLINE on node1
{node1:rdbms}/oracle/rdbms #
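
Note that once node2 is back online, the BATCH service does not automatically fail back to its preferred
instance. If needed, it can be relocated manually with srvctl (syntax as listed in the appendixes ; adapt
the instance names to your configuration) :

{node1:rdbms}/oracle/rdbms # srvctl relocate service -d JSC1DB -s BATCH -i JSC1DB1 -t JSC1DB2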



15.6 Transparent Application Failover (TAF)

$ORACLE_HOME/network/admin/listener.ora
Which will be in /oracle/asm/11.1.0/network/admin

• The listener from node1 is listening on node1-vip (10.3.25.181) and on node1 (10.3.25.81).
• The listener from node2 is listening on node2-vip (10.3.25.182) and on node2 (10.3.25.82).
• Users will connect through node1-vip or node2-vip.
  If Load Balancing is configured in the client connection string, users will be dispatched on both
  nodes depending on the workload.
• DBAs can connect through node1 or node2 for administration purposes.
• The IPC entry is necessary.

Checking content of listener.ora on node1 :

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # cat listener.ora


# listener.ora.node1 Network Configuration File:
/oracle/rdbms/11.1.0/network/admin/listener.ora.node1
# Generated by Oracle configuration tools.

LISTENER_NODE1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.81)(PORT = 1521)(IP = FIRST))
)
)

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin #

Checking content of listener.ora on node2 :

{node2:rdbms}/oracle/rdbms/11.1.0/network/admin # cat listener.ora


# listener.ora.node2 Network Configuration File:
/oracle/rdbms/11.1.0/network/admin/listener.ora.node2
# Generated by Oracle configuration tools.

LISTENER_NODE2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.25.82)(PORT = 1521)(IP = FIRST))
)
)

{node2:rdbms}/oracle/rdbms/11.1.0/network/admin #



How to query “LISTENER” resources

{node2:rdbms}/oracle/rdbms # crsstat LISTENER


HA Resource Target State
----------- ------ -----
ora.node1.LISTENER_NODE1.lsnr ONLINE ONLINE on node1
ora.node2.LISTENER_NODE2.lsnr ONLINE ONLINE on node2
{node2:rdbms}/oracle/rdbms #

How to obtain the “LISTENER” resources config

{node2:rdbms}/oracle/rdbms # srvctl config listener -n node1


node1 LISTENER_NODE1
{node2:rdbms}/oracle/rdbms #

{node2:rdbms}/oracle/rdbms # srvctl config listener -n node2


node2 LISTENER_NODE2
{node2:rdbms}/oracle/rdbms #

If the listener is not the default one, or is the default one but does not follow the default naming
“LISTENER_<NODENAME>”, then specify its name with -l. For example, if the listener is named “LISTENER_JSC” :

{node2:rdbms}/oracle/rdbms # srvctl config listener -n node2 -l LISTENER_JSC


node2 LISTENER_JSC
{node2:rdbms}/oracle/rdbms #

How to STOP the “LISTENER” resources

{node2:rdbms}/oracle/rdbms # srvctl stop listener -n node1

If the listener is not the default one, or is the default one but does not follow the default naming
“LISTENER_<NODENAME>”, then specify its name with -l. For example, if the listener is named “LISTENER_JSC” :

{node2:rdbms}/oracle/rdbms # srvctl stop listener -n node1 -l listener_JSC

How to START the “LISTENER” resources

{node2:rdbms}/oracle/rdbms # srvctl start listener -n node1

If the listener is not the default one, or is the default one but does not follow the default naming
“LISTENER_<NODENAME>”, then specify its name with -l. For example, if the listener is named “LISTENER_JSC” :

{node2:rdbms}/oracle/rdbms # srvctl start listener -n node1 -l listener_JSC



Let's have a look at the tnsnames.ora file from one node…

ALL client connections MUST use the VIPs !!!

$ORACLE_HOME/network/admin/tnsnames.ora
Which will be in /oracle/asm/11.1.0/network/admin

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin # cat tnsnames.ora

# tnsnames.ora Network Configuration File: /oracle/rdbms/11.1.0/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

“REMOTE_LISTENER” for the ASM instances +ASM1 and +ASM2 (added by netca) :

LISTENERS_+ASM =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

“LOCAL_LISTENER” for ASM instance +ASM2 (added manually) :

LISTENER_+ASM2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

“LOCAL_LISTENER” for ASM instance +ASM1 (added manually) :

LISTENER_+ASM1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

Connection string for the ASM instances, with Load Balancing and Failover (Session and Select).
This entry can be used by the DBA to connect remotely to the ASM instances (added manually) :

ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = +ASM)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

“REMOTE_LISTENER” for the database instances JSC1DB1 and JSC1DB2 (added by netca/dbca) :

LISTENERS_JSC1DB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

“LOCAL_LISTENER” for database instance JSC1DB2 (added manually) :

LISTENER_JSC1DB2 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
  )

“LOCAL_LISTENER” for database instance JSC1DB1 (added manually) :

LISTENER_JSC1DB1 =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
  )

This connection string will only allow connections to database instance JSC1DB2
(NO Failover, NO Load Balancing ; added by netca/dbca) :

JSC1DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (INSTANCE_NAME = JSC1DB2)
    )
  )

This connection string will only allow connections to database instance JSC1DB1
(NO Failover, NO Load Balancing ; added by netca/dbca) :

JSC1DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (INSTANCE_NAME = JSC1DB1)
    )
  )
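
These aliases are referenced by the LOCAL_LISTENER and REMOTE_LISTENER initialization parameters of the
ASM and database instances. A quick check that each instance points to the expected aliases (a minimal
example, run from any instance) :

{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL> connect /as sysdba
SQL> show parameter local_listener
SQL> show parameter remote_listener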



This connection string will allow connections to database instances JSC1DB1 and JSC1DB2,
with Load Balancing but NO Failover (added by netca) :

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
    )
  )

Without lines as :

(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)

Load Balancing will be achieved, BUT NO Failover will be ensured !!!

So the entry for JSC1DB should be modified as below :

This connection string will allow connections to database instances JSC1DB1 and JSC1DB2,
with Load Balancing and Failover (Session and Select) :

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )



Connection string for the OLTP database cluster service, with Load Balancing and Failover
(Session and Select). “Preferred”, “Available” and “Not Used” placement will be managed
through Oracle Clusterware (added manually) :

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Connection string for the BATCH database cluster service, with Load Balancing and Failover
(Session and Select). “Preferred”, “Available” and “Not Used” placement will be managed
through Oracle Clusterware (added manually) :

BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = BATCH)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Connection string for the DISCO database cluster service, with Load Balancing and Failover
(Session and Select). “Preferred”, “Available” and “Not Used” placement will be managed
through Oracle Clusterware (added manually) :

DISCO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DISCO)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

{node1:rdbms}/oracle/rdbms/11.1.0/network/admin #



IMPORTANT : What should be on the client side, in the local tnsnames.ora file ?

For Load Balancing and failover on all nodes without going through Oracle Clusterware services,
add the JSC1DB entry to the local client tnsnames.ora.

The name of the connection string JSC1DB could be replaced by another name, as long as the
description, including SERVICE_NAME, is not changed !!!

JSC1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = JSC1DB)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

For Load Balancing and failover on all nodes going through Oracle Clusterware services,
add the OLTP entry to the local client tnsnames.ora.

The name of the connection string OLTP could be replaced by another name, as long as the
description, including SERVICE_NAME, is not changed !!!

OLTP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = OLTP)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

For example, changing the name of the connection string from OLTP to TEST will look like this :

TEST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = OLTP)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)

For instance 2 as preferred instance and failover to available instance 1 through Oracle Clusterware
services, add the BATCH entry to the local client tnsnames.ora.

For instance 1 as preferred instance and failover to available instance 2 through Oracle Clusterware
services, add the DISCO entry to the local client tnsnames.ora.

IMPORTANT : On the client side, make sure IP/hostname resolution for the VIPs is working, either through
DNS or the hosts file, even if you replaced the VIP hostnames with their IP addresses. Otherwise you may
end up with intermittent connection problems.
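
To verify that Transparent Application Failover is actually in effect for connected sessions, a quick
check can be run from any instance (a minimal example, using the standard GV$SESSION failover columns
and the service names defined above) :

{node1:rdbms}/oracle/rdbms # sqlplus /nolog
SQL> connect /as sysdba
SQL> select inst_id, username, service_name, failover_type, failover_method, failed_over
  2  from gv$session
  3  where service_name in ('OLTP','BATCH','DISCO');

Sessions connected through a TAF-enabled connection string should report FAILOVER_TYPE=SELECT and
FAILOVER_METHOD=BASIC, and FAILED_OVER=YES once they have been moved to a surviving instance.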

15.7 About DBCONSOLE

15.7.1 Checking DB CONSOLE

Since 10gRAC R2, and with 11gRAC as well, dbconsole is only configured on the first node at database creation.
When using DBCONSOLE, each database will have its own dbconsole !!!

Look at Metalink Note :

Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1
Subject: Troubleshooting Database Control Startup Issues Doc ID: Note:549079.1

How to start/stop the dbconsole agent :

When using DBCONSOLE, the Oracle agent starts and stops automatically with the dbconsole.
If needed, it is still possible to start/stop the agent using the emctl tool, as follows :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl start agent
OR
{node1:rdbms}/oracle/rdbms # emctl stop agent

Check status of the dbconsole agent for node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl status agent
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 10.1.0.5.1
OMS Version : 10.1.0.5.0
Protocol Version : 10.1.0.2.0
Agent Home : /oracle/rdbms/11.1.0/node1_JSC1DB1
Agent binaries : /oracle/rdbms/11.1.0
Agent Process ID : 1507562
Parent Process ID : 1437786
Agent URL : http://node1:3938/emd/main
Started at : 2007-04-23 10:44:38
Started by user : oracle
Last Reload : 2007-04-23 10:44:38
Last successful upload : 2007-04-23 17:32:23
Total Megabytes of XML files uploaded so far : 7.64
Number of XML files pending upload : 0
Size of XML files pending upload(MB) : 0.00
Available disk space on upload filesystem : 37.78%
---------------------------------------------------------------
Agent is Running and Ready
{node1:rdbms}/oracle/rdbms #



Check status of EM agent
for node1, and node2 :

Commands to start DBCONSOLE :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl start dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control
................. started.
------------------------------------------------------------------
Logs are generated in directory
/oracle/rdbms/11.1.0/node1_JSC1DB1/sysman/log
{node1:rdbms}/oracle/rdbms #

Commands to stop DBCONSOLE :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl stop dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Database Control ...
... Stopped.
{node1:rdbms}/oracle/rdbms #

Check status of dbconsole :

{node1:rdbms}/oracle/rdbms # export ORACLE_HOME=/oracle/rdbms/11.1.0
{node1:rdbms}/oracle/rdbms # export ORACLE_SID=JSC1DB1
{node1:rdbms}/oracle/rdbms # emctl status dbconsole
Oracle Enterprise Manager 10g Database Control Release 10.2.0.3.0
Copyright (c) 1996, 2006 Oracle Corporation. All rights reserved.
http://node1:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.
------------------------------------------------------------------
Logs are generated in directory
/oracle/rdbms/11.1.0/node1_JSC1DB1/sysman/log
{node1:rdbms}/oracle/rdbms #



Update your local hosts file with entries for node1 and node2,

AND

access DBCONSOLE through http://node1:1158/em

Connect with the Oracle database “sys” user as “SYSDBA”.

Just click on “I agree” to accept the Oracle Database 10g Licensing Information.



Look at Metalink Note to discover dbconsole, or get help to solve any issues :

Subject: How to configure dbconsole to display information from the ASM DISKGROUPS Doc ID: Note:329581.1

Subject: Em Dbconsole 10.1.0.X Does Not Discover Rac Database As A Cluster Doc ID: Note:334546.1

Subject: DBConsole Shows Everything Down Doc ID: Note:332865.1

Subject: How To Config Dbconsole (10.1.x or 10.2) EMCA With Another Hostname Doc ID: Note:336017.1

Subject: How To Change The DB Console Language To English Doc ID: Note:370178.1

Subject: Dbconsole Fails After A Physical Move Of Machine To New Domain/Hostnam Doc ID: Note:401943.1

Subject: How to Troubleshoot Failed Login Attempts to DB Control Doc ID: Note:404820.1

Subject: How to manage DB Control 10.2 for RAC Database with emca Doc ID: Note:395162.1

Subject: How To Drop, Create And Recreate DB Control In A 10g Database Doc ID: Note:278100.1

Subject: Overview Of The EMCA Commands Available for DB Control 10.2 Installations Doc ID: Note:330130.1

Subject: How to change the password of the 10g database user dbsnmp Doc ID: Note:259387.1

Subject: EM 10.2 DBConsole Displaying Wrong Old Information Doc ID: Note:336179.1

Subject: How To Configure A Listener Target From Grid Control 10g Doc ID: Note:427422.1

Testing if dbconsole is working or not !!!

For each RAC node :

As oracle user on node 1 :

export DISPLAY=??????
export ORACLE_SID=JSC1DB1

Check if DBConsole is running : emctl status dbconsole

IF dbconsole is not running
THEN
   1/ execute : emctl start dbconsole
   2/ access dbconsole using an Internet browser :
      http://node1:1158/em
      using sys as user, connected as sysdba, with its password

   IF dbconsole started and is reachable with http://node1:1158/em
   THEN dbconsole is OK on Node 1
   ELSE see the metalink notes for DBConsole troubleshooting !!!

ELSE
   1/ access dbconsole using an Internet browser :
      http://node1:1158/em
      using sys as user, connected as sysdba, with its password

   IF dbconsole is reachable with http://node1:1158/em
   THEN dbconsole is OK on Node 1
   ELSE see the metalink notes for DBConsole troubleshooting !!!

END
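
The manual procedure above can also be scripted. Below is a minimal ksh sketch (not part of the original
cookbook ; it assumes the ORACLE_HOME and ORACLE_SID values used throughout this guide, adapt them to your
environment) that starts dbconsole on the local node when it is not already running :

#!/usr/bin/ksh
# check_dbconsole.ksh : hypothetical helper, adjust ORACLE_HOME / ORACLE_SID before use
export ORACLE_HOME=/oracle/rdbms/11.1.0
export ORACLE_SID=JSC1DB1
export PATH=$ORACLE_HOME/bin:$PATH

# "emctl status dbconsole" prints "... is running." when the console is up
if emctl status dbconsole | grep "is running" > /dev/null
then
   echo "dbconsole is already running on $(hostname)"
else
   echo "dbconsole is not running on $(hostname), starting it ..."
   emctl start dbconsole
fi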



Troubleshooting Tips

The “dbconsole” may not start after doing an install through the Oracle Enterprise Manager (OEM)
and selecting a database.

The solution is to:

• Edit the file: “${ORACLE_HOME}/<hostname>_${ORACLE_SID}/sysman/config/emd.properties”

Locate the entry where the EMD_URL is set.  

This entry should have the format:

EMD_URL=http://<hostname>:%EM_SERVLET_PORT%/emd/main

If you see the string: %EM_SERVLET_PORT% in the entry, then replace the complete string with an 
unused port number that is not defined in the "/etc/services" file.  If this string is missing and no port 
number is in its place, then insert an unused port number that is not defined in the "/etc/services" file in 
between the "http://<hostname>:" and the "/emd/main" strings.

• Use the command “emctl start dbconsole” to start the dbconsole after making this change.

For example:

EMD_URL=http://myhostname.us.oracle.com:5505/emd/main

15.7.2 Moving from dbconsole to Grid Control

You must either use DBCONSOLE, or GRID CONTROL, not both at the same time !!!

To move from Locally Managed (DBConsole) to Centrally Managed (Grid Control) :

• You must have an existing Grid Control, or install a new Grid Control on a separate LPAR or server. Grid
Control is available on AIX5L, but could be installed on any supported operating system.

• You must install the Oracle Grid Agent on each RAC node AIX LPAR, with the same unix user as the Oracle
Clusterware owner, and in the same oraInventory.

• THEN you must follow this metalink note :


Subject: How to change a 10.2.0.x Database from Locally Managed to Centrally Managed Doc ID:
Note:400476.1



16 ASM ADVANCED TOOLS

16.1 ftp and http access

Subject: How to configure XDB for using ftp and http protocols with ASM Doc ID: Note:357714.1

Subject: /Sys/Asm path is not visible in XDB Repository Doc ID: Note:368454.1

Subject: How to Deinstall and Reinstall XML Database (XDB) Doc ID: Note:243554.1
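
Once XDB ftp access has been configured as described in the notes above, the ASM files can be browsed like
a directory tree under /sys/asm. A minimal sketch, assuming a hypothetical XDB FTP port of 2100 and a
hypothetical diskgroup named DATA (adapt the port, user, diskgroup and database names to your setup) :

$ ftp node1 2100
ftp> user system
ftp> cd /sys/asm
ftp> ls
ftp> cd DATA/JSC1DB/DATAFILE
ftp> ls
ftp> bye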



17 SOME USEFUL COMMANDS
Commands to start/stop the Database and the Database Instances :

From any node :

For the database :

{node1:rdbms}/oracle/rdbms # srvctl start database -d ASMDB     to start the database (all enabled instances)
{node1:rdbms}/oracle/rdbms # srvctl stop database -d ASMDB      to stop the database (all instances)

For instance 1 :

{node1:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB1     to start the database instance
{node1:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB1      to stop the database instance

For instance 2 :

{node2:rdbms}/oracle/rdbms # srvctl start instance -d ASMDB -i ASMDB2     to start the database instance
{node2:rdbms}/oracle/rdbms # srvctl stop instance -d ASMDB -i ASMDB2      to stop the database instance
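
To check the current state before or after these operations, the status variants of the same commands can
be used (a minimal example, with the database and instance names used just above) :

{node1:rdbms}/oracle/rdbms # srvctl status database -d ASMDB -v
{node1:rdbms}/oracle/rdbms # srvctl status instance -d ASMDB -i ASMDB1 -v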

To access an ASM instance with sqlplus

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_SID=+ASM1


{node1:rdbms}/oracle/rdbms # sqlplus /nolog
connect /as sysdba
show sga
………

From node2 :

{node2:rdbms}/oracle/rdbms # export ORACLE_SID=+ASM2


{node2:rdbms}/oracle/rdbms # sqlplus /nolog
connect /as sysdba
show sga
………
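
Once connected to an ASM instance, a quick way to check the diskgroups is to query V$ASM_DISKGROUP
(a minimal example) :

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;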

To access a Database instance stored in ASM with sqlplus

From node1 :

{node1:rdbms}/oracle/rdbms # export ORACLE_SID=ASMDB1


{node1:rdbms}/oracle/rdbms # sqlplus /nolog
connect /as sysdba
show sga
…….
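
From the same session, GV$INSTANCE gives a cluster-wide view of the running instances (a minimal example) :

SQL> select inst_id, instance_name, host_name, status from gv$instance;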



17.1 Oracle Cluster Registry content Check and Backup

Check the Oracle Cluster Registry integrity. As oracle user, execute ocrcheck :

{node1:crs}/crs # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     306972
         Used space (kbytes)      :       6540
         Available space (kbytes) :     300432
         ID                       : 1928316120
         Device/File Name         : /dev/ocr_disk1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/ocr_disk2
                                    Device/File integrity check succeeded

         Cluster registry integrity check succeeded

{node1:crs}/crs #

Export online the Oracle Cluster Registry content, as root user :

{node1:crs}/crs/11.1.0/bin # su
root's Password:
{node1:root}/crs/11.1.0/bin # ocrconfig -export /oracle/ocr_export4.dmp -s online
{node1:root}/crs/11.1.0/bin # ls -la /oracle/*.dmp
-rw-r--r--   1 root     system       106420 Jan 30 18:30 /oracle/ocr_export.dmp
{node1:crs}/crs/11.1.0/bin #

→ you must not edit/modify this exported file

View OCR automatic periodic backup managed by Oracle Clusterware

{node1:crs}/crs/11.1.0 # ocrconfig -showbackup

node1 2008/03/25 09:24:19 /crs/11.1.0/cdata/crs_cluster/backup00.ocr

node1 2008/03/25 05:24:18 /crs/11.1.0/cdata/crs_cluster/backup01.ocr

node1 2008/03/25 01:24:18 /crs/11.1.0/cdata/crs_cluster/backup02.ocr

node1 2008/03/23 13:24:15 /crs/11.1.0/cdata/crs_cluster/day.ocr

node2 2008/03/14 06:45:44 /crs/11.1.0/cdata/crs_cluster/week.ocr

node2 2008/03/16 17:13:39 /crs/11.1.0/cdata/crs_cluster/backup_20080316_171339.ocr

node1 2008/02/24 08:09:21 /crs/11.1.0/cdata/crs_cluster/backup_20080224_080921.ocr

node1 2008/02/24 08:08:48 /crs/11.1.0/cdata/crs_cluster/backup_20080224_080848.ocr


{node1:crs}/crs/11.1.0 #
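
In addition to these automatic backups, a manual OCR backup can be taken and, if ever needed, restored
(a minimal sketch based on the ocrconfig options listed in Appendix C ; a restore requires Oracle
Clusterware to be stopped on all nodes, and the backup file name below is only an example) :

{node1:root}/crs/11.1.0/bin # ocrconfig -manualbackup
{node1:root}/crs/11.1.0/bin # ocrconfig -showbackup manual
{node1:root}/crs/11.1.0/bin # ocrconfig -restore /crs/11.1.0/cdata/crs_cluster/backup00.ocr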



18 APPENDIX A : ORACLE / IBM TECHNICAL DOCUMENTS
• Technical Documents on Oracle Real Application Cluster :

http://www.oracle.com/technology/products/database/clustering/index.html

• Oracle Metalink Notes :

• Note:282036.1 – Minimum Software Versions and Patches Required to Support Oracle Products on IBM System p
• Note:302806.1 – IBM General Parallel File System (GPFS) and Oracle RAC on AIX 5L and IBM eServer System p
• Note:341507.1 – Oracle Products on Linux on IBM POWER
• Note:404474.1 – Status of Certification of Oracle Clusterware with HACMP 5.3 & 5.4

• JSC Cookbooks (available at http://www.oracleracsig.org) :

   • 9i RAC Release 2 / AIX / HACMP / GPFS
   • 10g RAC Release 1 / AIX / GPFS / ASM / Concurrent Raw Devices on IBM SAN Storage
   • 10g RAC Release 2 / AIX / GPFS / ASM on IBM SAN Storage

• IBM Tech documents (http://www.ibm.com) :

   • Implementing 10gRAC with ASM on AIX5L
   • Oracle 10g RAC on AIX with Veritas Storage Foundation for Oracle RAC
   • Oracle 9i RAC on AIX with VERITAS SFRAC
   • Oracle's licensing in a multi-core environment
   • Oracle Architecture and Tuning on AIX
   • Diagnosing Oracle® Database Performance on AIX® Using IBM® NMON and Oracle Statspack Reports
   • Observations Using SMT on an Oracle 9i OLTP Workload
   • Oracle 9i & 10g on IBM AIX5L: Tips & Considerations
   • Impact of Advanced Power Virtualization Features on Performance Characteristics of a Java® / C++ /
     Oracle® based Application
   • Performance Impact of Upgrading a Data Warehouse Application from AIX 5.2 to AIX 5.3 with SMT
   • Simultaneous Multi-threading (SMT) on IBM POWER5: Performance improvement on commercial OLTP workload
     using Oracle database version 10g



19 APPENDIX B : ORACLE TECHNICAL NOTES
This appendix provides some useful notes coming from Oracle support. These notes can be found in Metalink.

19.1 CRS and 10g Real Application Clusters

Doc ID:   Note:259301.1                           Content Type:        TEXT/X-HTML
Subject:  CRS and 10g Real Application Clusters   Creation Date:       05-DEC-2003
Type:     BULLETIN                                Last Revision Date:  15-MAR-2005
Status:   PUBLISHED

PURPOSE
-------

This document is to provide additional information on CRS (Cluster Ready Services)


in 10g Real Application Clusters.

SCOPE & APPLICATION


-------------------

This document is intended for RAC Database Administrators and Oracle support
engineers.

CRS and 10g REAL APPLICATION CLUSTERS


-------------------------------------

CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters
that provides a standard cluster interface on all platforms and performs
new high availability operations not available in previous versions.

CRS KEY FACTS


-------------

Prior to installing CRS and 10g RAC, there are some key points to remember about
CRS and 10g RAC:

- CRS is REQUIRED to be installed and running prior to installing 10g RAC.

- CRS can either run on top of the vendor clusterware (such as Sun Cluster,
HP Serviceguard, IBM HACMP, TruCluster, Veritas Cluster, Fujitsu Primecluster,
etc...) or can run without the vendor clusterware. The vendor clusterware
was required in 9i RAC but is optional in 10g RAC.

- The CRS HOME and ORACLE_HOME must be installed in DIFFERENT locations.

- Shared Location(s) or devices for the Voting File and OCR (Oracle
Configuration Repository) file must be available PRIOR to installing CRS. The
voting file should be at least 20MB and the OCR file should be at least 100MB.

- CRS and RAC require that the following network interfaces be configured prior
to installing CRS or RAC:
- Public Interface
- Private Interface
- Virtual (Public) Interface
For more information on this, see Note 264847.1.



- The root.sh script at the end of the CRS installation starts the CRS stack.
If your CRS stack does not start, see Note 240001.1.

- Only one set of CRS daemons can be running per RAC node.

- On Unix, the CRS stack is run from entries in /etc/inittab with "respawn".

- If there is a network split (nodes lose communication with each other), one
or more nodes may reboot automatically to prevent data corruption.

- The supported method to start CRS is booting the machine. MANUAL STARTUP OF
THE CRS STACK IS NOT SUPPORTED UNTIL 10.1.0.4 OR HIGHER.
- The supported method to stop is shutdown the machine or use "init.crs stop".

- Killing CRS daemons is not supported unless you are removing the CRS
installation via Note 239998.1 because flag files can become mismatched.

- For maintenance, go to single user mode at the OS.

Once the stack is started, you should be able to see all of the daemon processes
with a ps -ef command:

[rac1]/u01/home/beta> ps -ef | grep crs

oracle 1363 999 0 11:23:21 ? 0:00 /u01/crs_home/bin/evmlogger.bin -o /u01


oracle 999 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/evmd.bin
root 1003 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/crsd.bin
oracle 1002 1 0 11:21:39 ? 0:01 /u01/crs_home/bin/ocssd.bin

CRS DAEMON FUNCTIONALITY


------------------------

Here is a short description of each of the CRS daemon processes:

CRSD:
- Engine for HA operation
- Manages 'application resources'
- Starts, stops, and fails 'application resources' over
- Spawns separate 'actions' to start/stop/check application resources
- Maintains configuration profiles in the OCR (Oracle Configuration Repository)
- Stores current known state in the OCR.
- Runs as root
- Is restarted automatically on failure

OCSSD:
- OCSSD is part of RAC and Single Instance with ASM
- Provides access to node membership
- Provides group services
- Provides basic cluster locking
- Integrates with existing vendor clusterware, when present
- Can also run without integration to vendor clusterware
- Runs as Oracle.
- Failure exit causes machine reboot.
--- This is a feature to prevent data corruption in event of a split brain.

EVMD:
- Generates events when things happen
- Spawns a permanent child evmlogger
- Evmlogger, on demand, spawns children
- Scans callout directory and invokes callouts.



- Runs as Oracle.
- Restarted automatically on failure



CRS LOG DIRECTORIES
-------------------

When troubleshooting CRS problems, it is important to review the directories


under the CRS Home.

$ORA_CRS_HOME/crs/log - This directory includes traces for CRS resources that are
joining, leaving, restarting, and relocating as identified by CRS.

$ORA_CRS_HOME/crs/init - Any core dumps for the crsd.bin daemon should be written
here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/css/log - The css logs indicate all actions such as


reconfigurations, missed checkins , connects, and disconnects from the client
CSS listener . In some cases the logger logs messages with the category of
(auth.crit) for the reboots done by oracle. This could be used for checking the
exact time when the reboot occurred.

$ORA_CRS_HOME/css/init - Core dumps from the ocssd primarily and the pid for the
css daemon whose death is treated as fatal are located here. If there are
abnormal restarts for css then the core files will have the formats of
core.<pid>. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/evm/log - Log files for the evm and evmlogger daemons. Not used
as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init - Pid and lock files for EVM. Core files for EVM should
also be written here. Note 1812.1 could be used to debug these.

$ORA_CRS_HOME/srvm/log - Log files for OCR.



STATUS FOR CRS RESOURCES
------------------------

After installing RAC and running the VIPCA (Virtual IP Configuration Assistant)
launched with the RAC root.sh, you should be able to see all of your CRS
resources with crs_stat. Example:

cd $ORA_CRS_HOME/bin
./crs_stat

NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.oem
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE

NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE



There is also a script available to view CRS resources in a format that is
easier to read. Just create a shell script with:

--------------------------- Begin Shell Script ----------------------------

#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
# - Returns formatted version of crs_stat -t, in tabular
# format, with the complete rsc names and filtering keywords
# - The argument, $RSC_KEY, is optional and if passed to the script, will
# limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
# - $ORA_CRS_HOME should be set in your environment

RSC_KEY=$1
QSTAT=-u
AWK=/usr/xpg4/bin/awk # if not available use /usr/bin/awk

# Table header:
echo ""

$AWK \
'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'

# Table body:
$ORA_CRS_HOME/bin/crs_stat $QSTAT | $AWK \
'BEGIN { FS="="; state = 0; }
$1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
state == 0 {next;}
$1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
$1~/STATE/ && state == 2 {appstate = $2; state=3;}
state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'

--------------------------- End Shell Script ------------------------------

Example output:

[opcbsol1]/u01/home/usupport> ./crsstat
HA Resource Target State
----------- ------ -----
ora.V10SN.V10SN1.inst ONLINE ONLINE on opcbsol1
ora.V10SN.V10SN2.inst ONLINE ONLINE on opcbsol2
ora.V10SN.db ONLINE ONLINE on opcbsol2
ora.opcbsol1.ASM1.asm ONLINE ONLINE on opcbsol1
ora.opcbsol1.LISTENER_OPCBSOL1.lsnr ONLINE ONLINE on opcbsol1
ora.opcbsol1.gsd ONLINE ONLINE on opcbsol1
ora.opcbsol1.ons ONLINE ONLINE on opcbsol1
ora.opcbsol1.vip ONLINE ONLINE on opcbsol1
ora.opcbsol2.ASM2.asm ONLINE ONLINE on opcbsol2
ora.opcbsol2.LISTENER_OPCBSOL2.lsnr ONLINE ONLINE on opcbsol2
ora.opcbsol2.gsd ONLINE ONLINE on opcbsol2
ora.opcbsol2.ons ONLINE ONLINE on opcbsol2
ora.opcbsol2.vip ONLINE ONLINE on opcbsol2



CRS RESOURCE ADMINISTRATION
---------------------------

You can use srvctl to manage these resources. Below are syntax and examples.

---------------------------------------------------------------------------

CRS RESOURCE STATUS

srvctl status database -d <database-name> [-f] [-v] [-S <level>]


srvctl status instance -d <database-name> -i <instance-name> >[,<instance-name-list>]
[-f] [-v] [-S <level>]
srvctl status service -d <database-name> -s <service-name>[,<service-name-list>]
[-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node-name>]
srvctl status asm -n <node_name>

EXAMPLES:

Status of the database, all instances and all services.


srvctl status database -d ORACLE -v
Status of named instances with their current services.
srvctl status instance -d ORACLE -i RAC01, RAC02 -v
Status of a named services.
srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications.
srvctl status node

START CRS RESOURCES

srvctl start database -d <database-name> [-o < start-options>]


[-c <connect-string> | -q]
srvctl start instance -d <database-name> -i <instance-name>
[,<instance-name-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node-name>
srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:

Start the database with all enabled instances.


srvctl start database -d ORACLE
Start named instances.
srvctl start instance -d ORACLE -i RAC03, RAC04
Start named services. Dependent instances are started as needed.
srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
srvctl start nodeapps -n myclust-4



STOP CRS RESOURCES

srvctl stop database -d <database-name> [-o <stop-options>]


[-c <connect-string> | -q]
srvctl stop instance -d <database-name> -i <instance-name> [,<instance-name-list>]
[-o <stop-options>][-c <connect-string> | -q]
srvctl stop service -d <database-name> [-s <service-name>[,<service-name-list>]]
[-i <instance-name>][-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node-name>
srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]

EXAMPLES:

Stop the database, all instances and all services.


srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
srvctl stop nodeapps -n myclust-4

ADD CRS RESOURCES

srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>]


[-A <name|ip>/netmask] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start_options>] [-n <db_name>]
srvctl add instance -d <name> -i <inst_name> -n <node_name>
srvctl add service -d <name> -s <service_name> -r <preferred_list>
[-a <available_list>] [-P <TAF_policy>] [-u]
srvctl add nodeapps -n <node_name> -o <oracle_home>
[-A <name|ip>/netmask[/if1[|if2|...]]]
srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home>

OPTIONS:

-A vip range, node, and database, address specification. The format of


address string is:
[<logical host name>]/<VIP address>/<net mask>[/<host interface1[ |
host interface2 |..]>] [,] [<logical host name>]/<VIP address>/<net mask>
[/<host interface1[ | host interface2 |..]>]
-a for services, list of available instances, this list cannot include
preferred instances
-m domain name with the format “us.mydomain.com”
-n node name that will support one or more instances
-o $ORACLE_HOME to locate Oracle binaries
-P for services, TAF preconnect policy - NONE, PRECONNECT
-r for services, list of preferred instances, this list cannot include
available instances.
-s spfile name
-u updates the preferred or available list for the service to support the
specified instance. Only one instance may be specified with the -u
switch. Instances that already support the service should not be



included.

EXAMPLES:

Add a new node:


srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A
139.184.201.1/255.255.255.0/hme0
Add a new database.
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and
available instances (-a). Use basic failover to the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P
PRECONNECT

REMOVE CRS RESOURCES

srvctl remove database -d <database-name>


srvctl remove instance -d <database-name> [-i <instance-name>]
srvctl remove service -d <database-name> -s <service-name> [-i <instance-name>]
srvctl remove nodeapps -n <node-name>

EXAMPLES:

Remove the applications for a database.


srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4

MODIFY CRS RESOURCES

srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>]
[-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}]
[-s <start_options>]
srvctl modify instance -d <database-name> -i <instance-name> -n <node-name>
srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-t <instance-name> [-f]
srvctl modify service -d <database-name> -s <service_name> -i <instance-name>
-r [-f]
srvctl modify nodeapps -n <node-name> [-A <address-description> ] [-x]



OPTIONS:

-i <instance-name> -t <instance-name> the instance name (-i) is replaced by the


instance name (-t)
-i <instance-name> -r the named instance is modified to be a preferred instance
-A address-list for VIP application, at node level
-s <asm_inst_name> add or remove ASM dependency

EXAMPLES:

Modify an instance to execute on another node.


srvctl modify instance -d ORACLE -n myclust-4
Modify a service to execute on another node.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r

RELOCATE SERVICES

srvctl relocate service -d <database-name> -s <service-name> [-i <instance-name >]-


t<instance-name > [-f]

EXAMPLES:

Relocate a service from one instance to another


srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01

ENABLE CRS RESOURCES (The resource may be up or down to use this function)

srvctl enable database -d <database-name>


srvctl enable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl enable service -d <database-name> -s <service-name>] [, <service-name-list>]
[-i <instance-name>]

EXAMPLES:

Enable the database.


srvctl enable database -d ORACLE
Enable the named instances.
srvctl enable instance -d ORACLE -i RAC01, RAC02
Enable the service.
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance.
srvctl enable service -d ORACLE -s CRM -i RAC03

DISABLE CRS RESOURCES (The resource must be down to use this function)

srvctl disable database -d <database-name>


srvctl disable instance -d <database-name> -i <instance-name> [,<instance-name-list>]
srvctl disable service -d <database-name> -s <service-name>] [,<service-name-list>]
[-i <instance-name>]



EXAMPLES:

Disable the database globally.


srvctl disable database -d ORACLE
Disable the named instances.
srvctl disable instance -d ORACLE -i RAC01, RAC02
Disable the service globally.
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance.
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04

For more information on this see the Oracle10g Real Application Clusters
Administrator’s Guide - Appendix B

RELATED DOCUMENTS

Oracle10g Real Application Clusters Installation and Configuration


Oracle10g Real Application Clusters Administrator’s Guide

19.2 About RAC ...

282036.1 - Minimum software versions and patches required to Support Oracle Products on ...
283743.1 - Pre-Install checks for 10g RDBMS on AIX
220970.1 - RAC: Frequently Asked Questions
183408.1 - Raw Devices and Cluster Filesystems With Real Application Clusters
293750.1 - 10g Installation on Aix 5.3, Failed with Checking operating system version mu...

19.3 About CRS ...

263897.1 - 10G: How to Stop the Cluster Ready Services (CRS)


295871.1 - How to verify if CRS install is Valid
265769.1 - 10g RAC: Troubleshooting CRS Reboots
259301.1 - CRS and 10g Real Application Clusters
268937.1 - Repairing or Restoring an Inconsistent OCR in RAC
293819.1 - Placement of voting and OCR disk files in 10gRAC
239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install
272332.1 - CRS 10g Diagnostic Collection Guide
279793.1 - How to Restore a Lost Voting Disk in 10g
239989.1 - 10g RAC: Stopping Reboot Loops When CRS Problems Occur
298073.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
298069.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
284949.1 - CRS Home Is Only Partially Copied to Remote Node
285046.1 - How to recreate ONS,GSD,VIP deleted from ocr by crs_unregister



19.4 About VIP ...

296856.1 - Configuring the IBM AIX 5L Operating System for the Oracle 10g VIP
294336.1 - Changing the check interval for the Oracle 10g VIP
276434.1 - Modifying the VIP of a Cluster Node
298895.1 - Modifying the default gateway address used by the Oracle 10g VIP
264847.1 - How to Configure Virtual IPs for 10g RAC

19.5 About manual database creation ...

240052.1 - 10g Manual Database Creation in Oracle (Single Instance and RAC)

19.6 About Grid Control ...

284707.1 - Enterprise Manager Grid Control 10.1.0.3.0 Release Notes


277420.1 - EM 10G Grid Control Preinstall Steps for AIX 5.2

19.7 About TAF ...

271297.1 - Troubleshooting TAF Issues in 10g RAC

19.8 About Adding/Removing Node ...

269320.1 - Removing a Node from a 10g RAC Cluster


270512.1 - Adding a Node to a 10g RAC Cluster

19.9 About ASM ...

243245.1 - 10G New Storage Features and Enhancements


268481.1 - Re-creating ASM Instances and Diskgroups
282777.1 - SGA sizing for ASM instances and databases that use ASM
274738.1 - Creating an ASM-enabled Database
249992.1 - New Feature on ASM (Automatic Storage Manager).
252219.1 - Steps To Migrate Database From Non-ASM to ASM And Vice-Versa
293234.1 - How To Move Archive Files from ASM
270066.1 - Manage ASM instance-creating diskgroup,adding/dropping/resizing disks.



300472.1 - How To Delete Archive Log Files Out Of +Asm?
265633.1 - ASM Technical Best Practices http://metalink.oracle.com/metalink/plsql/docs/ASM.pdf

294869.1 - Oracle ASM and Multi-Pathing Technologies

19.10 Metalink note to use in case of problem with CRS ...

263897.1 - 10G: How to Stop the Cluster Ready Services (CRS)


295871.1 - How to verify if CRS install is Valid
265769.1 - 10g RAC: Troubleshooting CRS Reboots
259301.1 - CRS and 10g Real Application Clusters
268937.1 - Repairing or Restoring an Inconsistent OCR in RAC
293819.1 - Placement of voting and OCR disk files in 10gRAC
239998.1 - 10g RAC: How to Clean Up After a Failed CRS Install
272332.1 - CRS 10g Diagnostic Collection Guide
239989.1 - 10g RAC: Stopping Reboot Loops When CRS Problems Occur
298073.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
298069.1 - HOW TO REMOVE CRS AUTO START AND RESTART FOR A RAC INSTANCE
284949.1 - CRS Home Is Only Partially Copied to Remote Node



20 APPENDIX C : USEFULL COMMANDS

• Crsctl : To administrate the clusterware

• Ocrconfig : To backup, export, import repair ... OCR (Oracle Cluster Registry) contents.

• Ocrdump : To dump Oracle cluster registry content

• Srvctl : To administrate Oracle clusterware resources

• Crs_stat -help : To query the state of Oracle clusterware resources

• Cluvfy : To pre/post check installation stages



crsctl : To administrate the clusterware.

{node1:crs}/crs # crsctl
Usage: crsctl check crs - checks the viability of the Oracle Clusterware
crsctl check cssd - checks the viability of Cluster Synchronization Services
crsctl check crsd - checks the viability of Cluster Ready Services
crsctl check evmd - checks the viability of Event Manager
crsctl check cluster [-node <nodename>] - checks the viability of CSS across nodes
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - sets the value of a Cluster Synchronization Services parameter
crsctl unset css <parameter> - sets the Cluster Synchronization Services parameter to its default
crsctl query css votedisk - lists the voting disks used by Cluster Synchronization Services
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all Oracle Clusterware daemons
crsctl disable crs - disables startup for all Oracle Clusterware daemons
crsctl start crs [-wait] - starts all Oracle Clusterware daemons
crsctl stop crs [-wait] - stops all Oracle Clusterware daemons. Stops Oracle Clusterware managed resources in case of cluster.
crsctl start resources - starts Oracle Clusterware managed resources
crsctl stop resources - stops Oracle Clusterware managed resources
crsctl debug statedump css - dumps state info for Cluster Synchronization Services objects
crsctl debug statedump crs - dumps state info for Cluster Ready Services objects
crsctl debug statedump evm - dumps state info for Event Manager objects
crsctl debug log css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug log crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug log evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl debug log res [resname:level] ... - turns on debugging for Event Manager
crsctl debug trace css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
crsctl debug trace crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
crsctl debug trace evm [module:level] {,module:level} ... - turns on debugging for Event Manager
crsctl query crs softwareversion [<nodename>] - lists the version of Oracle Clusterware software installed
crsctl query crs activeversion - lists the Oracle Clusterware operating version
crsctl lsmodules css - lists the Cluster Synchronization Services modules that can be used for debugging
crsctl lsmodules crs - lists the Cluster Ready Services modules that can be used for debugging
crsctl lsmodules evm - lists the Event Manager modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by adding a 'trace'
argument at the very front. Example: crsctl trace check css
{node1:crs}/crs #
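
A few everyday checks based on the options listed above (run from any node ; the output will of course
depend on your configuration) :

{node1:crs}/crs # crsctl check crs                    checks the viability of the Oracle Clusterware stack
{node1:crs}/crs # crsctl query css votedisk           lists the voting disks in use
{node1:crs}/crs # crsctl query crs activeversion      shows the Oracle Clusterware operating version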



ocrconfig : To backup, export, import repair … OCR (Oracle Clusterware
Registry) contents.

{node1:crs}/crs # ocrconfig
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.

Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]
- Export cluster register contents to a
file
-import <filename> - Import cluster registry contents from a
file
-upgrade [<user> [<group>]]
- Upgrade cluster registry from previous
version
-downgrade [-version <version string>]
- Downgrade cluster registry to the
specified version
-backuploc <dirname> - Configure periodic backup location
-showbackup [auto|manual] - Show backup information
-manualbackup - Perform OCR backup
-restore <filename> - Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename> - Repair local OCR configuration
-help - Print out this help information

Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.

{node1:crs}/crs #

ocrdump : To dump the content of the OCR.


{node1:crs}/crs # ocrdump -help
Name:
ocrdump - Dump contents of Oracle Cluster Registry to a file.

Synopsis:
ocrdump [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>]
[-xml] [-noheader]

Description:
Default filename is OCRDUMPFILE. Examples are:

prompt> ocrdump
writes cluster registry contents to OCRDUMPFILE in the current directory

prompt> ocrdump MYFILE


writes cluster registry contents to MYFILE in the current directory

prompt> ocrdump -stdout -keyname SYSTEM


writes the subtree of SYSTEM in the cluster registry to stdout

prompt> ocrdump -stdout -xml


writes cluster registry contents to stdout in xml format

Notes:
The header information will be retrieved based on best effort basis.
A log file will be created in



$ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure
you have file creation privileges in the above directory before
running this tool.

{node1:crs}/crs #



srvctl : To administrate the clusterware resources.

{node1:crs}/crs # srvctl
Usage: srvctl <command> <object> [<options>]
command: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
objects: database|instance|service|nodeapps|asm|listener
For detailed help on each command and object and its options use:
srvctl <command> <object> -h
{node1:crs}/crs #

srvctl –h : To obtain all possible commands.

{node1:crs}/crs # srvctl -h
Usage: srvctl [-V]

Usage: srvctl add database -d <name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-A <name|ip>/netmask] [-r {PRIMARY |
PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl add instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl add service -d <name> -s <service_name> -r "<preferred_list>" [-a "<available_list>"] [-P <TAF_policy>]
Usage: srvctl add service -d <name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl add nodeapps -n <node_name> -A <name|ip>/netmask[/if1[|if2|...]]
Usage: srvctl add asm -n <node_name> -i <asm_inst_name> -o <oracle_home> [-p <spfile>]
Usage: srvctl add listener -n <node_name> -o <oracle_home> [-l <listener_name>]

Usage: srvctl config database


Usage: srvctl config database -d <name> [-a] [-t]
Usage: srvctl config service -d <name> [-s <service_name>] [-a] [-S <level>]
Usage: srvctl config nodeapps -n <node_name> [-a] [-g] [-s] [-l] [-h]
Usage: srvctl config asm -n <node_name>
Usage: srvctl config listener -n <node_name>

Usage: srvctl disable database -d <name>


Usage: srvctl disable instance -d <name> -i "<inst_name_list>"
Usage: srvctl disable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl disable asm -n <node_name> [-i <inst_name>]

Usage: srvctl enable database -d <name>


Usage: srvctl enable instance -d <name> -i "<inst_name_list>"
Usage: srvctl enable service -d <name> -s "<service_name_list>" [-i <inst_name>]
Usage: srvctl enable asm -n <node_name> [-i <inst_name>]

Usage: srvctl getenv database -d <name> [-t "<name_list>"]


Usage: srvctl getenv instance -d <name> -i <inst_name> [-t "<name_list>"]
Usage: srvctl getenv service -d <name> -s <service_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps -n <node_name> [-t "<name_list>"]

Usage: srvctl modify database -d <name> [-n <db_name] [-o <ohome>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY |
LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-y {AUTOMATIC | MANUAL}]
Usage: srvctl modify instance -d <name> -i <inst_name> -n <node_name>
Usage: srvctl modify instance -d <name> -i <inst_name> {-s <asm_inst_name> | -r}
Usage: srvctl modify service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <name> -s <service_name> -n -i <preferred_list> [-a <available_list>] [-f]
Usage: srvctl modify asm -n <node_name> -i <asm_inst_name> [-o <oracle_home>] [-p <spfile>]

Usage: srvctl relocate service -d <name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]

Usage: srvctl remove database -d <name> [-f]


Usage: srvctl remove instance -d <name> -i <inst_name> [-f]
Usage: srvctl remove service -d <name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps -n "<node_name_list>" [-f]
Usage: srvctl remove asm -n <node_name> [-i <asm_inst_name>] [-f]
Usage: srvctl remove listener -n <node_name> [-l <listener_name>]

Usage: srvctl setenv database -d <name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}


Usage: srvctl setenv instance -d <name> [-i <inst_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv service -d <name> [-s <service_name>] {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl setenv nodeapps -n <node_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}

Usage: srvctl start database -d <name> [-o <start_options>]


Usage: srvctl start instance -d <name> -i "<inst_name_list>" [-o <start_options>]
Usage: srvctl start service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-o <start_options>]
Usage: srvctl start nodeapps -n <node_name>
Usage: srvctl start asm -n <node_name> [-i <asm_inst_name>] [-o <start_options>]
Usage: srvctl start listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl status database -d <name> [-f] [-v] [-S <level>]


Usage: srvctl status instance -d <name> -i "<inst_name_list>" [-f] [-v] [-S <level>]
Usage: srvctl status service -d <name> [-s "<service_name_list>"] [-f] [-v] [-S <level>]
Usage: srvctl status nodeapps -n <node_name>
Usage: srvctl status asm -n <node_name>

Usage: srvctl stop database -d <name> [-o <stop_options>]


Usage: srvctl stop instance -d <name> -i "<inst_name_list>" [-o <stop_options>]
Usage: srvctl stop service -d <name> [-s "<service_name_list>" [-i <inst_name>]] [-f]
Usage: srvctl stop nodeapps -n <node_name> [-r]
Usage: srvctl stop asm -n <node_name> [-i <asm_inst_name>] [-o <stop_options>]
Usage: srvctl stop listener -n <node_name> [-l <lsnr_name_list>]

Usage: srvctl unsetenv database -d <name> -t "<name_list>"


Usage: srvctl unsetenv instance -d <name> [-i <inst_name>] -t "<name_list>"
Usage: srvctl unsetenv service -d <name> [-s <service_name>] -t "<name_list>"
Usage: srvctl unsetenv nodeapps -n <node_name> -t "<name_list>"
{node1:crs}/crs #
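
A few typical invocations are sketched below; the database name RAC11G, the instance name RAC11G2 and the node name node2 are placeholders to adapt to your own configuration :

   srvctl status database -d RAC11G                          # state of all instances of the database
   srvctl status nodeapps -n node2                           # VIP, GSD, ONS and listener status on node2
   srvctl stop instance -d RAC11G -i RAC11G2 -o immediate    # stop a single instance
   srvctl start instance -d RAC11G -i RAC11G2
   srvctl stop database -d RAC11G -o immediate               # stop every instance of the database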



crs_stat -help : To query the state of the Oracle Clusterware resources.
{node1:crs}/crs # crs_stat -help
Usage: crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
crs_stat -p [resource_name [...]] [-q]
crs_stat [-a] application -g
crs_stat [-a] application -r [-c cluster_member]
crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
crs_stat -ls [resource_name [...]] [-q]

{node1:crs}/crs #
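
In practice the most useful forms are the following (the resource name ora.RAC11G.db is just an example) :

   crs_stat -t                   # compact table with the target and current state of every resource
   crs_stat -p ora.RAC11G.db     # full profile of a single resource
   crs_stat -ls                  # resources with their owner and permissions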

cluvfy : To run pre- and post-checks of the installation stages.


{node1:crs}/crs # cluvfy

USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]

{node1:crs}/crs #

cluvfy stage -list


{node1:crs}/crs # cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:


-post hwos : post-check for hardware and operating system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre dbinst : pre-check for database installation
-pre dbcfg : pre-check for database configuration

{node1:crs}/crs #



cluvfy stage -help

{node1:crs}/crs # cluvfy stage -help

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

SYNTAX (for Stages):


cluvfy stage -post hwos -n <node_list> [ -s <storageID_list> ] [-verbose]
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -pre crsinst -n <node_list> [-r { 10gR1 | 10gR2 | 11gR1 } ]
[ -c <ocr_location> ] [ -q <voting_disk> ]
[ -osdba <osdba_group> ]
[ -orainv <orainventory_group> ] [-verbose]
cluvfy stage -post crsinst -n <node_list> [-verbose]
cluvfy stage -pre dbinst -n <node_list> [-r { 10gR1 | 10gR2 | 11gR1 } ]
[ -osdba <osdba_group> ] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-verbose]

{node1:crs}/crs #
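
For instance, typical checks around the clusterware and database installations might be sketched as follows (node1,node2 and the oinstall/dba groups are placeholders for your own node list and OS groups) :

   cluvfy stage -pre crsinst -n node1,node2 -r 11gR1 -orainv oinstall -verbose
   cluvfy stage -post crsinst -n node1,node2 -verbose
   cluvfy stage -pre dbinst -n node1,node2 -r 11gR1 -osdba dba -verbose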

cluvfy comp -list

{node1:crs}/crs # cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:


nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers

{node1:crs}/crs #



cluvfy comp -help

{node1:crs}/crs # cluvfy comp -help

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

SYNTAX (for Components):
cluvfy comp nodereach -n <node_list> [ -srcnode <node> ] [-verbose]
cluvfy comp nodecon -n <node_list> [ -i <interface_list> ] [-verbose]
cluvfy comp cfs [ -n <node_list> ] -f <file_system> [-verbose]
cluvfy comp ssa [ -n <node_list> ] [ -s <storageID_list> ] [-verbose]
cluvfy comp space [ -n <node_list> ] -l <storage_location>
                  -z <disk_space> {B|K|M|G} [-verbose]
cluvfy comp sys [ -n <node_list> ] -p { crs | database } [-r { 10gR1 | 10gR2 | 11gR1 } ]
                [ -osdba <osdba_group> ] [ -orainv <orainventory_group> ] [-verbose]
cluvfy comp clu [ -n <node_list> ] [-verbose]
cluvfy comp clumgr [ -n <node_list> ] [-verbose]
cluvfy comp ocr [ -n <node_list> ] [-verbose]
cluvfy comp crs [ -n <node_list> ] [-verbose]
cluvfy comp nodeapp [ -n <node_list> ] [-verbose]
cluvfy comp admprv [ -n <node_list> ] [-verbose]
                   -o user_equiv [-sshonly]
                   -o crs_inst [-orainv <orainventory_group> ]
                   -o db_inst [-osdba <osdba_group> ]
                   -o db_config -d <oracle_home>
cluvfy comp peer [ -refnode <node> ] -n <node_list> [-r { 10gR1 | 10gR2 | 11gR1 } ]
                 [ -orainv <orainventory_group> ] [ -osdba <osdba_group> ] [-verbose]

{node1:crs}/crs #
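
For example (node1,node2 and the /dev/rhdisk4 device are placeholders, not values used elsewhere in this guide) :

   cluvfy comp nodecon -n node1,node2 -verbose                          # public and private node connectivity
   cluvfy comp ssa -n node1,node2 -s /dev/rhdisk4 -verbose              # shared storage accessibility for one candidate disk
   cluvfy comp admprv -n node1,node2 -o user_equiv -sshonly -verbose    # user equivalence over ssh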



21 APPENDIX D : EMPTY TABLES TO USE FOR INSTALLATION

21.1 Network document to ease your installation


(Network layout, Hosts naming, etc ...)

Each node MUST have the same network interface layout and usage. Identify your PUBLIC and PRIVATE network interfaces, and fill in the table on the next page.



Public, Private, and Virtual Host Name layout

For each node participating in the cluster, fill in the hostname and associated IP for :

• Public network
• Virtual network
• Private network

Node   Interface           Host Type                    Network            Defined hostname     Assigned IP       Observation
-----  ------------------  ---------------------------  -----------------  -------------------  ----------------  -----------
  1    en____ (public)     RAC public node name         Public network     __________________   _______________
       en____ (public)     RAC VIP node name            Public network     __________________   _______________
       en____ (private)    RAC interconnect node name   Private network    __________________   _______________

  2    en____ (public)     RAC public node name         Public network     __________________   _______________
       en____ (public)     RAC VIP node name            Public network     __________________   _______________
       en____ (private)    RAC interconnect node name   Private network    __________________   _______________

  3    en____ (public)     RAC public node name         Public network     __________________   _______________
       en____ (public)     RAC VIP node name            Public network     __________________   _______________
       en____ (private)    RAC interconnect node name   Private network    __________________   _______________

  4    en____ (public)     RAC public node name         Public network     __________________   _______________
       en____ (public)     RAC VIP node name            Public network     __________________   _______________
       en____ (private)    RAC interconnect node name   Private network    __________________   _______________

  …    en____ (public)     RAC public node name         Public network     __________________   _______________
       en____ (public)     RAC VIP node name            Public network     __________________   _______________
       en____ (private)    RAC interconnect node name   Private network    __________________   _______________



21.2 Steps Checkout

Action                                        Done ?
                                              Node 1   Node 2   Node 3   Node 4   Node 5   Node 6   Node ..



21.3 Disks document to ease disk preparation in your implementation
(OCR, Voting, ASM spfile and ASM disks)

                       LUN      Device               Node 1               Node 2               Node 3               Node 4
Disks                  Number   Name                 hdisk  Maj.  Min.    hdisk  Maj.  Min.    hdisk  Maj.  Min.    hdisk  Maj.  Min.
                                                            Num.  Num.           Num.  Num.           Num.  Num.           Num.  Num.
Disk example           L0       /dev/example_disk    hdisk1  30    2      hdisk0  30    2      hdisk1  30    2      hdisk1  30    2
Disk for OCR 1
Disk for OCR 2
Disk for Voting 1
Disk for Voting 2
Disk for Voting 3
Disk for ASM spfile
Disk 1 for ASM
Disk 2 for ASM
Disk 3 for ASM
Disk 4 for ASM
Disk 5 for ASM
Disk 6 for ASM
Disk 7 for ASM
Disk 8 for ASM
Disk 9 for ASM
Disk 10 for ASM



22 DOCUMENTS, BOOKS TO LOOK AT ..
IBM® DS4000 and Oracle® Database for AIX.

Abstract: The IBM® System Storage DS4000 is very well suited for Oracle Databases. Learn how to best use the DS4000 in an AIX® environment by understanding
applicable data layout techniques and other important considerations.
Document Author : Jeff Mucher (Advanced Technical Support)
Document ID : PRS3044
Product(s) covered : AIX 5L; DS4000; DS4700; DS4800; GPFS; Oracle; Oracle RAC; Oracle Database
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3044

Oracle Automatic Storage Management: Under-the-Hood & Practical Deployment Guide
By: Nitin Vengurlekar, Murali Vallath, Rich Long
McGraw Hill

Oracle Database 10g High Availability with RAC, Flashback and Data Guard
By: Matthew Hart, Scott Jesse
McGraw Hill

