
Sun Cluster 3.0-3.1 Release Notes Supplement

Sun Microsystems, Inc.


4150 Network Circle
Santa Clara, CA 95054
U.S.A.
Part No: 816-3381-24
April 2006, Revision A



Copyright 2006 Sun Microsystems, Inc.

4150 Network Circle, Santa Clara, CA 95054 U.S.A.

All rights reserved.

Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.
U.S. Government Rights - Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions
of the FAR and its supplements.
This distribution may include materials developed by third parties.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other
countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, Java, and Solaris are trademarks or registered trademarks of Sun
Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. This
product includes software developed by the Apache Software Foundation (http://www.apache.org/).
The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of
Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the
Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license
agreements.
Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in
other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export
or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially
designated nationals lists is strictly prohibited.
DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO
THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Copyright 2006 Sun Microsystems, Inc.

4150 Network Circle, Santa Clara, CA 95054 U.S.A.

Tous droits réservés.

Sun Microsystems, Inc. détient les droits de propriété intellectuelle relatifs à la technologie incorporée dans le produit qui est décrit dans ce document. En particulier,
et ce sans limitation, ces droits de propriété intellectuelle peuvent inclure un ou plusieurs brevets américains ou des applications de brevet en attente aux Etats-Unis et
dans d'autres pays.
Cette distribution peut comprendre des composants développés par des tierces personnes.
Certaines composants de ce produit peuvent être dérivées du logiciel Berkeley BSD, licenciés par l'Université de Californie. UNIX est une marque déposée aux
Etats-Unis et dans d'autres pays; elle est licenciée exclusivement par X/Open Company, Ltd.
Sun, Sun Microsystems, le logo Sun, le logo Solaris, le logo Java Coffee Cup, docs.sun.com, Java et Solaris sont des marques de fabrique ou des marques déposées de
Sun Microsystems, Inc. aux Etats-Unis et dans d'autres pays. Toutes les marques SPARC sont utilisées sous licence et sont des marques de fabrique ou des marques
déposées de SPARC International, Inc. aux Etats-Unis et dans d'autres pays. Les produits portant les marques SPARC sont basés sur une architecture développée par
Sun Microsystems, Inc. Ce produit inclut le logiciel développé par la base de Apache Software Foundation (http://www.apache.org/).
L'interface d'utilisation graphique OPEN LOOK et Sun a été développée par Sun Microsystems, Inc. pour ses utilisateurs et licenciés. Sun reconnaît les efforts de
pionniers de Xerox pour la recherche et le développement du concept des interfaces d'utilisation visuelle ou graphique pour l'industrie de l'informatique. Sun détient
une licence non exclusive de Xerox sur l'interface d'utilisation graphique Xerox, cette licence couvrant également les licenciés de Sun qui mettent en place l'interface
d'utilisation graphique OPEN LOOK et qui, en outre, se conforment aux licences écrites de Sun.
Les produits qui font l'objet de cette publication et les informations qu'il contient sont régis par la législation américaine en matière de contrôle des exportations et
peuvent être soumis au droit d'autres pays dans le domaine des exportations et importations. Les utilisations finales, ou utilisateurs finaux, pour des armes nucléaires,
des missiles, des armes chimiques ou biologiques ou pour le nucléaire maritime, directement ou indirectement, sont strictement interdites. Les exportations ou
réexportations vers des pays sous embargo des Etats-Unis, ou vers des entités figurant sur les listes d'exclusion d'exportation américaines, y compris, mais de manière
non exclusive, la liste de personnes qui font objet d'un ordre de ne pas participer, d'une façon directe ou indirecte, aux exportations des produits ou des services qui
sont régis par la législation américaine en matière de contrôle des exportations et la liste de ressortissants spécifiquement désignés, sont rigoureusement interdites.
LA DOCUMENTATION EST FOURNIE "EN L'ETAT" ET TOUTES AUTRES CONDITIONS, DECLARATIONS ET GARANTIES EXPRESSES OU TACITES
SONT FORMELLEMENT EXCLUES, DANS LA MESURE AUTORISEE PAR LA LOI APPLICABLE, Y COMPRIS NOTAMMENT TOUTE GARANTIE
IMPLICITE RELATIVE A LA QUALITE MARCHANDE, A L'APTITUDE A UNE UTILISATION PARTICULIERE OU A L'ABSENCE DE CONTREFAÇON.


Contents

Sun Cluster 3.1 8/05 Release Notes Supplement ...................................................................................13


Revision Record ............................................................................................................................................13
New Features .................................................................................................................................................15
Required Patches ...................................................................................................................................15
Support for Oracle 10g R2 Real Application Clusters on the x64 Platform ...................................16
Support for Sun Multipathing Software on Solaris 10 x86 Based Configurations ........................16
Support for InfiniBand Adapters on the Cluster Interconnect .......................................................16
Support for the Sun StorEdge QFS Shared File System With Solaris Volume Manager for Sun
Cluster ....................................................................................................................................................17
Support for Oracle 10g R1 and 10g R2 Real Application Clusters on the SPARC Platform ........19
How to Disable the Oracle GSD .............................................................................................20
Support for Oracle 10g on the x64 Platform With the Solaris 10 OS ..............................................20
Support for SAP Version 6.40 ................................................................................................20
Support for MaxDB Version 7.5 ..........................................................................................................21
Support for SAP liveCache Version 7.5 ..............................................................................................21
How to Configure the SAP liveCache Administrator User .................................................21
How to Confirm That the SAP liveCache Administrator User Can Run the lcinit
Command ......................................................................................................................................22
Support for Oracle 10g on the x86 Platform ......................................................................................22
Restrictions and Requirements ...................................................................................................................23
Fixed Problems .............................................................................................................................................23
Known Problems ..........................................................................................................................................23
Cluster Problems When Using the ipge3 Port as Interconnect ......................................................23
Must Configure Runtime Linking Environment on SAP Unicode Systems (4996643) ...............24
Use of the Console as ttya on a V440 Causes Unresponsiveness of WebLogic Server (6182519) ...24
Solaris Operating System 10 Patch 118822-18 and Later Can Negatively Impact Cluster Stability
When Run on SPARC Platform with PxFS (6335093) .....................................................................25
Java ES 4 Installer Fails to Install on Solaris 10 End User Cluster (6363536) .................................25
Localization Packages For Sun Java Web Console Do Not Exist in the Sun Cluster Standalone
Distribution (6299614) ........................................................................................................................26


How to Upgrade Sun Java Web Console Localization Packages ........................................26
IPv6 Scalable Service Support is Not Enabled by Default (6332656) ..............................................27
How to Manually Enable IPv6 Scalable Service Support .....................................................27
Known Documentation Problems .............................................................................................................28
Sun Cluster Concepts Guide for Solaris OS .......................................................................................28
Software Installation Guide .................................................................................................................28
Sun Cluster Data Service for Solaris Containers Guide ...................................................................29
How to Install a Zone and Perform the Initial Internal Zone Configuration ...................30
How to Patch to the Global Zone and Local Zones ..............................................................31
Sun Cluster Data Service for SAP Guide for Solaris OS ...................................................................31
Sun Cluster Data Service for SAP DB Guide for Solaris OS .............................................................32
Sun Cluster Data Service for SAP liveCache Guide for Solaris OS .................................................32
Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS .........................33
Release Notes .........................................................................................................................................35

Sun Cluster 3.1 9/04 Release Notes Supplement ...................................................................................37


Revision Record ............................................................................................................................................37
New Features .................................................................................................................................................39
SPARC: Support for VxVM 4.1 and VxFS 4.1 ...................................................................................39
Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated
Mirroring ...............................................................................................................................................39
Configuring Internal Disk Mirroring During Installation ..................................................40
How to Configure Internal Disk Mirroring After the Cluster is Established ....................40
How to Remove an Internal Disk Mirror ..............................................................................42
SPARC: Support for Sun StorEdge QFS With Oracle 10g Real Application Clusters ...................43
Support for Automatic Storage Management (ASM) With Oracle 10g Real Application Clusters
on the SPARC Platform ........................................................................................................................45
How to Use ASM with Oracle 10g Real Application Clusters .............................................46
Restrictions and Requirements ...................................................................................................................46
Restriction on SCI Card Placement ....................................................................................................46
Storage-Based Data Replication and Quorum Devices ....................................................................47
Known Problems ..........................................................................................................................................47
Bug ID 4333241 .....................................................................................................................................47
Bug ID 4804696 .....................................................................................................................................47
Bug ID 5107076 .....................................................................................................................................48

How to Replace the Sun Cluster Support Packages for Oracle Real Application Clusters ...48
Bug ID 5109935 .....................................................................................................................................49
Bug ID 6196936 .....................................................................................................................................49
Bug ID 6198608 .....................................................................................................................................49
Bug ID 6210418 .....................................................................................................................................50
Bug ID 6220218 .....................................................................................................................................50
Bug ID 6252555 .....................................................................................................................................50
Known Documentation Problems .............................................................................................................51
System Administration Guide .............................................................................................................51
Software Installation Guide .................................................................................................................51
Man Pages ..............................................................................................................................................52

Sun Cluster 3.1 4/04 Release Notes Supplement ...................................................................................55


Revision Record ............................................................................................................................................55
New Features .................................................................................................................................................57
SPARC: Support for VxVM 4.0 and VxFS 4.0 ...................................................................................57
Support for the Sun StorEdge QFS File System .................................................................................57
Support for Oracle 10g Real Application Clusters on the SPARC Platform ..................................57
Restrictions and Requirements ...................................................................................................................59
Compiling Data Services That Are Written in C++ ..........................................................................59
Known Problems ..........................................................................................................................................60
Bug ID 5095543 .....................................................................................................................................60
Bug ID 5066167 .....................................................................................................................................60
Known Documentation Problems .............................................................................................................61
Software Installation Guide .................................................................................................................61

Sun Cluster 3.1 10/03 Release Notes Supplement .................................................................................63


Revision Record ............................................................................................................................................63
New Features .................................................................................................................................................65
Restrictions and Requirements ...................................................................................................................65
Compiling Data Services That Are Written in C++ ..........................................................................65
Upgrading Sun Cluster 3.1 10/03 Software on Clusters That Run Sun StorEdge Availability Suite
3.1 Software ............................................................................................................................................66
Restriction on Rolling Upgrade and VxVM ......................................................................................66
Known Problems ..........................................................................................................................................67
Bug ID 4848612 .....................................................................................................................................67


Bug ID 4983696 .....................................................................................................................................67
Known Documentation Problems .............................................................................................................67
Software Installation Guide .................................................................................................................67

Sun Cluster Data Services 3.1 10/03 Release Notes Supplement ........................................................71
Revision Record ............................................................................................................................................71
New Features .................................................................................................................................................72
Support for Oracle 10g .........................................................................................................................72
WebLogic Server Version 8.x ...............................................................................................................73
Restrictions and Requirements ...................................................................................................................73
Known Problems ..........................................................................................................................................74
Some Data Services Cannot be Upgraded by Using the scinstall Utility ...................................74
How to Upgrade Data Services That Cannot be Upgraded by Using scinstall .................74
Sun Cluster HA for liveCache nsswitch.conf requirements for passwd make NIS unusable
(4904975) ...............................................................................................................................................75
Known Documentation Problems .............................................................................................................75

Sun Cluster 3.1 Release Notes Supplement ............................................................................................77


Revision Record ............................................................................................................................................77
New Features .................................................................................................................................................80
Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes ...........80
Restrictions and Requirements ...................................................................................................................81
Compiling Data Services That Are Written in C++ ..........................................................................81
Reserved RPC Program Numbers ......................................................................................................81
Changing Quorum Device Connectivity ...........................................................................................81
Required VxFS Default Stack Size Increase .......................................................................................81
Clarification of the IPv6 Restriction ...................................................................................82
Fixed Problems .............................................................................................................................................82
Bug ID 4840853 .....................................................................................................................................82
Bug ID 4867584 .....................................................................................................................................82
Known Problems ..........................................................................................................................................82
Bug ID 4781666 .....................................................................................................................................82
Bug ID 4863254 .....................................................................................................................................83
Bug ID 4867560 .....................................................................................................................................83
Bug ID 4920156 .....................................................................................................................................83

Known Documentation Problems .............................................................................................................83


Software Installation Guide .................................................................................................................83

Sun Cluster Data Services 3.1 5/03 Release Notes Supplement ..........................................................87
Revision Record ............................................................................................................................................87
New Features .................................................................................................................................................88
Support for Oracle 10g .........................................................................................................................88
Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes ...........89
Restrictions and Requirements ...................................................................................................................90
Known Problems ..........................................................................................................................................91
Known Documentation Problems .............................................................................................................91
Sun Cluster 3.1 Data Service for NetBackup ......................................................................................91
Sun Cluster 3.1 Data Service for Sun ONE Application Server .......................................................91
Release Notes .........................................................................................................................................91

Sun Cluster 3.0 5/02 Release Notes Supplement ...................................................................................93


Revision Record ............................................................................................................................................93
New Features ...............................................................................................................................................102
SPARC: Support for VxVM 4.0 and VxFS 4.0 .................................................................................103
Support for Oracle 10g .......................................................................................................................103
Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes .........104
Support for VERITAS NetBackup 4.5 ..............................................................................................104
Security Hardening for Solaris 9 .......................................................................................................108
Failover File System (HAStoragePlus) .............................................................................................108
RAID 5 on Sun StorEdge 99x0 Storage Arrays ................................................................................109
Apache 2.0 ...........................................................................................................................................109
New Guidelines for the swap Partition .............................................................................................109
Support for Oracle Real Application Clusters on the Cluster File System ...................................109
How to Install Sun Cluster Support for Oracle Real Application Clusters Packages With
the Cluster File System ............................................................................................................... 110
Restrictions and Requirements ................................................................................................................. 111
Compiling Data Services That Are Written in C++ ........................................................................ 111
Reserved RPC Program Numbers .................................................................................................... 111
Dynamic Multipathing (DMP) ......................................................................................................... 111
Changing Quorum Device Connectivity ......................................................................................... 112
Storage Topologies Replaced by New Requirements ...................................................................... 112
Shared Storage Restriction Relaxed .................................................................................................. 112


EMC Storage Restriction ................................................................................................................... 112
Framework Restrictions and Requirements .................................................................................... 113
Oracle UDLM Requirement .............................................................................................................. 113
Fixed Problems ........................................................................................................................................... 114
BugId 4818874 ..................................................................................................................................... 114
Known Problems ........................................................................................................................................ 114
Bug ID 4346123 ................................................................................................................................... 114
Bug ID 4662264 ................................................................................................................................... 115
Bug ID 4665886 ................................................................................................................................... 115
Bug ID 4668496 ................................................................................................................................... 115
Bug ID 4680862 ................................................................................................................................... 115
Bug ID 4779686 ................................................................................................................................... 115
BugId 4836405 ..................................................................................................................................... 116
BugID 4838619 .................................................................................................................................... 116
Known Documentation Problems ........................................................................................................... 117
System Administration Guide ........................................................................................................... 117
Hardware Guide .................................................................................................................................. 117
Software Installation Guide ............................................................................................................... 119
How to Upgrade to the Sun Cluster 3.0 5/02 Software Update Release ........................... 119
Data Services Installation and Configuration Guide ......................................................126
Supplement ..........................................................................................................................................134
Release Notes .......................................................................................................................................134
Man Pages ............................................................................................................................................135

Scalable Cluster Topology ........................................................................................................................139


Overview of Scalable Topology .................................................................................................................139
Adding or Removing a Cluster Node .......................................................................................................140
Adding a Cluster Node .......................................................................................................................140
Removing a Cluster Node ..................................................................................................................140
How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater
Than Two-Node Connectivity ..........................................................................................................142

Installing and Configuring Sun Cluster HA for SAP liveCache ...........................................145


Sun Cluster HA for SAP liveCache Overview ..........................................................................................145
Installing and Configuring Sun Cluster HA for SAP liveCache ............................................147

Planning the Sun Cluster HA for SAP liveCache Installation and Configuration ..............................148
Configuration Requirements ............................................................................................................148
Standard Data Service Configurations .............................................................................................149
Configuration Considerations ..........................................................................................................149
Configuration Planning Questions ..................................................................................................149
Preparing the Nodes and Disks .................................................................................................................150
How to Prepare the Nodes ............................................................................................................150
Installing and Configuring SAP liveCache ..............................................................................................151
How to Install and Configure SAP liveCache .............................................................................151
How to Enable SAP liveCache to Run in a Cluster .....................................................................151
Verifying the SAP liveCache Installation and Configuration ................................................................152
How to Verify the SAP liveCache Installation and Configuration ...........................................152
Installing the Sun Cluster HA for SAP liveCache Packages ...................................................................153
How to Install the Sun Cluster HA for SAP liveCache Packages ..............................................153
Registering and Configuring the Sun Cluster HA for SAP liveCache ..................................................154
Sun Cluster HA for SAP liveCache Extension Properties ..............................................................154
How to Register and Configure Sun Cluster HA for SAP liveCache ........................................156
Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration ..............................159
How to Verify the Sun Cluster HA for SAP liveCache Installation and Configuration .........159
Understanding Sun Cluster HA for SAP liveCache Fault Monitors .....................................................160
Extension Properties ..........................................................................................................................160
Monitor Check Method .....................................................................................................................160
Probing Algorithm and Functionality ..............................................................................................161

Installing and Configuring Sun Cluster HA for Sybase ASE ................................................................163


Installing and Configuring Sun Cluster HA for Sybase ASE .................................................................163
Preparing to Install Sun Cluster HA for Sybase ASE ..............................................................................164
Installing the Sybase ASE 12.0 Software ...................................................................................................164
How to Prepare the Nodes ............................................................................................................165
How to Install the Sybase Software ..............................................................................................166
How to Verify the Sybase ASE Installation ..................................................................................168
Creating the Sybase ASE Database Environment ...................................................................................168
How to Configure Sybase ASE Database Access With Solstice DiskSuite/Solaris Volume
Manager ...............................................................................................................................................168
How to Configure Sybase ASE Database Access With VERITAS Volume Manager ..............169
How to Create the Sybase ASE Database Environment .............................................................170
Installing the Sun Cluster HA for Sybase ASE Package ..........................................................................171
How to Install Sun Cluster HA for Sybase ASE Packages ..........................................................172


Registering and Configuring Sun Cluster HA for Sybase ASE ..............................................................172
How to Register and Configure Sun Cluster HA for Sybase ASE .............................................172
Verifying the Sun Cluster HA for Sybase ASE Installation ....................................................................175
How to Verify the Sun Cluster HA for Sybase ASE Installation ................................................175
Understanding Sun Cluster HA for Sybase ASE Logging and Security Issues ....................................176
Sun Cluster HA for Sybase ASE Logging ..........................................................................................176
Important Security Issues ..................................................................................................................177
Configuring Sun Cluster HA for Sybase ASE Extension Properties .....................................................177
Sun Cluster HA for Sybase ASE Fault Monitor .......................................................................................180
Main Fault Monitor Process ..............................................................................................................180
Database-Client Fault Probe .............................................................................................................181
Extension Properties ..........................................................................................................................181

RSM Phase II: RSMRDT Driver Installation ............................................................................................183


Overview of the RSMRDT Driver .............................................................................................................183
Installing the RSMRDT Driver .........................................................................................................183
Restrictions ..........................................................................................................................................184
How to Install the SUNWscrdt Package ........................................................................................184
How to Uninstall the SUNWscrdt Package ...................................................................................184
How to Unload the RSMRDT Driver Manually .........................................................................185

Installing and Configuring Sun Cluster HA for SAP ..............................................................................187


Sun Cluster HA for SAP Overview ...........................................................................................................188
Installing and Configuring Sun Cluster HA for SAP ..............................................................................188
Planning the Sun Cluster HA for SAP Installation and Configuration ................................................190
Configuration Restrictions ................................................................................................................190
Configuration Requirements ............................................................................................................190
Standard Data Service Configurations .............................................................................................191
Configuration Considerations ..........................................................................................................192
Configuration Planning Questions ..................................................................................................194
Packages and Support ........................................................................................................................194
Upgrading Sun Cluster HA for SAP .........................................................................................................195
How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
Application Resource .........................................................................................................................195
Preparing the Nodes and Disks .................................................................................................................196


How to Prepare the Nodes ............................................................................................................196


Installing and Configuring SAP and Database ........................................................................................197
How to Install SAP and the Database ..........................................................................................197
How to Install an SAP Scalable Application Server ....................................................................197
How to Enable Failover SAP Instances to Run in a Cluster ......................................................200
Configuring Sun Cluster HA for DBMS ..................................................................................................201
Where to Go From Here ....................................................................................................................202
Verifying the SAP Installation ...................................................................................................................202
How to Verify SAP and the Database Installation with Central Instance ................................202
How to Verify an SAP Failover Application Server ....................................................................203
How to Verify an SAP Scalable Application Server .........................................................................204
Installing the Sun Cluster HA for SAP Packages .....................................................................................204
How to Install the Sun Cluster HA for SAP Packages ................................................................204
Registering and Configuring Sun Cluster HA for SAP ...........................................................................205
Sun Cluster HA for SAP Extension Properties ................................................................................205
How to Register and Configure Sun Cluster HA for SAP with Central Instance ...................211
How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service .............212
How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service .............213
Setting Up a Lock File .................................................................................................................................214
How to Set Up a Lock File for Central Instance or the Failover Application Server ..............215
How to Set Up a Lock File for Scalable Application Server .......................................................215
Verifying the Sun Cluster HA for SAP Installation and Configuration ................................................216
How to Verify Sun Cluster HA for SAP Installation and Configuration and Central
Instance ................................................................................................................................................216
How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover
Data Service .........................................................................................................................................217
How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data
Service ..................................................................................................................................................217
Understanding Sun Cluster HA for SAP Fault Monitor ........................................................................218
Sun Cluster HA for SAP Fault Probes for Central Instance ...........................................................218
Sun Cluster HA for SAP Fault Probes for Application Server .......................................................219

Upgrading Sun Cluster Software From Solaris 8 to Solaris 9 Software ............................................221


Upgrading to Sun Cluster 3.1 04/04 Software .........................................................................................221
Upgrade Requirements and Restrictions .........................................................................................222
How to Prepare the Cluster for Upgrade .....................................................................................223
How to Upgrade the Solaris Operating Environment ...............................................................225
How to Upgrade to Sun Cluster 3.1 04/04 Software ...................................................................227


How to Upgrade Sun Cluster-Module Software for Sun Management Center ......................233
How to Finish Upgrading to Sun Cluster 3.1 04/04 Software ...................................................234
Recovering From Storage Configuration Changes During Upgrade ...................................................235
How to Handle Storage Reconfiguration During an Upgrade .................................................235
How to Resolve Mistaken Storage Changes During an Upgrade .............................................236


CHAPTER 1

Sun Cluster 3.1 8/05 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 8/05
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 8/05 product. These online
release notes provide the most current information on the Sun Cluster 3.1 8/05 product. This
chapter includes the following information.

"Revision Record" on page 13
"New Features" on page 15
"Restrictions and Requirements" on page 23
"Known Problems" on page 23
"Known Documentation Problems" on page 28

Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.

TABLE 1-1  Sun Cluster 3.1 8/05 Release Notes Supplement Revision Record 2006

Revision Date       New Information

April 2006          "Incorrect Release Date for the First Update of the Solaris 10 OS" on page 28
                    "Support for Oracle 10g R2 Real Application Clusters on the x64 Platform" on page 16

March 2006          "Support for Sun Multipathing Software on Solaris 10 x86 Based Configurations" on page 16

January 2006        "Use of the Console as ttya on a V440 Causes Unresponsiveness of WebLogic Server (6182519)" on page 24
                    "Cluster Problems When Using the ipge3 Port as Interconnect" on page 23
                    "Must Configure Runtime Linking Environment on SAP Unicode Systems (4996643)" on page 24
                    "Java ES 4 Installer Fails to Install on Solaris 10 End User Cluster (6363536)" on page 25
                    "Package Dependency Change from 1.0 to 1.1 Causes Installation Problems (6316676)" on page 29

TABLE 1-2  Sun Cluster 3.1 8/05 Release Notes Supplement Revision Record 2005

Revision Date       New Information

November 2005       "Required Patches" on page 15
                    "Support for InfiniBand Adapters on the Cluster Interconnect" on page 16
                    "Support for the Sun StorEdge QFS Shared File System With Solaris Volume Manager for Sun Cluster" on page 17
                    "Support for Oracle 10g R1 and 10g R2 Real Application Clusters on the SPARC Platform" on page 19
                    "Support for Oracle 10g on the x64 Platform With the Solaris 10 OS" on page 20
                    "Support for SAP Version 6.40" on page 20
                    "Support for MaxDB Version 7.5" on page 21
                    "Support for SAP liveCache Version 7.5" on page 21
                    "Localization Packages For Sun Java Web Console Do Not Exist in the Sun Cluster Standalone Distribution (6299614)" on page 26

October 2005        Clarified an ambiguous statement about support of 16-node clusters in "Sun Cluster Concepts Guide for Solaris OS" on page 28.
                    "Clarification of the Restriction Concerning Solaris 10 Non-Global Zones" on page 29
                    Corrected errors in procedures for installing, configuring, and patching zones for use with Sun Cluster HA for Solaris Containers. See "Sun Cluster Data Service for Solaris Containers Guide" on page 29.
                    Manual steps are required to enable IPv6 support for scalable services. See "IPv6 Scalable Service Support is Not Enabled by Default (6332656)" on page 27.

September 2005      "Support for Oracle 10g on the x86 Platform" on page 22
                    Correction to the Release Notes support matrices for VxVM and VxFS. See "Incorrect Claim That VxVM 4.0 Is Supported on Solaris 10 OS (CR 6315895)" on page 35.

New Features
In addition to features documented in the Sun Cluster 3.1 8/05 Release Notes for Solaris OS, this
release now includes support for the following features.

Required Patches
Patches are required to run Sun Cluster 3.1 8/05 on certain operating system configurations. See the
following table to determine whether your operating system configuration requires a patch.

Solaris Operating System Version            Configuration                               Patch Number

Solaris 9                                   SPARC                                       117949-19

Solaris 9                                   x86                                         117909-19

Solaris 10 with Kernel Jumbo Patch          SCI adapter                                 120545-02
118822-15 or greater

Solaris 10                                  x64                                         120501-03

Solaris 10 with Kernel Jumbo Patch          SPARC using PxFS, with workaround           120500-03
118822-18 or greater                        for bug 6335093
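
As a quick check before applying patches, you can list the patch revisions that are already installed
on a node and compare them against the preceding table. The patch IDs in this example are taken
from the table; substitute the IDs that apply to your configuration.

# patchadd -p | grep 118822
# showrev -p | grep 120545

If no line is printed for a patch, or if the revision shown is lower than the revision that the table
requires, install the patch before you run Sun Cluster 3.1 8/05 on that configuration.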


Support for Oracle 10g R2 Real Application Clusters on the x64 Platform
Sun Cluster Support for Oracle Real Application Clusters supports Oracle 10g R2 Real Application
Clusters on the x64 platform with version 10 of the Solaris OS. For information about Sun Cluster
Support for Oracle Real Application Clusters, see Sun Cluster Data Service for Oracle Real
Application Clusters Guide for Solaris OS.
If you are using Sun Cluster Support for Oracle Real Application Clusters with Oracle 10g R2 Real
Application Clusters on the x64 platform, the following patches are required:

Sun Cluster patch 120498-02


Solaris patch 119964-05

If you are using a storage area network (SAN) to provide access to shared storage and I/O
multipathing is enabled, the following Solaris patches are also required:

119375-13
119716-10

Without these patches, a node can lose access to all shared storage if a physical link that provides
access to storage is disconnected or fails.

Support for Sun Multipathing Software on Solaris 10 x86 Based Configurations
The procedure "How to Install Sun Multipathing Software" in Sun Cluster Software Installation
Guide for Solaris OS is now also valid for x86 based configurations on the Solaris 10 operating system
(OS). Contact your Sun sales representative for details.
Sun Traffic Manager is not supported on x86 based configurations that run the Solaris 9 OS.
However, it is still supported on SPARC based configurations that run the Solaris 8 or Solaris 9 OS.

Support for InfiniBand Adapters on the Cluster Interconnect
The following requirements and guidelines apply to Sun Cluster configurations that use InfiniBand
adapters:

A two-node cluster must use InfiniBand switches. You cannot directly connect the InfiniBand
adapters to each other.

A single Sun InfiniBand switch, which has nine ports, can support up to nine nodes in a cluster.

Jumbo frames are not supported on a cluster that uses InfiniBand adapters.

VLANs are not supported on a cluster that uses InfiniBand adapters.

If only one InfiniBand adapter is installed on a cluster node, each of its two ports must be
connected to a different InfiniBand switch.

If two InfiniBand adapters are installed in a cluster node, leave the second port on each adapter
unused. For example, connect port 1 on HCA 1 to switch 1 and connect port 1 on HCA 2 to
switch 2.

Support for the Sun StorEdge QFS Shared File System With Solaris Volume Manager for Sun Cluster
Sun Cluster Support for Oracle Real Application Clusters supports the use of the Sun StorEdge QFS
shared file system with Solaris Volume Manager for Sun Cluster on the Solaris 10 OS. For more
information, see Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

Note - The Sun StorEdge QFS shared file system with Solaris Volume Manager for Sun Cluster is
supported only on the SPARC platform.

Additional information that you require if you are using this configuration is provided in the
subsections that follow.

Required Solaris OS Patches


To use Sun Cluster Support for Oracle Real Application Clusters with the Sun StorEdge QFS shared
file system and Solaris Volume Manager for Sun Cluster, install these patches in the following order
(a sample installation sequence follows the note below):
1. 120809-01
2. 120807-01
3. 118822-21
4. 120537-04

Note - Ensure that you install the stated revision or a higher revision of each patch in the preceding
list.
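
The following commands are a minimal sketch of that installation order. They assume that each
patch has already been downloaded and unpacked under /var/tmp on the node; the download
location is only an example, and any special instructions in each patch README take precedence.

# cd /var/tmp
# patchadd 120809-01
# patchadd 120807-01
# patchadd 118822-21
# patchadd 120537-04

Repeat the sequence on each cluster node, observing any reboot or single-user-mode requirements
that are stated in the patch READMEs.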

Storage Management Requirements for Oracle Files


"Storage Management Requirements for Oracle Files" in Sun Cluster Data Service for Oracle Real
Application Clusters Guide for Solaris OS states that you can use the Sun StorEdge QFS shared file
system only with hardware redundant array of independent disks (RAID) support. Ignore this
statement.

Using Sun StorEdge QFS Shared File System


"How to Use Sun StorEdge QFS Shared File System" in Sun Cluster Data Service for Oracle Real
Application Clusters Guide for Solaris OS states that you must use the Sun StorEdge QFS shared file
system with hardware RAID support. Ignore this statement.
You might use Solaris Volume Manager metadevices as devices for the shared file systems. In this
situation, ensure that the metaset and its metadevices are created and available on all nodes before
configuring the shared file systems.
For optimum performance, use Solaris Volume Manager for Sun Cluster to mirror the logical unit
numbers (LUNs) of your disk arrays. If you require striping, configure the striping with the file
systems.
Mirroring the LUNs of your disk arrays involves the following operations:

Creating RAID0 metadevices

Using the RAID0 metadevices or Solaris Volume Manager soft partitions of such metadevices as
Sun StorEdge QFS devices
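The following is a minimal sketch of these operations with Solaris Volume Manager for Sun Cluster; the disk set name, node names, DID devices, and soft-partition size are illustrative only:

# metaset -s oraset -M -a -h phys-schost-1 phys-schost-2
# metaset -s oraset -a /dev/did/rdsk/d10 /dev/did/rdsk/d11
# metainit -s oraset d1 1 1 /dev/did/rdsk/d10s0
# metainit -s oraset d100 -p d1 50g

The first two commands create a multi-owner disk set and add the shared DID devices to it. The metainit commands then create a RAID-0 (single-stripe) metadevice and a soft partition of that metadevice, either of which can be named as a Sun StorEdge QFS device.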

Installing Sun Cluster Support for Oracle Real Application Clusters Packages

In Step 5 of the procedure How to Install Sun Cluster Support for Oracle Real Application Clusters Packages in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS, run the command for users of Solaris Volume Manager for Sun Cluster:

# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_SVM_3.1/Solaris_N/Packages

N is the version number of the Solaris OS that you are using. For example, if you are using the Solaris 10 OS, N is 10.
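For example, on the Solaris 10 OS the command becomes the following, assuming the distribution CD-ROM is mounted at the standard /cdrom/cdrom0 location:

# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_SVM_3.1/Solaris_10/Packages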

Installing Oracle Real Application Clusters Software

For instructions, see Installing Oracle Real Application Clusters Software in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

If you are installing the Oracle binary files and Oracle configuration files on a shared file system, specify the absolute paths to the file system when the Oracle installation tool requests this information. Do not use a symbolic link whose target is the shared file system. Examples of shared file systems are the Sun StorEdge QFS shared file system and the cluster file system.

Removing Sun Cluster Support for Oracle Real Application Clusters From a Cluster

Before you begin: If you are using Sun StorEdge QFS shared file systems on Solaris Volume Manager metadevices, remove these items in the following order:

1. The resource groups that contain resources for the Sun StorEdge QFS metadata servers of these file systems

2. The Sun StorEdge QFS shared file systems

In Step 5 of the procedure How to Remove Sun Cluster Support for Oracle Real Application Clusters From a Cluster in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS, run the command for users of Solaris Volume Manager for Sun Cluster:

# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWscmd

Removing Sun Cluster Support for Oracle Real Application Clusters From Selected Nodes

Before you begin: If you are using Sun StorEdge QFS shared file systems on Solaris Volume Manager metadevices, remove these items in the following order:

1. Each affected node from the node list of the resource groups that contain resources for the Sun StorEdge QFS metadata servers of these file systems
   For instructions for removing a node from a resource group's node list, see Removing a Node From a Resource Group in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

2. The configuration of the Sun StorEdge QFS shared file systems from each affected node

In Step 4 of the procedure How to Remove Sun Cluster Support for Oracle Real Application Clusters From Selected Nodes in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS, run the command for users of Solaris Volume Manager for Sun Cluster:

# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWscmd

Support for Oracle 10g R1 and 10g R2 Real Application Clusters on the SPARC Platform

Sun Cluster Support for Oracle Real Application Clusters supports Oracle 10g R1 and 10g R2 Real Application Clusters on the SPARC platform with versions 8, 9, and 10 of the Solaris OS.

If you are using Oracle 10.1.0.4 through Oracle 10g R2 Real Application Clusters with Sun Cluster Support for Oracle Real Application Clusters, you must disable the Oracle Global Services Daemon (GSD).

Note - Disabling the Oracle GSD does not enable Oracle 10g R1 or Oracle 10g R2 to coexist with Oracle 9.2.


How to Disable the Oracle GSD

Perform this task on each node of the cluster.

1. Stop the Oracle GSD.

   # crs-home/bin/crs_stop ora.nodename.gsd

   crs-home    The home directory for Oracle Cluster Ready Services (CRS)
   nodename    The name of the node where you are disabling the GSD

2. Prevent the Oracle GSD from being started if the node is rebooted.

   # crs-home/bin/crs_unregister ora.nodename.gsd

   crs-home    The home directory for Oracle CRS
   nodename    The name of the node where you are disabling the GSD

Support for Oracle 10g on the x64 Platform With the Solaris 10 OS

Sun Cluster HA for Oracle supports version 10g of Oracle with the Solaris 10 OS on the x64 platform. For more information, see Sun Cluster Data Service for Oracle Guide for Solaris OS.

If you are using Sun Cluster HA for Oracle with version 10g of Oracle on the x64 platform, you must install the Oracle application files on a highly available local file system. Do not install these files on the cluster file system.

Support for SAP Version 6.40

Sun Cluster HA for SAP supports version 6.40 of SAP. For more information, see Sun Cluster Data Service for SAP Guide for Solaris OS.

Steps in this guide that apply specifically to SAP 6.10 and SAP 6.20 also apply to SAP 6.40.

When planning Sun Cluster HA for SAP installation and configuration or performing the following procedures, consult http://service.sap.com/ha for information about updates to SAP profiles.

How to Install an SAP Scalable Application Server on page 197

How to Enable Failover SAP Instances to Run in a Cluster on page 200

When performing How to Enable Failover SAP Instances to Run in a Cluster on page 200, add the following Step 9 to this procedure:

9. As user sapsidadm, add the following entries for enq in the DEFAULT.PFL profile file under the /sapmnt/SAPSID/profile directory.

rdisp/enqname=<ci-logical-hostname>_sapsid_NR
rdisp/myname=<ci-logical-hostname>_sapsid_NR

Support for MaxDB Version 7.5

Sun Cluster HA for MaxDB supports version 7.5 of MaxDB. For more information, see Sun Cluster Data Service for MaxDB Guide for Solaris OS.

Note - From version 7.5 of this product, the product name SAP DB has become MaxDB by MySQL (MaxDB).

If you are using MaxDB 7.5, the UNIX user identity of the OS user who administers the MaxDB database must be sdb. Otherwise, the MaxDB fault monitor cannot probe the MaxDB database.

You are required to specify this user identity when you perform the tasks that are explained in the following sections:

How to Install and Configure MaxDB in Sun Cluster Data Service for MaxDB Guide for Solaris OS

How to Verify MaxDB Installation and Configuration on Each Node in Sun Cluster Data Service for MaxDB Guide for Solaris OS

How to Register and Configure a MaxDB Resource in Sun Cluster Data Service for MaxDB Guide for Solaris OS

How to Verify the Operation of the MaxDB Fault Monitor in Sun Cluster Data Service for MaxDB Guide for Solaris OS

Support for SAP liveCache Version 7.5

Sun Cluster HA for SAP liveCache supports version 7.5 of SAP liveCache. For more information, see Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.

If you are using SAP liveCache 7.5, the following additional configuration tasks are required:

Configuring the SAP liveCache administrator user

Confirming that the SAP liveCache administrator user can run the lcinit command

How to Configure the SAP liveCache Administrator User

If you are using SAP liveCache 7.5, configure the SAP liveCache administrator user immediately after you perform the step to install SAP liveCache in How to Install and Configure SAP liveCache on page 151.


1. Ensure that the SAP liveCache administrator user is in the sdba user group.
   The format of the SAP liveCache administrator user's user ID is lc-nameadm.
   If you are creating the SAP liveCache administrator user manually, add the following entry to the /etc/group file:

   sdba::group-id:lc-nameadm

   group-id    The group's unique numerical ID (GID) within the system
   lc-name     Lowercase name of the SAP liveCache database instance

   For more information about the /etc/group file, see the group(4) man page.

2. If the SCM and SAP liveCache are installed on different machines, ensure that the SAP liveCache administrator user's user ID is identical and belongs to the sdba group on each machine.
   To meet these requirements, ensure that the entry for the SAP liveCache administrator user in the /etc/group file on each machine is identical. The required format of this entry is given in Step 1.
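For example, for a liveCache instance named LC1 (administrator user lc1adm, illustrative GID 101), you might check the entry on each machine as follows; the names and GID are hypothetical:

# getent group sdba
sdba::101:lc1adm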

How to Confirm That the SAP liveCache Administrator User Can Run the lcinit Command

If you are using SAP liveCache 7.5, confirm that the SAP liveCache administrator user can run lcinit immediately after you perform the task How to Verify the SAP liveCache Installation and Configuration on page 152.

1. Become the SAP liveCache administrator user.

   # su - lc-nameadm

   lc-name    Lowercase name of the SAP liveCache database instance

2. Run the lcinit command.

   $ lcinit

Support for Oracle 10g on the x86 Platform


Sun Cluster HA for Oracle supports version 10g of Oracle with the Solaris 9 OS on the x86 platform.
For more information, see Sun Cluster Data Service for Oracle Guide for Solaris OS.


Restrictions and Requirements


No restrictions or requirements have been added or updated since the Sun Cluster 3.1 8/05 release.

Fixed Problems
There are no fixed problems at this time.

Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 8/05 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 8/05 release.

Cluster Problems When Using the ipge3 Port as Interconnect

Problem Summary: Various problems occur when using the ipge3 port for the cluster interconnect. These issues include the following:

The cluster interconnect goes down intermittently (CR 6328986).

The rsh/telnet/rlogin process hangs when connecting over the cluster interconnect (CR 6352333).

Network devices based on the Intel Ophir chip are unreliable in a back-to-back configuration (CR 6331252).

Workaround: To avoid these problems, perform the following steps:


1. Set the /etc/system variable for ipge as follows:
set ipge:ipge_taskq_disable=1

2. Use an Ethernet switch with your cluster interconnect cables for all ipge onboard interfaces.
Direct-connect onboard interfaces are not supported by Sun Cluster software at this time.


Must Configure Runtime Linking Environment on SAP Unicode Systems (4996643)

Problem Summary: On SAP Unicode systems, the cleanipc binary needs the User_env parameter for LD_LIBRARY_PATH. The sap_ci and sap_as start methods dump core and are unable to start SAP Unicode systems.

Workaround: To avoid these problems, if you are using an SAP Unicode system, as the Solaris root user configure the runtime linking environment to include the SAP exe and load library directories before you perform Step 6 of How to Register and Configure Sun Cluster HA for SAP with Central Instance on page 211, as follows:

1. Configure the runtime linking environment for 32-bit applications.

   # crle -u -l /sapmnt/SAPSID/exe

2. Verify that this modification has been applied for 32-bit applications.

   # crle

3. Configure the runtime linking environment for 64-bit applications.

   # crle -64 -u -l /sapmnt/SAPSID/exe

4. Verify that this modification has been applied for 64-bit applications.

   # crle -64

You need only perform these steps once. If you have not performed these steps, you will not be able to do the following:

Enable the failover resource group that includes the SAP central instance, as described in How to Register and Configure Sun Cluster HA for SAP with Central Instance on page 211

Enable the failover resource group that includes the SAP application server resource group, as described in Step 6 of How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page 212

Enable the scalable resource group that includes the SAP application server resource, as described in How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page 213

Use of the Console as ttya on a V440 Causes Unresponsiveness of WebLogic Server (6182519)
Problem Summary: In a Sun Cluster 3.1 environment, when using the console as a ttya on a V440
server, WebLogic Server can become extremely slow and unresponsive.


Workaround: Avoiding this problem involves steps specific to the WebLogic Server and its configuration files. These steps are kept in a single location to ensure that they are kept up to date as work to fix the problem progresses. To see these workaround steps, go to http://www.sunsolve.sun.com and search on change request 6182519.

Solaris Operating System 10 Patch 118822-18 and Later Can Negatively Impact Cluster Stability When Run on SPARC Platform with PxFS (6335093)

Problem Summary: The Solaris Operating System 10 patch 118822-18 and later can cause node and cluster panics when run on a SPARC platform with PxFS. The workaround for this bug is to disable segkpm on all nodes of the cluster. This workaround can cause severe performance degradation when compared to an existing Solaris 10 installation. There is no performance degradation when compared to a Solaris 9 installation. The performance degradation is directly proportional to the number of CPUs on each node. Nodes with moderate numbers of CPUs (less than 20) will not be affected significantly.

This problem does not affect x86/x64 systems on Solaris 8, Solaris 9, and Solaris 10, and SPARC Solaris 8 and Solaris 9 installations. This problem also does not affect clusters running UFS, QFS, and VxFS.

Workaround: Disable segkpm. On each node, add the following entry to the /etc/system file.

set segmap_kpm=0

Java ES 4 Installer Fails to Install on Solaris 10 End User Cluster (6363536)
Problem Summary: In the Sun Java Enterprise System 2005Q4 distribution, Sun Cluster software
has a dependency on Sun Java Web Console. The Solaris 10 version of Java ES no longer installs Sun
Java Web Console. Instead, Sun Java Web Console is expected to be installed as part of the Solaris 10
OS.
Sun Java Web Console is only available in the Developer software group of Solaris 10 software and
higher. If you install the End User software group of the Solaris 10 OS, Sun Java Web Console is not
installed. Therefore, the Java ES installer will not install Sun Cluster software because this required
software is missing.
Workaround: After you install the Solaris 10 End User software group but before you start the Java
ES installer, use the pkgadd command to install the following Sun Java Web Console software and its
dependency packages from the Solaris 10 media:
SUNWmctag
SUNWmconr
SUNWmcon

SUNWtcatu
SUNWmcosx
SUNWmcos
SUNWj3dev
SUNWjato
SUNWjhdev

After all packages are installed, start the Java ES installer and proceed with Sun Cluster software
installation.
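A minimal sketch of that pkgadd step follows, assuming the Solaris 10 media is mounted at /cdrom/cdrom0 and the packages reside in the standard Solaris_10/Product directory:

# cd /cdrom/cdrom0/Solaris_10/Product
# pkgadd -d . SUNWmctag SUNWmconr SUNWmcon SUNWtcatu SUNWmcosx SUNWmcos SUNWj3dev SUNWjato SUNWjhdev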

Localization Packages For Sun Java Web Console Do Not Exist in the Sun Cluster Standalone Distribution (6299614)
Problem Summary: In the standalone distribution of Sun Cluster 3.1 8/05 software, the Sun Java
Web Console packages on the Sun Cluster 2 of 2 CD-ROM do not include localization packages. The
lack of packages prevents SunPlex Manager from displaying the correct localized version after Sun
Cluster software is upgraded to the Sun Cluster 3.1 8/05 release.
Workaround: During upgrade to the Sun Cluster 3.1 8/05 release, upgrade Sun Java Web Console
packages from the Sun Java Enterprise System (Java ES) distribution instead of from the Sun Cluster
distribution. When following the Sun Cluster procedures for upgrading dependency software,
substitute the following instructions to install or upgrade Sun Java Web Console.

How to Upgrade Sun Java Web Console Localization Packages

1. Remove any Sun Java Web Console localization packages that are installed on the node.

   # pkgrm SUNWcmctg SUNWdmctg SUNWemctg SUNWfmctg SUNWhmctg SUNWkmctg SUNWjmctg
   # pkgrm SUNWcmcon SUNWdmcon SUNWemcon SUNWfmcon SUNWhmcon SUNWkmcon SUNWjmcon

2. Insert the Java ES 2 of 2 CD-ROM in the CD-ROM drive of the node.

3. Install the base Sun Java Web Console package by using the setup utility.

   # Product/sunwebconsole/setup

4. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.

   # cd /
   # eject cdrom

5. Insert the Java ES 1 of 2 CD-ROM in the CD-ROM drive of the node.

6. Change to the directory that contains the Sun Java Web Console localization packages for the language that you want.

   # cd Product/shared_components/Packages/locale/lang/

   Each language package is located in the Product/shared_components/Packages/locale/lang/ directory, where lang is the locale name of a particular language. For example, the locale name for Japanese is ja.

7. Install the packages manually from the lang/ directory.

   # pkgadd -d . localization-packages
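For example, to install the Japanese localization packages that correspond to the packages removed in Step 1, the commands might look like the following:

# cd Product/shared_components/Packages/locale/ja/
# pkgadd -d . SUNWjmcon SUNWjmctg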

8. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.

9. Continue the Sun Cluster 3.1 8/05 software upgrade procedures.


See Upgrading Sun Cluster Software in Sun Cluster Software Installation Guide.

IPv6 Scalable Service Support Is Not Enabled by Default (6332656)
Problem Summary: IPv6 plumbing on the interconnects, which is required for forwarding of IPv6
scalable service packets, will no longer be enabled by default. The IPv6 interfaces, as seen when using
the ifconfig command, will no longer be plumbed on the interconnect adapters by default.
Workaround: Manually enable IPv6 scalable service support.

How to Manually Enable IPv6 Scalable Service Support

Before You Begin
Ensure that you have prepared all cluster nodes to run IPv6 services. These tasks include proper configuration of network interfaces, server/client application software, name services, and routing infrastructure. Failure to do so might result in unexpected failures of network applications. For more information, see your Solaris system-administration documentation for IPv6 services.

1. On each node, add the following entry to the /etc/system file.

   set cl_comm:ifk_disable_v6=0

2. On each node, enable IPv6 plumbing on the interconnect adapters.

   # /usr/cluster/lib/sc/config_ipv6

   The config_ipv6 utility brings up an IPv6 interface on all cluster interconnect adapters that have a link-local address. The utility enables proper forwarding of IPv6 scalable service packets over the interconnects.

   Alternately, you can reboot each cluster node to activate the configuration change.
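To confirm the result, you can list all interfaces on each node and check that the interconnect adapters now show inet6 entries:

# ifconfig -a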


Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems that are documented
in the Sun Cluster 3.1 8/05 Release Notes for Solaris OS.

Sun Cluster Concepts Guide for Solaris OS


The following subsection describes omissions or errors that were discovered in the Sun Cluster
Concepts Guide for Solaris OS.

Ambiguous Statement That Sun Cluster for SPARC Supports a Maximum of 16 Nodes (CR 6322262)

In the section Sun Cluster Topologies for SPARC in Sun Cluster Concepts Guide for Solaris OS, the following ambiguous statement appears:

A Sun Cluster environment that is composed of SPARC based systems supports a maximum of sixteen nodes in a cluster, regardless of the storage configurations that you implement.
The preceding statement is changed as follows:
A Sun Cluster environment that is composed of SPARC based systems supports a maximum of
sixteen nodes in a cluster. All SPARC based topologies support up to eight nodes in a cluster. Selected
SPARC based topologies support up to sixteen nodes in a cluster. Contact your Sun sales
representative for more information.

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publication of the Software Installation Guide.

Incorrect Release Date for the First Update of the Solaris 10 OS

In Appendix F, upgrade guidelines and procedures refer to the first update release of the Solaris 10 OS as Solaris 10 10/05. The date of this release is incorrect. The correct release date is 1/06. Therefore, to upgrade a Sun Cluster configuration to the Solaris 10 OS, the Solaris 10 1/06 release is the minimum version that Sun Cluster 3.1 8/05 software supports.


Clarification of the Restriction Concerning Solaris 10 Non-Global Zones

The section Solaris OS Restrictions in the chapter Planning the Sun Cluster Configuration contains the following statement:

Sun Cluster 3.1 8/05 software does not support non-global zones. All Sun Cluster software and software that is managed by the cluster must be installed only on the global zone of the node. Do not install cluster-related software on a non-global zone. In addition, all cluster-related software must be installed in a way that prevents propagation to a non-global zone that is later created on a cluster node.

This restriction applies specifically to the installation location of Sun Cluster framework software and Sun Cluster data-service software. It does not restrict the creation of non-global zones on a cluster node. In addition, applications can be installed in a non-global zone on a cluster node and configured to be highly available and managed by Sun Cluster software. For more information, see Sun Cluster HA for Solaris Containers in Sun Cluster 3.1 8/05 Release Notes for Solaris OS.

Package Dependency Change from 1.0 to 1.1 Causes Installation Problems (6316676)
The procedures to upgrade dependency software (nonrolling upgrade and rolling upgrade) are
correct only for common agent container version 1.0, which was distributed in the initial standalone
release of Sun Cluster 3.1 8/05 software. The Sun Java Enterprise System 2005Q4 distribution
contains common agent container version 1.1, which now has a package order dependency.
To install common agent container 1.1 software from the Java ES 2005Q4 distribution, specify the
package names explicitly and in the following order, to satisfy the new package dependencies:
SUNWcacaocfg SUNWcacao
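For example, from the directory of the Java ES 2005Q4 distribution that contains these packages (the location depends on your media layout), the explicit ordering might look like this:

# pkgadd -d . SUNWcacaocfg SUNWcacao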

Sun Cluster Data Service for Solaris Containers Guide

The following sections in Sun Cluster Data Service for Solaris Containers Guide are incorrect:

How to Install and Configure a Zone in Sun Cluster Data Service for Solaris Containers Guide
Replace this incorrect section with How to Install a Zone and Perform the Initial Internal Zone Configuration on page 30.

Patching the Global Zone and Local Zones in Sun Cluster Data Service for Solaris Containers Guide
Replace this incorrect section with How to Patch to the Global Zone and Local Zones on page 31.


How to Install a Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the zone.

Note - For complete information about installing a zone, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Before You Begin
Determine the following requirements for the deployment of the zone with Sun Cluster:

The number of Solaris Zone instances that are to be deployed.

The cluster file system that is to be used by each Solaris Zone instance.

Ensure that the zone is configured. If the zone that you are installing is to run in a failover configuration, configure the zone's zone path to specify a highly available local file system. The file system must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration in Sun Cluster Data Service for Solaris Containers Guide.

For detailed information about configuring a zone before installation of the zone, see the following documentation:

Chapter 17, Non-Global Zone Configuration (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones

Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones
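As an illustration of the kind of configuration this refers to, a failover zone whose zone path resides on a highly available local file system might be configured as follows; the zone name and zone path are hypothetical:

# zonecfg -z myzone
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/failover/myzone
zonecfg:myzone> set autoboot=false
zonecfg:myzone> commit
zonecfg:myzone> exit

Here /failover/myzone is assumed to be the mount point of the highly available local file system that is managed by the SUNW.HAStoragePlus resource.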

1. If the zone is to run in a failover configuration, ensure that the zone's zone path can be created on the zone's disk storage.
   If the zone is to run in a multiple-masters configuration, omit this step.

   a. On the node where you are installing the zone, bring online the resource group that contains the resource for the zone's disk storage.

      # scswitch -z -g solaris-zone-resource-group -h node

   b. If the zone's zone path exists on the zone's disk storage, remove the zone path.
      The zone's zone path exists on the zone's disk storage if you have already installed the zone on another node.
2. Install the zone.

   # zoneadm -z zone install

   For more detailed information about installing a zone, see How to Install a Configured Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
3. Perform the initial internal zone configuration.

   a. Log in to the zone's console.

      # zlogin -C zone

      You are prompted to configure the zone.

   b. Follow the prompts to configure the zone.

   c. Disconnect from the zone's console.
      Use the escape sequence that you defined for the zone. If you did not define an escape sequence, use the default escape sequence as follows:

      # ~.

How to Patch to the Global Zone and Local Zones


This task is required only if you are applying a patch to the global zone and to local zones. If you are
applying a patch to only the global zone, follow the instructions in Chapter 8, Patching Sun Cluster
Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.
This task applies to both nonrebooting patches and rebooting patches.
Perform this task on all nodes in the cluster.
1. Ensure that the node that you are patching can access the zone paths of all zones that are configured on the node.
   Some zones might be configured to run in a failover configuration. In this situation, bring online, on the node that you are patching, the resource group that contains the resources for the zone's disk storage.

   # scswitch -z -g solaris-zone-resource-group -h node

2. Apply the patch to the node.
   For detailed instructions, see Chapter 8, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.
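For example, after the required resource group is online, applying a patch from a local download directory might look like the following; the patch location is illustrative:

# patchadd /var/spool/patch/patch-id

By default on the Solaris 10 OS, patchadd applies the patch to the global zone and to the non-global zones that are configured on the node.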

Sun Cluster Data Service for SAP Guide for Solaris OS


The following information is missing from Sun Cluster Data Service for SAP Guide for Solaris OS.


After modifying your SAP system's database to refer to a logical host, if you are using SAP DB or MaxDB as your database, create a .XUSER.62 file in the home directory of the sapsysadm user that refers to the logical host of the database. Create this .XUSER.62 file using the dbmcli or xuser tools. Test this change using R3trans -d. This step is necessary so that the SAP instance can find the database state while starting up.

Sun Cluster Data Service for SAP DB Guide for Solaris OS

The following information is missing from the Sun Cluster Data Service for MaxDB Guide for Solaris OS:

When following the procedure How to Install and Configure MaxDB in Sun Cluster Data Service for MaxDB Guide for Solaris OS, add the following fifth step to the procedure:

5. Copy the /etc/opt/sdb directory and its contents from the node on which you installed SAP DB to all nodes where resources for SAP DB and SAP xserver will run.
   Ensure that the ownership of this directory and its contents is the same on all nodes:

   # tar cfB - /etc/opt/sdb | rsh destination tar xfB -

   destination    Specifies the node to which you are copying the /etc/opt/sdb directory and its contents

Sun Cluster Data Service for SAP liveCache Guide for Solaris OS

The following information is missing from the Sun Cluster Data Service for SAP liveCache Guide for Solaris OS:

When following the procedure How to Install and Configure SAP liveCache on page 151, add the following fifth and sixth steps to the procedure:

5. Copy the /etc/opt/sdb directory and its contents from the node on which you installed SAP liveCache to all the nodes where resources for SAP liveCache will run. Ensure that the ownership of these files is the same on all nodes as it is on the node on which you installed SAP liveCache.

   # tar cfB - /etc/opt/sdb | rsh destination tar xfB -

   destination    Specifies the node to which you are copying the /etc/opt/sdb directory and its contents

6. Create a link from the /sapdb/LCA/db/wrk directory to the /sapdb/data/wrk directory as follows:

# ln -s /sapdb/data/wrk /sapdb/LCA/db/wrk

Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS

Information is missing from the Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS on the following topics:

How to Install and Configure the Scalable SAP Web Application Server and the SAP J2EE Engine on page 33

How to Modify the Installation for a Scalable SAP Web Application Server Component on page 33

How to Create a Dependency on the Web Application Server Database on page 34

How to Install and Configure the Scalable SAP Web Application Server and the SAP J2EE Engine

When performing the procedure How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS, check http://service.sap.com/ha and the corresponding SAP notes for information about any changes that you must make to the SAP configuration for it to work with a logical host.

Step 2 of How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS is missing information for the J2EE user.

2. If you are using the SAP J2EE engine, install J2EE as an add-on or standalone installation, following these instructions:

   2a. If you are configuring J2EE as a failover data service, install the SAP J2EE engine software on the same node on which you installed the SAP Web Application Server software.

   2b. If you are configuring J2EE as a scalable data service, install the same J2EE instance, using the same instance name, on each node where you want the corresponding scalable resource to run.

How to Modify the Installation for a Scalable SAP Web Application Server Component

Steps 2 and 5 of the procedure How to Modify the Installation for a Scalable SAP Web Application Server Component in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS are missing information specific to the use of a scalable J2EE instance and a scalable ABAP Web Application Server instance.

2. If you are on an ABAP-only system, copy the dialog instance from the node where it was installed to the local file system on the other nodes.

Note - If you are using the J2EE engine, you have installed the J2EE instance on each node. For more information, see Step 2 of How to Install and Configure the Scalable SAP Web Application Server and the SAP J2EE Engine on page 33.
5. Update the script $HOME/loghost as follows. Here are examples, depending on the type of instance that you are using:

   A scalable J2EE instance - Update the script $HOME/loghost to return the physical host name:

   if [ "$1" = "J85" ]; then
       echo `hostname`;
   fi

   A scalable ABAP Web Application Server instance - Your script must return a common string:

   if [ "$1" = "D85" ]; then
       echo "scalable";
   fi

   The returned string must match the corresponding profiles for the instance. A scalable resource group does not contain a logical host.

How to Create a Dependency on the Web Application Server Database


When following Step 2 of the procedure How to Register and Configure an SAP Enqueue Server Resource in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS, add a dependency to the Web Application Server database:

2. Create an SAP enqueue server resource in the SAP central services resource group.

   # scrgadm -a -j enq-rs -g central-rg -t SUNW.sapenq \
   -x Enqueue_Profile=path-to-enq-profile \
   -x Enqueue_Server=path-to-enq-server-binary \
   -x SAP_User=enq-user \
   -x Enqueue_Instance_Number=enq-instance \
   -y Resource_Dependencies=hsp-central-rs,db-webas-rs

   -y hsp-central-rs,db-webas-rs    Specifies that the following resources must be online before the resource for the SAP enqueue server component can be online:

   HAStoragePlus resource for the global device group on which the SAP web application server component is installed.

   Database resource. The database resource is created by the relevant data service.


When following Step 2 of the procedure How to Register and Configure an SAP Message Server Resource in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS, add a dependency to the Web Application Server database:

2. Create an SAP message server resource in the SAP central services resource group.

   # scrgadm -a -j msg-rs -g central-rg -t SUNW.sapscs \
   -x SAP_SID=scs-system-ID \
   -x SAP_Instance_Number=scs-instance-number \
   -x SAP_Instance_Name=scs-instance-name \
   -x Msg_Server_Port=msg-server-port \
   -x Scs_Startup_Script=scs-server-startup-script \
   -x Scs_Shutdown_Script=scs-server-shutdown-script \
   -y Resource_Dependencies=hsp-central-rs,db-webas-rs

   -y hsp-central-rs,db-webas-rs    Specifies that the following resources must be online before the resource for the SAP message server component can be online:

   HAStoragePlus resource for the global device group on which the SAP web application server component is installed.

   Database resource. The database resource is created by the relevant data service.

Release Notes
The following subsections describe omissions or errors discovered in the Sun Cluster 3.1 8/05 Release
Notes for Solaris OS.

Incorrect Claim That VxVM 4.0 Is Supported on Solaris 10 OS (CR 6315895)

In the Supported Products section, the matrices for volume managers and for file systems each list VxVM 4.0 and VxFS 4.0 as supported on the Solaris 10 OS versions of Sun Cluster 3.1 8/05 software. However, only version 4.1 of VxVM and VxFS is supported on the Solaris 10 OS. The following are the correct support matrices:

Volume managers

   On Solaris 8 - Solstice DiskSuite 4.2.1 and (SPARC only) VERITAS Volume Manager 3.5, 4.0, and 4.1. Also, VERITAS Volume Manager components delivered as part of Veritas Storage Foundation 4.0 and 4.1.

   On Solaris 9 - Solaris Volume Manager and (SPARC only) VERITAS Volume Manager 3.5, 4.0, and 4.1. Also, VERITAS Volume Manager components delivered as part of Veritas Storage Foundation 4.0 and 4.1.

   On Solaris 10 - Solaris Volume Manager and (SPARC only) VERITAS Volume Manager 4.1. Also, VERITAS Volume Manager components delivered as part of Veritas Storage Foundation 4.1.

File systems

   On Solaris 8 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only) VERITAS File System 3.5, 4.0, and 4.1. Also, VERITAS File System components delivered as part of Veritas Storage Foundation 4.0 and 4.1.

   On Solaris 9 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only) VERITAS File System 3.5, 4.0, and 4.1. Also, VERITAS File System components delivered as part of Veritas Storage Foundation 4.0 and 4.1.

   On Solaris 10 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only) VERITAS File System 4.1. Also, VERITAS File System components delivered as part of Veritas Storage Foundation 4.1.


C H A P T E R   2

Sun Cluster 3.1 9/04 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 9/04
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.

Revision Record on page 37


New Features on page 39
Restrictions and Requirements on page 46
Known Problems on page 47
Known Documentation Problems on page 51

Revision Record
The following tables list the information contained in this chapter and provide the revision date for this information.

TABLE 2-1  Sun Cluster 3.1 9/04 Release Notes Supplement Revision Record 2006

Revision Date    New Information

April 2006

January 2006
    Cluster Problems When Using the ipge3 Port as Interconnect on page 23


TABLE 2-2  Sun Cluster 3.1 9/04 Release Notes Supplement Revision Record 2005

Revision Date    New Information

November 2005
    Change Request 6220218 for VERITAS Storage Foundation 4.0 is now fixed by a patch. See Bug ID 6220218 on page 50.

October 2005
    Clarified ambiguous statement about support of 16-node clusters in Sun Cluster Concepts Guide for Solaris OS on page 28.

September 2005
    Support is added for VxVM 4.1 and VxFS 4.1. See SPARC: Support for VxVM 4.1 and VxFS 4.1 on page 39.

July 2005
    Documented steps for using hardware RAID on internal drives for servers providing internal hardware disk mirroring (integrated mirroring). See Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring on page 39.

June 2005
    Added restriction on placement of SCI cards in hot swap PCI+ (hsPCI+) I/O assemblies. See Restriction on SCI Card Placement on page 46.
    Bug ID 6252555, problems with quorum reservations and patch 113277-28 or later. See Bug ID 6252555 on page 50.

May 2005
    The VERITAS Storage Foundation 4.0 standard license enables PGR functionality, causing cluster nodes to panic. See Bug ID 6220218 on page 50.
    Added restriction on quorum devices when using storage-based data replication. See Storage-Based Data Replication and Quorum Devices on page 47.
    Sun Cluster Support for Oracle Real Application Clusters supports the use of Sun StorEdge QFS with Oracle 10g Real Application Clusters. For more information, see SPARC: Support for Sun StorEdge QFS With Oracle 10g Real Application Clusters on page 43.

March 2005
    Process accounting log files on global file systems cause the node to hang. See Bug ID 6210418 on page 50.
    Additional requirements to support IPv6 network addresses. See IPv6 Support and Restrictions for Public Networks on page 51 and IPv6 Requirement for the Cluster Interconnect on page 51.
    Correction to upgrade procedures for Sun Cluster HA for SAP liveCache 3.1. See Correction to the Upgrade of Sun Cluster HA for SAP liveCache on page 52.

January 2005
    SCSI reset errors when using Cauldron-S and 3310 RAID arrays. See Bug ID 6196936 on page 49.
    Support for jumbo frames with Solaris 8 limited to clusters using Oracle RAC. See Bug ID 4333241 on page 47.

December 2004
    The Sun Cluster Support for Oracle Real Application Clusters data service supports Oracle 10g Real Application Clusters on the SPARC platform. For more information, see Support for Oracle 10g Real Application Clusters on the SPARC Platform on page 57.
    Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the SPARC platform. For more information, see IPv6 Support and Restrictions for Public Networks on page 51.
    Restrictions apply to Sun Cluster installations on x86 based systems. See Bug ID 5066167 on page 60.
    You will receive an error if you try to re-encapsulate root on a device that was previously encapsulated. See Bug ID 4804696 on page 47.
    Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster environment. See Bug ID 5095543 on page 60 for more information.
    When Sun Cluster is upgraded from a previous version to Sun Cluster 3.1 9/04, the Sun Cluster support packages for Oracle Real Application Clusters are not upgraded. See Bug ID 5107076 on page 48.
    When using scinstall to upgrade Sun Cluster data services for Sun Cluster 3.1 9/04 software, Sun Cluster will issue error messages complaining about missing Solaris_10 Packages directories. See Bug ID 5109935 on page 49.

New Features
In addition to features documented in the Sun Cluster 3.1 9/04 Release Notes for Solaris OS, this
release now includes support for the following features.

SPARC: Support for VxVM 4.1 and VxFS 4.1


A patch to Sun Cluster 3.1 software adds support on Sun Cluster 3.1 9/04 configurations and earlier for VERITAS Volume Manager 4.1 and VERITAS File System 4.1 software. Download and install the latest Sun Cluster 3.1 Core Patch from http://www.sunsolve.com. This support addition is associated with the bug fix for 6230506.

Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring

Some servers support the mirroring of internal hard drives (internal hardware disk mirroring or integrated mirroring) to provide redundancy for node data. To use this feature in a cluster environment, follow the steps in this section.

Depending on the version of the Solaris operating system you use, you might need to install a patch to correct change request 5023670 and ensure the proper operation of internal mirroring. Check the PatchPro site to find the patch for your server.

The best way to set up hardware disk mirroring is to perform RAID configuration after you install the Solaris OS and before you configure multipathing. If you need to change your mirroring configuration after you have established the cluster, you must perform some cluster-specific steps to clean up the device IDs.

For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.

Configuring Internal Disk Mirroring During Installation

Before You Begin
Install your cluster hardware as instructed in your server and storage array documentation.

1. Install the Solaris operating system, as instructed in the Sun Cluster installation guide.
   As a part of this procedure, you will check the PatchPro web site and install any necessary patches.

2. Configure the internal mirror.

   # raidctl -c c1t0d0 c1t1d0

   -c c1t0d0 c1t1d0    Creates the mirror of the primary disk to the mirror disk. Enter the name of your primary disk as the first argument. Enter the name of the mirror disk as the second argument.

3. Continue with installing and configuring your multipathing software, if necessary, as instructed in the Sun Cluster installation guide.

4. Install the Sun Cluster software, as instructed in the Sun Cluster installation guide.

How to Configure Internal Disk Mirroring After the Cluster is Established

Before You Begin
This procedure assumes that you have already installed your hardware and software and have established the cluster.

Check the PatchPro site for any patches required for using internal disk mirroring. PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.
1. If necessary, prepare the node for establishing the mirror.

   a. Determine the resource groups and device groups that are running on the node.
      Record this information because you use this information later in this procedure to return resource groups and device groups to the node.

      # scstat

   b. If necessary, move all resource groups and device groups off the node.

      # scswitch -S -h fromnode

2. Configure the internal mirror.

   # raidctl -c c1t0d0 c1t1d0

   -c c1t0d0 c1t1d0    Creates the mirror of the primary disk to the mirror disk. Enter the name of your primary disk as the first argument. Enter the name of the mirror disk as the second argument.

3. Boot the node into single user mode.

   # reboot -- -S

4. Clean up the device IDs.

   # scdidadm -R /dev/rdsk/c1t0d0

   -R /dev/rdsk/c1t0d0    Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.

5. Confirm that the mirror has been created and that only the primary disk is visible to the cluster.

   # scdidadm -l

   The command lists only the primary disk as visible to the cluster.

6. Boot the node back into cluster mode.

   # reboot

7. If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.

   # metadb -afc 3 /dev/rdsk/c1t0d0s4

8. If you moved device groups off the node in Step 1, move all device groups back to the node.
   Perform the following step for each device group that you want to return to the original node.

   # scswitch -z -D devicegroup -h nodename

   In this command, devicegroup is one or more device groups that are returned to the node.

9. If you moved resource groups off the node in Step 1, move all resource groups back to the node.

   # scswitch -z -g resourcegroup -h nodename

How to Remove an Internal Disk Mirror

1. If necessary, prepare the node for removing the mirror.

   a. Determine the resource groups and device groups that are running on the node.
      Record this information because you use this information later in this procedure to return resource groups and device groups to the node.

      # scstat

   b. If necessary, move all resource groups and device groups off the node.

      # scswitch -S -h fromnode

2. Remove the internal mirror.

   # raidctl -d c1t0d0

   -d c1t0d0    Deletes the mirror of the primary disk to the mirror disk. Enter the name of your primary disk as the argument.

3. Boot the node into single user mode.

   # reboot -- -S

4. Clean up the device IDs.

   # scdidadm -R /dev/rdsk/c1t0d0
   # scdidadm -R /dev/rdsk/c1t1d0

   -R /dev/rdsk/c1t0d0
   -R /dev/rdsk/c1t1d0    Updates the cluster's record of the device IDs. Enter the names of your disks separated by spaces.

5. Confirm that the mirror has been deleted and that both disks are visible.

   # scdidadm -l

   The command lists both disks as visible to the cluster.

6. Boot the node back into cluster mode.

   # reboot

7. If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on the primary disk, recreate the state database replicas.

   # metadb -afc 3 /dev/rdsk/c1t0d0s4

8. If you moved device groups off the node in Step 1, return the device groups to the original node.

   # scswitch -z -D devicegroup -h nodename

9. If you moved resource groups off the node in Step 1, return the resource groups and device groups to the original node.
   Perform the following step for each resource group that you want to return to the original node.

   If you are using Sun Cluster 3.2, use the following command:

   If you are using Sun Cluster 3.1, use the following command:

   # scswitch -z -g resourcegroup -h nodename

SPARC: Support for Sun StorEdge QFS With Oracle 10g Real Application Clusters

Sun Cluster Support for Oracle Real Application Clusters supports the use of Sun StorEdge QFS with Oracle 10g Real Application Clusters. For information about Sun Cluster Support for Oracle Real Application Clusters, see Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

Oracle 10g Real Application Clusters introduces new types of files. For information about using Sun StorEdge QFS for these new types of files, see the subsections that follow.

SPARC: Requirements for Using the Sun StorEdge QFS Shared File System

You can store all of the files that are associated with Oracle Real Application Clusters on the Sun StorEdge QFS shared file system.

For information about how to create a Sun StorEdge QFS shared file system, see the following documentation for Sun StorEdge QFS:

Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide

Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide

Distribute these files among several file systems as explained in the subsections that follow.
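As a rough illustration only (not taken from the referenced guides), the definition of a shared QFS file system in the mcf file might resemble the following; the family-set name, equipment ordinals, and DID devices are hypothetical, so verify the exact syntax against the Sun StorEdge QFS documentation:

# Equipment          Eq   Eq    Family  Device  Additional
# Identifier         Ord  Type  Set     State   Parameters
Data1                10   ma    Data1   on      shared
/dev/did/dsk/d2s0    11   mm    Data1   on
/dev/did/dsk/d3s0    12   mr    Data1   on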

Sun StorEdge QFS File Systems for RDBMS Binary Files and Related Files

For RDBMS binary files and related files, create one file system in the cluster to store the files.

The RDBMS binary files and related files are as follows:

Oracle relational database management system (RDBMS) binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Oracle Cluster Ready Services (CRS) binary files

Sun StorEdge QFS File Systems for Database Files and Related Files

For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.

For simplicity of configuration and maintenance, create one file system to store these files for all Oracle Real Application Clusters instances of the database.

To facilitate future expansion, create multiple file systems to store these files for all Oracle Real Application Clusters instances of the database.

Note - If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.

Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see SPARC: Resources for the Sun StorEdge QFS Shared File System on page 45.

The database files and related files are as follows:

Data files
Control files
Online redo log files
Archived redo log files
Flashback log files
Recovery files
Oracle cluster registry (OCR) files
Oracle CRS voting disk


SPARC: Resources for the Sun StorEdge QFS Shared File System

If you are using the Sun StorEdge QFS shared file system, answer the following questions:

Which resources will you create to represent the metadata server for the Sun StorEdge QFS shared file system?
One resource is required for each Sun StorEdge QFS metadata server.

Which resource groups will you use for these resources?
You might use multiple file systems for database files and related files. For more information, see SPARC: Requirements for Using the Sun StorEdge QFS Shared File System on page 43.

If you are using Oracle 10g, Oracle CRS manages Real Application Clusters database instances. These database instances must be started only after all shared file systems are mounted. To meet this requirement, ensure that the file system that contains the Oracle CRS voting disk is mounted only after the file systems for other database files have been mounted. This behavior ensures that, when a node is booted, Oracle CRS is started only after all Sun StorEdge QFS file systems are mounted.

To enable Sun Cluster to mount the file systems in the required order, configure resource groups for the metadata servers of the file systems as follows:

Create the resources for the metadata servers in separate resource groups.

Set the resource group for the file system that contains the Oracle CRS voting disk to depend on the other resource groups.

For more information, see the following documentation for Sun StorEdge QFS:

Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide

Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide

Use the answers to these questions when you perform the procedure in Registering and Configuring Oracle RAC Server Resources in Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

Support for Automatic Storage Management (ASM) With Oracle 10g Real Application Clusters on the SPARC Platform

Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the SPARC platform.

Required Versions of Oracle 10g Real Application Clusters

If you are using ASM, you must use Oracle 10g Real Application Clusters version 10.1.0.3 with the following Oracle patches:

3644481
3976437

How to Use ASM with Oracle 10g Real Application Clusters

Except as indicated in this section, the procedures for using ASM with Sun Cluster Support for Oracle Real Application Clusters are identical to the procedures for using hardware redundant array of independent disks (RAID). For more information about these procedures, see Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

Note - If you are using ASM, you do not require the underlying storage to be hardware RAID.

Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to shared disks that are available in the cluster. The following example lists output from the scdidadm -L command.

# scdidadm -L
1    phys-schost-1:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
1    phys-schost-2:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
2    phys-schost-1:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2
2    phys-schost-2:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2

Use the DID that the scdidadm output identifies to set up the disk in the ASM disk group. For example, the scdidadm output might identify that the raw DID that corresponds to the disk is d2. In this instance, use the /dev/did/rdsk/d2sN raw device, where N is the slice number.

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
9/04 release.

Restriction on SCI Card Placement


Do not place an SCI card in the 33 MHz PCI slot (slot 1) of the hot swap PCI+ (hsPCI+) I/O
assembly. This placement can cause a system panic.


Storage-Based Data Replication and Quorum Devices


When using storage-based data replication with storage devices that provide this feature, never configure a replicated volume as a quorum device.

Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 9/04 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 9/04 release.

Bug ID 4333241
Problem Summary: System deadlocks when using jumbo frames with Solaris 8 and failover or
scalable data services.
Workaround: Support of jumbo frames with Solaris 8 is limited to clusters that run Oracle Real
Application Clusters only. Solaris 9 can be used with all types of data services.

Bug ID 4804696
Problem Summary: If VxVM attempts to re-encapsulate root on a device that was previously
encapsulated, an error can result because the rootdg disk group cannot be created:
scvxinstall: Failed to create rootdg using "vxdg init root".
# vxdg init rootdg
vxvm:vxdg: ERROR: Disk group rootdg: cannot create: Disk group exists and is imported
# vxdg destroy rootdg
vxvm:vxdg: ERROR: Disk group rootdg: No such disk group is imported

Workaround: Using the touch command, create an empty install-db file in the
/etc/vx/reconfig.d/state.d directory. Then kill the vxconfigd daemon and restart it in disable
mode.
# touch /etc/vx/reconfig.d/state.d/install-db
# ps -ef | grep vxconfigd
# kill -9 vxconfigd-process-ID
# vxconfigd -m disable

After performing these steps, you should be able to re-encapsulate root.


Bug ID 5107076
Problem Summary: When Sun Cluster software is upgraded from a previous version to Sun Cluster
3.1 9/04 release, the Sun Cluster support packages for Oracle Real Application Clusters are not
upgraded.
Workaround: When you upgrade Sun Cluster software to Sun Cluster 3.1 9/04 release, you must
remove the Sun Cluster support packages for Oracle Real Application Clusters from the Sun Cluster
system and add the Sun Cluster support packages from the Sun Java Enterprise System Accessory
CD Volume 3.

How to Replace the Sun Cluster Support Packages for Oracle Real Application Clusters

Note - If you have edited the configuration files /opt/SUNWudlm/etc/udlm.conf or
/opt/SUNWcvm/etc/cvm.conf, any edits to adjust timeouts will be lost and must be reapplied after
installing the new packages by using the procedure "Tuning Sun Cluster Support for Oracle Real
Application Clusters" in Sun Cluster Data Service for Oracle Real Application Clusters Guide for
Solaris OS. To set up the RAC framework resource group, refer to "Registering and Configuring the
RAC Framework Resource Group" in Sun Cluster Data Service for Oracle Real Application Clusters
Guide for Solaris OS.

1. Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.

2. Become superuser.

3. Change the current working directory to the directory that contains the packages for the Real
   Application Clusters framework resource group.
   This directory depends on the version of the Solaris Operating System that you are using.

   - If you are using Solaris 8, run the following command:
     # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_8/Packages

   - If you are using Solaris 9, run the following command:
     # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_9/Packages

4. On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters,
   transfer the contents of the required software packages from the CD-ROM to the node.
   The required software packages depend on the storage management scheme that you are using for
   the Oracle Real Application Clusters database.

   - If you are using Solaris Volume Manager for Sun Cluster, run the following commands:
     # pkgrm SUNWudlm SUNWudlmr SUNWschwr SUNWscucm
     # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWscmd

   - If you are using VxVM with the cluster feature, run the following commands:
     # pkgrm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm SUNWscucm
     # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm

   - If you are using hardware RAID support, run the following commands:
     # pkgrm SUNWudlm SUNWudlmr SUNWschwr SUNWscucm
     # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr

   - If you are using the Sun StorEdge QFS shared file system with hardware RAID support, run the
     following commands:
     # pkgrm SUNWudlm SUNWudlmr SUNWschwr SUNWscucm
     # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
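
As an optional check that is not part of the documented procedure, you might confirm on each node
that the replacement packages are now installed, for example:
# pkginfo SUNWscucm SUNWudlm SUNWudlmr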

Bug ID 5109935
Problem Summary: When using scinstall to upgrade Sun Cluster data services for the Sun Cluster
3.1 9/04 release, Sun Cluster will issue error messages complaining about missing Solaris_10
Packages directories.
Workaround: These error messages can be safely ignored.

Bug ID 6196936
Problem Summary: SCSI reset errors occur when X4422A Sun Dual Gigabit Ethernet + Dual SCSI
PCI Adapter cards are used in PCI slots 2 and 3 of Sun Fire V40z servers.
Workaround: Do not use X4422A cards in both slot 2 and slot 3.

Bug ID 6198608
Problem Summary: An underlying firmware problem that is triggered by issuing an
SCMD_READ_DEFECT_LIST (0x37) command to a Sun StorEdge 3510 disk causes clusters to panic when
run with Explorer versions 4.3 or 4.3.1 (these versions call diskinfo -g). The Sun Cluster sccheck
command in Sun Cluster 3.1 (10/03) through Sun Cluster 3.1 (9/04) allows Explorer to run the
command that causes the panic. Java Enterprise System R3 also includes Explorer 4.3.1. This SCSI
command can be issued either by using format (the defect->grown option) or by running Explorer
4.3 or 4.3.1.
Workaround: Release 4.1 of the Sun StorEdge 3510 firmware contains the fix for the problem. Sun
Cluster 3.1 (5/05) will include a workaround for the problem that occurs when you use sccheck.
There is also a workaround for the problem in Explorer 4.4. EMC CLARiiON arrays have also
experienced this problem. Contact EMC to obtain the appropriate firmware fix.

Bug ID 6210418
Problem Summary: If a process accounting log is located on a cluster file system or on an
HAStoragePlus failover file system, a switchover can be blocked by writes to the log file, which
causes the node to hang.
Workaround: Use only a local file system to contain process accounting log files.
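
For example, the conventional Solaris accounting file on the local root file system might be used
as follows; the path shown is the standard default location, not a Sun Cluster requirement.
# /usr/lib/acct/accton /var/adm/pacct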

Bug ID 6220218
Problem Summary: The standard license for VERITAS Storage Foundation 4.0 is enabling the
VxVM Persistent Group Reservations (PGR) functionality, making the product incompatible with
Sun Cluster software. This incompatibility might bring down the cluster by causing the cluster nodes
to panic.
Workaround: Download Patch 120585 (revision -01 or higher) from http://www.sunsolve.com
and follow the Special Install Instructions at the end of the patch description to apply the patch to
your cluster.

Bug ID 6252555
Problem Summary: The sd driver patches 113277-28 and higher break quorum reservations,
resulting in a node panic.
Workaround: Do not use patch 113277-28 or later, until further notice, if the target cluster uses one
of the following arrays as shared storage:

- Sun StorEdge 3510
- Sun StorEdge 3511
- Sun StorEdge 6120
- Sun StorEdge 6130
- Sun StorEdge 6920

and if one or more volumes within the array are visible to more than two nodes of a Sun Cluster 3
cluster. Sun Alert 101805 provides more information about this issue.
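
To see whether an affected revision of the sd driver patch is already installed on a node, you might
check the installed patch list, for example:
# showrev -p | grep 113277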


Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 9/04 Release Notes for Solaris OS.

System Administration Guide


The following subsection describes omissions or new information that will be added to the next
publication of the Sun Cluster System Administration Guide for Solaris OSSystem Administration
Guide.

Cluster File System Restrictions


In the section "Cluster File System Restrictions", the list of restrictions is not correct. The Sun
Cluster 3.1 9/04 release supports forced unmounts through the -f option of the umount command.
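
For example, a forced unmount of a cluster file system at a hypothetical mount point might look
like the following:
# umount -f /global/oracle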

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publication of the Sun Cluster Software Installation Guide for Solaris OS.

IPv6 Support and Restrictions for Public Networks


Sun Cluster software supports IPv6 addresses on the public network under the following conditions
and restrictions:

- Sun Cluster software does not support IPv6 addresses on the public network if the private
  interconnect uses SCI adapters.

- On the Solaris 9 OS, Sun Cluster software supports IPv6 addresses for both failover and scalable
  data services.

- On the Solaris 8 OS, Sun Cluster software supports IPv6 addresses for failover data services only.

IPv6 Requirement for the Cluster Interconnect


To support IPv6 addresses on the public network, Sun Cluster software requires that all private
network adapters use network interface cards (NICs) that support local MAC address assignment.
Link-local IPv6 addresses, which are required on private network addresses to support IPv6 public
network addresses, are derived from the local MAC addresses.


Correction to the Upgrade of Sun Cluster HA for SAP liveCache


The procedure "How to Finish a Nonrolling Upgrade to Sun Cluster 3.1 9/04 Software" includes an
instruction to modify the /opt/SUNWsclc/livecache/bin/lccluster file. This instruction applies
if you upgraded the Sun Cluster HA for SAP liveCache data service from the Sun Cluster 3.0 version
to the Sun Cluster 3.1 version. This instruction is incorrect.
Do not perform Step 3, the instruction to edit the /opt/SUNWsclc/livecache/bin/lccluster file.
This file is only a template that is installed with the Sun Cluster HA for SAP liveCache data service.
Do not edit the lccluster file at that location. Instead, perform the following procedure.

How to Upgrade Sun Cluster HA for SAP liveCache to Sun Cluster 3.1

1. Go to a node that will host the Sun Cluster HA for SAP liveCache resource.

2. Copy the new /opt/SUNWsclc/livecache/bin/lccluster file to the /sapdb/LC-NAME/db/sap/
   directory.
   Overwrite the lccluster file that already exists from the previous configuration of the data service.

3. Configure this /sapdb/LC-NAME/db/sap/lccluster file as documented in Step 3 of "How to
   Register and Configure Sun Cluster HA for SAP liveCache" in Sun Cluster Data Service for SAP
   liveCache Guide for Solaris OS.
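
A minimal sketch of Step 2, assuming a hypothetical liveCache instance name of LC1 in place of
LC-NAME:
# cp /opt/SUNWsclc/livecache/bin/lccluster /sapdb/LC1/db/sap/lccluster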

Man Pages
The following subsections describe omissions or new information that will be added to the next
publication of the man pages.

Data-Service Names for Individual Upgrade


The following table lists the names to specify to the scinstall -u update -s srvc command. A
version of the data service must already be installed for the command to succeed.
TABLE 2-3 Scinstall Upgrade Names of Data Services

Data Service                                                      Upgrade Name
Sun Cluster HA for Agfa IMPAX                                     pax
Sun Cluster HA for Apache                                         apache
Sun Cluster HA for Apache Tomcat                                  tomcat
Sun Cluster HA for BEA WebLogic Server                            wls
Sun Cluster HA for BroadVision One-To-One Enterprise              bv
Sun Cluster HA for DHCP                                           dhcp
Sun Cluster HA for DNS                                            dns
Sun Cluster HA for MySQL                                          mys
Sun Cluster HA for N1 Grid Service Provisioning System            sps
Sun Cluster HA for NetBackup                                      netbackup
Sun Cluster HA for NFS                                            nfs
Sun Cluster HA for Oracle                                         oracle
Sun Cluster HA for Oracle E-Business Suite                        ebs
Sun Cluster HA for Samba                                          smb
Sun Cluster HA for SAP                                            sap
Sun Cluster HA for MaxDB                                          sapdb
Sun Cluster HA for SAP liveCache                                  livecache
Sun Cluster HA for SAP Web Application Server                     sapwebas
Sun Cluster HA for Siebel                                         siebel
Sun Cluster HA for Solaris Containers                             container
Sun Cluster HA for Sun Grid Engine                                sge
Sun Cluster HA for Sun Java System Application Server             s1as
Sun Cluster HA for Sun Java System Application Server EE (HADB)   hadb
Sun Cluster HA for Sun Java System Message Queue                  s1mq
Sun Cluster HA for Sun Java System Web Server                     iws
Sun Cluster HA for SWIFTAlliance Access                           saa
Sun Cluster HA for SWIFTAlliance Gateway                          sag
Sun Cluster HA for Sybase ASE                                     sybase
Sun Cluster HA for WebSphere MQ                                   mqs
Sun Cluster HA for WebSphere MQ Integrator                        mqi
Sun Cluster Oracle Application Server (9i)                        9ias
Sun Cluster Support for Oracle Real Application Clusters          oracle_rac
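
For example, to upgrade an installed Sun Cluster HA for NFS data service, you would pass its
upgrade name from the table:
# scinstall -u update -s nfs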

C H A P T E R   3

Sun Cluster 3.1 4/04 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 4/04
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.

Revision Record on page 55


New Features on page 57
Restrictions and Requirements on page 59
Known Problems on page 60
Known Documentation Problems on page 61

Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.

TABLE 3-1 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2006

Revision Date     New Information
April 2006
January 2006
  Correction to procedures for mirroring the root disk. See "CR 6341573" on page 61.


TABLE 3-2 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2005

Revision Date     New Information
September 2005
  Support is added for VxVM 4.1 and VxFS 4.1. See "SPARC: Support for VxVM 4.1 and VxFS 4.1" on page 39 in Chapter 2.
June 2005
  Added restriction on placement of SCI cards in hot swap PCI+ (hsPCI+) I/O assemblies. See "Restriction on SCI Card Placement" on page 46.
  Bug ID 6252555, problems with quorum reservations and patch 113277-28 or later. See "Bug ID 6252555" on page 50.
  Support is added for VxVM 4.0 and VxFS 4.0. See "SPARC: Support for VxVM 4.0 and VxFS 4.0" on page 57.
May 2005
  Added restriction on quorum devices when using storage-based data replication. See "Storage-Based Data Replication and Quorum Devices" on page 47.
March 2005
  Bug ID 6210418, process accounting log files on global file systems cause the node to hang. See "Bug ID 6210418" on page 50 in Chapter 2.
January 2005
  Bug ID 6196936, SCSI reset errors when using Cauldron-S and 3310 RAID arrays. See "Bug ID 6196936" on page 49.

TABLE 3-3 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2004

Revision Date     New Information
December 2004
  Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the SPARC platform. For more information, see "Support for Automatic Storage Management (ASM) With Oracle 10g Real Application Clusters on the SPARC Platform" on page 45 in Chapter 2.
November 2004
  The Sun Cluster Support for Oracle Real Application Clusters data service supports Oracle 10g Real Application Clusters on the SPARC platform. For more information, see "Support for Oracle 10g Real Application Clusters on the SPARC Platform" on page 57.
  Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster environment. See "Bug ID 5095543" on page 60 for more information.
September 2004
  Restrictions apply to Sun Cluster installations on x86 based systems. See "Bug ID 5066167" on page 60.
July 2004
  Restrictions apply to the compilation of data services that are written in C++. See "Compiling Data Services That Are Written in C++" on page 59.
June 2004
  Information about support for the Sun StorEdge QFS file system added. See "Support for the Sun StorEdge QFS File System" on page 57.


New Features
In addition to features documented in the Sun Cluster 3.1 4/04 Release Notes for Solaris OS, this
release now includes support for the following features.

SPARC: Support for VxVM 4.0 and VxFS 4.0


A patch to Sun Cluster 3.1 software adds support on Sun Cluster 3.1 4/04 configurations and earlier
for VERITAS Volume Manager 4.0 and VERITAS File System 4.0 software. Download and install the
latest Sun Cluster 3.1 Core/Sys Admin patch from http://www.sunsolve.com. This support addition
is associated with Bug ID 4978425.

Support for the Sun StorEdge QFS File System


Sun Cluster supports failover of a standalone Sun StorEdge QFS file system. For use with Sun Cluster,
Sun StorEdge QFS release 4.1 is required.
For information about how to configure failover of a standalone Sun StorEdge QFS file system with
Sun Cluster, see the following documentation:

- Sun StorEdge QFS and Sun StorEdge SAM-FS Release Notes, part number 817-4094-10

- Sun StorEdge QFS and Sun StorEdge SAM-FS Installation and Configuration Guide, part number
  817-4092-10

- Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, part number
  817-4091-10

Support for Oracle 10g Real Application Clusters on the SPARC Platform

The Sun Cluster Support for Oracle Real Application Clusters data service supports Oracle 10g Real
Application Clusters on the SPARC platform.
For information about how to configure Sun Cluster Support for Oracle Real Application Clusters,
see Sun Cluster Data Service for Oracle Parallel Server/Real Application Clusters Guide for Solaris OS.
Additional information that you require if you are using Oracle 10g Real Application Clusters is
provided in the subsections that follow.

Required Sun Cluster Support for Oracle Real Application Clusters Patches

To use Oracle 10g Real Application Clusters with Sun Cluster Support for Oracle Real Application
Clusters, install the appropriate HA-OPS/RAC patch for your versions of Sun Cluster and the Solaris
Operating System. See the following table.

Sun Cluster Version    Solaris Operating System Version    HA-OPS/RAC Patch Number
3.1                                                        115063-04
3.1                                                        115062-04
3.0                                                        114176-05
3.0                                                        111857-09

Required Sun Cluster Patches


If you are using Sun Cluster 3.1, install the appropriate Core/Sys Admin patch for your version of the
Solaris Operating System. See the following table.

Solaris Operating System Version    Core/Sys Admin Patch Number
                                    113801-11
                                    113800-11

Required Versions of Oracle 10g Real Application Clusters

To use Oracle 10g Real Application Clusters with Sun Cluster Support for Oracle Real Application
Clusters, you must use the appropriate versions of Oracle 10g Real Application Clusters and the
Oracle UDLM:

- Use Oracle 10g Real Application Clusters version 10.1.0.3 with the following Oracle patches:
  3923542
  3849723
  3714210
  3455036

- Use Oracle UDLM version 3.3.4.8 with Oracle patch 3965383.

Installing Oracle 10g Cluster Ready Services (CRS) With Sun Cluster 3.0
During the installation of CRS, you are prompted in the Cluster Configuration screen for the private
name or private IP address for each node. If you are using CRS with Sun Cluster 3.0, you must specify
the private IP address that Sun Cluster assigns to the node. CRS uses this address to interconnect the
nodes in the cluster.
Each node in the cluster has a different private address. To determine the private address of a node,
determine the private address that is plumbed on interface lo0:1.
# ifconfig lo0:1


Installing CRS on a Subset of Sun Cluster Nodes


By default, the Oracle installer installs CRS on all nodes in a cluster. Instructions for installing CRS
on a subset of Sun Cluster nodes are available at the Oracle MetaLink web site
(http://metalink.oracle.com/). See Oracle MetaLink note 280589.1, "How to install Oracle 10g CRS on
a cluster where one or more nodes are not to be configured to run CRS".

Using the Cluster File System

You can store only the following files that are associated with Oracle Real Application Clusters on the
cluster file system:

- Oracle relational database management system (RDBMS) binary files
- Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
- Archived redo log files
- Alert files (for example, alert_sid.log)
- Trace files (*.trc)
- Oracle CRS binary files
- Oracle cluster registry (OCR) files
- Oracle CRS voting disk

Note - You must not store data files, control files, online redo log files, or Oracle recovery files on the
cluster file system.

If you are using the cluster file system with Sun Cluster 3.1, consider increasing the desired number
of secondary nodes for device groups. By increasing the desired number of secondary nodes for
device groups, you can improve the availability of your cluster. To increase the desired number of
secondary nodes for device groups, change the numsecondaries property. For more information, see
the section about multiported disk device groups in Sun Cluster Concepts Guide for Solaris OS.
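
A minimal sketch of changing the numsecondaries property with the scconf command; the device
group name oracle-dg and the value 2 are hypothetical:
# scconf -c -D name=oracle-dg,numsecondaries=2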

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
4/04 release.

Compiling Data Services That Are Written in C++


If you are using Sun Cluster 3.1 and are writing data services in C++, you must compile these data
services in ANSI C++ standard mode.

Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 4/04 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 4/04 release.

Bug ID 5095543
Problem Summary: When using Sun StorEdge 6130 arrays in your cluster, you cannot connect both
host ports of the same controller to the same switch.
Workaround: Connect only one controller host port to a given switch. See Figure 3-1 for an example
of correct cabling.
[FIGURE 3-1 Cabling Sun StorEdge 6130 Arrays: Node 1 and Node 2 each connect through a separate
switch to the controller module, so that only one host port of each controller is attached to a given
switch.]

Bug ID 5066167
Problem Summary: When installing Sun Cluster Software on x86 based systems, you cannot use
autodiscovery.
Workaround: When the installer asks Do you want to use autodiscovery (yes/no) [yes]? answer
no and specify the cluster transport yourself.


Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 4/04 Release Notes for Solaris OS.

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publication of the Software Installation Guide.

CR 6341573
Problem Summary: In the chapters for Solstice DiskSuite/Solaris Volume Manager and VxVM, the
procedure for mirroring the root disk instructs you to skip enabling the localonly property if the
mirror is not connected to multiple nodes. This is incorrect.
Workaround: Always enable the localonly property of the mirror disk, even if the disk does not
have more than one node directly attached to it.
# scconf -c -D name=rawdisk-groupname,localonly=true

C H A P T E R   4

Sun Cluster 3.1 10/03 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 10/03
Release Notes that shipped with the Sun Cluster 3.1 product. These online release notes provide
the most current information on the Sun Cluster 3.1 product. This chapter includes the following
information.

Revision Record on page 63


New Features on page 65
Restrictions and Requirements on page 65
Known Problems on page 67
Known Documentation Problems on page 67

Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.

TABLE 4-1 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2006

Revision Date     New Information
April 2006
January 2006
  Correction to procedures for mirroring the root disk. See "CR 6341573" on page 61.


TABLE 4-2 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2005

Revision Date     New Information
September 2005
  Support is added for VxVM 4.1 and VxFS 4.1. See "SPARC: Support for VxVM 4.1 and VxFS 4.1" on page 39 in Chapter 2.
June 2005
  Support is added for VxVM 4.0 and VxFS 4.0. See "SPARC: Support for VxVM 4.0 and VxFS 4.0" on page 57 in Chapter 3.
May 2005
  Restriction on quorum devices when using storage-based data replication. See "Storage-Based Data Replication and Quorum Devices" on page 47.
March 2005
  Process accounting log files on global file systems cause the node to hang. See "Bug ID 6210418" on page 50 in Chapter 2.

TABLE 4-3 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2003/2004

Revision Date     New Information
December 2004
  Restriction against rolling upgrade and VxVM. See "Restriction on Rolling Upgrade and VxVM" on page 66.
November 2004
  Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster environment. See "Bug ID 5095543" on page 60 for more information.
July 2004
  Restrictions apply to the compilation of data services that are written in C++. See "Compiling Data Services That Are Written in C++" on page 65.
March 2004
  scsetup is not able to add the first adapter to a single-node cluster. See "Bug ID 4983696" on page 67.
  Additional procedures to perform when you add a node to a single-node cluster. See "Software Installation Guide" on page 67.
  Troubleshooting tip to correct stack overflow with VxVM disk device groups. See "Correcting Stack Overflow Related to VxVM Disk Device Groups" on page 69.
  Restriction against using Live Upgrade. See "Live Upgrade is Not Supported" on page 69.
February 2004
  Instruction to set the localonly property on any shared disks that are used to create a root disk group on nonroot disks. See "Setting the localonly Property For a rootdg Disk Group on a Nonroot Disk" on page 69.
  Restriction against creating a swap file using global devices. See "Create swap Files Only on Local Disks" on page 70.
  Lack of support for the Sun StorEdge 3310 JBOD array in a split-bus configuration has been fixed. See "Bug ID 4818874" on page 114 for details.
  Conceptual material and example configurations for using storage-based data replication in a campus cluster. Refer to Chapter 7, "Campus Clustering With Sun Cluster Software", in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
January 2004
  Added a brief description of the newly supported 3-room, 2-node campus cluster. See "Additional Campus Cluster Configuration Examples" in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
November 2003
  Procedure to upgrade Sun Cluster 3.1 10/03 software on a cluster that runs Sun StorEdge Availability Suite 3.1.

New Features
In addition to features documented in the Sun Cluster 3.1 10/03 Release Notes, this release now
includes support for the following features.
There are no new features at this time.

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
10/03 release.

Compiling Data Services That Are Written in C++


If you are using Sun Cluster 3.1 and are writing data services in C++, you must compile these data
services in ANSI C++ standard mode.


Upgrading Sun Cluster 3.1 10/03 Software on Clusters That Run Sun StorEdge Availability Suite 3.1
Software

To ensure proper functioning of Sun StorEdge Availability Suite 3.1 software, you must place the
configuration data for availability services on the quorum disk. Before you upgrade to Sun Cluster
3.1 10/03 software, perform the following procedure on one node in the cluster that runs Sun
StorEdge Availability Suite 3.1 software.

1. Use dscfg to find the device ID and the partition (slice) used by the Sun StorEdge Availability
   Suite 3.1 configuration file.
   # /usr/opt/SUNWscm/sbin/dscfg
   /dev/did/rdsk/d11s7

   In this example, d11 is the device ID and s7 the slice of device d11.

2. Identify the existing quorum device, if any.
   # /usr/cluster/bin/scstat -q
   -- Quorum Votes by Device --
                    Device Name          Present  Possible  Status
                    -----------          -------  --------  ------
     Device votes:  /dev/did/rdsk/d15s2  1        1         Online

   In this example, d15s2 is the existing quorum device.

3. Configure the Availability Suite 3.1 configuration data device as a quorum device.
   # /usr/cluster/bin/scconf -a -q globaldev=/dev/did/rdsk/d11s2

   Quorum devices do not use any of the partition space. The suffix s2 is displayed for syntax
   purposes only. Although they appear to be different, both the Sun StorEdge Availability Suite
   configuration disk (for example, d11s7) and the Sun Cluster quorum disk (for example, d11s2)
   refer to the same disk.

4. Unconfigure the original quorum device.
   # /usr/cluster/bin/scconf -r -q globaldev=/dev/did/rdsk/d15s2

Note - If you are installing Sun Cluster software for the first time, use a slice on the quorum disk for
Sun StorEdge Availability Suite 3.1 configuration data.
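
As an optional confirmation after Step 4, you might list the quorum configuration again to verify
that only the new quorum device remains and is online:
# /usr/cluster/bin/scstat -q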

Restriction on Rolling Upgrade and VxVM


Sun Cluster 3.1 10/03 software does not support rolling upgrade of a cluster that runs VERITAS
Volume Manager (VxVM) software. You must instead follow nonrolling upgrade procedures.

Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 10/03 Release Notes, the
following known problems affect the operation of the Sun Cluster 3.1 10/03 release.

Bug ID 4848612
Problem Summary: When all private interconnects fail in a two-node cluster that is running
Oracle Real Application Clusters with VxVM, the first node might panic with one of the following
messages:

- CMM: Cluster lost operational quorum: aborting.

- Reservation conflict.

The other node occasionally panics because the cluster reconfiguration step cvm return times out.
Workaround: Edit the default /opt/SUNWcvm/etc/cvm.conf file to increase the timing parameter
cvm.return_timeout from 40 seconds to 160 seconds. For further inquiries, contact Brian Reynard,
Software Engineering Manager OS Sustaining Escalations (Sun Cluster), at brian.reynard@sun.com.

Bug ID 4983696
Problem Summary: If scsetup is used in an attempt to add the first adapter to a single-node cluster,
the following error message results: Unable to determine transport type.
Workaround: Configure at least the first adapter manually by using the scconf command:
# scconf -a -A trtype=type,name=nodename,node=nodename

After the first adapter is configured, further use of scsetup to configure the interconnects works as
expected.

Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 10/03 Release Notes.

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publication of the Sun Cluster 3.1 10/03 Software Installation Guide.

Preparing a Single-Node Cluster for Additional Nodes

How to Prepare a Single-Node Cluster for Additional Nodes
To add a node to a single-node cluster, you must first configure the cluster interconnect if it does not
already exist. You must also add the name of the new node to the cluster's authorized-nodes list. In
the procedure "How to Install Sun Cluster Software on Additional Cluster Nodes (scinstall)",
perform the following additional steps before you run the scinstall command.

1. From the existing cluster node, determine whether two cluster interconnects already exist.
   You must have at least two cables or two adapters configured.
   # scconf -p | grep cable
   # scconf -p | grep adapter

   - If the output shows configuration information for two cables or for two adapters, skip to Step 3.

   - If the output shows no configuration information for either cables or adapters, or shows
     configuration information for only one cable or adapter, proceed to Step 2.

2. Configure new cluster interconnects.
   a. On the existing cluster node, start the scsetup(1M) utility.
      # scsetup
      The Main Menu is displayed.
   b. Select Cluster interconnect.
   c. Select Add a transport cable.
      Follow the instructions to specify the name of the node to add to the cluster, the name of a
      transport adapter, and whether to use a transport junction.
   d. If necessary, repeat Step c to configure a second cluster interconnect.
      When finished, quit the scsetup utility.
   e. Verify that the cluster now has two cluster interconnects configured.
      # scconf -p | grep cable
      # scconf -p | grep adapter
      The command output should show configuration information for at least two cluster
      interconnects.

3. Add the new node to the cluster authorized-nodes list.
   a. On any active cluster member, start the scsetup(1M) utility.
      # scsetup
      The Main Menu is displayed.
   b. Select New nodes.
   c. Select Specify the name of a machine which may add itself.
   d. Follow the prompts to add the node's name to the list of recognized machines.
   e. Verify that the task has succeeded.
      The scsetup utility prints the message "Command completed successfully" if the task completes
      without error.
   f. Quit the scsetup utility.

Correcting Stack Overflow Related to VxVM Disk Device Groups

If you experience a stack overflow when a VxVM disk device group is brought online, the default
value of the thread stack size might be insufficient. To increase the thread stack size, add the
following entry to the /etc/system file on each node. Set the value for size to a number that is greater
than 8000, which is the default setting.
set cl_comm:rm_thread_stacksize=0xsize
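
For example, a value of 0x9000, which is greater than the 8000 default, might be set as follows; the
exact value that you choose is site specific.
set cl_comm:rm_thread_stacksize=0x9000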

Live Upgrade is Not Supported


In the procedure "How to Upgrade the Solaris Operating Environment (Nonrolling)", the table in
Step 5 is incorrect. For a cluster that uses Solstice DiskSuite/Solaris Volume Manager as the volume
manager, the table's "Procedure to Use" column says "Upgrading Solaris software". It should instead
say "Any Solaris upgrade method except the Live Upgrade method". The Solaris Live Upgrade
method is not yet supported in a Sun Cluster configuration.

Setting the localonly Property For a rootdg Disk Group on a Nonroot Disk

In the procedure "How to Create a rootdg Disk Group on a Nonroot Disk", you must perform an
additional step if the root disk group contains one or more disks that connect to two or more nodes.
Perform the following step after vxinstall processing has completed.

How to Set the localonly Property For a rootdg Disk Group on a Nonroot Disk

Enable the localonly property of the raw-disk device group for each shared disk in the root disk
group.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node
in its node list. This usage prevents unintentional fencing of the node from the device that is used by
the root disk group if that device is connected to multiple nodes.
# scconf -c -D name=dsk/dN,localonly=true

For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

Create swap Files Only on Local Disks

If after installation you intend to create a swap file, do not create the swap file on a global device. Use
only a local disk as a swap device for the node.

C H A P T E R   5

Sun Cluster Data Services 3.1 10/03 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Data
Service 5/03 Release Notes that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.

Revision Record on page 71


New Features on page 72
Restrictions and Requirements on page 73
Known Problems on page 74
Known Documentation Problems on page 75

Revision Record
The following table lists the information contained in this chapter and provides the revision date for
this information.

TABLE 5-1 Sun Cluster Data Services 3.1 10/03 Release Notes Supplement Revision Record: 2003/2004

Revision Date     New Information
December 2004
  Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the SPARC platform. For more information, see "Support for Automatic Storage Management (ASM) With Oracle 10g Real Application Clusters on the SPARC Platform" on page 45 in Chapter 2.
November 2004
  The Sun Cluster Support for Oracle Real Application Clusters data service supports Oracle 10g Real Application Clusters on the SPARC platform. For more information, see "Support for Oracle 10g Real Application Clusters on the SPARC Platform" on page 57.
  Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster environment. See "Bug ID 5095543" on page 60 for more information.
May 2004
  The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 10/03 now supports Oracle 10g. See "Support for Oracle 10g" on page 72.
February 2004
  Bug ID 4818874, lack of support for the Sun StorEdge 3310 JBOD array in a split-bus configuration, has been fixed. See "Bug ID 4818874" on page 114 for details.
  Conceptual material and example configurations for using storage-based data replication in a campus cluster. Refer to Chapter 7, "Campus Clustering With Sun Cluster Software", in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
December 2003
  Problem of using NIS for naming services in a cluster that runs Sun Cluster HA for SAP liveCache. See "Sun Cluster HA for liveCache nsswitch.conf requirements for passwd make NIS unusable (4904975)" on page 75.
November 2003
  Procedure and examples to upgrade data services that cannot be upgraded by using the scinstall utility. See "Some Data Services Cannot be Upgraded by Using the scinstall Utility" on page 74.
  Support for WebLogic Server 8.x. See "WebLogic Server Version 8.x" on page 73.

New Features
In addition to features documented in the Sun Cluster 3.1 Data Service 5/03 Release Notes, this release
now includes support for the following features.

Support for Oracle 10g

The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 10/03 now supports
Oracle 10g.
If you are using Sun Cluster HA for Oracle with Oracle 10g, an attempt by the init(1M) command
to start the Oracle cssd daemon might cause unnecessary error messages to be displayed. These error
messages are displayed if the Oracle binary files are installed on a highly available local file system or
on the cluster file system. The messages are displayed repeatedly until the file system where the
Oracle binary files are installed is mounted.
These error messages are as follows:
INIT: Command is respawning too rapidly. Check for possible errors.
id: h1 "/etc/init.d/init.cssd run >/dev/null 2>&1 >/dev/null"
Waiting for filesystem containing $CRSCTL.

These messages are displayed if the following events occur:

- A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
  never mounted.

- A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
  mounts the file system where the Oracle binary files are installed.

- Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
  such a configuration, the Oracle binary files are installed on a highly available local file system. In
  this situation, the messages are displayed on the console of the node where the Oracle installation
  was run.

To prevent these error messages, remove the entry for the Oracle cssd daemon from the
/etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove
the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null

Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.

Caution - If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.
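
If you are not using Real Application Clusters, a minimal sketch of removing the entry by hand is
shown below; init q simply causes init to re-read /etc/inittab after you delete the line with a
text editor.
# grep init.cssd /etc/inittab
# vi /etc/inittab
# init q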

WebLogic Server Version 8.x


System administration instructions for Sun Cluster for WebLogic Server apply also to WebLogic
Server version 8.x. For documentation on Sun Cluster for WebLogic Server, see the Sun Cluster Data
Service for WebLogic Server Guide for Solaris OS.

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
Data Services 10/03 release.
There are no known restrictions and requirements at this time.


Known Problems
In addition to known problems that are documented in the Sun Cluster 3.1 Data Service 5/03 Release
Notes, the following known problems affect the operation of the Sun Cluster 3.1 Data Services 10/03
release.

Some Data Services Cannot be Upgraded by Using the scinstall Utility

The data services for the following applications cannot be upgraded by using the scinstall utility:

- Apache Tomcat
- DHCP
- mySQL
- Oracle E-Business Suite
- Samba
- SWIFTAlliance Access
- WebLogic Server
- WebSphere MQ
- WebSphere MQ Integrator

If you plan to upgrade a data service for an application in the preceding list, replace Step 5 in the
procedure "Upgrading to Sun Cluster 3.1 10/03 Software (Rolling)" in Sun Cluster 3.1 10/03 Software
Installation Guide with the steps that follow. Perform these steps for each node where the data
service is installed.

How to Upgrade Data Services That Cannot be Upgraded by Using scinstall

1. Remove the software package for the data service that you are upgrading.
   # pkgrm pkg-inst

   pkg-inst specifies the software package name for the data service that you are upgrading, as listed in
   the following table.

Application                          Data Service Software Package
Apache Tomcat                        SUNWsctomcat
DHCP                                 SUNWscdhc
mySQL                                SUNWscmys
Oracle E-Business Suite              SUNWscebs
Samba                                SUNWscsmb
SWIFTAlliance Access                 SUNWscsaa
WebLogic Server (English locale)     SUNWscwls
WebLogic Server (French locale)      SUNWfscwls
WebLogic Server (Japanese locale)    SUNWjscwls
WebSphere MQ                         SUNWscmqs
WebSphere MQ Integrator              SUNWscmqi

2. Install the software package for the version of the data service to which you are upgrading.
   To install the software package, follow the instructions in the Sun Cluster documentation for the data
   service that you are upgrading. This documentation is available in the Sun Cluster 3.1 10/03 Data
   Services Collection at http://docs.sun.com/db/coll/573.11.
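
For example, after changing to the directory that contains the new package (the location depends on
your installation media), the Apache Tomcat data service package from the table would be added
roughly as follows:
# pkgadd -d . SUNWsctomcat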

Sun Cluster HA for liveCache nsswitch.conf requirements for passwd make NIS unusable
(4904975)

The requirements for the nsswitch.conf file in "Preparing the Nodes and Disks" do not apply to the
entry for the passwd database.
The entry in the /etc/nsswitch.conf file for the passwd database should be as follows:
passwd: files nis [TRYAGAIN=0]

Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 Data Service 5/03 Release Notes.
There are no known problems at this time.


C H A P T E R   6

Sun Cluster 3.1 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Release
Notes that shipped with the Sun Cluster 3.1 product. These online release notes provide the most
current information on the Sun Cluster 3.1 product. This chapter includes the following
information.

Revision Record on page 77


New Features on page 80
Fixed Problems on page 82
Restrictions and Requirements on page 81
Known Problems on page 82
Known Documentation Problems on page 83

Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.

TABLE 6-1 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2006

Revision Date     New Information
January 2006
  Correction to procedures for mirroring the root disk. See "CR 6341573" on page 61.

TABLE 6-2 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2005

Revision Date     New Information
September 2005
  Support is added for VxVM 4.1 and VxFS 4.1. See "SPARC: Support for VxVM 4.1 and VxFS 4.1" on page 39 in Chapter 2.
June 2005
  Support is added for VxVM 4.0 and VxFS 4.0. See "SPARC: Support for VxVM 4.0 and VxFS 4.0" on page 57 in Chapter 3.
March 2005
  Process accounting log files on global file systems cause the node to hang. See "Bug ID 6210418" on page 50 in Chapter 2.

TABLE 6-3 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2004

Revision Date     New Information
November 2004
  Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster environment. See "Bug ID 5095543" on page 60 for more information.
July 2004
  Restrictions apply to the compilation of data services that are written in C++. See "Compiling Data Services That Are Written in C++" on page 81.
March 2004
  Troubleshooting tip to correct stack overflow with VxVM disk device groups. See "Correcting Stack Overflow Related to VxVM Disk Device Groups" on page 84.
  Restriction against using the Live Upgrade method to upgrade Solaris software. See Step 5 of "How to Upgrade the Solaris Operating Environment" in Appendix F.
February 2004
  Lack of support for the Sun StorEdge 3310 JBOD array in a split-bus configuration has been fixed. See "Bug ID 4818874" on page 114 for details.
  Conceptual material and example configurations for using storage-based data replication in a campus cluster. Refer to Chapter 7, "Campus Clustering With Sun Cluster Software", in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
January 2004
  Added a brief description of the newly supported 3-room, 2-node campus cluster. See Chapter 7, "Campus Clustering With Sun Cluster Software", in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

TABLE 6-4 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2003

Revision Date     New Information
December 2003
  Sun StorEdge 6120 storage arrays in dual-controller configurations and Sun StorEdge 6320 storage systems were limited to four nodes and 16 LUNs. The bug was fixed (see "Bug ID 4840853" on page 82).
November 2003
  The onerror=lock and onerror=umount mount options are not supported on cluster file systems. See "Bug ID 4781666" on page 82.
  To upgrade a cluster that uses mediators, you must remove the mediators before you upgrade to Sun Cluster 3.1 software, then recreate the mediators after the cluster software is upgraded. See "Upgrading a Cluster That Uses Mediators" on page 84.
  Additional information about the restriction on IPv6 addressing. See "Clarification of the IPv6 Restriction" on page 82.
  Logical volumes are not supported with the Sun StorEdge 3510 FC storage array. See the Preface of the Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for more information.
October 2003
  Certain RPC program numbers are reserved for Sun Cluster software use. See "Reserved RPC Program Numbers" on page 81.
  Clarification about which name to use for disk slices when you create state database replicas. See "How to Create State Database Replicas" on page 85.
  Upgrade from Sun Cluster 3.0 software on the Solaris 8 Operating System to Sun Cluster 3.1 software on the Solaris 9 Operating System removes dual-string mediators. See "Bug ID 4920156" on page 83.
  Updated VxVM Dynamic Multipathing (DMP) restrictions. See "Dynamic Multipathing (DMP)" on page 111 for more information.
August 2003
  Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a subset of cluster nodes. See "Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes" on page 89.
July 2003
  Revised support for the Multiple Masters configuration of Sun Cluster HA for Sun ONE Application Server. See "Sun Cluster 3.1 Data Service for Sun ONE Application Server" on page 91.
June 2003
  Procedures to upgrade a Sun Cluster 3.0 configuration to Sun Cluster 3.1 software, including upgrading from Solaris 8 to Solaris 9 software. See Appendix F.
  Modifications to make to the /etc/system file to correct changes made by VxFS installation. See "Required VxFS Default Stack Size Increase" on page 81.
  Procedures to support Sun StorEdge 6320 storage systems. See Chapter 1, "Installing and Maintaining a Sun StorEdge 6320 System", in Sun Cluster 3.0-3.1 With Sun StorEdge 6320 System Manual for Solaris OS.
  Sun StorEdge 6120 storage arrays in dual-controller configurations and Sun StorEdge 6320 storage systems were limited to four nodes and 16 LUNs (see "Bug ID 4840853" on page 82). This restriction has been removed for clusters using the 3.1 firmware.
  Procedures to support the Sun StorEdge 3510 FC storage device. See the Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array Manual.
  Sun StorEdge 3510 FC storage arrays are no longer limited to 256 LUNs per channel. See "Bug ID 4867584" on page 82.
  Sun StorEdge 3510 FC storage arrays are limited to one node per channel. See "Bug ID 4867560" on page 83.
  Requirements for storage topologies. See "Storage Topologies Replaced by New Requirements" on page 112.
  Relaxed requirements for shared storage. See "Shared Storage Restriction Relaxed" on page 112.

New Features
In addition to features documented in the Sun Cluster 3.1 Release Notes, this release now includes
support for the following features.

Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes

To enable Sun Cluster Support for Oracle Real Application Clusters on a subset of cluster nodes, see
"Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes" on page 89.


Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
release.

Compiling Data Services That Are Written in C++


If you are using Sun Cluster 3.1 and are writing data services in C++, you must compile these data
services in ANSI C++ standard mode.

Reserved RPC Program Numbers

If you install an RPC service on the cluster, the service must not use any of the following program
numbers:

- 100141
- 100142
- 100248

These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and rgmd,
respectively. If the RPC service that you install also uses one of these program numbers, you must
change that RPC service to use a different program number.
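
One way to check whether an existing RPC service on a node already registers one of these numbers
is to list the registered programs, for example:
# rpcinfo -p | egrep '100141|100142|100248'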

Changing Quorum Device Connectivity

When you increase or decrease the number of node attachments to a quorum device, the quorum
vote count is not automatically recalculated. You can reestablish the correct quorum vote if you
remove all quorum devices and then add them back into the configuration.
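
A minimal sketch of that reset for a hypothetical quorum device d12; repeat the removal for each
configured quorum device before adding the devices back:
# scconf -r -q globaldev=/dev/did/rdsk/d12s2
# scconf -a -q globaldev=/dev/did/rdsk/d12s2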

Required VxFS Default Stack Size Increase

The default stack size that VERITAS File System (VxFS) sets during installation is 0x4000. However,
this value is inadequate for Sun Cluster software and might lead to a system panic. If you install VxFS
on a Sun Cluster configuration, you must reset the stack size by making the following modifications
to entries in the /etc/system file on each cluster node.
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000

The first line changes the value of the rpcmod:svc_default_stksize variable from 0x4000 to
0x8000. The second line sets the value of the lwp_default_stksize variable to 0x6000.


Clarification of the IPv6 Restriction

Sun Cluster software does not support IPv6. However, network interfaces on a cluster node can host
IPv6 addressing as long as those interfaces are not used by Sun Cluster services or facilities.

Fixed Problems
The following problems identified in previous release notes supplements are now resolved.

Bug ID 4840853
Problem Summary: Due to memory segmentation issues, if you configured the StorEdge 6120 or
StorEdge 6320 storage system with four nodes and more than 16 LUNs, the storage device might fail
and cause your data to be compromised.
Problem Fixed: When using a StorEdge 6120 or StorEdge 6320 storage system with the version 3.1
firmware (or later), you no longer must limit your configuration to 16 LUNs. Instead, the limit is 64
LUNs.

Bug ID 4867584
Problem Summary: If you had 512 LUNs in a direct-attach storage configuration with Sun StorEdge
3510 FC storage arrays, LUNs might be lost when the server rebooted.
Problem Fixed: This bug is fixed when using both of the following items:

- 3.27R firmware or later (which is contained in patch 113723-07 or later)

- SAN Foundation Kit 4.3 software or later

Known Problems
In addition to known problems documented in the Sun Cluster 3.1 Release Notes, the following
known problems affect the operation of the Sun Cluster 3.1 release.

Bug ID 4781666
Problem Summary: Use of the onerror=umount mount option or the onerror=lock mount option
might cause the cluster file system to lock or become inaccessible if the cluster file system experiences
file corruption. Or, use of these mount options might cause the cluster file system to become
unmountable. The cluster file system might then cause applications to hang or prevent them from
being killed. The node might require rebooting to recover from these states.


Workaround: Do not specify the onerror=umount or onerror=lock mount option. The
onerror=panic mount option, which is supported by Sun Cluster software, does not need to be
specified. It is already the default value.

Bug ID 4863254
Problem Summary: Due to a Solaris bug (4511634), Sun Cluster 3.1 does not provide the ability to
auto-create IPMP groups when you add a logical host.
Workaround: You must manually create an IPMP group when you add a logical host.
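
A minimal sketch of creating an IPMP group manually; the interface name qfe0 and the group name
sc_ipmp0 are hypothetical, and a production group would normally also configure test addresses:
# ifconfig qfe0 group sc_ipmp0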

Bug ID 4867560
Problem Summary: When two nodes are connected to the same channel of a Sun StorEdge 3510 FC
storage array, rebooting one node causes the other node to lose the SCSI-2 reservation.
Workaround: You can only connect one node per channel on the Sun StorEdge 3510 FC storage
arrays.

Bug ID 4920156
Problem Summary: When performing an upgrade from Sun Cluster 3.0 software on Solaris 8
software with Solstice DiskSuite 4.2.1 to Sun Cluster 3.1 software on Solaris 9 software with Solaris
Volume Manager, the dual-string mediators are removed.
Workaround: Remove mediators before you upgrade the cluster, then recreate them after the cluster
is upgraded.

Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 Release Notes.

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publication of the Sun Cluster 3.1 Software Installation Guide.

Correcting Stack Overflow Related to VxVM Disk Device Groups


If you experience a stack overflow when a VxVM disk device group is brought online, the default value of the thread stack size might be insufficient. To increase the thread stack size, add the following entry to the /etc/system file on each node. Set the value for size to a number that is greater than 8000, which is the default setting.
set cl_comm:rm_thread_stacksize=0xsize
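For example, assuming you choose the illustrative value 0x10000 (any value greater than the default of 8000 is acceptable), the entry would read:
set cl_comm:rm_thread_stacksize=0x10000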

Upgrading a Cluster That Uses Mediators


To upgrade a Sun Cluster 3.0 configuration that uses mediators to Sun Cluster 3.1 software, you must unconfigure the mediators before you upgrade the cluster software. Then, after you upgrade the cluster software, you must reconfigure the mediators. Add the following steps to the procedures that you perform from Upgrading Sun Cluster Software in Sun Cluster 3.1 Software Installation Guide.

How to Upgrade a Cluster That Uses Mediators


1. Perform the steps to prepare the cluster for upgrade, but do not shut down the cluster.

2. Unconfigure the mediators.


a. Run the following command to verify that no mediator data problems exist.
# medstat -s setname

-s setname

Specifies the diskset name

If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure to fix bad mediator data in Configuring Mediators in Sun Cluster 3.1 Software Installation Guide.
b. List all mediators.
Use this information when you restore the mediators during Step 4.
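One way to list the mediators, assuming that the metaset output of your Solstice DiskSuite or Solaris Volume Manager release includes a Mediator Host(s) section, is to print the status of each diskset and record the mediator hosts that are shown there:
# metaset -s setname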
c. For a diskset that uses mediators, take ownership of the diskset if no node already has
ownership.
# metaset -s setname -t

-t

Takes ownership of the diskset

d. Unconfigure all mediators for the diskset.


# metaset -s setname -d -m mediator-host-list

-s setname

Specifies the diskset name

-d

Deletes from the diskset

-m mediator-host-list

Specifies the name of the node to remove as a mediator host for the diskset

See the mediator(7D) man page for further information about mediator-specific options to the
metaset command.
e. Repeat Step c through Step d for each remaining diskset that uses mediators.
3. Shut down the cluster and continue to follow the procedures to upgrade the Sun Cluster software.

4. After all nodes are upgraded and booted back into the cluster, reconfigure the mediators.
a. Determine which node has ownership of a diskset to which you will add the mediator hosts.
# metaset -s setname

-s setname

Specifies the diskset name

b. If no node has ownership, take ownership of the diskset.


# metaset -s setname -t

-t

Takes ownership of the diskset

c. Recreate the mediators.


# metaset -s setname -a -m mediator-host-list

-a

Adds to the diskset

-m mediator-host-list

Specifies the names of the nodes to add as mediator hosts for the diskset

d. Repeat Step a through Step c for each diskset in the cluster that uses mediators.
5. Perform any remaining upgrade tasks to complete the cluster upgrade.

How to Create State Database Replicas


When you use the metadb -af command to create state database replicas on local disks, use the
physical disk name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.
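For example, the following command creates three replicas on slice 7 of a hypothetical local disk c0t0d0:
# metadb -af -c 3 c0t0d0s7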

C H A P T E R   7

Sun Cluster Data Services 3.1 5/03 Release Notes Supplement

This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Data
Service 5/03 Release Notes that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.

Revision Record on page 87


New Features on page 88
Restrictions and Requirements on page 90
Known Problems on page 91
Known Documentation Problems on page 91

Revision Record
The following table lists the information contained in this chapter and provides the revision date for
this information.
TABLE 7-1 Sun Cluster Data Services 3.1 5/03 Release Notes Supplement Revision Record: 2003/2004
Revision Date

New Information

December 2004

Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the SPARC platform. For more information, see IPv6 Support and Restrictions for Public Networks on page 51.

November 2004

The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information, see
Support for Oracle 10g Real Application Clusters on the SPARC Platform on page 57.

May 2004

The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 5/03 now
supports Oracle 10g. See Support for Oracle 10g on page 88.

February 2004

Lack of support for Sun StorEdge 3310 JBOD array in a split-bus configuration has been fixed. See BugId 4818874 on page 114 for details.
Conceptual material and example configurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.

July 2003

Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a
subset of cluster nodes. See Sun Cluster Support for Oracle Real Application Clusters
on a Subset of Cluster Nodes on page 89.
Revised support for Multiple Masters configuration of Sun Cluster HA for Sun ONE
Application Server. See Sun Cluster 3.1 Data Service for Sun ONE Application Server
on page 91.

New Features
In addition to features documented in Sun Cluster 3.1 Data Service 5/03 Release Notes, this release
now includes support for the following features.

Support for Oracle 10g


The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 5/03 now supports
Oracle 10g.
If you are using Sun Cluster HA for Oracle with Oracle 10g, an attempt by the init(1M) command
to start the Oracle cssd daemon might cause unnecessary error messages to be displayed. These error
messages are displayed if the Oracle binary files are installed on a highly available local file system or on the cluster file system. The messages are displayed repeatedly until the file system where the Oracle binary files are installed is mounted.
These error messages are as follows:
INIT: Command is respawning too rapidly. Check for possible errors.
id: h1 "/etc/init.d/init.cssd run >/dev/null 2>&1 >/dev/null"
Waiting for filesystem containing $CRSCTL.

These messages are displayed if the following events occur:

A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
never mounted.

A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
mounts the file system where the Oracle binary files are installed.

Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
such a configuration, the Oracle binary files are installed on a highly available local file system. In this situation, the messages are displayed on the console of the node where the Oracle installation
was run.

To prevent these error messages, remove the entry for the Oracle cssd daemon from the /etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null

Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.
Caution If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.

Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes
You can enable Sun Cluster Support for Oracle Real Application Clusters on a subset of cluster
nodes. Install the data service packages only on the nodes that are configured to run Oracle Real
Application Clusters. You are not required to install data service packages on nodes that will not run
Oracle Real Application Clusters. For a list of packages and installation instructions, see Sun Cluster
Data Service for Oracle Real Application Clusters Guide for Solaris OS.
Note Restrictions apply when you use Sun Cluster Support for Oracle Real Application Clusters

with hardware RAID support or VxVM with the cluster feature. The Sun Cluster Support for Oracle
Real Application Clusters software must be installed only on the cluster nodes that are directly
attached to the shared storage used by Oracle Real Application Clusters.

Adding Sun Cluster Support for Oracle Real Application Clusters to Selected Nodes
Add Sun Cluster Support for Oracle Real Application Clusters to selected nodes in the following
situations:

You are adding nodes to a cluster and you plan to run Sun Cluster Support for Oracle Real
Application Clusters on the nodes.

You are enabling Sun Cluster Support for Oracle Real Application Clusters on a node.

To add Sun Cluster Support for Oracle Real Application Clusters to selected nodes, install the required data service software packages on the nodes. The storage management scheme that you are using
determines which packages to install. For installation instructions, see Sun Cluster Data Service for
Oracle Real Application Clusters Guide for Solaris OS.

Removing Sun Cluster Support for Oracle Real Application Clusters From a Node
To remove Sun Cluster Support for Oracle Real Application Clusters from selected nodes, remove
software packages from the nodes. The storage management scheme that you are using determines
which packages to remove.

How to Remove Sun Cluster Support for Oracle Real Application Clusters From a Node


1. Become superuser.

2. Boot the nodes from which you are removing Sun Cluster Support for Oracle Real Application Clusters in noncluster mode.
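For example, on a SPARC based node you would typically boot into noncluster mode from the OpenBoot PROM prompt:
ok boot -x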

3. Uninstall from each node the Sun Cluster Support for Oracle Real Application Clusters software packages for the storage management scheme that you are using.

If you are using VxVM with the cluster feature, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm

If you are using hardware RAID support, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWschwr

If you are using the cluster le system, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.1
Data Service 5/03 release.
There are no known restrictions and requirements at this time.

Known Problems
In addition to known problems documented in the Sun Cluster 3.1 Data Service 5/03 Release Notes,
the following known problems affect the operation of the Sun Cluster 3.1 Data Service 5/03 release.
There are no known problems at this time.

Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.1 Data Service 5/03 Release Notes.

Sun Cluster 3.1 Data Service for NetBackup


This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Netbackup.
The Sun Cluster HA for NetBackup Overview section should state that in a Sun Cluster
environment, robotic control is only supported on media servers and not on the NetBackup master
server running on Sun Cluster software.

Sun Cluster 3.1 Data Service for Sun ONE Application Server
Do not configure the Sun Cluster HA for Sun Java System Application Server as a resource that is mastered on multiple nodes at the same time. The multiple masters configuration is not supported. Only the failover configuration is supported. For information about supported configurations,
contact your Sun service representative.

Release Notes
The following subsections describe omissions or new information that will be added to the next publication of the Release Notes.

C H A P T E R   8

Sun Cluster 3.0 5/02 Release Notes Supplement

This document supplements the standard user documentation, including the Sun Cluster 3.0 5/02
Release Notes that shipped with the Sun Cluster 3.0 product. These online release notes provide
the most current information on the Sun Cluster 3.0 product. This document includes the following
information.

Revision Record on page 93


New Features on page 102
Restrictions and Requirements on page 111
Known Problems on page 114
Known Documentation Problems on page 117

Revision Record
The following tables list the information contained in this document and provide the revision date
for this information.
TABLE 8-1 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: 2006
Revision Date

New Information

January 2006

Correction to procedures for mirroring the root disk. See CR 6341573 on page 61.

TABLE 8-2 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2005
Revision Date

New Information

March 2005

Process accounting log files on global file systems cause the node to hang. See Bug ID
6210418 on page 50 in Chapter 2.
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 103.

TABLE 8-3 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2004
Revision Date

New Information

March 2005

Bug ID 6210418, Process accounting log files on global file systems cause the node to
hang. See Bug ID 6210418 on page 50 in Chapter 2.
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 103.

December 2004

Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.

November 2004

The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information,
see Support for Oracle 10g Real Application Clusters on the SPARC Platform
on page 57.
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.

July 2004

Restrictions apply to the compilation of data services that are written in C++. See
Compiling Data Services That Are Written in C++ on page 111.

May 2004

The Sun Cluster HA for Oracle data service in Sun Cluster 3.0 5/02 now supports
Oracle 10g. See Support for Oracle 10g on page 103.

March 2004

Troubleshooting tip to correct stack overflow with VxVM disk device groups. See Correcting Stack Overflow Related to VxVM Disk Device Groups on page 119.

February 2004

Bug ID 4818874, lack of support for Sun StorEdge 3310 JBOD array in a split-bus
configuration, has been fixed. See BugId 4818874 on page 114 for details.
Conceptual material and example configurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.

January 2004

Added a brief description of the newly supported 3-room, 2-node campus cluster. See
Chapter 7, Campus Clustering With Sun Cluster Software, in Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
Correction to path to the dlmstart.log file in Oracle UDLM Requirement on page
113.

TABLE 8-4 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2003
Revision Date

New Information

December 2003

Sun StorEdge 6120 storage arrays in dual-controller configurations and Sun StorEdge
6320 storage systems were limited to four nodes and 16 LUNs. The restriction has been
removed. See Bug ID 4840853 on page 82.

November 2003

The onerror=lock and onerror=umount mount options are not supported on cluster
file systems. See Bug ID 4781666 on page 82.
Sun Cluster 3.0 12/01 System Administration Guide: The correct caption for Table 5-2
is Task Map: Dynamic Reconfiguration with Cluster Interconnects.
Logical volumes are not supported with the Sun StorEdge 3510 FC storage array. See
the Preface of the Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array
Manual for more information.

October 2003

Added omission from the Installing and Configuring Sun Cluster HA for NetBackup chapter of the Data Service Installation and Configuration Guide. See Sun Cluster Data
Service for NetBackup on page 126.
Certain RPC program numbers are reserved for Sun Cluster software use. See
Reserved RPC Program Numbers on page 111.
Clarification about which name to use for disk slices when you create state database
replicas. See How to Create State Database Replicas on page 126.
Updated VxVM Dynamic Multipathing (DMP) restrictions. See Dynamic
Multipathing (DMP) on page 111 for more information.

August 2003

Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a
subset of cluster nodes. See Sun Cluster Support for Oracle Real Application Clusters
on a Subset of Cluster Nodes on page 89.
The Sun Cluster HA for NetBackup data service in Sun Cluster 3.0 5/02 now supports
VERITAS NetBackup 4.5. See Support for VERITAS NetBackup 4.5 on page 104.

June 2003

Procedures to upgrade a Sun Cluster 3.0 configuration to Sun Cluster 3.1 software,
including upgrading from Solaris 8 to Solaris 9 software. See Appendix F.
Procedures to support Sun StorEdge 6320 storage systems. See Chapter 1, Installing
and Maintaining a Sun StorEdge 6320 System, in Sun Cluster 3.0-3.1 With Sun
StorEdge 6320 System Manual for Solaris OS.
Sun StorEdge 6120 storage arrays in dual-controller configurations and Sun StorEdge 6320 storage systems were limited to four nodes and 16 LUNs. The restriction has been removed (see Bug ID 4840853 on page 82).
Procedures to support Sun StorEdge 3510 FC storage array. See the Sun Cluster 3.0-3.1
With Sun StorEdge 3510 or 3511 FC RAID Array Manual .
Sun StorEdge 3510 FC storage arrays are limited to 256 LUNs per channel. See Bug ID
4867584 on page 82.
Sun StorEdge 3510 FC storage arrays are limited to one node per channel. See Bug ID
4867560 on page 83.

May 2003

How to create node-specific files and directories for use with Oracle Real Application Clusters on the cluster file system. See Creating Node-Specific Files and Directories for Use With Oracle Real Application Clusters Software on the Cluster File System
on page 129 for more information.
New bge(7D) Ethernet adapter requires patches and modified installation procedure.
See BugID 4838619 on page 116 for more information.
Increased stack-size settings are required when using VxFS. See Bug ID 4662264
on page 115 for more information.

April 2003

Procedures to support Sun StorEdge 6120 storage arrays. See Chapter 1, Installing and
Maintaining a Sun StorEdge 6120 Array, in Sun Cluster 3.0-3.1 With Sun
StorEdge 6120 Array Manual for Solaris OS.
Added VxVM Dynamic Multipathing (DMP) restrictions. See Dynamic Multipathing
(DMP) on page 111 for more information.
Bug ID 4818874, lack of support for Sun StorEdge 3310 JBOD array in a split-bus
configuration, has been fixed. See BugId 4818874 on page 114 for details.
PCI Dual Ultra3 SCSI host adapter needs jumpers set for manual termination. See
BugId 4836405 on page 116 for more information.
Added information on support for Oracle Real Application Clusters on the cluster file
system. See Support for Oracle Real Application Clusters on the Cluster File System
on page 109.
Added information on using the Sun Cluster LogicalHostname resource with Oracle
Real Application Clusters. See Using the Sun Cluster LogicalHostname Resource
With Oracle Real Application Clusters on page 129.
Sun Cluster HA for SAP now supports the SAP J2EE engine and SAP Web dispatcher
configurations. For more information, see Configuring an SAP J2EE Engine Cluster and an SAP Web Dispatcher on page 126.
Revised procedures on how to install and configure Sun Cluster HA for SAP
liveCache. See Appendix B.

March 2003

Revised support for installation of the Remote Shared Memory Reliable Datagram
Transport (RSMRDT) driver. See Appendix D.
Revised How to Register and Configure Sun Cluster for SAP liveCache procedure.
See Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Documentation bug in scconf_transp_adap_sci(1M) man page. See
scconf_transp_adap_sci Man Page on page 135.
Updated revised procedure on how to replace a disk drive in a StorEdge A5x00 storage
array. See the Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual.

February 2003

Revised procedures to support Sun Cluster HA for SAP on SAP 6.20. See Appendix E.
Virtual Local Area Network (VLAN) support expanded. See the Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
Procedures to support Sun StorEdge 9900 Dynamic Link Manager. See the Sun
Cluster 3.0-3.1 With Sun StorEdge 9900 Series Storage Device Manual.
Revised scconf_transp_adap_wrsm(1M) man page to support a Sun Fire Link-based cluster interconnect. See scconf_transp_adap_wrsm Man Page on page 135.
Procedures to support a Sun Fire Link-based cluster interconnect. See the Sun
Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

January 2003

Requirements on how to change connectivity to a quorum device. See Changing Quorum Device Connectivity on page 81.
Support for daisy-chaining Sun StorEdge A1000 storage arrays. See the Sun
Cluster 3.0-3.1 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500
System Manual.
Support for Cluster interconnects over Virtual Local Area Networks. See the Sun
Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Support for installation of the Remote Shared Memory Reliable Datagram Transport
(RSMRDT) driver. See Appendix D.

TABLE 8-5 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2002

Revision Date

New Information

December 2002

Revised procedures on how to install and configure Sun Cluster HA for SAP
liveCache. See Appendix B.

November 2002

Revised SUNW.HAStoragePlus.5 man page to correct the Notes section and include
FilesystemCheckCommand extension property. See SUNW.HAStoragePlus.5 on page
136.
Sun Cluster HA for Sun ONE Web Server now supports Sun ONE Proxy Server. See
Support for Sun ONE Proxy Server on page 128.
Name to use to configure SCI-PCI adapters for the cluster interconnect. See Names for
SCI-PCI Adapters on page 125.
Requirements for storage topologies. See Storage Topologies Replaced by New
Requirements on page 112.
Support for Dynamic Reconguration with the Sun Fire V880 system and Sun Cluster
software. See Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Correction to the planning statement on how to connect quorum devices to nodes. See
Quorum Device Connection to Nodes on page 125.
Removal of the step on how to add nodes to the authentication list before you install
VERITAS Volume Manager. See New Features on page 102.
Package dependency to upgrade Sun Cluster HA for NFS from Sun Cluster 2.2 to Sun
Cluster 3.0 software. See How to Create State Database Replicas on page 85.

October 2002

/etc/iu.ap file to support the ce adapter. See Chapter 5, Installing and Maintaining


Public Network Hardware, in Sun Cluster 3.0-3.1 Hardware Administration Manual
for Solaris OS .
Procedures to support Sun StorEdge 3310 RAID storage arrays. See Sun Cluster 3.0-3.1
With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual.
Procedures to support Sun StorEdge 3310 JBOD storage arrays. See Sun Cluster 3.0-3.1
With SCSI JBOD Storage Device Manual for Solaris OS.
Relaxed requirements for shared storage. See Shared Storage Restriction Relaxed
on page 112.
Procedure on how to replace a SCI-PCI host adapter in a running cluster. See Sun
Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Revised procedure on how to replace a disk drive in a non-RAID storage device. For the
Sun Cluster documentation for the storage device, see Sun Cluster 3.x Hardware
Administration Collection.
Revised swap partition requirements. See New Guidelines for the swap Partition
on page 109.

September 2002

IP address configuration requirement for Sun Fire 15000 systems. See IP Address
Requirement for Sun Fire 15000 Systems on page 125.
Corrected cross-reference between uninstall procedures. See How to Uninstall Sun
Cluster Software From a Cluster Node (5/02) on page 134.

August 2002

Restriction on EMC storage use in a two-node configuration. See EMC Storage
Restriction on page 112.

July 2002

Revised procedure to upgrade to the Sun Cluster 3.0 5/02 release from any previous
version of Sun Cluster 3.0 software. See How to Upgrade to the Sun Cluster 3.0 5/02
Software Update Release on page 119.
Revised procedure on how to replace a disk drive in StorEdge A5x00 storage array. See
the Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual.
Requirements for ATM support with Sun Cluster 3.0 5/02. See ATM with Sun Cluster
3.0 5/02 on page 117
Sun Cluster Security Hardening support for Solaris 9. See Security Hardening for
Solaris 9 on page 108.

June 2002

Restriction on concurrent upgrade of Solaris 9 and Sun Cluster 3.0 5/02 software. See
Framework Restrictions and Requirements on page 113.
Revised appendix to support Sun StorEdge 9970 system and Sun StorEdge 9980 system
with Sun Cluster software. See the Sun Cluster 3.0-3.1 With Sun StorEdge 9900 Series
Storage Device Manual.
Procedures to support Sun StorEdge D2 storage systems. See Sun Cluster 3.0-3.1 With
SCSI JBOD Storage Device Manual for Solaris OS.
Revised procedures to support Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 storage arrays in a 4-node configuration. See Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 Storage Devices Supported in a Scalable Topology on page 119.
Updated procedures to support Sun Cluster software on Sybase 12.0 64-bit version. See
Appendix C.
Documentation bug in the Sun Cluster Hardware Guide. See Failover File System
(HAStoragePlus) on page 108.
Documentation bug in the Sun Cluster Hardware Guide. See Changing Quorum
Device Connectivity on page 112.
Documentation bug in the Sun Cluster Hardware Guide: ce Sun Ethernet Driver
Considerations. See Chapter 5, Installing and Maintaining Public Network
Hardware, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Documentation bug in the Sun Cluster Hardware Guide: Hard zone configuration
changed. See the Sun Cluster 3.0-3.1 With Sun StorEdge 3900 Series or Sun
StorEdge 6900 Series System Manual.
Updated procedures to support Apache version 2.0. See Apache 2.0 on page 109.

May 2002

Oracle UDLM requirement. See Oracle UDLM Requirement on page 113.


Restriction on IPv6 and IP Network Multipathing. See Framework Restrictions and
Requirements on page 113.
Failover File System (HAStoragePlus) support. See Quorum Device Connection to
Nodes on page 125.
RAID-5 support on Sun StorEdge 99x0 storage arrays. See RAID 5 on Sun StorEdge
99x0 Storage Arrays on page 109.
Correction to BugId 4662264 workaround. See BugId 4662264 on page 134.
Bug ID 4346123 on page 114, cluster file system might not mount after multiple failures.
Bug ID 4665886 on page 115, mapping a file into the address space with mmap(2) and then issuing a write(2) call to the same file results in a recursive mutex panic.
Bug ID 4668496 on page 115, Solaris Volume Manager replicas need more space.
Bug ID 4680862 on page 115, node needs access to the highly available local file system managed by HAStoragePlus.
Documentation bug in the Sun Cluster data services collection. See Configuring Sun
Java System Web Server on page 128.
Space requirement for Solaris Volume Manager. See Solaris Volume Manager Replica
Space Requirement on page 125.
Information and procedures on how to use the new scalable cluster topology. See
Appendix A.
Documentation bug in the Sun Cluster Hardware Guide. See the Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS .

Documented campus clustering configuration information to include support for the Sun StorEdge 9910 storage device and Sun StorEdge 9960 storage device. See the Sun Cluster 3.0-3.1 With Sun StorEdge 9900 Series Storage Device Manual.

New Features
In addition to features documented in Sun Cluster 3.0 5/02 Release Notes, this release now includes
support for the following features.

SPARC: Support for VxVM 4.0 and VxFS 4.0


A patch to Sun Cluster 3.0 software adds support on Sun Cluster 3.0 5/02 configurations for
VERITAS Volume Manager 4.0 and VERITAS File System 4.0 software. Download and install the
latest Sun Cluster 3.0 Core/Sys Admin patch from http://www.sunsolve.com. This support addition
is associated with Bug ID 4978425.

Support for Oracle 10g


The Sun Cluster HA for Oracle data service in Sun Cluster 3.0 5/02 now supports Oracle 10g.
If you are using Sun Cluster HA for Oracle with Oracle 10g, an attempt by the init(1M) command
to start the Oracle cssd daemon might cause unnecessary error messages to be displayed. These error
messages are displayed if the Oracle binary files are installed on a highly available local file system or on the cluster file system. The messages are displayed repeatedly until the file system where the Oracle binary files are installed is mounted.
These error messages are as follows:
INIT: Command is respawning too rapidly. Check for possible errors.
id: h1 "/etc/init.d/init.cssd run >/dev/null 2>&1 >/dev/null"
Waiting for filesystem containing $CRSCTL.

These messages are displayed if the following events occur:

A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
never mounted.

A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
mounts the file system where the Oracle binary files are installed.

Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
such a configuration, the Oracle binary files are installed on a highly available local file system. In
this situation, the messages are displayed on the console of the node where the Oracle installation
was run.

To prevent these error messages, remove the entry for the Oracle cssd daemon from the
/etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null

Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.

Caution If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.

Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes
To enable Sun Cluster Support for Oracle Real Application Clusters on a subset of cluster nodes, see
Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes on page 89.

Support for VERITAS NetBackup 4.5


The Sun Cluster HA for NetBackup data service in Sun Cluster 3.0 5/02 now supports VERITAS
NetBackup 4.5.
After you install and configure Sun Cluster 3.1, use the following procedure and your VERITAS documentation to install and configure VERITAS Netbackup.

Installing VERITAS Netbackup


How to Install VERITAS Netbackup
In the examples throughout this procedure, the name nb-master refers to the cluster node that
masters NetBackup, and slave-1 refers to the media server.
Before You Begin

You must have the following information to perform this procedure.

A list of cluster nodes that can master the data service.

The network resource that clients use to access the data service. Normally, you set up this IP
address when you install the cluster. See the Sun Cluster concepts documentation document for
details on network resources.

1. Ensure that Sun Cluster is running on all of the nodes.

2. Create a failover resource group to hold the network and application resources.
You can optionally select the set of nodes that the data service can run on with the -h option, as follows.
# scrgadm -a -g resource-group [-h nodelist]

-g resource-group

Specifies the name of the resource group.

[-h nodelist]

Specifies an optional comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the nodes are considered as primary during failover. If all of the nodes in the cluster are potential masters, you do not need to use the -h option.
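For example, assuming a hypothetical resource group named nb-rg and the potential masters phys-schost-1 and phys-schost-2:
# scrgadm -a -g nb-rg -h phys-schost-1,phys-schost-2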

3. Verify that you have added all of your network resources to the name service database.
You should have performed this verification during the Sun Cluster installation.
Note Ensure that all of the network resources are present in the server's and clients' /etc/inet/hosts file to avoid any failures because of name service lookup.

4. Add a logical host resource to the resource group.
# scrgadm -a -L -g resource-group -l logical-hostname

5. Enable the failover resource group and bring the resource group online.
# scswitch -Z -g resource-group

-g resource-group

Specifies the name of the resource group.

-Z

Moves the resource group to the managed state, and brings the resource group
online.
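For example, assuming the hypothetical resource group nb-rg and the logical hostname nb-master that is used throughout this procedure, Step 4 and Step 5 might look like the following:
# scrgadm -a -L -g nb-rg -l nb-master
# scswitch -Z -g nb-rg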

6. Log on to the node that masters the logical host resource.

7. Execute the install script to install the VERITAS Netbackup packages from the VERITAS product
CD-ROM into the /usr/openv directory.
phys-schost-1# ./install

8. When the menu appears, choose Option 1 (NetBackup).
This option installs both the Media Manager and the NetBackup software on the server.

9. Follow the prompts in the installation script.
The installation script adds entries to the /etc/services and /etc/inetd.conf files.
phys-schost-1# ./install
...
Would you like to use "phys-schost-1.somedomain.com" as the
configured name of the NetBackup server? (y/n) [y] n
...
Enter the name of the NetBackup server: nb-master
...
Is nb-master the master server? (y/n) [y] y
...

Enter the fully qualified name of a media (slave) server (q to quit)? slave-1
10. Switch the NetBackup resource to the backup node.

11. Repeat Step 6 through Step 10 until you install the NetBackup binaries on all the nodes that will run the NetBackup resource.

Enabling NetBackup to Run on a Cluster


This section contains the procedure you need to enable NetBackup to run on a cluster.

How to Enable NetBackup to Run on a Cluster


In the examples throughout this procedure, the name nb-master refers to the cluster node that
masters NetBackup, and slave-1 refers to the media server.
1. Remove the /etc/rc2.d/S77netbackup and /etc/rc0.d/K77netbackup files from each cluster node on which Sun Cluster HA for NetBackup is installed.
If you remove these files, you prevent NetBackup from starting at boot time.
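A minimal sketch of this step, run as superuser on each such node (the file names are exactly those listed above):
# rm /etc/rc2.d/S77netbackup /etc/rc0.d/K77netbackup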

2. On one node, modify the /usr/openv/netbackup/bp.conf file to specify the following information.

SERVER = logical-hostname-resource
All requests to the backup server originate from the primary node. The server name equals the
logical hostname resource.

CLIENT_NAME = logical-hostname-resource
On a cluster that runs Sun Cluster HA for NetBackup, the CLIENT_NAME equals nb-master.
Note Use this client name to back up files in the cluster running Sun Cluster HA for NetBackup.

REQUIRED_INTERFACE = logical-hostname-resource
This entry indicates the logical interface that the NetBackup application is to use.

The resulting file should resemble the following example.


SERVER = nb-master
SERVER = slave-1
CLIENT_NAME = nb-master
REQUIRED_INTERFACE = nb-master

3. From one node, put the NetBackup configuration files on a multihost disk.
Place the files on a disk that is part of a failover disk device group that NetBackup is to use.
a. Run the following commands from the primary node of the failover disk device group. In this example, the failover disk device group is global.
# mkdir /global/netbackup
# mv /usr/openv/netbackup/bp.conf /global/netbackup
# mv /usr/openv/netbackup/db /global/netbackup
# mv /usr/openv/volmgr/database /global/netbackup
# ln -s /global/netbackup/bp.conf /usr/openv/netbackup/bp.conf
# ln -s /global/netbackup/db /usr/openv/netbackup/db
# ln -s /global/netbackup/database /usr/openv/volmgr/database

b. If the directory /usr/openv/db/var and the file /usr/openv/volmgr/vm.conf exist on the node, move them to the disk that is part of the failover disk device group.
You must configure the NetBackup master server before you move and link the /usr/openv/volmgr/vm.conf file.
# mv /usr/openv/db/var /global/netbackup/nbdb
# mv /usr/openv/volmgr/vm.conf /global/netbackup
# ln -s /global/netbackup/nbdb /usr/openv/db/var
# ln -s /global/netbackup/vm.conf /usr/openv/volmgr/vm.conf

Note Run the command scstat -D to identify the primary for a particular disk device group.

c. Run the following commands from all of the other nodes that will run the NetBackup resource.
# rm -rf /usr/openv/netbackup/bp.conf
# rm -rf /usr/openv/netbackup/db
# rm -rf /usr/openv/volmgr/database
# ln -s /global/netbackup/bp.conf /usr/openv/netbackup/bp.conf
# ln -s /global/netbackup/db /usr/openv/netbackup/db
# ln -s /global/netbackup/database /usr/openv/volmgr/database

d. On all of the other nodes that will run the NetBackup resource, if the directory
/usr/openv/db/var and the file /usr/openv/volmgr/vm.conf exist on the node, run the
following commands:
# rm -rf /usr/openv/db/var
# rm -rf /usr/openv/volmgr/vm.conf
# ln -s /global/netbackup/nbdb /usr/openv/db/var
# ln -s /global/netbackup/vm.conf /usr/openv/volmgr/vm.conf

Note You must configure the NetBackup master server before you remove and link the /usr/openv/volmgr/vm.conf file.

Fault Monitoring Sun Cluster HA for NetBackup


Depending on the installed version of NetBackup, NetBackup application startup starts one of the
following sets of daemons:

vmd, bprd, and bpdbm


vmd, bprd, bpdbm, bpjobd, and nbdbd

Sun Cluster HA for NetBackup can work with either of these two sets of daemons. The Sun Cluster
HA for NetBackup fault monitor monitors either of these two sets of processes. While the START
method runs, the fault monitor waits until the daemons are online before monitoring the
application. The Probe_timeout extension property specifies the amount of time that the fault
monitor waits.
After the daemons are online, the fault monitor uses kill (pid, 0) to determine whether the
daemons are running. If any daemon is not running, the fault monitor initiates the following actions,
in order, until all of the probes are running successfully.
1. Restarts the resource on the current node.
2. Restarts the resource group on the current node.
3. Fails over the resource group to the next node on the resource group's nodelist.
All process IDs (PIDs) are stored in a temporary file, /var/run/.netbackup_master.

Security Hardening for Solaris 9


Sun Cluster Security Hardening now supports data services in a Solaris 8 and Solaris 9 environment.
The Sun Cluster Security Hardening documentation is available at http://www.sun.com/security/blueprints. From this URL, scroll down to the Architecture heading
to locate the article Securing the Sun Cluster 3.0 Software.

Failover File System (HAStoragePlus)


Failover File System (HAStoragePlus) is now supported in the Sun Cluster 3.0 5/02 release. See the Sun Cluster 3.0 5/02 Supplement for information about this new feature.
The FilesystemMountPoints extension property can be used to specify a list of one or more file system mount points. This list can consist of both local and global file system mount points.

RAID 5 on Sun StorEdge 99x0 Storage Arrays


RAID level 5 is supported on Sun StorEdge 99x0 storage arrays with multipathing and Sun adapters.

Apache 2.0
Sun Cluster 3.0 5/02 now supports Apache version 2.0. For Apache version 2.0, the procedure for configuring the httpd.conf configuration file has changed as follows; a brief example follows the list. (See the Sun Cluster data services collection for the complete procedure.)

The ServerName directive specifies the hostname and the port.

The BindAddress and Port directives have been replaced with the Listen directive. The Listen
directive must use the address of the logical host or shared address.

The Servertype directive no longer exists.
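The following sketch shows how the revised directives might look in httpd.conf for Apache 2.0, assuming a hypothetical logical hostname of www.example.com and a logical-host IP address of 192.168.10.20, both on port 80; substitute the values from your own configuration:
ServerName www.example.com:80
Listen 192.168.10.20:80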

New Guidelines for the swap Partition


The amount of swap space allocated for Solaris and Sun Cluster software combined must be no less than 750 Mbytes. For best results, add at least 512 Mbytes for Sun Cluster software to the amount
required by the Solaris Operating System. In addition, allocate additional swap space for any
third-party applications you install on the node that also have swap requirements. See your
third-party application documentation for any swap requirements.
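For example, under the assumption that the Solaris Operating System on a node requires 1 Gbyte of swap and a third-party application on that node requires another 250 Mbytes, you would allocate at least 1 Gbyte + 512 Mbytes + 250 Mbytes, or roughly 1.8 Gbytes, of swap space on that node.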

Support for Oracle Real Application Clusters on the Cluster File System
You can use Oracle Real Application Clusters with the cluster file system.

Pre-Installation Considerations
Oracle Real Application Clusters is a scalable application that can run on more than one node
concurrently. You can store all of the files that are associated with this application on the cluster file system, namely:

Binary files
Control files
Data files
Log files
Configuration files

For optimum I/O performance during the writing of redo logs, ensure that the following items are
located on the same node:

The Oracle Real Application Clusters database instance

The primary of the device group that contains the cluster file system that holds the following logs
of the database instance:

Online redo logs


Archived redo logs

For other pre-installation considerations that apply to Sun Cluster Support for Oracle Real
Application Clusters, see Overview of the Installation and Configuration Process in Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

How to Use the Cluster File System


To use the cluster file system with Oracle Real Application Clusters, create and mount the cluster file system as explained in Configuring the Cluster in Sun Cluster Software Installation Guide for Solaris OS. When you add an entry to the /etc/vfstab file for the mount point, set UNIX file system (UFS) file-system-specific options for various types of Oracle files as shown in the following table.
TABLE 8-6 UFS File System Specific Options for Oracle Files
File Type                                    Options

RDBMS data files, log files, control files   global, logging, forcedirectio

Oracle binary files, configuration files     global, logging
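For example, an /etc/vfstab entry for a cluster file system that holds RDBMS data files might resemble the following line, assuming a hypothetical metadevice d100 in a diskset named oradg and a mount point of /global/oracle/data:
/dev/md/oradg/dsk/d100 /dev/md/oradg/rdsk/d100 /global/oracle/data ufs 2 yes global,logging,forcedirectio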

How to Install Sun Cluster Support for Oracle Real Application Clusters Packages With the Cluster File System


To complete this procedure, you need the Sun Cluster 3.0 5/02 Agents CD-ROM. Perform this
procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Real Application
Clusters.
Note Due to the preparation that is required prior to installation, the scinstall(1M) utility does

not support automatic installation of the data service packages.

1. Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.

2. Become superuser.

3. On all of the nodes, run the following command to install the data service packages.
# pkgadd -d \
/cdrom/scdataservices_3_0_u3/components/\
SunCluster_Oracle_Parallel_Server_3.0_u3/Packages \
SUNWscucm SUNWudlm SUNWudlmr
Troubleshooting
Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see Installing the Oracle Software in Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

See Also
Go to Installing the Oracle Software in Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide to install the Oracle UDLM and Oracle RDBMS software.

Restrictions and Requirements


The following restrictions and requirements have been added or updated since the Sun Cluster 3.0
12/01 release.

Compiling Data Services That Are Written in C++


If you are using Sun Cluster 3.0 and are writing data services in C++, you must compile these data
services in compatibility mode.
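For example, with the Sun WorkShop or Forte Developer C++ compilers, compatibility mode is typically requested with the -compat option; the exact flag depends on your compiler release, so check its documentation. The source file name below is hypothetical:
# CC -compat -c my_data_service.cc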

Reserved RPC Program Numbers


If you install an RPC service on the cluster, the service must not use any of the following program numbers:

100141
100142
100248

These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and rgmd,
respectively. If the RPC service you install also uses one of these program numbers, you must change
that RPC service to use a different program number.
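One way to check whether an RPC service on a cluster node is already registered with one of these numbers, assuming the standard rpcinfo utility, is:
# rpcinfo -p | egrep '100141|100142|100248'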

Dynamic Multipathing (DMP)


With VxVM 3.2 or later, Dynamic Multipathing (DMP) cannot be disabled with the scvxinstall
command during VxVM installation. This procedure is described in the chapter, Installing and Configuring VERITAS Volume Manager, in Sun Cluster software installation documentation. The use of DMP is supported in the following configurations:

A single I/O path per node to the cluster's shared storage

A supported multipathing solution (Sun StorEdge Traffic Manager, EMC PowerPath, Hitachi
HDLM) that manages multiple I/O paths per node to the shared cluster storage

The use of DMP alone to manage multiple I/O paths per node to the shared storage is not supported.

Changing Quorum Device Connectivity


When you increase or decrease the number of node attachments to a quorum device, the quorum
vote count is not automatically recalculated. You can reestablish the correct quorum vote if you
remove all quorum devices and then add them back into the configuration.

Storage Topologies Replaced by New Requirements


Sun Cluster 3.1 04/04 software now supports open topologies. You are no longer limited to the
storage topologies listed in the Sun Cluster concepts documentation document.
Use the following guidelines to configure your cluster.

Sun Cluster supports a maximum of eight nodes in a cluster, regardless of the storage
configurations that you implement.

A shared storage device can connect to as many nodes as the storage device supports.

Shared storage devices do not need to connect to all nodes of the cluster. However, these storage
devices must connect to at least two nodes.

Shared Storage Restriction Relaxed


Sun Cluster 3.1 04/04 now supports greater than three-node cluster configurations without shared
storage devices. Two-node clusters are still required to have a shared storage device to maintain
quorum. This storage device does not need to perform any other function.

EMC Storage Restriction


The quorum device access mode is not automatically set to scsi-3 in the following situations:

112

After applying the core patch 110648-20 or later in a two-node cluster with an EMC PowerPath configured quorum disk.

After upgrading from Sun Cluster 3.0 12/01 software to Sun Cluster 3.0 05/02 software in a two-node cluster with an EMC PowerPath configured quorum disk.

Note This is a problem only for a multipath quorum device configured with EMC PowerPath in a two-node configuration. The problem is characterized by a value of NULL being printed for the quorum device access mode property.
To fix the property setting after applying the patch or performing the upgrade, use the scsetup command to remove the existing quorum disk and add it back to the configuration. Removing and adding back the quorum disk will correct the Sun Cluster software to use scsi-3 PGR for reserving quorum disks. To verify that the quorum device access mode is set correctly, run scconf -p to print the configuration.
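A quick way to inspect the setting, assuming that the quorum device access mode appears in the scconf output under a label similar to access mode, is:
# scconf -p | grep -i "access mode"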

Framework Restrictions and Requirements

Upgrade to Solaris 9 - Upgrade to Solaris 9 software during upgrade to Sun Cluster 3.0 5/02
software is not supported. You can only upgrade to subsequent, compatible versions of the
Solaris 8 Operating System during upgrade to Sun Cluster 3.0 5/02 software. To run Sun Cluster
3.0 5/02 software on the Solaris 9 Operating System, you must perform a new installation of the
Solaris 9 version of Sun Cluster 3.0 5/02 software after the nodes are upgraded to Solaris 9
software.

IPv6 - This is not supported.

IP Network Multipathing - This is not supported.

Oracle UDLM Requirement


Oracle RAC running on Sun Cluster software requires that the Oracle UDLM be at least
version 3.3.4.5, which ships with the Oracle 9.2 release.
Caution If you do not have this revision or higher, you might encounter a problem during a cluster

reconfiguration where the reconfiguration process will hang, leaving all nodes in the cluster unable to provide Oracle RAC database service. You can fix this problem by ensuring that your Oracle UDLM is at least version 3.3.4.5. This problem and fix are documented in Oracle Bug #2273410.
You can determine the version of Oracle UDLM currently installed on your system by running the
following command.
pkginfo -l ORCLudlm | grep VERSION

The version of the Oracle UDLM currently installed on your system also appears in the file
/var/cluster/ucmm/dlm_node-name/logs/dlmstart.log.
The version information appears just before the Copyright (c) line. Look for the latest occurrence of
this information in the le. If you do not have this version of the Oracle UDLM package, please
contact Oracle Support to obtain the latest version.

Fixed Problems
BugId 4818874
Problem Summary: When used in a clustered environment, the Sun StorEdge 3310 JBOD array
relies on the cluster nodes to provide SCSI bus termination. Because termination power was not
supplied from the array's IN ports, if the server connected to these ports lost power then SCSI bus
termination was lost. This in turn could result in the remaining cluster node losing access to the
shared storage on that bus.
Problem Fixed: The StorEdge 3310 JBOD array is now supported in a split-bus configuration, when
using the updated version (part number 370-5396-02/50 or newer) of the I/O board.

Known Problems
In addition to known problems documented in Sun Cluster 3.0 5/02 Release Notes, the following
known problems affect the operation of the Sun Cluster 3.0 12/01 release.

Bug ID 4346123
Problem Summary: When booting a cluster node after multiple failures, a cluster file system might
fail to mount automatically from its /etc/vfstab entry, and the boot process will place the node in
an administrative shell. Running the fsck command on the device might yield the following error.
Can't roll the log for /dev/global/rdsk/dXsY

Workaround: This problem might occur when the global device is associated with a stale cluster file system mount. Run the following command, and check if the file system shows up in an error state to confirm a stale mount.
# /usr/bin/df -k

If the global device is associated with a stale cluster file system mount, unmount the global device. If any users of the file system exist on any of the nodes, the unmount cannot succeed. Run the following command on each node to identify current users of the file system.
# /usr/sbin/fuser -c mountpoint

If there are users of the file system, terminate those users' connections to the file system. Run the share(1M) command to confirm that the file system is not NFS-shared by any node.

Bug ID 4662264
Problem Summary: To avoid panics when using VxFS with Sun Cluster software, the default thread
stack size must be greater than the VxFS default value of 0x4000.
Workaround: Increase the stack size by putting the following lines in the /etc/system file:
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000

After installing VxFS packages, verify that VxFS installation has not added similar statements to the
/etc/system file. If multiple entries exist, resolve them to one statement per variable, using these
higher values.

Bug ID 4665886
Problem Summary: Mapping a file into the address space with mmap(2) and then issuing a write(2) call to the same file results in a recursive mutex panic. This problem was identified in a cluster configuration running the iPlanet Mail Server.
Workaround: There is no workaround.

Bug ID 4668496
Problem Summary: The default JumpStart profile file allocates 10 Mbytes to slice 7. If you use
Solaris 9 software with Solstice DiskSuite, this amount of space is not enough for Solstice DiskSuite
replicas. Solaris 9 software with Solstice DiskSuite requires at least 20 Mbytes.
Workaround: Edit the default profile le to congure slice 7 of the system disk with 20 Mbytes of
space, instead of 10 Mbytes. This workaround is only necessary if you install Solaris 9 software with
Solstice DiskSuite
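As a sketch only, a JumpStart profile entry of the following form would reserve 20 Mbytes on slice 7 of the system disk. The filesys line shown is illustrative and must be merged into your existing default profile rather than used on its own.

# Illustrative JumpStart profile excerpt: 20 Mbytes on slice 7 for
# Solstice DiskSuite replicas (merge into your existing profile).
filesys rootdisk.s7 20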

Bug ID 4680862
Problem Summary: When you install Oracle or Sybase binaries and configuration files on a highly available local file system managed by HAStoragePlus, the node that does not have access to this file system fails validation. The result is that you cannot create the resource.
Workaround: Create a symbolic link named /usr/cluster/lib/hasp_check that links to the /usr/cluster/lib/scdsbuilder/src/scripts/hasp_check file.

Bug ID 4779686
Problem Summary: Availability Suite 3.1 does not support the Sun Cluster 3.0 HAStoragePlus
resource.
Workaround: If you intend to implement Availability Suite 3.1 and a failover file system, use an HAStorage resource in the lightweight resource group that includes the Availability Suite logical host. For the application resource group, use HAStoragePlus. This approach allows you to use a failover file system for application performance and also use Availability Suite 3.1 to back up the disk blocks under the failover file system.

BugId 4836405
Problem Summary: When using the PCI Dual Ultra3 SCSI host adapter in a clustered environment,
the host adapter jumpers for each port must be set for manual SCSI termination. If the ports are not
set to manual SCSI termination, a loss of power to one host could prevent correct SCSI bus operation
and might result in loss of access to all SCSI devices attached to that bus from the remaining host.
Workaround: When using the PCI Dual Ultra3 SCSI host adapter in a clustered environment, set the
jumpers on the host adapter to manual SCSI termination. This setting causes the host adapter to
activate its built-in SCSI terminators, whether or not the host adapter receives PCI bus power.
The jumper settings needed for manual termination are listed below.

SCSI bus 2 (external SCSI connector nearest to the PCI slot)

J4: 2-3 (factory default 2-3)


J5: 2-3 (factory default 2-3)

SCSI bus 1 (internal SCSI connector and external SCSI connector furthest from the PCI slot)

J8: 2-3 (factory default 1-2)


J9: 2-3 (factory default 1-2)

See the host adapter documentation for further information.

BugID 4838619
Problem Summary: Without a patch, Sun Cluster software will not recognize bge(7D) Ethernet
adapters.
Workaround: If you plan to use bge(7D) Ethernet adapters as cluster interconnects in your Sun Cluster configuration, you must install patches and use a modified installation procedure. The onboard Ethernet ports on the Sun Fire V210 and V240 are examples of bge(7D) Ethernet adapters.
If you use Solaris 8 software, install the following patches.

110648-28 or later (Sun Cluster 3.0: Core/Sys Admin)


112108-07 or later (Required for SunPlex Manager use)

If you use Solaris 9 software, install the following patches.

112563-10 or later (Sun Cluster 3.0: Core/Sys Admin)

114189-01 or later (Required for SunPlex Manager use)

For the modified installation procedure, refer to the patch's README file.

Known Documentation Problems


This section discusses documentation errors you might encounter and steps to correct these
problems. This information is in addition to known documentation problems documented in the
Sun Cluster 3.0 5/02 Release Notes.

System Administration Guide


The correct caption for Table 5-2 is Task Map: Dynamic Reconfiguration with Cluster Interconnects.

Hardware Guide
The following subsections describe omissions or new information that will be added to the next
publishing of the Hardware Guide.

ATM with Sun Cluster 3.0 5/02


ATM is supported with Sun Cluster 3.0 5/02 software as a public network interface, to be used in LAN Emulation (LANE) mode only. Use the SunATM 5.0 version to run on Solaris 8 software.
Use the following network, ATM card, and LANE instance guidelines to configure ATM with Sun Cluster 3.0 5/02. For additional configuration information, see the Platform Notes: The SunATM Driver Software, 816-1915.

Network Configuration Guidelines

In order to support ATM LANE on Sun Cluster, an ATM-capable router and switch are required. The router must provide LANE services, with one ELAN for each set of nodes. Configure the router to respond to the ALLROUTERS (224.0.0.2) and ALLHOSTS (224.0.0.1) pings. The ATM switch must have PNNI (Private Network-Node Interface) enabled.
The router provides Emulated LAN (ELAN) service to the cluster nodes and clients. The clients can belong to different ELANs, but the cluster nodes must be part of the same ELAN.

ATM Card Guidelines


To use ATM as a public network adapter, Sun Cluster software requires at least one ATM card per
NAFO group. For high availability, you can eliminate the potential single point of failure by using
more than one ATM card per NAFO group.
LANE Instance Configuration Guidelines

Perform the following tasks to configure LANE instances.

Create one LANE instance on each ATM card.

All LANE instances in a NAFO group must be configured on the same ELAN. For example, all LANE instances in NAFO1 must be in the same ELAN on all cluster nodes.

Configure the primary LANE interface using the /etc/hostname.lanen file. This file is necessary, but will cause warning messages to display at boot time on SunATM 5.0. The following example shows the console messages. These messages can be ignored.
Rebooting with command: boot
Boot device: diskbrd:a File and args:
SunOS Release 5.8 Version Generic_108528-13 64-bit
Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_ATTACH_REQ(11), errno 8, unix 0
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_BIND_REQ(1), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_PHYS_ADDR_REQ(49), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_UNBIND_REQ(2), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_DETACH_REQ(12), errno 3, unix 71
ifconfig: SIOCSLIFNAME for ip: lane1: Protocol error
moving addresses from failed IPv4 interfaces: lane1 (couldn't move, no
alternative interface).
Hostname: atm10

Assign an IP address to the primary LANE interface in the atmconfig file.

Note Do not assign an IP address to the secondary, backup LANE interface.

The following example shows an atmconfig file with the primary and secondary LANE interfaces configured. Note that the IP address is assigned only to the primary LANE interface.

ba0    3.1    ba0    SONET    ba0
ba1    3.1    ba1    SONET    ba1
1      -      atm20

Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 Storage Devices Supported in a Scalable Topology
The Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 storage devices are supported with four-node connectivity in a cluster environment.
To configure and maintain these storage devices with four-node connectivity, use the procedures listed in the storage devices chapter and repeat the steps for Node B on each additional node that connects to the storage device.
For the following node-related procedures, see Appendix A.

Adding a Cluster Node on page 140

Removing a Cluster Node on page 140
How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity on page 142

Software Installation Guide


The following subsections describe omissions or new information that will be added to the next
publishing of the Software Installation Guide.

Correcting Stack Overflow Related to VxVM Disk Device Groups

If you experience a stack overflow when a VxVM disk device group is brought online, the default value of the thread stack size might be insufficient. To increase the thread stack size, add the following entry to the /etc/system file on each node. Set the value for size to a number that is greater than 8000, which is the default setting.
set cl_comm:rm_thread_stacksize=0xsize
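For example, the following /etc/system entry uses 0x9000, an arbitrary illustrative value that is larger than the 0x8000 default; choose a value appropriate for your configuration.

* Illustrative /etc/system entry; 0x9000 is larger than the 0x8000 default.
set cl_comm:rm_thread_stacksize=0x9000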

How to Upgrade to the Sun Cluster 3.0 5/02 Software Update Release
Use the following procedure to upgrade any previous release of Sun Cluster 3.0 software to the Sun Cluster 3.0 5/02 update release.
Note Do not use any new features of the update release, install new data services, or issue any administrative configuration commands until all nodes of the cluster are successfully upgraded.

Back up the shared data from all device groups within the cluster.

Get any necessary patches for your cluster configuration.

In addition to Sun Cluster software patches, get any patches for your hardware, Solaris Operating System, volume manager, applications, and any other software products currently running on your cluster. See the Sun Cluster release notes documentation for the location of Sun patches and installation instructions. You will apply the patches in different steps of this procedure.
From any node, view the current status of the cluster to verify that the cluster is running normally.
% scstat

See the scstat(1M) man page for more information.


4. Become superuser on one node of the cluster.

Upgrade only one node at a time.

Evacuate all resource groups and device groups that are running on the node to upgrade.
Specify the node that you are upgrading in the node argument of the following scswitch command:
# scswitch -S -h from-node

-S         Evacuates all resource groups and device groups

-h node    Specifies the name of the node from which to evacuate resource groups and device groups (the node you are upgrading)

See the scswitch(1M) man page for more information.


6. Verify that the evacuation completed successfully.
# scstat -g -D

Ensure that the node you are upgrading is no longer the primary for any resource groups or device groups in the cluster.
7. Reboot the node into noncluster mode.
Include the double dashes (--) in the command.
# reboot -- -x

Back up the system disk.

Determine whether any of the Cool Stuff CD packages are installed on the node.
To display the version of an installed package, use the following command:
# pkginfo -l package

The following table lists the packages from the Sun Cluster 3.0 GA Cool Stuff CD-ROM:

Package       Version                   Description

SUNWscrtw     3.0.0/2000.10.17.22.22    Resource Type Wizard
SUNWscsdk     3.0.0/2000.10.10.13.06    Data Service Software Development Kit
SUNWscset     3.0.0/2000.10.17.22.22    rgmsetup
SUNWscvxi     3.0.0/2000.10.17.22.22    Cluster VxVM setup
Remove any Cool Stuff CD-ROM packages found on the node. These packages will be replaced with
supported versions in Sun Cluster 3.0 5/02 software.
# pkgrm package
10. Do you intend to upgrade Solaris 8 software?

Note The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 Operating System to support Sun Cluster 3.0 5/02 software. See the Info Documents page for Sun Cluster 3.0 software on http://sunsolve.sun.com for the latest Solaris support information.

If yes, go to Step 11.

If no, go to Step 12.

11. Upgrade Solaris 8 software.

a. Determine whether the following links already exist, and if so, whether the file names contain an uppercase K or S.
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache

If these links already exist and contain an uppercase K or S in the file name, no further action is necessary concerning these links. If these links do not exist, or if these links exist but contain a lowercase k or s in the file name, you will move aside these links in Step g.
b. Are you using the Maintenance Update upgrade method?

If yes, skip to Step c.

If no, temporarily comment out all global device entries in the /etc/vfstab file.
Do this to prevent the Solaris upgrade from attempting to mount the global devices. To identify global device entries, look for entries that contain global in the mount-options list.

c. Shut down the node to upgrade.


# shutdown -y -g0
ok

d. Follow instructions in the installation guide for the Solaris 8 update version you want to upgrade
to.

Note To reboot the node during Solaris software upgrade, always add the -x option to the
command. This ensures that the node reboots into noncluster mode. The following two
commands boot a node into single-user noncluster mode:

# reboot -- -sx
ok boot -sx

Do not reboot the node into cluster mode during or after Solaris software upgrade.

e. Are you using the Maintenance Update upgrade method?

If yes, skip to Step f.

If no, uncomment all global device entries that you commented out in the /a/etc/vfstab file.

f. Install any Solaris software patches and hardware-related patches, and download any needed firmware contained in the hardware patches.
Do not reboot yet if any patches require rebooting.
g. If the Apache links in Step a did not already exist or they contained a lowercase k or s in the file names before you upgraded Solaris software, move aside the restored Apache links.
Use the following commands to rename the files with a lowercase k or s:
# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache

Note For the Maintenance Update upgrade method, the paths to the files do not begin with /a.

h. Reboot the node into noncluster mode.


Include the double dashes (--) in the command.
# reboot -- -x
12. Determine whether the following packages are installed on the node.
# pkginfo SUNWscva SUNWscvr SUNWscvw SUNWscgds

Sun Cluster software upgrade requires that these packages exist on the node before the upgrade begins. If any of these packages are missing, install them from the Sun Cluster 3.0 5/02 CD-ROM.
# cd /cdrom/suncluster_3_0/SunCluster_3.0/Packages
# pkgadd -d . SUNWscva SUNWscvr SUNWscvw SUNWscgds
13. Do you intend to use SunPlex Manager?
If no, go to Step 14.

If yes, ensure that the required Apache software packages are installed on the node.
# pkginfo SUNWapchr SUNWapchu

If any Apache software packages are missing, install them on the node from the Solaris CD-ROM.
# pkgadd -d . SUNWapchr SUNWapchu
14. Upgrade to the Sun Cluster 3.0 5/02 update software.

a. Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive on the node.
If the volume daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.
b. Change to the Tools directory.
# cd /cdrom/suncluster_3_0/SunCluster_3.0/Tools

c. Install the Sun Cluster 3.0 5/02 update patches.


# ./scinstall -u update

See the scinstall(1M) man page for more information.


d. Change to the CD-ROM root directory and eject the CD-ROM.
e. Install any Sun Cluster software patches.
f. Verify that each Sun Cluster 3.0 5/02 update patch is installed correctly.
View the upgrade log file referenced at the end of the upgrade output messages.
15. Reboot the node into the cluster.
# reboot

16. Verify the status of the cluster configuration.
% scstat

17. Repeat Step 4 through Step 16 on each remaining cluster node, one node at a time.

18. Take offline all resource groups for the data services that you will upgrade.
# scswitch -F -g resource-grp

-F                Take offline

-g resource-grp   Specifies the name of the resource group to take offline

19. Upgrade applications as needed.
Follow the instructions provided in your third-party documentation.

20. On each cluster node on which data services are installed, upgrade to the Sun Cluster 3.0 5/02 data services update software.
a. Insert the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive on the node.
b. Install the Sun Cluster 3.0 5/02 data services update patches.
Use one of the following methods:

To upgrade one or more specified data services, type the following command:
# scinstall -u update -s srvc[,srvc,...] -d cdrom-image

To upgrade all data services present on the node, type the following command:
# scinstall -u update -s all -d cdrom-image

Note The -s all option assumes that updates for all installed data services exist on the update release. If an update for a particular data service does not exist in the update release, that data service is not upgraded.

c. Eject the CD-ROM.


d. Install any Sun Cluster data service software patches.
e. Verify that each data service update patch is installed successfully.
View the upgrade log file referenced at the end of the upgrade output messages.
21. After all data services on all cluster nodes are upgraded, bring back online the resource groups for each upgraded data service.
# scswitch -Z -g resource-grp

-Z    Bring online

22. From any node, verify the status of the cluster configuration.
% scstat

23. Restart any applications.
Follow the instructions provided in your application's documentation.

Upgrading the Sun Cluster HA for NFS Data Service

In the procedure How to Finish Upgrading Cluster Software from the section Upgrading From Sun Cluster 2.2 to Sun Cluster 3.0 Software, the following command upgrades the Sun Cluster HA for NFS data service:
# scinstall -u finish -q globaldev=DIDname \
-d /cdrom/scdataservices_3_0_u3 -s nfs

This command requires that the SUNWscnfs package is already installed from the Sun Cluster 3.0 5/02 Agents CD-ROM on all nodes before you invoke the scinstall command. To ensure a successful upgrade of the Sun Cluster HA for NFS data service, do the following:

Ensure that the SUNWscnfs package is installed on all nodes of the cluster before you run this scinstall command, as shown in the check that follows this list.

If the scinstall command fails because the SUNWscnfs package is missing from a node, install the SUNWscnfs package on all nodes from the Sun Cluster 3.0 5/02 Agents CD-ROM, then rerun the scinstall command.
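A quick way to perform this check is the pkginfo command, run as superuser on each node; pkginfo reports an error for the package name if the package is not installed.

# pkginfo SUNWscnfs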

Names for SCI-PCI Adapters

To configure SCI-PCI adapters for the cluster interconnect, specify sciN as the adapter name, for example, sci0. Do not use scidN as the adapter name.

Solaris Volume Manager Replica Space Requirement

Problem Summary: The Sun Cluster 3.0 12/01 Software Installation Guide tells you to set aside at least 10 Mbytes in slice 7 to use to create three Solaris Volume Manager replicas in that slice. However, Solaris Volume Manager replicas in Solaris 9 software require substantially more space than the 10 Mbytes required for Solstice DiskSuite replicas in Solaris 8 software.
Workaround: When you install Solaris 9 software, allocate at least 20 Mbytes to slice 7 of the root disk to accommodate the larger Solaris Volume Manager replicas.

IP Address Requirement for Sun Fire 15000 Systems

Before you install Sun Cluster software on a Sun Fire 15000 system, you must add the IP address of each domain console network interface to the /etc/inet/hosts file on each node in the cluster. Perform this task regardless of whether you use a naming service.
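For illustration only, the entries take the usual hosts-file form shown below; the host names and addresses are placeholders, not values taken from this document.

# Domain console network interfaces (example addresses and names)
192.168.100.11   sf15k-domain-a
192.168.100.12   sf15k-domain-b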

Quorum Device Connection to Nodes


In the Planning chapter, the following statement about quorum devices is incorrect:
Connection - Do not connect a quorum device to more than two nodes.

The statement should instead read as follows:

Connection - You must connect a quorum device to at least two nodes.

Node Authentication When Installing VERITAS Volume Manager

In the procedures How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk and How to Install VERITAS Volume Manager Software Only, it is no longer necessary to first add cluster node names to the authentication list. You can therefore skip Step 3, Add all nodes in the cluster to the cluster node authentication list.

How to Create State Database Replicas

When you use the metadb -af command to create state database replicas on local disks, use the physical disk name (cNtXdYsZ), not the device-ID name (dN), to specify the slices to use.
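As a sketch, with c0t0d0s7 standing in for your actual local slice, the command takes the following form; the -a option adds replicas, -f forces creation of the initial replicas, and -c 3 places three replicas on the slice.

# metadb -af -c 3 c0t0d0s7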

Data Services Installation and Configuration Guide

The following subsections describe omissions or new information that will be added to the next publishing of the Data Service Installation and Configuration Guide.

Sun Cluster Data Service for NetBackup

The Sun Cluster HA for NetBackup Overview section should state that, in a Sun Cluster environment, robotic control is supported only on media servers and not on the NetBackup master server running on Sun Cluster.

Configuring an SAP J2EE Engine Cluster and an SAP Web Dispatcher

Sun Cluster now supports the SAP J2EE engine cluster and SAP Web dispatcher components in the Sun Cluster environment. To use these components, you must complete additional steps during your Sun Cluster HA for SAP installation and configuration.

To configure a J2EE engine cluster with your Sun Cluster HA for SAP with a Central Instance, see How to Configure an SAP J2EE Engine with your Sun Cluster HA for SAP with Central Instance on page 127.

To configure a J2EE engine cluster with your Sun Cluster HA for SAP with an SAP Application Server, see How to Configure an SAP J2EE Engine Cluster with your Sun Cluster HA for SAP with an Application Server on page 127.

To configure SAP Web dispatcher with your Sun Cluster HA for SAP agent, see How to Configure a SAP Web Dispatcher with your Sun Cluster HA for SAP on page 128.

The SAP J2EE engine is started by the SAP dispatcher, which is under the protection of Sun Cluster HA for SAP. If the SAP J2EE engine goes down, the SAP dispatcher will restart it.

The SAP Web dispatcher has the capability of automatic restart. If the SAP Web dispatcher goes down, the SAP Web dispatcher watchdog process will restart it. Currently, there is no Sun Cluster agent available for the SAP Web dispatcher.

How to Configure an SAP J2EE Engine with your Sun Cluster HA for SAP with Central Instance

After you have completed the How to Enable Failover SAP Instances to Run in a Sun Cluster procedure in the Sun Cluster HA for SAP document, perform the following steps.

1. Using the SAP J2EE Admintool GUI, change the ClusterHosts parameter to list all logical hosts for the application server and port pairs under dispatcher/Manager/ClusterManager. For example, as11h:port;as21h:port ...

Change the file j2ee-install-dir/additionalproperties as follows:
com.sap.instanceId = logical-host-ci_SID_SYSNR

Change the file j2ee-install-dir/server/services/security/work/R3Security.properties as follows:
sapbasis.ashost = logical-host-ci

Change the file SDM-dir/program/config/flow.xml as follows:
host = logical-host-ci

How to Configure an SAP J2EE Engine Cluster with your Sun Cluster HA for SAP with an Application Server

After you have completed the How to Enable Failover SAP Instances to Run in a Sun Cluster or How to Install an SAP Scalable Application Server procedure in the Sun Cluster HA for SAP document, perform the following steps.

1. Using the SAP J2EE Admintool GUI, change the ClusterHosts parameter to list the logical host for the central instance and port pair under dispatcher/Manager/ClusterManager.
logical-host-ci:port

Change the file j2ee-install-dir/additionalproperties as follows:
com.sap.instanceId = logical-host-as_SID_SYSNR

Change the file j2ee-install-dir/server/services/security/work/R3Security.properties as follows:
sapbasis.ashost = logical-host-as

How to Configure a SAP Web Dispatcher with your Sun Cluster HA for SAP

After you have configured the SAP Web dispatcher with your Sun Cluster HA for SAP, perform the following steps.

1. Ensure that the SAP Web dispatcher has an instance number that is different from the Central Instance and the application server instances.
For example, SAPSYSTEM = 66 is used in the profile for the SAP Web dispatcher.

Activate the Internet Communication Frame Services manually after you install the SAP Web Application Server.
See SAP OSS note 517484 for more details.

Configuring Sun Java System Web Server

The How to Configure a Sun Java System Web Server procedure in the Sun Cluster data services collection is missing the following step, which is not dependent on any other step in the procedure.
Create a file that contains the secure key password you need to start this instance, and place this file under the server root directory. Name this file keypass.
Note Because this file contains the key database password, protect the file with the appropriate permissions.

Support for Sun ONE Proxy Server

Sun Cluster HA for Sun ONE Web Server now supports Sun ONE Proxy Server. For information about the Sun ONE Proxy Server product, see http://docs.sun.com/db/prod/s1.webproxys. For Sun ONE Proxy Server installation and configuration information, see http://docs.sun.com/db/coll/S1_ipwebproxysrvr36.

Registering and Configuring the Sun Cluster HA for SAP liveCache

The procedure How to Register and Configure Sun Cluster HA for SAP liveCache has been revised. Add the following option to step 4 of this procedure.
-x affinityon=TRUE
Note AffinityOn must be set to TRUE, and the local file system must reside on global disk groups to be able to fail over.
For the procedure on how to set up an HAStoragePlus resource, see the Sun Cluster 3.0 Data Service Installation and Configuration Guide.
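As an illustrative sketch only, the option would appear in an HAStoragePlus registration command such as the following; the resource name, resource group name, and mount point are placeholders, not values from this document.

# scrgadm -a -j livecache-hsp-rs -g livecache-rg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/sapdb -x AffinityOn=TRUE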

Using the Sun Cluster LogicalHostname Resource With Oracle Real Application Clusters

Information on using the Sun Cluster LogicalHostname resource with Oracle Real Application Clusters is missing from Chapter 8, Installing and Configuring Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, in the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
If a cluster node that is running an instance of Oracle Real Application Clusters fails, an operation that a client application attempted might be required to time out before the operation is attempted again on another instance. If the TCP/IP network timeout is high, the client application might take a long time to detect the failure. Typically, client applications take between three and nine minutes to detect such failures.
In such situations, client applications can use the Sun Cluster LogicalHostname resource for connecting to an Oracle Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Real Application Clusters.
Caution Before using the LogicalHostname resource for this purpose, consider the effect of failover or failback of the LogicalHostname resource on existing user connections.

Creating Node-Specific Files and Directories for Use With Oracle Real Application Clusters Software on the Cluster File System
When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.
An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.
To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.

How to Create a Node-Specific Directory for Use With Oracle Real Application Clusters Software on the Cluster File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

$ORACLE_HOME/network/agent
$ORACLE_HOME/network/log
$ORACLE_HOME/network/trace
$ORACLE_HOME/srvm/log
$ORACLE_HOME/apache

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

1. On each cluster node, create the local directory that is to maintain node-specific information.
# mkdir -p local-dir

-p           Specifies that all nonexistent parent directories are created first

local-dir    Specifies the full path name of the directory that you are creating

2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.
# cp -pr global-dir local-dir-parent

-p                 Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

-r                 Specifies that the directory and all its files, including any subdirectories and their files, are copied.

global-dir         Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

local-dir-parent   Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.
a. From any cluster node, remove the global directory that you copied in Step 2.
# rm -r global-dir

-r           Specifies that the directory and all its files, including any subdirectories and their files, are removed.

global-dir   Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

b. From any cluster node, create a symbolic link from the local copy of the directory to the global
directory that you removed in Step a.
# ln -s local-dir global-dir

-s           Specifies that the link is a symbolic link

local-dir    Specifies that the local directory that you created in Step 1 is the source of the link

global-dir   Specifies that the global directory that you removed in Step a is the target of the link

Example 8-1  Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:

The ORACLE_HOME environment variable specifies the /global/oracle directory.

The local file system on each node is located under the /local directory.

The following operations are performed on each node:


1. To create the required directories on the local file system, the following commands are run:
# mkdir -p /local/oracle/network/agent
# mkdir -p /local/oracle/network/log
# mkdir -p /local/oracle/network/trace
# mkdir -p /local/oracle/srvm/log
# mkdir -p /local/oracle/apache

2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:
# cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
# cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
# cp -pr $ORACLE_HOME/apache /local/oracle/.

The following operations are performed on only one node:


1. To remove the global directories, the following commands are run:
# rm -r $ORACLE_HOME/network/agent
# rm -r $ORACLE_HOME/network/log
# rm -r $ORACLE_HOME/network/trace
# rm -r $ORACLE_HOME/srvm/log
# rm -r $ORACLE_HOME/apache

2. To create symbolic links from the local directories to their corresponding global directories, the
following commands are run:
# ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent
# ln -s /local/oracle/network/log $ORACLE_HOME/network/log
# ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
# ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
# ln -s /local/oracle/apache $ORACLE_HOME/apache

How to Create a Node-Specific File for Use With Oracle Real Application Clusters Software on the Cluster File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

$ORACLE_HOME/network/admin/snmp_ro.ora
$ORACLE_HOME/network/admin/snmp_rw.ora

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.
# mkdir -p local-dir

-p           Specifies that all nonexistent parent directories are created first

local-dir    Specifies the full path name of the directory that you are creating

2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.
# cp -p global-file local-dir

-p            Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

global-file   Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

local-dir     Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.
a. From any cluster node, remove the global file that you copied in Step 2.
# rm global-file

global-file   Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

b. From any cluster node, create a symbolic link from the local copy of the file to the directory from which you removed the global file in Step a.
# ln -s local-file global-dir

-s           Specifies that the link is a symbolic link

local-file   Specifies that the file that you copied in Step 2 is the source of the link

global-dir   Specifies that the directory from which you removed the global version of the file in Step a is the target of the link

Example 8-2  Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:

The ORACLE_HOME environment variable specifies the /global/oracle directory.

The local file system on each node is located under the /local directory.

The following operations are performed on each node:


1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:
# mkdir -p /local/oracle/network/admin

2. To make a local copy of the global files that are to maintain node-specific information, the following commands are run:
# cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
/local/oracle/network/admin/.
# cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
/local/oracle/network/admin/.

The following operations are performed on only one node:


1. To remove the global files, the following commands are run:
# rm $ORACLE_HOME/network/admin/snmp_ro.ora
# rm $ORACLE_HOME/network/admin/snmp_rw.ora

2. To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:
# ln -s /local/oracle/network/admin/snmp_ro.ora \
$ORACLE_HOME/network/admin/snmp_ro.ora
# ln -s /local/oracle/network/admin/snmp_rw.ora \
$ORACLE_HOME/network/admin/snmp_rw.ora

Supplement
The following subsections describe known errors in or omissions from the Sun Cluster 3.0 5/02
Supplement.

How to Uninstall Sun Cluster Software From a Cluster Node (5/02)


The following note at the beginning of this procedure is incorrect:
Note To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in

install mode, do not perform this procedure. Instead, go to How to Uninstall Sun Cluster Software
to Correct Installation Problems in the Sun Cluster 3.0 12/01 Software Installation Guide.
The note should instead read as follows:
Note To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in

install mode, do not perform this procedure. Instead, go to How to Uninstall Sun Cluster Software
to Correct Installation Problems in the Sun Cluster 3.0 5/02 Supplement.

Release Notes
The following subsections describe omissions or new information that will be added to the next
publishing of the Release Notes.

BugId 4662264
The Workaround documented in the Sun Cluster 3.1 8/05 Release Notes for Solaris OS is incorrect.
Incorrect:
Increase the stack size by putting the following lines in the /etc/system file.
set lwp_default_stksize=0x6000
set svc_default_stksize 0x8000

Correct:
Increase the stack size by putting the following lines in the /etc/system file.
set lwp_default_stksize=0x6000
set rpcmod:svc_default_stksize=0x8000

Man Pages
The following subsections describe omissions or new information that will be added to the next
publishing of the man pages.

scconf_transp_adap_sci Man Page

The scconf_transp_adap_sci(1M) man page states that SCI transport adapters can be used with the rsm transport type. This support statement is incorrect. SCI transport adapters do not support the rsm transport type. SCI transport adapters support the dlpi transport type only.

scconf_transp_adap_wrsm Man Page

The following scconf_transp_adap_wrsm(1M) man page replaces the existing scconf_transp_adap_wrsm(1M) man page.
NAME
scconf_transp_adap_wrsm.1m - configure the wrsm transport adapter
DESCRIPTION
wrsm adapters can be configured as cluster transport adapters. These adapters can be used only with the dlpi transport type.
The wrsm adapter connects to a transport junction or to another wrsm adapter on a different node. In either case, the connection is made through a transport cable.
Although you can connect the wrsm adapters directly by using a point-to-point configuration, Sun Cluster software requires that you specify a transport junction, a virtual transport junction. For example, if node1:wrsm1 is connected to node2:wrsm1 directly through a cable, you must specify the following configuration information.
node1:wrsm1 <--cable1--> Transport Junction sw_wrsm1 <--cable2--> node2:wrsm1

The transport junction, whether a virtual switch or a hardware switch, must have a specific name. The name must be sw_wrsmN, where the adapter is wrsmN. This requirement reflects a Wildcat restriction that requires that all wrsm controllers on the same Wildcat network have the same instance number.
When a transport junction is used and the endpoints of the transport cable are configured using scconf, scinstall, or other tools, you are asked to specify a port name on the transport junction. You can provide any port name, or accept the default, as long as the name is unique for the transport junction.
The default sets the port name to the node ID that hosts the adapter at the other end of the cable.
Refer to scconf(1M) for more configuration details.
There are no user configurable properties for cluster transport adapters of this type.
SEE ALSO
scconf(1M), scinstall(1M), wrsmconf(1M), wrsmstat(1M), wrsm(7D), wrsmd(7D)
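The man page text above does not include a worked example. As a hedged sketch, with node1, node2, and wrsm1 as placeholder names, the virtual transport junction for a directly cabled pair of wrsm adapters could be configured with scconf commands along the following lines; verify the exact options against scconf(1M) for your release.

# scconf -a -A trtype=dlpi,name=wrsm1,node=node1
# scconf -a -A trtype=dlpi,name=wrsm1,node=node2
# scconf -a -B type=switch,name=sw_wrsm1
# scconf -a -m endpoint=node1:wrsm1,endpoint=sw_wrsm1
# scconf -a -m endpoint=node2:wrsm1,endpoint=sw_wrsm1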

SUNW.HAStoragePlus.5
The SUNW.HAStoragePlus.5 man page has been modified. The following paragraph replaces the paragraph in the Notes section of the man page.
Although unlikely, the SUNW.HAStoragePlus resource is capable of mounting any global file system found to be in an unmounted state. This check is skipped only if the file system is of type UFS and logging is turned off. All file systems are mounted in the overlay mode. Local file systems will be forcibly unmounted.
The following FilesystemCheckCommand extension property has been added to the SUNW.HAStoragePlus.5 man page.
FilesystemCheckCommand

SUNW.HAStoragePlus conducts a file system check on each unmounted file system before attempting to mount it. The default file system check command is /usr/sbin/fsck -o p for UFS and VxFS file systems, and /usr/sbin/fsck for other file systems. The FilesystemCheckCommand extension property can be used to override this default file system check specification and instead specify an alternate command string or executable. This command string or executable is then invoked on all unmounted file systems.
The default FilesystemCheckCommand extension property value is NULL. When FilesystemCheckCommand is set to NULL, the command is assumed to be /usr/sbin/fsck -o p for UFS and VxFS file systems and /usr/sbin/fsck for other file systems. When FilesystemCheckCommand is set to a user-specified command string, SUNW.HAStoragePlus invokes this command string with the file system mount point as an argument. Any arbitrary executable can be specified in this manner. A non-zero return value is treated as an error that occurred during the file system check operation, causing the start method to fail. When FilesystemCheckCommand is set to /usr/bin/true, file system checks are avoided altogether.
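As an illustrative sketch, with hasp-rs as a placeholder resource name, the extension property could be set on an existing SUNW.HAStoragePlus resource as follows; this particular value disables file system checks, as described above.

# scrgadm -c -j hasp-rs -x FilesystemCheckCommand=/usr/bin/true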

APPENDIX A

Scalable Cluster Topology

This appendix provides information and procedures for using the scalable cluster topology. This information supplements the Sun Cluster documentation. Certain procedures have been updated and included here to accommodate this new Sun Cluster 3.x topology.
This appendix contains new information for the following topics.

Overview of Scalable Topology on page 139

Adding a Cluster Node on page 140
Removing a Cluster Node on page 140
How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity on page 142

Overview of Scalable Topology

The scalable cluster topology allows connectivity of up to four nodes to a single storage array. Note the following considerations for this topology at this time:

All nodes must have the Oracle Real Application Clusters software installed. For information about installing and using Oracle Real Application Clusters in a cluster, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

The storage arrays supported with this cluster topology include the Sun StorEdge T3/T3+ array (single-controller and partner-group configurations), the Sun StorEdge 9900 Series storage device, and the Sun StorEdge 3900 storage device.

Adding or Removing a Cluster Node


The following information and procedures supplement procedures in the Sun Cluster system administration documentation.

Adding a Cluster Node

The scalable topology does not introduce any changes to the standard procedure for adding cluster nodes. See the Sun Cluster system administration documentation for the procedure for adding a cluster node.
Figure A-1 shows a sample diagram of cabling for four-node connectivity with the scalable topology.

FIGURE A-1 Sample Scalable Topology Cabling, Four-Node Connectivity

Removing a Cluster Node

The following task map procedure is an update to the standard procedure in the Sun Cluster system administration documentation.

Caution Do not use this procedure if your cluster is running an Oracle Real Application Clusters configuration. At this time, removing a node in an Oracle Real Application Clusters configuration might cause nodes to panic at reboot.

TABLE A-1 Task Map: Removing a Cluster Node (5/02)

Task: Move all resource groups and disk device groups off of the node to be removed.
- Use scswitch.
For Instructions, Go To: # scswitch -S -h from-node

Task: Remove the node from all resource groups.
- Use scrgadm.
For Instructions, Go To: Sun Cluster data services collection: See the procedure for how to remove a node from an existing resource group.

Task: Remove the node from all disk device groups.
- Use scconf, metaset, and scsetup.
For Instructions, Go To: Sun Cluster system administration documentation: See the procedures for how to remove a node from a disk device group (separate procedures for Solstice DiskSuite, VERITAS Volume Manager, and raw disk device groups).

Task: Remove all quorum devices.
- Use scsetup.
Caution: Do not remove the quorum device if you are removing a node from a two-node cluster.
For Instructions, Go To: Sun Cluster system administration documentation: How to Remove a Quorum Device.
Note that although you must remove the quorum device before you remove the storage device in the next step, you can add the quorum device back immediately afterward.

Task: Remove the storage device from the node.
- Use devfsadm, scdidadm.
For Instructions, Go To: How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity on page 142

Task: Add the new quorum device (to only the nodes that are intended to remain in the cluster).
- Use scconf -a -q globaldev=d[n],node=node1,node=node2,...
For Instructions, Go To: scconf(1M) man page

Task: Place the node being removed into maintenance state.
- Use scswitch, shutdown, and scconf.
For Instructions, Go To: Sun Cluster system administration documentation: How to Put a Node Into Maintenance State

Task: Remove all logical transport connections to the node being removed.
- Use scsetup.
For Instructions, Go To: Sun Cluster system administration documentation: How to Remove Cluster Transport Cables, Transport Adapters, and Transport Junctions

Task: Remove the node from the cluster software configuration.
- Use scconf.
For Instructions, Go To: Sun Cluster system administration documentation: How to Remove a Node From the Cluster Software Configuration

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity
Use this procedure to detach a storage array from a single cluster node, in a cluster that has three- or
four-node connectivity.

Back up all database tables, data services, and volumes that are associated with the storage array
that you are removing.

Determine the resource groups and device groups that are running on the node to be disconnected.
# scstat

If necessary, move all resource groups and device groups off the node to be disconnected.
Caution If your cluster is running Oracle Real Application Clusters software, shut down the Oracle
Real Application Clusters database instance that is running on the node before you move the groups
off the node. For instructions see the Oracle Database Administration Guide.

# scswitch -S -h from-node
4. Put the device groups into maintenance state.
For the procedure on quiescing I/O activity to VERITAS shared disk groups, see your VERITAS Volume Manager documentation.
For the procedure on putting a device group in maintenance state, see the Sun Cluster system administration documentation.

Remove the node from the device groups.

If you use VERITAS Volume Manager or raw disk, use the scconf command to remove the device groups.

If you use Solstice DiskSuite/Solaris Volume Manager, use the metaset command to remove the device groups.

If the cluster is running HAStorage or HAStoragePlus, remove the node from the resource group's nodelist.
# scrgadm -a -g resource-group -h nodelist

See the Sun Cluster data services collection for more information on changing a resource group's nodelist.
7. If the storage array you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array (otherwise, skip this step).

Do you want to remove the host adapter from the node you are disconnecting?

If yes, shut down and power off the node.


If no, skip to Step 11.

Remove the host adapter from the node.


For the procedure on removing host adapters, see the documentation that shipped with your node.

10. Without allowing the node to boot, power on the node.
For more information, see the Sun Cluster system administration documentation.

11. Boot the node into noncluster mode.
ok boot -x
Caution The node must be in noncluster mode before you remove Oracle Real Application Clusters software in the next step, or the node will panic and potentially cause a loss of data availability.

12. If OPS/RAC software has been installed, remove the OPS/RAC software package from the node that you are disconnecting.
# pkgrm SUNWscucm
Caution If you do not remove the Oracle Real Application Clusters software from the node you disconnected, the node will panic when the node is reintroduced to the cluster and potentially cause a loss of data availability.

13. Boot the node into cluster mode.
ok boot

For more information, see the Sun Cluster system administration documentation.
14. On the node, update the device namespace by updating the /devices and /dev entries.
# devfsadm -C
# scdidadm -C

15. Bring the device groups back online.
For procedures on bringing a VERITAS shared disk group online, see your VERITAS Volume Manager documentation.
For the procedure on bringing a device group online, see the procedure on putting a device group into maintenance state in the Sun Cluster system administration documentation.

APPENDIX B

Installing and Configuring Sun Cluster HA for SAP liveCache

This appendix contains the procedures on how to install and configure Sun Cluster HA for SAP liveCache.
This appendix contains the following procedures.

How to Prepare the Nodes on page 150

How to Install and Configure SAP liveCache on page 151
How to Enable SAP liveCache to Run in a Cluster on page 151
How to Verify the SAP liveCache Installation and Configuration on page 152
How to Install the Sun Cluster HA for SAP liveCache Packages on page 153
How to Register and Configure Sun Cluster HA for SAP liveCache on page 156
How to Verify the Sun Cluster HA for SAP liveCache Installation and Configuration on page 159

Sun Cluster HA for SAP liveCache Overview

Use the information in this section to understand how Sun Cluster HA for SAP liveCache makes SAP liveCache highly available.
For conceptual information on failover and scalable services, see the Sun Cluster concepts documentation.
To eliminate a single point of failure in an SAP Advanced Planner & Optimizer (APO) System, Sun Cluster HA for SAP liveCache provides fault monitoring and automatic failover for SAP liveCache, and fault monitoring and automatic restart for SAP xserver. The following table lists the data services that best protect SAP Supply Chain Management (SCM) components in a Sun Cluster configuration. Figure B-1 also illustrates the data services that best protect SAP SCM components in a Sun Cluster configuration.

TABLE B-1 Protection of SAP liveCache Components

SAP liveCache Component: SAP APO Central Instance
Protected by: Sun Cluster HA for SAP. The resource type is SUNW.sap_ci_v2. For more information on this data service, see the Sun Cluster data services collection.

SAP liveCache Component: SAP APO database
Protected by: All highly available databases that are supported with Sun Cluster software and by SAP.

SAP liveCache Component: SAP APO Application Server
Protected by: Sun Cluster HA for SAP. The resource type is SUNW.sap_as_v2. For more information on this data service, see the Sun Cluster data services collection.

SAP liveCache Component: SAP liveCache xserver
Protected by: Sun Cluster HA for SAP liveCache. The resource type is SUNW.sap_xserver.

SAP liveCache Component: SAP liveCache database
Protected by: Sun Cluster HA for SAP liveCache. The resource type is SUNW.sap_livecache.

SAP liveCache Component: NFS file system
Protected by: Sun Cluster HA for NFS. The resource type is SUNW.nfs. For more information on this data service, see the Sun Cluster data services collection.

FIGURE B-1 Protection of SAP liveCache Components (figure not reproduced; it shows the RDBMS and R/3 components protected by the Sun Cluster data service for your RDBMS, and the liveCache components protected by Sun Cluster HA for SAP liveCache)

Installing and Configuring Sun Cluster HA for SAP liveCache

Table B-2 lists the tasks for installing and configuring Sun Cluster HA for SAP liveCache. Perform these tasks in the order that they are listed.

TABLE B-2 Task Map: Installing and Configuring Sun Cluster HA for SAP liveCache

Task: Plan the Sun Cluster HA for SAP liveCache installation
For Instructions, Go To: Your SAP documentation. Sun Cluster data services collection

Task: Prepare the nodes and disks
For Instructions, Go To: How to Prepare the Nodes on page 150

Task: Install and configure SAP liveCache
For Instructions, Go To: How to Install and Configure SAP liveCache on page 151; How to Enable SAP liveCache to Run in a Cluster on page 151

Task: Verify the SAP liveCache installation and configuration
For Instructions, Go To: How to Verify the SAP liveCache Installation and Configuration on page 152

Task: Install the Sun Cluster HA for SAP liveCache packages
For Instructions, Go To: How to Install the Sun Cluster HA for SAP liveCache Packages on page 153

Task: Register and configure Sun Cluster HA for SAP liveCache as a failover data service
For Instructions, Go To: How to Register and Configure Sun Cluster HA for SAP liveCache on page 156

Task: Verify the Sun Cluster HA for SAP liveCache installation and configuration
For Instructions, Go To: Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration on page 159

Task: Understand the Sun Cluster HA for SAP liveCache fault monitors
For Instructions, Go To: Understanding Sun Cluster HA for SAP liveCache Fault Monitors on page 160

Planning the Sun Cluster HA for SAP liveCache Installation


and Conguration
This section contains the information you need to plan your Sun Cluster HA for SAP liveCache
installation and conguration.
Note If you have not already done so, read your SAP documentation before you begin planning your

Sun Cluster HA for SAP liveCache installation and conguration because your SAP documentation
includes conguration restrictions and requirements that are not outlined in Sun Cluster
documentation or dictated by Sun Cluster software.

Configuration Requirements
Caution Your data service configuration might not be supported if you do not adhere to these
requirements.

Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for
SAP liveCache. These requirements apply to Sun Cluster HA for SAP liveCache only. You must meet
these requirements before you proceed with your Sun Cluster HA for SAP liveCache installation and
configuration.
For requirements that apply to all data services, see Sun Cluster data services collection.


Use SAP liveCache version 7.4 or higher.

Configure SAP xserver so that SAP xserver starts on all nodes that the SAP liveCache resource
can fail over to. To implement this configuration, ensure that the nodelist of the SAP xserver
resource group and the SAP liveCache resource group contain the same nodes. Also, the value of
desired_primaries and maximum_primaries of the SAP xserver resource must be equal to the
number of nodes listed in the nodelist parameter of the SAP liveCache resource. For more
information, see "How to Register and Configure Sun Cluster HA for SAP liveCache" on page
156.


Standard Data Service Configurations

Use the standard configurations in this section to plan the installation and configuration of Sun
Cluster HA for SAP liveCache. Sun Cluster HA for SAP liveCache supports the standard
configurations in this section. Sun Cluster HA for SAP liveCache might support additional
configurations. However, you must contact your Sun service provider for information on additional
configurations.
Figure B2 illustrates a four-node cluster with SAP APO Central Instance, APO application servers, a
database, and SAP liveCache. APO Central Instance, the database, and SAP liveCache are configured
as failover data services. APO application servers and SAP xserver can be configured as scalable or
failover data services.

[Figure: a four-node cluster running the APO Central Instance (CI), APO application servers (APP),
the database (DB), SAP liveCache, and SAP xserver.]

FIGURE B2 Four-Node Cluster

Configuration Considerations
Use the information in this section to plan the installation and configuration of Sun Cluster HA for
SAP liveCache. The information in this section encourages you to think about the impact your
decisions have on the installation and configuration of Sun Cluster HA for SAP liveCache.

Install SAP liveCache on its own global device group, separate from the global device group for
the APO Oracle database and SAP R/3 software. This separate global device group for SAP
liveCache ensures that the SAP liveCache resource can depend on the HAStoragePlus resource
for SAP liveCache only.

If you want to run SAP xserver as any user other than user root, create that user on all nodes on
which SAP xserver runs, and define this user in the Xserver_User extension property. SAP
xserver starts and stops based on the user you identify in this extension property. The default for
this extension property is user root. An example follows this list.

Configure SAP xserver as a failover resource unless you are running multiple liveCache instances
that overlap.
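For example, if SAP xserver should run as a dedicated administrative user rather than root, the
resource might be created as shown below. The resource name, resource group name, and user
name are hypothetical; the command simply sets the Xserver_User extension property described
above.
# scrgadm -a -j xserver-resource -g xserver-resource-group \
-t SUNW.sap_xserver -x Xserver_User=lcxadm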

Configuration Planning Questions

Use the questions in this section to plan the installation and configuration of Sun Cluster HA for SAP
liveCache. Insert the answers to these questions into the data service worksheets in the Sun Cluster
release notes documentation. See "Configuration Considerations" on page 149 for information that
might apply to these questions.

What resource groups will you use for network addresses and application resources and the
dependencies between them?

What is the logical hostname (for the SAP liveCache resource) for clients that will access the data
service?

Where will the system configuration files reside?


See Sun Cluster data services collection for the advantages and disadvantages of placing the SAP
liveCache binaries on the local file system as opposed to the cluster file system.

Preparing the Nodes and Disks


This section contains the procedures you need to prepare the nodes and disks.

How to Prepare the Nodes


Use this procedure to prepare for the installation and configuration of SAP liveCache.

Become superuser on all of the nodes.

Configure the /etc/nsswitch.conf file.


a. On each node that can master the SAP liveCache resource, include one of the following entries for
the group, project, and passwd database entries in the /etc/nsswitch.conf file.
database:
database: files
database: files [NOTFOUND=return] nis
database: files [NOTFOUND=return] nisplus

b. On each node that can master the SAP liveCache resource, ensure that files appears first for the
protocols database entry in the /etc/nsswitch.conf file.
Example:
protocols: files nis

Sun Cluster HA for SAP liveCache uses the su - user command and the dbmcli command to start and
stop SAP liveCache.
The network information name service might become unavailable when a cluster node's public
network fails. Implementing the preceding changes to the /etc/nsswitch.conf file ensures that the
su(1M) command and the dbmcli command do not refer to the NIS/NIS+ name services.
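Taken together, the relevant /etc/nsswitch.conf entries on such a node might look like the following
sketch. It simply applies the guidance above for the group, project, passwd, and protocols databases;
choosing nis as the secondary source is an assumption, and nisplus would follow the same pattern.
group:     files [NOTFOUND=return] nis
project:   files [NOTFOUND=return] nis
passwd:    files [NOTFOUND=return] nis
protocols: files nis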


Installing and Configuring SAP liveCache


This section contains the procedures you need to install and configure SAP liveCache.

How to Install and Configure SAP liveCache


Use this procedure to install and configure SAP liveCache.

Install and configure SAP APO System.


See Sun Cluster data services collection for the procedures on how to install and configure SAP APO
System on Sun Cluster software.

Install SAP liveCache.


Note Install SAP liveCache by using the physical hostname if you have not already created the

required logical host.


For more information, see your SAP documentation.

Create the .XUSER.62 file for the SAP APO administrator user and the SAP liveCache administrator
user by using the following command.
# dbmcli -d LC-NAME -n logical-hostname -us user,passwd

LC-NAME

Uppercase name of SAP liveCache database instance

logical-hostname

Logical hostname that is used with the SAP liveCache resource

Caution Neither SAP APO transaction LC10 nor Sun Cluster HA for SAP liveCache functions
properly if you do not create this file correctly.
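For illustration only, if the SAP liveCache instance were named LC1, the logical hostname were
lchost, and the control user and password were control and secret, the command would be similar
to the following. All of these values are hypothetical.
# dbmcli -d LC1 -n lchost -us control,secret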

Copy /usr/spool/sql from the node on which you installed SAP liveCache to all the nodes that will
run the SAP liveCache resource. Ensure that the ownership of these files is the same on all nodes as it
is on the node on which you installed SAP liveCache.
Example:
# tar cfB - /usr/spool/sql | rsh phys-schost-1 tar xfB -

How to Enable SAP liveCache to Run in a Cluster


During a standard SAP installation, SAP liveCache is installed with a physical hostname. You must
modify SAP liveCache to use a logical hostname so that SAP liveCache works in a Sun Cluster
environment. Use this procedure to enable SAP liveCache to run in a cluster.

Create the failover resource group to hold the network and SAP liveCache resource.
# scrgadm -a -g livecache-resource-group [-h nodelist]

Verify that you added all the network resources you use to your name service database.

Add a network resource (logical hostname) to the failover resource group.


# scrgadm -a -L -g livecache-resource-group \
-l lc-logical-hostname [-n netiflist]

Enable the failover resource group.


# scswitch -Z -g livecache-resource-group
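For reference, on a two-node cluster the resource-group commands in this procedure might look like
the following. The resource group name, logical hostname, and node names are hypothetical.
# scrgadm -a -g lc-rg -h phys-schost-1,phys-schost-2
# scrgadm -a -L -g lc-rg -l lchost
# scswitch -Z -g lc-rg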

Log on to the node that hosts the SAP liveCache resource group.

Start SAP xserver manually on the node that hosts the SAP liveCache resource group.
# su - lc-nameadm
# x_server start

lc-name

Lowercase name of SAP liveCache database instance

Log on to SAP APO System by using your SAP GUI with user DDIC.

Go to transaction LC10 and change the SAP liveCache host to the logical hostname you defined in
Step 3.
liveCache host: lc-logical-hostname

Verifying the SAP liveCache Installation and Configuration


This section contains the procedure you need to verify the SAP liveCache installation and
configuration.

How to Verify the SAP liveCache Installation and Configuration

Use this procedure to verify the SAP liveCache installation and configuration. This procedure does
not verify that your application is highly available because you have not installed your data service
yet.


Log on to SAP APO System by using your SAP GUI with user DDIC.

Go to transaction LC10.


Ensure that you can check the state of SAP liveCache.

Ensure that the following dbmcli commands work as user lc-nameadm.


# dbmcli -d LC_NAME -n logical-hostname db_state
# dbmcli -d LC_NAME -n logical-hostname db_enum

Installing the Sun Cluster HA for SAP liveCache Packages


This section contains the procedure you need to install the Sun Cluster HA for SAP liveCache
packages.

How to Install the Sun Cluster HA for SAP liveCache Packages

Use this procedure to install the Sun Cluster HA for SAP liveCache packages. You need the Sun
Cluster 3.0 5/02 Agents CD-ROM to perform this procedure. This procedure assumes that you did
not install the data service packages during your initial Sun Cluster installation.

Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.

Run the scinstall utility with no options.


This step starts the scinstall utility in interactive mode.

Choose the Add Support for New Data Service to This Cluster Node menu option.
The scinstall utility prompts you for additional information.

Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD-ROM as the data services cd.

Specify the data service to install.


The scinstall utility lists the data service that you selected and asks you to confirm your choice.

Exit the scinstall utility.

Unload the CD-ROM from the drive.


Registering and Configuring the Sun Cluster HA for SAP liveCache

This section contains the procedures you need to configure Sun Cluster HA for SAP liveCache.

Sun Cluster HA for SAP liveCache Extension Properties

Use the extension properties in Table B3 and Table B4 to create your resources. Use the following
command line to configure extension properties when you create your resource.
scrgadm -x parameter=value

Use the procedure in Sun Cluster data services collection to configure the extension properties if you
have already created your resources. You can update some extension properties dynamically. You can
update others, however, only when you create or disable a resource. The Tunable fields in Table B3
and Table B4 indicate when you can update each property. See Appendix A for details on all Sun
Cluster properties. An example follows.
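As an illustration of both cases, the following hypothetical commands set the Probe_timeout
extension property, which Table B3 below lists as tunable at any time. The resource and resource
group names are placeholders.
To set the property when you create the resource:
# scrgadm -a -j xserver-resource -g xserver-resource-group \
-t SUNW.sap_xserver -x Probe_timeout=180
To change the property on an existing resource:
# scrgadm -c -j xserver-resource -x Probe_timeout=180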
TABLE B3 Sun Cluster HA for SAP liveCache (SUNW.sap_xserver) Extension Properties

Name/Data Type                      Description

Confdir_List (optional) String      The directory for SAP liveCache software and
                                    instance directories.
                                    Default: /sapdb
                                    Range: None
                                    Tunable: At creation

Monitor_retry_count                 Number of PMF restarts that are allowed for the fault
                                    monitor.
                                    Default: 4
                                    Tunable: Any time

Monitor_retry_interval              Time interval in minutes for fault monitor restarts.
                                    Default: 2
                                    Tunable: Any time

Probe_timeout                       Time-out value in seconds for the probes.
                                    Default: 120
                                    Tunable: Any time

Soft_Stop_Pct (optional) Integer    Percentage of the stop timeout that is used to stop SAP
                                    xserver by using the SAP utility x_server stop before
                                    SIGKILL is used to stop all SAP xserver processes.
                                    Default: 50
                                    Range: 1-100
                                    Tunable: When disabled

Xserver_User (optional) String      SAP xserver system administrator user name.
                                    Default: root
                                    Range: None
                                    Tunable: At creation

TABLE B4 Sun Cluster HA for SAP liveCache (SUNW.sap_livecache) Extension Properties

Name/Data Type                      Description

Confdir_list (optional) String      The directory for SAP liveCache software and the
                                    instance directory.
                                    Default: /sapdb
                                    Range: None
                                    Tunable: At creation

Livecache_Name (required) String    Name of SAP liveCache database instance.
                                    Default: None
                                    Range: None
                                    Tunable: At creation

Monitor_retry_count                 Number of PMF restarts that are allowed for the fault
                                    monitor.
                                    Default: 4
                                    Tunable: Any time

Monitor_retry_interval              Time interval in minutes for fault monitor restarts.
                                    Default: 2
                                    Tunable: Any time

Probe_timeout                       Time-out value in seconds for the probes.
                                    Default: 90
                                    Tunable: Any time

How to Register and Configure Sun Cluster HA for SAP liveCache

Use this procedure to configure Sun Cluster HA for SAP liveCache as a failover data service for the
SAP liveCache database and SAP xserver as a failover or scalable data service. This procedure
assumes that you installed the data service packages. If you did not install the Sun Cluster HA for
SAP liveCache packages as part of your initial Sun Cluster installation, go to "How to Install the Sun
Cluster HA for SAP liveCache Packages" on page 153 to install the data service packages. Otherwise,
use this procedure to configure Sun Cluster HA for SAP liveCache.
Caution Do not configure more than one SAP xserver resource on the same cluster because one SAP
xserver serves multiple SAP liveCache instances in the cluster. More than one SAP xserver resource
that runs on the same cluster causes conflicts between the SAP xserver resources. These conflicts
cause all SAP xserver resources to become unavailable. If you attempt to start the SAP xserver twice,
you receive an error message that says Address already in use.

Become superuser on one of the nodes in the cluster that will host the SAP liveCache resource.

Copy the lccluster file to the same location as the lcinit file.


# cp /opt/SUNWsclc/livecache/bin/lccluster \
/sapdb/LC-NAME/db/sap

LC-NAME

Uppercase name of SAP liveCache database instance

Edit the lccluster file to substitute values for put-LC_NAME-here and put-Confdir_list-here.


Note The put-Confdir_list-here value exists only in the Sun Cluster 3.1 version.

a. Open the lccluster file.


# vi /sapdb/LC-NAME/db/sap/lccluster \
LC_NAME="put-LC_NAME-here" \
CONFDIR_LIST="put-Confdir_list-here"

Note The CONFDIR_LIST=put-Confdir_list-here entry exists only in the Sun Cluster 3.1
version.

b. Replace put-LC_NAME-here with the SAP liveCache instance name. The SAP liveCache instance
name is the value you defined in the Livecache_Name extension property.
LC_NAME="liveCache-instance-name"

c. Replace put-Confdir_list-here with the value of the Confdir_list extension property.


Note This step is only for the Sun Cluster 3.1 version. Skip this step if you are running an earlier

version of Sun Cluster.


CONFDIR_LIST="liveCache-software-directory"

Example:
If the SAP liveCache instance name is LC1 and the SAP liveCache software directory is /sapdb,
edit the lccluster script as follows.
LC_NAME="LC1"
CONFDIR_LIST="/sapdb" [Sun Cluster 3.1 version only]

Add the HAStoragePlus resource to the SAP liveCache resource group.


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j livecache-storage-resource -g livecache-resource-group \
-t SUNW.HAStoragePlus -x filesystemmountpoints=mountpoint,... \
-x globaldevicepaths=livecache-device-group

Enable the SAP liveCache storage resource.


# scswitch -e -j livecache-storage-resource

Register the resource type for SAP liveCache database.


# scrgadm -a -t SUNW.sap_livecache

Register the resource type for SAP xserver.


# scrgadm -a -t SUNW.sap_xserver

Configure the SAP xserver as a scalable resource, completing the following substeps.
a. Create a scalable resource group for SAP xserver. Configure SAP xserver to run on all the potential
nodes that SAP liveCache will run on.


Note Configure SAP xserver so that SAP xserver starts on all nodes that the SAP liveCache
resources can fail over to. To implement this configuration, ensure that the nodelist parameter of
the SAP xserver resource group contains all the nodes listed in the liveCache resource group's
nodelist. Also, the value of desired_primaries and maximum_primaries of the SAP xserver
resource group must be equal to each other.
# scrgadm -a -g xserver-resource-group \
-y Maximum_primaries=value \
-y Desired_primaries=value \
-h nodelist

b. Create an SAP xserver resource in this scalable resource group.


# scrgadm -a -j xserver-resource \
-g xserver-resource-group -t SUNW.sap_xserver

c. Enable the scalable resource group that now includes the SAP xserver resource.
# scswitch -Z -g xserver-resource-group

Register the SAP liveCache resource as follows.


# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME \
-y resource_dependencies=livecache-storage-resource


Set up a resource group dependency between SAP xserver and SAP liveCache.
# scrgadm -c -g livecache-resource-group \
-y rg_dependencies=xserver-resource-group


Enable the liveCache failover resource group.


# scswitch -Z -g livecache-resource-group

Are you running an APO application server on a node that SAP liveCache can fail over to?

If no, this step completes this procedure.

If yes, proceed to the next step.

Is the scalable APO application server resource group already in an RGOffload resource's
rg_to_offload list?
# scrgadm -pvv | grep -i rg_to_offload | grep value:

If yes, this step completes this procedure.

If no, consider adding an RGOffload resource in the SAP liveCache resource group.
This configuration enables you to automatically shut down the APO application server if the
liveCache resource fails over to a node on which the APO application server was running.


Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data
service correctly.

How to Verify the Sun Cluster HA for SAP liveCache Installation and Configuration

Use this procedure to verify that you installed and configured Sun Cluster HA for SAP liveCache
correctly. You need the information in the following table to understand the various states of the SAP
liveCache database.

State                   Description

OFFLINE                 SAP liveCache is not running.

COLD                    SAP liveCache is available for administrator tasks.

WARM                    SAP liveCache is online.

STOPPED INCORRECTLY     SAP liveCache stopped incorrectly. This is also one of
                        the interim states while SAP liveCache starts or stops.

ERROR                   Cannot determine the current state. This is also one of
                        the interim states while SAP liveCache starts or stops.

UNKNOWN                 This is one of the interim states while SAP liveCache
                        starts or stops.

Log on to the node that hosts the resource group that contains the SAP liveCache resource, and
verify that the fault monitor functionality works correctly.
a. Terminate SAP liveCache abnormally by stopping all SAP liveCache processes.
Sun Cluster software restarts SAP liveCache.
# ps -ef|grep sap|grep kernel
# kill -9 livecache-processes

b. Terminate SAP liveCache by using the Stop liveCache button in LC10 or by running the lcinit
command.
Sun Cluster software does not restart SAP liveCache. However, the SAP liveCache resource status
message reflects that SAP liveCache stopped outside of Sun Cluster software through the use of
the Stop liveCache button in LC10 or the lcinit command. The state of the SAP liveCache
resource is UNKNOWN. When the user successfully restarts SAP liveCache by using the Start

liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault
Monitor updates the resource state and status message to indicate that SAP liveCache is running
under the control of Sun Cluster software.

Log on to SAP APO by using your SAP GUI with user DDIC, and verify that SAP liveCache starts
correctly by using transaction LC10.
As user root, switch the SAP liveCache resource group to another node.
# scswitch -z -g livecache-resource-group -h node2

Repeat Step 1 through Step 3 for each potential node on which the SAP liveCache resource can run.

Log on to the nodes that host the SAP xserver resource, and verify that the fault monitor
functionality works correctly.
Terminate SAP xserver abnormally by stopping all SAP xserver processes.
# ps -ef|grep xserver
# kill -9 xserver-process

Understanding Sun Cluster HA for SAP liveCache Fault Monitors

Use the information in this section to understand Sun Cluster HA for SAP liveCache Fault Monitors.
This section describes the Sun Cluster HA for SAP liveCache Fault Monitors' probing algorithm or
functionality, states the conditions, messages, and recovery actions associated with unsuccessful
probing, and states the conditions and messages associated with successful probing.

Extension Properties
See "Sun Cluster HA for SAP liveCache Extension Properties" on page 154 for the extension
properties that the Sun Cluster HA for SAP liveCache Fault Monitors use.

Monitor Check Method


The SAP liveCache resource's Monitor_check method checks whether SAP xserver is available on this
node. If SAP xserver is not available on this node, this method returns an error and rejects the
failover of SAP liveCache to this node.
This method is needed to enforce the cross-resource group resource dependency between SAP
xserver and SAP liveCache.

Probing Algorithm and Functionality


Sun Cluster HA for SAP liveCache has a fault monitor for each resource type.

SAP xserver Fault Monitor on page 161 (SUNW.sap_xserver)


SAP liveCache Fault Monitor on page 161 (SUNW.sap_livecache)

SAP xserver Fault Monitor


The SAP xserver parent process is under the control of process monitor pmfadm. If the parent process
is stopped or killed, the process monitor contacts the SAP xserver Fault Monitor, and the SAP xserver
Fault Monitor decides what action must be taken.
The SAP xserver Fault Monitor performs the following steps in a loop.
1. Sleeps for Thorough_probe_interval.
2. Uses the SAP utility dbmcli with db_enum to check SAP xserver availability.

If SAP xserver is unavailable, SAP xserver probe restarts or fails over the SAP xserver resource
if it reaches the maximum number of restarts.

If any system error messages are logged in syslog during the checking process, the SAP
xserver probe concludes that a partial failure has occurred. If the system error messages
logged in syslog occur four times within the probe_interval, SAP xserver probe restarts
SAP xserver.
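To reproduce this availability check by hand, you can run commands similar to the following. The
administrator user name and logical hostname are hypothetical, and this is only a manual spot check,
not the probe itself.
# su - lc1adm
# dbmcli -n lchost db_enum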

SAP liveCache Fault Monitor


The SAP liveCache probe checks for the presence of the SAP liveCache parent process, the state of
the SAP liveCache database, and whether the user intentionally stopped SAP liveCache outside of
Sun Cluster software. If a user used the Stop liveCache button in LC10 or the lcinit command to
stop SAP liveCache outside of Sun Cluster software, the SAP liveCache probe concludes that the user
intentionally stopped SAP liveCache outside of Sun Cluster software.
If the user intentionally stopped SAP liveCache outside of Sun Cluster software by using the Stop
liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault
Monitor updates the resource state and status message to reect this action, but it does not restart
SAP liveCache. When the user successfully restarts SAP liveCache outside of Sun Cluster software by
using the Start liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP
liveCache Fault Monitor updates the resource state and status message to indicate that SAP
liveCache is running under the control of Sun Cluster software, and Sun Cluster HA for SAP
liveCache Fault Monitor takes appropriate action if it detects SAP liveCache is OFFLINE.
If SAP liveCache database state reports that SAP liveCache is not running or that the SAP liveCache
parent process terminated, the Sun Cluster HA for SAP liveCache Fault Monitor restarts or fails over
SAP liveCache.
The Sun Cluster HA for SAP liveCache Fault Monitor performs the following steps in a loop. If any
step returns SAP liveCache is offline, the SAP liveCache probe restarts or fails over SAP
liveCache.

1. Sleeps for Thorough_probe_interval.


2. Uses the dbmcli utility with db_state to check the SAP liveCache database state.
3. If SAP liveCache is online, SAP liveCache probe checks the SAP liveCache parent process.

If the parent process terminates, SAP liveCache probe returns liveCache is offline.

If the parent process is online, SAP liveCache probe returns OK.

4. If SAP liveCache is not online, SAP liveCache probe determines if the user stopped SAP
liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the
lcinit command.
5. If the user stopped SAP liveCache outside of Sun Cluster software by using the Stop liveCache
button in LC10 or the lcinit command, returns OK.
6. If the user did not stop SAP liveCache outside of Sun Cluster software by using the Stop
liveCache button in LC10 or the lcinit command, checks SAP xserver availability.

If SAP xserver is unavailable, returns OK because the probe cannot restart SAP liveCache if
SAP xserver is unavailable.

If SAP xserver is available, returns liveCache is offline.

7. If any errors are reported from system function calls, returns system failure.
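To spot-check the same conditions manually, commands similar to the following can be used; they
roughly correspond to steps 2 and 3 of the loop. The instance name, logical hostname, and
administrator user are hypothetical.
# su - lc1adm
# dbmcli -d LC1 -n lchost db_state
# ps -ef | grep sap | grep kernel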


A P P E N D I X   C

Installing and Configuring Sun Cluster HA for Sybase ASE

This chapter provides instructions on how to configure and administer Sun Cluster HA for Sybase
ASE on your Sun Cluster nodes.
This chapter contains the following procedures.

"How to Prepare the Nodes" on page 165

"How to Install the Sybase Software" on page 166
"How to Verify the Sybase ASE Installation" on page 168
"How to Configure Sybase ASE Database Access With Solstice DiskSuite/Solaris Volume
Manager" on page 168
"How to Configure Sybase ASE Database Access With VERITAS Volume Manager" on page 169
"How to Create the Sybase ASE Database Environment" on page 170
"How to Install Sun Cluster HA for Sybase ASE Packages" on page 172
"How to Register and Configure Sun Cluster HA for Sybase ASE" on page 172
"How to Verify the Sun Cluster HA for Sybase ASE Installation" on page 175

You must configure Sun Cluster HA for Sybase ASE as a failover data service. See the Sun Cluster
concepts documentation and the Sun Cluster data services collection for general information
about data services, resource groups, resources, and other related topics.

Installing and Configuring Sun Cluster HA for Sybase ASE


The following table lists sections that describe the installation and configuration tasks.

TABLE C1 Task Map: Installing and Configuring Sun Cluster HA for Sybase ASE

Task                                           For Instructions, Go To

Prepare to install Sun Cluster HA for          "Preparing to Install Sun Cluster HA for Sybase ASE" on page 164
Sybase ASE

Install the Sybase ASE 12.0 software           "Installing the Sybase ASE 12.0 Software" on page 164

Create the Sybase database environment         "Creating the Sybase ASE Database Environment" on page 168

Install the Sun Cluster HA for Sybase ASE      "Installing the Sun Cluster HA for Sybase ASE Package" on page
package                                        171

Register Sun Cluster HA for Sybase ASE         "Registering and Configuring Sun Cluster HA for Sybase ASE"
resource types and configure resource          on page 172
groups and resources

Verify the Sun Cluster HA for Sybase ASE       "Verifying the Sun Cluster HA for Sybase ASE Installation"
installation                                   on page 175

Understand Sun Cluster HA for Sybase           "Understanding Sun Cluster HA for Sybase ASE Logging and
ASE logging and security issues                Security Issues" on page 176

Configure Sun Cluster HA for Sybase ASE        "Configuring Sun Cluster HA for Sybase ASE Extension
extension properties                           Properties" on page 177

View fault monitor information                 "Sun Cluster HA for Sybase ASE Fault Monitor" on page 180

Preparing to Install Sun Cluster HA for Sybase ASE


To prepare your nodes for the Sun Cluster HA for Sybase Adaptive Server 12.0 installation, select an
installation location for the following files.

Sybase ASE application files - These files include Sybase ASE binaries and libraries. You can
install these files on either the local file system or the cluster file system.
See the Sun Cluster data services collection for the advantages and disadvantages of placing the
Sybase ASE binaries on the local file system as opposed to the cluster file system.

Sybase ASE configuration files - These files include the interfaces file, config file, and
environment file. You can install these files on the local file system (with links), the highly
available local file system, or on the cluster file system.

Database data files - These files include Sybase device files. You must install these files on the
highly available local file system or the cluster file system as either raw devices or regular files.

Installing the Sybase ASE 12.0 Software


Use the procedures in this section to complete the following tasks.


Prepare the nodes.


Install the Sybase ASE software.
Verify the Sybase ASE installation.


Note Before you configure Sun Cluster HA for Sybase ASE, use the procedures that the Sun Cluster
data services collection describes to configure the Sybase ASE software on each node.

How to Prepare the Nodes


This procedure describes how to prepare the cluster nodes for Sybase ASE software installation.
Caution Perform all of the steps in this procedure on all of the nodes. If you do not perform all of the

steps on all of the nodes, the Sybase ASE installation will be incomplete, and Sun Cluster HA for
Sybase ASE will fail during startup.

Note Consult the Sybase ASE documentation before you perform this procedure.

Become superuser on all of the nodes.

Configure the /etc/nsswitch.conf file as follows so that Sun Cluster HA for Sybase ASE starts and
stops correctly if a switchover or failover occurs.
On each node that can master the logical host that runs Sun Cluster HA for Sybase ASE, include one
of the following entries for group in the /etc/nsswitch.conf file.
group:
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus

Sun Cluster HA for Sybase ASE uses the su user command to start and stop the database node.
The network information name service might become unavailable when a cluster node's public
network fails. Adding one of the preceding entries for group ensures that the su(1M) command does
not refer to the NIS/NIS+ name services if the network information name service is unavailable.

Configure the cluster file system for Sun Cluster HA for Sybase ASE.
If raw devices contain the databases, configure the global devices for raw-device access. See the Sun
Cluster data services collection for information on how to configure global devices.
If you use the Solstice DiskSuite/Solaris Volume Manager volume manager, configure the Sybase ASE
software to use UNIX file system (UFS) logging on mirrored metadevices or raw-mirrored
metadevices. See the Solstice DiskSuite/Solaris Volume Manager documentation for information on
how to configure raw-mirrored metadevices.

Prepare the SYBASE_HOME directory on a local or multihost disk.



Note If you install the Sybase ASE binaries on a local disk, use a separate disk if possible. Installing

the Sybase ASE binaries on a separate disk prevents the binaries from being overwritten during
operating environment reinstallation.

On each node, create an entry for the database administrator (DBA) group in the /etc/group file,
and add potential users to the group.
Verify that the root and sybase users are members of the dba group, and add entries as necessary for
other DBA users. Ensure that group IDs are the same on all of the nodes that run Sun Cluster HA for
Sybase ASE, as the following example illustrates.
dba:*:520:root,sybase

You can create group entries in a network name service. If you do so, also add your entries to the local
/etc/group file to eliminate dependency on the network name service.

On each node, create an entry for the Sybase system administrator.


The following command updates the /etc/passwd and /etc/shadow files with an entry for the
Sybase system administrator.
# useradd -u 120 -g dba -d /Sybase-home sybase

Ensure that the sybase user entry is the same on all of the nodes that run Sun Cluster HA for Sybase
ASE.

How to Install the Sybase Software


Perform the following steps to install the Sybase ASE software.

Become superuser on a cluster member.

Note the Sybase ASE installation requirements.


You can install Sybase ASE binaries on one of the following locations.

Local disks of the cluster nodes

Highly available local file system

Cluster file system
Note Before you install the Sybase ASE software on the cluster file system, start the Sun Cluster
software and become the owner of the disk device group.
See "Preparing to Install Sun Cluster HA for Sybase ASE" on page 164 for more information about
installation locations.

Create a failover resource group to hold the network and application resources.
# scrgadm -a -g resource-group [-h nodelist]

-g resource-group

Species the name of the resource group. This name can be your choice but
must be unique for resource groups within the cluster.

-h nodelist

Specifies an optional, comma-separated list of physical node names or IDs


that identify potential masters. The order here determines the order in which
the Resource Group Manager (RGM) considers primary nodes during
failover.

Note Use the -h option to specify the order of the node list. If all of the nodes in the cluster are

potential masters, you do not need to use the -h option.

Verify that you have added all of the network resources that Sun Cluster HA for Sybase ASE uses to
either the /etc/inet/hosts file or to your name service (NIS, NIS+) database.

Add a network resource (logical hostname or shared address) to the failover resource group.
# scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist]

-l logical-hostname

Specifies a network resource. The network resource is the logical hostname


or shared address (IP address) that clients use to access Sun Cluster HA for
Sybase ASE.

-n netiflist

Specifies an optional, comma-separated list that identifies the NAFO groups
on each node. All of the nodes that are in the resource group's nodelist must
be represented in the netiflist. If you do not specify this option, the
scrgadm(1M) command attempts to discover a net adapter on the subnet
that the hostname list identifies for each node that is in nodelist. For
example, -n nafo0@nodename, nafo0@nodename2.

Run the scswitch(1M) command to complete the following tasks.

Enable the resource and fault monitoring.


Move the resource group into a managed state.
Bring the resource group online.

# scswitch -Z -g resource-group

On the node mastering the resource group that you just created, log in as sybase.
The installation of the Sybase binaries must be performed on the node where the corresponding
logical host is running.


Install the Sybase ASE software.


Regardless of where you install the Sybase ASE software, modify each node's /etc/system file as you
would in standard Sybase ASE installation procedures. For instructions on how to install the Sybase
ASE software, refer to the Sybase installation and configuration guides.
Note For every Sybase server, enter the hostname that is associated with a network resource when

asked to specify the hostname.

See Also

After you install the Sybase ASE software, go to "How to Configure Sybase ASE Database Access
With Solstice DiskSuite/Solaris Volume Manager" on page 168 if you use the Solstice
DiskSuite/Solaris Volume Manager volume manager. Go to "How to Configure Sybase ASE Database
Access With VERITAS Volume Manager" on page 169 if you use the VERITAS Volume Manager
(VxVM).

How to Verify the Sybase ASE Installation


Perform the following steps to verify the Sybase ASE software installation.

Verify that the sybase user and the dba group own the $SYBASE_HOME directory and its child
directories.

Run the scstat(1M) command to verify that the Sun Cluster software functions correctly.

Creating the Sybase ASE Database Environment


The procedures in this section enable you to complete the following tasks.

Configure Sybase ASE database access with Solstice DiskSuite/Solaris Volume Manager or
VERITAS Volume Manager.

Create the Sybase ASE database environment.

How to Configure Sybase ASE Database Access With Solstice DiskSuite/Solaris Volume Manager

If you use the Solstice DiskSuite/Solaris Volume Manager volume manager, perform the following
steps to configure Sybase ASE database access with the Solstice DiskSuite/Solaris Volume Manager
volume manager.


Configure the disk devices for the Solstice DiskSuite/Solaris Volume Manager software to use.
See the Sun Cluster software installation documentation for information on how to configure
Solstice DiskSuite/Solaris Volume Manager.

If you use raw devices to contain the databases, run the following commands to change each
raw-mirrored metadevice's owner, group, and mode.
If you do not use raw devices, do not perform this step.
a. If you create raw devices, run the following commands for each device on each node that can
master the Sybase ASE resource group.
# chown sybase /dev/md/metaset/rdsk/dn
# chgrp dba /dev/md/metaset/rdsk/dn
# chmod 600 /dev/md/metaset/rdsk/dn

metaset

Specifies the name of the diskset.

/rdsk/dn

Specifies the name of the raw disk device within the metaset diskset.

b. Verify that the changes are effective.


# ls -lL /dev/md/metaset/rdsk/dn

How to Configure Sybase ASE Database Access With VERITAS Volume Manager

If you use VERITAS Volume Manager software, perform the following steps to configure Sybase ASE
database access with the VERITAS Volume Manager software.

Configure the disk devices for the VERITAS Volume Manager software to use.
See the Sun Cluster software installation documentation for information on how to configure
VERITAS Volume Manager.

If you use raw devices to contain the databases, run the following commands on the current
disk-group primary to change each device's owner, group, and mode.
If you do not use raw devices, do not perform this step.
a. If you create raw devices, run the following command for each raw device.
# vxedit -g diskgroup set user=sybase group=dba mode=0600 volume

-g diskgroup

Specifies the name of the disk group that contains the volume.

volume

Specifies the name of the raw volume whose owner, group, and mode you
are changing.


b. Verify that the changes are effective.
# ls -lL /dev/vx/rdsk/diskgroup/volume

c. Reregister the disk device group with the cluster to keep the VERITAS Volume Manager
namespace consistent throughout the cluster.
# scconf -c -D name=diskgroup

How to Create the Sybase ASE Database Environment


Before you perform this procedure, ensure that you have completed the following tasks.

Establish a highly available IP address and name, that is, a network resource that operates at
installation time.

Locate device paths for all of the Sybase ASE devices, including the master device and system
devices, in the highly available local file system or cluster file system. Configure device paths as
one of the following file types.

regular files

raw devices

files that the Solstice DiskSuite/Solaris Volume Manager software or the VERITAS Volume
Manager software manage

Locate the Sybase ASE server logs in either the cluster file system or the local file system.

The Sybase ASE 12.0 environment consists of the data server, backup server, monitor server, text
server, and XP server. The data server is the only server that you must configure; you can choose
whether to configure all of the other servers.

The entire cluster must contain only one copy of the interfaces file. The $SYBASE directory
contains the interfaces file. If you plan to maintain per-node file copies, ensure the file contents
are identical.
All of the clients that connect to Sybase ASE servers connect with Sybase OpenClient libraries
and utilities. When you configure the Sybase ASE software, in the interfaces file, enter
information about the network resource and various ports. All of the clients use this connection
information to connect to the Sybase ASE servers.

Perform the following steps to create the Sybase ASE database environment.

Run the GUI-based utility srvbuild to create the Sybase ASE database.
The $SYBASE/ASE_12-0/bin directory contains this utility. See the Sybase ASE document entitled
Installing Sybase Adaptive Server Enterprise on Sun Solaris 2.x (SPARC).


To verify successful database installation, ensure that all of the servers start correctly.
Run the ps(1) command to verify the operation of all of the servers. Sybase ASE server logs indicate
any errors that have occurred.

Set the password for the Sybase ASE system administrator account.
See the Sybase Adaptive Server Enterprise System Administration Guide for details on changing the sa
login password.

Create a new Sybase ASE account for fault monitoring.


This account enables the fault monitor to perform the following tasks.

Support queries to system tables.


Create and update user tables.

Note Do not use the sa account for these purposes.

See Sun Cluster HA for Sybase ASE Fault Monitor on page 180 for more information.

Update the stop file with the sa password.


Because the stop file contains the sa password, protect the file with the appropriate permissions, and
place the file in a directory that the system administrator chooses. Enable only the sybase user to read,
write, and execute the stop file.
See "Important Security Issues" on page 177 for more information about the stop file.
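For example, if the stop file is kept in the Sybase home directory, its ownership and permissions could
be restricted as follows. The path is hypothetical; the commands simply apply the guidance above.
# chown sybase:dba /Sybase-home/sybase_stop_servers
# chmod 700 /Sybase-home/sybase_stop_servers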

See Also

After you create the Sybase ASE database environment, go to How to Install Sun Cluster HA for
Sybase ASE Packages on page 172.

Installing the Sun Cluster HA for Sybase ASE Package


You can use the scinstall(1M) utility to install SUNWscsyb, the Sun Cluster HA for Sybase ASE
package, on a cluster. Do not use the -s option to non-interactive scinstall to install all of the data
service packages.
If you installed the SUNWscsyb data service package as part of your initial Sun Cluster installation,
proceed to Registering and Configuring Sun Cluster HA for Sybase
ASE on page 172. Otherwise, use the following procedure to install the SUNWscsyb package.


How to Install Sun Cluster HA for Sybase ASE Packages


You need the Sun Cluster 3.0 5/02 Agents CD-ROM to complete this procedure. Perform this
procedure on all of the cluster nodes that run the Sun Cluster HA for Sybase ASE package.

Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.

Run the scinstall utility with no options.


This step starts the scinstall utility in interactive mode.

Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.

Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD as the data services cd.

Specify the data service to install.


The scinstall utility lists the data service that you selected and asks you to confirm your choice.

Exit the scinstall utility.

Unload the CD from the drive.

See Also

When you finish the Sun Cluster HA for Sybase ASE package installation, go to "How to Register and
Configure Sun Cluster HA for Sybase ASE" on page 172.

Registering and Configuring Sun Cluster HA for Sybase ASE


Use the procedures in this section to register and configure Sun Cluster HA for Sybase ASE.
Register and configure Sun Cluster HA for Sybase ASE as a failover data service.

How to Register and Configure Sun Cluster HA for Sybase ASE

This procedure describes how to use the scrgadm(1M) command to register and configure Sun
Cluster HA for Sybase ASE.


This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes
actions between HAStorage and Sun Cluster HA for Sybase ASE and enables you to use a highly
available local file system. Sun Cluster HA for Sybase ASE is disk-intensive, and therefore you should
configure the HAStoragePlus resource type.
See the SUNW.HAStoragePlus(5) man page and Sun Cluster data services collection for more
information about the HAStoragePlus resource type.
Note Other options also enable you to register and configure the data service. See Sun Cluster data
services collection for details about these options.


To perform this procedure, you must have the following information.

The names of the cluster nodes that master the data service.

The network resource that clients use to access the data service. You typically configure the IP
address when you install the cluster. See the sections in the Sun Cluster software installation
documentation on planning the Sun Cluster environment and on how to install the Solaris
operating environment for details.

The path to the Sybase ASE application installation.

Note Perform the following steps on one cluster member.

Become superuser on a cluster member.

Run the scrgadm command to register resource types for Sun Cluster HA for Sybase ASE.
# scrgadm -a -t SUNW.sybase

-a

Adds the resource type for the data service.

-t SUNW.sybase

Specifies the resource type name that is predefined for your data service.

Register the HAStoragePlus resource type with the cluster.


# scrgadm -a -t SUNW.HAStoragePlus

Create the resource sybase-hastp-rs of type HAStoragePlus.


# scrgadm -a -j sybase-hastp-rs -g sybase-rg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=sybase-set1,/dev/global/dsk/d1 \
-x FilesystemMountPoints=/global/sybase-inst \
-x AffinityOn=TRUE


Note AffinityOn must be set to TRUE, and the local file system must reside on global disk groups to
be able to fail over.

Run the scswitch command to complete the following tasks and bring the resource group sybase-rg
online on a cluster node.

Move the resource group into a managed state.


Bring the resource group online

This node will be made the primary for device group sybase-set1 and raw device
/dev/global/dsk/d1. Device groups associated with file systems such as /global/sybase-inst will
also be made primaries on this node.
# scswitch -Z -g sybase-rg

Create Sybase ASE application resources in the failover resource group.


# scrgadm -a -j resource -g resource-group \
-t SUNW.sybase \
-x Environment_File=environment-file-path \
-x Adaptive_Server_Name=adaptive-server-name \
-x Backup_Server_Name=backup-server-name \
-x Text_Server_Name=text-server-name \
-x Monitor_Server_Name=monitor-server-name \
-x Adaptive_Server_Log_File=log-file-path \
-x Stop_File=stop-file-path \
-x Connect_string=user/passwd \
-y resource_dependencies=storageplus-resource

-j resource
Specifies the resource name to add.
-g resource-group
Specifies the resource group name into which the RGM places the resources.
-t SUNW.sybase
Specifies the resource type to add.
-x Environment_File=environment-file
Sets the name of the environment file.
-x Adaptive_Server_Name=adaptive-server-name
Sets the name of the adaptive server.
-x Backup_Server_Name=backup-server-name
Sets the name of the backup server.
-x Text_Server_Name=text-server-name
Sets the name of the text server.

-x Monitor_Server_Name=monitor-server-name
Sets the name of the monitor server.
-x Adaptive_Server_Log_File=log-file-path
Sets the path to the log file for the adaptive server.
-x Stop_File=stop-file-path
Sets the path to the stop file.
-x Connect_string=user/passwd
Specifies the user name and password that the fault monitor uses to connect to the database.
You do not have to specify extension properties that have default values. See "Configuring Sun
Cluster HA for Sybase ASE Extension Properties" on page 177 for more information. A filled-in
example follows.
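Filled in with sample values, the command in this step might look like the following. Every name and
path shown is hypothetical and must be replaced with values from your own installation; the optional
servers are omitted because properties with default values need not be specified.
# scrgadm -a -j sybase-rs -g sybase-rg \
-t SUNW.sybase \
-x Environment_File=/Sybase-home/SYBASE.sh \
-x Adaptive_Server_Name=asedb \
-x Adaptive_Server_Log_File=/Sybase-home/ASE-12_0/install/asedb.log \
-x Stop_File=/Sybase-home/sybase_stop_servers \
-x Connect_string=monitor_user/monitor_passwd \
-y resource_dependencies=sybase-hastp-rs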

Enable the resource and fault monitoring.


Note Sybase start logs print to the console when the Sybase servers start. If you do not want these

messages to print to the console, update the appropriate RUN files to redirect these messages to
another file.
# scswitch -Z -g resource-group
See Also

After you register and configure Sun Cluster HA for Sybase ASE, go to "How to Verify the Sun
Cluster HA for Sybase ASE Installation" on page 175.

Verifying the Sun Cluster HA for Sybase ASE Installation


Perform the following verification tests to ensure that you have correctly installed and configured
Sun Cluster HA for Sybase ASE.
These sanity checks ensure that all of the nodes that run Sun Cluster HA for Sybase ASE can start the
Sybase ASE data server. These checks also ensure that other nodes in the configuration can access the
Sybase ASE data server. Perform these sanity checks to isolate any problems with starting the Sybase
ASE software from Sun Cluster HA for Sybase ASE.

How to Verify the Sun Cluster HA for Sybase ASE Installation

Log in to the node that masters the Sybase ASE resource group.

Set the Sybase ASE environment variables.


The environment variables are the variables that you specify with the Environment_File extension
property. You typically name this file SYBASE.sh or SYBASE.csh.

Verify that the Sun Cluster HA for Sybase ASE resource is online.
# scstat -g

Inspect the Sybase ASE logs to determine the cause of any errors that have occurred.

Confirm that you can connect to the data server and execute the following test command.
# isql -S adaptive-server -U sa
isql> sp_help
isql> go
isql> quit

Kill the process for the Sybase ASE data server.


The Sun Cluster software restarts the process.

Switch the resource group that contains the Sybase ASE resource to another cluster member.
# scswitch -z -g resource-group -h node

Log in to the node that now contains the resource group.

Repeat Step 3 and Step 5.


Note Sybase ASE client connections cannot survive a Sun Cluster HA for Sybase ASE switchover. If a

switchover occurs, the existing client connections to Sybase ASE terminate, and clients must
reestablish their connections. After a switchover, the time that is required to replay the Sybase ASE
transaction log determines Sun Cluster HA for Sybase ASE recovery time.

Understanding Sun Cluster HA for Sybase ASE Logging and Security Issues

The following sections contain information about Sun Cluster HA for Sybase ASE logging and
security issues.

Sun Cluster HA for Sybase ASE Logging


Sun Cluster HA for Sybase ASE logs messages to the file message_log in the /opt/SUNWscsyb/log
directory. Although this file cannot exceed 512 Kbytes, Sun Cluster HA for Sybase ASE does not
delete old log files. The number of log files, therefore, can grow to a large number.
Sun Cluster HA for Sybase ASE writes all of the error messages in the syslog file. Sun Cluster HA for
Sybase ASE also logs fault monitor history to the file restart_history in the log directory. These
files can also grow to a large number.

As part of your regular file maintenance, check the following log files and remove files that you no
longer need.

syslog
message_log
restart_history

Important Security Issues


Sun Cluster HA for Sybase ASE requires that you embed the system administrator's password in a
stop file. The /opt/SUNWscsyb/bin directory contains the template for the stop file,
sybase_stop_servers. Sun Cluster HA for Sybase ASE uses this file to log in to the Sybase ASE
environment and to stop the Sybase ASE servers. Enable the sybase user to execute the stop file, but
protect the file from general access. Give read, write, and execute privileges to only the following
users.

sybase user
sybase group

Configuring Sun Cluster HA for Sybase ASE Extension Properties

This section describes how to configure Sun Cluster HA for Sybase ASE extension properties.
Typically, you use the command line scrgadm -x parameter=value to configure extension properties
when you create the Sybase ASE resources. You can also use the procedures that Sun Cluster data
services collection describes to configure them later.
See the r_properties(5) and the rg_properties(5) man pages for details on all of the Sun Cluster
properties.
Table C2 describes the extension properties that you can set for the Sybase ASE server resource. You
can update some extension properties dynamically. You can update others, however, only when you
create or disable a resource. The Tunable entries indicate when you can update each property.
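For example, because Probe_timeout and Connect_cycle are listed in Table C2 as tunable at any
time, they could be adjusted on an existing resource as follows. The resource name is hypothetical.
# scrgadm -c -j sybase-rs -x Probe_timeout=60 -x Connect_cycle=10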


TABLE C2 Sun Cluster HA for Sybase ASE Extension Properties

Name/Data Type             Description

Environment_File           File that contains all of the Sybase ASE environment variables. This file is
                           automatically created in the Sybase home directory.
                           Default: None
                           Range: Minimum=1
                           Tunable: When disabled

Adaptive_Server_Name       The name of the data server. Sun Cluster HA for Sybase ASE uses this property to
                           locate the RUN server in the $SYBASE/$ASE/install directory.
                           Default: None
                           Range: Minimum=1
                           Tunable: When disabled

Backup_Server_Name         The name of the backup server. Sun Cluster HA for Sybase ASE uses this property to
                           locate the RUN server in the $SYBASE/$ASE/install directory. If you do not set this
                           property, Sun Cluster HA for Sybase ASE will not manage the server.
                           Default: Null
                           Range: None
                           Tunable: When disabled

Monitor_Server_Name        The name of the monitor server. Sun Cluster HA for Sybase ASE uses this property to
                           locate the RUN server in the $SYBASE/$ASE/install directory. If you do not set this
                           property, Sun Cluster HA for Sybase ASE will not manage the server.
                           Default: Null
                           Range: None
                           Tunable: When disabled

Text_Server_Name           The name of the text server. The Sun Cluster HA for Sybase ASE data service uses this
                           property to locate the RUN server in the $SYBASE/$ASE/install directory. If you do
                           not set this property, the Sun Cluster HA for Sybase ASE data service will not manage
                           the server.
                           Default: Null
                           Range: None
                           Tunable: When disabled

Adaptive_Server_Log_File   The path to the log file for the adaptive server. Sun Cluster HA for Sybase ASE
                           continually reads this log file for error monitoring.
                           Default: None
                           Range: Minimum=1
                           Tunable: When disabled

Stop_File                  Sun Cluster HA for Sybase ASE uses this property during server stoppages. The file
                           contains the sa password. Protect the file from general access.
                           Default: None
                           Range: Minimum=1
                           Tunable: When disabled

Probe_timeout              Time-out value for the fault monitor probe.
                           Default: 30 seconds
                           Range: 1-99999 seconds
                           Tunable: Any time

Debug_level                Debug level for writing to the Sun Cluster HA for Sybase ASE log.
                           Default: 0
                           Range: 0-15
                           Tunable: Any time

Connect_string             String of format user/password. Sun Cluster HA for Sybase ASE uses this property for
                           database probes.
                           Default: None
                           Range: Minimum=1
                           Tunable: When disabled

Connect_cycle              Number of fault monitor probe cycles before Sun Cluster HA for Sybase ASE
                           establishes a new connection.
                           Default: 5
                           Range: 1-100
                           Tunable: Any time

Wait_for_online            Whether the start method waits for the database to come online before exiting.
                           Default: FALSE
                           Range: TRUE, FALSE
                           Tunable: Any time

Sun Cluster HA for Sybase ASE Fault Monitor


The Sun Cluster HA for Sybase ASE fault monitor queries the Sybase ASE server to determine server
health.
Note The Sun Cluster HA for Sybase ASE fault monitor only monitors the Adaptive server. The fault

monitor does not monitor auxiliary servers.


The fault monitor consists of the following processes.

a main fault monitor process


a database-client fault probe

The following sections describe the Sun Cluster HA for Sybase ASE fault monitor processes and the
extension properties that the fault monitor uses.

Main Fault Monitor Process


The fault monitor process diagnoses errors and checks statistics. The monitor labels an operation
successful if the following conditions occur.

The database is online.


The activity check returns no errors.
The test transaction returns no errors.

If an operation fails, the main process checks the action table for an action to perform and then
performs the predetermined action. If an operation fails, the main process can perform the following
actions, which execute external programs as separate processes in the background.
1. Restarts the resource on the current node.
2. Restarts the resource group on the current node.
3. Fails over the resource group to the next node on the resource group's nodelist.
The server fault monitor also scans the log file that the Adaptive_Server_Log_File extension
property specifies and acts to correct any errors that the scan identifies.

Database-Client Fault Probe


The database-client fault probe performs activity checks and test transactions. The extension
property Connect_string specifies an account that performs all of the database operations. The
extension property Probe_timeout sets the time-out value that the probe uses to determine the time
that has elapsed in a successful database probe.

Extension Properties
The fault monitor uses the following extension properties.

Thorough_probe_interval
Retry_count
Retry_interval
Probe_timeout
Connect_string
Connect_cycle
Adaptive_Server_Log_File

See Configuring Sun Cluster HA for Sybase ASE Extension Properties on page 177 for more
information about these extension properties.

A P P E N D I X   D

RSM Phase II: RSMRDT Driver Installation

This appendix describes the prerequisites and procedures for installation of the Remote Shared
Memory Reliable Datagram Transport (RSMRDT) driver. This appendix includes the following
sections:
Note The RSMRDT driver should not be installed until RSM with 9iRAC is supported. Contact your
Sun service provider for configuration support information.

Overview of the RSMRDT Driver on page 183


Restrictions on page 184
How to Install the SUNWscrdt Package on page 184
How to Uninstall the SUNWscrdt Package on page 184
How to Unload the RSMRDT Driver Manually on page 185

Overview of the RSMRDT Driver


Remote Shared Memory (RSM) is an interface on top of a memory-based interconnect. RSM
provides highly reliable remote memory operations and synchronous detection of communication
failure through barrier calls. RSMRDT consists of a driver that is built on top of RSMAPI and a
library that exports the RSMRDT-API interface. RSMRDT is dependent on Sun Cluster software and
RSM. The primary goal of the driver is to provide enhanced Oracle Parallel Server (OPS)
performance. A secondary goal is to enhance load-balancing and high-availability (HA) functions by
providing them directly inside the driver, making them available to the clients transparently.

Installing the RSMRDT Driver


The RSMRDT driver and library are installed with the SUNWscrdt package. You must successfully
install Sun Cluster software, the RSM package, SUNWrsmo, and SUNWrsmx before beginning RSMRDT
installation.

Restrictions
Use of the RSMRDT Driver is restricted to customers running an Oracle9i release 2 SCI
configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed
installation and configuration instructions. The SUNWscrdt package (RSMRDT driver package)
depends on the following packages:

SUNWrsmo RSMPI Operations Registration Module


SUNWrsmox RSMPI Operations Registration Module (64-bit)

The SUNWscrdt package also has a functional dependency on the following RSM packages:

SUNWrsm Remote Shared Memory


SUNWrsmx Remote Shared Memory (64-bit)

How to Install the SUNWscrdt Package

Verify that SUNWrsmo and SUNWrsmx are installed before completing this procedure.
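
For example, the following command is a quick way to confirm that the prerequisite packages are
present (a sketch; pkginfo reports an error for any package that is not installed):

# pkginfo SUNWrsmo SUNWrsmx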

Become superuser on the node on which you want to install the SUNWscrdt package.
Note You must repeat this procedure for each node in the cluster.

Install the SUNWscrdt package.


# pkgadd -d pathname SUNWscrdt

pathname    Specifies the path name of the directory that contains SUNWscrdt
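
For example, if the directory that contains the SUNWscrdt package was copied to /var/tmp/packages
(a hypothetical location), the command would be:

# pkgadd -d /var/tmp/packages SUNWscrdt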

How to Uninstall the SUNWscrdt Package

Verify that no applications are using the RSMRDT driver before performing this procedure.

Become superuser on the node from which you want to uninstall the SUNWscrdt package.
Note You must repeat this procedure for each node in the cluster.

Uninstall the SUNWscrdt package.


# pkgrm SUNWscrdt


How to Unload the RSMRDT Driver Manually


If the driver remains loaded in memory after completing How to Uninstall the SUNWscrdt Package
on page 184, perform the following procedure to unload the driver manually.

Start the adb tool.


# adb -kw

Set the kernel variable clifrsmrdt_modunload_ok to 1.


physmem ####
clifrsmrdt_modunload_ok/W 1

Exit adb by pressing Control-D.

Find the clif_rsmrdt and rsmrdt module IDs.


# modinfo | grep rdt

Unload the clif_rsmrdt module.


# modunload -i clif_rsmrdt_id
Note You must unload the clif_rsmrdt module before unloading rsmrdt. If modunload fails,

applications are probably still using the driver. Terminate the applications before running modunload
again.
clif_rsmrdt_id    Specifies the numeric ID for the module being unloaded.

Unload the rsmrdt module.


# modunload -i rsmrdt_id

rsmrdt_id    Specifies the numeric ID for the module being unloaded.

Verify that the module was unloaded successfully.


# modinfo | grep rdt

Example D1

Unloading the RSMRDT Driver


The following example shows the console output after the RSMRDT driver is manually unloaded.
# adb -kw
physmem fc54
clifrsmrdt_modunload_ok/W 1
clifrsmrdt_modunload_ok: 0x0 = 0x1
^D


# modinfo | grep rsm


88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
93 f08e07d4 b95 - 1 clif_rsmrdt (CLUSTER-RSMRDT Interface module)
94 f0d3d000 13db0 194 1 rsmrdt (Reliable Datagram Transport dri)
# modunload -i 93
# modunload -i 94
# modinfo | grep rsm
88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
#

A P P E N D I X   E

Installing and Configuring Sun Cluster HA for SAP

This appendix contains the procedures on how to install and configure Sun Cluster HA for SAP.
This appendix contains the following procedures.

How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
Application Resource on page 195
How to Prepare the Nodes on page 196
How to Install SAP and the Database on page 197
How to Install an SAP Scalable Application Server on page 197
How to Verify an SAP Scalable Application Server on page 204
How to Enable Failover SAP Instances to Run in a Cluster on page 200
How to Verify SAP and the Database Installation with Central Instance on page 202
How to Verify an SAP Failover Application Server on page 203
How to Install the Sun Cluster HA for SAP Packages on page 204
How to Register and Configure Sun Cluster HA for SAP with Central Instance on page 211
How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page 212
How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page 213
How to Set Up a Lock File for Central Instance or the Failover Application Server on page 215
How to Set Up a Lock File for Scalable Application Server on page 215
How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance
on page 216
How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover Data
Service on page 217
How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data
Service on page 217


Sun Cluster HA for SAP Overview


Use the information in this section to understand how Sun Cluster HA for SAP makes SAP highly
available.
For conceptual information on failover and scalable services, see the Sun Cluster concepts
documentation.
Sun Cluster HA for SAP provides fault monitoring and automatic failover for the SAP application to
eliminate single points of failure in an SAP system. The following table lists the data services that best
protect SAP components in a Sun Cluster configuration. You can configure Sun Cluster HA for SAP
as a failover application or a scalable application.
TABLE E1 Protection of SAP Components

SAP database
  Protected by: Sun Cluster HA for Oracle
  Use Oracle as your database.

SAP central instance
  Protected by: Sun Cluster HA for SAP
  The resource type is SUNW.sap_ci or SUNW.sap_ci_v2.

SAP application server
  Protected by: Sun Cluster HA for SAP
  The resource type is SUNW.sap_as or SUNW.sap_as_v2.

NFS file system
  Protected by: Sun Cluster HA for NFS

Use the scinstall(1M) command to install Sun Cluster HA for SAP. Sun Cluster HA for SAP
requires a functioning cluster with the initial cluster framework already installed. See the Sun Cluster
software installation documentation for details on initial installation of clusters and data service
software. Register Sun Cluster HA for SAP after you successfully install the basic components of the
Sun Cluster and SAP software.

Installing and Configuring Sun Cluster HA for SAP


Table E2 lists the tasks for installing and configuring Sun Cluster HA for SAP. Perform these tasks in
the order that they are listed.


TABLE E2 Task Map: Installing and Configuring Sun Cluster HA for SAP

Plan the SAP installation
  Chapter 1 of Sun Cluster 3.0-3.1 Release Notes Supplement
  Planning the Sun Cluster HA for SAP Installation and Configuration on page 190

Upgrade Sun Cluster HA for SAP
  How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
  Application Resource on page 195

Prepare the nodes and disks
  How to Prepare the Nodes on page 196

Install SAP, SAP failover application server, and the database; configure the Sun Cluster HA for
DBMS; verify the SAP installation
  How to Install SAP and the Database on page 197
  How to Enable Failover SAP Instances to Run in a Cluster on page 200
  Configuring Sun Cluster HA for DBMS on page 201
  How to Verify SAP and the Database Installation with Central Instance on page 202
  How to Verify an SAP Failover Application Server on page 203

or

Install SAP, SAP scalable application server, and the database; configure the Sun Cluster HA for
DBMS; verify the SAP installation
  How to Install SAP and the Database on page 197
  How to Install an SAP Scalable Application Server on page 197
  Configuring Sun Cluster HA for DBMS on page 201
  How to Verify an SAP Scalable Application Server on page 204

Install Sun Cluster HA for SAP packages
  How to Install the Sun Cluster HA for SAP Packages on page 204

Register and configure Sun Cluster HA for SAP as a failover data service
  How to Register and Configure Sun Cluster HA for SAP with Central Instance on page 211
  How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page 212

or

Register and configure Sun Cluster HA for SAP as a scalable data service
  How to Register and Configure Sun Cluster HA for SAP with Central Instance on page 211
  How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page 213

Set up a lock file
  Setting Up a Lock File on page 214

Verify Sun Cluster HA for SAP installation and configuration
  How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance on page 216
  How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover Data Service on page 217
  How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data Service on page 217

Understand Sun Cluster HA for SAP fault monitor
  Understanding Sun Cluster HA for SAP Fault Monitor on page 218

Planning the Sun Cluster HA for SAP Installation and Configuration


This section contains the information you need to plan your Sun Cluster HA for SAP installation and
configuration.

Configuration Restrictions

Caution Your data service configuration might not be supported if you do not observe these
restrictions.

Use the restrictions in this section to plan the installation and configuration of Sun Cluster HA for
SAP. This section provides a list of software and hardware configuration restrictions that apply to
Sun Cluster HA for SAP.
For restrictions that apply to all data services, see the Sun Cluster release notes documentation.

Limit node names as outlined in the SAP installation guide - This limitation is an SAP
software restriction.

Configuration Requirements

Caution Your data service configuration might not be supported if you do not adhere to these
requirements.


Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for
SAP. These requirements apply to Sun Cluster HA for SAP only. You must meet these requirements
before you proceed with your Sun Cluster HA for SAP installation and configuration.
For requirements that apply to all data services, see Configuring and Administering Sun Cluster
Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

After you create all of the file systems for the database and for SAP software, create the mount
points, and put the mount points in the /etc/vfstab file on all of the cluster nodes - See the
SAP installation guides, Installation of the SAP R/3 on UNIX and R/3 Installation on UNIX-OS
Dependencies, for details on how to set up the database and SAP file systems.

Create the required groups and users on all of the cluster nodes - See the SAP installation
guides, Installation of the SAP R/3 on UNIX and R/3 Installation on UNIX-OS Dependencies, for
details on how to create SAP groups and users.

Configure Sun Cluster HA for NFS on the cluster that hosts the central instance if you plan to
install some external SAP application servers - See Overview of the Installation and
Configuration Process for Sun Cluster HA for NFS in Sun Cluster Data Service for NFS Guide for
Solaris OS for details on how to configure Sun Cluster HA for NFS.

Install application servers on either the same cluster that hosts the central instance or on a
separate cluster - If you install and configure any application server outside of the cluster
environment, Sun Cluster HA for SAP does not perform fault monitoring and does not
automatically restart or fail over those application servers. You must manually start and shut
down application servers that you install and configure outside of the cluster environment.

Use an SAP software version with automatic enqueue reconnect mechanism capability - Sun
Cluster HA for SAP relies on this capability. SAP 4.0 software with patch information and later
releases should have automatic enqueue reconnect mechanism capability.

Standard Data Service Configurations


Use the standard configurations in this section to plan the installation and configuration of Sun
Cluster HA for SAP. Sun Cluster HA for SAP supports the standard configurations in this section.
Sun Cluster HA for SAP might support additional configurations. However, you must contact your
Enterprise Services representative for information on additional configurations.
FIGURE E1 Four-Node Cluster with Central Instance, Application Servers, and Database (figure shows
Cluster 1 with Node 1 through Node 4 hosting DB, CI, AS1, and AS2)

Appendix E Installing and Conguring Sun Cluster HA for SAP

191

Composed March 29, 2006


Planning the Sun Cluster HA for SAP Installation and Conguration

FIGURE E2 Two-Node Cluster with Central Instance, NFS, and Non-HA External Application Servers
(figure shows Cluster 1 with Node 1 and Node 2 hosting CI and NFS, and application servers AS1, AS2,
and AS3)

Note The configuration in Figure E2 was a common configuration under previous Sun Cluster
releases. To use the Sun Cluster software to the fullest extent, configure SAP as shown in Figure E1
or Figure E3.

FIGURE E3 Two-Node Cluster With Central Instance and Development Node (figure shows Cluster 1
with Node 1 and Node 2 hosting CI and DEV)

Configuration Considerations

Use the information in this section to plan the installation and configuration of Sun Cluster HA for
SAP. The information in this section encourages you to think about the impact your decisions have
on the installation and configuration of Sun Cluster HA for SAP.


Failover and Scalable Applications

Retrieve the latest patch for the sapstart executable - This patch enables Sun Cluster HA for
SAP users to configure a lock file. For details on the benefits of this patch in your cluster
environment, see Setting Up a Lock File on page 214.

Read all of the related SAP online service-system notes for the SAP software release and
database that you are installing on your Sun Cluster configuration - Identify any known
installation problems and fixes.

Consult SAP software documentation for memory and swap recommendations - SAP
software uses a large amount of memory and swap space.

Generously estimate the total possible load on nodes that might host the central instance, the
database instance, and the application server, if you have an internal application server - This
consideration is especially important if you configure the cluster to ensure that the central
instance, database instance, and application server will all exist on one node if failover occurs.

Scalable Applications

Ensure that the SAPSIDadm home directory resides on a cluster file system - This consideration
enables you to maintain only one set of scripts for all application server instances that run on all
nodes. However, if you have some application servers that need to be configured differently (for
example, application servers with different profiles), install those application servers with
different instance numbers, and then configure them in a separate resource group.

Install the application server's directory locally on each node instead of on a cluster file
system - This consideration ensures that another application server does not overwrite the
log/data/work/sec directory for the application server.

Use the same instance number when you create all application server instances on multiple
nodes - This consideration ensures ease of maintenance and ease of administration because you
will only need to use one set of commands to maintain all application servers on multiple nodes.

Place the application servers into multiple resource groups if you want to use the RGOffload
resource type to shut down one or more application servers when a higher priority resource is
failing over - This consideration provides flexibility and availability if you want to use the
RGOffload resource type to offload one or more application servers for the database. The value
you gain from this consideration supersedes the ease of use you gain from placing the application
servers into one large group. See Freeing Node Resources by Offloading Noncritical Resource
Groups in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more
information on using the RGOffload resource type.

Create separate scalable application server instances for each SAP logon group.

Create an SAP lock file on the local instance directory - This consideration prevents a system
administrator from manually starting an application instance that is already running.


Configuration Planning Questions


Use the questions in this section to plan the installation and configuration of Sun Cluster HA for
SAP. Insert the answers to these questions into the data service worksheets in the Sun Cluster release
notes documentation. See Configuration Considerations on page 192 for information that might
apply to these questions.

What resource groups will you use for network addresses and application resources and the
dependencies between them?

What is the logical hostname (for failover services) for clients that will access the data service?

Where will the system configuration files reside?


See Determining the Location of the Application Binaries on page 3 of the Sun Cluster 3.0-3.1
Release Notes Supplement for the advantages and disadvantages of placing the SAP liveCache
binaries on the local file system as opposed to the cluster file system.

Packages and Support


Table E3 and Table E4 list the packages that Sun Cluster HA for SAP supports.

TABLE E3 Sun Cluster HA for SAP packages from Sun Cluster 3.0 7/01

SUNW.sap_ci
  Added support for failover central instance.

SUNW.sap_as
  Added support for failover application servers.

The *_v2 resource types are the latest version of the resource types (RT) for Sun Cluster HA for SAP.
The *_v2 resource types are a superset of the original RTs. Whenever possible, use the latest RTs
provided.

TABLE E4 Sun Cluster HA for SAP package from Sun Cluster 3.0 12/01

SUNW.sap_ci
  Same as Sun Cluster 3.0 7/01. See Table E3.

SUNW.sap_as
  Same as Sun Cluster 3.0 7/01. See Table E3.

SUNW.sap_ci_v2
  Added the Network_resources_used resource property to the Resource Type
  Registration (RTR) file.
  Retained support for failover central instance.

SUNW.sap_as_v2
  Added the Network_resources_used resource property to the RTR file.
  Added support for scalable application servers.
  Retained support for failover application servers.

Upgrading Sun Cluster HA for SAP


As Table E3 and Table E4 illustrate, the Sun Cluster HA for SAP package from Sun Cluster 3.0 7/01
does not support a scalable application server and the Network_resources_used resource property.
Therefore, you have the following upgrade options.

Retain (do not upgrade) the existing SUNW.sap_ci and SUNW.sap_as resource types. Choose this
option if any of the following statements apply to you.

You cannot schedule down time.


You do not want the Network_resources_used resource property.
You do not want to configure a scalable application server.

Upgrade a resource type.


See How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
Application Resource on page 195 for the procedure on how to upgrade a resource type.

Convert a failover application resource to a scalable application resource.


See How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
Application Resource on page 195 for the procedure on how to convert a failover application
resource to a scalable application resource.

How to Upgrade a Resource Type or Convert a Failover Application Resource to a Scalable
Application Resource

Use this procedure to upgrade a resource type or to convert a failover application server resource to a
scalable application server resource. This procedure requires that you schedule down time. (A
command sketch follows the steps of this procedure.)

Disable the existing resource.

Delete the existing resource from the resource group.

Delete the existing resource type if no other resource uses it.

Register the new resource type.

Which task are you performing?

If you are upgrading the resource type for the central instance, skip to Step 7.

Appendix E Installing and Conguring Sun Cluster HA for SAP

195

Composed March 29, 2006


Preparing the Nodes and Disks

If you are converting a failover application server resource to a scalable application server
resource, proceed to Step 6.

Create the new application server resource group.

Add the scalable application resource to the resource group.
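
The following is a minimal command sketch of the first four steps of this procedure. The resource
name sap-as-resource is a placeholder, and the sketch assumes an upgrade from SUNW.sap_as to
SUNW.sap_as_v2; substitute the names and resource types that apply to your configuration.

# scswitch -n -j sap-as-resource
# scrgadm -r -j sap-as-resource
# scrgadm -r -t SUNW.sap_as
# scrgadm -a -t SUNW.sap_as_v2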

See Also

Go to How to Prepare the Nodes on page 196.

Preparing the Nodes and Disks


This section contains the procedures you need to prepare the nodes and disks.

How to Prepare the Nodes


Use this procedure to prepare for the installation and configuration of SAP.

Become superuser on all of the nodes.

Configure the /etc/nsswitch.conf file so that Sun Cluster HA for SAP starts and stops correctly in the
event of a switchover or a failover.
On each node that can master the logical host that runs Sun Cluster HA for SAP, include one of the
following entries for group in the /etc/nsswitch.conf file.
group: files
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus

Sun Cluster HA for SAP uses the su user command to start and probe SAP. The network information
name service might become unavailable when a cluster node's public network fails. When you add
one of the entries for group in the /etc/nsswitch.conf file, you ensure that the su(1M) command
does not refer to the NIS/NIS+ name services if the network information name service is unavailable.
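
As a quick check (a sketch; the output shown is only an example of one of the three valid entries),
confirm the group entry on each node:

# grep '^group:' /etc/nsswitch.conf
group: files [NOTFOUND=return] nis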
See Also

Go to How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page
213.


Installing and Configuring SAP and Database


This section contains the procedures you need to install and configure SAP and the database.

How to Install SAP and the Database


Use this procedure to install SAP and the database.

Become superuser on one of the nodes in the cluster where you are installing the central instance.

Install SAP binaries on a cluster file system.


Note Before you install SAP software on a cluster file system, use the scstat(1M) command to

verify that the Sun Cluster software is fully operational.


a. For all of the SAP-required kernel parameter changes, edit the /etc/system file on all of the
cluster nodes that will run the SAP application. (A sketch of typical entries follows this procedure.)
After you edit the /etc/system file, reboot each node. See the SAP document R/3 Installation on
UNIX-OS Dependencies for details on kernel parameter changes.
b. See the SAP document Installation of the SAP R/3 on UNIX for details on how to install the
central instance, the database, and the application server instances.
See How to Install an SAP Scalable Application Server on page 197 for the procedure on how to
install a scalable application server in a Sun Cluster environment.
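
The following /etc/system entries are a hypothetical illustration of the kernel parameter edit in
step a above. The parameter names are standard Solaris IPC tunables, but the exact parameters and
values for your installation must come from the SAP document R/3 Installation on UNIX-OS
Dependencies.

* SAP-related IPC settings (example values only)
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=8192
set msgsys:msginfo_msgmni=256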
See Also

Go to How to Enable Failover SAP Instances to Run in a Cluster on page 200 or How to Install an
SAP Scalable Application Server on page 197.

How to Install an SAP Scalable Application Server


Use this procedure to install scalable application server instances. This procedure assumes that you
installed the central instance and the database. This procedure includes additional steps for SAP 6.10
and SAP 6.20 users to ensure that Sun Cluster HA for SAP can manage and bring online SAP 6.10
and SAP 6.20 services. SAP 6.10 and SAP 6.20 create one startsap script and one stopsap script.
Other SAP versions create one of each of these scripts for each service you create. This difference
accounts for the additional steps for SAP 6.10 and SAP 6.20 users.
Tip The following file system layout ensures ease of use and prevents data from being overwritten.

Cluster File Systems

/sapmnt/SID
/usr/sap/SID -> all subdirectories except the app-instance subdirectory
/usr/sap/SID/home -> the SAPSIDadm home directory
/usr/sap/trans

Local File Systems


/usr/sap/local/SID/app-instance

Create all SAP directories on cluster file systems.

Ensure that the central instance and the database can fail over.

Set up the lock file on cluster file system for the central instance to prevent a multiple startup
from a different node.
For the procedure on how to set up a lock file on the central instance, see How to Set Up a Lock
File for Central Instance or the Failover Application Server on page 215.

Ensure that all application servers can use the SAP binaries on a cluster file system.

Install the central instance and the database on a cluster file system.


See the SAP document Installation of the SAP R/3 on UNIX for details on how to install the central
instance and the database.

On all nodes that will host the scalable application server, create a local directory for the
data/log/sec/work directories and the log files for starting and stopping the application server.
Create a local directory for each new application server.
Example:
# mkdir -p /usr/sap/local/SID/D03
Caution You must perform this step. If you do not perform this step, you will inadvertently install a

different application server instance on a cluster file system and the two application servers will
overwrite each other.

Set up a link to point to the local application server directory from a cluster file system, so the
application server and the startup log file and the stop log file will be installed on the local file
system.
Example:
# ln -s /usr/sap/local/SID/D03 /usr/sap/SID/D03

Install the application server.

Are you using SAP 6.10 or SAP 6.20?

If no, skip to Step 11.

If yes, proceed to Step 7.

Become user sapsidadm.

Make a copy of the startsap script and the stopsap script, and save these files in the SAPSIDadm
home directory. The filenames that you choose specify this instance.
# cp /usr/sap/SID/SYS/exe/run/startsap \
$SAPSID_HOME/startsap_instance-number
# cp /usr/sap/SID/SYS/exe/run/stopsap \
$SAPSID_HOME/stopsap_instance-number

Make backup copies of the following files because you will modify them. In the SAP profile directory,
modify all the filenames for this instance. The filenames that you choose must be specific to this
instance, and they must follow the same naming convention you chose in Step 8.
# mv SAPSID_Service-StringSystem-Number_physical-hostname \
SAPSID_Service-StringSystem_instance-number
# mv START_Service-StringSystem-Number_physical-hostname \
START_Service-StringSystem_instance-number

10. Modify the contents of the files you created in Step 9 to replace any reference to the physical host
with the instance number.
Caution It is important that you make your updates consistent so that you can start and stop this

application server instance from all the nodes that will run this scalable application server. For
example, if you make these changes for SAP instance number 02, then use 02 where this instance
number appears. If you do not use a consistent naming convention, you will be unable to start and stop
this application server instance from all the nodes that will run this scalable application server.

11. Edit the start script and the stop script so that the startup log file and the stop log file will be node
specific under the home directories of users sapsidadm and orasapsid.
Example:
# vi startsap_D03

Before:
LOGFILE=$R3S_LOGDIR/`basename $0`.log

After:
LOGFILE=$R3S_LOGDIR/`basename $0`_`uname -n`.log


12. Copy the application server (with the same SAPSID and the same instance number) on all nodes that
run the scalable application server.
The nodes that run the scalable application server are in the scalable application server resource
group nodelist.

13. Ensure that you can start up and stop the application server from each node, and verify that the log
files are in the correct location.

14. Create the SAP logon group if you use a logon group.

See Also

Go to Configuring Sun Cluster HA for DBMS on page 201.

How to Enable Failover SAP Instances to Run in a Cluster

During SAP installation, the SAP software creates files and shell scripts on the server on which you
installed the SAP instance. These files and scripts use physical server names. To run the SAP software
with Sun Cluster software, replace references to a physical server with references to a network
resource (logical hostname). Use this procedure to enable SAP to run in a cluster.

Make backup copies of the files you will modify in Step 5 through Step 8.

Log in to the node on which you installed the SAP software.

Shut down the SAP instances (central instance and application server instances) and the database.

Are you using SAP 6.10 or SAP 6.20?

If no, skip to Step 6.


If yes, proceed to Step 5.

Make a copy of the startsap script and the stopsap script, and save these files in the SAPSIDadm
home directory. The filenames that you choose must specify this instance.
# cp /usr/sap/SID/SYS/exe/run/startsap \
$SAPSID_HOME/startsap_logical-hostname_instance-number
# cp /usr/sap/SID/SYS/exe/run/stopsap \
$SAPSID_HOME/stopsap_logical-hostname_instance-number

Become user sapsidadm, and then perform the following tasks.

In the SAPSIDadm home directory, modify all of the file names that reference a physical server
name.

In the SAPSIDadm home directory, modify all of the file contents, except log file contents, that
reference a physical server name.


In the SAP profile directory, modify all of the file names that reference a physical server name.

As user sapsidadm, add entries for the parameter SAPLOCALHOST.


Add this entry to the SAPSID_Service-StringSystem-Number_logical-hostname profile file under the
/sapmnt/SAPSID/profile directory.
For Central Instance:
SAPLOCALHOST=ci-logical-hostname

This entry enables the external application server to locate the central instance by using the network
resource (logical hostname).
For Application Server:
SAPLOCALHOST=as-logical-hostname

8. Become user orasapsid, and then perform the following tasks.

In the oraSAPSID home directory, modify all of the file names that reference a physical server
name.

In the oraSAPSID home directory, modify all of the file contents, except log file contents, that
reference a physical server name.

Ensure that the /usr/sap/tmp directory owned by user sapsidadm and group sapsys exists on all
nodes that can master the failover SAP instance.

See Also

Go to Configuring Sun Cluster HA for DBMS on page 201.

Configuring Sun Cluster HA for DBMS


SAP supports various databases. See the appropriate chapter of this book for details on how to
configure the resource type, resource group, and resource for your highly available database. For
example, see Overview of the Installation and Configuration Process for Sun Cluster HA for Oracle
in Sun Cluster Data Service for Oracle Guide for Solaris OS for more information if you plan to use
Oracle with SAP.
Additionally, see the appropriate chapter of this book and the appropriate chapter of your database
installation book for details on other resource types to configure with your database. This book
includes details on how to configure other resource types for Oracle databases. For instance, set up
the SUNW.HAStoragePlus resource type if you use Oracle. See the procedure Synchronizing the
Startups Between Resource Groups and Disk Device Groups in Sun Cluster Data Services Planning
and Administration Guide for Solaris OS for more information.


Where to Go From Here


Go to How to Verify SAP and the Database Installation with Central Instance on page 202 or How
to Verify an SAP Scalable Application Server on page 204.

Verifying the SAP Installation


This section contains the procedures you need to verify the SAP installation.

How to Verify SAP and the Database Installation with Central Instance

Use this procedure to verify SAP central instance. Perform the following steps on all of the potential
nodes on which the central instance can run.

Create the failover resource group to hold the network and central instance resources.
# scrgadm -a -g sap-ci-resource-group [-h nodelist]
Note Use the -h option to the scrgadm(1M) command to select the set of nodes on which the SAP

central instance can run.

Verify that you have added to your name service database all of the network resources that you use.

Add a network resource (logical hostname) to the failover resource group.


# scrgadm -a -L -g sap-ci-resource-group \
-l ci-logical-hostname [-n netiflist]

Enable the resource group.


Run the scswitch(1M) command to move the resource group into a managed state and bring the
resource group online.
# scswitch -Z -g sap-ci-resource-group

Log in to the cluster member that hosts the central instance resource group.

Ensure that the database is running.

Manually start the central instance.

Start the SAP GUI using the logical hostname, and verify that SAP initializes correctly.
The default dispatcher port is 3200.
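As an optional quick check (a sketch), you can verify from the node that hosts the resource group
that the dispatcher is listening on its port; for instance number 00 the port is 3200:
# netstat -an | grep 3200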


9. Manually stop the central instance.

10. Switch this resource group to another cluster member that can host the central instance.
# scswitch -z -h node -g sap-ci-resource-group

11. Repeat Step 5 through Step 9 until you verify startup and shutdown of the central instance on each
cluster node that can host the central instance.

See Also

Go to How to Verify an SAP Failover Application Server on page 203.

How to Verify an SAP Failover Application Server


Use this procedure to verify SAP and the database installation for the failover application server.
Perform the following steps on all of the potential nodes on which the failover application server can
run.

Create the failover resource group to hold the network and application server resources.
# scrgadm -a -g sap-as-fo-resource-group
Note Use the -h option to the scrgadm command to select the set of nodes on which the SAP

application server can run.


# scrgadm -a -g sap-as-fo-resource-group \
[-h nodelist]

Verify that you added to your name service database all of the network resources that you use.

Add a network resource (logical hostname) to the failover resource group.


# scrgadm -a -L -g sap-as-fo-resource-group \
-l as-fo-logical-hostname [-n netiflist]

Enable the resource group.


Run the scswitch(1M) command to move the resource group into a managed state and bring the
resource group online.
# scswitch -Z -g sap-as-fo-resource-group

Log in to the cluster member that hosts the application server resource group.

Manually start the application server.

Start the SAP GUI using the logical hostname, and verify that SAP initializes correctly.

Manually stop the application server.

Switch this resource group to another cluster member that can host the application server.
# scswitch -z -h node -g sap-as-fo-resource-group

10. Repeat Step 5 through Step 7 until you verify startup and shutdown of the application server on each
cluster node that can host the application server.

See Also

Go to How to Install the Sun Cluster HA for SAP Packages on page 204.

How to Verify an SAP Scalable Application Server


If you installed scalable application server instances in How to Install an SAP Scalable Application
Server on page 197, you verified the installation of an SAP scalable application server in Step 13 of
How to Install an SAP Scalable Application Server on page 197.

Where to Go From Here


Go to How to Install the Sun Cluster HA for SAP Packages on page 204.

Installing the Sun Cluster HA for SAP Packages


This section contains the procedure you need to install the Sun Cluster HA for SAP packages.

How to Install the Sun Cluster HA for SAP Packages


Use this procedure to install the Sun Cluster HA for SAP packages. You need the Sun Cluster 3.0 5/02
Agents CD-ROM to perform this procedure. This procedure assumes that you did not install the data
service packages during your initial Sun Cluster installation.

Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.

Run the scinstall utility with no options.


This step starts the scinstall utility in interactive mode.

Choose the Add Support for New Data Service to This Cluster Node menu option.
The scinstall utility prompts you for additional information.

Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD-ROM as the data services cd.


Specify the data service to install.


The scinstall utility lists the data service that you selected and asks you to confirm your choice.

Exit the scinstall utility.

Unload the CD-ROM from the drive.

See Also

Go to Registering and Configuring Sun Cluster HA for SAP on page 205.

Registering and Configuring Sun Cluster HA for SAP


This section contains the procedures you need to configure Sun Cluster HA for SAP.

Sun Cluster HA for SAP Extension Properties


Use the extension properties in Table E5 and Table E6 to create your resources. Use the command
line scrgadm -x parameter=value to configure extension properties when you create your resource.
Use the procedure in Chapter 2, Administering Data Service Resources, in Sun Cluster Data
Services Planning and Administration Guide for Solaris OS to configure the extension properties if
you have already created your resources. You can update some extension properties dynamically. You
can update others, however, only when you create or disable a resource. The Tunable entries indicate
when you can update each property. See Appendix A for details on all Sun Cluster properties.
TABLE E5 Sun Cluster HA for SAP Extension Properties for the Central Instance

SAP Configuration

  SAPSID
    SAP system ID or SID.
    Default: None
    Tunable: When disabled

  Ci_instance_id
    Two-digit SAP system number.
    Default: 00
    Tunable: When disabled

  Ci_services_string
    String of central instance services.
    Default: DVEBMGS
    Tunable: When disabled

Starting SAP

  Ci_start_retry_interval
    The interval in seconds to wait between attempting to connect to the database before
    starting the central instance.
    Default: 30
    Tunable: When disabled

  Ci_startup_script
    Name of the SAP startup script for this instance in your SIDadm home directory.
    Default: None
    Tunable: When disabled

Stopping SAP

  Stop_sap_pct
    Percentage of stop-timeout variables that are used to stop SAP processes. The SAP
    shutdown script is used to stop processes before calling Process Monitor Facility (PMF)
    to terminate and then kill the processes.
    Default: 95
    Tunable: When disabled

  Ci_shutdown_script
    Name of the SAP shutdown script for this instance in your SIDadm home directory.
    Default: None
    Tunable: When disabled

Probe

  Message_server_name
    The name of the SAP Message Server.
    Default: sapmsSAPSID
    Tunable: When disabled

  Lgtst_ms_with_logicalhostname
    How to check the SAP Message Server with the SAP lgtst utility. The lgtst utility
    requires a hostname (IP address) as the location for the SAP Message Server. This
    hostname can be either a Sun Cluster logical hostname or a local host (loopback) name.
    If you set this resource property to TRUE, use a logical hostname. Otherwise, use a
    localhost name.
    Default: TRUE
    Tunable: Any time

  Check_ms_retry
    Maximum number of times the SAP Message Server check fails before a total failure is
    reported and the Resource Group Manager (RGM) starts.
    Default: 2
    Tunable: When disabled

  Probe_timeout
    Timeout value in seconds for the probes.
    Default: 120
    Tunable: Any time

  Monitor_retry_count
    Number of PMF restarts that are allowed for the fault monitor.
    Default: 4
    Tunable: Any time

  Monitor_retry_interval
    Time interval in minutes for the fault monitor restarts.
    Default: 2
    Tunable: Any time

Development System

  Shutdown_dev
    Whether the RGM should shut down the development system before starting up the
    central instance.
    Default: FALSE
    Tunable: When disabled

  Dev_sapsid
    SAP System Name for the development system. If you set Shutdown_dev to TRUE, Sun
    Cluster HA for SAP requires this property.
    Default: None
    Tunable: When disabled

  Dev_shutdown_script
    Script that is used to shut down the development system. If you set Shutdown_dev to
    TRUE, Sun Cluster HA for SAP requires this property.
    Default: None
    Tunable: When disabled

  Dev_stop_pct
    Percentage of startup timeouts Sun Cluster HA for SAP uses to shut down the
    development system before starting the central instance.
    Default: 20
    Tunable: When disabled

TABLE E6 Sun Cluster HA for SAP Extension Properties for the Application Servers

SAP Configuration

  SAPSID
    SAP system name or SAPSID for the application server.
    Default: None
    Tunable: When disabled

  As_instance_id
    Two-digit SAP system number for the application server.
    Default: None
    Tunable: When disabled

  As_services_string
    String of application server services.
    Default: D
    Tunable: When disabled

Starting SAP

  As_db_retry_interval
    The interval in seconds to wait between attempting to connect to the database and
    starting the application server.
    Default: 30
    Tunable: When disabled

  As_startup_script
    Name of the SAP startup script for the application server.
    Default: None
    Tunable: When disabled

Stopping SAP

  Stop_sap_pct
    Percentage of stop-timeout variables that are used to stop SAP processes. The SAP
    shutdown script is used to stop processes before calling Process Monitor Facility (PMF)
    to terminate and then kill the processes.
    Default: 95
    Tunable: When disabled

  As_shutdown_script
    Name of the SAP shutdown script for the application server.
    Default: None
    Tunable: When disabled

Probe

  Probe_timeout
    Timeout value in seconds for the probes.
    Default: 60
    Tunable: Any time

  Monitor_retry_count
    Number of PMF restarts that the probe allows for the fault monitor.
    Default: 4
    Tunable: Any time

  Monitor_retry_interval
    Time interval in minutes for fault monitor restarts.
    Default: 2
    Tunable: Any time

How to Register and Configure Sun Cluster HA for SAP with Central Instance

Use this procedure to configure Sun Cluster HA for SAP with central instance.

Become superuser on one of the nodes in the cluster that hosts the central instance.

Register the resource type for the central instance.


# scrgadm -a -t SUNW.sap_ci | SUNW.sap_ci_v2

Add the HAStoragePlus resource to the central instance resource group.


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j ci-storage-resource \
-g sap-ci-resource-group \
-t SUNW.HAStoragePlus -x filesystemmountpoints=mountpoint, ...

For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local
File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
4. Enable the central instance storage resource.


# scswitch -e -j ci-storage-resource

Create SAP central instance resources in this failover resource group.


# scrgadm -a -j sap-ci-resource \
-g sap-ci-resource-group \
-t SUNW.sap_ci | SUNW.sap_ci_v2 \
-x SAPSID=SAPSID -x Ci_instance_id=ci-instance-id \
-x Ci_startup_script=ci-startup-script \
-x Ci_shutdown_script=ci-shutdown-script \
-y resource_dependencies=ci-storage-resource

See Sun Cluster Data Services Planning and Administration Guide for Solaris OS for a list of extension
properties.
6. Enable the failover resource group that now includes the SAP central instance resource.
# scswitch -Z -g sap-ci-resource-group

If you configure the central instance resource to shut down a development system, you will receive
the following console message.
ERROR : SAPSYSTEMNAME not set
Please check environment and restart

This message displays when the central instance starts on a node that does not have the development
system installed and that is not meant to run the central instance. SAP renders this message, and you
can safely ignore it.
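To confirm that the resource group and the central instance resource came online, you can check the
cluster status (a sketch; scstat -g shows resource group and resource states):
# scstat -g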

See Also

Go to How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page
212 or How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page
213.

How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service

Use this procedure to configure Sun Cluster HA for SAP as a failover data service.

Become superuser on one of the nodes in the cluster that hosts the application server.

Register the resource type for the failover application server.


# scrgadm -a -t SUNW.sap_as | SUNW.sap_as_v2

Add the HAStoragePlus resource to the failover application server resource group.
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j sap-as-storage-resource -g sap-as-fo-resource-group \
-t SUNW.HAStoragePlus \
-x filesystemmountpoints=mountpoint, ...

For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local
File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
4. Enable the failover application server storage resource.


# scswitch -e -j sap-as-storage-resource

Create SAP application server resources in their failover resource group.


# scrgadm -a -j sap-as-resource \
-g sap-as-fo-resource-group \
-t SUNW.sap_as | SUNW.sap_as_v2 \
-x SAPSID=SAPSID -x As_instance_id=as-instance-id \
-x As_startup_script=as-startup-script \
-x As_shutdown_script=as-shutdown-script \
-y resource_dependencies=sap-as-storage-resource

See Sun Cluster HA for SAP Extension Properties on page 205 for a list of extension properties.
6. Enable the failover resource group that now includes the SAP application server resource.
# scswitch -Z -g sap-as-fo-resource-group

See Also

Go to How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance
on page 216.


How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service

Use this procedure to configure Sun Cluster HA for SAP as a scalable data service.

Become superuser on one of the nodes in the cluster that hosts the application server.

Create a scalable resource group for the application server.


# scrgadm -a -g sap-as-sa-appinstanceid-resource-group \
-y Maximum_primaries=value \
-y Desired_primaries=value
Note Sun Cluster HA for SAP as a scalable data service does not use shared addresses because the

SAP logon group performs the load balancing of the application server.

Note If you are using the SUNW.RGOffload resource type to offload an application server within this

scalable application server resource group, then set Desired_primaries=0. See Freeing Node
Resources by Offloading Noncritical Resource Groups in Sun Cluster Data Services Planning and
Administration Guide for Solaris OS for more information about using the SUNW.RGOffload resource
type.

Register the resource type for the scalable application server.


# scrgadm -a -t SUNW.sap_as_v2

4. Add the HAStoragePlus resource to the scalable application server resource group.

# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j sap-as-storage-resource \
-g sap-as-sa-appinstanceid-resource-group \
-t SUNW.HAStoragePlus \
-x filesystemmountpoints=mountpoint, ...

For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
5. Enable the application server storage resource.

# scswitch -e -j sap-as-storage-resource

6. Create SAP application server resources in this scalable resource group.

# scrgadm -a -j sap-as-resource \
-g sap-as-sa-appinstanceid-resource-group \
-t SUNW.sap_as_v2 \
-x SAPSID=SAPSID \
-x As_instance_id=as-instance-id \
-x As_startup_script=as-startup-script \
-x As_shutdown_script=as-shutdown-script \
-y resource_dependencies=sap-as-storage-resource

See Sun Cluster HA for SAP Extension Properties on page 205 for a list of extension properties.
7. Enable the scalable resource group that now includes the SAP application server resource.

If you do not use the RGOffload resource type with this application server, use the following command.
# scswitch -Z -g sap-as-sa-appinstanceid-resource-group

If you use the RGOffload resource type with this application server, use the following command.
# scswitch -z -h node1,node2 -g sap-as-sa-appinstanceid-resource-group

Note: If you use the SUNW.RGOffload resource type with this application server, you must specify the nodes on which to bring the resource group online by using the -z and -h options instead of the -Z option.
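The following sequence, again with hypothetical values only, shows the scalable variant for a SAP system ID SC3 and application server instance D02 on a two-node cluster. The group, resource, and script names are placeholders, and the resource types are assumed to be registered as shown in Step 3 and Step 4.

# scrgadm -a -g sc3-as-D02-rg \
-y Maximum_primaries=2 \
-y Desired_primaries=2
# scrgadm -a -j sc3-as-stor-rs -g sc3-as-D02-rg \
-t SUNW.HAStoragePlus \
-x filesystemmountpoints=/usr/sap/local/SC3
# scswitch -e -j sc3-as-stor-rs
# scrgadm -a -j sc3-as-rs -g sc3-as-D02-rg \
-t SUNW.sap_as_v2 \
-x SAPSID=SC3 -x As_instance_id=D02 \
-x As_startup_script=startsap_as -x As_shutdown_script=stopsap_as \
-y resource_dependencies=sc3-as-stor-rs
# scswitch -Z -g sc3-as-D02-rg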

See Also

Go to How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance on page 216.

Setting Up a Lock File

Use the procedures in this section to perform the following tasks.

- Set up a lock file for the central instance or the failover application server.
- Set up a lock file for a scalable application server.

Set up a lock file to prevent multiple startups of the SAP instance when the instance is already active on one node. Multiple startups of the same instance crash each other. Furthermore, the crash prevents the SAP shutdown scripts from performing a clean shutdown of the instances, which might cause data corruption.

If you set up a lock file, when you start the SAP instance the SAP software locks the file startup_lockfile. If you start up the same instance outside of the Sun Cluster environment and then try to bring up SAP under the Sun Cluster environment, the Sun Cluster HA for SAP data service will attempt to start up the same instance. However, because of the file-locking mechanism, this attempt will fail. The data service will log appropriate error messages in /var/adm/messages.

The only difference between the lock file for the central instance or the failover application server and the lock file for a scalable application server is that the lock file for a scalable application server resides on the local file system, whereas the lock file for the central instance or the failover application server resides on a cluster file system.

How to Set Up a Lock File for Central Instance or the Failover Application Server

Use this procedure to set up a lock file for the central instance or the failover application server.

1. Install the latest patch for the sapstart executable, which enables Sun Cluster HA for SAP users to configure a lock file.

2. Set up the central instance lock file or the failover application server lock file on a cluster file system.

3. Edit the profile that sapstart uses to start the instance such that you add the new SAP parameter, sapstart/lockfile, for the central instance or failover application server. This profile is the one that is passed to sapstart as a parameter in the startsap script.

For the central instance, enter the following.
sapstart/lockfile =/usr/sap/SID/Service-StringSystem-Number/work/startup_lockfile

For the failover application server, enter the following.
sapstart/lockfile =/usr/sap/SID/Dinstance-id/work/startup_lockfile

sapstart/lockfile
New parameter name.

/usr/sap/SID/Service-StringSystem-Number/work
Work directory for the central instance.

/usr/sap/SID/Dinstance-id/work
Work directory for the failover application server.

startup_lockfile
Lock file name that Sun Cluster HA for SAP uses.

SAP creates the lock file.

Note: You must locate the lock file path on a cluster file system. If you locate the lock file path locally on the nodes, a startup of the same instance from multiple nodes cannot be prevented.
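As an illustration only, for a hypothetical system ID SC3 whose central instance work directory is /usr/sap/SC3/DVEBMGS00/work on a cluster file system, the profile entry would read as follows; substitute your own SID and instance directory.

sapstart/lockfile =/usr/sap/SC3/DVEBMGS00/work/startup_lockfile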

How to Set Up a Lock File for Scalable Application Server

Use this procedure to set up a lock file for a scalable application server.

1. Install the latest patch for the sapstart executable, which enables Sun Cluster HA for SAP users to configure a lock file.

2. Set up the application server lock file on the local file system.

3. Edit the profile that sapstart uses to start the instance such that you add the new SAP parameter, sapstart/lockfile, for the scalable application server. This profile is the one that is passed to sapstart as a parameter in the startsap script.

sapstart/lockfile =/usr/sap/local/SID/Dinstance-id/work/startup_lockfile

sapstart/lockfile
New parameter name.

/usr/sap/local/SID/Dinstance-id/work
Work directory for the scalable application server.

startup_lockfile
Lock file name that Sun Cluster HA for SAP uses.

SAP creates the lock file.

Note: The lock file will reside on the local file system. The lock file does not prevent multiple startups from other nodes, but the lock file does prevent multiple startups on the same node.
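For illustration only, with a hypothetical system ID SC3 and a scalable application server instance D02 installed under /usr/sap/local on each node, the entry would read as follows.

sapstart/lockfile =/usr/sap/local/SC3/D02/work/startup_lockfile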

Verifying the Sun Cluster HA for SAP Installation and Configuration

This section contains the procedures you need to verify that you installed and configured your data service correctly.

How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance

Use this procedure to verify the Sun Cluster HA for SAP installation and configuration and the central instance.

1. Log in to the node that hosts the resource group that contains the SAP central instance resource.

2. Start the SAP GUI to check that Sun Cluster HA for SAP is functioning correctly.

3. As user sapsidadm, use the central instance stopsap script to shut down the SAP central instance.
The Sun Cluster software restarts the central instance.

4. As user root, switch the SAP resource group to another cluster member.
# scswitch -z -h node2 -g sap-ci-resource-group

5. Verify that the SAP central instance starts on this node.


6. Repeat Step 1 through Step 5 until you have tested all of the potential nodes on which the SAP central instance can run.

See Also

Go to How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover Data Service on page 217 or How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data Service on page 217.

How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover Data Service

Use this procedure to verify the installation and configuration of Sun Cluster HA for SAP as a failover data service.

1. Log in to the node that currently hosts the resource group that contains the SAP application server resource.

2. As user sapsidadm, start the SAP GUI to check that the application server is functioning correctly.

3. Use the application server stopsap script to shut down the SAP application server on the node you identified in Step 1.
The Sun Cluster software restarts the application server.

4. As user root, switch the resource group that contains the SAP application server resource to another cluster member.
# scswitch -z -h node2 -g sap-as-resource-group

5. Verify that the SAP application server starts on the node you identified in Step 4.

6. Repeat Step 1 through Step 5 until you have tested all of the potential nodes on which the SAP application server can run.

How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data Service

Use this procedure to verify the installation and configuration of Sun Cluster HA for SAP as a scalable data service.

1. Log on to one of the nodes that runs the application server, and become user sapsidadm.

2. Start the SAP GUI to check that the application server is functioning correctly.

3. Use the application server stopsap script to shut down the SAP application server on the node you identified in Step 1.
The Sun Cluster software restarts the application server.

4. Repeat Step 1 through Step 3 until you have tested all of the potential nodes on which the SAP application server can run.

Understanding Sun Cluster HA for SAP Fault Monitor

The Sun Cluster HA for SAP fault monitor checks SAP process and database availability. SAP process availability impacts the SAP resource's failure history. The SAP resource's failure history in turn drives the fault monitor's actions, which include no action, restart, or failover.

In contrast to SAP process availability, SAP database availability has no impact on the SAP resource's failure history. Database availability does, however, trigger the SAP fault monitor to log any syslog messages to /var/adm/messages and to set the status accordingly for the SAP resource that uses the database.

Sun Cluster HA for SAP Fault Probes for Central Instance

For the central instance, the fault probe executes the following steps.

1. Retrieves the process IDs for the SAP Message Server and the dispatcher.
2. Loops infinitely (sleeps for Thorough_probe_interval).
3. Checks the availability of the SAP resources.
a. Abnormal exit: If the Process Monitor Facility (PMF) detects that the SAP process tree has failed, the fault monitor treats this problem as a complete failure. The fault monitor restarts or fails over the SAP resource to another node based on the resource's failure history.
b. Availability check of the SAP resources through the probe: The probe uses the ps(1) command to check the SAP Message Server and main dispatcher processes. If any of the SAP Message Server or main dispatcher processes are missing from the system's active processes list, the fault monitor treats this problem as a complete failure.
If you configure the parameter Check_ms_retry to have a value greater than zero, the probe checks the SAP Message Server connection. If you have set the extension property Lgtst_ms_with_logicalhostname to its default value TRUE, the probe completes the SAP Message Server connection test with the utility lgtst. The probe uses the logical hostname interface that is specified in the SAP resource group to call the SAP-supplied utility lgtst. If

you set the extension property Lgtst_ms_with_logicalhostname to a value other than TRUE, the probe calls lgtst with the node's local hostname (loopback interface).
If the lgtst utility call fails, the SAP Message Server connection is not functioning. In this situation, the fault monitor considers the problem to be a partial failure and does not trigger an SAP restart or a failover immediately. The fault monitor counts two partial failures as a complete failure if the following conditions occur.
i. You configure the extension property Check_ms_retry to be 2.
ii. The fault monitor accumulates two partial failures that happen within the retry interval that the resource property Retry_interval sets.
A complete failure triggers either a local restart or a failover, based on the resource's failure history.
c. Database connection status through the probe: The probe calls the SAP-supplied utility R3trans to check the status of the database connection. Sun Cluster HA for SAP fault probes verify that SAP can connect to the database. Sun Cluster HA for SAP depends, however, on the highly available database fault probes to determine database availability. If the database connection status check fails, the fault monitor logs the message, Database might be down, to /var/adm/messages. The fault monitor then sets the status of the SAP resource to DEGRADED. If the probe checks the status of the database again and the connection is reestablished, the fault monitor logs the message, Database is up, to /var/adm/messages and sets the status of the SAP resource to OK. (A manual spot-check of the same processes and database connection is sketched after this list.)
4. Evaluates the failure history.
Based on the failure history, the fault monitor completes one of the following actions.
- no action
- local restart
- failover
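The following commands sketch how an administrator could approximate these probe checks by hand on the node that hosts the central instance. The process-name patterns and the SC3 system ID are hypothetical examples, and the sc3adm user name follows the usual sidadm convention. An R3trans exit code of 0 normally indicates a working database connection, with details written to trans.log in the current directory.

# ps -ef | grep ms.sapSC3          (SAP Message Server process present?)
# ps -ef | grep dw.sapSC3          (dispatcher and work processes present?)
# su - sc3adm
$ R3trans -d
$ echo $?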

Sun Cluster HA for SAP Fault Probes for Application Server

For the application server, the fault probe executes the following steps.

1. Retrieves the process ID for the main dispatcher.
2. Loops infinitely (sleeps for Thorough_probe_interval).
3. Checks the availability of the SAP resources.
a. Abnormal exit: If the Process Monitor Facility (PMF) detects that the SAP process tree has failed, the fault monitor treats this problem as a complete failure. The fault monitor restarts or fails over the SAP resource to another node, based on the resource's failure history.


b. Availability check of the SAP resources through the probe: The probe uses the ps(1) command to check the SAP Message Server and main dispatcher processes. If the SAP main dispatcher process is missing from the system's active processes list, the fault monitor treats the problem as a complete failure.
c. Database connection status through the probe: The probe calls the SAP-supplied utility R3trans to check the status of the database connection. Sun Cluster HA for SAP fault probes verify that SAP can connect to the database. Sun Cluster HA for SAP depends, however, on the highly available database fault probes to determine database availability. If the database connection status check fails, the fault monitor logs the message, Database might be down, to /var/adm/messages and sets the status of the SAP resource to DEGRADED. If the probe checks the status of the database again and the connection is reestablished, the fault monitor logs the message, Database is up, to /var/adm/messages. The fault monitor then sets the status of the SAP resource to OK.
4. Evaluates the failure history.
Based on the failure history, the fault monitor completes one of the following actions.
- no action
- local restart
- failover
If the application server resource is a failover resource, the fault monitor fails over the application server.
If the application server resource is a scalable resource, after the number of local restarts is exhausted, the RGM brings up the application server on a different node if another node is available in the cluster.


APPENDIX F

Upgrading Sun Cluster Software From Solaris 8 to Solaris 9 Software

This appendix provides the following step-by-step procedures to upgrade a Sun Cluster 3.0 configuration to Sun Cluster 3.1 04/04 software, including upgrade from Solaris 8 to Solaris 9 software, or to upgrade a Sun Cluster 3.1 04/04 configuration that runs on Solaris 8 software to Solaris 9 software:

- How to Prepare the Cluster for Upgrade on page 223
- How to Upgrade the Solaris Operating Environment on page 225
- How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227
- How to Upgrade Sun Cluster-Module Software for Sun Management Center on page 233
- How to Finish Upgrading to Sun Cluster 3.1 04/04 Software on page 234
- How to Handle Storage Reconfiguration During an Upgrade on page 235
- How to Resolve Mistaken Storage Changes During an Upgrade on page 236

This appendix replaces the section Upgrading to Sun Cluster 3.1 04/04 Software on page
Upgrading to Sun Cluster 3.1 04/04 Software

Perform the following tasks to upgrade from Sun Cluster 3.0 software to Sun Cluster 3.1 04/04 software, including upgrade from Solaris 8 to Solaris 9 software, or to upgrade a Sun Cluster 3.1 04/04 configuration that runs on Solaris 8 software to Solaris 9 software.

TABLE F-1 Task Map: Upgrading to Sun Cluster 3.1 04/04 Software

Task: 1. Read the upgrade requirements and restrictions.
Instructions: Upgrade Requirements and Restrictions on page 222

Task: 2. Take the cluster out of production, disable resources, and back up shared data and system disks.
Instructions: How to Prepare the Cluster for Upgrade on page 223


Task: 3. Upgrade the Solaris software, if necessary, to a supported Solaris update release. Optionally, upgrade VERITAS Volume Manager (VxVM).
Instructions: How to Upgrade the Solaris Operating Environment on page 225

Task: 4. Upgrade to Sun Cluster 3.1 04/04 framework and data-service software. This is required for upgrade from Solaris 8 software to Solaris 9 software. If necessary, upgrade applications. If you upgraded VxVM, upgrade disk groups.
Instructions: How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227

Task: 5. (Optional) Upgrade the Sun Cluster module to Sun Management Center, if needed.
Instructions: How to Upgrade Sun Cluster-Module Software for Sun Management Center on page 233

Task: 6. Reregister resource types, enable resources, and bring resource groups online.
Instructions: How to Finish Upgrading to Sun Cluster 3.1 04/04 Software on page 234

Upgrade Requirements and Restrictions

Observe the following requirements and restrictions when you upgrade to Sun Cluster 3.1 04/04 software:

- The cluster must run on or be upgraded to at least Solaris 8 2/02 software, including the most current required patches.

- The cluster hardware must be a supported configuration for Sun Cluster 3.1 04/04 software. Contact your Sun representative for information about current supported Sun Cluster configurations.

- You must upgrade all software to a version that is supported by Sun Cluster 3.1 04/04 software. For example, you must upgrade a data service that is supported on Sun Cluster 3.0 software but is not supported on Sun Cluster 3.1 04/04 software to the version of that data service that is supported on Sun Cluster 3.1 04/04 software. If the related application is not supported on Sun Cluster 3.1 04/04 software, you must also upgrade that application to a supported release.

- The scinstall upgrade utility only upgrades those data services that are provided with Sun Cluster 3.1 04/04 software. You must manually upgrade any custom or third-party data services.

- Have available the test IP addresses to use with your public network adapters when NAFO groups are converted to Internet Protocol (IP) Network Multipathing groups. The scinstall upgrade utility prompts you for a test IP address for each public network adapter in the cluster. A test IP address must be on the same subnet as the primary IP address for the adapter.
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test IP addresses for IP Network Multipathing groups.

- Sun Cluster 3.1 04/04 software supports direct upgrade only from Sun Cluster 3.x software.

- Sun Cluster 3.1 04/04 software does not support any downgrade of Sun Cluster software.


How to Prepare the Cluster for Upgrade

Before you upgrade the software, perform the following steps to take the cluster out of production:

1. Ensure that the configuration meets requirements for upgrade.
See Upgrade Requirements and Restrictions on page 222.

2. Have available the CD-ROMs, documentation, and patches for all software products you are upgrading.

- Solaris 8 or Solaris 9 operating environment
- Sun Cluster 3.1 04/04 framework
- Sun Cluster 3.1 04/04 data services (agents)
- Applications that are managed by Sun Cluster 3.1 04/04 data-service agents
- VERITAS Volume Manager
- Patch 113801-01 or later, which is required to upgrade from Solaris 8 software to Solaris 9 software

See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
3. (Optional) Install Sun Cluster 3.1 04/04 documentation.
Install the documentation packages in your preferred location, such as an administrative console or a documentation server. See the index.html file at the top level of the Sun Cluster 3.0 5/02 CD-ROM to access installation instructions.

4. Have available your list of test IP addresses, one for each public network adapter in the cluster.
A test IP address is required for each public network adapter in the cluster, regardless of whether the adapter is the active adapter or the backup adapter in a NAFO group. The test IP addresses will be used to reconfigure the adapters to use IP Network Multipathing.

Note: Each test IP address must be on the same subnet as the existing IP address that is used by the public network adapter.

To list the public network adapters on a node, run the following command:
% pnmstat

See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.
5. Notify users that cluster services will be unavailable during upgrade.

6. Ensure that the cluster is functioning normally.

- To view the current status of the cluster, run the following command from any node:
% scstat
See the scstat(1M) man page for more information.

- Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.

- Check volume manager status.

7. Become superuser on a node of the cluster.

8. Switch each resource group offline.

# scswitch -F -g resource-group

-F                   Switches a resource group offline
-g resource-group    Specifies the name of the resource group to take offline

9. Disable all resources in the cluster.
The disabling of resources before upgrade prevents the cluster from bringing the resources online automatically if a node is mistakenly rebooted into cluster mode.

Note: If you are upgrading from a Sun Cluster 3.1 release, you can use the scsetup(1M) utility instead of the command line. From the Main Menu, choose Resource Groups, then choose Enable/Disable Resources.

a. From any node, list all enabled resources in the cluster.
# scrgadm -pv | grep "Res enabled"

b. Identify those resources that depend on other resources.
You must disable dependent resources first before you disable the resources that they depend on.

c. Disable each enabled resource in the cluster. (A short loop that disables a list of resources is sketched after this step.)
# scswitch -n -j resource

-n           Disables
-j resource  Specifies the resource

See the scswitch(1M) man page for more information.
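As an illustration only, the following fragment disables a short list of resources in one pass. The resource names sap-rs, ora-rs, and nfs-rs are placeholders for the names reported by the scrgadm listing above, and dependent resources must still be listed before the resources they depend on.

# scrgadm -pv | grep "Res enabled"
# for RES in sap-rs ora-rs nfs-rs
> do
>   scswitch -n -j $RES
> done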


10. Move each resource group to the unmanaged state.

# scswitch -u -g resource-group

-u                   Moves the specified resource group to the unmanaged state
-g resource-group    Specifies the name of the resource group to move into the unmanaged state

11. Verify that all resources on all nodes are disabled and that all resource groups are in the unmanaged state.
# scstat -g

12. Stop all databases that are running on each node of the cluster.

13. Ensure that all shared data is backed up.

14. From one node, shut down the cluster.

# scshutdown
ok

See the scshutdown(1M) man page for more information.

15. Boot each node into noncluster mode.

ok boot -x

16. Ensure that each system disk is backed up.

17. Determine whether to upgrade the Solaris operating environment.

- If Sun Cluster 3.1 04/04 software does not support the release of the Solaris environment that you currently run on your cluster, you must upgrade the Solaris software to a supported release. Go to How to Upgrade the Solaris Operating Environment on page 225.

- If your cluster configuration already runs on a release of the Solaris environment that supports Sun Cluster 3.1 04/04 software, go to How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227.

See Supported Products in Sun Cluster 3.1 Release Notes for more information.

How to Upgrade the Solaris Operating Environment

Perform this procedure on each node in the cluster to upgrade the Solaris operating environment.

Note: The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris 8 or Solaris 9 environment to support Sun Cluster 3.1 04/04 software. See Supported Products in Sun Cluster 3.1 Release Notes for more information.

1. Ensure that all steps in How to Prepare the Cluster for Upgrade on page 223 are completed.


2. Become superuser on the cluster node to upgrade.

3. Determine whether the following Apache links already exist, and if so, whether the file names contain an uppercase K or S:

/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache

- If these links already exist and do contain an uppercase K or S in the file name, no further action is necessary for these links.
- If these links do not exist, or if these links exist but instead contain a lowercase k or s in the file name, you move aside these links in Step 8.

4. Comment out all entries for globally mounted file systems in the /etc/vfstab file.
a. Make a record of all entries that are already commented out, for later reference.
b. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.
Entries for globally mounted file systems contain the global mount option. Comment out these entries to prevent the Solaris upgrade from attempting to mount the global devices. A commented-out example entry is shown after this step.
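For illustration only, a globally mounted file-system entry might look like the first line below before the upgrade and like the second line after you comment it out; the metadevice paths and mount point are hypothetical.

/dev/md/setA/dsk/d100 /dev/md/setA/rdsk/d100 /global/setA ufs 2 yes global,logging
#/dev/md/setA/dsk/d100 /dev/md/setA/rdsk/d100 /global/setA ufs 2 yes global,logging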

5. Determine which procedure to follow to upgrade the Solaris operating environment.

Volume Manager: Solstice DiskSuite/Solaris Volume Manager
Procedure to Use: Any Solaris upgrade method except the Live Upgrade method
Instructions: Solaris 8 or Solaris 9 installation documentation

Volume Manager: VERITAS Volume Manager
Procedure to Use: Upgrading VxVM and Solaris software
Instructions: VERITAS Volume Manager installation documentation

6. Upgrade the Solaris software, following the procedure you selected in Step 5.

Note: Ignore the instruction to reboot at the end of the Solaris software upgrade process. You must first perform Step 7 and Step 8, then reboot into noncluster mode in Step 9 to complete the Solaris software upgrade.
If you are instructed to reboot a node at other times in the upgrade process, always add the -x option to the command. This option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:
# reboot -- -xs
ok boot -xs


7. In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you commented out in Step 4.

8. If the Apache links in Step 3 did not already exist or if they contained a lowercase k or s in the file names before you upgraded the Solaris software, move aside the restored Apache links.
Use the following commands to rename the files with a lowercase k or s:
# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache

9. Reboot the node into noncluster mode.
Include the double dashes (--) in the following command:
# reboot -- -x

10. Install any required Solaris software patches and hardware-related patches, and download any needed firmware that is contained in the hardware patches.
For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.

Note: Do not reboot after you add patches. You reboot the node after you upgrade the Sun Cluster software.

See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

11. Upgrade to Sun Cluster 3.1 04/04 software.
Go to How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227.

Note: To complete upgrade from Solaris 8 to Solaris 9 software, you must also upgrade to the Solaris 9 version of Sun Cluster 3.1 04/04 software, even if the cluster already runs on Sun Cluster 3.1 04/04 software.

How to Upgrade to Sun Cluster 3.1 04/04 Software

This procedure describes how to upgrade the cluster to Sun Cluster 3.1 04/04 software. You must also perform this procedure to complete cluster upgrade from Solaris 8 to Solaris 9 software.

Tip: You can perform this procedure on more than one node at the same time.

1. Ensure that all steps in How to Prepare the Cluster for Upgrade on page 223 are completed.
If you upgraded from Solaris 8 to Solaris 9 software, also ensure that all steps in How to Upgrade the Solaris Operating Environment on page 225 are completed.

2. Become superuser on a node of the cluster.

3. Ensure that you have installed all required Solaris software patches and hardware-related patches.
For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice DiskSuite software patches.

4. Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive on the node.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.

5. Upgrade the node to Sun Cluster 3.1 04/04 software.

a. Change to the /cdrom/suncluster_3_0/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_ver/Tools

b. Upgrade the cluster framework software.

- To upgrade from Sun Cluster 3.0 software, run the following command:
# ./scinstall -u update -S interact

-S          Specifies the test IP addresses to use to convert NAFO groups to IP Network Multipathing groups
interact    Specifies that scinstall prompts the user for each test IP address needed

- To upgrade from Sun Cluster 3.1 software, run the following command:
# ./scinstall -u update

Tip: If upgrade processing is interrupted, use the scstat(1M) command to ensure that the node is in noncluster mode (Offline), then restart the scinstall command.


# scstat -n
-- Cluster Nodes --
                    Node name       Status
                    ---------       ------
  Cluster node:     nodename        Offline
  Cluster node:     nodename        Offline


See the scinstall(1M) man page for more information. See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for information about test addresses for IP Network Multipathing.

Note: Sun Cluster 3.1 04/04 software requires at least version 3.5.1 of Sun Explorer software. Upgrade to Sun Cluster software includes installing Sun Explorer data collector software, to be used in conjunction with the sccheck utility. If another version of Sun Explorer software was already installed before the Sun Cluster upgrade, it is replaced by the version that is provided with Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab entries must be manually recreated.

During Sun Cluster upgrade, scinstall might make one or more of the following configuration changes:

- Convert NAFO groups to IP Network Multipathing groups but keep the original NAFO-group name.
- Rename the ntp.conf file to ntp.conf.cluster, if ntp.conf.cluster does not already exist on the node.
- Set the local-mac-address? variable to true, if the variable is not already set to that value.

c. Change to the CD-ROM root directory and eject the CD-ROM.

d. Install any Sun Cluster 3.1 04/04 patches.

Note: If you upgraded from Solaris 8 software to Solaris 9 software, install Patch 113801-01 or later before you proceed to the next step.

See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

Note: Do not reboot the node at this time.

6. Upgrade software applications that are installed on the cluster and apply application patches as needed.
Ensure that application levels are compatible with the current version of Sun Cluster and Solaris software. See your application documentation for installation instructions. In addition, follow these guidelines to upgrade applications in a Sun Cluster 3.1 04/04 configuration:

- If the applications are stored on shared disks, you must master the relevant disk groups and manually mount the relevant file systems before you upgrade the application.

- If you are instructed to reboot a node during the upgrade process, always add the -x option to the command. This option ensures that the node reboots into noncluster mode. For example, either of the following two commands boots a node into single-user noncluster mode:
# reboot -- -xs
ok boot -xs

7. Upgrade Sun Cluster data services to the Sun Cluster 3.1 04/04 software versions.

Note: Only those data services that are provided on the Sun Cluster 3.0 5/02 Agents CD-ROM are automatically upgraded by scinstall(1M). You must manually upgrade any custom or third-party data services.

a. Insert the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive on the node to upgrade.

b. Upgrade the data-service software.
# scinstall -u update -s all -d /cdrom/cdrom0

-u update    Specifies upgrade
-s all       Updates all Sun Cluster data services that are installed on the node

Tip: If upgrade processing is interrupted, use the scstat(1M) command to ensure that the node is in noncluster mode (Offline), then restart the scinstall command.


# scstat -n
-- Cluster Nodes --
                    Node name       Status
                    ---------       ------
  Cluster node:     nodename        Offline
  Cluster node:     nodename        Offline

c. Change to the CD-ROM root directory and eject the CD-ROM.

d. As needed, manually upgrade any custom data services that are not supplied on the Sun Cluster 3.0 5/02 Agents CD-ROM.

e. Install any Sun Cluster 3.1 04/04 data-service patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

8. After all nodes are upgraded, reboot each node into the cluster.
# reboot


9. Verify that all upgraded software is at the same version on all upgraded nodes.
a. On each upgraded node, view the installed levels of Sun Cluster software.
# scinstall -pv

b. From one node, verify that all upgraded cluster nodes are running in cluster mode (Online).
# scstat -n
See the scstat(1M) man page for more information about displaying cluster status.

10. Did you upgrade from Solaris 8 to Solaris 9 software?

- If no, skip to Step 14.
- If yes, proceed to Step 11.

11. On each node, run the following command to verify the consistency of the storage configuration:
# scdidadm -c

-c    Perform a consistency check

Caution: Do not proceed to Step 12 until your configuration passes this consistency check. Failure to do so might result in errors in device identification and cause data corruption.

The following table lists the possible output from the scdidadm -c command and the action you must take, if any.

Example Message: device id for phys-schost-1:/dev/rdsk/c1t3d0 does not match physical device's id, device may have been replaced
Action to Take: Go to Recovering From Storage Configuration Changes During Upgrade on page 235 and perform the appropriate repair procedure.

Example Message: device id for phys-schost-1:/dev/rdsk/c0t0d0 needs to be updated, run scdidadm -R to update
Action to Take: None. You update this device ID in Step 12.

Example Message: No output message
Action to Take: None.

See the scdidadm(1M) man page for more information.


12. On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.
# scdidadm -R all

-R     Perform repair procedures
all    Specify all devices


13. On each node, run the following command to verify that storage database migration to Solaris 9 device IDs is successful:
# scdidadm -c

- If the scdidadm command displays a message, return to Step 11 to make further corrections to the storage configuration or the storage database.
- If the scdidadm command displays no messages, the device-ID migration is successful. If device-ID migration is verified on all cluster nodes, proceed to Step 14.

14. Did you upgrade VxVM?

- If no, proceed to Step 15.
- If yes, upgrade all disk groups.
To upgrade a disk group to the highest version supported by the VxVM release that you installed, run the following command from the primary node of the disk group:
# vxdg upgrade dgname
See your VxVM administration documentation for more information about upgrading disk groups.

15. Do you intend to use Sun Management Center to monitor the cluster?

- If yes, go to How to Upgrade Sun Cluster-Module Software for Sun Management Center on page 233.
- If no, go to How to Finish Upgrading to Sun Cluster 3.1 04/04 Software on page 234.

EXAMPLE F-1 Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 04/04 Software

The following example shows the process of upgrading a two-node cluster, including data services, from Sun Cluster 3.0 to Sun Cluster 3.1 04/04 software on the Solaris 8 operating environment. The cluster node names are phys-schost-1 and phys-schost-2.

(On the first node, upgrade framework software from the Sun Cluster 3.0 5/02 CD-ROM)
phys-schost-1# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_8/Tools
phys-schost-1# ./scinstall -u update -S interact
(On the first node, upgrade data services from the Sun Cluster 3.0 5/02 Agents CD-ROM)
phys-schost-1# ./scinstall -u update -s all -d /cdrom/cdrom0
(On the second node, upgrade framework software from the Sun Cluster 3.0 5/02 CD-ROM)
phys-schost-2# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_8/Tools
phys-schost-2# ./scinstall -u update -S interact
(On the second node, upgrade data services from the Sun Cluster 3.0 5/02 Agents CD-ROM)
phys-schost-2# ./scinstall -u update -s all -d /cdrom/cdrom0


(Reboot each node into the cluster)
phys-schost-1# reboot
phys-schost-2# reboot

(Verify cluster membership)
# scstat
-- Cluster Nodes --
                    Node name        Status
                    ---------        ------
  Cluster node:     phys-schost-1    Online
  Cluster node:     phys-schost-2    Online

How to Upgrade Sun Cluster-Module Software for Sun Management Center

Perform the following steps to upgrade to the Sun Cluster 3.1 04/04 module software packages for Sun Management Center on the Sun Management Center server machine and help-server machine.

1. Ensure that all Sun Management Center core packages are installed on the appropriate machines, as described in your Sun Management Center installation documentation.
This step includes installing Sun Management Center agent packages on each cluster node.

2. Become superuser on the Sun Management Center server machine.

3. Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive.

4. Change to the /cdrom/suncluster_3_0/SunCluster_3.1/Sol_ver/Packages directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_ver/Packages

5. Install the Sun Cluster-module server package SUNWscssv.
# pkgadd -d . SUNWscssv

6. Change to the CD-ROM root directory and eject the CD-ROM.

7. Become superuser on the Sun Management Center help-server machine.

8. Repeat Step 3 through Step 6 to install the Sun Cluster-module help-server package SUNWscshl.

9. Finish the upgrade.
Go to How to Finish Upgrading to Sun Cluster 3.1 04/04 Software on page 234.


How to Finish Upgrading to Sun Cluster 3.1 04/04 Software

Perform this procedure to reregister and reversion all resource types that received a new version from the upgrade, then to re-enable resources and bring resource groups back online.

Note: To upgrade future versions of resource types, see Upgrading a Resource Type in Sun Cluster 3.1 Data Service 4/03 Planning and Administration Guide.

1. Ensure that all steps in How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227 are completed.

2. From any node, start the scsetup(1M) utility.
# scsetup

3. To work with resource groups, type 2 (Resource groups).

4. To register resource types, type 4 (Resource type registration).
Type yes when prompted to continue.

5. Type 1 (Register all resource types which are not yet registered).
The scsetup utility displays all resource types that are not registered.
Type yes to continue to register these resource types.

6. Type 8 (Change properties of a resource).
Type yes to continue.

7. Type 3 (Manage resource versioning).
Type yes to continue.

8. Type 1 (Show versioning status).
The scsetup utility displays which resources you can upgrade to new versions of the same resource type. The utility also displays the state that the resource should be in before the upgrade can begin.
Type yes to continue.

9. Type 4 (Re-version all eligible resources).
Type yes to continue when prompted.

10. Return to the Resource Group Menu.


11. Type 6 (Enable/Disable a resource).
Type yes to continue when prompted.

12. Select a resource to enable and follow the prompts.

13. Repeat Step 12 for each disabled resource.

14. When all resources are re-enabled, type q to return to the Resource Group Menu.

15. Type 5 (Online/Offline or Switchover a resource group).
Type yes to continue when prompted.

16. Follow the prompts to bring each resource group online.

17. Exit the scsetup utility.
Type q to back out of each submenu, or press Ctrl-C.

The cluster upgrade is complete. You can now return the cluster to production.
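Before you return the cluster to production, you might confirm from any node that the resource groups and resources are back online. The scstat command with the -g option, used earlier in this appendix, reports resource group and resource states.

# scstat -g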

Recovering From Storage Configuration Changes During Upgrade

This section provides the following repair procedures to follow if changes were inadvertently made to the storage configuration during upgrade:

- How to Handle Storage Reconfiguration During an Upgrade on page 235
- How to Resolve Mistaken Storage Changes During an Upgrade on page 236

How to Handle Storage Reconfiguration During an Upgrade

Any changes to the storage topology, including running Sun Cluster commands, should be completed before you upgrade the cluster to Solaris 9 software. If, however, changes were made to the storage topology during the upgrade, perform the following procedure. This procedure ensures that the new storage configuration is correct and that existing storage that was not reconfigured is not mistakenly altered.

1. Ensure that the storage topology is correct.
Check whether the devices that were flagged as possibly being replaced map to devices that actually were replaced. If the devices were not replaced, check for and correct possible accidental configuration changes, such as incorrect cabling.


2. Become superuser on a node that is attached to the unverified device.

3. Manually update the unverified device.
# scdidadm -R device

-R device    Performs repair procedures on the specified device

See the scdidadm(1M) man page for more information.

4. Update the DID driver.
# scdidadm -ui
# scdidadm -r

-u    Loads the device-ID configuration table into the kernel
-i    Initializes the DID driver
-r    Reconfigures the database

5. Repeat Step 2 through Step 4 on all other nodes that are attached to the unverified device.

6. Return to the remaining upgrade tasks.
Go to Step 11 in How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227.

How to Resolve Mistaken Storage Changes During an Upgrade

If accidental changes are made to the storage cabling during the upgrade, perform the following procedure to change the storage configuration back to the correct state.

Note: This procedure assumes that no physical storage was actually changed. If physical or logical storage devices were changed or replaced, instead follow the procedures in How to Handle Storage Reconfiguration During an Upgrade on page 235.

1. Change the storage topology back to its original configuration.
Check the configuration of the devices that were flagged as possibly being replaced, including the cabling.

2. As superuser, update the DID driver on each node of the cluster.
# scdidadm -ui
# scdidadm -r

-u    Loads the device-ID configuration table into the kernel
-i    Initializes the DID driver
-r    Reconfigures the database

See the scdidadm(1M) man page for more information.

3. Did the scdidadm command return any error messages in Step 2?

- If no, proceed to Step 4.
- If yes, return to Step 1 to make further modifications to correct the storage configuration, then repeat Step 2.

4. Return to the remaining upgrade tasks.
Go to Step 11 in How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227.
