Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without
limitation, these intellectual property rights may include one or more U.S. patents or pending patent applications in the U.S. and in other countries.
U.S. Government Rights Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions
of the FAR and its supplements.
This distribution may include materials developed by third parties.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other
countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, the Solaris logo, the Java Coffee Cup logo, docs.sun.com, Java, and Solaris are trademarks or registered trademarks of Sun
Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. This
product includes software developed by the Apache Software Foundation (http://www.apache.org/).
The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of
Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the
Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license
agreements.
Products covered by and information contained in this publication are controlled by U.S. Export Control laws and may be subject to the export or import laws in
other countries. Nuclear, missile, chemical or biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export
or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially
designated nationals lists is strictly prohibited.
DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY
IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO
THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Copyright 2006 Sun Microsystems, Inc.
Contents
How to Replace the Sun Cluster Support Packages for Oracle Real Application Clusters ..................48
Bug ID 5109935 .....................................................................................................................................49
Bug ID 6196936 .....................................................................................................................................49
Bug ID 6198608 .....................................................................................................................................49
Bug ID 6210418 .....................................................................................................................................50
Bug ID 6220218 .....................................................................................................................................50
Bug ID 6252555 .....................................................................................................................................50
Known Documentation Problems .............................................................................................................51
System Administration Guide .............................................................................................................51
Software Installation Guide .................................................................................................................51
Man Pages ..............................................................................................................................................52
Sun Cluster Data Services 3.1 10/03 Release Notes Supplement ........................................................71
Revision Record ............................................................................................................................................71
New Features .................................................................................................................................................72
Support for Oracle 10g .........................................................................................................................72
WebLogic Server Version 8.x ...............................................................................................................73
Restrictions and Requirements ...................................................................................................................73
Known Problems ..........................................................................................................................................74
Some Data Services Cannot be Upgraded by Using the scinstall Utility ...................................74
How to Upgrade Data Services That Cannot be Upgraded by Using scinstall .................74
Sun Cluster HA for liveCache nsswitch.conf requirements for passwd make NIS unusable (4904975) ...............................................................................................................................75
Known Documentation Problems .............................................................................................................75
Sun Cluster Data Services 3.1 5/03 Release Notes Supplement ..........................................................87
Revision Record ............................................................................................................................................87
New Features .................................................................................................................................................88
Support for Oracle 10g .........................................................................................................................88
Sun Cluster Support for Oracle Real Application Clusters on a Subset of Cluster Nodes ...........89
Restrictions and Requirements ...................................................................................................................90
Known Problems ..........................................................................................................................................91
Known Documentation Problems .............................................................................................................91
Sun Cluster 3.1 Data Service for NetBackup ......................................................................................91
Sun Cluster 3.1 Data Service for Sun ONE Application Server .......................................................91
Release Notes .........................................................................................................................................91
Planning the Sun Cluster HA for SAP liveCache Installation and Configuration ..............................148
Configuration Requirements ............................................................................................................148
Standard Data Service Configurations .............................................................................................149
Configuration Considerations ..........................................................................................................149
Configuration Planning Questions ..................................................................................................149
Preparing the Nodes and Disks .................................................................................................................150
How to Prepare the Nodes ............................................................................................................150
Installing and Configuring SAP liveCache ..............................................................................................151
How to Install and Configure SAP liveCache .............................................................................151
How to Enable SAP liveCache to Run in a Cluster .....................................................................151
Verifying the SAP liveCache Installation and Configuration ................................................................152
How to Verify the SAP liveCache Installation and Configuration ...........................................152
Installing the Sun Cluster HA for SAP liveCache Packages ...................................................................153
How to Install the Sun Cluster HA for SAP liveCache Packages ..............................................153
Registering and Configuring the Sun Cluster HA for SAP liveCache ..................................................154
Sun Cluster HA for SAP liveCache Extension Properties ..............................................................154
How to Register and Configure Sun Cluster HA for SAP liveCache ........................................156
Verifying the Sun Cluster HA for SAP liveCache Installation and Configuration ..............................159
How to Verify the Sun Cluster HA for SAP liveCache Installation and Configuration .........159
Understanding Sun Cluster HA for SAP liveCache Fault Monitors .....................................................160
Extension Properties ..........................................................................................................................160
Monitor Check Method .....................................................................................................................160
Probing Algorithm and Functionality ..............................................................................................161
C H A P T E R   1
Sun Cluster 3.1 8/05 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 8/05
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 8/05 product. These online
release notes provide the most current information on the Sun Cluster 3.1 8/05 product. This
chapter includes the following information.
Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.
TABLE 1-1 Sun Cluster 3.1 8/05 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
April 2006
Incorrect Release Date for the First Update of the Solaris 10 OS on page 28
Support for Oracle 10g R2 Real Application Clusters on the x64 Platform on page 16
March 2006
January 2006
TABLE 1-2 Sun Cluster 3.1 8/05 Release Notes Supplement Revision Record: 2005
Revision Date
New Information
Required Patches on page 15
Support for InfiniBand Adapters on the Cluster Interconnect on page 16
Support for the Sun StorEdge QFS Shared File System With Solaris Volume Manager for Sun Cluster
on page 17
Support for Oracle 10g R1 and 10g R2 Real Application Clusters on the SPARC Platform on page 19
Support for Oracle 10g on the x64 Platform With the Solaris 10 OS on page 20
Support for SAP Version 6.40 on page 20
Support for MaxDB Version 7.5 on page 21
November 2005
October 2005
September 2005
November 2005
Localization Packages For Sun Java Web Console Do Not Exist in the Sun Cluster
Standalone Distribution (6299614) on page 26
New Features
In addition to features documented in the Sun Cluster 3.1 8/05 Release Notes for Solaris OS, this
release now includes support for the following features.
Required Patches
Patches are required to run Sun Cluster 3.1 8/05 on certain operating system configurations. See
the following table to determine if your operating system configuration requires a patch.
Configuration                                                             Patch Number
Solaris 9 (SPARC)                                                         117949-19
Solaris 9 (x86)                                                           117909-19
Solaris 10 with Kernel Jumbo Patch 118822-15 or greater, SCI adapter      120545-02
Solaris 10 (x64)                                                          120501-03
Solaris 10 with Kernel Jumbo Patch 118822-18 or greater                   120500-03
If you are using a storage area network (SAN) to provide access to shared storage and I/O
multipathing is enabled, the following Solaris patches are also required:
119375-13
119716-10
Without these patches, a node can lose access to all shared storage if a physical link that provides
access to storage is disconnected or fails.
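To check whether a given patch is already installed on a node, you can query the node's patch list. A minimal sketch, assuming the patch has been downloaded and unpacked under /var/tmp (the download location is illustrative):
# showrev -p | egrep '119375|119716'
# patchadd /var/tmp/119375-13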
A two-node cluster must use InfiniBand switches. You cannot directly connect the InfiniBand
adapters to each other.
A single Sun InfiniBand switch, which has nine ports, can support up to nine nodes in a cluster.
Jumbo frames are not supported on a cluster that uses InfiniBand adapters.
If only one InfiniBand adapter is installed on a cluster node, each of its two ports must be
connected to a different InfiniBand switch.
If two InfiniBand adapters are installed in a cluster node, leave the second port on each adapter
unused. For example, connect port 1 on HCA 1 to switch 1 and connect port 1 on HCA 2 to
switch 2.
120809-01
120807-01
118822-21
120537-04
Note Ensure that you install the stated revision or a higher revision of each patch in the preceding
list.
Using the RAID0 metadevices or Solaris Volume Manager soft partitions of such metadevices as
Sun StorEdge QFS devices
N is the version number of the Solaris OS that you are using. For example, if you are using the Solaris
10 OS, N is 10.
Oracle 9.2.
crs-home
nodename
The name of the node where you are disabling the GSD
Prevent the Oracle GSD from being started if the node is rebooted.
# crs-home/bin/crs_unregister ora.nodename.gsd
crs-home
nodename
The name of the node where you are disabling the GSD
When performing How to Enable Failover SAP Instances to Run in a Cluster on page 200, add the
following Step 9 to this procedure:
9. As user sapsidadm, add the following entries for enq in the DEFAULT.PFL profile file under the
/sapmnt/SAPSID/profile directory.
rdisp/enqname=<ci-logical-hostname>_sapsid_NR
rdisp/myname=<ci-logical-hostname>_sapsid_NR
(MaxDB).
If you are using MaxDB 7.5, the UNIX user identity of the OS user who administers the MaxDB
database must be sdb. Otherwise, the MaxDB fault monitor cannot probe the MaxDB database.
You are required to specify this user identity when you perform the tasks that are explained in the
following sections:
How to Install and Configure MaxDB in Sun Cluster Data Service for MaxDB Guide for Solaris
OS
How to Verify MaxDB Installation and Configuration on Each Node in Sun Cluster Data
Service for MaxDB Guide for Solaris OS
How to Register and Configure a MaxDB Resource in Sun Cluster Data Service for MaxDB
Guide for Solaris OS
How to Verify the Operation of the MaxDB Fault Monitor in Sun Cluster Data Service for
MaxDB Guide for Solaris OS
Ensure that the SAP liveCache administrator user is in the sdba user group.
The format of the SAP liveCache administrator user's user ID is lc-nameadm.
If you are creating the SAP liveCache administrator user manually, add the following entry to the
/etc/group file:
sdba::group-id:lc-nameadm
group-id
lc-name
For more information about the /etc/group file, see the group(4) man page.
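For example, assuming an SAP liveCache named LC1, a group ID of 102, and an administrator user lc1adm (all illustrative values), the /etc/group entry would read sdba::102:lc1adm, and the group could be created and assigned as follows:
# groupadd -g 102 sdba
# usermod -G sdba lc1adm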
2
If the SCM and SAP liveCache are installed on different machines, ensure that the SAP liveCache
administrator user's user ID is identical and belongs to the sdba group on each machine.
To meet these requirements, ensure that the entry for the SAP liveCache administrator user in the
/etc/group file on each machine is identical. The required format of this entry is given in Step 1.
How to Confirm That the SAP liveCache Administrator User Can Run the
lcinit Command
If you are using SAP liveCache 7.5, confirm that the SAP liveCache administrator user can run
lcinit immediately after you perform the task How to Verify the SAP liveCache Installation and
Configuration on page 152.
1
lc-name
2
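A minimal sketch of this confirmation, assuming the administrator user is lc1adm and the liveCache name is LC1 (both illustrative; the exact lcinit arguments depend on your liveCache release):
# su - lc1adm
$ lcinit LC1 restart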
Fixed Problems
There are no fixed problems at this time.
Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 8/05 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 8/05 release.
The rsh/telnet/rlogin process hangs when connecting over the cluster interconnect (CR
6352333).
Network devices based on the Intel Ophir chip are unreliable in a back-to-back configuration
(CR 6331252).
2. Use an Ethernet switch with your cluster interconnect cables for all ipge onboard interfaces.
Direct-connect onboard interfaces are not supported by Sun Cluster software at this time.
The sap_ci and sap_as start methods dump core and are unable to start SAP Unicode systems.
Workaround: To avoid this problem, if you are using an SAP Unicode system, you must perform the
following steps before you perform Step 6 of How to Register and Configure Sun Cluster HA for
SAP with Central Instance on page 211. As the Solaris root user, configure the runtime linking
environment to include the SAP exe and load library directories as follows:
1. Configure the runtime linking environment for 32 bit applications.
# crle -u -l /sapmnt/SAPSID/exe
2. Verify that this modification has been applied for 32 bit applications.
# crle
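3. Configure the runtime linking environment for 64 bit applications. By symmetry with Step 1, this is presumably:
# crle -64 -u -l /sapmnt/SAPSID/exe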
4. Verify that this modification has been applied for 64 bit applications.
# crle -64
You need only perform these steps once. If you have not performed these steps, you will not be able
to:
Enable the failover resource group that includes the SAP central instance as described in How to
Register and Configure Sun Cluster HA for SAP with Central Instance on page 211
Enable the failover resource group that includes the SAP application server resource group, step 6
of How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page
212
Enable the scalable resource group that includes the SAP application server resource, as described
in How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page
213
Workaround: Avoiding this problem involves steps specific to the WebLogic Server and its
configuration files. These steps are kept in a single location to ensure that they are kept up to date as
work to fix the problem progresses. To see these workaround steps, go to
http://www.sunsolve.sun.com and search on change request 6182519.
SUNWtcatu
SUNWmcosx
SUNWmcos
SUNWj3dev
SUNWjato
SUNWjhdev
After all packages are installed, start the Java ES installer and proceed with Sun Cluster software
installation.
Remove any Sun Java Web Console localization packages that are installed on the node.
# pkgrm SUNWcmctg SUNWdmctg SUNWemctg SUNWfmctg SUNWhmctg SUNWkmctg SUNWjmctg
# pkgrm SUNWcmcon SUNWdmcon SUNWemcon SUNWfmcon SUNWhmcon SUNWkmcon SUNWjmcon
Install the base Sun Java Web Console package by using the setup utility.
# Product/sunwebconsole/setup
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# cd /
# eject cdrom
Change to the directory that contains the Sun Java Web Console localization packages for the
language that you want.
# cd Product/shared_components/Packages/locale/lang/
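The step that follows is presumably to add the localization packages from that directory, for example:
# pkgadd -d . localization-packages
where localization-packages is a placeholder for the Sun Java Web Console localization package names for your language.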
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
Ensure that you have prepared all cluster nodes to run IPv6 services. These tasks include proper
configuration of network interfaces, server/client application software, name services, and routing
infrastructure. Failure to do so might result in unexpected failures of network applications. For more
information, see your Solaris system-administration documentation for IPv6 services.
On each node, add the following entry to the /etc/system le.
set cl_comm:ifk_disable_v6=0
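One way to append this entry on each node, assuming /etc/system has not already been modified for this setting:
# echo 'set cl_comm:ifk_disable_v6=0' >> /etc/system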
The config_ipv6 utility brings up an IPv6 interface on all cluster interconnect adapters that have a
link-local address. The utility enables proper forwarding of IPv6 scalable service packets over the
interconnects.
Alternately, you can reboot each cluster node to activate the configuration change.
This restriction applies specifically to the installation location of Sun Cluster framework software
and Sun Cluster data-service software. It does not restrict the creation of non-global zones on a
cluster node. In addition, applications can be installed in a non-global zone on a cluster node and
configured to be highly available and managed by Sun Cluster software. For more information, see
Sun Cluster HA for Solaris Containers in Sun Cluster 3.1 8/05 Release Notes for Solaris OS.
How to Install and Configure a Zone in Sun Cluster Data Service for Solaris Containers Guide
Replace this incorrect section with How to Install a Zone and Perform the Initial Internal Zone
Configuration on page 30
Patching the Global Zone and Local Zones in Sun Cluster Data Service for Solaris Containers
Guide
Replace this incorrect section with How to Patch to the Global Zone and Local Zones on page
31.
How to Install a Zone and Perform the Initial Internal Zone Configuration
Perform this task on each node that is to host the zone.
Note For complete information about installing a zone, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Determine the following requirements for the deployment of the zone with Sun Cluster:
Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration
Guide: Solaris Containers-Resource Management and Solaris Zones
If the zone is to run in a failover configuration, ensure that the zone's zone path can be created on the
zone's disk storage.
If the zone is to run in a multiple-masters configuration, omit this step.
a. On the node where you are installing the zone, bring online the resource group that contains the
resource for the zone's disk storage.
# scswitch -z -g solaris-zone-resource-group -h node
b. If the zone's zone path exists on the zone's disk storage, remove the zone path.
The zone's zone path exists on the zone's disk storage if you have already installed the zone on
another node.
2
For more detailed information about installing a zone, see How to Install a Configured Zone in
System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
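As a rough sketch, installing an already configured zone and verifying its state might look like the following, where myzone is an illustrative zone name:
# zoneadm -z myzone install
# zoneadm list -cv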
3
Ensure that the node that you are patching can access the zone paths of all zones that are configured
on the node.
Some zones might be configured to run in a failover configuration. In this situation, bring online, on
the node that you are patching, the resource group that contains the resources for the zone's disk
storage.
# scswitch -z -g solaris-zone-resource-group -h node
After modifying your SAP system's database to refer to a logical host, if you are using SAP DB or
MaxDB as your database, create a .XUSER.62 file in the home directory of the sapsidadm user that
refers to the logical host of the database. Create this .XUSER.62 file using the dbmcli or xuser tools.
Test this change using R3trans -d. This step is necessary so that the SAP instance can find the database
state while starting up.
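One plausible sequence, where the sapsidadm user, the SAPSID database name, the db-logical-host name, and the control user credentials are all illustrative (check the xuser documentation for the exact options in your MaxDB or SAP DB release):
# su - sapsidadm
$ xuser -U DEFAULT -d SAPSID -n db-logical-host -u control,password set
$ R3trans -d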
destination
Specifies the node to which you are copying the /etc/opt/sdb directory and its
contents
destination
Specifies the node to which you are copying the /etc/opt/sdb directory and its
contents
6. Create a link from the /sapdb/LCA/db/wrk directory to the /sapdb/data/wrk directory as follows:
# ln -s /sapdb/data/wrk /sapdb/LCA/db/wrk
How to Install and Configure the Scalable SAP Web Application Server and the SAP J2EE
Engine on page 33
How to Modify the Installation for a Scalable SAP Web Application Server Component on page
33
How to Create a Dependency on the Web Application Server Database on page 34
How to Install and Configure the Scalable SAP Web Application Server
and the SAP J2EE Engine
When performing the procedure How to Install and Configure the SAP Web Application Server and
the SAP J2EE Engine in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris
OS, check http://service.sap.com/ha and the corresponding SAP notes for information about any changes
that you must make to the SAP configuration for it to work with a logical host.
Step 2 of How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine
in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS is missing
information for the J2EE user.
2. If you are using the SAP J2EE engine, install J2EE as an add-on or as a standalone, following these
instructions:
2a. If you are configuring J2EE as a failover data service, install the SAP J2EE engine software on the
same node on which you installed the SAP Web Application Server software.
2b. If you are configuring J2EE as a scalable data service, install the same J2EE instance using the
same instance name on each node where you want the corresponding scalable resource to run.
Note If you are using the J2EE engine, you have installed the J2EE instance on each node. For more
information, see Step 2 of How to Install and Configure the Scalable SAP Web Application Server
and the SAP J2EE Engine on page 33.
5. Update the script $HOME/loghost as follows:
Here are examples depending on the type of instance you are using:
A scalable J2EE instance
if [ "$1" = "J85" ]; then
echo `hostname`;
fi
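For comparison, a failover (central instance) entry would typically echo the logical hostname rather than the physical hostname; the DVEBMGS00 instance name and ci-logical-hostname value below are illustrative:
if [ "$1" = "DVEBMGS00" ]; then
echo ci-logical-hostname;
fi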
-y hsp-central-rs,db-webas-rs
When following Step 2 of the procedure How to Register and Configure an SAP Message Server
Resource in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS, add a
dependency to the Web Application Server database:
2. Create an SAP message server resource in the SAP central services resource group.
# scrgadm -a -j msg-rs -g central-rg -t SUNW.sapscs \
-x SAP_SID=scs-system-ID \
-x SAP_Instance_Number=scs-instance-number \
-x SAP_Instance_Name=scs-instance-name \
-x Msg_Server_Port=msg-server-port \
-x Scs_Startup_Script=scs-server-startup-script \
-x Scs_Shutdown_Script=scs-server-shutdown-script \
-y Resource_Dependencies=hsp-central-rs,db-webas-rs
-y hsp-central-rs,db-webas-rs
Release Notes
The following subsections describe omissions or errors discovered in the Sun Cluster 3.1 8/05 Release
Notes for Solaris OS.
Volume managers
On Solaris 8 - Solstice DiskSuite 4.2.1 and (SPARC only) VERITAS Volume Manager 3.5,
4.0, and 4.1. Also, VERITAS Volume Manager components delivered as part of Veritas
Storage Foundation 4.0 and 4.1.
On Solaris 9 - Solaris Volume Manager and (SPARC only) VERITAS Volume Manager 3.5,
4.0, and 4.1. Also, VERITAS Volume Manager components delivered as part of Veritas
Storage Foundation 4.0 and 4.1.
On Solaris 10 - Solaris Volume Manager and (SPARC only) VERITAS Volume Manager 4.1.
Also, VERITAS Volume Manager components delivered as part of Veritas Storage
Foundation 4.1.
File systems
On Solaris 8 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only)
VERITAS File System 3.5, 4.0, and 4.1. Also, VERITAS File System components delivered
as part of Veritas Storage Foundation 4.0 and 4.1.
On Solaris 9 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only)
VERITAS File System 3.5, 4.0, and 4.1. Also, VERITAS File System components delivered
as part of Veritas Storage Foundation 4.0 and 4.1.
On Solaris 10 - Solaris UFS, (SPARC only) Sun StorEdge QFS, and (SPARC only)
VERITAS File System 4.1. Also, VERITAS File System components delivered as part of
Veritas Storage Foundation 4.1.
C H A P T E R   2
Sun Cluster 3.1 9/04 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 9/04
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.
Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.
TABLE 2-1 Sun Cluster 3.1 9/04 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
April 2006
January 2006
TABLE 2-2 Sun Cluster 3.1 9/04 Release Notes Supplement Revision Record: 2005
Revision Date
New Information
November 2005
Change Request 6220218 for VERITAS Storage Foundation 4.0 is now fixed by a patch.
See Bug ID 6220218 on page 50.
October 2005
September 2005
Support is added for VxVM 4.1 and VxFS 4.1. See SPARC: Support for VxVM 4.1 and
VxFS 4.1 on page 39.
July 2005
Documented steps for using hardware RAID on internal drives for servers providing
internal hardware disk mirroring (integrated mirroring). See Mirroring Internal Disks
on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring
on page 39.
June 2005
Added restriction on placement of SCI cards in hot swap PCI+ (hsPCI+) I/O
assemblies. See Restriction on SCI Card Placement on page 46.
Bug ID 6252555, problems with quorum reservations and patch 113277-28 or later. See
Bug ID 6252555 on page 50.
May 2005
The VERITAS Storage Foundation 4.0 standard license enables PGR functionality,
causing cluster nodes to panic. See Bug ID 6220218 on page 50.
Added restriction on quorum devices when using storage-based data replication. See
Storage-Based Data Replication and Quorum Devices on page 47.
Sun Cluster Support for Oracle Real Application Clusters supports the use of Sun
StorEdge QFS with Oracle 10g Real Application Clusters. For more information, see
SPARC: Support for Sun StorEdge QFS With Oracle 10g Real Application Clusters
on page 43
March 2005
Process accounting log files on global file systems cause the node to hang. See Bug ID
6210418 on page 50.
Additional requirements to support IPv6 network addresses. See IPv6 Support and
Restrictions for Public Networks on page 51 and IPv6 Requirement for the Cluster
Interconnect on page 51.
Correction to upgrade procedures for Sun Cluster HA for SAP liveCache 3.1. See
Correction to the Upgrade of Sun Cluster HA for SAP liveCache on page 52.
January 2005
SCSI reset errors when using Cauldron-S and 3310 RAID arrays. See Bug ID 6196936
on page 49.
Support for jumbo frames with Solaris 8 limited to clusters using Oracle RAC. See Bug
ID 4333241 on page 47.
December 2004
The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information,
see Support for Oracle 10g Real Application Clusters on the SPARC Platform
on page 57.
Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.
Restrictions apply to Sun Cluster installations on x86 based systems. See Bug ID
5066167 on page 60.
You will receive an error if you try to re-encapsulate root on a device that was
previously encapsulated. See Bug ID 4804696 on page 47.
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
When Sun Cluster is upgraded from a previous version to Sun Cluster 3.1 9/04, the Sun
Cluster support packages for Oracle Real Application Clusters are not upgraded. See
Bug ID 5107076 on page 48.
When using scinstall to upgrade Sun Cluster data services for Sun Cluster 3.1 9/04
software, Sun Cluster will issue error messages complaining about missing Solaris_10
Packages directories. See Bug ID 5109935 on page 49.
New Features
In addition to features documented in the Sun Cluster 3.1 9/04 Release Notes for Solaris OS, this
release now includes support for the following features.
Depending on the version of the Solaris operating system you use, you might need to install a patch
to correct change request 5023670 and ensure the proper operation of internal mirroring. Check the
PatchPro site to nd the patch for your server.
The best way to set up hardware disk mirroring is to perform RAID configuration after you install
the Solaris OS and before you configure multipathing. If you need to change your mirroring
configuration after you have established the cluster, you must perform some cluster-specific steps to
clean up the device IDs.
For specifics about how to configure your server's internal disk mirroring, refer to the documents
that shipped with your server and the raidctl(1M) man page.
Install your cluster hardware as instructed in your server and storage array documentation.
Install the Solaris operating system, as instructed in the Sun Cluster installation guide.
As a part of this procedure, you will check the PatchPro web site and install any necessary patches.
-c c1t0d0 c1t1d0
Creates the mirror of primary disk to the mirror disk. Enter the name of your
primary disk as the first argument. Enter the name of the mirror disk as the
second argument.
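The option description above presumably belongs to the raidctl command that creates the hardware mirror, for example:
# raidctl -c c1t0d0 c1t1d0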
Continue with installing and configuring your multipathing software, if necessary, as instructed in
the Sun Cluster installation guide.
Install the Sun Cluster software, as instructed in the Sun Cluster installation guide.
How to Configure Internal Disk Mirroring After the Cluster Is Established
Before You Begin
This procedure assumes that you have already installed your hardware and software and have
established the cluster.
Check the PatchPro site for any patches required for using internal disk mirroring.
PatchPro is a patch-management tool that eases the selection and download of patches required for installation
or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for
Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode
tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially
useful for obtaining all of the latest patches, not just the high availability and security patches.
To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun
Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the
PatchPro tool to describe your cluster configuration and download the patches.
For third-party firmware patches, see the SunSolveSM Online site at
http://sunsolve.ebay.sun.com.
1
b. If necessary, move all resource groups and device groups off the node.
# scswitch -S -h fromnode
2
-c c1t0d0 c1t1d0
Creates the mirror of primary disk to the mirror disk. Enter the name of your
primary disk as the first argument. Enter the name of the mirror disk as the
second argument.
-R /dev/rdsk/c1t0d0
Updates the cluster's record of the device IDs for the primary disk. Enter the
name of your primary disk as the argument.
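A sketch of the two commands that these arguments appear to describe, using the example disk names from above:
# raidctl -c c1t0d0 c1t1d0
# scdidadm -R /dev/rdsk/c1t0d0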
Confirm that the mirror has been created and only the primary disk is visible to the cluster.
# scdidadm -l
The command lists only the primary disk as visible to the cluster.
6
If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on
the primary disk, recreate the state database replicas.
# metadb -afc 3 /dev/rdsk/c1t0d0s4
If you moved device groups off the node in Step 1, move all device groups back to the node.
Perform the following step for each device group you want to return to the original node.
# scswitch -z -D devicegroup -h nodename
In this command, devicegroup is one or more device groups that are returned to the node.
9
If you moved resource groups off the node in Step 1, move all resource groups back to the node.
# scswitch -z -g resourcegroup -h nodename
b. If necessary, move all resource groups and device groups off the node.
# scswitch -S -h fromnode
2
-d c1t0d0
Deletes the mirror of primary disk to the mirror disk. Enter the name of your primary
disk as the argument.
-R /dev/rdsk/c1t0d0
-R /dev/rdsk/c1t1d0
Updates the cluster's record of the device IDs. Enter the names of your disks
separated by spaces.
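A sketch of the corresponding commands for removing the mirror and updating the device IDs, using the example disk names from above:
# raidctl -d c1t0d0
# scdidadm -R /dev/rdsk/c1t0d0
# scdidadm -R /dev/rdsk/c1t1d0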
Confirm that the mirror has been deleted and that both disks are visible.
# scdidadm -l
If you are using Solstice DiskSuite or Solaris Volume Manager and if the state database replicas are on
the primary disk, recreate the state database replicas.
# metadb -c 3 -ag /dev/rdsk/c1t0d0s4
If you moved device groups off the node in Step 1, return the device groups to the original node.
# scswitch -z -D devicegroup -h nodename
If you moved resource groups off the node in Step 1, return the resource groups and device groups to
the original node.
If you are using Sun Cluster 3.2, use the following command:
Perform the following step for each resource group you want to return to the original node.
# scswitch -z -g resourcegroup -h nodename
If you are using Sun Cluster 3.1, use the following command:
SPARC: Requirements for Using the Sun StorEdge QFS Shared File
System
You can store all of the files that are associated with Oracle Real Application Clusters on the Sun
StorEdge QFS shared file system.
For information about how to create a Sun StorEdge QFS shared file system, see the following
documentation for Sun StorEdge QFS:
Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide
Distribute these files among several file systems as explained in the subsections that follow.
Sun StorEdge QFS File Systems for RDBMS Binary Files and Related Files
For RDBMS binary files and related files, create one file system in the cluster to store the files.
The RDBMS binary files and related files are as follows:
Sun StorEdge QFS File Systems for Database Files and Related Files
For database files and related files, determine whether you require one file system for each database
or multiple file systems for each database.
For simplicity of configuration and maintenance, create one file system to store these files for all
Oracle Real Application Clusters instances of the database.
To facilitate future expansion, create multiple file systems to store these files for all Oracle Real
Application Clusters instances of the database.
Note If you are adding storage for an existing database, you must create additional file systems for
the storage that you are adding. In this situation, distribute the database files and related files among
the file systems that you will use for the database.
Each file system that you create for database files and related files must have its own metadata server.
For information about the resources that are required for the metadata servers, see SPARC:
Resources for the Sun StorEdge QFS Shared File System on page 45.
The database files and related files are as follows:
Data files
Control files
Online redo log files
Archived redo log files
Flashback log files
Recovery files
Oracle cluster registry (OCR) files
Oracle CRS voting disk
SPARC: Resources for the Sun StorEdge QFS Shared File System
If you are using the Sun StorEdge QFS shared file system, answer the following questions:
Which resources will you create to represent the metadata server for the Sun StorEdge QFS
shared file system?
One resource is required for each Sun StorEdge QFS metadata server.
Create the resources for the metadata servers in separate resource groups.
Set the resource group for the file system that contains the Oracle CRS voting disk to depend
on the other resource groups.
For more information, see the following documentation for Sun StorEdge QFS:
Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide
Use the answers to these questions when you perform the procedure in Registering and Configuring
Oracle RAC Server Resources in Sun Cluster Data Service for Oracle Real Application Clusters Guide
for Solaris OS.
3644481
3976437
Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to shared
disks that are available in the cluster.
The following example lists output from the scdidadm -L command.
# scdidadm -L
1    phys-schost-1:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
1    phys-schost-2:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
2    phys-schost-1:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2
2    phys-schost-2:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2
Use the DID that the scdidadm output identies to set up the disk in the ASM disk group.
For example, the scdidadm output might identify that the raw DID that corresponds to the disk is d2.
In this instance, use the /dev/did/rdsk/d2sN raw device, where N is the slice number.
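Before you add the device to the ASM disk group, the Oracle software owner typically needs access to the raw DID device. A minimal sketch, where the oracle user, the dba group, and slice s0 are illustrative:
# chown oracle:dba /dev/did/rdsk/d2s0
# chmod 660 /dev/did/rdsk/d2s0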
Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 9/04 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 9/04 release.
Bug ID 4333241
Problem Summary: System deadlocks when using jumbo frames with Solaris 8 and failover or
scalable data services.
Workaround: Support of jumbo frames with Solaris 8 is limited to clusters using Oracle Real
Application Clusters only. Solaris 9 can be used with all types of data services
Bug ID 4804696
Problem Summary: If an attempt is made by VxVM to re-encapsulate root on a device that was
previously encapsulated, an error can result due to not being able to create the rootdg:
scvxinstall: Failed to create rootdg using "vxdg init root".
# vxdg init rootdg
vxvm:vxdg: ERROR: Disk group rootdg: cannot create: Disk group exists and is imported
# vxdg destroy rootdg
vxvm:vxdg: ERROR: Disk group rootdg: No such disk group is imported
touch /etc/vx/reconfig.d/state.d/install-db
ps -ef | grep vxconfigd
kill -9 vxconfigd-process-ID
vxconfigd -m disable
Bug ID 5107076
Problem Summary: When Sun Cluster software is upgraded from a previous version to Sun Cluster
3.1 9/04 release, the Sun Cluster support packages for Oracle Real Application Clusters are not
upgraded.
Workaround: When you upgrade Sun Cluster software to Sun Cluster 3.1 9/04 release, you must
remove the Sun Cluster support packages for Oracle Real Application Clusters from the Sun Cluster
system and add the Sun Cluster support packages from the Sun Java Enterprise System Accessory
CD Volume 3.
How to Replace the Sun Cluster Support Packages for Oracle Real
Application Clusters
Note If you have edited the configuration files /opt/SUNWudlm/etc/udlm.conf or
/opt/SUNWcvm/etc/cvm.conf, any edits to adjust timeouts will be lost and must be reapplied after
installing the new packages using the procedure Tuning Sun Cluster Support for Oracle Real
Application Clusters in Sun Cluster Data Service for Oracle Real Application Clusters Guide for
Solaris OS. To set up the RAC framework resource group, refer to Registering and Configuring the
RAC Framework Resource Group in Sun Cluster Data Service for Oracle Real Application Clusters
Guide for Solaris OS.
Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.
Become superuser.
Change the current working directory to the directory that contains the packages for the Real
Application Clusters framework resource group.
This directory depends on the version of the Solaris Operating System that you are using.
On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters, transfer
the contents of the required software packages from the CD-ROM to the node.
The required software packages depend on the storage management scheme that you are using for
the Oracle Real Application Clusters database.
If you are using Solaris Volume Manager for Sun Cluster, run the following commands:
If you are using VxVM with the cluster feature, run the following commands:
# pkgrm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm SUNWscucm
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
If you are using hardware RAID support, run the following commands:
# pkgrm SUNWudlm SUNWudlmr SUNWschwr SUNWscucm
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
If you are using Sun StorEdge QFS shared le system with hardware RAID support, run the
following commands:
# pkgrm SUNWudlm SUNWudlmr SUNWschwr SUNWscucm
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
Bug ID 5109935
Problem Summary: When using scinstall to upgrade Sun Cluster data services for the Sun Cluster
3.1 9/04 release, Sun Cluster will issue error messages complaining about missing Solaris_10
Packages directories.
Workaround: These error messages can be safely ignored.
Bug ID 6196936
Problem Summary: SCSI reset errors when using X4422A Sun Dual Gigabit Ethernet + Dual SCSI
PCI Adapter cards in Sun Fire V40z PCI slots 2 and 3.
Workaround: Do not use X4422A cards in both slots 2 and 3.
Bug ID 6198608
Problem Summary: An underlying firmware problem caused by issuing an
SCMD_READ_DEFECT_LIST (0x37) to an SE 3510 disk causes clusters to panic when run with Explorer
versions 4.3 or 4.3.1 (these versions call diskinfo -g). The Sun Cluster sccheck command in Sun
Cluster 3.1 (10/03) through Sun Cluster 3.1 (9/04) allows Explorer to run the command that causes
the panic. Java Enterprise System R3 also includes Explorer 4.3.1. This SCSI command can be issued
by either using format (defect->grown option) or by running Explorer 4.3 and 4.3.1.
Workaround: Release 4.1 of the SE 3510 firmware contains the fix to the problem. Sun Cluster 3.1
(5/05) will include a workaround to the problem when it occurs by using sccheck. There is also a
workaround for the problem in Explorer 4.4. EMC CLARiiON arrays have also experienced this
problem. Contact EMC to obtain the appropriate firmware fix.
Bug ID 6210418
Problem Summary: If a process accounting log is located on a cluster file system or on an
HAStoragePlus failover file system, a switchover would be blocked by writes to the log file. This
would then cause the node to hang.
Workaround: Use only a local file system to contain process accounting log files.
Bug ID 6220218
Problem Summary: The standard license for VERITAS Storage Foundation 4.0 is enabling the
VxVM Persistent Group Reservations (PGR) functionality, making the product incompatible with
Sun Cluster software. This incompatibility might bring down the cluster by causing the cluster nodes
to panic.
Workaround: Download from http://www.sunsolve.com Patch 120585 (revision -01 or higher)
and follow the Special Install Instructions at the end of the patch description to apply the patch to
your cluster.
Bug ID 6252555
Problem Summary: The sd driver patches 113277-28 and higher break quorum reservations,
resulting in a node panic.
Workaround: Do not use patch 113277-28 or later, until further notice, if the target cluster uses one
of the following arrays as shared storage:
and if one or more volumes within the array is visible to more than 2 nodes of a Sun Cluster 3 cluster.
Sun Alert 101805 provides more information about this issue.
Sun Cluster software does not support IPv6 addresses on the public network if the private
interconnect uses SCI adapters.
On Solaris 9 OS, Sun Cluster software supports IPv6 addresses for both failover and scalable data
services.
On Solaris 8 OS, Sun Cluster software supports IPv6 addresses for failover data services only.
How to Upgrade Sun Cluster HA for SAP liveCache to Sun Cluster 3.1
1
Go to a node that will host the Sun Cluster HA for SAP liveCache resource.
Man Pages
The following subsections describe omissions or new information that will be added to the next
publication of the man pages.
Data Service
Upgrade Name
pax
apache
tomcat
wls
bv
dhcp
dns
mys
sps
netbackup
nfs
oracle
ebs
smb
sap
sapdb
livecache
sapwebas
siebel
container
sge
s1as
hadb
s1mq
iws
saa
sag
sybase
mqs
mqi
9ias
oracle_rac
C H A P T E R   3
Sun Cluster 3.1 4/04 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 4/04
Release Notes for Solaris OS that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.
Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.
TABLE 3-1 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
April 2006
January 2006
Correction to procedures for mirroring the root disk. See CR 6341573 on page 61.
TABLE 3-2 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2005
Revision Date
New Information
September 2005
Support is added for VxVM 4.1 and VxFS 4.1. See SPARC: Support for VxVM 4.1 and
VxFS 4.1 on page 39 in Chapter 2.
June 2005
Added restriction on placement of SCI cards in hot swap PCI+ (hsPCI+) I/O
assemblies. See Restriction on SCI Card Placement on page 46.
Bug ID 6252555, problems with quorum reservations and patch 113277-28 or later. See
Bug ID 6252555 on page 50.
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 57.
May 2005
Added restriction on quorum devices when using storage-based data replication. See
Storage-Based Data Replication and Quorum Devices on page 47.
March 2005
Bug ID 6210418, Process accounting log les on global le systems cause the node to
hang. See Bug ID 6210418 on page 50 in Chapter 2.
January 2005
Bug ID 6196936, SCSI reset errors when using Cauldron-S and 3310 RAID arrays. See
Bug ID 6196936 on page 49.
TABLE 3-3 Sun Cluster 3.1 4/04 Release Notes Supplement Revision Record: 2004
Revision Date
New Information
December 2004
Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.
November 2004
The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information,
see Support for Oracle 10g Real Application Clusters on the SPARC Platform
on page 57.
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
September 2004
Restrictions apply to Sun Cluster installations on x86 based systems. See Bug ID
5066167 on page 60.
July 2004
Restrictions apply to the compilation of data services that are written in C++. See
Compiling Data Services That Are Written in C++ on page 59.
June 2004
Information about support for the Sun StorEdge QFS file system added. See Support
for the Sun StorEdge QFS File System on page 57.
New Features
In addition to features documented in the Sun Cluster 3.1 4/04 Release Notes for Solaris OS, this
release now includes support for the following features.
Sun StorEdge QFS and Sun StorEdge SAM-FS Release Notes, part number 817-4094-10
Sun StorEdge QFS and Sun StorEdge SAM-FS Installation and Configuration Guide, part number
817-4092-10
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide, part number
817-4091-10
3.1
115063-04
3.1
115062-04
3.0
114176-05
3.0
111857-09
113801-11
113800-11
Use Oracle 10g Real Application Clusters version 10.1.0.3 with the following Oracle patches:
3923542
3849723
3714210
3455036
Installing Oracle 10g Cluster Ready Services (CRS) With Sun Cluster 3.0
During the installation of CRS, you are prompted in the Cluster Configuration screen for the private
name or private IP address for each node. If you are using CRS with Sun Cluster 3.0, you must specify
the private IP address that Sun Cluster assigns to the node. CRS uses this address to interconnect the
nodes in the cluster.
Each node in the cluster has a different private address. To determine the private address of a node,
determine the private address that is plumbed on interface lo0:1.
# ifconfig lo0:1
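If you want only the address itself, a small sketch such as the following prints the inet address that is plumbed on lo0:1 (standard Solaris ifconfig output is assumed):
# ifconfig lo0:1 | awk '/inet/ {print $2}'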
Note You must not store data files, control files, online redo log files, or Oracle recovery files on the
cluster file system.
If you are using the cluster file system with Sun Cluster 3.1, consider increasing the desired number
of secondary nodes for device groups. By increasing the desired number of secondary nodes for
device groups, you can improve the availability of your cluster. To increase the desired number of
secondary nodes for device groups, change the numsecondaries property. For more information, see
the section about multiported disk device groups in Sun Cluster Concepts Guide for Solaris OS.
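As a hedged example, the numsecondaries property can be changed with scconf; the device group name oracle-dg below is illustrative (see the scconf(1M) man page for the exact syntax in your release):
# scconf -c -D name=oracle-dg,numsecondaries=2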
Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 4/04 Release Notes for Solaris
OS, the following known problems affect the operation of the Sun Cluster 3.1 4/04 release.
Bug ID 5095543
Problem Summary: When using Sun StorEdge 6130 arrays in your cluster, you cannot connect both
host ports of the same controller to the same switch.
Workaround: Connect only one controller host port to a given switch. See Figure 3-1 for an example
of correct cabling.
[Figure 3-1: Node 1 and Node 2 each connect to two switches, and each switch connects to only one host port on the controller module.]
Bug ID 5066167
Problem Summary: When installing Sun Cluster Software on x86 based systems, you cannot use
autodiscovery.
Workaround: When the installer asks Do you want to use autodiscovery (yes/no) [yes]? answer
no and specify the cluster transport yourself.
CR 6341573
Problem Summary: In the chapters for Solstice DiskSuite/Solaris Volume Manager and VxVM, the
procedure for mirroring the root disk instructs you to skip enabling the localonly property if the
mirror is not connected to multiple nodes. This is incorrect.
Workaround: Always enable the localonly property of the mirror disk, even if the disk does not
have more than one node directly attached to it.
# scconf -c -D name=rawdisk-groupname,localonly=true
C H A P T E R   4

Sun Cluster 3.1 10/03 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 10/03
Release Notes that shipped with the Sun Cluster 3.1 product. These online release notes provide
the most current information on the Sun Cluster 3.1 product. This chapter includes the following
information.
Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.
TABLE 41 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
April 2006
January 2006
Correction to procedures for mirroring the root disk. See CR 6341573 on page 61.
TABLE 42 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2005
Revision Date
New Information
September 2005 Support is added for VxVM 4.1 and VxFS 4.1. See SPARC: Support for VxVM 4.1 and VxFS
4.1 on page 39 in Chapter 2.
June 2005
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and VxFS
4.0 on page 57 in Chapter 3.
May 2005
March 2005
Process accounting log files on global file systems cause the node to hang. See Bug ID
6210418 on page 50 in Chapter 2.
TABLE 43 Sun Cluster 3.1 10/03 Release Notes Supplement Revision Record: 2003/2004
Revision Date
New Information
December 2004
Restriction against rolling upgrade and VxVM. See Restriction on Rolling Upgrade
and VxVM on page 66.
November 2004
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
July 2004
Restrictions apply to the compilation of data services that are written in C++. See
Compiling Data Services That Are Written in C++ on page 65.
March 2004
scsetup is not able to add the first adapter to a single-node cluster. See Bug ID
4983696 on page 67.
Additional procedures to perform when you add a node to a single-node cluster. See
Software Installation Guide on page 67.
Troubleshooting tip to correct stack overflow with VxVM disk device groups. See
Correcting Stack Overflow Related to VxVM Disk Device Groups on page 69.
Restriction against using Live Upgrade. See Live Upgrade is Not Supported on page
69.
February 2004
Instruction to set the localonly property on any shared disks that are used to create a
root disk group on nonroot disks. See Setting the localonly Property For a rootdg
Disk Group on a Nonroot Disk on page 69.
Restriction against creating a swap file using global devices. See Create swap Files Only
on Local Disks on page 70.
Lack of support for Sun StorEdge 3310 JBOD array in a split-bus configuration has
been fixed. See BugId 4818874 on page 114 for details.
Conceptual material and example configurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.
January 2004
Added a brief description of the newly supported 3-room, 2-node campus cluster. See
Additional Campus Cluster Conguration Examples in Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
November 2003
Procedure to upgrade Sun Cluster 3.1 10/03 software on a cluster that runs Sun
StorEdge Availability Suite 3.1 .
New Features
In addition to features documented in the Sun Cluster 3.1 10/03 Release Notes, this release now
includes support for the following features.
There are no new features at this time.
In this example, d11 is the device ID and s7 the slice of device d11.
2. Identify the existing quorum device, if any.
# /usr/cluster/bin/scstat -q
-- Quorum Votes by Device --

                    Device Name          Present  Possible  Status
                    -----------          -------  --------  ------
  Device votes:     /dev/did/rdsk/d15s2  1        1         Online
Quorum devices do not use any of the partition space. The suffix s2 is displayed for syntax
purposes only. Although they appear to be different, both the Sun StorEdge Availability Suite
configuration disk (for example, d11s7) and the Sun Cluster quorum disk (for example, d11s2)
refer to the same disk.
4. Unconfigure the original quorum device.
# /usr/cluster/bin/scconf -r -q globaldev=/dev/did/rdsk/d15s2
Note If you are installing Sun Cluster software for the first time, use a slice on the quorum disk for
Known Problems
In addition to known problems that are documented in Sun Cluster 3.1 10/03 Release Notes, the
following known problems affect the operation of the Sun Cluster 3.1 10/03 release.
Bug ID 4848612
Problem Summary: When all private interconnections fail in a two-node cluster that is running
Oracle Real Application Clusters with VxVM, the first node might panic with one of the following
messages:
The other node occasionally panics because the cluster reconfiguration step cvm return times out.
Workaround: Edit the default /opt/SUNWcvm/etc/cvm.conf file to increase the timing parameter
cvm.return_timeout from 40 seconds to 160 seconds. For further inquiries, contact Brian Reynard,
Software Engineering Manager OS Sustaining Escalations (Sun Cluster) at brian.reynard@sun.com.
Bug ID 4983696
Problem Summary: If scsetup is used in an attempt to add the first adapter to a single-node cluster,
the following error message results: Unable to determine transport type.
Workaround: Create an empty install-db file, then configure at least the first adapter manually:
# scconf -a -A trtype=type,name=nodename,node=nodename
After the rst adapter is congured, further use of scsetup to congure the interconnects works as
expected.
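As an illustration only, the command for a hypothetical node phys-schost-1 with an hme1 adapter that uses the dlpi transport type might look like the following:
# scconf -a -A trtype=dlpi,name=hme1,node=phys-schost-1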
From the existing cluster node, determine whether two cluster interconnects already exist.
You must have at least two cables or two adapters configured.
# scconf -p | grep cable
# scconf -p | grep adapter
If the output shows configuration information for two cables or for two adapters, skip to Step 3.
If the output shows no configuration information for either cables or adapters, or shows
configuration information for only one cable or adapter, proceed to Step 2.
The command output should show configuration information for at least two cluster
interconnects.
Setting the localonly Property For a rootdg Disk Group on a Nonroot Disk
Enable the localonly property of the raw-disk device group for each shared disk in the root disk
group.
When the localonly property is enabled, the raw-disk device group is used exclusively by the node
in its node list. This usage prevents unintentional fencing of the node from the device that is used by
the root disk group if that device is connected to multiple nodes.
# scconf -c -D name=dsk/dN,localonly=true
For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
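To confirm the setting afterward, one possible check is to filter the verbose scconf output for the disk in question; this is a sketch only, so substitute the actual device name for dN:
# scconf -pvv | grep dsk/dN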
C H A P T E R   5

Sun Cluster Data Services 3.1 10/03 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Data
Service 5/03 Release Notes that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.
Revision Record
The following table lists the information contained in this chapter and provides the revision date for
this information.
TABLE 51 Sun Cluster Data Services 3.1 10/03 Release Notes Supplement Revision Record: 2003/2004
Revision Date
New Information
December 2004
Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.
November 2004
The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information,
see Support for Oracle 10g Real Application Clusters on the SPARC Platform
on page 57.
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
May 2004
The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 10/03 now
supports Oracle 10g. See Support for Oracle 10g on page 72.
February 2004
Bug ID 4818874, lack of support for Sun StorEdge 3310 JBOD array in a split-bus
configuration, has been fixed. See BugId 4818874 on page 114 for details.
Conceptual material and example configurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.
December 2003
Problem of using NIS for naming services in a cluster that runs Sun Cluster HA for SAP
liveCache. See Sun Cluster HA for liveCache nsswitch.conf requirements for passwd
make NIS unusable (4904975) on page 75.
November 2003
Procedure and examples to upgrade data services that cannot be upgraded by using the
scinstall utility. See Some Data Services Cannot be Upgraded by Using the
scinstall Utility on page 74.
Support for WebLogic Server 8.x. See WebLogic Server Version 8.x on page 73.
New Features
In addition to features documented in the Sun Cluster 3.1 Data Service 5/03 Release Notes, this release
now includes support for the following features.
A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
never mounted.
A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
mounts the file system where the Oracle binary files are installed.
Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
such a configuration, the Oracle binary files are installed on a highly available local file system. In
this situation, the messages are displayed on the console of the node where the Oracle installation
was run.
To prevent these error messages, remove the entry for the Oracle cssd daemon from the
/etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove
the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null
Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.
Caution If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.
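One minimal way to apply this change is to edit /etc/inittab on the node, delete the line shown above, and then have init reread the file. The init q step is standard Solaris practice rather than a requirement stated in this supplement:
# vi /etc/inittab    (delete the h1:23:respawn:/etc/init.d/init.cssd line)
# init q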
Known Problems
In addition to known problems that are documented in the Sun Cluster 3.1 Data Service 5/03 Release
Notes, the following known problems affect the operation of the Sun Cluster 3.1 Data Services 10/03
release.
Apache Tomcat
DHCP
mySQL
Oracle E-Business Suite
Samba
SWIFTAlliance Access
WebLogic Server
WebSphere MQ
WebSphere MQ Integrator
If you plan to upgrade a data service for an application in the preceding list, replace Step 5 in the
procedure Upgrading to Sun Cluster 3.1 10/03 Software (Rolling) in Sun Cluster 3.1 10/03 Software
Installation Guide with the steps that follow. Perform these steps for each node where the data
service is installed.
1. Remove the software package for the data service that you are upgrading.
# pkgrm pkg-inst
pkg-inst specifies the software package name for the data service that you are upgrading, as listed in
the following table.
Application                 Data Service Software Package
Apache Tomcat               SUNWsctomcat
DHCP                        SUNWscdhc
mySQL                       SUNWscmys
Oracle E-Business Suite     SUNWscebs
Samba                       SUNWscsmb
SWIFTAlliance Access        SUNWscsaa
WebLogic Server             SUNWscwls, SUNWfscwls, SUNWjscwls
WebSphere MQ                SUNWscmqs
WebSphere MQ Integrator     SUNWscmqi
Install the software package for the version of the data service to which you are upgrading.
To install the software package, follow the instructions in the Sun Cluster documentation for the data
service that you are upgrading. This documentation is available in the Sun Cluster 3.1 10/03 Data
Services Collection at http://docs.sun.com/db/coll/573.11.
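As an illustration only, upgrading the Sun Cluster HA for Apache Tomcat data service might look like the following; the pkgadd source directory shown here is hypothetical and depends on your installation media:
# pkgrm SUNWsctomcat
# pkgadd -d /cdrom/cdrom0/components SUNWsctomcat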
The entry in the /etc/nsswitch.conf file for the passwd database should be as follows:
passwd: files nis [TRYAGAIN=0]
C H A P T E R   6

Sun Cluster 3.1 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Release
Notes that shipped with the Sun Cluster 3.1 product. These online release notes provide the most
current information on the Sun Cluster 3.1 product. This chapter includes the following
information.
Revision Record
The following tables list the information contained in this chapter and provide the revision date for
this information.
TABLE 61 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
January 2006
Correction to procedures for mirroring the root disk. See CR 6341573 on page 61.
TABLE 62 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2005
Revision Date
New Information
September 2005
Support is added for VxVM 4.1 and VxFS 4.1. See SPARC: Support for VxVM 4.1 and
VxFS 4.1 on page 39 in Chapter 2.
June 2005
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 57 in Chapter 3.
March 2005
Process accounting log les on global le systems cause the node to hang. See Bug ID
6210418 on page 50 in Chapter 2.
TABLE 63 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2004
Revision Date
New Information
November 2004
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
July 2004
Restrictions apply to the compilation of data services that are written in C++. See
Compiling Data Services That Are Written in C++ on page 81.
March 2004
Troubleshooting tip to correct stack overow with VxVM disk device groups. See
Correcting Stack Overow Related to VxVM Disk Device Groups on page 84.
Restriction against using the Live Upgrade method to upgrade Solaris software. See
Step 5 of How to Upgrade the Solaris Operating Environment in Appendix F.
February 2004
Lack of support for Sun StorEdge 3310 JBOD array in a split-bus conguration has
been xed. See BugId 4818874 on page 114 for details.
Conceptual material and example congurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.
January 2004
Added a brief description of the newly supported 3room, 2node campus cluster. See
Chapter 7, Campus Clustering With Sun Cluster Software, in Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
TABLE 64 Sun Cluster 3.1 Release Notes Supplement Revision Record: 2003
Revision Date
New Information
December 2003
Sun StorEdge 6120 storage arrays in dual-controller congurations and Sun StorEdge
6320 storage systems were limited to four nodes and 16 LUNs. The bug was xed (see
Bug ID 4840853 on page 82).
November 2003
The onerror=lock and onerror=umount mount options are not supported on cluster
le systems. See Bug ID 4781666 on page 82.
To upgrade a cluster that uses mediators, you must remove the mediators before you
upgrade to Sun Cluster 3.1 software, then recreate the mediators after the cluster
software is upgraded. See Upgrading a Cluster That Uses Mediators on page 84.
Additional information about the restriction on IPv6 addressing. See Clarication of
the IPv6 Restriction on page 82.
Logical volumes are not supported with the Sun StorEdge 3510 FC storage array. See
the Preface of the Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array
Manual for more information.
October 2003
Certain RPC program numbers are reserved for Sun Cluster software use. See
Reserved RPC Program Numbers on page 81.
Clarication about which name to use for disk slices when you create state database
replicas. See How to Create State Database Replicas on page 85.
Upgrade from Sun Cluster 3.0 software on the Solaris 8 Operating System to Sun
Cluster 3.1 software on the Solaris 9 Operating System removes dual-string mediators.
See Bug ID 4920156 on page 83.
Updated VxVM Dynamic Multipathing (DMP) restrictions. See Dynamic
Multipathing (DMP) on page 111 for more information.
August 2003
Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a
subset of cluster nodes. See Sun Cluster Support for Oracle Real Application Clusters
on a Subset of Cluster Nodes on page 89.
July 2003
Revised support for Multiple Masters conguration of Sun Cluster HA for Sun ONE
Application Server. See Sun Cluster 3.1 Data Service for Sun ONE Application Server
on page 91.
June 2003
Procedures to upgrade a Sun Cluster 3.0 conguration to Sun Cluster 3.1 software,
including upgrading from Solaris 8 to Solaris 9 software. See Appendix F.
Modications to make to the /etc/system le to correct changes made by VxFS
installation. See Changing Quorum Device Connectivity on page 81.
Procedures to support Sun StorEdge 6320 storage systems. See Chapter 1, Installing
and Maintaining a Sun StorEdge 6320 System, in Sun Cluster 3.0-3.1 With Sun
StorEdge 6320 System Manual for Solaris OS.
Sun StorEdge 6120 storage arrays in dual-controller congurations and Sun StorEdge
6320 storage systems were limited to four nodes and 16 LUNs. (See Bug ID 4840853
on page 82.)
This restriction has been removed for clusters using the 3.1 rmware.
Procedures to support Sun StorEdge 3510 FC storage device. See the Sun Cluster 3.0-3.1
With Sun StorEdge 3510 or 3511 FC RAID Array Manual .
Sun StorEdge 3510 FC storage arrays are no longer limited to 256 LUNs per channel.
See Bug ID 4867584 on page 82.
Sun StorEdge 3510 FC storage arrays are limited to one node per channel. See Bug ID
4867560 on page 83.
Requirements for storage topologies. See Storage Topologies Replaced by New
Requirements on page 112.
Relaxed requirements for shared storage. See Shared Storage Restriction Relaxed
on page 112.
New Features
In addition to features documented in the Sun Cluster 3.1 Release Notes, this release now includes
support for the following features.
100141
100142
100248
These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and rgmd,
respectively. If the RPC service you install also uses one of these program numbers, you must change
that RPC service to use a different program number.
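To check whether another RPC service on a node is already registered with one of these program numbers, you can list the registered services, for example:
# rpcinfo -p | egrep '100141|100142|100248'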
The first line changes the value for the rpcmod:svc_default_stksize variable from 0x4000 to
0x8000.
The second line sets the value of the lwp_default_stksize variable to 0x6000.
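For reference, these are the same /etc/system entries that appear later in this supplement under Bug ID 4662264:
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000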
Fixed Problems
The following problems identified in previous release notes supplements are now resolved.
Bug ID 4840853
Problem Summary: Due to memory segmentation issues, if you configured the StorEdge 6120 or
StorEdge 6320 storage system with four nodes and more than 16 LUNs, the storage device might fail
and cause your data to be compromised.
Problem Fixed: When using a StorEdge 6120 or StorEdge 6320 storage system with the version 3.1
firmware (or later), you no longer must limit your configuration to 16 LUNs. Instead, the limit is 64
LUNs.
Bug ID 4867584
Problem Summary: If you had 512 LUNs in a direct-attach storage conguration with Sun StorEdge
3510 FC storage arrays, LUNs might be lost when the server rebooted.
Problem Fixed: This bug is fixed when using both of the following items:
Known Problems
In addition to known problems documented in the Sun Cluster 3.1 Release Notes, the following
known problems affect the operation of the Sun Cluster 3.1 release.
Bug ID 4781666
Problem Summary: Use of the onerror=umount mount option or the onerror=lock mount option
might cause the cluster file system to lock or become inaccessible if the cluster file system experiences
file corruption. Or, use of these mount options might cause the cluster file system to become
unmountable. The cluster file system might then cause applications to hang or prevent them from
being killed. The node might require rebooting to recover from these states.
Bug ID 4863254
Problem Summary: Due to a Solaris bug (4511634), Sun Cluster 3.1 does not provide the ability to
auto-create IPMP groups when you add a logical host.
Workaround: You must manually create an IPMP group when you add a logical host.
Bug ID 4867560
Problem Summary: When two nodes are connected to the same channel of a Sun StorEdge 3510 FC
storage array, rebooting one node causes the other node to lose the SCSI-2 reservation.
Workaround: You can only connect one node per channel on the Sun StorEdge 3510 FC storage
arrays.
Bug ID 4920156
Problem Summary: When performing an upgrade from Sun Cluster 3.0 software on Solaris 8
software with Solstice DiskSuite 4.2.1 to Sun Cluster 3.1 software on Solaris 9 software with Solaris
Volume Manager, the dual-string mediators are removed.
Workaround: Remove mediators before you upgrade the cluster, then recreate them after the cluster
is upgraded.
Perform the steps to prepare the cluster for upgrade but do not shut down the cluster.
-s setname
If the value in the Status field is Bad, repair the affected mediator host. Follow the procedure to fix
bad mediator data in Configuring Mediators in Sun Cluster 3.1 Software Installation Guide.
b. List all mediators.
Use this information when you restore the mediators during Step 4.
c. For a diskset that uses mediators, take ownership of the diskset if no node already has
ownership.
# metaset -s setname -t
-t    Takes ownership of the diskset.
d. Remove all mediator hosts from the diskset.
# metaset -s setname -d -m mediator-host-list
-s setname
-d
-m mediator-host-list    Specifies the name of the node to remove as a mediator host for the diskset.
See the mediator(7D) man page for further information about mediator-specific options to the
metaset command.
e. Repeat Step c through Step d for each remaining diskset that uses mediators.
3. Shut down the cluster and continue to follow procedures to upgrade Sun Cluster software.
4. After all nodes are upgraded and booted back into the cluster, reconfigure the mediators.
a. Determine which node has ownership of a diskset to which you will add the mediator hosts.
# metaset -s setname
-s setname    Specifies the diskset name.
b. If no node has ownership of the diskset, take ownership of the diskset.
# metaset -s setname -t
-t    Takes ownership of the diskset.
c. Add the mediator hosts.
# metaset -s setname -a -m mediator-host-list
-a    Adds to the diskset.
-m mediator-host-list    Specifies the names of the nodes to add as mediator hosts for the diskset.
d. Repeat Step a through Step c for each diskset in the cluster that uses mediators.
C H A P T E R   7

Sun Cluster Data Services 3.1 5/03 Release Notes Supplement
This chapter supplements the standard user documentation, including the Sun Cluster 3.1 Data
Service 5/03 Release Notes that shipped with the Sun Cluster 3.1 product. These online release
notes provide the most current information on the Sun Cluster 3.1 product. This chapter includes
the following information.
Revision Record
The following table lists the information contained in this chapter and provides the revision date for
this information.
TABLE 71 Sun Cluster Data Services 3.1 5/03 Release Notes Supplement Revision Record: 2003/2004
Revision Date
New Information
December 2004
Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.
November 2004
The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information, see
Support for Oracle 10g Real Application Clusters on the SPARC Platform on page 57.
May 2004
The Sun Cluster HA for Oracle data service in Sun Cluster Data Services 3.1 5/03 now
supports Oracle 10g. See Support for Oracle 10g on page 88.
February 2004
Lack of support for Sun StorEdge 3310 JBOD array in a split-bus configuration has
been fixed. See BugId 4818874 on page 114 for details.
Conceptual material and example configurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.
July 2003
Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a
subset of cluster nodes. See Sun Cluster Support for Oracle Real Application Clusters
on a Subset of Cluster Nodes on page 89.
Revised support for Multiple Masters configuration of Sun Cluster HA for Sun ONE
Application Server. See Sun Cluster 3.1 Data Service for Sun ONE Application Server
on page 91.
New Features
In addition to features documented in Sun Cluster 3.1 Data Service 5/03 Release Notes, this release
now includes support for the following features.
A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
never mounted.
A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
mounts the file system where the Oracle binary files are installed.
Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
such a configuration, the Oracle binary files are installed on a highly available local file system. In
this situation, the messages are displayed on the console of the node where the Oracle installation
was run.
To prevent these error messages, remove the entry for the Oracle cssd daemon from the
/etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove
the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null
Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.
Caution If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.
with hardware RAID support or VxVM with the cluster feature. The Sun Cluster Support for Oracle
Real Application Clusters software must be installed only on the cluster nodes that are directly
attached to the shared storage used by Oracle Real Application Clusters.
You are adding nodes to a cluster and you plan to run Sun Cluster Support for Oracle Real
Application Clusters on the nodes.
You are enabling Sun Cluster Support for Oracle Real Application Clusters on a node.
To add Sun Cluster Support for Oracle Real Application Clusters to selected nodes, install the
required data service software packages on those nodes. The storage management scheme that you
are using determines which packages to install. For installation instructions, see Sun Cluster Data
Service for Oracle Real Application Clusters Guide for Solaris OS.
Become superuser.
Boot the nodes from which you are removing Sun Cluster Support for Oracle Real Application
Clusters in noncluster mode.
Uninstall from each node the Sun Cluster Support for Oracle Real Application Clusters software
packages for the storage management scheme that you are using.
If you are using VxVM with the cluster feature, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
If you are using hardware RAID support, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
If you are using the cluster file system, type the following command:
# pkgrm SUNWscucm SUNWudlm SUNWudlmr
Known Problems
In addition to known problems documented in the Sun Cluster 3.1 Data Service 5/03 Release Notes,
the following known problems affect the operation of the Sun Cluster 3.1 Data Service 5/03 release.
There are no known problems at this time.
Release Notes
The following subsections describe omissions or new information that will be added to the next
publishing of the Release Notes.
C H A P T E R   8

Sun Cluster 3.0 5/02 Release Notes Supplement
This document supplements the standard user documentation, including the Sun Cluster 3.0 5/02
Release Notes that shipped with the Sun Cluster 3.0 product. These online release notes provide
the most current information on the Sun Cluster 3.0 product. This document includes the following
information.
Revision Record
The following tables list the information contained in this document and provide the revision date
for this information.
TABLE 81 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: 2006
Revision Date
New Information
January 2006
Correction to procedures for mirroring the root disk. See CR 6341573 on page 61.
TABLE 82 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2005
Revision Date
New Information
March 2005
Process accounting log les on global le systems cause the node to hang. See Bug ID
6210418 on page 50 in Chapter 2.
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 103.
TABLE 83 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2004
Revision Date
New Information
March 2005
Bug ID 6210418, Process accounting log les on global le systems cause the node to
hang. See Bug ID 6210418 on page 50 in Chapter 2.
Support is added for VxVM 4.0 and VxFS 4.0. See SPARC: Support for VxVM 4.0 and
VxFS 4.0 on page 103.
December 2004
Sun Cluster supports the use of ASM with Oracle 10g Real Application Clusters on the
SPARC platform. For more information, see IPv6 Support and Restrictions for Public
Networks on page 51.
November 2004
The Sun Cluster Support for Oracle Real Application Clusters data service supports
Oracle 10g Real Application Clusters on the SPARC platform. For more information,
see Support for Oracle 10g Real Application Clusters on the SPARC Platform
on page 57.
Cabling restrictions apply when including Sun StorEdge 6130 arrays in a Sun Cluster
environment. See Bug ID 5095543 on page 60 for more information.
July 2004
Restrictions apply to the compilation of data services that are written in C++. See
Compiling Data Services That Are Written in C++ on page 111.
May 2004
The Sun Cluster HA for Oracle data service in Sun Cluster 3.0 5/02 now supports
Oracle 10g. See Support for Oracle 10g on page 103.
March 2004
Troubleshooting tip to correct stack overow with VxVM disk device groups. See
Correcting Stack Overow Related to VxVM Disk Device Groups on page 119.
February 2004
Bug ID 4818874, lack of support for Sun StorEdge 3310 JBOD array in a split-bus
conguration, has been xed. See BugId 4818874 on page 114 for details.
Conceptual material and example congurations for using storage-based data
replication in a campus cluster. Refer to Chapter 7, Campus Clustering With Sun
Cluster Software, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris
OS.
January 2004
Added a brief description of the newly supported 3room, 2node campus cluster. See
Chapter 7, Campus Clustering With Sun Cluster Software, in Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
Correction to path to the dlmstart.log le in Oracle UDLM Requirement on page
113.
TABLE 84 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2003
Revision Date
New Information
December 2003
Sun StorEdge 6120 storage arrays in dual-controller congurations and Sun StorEdge
6320 storage systems were limited to four nodes and 16 LUNs. The restriction has been
removed. See Bug ID 4840853 on page 82.
November 2003
The onerror=lock and onerror=umount mount options are not supported on cluster
le systems. See Bug ID 4781666 on page 82.
Sun Cluster 3.0 12/01 System Administration Guide: The correct caption for Table 5-2
is Task Map: Dynamic Reconguration with Cluster Interconnects.
Logical volumes are not supported with the Sun StorEdge 3510 FC storage array. See
the Preface of the Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array
Manual for more information.
October 2003
Added omission from the Installing and Conguring Sun Cluster HA for NetBackup
chapter of the Data Service Installation and Conguration Guide. See Sun Cluster Data
Service for NetBackup on page 126.
Certain RPC program numbers are reserved for Sun Cluster software use. See
Reserved RPC Program Numbers on page 111.
Clarication about which name to use for disk slices when you create state database
replicas. See How to Create State Database Replicas on page 126.
Updated VxVM Dynamic Multipathing (DMP) restrictions. See Dynamic
Multipathing (DMP) on page 111 for more information.
August 2003
Procedures to enable Sun Cluster Support for Oracle Real Application Clusters on a
subset of cluster nodes. See Sun Cluster Support for Oracle Real Application Clusters
on a Subset of Cluster Nodes on page 89.
The Sun Cluster HA for NetBackup data service in Sun Cluster 3.0 5/02 now supports
VERITAS NetBackup 4.5. See Support for VERITAS NetBackup 4.5 on page 104.
June 2003
Procedures to upgrade a Sun Cluster 3.0 conguration to Sun Cluster 3.1 software,
including upgrading from Solaris 8 to Solaris 9 software. See Appendix F.
Procedures to support Sun StorEdge 6320 storage systems. See Chapter 1, Installing
and Maintaining a Sun StorEdge 6320 System, in Sun Cluster 3.0-3.1 With Sun
StorEdge 6320 System Manual for Solaris OS.
Sun StorEdge 6120 storage arrays in dual-controller configurations and Sun StorEdge
6320 storage systems were limited to four nodes and 16 LUNs. The restriction has been
removed (see Bug ID 4840853 on page 82).
Procedures to support Sun StorEdge 3510 FC storage array. See the Sun Cluster 3.0-3.1
With Sun StorEdge 3510 or 3511 FC RAID Array Manual .
Sun StorEdge 3510 FC storage arrays are limited to 256 LUNs per channel. See Bug ID
4867584 on page 82.
Sun StorEdge 3510 FC storage arrays are limited to one node per channel. See Bug ID
4867560 on page 83.
May 2003
How to create node-specic les and directories for use with Oracle Real Application
Clusters on the cluster le system. See Creating Node-Specic Files and Directories for
Use With Oracle Real Application Clusters Software on the Cluster File System
on page 129 for more information.
New bge(7D) Ethernet adapter requires patches and modied installation procedure.
See BugID 4838619 on page 116 for more information.
Increased stack-size settings are required when using VxFS. See Bug ID 4662264
on page 115 for more information.
April 2003
Procedures to support Sun StorEdge 6120 storage arrays. See Chapter 1, Installing and
Maintaining a Sun StorEdge 6120 Array, in Sun Cluster 3.0-3.1 With Sun
StorEdge 6120 Array Manual for Solaris OS.
Added VxVM Dynamic Multipathing (DMP) restrictions. See Dynamic Multipathing
(DMP) on page 111 for more information.
Bug ID 4818874, lack of support for Sun StorEdge 3310 JBOD array in a split-bus
conguration, has been xed. See BugId 4818874 on page 114 for details.
PCI Dual Ultra3 SCSI host adapter needs jumpers set for manual termination. See
BugId 4836405 on page 116 for more information.
Added information on support for Oracle Real Application Clusters on the cluster le
system. See Support for Oracle Real Application Clusters on the Cluster File System
on page 109.
Added information on using the Sun Cluster LogicalHostname resource with Oracle
Real Application Clusters. See Using the Sun Cluster LogicalHostname Resource
With Oracle Real Application Clusters on page 129.
Sun Cluster HA for SAP now supports the SAP J2EE engine and SAP Web dispatcher
configurations. For more information, see Configuring an SAP J2EE Engine Cluster
and an SAP Web Dispatcher on page 126.
Revised procedures on how to install and configure Sun Cluster HA for SAP
liveCache. See Appendix B.
March 2003
Revised support for installation of the Remote Shared Memory Reliable Datagram
Transport (RSMRDT) driver. See Appendix D.
Revised How to Register and Congure Sun Cluster for SAP liveCache procedure.
See Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Documentation bug in scconf_transp_adap_sci(1M) man page. See
scconf_transp_adap_sci Man Page on page 135.
Updated revised procedure on how to replace a disk drive in a StorEdge A5x00 storage
array. See the Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual.
February 2003
Revised procedures to support Sun Cluster HA for SAP on SAP 6.20. See Appendix E.
Virtual Local Area Network (VLAN) support expanded. See the Sun Cluster 3.0-3.1
Hardware Administration Manual for Solaris OS.
Procedures to support Sun StorEdge 9900 Dynamic Link Manager. See the Sun
Cluster 3.0-3.1 With Sun StorEdge 9900 Series Storage Device Manual.
Revised scconf_transp_adap_wrsm(1M) man page to support a Sun Fire Linkbased
cluster interconnect. See scconf_transp_adap_wrsm Man Page on page 135.
Procedures to support a Sun Fire Linkbased cluster interconnect. See the Sun
Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
January 2003
TABLE 85 Sun Cluster 3.0 5/02 Release Notes Supplement Revision Record: Year 2002
Revision Date
New Information
December 2002
Revised procedures on how to install and configure Sun Cluster HA for SAP
liveCache. See Appendix B.
November 2002
Revised SUNW.HAStoragePlus.5 man page to correct the Notes section and include
FilesystemCheckCommand extension property. See SUNW.HAStoragePlus.5 on page
136.
Sun Cluster HA for Sun ONE Web Server now supports Sun ONE Proxy Server. See
Support for Sun ONE Proxy Server on page 128.
Name to use to congure SCI-PCI adapters for the cluster interconnect. See Names for
SCI-PCI Adapters on page 125.
Requirements for storage topologies. See Storage Topologies Replaced by New
Requirements on page 112.
Support for Dynamic Reconguration with the Sun Fire V880 system and Sun Cluster
software. See Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Correction to the planning statement on how to connect quorum devices to nodes. See
Quorum Device Connection to Nodes on page 125.
Removal of the step on how to add nodes to the authentication list before you install
VERITAS Volume Manager. See New Features on page 102.
Package dependency to upgrade Sun Cluster HA for NFS from Sun Cluster 2.2 to Sun
Cluster 3.0 software. See How to Create State Database Replicas on page 85.
October 2002
September 2002
IP address conguration requirement for Sun Fire 15000 systems. See IP Address
Requirement for Sun Fire 15000 Systems on page 125.
Corrected cross-reference between uninstall procedures. See How to Uninstall Sun
Cluster Software From a Cluster Node (5/02) on page 134.
August 2002
Restriction on EMC storage use in a two node conguration. See EMC Storage
Restriction on page 112.
July 2002
Revised procedure to upgrade to the Sun Cluster 3.0 5/02 release from any previous
version of Sun Cluster 3.0 software. See How to Upgrade to the Sun Cluster 3.0 5/02
Software Update Release on page 119.
Revised procedure on how to replace a disk drive in StorEdge A5x00 storage array. See
the Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual.
Requirements for ATM support with Sun Cluster 3.0 5/02. See ATM with Sun Cluster
3.0 5/02 on page 117
Sun Cluster Security Hardening support for Solaris 9. See Security Hardening for
Solaris 9 on page 108.
June 2002
Restriction on concurrent upgrade of Solaris 9 and Sun Cluster 3.0 5/02 software. See
Framework Restrictions and Requirements on page 113.
Revised appendix to support Sun StorEdge 9970 system and Sun StorEdge 9980 system
with Sun Cluster software. See the Sun Cluster 3.0-3.1 With Sun StorEdge 9900 Series
Storage Device Manual.
Procedures to support Sun StorEdge D2 storage systems. See Sun Cluster 3.0-3.1 With
SCSI JBOD Storage Device Manual for Solaris OS.
Revised procedures to support Sun StorEdge T3/T3+ Partner Group and Sun StorEdge
3900 storage arrays in a 4node conguration. See Sun StorEdge T3/T3+ Partner
Group and Sun StorEdge 3900 Storage Devices Supported in a Scalable Topology.
on page 119.
Updated procedures to support Sun Cluster software on Sybase 12.0 64bit version. See
Appendix C.
Documentation bug in the Sun Cluster Hardware Guide. See Failover File System
(HAStoragePlus) on page 108.
Documentation bug in the Sun Cluster Hardware Guide. See Changing Quorum
Device Connectivity on page 112.
Documentation bug in the Sun Cluster Hardware Guide: ce Sun Ethernet Driver
Considerations. See Chapter 5, Installing and Maintaining Public Network
Hardware, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Documentation bug in the Sun Cluster Hardware Guide: Hard zone conguration
changed. See the Sun Cluster 3.0-3.1 With Sun StorEdge 3900 Series or Sun
StorEdge 6900 Series System Manual.
Updated procedures to support Apache version 2.0. See Apache 2.0 on page 109.
May 2002
New Features
In addition to features documented in Sun Cluster 3.0 5/02 Release Notes, this release now includes
support for the following features.
A node is running in noncluster mode. In this situation, file systems that Sun Cluster controls are
never mounted.
A node is booting. In this situation, the messages are displayed repeatedly until Sun Cluster
mounts the file system where the Oracle binary files are installed.
Oracle is started on or fails over to a node where the Oracle installation was not originally run. In
such a configuration, the Oracle binary files are installed on a highly available local file system. In
this situation, the messages are displayed on the console of the node where the Oracle installation
was run.
To prevent these error messages, remove the entry for the Oracle cssd daemon from the
/etc/inittab file on the node where the Oracle software is installed. To remove this entry, remove
the following line from the /etc/inittab file:
h1:23:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 > </dev/null
Sun Cluster HA for Oracle does not require the Oracle cssd daemon. Therefore, removal of this
entry does not affect the operation of Oracle 10g with Sun Cluster HA for Oracle. If your Oracle
installation changes so that the Oracle cssd daemon is required, restore the entry for this daemon to
the /etc/inittab file.
Caution If you are using Real Application Clusters, do not remove the entry for the cssd daemon
from the /etc/inittab file.
The network resource that clients use to access the data service. Normally, you set up this IP
address when you install the cluster. See the Sun Cluster concepts documentation for details on
network resources.
Create a failover resource group to hold the network and application resources.
You can optionally select the set of nodes that the data service can run on with the -h option, as
follows.
# scrgadm -a -g resource-group [-h nodelist]
-g resource-group
[-h nodelist]
Verify that you have added all of your network resources to the name service database.
You should have performed this verification during the Sun Cluster installation.
Note Ensure that all of the network resources are present in the servers and clients
Enable the failover resource group and bring the resource group online.
# scswitch -Z -g resource-group
-g resource-group
-Z    Moves the resource group to the managed state, and brings the resource group online.
Execute the install script to install the VERITAS Netbackup packages from the VERITAS product
CD-ROM into the /usr/openv directory.
phys-schost-1# ./install
11. Repeat Step 6 through Step 10 until you install the NetBackup binaries on all the nodes that will
run the NetBackup resource.
SERVER = logical-hostname-resource
All requests to the backup server originate from the primary node. The server name equals the
logical hostname resource.
CLIENT_NAME = logical-hostname-resource
On a cluster that runs Sun Cluster HA for NetBackup, the CLIENT_NAME equals nb-master.
Note Use this client name to back up files in the cluster running Sun Cluster HA for NetBackup.
REQUIRED_INTERFACE = logical-hostname-resource
This entry indicates the logical interface that the NetBackup application is to use.
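Taken together, and assuming the logical hostname resource is named nb-master (the name cited in the CLIENT_NAME description above), the relevant bp.conf entries might look like this sketch:
SERVER = nb-master
CLIENT_NAME = nb-master
REQUIRED_INTERFACE = nb-master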
From one node, put the NetBackup configuration files on a multihost disk.
Place the files on a disk that is part of a failover disk device group that NetBackup is to use.
a. Run the following commands from the primary node of the failover disk device group. In this
example, the failover disk device group is global.
# mkdir /global/netbackup
# mv /usr/openv/netbackup/bp.conf /global/netbackup
# mv /usr/openv/netbackup/db /global/netbackup
# mv /usr/openv/volmgr/database /global/netbackup
# ln -s /global/netbackup/bp.conf /usr/openv/netbackup/bp.conf
# ln -s /global/netbackup/db /usr/openv/netbackup/db
# ln -s /global/netbackup/database /usr/openv/volmgr/database
b. If the directory /usr/openv/db/var and the file /usr/openv/volmgr/vm.conf exist on the node,
also run the following commands:
# mv /usr/openv/db/var /global/netbackup/nbdb
# mv /usr/openv/volmgr/vm.conf /global/netbackup
# ln -s /global/netbackup/nbdb /usr/openv/db/var
# ln -s /global/netbackup/vm.conf /usr/openv/volmgr/vm.conf
Note Run the command scstat -D to identify the primary for a particular disk device group.
c. Run the following commands from all of the other nodes that will run the NetBackup resource.
# rm -rf /usr/openv/netbackup/bp.conf
# rm -rf /usr/openv/netbackup/db
# rm -rf /usr/openv/volmgr/database
# ln -s /global/netbackup/bp.conf /usr/openv/netbackup/bp.conf
# ln -s /global/netbackup/db /usr/openv/netbackup/db
# ln -s /global/netbackup/database /usr/openv/volmgr/database
d. On all of the other nodes that will run the NetBackup resource, if the directory
/usr/openv/db/var and the file /usr/openv/volmgr/vm.conf exist on the node, run the
following commands:
# rm -rf /usr/openv/db/var
# rm -rf /usr/openv/volmgr/vm.conf
# ln -s /global/netbackup/nbdb /usr/openv/db/var
# ln -s /global/netbackup/vm.conf /usr/openv/volmgr/vm.conf
Note You must configure the NetBackup master server before you remove and link the
/usr/openv/volmgr/vm.conf file.
Sun Cluster HA for NetBackup can work with either of these two sets of daemons. The Sun Cluster
HA for NetBackup fault monitor monitors either of these two sets of processes. While the START
method runs, the fault monitor waits until the daemons are online before monitoring the
application. The Probe_timeout extension property specifies the amount of time that the fault
monitor waits.
After the daemons are online, the fault monitor uses kill (pid, 0) to determine whether the
daemons are running. If any daemon is not running, the fault monitor initiates the following actions,
in order, until all of the probes are running successfully.
1. Restarts the resource on the current node.
2. Restarts the resource group on the current node.
3. Fails over the resource group to the next node on the resource group's nodelist.
All process IDs (PIDs) are stored in a temporary file, /var/run/.netbackup_master.
Apache 2.0
Sun Cluster 3.0 5/02 now supports Apache version 2.0. For Apache version 2.0, the procedure for
configuring the httpd.conf configuration file has changed as follows. (See the Sun Cluster data
services collection for the complete procedure.)
The BindAddress and Port directives have been replaced with the Listen directive. The Listen
directive must use the address of the logical host or shared address.
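For example, if the logical host's IP address were 192.168.10.80 (a placeholder address), the httpd.conf entry might read:
Listen 192.168.10.80:80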
Pre-Installation Considerations
Oracle Real Application Clusters is a scalable application that can run on more than one node
concurrently. You can store all of the files that are associated with this application on the cluster file
system, namely:
Binary files
Control files
Data files
Log files
Configuration files
For optimum I/O performance during the writing of redo logs, ensure that the following items are
located on the same node:
The primary of the device group that contains the cluster file system that holds the following logs
of the database instance:
For other pre-installation considerations that apply to Sun Cluster Support for Oracle Real
Application Clusters, see Overview of the Installation and Configuration Process in Sun
Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
Options
global, logging
How to Install Sun Cluster Support for Oracle Real Application Clusters
Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.
Become superuser.
On all of the nodes, run the following command to install the data service packages.
# pkgadd -d \
/cdrom/scdataservices_3_0_u3/components/SunCluster_Oracle_Parallel_Server_3.0_u3/Packages \
SUNWscucm SUNWudlm SUNWudlmr
Troubleshooting
Before you reboot the nodes, you must ensure that you have correctly installed and configured the
Oracle UDLM software. For more information, see Installing the Oracle Software in Sun
Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
See Also
Go to Installing the Oracle Software in Sun Cluster 3.0 12/01 Data Services Installation and
Configuration Guide to install the Oracle UDLM and Oracle RDBMS software.
100141
100142
100248
These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and rgmd,
respectively. If the RPC service you install also uses one of these program numbers, you must change
that RPC service to use a different program number.
A supported multipathing solution (Sun StorEdge Traffic Manager, EMC PowerPath, Hitachi
HDLM) that manages multiple I/O paths per node to the shared cluster storage
The use of DMP alone to manage multiple I/O paths per node to the shared storage is not supported.
Sun Cluster supports a maximum of eight nodes in a cluster, regardless of the storage
configurations that you implement.
A shared storage device can connect to as many nodes as the storage device supports.
Shared storage devices do not need to connect to all nodes of the cluster. However, these storage
devices must connect to at least two nodes.
After applying the core patch 110648-20 or later in a two-node cluster with an EMC
PowerPath-configured quorum disk.
After upgrading from Sun Cluster 3.0 12/01 software to Sun Cluster 3.0 05/02 software in a
two-node cluster with an EMC PowerPath-configured quorum disk.
Note This is a problem only for a multipath quorum device configured with EMC PowerPath in a
two-node configuration. The problem is characterized by a value of NULL being printed for the
quorum device access mode property.
To fix the property setting after applying the patch or performing the upgrade, use the scsetup
command to remove the existing quorum disk and add it back to the configuration. Removing and
adding back the quorum disk will correct the Sun Cluster software to use SCSI-3 PGR for reserving
quorum disks. To verify that the quorum device access mode is set correctly, run scconf -p to print
the configuration.
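One way to inspect the property is to filter the scconf -p output for the quorum entries, for example (the exact label text can vary by release):
# scconf -p | grep -i quorum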
Upgrade to Solaris 9 Upgrade to Solaris 9 software during upgrade to Sun Cluster 3.0 5/02
software is not supported. You can only upgrade to subsequent, compatible versions of the
Solaris 8 Operating System during upgrade to Sun Cluster 3.0 5/02 software. To run Sun Cluster
3.0 5/02 software on the Solaris 9 Operating System, you must perform a new installation of the
Solaris 9 version of Sun Cluster 3.0 5/02 software after the nodes are upgraded to Solaris 9
software.
reconfiguration where the reconfiguration process will hang, leaving all nodes in the cluster unable to
provide Oracle RAC database service. You can fix this problem by ensuring that your Oracle UDLM
is at least version 3.3.4.5. This problem and fix are documented in Oracle Bug #2273410.
You can determine the version of Oracle UDLM currently installed on your system by running the
following command.
pkginfo -l ORCLudlm | grep VERSION
The version of the Oracle UDLM currently installed on your system also appears in the file
/var/cluster/ucmm/dlm_node-name/logs/dlmstart.log.
The version information appears just before the Copyright (c) line. Look for the latest occurrence of
this information in the le. If you do not have this version of the Oracle UDLM package, please
contact Oracle Support to obtain the latest version.
Fixed Problems
BugId 4818874
Problem Summary: When used in a clustered environment, the Sun StorEdge 3310 JBOD array
relies on the cluster nodes to provide SCSI bus termination. Because termination power was not
supplied from the array's IN ports, if the server connected to these ports lost power then SCSI bus
termination was lost. This in turn could result in the remaining cluster node losing access to the
shared storage on that bus.
Problem Fixed: The StorEdge 3310 JBOD array is now supported in a split-bus configuration, when
using the updated version (part number 370-5396-02/50 or newer) of the I/O board.
Known Problems
In addition to known problems documented in Sun Cluster 3.0 5/02 Release Notes, the following
known problems affect the operation of the Sun Cluster 3.0 5/02 release.
Bug ID 4346123
Problem Summary: When booting a cluster node after multiple failures, a cluster file system might
fail to mount automatically from its /etc/vfstab entry, and the boot process will place the node in
an administrative shell. Running the fsck command on the device might yield the following error.
Can't roll the log for /dev/global/rdsk/dXsY
Workaround: This problem might occur when the global device is associated with a stale cluster file
system mount. Run the following command, and check if the file system shows up in an error state to
confirm a stale mount.
# /usr/bin/df -k
If the global device is associated with a stale cluster file system mount, unmount the global device. If
any users of the file system exist on any of the nodes, the unmount cannot succeed. Run the following
command on each node to identify current users of the file system.
# /usr/sbin/fuser -c mountpoint
If there are users of the file system, terminate those users' connection to the file system. Run the
share(1M) command to confirm that the file system is not NFS-shared by any node.
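As a sketch, the remaining cleanup and check might look like the following; note that fuser -k forcibly terminates the processes it finds, so use it with care:
# /usr/sbin/fuser -c -k mountpoint
# share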
Bug ID 4662264
Problem Summary: To avoid panics when using VxFS with Sun Cluster software, the default thread
stack size must be greater than the VxFS default value of 0x4000.
Workaround: Increase the stack size by putting the following lines in the /etc/system le:
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000
After installing VxFS packages, verify that VxFS installation has not added similar statements to the
/etc/system file. If multiple entries exist, resolve them to one statement per variable, using these
higher values.
Bug ID 4665886
Problem Summary: Mapping a file into the address space with mmap(2) and then issuing a write(2)
call to the same file results in a recursive mutex panic. This problem was identified in a cluster
configuration running the iPlanet Mail Server.
Workaround: There is no workaround.
Bug ID 4668496
Problem Summary: The default JumpStart profile file allocates 10 Mbytes to slice 7. If you use
Solaris 9 software with Solstice DiskSuite, this amount of space is not enough for Solstice DiskSuite
replicas. Solaris 9 software with Solstice DiskSuite requires at least 20 Mbytes.
Workaround: Edit the default profile file to configure slice 7 of the system disk with 20 Mbytes of
space, instead of 10 Mbytes. This workaround is only necessary if you install Solaris 9 software with
Solstice DiskSuite.
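In a JumpStart profile this is typically a one-line change; the following is a sketch that assumes the profile uses the filesys keyword with sizes given in Mbytes:
filesys rootdisk.s7 20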
Bug ID 4680862
Problem Summary: When you install Oracle or Sybase binaries and configuration files on a highly
available local file system managed by HAStoragePlus, the node that does not have access to this file
system fails validation. The result is that you cannot create the resource.
Workaround: Create a symbolic link named /usr/cluster/lib/hasp_check to link to the
/usr/cluster/lib/scdsbuilder/src/scripts/hasp_check file.
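For example, the following command creates that link, using the paths given in the workaround above:
# ln -s /usr/cluster/lib/scdsbuilder/src/scripts/hasp_check /usr/cluster/lib/hasp_check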
Bug ID 4779686
Problem Summary: Availability Suite 3.1 does not support the Sun Cluster 3.0 HAStoragePlus
resource.
Workaround: If you intend to implement Availability Suite 3.1 and failover file system, use an
HAStorage resource in the light-weight resource group that includes the Availability Suite logical
host. For the application resource group, use HAStoragePlus. This allows you to use a failover file
system for application performance and also use Availability Suite 3.1 to back up the disk blocks
under the failover file system.
BugId 4836405
Problem Summary: When using the PCI Dual Ultra3 SCSI host adapter in a clustered environment,
the host adapter jumpers for each port must be set for manual SCSI termination. If the ports are not
set to manual SCSI termination, a loss of power to one host could prevent correct SCSI bus operation
and might result in loss of access to all SCSI devices attached to that bus from the remaining host.
Workaround: When using the PCI Dual Ultra3 SCSI host adapter in a clustered environment, set the
jumpers on the host adapter to manual SCSI termination. This setting causes the host adapter to
activate its built-in SCSI terminators, whether or not the host adapter receives PCI bus power.
The jumper settings needed for manual termination are listed below.
SCSI bus 1 (internal SCSI connector and external SCSI connector furthest from the PCI slot)
BugID 4838619
Problem Summary: Without a patch, Sun Cluster software will not recognize bge(7D) Ethernet
adapters.
Workaround: If you plan to use bge(7D) Ethernet adapters as cluster interconnects in your Sun
Cluster configuration, you will need to install patches and use a modified installation procedure. The
onboard Ethernet ports on the Sun Fire V210 and V240 are examples of bge(7D) Ethernet adapters.
If you use Solaris 8 software, install the following patches.
For the modified installation procedure, refer to the patch's README file.
Hardware Guide
The following subsections describe omissions or new information that will be added to the next
publishing of the Hardware Guide.
All LANE instances in a NAFO group must be configured on the same ELAN. For example, all
LANE instances in NAFO1 must be in the same ELAN on all cluster nodes.
Configure the primary LANE interface using the /etc/hostname.lanen file. This file is
necessary, but will cause warning messages to display at boot up on SunATM 5.0. The following
example is of the console messages. These messages can be ignored.
Rebooting with command: boot
Boot device: diskbrd:a File and args:
SunOS Release 5.8 Version Generic_108528-13 64-bit
Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_ATTACH_REQ(11), errno 8, unix 0
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_BIND_REQ(1), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_PHYS_ADDR_REQ(49), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_UNBIND_REQ(2), errno 3, unix 71
ip_rput_dlpi(lane1): DL_ERROR_ACK for DL_DETACH_REQ(12), errno 3, unix 71
ifconfig: SIOCSLIFNAME for ip: lane1: Protocol error
moving addresses from failed IPv4 interfaces: lane1 (couldn't move, no
alternative interface).
Hostname: atm10
The following example shows an atmconfig file with the primary and secondary LANE
interfaces configured. Note the IP address is assigned only to the primary LANE interface.
ba0
3.1
ba0
SONET
ba0
ba1
3.1
ba1
SONET
ba1
1
-
atm20
Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 Storage
Devices Supported in a Scalable Topology.
The Sun StorEdge T3/T3+ Partner Group and Sun StorEdge 3900 storage devices are supported with
4-node connectivity in a cluster environment.
To configure and maintain these storage devices with 4-node connectivity, use the procedures listed
in the storage devices chapter and repeat the steps for Node B on each additional node that connects
to the storage device.
For the following node-related procedures, see Appendix A.
How to Upgrade to the Sun Cluster 3.0 5/02 Software Update Release
Use the following procedure to upgrade any previous release of Sun Cluster 3.0 software to the Sun
Cluster 3.0 5/02 update release.
Note Do not use any new features of the update release, install new data services, or issue any
administrative configuration commands until all nodes of the cluster are successfully upgraded.
Back up the shared data from all device groups within the cluster.
From any node, view the current status of the cluster to verify that the cluster is running normally.
% scstat
Evacuate all resource groups and device groups that are running on the node to upgrade.
Specify the node that you are upgrading in the node argument of the following scswitch command:
# scswitch -S -h from-node
-S
-h node
Specifies the name of the node from which to evacuate resource groups and
device groups (the node you are upgrading)
Ensure that the node you are upgrading is no longer the primary for any resource groups or device
groups in the cluster.
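A quick way to confirm this is to check the cluster status again and verify that the node no longer appears as a primary for any group; phys-schost-1 below is only an example node name.
# scstat -g | grep phys-schost-1
# scstat -D | grep phys-schost-1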
7
Determine whether any of the Cool Stuff CD packages are installed on the node.
To display the version of an installed package, use the following command:
# pkginfo -l package
The following table lists the packages from the Sun Cluster 3.0 GA Cool Stuff CD-ROM:
Package
Version
Description
SUNWscrtw
3.0.0/2000.10.17.22.22
SUNWscsdk
3.0.0/2000.10.10.13.06
SUNWscset
3.0.0/2000.10.17.22.22
rgmsetup
SUNWscvxi
3.0.0/2000.10.17.22.22
Remove any Cool Stuff CD-ROM packages found on the node. These packages will be replaced with
supported versions in Sun Cluster 3.0 5/02 software.
# pkgrm package
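For example, to remove the Cool Stuff packages listed in the preceding table, you might run the following command, naming only the packages that pkginfo reported as installed:
# pkgrm SUNWscrtw SUNWscsdk SUNWscset SUNWscvxi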
10
Solaris 8 Operating System to support Sun Cluster 3.0 5/02 software. See the Info Documents page for
Sun Cluster 3.0 software on http://sunsolve.sun.com for the latest Solaris support information.
11
If these links already exist and contain an uppercase K or S in the file name, no further action is
necessary concerning these links. If these links do not exist, or if these links exist but contain a
lowercase k or s in the file name, you will move aside these links in Step g.
b. Are you using the Maintenance Update upgrade method?
If no, temporarily comment out all global device entries in the /etc/vfstab file.
Do this to prevent the Solaris upgrade from attempting to mount the global devices. To
identify global device entries, look for entries that contain global in the mount-options list.
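As an illustration only, with a hypothetical device name, a commented-out global device entry in /etc/vfstab might look like this:
#/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle  ufs  2  yes  global,logging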
d. Follow instructions in the installation guide for the Solaris 8 update version you want to upgrade
to.
Note To reboot the node during Solaris software upgrade, always add the -x option to the
command. This ensures that the node reboots into noncluster mode. The following two
commands boot a node into single-user noncluster mode:
# reboot -- -sx
ok boot -sx
Do not reboot the node into cluster mode during or after Solaris software upgrade.
If no, uncomment all global device entries that you commented out in the /a/etc/vfstab file.
f. Install any Solaris software patches and hardware-related patches, and download any needed
firmware contained in the hardware patches.
Do not reboot yet if any patches require rebooting.
g. If the Apache links in Step a did not already exist or they contained a lowercase k or s in the file
names before you upgraded Solaris software, move aside the restored Apache links.
Use the following commands to rename the files with a lowercase k or s:
# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
Note For the Maintenance Update upgrade method, the paths to the files do not begin with /a.
Sun Cluster software upgrade requires that these packages exist on the node before upgrade begins. If
any of these packages are missing, install them from the Sun Cluster 3.0 5/02 CD-ROM.
# cd /cdrom/suncluster_3_0/SunCluster_3.0/Packages
# pkgadd -d . SUNWscva SUNWscvr SUNWscvw SUNWscgds
13
If yes, ensure that the required Apache software packages are installed on the node.
# pkginfo SUNWapchr SUNWapchu
If any Apache software packages are missing, install them on the node from the Solaris CD-ROM.
# pkgadd -d . SUNWapchr SUNWapchu
14
16
17
Repeat Step 4 through Step 16 on each remaining cluster node, one node at a time.
18
Take offline all resource groups for the data services you will upgrade.
# scswitch -F -g resource-grp
-F
Take offline
-g resource-grp
19
20
On each cluster node on which data services are installed, upgrade to the Sun Cluster 3.0 5/02 data
services update software.
a. Insert the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive on the node.
b. Install the Sun Cluster 3.0 5/02 data services update patches.
Use one of the following methods:
To upgrade one or more specied data services, type the following command:
To upgrade all data services present on the node, type the following command:
update release. If an update for a particular data service does not exist in the update release,
that data service is not upgraded.
After all data services on all cluster nodes are upgraded, bring back online the resource groups for
each upgraded data service.
# scswitch -Z -g resource-grp
-Z
Bring online
22
23
This command requires that the SUNWscnfs package is already installed from the Sun Cluster 3.0
5/02 Agents CD-ROM on all nodes before you invoke the scinstall command. To ensure successful
upgrade of the Sun Cluster HA for NFS data service, do the following:
Ensure that the SUNWscnfs package is installed on all nodes of the cluster before you run this
scinstall command.
If the scinstall command fails because the SUNWscnfs package is missing from a node, install
the SUNWscnfs package on all nodes from the Sun Cluster 3.0 5/02 Agents CD-ROM, then rerun
the scinstall command.
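One way to verify and, if necessary, add the package is sketched below; the cd target is a placeholder for wherever the Packages directory on the Agents CD-ROM is mounted on your system.
# pkginfo SUNWscnfs
# cd <Packages directory on the Sun Cluster 3.0 5/02 Agents CD-ROM>
# pkgadd -d . SUNWscnfs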
To configure a J2EE engine cluster with your Sun Cluster HA for SAP with a Central Instance, see
How to Configure an SAP J2EE Engine with your Sun Cluster HA for SAP with Central
Instance on page 127.
To configure a J2EE engine cluster with your Sun Cluster HA for SAP with an SAP Application
Server, see How to Configure an SAP J2EE Engine Cluster with your Sun Cluster HA for SAP
with an Application Server on page 127.
To configure SAP Web dispatcher with your Sun Cluster HA for SAP agent, see How to
Configure a SAP Web Dispatcher with your Sun Cluster HA for SAP on page 128.
The SAP J2EE engine is started by the SAP dispatcher which is under the protection of the Sun
Cluster HA for SAP. If the SAP J2EE engine goes down, the SAP dispatcher will restart it.
The SAP Web dispatcher has the capability of auto restart. If the SAP Web dispatcher goes down, the
SAP Web dispatcher watchdog process will restart it. Currently, there is no Sun Cluster agent available
for the SAP Web dispatcher.
How to Configure an SAP J2EE Engine with your Sun Cluster HA for SAP
Using the SAP J2EE Admintool GUI, change the ClusterHosts parameter to list all logical hosts for the
application server and port pair under dispatcher/Manager/ClusterManager. For example,
as11h:port;as21h:port ...
How to Configure an SAP J2EE Engine Cluster with your Sun Cluster HA
for SAP with an Application Server
Using the SAP J2EE Admintool GUI, change the ClusterHosts parameter to list the logical host for the
central instance and port pair under the dispatcher/Manager/ClusterManager.
logical-host-ci:port
How to Congure a SAP Web Dispatcher with your Sun Cluster HA for
SAP
After you have configured the SAP Web dispatcher with your Sun Cluster HA for SAP, perform the
following steps.
1
Ensure that SAP Web dispatcher has an instance number different than the Central Instance and the
application server instances.
For example, SAPSYSTEM = 66 is used in the profile for the SAP Web dispatcher.
Activate the Internet Communication Frame Services manually after you install the SAP Web
Application Server.
See SAP OSS note 517484 for more details.
permissions.
be failover
For the procedure on how to set up an HAStoragePlus resource, see Sun Cluster 3.0 Data Service
Installation and Configuration Guide.
Creating Node-Specific Files and Directories for Use With Oracle Real
Application Clusters Software on the Cluster File System
When Oracle software is installed on the cluster file system, all the files in the directory that the
ORACLE_HOME environment variable specifies are accessible by all cluster nodes.
An installation might require that some Oracle files or directories maintain node-specific
information. You can satisfy this requirement by using a symbolic link whose target is a file or a
directory on a file system that is local to a node. Such a file system is not part of the cluster file system.
To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable
Oracle applications to create symbolic links to files in this area, the applications must be able to
access files in this area. Because the symbolic links reside on the cluster file system, all references to
the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area
on the local file system.
$ORACLE_HOME/network/agent
$ORACLE_HOME/network/log
$ORACLE_HOME/network/trace
$ORACLE_HOME/srvm/log
$ORACLE_HOME/apache
For information about other directories that might be required to maintain node-specific
information, see your Oracle documentation.
1
On each cluster node, create the local directory that is to maintain node-specific information.
# mkdir -p local-dir
-p
local-dir
Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global directory that is to maintain node-specific
information.
# cp -pr global-dir local-dir-parent
-p
Specifies that the owner, group, permissions modes, modification time, access
time, and access control lists are preserved.
-r
Specifies that the directory and all its files, including any subdirectories and their
files, are copied.
global-dir
Specifies the full path of the global directory that you are copying. This directory
resides on the cluster file system under the directory that the ORACLE_HOME
environment variable specifies.
local-dir-parent
Specifies the directory on the local node that is to contain the local copy. This
directory is the parent directory of the directory that you created in Step 1.
Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the
global directory.
a. From any cluster node, remove the global directory that you copied in Step 2.
# rm -r global-dir
-r
Specifies that the directory and all its files, including any subdirectories and their
files, are removed.
global-dir
Specifies the file name and full path of the global directory that you are removing.
This directory is the global directory that you copied in Step 2.
b. From any cluster node, create a symbolic link from the local copy of the directory to the global
directory that you removed in Step a.
# ln -s local-dir global-dir
Example 8-1
-s
local-dir
Specifies that the local directory that you created in Step 1 is the source of the link
global-dir
Specifies that the global directory that you removed in Step a is the target of the link
2. To make local copies of the global directories that are to maintain node-specific information, the
following commands are run:
# cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
# cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
# cp -pr $ORACLE_HOME/apache /local/oracle/.
# rm -r $ORACLE_HOME/network/trace
# rm -r $ORACLE_HOME/srvm/log
# rm -r $ORACLE_HOME/apache
2. To create symbolic links from the local directories to their corresponding global directories, the
following commands are run:
# ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent
# ln -s /local/oracle/network/log $ORACLE_HOME/network/log
# ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
# ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
# ln -s /local/oracle/apache $ORACLE_HOME/apache
How to Create a Node-Specific File for Use With Oracle Real Application
Clusters Software on the Cluster File System
$ORACLE_HOME/network/admin/snmp_ro.ora
$ORACLE_HOME/network/admin/snmp_rw.ora
For information about other files that might be required to maintain node-specific information, see
your Oracle documentation.
1
On each cluster node, create the local directory that will contain the file that is to maintain
node-specific information.
# mkdir -p local-dir
-p
local-dir
Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global file that is to maintain node-specific
information.
# cp -p global-file local-dir
-p
Specifies that the owner, group, permissions modes, modification time, access time,
and access control lists are preserved.
global-file
Specifies the file name and full path of the global file that you are copying. This file was
installed on the cluster file system under the directory that the ORACLE_HOME
environment variable specifies.
local-dir
Specifies the directory that is to contain the local copy of the file. This directory is the
directory that you created in Step 1.
Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.
a. From any cluster node, remove the global file that you copied in Step 2.
# rm global-file
global-file
Specifies the file name and full path of the global file that you are removing. This
file is the global file that you copied in Step 2.
b. From any cluster node, create a symbolic link from the local copy of the file to the directory from
which you removed the global file in Step a.
# ln -s local-file global-dir
Example 8-2
-s
local-file
Specifies that the file that you copied in Step 2 is the source of the link
global-dir
Specifies that the directory from which you removed the global version of the file in
Step a is the target of the link
2. To make a local copy of the global files that are to maintain node-specific information, the
following commands are run:
# cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
/local/oracle/network/admin/.
# cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
/local/oracle/network/admin/.
2. To create symbolic links from the local copies of the files to their corresponding global files, the
following commands are run:
# ln -s /local/oracle/network/admin/snmp_ro.ora \
$ORACLE_HOME/network/admin/snmp_ro.ora
# ln -s /local/oracle/network/admin/snmp_rw.ora \
$ORACLE_HOME/network/admin/snmp_rw.ora
Supplement
The following subsections describe known errors in or omissions from the Sun Cluster 3.0 5/02
Supplement.
install mode, do not perform this procedure. Instead, go to How to Uninstall Sun Cluster Software
to Correct Installation Problems in the Sun Cluster 3.0 12/01 Software Installation Guide.
The note should instead read as follows:
Note To uninstall Sun Cluster software from a node that has not yet joined the cluster or is still in
install mode, do not perform this procedure. Instead, go to How to Uninstall Sun Cluster Software
to Correct Installation Problems in the Sun Cluster 3.0 5/02 Supplement.
Release Notes
The following subsections describe omissions or new information that will be added to the next
publishing of the Release Notes.
BugId 4662264
The Workaround documented in the Sun Cluster 3.1 8/05 Release Notes for Solaris OS is incorrect.
Incorrect:
Increase the stack size by putting the following lines in the /etc/system file.
set lwp_default_stksize=0x6000
set svc_default_stksize 0x8000
Correct:
Increase the stack size by putting the following lines in the /etc/system file.
set lwp_default_stksize=0x6000
set rpcmod:svc_default_stksize=0x8000
Man Pages
The following subsections describe omissions or new information that will be added to the next
publishing of the man pages.
The transport junction, whether a virtual switch or a hardware switch, must have a specific name.
The name must be sw_wrsmN where the adapter is wrsmN. This requirement reflects a Wildcat
restriction that requires that all wrsm controllers on the same Wildcat network have the same instance
number.
When a transport junction is used and the endpoints of the transport cable are configured using
scconf, scinstall, or other tools, you are asked to specify a port name on the transport junction.
You can provide any port name, or accept the default, as long as the name is unique for the transport
junction.
The default sets the port name to the node ID that hosts the adapter at the other end of the cable.
Refer to scconf(1M) for more configuration details.
There are no user configurable properties for cluster transport adapters of this type.
SEE ALSO
scconf(1M), scinstall(1M), wrsmconf(1M), wrsmstat(1M), wrsm(7D), wrsmd(7D)
SUNW.HAStoragePlus.5
The SUNW.HAStoragePlus(5) man page has been modified. The following paragraph replaces the
paragraph in the Notes section of the man page.
Although unlikely, the SUNW.HAStoragePlus resource is capable of mounting any global file system
found to be in an unmounted state. This check will be skipped only if the file system is of type UFS and
logging is turned off. All file systems are mounted in the overlay mode. Local file systems will be
forcibly unmounted.
The following FilesystemCheckCommand extension property has been added to the
SUNW.HAStoragePlus.5 man page.
FilesystemCheckCommand
A P P E N D I X
This appendix provides information and procedures for using the scalable cluster topology. This
information supplements the . Certain procedures have been updated and included here to
accommodate this new Sun Cluster 3.x topology.
This chapter contains new information for the following topics.
All nodes must have the Oracle Real Application Clusters software installed. For information
about installing and using Oracle Real Application Clusters in a cluster, see the Sun Cluster 3.0
12/01 Data Services Installation and Configuration Guide.
The storage arrays supported with this cluster topology include the Sun StorEdge T3/T3+ array
(single-controller and partner-group configurations), the Sun StorEdge 9900 Series storage
device, and the Sun StorEdge 3900 storage device.
Caution Do not use this procedure if your cluster is running an Oracle Real Application Clusters
configuration. At this time, removing a node in an Oracle Real Application Clusters configuration
might cause nodes to panic at reboot.
For Instructions, Go To
# scswitch -S -h from-node
- Use scswitch
Remove the node from all resource groups.
- Use scrgadm
Remove node from all disk device groups
- Use scconf, metaset, and scsetup
Sun Cluster data services collection: See the procedure for how to
remove a node from an existing resource group.
Sun Cluster system administration documentation: see the
procedures for how to remove a node from a disk device group
(separate procedures for Solstice DiskSuite, VERITAS Volume
Manager, and raw disk device groups).
Caution: Do not remove the quorum device if you are
removing a node from a two-node cluster.
Sun Cluster system administration documentation: How to
Remove a Quorum Device.
Note that although you must remove the quorum device before
you remove the storage device in the next step, you can add the
quorum device back immediately afterward.
- Use scconf -a -q
globaldev=d[n],node=node1,node=node2,...
Place the node being removed into
maintenance state.
(Continued)
For Instructions, Go To
Remove all logical transport connections to Sun Cluster system administration documentation: How to
the node being removed.
Remove Cluster Transport Cables, Transport Adapters, and
Transport Junctions
- Use scsetup.
Remove node from the cluster software
configuration.
- Use scconf.
Back up all database tables, data services, and volumes that are associated with the storage array
that you are removing.
Determine the resource groups and device groups that are running on the node to be disconnected.
# scstat
If necessary, move all resource groups and device groups off the node to be disconnected.
Caution If your cluster is running Oracle Real Application Clusters software, shut down the Oracle
Real Application Clusters database instance that is running on the node before you move the groups
off the node. For instructions see the Oracle Database Administration Guide.
# scswitch -S -h from-node
4
If you use VERITAS Volume Manager or raw disk, use the scconf command to remove the
device groups.
If you use Solstice DiskSuite/Solaris Volume Manager, use the metaset command to remove the
device groups.
If the cluster is running HAStorage or HAStoragePlus, remove the node from the resource group's
nodelist.
# scrgadm -a -g resource-group -h nodelist
See the Sun Cluster data services collection for more information on changing a resource group's
nodelist.
7
If the storage array you are removing is the last storage array that is connected to the node,
disconnect the fiber-optic cable between the node and the hub or switch that is connected to this
storage array (otherwise, skip this step).
Do you want to remove the host adapter from the node you are disconnecting?
10
11
12
If OPS/RAC software has been installed, remove the OPS/RAC software package from the node that
you are disconnecting.
# pkgrm SUNWscucm
Caution If you do not remove the Oracle Real Application Clusters software from the node you
disconnected, the node will panic when the node is reintroduced to the cluster and potentially cause
a loss of data availability.
13
For more information, see the Sun Cluster system administration documentation.
14
On the node, update the device namespace by updating the /devices and /dev entries.
# devfsadm -C
# scdidadm -C
15
A P P E N D I X
This chapter contains the procedures on how to install and configure Sun Cluster HA for SAP
liveCache.
This chapter contains the following procedures.
[Table residue: the original table here lists the liveCache-related components (NFS file system, RDBMS, R/3, liveCache) and the products that protect them, including the Sun Cluster data service for your RDBMS and Sun Cluster HA for SAP liveCache.]
For Instructions, Go To
Plan the Sun Cluster HA for SAP liveCache Your SAP documentation.
installation
Sun Cluster data services collection
Prepare the nodes and disks
TABLE B2 Task Map: Installing and Configuring Sun Cluster HA for SAP liveCache
(Continued)
Task
For Instructions, Go To
Sun Cluster HA for SAP liveCache installation and configuration because your SAP documentation
includes configuration restrictions and requirements that are not outlined in Sun Cluster
documentation or dictated by Sun Cluster software.
Configuration Requirements
Caution Your data service configuration might not be supported if you do not adhere to these
requirements.
Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for
SAP liveCache. These requirements apply to Sun Cluster HA for SAP liveCache only. You must meet
these requirements before you proceed with your Sun Cluster HA for SAP liveCache installation and
configuration.
For requirements that apply to all data services, see Sun Cluster data services collection.
Configure SAP xserver so that SAP xserver starts on all nodes that the SAP liveCache resource
can fail over to. To implement this configuration, ensure that the nodelist of the SAP xserver
resource group and the SAP liveCache resource group contain the same nodes. Also, the value of
desired_primaries and maximum_primaries of the SAP xserver resource must be equal to the
number of nodes listed in the nodelist parameter of the SAP liveCache resource. For more
information, see How to Register and Configure Sun Cluster HA for SAP liveCache on page
156.
[Figure residue: the original figure shows the CI, APP, and DB instances, the SAP liveCache instance, and the SAP xserver processes distributed across the cluster nodes.]
Configuration Considerations
Use the information in this section to plan the installation and configuration of Sun Cluster HA for
SAP liveCache. The information in this section encourages you to think about the impact your
decisions have on the installation and configuration of Sun Cluster HA for SAP liveCache.
Install SAP liveCache on its own global device group, separate from the global device group for
the APO Oracle database and SAP R/3 software. This separate global device group for SAP
liveCache ensures that the SAP liveCache resource can depend on the HAStoragePlus resource
for SAP liveCache only.
If you want to run SAP xserver as any user other than user root, create that user on all nodes on
which SAP xserver runs, and define this user in the Xserver_User extension property. SAP
xserver starts and stops based on the user you identify in this extension property. The default for
this extension property is user root.
Configure SAP xserver as a failover resource unless you are running multiple liveCache instances
that overlap.
What resource groups will you use for network addresses and application resources and the
dependencies between them?
What is the logical hostname (for SAP liveCache resource) for clients that will access the data
service?
b. On each node that can master the SAP liveCache resource, ensure that files appears first for the
protocols database entry in the /etc/nsswitch.conf file.
Example:
protocols: files nis
Sun Cluster HA for SAP liveCache uses the su - user command and the dbmcli command to start and
stop SAP liveCache.
The network information name service might become unavailable when a cluster node's public
network fails. Implementing the preceding changes to the /etc/nsswitch.conf file ensures that the
su(1M) command and the dbmcli command do not refer to the NIS/NIS+ name services.
Create the .XUSER.62 file for the SAP APO administrator user and the SAP liveCache administrator
user by using the following command.
# dbmcli -d LC-NAME -n logical-hostname -us user,passwd
LC-NAME
Specifies the name of the SAP liveCache database instance
logical-hostname
Specifies the logical hostname that is used with the SAP liveCache resource
Caution Neither SAP APO transaction LC10 nor Sun Cluster HA for SAP liveCache functions
properly if you do not create this file correctly.
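As an illustration only, with a hypothetical instance name, logical hostname, and credentials, the command might look like this:
# dbmcli -d LC1 -n lchost-1 -us control,secret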
Copy /usr/spool/sql from the node, on which you installed SAP liveCache, to all the nodes that will
run the SAP liveCache resource. Ensure that the ownership of these files is the same on all nodes as it is
on the node on which you installed SAP liveCache.
Example:
# tar cfB - /usr/spool/sql | rsh phys-schost-1 tar xfB -
Create the failover resource group to hold the network and SAP liveCache resource.
# scrgadm -a -g livecache-resource-group [-h nodelist]
Verify that you added all the network resources you use to your name service database.
Log on to the node that hosts the SAP liveCache resource group.
Start SAP xserver manually on the node that hosts the SAP liveCache resource group.
# su - lc-nameadm
# x_server start
lc-name
Log on to SAP APO System by using your SAP GUI with user DDIC.
Go to transaction LC10 and change the SAP liveCache host to the logical hostname you defined in
Step 3.
liveCache host: lc-logical-hostname
Log on to SAP APO System by using your SAP GUI with user DDIC.
Go to transaction LC10.
Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.
Choose the Add Support for New Data Service to This Cluster Node menu option.
The scinstall utility prompts you for additional information.
Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD-ROM as the data services cd.
Use the procedure in Sun Cluster data services collection to configure the extension properties if you
have already created your resources. You can update some extension properties dynamically. You can
update others, however, only when you create or disable a resource. The Tunable fields in Table B3
and Table B4 indicate when you can update each property. See Appendix A for details on all Sun
Cluster properties.
TABLE B3 Sun Cluster HA for SAP liveCache (SUNW.sap_xserver) Extension Properties
Name/Data Type
Description
Monitor_retry_count
Monitor_retry_ interval
Probe_timeout
(Continued)
Name/Data Type
Description
Description
Monitor_retry_count
Monitor_retry_interval
Description
Probe_timeout
(Continued)
Default: 90
Tunable: Any time
xserver serves multiple SAP liveCache instances in the cluster. More than one SAP xserver resource
that runs on the same cluster causes conflicts between the SAP xserver resources. These conflicts
cause all SAP xserver resources to become unavailable. If you attempt to start the SAP xserver twice,
you receive an error message that says Address already in use.
Become superuser on one of the nodes in the cluster that will host the SAP liveCache resource.
LC-NAME
3
Note The CONFDIR_LIST=put-Confdir_list-here entry exists only in the Sun Cluster 3.1
version.
b. Replace put-LC_NAME-here with the SAP liveCache instance name. The SAP liveCache instance
name is the value you defined in the Livecache_Name extension property.
LC_NAME="liveCache-instance-name"
Example:
If the SAP liveCache instance name is LC1 and the SAP liveCache software directory is /sapdb,
edit the lccluster script as follows.
LC_NAME="LC1"
CONFDIR_LIST="/sapdb" [Sun Cluster 3.1 version only]
4
Configure the SAP xserver as a scalable resource, completing the following substeps.
a. Create a scalable resource group for SAP xserver. Configure SAP xserver to run on all the potential
nodes that SAP liveCache will run on.
Note Configure SAP xserver so that SAP xserver starts on all nodes that the SAP liveCache
resources can fail over to. To implement this configuration, ensure that the nodelist parameter of
the SAP xserver resource group contains all the nodes listed in the liveCache resource group's
nodelist. Also, the value of desired_primaries and maximum_primaries of the SAP xserver
resource group must be equal to each other.
# scrgadm -a -g xserver-resource-group \
-y Maximum_primaries=value \
-y Desired_primaries=value \
-h nodelist
c. Enable the scalable resource group that now includes the SAP xserver resource.
# scswitch -Z -g xserver-resource-group
9
10
Set up a resource group dependency between SAP xserver and SAP liveCache.
# scrgadm -c -g livecache-resource-group \
-y rg_dependencies=xserver-resource-group
11
12
Are you running an APO application server on a node that SAP liveCache can fail over to?
13
Is the scalable APO application server resource group already in an RGOffload resource's
rg_to_offload list?
# scrgadm -pvv | grep -i rg_to_offload | grep value:
If no, consider adding an RGOffload resource in the SAP liveCache resource group.
This conguration enables you to automatically shut down the APO application server if the
liveCache resource fails over to a node on which the APO application server was running.
State
Description
OFFLINE
COLD
WARM
STOPPED INCORRECTLY
ERROR
UNKNOWN
Log on to the node that hosts the resource group that contains the SAP liveCache resource, and
verify that the fault monitor functionality works correctly.
a. Terminate SAP liveCache abnormally by stopping all SAP liveCache processes.
Sun Cluster software restarts SAP liveCache.
# ps -ef|grep sap|grep kernel
# kill -9 livecache-processes
b. Terminate SAP liveCache by using the Stop liveCache button in LC10 or by running the lcinit
command.
Sun Cluster software does not restart SAP liveCache. However, the SAP liveCache resource status
message reflects that SAP liveCache stopped outside of Sun Cluster software through the use of
the Stop liveCache button in LC10 or the lcinit command. The state of the SAP liveCache
resource is UNKNOWN. When the user successfully restarts SAP liveCache by using the Start
liveCache button in LC10 or the lcinit command, the Sun Cluster HA for SAP liveCache Fault
Monitor updates the resource state and status message to indicate that SAP liveCache is running
under the control of Sun Cluster software.
2
Log on to SAP APO by using your SAP GUI with user DDIC, and verify that SAP liveCache starts
correctly by using transaction LC10.
As user root, switch the SAP liveCache resource group to another node.
# scswitch -z -g livecache-resource-group -h node2
Repeat Step 1 through Step 3 for each potential node on which the SAP liveCache resource can run.
Log on to the nodes that host the SAP xserver resource, and verify that the fault monitor
functionality works correctly.
Terminate SAP xserver abnormally by stopping all SAP xserver processes.
# ps -ef|grep xserver
# kill -9 xserver-process
Extension Properties
See Sun Cluster HA for SAP liveCache Extension Properties on page 154 for the extension
properties that the Sun Cluster HA for SAP liveCache Fault Monitors use.
If SAP xserver is unavailable, SAP xserver probe restarts or fails over the SAP xserver resource
if it reaches the maximum number of restarts.
If any system error messages are logged in syslog during the checking process, the SAP
xserver probe concludes that a partial failure has occurred. If the system error messages
logged in syslog occur four times within the probe_interval, SAP xserver probe restarts
SAP xserver.
If the parent process terminates, SAP liveCache probe returns liveCache is offline.
4. If SAP liveCache is not online, SAP liveCache probe determines if the user stopped SAP
liveCache outside of Sun Cluster software by using the Stop liveCache button in LC10 or the
lcinit command.
5. If the user stopped SAP liveCache outside of Sun Cluster software by using the Stop liveCache
button in LC10 or the lcinit command, returns OK.
6. If the user did not stop SAP liveCache outside of Sun Cluster software by using the Stop
liveCache button in LC10 or the lcinit command, checks SAP xserver availability.
If SAP xserver is unavailable, returns OK because the probe cannot restart SAP liveCache if
SAP xserver is unavailable.
7. If any errors are reported from system function calls, returns system failure.
A P P E N D I X
This chapter provides instructions on how to configure and administer Sun Cluster HA for Sybase
ASE on your Sun Cluster nodes.
This chapter contains the following procedures.
You must configure Sun Cluster HA for Sybase ASE as a failover data service. See the Sun Cluster
concepts documentation and Sun Cluster data services collection for general information
about data services, resource groups, resources, and other related topics.
For Instructions, Go To
TABLE C1 Task Map: Installing and Configuring Sun Cluster HA for Sybase ASE
(Continued)
Task
For Instructions, Go To
Sybase ASE application files: These files include Sybase ASE binaries and libraries. You can
install these files on either the local file system or the cluster file system.
See the Sun Cluster data services collection for the advantages and disadvantages of placing the
Sybase ASE binaries on the local file system as opposed to the cluster file system.
Sybase ASE configuration files: These files include the interfaces file, config file, and
environment file. You can install these files on the local file system (with links), the highly
available local file system, or on the cluster file system.
Database data files: These files include Sybase device files. You must install these files on the
highly available local file system or the cluster file system as either raw devices or regular files.
Note Before you configure Sun Cluster HA for Sybase ASE, use the procedures that the Sun Cluster
data services collection describes to configure the Sybase ASE software on each node.
steps on all of the nodes, the Sybase ASE installation will be incomplete, and Sun Cluster HA for
Sybase ASE will fail during startup.
Note Consult the Sybase ASE documentation before you perform this procedure.
Configure the /etc/nsswitch.conf file as follows so that Sun Cluster HA for Sybase ASE starts and
stops correctly if a switchover or failover occurs.
On each node that can master the logical host that runs Sun Cluster HA for Sybase ASE, include one
of the following entries for group in the /etc/nsswitch.conf file.
group:
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus
Sun Cluster HA for Sybase ASE uses the su user command to start and stop the database.
The network information name service might become unavailable when a cluster node's public
network fails. Adding one of the preceding entries for group ensures that the su(1M) command does
not refer to the NIS/NIS+ name services if the network information name service is unavailable.
3
Configure the cluster file system for Sun Cluster HA for Sybase ASE.
If raw devices contain the databases, configure the global devices for raw-device access. See the Sun
Cluster data services collection for information on how to configure global devices.
If you use the Solstice DiskSuite/Solaris Volume Manager volume manager, configure the Sybase ASE
software to use UNIX file system (UFS) logging on mirrored metadevices or raw-mirrored
metadevices. See the Solstice DiskSuite/Solaris Volume Manager documentation for information on how
to configure raw-mirrored metadevices.
Note If you install the Sybase ASE binaries on a local disk, use a separate disk if possible. Installing
the Sybase ASE binaries on a separate disk prevents the binaries from being overwritten during
operating environment reinstallation.
On each node, create an entry for the database administrator (DBA) group in the /etc/group file,
and add potential users to the group.
Verify that the root and sybase users are members of the dba group, and add entries as necessary for
other DBA users. Ensure that group IDs are the same on all of the nodes that run Sun Cluster HA for
Sybase ASE, as the following example illustrates.
dba:*:520:root,sybase
You can create group entries in a network name service. If you do so, also add your entries to the local
/etc/group file to eliminate dependency on the network name service.
6
Ensure that the sybase user entry is the same on all of the nodes that run Sun Cluster HA for Sybase
ASE.
Cluster file system
Note Before you install the Sybase ASE software on the cluster file system, start the Sun Cluster
software and become the owner of the disk device group.
See Preparing to Install Sun Cluster HA for Sybase ASE on page 164 for more information about
installation locations.
Create a failover resource group to hold the network and application resources.
# scrgadm -a -g resource-group [-h nodelist]
-g resource-group
Specifies the name of the resource group. This name can be your choice but
must be unique for resource groups within the cluster.
-h nodelist
Note Use the -h option to specify the order of the node list. If all of the nodes in the cluster are
Verify that you have added all of the network resources that Sun Cluster HA for Sybase ASE uses to
either the /etc/inet/hosts file or to your name service (NIS, NIS+) database.
Add a network resource (logical hostname or shared address) to the failover resource group.
# scrgadm -a -L -g resource-group -l logical-hostname [-n netiflist]
-l logical-hostname
-n netiflist
# scswitch -Z -g resource-group
7
On the node mastering the resource group that you just created, log in as sybase.
The installation of the Sybase binaries must be performed on the node where the corresponding
logical host is running.
See Also
After you install the Sybase ASE software, go to How to Configure Sybase ASE Database Access
With Solstice DiskSuite/Solaris Volume Manager on page 168 if you use the Solstice
DiskSuite/Solaris Volume Manager volume manager. Go to How to Configure Sybase ASE Database
Access With VERITAS Volume Manager on page 169 if you use the VERITAS Volume Manager
(VxVM).
Verify that the sybase user and the dba group own the $SYBASE_HOME directory and the $SYBASE_HOME
child directories.
Run the scstat(1M) command to verify that the Sun Cluster software
functions correctly.
Configure Sybase ASE database access with Solstice DiskSuite/Solaris Volume Manager or
VERITAS Volume Manager.
Configure the disk devices for the Solstice DiskSuite/Solaris Volume Manager software to use.
See the Sun Cluster software installation documentation for information on how to configure
Solstice DiskSuite/Solaris Volume Manager.
If you use raw devices to contain the databases, run the following commands to change each
raw-mirrored metadevice's owner, group, and mode.
If you do not use raw devices, do not perform this step.
a. If you create raw devices, run the following commands for each device on each node that can
master the Sybase ASE resource group.
# chown sybase /dev/md/metaset/rdsk/dn
# chgrp dba /dev/md/metaset/rdsk/dn
# chmod 600 /dev/md/metaset/rdsk/dn
metaset
/rdsk/dn
Specifies the name of the raw disk device within the metaset diskset.
Configure the disk devices for the VERITAS Volume Manager software to use.
See the Sun Cluster software installation documentation for information on how to configure
VERITAS Volume Manager.
If you use raw devices to contain the databases, run the following commands on the current
disk-group primary to change each device's owner, group, and mode.
If you do not use raw devices, do not perform this step.
a. If you create raw devices, run the following command for each raw device.
# vxedit -g diskgroup set user=sybase group=dba mode=0600 volume
-g resource-group
Specifies the name of the resource group. This name can be your choice
but must be unique for resource groups within the cluster.
-h nodelist
c. Reregister the disk device group with the cluster to keep the VERITAS Volume Manager
namespace consistent throughout the cluster.
# scconf -c -D name=diskgroup
Establish a highly available IP address and name, that is, a network resource that operates at
installation time.
Locate device paths for all of the Sybase ASE devices, including the master device and system
devices, in the highly available local file system or cluster file system. Configure device paths as
one of the following file types.
regular files
raw devices
files that the Solstice DiskSuite/Solaris Volume Manager software or the VERITAS Volume
Manager software manage
Locate the Sybase ASE server logs in either the cluster file system or the local file system.
The Sybase ASE 12.0 environment consists of the data server, backup server, monitor server, text
server, and XP server. The data server is the only server that you must configure; you can choose
whether to configure all of the other servers.
The entire cluster must contain only one copy of the interfaces file. The $SYBASE directory
contains the interfaces file. If you plan to maintain per-node file copies, ensure the file contents
are identical.
All of the clients that connect to Sybase ASE servers connect with Sybase OpenClient libraries
and utilities. When you configure the Sybase ASE software, in the interfaces file, enter
information about the network resource and various ports. All of the clients use this connection
information to connect to the Sybase ASE servers.
Perform the following steps to create the Sybase ASE database environment.
1
Run the GUI-based utility srvbuild to create the Sybase ASE database.
The $SYBASE/ASE_12-0/bin directory contains this utility. See the Sybase ASE document entitled
Installing Sybase Adaptive Server Enterprise on Sun Solaris 2.x (SPARC).
To verify successful database installation, ensure that all of the servers start correctly.
Run the ps(1) command to verify the operation of all of the servers. Sybase ASE server logs indicate
any errors that have occurred.
Set the password for the Sybase ASE system administrator account.
See the Sybase Adaptive Server Enterprise System Administration Guide for details on changing the sa
login password.
See Sun Cluster HA for Sybase ASE Fault Monitor on page 180 for more information.
5
See Also
After you create the Sybase ASE database environment, go to How to Install Sun Cluster HA for
Sybase ASE Packages on page 172.
Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.
Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.
Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD as the data services cd.
See Also
When you finish the Sun Cluster HA for Sybase ASE package installation, go to How to Register and
Configure Sun Cluster HA for Sybase ASE on page 172.
This procedure includes creating the HAStoragePlus resource type. This resource type synchronizes
actions between HAStorage and Sun Cluster HA for Sybase ASE and enables you to use a highly
available local file system. Sun Cluster HA for Sybase ASE is disk-intensive, and therefore you should
configure the HAStoragePlus resource type.
See the SUNW.HAStoragePlus(5) man page and Sun Cluster data services collection for more
information about the HAStoragePlus resource type.
Note Other options also enable you to register and configure the data service. See Sun Cluster data
The names of the cluster nodes that master the data service.
The network resource that clients use to access the data service. You typically configure the IP
address when you install the cluster. See the sections in the Sun Cluster software installation
documentation on planning the Sun Cluster environment and on how to install the Solaris
operating environment for details.
Run the scrgadm command to register resource types for Sun Cluster HA for Sybase ASE.
# scrgadm -a -t SUNW.sybase
-a
-t SUNW.sybase
Specifies the resource type name that is predefined for your data service.
Note AffinityOn must be set to TRUE and the local file system must reside on global disk groups to
be failover.
Run the scswitch command to complete the following tasks and bring the resource group sybase-rg
online on a cluster node.
This node will be made the primary for device group sybase-set1 and raw device
/dev/global/dsk/d1. Device groups associated with file systems such as /global/sybase-inst will
also be made primaries on this node.
# scswitch -Z -g sybase-rg
6
-j resource
Specifies the resource name to add.
-g resource-group
Specifies the resource group name into which the RGM places the resources.
-t SUNW.sybase
Specifies the resource type to add.
-x Environment_File=environment-file
Sets the name of the environment file.
-x Adaptive_Server_Name=adaptive-server-name
Sets the name of the adaptive server.
-x Backup_Server_Name=backup-server-name
Sets the name of the backup server.
-x Text_Server_Name=text-server-name
Sets the name of the text server.
-x Monitor_Server_Name=monitor-server-name
Sets the name of the monitor server.
-x Adaptive_Server_Log_File=log-file-path
Sets the path to the log file for the adaptive server.
-x Stop_File=stop-file-path
Sets the path to the stop file.
-x Connect_string=user/passwd
Specifies the user name and password that the fault monitor uses to connect to the database.
You do not have to specify extension properties that have default values. See Configuring Sun
Cluster HA for Sybase ASE Extension Properties on page 177 for more information.
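As a minimal sketch only, using hypothetical resource, group, and path names, a complete command combining these options might look like the following:
# scrgadm -a -j sybase-rs -g sybase-rg -t SUNW.sybase \
-x Environment_File=SYBASE.env \
-x Adaptive_Server_Name=adaptive-server \
-x Adaptive_Server_Log_File=/global/sybase/adaptive-server.log \
-x Stop_File=/global/sybase/stop_file \
-x Connect_string=sa/sa-password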
7
messages to print to the console, update the appropriate RUN files to redirect these messages to
another file.
# scswitch -Z -g resource-group
See Also
After you register and configure Sun Cluster HA for Sybase ASE, go to How to Verify the Sun
Cluster HA for Sybase ASE Installation on page 175.
Log in to the node that masters the Sybase ASE resource group.
Verify that the Sun Cluster HA for Sybase ASE resource is online.
# scstat -g
Inspect the Sybase ASE logs to determine the cause of any errors that have occurred.
Conrm that you can connect to the data server and execute the following test command.
# isql -S adaptive-server -U sa
isql> sp_help
isql> go
isql> quit
Switch the resource group that contains the Sybase ASE resource to another cluster member.
# scswitch -z -g resource-group -h node
switchover occurs, the existing client connections to Sybase ASE terminate, and clients must
reestablish their connections. After a switchover, the time that is required to replay the Sybase ASE
transaction log determines Sun Cluster HA for Sybase ASE recovery time.
As part of your regular file maintenance, check the following log files and remove files that you no
longer need.
syslog
message_log
restart_history
sybase user
sybase group
Description
Environment_File
File that contains all of the Sybase ASE environment variables. This file is automatically
created in the Sybase home directory.
Default: None
Range: Minimum=1
Tunable: When disabled
Adaptive_Server_Name The name of the data server. Sun Cluster HA for Sybase ASE uses this property to locate
the RUN server in the $SYBASE/$ASE/install directory.
Default: None
Range: Minimum=1
Tunable: When disabled
Backup_Server_Name The name of the backup server. Sun Cluster HA for Sybase ASE uses this property to
locate the RUN server in the $SYBASE/$ASE/install directory. If you do not set this
property, Sun Cluster HA for Sybase ASE will not manage the server.
Default: Null
Range: None
Tunable: When disabled
Monitor_Server_Name The name of the monitor server. Sun Cluster HA for Sybase ASE uses this property to
locate the RUN server in the $SYBASE/$ASE/install directory. If you do not set this
property, Sun Cluster HA for Sybase ASE will not manage the server.
Default: Null
Range: None
Tunable: When disabled
Text_Server_Name
The name of the text server. The Sun Cluster HA for Sybase ASE data service uses this
property to locate the RUN server in the $SYBASE/$ASE/install directory. If you do not
set this property, the Sun Cluster HA for Sybase ASE data service will not manage the
server.
Default: Null
Range: None
Tunable: When disabled
(Continued)
Description
Adaptive_Server_Log_File The path to the log file for the adaptive server. Sun Cluster HA for Sybase ASE
continually reads this property for error monitoring.
Default: None
Range: Minimum=1
Tunable: When disabled
Stop_File
Sun Cluster HA for Sybase ASE uses this property during server stoppages. This
property contains the sa password. Protect this property from general access.
Default: None
Range: Minimum=1
Tunable: When disabled
Probe_timeout
Debug_level
Debug level for writing to the Sun Cluster HA for Sybase ASE log.
Default: 0
Range: 0 15
Tunable: Any time
Connect_string
String of format user/password. Sun Cluster HA for Sybase ASE uses this property for
database probes.
Default: None
Range: Minimum=1
Tunable: When disabled
Connect_cycle
Number of fault monitor probe cycles before Sun Cluster HA for Sybase ASE
establishes a new connection.
Default: 5
Range: 1 100
Tunable: Any time
(Continued)
Name/Data Type
Description
Wait_for_online
Whether the start method waits for the database to come online before exiting.
Default: FALSE
Range: TRUE FALSE
Tunable: Any time
The following sections describe the Sun Cluster HA for Sybase ASE fault monitor processes and the
extension properties that the fault monitor uses.
If an operation fails, the main process checks the action table for an action to perform and then
performs the predetermined action. If an operation fails, the main process can perform the following
actions, which execute external programs as separate processes in the background.
1. Restarts the resource on the current node.
2. Restarts the resource group on the current node.
3. Fails over the resource group to the next node on the resource group's nodelist.
The server fault monitor also scans the Adaptive_Server_Log le and acts to correct any errors that
the scan identies.
Extension Properties
The fault monitor uses the following extension properties.
Thorough_probe_interval
Retry_count
Retry_interval
Probe_timeout
Connect_string
Connect_cycle
Adaptive_Server_Log
See Configuring Sun Cluster HA for Sybase ASE Extension Properties on page 177 for more
information about these extension properties.
A P P E N D I X
This appendix describes the prerequisites and procedures for installation of the Remote Shared
Memory Reliable Datagram Transport (RSMRDT) driver. This appendix includes the following
sections:
Note The RSMRDT driver should not be installed until RSM with 9iRAC is supported. Contact your
Restrictions
Use of the RSMRDT Driver is restricted to customers running an Oracle9i release 2 SCI
configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed
installation and configuration instructions. The SUNWscrdt package (RSMRDT driver package)
depends on the following packages:
The SUNWscrdt package also has a functional dependency on the following RSM packages:
Verify that SUNWrsmo and SUNWrsmx are installed before completing this procedure.
Become superuser on the node to which you want to install the SUNWscrdt package.
Note You must repeat this procedure for each node in the cluster.
pathname
Verify that no applications are using the RSMRDT driver before performing this procedure.
Become superuser on the node to which you want to uninstall the SUNWscrdt package.
Note You must repeat this procedure for each node in the cluster.
If the modunload command fails, applications are probably still using the driver. Terminate the
applications before running modunload again.
clif_rsmrdt_id    Module ID of the clif_rsmrdt module
6
rsmrdt_id    Module ID of the rsmrdt module
7
Example D1
A P P E N D I X   E
Installing and Configuring Sun Cluster HA for SAP
This appendix describes how to install and configure Sun Cluster HA for SAP. This appendix
contains the following procedures.
Protected by
SAP database
NFS file system
Use the scinstall(1M) command to install Sun Cluster HA for SAP. Sun Cluster HA for SAP
requires a functioning cluster with the initial cluster framework already installed. See the Sun Cluster
software installation documentation for details on initial installation of clusters and data service
software. Register Sun Cluster HA for SAP after you successfully install the basic components of the
Sun Cluster and SAP software.
TABLE E-2 Task Map: Installing and Configuring Sun Cluster HA for SAP

Install SAP, the SAP scalable application server, and the database

Configure the Sun Cluster HA for DBMS

Install the Sun Cluster HA for SAP packages: How to Install the Sun Cluster HA for SAP Packages
on page 204

Register and configure Sun Cluster HA for SAP as a failover data service: How to Register and
Configure Sun Cluster HA for SAP with Central Instance on page 211, then How to Register and
Configure Sun Cluster HA for SAP as a Failover Data Service on page 212

or

Register and configure Sun Cluster HA for SAP as a scalable data service: How to Register and
Configure Sun Cluster HA for SAP with Central Instance on page 211, then How to Register and
Configure Sun Cluster HA for SAP as a Scalable Data Service on page 213

Set up a lock file
Verify the Sun Cluster HA for SAP installation and configuration:
How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance on page 216
How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover Data Service on page 217
How to Verify Sun Cluster HA for SAP Installation and Configuration as a Scalable Data Service on page 217
Configuration Restrictions

Caution Your data service configuration might not be supported if you do not observe these
restrictions.

Use the restrictions in this section to plan the installation and configuration of Sun Cluster HA for
SAP. This section provides a list of software and hardware configuration restrictions that apply to
Sun Cluster HA for SAP.
For restrictions that apply to all data services, see the Sun Cluster release notes documentation.
Limit node names as outlined in the SAP installation guide - This limitation is an SAP
software restriction.
Configuration Requirements

Caution Your data service configuration might not be supported if you do not adhere to these
requirements.
Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for
SAP. These requirements apply to Sun Cluster HA for SAP only. You must meet these requirements
before you proceed with your Sun Cluster HA for SAP installation and configuration.
For requirements that apply to all data services, see Configuring and Administering Sun Cluster
Data Services in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
After you create all of the file systems for the database and for SAP software, create the mount
points, and put the mount points in the /etc/vfstab file on all of the cluster nodes - See the
SAP installation guides, Installation of the SAP R/3 on UNIX and R/3 Installation on UNIX-OS
Dependencies, for details on how to set up the database and SAP file systems.
Create the required groups and users on all of the cluster nodes - See the SAP installation
guides, Installation of the SAP R/3 on UNIX and R/3 Installation on UNIX-OS Dependencies, for
details on how to create SAP groups and users.
Configure Sun Cluster HA for NFS on the cluster that hosts the central instance if you plan to
install some external SAP application servers - See Overview of the Installation and
Configuration Process for Sun Cluster HA for NFS in Sun Cluster Data Service for NFS Guide for
Solaris OS for details on how to configure Sun Cluster HA for NFS.
Install application servers on either the same cluster that hosts the central instance or on a
separate cluster - If you install and configure any application server outside of the cluster
environment, Sun Cluster HA for SAP does not perform fault monitoring and does not
automatically restart or fail over those application servers. You must manually start and shut
down application servers that you install and configure outside of the cluster environment.
Use an SAP software version with automatic enqueue reconnect mechanism capability - Sun
Cluster HA for SAP relies on this capability. SAP 4.0 software with patch information and later
releases should have automatic enqueue reconnect mechanism capability.
FIGURE E-1 Four-Node Cluster with Central Instance, Application Servers, and Database (the figure
shows a four-node cluster, CLUSTER 1, hosting the database (DB), the central instance (CI), and
application servers AS1 and AS2)
FIGURE E-2 Two-Node Cluster with Central Instance, NFS, and Non-HA External Application Servers
(the figure shows a two-node cluster, CLUSTER 1, hosting the central instance (CI) and NFS, with
external application servers AS1, AS2, and AS3 outside the cluster)
Note The configuration in Figure E-2 was a common configuration under previous Sun Cluster
releases. To use the Sun Cluster software to the fullest extent, configure SAP as shown in Figure E-1
or Figure E-3.
FIGURE E-3 Two-Node Cluster With Central Instance and Development Node (the figure shows a
two-node cluster, CLUSTER 1, hosting the central instance (CI) and a development system (DEV))
Configuration Considerations
Use the information in this section to plan the installation and configuration of Sun Cluster HA for
SAP. The information in this section encourages you to think about the impact your decisions have
on the installation and configuration of Sun Cluster HA for SAP.
Retrieve the latest patch for the sapstart executable - This patch enables Sun Cluster HA for
SAP users to configure a lock file. For details on the benefits of this patch in your cluster
environment, see Setting Up a Lock File on page 214.
Read all of the related SAP online service-system notes for the SAP software release and
database that you are installing on your Sun Cluster configuration - Identify any known
installation problems and fixes.
Consult SAP software documentation for memory and swap recommendations - SAP
software uses a large amount of memory and swap space.
Generously estimate the total possible load on nodes that might host the central instance, the
database instance, and the application server, if you have an internal application server - This
consideration is especially important if you configure the cluster to ensure that the central
instance, database instance, and application server will all exist on one node if failover occurs.
Scalable Applications
Ensure that the SAPSIDadm home directory resides on a cluster file system - This consideration
enables you to maintain only one set of scripts for all application server instances that run on all
nodes. However, if you have some application servers that need to be configured differently (for
example, application servers with different profiles), install those application servers with
different instance numbers, and then configure them in a separate resource group.
Install the application server's directory locally on each node instead of on a cluster file
system - This consideration ensures that another application server does not overwrite the
log/data/work/sec directory for the application server.
Use the same instance number when you create all application server instances on multiple
nodes - This consideration ensures ease of maintenance and ease of administration because you
will only need to use one set of commands to maintain all application servers on multiple nodes.
Place the application servers into multiple resource groups if you want to use the RGOffload
resource type to shut down one or more application servers when a higher priority resource is
failing over - This consideration provides flexibility and availability if you want to use the
RGOffload resource type to offload one or more application servers for the database. The value
you gain from this consideration supersedes the ease of use you gain from placing the application
servers into one large group. See Freeing Node Resources by Offloading Noncritical Resource
Groups in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more
information on using the RGOffload resource type.
Create separate scalable application server instances for each SAP logon group.
Create an SAP lock file on the local instance directory - This consideration prevents a system
administrator from manually starting an application instance that is already running.
What resource groups will you use for network addresses and application resources, and what are
the dependencies between them?
What is the logical hostname (for failover services) for clients that will access the data service?
Description
SUNW.sap_ci
SUNW.sap_as
The *_v2 resource types are the latest version of the resource types (RT) for Sun Cluster HA for SAP.
The *_v2 resource types are a superset of the original RTs. Whenever possible, use the latest RTs
provided.
TABLE E4 Sun Cluster HA for SAP package from Sun Cluster 3.0 12/01
Resource Type
Description
SUNW.sap_ci
SUNW.sap_as
SUNW.sap_ci_v2
SUNW.sap_as_v2
Retain (do not upgrade) the existing SUNW.sap_ci and SUNW.sap_as resource types. Choose this
option if any of the following statements apply to you.
If you are upgrading the resource type for the central instance, skip to Step 7.
If you are converting a failover application server resource to a scalable application server
resource, proceed to Step 6.
See Also
Configure the /etc/nsswitch.conf file so that Sun Cluster HA for SAP starts and stops correctly in the
event of a switchover or a failover.
On each node that can master the logical host that runs Sun Cluster HA for SAP, include one of the
following entries for group in the /etc/nsswitch.conf file.
group: files
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus
Sun Cluster HA for SAP uses the su user command to start and probe SAP. The network information
name service might become unavailable when a cluster node's public network fails. When you add
one of the entries for group in the /etc/nsswitch.conf file, you ensure that the su(1M) command
does not refer to the NIS/NIS+ name services if the network information name service is unavailable.
See Also
Go to How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page
213.
Become superuser on one of the nodes in the cluster where you are installing the central instance.
Go to How to Enable Failover SAP Instances to Run in a Cluster on page 200 or How to Install an
SAP Scalable Application Server on page 197.
/sapmnt/SID
/usr/sap/SID -> all subdirectories except the app-instance subdirectory
/usr/sap/SID/home -> the SAPSIDadm home directory
/usr/sap/trans
Ensure that the central instance and the database can fail over.
Set up the lock file on a cluster file system for the central instance to prevent multiple startups
from a different node.
For the procedure on how to set up a lock file on the central instance, see How to Set Up a Lock
File for Central Instance or the Failover Application Server on page 215.
Ensure that all application servers can use the SAP binaries on a cluster file system.
On all nodes that will host the scalable application server, create a local directory for the
data/log/sec/work directories and the log files for starting and stopping the application server.
Create a local directory for each new application server.
Example:
# mkdir -p /usr/sap/local/SID/D03
Caution You must perform this step. If you do not perform this step, you will inadvertently install a
different application server instance on a cluster file system and the two application servers will
overwrite each other.
Set up a link to point to the local application server directory from a cluster file system, so that the
application server, the startup log file, and the stop log file are installed on the local file
system.
Example:
# ln -s /usr/sap/local/SID/D03 /usr/sap/SID/D03
Make a copy of the startsap script and the stopsap script, and save these files in the SAPSIDadm
home directory. The file names that you choose specify this instance.
# cp /usr/sap/SID/SYS/exe/run/startsap \
$SAPSID_HOME/startsap_instance-number
# cp /usr/sap/SID/SYS/exe/run/stopsap \
$SAPSID_HOME/stopsap_instance-number
Make backup copies of the following files because you will modify them. In the SAP profile directory,
modify all the file names for this instance. The file names that you choose must be specific to this
instance, and they must follow the same naming convention you chose in Step 8.
# mv SAPSID_Service-StringSystem-Number_physical-hostname \
SAPSID_Service-StringSystem_instance-number
# mv START_Service-StringSystem-Number_physical-hostname \
START_Service-StringSystem_instance-number
10
Modify the contents of the les you created in Step 9 to replace any reference to the physical host
with the instance number.
Caution It is important that you make your updates consistent so that you can start and stop this
application server instance from all the nodes that will run this scalable application server. For
example, if you make these changes for SAP instance number 02, then use 02 wherever this instance
number appears. If you do not use a consistent naming convention, you will be unable to start and stop
this application server instance from all the nodes that will run this scalable application server.
11
Edit the start script and the stop script so that the startup log file and the stop log file will be node
specific under the home directories of users sapsidadm and orasapsid.
Example:
# vi startsap_D03
Before:
LOGFILE=$R3S_LOGDIR/`basename $0`.log
After:
LOGFILE=$R3S_LOGDIR/`basename $0`_`uname -n`.log
12
Copy the application server (with the same SAPSID and the same instance number) on all nodes that
run the scalable application server.
The nodes that run the scalable application server are in the scalable application server resource
group nodelist.
13
Ensure that you can start and stop the application server from each node, and verify that the log
files are in the correct location.
14
See Also
Make backup copies of the files you will modify in Step 5 through Step 8.
Shut down the SAP instances (central instance and application server instances) and the database.
Make a copy of the startsap script and the stopsap script, and save these files in the SAPSIDadm
home directory. The file names that you choose must specify this instance.
# cp /usr/sap/SID/SYS/exe/run/startsap \
$SAPSID_HOME/startsap_logical-hostname_instance-number
# cp /usr/sap/SID/SYS/exe/run/stopsap \
$SAPSID_HOME/stopsap_logical-hostname_instance-number
In the SAPSIDadm home directory, modify all of the file names that reference a physical server
name.
In the SAPSIDadm home directory, modify all of the file contents, except log file contents, that
reference a physical server name.
In the SAP profile directory, modify all of the file names that reference a physical server name.
This entry enables the external application server to locate the central instance by using the network
resource (logical hostname).
For Application Server:
SAPLOCALHOST=as-logical-hostname
8
See Also
In the oraSAPSID home directory, modify all of the file names that reference a physical server
name.
In the oraSAPSID home directory, modify all of the file contents, except log file contents, that
reference a physical server name.
Ensure that the /usr/sap/tmp directory, owned by user sapsidadm and group sapsys, exists on all
nodes that can master the failover SAP instance.
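For example, a minimal sketch (sapsidadm and sapsys stand for your actual SAP administration user and group):
# mkdir -p /usr/sap/tmp
# chown sapsidadm:sapsys /usr/sap/tmp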
Go to Configuring Sun Cluster HA for DBMS on page 201.
Create the failover resource group to hold the network and central instance resources.
# scrgadm -a -g sap-ci-resource-group [-h nodelist]
Note Use the -h option to the scrgadm(1M) command to select the set of nodes on which the SAP
central instance can run.
Verify that you have added to your name service database all of the network resources that you use.
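The step that adds the network resource is abbreviated in this extract; as an illustration only, a logical hostname resource might be added to the group as follows (ci-logical-hostname is a placeholder for the hostname that clients use):
# scrgadm -a -L -g sap-ci-resource-group -l ci-logical-hostname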
Log in to the cluster member that hosts the central instance resource group.
Start the SAP GUI using the logical hostname, and verify that SAP initializes correctly.
The default dispatcher port is 3200.
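For instance, an illustrative check that the dispatcher is listening on its default port (not part of the documented procedure) is:
# netstat -an | grep 3200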
9
10
11
See Also
Repeat Step 5 through Step 9 until you verify startup and shutdown of the central instance on each
cluster node that can host the central instance.
Go to How to Verify an SAP Failover Application Server on page 203.
Create the failover resource group to hold the network and application server resources.
# scrgadm -a -g sap-as-fo-resource-group
Note Use the -h option to the scrgadm command to select the set of nodes on which the SAP
application server can run.
Verify that you added to your name service database all of the network resources that you use.
Log in to the cluster member that hosts the application server resource group.
Start the SAP GUI using the logical hostname, and verify that SAP initializes correctly.
Switch this resource group to another cluster member that can host the application server.
# scswitch -z -h node -g sap-as-fo-resource-group
10
See Also
Repeat Step 5 through Step 7 until you verify startup and shutdown of the application server on each
cluster node that can host the application server.
Go to How to Install the Sun Cluster HA for SAP Packages on page 204.
Load the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive.
Choose the Add Support for New Data Service to This Cluster Node menu option.
The scinstall utility prompts you for additional information.
Provide the path to the Sun Cluster 3.0 5/02 Agents CD-ROM.
The utility refers to the CD-ROM as the data services cd.
See Also
Property Name
Description
SAP Configuration
SAPSID
Ci_instance_id
Ci_services_string
TABLE E5 Sun Cluster HA for SAP Extension Properties for the Central Instance
Property Category
Property Name
Description
Starting SAP
Ci_start_retry_interval
Ci_startup_script
Stopping SAP
Stop_sap_pct
Percentage of stop-timeout variables that are used to stop SAP processes. The SAP shutdown script
is used to stop processes before calling Process Monitor Facility (PMF) to terminate and then kill the
processes.
Default: 95
Tunable: When disabled
Ci_shutdown_script
Probe
Message_server_name
Lgtst_ms_with_logicalhostname
Check_ms_retry
Maximum number of times the SAP Message Server check fails before a total failure is reported and
the Resource Group Manager (RGM) starts.
Default: 2
Tunable: When disabled
Probe_timeout
Monitor_retry_count
Monitor_retry_interval
Development System
Shutdown_dev
Dev_sapsid
Dev_shutdown_script
Dev_stop_pct
Percentage of startup timeouts Sun Cluster HA for SAP uses to shut down the development system
before starting the central instance.
Default: 20
Tunable: When disabled
TABLE E6 Sun Cluster HA for SAP Extension Properties for the Application Servers
Property Category
Property Name
Description
SAP Conguration
SAPSID
As_instance_id
As_services_string
String of application server services.
Default: D
Tunable: When disabled
Starting SAP
As_db_retry_interval
As_startup_script
Stopping SAP
Stop_sap_pct
Percentage of stop-timeout variables that are used to stop SAP processes. The SAP shutdown script
is used to stop processes before calling Process Monitor Facility (PMF) to terminate and then kill the
processes.
Default: 95
Tunable: When disabled
As_shutdown_script
Probe
Probe_timeout
Monitor_retry_count
Monitor_retry_interval
Become superuser on one of the nodes in the cluster that hosts the central instance.
For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local
File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
4
See Sun Cluster Data Services Planning and Administration Guide for Solaris OS for a list of extension
properties.
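The individual registration commands are abbreviated in this extract; as an illustration only, registering the resource type and creating the central instance resource might look like the following (the resource names and property values are placeholders, and only a subset of the extension properties is shown):
# scrgadm -a -t SUNW.sap_ci_v2
# scrgadm -a -j sap-ci-resource -g sap-ci-resource-group \
-t SUNW.sap_ci_v2 \
-x SAPSID=SAPSID -x Ci_instance_id=ci-instance-id \
-y resource_dependencies=sap-ci-storage-resource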
6
Enable the failover resource group that now includes the SAP central instance resource.
# scswitch -Z -g sap-ci-resource-group
If you configure the central instance resource to shut down a development system, you will receive
the following console message.
ERROR : SAPSYSTEMNAME not set
Please check environment and restart
This message displays when the central instance starts on a node that does not have the development
system installed and that is not meant to run the central instance. SAP renders this message, and you
can safely ignore it.
See Also
Go to How to Register and Configure Sun Cluster HA for SAP as a Failover Data Service on page
212 or How to Register and Configure Sun Cluster HA for SAP as a Scalable Data Service on page
213.
Become superuser on one of the nodes in the cluster that hosts the application server.
Add the HAStoragePlus resource to the failover application server resource group.
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j sap-as-storage-resource -g sap-as-fo-resource-group \
-t SUNW.HAStoragePlus \
-x filesystemmountpoints=mountpoint, ...
For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local
File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
4
See Sun Cluster HA for SAP Extension Properties on page 205 for a list of extension properties.
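As with the central instance, the resource-creation commands are abbreviated here; an illustrative sketch for the failover application server (placeholder names, and only a subset of the extension properties) is:
# scrgadm -a -t SUNW.sap_as_v2
# scrgadm -a -j sap-as-resource -g sap-as-fo-resource-group \
-t SUNW.sap_as_v2 \
-x SAPSID=SAPSID -x As_instance_id=as-instance-id \
-y resource_dependencies=sap-as-storage-resource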
6
Enable the failover resource group that now includes the SAP application server resource.
# scswitch -Z -g sap-as-fo-resource-group
See Also
Go to How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance
on page 216.
Become superuser on one of the nodes in the cluster that hosts the application server.
The SAP logon group performs the load balancing of the application servers.
Note If you are using the SUNW.RGOffload resource type to offload an application server within this
scalable application server resource group, then set Desired_primaries=0. See Freeing Node
Resources by Offloading Noncritical Resource Groups in Sun Cluster Data Services Planning and
Administration Guide for Solaris OS for more information about using the SUNW.RGOffload resource
type.
Add the HAStoragePlus resource to the scalable application server resource group.
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j sap-as-storage-resource \
-g sap-as-sa-appinstanceid-resource-group \
-t SUNW.HAStoragePlus \
-x filesystemmountpoints=mountpoint, ...
For more details on how to set up an HAStoragePlus resource, see Enabling Highly Available Local
File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
5
-x As_instance_id=as-instance-id \
-x As_startup_script=as-startup-script \
-x As_shutdown_script=as-shutdown-script \
-y resource_dependencies=sap-as-storage-resource
See Sun Cluster HA for SAP Extension Properties on page 205 for a list of extension properties.
7
Enable the scalable resource group that now includes the SAP application server resource.
If you do not use the RGOffload resource type with this application server, use the following
command.
# scswitch -Z -g sap-as-sa-appinstanceid-resource-group
If you use the RGOffload resource type with this application server, use the following command.
# scswitch -z -h node1, node2 -g sap-as-sa-appinstanceid-resource-group
Note If you use the SUNW.RGOffload resource type with this application server, you must specify
the node on which you want to bring the resource online by using the -z option instead of the -Z option.
See Also
Go to How to Verify Sun Cluster HA for SAP Installation and Configuration and Central Instance
on page 216.
Set up a lock file for the central instance or the failover application server.
Set up a lock file for a scalable application server.
Set up a lock file to prevent multiple startups of the SAP instance when the instance is already active
on one node. Multiple startups of the same instance crash each other. Furthermore, the crash
prevents SAP shutdown scripts from performing a clean shutdown of the instances, which might
cause data corruption.
If you set up a lock file, when you start the SAP instance the SAP software locks the file
startup_lockfile. If you start up the same instance outside of the Sun Cluster environment and
then try to bring up SAP under the Sun Cluster environment, the Sun Cluster HA for SAP data
service will attempt to start up the same instance. However, because of the file-locking mechanism,
this attempt will fail. The data service will log appropriate error messages in /var/adm/messages.
The only difference between the lock file for the central instance or the failover application server
and the lock file for a scalable application server is that the lock file for a scalable application server
resides on the local file system and the lock file for the central instance or the failover application
server resides on a cluster file system.
Install the latest patch for the sapstart executable, which enables Sun Cluster HA for SAP users to
configure a lock file.
Set up the central instance lock file or the failover application server lock file on a cluster file system.
Edit the profile that sapstart uses to start the instance such that you add the new SAP parameter,
sapstart/lockfile, for the central instance or the failover application server. This profile is the one
that is passed to sapstart as a parameter in the startsap script.
For the central instance, enter the following.
sapstart/lockfile =/usr/sap/SID/Service-StringSystem-Number/work/startup_lockfile
sapstart/lockfile
New parameter name.
/usr/sap/SID/Service-StringSystem-Number/work
Work directory for the central instance.
/usr/sap/SID/Dinstance-id/work
Work directory for the failover application server.
startup_lockfile
Lock file name that Sun Cluster HA for SAP uses.
SAP creates the lock file.
Note You must locate the lock file path on a cluster file system. If you locate the lock file path locally
on the nodes, a startup of the same instance from multiple nodes cannot be prevented.
Install the latest patch for the sapstart executable, which enables Sun Cluster HA for SAP users to
configure a lock file.
Edit the profile that sapstart uses to start the instance such that you add the new SAP parameter,
sapstart/lockfile, for the scalable application server. This profile is the one that is passed to sapstart
as a parameter in the startsap script.
sapstart/lockfile =/usr/sap/local/SID/Dinstance-id/work/startup_lockfile
sapstart/lockfile
/usr/sap/local/SID/Dinstance-id/work
startup_lockfile
Because this lock file resides on the local file system, it cannot prevent startups of the same instance
from other nodes, but the lock file does prevent multiple startups on the same node.
Log in to the node that hosts the resource group that contains the SAP central instance resource.
Start the SAP GUI to check that Sun Cluster HA for SAP is functioning correctly.
As user sapsidadm, use the central instance stopsap script to shut down the SAP central instance.
The Sun Cluster software restarts the central instance.
As user root, switch the SAP resource group to another cluster member.
# scswitch -z -h node2 -g sap-ci-resource-group
5
See Also
Repeat Step 1 through Step 5 until you have tested all of the potential nodes on which the SAP
central instance can run.
Go to How to Verify the Installation and Configuration of Sun Cluster HA for SAP as a Failover
Data Service on page 217 or How to Verify Sun Cluster HA for SAP Installation and Configuration
as a Scalable Data Service on page 217.
Log in to the node that currently hosts the resource group that contains the SAP application server
resource.
As user sapsidadm, start the SAP GUI to check that the application server is functioning correctly.
Use the application server stopsap script to shut down the SAP application server on the node you
identied in Step 1.
The Sun Cluster software restarts the application server.
As user root, switch the resource group that contains the SAP application server resource to another
cluster member.
# scswitch -z -h node2 -g sap-as-resource-group
Verify that the SAP application server starts on the node you identied in Step 4.
Repeat Step 1 through Step 5 until you have tested all of the potential nodes on which the SAP
application server can run.
Start the SAP GUI to check that the application server is functioning correctly.
Use the application server stopsap script to shut down the SAP application server on the node you
identied in Step 1.
The Sun Cluster software restarts the application server.
Repeat Step 1 through Step 3 until you have tested all of the potential nodes on which the SAP
application server can run.
If you set the extension property Lgtst_ms_with_logicalhostname to a value other than TRUE,
the probe calls lgtst with the node's local hostname (loopback interface).
If the lgtst utility call fails, the SAP Message Server connection is not functioning. In this
situation, the fault monitor considers the problem to be a partial failure and does not trigger
an SAP restart or a failover immediately. The fault monitor counts two partial failures as a
complete failure if the following conditions occur.
i. You configure the extension property Check_ms_retry to be 2.
ii. The fault monitor accumulates two partial failures that happen within the retry interval
that the resource property Retry_interval sets.
A complete failure triggers either a local restart or a failover, based on the resource's failure
history.
c. Database connection status through probe - The probe calls the SAP-supplied utility
R3trans to check the status of the database connection. Sun Cluster HA for SAP fault probes
verify that SAP can connect to the database. Sun Cluster HA for SAP depends, however, on
the highly available database fault probes to determine database availability. If the database
connection status check fails, the fault monitor logs the message, Database might be down,
to /var/adm/messages. The fault monitor then sets the status of the SAP resource to
DEGRADED. If the probe checks the status of the database again and the connection is
reestablished, the fault monitor logs the message, Database is up, to /var/adm/messages
and sets the status of the SAP resource to OK.
4. Evaluates the failure history
Based on the failure history, the fault monitor completes one of the following actions.
no action
local restart
failover
b. Availability check of the SAP resources through probe - The probe uses the ps(1)
command to check the SAP Message Server and main dispatcher processes. If the SAP main
dispatcher process is missing from the system's active process list, the fault monitor treats
the problem as a complete failure.
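You can approximate this check manually; an illustrative command (not necessarily the probe's exact invocation) is:
# ps -ef | grep sap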
c. Database connection status through probe - The probe calls the SAP-supplied utility
R3trans to check the status of the database connection. Sun Cluster HA for SAP fault probes
verify that SAP can connect to the database. Sun Cluster HA for SAP depends, however, on
the highly available database fault probes to determine database availability. If the database
connection status check fails, the fault monitor logs the message, Database might be down,
to /var/adm/messages and sets the status of the SAP resource to DEGRADED. If the probe
checks the status of the database again and the connection is reestablished, the fault monitor
logs the message, Database is up, to /var/adm/messages. The fault monitor then sets the
status of the SAP resource to OK.
4. Evaluate the failure history
Based on the failure history, the fault monitor completes one of the following actions.
no action
local restart
failover
If the application server resource is a failover resource, the fault monitor fails over the
application server.
If the application server resource is a scalable resource, after the number of local restarts is
exhausted, the RGM brings up the application server on a different node if another node is
available in the cluster.
A P P E N D I X   F
This appendix provides the following step-by-step procedures to upgrade a Sun Cluster 3.0
configuration to Sun Cluster 3.1 04/04 software, including upgrade from Solaris 8 to Solaris 9
software, or to upgrade a Sun Cluster 3.1 04/04 configuration that runs on Solaris 8 software to
Solaris 9 software:
This appendix replaces the section Upgrading to Sun Cluster 3.1 04/04 Software on page
The cluster must run on or be upgraded to at least Solaris 8 2/02 software, including the most
current required patches.
The cluster hardware must be a supported configuration for Sun Cluster 3.1 04/04 software.
Contact your Sun representative for information about current supported Sun Cluster
configurations.
You must upgrade all software to a version that is supported by Sun Cluster 3.1 04/04 software.
For example, you must upgrade a data service that is supported on Sun Cluster 3.0 software but is
not supported on Sun Cluster 3.1 04/04 software to the version of that data service that is
supported on Sun Cluster 3.1 04/04 software. If the related application is not supported on Sun
Cluster 3.1 04/04 software, you must also upgrade that application to a supported release.
The scinstall upgrade utility only upgrades those data services that are provided with Sun
Cluster 3.1 04/04 software. You must manually upgrade any custom or third-party data services.
Have available the test IP addresses to use with your public network adapters when NAFO
groups are converted to Internet Protocol (IP) Network Multipathing groups. The scinstall
upgrade utility prompts you for a test IP address for each public network adapter in the cluster. A
test IP address must be on the same subnet as the primary IP address for the adapter.
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration
Guide: IP Services (Solaris 9) for information about test IP addresses for IP Network
Multipathing groups.
Sun Cluster 3.1 04/04 software supports direct upgrade only from Sun Cluster 3.x software.
Sun Cluster 3.1 04/04 software does not support any downgrade of Sun Cluster software.
Have available the CD-ROMs, documentation, and patches for all software products you are
upgrading.
Applications that are managed by Sun Cluster 3.1 04/04 data-service agents
Patch 113801-01 or later, which is required to upgrade from Solaris 8 software to Solaris 9
software
See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of
patches and installation instructions.
3
Have available your list of test IP addresses, one for each public network adapter in the cluster.
A test IP address is required for each public network adapter in the cluster, regardless of whether the
adapter is the active adapter or the backup adapter in a NAFO group. The test IP addresses will be
used to reconfigure the adapters to use IP Network Multipathing.
Note Each test IP address must be on the same subnet as the existing IP address that is used by the
public network adapter.
See the IP Network Multipathing Administration Guide (Solaris 8) or System Administration Guide:
IP Services (Solaris 9) for more information about test IP addresses for IP Network Multipathing.
5
To view the current status of the cluster, run the following command from any node:
% scstat
Search the /var/adm/messages log on the same node for unresolved error messages or warning
messages.
-F                  Takes the resource group offline
-g resource-group   Specifies the name of the resource group
-n                  Disables a resource or its fault monitor
-j resource         Specifies the name of the resource
-u                  Moves the resource group to the unmanaged state
-g resource-group   Specifies the name of the resource group
11
Verify that all resources on all nodes are disabled and that all resource groups are in the unmanaged
state.
# scstat -g
12
Stop all databases that are running on each node of the cluster.
13
14
16
17
If Sun Cluster 3.1 04/04 software does not support the release of the Solaris environment that you
currently run on your cluster, you must upgrade the Solaris software to a supported release. Go to
How to Upgrade the Solaris Operating Environment on page 225.
If your cluster configuration already runs on a release of the Solaris environment that supports
Sun Cluster 3.1 04/04 software, go to How to Upgrade to Sun Cluster 3.1 04/04 Software
on page 227.
See Supported Products in Sun Cluster 3.1 Release Notes for more information.
Perform this procedure to upgrade the Solaris 8 or Solaris 9 environment to support Sun Cluster 3.1
04/04 software. See Supported Products in Sun Cluster 3.1 Release Notes for more information.
Ensure that all steps in How to Prepare the Cluster for Upgrade on page 223 are completed.
Determine whether the following Apache links already exist, and if so, whether the file names
contain an uppercase K or S:
/etc/rc0.d/K16apache
/etc/rc1.d/K16apache
/etc/rc2.d/K16apache
/etc/rc3.d/S50apache
/etc/rcS.d/K16apache
If these links already exist and do contain an uppercase K or S in the file name, no further action
is necessary for these links.
If these links do not exist, or if these links exist but instead contain a lowercase k or s in the file
name, you move aside these links in Step 8.
Comment out all entries for globally mounted file systems in the /etc/vfstab file.
a. Make a record of all entries that are already commented out for later reference.
b. Temporarily comment out all entries for globally mounted file systems in the /etc/vfstab file.
Entries for globally mounted file systems contain the global mount option. Comment out these
entries to prevent the Solaris upgrade from attempting to mount the global devices.
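For example, a commented-out entry for a globally mounted file system might look like the following (the metadevice and mount point are placeholders, not values from this document):
#/dev/md/dsk/d20  /dev/md/rdsk/d20  /global/data  ufs  2  yes  global,logging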
Volume Manager: Solstice DiskSuite/Solaris Volume Manager
Procedure to Use: Any Solaris upgrade method except the Live Upgrade method
Instructions: Solaris 8 or Solaris 9 installation documentation

Volume Manager: VERITAS Volume Manager
Upgrade the Solaris software, following the procedure you selected in Step 5.
Note Ignore the instruction to reboot at the end of the Solaris software upgrade process. You must
first perform Step 7 and Step 8, then reboot into noncluster mode in Step 9 to complete Solaris
software upgrade.
If you are instructed to reboot a node at other times in the upgrade process, always add the -x option
to the command. This option ensures that the node reboots into noncluster mode. For example,
either of the following two commands boot a node into single-user noncluster mode:
# reboot -- -xs
ok boot -xs
In the /a/etc/vfstab file, uncomment those entries for globally mounted file systems that you
commented out in Step 4.
If the Apache links in Step 3 did not already exist or if they contained a lowercase k or s in the file
names before you upgraded the Solaris software, move aside the restored Apache links.
Use the following commands to rename the files with a lowercase k or s:
# mv /a/etc/rc0.d/K16apache /a/etc/rc0.d/k16apache
# mv /a/etc/rc1.d/K16apache /a/etc/rc1.d/k16apache
# mv /a/etc/rc2.d/K16apache /a/etc/rc2.d/k16apache
# mv /a/etc/rc3.d/S50apache /a/etc/rc3.d/s50apache
# mv /a/etc/rcS.d/K16apache /a/etc/rcS.d/k16apache
10
Install any required Solaris software patches and hardware-related patches, and download any
needed firmware that is contained in the hardware patches.
For Solstice DiskSuite software (Solaris 8), also install any Solstice DiskSuite software patches.
Note Do not reboot after you add patches. You reboot the node after you upgrade the Sun Cluster
software.
See Patches and Required Firmware Levels in Sun Cluster 3.1 Release Notes for the location of patches
and installation instructions.
11
version of Sun Cluster 3.1 04/04 software, even if the cluster already runs on Sun Cluster 3.1 04/04
software.
Ensure that all steps in How to Prepare the Cluster for Upgrade on page 223 are completed.
If you upgraded from Solaris 8 to Solaris 9 software, also ensure that all steps in How to Upgrade the
Solaris Operating Environment on page 225 are completed.
Ensure that you have installed all required Solaris software patches and hardware-related patches.
For Solstice DiskSuite software (Solaris 8), also ensure that you have installed all required Solstice
DiskSuite software patches.
Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive on the node.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM
devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.
To upgrade from Sun Cluster 3.0 software, run the following command:
# ./scinstall -u update -S interact
-S
interact
Specifies that scinstall prompts the user for each test IP address
needed
To upgrade from Sun Cluster 3.1 software, run the following command:
# ./scinstall -u update
Tip If upgrade processing is interrupted, use the scstat(1M) command to verify that the node is not
in cluster mode; the node status should be reported as Offline.
See the scinstall(1M) man page for more information. See the IP Network Multipathing
Administration Guide (Solaris 8) or System Administration Guide: IP Services (Solaris 9) for
information about test addresses for IP Network Multipathing.
Note Sun Cluster 3.1 04/04 software requires at least version 3.5.1 of Sun Explorer software.
Upgrade to Sun Cluster software includes installing Sun Explorer data collector software, to be
used in conjunction with the sccheck utility. If another version of Sun Explorer software was
already installed before Sun Cluster upgrade, it is replaced by the version that is provided with
Sun Cluster software. Options such as user identity and data delivery are preserved, but crontab
entries must be manually recreated.
During Sun Cluster upgrade, scinstall might make one or more of the following configuration
changes:
Convert NAFO groups to IP Network Multipathing groups but keep the original
NAFO-group name.
Set the local-mac-address? variable to true, if the variable is not already set to that value.
Upgrade software applications that are installed on the cluster and apply application patches as
needed.
Ensure that application levels are compatible with the current version of Sun Cluster and Solaris
software. See your application documentation for installation instructions. In addition, follow these
guidelines to upgrade applications in a Sun Cluster 3.1 04/04 configuration:
If the applications are stored on shared disks, you must master the relevant disk groups and
manually mount the relevant file systems before you upgrade the application.
If you are instructed to reboot a node during the upgrade process, always add the -x option to the
command. This option ensures that the node reboots into noncluster mode. For example, either
of the following two commands boot a node into single-user noncluster mode:
# reboot -- -xs
ok boot -xs
Upgrade Sun Cluster data services to the Sun Cluster 3.1 04/04 software versions.
Note Only those data services that are provided on the Sun Cluster 3.0 5/02 Agents CD-ROM are
automatically upgraded by scinstall(1M). You must manually upgrade any custom or third-party
data services.
a. Insert the Sun Cluster 3.0 5/02 Agents CD-ROM into the CD-ROM drive on the node to upgrade.
b. Upgrade the data-service software.
# scinstall -u update -s all -d /cdrom/cdrom0
-u update
Specifies upgrade
-s all
Updates all Sun Cluster data services that are installed on the node
Tip If upgrade processing is interrupted, use the scstat(1M) command to verify that the node is not
in cluster mode; the node status should be reported as Offline.
After all nodes are upgraded, reboot each node into the cluster.
# reboot
Verify that all upgraded software is at the same version on all upgraded nodes.
a. On each upgraded node, view the installed levels of Sun Cluster software.
# scinstall -pv
b. From one node, verify that all upgraded cluster nodes are running in cluster mode (Online).
# scstat -n
See the scstat(1M) man page for more information about displaying cluster status.
10
11
On each node, run the following command to verify the consistency of the storage configuration:
# scdidadm -c
-c    Performs a consistency check of the device ID configuration against the physical devices
Caution Do not proceed to Step 12 until your configuration passes this consistency check. Failure to
pass this check might result in errors in device identification and data corruption.

Example Message            Action to Take
No output message          None.
On each node, migrate the Sun Cluster storage database to Solaris 9 device IDs.
# scdidadm -R all
-R     Performs repair procedures on the device ID instances
all    Specifies all device ID instances
13
On each node, run the following command to verify that storage database migration to Solaris 9
device IDs is successful:
# scdidadm -c
14
If the scdidadm command displays a message, return to Step 11 to make further corrections to the
storage configuration or the storage database.
See your VxVM administration documentation for more information about upgrading disk
groups.
15
Example F1
If yes, go to How to Upgrade Sun Cluster-Module Software for Sun Management Center
on page 233.
If no, go to How to Finish Upgrading to Sun Cluster 3.1 04/04 Software on page 234.
Upgrade From Sun Cluster 3.0 to Sun Cluster 3.1 04/04 Software
The following example shows the process of upgrading a two-node cluster, including data services,
from Sun Cluster 3.0 to Sun Cluster 3.1 04/04 software on the Solaris 8 operating environment. The
cluster node names are phys-schost-1 and phys-schost-2.
(On the first node, upgrade framework software from the Sun Cluster 3.0 5/02 CD-ROM)
phys-schost-1# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_8/Tools
phys-schost-1# ./scinstall -u update -S interact
(On the first node, upgrade data services from the Sun Cluster 3.0 5/02 Agents CD-ROM)
phys-schost-1# ./scinstall -u update -s all -d /cdrom/cdrom0
(On the second node, upgrade framework software from the Sun Cluster 3.0 5/02 CD-ROM)
phys-schost-2# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_8/Tools
phys-schost-2# ./scinstall -u update -S interact
(On the second node, upgrade data services from the Sun Cluster 3.0 5/02 Agents CD-ROM)
phys-schost-2# ./scinstall -u update -s all -d /cdrom/cdrom0
(Verify with scstat -n that both nodes report a status of Online)
Ensure that all Sun Management Center core packages are installed on the appropriate machines, as
described in your Sun Management Center installation documentation.
This step includes installing Sun Management Center agent packages on each cluster node.
Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive.
Repeat Step 3 through Step 6 to install the Sun Cluster module help-server package SUNWscshl.
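If you add the package manually with pkgadd, an illustrative sketch is the following (the Packages directory path is an assumption based on the CD-ROM layout shown earlier in this appendix, not a path confirmed by this document):
# cd /cdrom/suncluster_3_0/SunCluster_3.1/Sol_8/Packages
# pkgadd -d . SUNWscshl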
Ensure that all steps in How to Upgrade to Sun Cluster 3.1 04/04 Software on page 227 are
completed.
Type 1 (Register all resource types which are not yet registered).
The scsetup utility displays all resource types that are not registered.
Type yes to continue to register these resource types.
10
11
12
13
14
When all resources are re-enabled, type q to return to the Resource Group Menu.
15
16
17
-R device-
-u
-i
-r
Repeat Step 2 through Step 4 on all other nodes that are attached to the unverified device.
storage devices were changed or replaced, instead follow procedures in How to Handle Storage
Reconfiguration During an Upgrade on page 235.
-u
-i
-r
If yes, return to Step 1 to make further modifications to correct the storage configuration, then
repeat Step 2.