
STEP BY STEP INSTALLATION OF ORACLE RAC 12cR1 (12.1.0.2) ON ORACLE SOLARIS SPARC-64

Oracle Global Customer Support - RAC / Scalability

Copyright © 1993, 2014, Oracle and/or its affiliates. All rights reserved


Contents

1 Introduction
1.1 Oracle Grid Infrastructure Installation Server Hardware Checklist
1.1.1 Server Hardware Checklist for Oracle Grid Infrastructure
1.1.2 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC
1.1.3 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC
1.1.4 Oracle Grid Infrastructure Storage Configuration Checks
1.2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC
1.2.1 Checking Server Hardware and Memory Configuration
1.2.2 Server Storage Minimum Requirements
1.2.3 64-bit System Memory Requirements
1.3 Operating System Requirements for Oracle Solaris on SPARC (64-Bit)
1.3.1 Supported Oracle Solaris 11 Releases for SPARC (64-Bit)
1.3.2 Supported Oracle Solaris 10 Releases for SPARC (64-Bit)
1.4 Setting Network Time Protocol for Cluster Time Synchronization
1.5 Network Interface Hardware Requirements
1.5.1 IP Interface Configuration Requirements
1.5.2 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
1.5.3 Multicast Requirements for Networks Used by Oracle Grid Infrastructure
1.6 Installation method
2. Prepare the cluster nodes for Oracle RAC
2.1 User Accounts
2.2 Networking
2.3 Synchronizing the Time on ALL Nodes
2.4 Create the Oracle Inventory Directory
2.5 Creating the Oracle Grid Infrastructure Home Directory
2.6 Creating the Oracle Base Directory
2.7 Creating the Oracle RDBMS Home Directory
2.8 Stage the Oracle Software
2.9 Check OS Software Requirements
3. Prepare the shared storage for Oracle RAC
4. Oracle Grid Infrastructure Install
5. Run ASMCA to create diskgroups
6. RDBMS Software Install

1 Introduction

1.1 Oracle Grid Infrastructure Installation Server Hardware Checklist

1.1.1 Server Hardware Checklist for Oracle Grid Infrastructure

Server hardware: Verify that the server make, model, core architecture, and host bus adapters (HBAs) are supported to run with Oracle RAC. Refer to the URL below for more details:

http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html

Network Switches:

Public network switch, at least 1 GbE, connected to a public gateway.

Private network switch, at least 1 GbE, with 10 GbE recommended, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP. Alternatively, use InfiniBand for the interconnect.

Runlevel: Servers should be either in runlevel 3 or runlevel 5.

Random Access Memory (RAM): At least 4 GB of RAM for Oracle Grid Infrastructure for cluster installations, including installations where you plan to install Oracle RAC.

Temporary disk space allocation: At least 1 GB allocated to /tmp.

Storage hardware: Either Storage Area Network (SAN) or Network-Attached Storage (NAS).

Local Storage Space for Oracle Software

At least 8 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches.

At least 12 GB of space for the Oracle Base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle Base includes Oracle Clusterware and Oracle ASM log files.

For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).

1.1.2 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC

Create Groups and Users. A user created to own only Oracle Grid Infrastructure software installations is called the grid user. A user created to own either all Oracle installations, or only Oracle database installations, is called the oracle user.

Create mount point paths for the software binaries. Oracle recommends that you follow the guidelines for an Optimal Flexible Architecture configuration.

Review Oracle Inventory (oraInventory) and OINSTALL Group Requirements. The Oracle Inventory directory is the central inventory of Oracle software installed on your system. Users who have the Oracle Inventory group as their primary group are granted the OINSTALL privilege to write to the central inventory.


Ensure that the Grid home (the Oracle home path you select for Oracle Grid Infrastructure) uses only ASCII characters

Unset Oracle software environment variables. If you have set ORA_CRS_HOME as an environment variable, then unset it before starting an installation or upgrade. Do not use ORA_CRS_HOME as a user environment variable.

If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables:

ORA_CRS_HOME

ORACLE_HOME

ORA_NLS10

TNS_ADMIN
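For example, in a Bourne or Korn shell session of the installation owner you would run the following before starting the installer (a minimal sketch; adjust for your shell):

$ unset ORA_CRS_HOME
$ unset ORACLE_HOME
$ unset ORA_NLS10
$ unset TNS_ADMIN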

1.1.3 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC

Public Network Hardware:

Public network switch (redundant switches recommended) connected to a public gateway and to the public interface ports for each cluster member node.

Ethernet interface card (redundant network cards recommended, bonded as one Ethernet port name).

The switches and network interfaces must be at least 1 GbE.

The network protocol is TCP/IP.

Private Network Hardware for the Interconnect

Private dedicated network switches (redundant switches recommended), connected to the private interface ports for each cluster member node. NOTE: If you have more than one private network interface card for each server, then Oracle Clusterware automatically associates these interfaces for the private network using Grid inter process Communication (GIPC) and Grid Infrastructure Redundant Interconnect, also known as Cluster High Availability IP (HAIP).

The switches and network interface adapters must be at least 1 GbE, with 10 GbE recommended. Alternatively, use InfiniBand for the interconnect.

The interconnect must support the user datagram protocol (UDP).

Oracle Flex ASM Network Hardware

Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its own dedicated private networks. Each network can be classified PUBLIC or PRIVATE+ASM or PRIVATE or ASM. ASM networks use the TCP protocol.

Cluster Names and Addresses: Determine and configure the following names and addresses for the cluster


Cluster name: Decide a name for the cluster, and be prepared to enter it during installation. The cluster name should have the following characteristics:

Globally unique across all hosts and unique across different DNS domains.

At least one character long and less than or equal to 15 characters long.

Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS name and fixed address on the DNS for the GNS VIP, and configure a subdomain on your DNS delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is mandatory with dynamic public networks (DHCP, autoconfig).

Single Client Access Name (SCAN) and addresses

Using Grid Naming Service Resolution: Do not configure SCAN names and addresses in your DNS. SCANs are managed by GNS.

Using Manual Configuration and DNS resolution: Configure a SCAN name to resolve to three addresses on the domain name service (DNS).

Standard or Hub Node Public, Private and Virtual IP names and Addresses:

Public node name and address, configured on the DNS and in /etc/hosts (for example, node1.example.com, address 192.0.2.10). The public node name should be the primary host name of each node, which is the name displayed by the hostname command.

Private node address, configured on the private interface for each node. The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members. Oracle recommends that the network you select for the private network uses an address range defined as private by RFC 1918.

Public node virtual IP name and address (for example, node1-vip.example.com, address 192.0.2.11).

1.1.4 Oracle Grid Infrastructure Storage Configuration Checks

During installation, you are asked to provide paths for the following Oracle Clusterware files. These path locations must be writable by the Oracle Grid Infrastructure installation owner (Grid user). These locations must be shared across all nodes of the cluster, either on Oracle ASM (preferred), or on a cluster file system, because the files created during installation must be available to all cluster member nodes.

Voting files are files that Oracle Clusterware uses to verify cluster node membership and status. The location for voting files must be owned by the user performing the installation (oracle or grid), and must have permissions set to 640.

Oracle Cluster Registry files (OCR) contain cluster and database configuration information for Oracle Clusterware. Before installation, the location for OCR files must be owned by the user performing the installation (grid or oracle). That installation user must have oinstall as its primary group. During installation, the installer creates the OCR files and changes ownership of the path and OCR files to root.


1.2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC

1.2.1 Checking Server Hardware and Memory Configuration

Run the following commands to gather your current system information:

1. To determine the available RAM and swap space, use the sar command. For example, to check for available free memory and swap space, you can enter the following command, which shows free memory and swap memory at two-second intervals checked 10 times:

# sar -r 2 10

If the size of the physical RAM installed in the system is less than the required size, then you must install more memory before continuing.

2. To determine the size of the configured swap space, enter the following command:

# /usr/sbin/swap -l

If necessary, see your operating system documentation for information about how to configure additional swap space.

3. To determine the amount of space available in the /tmp directory, enter the following command:

# df -h /tmp

4. To determine the amount of free disk space on file systems, enter the following command:

# df -kh

5. To determine if the system architecture can run the software, enter the following command:

# /bin/isainfo -kv

The following are examples of responses on 64-bit operating systems:

64-bit SPARC installation:
64-bit sparcv9 kernel modules

64-bit x86 installation:
64-bit amd64 kernel modules

1.2.2 Server Storage Minimum Requirements

Each system must meet the following minimum storage requirements:

1 GB of space in the /tmp directory.


If the free space available in the /tmp directory is less than what is required, then complete one of the following steps:

o Delete unnecessary files from the /tmp directory to make available the space required.

o Extend the file system that contains the /tmp directory. If necessary, contact your system administrator for information about extending file systems.

At least 8.0 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches.

Up to 10 GB of additional space in the Oracle base directory of the Grid Infrastructure owner for diagnostic collections generated by Trace File Analyzer (TFA) Collector.

At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.

For Oracle Solaris platforms, if you intend to install Oracle Database, then allocate 5.2 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).

1.2.3 64-bit System Memory Requirements

Each system must meet the following memory requirements:

At least 4 GB of RAM for Oracle Grid Infrastructure for cluster installations, including installations where you plan to install Oracle RAC.

Swap space equivalent to the multiple of the available RAM, as indicated in the following table:

Available RAM

Swap Space Required

Between 4 GB and 16 GB

Equal to RAM

More than 16 GB

16 GB of RAM
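For example, you can compare the installed RAM and the configured swap on Solaris with the following commands (a quick check; swap can also be listed per device with swap -l as shown in section 1.2.1):

# prtconf | grep "Memory size"
# /usr/sbin/swap -s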

1.3. Operating System Requirements for Oracle Solaris on SPARC (64-Bit)

The Oracle Solaris kernels and packages listed in this section are supported on SPARC 64-bit systems for Oracle Database and Oracle Grid Infrastructure 12c Release 1 (12.1).


1.3.1 Supported Oracle Solaris 11 Releases for SPARC (64-Bit)

Use the following information to check supported Oracle Solaris 11 releases:

SSH Requirement: Secure Shell is configured at installation for Oracle Solaris.

Oracle Solaris 11 operating system: Oracle Solaris 11 SRU 14.5 or later SRUs and updates.

Packages for Oracle Solaris 11: Install the following packages:

pkg://system/dtrace
pkg://solaris/developer/assembler
pkg://solaris/developer/build/make
pkg://solaris/system/xopen/xcu4 (if not already installed as part of the standard Oracle Solaris 11 installation)
pkg://solaris/x11/diagnostic/x11-info-clients
pkg://solaris/compress/unzip
pkg://solaris/system/kernel/oracka (for Oracle RAC only)
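For example, on Oracle Solaris 11 you can check which of the listed packages are already installed and add any that are missing with the pkg utility (run as root; package names as listed above):

# pkg list system/dtrace developer/assembler developer/build/make \
    system/xopen/xcu4 x11/diagnostic/x11-info-clients compress/unzip
# pkg install developer/build/make compress/unzip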

1.3.2 Supported Oracle Solaris 10 Releases for SPARC (64-Bit)

Use the following information to check supported Oracle Solaris 10 releases:

SSH Requirement: Ensure that OpenSSH is installed on your servers. OpenSSH is the required SSH software.

Oracle Solaris 10 operating system: Oracle Solaris 10 Update 11 (Oracle Solaris 10 1/13 s10s_u11wos_24a) or later updates.

Packages for Oracle Solaris 10: The following packages (or later versions) must be installed:

SUNWarc
SUNWbtool
SUNWcsl
SUNWdtrc
SUNWeu8os
SUNWhea
SUNWi1cs (ISO8859-1)
SUNWi15cs (ISO8859-15)
SUNWi1of
SUNWlibC
SUNWlibm
SUNWlibms
SUNWsprot
SUNWtoo
SUNWxwfnt

Patches for Oracle Solaris 10:
147440-25
147441-25
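For example, on Oracle Solaris 10 you can verify that the listed packages and patches are present with the following commands (run as root):

# pkginfo -i SUNWarc SUNWbtool SUNWcsl SUNWdtrc SUNWeu8os SUNWhea \
    SUNWi1cs SUNWi15cs SUNWi1of SUNWlibC SUNWlibm SUNWlibms \
    SUNWsprot SUNWtoo SUNWxwfnt
# showrev -p | grep -E '147440|147441'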

Note: You may also require additional font packages for Java, depending on your locale. Refer to the following website for more information:


1.4 Setting Network Time Protocol for Cluster Time Synchronization

Oracle Clusterware requires the same time zone variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes. You have two options for time synchronization:

An operating system configured network time protocol (NTP)

Oracle Cluster Time Synchronization Service

Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server.

On Oracle Solaris Cluster systems, Oracle Solaris Cluster software supplies a template file called ntp.conf.cluster (see /etc/inet/ntp.conf.cluster on an installed cluster host) that establishes a peer relationship between all cluster hosts. One host is designated as the preferred host. Hosts are identified by their private host names. Time synchronization occurs across the cluster interconnect. If Oracle Clusterware detects either that the Oracle Solaris Cluster NTP or an outside NTP server is set as the default NTP server in the system in the /etc/inet/ntp.conf or the /etc/inet/ntp.conf.cluster files, then CTSS is set to the observer mode. See the Oracle Solaris 11 Information Library for more information about configuring NTP for Oracle Solaris.

If you have NTP daemons on your server but you cannot configure them to synchronize time with a time server, and you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then disable the NTP.

To disable the NTP service, run the following command as the root user

# /usr/sbin/svcadm disable ntp

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner:

$ crsctl check ctss

If you are using NTP, and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.


To do this on Oracle Solaris 10, edit the /etc/inet/ntp.conf file and add the "slewalways yes" option, which has the same effect (see Section 2.3 and My Oracle Support Document 759143.1 for details), then restart the NTP service:

# svcadm restart ntp

To enable NTP after it has been disabled, enter the following command:

# /usr/sbin/svcadm enable ntp

1.5 Network Interface Hardware Requirements

The following is a list of requirements for network configuration:

Each node must have at least two network adapters or network interface cards (NICs): one for the public network interface and one for the private network interface (the interconnect).

Note: Oracle recommends that you use the Redundant Interconnect Usage feature to make use of multiple interfaces for the private network. However, you can also use third-party technologies to provide redundancy for the private network, such as Solaris link aggregation (Sun Trunking) or IPMP.

For the public network, each network adapter must support TCP/IP.

For the private network, the interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).

If you have a shared Ethernet VLAN deployment, with shared physical adapter, ensure that you apply standard Ethernet design, deployment, and monitoring best practices to protect against cluster outages and performance degradation due to common shared Ethernet switch network events.

1.5.1 IP Interface Configuration Requirements

A public IP address for each node

A virtual IP address for each node

Three single client access name (SCAN) addresses for the cluster.

(Define the SCAN in your corporate DNS (Domain Name Service). Request that your network administrator create a single name that resolves to three IP addresses using a round-robin algorithm. The IP addresses must be on the same subnet as your public network in the cluster.)
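For example, once your DNS administrator has created the entry, you can verify the round-robin resolution from any node (the SCAN name shown here is illustrative):

$ nslookup rac12c-scan.idc.oracle.com

The query should return three IP addresses on the public subnet, and the order should rotate between successive lookups.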


1.5.2 Broadcast Requirements for Networks Used by Oracle Grid Infrastructure

Broadcast communications (ARP and UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure.

The broadcast must work across any configured VLANs as used by the public or private interfaces.

When configuring public and private network interfaces for Oracle RAC, you must enable ARP. Highly Available IP (HAIP) addresses do not require ARP on the public network, but for VIP failover, you will need to enable ARP. Do not configure NOARP.

1.5.3 Multicast Requirements for Networks Used by Oracle Grid Infrastructure

For each cluster member node, the Oracle mDNS daemon uses multicasting on all interfaces to communicate with other nodes in the cluster. Multicasting is required on the private interconnect. For this reason, at a minimum, you must enable multicasting for the cluster:

Across the broadcast domain as defined for the private interconnect

On the IP address subnet ranges 224.0.0.0/24 and optionally 230.0.1.0/24

Note: You do not need to enable multicast communications across routers.

1.6 Installation method

This document details the steps for installing a 2-node Oracle 12.1.0.2 RAC cluster on Solaris:

The Oracle Grid Home binaries are installed on the local disk of each of the RAC nodes.

The files required by Oracle Clusterware (OCR and voting disks) are stored in ASM. The installation is explained without GNS and IPMI (additional information for installation with GNS and IPMI is noted where applicable).


2. Prepare the cluster nodes for Oracle RAC

2.1 User Accounts

Create OS groups using the command below as the root user:

# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/groupadd asmadmin
# /usr/sbin/groupadd asmdba
# /usr/sbin/groupadd asmoper

Create the users that will own the Oracle software using these commands:

# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper -d /export/home/grid -m grid
# /usr/sbin/useradd -g oinstall -G dba,asmdba -d /home/oracle -m oracle

Set the passwords for the oracle and grid accounts using the following commands. Replace password with your own password:

# passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.

# passwd grid
Changing password for user grid.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.

Repeat Step 1 through Step 3 on each node in your cluster.

OUI can set up passwordless SSH for you; if you want to configure this yourself, refer to My Oracle Support Document 300548.1.
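To verify the accounts and group memberships on each node, you can run, for example:

# id -a grid
# id -a oracle

The output should show oinstall as the primary group plus the secondary groups assigned above.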

2.2 Networking

NOTE: This section is intended to be used for installations NOT using GNS. Determine your cluster name. The cluster name should satisfy the following conditions:

The cluster name is globally unique throughout your host domain.

The cluster name is at least 1 character long and no more than 15 characters long.

The cluster name must consist of the same character set used for host names: single-byte alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).

Determine the public host name for each node in the cluster. For the public host name, use the primary hostname of each node. In other words, use the name displayed by the hostname command for example: racnode1.


Determine the public virtual hostname for each node in the cluster. The virtual hostname is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual hostname must meet the following requirements:

The virtual IP address and the network name must not be currently in use.

The virtual IP address must be on the same subnet as your public IP address.

The virtual hostname for each node should be registered with DNS.

Determine the private hostname for each node in the cluster. This private hostname does not need to be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention for the private hostname is <public hostname>-pvt.

The private IP should NOT be accessible to servers not participating in the local cluster.

The private network should be on standalone dedicated switch(es).

The private network should NOT be part of a larger overall network topology.

The private network should be deployed on Gigabit Ethernet or better.

It is recommended that redundant NICs be configured on Solaris, using either Sun Trunking (OS based) or Sun IPMP (OS based). See My Oracle Support Document 283107.1 for IPMP in general, and Document 368464.1 for when IPMP is used for the interconnect.

NOTE: If IPMP is used for the public network and/or the cluster interconnect, critical merge patch 9729439 should be applied to both the Grid Infrastructure and RDBMS Oracle homes.

Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN names and VIPs must NOT be in the /etc/hosts file, they must be resolved by DNS.

Even if you are using a DNS, Oracle recommends that you add lines to the /etc/hosts file on each node, specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to the following example:

# cat /etc/hosts
# Created by RAC OVM at Tue May 26 17:25:38 EDT 2015
127.0.0.1       localhost
::1             localhost

# Public
10.64.145.72    rac12cn1.idc.oracle.com     rac12cn1
10.64.145.74    rac12cn2.idc.oracle.com     rac12cn2

# VIP
10.64.145.73    rac12cn1-v.idc.oracle.com   rac12cn1-vip
10.64.145.75    rac12cn2-v.idc.oracle.com   rac12cn2-vip

# Private
10.64.131.48    rac12cn1-pvt.idc.oracle.com rac12cn1-pvt
10.64.131.49    rac12cn2-pvt.idc.oracle.com rac12cn2-pvt


2.3. Synchronizing the Time on ALL Nodes

Ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Time may be kept in sync with NTP or by using Oracle Cluster Time Synchronization Service (ctssd). For NTP with Solaris 10 the "slewalways yes" option in /etc/inet/ntp.conf should be used. See My Oracle Support Document: 759143.1 for details.
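For example, on Solaris 10 you can confirm that the option is present and restart NTP afterwards (a minimal sketch; see Document 759143.1 for the authoritative steps):

# grep slewalways /etc/inet/ntp.conf
# svcadm restart ntp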

2.4. Create the Oracle Inventory Directory

To create the Oracle Inventory directory, enter the following commands as the root user:

# mkdir -p /u01/app/oraInventory

# chown -R grid:oinstall /u01/app/oraInventory

# chmod -R 775 /u01/app/oraInventory

2.5 Creating the Oracle Grid Infrastructure Home Directory

To create the Grid Infrastructure home directory, enter the following commands as the root user:

# mkdir -p /u01/12.1.0/grid

# chown -R grid:oinstall /u01/12.1.0/grid

# chmod -R 775 /u01/12.1.0/grid

2.6 Creating the Oracle Base Directory

To create the Oracle Base directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle

# mkdir /u01/app/oracle/cfgtoollogs    (needed to ensure that dbca is able to run after the rdbms installation)

# chown -R oracle:oinstall /u01/app/oracle

# chmod -R 775 /u01/app/oracle

2.7 Creating the Oracle RDBMS Home Directory

To create the Oracle RDBMS Home directory, enter the following commands as the root user:

# mkdir -p /u01/app/oracle/product/12.1.0/db_1

# chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1

# chmod -R 775 /u01/app/oracle/product/12.1.0/db_1

2.8 Stage the Oracle Software

It is recommended that you stage the required software onto a local drive on Node 1 of your cluster. For the Grid Infrastructure (clusterware and ASM) software, download:

Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Solaris

Important: Ensure that you use the correct version (either SPARC or x86-64) of the RDBMS software download from OTN:

Oracle Database 12c Release 1 (12.1.0.2.0) for Solaris
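For example, assuming the Grid Infrastructure and database zip files were copied to a staging area such as /tmp/soft (the archive names below are illustrative), unzip them as the respective software owners:

$ cd /tmp/soft
$ unzip solaris.sparc64_12102_grid_1of2.zip
$ unzip solaris.sparc64_12102_grid_2of2.zip
$ unzip solaris.sparc64_12102_database_1of2.zip
$ unzip solaris.sparc64_12102_database_2of2.zip

This creates the grid and database directories referenced later in this document (for example, /tmp/soft/grid).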


2.9 Check OS Software Requirements

The OUI will check during the install for missing packages and you will have the opportunity to install them at that point during the prechecks. Nevertheless you might want to validate that all required packages have been installed prior to launching the OUI.

NOTE: check on all nodes that the Firewall is disabled. Disable if needed:

#svcadm disable ipfilter
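To confirm the state of the ipfilter service before and after disabling it, you can run, for example:

# svcs ipfilter
# svcadm disable ipfilter
# svcs ipfilter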

3. Prepare the shared storage for Oracle RAC

This section describes how to prepare the shared storage for Oracle RAC. Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files. To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files. Use the following guidelines when identifying appropriate disk devices:

All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

A disk group should not contain more than one partition on a single physical disk device.

Using logical volumes as a device in an Automatic Storage Management disk group is not supported with Oracle RAC.

The user account with which you perform the installation (oracle) must have write permissions to create the files in the path that you specify. On Solaris 10, you can use the format or smc utilities to carve disk or LUN partitions (slices). It is very important to skip the first cylinders on the disk so that ASM or Oracle Clusterware does not overwrite the partition table; therefore, always start partitioning from cylinder 3. If you fail to do so, you may find after rebooting your machines that the data on your disks has been erased, Oracle Clusterware will not start, and ASM will not be able to recognize any disks. As in the example below, run the format command from the first Solaris node only. This formats the disk with Solaris partitions, changes slice 4 to skip the first 3 cylinders, and labels the disk:

root@rac12cn1:/tmp/soft/grid# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:

0. c2d0 <SUN-DiskImage-14GB cyl 396 alt 2 hd 96 sec 768>

/virtual-devices@100/channel-devices@200/disk@0

1. c2d1 <Unknown-Unknown-0001-25.00GB>

/virtual-devices@100/channel-devices@200/disk@1

2. c2d2 <SUN-DiskImage-5GB cyl 17474 alt 2 hd 1 sec 600>

/virtual-devices@100/channel-devices@200/disk@2

3. c2d3 <SUN-DiskImage-5GB cyl 17474 alt 2 hd 1 sec 600>

/virtual-devices@100/channel-devices@200/disk@3

4. c2d4 <SUN-DiskImage-5GB cyl 17474 alt 2 hd 1 sec 600>

/virtual-devices@100/channel-devices@200/disk@4

5. c2d5 <SUN-DiskImage-5GB cyl 17474 alt 2 hd 1 sec 600>

/virtual-devices@100/channel-devices@200/disk@5

6. c2d6 <SUN-DiskImage-5GB cyl 17474 alt 2 hd 1 sec 600>

/virtual-devices@100/channel-devices@200/disk@6


Specify disk (enter its number): 2
selecting c2d2
[disk formatted, no defect list found]

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> partition

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> 4

Part      Tag    Flag     Cylinders        Size            Blocks
  4 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 5gb
`5.00gb' is out of range
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 4.9gb
partition> l
Ready to label disk, continue? y

partition> q


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> q

Note: Do the same for the other disks to be used for ASM. Commands similar to the following should be entered on every node to change the owner, group, and permissions on the character raw device file for each disk slice that you want to add to a diskgroup, where grid is the grid infrastructure installation owner and oinstall is its primary group:

root@rac12cn1:/dev/rdsk# chown grid:oinstall /dev/rdsk/c2d2s*
root@rac12cn1:/dev/rdsk# chown grid:oinstall /dev/rdsk/c2d3s*
root@rac12cn1:/dev/rdsk# chown grid:oinstall /dev/rdsk/c2d4s*
root@rac12cn1:/dev/rdsk# chown grid:oinstall /dev/rdsk/c2d5s*
root@rac12cn1:/dev/rdsk# chown grid:oinstall /dev/rdsk/c2d6s*

root@rac12cn1:/dev/rdsk# chmod 660 /dev/rdsk/c2d2s*
root@rac12cn1:/dev/rdsk# chmod 660 /dev/rdsk/c2d3s*
root@rac12cn1:/dev/rdsk# chmod 660 /dev/rdsk/c2d4s*
root@rac12cn1:/dev/rdsk# chmod 660 /dev/rdsk/c2d5s*
root@rac12cn1:/dev/rdsk# chmod 660 /dev/rdsk/c2d6s*

Verify the setting with:

root@rac12cn1:/dev/rdsk# ls -lL /dev/rdsk/c2d2s*

crw-------   1 grid     oinstall 265, 16 Jun 19 05:07 /dev/rdsk/c2d2s0
crw-------   1 grid     oinstall 265, 17 Jun 19 05:07 /dev/rdsk/c2d2s1
crw-------   1 grid     oinstall 265, 18 Jun 19 05:07 /dev/rdsk/c2d2s2
crw-------   1 grid     oinstall 265, 19 Jun 19 05:07 /dev/rdsk/c2d2s3
crw-------   1 grid     oinstall 265, 20 Jun 19 05:07 /dev/rdsk/c2d2s4
crw-------   1 grid     oinstall 265, 21 Jun 19 05:07 /dev/rdsk/c2d2s5
crw-------   1 grid     oinstall 265, 22 Jun 19 05:07 /dev/rdsk/c2d2s6
crw-------   1 grid     oinstall 265, 23 Jun 19 05:07 /dev/rdsk/c2d2s7

In this example, the device name specifies slice 4
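To confirm the slice layout after labeling, you can print the VTOC of each disk (slice 2 conventionally maps the whole disk) and check that slice 4 does not start at cylinder 0, for example:

# prtvtoc /dev/rdsk/c2d2s2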


4 Oracle Grid Infrastructure Install

Basic Grid Infrastructure Install (without GNS and IPMI)

As the grid user (Grid Infrastructure software owner) start the installer by running "runInstaller" from the staged installation media.

NOTE: Be sure the installer is run as the intended software owner since the only supported method to change the software owner is to reinstall.

# xhost +
# su - grid
$ DISPLAY=<ip address>:0.0; export DISPLAY

cd into the folder where you staged the grid infrastructure software

./runInstaller

Action: Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click 'Next>'.

Action: Select radio button 'Standard cluster' and click 'Next>'.

Action: Select radio button 'Advanced Installation' and click 'Next>'.

Action: Select 'English' and click 'Next>'.

Action: Specify your cluster name and the SCAN name you want to use and click 'Next>'.

Note: Make sure 'Configure GNS' is NOT selected.

Action: Use the Edit and Add buttons to specify the node names and virtual IP names you configured previously in the /etc/hosts file. When finished, click 'OK' and use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes.

Action: Type in the OS password for the user 'grid' and press 'Setup'.

Action: Click 'OK'.

Note: In case you encounter any issues while setting up the SSH user equivalence, you may set it up by the manual process described below:

1. $ cd /export/home/grid
2. mv .ssh .ssh_old
3. cd <stage_location>/grid/sshsetup
4. ./sshUserSetup.sh -user grid -hosts "rac12cn1 rac12cn2" -noPromptPassphrase
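To verify that user equivalence works, you can run the following as the grid user from each node; both commands should return the remote date without prompting for a password:

$ ssh rac12cn1 date
$ ssh rac12cn2 date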

Action: Click on 'Interface Type' next to the interfaces you want to use for your cluster and select the correct values for 'Public', 'Private' and 'Do Not Use'. When finished click 'Next>'.

Note: If you use multiple NICs for redundancy, the passive interfaces need to be selected here as well. In this example we are using IPMP for the public network and Link Aggregation for the private interconnect.

Action: Select radio button 'Automatic Storage Management (ASM)' and click 'Next>'.

Action: Type in a 'Disk Group Name', specify the 'Redundancy' and tick the disks you want to use; when done, click 'Next>'.

NOTE: The number of voting disks that will be created depends on the redundancy level you specify: EXTERNAL will create 1 voting disk, NORMAL will create 3 voting disks, HIGH will create 5 voting disks.

Action: Specify and confirm the password you want to use and click 'Next>'.

Action: Don't select anything and click 'Next>'.

Action: Assign the correct OS groups for OS authentication and click 'Next>'.

Action: Specify the locations for your ORACLE_BASE (/u01/app/grid) and for the Software location (/u01/app/12.1.0/grid) and click 'Next>'.

Note: We created these directories in steps 2.5 and 2.6.

Action: Specify the location of your Inventory (/u01/app/oraInventory) directory and click 'Next>'.

Note: We created the directory in step 2.4.

Action: Check that the status of all checks is Succeeded and click 'Next>'.

Note: If you have failed checks marked as 'Fixable', click 'Fix & Check Again'. This will bring up a window that instructs you to execute fixup scripts. Execute the runfixup.sh script as described on the screen as the root user. Click 'Check Again' and, if all checks succeeded, click 'Next>'.

Action: Click 'Install'.

Action: Wait for the OUI to complete its tasks.

Action: Follow the instructions on the screen, running the orainstRoot.sh and root.sh scripts as root on all nodes before you click 'OK'.

Note: The required root scripts MUST BE RUN ON ONE NODE AT A TIME!

Results of root scripts (Node 1):

# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

root@rac12cn1:~# /u01/app/12.1.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory
Copying dbhome to /usr/local/bin
Copying oraenv to /usr/local/bin
Copying coraenv to /usr/local/bin

Creating /var/opt/oracle/oratab file
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params

2015/06/19 09:55:53 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2015/06/19 09:58:07 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2015/06/19 09:58:11 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert


2015/06/19 09:59:59 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cn1'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cn1'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cn1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cn1'
CRS-2676: Start of 'ora.diskmon' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac12cn1' succeeded

ASM created and started successfully.

Disk Group OCRVOTE created successfully.

CRS-2672: Attempting to start 'ora.crf' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.storage' on 'rac12cn1'
CRS-2676: Start of 'ora.crf' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.storage' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cn1'
CRS-2676: Start of 'ora.crsd' on 'rac12cn1' succeeded

CRS-4256: Updating the profile
Successful addition of voting disk 45894980ed214fddbfa3210450a91090.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   45894980ed214fddbfa3210450a91090 (/dev/rdsk/c2d2s4) [OCRVOTE]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac12cn1'
CRS-2677: Stop of 'ora.crsd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.storage' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12cn1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn1'
CRS-2677: Stop of 'ora.ctssd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12cn1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12cn1'


CRS-2677: Stop of 'ora.evmd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12cn1'
CRS-2677: Stop of 'ora.cssd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12cn1'
CRS-2677: Stop of 'ora.gipcd' on 'rac12cn1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cn1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cn1'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cn1'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cn1'
CRS-2676: Start of 'ora.gipcd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cn1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cn1'
CRS-2676: Start of 'ora.diskmon' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cn1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cn1'
CRS-2676: Start of 'ora.ctssd' on 'rac12cn1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cn1'
CRS-2676: Start of 'ora.asm' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cn1'
CRS-2676: Start of 'ora.storage' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac12cn1'
CRS-2676: Start of 'ora.crf' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cn1'
CRS-2676: Start of 'ora.crsd' on 'rac12cn1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac12cn1
CRS-6016: Resource auto-start has completed for server rac12cn1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/06/19 10:14:31 CLSRSC-343: Successfully started Oracle Clusterware stack

CRS-2672: Attempting to start 'ora.asm' on 'rac12cn1'
CRS-2676: Start of 'ora.asm' on 'rac12cn1' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTE.dg' on 'rac12cn1'
CRS-2676: Start of 'ora.OCRVOTE.dg' on 'rac12cn1' succeeded

2015/06/19 10:17:35 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Results of root scripts (Node 2):

# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

root@rac12cn2:~# /u01/app/12.1.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory
Copying dbhome to /usr/local/bin
Copying oraenv to /usr/local/bin
Copying coraenv to /usr/local/bin

Creating /var/opt/oracle/oratab file
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params

2015/06/19 10:19:07 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2015/06/19 10:20:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2015/06/19 10:20:38 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2015/06/19 10:22:41 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cn2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cn2'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cn2'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cn2'


CRS-2676: Start of 'ora.gipcd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cn2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cn2'
CRS-2676: Start of 'ora.diskmon' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cn2'
CRS-2676: Start of 'ora.ctssd' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cn2'
CRS-2676: Start of 'ora.asm' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cn2'
CRS-2676: Start of 'ora.storage' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac12cn2'
CRS-2676: Start of 'ora.crf' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cn2'
CRS-2676: Start of 'ora.crsd' on 'rac12cn2' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cn2
CRS-2672: Attempting to start 'ora.net1.network' on 'rac12cn2'
CRS-2676: Start of 'ora.net1.network' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac12cn2'
CRS-2676: Start of 'ora.ons' on 'rac12cn2' succeeded
CRS-6016: Resource auto-start has completed for server rac12cn2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/06/19 10:29:59 CLSRSC-343: Successfully started Oracle Clusterware stack

2015/06/19 10:30:46 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
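At this point you can optionally verify the state of the clusterware stack on both nodes as the grid user, for example:

$ /u01/app/12.1.0/grid/bin/crsctl check cluster -all
$ /u01/app/12.1.0/grid/bin/olsnodes -n

Both nodes should report Cluster Ready Services, Cluster Synchronization Services and the Event Manager as online.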


Action: You should see the confirmation that the installation of the Grid Infrastructure was successful. Click 'Close' to finish the install.

5. Run ASMCA to create diskgroups

As the grid user, start the ASM Configuration Assistant (ASMCA):

# su - grid
$ cd /u01/app/12.1.0/grid/bin
$ ./asmca

Action: Click 'Create' to create a new diskgroup.

Action: Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box for the disks you want to assign to the new diskgroup.

Action: Click 'Exit'.
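To confirm that the new diskgroup is mounted, you can list the diskgroups with asmcmd as the grid user, for example (the +ASM1 SID applies to node 1):

$ export ORACLE_HOME=/u01/app/12.1.0/grid
$ export ORACLE_SID=+ASM1
$ $ORACLE_HOME/bin/asmcmd lsdg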

6 RDBMS Software Install

As the oracle user (RDBMS software owner) start the installer by running "runInstaller" from the staged installation media.

NOTE: Be sure the installer is run as the intended software owner since the only supported method to change the software owner is to reinstall.

cd to the directory where you staged the RDBMS software

./runInstaller

Action: Provide your e-mail address, tick the check box and provide your Oracle Support Password if you want to receive Security Updates from Oracle Support. Then click 'Next>'.

Action: Click 'Create and configure database'. Then click 'Next>'.

Action: Click 'Server class'. Then click 'Next>'.

Action: Click 'Oracle Real Application Clusters database installation'. Then click 'Next>'.

Action: Click 'Admin managed'. Then click 'Next>'.

Action: Select all nodes. If User Equivalence is not configured, click the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes. Type in the OS password for the oracle user and click 'Setup'.

Note: During the Grid Infrastructure installation you configured SSH for the grid user. If you install RDBMS with a different user (recommended), you have to configure it for this user now.

Note: In case you encounter any issues while setting up the SSH user equivalence, you may set it up by the manual process described below:

1. $ cd /export/home/oracle
2. mv .ssh .ssh_old
3. cd <stage_location>/database/sshsetup
4. ./sshUserSetup.sh -user oracle -hosts "rac12cn1 rac12cn2" -noPromptPassphrase

Action: Select 'Advanced install'. Then click 'Next>'.

Action: To confirm English as the selected language, click 'Next>'.

Action: Click 'Next>'.

Action: Specify the path to your Oracle Base and the path/location where you want to store the software (Oracle home). Click 'Next>'.

Note: We created the directories in steps 2.6 and 2.7.

Action: Select 'General Purpose / Transaction Processing'. Then click 'Next>'.

Action: Enter the Global database name and SID. Select 'Create as Container database' if you wish to create one. Then click 'Next>'.

Action: Review and change the settings for memory allocation, character set, etc. according to your needs and click 'Next>'.

Action: Select 'Oracle Automatic Storage Management'. Then click 'Next>'.

Action: Decide here whether to configure management of your database by EM Grid Control. Then click 'Next>'.

Action: Decide here whether to enable or disable recovery of your database. Then click 'Next>'.

Action: Select the diskgroup for the database datafiles. Then click 'Next>'.

Action: Enter the password for the sys, system, etc. user accounts. Then click 'Next>'.

Action: Use the drop-down menu to select the names of the Database Administrators and Database Operators groups. Then click 'Next>'.

Action: Check to ensure that the status of all checks is Succeeded and click 'Next>'.

Note: If you have failed checks marked as 'Fixable', click 'Fix & Check Again'. This will bring up a window that instructs you to execute fixup scripts. Execute the runfixup.sh script as described on the screen as the root user. Click 'Check Again' and, if all checks succeeded, click 'Next>'. If you are sure that the unsuccessful checks can be ignored, tick the box 'Ignore All' before you click 'Next>'.

Action: Log in to a terminal window as the root user and run the root.sh script on the first node. When finished, do the same for all other nodes in your cluster as well. When finished, click 'OK'.

Note: root.sh should be run on one node at a time.

Monitor the database software installation and database creation process.

Action: The database is now created. You can either change or unlock your passwords or just click 'OK'.

Action: Click 'Close' to finish the installation.
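To verify that the new database is registered with Oracle Clusterware, you can run the following from the RDBMS home as the oracle user, for example (replace <db_unique_name> with the name you chose above):

$ srvctl config database -d <db_unique_name>
$ srvctl status database -d <db_unique_name>

Both instances should be reported as running, one on each node.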