23-Mar-2009
Oracle 10g RAC Setup for Solaris on HP
Table of Contents
1 INTRODUCTION
2 SCOPE
3 PRE-REQUISITES
4 CONFIGURATION
7 ACKNOWLEDGEMENTS
1 Introduction
This document defines the steps to be followed for setting up the storage in the HP
Storage CS3000 using iSCSI and for allocating that storage to the HP Blade Servers
BLTEST1 and BLTEST2. In addition, it covers the steps for setting up Oracle 10g Real
Application Clusters on Sun Solaris installed on those blade servers.
2 Scope
The scope of the document pertains to the iSCSI Storage Allocation in HP Infrastructure
and the Oracle 10g RAC Setup on Solaris running on this Infrastructure.
3 Pre-Requisites
Windows 2003 Storage Server running on HP Storage Blade Server
iSCSI SAN Storage on HP Storage CS3000.
Solaris 10 with the latest Patch Set installed on HP Blade Servers BLTEST1 and
BLTEST2, which will be the two nodes of the Oracle 10g Real Application Clusters.
Host Names of the RAC Nodes should be in lower case ONLY
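The lowercase requirement can be checked before starting; a minimal sketch (the hostname value below is an example):

```shell
# Abort early if a RAC node hostname contains uppercase letters
host=bltest1                                  # example node name
lower=$(printf '%s' "$host" | tr 'A-Z' 'a-z')
if [ "$host" = "$lower" ]; then
  echo "hostname ok"
else
  echo "hostname must be lowercase" >&2
fi
```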
4 Configuration
The environment consists of a Storage Blade Server, two Blade Servers for RAC, and an
HP Onboard Administration Server:
RAC Blade Server 1:-
o Hostname : bltest1
o Server model : BL 460c G1
o IP address : 192.168.15.6
RAC Blade Server 2:-
o Hostname : bltest2
o Server model : BL 460c G1
o IP address : 192.168.15.7
Storage Blade Server:-
o Server model : BL 460c G1
o IP Address : 192.168.15.8
HP Onboard Administration :
o IP Address : 192.168.15.2
Storage:-
o Server model : AIO SB600c
5 Storage Allocation
5.1 Shared-Storage Allocation
Today, fibre channel is one of the most popular solutions for shared storage. Fibre channel
is a high-speed serial-transfer interface that is used to connect systems and storage
devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies
(FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel
configurations can support as many as 127 nodes and have a throughput of up to 2.12
Gbps in each direction, and 4.25 Gbps is expected.
Fibre channel, however, is very expensive. A less expensive alternative to fibre channel is
SCSI. SCSI technology provides acceptable performance for shared storage, but for
administrators and developers who are used to GPL-based Linux prices, even SCSI can
come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.
Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be
used for shared storage but only if you are using a network appliance or something similar.
Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport
protocol, and read/write block sizes of 32K.
The shared storage that will be used for this article is based on iSCSI technology using a
Windows 2003 storage server installed with HP Storage Software. This solution offers an
alternative to fibre channel.
5.4 Hardware
5.4.1 Nodes
For our infrastructure, we used a cluster composed of two HP ProLiant BL460c
servers with Solaris 10, using HP Blade SB600c storage. With features equal to standard
1U rack mount servers, the dual processor, multi-core BL460c combines power-efficient
compute power and high density with expanded memory and I/O for maximum
performance. Now with Low Voltage or Standard Quad-Core, and Dual-Core Intel Xeon
processors, DDR2 Fully Buffered DIMMs, optional Serial Attached SAS or SATA hard
drives, support of Multi-Function NICS and multiple I/O cards, the BL460c provides a
performance system ideal for the full range of scale out applications. In this small form
factor, the BL460c includes more features to ensure high-availability such as optional hot-
plug hard drives, mirrored-memory, online spare memory, memory interleaving, embedded
RAID capability, and enhanced Remote Lights-Out management.
5.4.2 HP StorageWorks All-in-One SB600C Storage Blade
The All-in-One (AiO) SB600c is a preferred storage solution for customers who desire a
shared storage solution in their blade chassis. The AiO SB600c storage blade provides the
shared storage infrastructure required to support the Oracle RAC database.
Software
o HP All-in-One Storage System Manager: provides an easy-to-use graphical
interface that allows the end user to set up physical and logical volumes and
to create and present the iSCSI LUNs to the Solaris machines.
Note: Assigning the RAID Level 5 will reduce the Available Storage.
Click NEXT
Click FINISH
5.4.5.1.2 Creating the Logical Partition for OCR & Voting Disk Files
Right click on Unallocated Space 107.4 G and click “New Partition”
Select “Primary Partition” and click NEXT
Click on NEXT
Change the Volume Label to “ocr_voting” and Enable Perform Quick Format and
click NEXT
Click on Finish
Click on Finish
5.4.6 iSCSI Target Creation
In the HP AIO Storage Management Console, click “HP All-in-One Storage System
Management” and then “Microsoft iSCSI Target”
Click on Add
Select Identifier Type = “IP Address” and enter IP Address = 192.168.15.6 (one
of the RAC node IP addresses)
Click Ok and then add IP=192.168.15.7 (the other RAC node address)
Now both the IP addresses (also known as the RAC node addresses or iSCSI
clients) are displayed. Click Ok
Now view the iSCSI Target created and right click “NAS1” and select properties:
Change the Description for NAS1:
Change the Virtual Disk name to ocr_voting.vhd and Click Next
Set Size of Virtual Disk to 1020 and Click Next
Enter a Meaningful description for the Virtual Disk and Click Next
Click Add (to add the iSCSI targets on which this Virtual Disk/LUN will be
available)
Add “NAS1” as the iSCSI target and Click Ok
Click Finish to complete the Virtual Disk Creation
Warning:
Make sure you are selecting the correct Logical Disk. If the disk(s) you selected have
been in use by other targets, there is a serious risk of data loss.
After virtual disks have been created for all the disks, the console will look like the
following:
iSCSI Virtual Disk Creation – ASM Disk
Right click the asm_files partition (E drive) and select Create Virtual Disk
Click Next
Change the Virtual Disk name to asm_files.vhd and Click Next
Set the Size of Virtual Disk to 698368 and then click Next
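The size above is entered in MB; as a quick sanity check, 698368 MB works out to 682 GB:

```shell
# Convert the virtual disk size from MB to GB (1 GB = 1024 MB)
size_mb=698368
echo "$(( size_mb / 1024 )) GB"
```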
Add meaningful description to the Virtual Disk and Click Next
Click Add (to add the iSCSI targets which will have access to this Virtual Disk/
LUN)
Add iSCSI Target = NAS1 and Click Ok
Click Next
Click Finish to complete the Virtual Disk Creation:
This is how the console will look after the Virtual Disks/LUNs ocr_voting.vhd and
asm_files.vhd have been created.
We first need to verify that the iSCSI software packages are installed on our servers
before we can proceed further.
We will now configure the iSCSI target device to be discovered dynamically:
# iscsiadm add discovery-address 192.168.2.195:3260
The iSCSI connection is not initiated until the discovery method is enabled. This is done
with the following command:
# iscsiadm modify discovery --sendtargets enable
Now, we need to create the iSCSI device links for the local system. The following
command can be used to do this:
# devfsadm -i iscsi
5.4.9 Configure Solaris Partitions on Oracle RAC Nodes
To verify that the iSCSI devices are available on the node, connect to 192.168.15.6 as
the root user and run the following format command:
bash-3.00# format
Searching for disks...done
Note: Disks c4t3d0 and c4t5d0 refer to the iSCSI SAN disks.
Now the disks c4t3d0 and c4t5d0 have to be made into Solaris partitions using fdisk:
format> p
WARNING - This disk may be in use by an application that has modified the fdisk table.
Ensure that this disk is not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:
Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> disk
Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 1016 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 1015 1016.00MB (1016/0/0) 2080768
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0
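The slice 2 numbers above are internally consistent: 1016 cylinders at 2048 blocks per cylinder (512-byte blocks) gives 2080768 blocks, i.e. 1016.00 MB. A quick arithmetic check:

```shell
# Cross-check format(1M) geometry: cylinders x blocks/cylinder -> blocks -> MB
cyl=1016
blocks_per_cyl=2048       # 2080768 blocks / 1016 cylinders
bytes_per_block=512
blocks=$(( cyl * blocks_per_cyl ))
mb=$(( blocks * bytes_per_block / 1024 / 1024 ))
echo "$blocks blocks = $mb MB"
```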
Partitioning the available Solaris disks into the slices required for the OCR, voting
disk, ASM files, etc. is shown below. Only one raw slice creation is demonstrated for
this purpose.
bash-3.00# format
Searching for disks...done
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
partition> p
Current partition table (original):
Total disk cylinders available: 44511 + 2 (reserved cylinders)
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle
# id oracle
uid=200(oracle) gid=100(oinstall)
# passwd oracle
mkdir -p /applns/oracle
chown oracle:oinstall /applns/oracle
Edit /etc/passwd and replace the line
oracle:x:200:100::/home/oracle:/bin/sh
with
oracle:x:200:100::/applns/oracle:/bin/bash
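The same change can be made non-interactively; a sketch that works on a sample copy rather than the live /etc/passwd (the file name passwd.sample is a placeholder):

```shell
# Rewrite the oracle entry's home directory and login shell on a sample file
cat > passwd.sample <<'EOF'
oracle:x:200:100::/home/oracle:/bin/sh
EOF
sed 's|:/home/oracle:/bin/sh$|:/applns/oracle:/bin/bash|' passwd.sample
```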
Create the default Directory for CRS. This will be used as a location for the Oracle
Clusterware.
mkdir -p /applns/crs/oracle/product/10.2.0/app
chown -R oracle:oinstall /applns/crs
chmod -R 775 /applns/crs
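The layout and permissions above can be rehearsed safely under a scratch root before touching the real filesystem; a sketch:

```shell
# Recreate the CRS home layout under a temporary root and confirm the mode
ROOT=$(mktemp -d)
mkdir -p "$ROOT/applns/crs/oracle/product/10.2.0/app"
chmod -R 775 "$ROOT/applns/crs"
ls -ld "$ROOT/applns/crs" | cut -c1-10    # expected: drwxrwxr-x
```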
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle
# id oracle
uid=200(oracle) gid=100(oinstall)
# passwd oracle
Note: The UID and GID of the Users created should be the same in both the RAC Nodes.
This is a pre-requisite for Oracle 10g Clusterware Installation to work.
The Oracle Home directory should be /applns/oracle
mkdir -p /applns/oracle
chown oracle:oinstall /applns/oracle
Create the default Directory for CRS. This will be used as a location for the Oracle
Clusterware.
mkdir -p /applns/crs/oracle/product/10.2.0/app
chown -R oracle:oinstall /applns/crs
In /etc/passwd, replace the line
oracle:x:200:100::/home/oracle:/bin/sh
with
oracle:x:200:100::/applns/oracle:/bin/bash
5.4.12 Create Symbolic Links for all the Created Raw Disks on all Oracle RAC Nodes
Since the same disks have different device names on the RAC nodes, we need symbolic
links for each of the raw disks, because the shared files must have the same name
across all nodes for RAC to work.
mkdir /oracle_files
ln -s /dev/rdsk/c2t3d0s5 /oracle_files/ocr_disk1
ln -s /dev/rdsk/c2t3d0s0 /oracle_files/ocr_disk2
ln -s /dev/rdsk/c2t3d0s6 /oracle_files/voting_disk1
ln -s /dev/rdsk/c2t3d0s7 /oracle_files/voting_disk2
ln -s /dev/rdsk/c2t3d0s1 /oracle_files/voting_disk3
ln -s /dev/rdsk/c2t3d0s3 /oracle_files/data_disk1
ln -s /dev/rdsk/c2t3d0s4 /oracle_files/arch_disk1
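The link pattern can be exercised with a stand-in file before pointing at real raw devices; a sketch (paths under the temp directory are placeholders for the /dev/rdsk entries):

```shell
# Demonstrate the raw-device symlink pattern with a stand-in file
ROOT=$(mktemp -d)
touch "$ROOT/c2t3d0s5"                       # stand-in for /dev/rdsk/c2t3d0s5
mkdir "$ROOT/oracle_files"
ln -s "$ROOT/c2t3d0s5" "$ROOT/oracle_files/ocr_disk1"
readlink "$ROOT/oracle_files/ocr_disk1"      # prints the link target
```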
Connect to Node 2 (BLTEST2) as a root user and repeat the same commands.
5.4.13 Create Symbolic Link for the SSH on all Oracle RAC Nodes
Connect to Node 1 (BLTEST1) as a root user and run the following:
mkdir -p /usr/local
cd /usr/local
ln -s /usr/bin bin
Connect to Node 2 (BLTEST2) as a root user and run the same:
mkdir -p /usr/local
cd /usr/local
ln -s /usr/bin bin
Note: This is required as Clusterware looks for the ssh executable in the /usr/local/bin
folder.
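The effect of the link is that anything in /usr/bin becomes reachable through /usr/local/bin; a sketch against a scratch root rather than the live filesystem:

```shell
# Linking usr/local/bin -> usr/bin exposes ssh at the path Clusterware expects
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/bin" "$ROOT/usr/local"
touch "$ROOT/usr/bin/ssh"                    # stand-in for the real ssh binary
ln -s "$ROOT/usr/bin" "$ROOT/usr/local/bin"
test -e "$ROOT/usr/local/bin/ssh" && echo "ssh reachable via usr/local/bin"
```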
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.15.6 bltest1 loghost
192.168.15.7 bltest2
10.10.1.1 bltest1-priv
10.10.1.2 bltest2-priv
192.168.15.201 bltest1-vip
192.168.15.202 bltest2-vip
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.15.7 bltest2 loghost
192.168.15.6 bltest1
10.10.1.1 bltest1-priv
10.10.1.2 bltest2-priv
192.168.15.201 bltest1-vip
192.168.15.202 bltest2-vip
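A quick way to confirm that a node's host table carries every required name is to grep for each public, private, and VIP entry; a sketch run against a sample copy of the table above:

```shell
# Verify all six RAC host names are present in a hosts table (sample copy)
cat > hosts.sample <<'EOF'
192.168.15.6 bltest1 loghost
192.168.15.7 bltest2
10.10.1.1 bltest1-priv
10.10.1.2 bltest2-priv
192.168.15.201 bltest1-vip
192.168.15.202 bltest2-vip
EOF
missing=0
for n in bltest1 bltest2 bltest1-priv bltest2-priv bltest1-vip bltest2-vip; do
  grep -qw "$n" hosts.sample || { echo "missing: $n"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all host entries present"
```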
5.4.15 Configure SSH on Oracle RAC Nodes
Before you install and use Oracle Real Application Clusters, you should configure Secure
Shell (SSH) for the oracle user on all cluster nodes (BLTEST1 and BLTEST2). SSH
provides greater security than the Berkeley services remote shell (RSH). Oracle Universal
Installer uses the ssh and scp commands during installation to run remote commands on
and copy files to the other cluster nodes. You must configure SSH (or RSH) so that these
commands do not prompt for a password.
To configure SSH, you must first create RSA and DSA keys on each cluster node, and
then copy the keys from all cluster node members into an authorized keys file on each
node. For example, with the two-node cluster, BLTEST1 and BLTEST2, you create RSA
and DSA keys on the local host, BLTEST1; create RSA and DSA keys on the second
node, BLTEST2; and then copy the RSA and DSA keys from both BLTEST1 and BLTEST2
to each node.
Create RSA and DSA keys on each node: Complete the following steps on each
Node:
o Enter the following commands to generate an RSA key for version 2 of the
SSH protocol:
$ /usr/bin/ssh-keygen -t rsa
At the prompt:
Accept the default location for the key file.
Enter and confirm a passphrase that is different from the oracle user’s
password (an empty passphrase is also acceptable).
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to
anyone.
o Enter the following commands to generate a DSA key for version 2 of the
SSH protocol:
$ /usr/bin/ssh-keygen -t dsa
At the prompts:
■ Accept the default location for the key file
■ Enter and confirm a passphrase that is different from the oracle user’s
password (an empty passphrase is also acceptable).
This command writes the public key to the ~/.ssh/id_dsa.pub file and the
private key to the ~/.ssh/id_dsa file. Never distribute the private key to
anyone.
o Now repeat Steps 1 to 3 on the Node BLTEST2 (192.168.15.7).
o Now connect to BLTEST1 and add keys to an authorized key file by
completing the following steps:
On the local node, determine if you have an authorized key file
(~/.ssh/authorized_keys). If the authorized key file already exists, then
proceed to step 2. Otherwise, enter the following commands:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls
$ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.
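The next step merges every node's public keys into one authorized_keys file; a sketch with placeholder key material (the real files come from the ssh-keygen runs above):

```shell
# Merge both nodes' public keys into a single authorized_keys file
DIR=$(mktemp -d)
echo "ssh-rsa AAAAB3...fake1 oracle@bltest1" > "$DIR/bltest1_rsa.pub"
echo "ssh-dss AAAAB3...fake2 oracle@bltest2" > "$DIR/bltest2_dsa.pub"
cat "$DIR"/*.pub >> "$DIR/authorized_keys"
chmod 600 "$DIR/authorized_keys"
wc -l < "$DIR/authorized_keys"               # one line per key
```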
o To determine the size of the configured swap space, enter the following command:
# /usr/sbin/swap -s
If necessary, refer to your operating system documentation for information about
how to configure additional swap space.
o To determine the amount of disk space available in the /tmp directory, enter the
following command:
# df -k /tmp
If there is less than 400 MB of disk space available in the /tmp directory, then
complete one of the following steps:
Delete unnecessary files from the /tmp directory to meet the disk space
requirement.
o To determine whether the system architecture can run the Oracle software you
have obtained, enter the following command:
# /bin/isainfo -kv
Ensure that the Oracle software you have is the correct Oracle software for your processor
type. If the output of this command indicates that your system architecture does not match
the system for which the Oracle software you have is written, then you cannot install the
software. Obtain the correct software for your system architecture before proceeding
further.
Open the /etc/system file in any text editor and, if necessary, add the following
lines:
set noexec_user_stack=1
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=2000
set semsys:seminfo_semmsl=1000
set semsys:seminfo_semmni=100
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
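Whether a system file already carries a large enough shmmax can be checked mechanically; a sketch against a sample fragment (the awk field split assumes the `set name=value` form shown above):

```shell
# Check that shminfo_shmmax in an /etc/system-style fragment is at least 4 GB - 1
cat > system.sample <<'EOF'
set noexec_user_stack=1
set shmsys:shminfo_shmmax=4294967295
EOF
awk -F= '/shminfo_shmmax/ {
  if ($2 + 0 >= 4294967295) print "shmmax ok"; else print "shmmax too small"
}' system.sample
```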
Connect to Node 1 (BLTEST1) as a root user and ensure the following exists in the
/etc/hosts file:
Connect to Node 2 (BLTEST2) as a root user and ensure the following exists in the
/etc/hosts file:
Note: The above path refers to the location where the Clusterware Binary/cluvfy is located.
./runcluvfy.sh stage -pre crsinst -n bltest1,bltest2 -r 10gR2 -verbose
The output after running the above command in 192.168.15.213 is given in the attached
log file below:
cluvfy.log
Note: The expected response after running the above command is “Pre-check for
cluster services setup was successful on all the nodes”. Although I did not get it, since
some of the OS patches required for Clusterware have not been installed, I am ignoring
it, as it will not cause any issues for the Clusterware installation. The OS patches can be
installed later. Also, the “swap space” failure in the attached log is not corrected, as the
actual swap space allocated is 20G. This will be evident during the pre-requisite check
of the Clusterware installation using Oracle Universal Installer (OUI).
6.1.2 Create the Default Home for CRS in all the Nodes involved in the
Cluster
The following has to be run as a root user:
ssh root@bltest1
mkdir -p /applns/crs/oracle/product/10.2.0/app
cd /applns
chown -R oracle:oinstall crs
ssh root@bltest2
mkdir -p /applns/crs/oracle/product/10.2.0/app
cd /applns
chown -R oracle:oinstall crs
ssh root@bltest1
cd /applns/setup/clusterware/rootpre
./rootpre.sh
Expected Result:
No SunCluster running
ssh root@bltest2
cd /applns/setup/clusterware/rootpre
./rootpre.sh
Expected Result:
No SunCluster running
6.1.4 Ensure the Display is set correctly and any X Server Software is
working as required
This is applicable only if you are initiating the Clusterware Setup from a remote system
with X server software installed.
Note: While using Xming, ensure that all the fonts exist; otherwise the installation could
get stuck in the middle.
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
cd /applns/setup/clusterware/
-bash-3.00$ ./runInstaller
********************************************************************************
Please run the script rootpre.sh as root on all machines/nodes. The script can be found
at the toplevel of the CD or stage-area. Once you have run the script, please type Y to
proceed
Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle Clusterware
installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536
Passed
Note: The Default Inventory Location displayed by the Installer is being used for this
installation.
Click Next
Note: The Check “Checking operating system package requirements” does not succeed
and I check the box indicating that it has been manually verified. This failed because of the
Missing OS Patches required for the Clusterware. As mentioned earlier, this can be
ignored and we can proceed with the Clusterware Installation.
Click Add
Add the VIP, Interconnect IP and Public IP of the Remote Node involved in the
Clusterware and Click OK
Note: This should be present in the /etc/hosts file in both the nodes.
Click Next
Click Edit
Change Interface bnx0 to Public and Click Next
Enter the OCR/ OCR Mirror Location and Click Next
Enter the Voting Disk and 2 Mirrored Voting Disk Location and Click Next
Click Install
Progress of Clusterware Installation is displayed below:
Note: Clusterware Installation is first completed on BLTEST1 (node in which OUI was
initiated) and then it is done remotely on BLTEST2.
Run the following Scripts as Root user in bltest1 and then in bltest2 and Click OK:
The log after running the root.sh script in both the Cluster Nodes is given below:
clusterware_root.log
VIPCA installation is done below :
Clusterware Installation is complete :
ssh root@bltest1
mkdir -p /applns/oracle/product
cd /applns
chown -R oracle:oinstall oracle
ssh root@bltest2
mkdir -p /applns/oracle/product
cd /applns
chown -R oracle:oinstall oracle
6.2.2 Database Home Setup using Oracle Universal Installer (OUI)
Connect to bltest1 as the oracle user and run the following:
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/database/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536
Passed
Click Next
Click Next
Click Next
Click Next
Note: The check “Checking operating system package requirements” does not succeed,
and I check the box indicating that it has been manually verified. This failed because of
the missing OS patches. As mentioned earlier, this can be ignored and we can proceed
with the installation.
Select “Install database Software only” and Click Next
Note: Database will be created only after the Oracle Software has been installed and
patched to 10.2.0.3.
Click Install
Progress of Database Home Installation is displayed below:
Note: Database Home Installation is first completed on BLTEST1 (node in which OUI was
initiated) and then it is done remotely on BLTEST2.
Installation is completed
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/companion/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536
Passed
Click Next
Select ‘bltest2’ and Click Next
Click Next
Note: The check “Checking operating system package requirements” does not succeed,
and I check the box indicating that it has been manually verified. This failed because of
the missing OS patches. As mentioned earlier, this can be ignored and we can proceed
with the installation.
Click Install
Installation is completed
ssh root@bltest1
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/Disk1/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ cd /applns/setup/Disk1/
-bash-3.00$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 250 MB. Actual 11238 MB Passed
Checking swap space: must be greater than 500 MB. Actual 11483 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 65536
Passed
10.2.0.3_root.log
Installation is completed
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
-bash-3.00$ export PATH=$PATH:/applns/oracle/product/10.2.0/db_1/bin:/applns/crs/oracle/product/10.2.0/app
-bash-3.00$ /applns/oracle/product/10.2.0/db_1/bin/dbca
Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-
iso8859-1" to type FontStruct
Welcome Screen is displayed
Click Yes so that LISTENERS are created on both the RAC Nodes
ssh oracle@bltest1
export DISPLAY=192.168.73.27:0.0
-bash-3.00$ export ORACLE_HOME=/applns/oracle/product/10.2.0/db_1
-bash-3.00$ export PATH=$PATH:/applns/oracle/product/10.2.0/db_1/bin:/applns/crs/oracle/product/10.2.0/app
-bash-3.00$ /applns/oracle/product/10.2.0/db_1/bin/dbca
Warning: Cannot convert string "-monotype-arial-regular-r-normal--*-140-*-*-p-*-
iso8859-1" to type FontStruct
Welcome Screen is displayed
7 Acknowledgements
Special thanks to Sethunath from whom I have learnt the basics of Real Application
Cluster. This document wouldn’t have been complete without his help in configuring the
Storage in HP Hardware. Thanks to the almighty for the help rendered always.