INSTALLATION OF ORACLE RAC 10g RELEASE 2 ON HP-UX
CONTENTS
• HARDWARE CONSIDERATIONS
• SOFTWARE CONSIDERATIONS
• STORAGE CONSIDERATIONS
1. SYSTEM REQUIREMENTS
2. NETWORK REQUIREMENTS
SYSTEM PARAMETERS REQUIRED BEFORE INSTALLATION OF
ORACLE RAC 10g RELEASE 2 ON HP-UX:
1. PATCHES REQUIRED
LATEST PATCH BUNDLE: QUALITY PACK PATCHES FOR HP-UX 11i V2, MAY 2005
The following table shows the storage options supported for storing Oracle
Cluster Ready Services (CRS) files, Oracle database files, and Oracle database
recovery files. Oracle database files include data files, control files, redo log
files, the server parameter file, and the password file. Oracle CRS files include
the Oracle Cluster Registry (OCR) and the CRS voting disk. Oracle recovery
files include archived redo log files.
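A reconstruction of that table, based on the standard Oracle Database 10g Release 2 documentation for HP-UX (verify against the release notes for your patch level):

Storage option                      CRS files   Database files   Recovery files
----------------------------------  ----------  ---------------  --------------
Automatic Storage Management (ASM)  No          Yes              Yes
Shared raw logical volumes (SLVM)   Yes         Yes              No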
The command issued on both nodes to make the disks available, and the resultant
output obtained, are as follows:
# /usr/sbin/ioscan -fun -C disk
This command displays information about each disk attached to the system, including the
block device name (/dev/dsk/cxtydz) and character raw device name (/dev/rdsk/cxtydz).
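The exact output is system-specific; illustrative output for a single disk (the hardware path and device names here are hypothetical) looks like the following:

Class     I  H/W Path        Driver  S/W State   H/W Type     Description
==========================================================================
disk      2  0/1/1/0.1.0     sdisk   CLAIMED     DEVICE       HP 73.4GMAP3735NC
                             /dev/dsk/c2t1d0   /dev/rdsk/c2t1d0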
Automatic Storage Management (ASM) is a feature in Oracle Database 10g that provides
the database administrator with a simple storage management interface that is consistent
across all server and storage platforms. As a vertically integrated file system and volume
manager, purpose-built for Oracle database files, ASM provides the performance of async
I/O with the easy management of a file system. ASM provides capabilities that
save the DBA time and give the flexibility to manage a dynamic database
environment with increased efficiency.
Automatic Storage Management is part of the database kernel. It is linked into
$ORACLE_HOME/bin/oracle so that its code may be executed by all database processes.
One portion of the ASM code allows for the start-up of a special instance called an ASM
Instance. ASM Instances do not mount databases, but instead manage the metadata
needed to make ASM files available to ordinary database instances.
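A minimal sketch of a parameter file for such an ASM instance, using the documented 10g initialization parameters; the SID +ASM1, the disk string, and the disk group list are assumptions, not taken from this report:

# init+ASM1.ora -- parameter file for an ASM instance (not a database instance)
instance_type  = asm                   # marks the instance as ASM
asm_diskstring = '/dev/rdsk/c5t0d*'    # paths ASM scans for candidate disks (assumed)
asm_diskgroups = 'ASMdb1', 'ASMARCH'   # disk groups to mount at startup (assumed)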
ASM instances manage the metadata describing the layout of the ASM files. Database
instances access the contents of ASM files directly, communicating with an ASM
instance only to get information about the layout of these files. This requires that a
second portion of the ASM code run in the database instance, in the I/O path.
Four disk groups are created at CRIS, namely ASMdb1, ASMdb2, ASMdb3, and
ASMARCH. For each disk that is to be added to a disk group, enter the following
command to verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, the disk is already part of a
volume group. The disks that you choose must not be part of an LVM volume group.
The device paths must be the same on both systems; if they are not, they are
mapped to a single virtual device name.
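Several candidate disks can be checked at once with a short loop; a sketch, treating any pvdisplay success as a sign the disk is already in use (the device names c5t0d1 through c5t0d3 are hypothetical):

# for d in c5t0d1 c5t0d2 c5t0d3
> do
>   if /sbin/pvdisplay /dev/dsk/$d > /dev/null 2>&1
>   then echo "$d: already in an LVM volume group -- do not use for ASM"
>   else echo "$d: not in any volume group"
>   fi
> done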
The following commands are executed to change the owner, group, and permissions on
the character raw device file for each disk that is added to a disk group:
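For example, assuming the software owner is oracle in group dba and a hypothetical disk /dev/rdsk/c5t0d1:

# chown oracle:dba /dev/rdsk/c5t0d1
# chmod 660 /dev/rdsk/c5t0d1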
The redundancy level chosen for the ASM disk groups is External Redundancy,
because the storage is an intelligent subsystem, an HP StorageWorks EVA or HP
StorageWorks XP, which provides its own hardware-level redundancy.
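As an illustration of how one of these disk groups could be created from SQL*Plus on the ASM instance (the SID and device names are assumptions; EXTERNAL REDUNDANCY leaves mirroring to the array):

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> CREATE DISKGROUP ASMdb1 EXTERNAL REDUNDANCY
  2    DISK '/dev/rdsk/c5t0d1', '/dev/rdsk/c5t0d2';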
Useful ASM V$ view queries:
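For example, run from SQL*Plus on the ASM instance (these are the standard 10g ASM dynamic performance views):

SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
SQL> SELECT name, path, mount_status FROM v$asm_disk;
SQL> SELECT instance_name, db_name, status FROM v$asm_client;
SQL> SELECT group_number, operation, state, power FROM v$asm_operation;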
Check that the volume groups are properly created and available using the
following commands:
# strings /etc/lvmtab
# vgdisplay -v /dev/vg_rac
Change the permissions of the database volume group directory vg_rac to 777,
change the permissions of all raw logical volumes to 660, and change their owner
to oracle:dba.
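For example, assuming the raw logical volumes live directly under /dev/vg_rac (the names are hypothetical):

# chmod 777 /dev/vg_rac
# chown oracle:dba /dev/vg_rac/r*
# chmod 660 /dev/vg_rac/r*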
$ export ORACLE_BASE=/opt/oracle/product
Create a database file subdirectory under the Oracle base directory and set the appropriate
owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/<dbname>
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Create a text file similar to the following; it will be used as the raw device
mapping file:
system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
example=/dev/vg_name/rdbname_example_raw_160m
users=/dev/vg_name/rdbname_users_raw_120m
temp=/dev/vg_name/rdbname_temp_raw_250m
undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m
When configuring the Oracle user's environment, set the DBCA_RAW_CONFIG
environment variable to the full path of this file:
$ export DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
CLUSTER MANAGEMENT CONSIDERATIONS:
After all the LAN cards are installed and configured, and the RAC volume group and the
cluster lock volume group(s) are configured, cluster configuration is started. Activate the
lock disk on the configuration node ONLY. The lock volume can be activated only
on the node where the cmapplyconf command is issued, so that the lock disk can
be initialized accordingly.
# vgchange -a y /dev/vg_rac
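The template itself is generated with the Serviceguard cmquerycl command; a sketch, assuming two nodes named rac1 and rac2 (hypothetical):

# cmquerycl -v -C rac.asc -n rac1 -n rac2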
Check the cluster configuration template:
# cmcheckconf -v -C rac.asc
Create the binary configuration file and distribute the cluster configuration to all the
nodes in the cluster:
# cmapplyconf -v -C rac.asc
The cluster is not started until cmrunnode is run on each node or the cmruncl command is run.
Deactivate the lock disk on the configuration node after the cmapplyconf command completes:
# vgchange -a n /dev/vg_rac
Start up the cluster and verify that it is up and running. Start the cluster from
any node in the cluster.
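A minimal sketch of starting the cluster, using the standard Serviceguard command:

# cmruncl -v

Then activate the RAC volume group in shared (SLVM) mode on each node: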
# vgchange -a s /dev/vg_rac
Check the cluster status:
# cmviewcl -v
INSTALLATION OF ORACLE SOFTWARE:
Before the installation of CRS, a user is created who owns the Oracle RAC
software. Before CRS is installed, the storage option is chosen that is to be
used for the Oracle Cluster Registry (100 MB) and the CRS voting disk (20 MB).
Automatic Storage Management cannot be used to store these files, because they
must be accessible before any Oracle instance starts. The steps involved in the
installation of CRS follow; the DISPLAY environment variable must be set before
the installer is started.
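For example, assuming the installer display is redirected to an X server on a workstation named wks1 (hypothetical):

$ export DISPLAY=wks1:0.0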
This part describes phase two of the installation procedure: installing Oracle
Database 10g with Real Application Clusters (RAC).
When the database software is installed, the OUI also installs the software for Oracle
Enterprise Manager Database Control and integrates this tool into the cluster
environment. Once installed, Enterprise Manager Database Control is fully
configured and operational for RAC. We can also install Enterprise Manager Grid
Control onto other client machines outside our cluster to monitor multiple RAC and
single-instance Oracle database environments.
• Start the DBConsole agent on one of the cluster nodes as the oracle user:
$ emctl start dbconsole
• To connect to Oracle Enterprise Manager Database Control (default port
5500), open the following URL in a web browser: http://<node1a>:5500/em
• Log in as sys/manager with the SYSDBA role.
• Accept the licensing agreement.
• The OEM Database Control home page is then displayed.
With this, the installation of Oracle 10g RAC on HP-UX at CRIS was complete and
the project concluded successfully.