
ora.DATA.dg
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.LISTENER.lsnr
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.asm
ONLINE ONLINE oel61a Started,STABLE
ONLINE ONLINE oel61b STABLE
ora.net1.network
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.ons
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.registry.acfs
ONLINE OFFLINE oel61a STABLE
ONLINE OFFLINE oel61b STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE oel61b STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE oel61a STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE oel61a STABLE
ora.MGMTLSNR
1 ONLINE ONLINE oel61a 169.254.46.130 10.10.2.21 10.10.10.21,STABLE
ora.cvu
1 ONLINE ONLINE oel61b STABLE
ora.gns
1 ONLINE ONLINE oel61a STABLE
ora.gns.vip
1 ONLINE ONLINE oel61a STABLE
ora.mgmtdb
1 ONLINE ONLINE oel61a Open,STABLE
ora.oc4j
1 ONLINE ONLINE oel61b STABLE
ora.oel61a.vip
1 ONLINE ONLINE oel61a STABLE
ora.oel61b.vip
1 ONLINE ONLINE oel61b STABLE
ora.racdb.db
1 ONLINE ONLINE oel61a Open,STABLE
2 ONLINE OFFLINE Instance Shutdown,STABLE
ora.racdb.racdbsrv.svc
1 ONLINE ONLINE oel61a STABLE
2 ONLINE OFFLINE STABLE
ora.scan1.vip
1 ONLINE ONLINE oel61b STABLE
ora.scan2.vip
1 ONLINE ONLINE oel61a STABLE
ora.scan3.vip
1 ONLINE ONLINE oel61a STABLE
--------------------------------------------------------------------------------
[root@oel61a bin]#

2.16 This concludes the Grid Infrastructure upgrade from 11.2.0.3 to 12c (12.1.0.1).
2.17 Install Oracle 12c RDBMS binaries in a separate $OH.

Run OUI from 12c database stage directory.


Select Skip updates
Select Install software only
Select RAC database installation.
Select All Nodes
Select languages
Select Enterprise Edition
Verify the locations.
Verify OS groups.
Review
The errors are as follows:
Task resolv.conf Integrity - This task checks consistency of file /etc/resolv.conf file
across nodes
Check Failed on Nodes: [oel61b, oel61a]
Verification result of failed node: oel61b Details:
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: oel61a,oel61b - Cause: The DNS response time for an unreachable node
exceeded the value specified on nodes specified. - Action: Make sure that 'options
timeout', 'options attempts' and 'nameserver' entries in file resolv.conf are proper.
On HPUX these entries will be 'retrans', 'retry' and 'nameserver'. On Solaris these
will be 'options retrans', 'options retry' and 'nameserver'. Make sure that the DNS
server responds back to name lookup request within the specified time when looking up
an unknown host name.
Check for integrity of file "/etc/resolv.conf" failed - Cause: Cause Of Problem Not
Available - Action: User Action Not Available
Back to Top
Verification result of failed node: oel61a Details:
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: oel61a,oel61b - Cause: The DNS response time for an unreachable node
exceeded the value specified on nodes specified. - Action: Make sure that 'options
timeout', 'options
attempts' and 'nameserver' entries in file resolv.conf are proper. On HPUX these
entries will be 'retrans', 'retry' and 'nameserver'. On Solaris these will be 'options
retrans', 'options retry' and 'nameserver'. Make sure that the DNS server responds back
to name lookup request within the specified time when looking up an unknown host name.
Check for integrity of file "/etc/resolv.conf" failed - Cause: Cause Of Problem Not
Available - Action: User Action Not Available
Back to Top
Single Client Access Name (SCAN) - This test verifies the Single Client Access Name
configuration. Error:
PRVG-1101 : SCAN name "oel61-cluster-scan.gns.grid.gj.com" failed to resolve - Cause:
An attempt to resolve specified SCAN name to a list of IP addresses failed because SCAN
could not be resolved in DNS or GNS using 'nslookup'. - Action: Check whether the
specified SCAN name is correct. If SCAN name should be resolved in DNS, check the
configuration of SCAN name in DNS. If it should be resolved in GNS make sure that GNS
resource is online.
PRVG-1101 : SCAN name "oel61-cluster-scan.gns.grid.gj.com" failed to resolve - Cause:
An attempt to resolve specified SCAN name to a list of IP addresses failed because SCAN
could not be resolved in DNS or GNS using 'nslookup'. - Action: Check whether the
specified SCAN name is correct. If SCAN name should be resolved in DNS, check the
configuration of SCAN name in DNS. If it should be resolved in GNS make sure that GNS
resource is online.
Check Failed on Nodes: [oel61b, oel61a]
Verification result of failed node: oel61b Back to Top
Verification result of failed node: oel61a Back to Top

For PRVF-5636, see: PRVF-5636 : The DNS response time for an unreachable node
exceeded "15000" ms on following nodes (Doc ID 1356975.1).
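For reference, the resolver options the error message refers to look like the sketch below. The timeout and attempt values are illustrative (my assumption, not taken from this walkthrough), and the snippet writes a scratch copy rather than touching the live /etc/resolv.conf:

```shell
# Illustrative resolver settings to make lookups of unreachable names
# fail fast (values are examples, not from this walkthrough).
# Written to a scratch file, not to the live /etc/resolv.conf.
cat > /tmp/resolv.conf.example <<'EOF'
options timeout:1
options attempts:2
search gns.grid.gj.com gj.com
nameserver 192.168.2.52
EOF
# Two 'options' lines should now be present:
grep -c '^options' /tmp/resolv.conf.example
```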
For PRVG-1101, do the following.
Before:
[grid@oel61a ~]$ cluvfy comp gns -postcrsinst -verbose
Verifying GNS integrity
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "gns.grid.gj.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.2.0, 192.168.2.0, 192.168.2.0, 192.168.2.0,
192.168.2.0" match with the GNS VIP "192.168.2.0, 192.168.2.0, 192.168.2.0,
192.168.2.0, 192.168.2.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.52" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "gns.grid.gj.com" are reachable
WARNING:
PRVF-5218 : "oel61a-vip.gns.grid.gj.com" did not resolve into any IP address
PRVF-5827 : The response time for name lookup for name "oel61a-vip.gns.grid.gj.com"
exceeded 15 seconds
WARNING:
PRVF-5218 : "oel61b-vip.gns.grid.gj.com" did not resolve into any IP address
PRVF-5827 : The response time for name lookup for name "oel61b-vip.gns.grid.gj.com"
exceeded 15 seconds
Checking status of GNS resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel61a yes yes
oel61b no yes
GNS resource configuration check passed
Checking status of GNS VIP resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel61a yes yes
oel61b no yes
GNS VIP resource configuration check passed.
GNS integrity check failed
Verification of GNS integrity was unsuccessful on all the specified nodes.
[grid@oel61a ~]$

Make sure that the GNS static VIP address is listed in /etc/resolv.conf. Note that if
NetworkManager is running, it overwrites the file contents.
[root@oel61a bin]# cat /etc/resolv.conf
# Generated by NetworkManager
search gj.com
nameserver 192.168.2.1
nameserver 192.168.2.11
nameserver 192.168.2.52
[root@oel61a bin]#
[root@oel61b bin]# cat /etc/resolv.conf
# Generated by NetworkManager
search gj.com
nameserver 192.168.2.11
nameserver 192.168.2.1
nameserver 192.168.2.52
[root@oel61b bin]#
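One way to keep NetworkManager from rewriting the file (an assumption about the setup; adjust to your own interfaces) is to disable peer DNS and NM control in the interface configuration. Sketched here against a scratch copy of an ifcfg file rather than the live one:

```shell
# Keep NetworkManager from rewriting /etc/resolv.conf on OL6 by
# disabling peer DNS / NM control for the interface. The file below is
# a scratch example; on a real node it would be
# /etc/sysconfig/network-scripts/ifcfg-eth0, followed by a network
# service restart.
cfg=/tmp/ifcfg-eth0.example
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
EOF
# Append the relevant flags only if not already present:
grep -q '^PEERDNS=' "$cfg"       || echo 'PEERDNS=no' >> "$cfg"
grep -q '^NM_CONTROLLED=' "$cfg" || echo 'NM_CONTROLLED=no' >> "$cfg"
grep -E 'PEERDNS|NM_CONTROLLED' "$cfg"
```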

After the modification, we get:


[oracle@oel61a dbs]$ nslookup oel61-cluster-scan.gns.grid.gj.com
Server: 192.168.2.52
Address: 192.168.2.52#53
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.117

Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.111
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.112
[oracle@oel61a dbs]$
[grid@oel61b ~]$ nslookup oel61-cluster-scan.gns.grid.gj.com
Server: 192.168.2.52
Address: 192.168.2.52#53
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.117
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.111
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.112
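A quick scripted way to confirm that all three SCAN addresses come back from a lookup is to count the answer records in the nslookup output. The sketch below runs against a captured sample of the output shown above (names and addresses are the ones from this setup):

```shell
# Capture a sample of the nslookup output shown above, then count only
# the answer records (the server's own "Address: ...#53" line is
# excluded by the pattern). Three addresses are expected for a SCAN.
cat > /tmp/scan_lookup.txt <<'EOF'
Server: 192.168.2.52
Address: 192.168.2.52#53
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.117
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.111
Name: oel61-cluster-scan.gns.grid.gj.com
Address: 192.168.2.112
EOF
grep -c '^Address: [0-9.]*$' /tmp/scan_lookup.txt
```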
[grid@oel61b ~]$ cluvfy comp gns -postcrsinst -verbose
Verifying GNS integrity
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "gns.grid.gj.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.2.0, 192.168.2.0, 192.168.2.0" match with the GNS VIP
"192.168.2.0, 192.168.2.0, 192.168.2.0"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.2.52" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "gns.grid.gj.com" are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
Checking status of GNS resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel61a yes yes
oel61b no yes
GNS resource configuration check passed
Checking status of GNS VIP resource...
Node Running? Enabled?
------------ ------------------------ ------------------------
oel61a yes yes
oel61b no yes
GNS VIP resource configuration check passed.
GNS integrity check passed
Verification of GNS integrity was successful.
[grid@oel61b ~]$

Select Ignore All


Wait until prompted for the actions that need to be run as root, then run the scripts:
[root@oel61a bin]# /u01/app/oracle/product/12.1.0/db_1/root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@oel61a bin]#
[root@oel61b ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@oel61b ~]#
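root.sh must be run on each node in turn. A small driver sketch (node names are the ones from this setup; the ssh commands are echoed rather than executed so the loop is safe to show):

```shell
# Run root.sh on every node, one at a time. Echoed here instead of
# executed; drop the 'echo' to run it for real, as root, with ssh
# equivalence in place.
for node in oel61a oel61b; do
  echo ssh root@"$node" /u01/app/oracle/product/12.1.0/db_1/root.sh
done
```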

Exit OUI.
2.18 From the OLD $OH, make sure that the database is started, the FRA is large
enough, and db_recovery_file_dest_size is sized appropriately.
SQL> alter system set db_recovery_file_dest_size=60G scope=both sid='*';
System altered.
SQL>
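For rough sizing, a back-of-the-envelope sketch (my assumption, not a rule from this walkthrough): the dbua backup plus the archived logs generated during the upgrade must fit in the FRA, so start from the datafile footprint and add headroom:

```shell
# Rough FRA sizing: datafile footprint times a headroom factor for the
# dbua backup copy, plus slack for redo/archivelogs. Dummy numbers for
# illustration; the real figure would come from
#   select sum(bytes)/1024/1024/1024 from v$datafile;
datafiles_gb=25
echo $((datafiles_gb * 2 + 10))   # -> 60, matching the 60G set above
```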

2.19 Run dbua


Note that dbua makes a convenient backup that you can use for recovery if something
goes wrong. To take advantage of this option you need to select it in dbua before you
start. I strongly recommend using it: I had a problem with db_recovery_file_dest_size
set too low and had to restore and retry.
Invoke dbua from the new 12c $OH
Select Upgrade Oracle Database.

Select the database, review and press Next.


Wait for the prerequisites check to complete.
Examine the findings. This particular one is fixable so press Next to continue.
Note: this is where you specify the backup option, parallelism options, statistics
gathering prior to the upgrade, etc. If something goes wrong, a restore script will be
waiting for you in the specified location.
Select an option for EM Express
Note: this is where you specify the backup location. If something goes wrong, a
restore script will be waiting for you in the specified location.
You should not get this if the FRA is big enough. If you are stuck, restore the
database to its state before the upgrade, fix whatever the problem is, and retry.
Review the summary
Review the actions
Wait for the upgrade to complete.
At the end of the upgrade you will see something like this.
View the results and close dbua.
2.20 Verify that database is successfully upgraded.
[oracle@oel61b ~]$ srvctl config database -d racdb
Database unique name: RACDB
Database name: RACDB
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: servpool
Database instances:
Disk Groups: DATA
Mount point paths:
Services: racdbsrv

Type: RAC
Start concurrency:
Stop concurrency:
Database is policy managed
[oracle@oel61b ~]$
[oracle@oel61b admin]$ srvctl status database -d racdb
Instance RACDB_1 is running on node oel61b
Instance RACDB_2 is running on node oel61a
[oracle@oel61b admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sat May 24 22:59:37 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> select * from v$active_instances;
INST_NUMBER
-----------
INST_NAME
--------------------------------------------------------------------------------
CON_ID
----------
1
oel61b.gj.com:RACDB_1
0
2
oel61a.gj.com:RACDB_2
0

INST_NUMBER
-----------
INST_NAME
--------------------------------------------------------------------------------
CON_ID
----------
SQL> set linesize 300
SQL> /
INST_NUMBER INST_NAME                 CON_ID
----------- ------------------------- ----------
          1 oel61b.gj.com:RACDB_1              0
          2 oel61a.gj.com:RACDB_2              0
SQL>
SQL> select * from gv$instance;
   INST_ID INSTANCE_NUMBER INSTANCE_NAME HOST_NAME     VERSION    STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS  SHU DATABASE_STATUS INSTANCE_ROLE    ACTIVE_ST BLO CON_ID INSTANCE_MO EDITION FAMILY
---------- --------------- ------------- ------------- ---------- --------- ------ --- ------- ------- --------------- ------- --- --------------- ---------------- --------- --- ------ ----------- ------- ------
         2               2 RACDB_2       oel61a.gj.com 12.1.0.1.0 24-MAY-14 OPEN   YES       2 STARTED                 ALLOWED NO  ACTIVE          PRIMARY_INSTANCE NORMAL    NO       0 REGULAR     EE
         1               1 RACDB_1       oel61b.gj.com 12.1.0.1.0 24-MAY-14 OPEN   YES       1 STARTED                 ALLOWED NO  ACTIVE          PRIMARY_INSTANCE NORMAL    NO       0 REGULAR     EE
SQL>
[grid@oel61b ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.LISTENER.lsnr
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.asm
ONLINE ONLINE oel61a Started,STABLE
ONLINE ONLINE oel61b Started,STABLE
ora.net1.network
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.ons
ONLINE ONLINE oel61a STABLE
ONLINE ONLINE oel61b STABLE
ora.registry.acfs
ONLINE OFFLINE oel61a STABLE
ONLINE OFFLINE oel61b STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE oel61a STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE oel61b STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE oel61b STABLE
ora.MGMTLSNR
1 ONLINE ONLINE oel61b 169.254.81.184 10.10.10.22 10.10.5.22,STABLE
ora.cvu
1 ONLINE ONLINE oel61b STABLE
ora.gns
1 ONLINE ONLINE oel61b STABLE
ora.gns.vip
1 ONLINE ONLINE oel61b STABLE
ora.mgmtdb
1 ONLINE ONLINE oel61b Open,STABLE
ora.oc4j
1 ONLINE ONLINE oel61b STABLE
ora.oel61a.vip
1 ONLINE ONLINE oel61a STABLE
ora.oel61b.vip
1 ONLINE ONLINE oel61b STABLE
ora.racdb.db
1 ONLINE ONLINE oel61b Open,STABLE
2 ONLINE ONLINE oel61a Open,STABLE
ora.racdb.racdbsrv.svc
1 ONLINE ONLINE oel61a STABLE
2 ONLINE ONLINE oel61b STABLE
ora.scan1.vip
1 ONLINE ONLINE oel61a STABLE
ora.scan2.vip
1 ONLINE ONLINE oel61b STABLE
ora.scan3.vip
1 ONLINE ONLINE oel61b STABLE
--------------------------------------------------------------------------------
[grid@oel61b ~]$

2.21 A handy script to recover the database in case of a failed upgrade.
[root@oel61a RACDB]# cd backup
[root@oel61a backup]# pwd
/u01/app/oracle/admin/RACDB/backup

[root@oel61a backup]# ls
createSPFile_RACDB.sql ctl_backup_1400922874007 df_backup_04p93abe_1_1
RACDB_2_restore.sh
ctl_backup_1400807180325 df_backup_01p8voqo_1_1 init.ora rmanRestoreCommands_RACDB
[root@oel61a backup]# cat createSPFile_RACDB.sql
connect / as sysdba
CREATE SPFILE='+DATA/racdb/spfileracdb.ora' from
pfile='/u01/app/oracle/admin/RACDB/backup/init.ora';
exit;
[root@oel61a backup]# cat RACDB_2_restore.sh
#!/bin/sh
# -- Run this Script to Restore Oracle Database Instance RACDB_2
echo -- Shutting down the database from the new oracle home ...
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1; export ORACLE_HOME
/u01/app/oracle/product/12.1.0/db_1/bin/srvctl stop database -d RACDB
echo -- Downgrading the database CRS resources ...
echo y | /u01/app/oracle/product/12.1.0/db_1/bin/srvctl downgrade database -d RACDB -t
11.2.0.3.0 -o /u01/app/oracle/product/11.2.0/db_1
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RACDB_2; export ORACLE_SID
echo y | /u01/app/oracle/product/11.2.0/db_1/bin/srvctl modify database -d RACDB -p
+DATA/RACDB/spfileRACDB.ora
echo -- Removing /u01/app/oracle/cfgtoollogs/dbua/logs/Welcome_RACDB_2.txt file
rm -f /u01/app/oracle/cfgtoollogs/dbua/logs/Welcome_RACDB_2.txt ;
/u01/app/oracle/product/11.2.0/db_1/bin/sqlplus /nolog
@/u01/app/oracle/admin/RACDB/backup/createSPFile_RACDB.sql
/u01/app/oracle/product/11.2.0/db_1/bin/rman
@/u01/app/oracle/admin/RACDB/backup/rmanRestoreCommands_RACDB
echo -- Starting up the database from the old oracle home ...
/u01/app/oracle/product/11.2.0/db_1/bin/srvctl start database -d RACDB
[root@oel61a backup]#
[root@oel61a backup]# cat rmanRestoreCommands_RACDB
connect target /;
startup nomount;
set nocfau;
restore controlfile from '/u01/app/oracle/admin/RACDB/backup/ctl_backup_1400922874007';
alter database mount;
restore database;
alter database open resetlogs;
exit
[root@oel61a backup]#
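Before relying on the generated restore script, it is worth verifying that every piece it references is in place. A sanity-check sketch (file names are the ones shown above; demonstrated against a scratch directory so it can run anywhere):

```shell
# Check that the files the DBUA restore script depends on are present.
# A scratch directory stands in for /u01/app/oracle/admin/RACDB/backup
# so the sketch is self-contained.
bkp=/tmp/RACDB_backup_demo
mkdir -p "$bkp"
touch "$bkp/createSPFile_RACDB.sql" "$bkp/init.ora" \
      "$bkp/rmanRestoreCommands_RACDB" "$bkp/RACDB_2_restore.sh"
missing=0
for f in createSPFile_RACDB.sql init.ora \
         rmanRestoreCommands_RACDB RACDB_2_restore.sh; do
  [ -f "$bkp/$f" ] || { echo "missing: $f"; missing=1; }
done
echo "missing=$missing"   # -> missing=0 when everything is in place
```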

2.22 This concludes the database upgrade from 11.2.0.3 to 12c (12.1.0.1).

Annex A
Cluvfy output
[grid@oel61a grid]$ ./runcluvfy.sh stage -pre crsinst -n oel61a,oel61b
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "oel61a"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) oel61b,oel61a
TCP connectivity check passed for subnet "192.168.2.0"
Check: Node connectivity using interfaces on subnet "10.10.10.0"
Node connectivity passed for subnet "10.10.10.0" with node(s) oel61b,oel61a
TCP connectivity check passed for subnet "10.10.10.0"
Check: Node connectivity using interfaces on subnet "10.10.2.0"
Node connectivity passed for subnet "10.10.2.0" with node(s) oel61a,oel61b
TCP connectivity check passed for subnet "10.10.2.0"
Check: Node connectivity using interfaces on subnet "10.10.5.0"

Node connectivity passed for subnet "10.10.5.0" with node(s) oel61b,oel61a


TCP connectivity check passed for subnet "10.10.5.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.2.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.10.5.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.2.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.2.0" for multicast communication with multicast group
"224.0.0.251" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.10.0" for multicast communication with multicast group
"224.0.0.251" passed.
Checking subnet "10.10.5.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.5.0" for multicast communication with multicast group
"224.0.0.251" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check failed
Check failed on nodes:
oel61b,oel61a
Available memory check passed
Swap space check passed
Free disk space check passed for "oel61b:/usr"
Free disk space check passed for "oel61a:/usr"
Free disk space check passed for "oel61b:/var"
Free disk space check passed for "oel61a:/var"
Free disk space check passed for "oel61b:/etc,oel61b:/sbin"
Free disk space check passed for "oel61a:/etc,oel61a:/sbin"
Free disk space check passed for "oel61b:/u01/app/11.2.0/grid"
Free disk space check passed for "oel61a:/u01/app/11.2.0/grid"
Free disk space check passed for "oel61b:/tmp"
Free disk space check passed for "oel61a:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"


Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: oel61a,oel61b
Check for integrity of file "/etc/resolv.conf" failed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf"
passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check failed for process "avahi-daemon"
Check failed on nodes:
oel61b,oel61a
Daemon not running check failed for process "avahi-daemon"
Check failed on nodes:
oel61b,oel61a
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@oel61a grid]$
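The cluvfy output above is long; a convenience sketch to surface only the failed checks (demonstrated on a captured fragment of the output above; the log file name is illustrative):

```shell
# Tee the cluvfy run into a log, then grep it for failures. The
# fragment below is captured from the pre-check output above, so the
# expected count of 'failed' lines is 3.
cat > /tmp/cluvfy_demo.log <<'EOF'
Total memory check failed
Available memory check passed
Daemon not configured check failed for process "avahi-daemon"
Daemon not running check failed for process "avahi-daemon"
EOF
grep -c 'failed' /tmp/cluvfy_demo.log
```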
[grid@oel61a grid]$ ./runcluvfy.sh stage -post hwos -n oel61a,oel61b
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "oel61a"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.2.0"

Node connectivity passed for subnet "192.168.2.0" with node(s) oel61b,oel61a


TCP connectivity check passed for subnet "192.168.2.0"
Check: Node connectivity using interfaces on subnet "10.10.10.0"
Node connectivity passed for subnet "10.10.10.0" with node(s) oel61a,oel61b
TCP connectivity check passed for subnet "10.10.10.0"
Check: Node connectivity using interfaces on subnet "10.10.2.0"
Node connectivity passed for subnet "10.10.2.0" with node(s) oel61b,oel61a
TCP connectivity check passed for subnet "10.10.2.0"
Check: Node connectivity using interfaces on subnet "10.10.5.0"
Node connectivity passed for subnet "10.10.5.0" with node(s) oel61b,oel61a
TCP connectivity check passed for subnet "10.10.5.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.2.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "10.10.5.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.10.2.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.2.0" for multicast communication with multicast group
"224.0.0.251" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.10.0" for multicast communication with multicast group
"224.0.0.251" passed.
Checking subnet "10.10.5.0" for multicast communication with multicast group
"224.0.0.251"...
Check of subnet "10.10.5.0" for multicast communication with multicast group
"224.0.0.251" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
ASM Disk Group                       Sharing Nodes (2 in count)
------------------------------------ ------------------------
DATA                                 oel61a oel61b

Disk                                 Sharing Nodes (2 in count)
------------------------------------ ------------------------
/dev/sde                             oel61a oel61b
/dev/sde1                            oel61a oel61b
/dev/sdf                             oel61a oel61b
/dev/sdf1                            oel61a oel61b
/dev/sdb                             oel61a oel61b
/dev/sdb1                            oel61a oel61b
/dev/sdd                             oel61a oel61b
/dev/sdd1                            oel61a oel61b
/dev/sdc                             oel61a oel61b
/dev/sdc1                            oel61a oel61b
Shared storage check was successful on nodes "oel61a,oel61b"
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf"
passed
Post-check for hardware and operating system setup was successful.
[grid@oel61a grid]$
MAKE SURE THE CLUSTER IS UP AND RUNNING
[root@oel61a bin]# ./crsctl check cluster -all
**************************************************************
oel61a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oel61b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@oel61a bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE oel61a
ONLINE ONLINE oel61b
ora.LISTENER.lsnr
ONLINE ONLINE oel61a
ONLINE ONLINE oel61b
ora.asm
ONLINE ONLINE oel61a Started
ONLINE ONLINE oel61b Started
ora.gsd
OFFLINE OFFLINE oel61a
OFFLINE OFFLINE oel61b
ora.net1.network
ONLINE ONLINE oel61a
ONLINE ONLINE oel61b
ora.ons
ONLINE ONLINE oel61a
ONLINE ONLINE oel61b
ora.registry.acfs
ONLINE OFFLINE oel61a
ONLINE OFFLINE oel61b
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE oel61a
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE oel61b
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE oel61b
ora.cvu
1 ONLINE ONLINE oel61b
ora.gns
1 ONLINE ONLINE oel61b

ora.gns.vip
1 ONLINE ONLINE oel61b
ora.oc4j
1 ONLINE ONLINE oel61b
ora.oel61a.vip
1 ONLINE ONLINE oel61a
ora.oel61b.vip
1 ONLINE ONLINE oel61b
ora.racdb.db
1 ONLINE ONLINE oel61a Open
2 ONLINE ONLINE oel61b Open
ora.racdb.racdbsrv.svc
1 ONLINE ONLINE oel61a
2 ONLINE ONLINE oel61b
ora.scan1.vip
1 ONLINE ONLINE oel61a
ora.scan2.vip
1 ONLINE ONLINE oel61b
ora.scan3.vip
1 ONLINE ONLINE oel61b
[root@oel61a bin]#

BASH PROFILES

[oracle@oel61a ~]$ cat .bash_profile


# .bash_profile
umask 022
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
#ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_HOSTNAME=oel61a
ORACLE_SID=RACDB_2
ORACLE_UNQNAME=RACDB
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME ORACLE_UNQNAME
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
#ulimit -s unlimited
ulimit -v unlimited
ulimit -n 36500
if [ -t 0 ]; then
stty intr ^C
fi
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
[oracle@oel61a ~]$
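The oel61a oracle profile above keeps the old 11.2 home as a commented-out line and hard-codes the 12.1 home. A hedged alternative is a small function for switching homes; `set_home` is hypothetical, not part of the profiles shown:

```shell
# Hypothetical helper for switching between the 11.2 and 12.1 homes,
# instead of editing commented-out ORACLE_HOME lines by hand.
set_home() {
  ORACLE_HOME="$1"
  PATH="$ORACLE_HOME/bin:$PATH"      # note: prepending on every call grows PATH
  LD_LIBRARY_PATH="$ORACLE_HOME/lib"
  export ORACLE_HOME PATH LD_LIBRARY_PATH
}

set_home /u01/app/oracle/product/12.1.0/db_1
echo "$ORACLE_HOME"
```

During an out-of-place upgrade like this one, that makes flipping a session back to the 11.2 home a one-liner rather than a profile edit and re-login.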
[grid@oel61a ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
umask 022
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0/grid_1
ORACLE_HOSTNAME=oel61a
ORACLE_SID=+ASM1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
stty intr ^C
fi
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
[grid@oel61a ~]$
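Both profiles wrap `stty` in an `if [ -t 0 ]` test. That guard matters for RAC installs: OUI and cluvfy run commands over non-interactive ssh, and an unguarded `stty` (or any profile output) would then run with no terminal attached. A standalone illustration of the guard:

```shell
# Run terminal-only setup solely when stdin is a tty; non-interactive
# invocations (e.g. `ssh oel61b date` during the install) skip it silently.
if [ -t 0 ]; then
  stty intr ^C        # interactive login: safe to touch terminal settings
  mode=interactive
else
  mode=non-interactive
fi
echo "$mode"
```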
[root@oel61b ~]# su - oracle
[oracle@oel61b ~]$ cat .bash_profile
# .bash_profile
umask 022
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
ORACLE_HOSTNAME=oel61b
ORACLE_SID=RACDB_1
ORACLE_UNQNAME=RACDB
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME ORACLE_UNQNAME
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
#ulimit -s 65000
ulimit -v unlimited
ulimit -n 32560
if [ -t 0 ]; then
stty intr ^C
fi
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
[oracle@oel61b ~]$ exit
logout
[root@oel61b ~]# su - grid
[grid@oel61b ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
umask 022
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1.0/grid_1
ORACLE_HOSTNAME=oel61b
ORACLE_SID=+ASM2
LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
ulimit -n 32560
if [ -t 0 ]; then
stty intr ^C
fi
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
[grid@oel61b ~]$
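The profiles raise the open-files limit by hand (`ulimit -n 32560` on oel61b, 36500 on oel61a). A small sketch for verifying at login that the limit actually took effect; the threshold used here is deliberately low so the check runs anywhere, and would be set to the real target on the cluster nodes:

```shell
# Illustrative check: warn if the soft open-files limit is below a target.
# 1024 is used so the sketch works on any box; the profiles above want
# 32560 (oel61b) or 36500 (oel61a).
required=1024
current=$(ulimit -n)
if [ "$current" = "unlimited" ] || [ "$current" -ge "$required" ]; then
  status=ok
else
  status=low
fi
echo "ulimit -n = $current ($status)"
```

Putting a check like this at the end of the profile catches the common case where a pam_limits or /etc/security/limits.conf cap silently overrides the value the profile asked for.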