
Oracle Database Appliance Review Detailed Notes

By Caleb Small, Senior Oracle DBA and RAC Consultant


Here are my detailed (sometimes too detailed) notes and screenshots from a real ODA deployment. Some
sections include voluminous output for the sake of completeness, but there is plenty of good
information in between, so don't be discouraged.
After our initial deployment, in which we had time server issues, we ran the de-install procedure to wipe the
machine clean back to the OS. This is not the same as the bare metal restore, which does a factory reset of the
OS installation; this procedure simply removes all the Oracle software and configuration so you can start over.
One of our issues, which is no surprise, was deploying without a time server. The ODA developers said it can't
be done, but of course CTSS (Cluster Time Synchronization Service) is supposed to take care of this. However,
the ODA deployment does not offer an easy way to implement this. You have to disable and de-configure NTP
manually at the OS level before performing the ODA setup. Then, and only then, will CTSS take over and
allow you to have an autonomous ODA free of any time server requirement.
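The manual NTP de-configuration we used can be sketched as follows (run as root on both nodes before starting the ODA setup; the .disable suffix is just our own convention, not an Oracle requirement):

```shell
# Run as root on BOTH nodes, before the ODA deployment.
# Stop NTP and prevent it from starting at boot
service ntpd stop
chkconfig ntpd off

# Rename the config file and directory so nothing can re-read them;
# the ".disable" suffix is our own convention
mv /etc/ntp.conf /etc/ntp.conf.disable
mv /etc/ntp /etc/ntp.disable
```

With no usable NTP configuration, the Grid Infrastructure installer finds NTP absent and starts CTSS in Active mode.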
Lessons Learned
1. The root passwords on both nodes must be the default of welcome1.
2. If an existing deployment exists, clean it off with cleanupDeploy.pl.
3. When running firstnet, use the public IP of node 1 (a hint about this on the setup poster would be nice).
4. If you know the IP of the ILOM, the whole install can be done with PuTTY and X Windows. You do not
need a keyboard, mouse and monitor at the console.
5. You need to add the localhost entry to /etc/hosts for X Windows to work with PuTTY.
6. Be patient, especially at 60%; some of the steps take a long time.
7. Click the Show Details button to tail the log file; this helps confirm that things really are moving.
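For lesson 4, the console-less workflow looks roughly like this (the ILOM IP shown is whatever your site assigned; `start /SP/console` is the standard Sun ILOM CLI command to attach to the host console):

```shell
# From the laptop (PuTTY on Windows, or any ssh client): log in to the ILOM of node 1
ssh root@192.0.2.100          # ILOM IP is site-specific
# At the ILOM prompt, attach to the host serial console:
#   -> start /SP/console
# For the GUI deployment steps, ssh to the host itself with X11 forwarding
# (in PuTTY enable "X11 forwarding"; the command-line equivalent is):
ssh -X root@192.0.2.18
```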
Outstanding Issues
1. Unable to cut and paste text from the ILOM console window, and no way to scroll back. Very frustrating!
2. Still don't understand why only one SCAN listener and SCAN IP were configured; there should be two.
3. NTP was still configured to start, even though we specified no NTP. Because we had previously
renamed the NTP config file and directory to .disable, NTP was unable to start, so CTSS did start in
Active mode and the clocks were synchronized.
4. Error messages on the console screen during boot-up persist.
5. Boot messages are not available for review in any log file, and there is no scroll-back in the console window.
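For issue 2, the SCAN configuration can be inspected after deployment; a quick sketch, assuming the grid home path shown later in these notes:

```shell
# As the grid user: show how many SCAN VIPs and SCAN listeners were created
/u01/app/11.2.0.3/grid/bin/srvctl config scan
/u01/app/11.2.0.3/grid/bin/srvctl config scan_listener
# Since we are not using DNS, the SCAN name resolves only via /etc/hosts,
# which may be why only the single IP we typed into the GUI was configured
```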
Caleb Small
caleb@caleb.com
800-223-8992
Re-Deploying ODA Software
June 24, 2012, Caleb Small, caleb@caleb.com
The Configurator program has been run on a laptop to create a config file called ODAconfig.
The install kit has been uploaded to the ODA (using SCP) and unpacked in the directory /tmp.
A previous deployment exists, but has problems with its ASM storage.
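The upload and unpack steps were roughly as follows (the bundle file name here is illustrative; use whatever version you actually downloaded):

```shell
# From the laptop: copy the end-user bundle to node 1
scp OAK_bundle.zip root@192.0.2.18:/tmp      # file name is illustrative
# On node 1: unpack it in /tmp
ssh root@192.0.2.18
cd /tmp && unzip OAK_bundle.zip
```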
START OVER - Deinstalling Oracle Software and Reinitializing Storage

IMPORTANT NOTE: This procedure resets the public (and probably other) IP addresses to defaults, which
may differ from what was previously configured; especially awkward since we never actually knew the original
default IPs in the first place. However, the ILOM IPs are not changed, so the ILOM console still works.
As root on node 1 only:
login as: root
root@192.0.2.18's password:
Last login: Sat Jun 23 23:07:44 2012 from 192.0.2.200
[root@orclsys1 ~]# cd /opt/oracle/oak/onecmd
[root@orclsys1 onecmd]# ./cleanupDeploy.pl
Please enter the root password for performing cleanup:
Re-enter root password:
About to clear up OAK deployment, public network connectivity will be lost, root password will be set to default and both nodes will be rebooted
Do you want to continue(yes/no): yes
Setting up ssh for root
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001351.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001351.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:13:51
INFO : Step 1 SetupSSHroot
INFO : Setting up ssh for root...
INFO : Setting up ssh across the private network...
............done
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /root/DoAllcmds.sh
INFO : Background process 13797 (node: 192.168.16.24) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /root/DoAllcmds.sh
INFO : Background process 13821 (node: 192.168.16.25) gets done with the exit code 0
INFO : Done setting up ssh
INFO : Running /usr/bin/rsync -tarvz /opt/oracle/oak/onecmd/ 192.168.16.25:/opt/oracle/oak/onecmd --exclude=*zip --exclude=*gz --exclude=*log --exclude=*trc --exclude=*rpm to sync directory </opt/oracle/oak/onecmd> on node <192.168.16.25>
SUCCESS: Ran /usr/bin/rsync -tarvz /opt/oracle/oak/onecmd/ 192.168.16.25:/opt/oracle/oak/onecmd --exclude=*zip --exclude=*gz --exclude=*log --exclude=*trc --exclude=*rpm and it returned: RC=0
sending incremental file list
./
onecommand.params
tmp/
tmp/DoAllcmds-20120623235336.sh
tmp/DoAllcmds-20120623235500.sh
tmp/DoAllcmds.sh
tmp/all_nodes
tmp/clone-optoracleoakpkgreposorapkgsGI11.2.0.3.2Basegrid112.tar.gz.out
tmp/db_nodes
tmp/hosts-gen
tmp/hosts-gen-orclsys1
tmp/hosts-gen-orclsys2
tmp/oakpartition.sh
tmp/ocmd-checkusers.pl
tmp/orclsys1-ocmd-checkusers.res
tmp/orclsys2-ocmd-checkusers.res
tmp/priv_ip_group
tmp/setilomconfign0.sh
tmp/setuptz.sh
tmp/sysconfntp.sh
tmp/vip_node

sent 6829 bytes received 5277 bytes 24212.00 bytes/sec


total size is 9413059 speedup is 777.55
INFO : Time in SetupSSHroot is 7 seconds.
..........done
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /root/DoAllcmds.sh
INFO : Background process 14029 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /root/DoAllcmds.sh
INFO : Background process 14053 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001407.sh
INFO : Background process 14074 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001407.sh
INFO : Background process 14097 (node: orclsys2) gets done with the exit code 0
INFO : Time spent in step 1 SetupSSHroot is 16 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001351.log...
Exiting...
Uninstalling ASR
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001407.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001407.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:08
INFO : Step 2 deinstallASR
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/removeAsrrpm.sh
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/removeAsrrpm.sh
INFO : Background process 14178 (node: 192.168.16.24) gets done with the exit code 0
INFO : Background process 14201 (node: 192.168.16.25) gets done with the exit code 0
INFO : Time spent in step 2 deinstallASR is 0 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001407.log...
Exiting...
Deinstalling oracle stack
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001408.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001408.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:09
INFO : Step 3 DeinstallGI
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001409.sh
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001409.sh
INFO : Background process 14283 (node: orclsys1) gets done with the exit code 0
INFO : Background process 14306 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001409.sh
INFO : Background process 14327 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001409.sh
INFO : Background process 14350 (node: orclsys2) gets done with the exit code 0
INFO : Disabling crs on all nodes...
INFO : Stopping clusterware on all nodes...
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/stopcluster.sh
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/stopcluster.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/stopcluster.sh'
INFO : Background process 14399 (node: orclsys2) gets done with the exit code 0
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/stopcluster.sh'
INFO : Background process 14374 (node: orclsys1) gets done with the exit code 0
INFO : CSSD is down, continuing...
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/deinstall.sh
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/deinstall.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/deinstall.sh'
INFO : Background process 14456 (node: 192.168.16.24) gets done with the exit code 0
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/deinstall.sh'
INFO : Background process 14480 (node: 192.168.16.25) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14683 (node: orclsys1) gets done with the exit code 0
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14705 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14726 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14748 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14769 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14791 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
INFO : Background process 14812 (node: orclsys1) gets done with the exit code 0
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14834 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
INFO : Background process 14855 (node: orclsys1) gets done with the exit code 0
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14877 (node: orclsys2) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys1 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14898 (node: orclsys1) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh
WARNING: Ignore any errors returned by '/usr/bin/ssh -l root orclsys2 /opt/oracle/oak/onecmd/tmp/dokill.sh'
INFO : Background process 14920 (node: orclsys2) gets done with the exit code 0
INFO : Time spent in step 3 DeinstallGI is 25 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001408.log...
Exiting...
Updating multipath.conf
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:34
INFO : Step 4 resetmultipathconf
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/resetmultpath.sh
INFO : Background process 15004 (node: 192.168.16.24) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/resetmultpath.sh
INFO : Background process 15026 (node: 192.168.16.25) gets done with the exit code 0
INFO : Time spent in step 4 resetmultipathconf is 0 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.log...
Exiting...
Dropping users
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:35
INFO : Step 5 DropUsersGroups
INFO : Dropping user oracle on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/userdel -r -f oracle
SUCCESS: Dropping user oracle on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/userdel -r -f oracle
INFO : Dropping user oracle on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/userdel -r -f oracle
SUCCESS: Dropping user oracle on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/userdel -r -f oracle
INFO : Dropping user grid on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/userdel -r -f grid
SUCCESS: Dropping user grid on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/userdel -r -f grid
INFO : Dropping user grid on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/userdel -r -f grid
SUCCESS: Dropping user grid on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/userdel -r -f grid
INFO : Dropping group oinstall on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel oinstall
INFO : Dropping group oinstall on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel oinstall
INFO : Dropping group dba on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel dba
INFO : Dropping group dba on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel dba
INFO : Dropping group racoper on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel racoper
INFO : Dropping group racoper on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel racoper
INFO : Dropping group asmdba on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel asmdba
INFO : Dropping group asmdba on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel asmdba
INFO : Dropping group asmoper on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel asmoper
INFO : Dropping group asmoper on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel asmoper
INFO : Dropping group asmadmin on 192.168.16.24 as root using command /usr/bin/ssh root@192.168.16.24 /usr/sbin/groupdel asmadmin
INFO : Dropping group asmadmin on 192.168.16.25 as root using command /usr/bin/ssh root@192.168.16.25 /usr/sbin/groupdel asmadmin
INFO : Time spent in step 5 DropUsersGroups is 3 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001434.log...
Exiting...
Resetting network
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001438.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001438.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:39
INFO : Step 6 resetnetwork
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/resetnet.sh
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/resetnet.sh
INFO : Background process 15327 (node: 192.168.16.24) gets done with the exit code 0
INFO : Background process 15355 (node: 192.168.16.25) gets done with the exit code 0
INFO : Time spent in step 6 resetnetwork is 2 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001438.log...
Exiting...
Resetting password
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001441.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001441.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:42
INFO : Step 7 resetpasswd
INFO : Resetting root password
...
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/secuser.sh
...
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/secuser.sh
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001443.sh
INFO : Background process 15573 (node: 192.168.16.24) gets done with the exit code 0
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/DoAllcmds-20120624001443.sh
INFO : Background process 15596 (node: 192.168.16.25) gets done with the exit code 0
INFO : Time spent in step 7 resetpasswd is 1 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001441.log...
Exiting...
Resetting dns, ntp and Rebooting
INFO : Logging all actions in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001443.log and traces in /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001443.trc
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
INFO : Creating nodelist files...
==================================================================================
INFO : 2012-06-24 00:14:43
INFO : Step 8 reboot
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/onecmd/tmp/reboot.sh
INFO : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/onecmd/tmp/reboot.sh
Broadcast message from root (Sun Jun 24 00:14:54 2012):
The system is going down for reboot NOW!
INFO : Background process 15677 (node: 192.168.16.24) gets done with the exit code 0
INFO : Background process 15704 (node: 192.168.16.25) gets done with the exit code 0
INFO : Time spent in step 8 reboot is 11 seconds.
==================================================================================
INFO : Log file is /opt/oracle/oak/onecmd/tmp/orclsys1-20120624001443.log...
Exiting...
[root@orclsys1 onecmd]#

After rebooting, here's what ifconfig reports. We are not able to ping these IPs; they are for the private interface.
bond0 is the public interface; it is not configured at this point.
Eth0: 192.168.16.24/25
Eth1: 192.168.17.24/25

NOTE: Unable to cut and paste text from the ILOM console window
Run the initial network configuration command on node 1 and use default values:
NOTE: Accepting the default (example) value of .2 was probably a bad idea. You should use the actual IP address for the
public interface on node 1. Then we might not run into the session disconnect problem below.
/opt/oracle/oak/bin/oakcli configure firstnet
Select the interface to configure network on [bond0 bond1 bond2 xbond0]:bond0
Configure DHCP on bond0?(yes/no):no
INFO: Static configuration selected
Enter the IP address to configure:192.0.2.2
Enter the netmask address to configure:255.255.255.0
Enter the gateway address to configure:192.0.2.1
Plumbing the IPs now
Restarting the network
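After firstnet completes, it is worth confirming the address that was actually plumbed before reconnecting (a quick sanity check from the console session; substitute your own addresses):

```shell
# Confirm bond0 got the address we entered
ifconfig bond0 | grep 'inet addr'
# And that the gateway answers
ping -c 3 192.0.2.1
```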

Now, we are able to ssh in on 192.0.2.2, but X Windows doesn't work:

[root@oak1 ~]# xclock


_X11TransSocketINETConnect() can't get address for localhost:6010: Temporary failure in
name resolution
Error: Can't open display: localhost:10.0

The /etc/hosts file does NOT have an entry for localhost; the 127.0.0.1 line below was added to fix the problem:
[root@oak1 etc]# cat hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
::1         localhost6.localdomain6 localhost6
127.0.0.1   localhost.localdomain localhost

Shortly after starting, we lost connectivity in the PuTTY session, because the default public IP address changed from .2
to .18 (.2 was the example value used while running the firstnet setup; .18 is the actual value specified in the
ODAconfig file). In future, use the real value (e.g. .18) when running firstnet.
NOTE: It would be really nice if there were a hint in Step 3 on the ODA Setup Poster saying: use the IP
address for the public interface on node 1.
Restart the ssh connection on the .18 IP. Start 5:56

Click Custom.
Click Load an existing configuration, then Browse.

Enter a new password for user root.


IMPORTANT NOTE: deployment fails at 4% (step 1 Setup Network) if root passwords are not the default of
welcome1 on both servers.

We are not using DNS, nor NTP. We want CTSS for time synchronization.

The SCAN IPs are not populated, even though they are specified in the ODAconfig file as:
# SCAN INFO
SCAN_NAME=orclsys-scan
SCAN_IPS=(192.0.2.22 192.0.2.23)
SCAN_PORT=1521
You can only enter ONE SCAN IP; the other field is not accessible (you cannot click nor tab into it). We fill in
the first SCAN IP only:

Other Network is not used.

Accept defaults

Not configured. How would this work if the ODA were on an isolated network without Internet access?

No Cloud File system.

--Configuration Information--
ConfigVersion = 2.2.0.0.0


--Cluster Information--
System name = orclsys
Domain Name = example.com
Region = America
Timezone = America/Los_Angeles
--Database Information--
Database name = (orcl)
Database block size = (8192)
Database Backup Location = Local
Deployment type = (RAC)
Database CharacterSet = AL32UTF8
Database Teritory = AMERICA
Database Language = AMERICAN
Database Class = Medium
--Host Information--
Host VIP Names = (orclsys1-vip orclsys2-vip)

New Root Password = 2C325CE00B9B02204ABD81FDDF709347


--SCAN Information--
Scan Name = orclsys-scan
Scan Name = (192.0.2.22 )
Is DNS Server used = false
DNS Servers = ( )
NTP Servers = ( )
VIP IP = (192.0.2.20 192.0.2.21)
--Network Information--
Public IP = (192.0.2.18 192.0.2.19)
Public Network Mask = 255.255.255.0
Public Network Gateway = 192.0.2.1
Public Network interface = bond0
Public Network Hostname = (orclsys1 orclsys2)
NET1 IP = ( )
NET1 Mask =
NET1 Gateway =
NET1 Interface = bond1
NET1 Network Hostname = ( )
NET2 IP = ( )
NET2 Mask =
NET2 Gateway =
NET2 Interface = bond2
NET2 Network Hostname = ( )
NET3 IP = ( )
NET3 Mask =
NET3 Gateway =
NET3 Interface = xbond0
NET3 Network Hostname = ( )
ILOM IPs = (192.0.2.100 192.0.2.101)
ILOM Hostname = (orclsys1-ilom orclsys2-ilom)
ILOM Netmask = 255.255.255.0
ILOM Gateway = 192.0.2.1
--Cloud FileSystem Info--
Configure Cloud FileSystem = False
Cloud FileSystem Mount point =
Cloud Filesystem size(GB) =
--Automatic Service Request--
Configure ASR = False
ASR proxy server =
ASR online account username =
ASR online account password =

The first try failed at 4% (step 1, SetupNetwork): no error messages were displayed, it just hangs. Click Show Details and
then you see an error message in the log file about the password for root.
Passwords for root (and for oracle and grid, if they exist) must be the default of welcome1 or the deployment will fail.
If this were an initial deployment, oracle and grid would not exist; root would still have to be the default.
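If a previous attempt changed any of these passwords, resetting them back to the default is quick (welcome1 is the documented ODA default; the `--stdin` flag is specific to the Red Hat/Oracle Linux passwd):

```shell
# Run as root on BOTH nodes before re-running the deployment
echo welcome1 | passwd --stdin root
# Only needed if the users survived a partial deployment:
id oracle >/dev/null 2>&1 && echo welcome1 | passwd --stdin oracle
id grid   >/dev/null 2>&1 && echo welcome1 | passwd --stdin grid
```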

Click the Show Details button to see detailed progress (and watch for errors)

It has been at 60% for a long time now: 10 minutes.


The log file in Show Details did move
INFO : Running rootcrs.pl using </u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl> on nodes <orclsys1 orclsys2>...
INFO : Look at the log file '/opt/oracle/oak/onecmd/tmp/rootcrs.pl-<nodename>-log' for more details
INFO : Running as root: /usr/bin/ssh -l root orclsys1 /u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl > /opt/oracle/oak/onecmd/tmp/rootcrs.pl-orclsys1-log 2>&1
INFO : Running as root: /usr/bin/ssh -l root orclsys2 /u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl > /opt/oracle/oak/onecmd/tmp/rootcrs.pl-orclsys2-log 2>&1

Finally proceeding.

Roughly one hour to finish.

The cluster comes up OK.
Check with crsctl status resource -t.
Check CTSS; it is running in Active mode, and the clocks are synchronized:
[grid@orclsys1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

[grid@orclsys2 ~]$ crsctl check ctss


CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

The NTP service had previously been disabled, and the config files renamed to .disable, in order to activate CTSS.
After the rebuild, the config files and directory retain the .disable suffix; however, the ntpd service IS configured to
START:
[root@orclsys1 ~]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:on    4:off   5:on    6:off
[root@orclsys1 etc]# ls -al ntp*
-rw-r--r--  1 root root  1833 Nov 16  2011 ntp.conf.disable

ntp.disable:
total 44
drwxr-xr-x  2 root root  4096 Jun  7 08:15 .
drwxr-xr-x 94 root root 12288 Jun 24 01:46 ..
-rw-------  1 root root    73 Nov 16  2011 keys
-rw-r--r--  1 root root   186 Dec 23  2011 ntpservers
-rw-r--r--  1 root root     0 Nov 16  2011 step-tickers

[root@orclsys2 ~]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:on    4:off   5:on    6:off
[root@orclsys2 etc]# ls -al ntp*
-rw-r--r--  1 root root  1833 Nov 16  2011 ntp.conf.disable

ntp.disable:
total 44
drwxr-xr-x  2 root root  4096 Jun  7 08:04 .
drwxr-xr-x 94 root root 12288 Jun 24 01:46 ..
-rw-------  1 root root    73 Nov 16  2011 keys
-rw-r--r--  1 root root   186 Dec 23  2011 ntpservers
-rw-r--r--  1 root root     0 Nov 16  2011 step-tickers

Configure it OFF on both nodes:

[root@orclsys1 etc]# chkconfig ntpd off
[root@orclsys1 etc]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@orclsys2 etc]# chkconfig ntpd off
[root@orclsys2 etc]# chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off

ASM diskgroups are all mounted; all disks show MOUNT_STATUS=CACHED.


SQL> select * from v$asm_diskgroup;

GROUP NAME SECTOR BLOCK AU_SIZE  STATE   TYPE TOTAL_MB FREE_MB COLD_USED_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB OFFLINE_DISKS COMPATIBILITY DATABASE_COMPATIBILITY V
    1 DATA    512  4096 4194304  MOUNTED HIGH  4669440 4662540         6900                  491520        1390340             0    11.2.0.2.0             11.2.0.2.0 Y
    2 RECO    512  4096 4194304  MOUNTED HIGH  6095960 6092712         3248                  641680        1817010             0    11.2.0.2.0             11.2.0.2.0 N
    3 REDO    512  4096 4194304  MOUNTED HIGH   280016  254940        25076                  140008          38310             0    11.2.0.2.0             11.2.0.2.0 N

(HOT_USED_MB is 0 for all three diskgroups.)
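The same check can be scripted from the shell; a sketch, assuming the grid user's environment (ORACLE_HOME and ORACLE_SID) is already set for the local ASM instance:

```shell
# As the grid user: summarize diskgroup state and free space
sqlplus -S / as sysasm <<'EOF'
set lines 120
select name, state, type, total_mb, free_mb, usable_file_mb
from   v$asm_diskgroup;
EOF
```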
SQL> select * from v$asm_disk;
GROUP_NUMBER DISK_NUMBER COMPOUND_INDEX INCARNATION MOUNT_S HEADER_STATU MODE_ST
------------ ----------- -------------- ----------- ------- ------------ ------STATE
REDUNDA
-------- ------LIBRARY
OS_MB
---------------------------------------------------------------- ---------TOTAL_MB
FREE_MB HOT_USED_MB COLD_USED_MB NAME
---------- ---------- ----------- ------------ -----------------------------FAILGROUP
LABEL
------------------------------ ------------------------------PATH
-------------------------------------------------------------------------------UDID
---------------------------------------------------------------PRODUCT
CREATE_DA MOUNT_DAT REPAIR_TIMER
READS
-------------------------------- --------- --------- ------------ ---------WRITES READ_ERRS WRITE_ERRS READ_TIME WRITE_TIME BYTES_READ BYTES_WRITTEN
---------- ---------- ---------- ---------- ---------- ---------- ------------P HASH_VALUE HOT_READS HOT_WRITES HOT_BYTES_READ HOT_BYTES_WRITTEN COLD_READS
- ---------- ---------- ---------- -------------- ----------------- ---------COLD_WRITES COLD_BYTES_READ COLD_BYTES_WRITTEN V SECTOR_SIZE FAILGRO
----------- --------------- ------------------ - ----------- ------2
18
33554450 3916357228 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320676
0
164 HDD_E1_S18_1135740479P2
HDD_E1_S18_1135740479P2
/dev/mapper/HDD_E1_S18_1135740479p2
24-JUN-12 24-JUN-12
0
33
8
0
0
.317552
.077174
135168
32768
U 1141006284
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
18
16777234 3916357209 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245404
0
356 HDD_E1_S18_1135740479P1
HDD_E1_S18_1135740479P1
/dev/mapper/HDD_E1_S18_1135740479p1

24-JUN-12 24-JUN-12
0
813
9786
1.537523 2136.8542
15958016
399114240
U 3683223193
0
0
0
0
775
9762
15802368
399015936 N
512 REGULAR
................................................................................
2
15
33554447 3916357229 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320680
0
160 HDD_E1_S15_1135794951P2
HDD_E1_S15_1135794951P2
/dev/mapper/HDD_E1_S15_1135794951p2
24-JUN-12 24-JUN-12
0
454
1542
0
0
.799445
9.808538
6909952
10493952
U 1869192741
0
0
0
0
421
1536
6774784
6291456 N
512 REGULAR
................................................................................
1
15
16777231 3916357210 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245424
0
336 HDD_E1_S15_1135794951P1
HDD_E1_S15_1135794951P1
/dev/mapper/HDD_E1_S15_1135794951p1
24-JUN-12 24-JUN-12
0
1474
7065
0
0
3.503576 1525.12385
20303872
357339136
U 2912767788
0
0
0
0
1439
7060
20160512
357318656 N
512 REGULAR
................................................................................
2
19
33554451 3916357230 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320676
0
164 HDD_E1_S19_1135610547P2
HDD_E1_S19_1135610547P2
/dev/mapper/HDD_E1_S19_1135610547p2
24-JUN-12 24-JUN-12
0
33
1018
0
0
.121676
7.192627
135168
4169728
U 346233576
0
0
0
0
0
1014
0
4153344 N
512 REGULAR
................................................................................
1
19
16777235 3916357211 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245396
0
364 HDD_E1_S19_1135610547P1
HDD_E1_S19_1135610547P1
/dev/mapper/HDD_E1_S19_1135610547p1
24-JUN-12 24-JUN-12
0
504
9975
0
0
.826751 2622.5959
7024640
389607424
U 851530998
0
0
0
0
469
9964
6881280
385384448 N
512 REGULAR
................................................................................
2
10
33554442 3916357231 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320680
0
160 HDD_E1_S10_1135835919P2
HDD_E1_S10_1135835919P2
/dev/mapper/HDD_E1_S10_1135835919p2
24-JUN-12 24-JUN-12
0
33
523
0
0
.12402
2.583637
135168
2142208
U 258193846
0
0
0
0
0
522
0
2138112 N
512 REGULAR
................................................................................

1
10
16777226 3916357212 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245424
0
336 HDD_E1_S10_1135835919P1
HDD_E1_S10_1135835919P1
/dev/mapper/HDD_E1_S10_1135835919p1
24-JUN-12 24-JUN-12
0
380
8252
0
0
.642678 1576.61881
3649536
365268992
U 560669603
0
0
0
0
345
8223
3506176
360972288 N
512 REGULAR
................................................................................
2
17
33554449 3916357232 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320660
0
180 HDD_E0_S17_1135602799P2
HDD_E0_S17_1135602799P2
/dev/mapper/HDD_E0_S17_1135602799p2
24-JUN-12 24-JUN-12
0
33
1
0
0
.14176
.002745
135168
4096
U 2948261017
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
17
16777233 3916357213 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245400
0
360 HDD_E0_S17_1135602799P1
HDD_E0_S17_1135602799P1
/dev/mapper/HDD_E0_S17_1135602799p1
24-JUN-12 24-JUN-12
0
465
8103
0
0
.907074 1548.67379
7409664
365342720
U 2749406090
0
0
0
0
430
8096
7266304
365314048 N
512 REGULAR
................................................................................
2
14
33554446 3916357233 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320668
0
172 HDD_E1_S14_1135787679P2
HDD_E1_S14_1135787679P2
/dev/mapper/HDD_E1_S14_1135787679p2
24-JUN-12 24-JUN-12
0
58
11
0
0
.173348
.164176
237568
45056
U 3876082586
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
14
16777230 3916357214 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245396
0
364 HDD_E1_S14_1135787679P1
HDD_E1_S14_1135787679P1
/dev/mapper/HDD_E1_S14_1135787679p1
24-JUN-12 24-JUN-12
0
114
8584
0
0
.287308 1548.24642
1003520
384868352
U 2948137896
0
0
0
0
79
8568
860160
376446976 N
512 REGULAR
................................................................................
2
11
33554443 3916357234 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320668
0
172 HDD_E1_S11_1135804775P2
HDD_E1_S11_1135804775P2
/dev/mapper/HDD_E1_S11_1135804775p2
24-JUN-12 24-JUN-12
0
60
531
0
0
.158545
2.839062
245760
6352896
U 2130898428
0
0
0
0
0
522
0
2138112 N
512 REGULAR
................................................................................
1
11
16777227 3916357215 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245428
0
332 HDD_E1_S11_1135804775P1
HDD_E1_S11_1135804775P1
/dev/mapper/HDD_E1_S11_1135804775p1
24-JUN-12 24-JUN-12
0
249
6995
0
0
.397028 1451.34704
2093056
339873792
U 2336953582
0
0
0
0
214
6966
1949696
335577088 N
512 REGULAR
................................................................................
2
7
33554439 3916357235 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320672
0
168 HDD_E1_S07_1135801023P2
HDD_E1_S07_1135801023P2
/dev/mapper/HDD_E1_S07_1135801023p2
24-JUN-12 24-JUN-12
0
33
2
0
0
.120687
.06608
135168
8192
U 1932201569
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
7
16777223 3916357216 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245416
0
344 HDD_E1_S07_1135801023P1
HDD_E1_S07_1135801023P1
/dev/mapper/HDD_E1_S07_1135801023p1
24-JUN-12 24-JUN-12
0
148
8465
0
0
.321597 1884.79417
1093632
372817920
U 806493500
0
0
0
0
113
8455
950272
368599040 N
512 REGULAR
................................................................................
2
13
33554445 3916357236 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320664
0
176 HDD_E0_S13_1135804119P2
HDD_E0_S13_1135804119P2
/dev/mapper/HDD_E0_S13_1135804119p2
24-JUN-12 24-JUN-12
0
33
1
0
0
.283171
.025065
135168
4096
U 1248954937
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
13
16777229 3916357217 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245380
0
380 HDD_E0_S13_1135804119P1
HDD_E0_S13_1135804119P1
/dev/mapper/HDD_E0_S13_1135804119p1

24-JUN-12 24-JUN-12
0
311
11981
0
0
.656397 1554.76878
2531328
444058112
U 380181004
0
0
0
0
247
11965
2269184
443992576 N
512 REGULAR
................................................................................
2
16
33554448 3916357237 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320660
0
180 HDD_E0_S16_1135664919P2
HDD_E0_S16_1135664919P2
/dev/mapper/HDD_E0_S16_1135664919p2
24-JUN-12 24-JUN-12
0
33
14
0
0
.117327
.098567
135168
333824
U 3126164346
0
0
0
0
0
4
0
292864 N
512 REGULAR
................................................................................
1
16
16777232 3916357218 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245408
0
352 HDD_E0_S16_1135664919P1
HDD_E0_S16_1135664919P1
/dev/mapper/HDD_E0_S16_1135664919p1
24-JUN-12 24-JUN-12
0
455
8539
0
0
.593777 1416.9935
3805184
374839808
U 1806772258
0
0
0
0
420
8532
3661824
374811136 N
512 REGULAR
................................................................................
2
3
33554435 3916357238 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320672
0
168 HDD_E1_S03_1137509855P2
HDD_E1_S03_1137509855P2
/dev/mapper/HDD_E1_S03_1137509855p2
24-JUN-12 24-JUN-12
0
65
3224
0
0
.105077
41.35479
266240
13205504
U 595662271
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
3
16777219 3916357219 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245360
0
400 HDD_E1_S03_1137509855P1
HDD_E1_S03_1137509855P1
/dev/mapper/HDD_E1_S03_1137509855p1
24-JUN-12 24-JUN-12
0
412
11455
0
0
.564261 1448.46567
6615040
396529664
U 1620268423
0
0
0
0
345
8226
6340608
383303680 Y
512 REGULAR
................................................................................
2
6
33554438 3916357239 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320684
0
156 HDD_E1_S06_1135821887P2
HDD_E1_S06_1135821887P2
/dev/mapper/HDD_E1_S06_1135821887p2

24-JUN-12 24-JUN-12
0
33
.086206
.032202
135168
8192
U 119170704
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
6
16777222 3916357220 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245396
0
364 HDD_E1_S06_1135821887P1
HDD_E1_S06_1135821887P1
/dev/mapper/HDD_E1_S06_1135821887p1
24-JUN-12 24-JUN-12
0
17920
6876
0
0 70.896994 1146.77695 292769792
359120896
U 933034986
0
0
0
0
17885
6864
292626432
354893824 N
512 REGULAR
................................................................................
2
12
33554444 3916357240 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320668
0
172 HDD_E0_S12_1135827487P2
HDD_E0_S12_1135827487P2
/dev/mapper/HDD_E0_S12_1135827487p2
24-JUN-12 24-JUN-12
0
33
6
0
0
.09605
.023027
135168
301056
U 1493910614
0
0
0
0
0
4
0
292864 N
512 REGULAR
................................................................................
1
12
16777228 3916357221 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245400
0
360 HDD_E0_S12_1135827487P1
HDD_E0_S12_1135827487P1
/dev/mapper/HDD_E0_S12_1135827487p1
24-JUN-12 24-JUN-12
0
6499
11937
0
0 18.306767 1692.06548 105550848
414034432
U 3204712685
0
0
0
0
6454
11882
105398272
409690112 N
512 REGULAR
................................................................................
2
9
33554441 3916357241 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320656
0
184 HDD_E0_S09_1135802123P2
HDD_E0_S09_1135802123P2
/dev/mapper/HDD_E0_S09_1135802123p2
24-JUN-12 24-JUN-12
0
33
8
0
0
.102085
.132748
135168
32768
U 3131333186
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
9
16777225 3916357222 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245404
0
356 HDD_E0_S09_1135802123P1
HDD_E0_S09_1135802123P1
/dev/mapper/HDD_E0_S09_1135802123p1
24-JUN-12 24-JUN-12
0
743
7326
0
0
1.502429 1081.61402
5918720
356596224
U 735022993
0
0
0
0
699
7297
5738496
352299520 N
512 REGULAR
................................................................................

2
5
33554437 3916357242 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320680
0
160 HDD_E0_S05_1135619487P2
HDD_E0_S05_1135619487P2
/dev/mapper/HDD_E0_S05_1135619487p2
24-JUN-12 24-JUN-12
0
65
3231
0
0
.085671 41.425325
266240
13234176
U 1413406603
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
5
16777221 3916357223 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245388
0
372 HDD_E0_S05_1135619487P1
HDD_E0_S05_1135619487P1
/dev/mapper/HDD_E0_S05_1135619487p1
24-JUN-12 24-JUN-12
0
8740
13997
0
0
3.28911 1224.58466 139714560
424706048
U 2491675494
0
0
0
0
8660
10671
139386880
406577152 Y
512 REGULAR
................................................................................
2
1
33554433 3916357243 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320656
0
184 HDD_E0_S01_1137772287P2
HDD_E0_S01_1137772287P2
/dev/mapper/HDD_E0_S01_1137772287p2
24-JUN-12 24-JUN-12
0
72
3259
0
0
.127157 59.567526
294912
13463552
U 1416051157
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
1
16777217 3916357224 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245396
0
364 HDD_E0_S01_1137772287P1
HDD_E0_S01_1137772287P1
/dev/mapper/HDD_E0_S01_1137772287p1
24-JUN-12 24-JUN-12
0
667
11644
0
0
.773946 1378.40318
7606272
370405376
U 836016366
0
0
0
0
600
8389
7331840
352894976 Y
512 REGULAR
................................................................................
2
0
33554432 3916357244 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320672
0
168 HDD_E0_S00_1137537659P2
HDD_E0_S00_1137537659P2
/dev/mapper/HDD_E0_S00_1137537659p2
24-JUN-12 24-JUN-12
0
93
4285
0
0
.210999 66.857554
4562944
21843968
U 2700127421
0
0
0
0
8
1014
81920
4153344 N
512 REGULAR
................................................................................
1
0
16777216 3916357225 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245364
0
396 HDD_E0_S00_1137537659P1
HDD_E0_S00_1137537659P1
/dev/mapper/HDD_E0_S00_1137537659p1
24-JUN-12 24-JUN-12
0
546
12775
0
0
1.100878 1862.98219
8945664
406859776
U 556487832
0
0
0
0
453
9472
4448256
393003008 Y
512 REGULAR
................................................................................
2
8
33554440 3916357245 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320656
0
184 HDD_E0_S08_1135792215P2
HDD_E0_S08_1135792215P2
/dev/mapper/HDD_E0_S08_1135792215p2
24-JUN-12 24-JUN-12
0
36
41
0
0
.107212 18.236949
147456
559104
U 1052113546
0
0
0
0
0
4
0
292864 N
512 REGULAR
................................................................................
1
8
16777224 3916357226 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245380
0
380 HDD_E0_S08_1135792215P1
HDD_E0_S08_1135792215P1
/dev/mapper/HDD_E0_S08_1135792215p1
24-JUN-12 24-JUN-12
0
867
9551
0
0
1.112943 1589.32098
9785344
386296320
U 2401545944
0
0
0
0
814
9462
9568256
381485056 N
512 REGULAR
................................................................................
2
2
33554434 3916357246 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
320840
320840
320664
0
176 HDD_E1_S02_1137581539P2
HDD_E1_S02_1137581539P2
/dev/mapper/HDD_E1_S02_1137581539p2
24-JUN-12 24-JUN-12
0
65
3224
0
0
.079387 41.355373
266240
13205504
U 1693109306
0
0
0
0
0
0
0
0 N
512 REGULAR
................................................................................
1
2
16777218 3916357227 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
245760
245760
245376
0
384 HDD_E1_S02_1137581539P1
HDD_E1_S02_1137581539P1
/dev/mapper/HDD_E1_S02_1137581539p1
24-JUN-12 24-JUN-12
0
397
10524
0
0
.529004 1559.72947
3149824
369252864
U 3709719892
0
0
0
0
330
7278
2875392
356016128 Y
512 REGULAR
................................................................................
3
23
50331671 3916357255 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
70005
70004
63748
0
6256 SSD_E1_S23_805682605P1
SSD_E1_S23_805682605P1
/dev/mapper/SSD_E1_S23_805682605p1

24-JUN-12 24-JUN-12
0
115
67134
0
0
.043294 89.241655
743936
6455553536
U 729178670
0
0
0
0
6
63820
297472
6394449408 N
512 REGULAR
................................................................................
3
22
50331670 3916357256 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
70005
70004
63716
0
6288 SSD_E1_S22_805682628P1
SSD_E1_S22_805682628P1
/dev/mapper/SSD_E1_S22_805682628p1
24-JUN-12 24-JUN-12
0
80
71557
0
0
.028096 80.631523
288256
6474881024
U 3713363845
0
0
0
0
11
67553
5632
6399989760 N
512 REGULAR
................................................................................
3
21
50331669 3916357257 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
70005
70004
63740
0
6264 SSD_E0_S21_805682553P1
SSD_E0_S21_805682553P1
/dev/mapper/SSD_E0_S21_805682553p1
24-JUN-12 24-JUN-12
0
79
76037
0
0
.024227 102.813238
302592
6478965760
U 1309917545
0
0
0
0
7
71993
3584
6406515712 N
512 REGULAR
................................................................................
3
20
50331668 3916357258 CACHED MEMBER
ONLINE
NORMAL
UNKNOWN
System
70005
70004
63736
0
6268 SSD_E0_S20_805682621P1
SSD_E0_S20_805682621P1
/dev/mapper/SSD_E0_S20_805682621p1
24-JUN-12 24-JUN-12
0
104
69566
0
0
.051755 108.121605
4554752
6456423424
U 2935325904
0
0
0
0
0
65520
0
6383965184 N
512 REGULAR
................................................................................
42 rows selected.

During boot-up, the console screen still shows errors regarding storage.
This was first noticed when we were having problems with ASM on the first deployment; however, the same
error messages persist with this new deployment. They occur right after udev starts.
NOTE: There is no place to retrieve the boot information displayed on the console, and these error messages do
not appear in any of the log files in /var/log. There is also no way to scroll back the console window, so you have to
wait patiently and catch the output with a screen capture to even see it!

Lessons Learned
8. root passwords on both nodes must be the default of welcome1
9. If a previous deployment exists, clean it off with cleanupDeploy.sh
10. When running firstnet, use the public IP of node 1 (a hint about this on the setup poster would be nice)
11. If you know the IP of the ILOM, the whole install can be done with putty and Xwindows. You do not
need a keyboard, mouse and monitor for the console.
12. Need to add the localhost entry to /etc/hosts for Xwindows to work w/putty
13. Be patient, especially at 60%; some of the steps take a long time.
14. Click the Show Details button to tail the log file; this helps to see that things really are moving.
Outstanding Issues
6. Unable to cut and paste text from the ILOM console window, and no way to scroll back. Very frustrating!
7. Still don't understand why only one SCAN listener and SCAN IP were configured; there should be two.
8. NTP was still configured to start, even though we specified no NTP. Because we had previously
renamed the NTP config file & directory to .disable, NTP was unable to start, so CTSS did start in
Active mode and the clocks were synchronized.
9. Error messages on the console screen during boot-up persist.
10. Boot messages are not available for review in any log file, and there is no scroll-back in the console window.
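One partial workaround for the scroll-back and copy/paste complaints: instead of the Java console, attach to the ILOM serial console over SSH and keep a local transcript. This is a rough sketch; the ILOM address and log file name below are placeholders, and `start /SP/console` is the standard ILOM CLI target for the host serial console.

```shell
# Capture the ILOM serial console to a local log file instead of using the
# Java console (which has no scroll-back or copy/paste).
# ILOM_HOST is a placeholder address; substitute the IP you gave the SP.
ILOM_HOST="${ILOM_HOST:-192.0.2.50}"
LOG="oda-console-$(date +%Y%m%d).log"

# -tt forces a pseudo-tty for the interactive session; tee writes everything
# to $LOG so boot messages can be reviewed and grepped afterwards.
CMD="ssh -tt root@${ILOM_HOST} 'start /SP/console' | tee ${LOG}"
echo "Run interactively: ${CMD}"
```

Once attached, a reboot's full boot output lands in the log file, so the storage errors during boot could finally be captured without racing the screen.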
Continuing On with ReBoot of ODA, June 27, 2012

Takes about 2 minutes to get to "Starting udev:", and it sits here for a while longer.

Right after this is where we get the ERROR messages, but you have to be quick to catch them.

Total boot time is about 7 minutes.


Check Node 1: all looks good!
[grid@orclsys1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@orclsys1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@orclsys1 ~]$ su
Password:
[root@orclsys1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3004
         Available space (kbytes) :     259116
         ID                       :  427182197
         Device/File Name         :      +RECO
                                    Device/File integrity check succeeded
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Node 2 comes up
[grid@orclsys2 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@orclsys2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@orclsys2 ~]$ su
Password:
[root@orclsys2 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3036
         Available space (kbytes) :     259084
         ID                       :  427182197
         Device/File Name         :      +RECO
                                    Device/File integrity check succeeded
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[grid@orclsys2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.LISTENER.lsnr
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.RECO.dg
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.REDO.dg
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.asm
               ONLINE  ONLINE       orclsys1                 Started
               ONLINE  ONLINE       orclsys2                 Started
ora.gsd
               OFFLINE OFFLINE      orclsys1
               OFFLINE OFFLINE      orclsys2
ora.net1.network
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.ons
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
ora.registry.acfs
               ONLINE  ONLINE       orclsys1
               ONLINE  ONLINE       orclsys2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       orclsys1
ora.cvu
      1        ONLINE  ONLINE       orclsys1
ora.oc4j
      1        ONLINE  ONLINE       orclsys1
ora.orcl.db
      1        ONLINE  ONLINE       orclsys1                 Open
      2        ONLINE  ONLINE       orclsys2                 Open
ora.orclsys1.vip
      1        ONLINE  ONLINE       orclsys1
ora.orclsys2.vip
      1        ONLINE  ONLINE       orclsys2
ora.scan1.vip
      1        ONLINE  ONLINE       orclsys1
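A small helper we could have used while eyeballing that wall of resource status: filter for anything OFFLINE. This is a hedged sketch; `check_offline` is our own name for it, and on a stock 11.2 deployment the only expected OFFLINE resource is ora.gsd.

```shell
# Print any OFFLINE entries from 'crsctl status resource -t' style output.
# The argument is the command that produces the status text, so the function
# can be pointed at a live crsctl or at a saved dump for testing.
check_offline() {
    eval "$1" 2>/dev/null | grep "OFFLINE" || echo "no OFFLINE resources found"
}

# On a live node (grid user):
check_offline "crsctl status resource -t"
```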
[grid@orclsys1 ~]$ oakcli
Usage: oakcli show       - show storage, core_config_key, expander, controller,
                           diskgroup, disk
       oakcli locate     - locates a disk
       oakcli apply      - applies the core_config_key
       oakcli deploy     - deploys the Database Appliance
       oakcli update     - updates the Database Appliance
       oakcli validate   - validates the Database Appliance
       oakcli manage     - manages the oak repository, diagcollect e.t.c
       oakcli unpack     - unpack the given package to oak repository
       oakcli configure  - configures the network
       oakcli copy       - copies the deployment config file
       oakcli -h
[grid@orclsys1 ~]$ su
Password:
[root@orclsys1 grid]# oakcli show version
Version
-------
2.2.0.0.0
[root@orclsys1 grid]# oakcli validate
[root@orclsys1 grid]#

Cluster Time Synchronization Service (CTSS) is Active on both nodes, and the clocks are synced:
[grid@orclsys1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@orclsys2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
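For the record, this is roughly what we had to do at the OS level before deployment so CTSS would come up in Active rather than Observer mode: the NTP configuration has to be renamed out of the way, not just the daemon stopped. The sketch below operates on a scratch copy by default so it can be dry-run; to do it for real, run as root with NTP_ROOT=/ (after `service ntpd stop; chkconfig ntpd off`) and remove the stand-in touch line. The file location is the stock one and is an assumption.

```shell
# De-configure NTP so Clusterware's CTSS takes over time sync in Active mode.
# CTSS stays in Observer mode as long as it can see an NTP configuration,
# so /etc/ntp.conf must be renamed, not merely left with ntpd stopped.
NTP_ROOT="${NTP_ROOT:-$(mktemp -d)}"      # scratch copy by default (dry run)
mkdir -p "${NTP_ROOT}/etc"

# Stand-in config for the dry run; delete this line for a real run.
[ -e "${NTP_ROOT}/etc/ntp.conf" ] || touch "${NTP_ROOT}/etc/ntp.conf"

if [ -f "${NTP_ROOT}/etc/ntp.conf" ]; then
    mv "${NTP_ROOT}/etc/ntp.conf" "${NTP_ROOT}/etc/ntp.conf.disable"
fi
ls "${NTP_ROOT}/etc"    # → ntp.conf.disable
```

After the next CRS start, `crsctl check ctss` should report Active mode, as shown above.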

Some niceties to set up

Edit grid's .bashrc and add:
export ORACLE_SID=+ASMn

Edit oracle's .bashrc and add:
export ORACLE_SID=orcln
export ORACLE_UNQNAME=orcl

Add to $ORACLE_HOME/sqlplus/admin/glogin.sql in both Oracle Homes (both nodes):
set pagesize 1000
set linesize 120
set recsepchar .
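Since the glogin.sql settings have to land in every home on both nodes, a tiny idempotent loop saves retyping. A sketch: the home path is the one from this deployment, and `append_glogin` is our own helper name.

```shell
# Append the SQL*Plus display settings to glogin.sql in each Oracle home
# passed as an argument. Skips homes that do not exist and homes that were
# already patched, so it is safe to re-run.
append_glogin() {
    for oh in "$@"; do
        g="$oh/sqlplus/admin/glogin.sql"
        if [ ! -d "$(dirname "$g")" ]; then continue; fi
        if grep -q "set recsepchar" "$g" 2>/dev/null; then continue; fi
        printf 'set pagesize 1000\nset linesize 120\nset recsepchar .\n' >> "$g"
        echo "updated $g"
    done
}

# Run on each node as the home owner:
append_glogin /u01/app/oracle/product/11.2.0.3/dbhome_1
```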

emctl shows Enterprise Manager running on both nodes (not just agent on node 2)
[oracle@orclsys1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
https://orclsys1.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/11.2.0.3/dbhome_1/orclsys1_orcl/sysman/log
[oracle@orclsys2 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation. All rights reserved.
https://orclsys2.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/11.2.0.3/dbhome_1/orclsys2_orcl/sysman/log

OEM is available by hitting the URL on either node (in 10g it was only node 1)
https://192.0.2.18:1158/em/console/logon/logon
https://192.0.2.19:1158/em/console/logon/logon

SQL> select * from v$asm_template;

GROUP_NUMBER ENTRY_NUMBER REDUND STRIPE S NAME                           PRIM MIRR
------------ ------------ ------ ------ - ------------------------------ ---- ----
           1           70 HIGH   COARSE Y PARAMETERFILE                  COLD COLD
           1           71 HIGH   COARSE Y ASMPARAMETERFILE               COLD COLD
           1           73 HIGH   COARSE Y DUMPSET                        COLD COLD
           1           74 HIGH   FINE   Y CONTROLFILE                    COLD COLD
           1           75 HIGH   COARSE Y FLASHFILE                      COLD COLD
           1           76 HIGH   COARSE Y ARCHIVELOG                     COLD COLD
           1           77 HIGH   COARSE Y ONLINELOG                      COLD COLD
           1           78 HIGH   COARSE Y DATAFILE                       COLD COLD
           1           79 HIGH   COARSE Y TEMPFILE                       COLD COLD
           1          180 HIGH   COARSE Y BACKUPSET                      COLD COLD
           1          181 HIGH   COARSE Y AUTOBACKUP                     COLD COLD
           1          182 HIGH   COARSE Y XTRANSPORT                     COLD COLD
           1          183 HIGH   COARSE Y CHANGETRACKING                 COLD COLD
           1          184 HIGH   COARSE Y FLASHBACK                      COLD COLD
           1          185 HIGH   COARSE Y DATAGUARDCONFIG                COLD COLD
           1          186 HIGH   COARSE Y OCRFILE                        COLD COLD
           2           70 HIGH   COARSE Y PARAMETERFILE                  COLD COLD
           2           71 HIGH   COARSE Y ASMPARAMETERFILE               COLD COLD
           2           73 HIGH   COARSE Y DUMPSET                        COLD COLD
           2           74 HIGH   FINE   Y CONTROLFILE                    COLD COLD
           2           75 HIGH   COARSE Y FLASHFILE                      COLD COLD
           2           76 HIGH   COARSE Y ARCHIVELOG                     COLD COLD
           2           77 HIGH   COARSE Y ONLINELOG                      COLD COLD
           2           78 HIGH   COARSE Y DATAFILE                       COLD COLD
           2           79 HIGH   COARSE Y TEMPFILE                       COLD COLD
           2          180 HIGH   COARSE Y BACKUPSET                      COLD COLD
           2          181 HIGH   COARSE Y AUTOBACKUP                     COLD COLD
           2          182 HIGH   COARSE Y XTRANSPORT                     COLD COLD
           2          183 HIGH   COARSE Y CHANGETRACKING                 COLD COLD
           2          184 HIGH   COARSE Y FLASHBACK                      COLD COLD
           2          185 HIGH   COARSE Y DATAGUARDCONFIG                COLD COLD
           2          186 HIGH   COARSE Y OCRFILE                        COLD COLD
           3           70 HIGH   COARSE Y PARAMETERFILE                  COLD COLD
           3           71 HIGH   COARSE Y ASMPARAMETERFILE               COLD COLD
           3           73 HIGH   COARSE Y DUMPSET                        COLD COLD
           3           74 HIGH   FINE   Y CONTROLFILE                    COLD COLD
           3           75 HIGH   COARSE Y FLASHFILE                      COLD COLD
           3           76 HIGH   COARSE Y ARCHIVELOG                     COLD COLD
           3           77 HIGH   COARSE Y ONLINELOG                      COLD COLD
           3           78 HIGH   COARSE Y DATAFILE                       COLD COLD
           3           79 HIGH   COARSE Y TEMPFILE                       COLD COLD
           3          180 HIGH   COARSE Y BACKUPSET                      COLD COLD
           3          181 HIGH   COARSE Y AUTOBACKUP                     COLD COLD
           3          182 HIGH   COARSE Y XTRANSPORT                     COLD COLD
           3          183 HIGH   COARSE Y CHANGETRACKING                 COLD COLD
           3          184 HIGH   COARSE Y FLASHBACK                      COLD COLD
           3          185 HIGH   COARSE Y DATAGUARDCONFIG                COLD COLD
           3          186 HIGH   COARSE Y OCRFILE                        COLD COLD
48 rows selected.
  1* select * from v$controlfile
SQL> /

STATUS  NAME                                     IS_ BLOCK_SIZE FILE_SIZE_BLKS
------- ---------------------------------------- --- ---------- --------------
        +DATA/orcl/control01.ctl                 NO       16384           1130
  1* select * from v$log
SQL> /

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- -------- ------------- --------- ------------ ---------
         1          1         15 2147483648        512          1 YES INACTIVE       1168145 03-JUL-12      1259919 03-JUL-12
         2          1         16 2147483648        512          1 YES INACTIVE       1259919 03-JUL-12      1259921 03-JUL-12
         3          2         21 2147483648        512          1 NO  CURRENT        1283071 04-JUL-12   2.8147E+14
         4          2         20 2147483648        512          1 YES INACTIVE       1260251 03-JUL-12      1283071 04-JUL-12
  1* select * from v$logfile
SQL> /

    GROUP# STATUS   TYPE    MEMBER                                             IS_
---------- -------- ------- -------------------------------------------------- ---
         1          ONLINE  +REDO/orcl/onlinelog/group_1.256.786763897         NO
         2          ONLINE  +REDO/orcl/onlinelog/group_2.257.786763905         NO
         3          ONLINE  +REDO/orcl/onlinelog/group_3.258.786764271         NO
         4          ONLINE  +REDO/orcl/onlinelog/group_4.259.786764281         NO
