
Creating an Oracle 11g Data Guard on VMware

using rman "duplicate target database for standby from active database"
I created a VMware Red Hat 5.3 Linux server using the Unbreakable Linux distribution provided by Oracle.
In this exercise I will use the server ha1 as master and the server ha2 as standby.
The beauty of this exercise is that I am going to use the new 11g command "duplicate target database for
standby from active database". This command creates a standby database from an active database on the fly.
In previous releases the DBA had to take a backup of the master, copy it over to the standby site and restore
it. By using "duplicate target database for standby from active database" no interim backup is needed; the
datafiles are copied over directly.

Create an 11g database to start with


To start with, the Oracle 11g software will be installed, choosing only the required components, and a database
will be created from the GUI.

I will leave out components that are not necessary.

Having moved the VMware nodes around, I forgot to check the status of the network and as a result the error
ORA-12157 occurred.

The reason for the error ORA-12157 was simply that the network was down; it is better for everybody that it is up.
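A quick way to confirm this kind of problem (a minimal sketch; the interface name and node names are the ones used in this setup, and the exact commands depend on the Red Hat release):

# on ha1, as root: check that the interface is up and that the standby node answers
/sbin/ifconfig eth0
ping -c 3 ha2

# restart the network service if it is down (Red Hat 5.x)
service network restart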

Configuring the database as a master


The first requirement is that the database is put in "FORCE LOGGING" mode.
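A minimal sketch of how force logging is enabled and verified:

SQL> alter database force logging;

SQL> select force_logging from v$database;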

If you decide to run the Data Guard in maximum protection or maximum availability mode, the transactions are
copied over to the standby site by the log writer and written to the standby logfiles; this is why we create
standby logfiles, which will not be used locally but by the standby.
The view V$STANDBY_LOG contains information about the standby redo logs.

When standby redo logs are created, their members are listed in v$logfile, while their groups are listed in
V$STANDBY_LOG.
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo01.log' size 50M;
SQL> SELECT * FROM v$standby_log;

    GROUP# DBID              THREAD#  SEQUENCE#      BYTES  USED ARC STATUS
---------- ---------------- -------- ---------- ---------- ----- --- ----------
         4 UNASSIGNED              0          0   52428800   512 YES UNASSIGNED

In fact, a view named V$STANDBY_LOGFILE does not exist; the standby redo log members show up in v$logfile instead.

SQL> select * from V$STANDBY_LOGfile;


select * from V$STANDBY_LOGfile
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------

.....
...

/opt/oracle/oradata/MASTERF/redo03.log
/opt/oracle/oradata/MASTERF/redo02.log
/opt/oracle/oradata/MASTERF/redo01.log
/opt/oracle/oradata/MASTERF/sb_redo01.log

The Oracle documentation recommends creating a number of standby redo log groups equal to the number of online
redo log groups plus one, to avoid delays on the standby site when a log switch occurs.
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo02.log' size 50M;
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo03.log' size 50M;
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo04.log' size 50M;

The instance on the master ha1 will be called MASTERF and the instance on the standby side ha2, SBF1. The tnsnames.ora entries are:
MASTERF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ha1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MASTERF)
    )
  )

SBF1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ha2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SBF1)
    )
  )

Two parameters must be set on the master: log_archive_config and log_archive_dest_2.


SQL> alter system set log_archive_config='dg_config=(MASTERF,SBF1)';
SQL> alter system set log_archive_dest_2='service=SBF1 async
2 valid_for=(online_logfile,primary_role) db_unique_name=SBF1';

The password file has to be copied to the standby site, and a minimal initSBF1.ora containing only db_name
will be created (the corresponding steps on ha2 are sketched below):

cat initSBF1.ora
db_name=SBF1
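A minimal sketch of the corresponding preparation on ha2 that this implies (the paths follow the conventions used above; adjust them to your environment):

# as the oracle user on ha2: copy the password file from ha1 and create the directories
scp ha1:/opt/oracle/product/11g/db_1/dbs/orapwMASTERF /opt/oracle/product/11g/db_1/dbs/orapwSBF1
mkdir -p /opt/oracle/admin/SBF1/adump /opt/oracle/oradata/SBF1

# start the auxiliary instance in nomount using the minimal pfile
export ORACLE_SID=SBF1
sqlplus / as sysdba
SQL> startup nomount pfile='/opt/oracle/product/11g/db_1/dbs/initSBF1.ora'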

To be on the safe side, I created soft links on the standby server ha2 to make sure that /oradata/MASTERF is available.
The listener has to be started on ha2 for the whole process to work; a listener.ora sketch follows the tnsping check below.
ha1-> tnsping SBF1
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ha2)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = SBF1)))
OK (240 msec)
ha1->
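Since the auxiliary instance is only started in nomount, it is also handy to register SBF1 statically with the listener on ha2, so that the RMAN connection to sys@SBF1 is not refused. A minimal sketch of the listener.ora entry (paths as used in this setup):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SBF1)
      (SID_NAME = SBF1)
      (ORACLE_HOME = /opt/oracle/product/11g/db_1)
    )
  )

After editing listener.ora, an "lsnrctl reload" on ha2 picks up the change.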

Using rman to create the standby


At this point we will try creating the standby, expecting that some errors may well occur along the way.

ha1-> cat create_physical_standby.txt


run {
allocate channel ch1 type disk;
allocate channel ch2 type disk;
allocate auxiliary channel chsb type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'MASTERF', 'SBF1'
set db_unique_name='SBF1'
set db_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
set log_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'

set control_files='/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'
set log_archive_max_processes='2'
set fal_client='SBF1'
set fal_server='MASTERF'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(MASTERF,SBF1)'
set log_archive_dest_1='SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'
;
}
ha1-> rman target /
connected to target database: MASTERF (DBID=3441682046)
RMAN> connect auxiliary sys/syspass@SBF1
connected to auxiliary database: SBF1 (not mounted)
RMAN> @create_physical_standby.txt
RMAN> run {
2> allocate channel ch1 type disk;
3> allocate channel ch2 type disk;
4> allocate auxiliary channel chsb type disk;
5> duplicate target database for standby from active database
6> spfile
7> parameter_value_convert 'MASTERF', 'SBF1'
8> set db_unique_name='SBF1'
9> set db_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
10> set log_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
11> set control_files='/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'
12> set log_archive_max_processes='2'
13> set fal_client='SBF1'
14> set fal_server='MASTERF'
15> set standby_file_management='AUTO'
16> set log_archive_config='dg_config=(MASTERF,SBF1)'
17> set log_archive_dest_1='SERVICE=MASTERF ASYNC
18> valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'
19> ;
20> }
using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=124 device type=DISK
allocated channel: ch2
channel ch2: SID=170 device type=DISK
allocated channel: chsb
channel chsb: SID=99 device type=DISK
Starting Duplicate Db at 18-MAY-09
contents of Memory Script:
{
backup as copy reuse
file '/opt/oracle/product/11g/db_1/dbs/orapwMASTERF' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/orapwSBF1'
file
'/opt/oracle/product/11g/db_1/dbs/spfileMASTERF.ora' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora'
;
sql clone "alter system set spfile=
''/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora''";
}

executing Memory Script


Starting backup at 18-MAY-09
Finished backup at 18-MAY-09
sql statement: alter system set spfile=
''/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora''
contents of Memory Script:
{
sql clone "alter system set audit_file_dest =
''/opt/oracle/admin/SBF1/adump'' comment=
'''' scope=spfile";
sql clone "alter system set dispatchers =
''(PROTOCOL=TCP) (SERVICE=SBF1XDB)'' comment=
'''' scope=spfile";
sql clone "alter system set db_unique_name =
''SBF1'' comment=
'''' scope=spfile";
sql clone "alter system set db_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment=
'''' scope=spfile";
sql clone "alter system set log_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment=
'''' scope=spfile";
sql clone "alter system set control_files =
''/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_max_processes =
2 comment=
'''' scope=spfile";
sql clone "alter system set fal_client =
''SBF1'' comment=
'''' scope=spfile";
sql clone "alter system set fal_server =
''MASTERF'' comment=
'''' scope=spfile";
sql clone "alter system set standby_file_management =
''AUTO'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_config =
''dg_config=(MASTERF,SBF1)'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_dest_1 =
''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)
db_unique_name=MASTERF'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount ;
}
executing Memory Script
sql statement: alter system set audit_file_dest = ''/opt/oracle/admin/SBF1/adump''
comment= '''' scope=spfile
sql statement: alter system set dispatchers = ''(PROTOCOL=TCP) (SERVICE=SBF1XDB)''
comment= '''' scope=spfile
sql statement: alter system set db_unique_name = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''/opt/oracle/oradata/MASTERF/'',
''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set log_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set control_files =
''/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'' comment= '''' scope=spfile
sql statement: alter system set log_archive_max_processes = 2 comment= '''' scope=spfile

sql statement: alter system set fal_client = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''MASTERF'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''AUTO'' comment= ''''
scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(MASTERF,SBF1)''
comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'' comment= ''''
scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
released channel: ch1
released channel: ch2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/18/2009 20:31:24
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-01261: Parameter db_recovery_file_dest destination string
cannot be translated
ORA-01262: Stat failed on a file destination directory
Linux Error: 2: No such file or directory
RMAN>

After a while the script stopped with error ORA-01261, which in this case means that the directory
/opt/oracle/flash_recovery_area had not been created on ha2 (the fix is sketched after the parameter listing below).
SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /opt/oracle/flash_recovery_area
db_recovery_file_dest_size           big integer 2G
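Creating the missing directory on ha2 (same path as on the master) is enough:

# as the oracle user on ha2
mkdir -p /opt/oracle/flash_recovery_area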

We start again

connected to auxiliary database: SBF1 (not mounted)


RMAN> @create_physical_standby.txt
RMAN> run {
2> allocate channel ch1 type disk;
3> allocate channel ch2 type disk;
4> allocate auxiliary channel chsb type disk;
5> duplicate target database for standby from active database
6> spfile
7> parameter_value_convert 'MASTERF', 'SBF1'
8> set db_unique_name='SBF1'
9> set db_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
10> set log_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
11> set control_files='/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'
12> set log_archive_max_processes='2'
13> set fal_client='SBF1'
14> set fal_server='MASTERF'
15> set standby_file_management='AUTO'
16> set log_archive_config='dg_config=(MASTERF,SBF1)'
17> set log_archive_dest_1='SERVICE=MASTERF ASYNC
18> valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'
19> ;
20> }
using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=127 device type=DISK

allocated channel: ch2
channel ch2: SID=132 device type=DISK
allocated channel: chsb
channel chsb: SID=99 device type=DISK
Starting Duplicate Db at 18-MAY-09
contents of Memory Script:
{
backup as copy reuse
file '/opt/oracle/product/11g/db_1/dbs/orapwMASTERF' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/orapwSBF1'
file
'/opt/oracle/product/11g/db_1/dbs/spfileMASTERF.ora' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora'
;
sql clone "alter system set spfile=
''/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora''";
}
executing Memory Script
Starting backup at 18-MAY-09
Finished backup at 18-MAY-09
sql statement: alter system set spfile=
''/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora''
contents of Memory Script:
{
sql clone "alter system set audit_file_dest =
''/opt/oracle/admin/SBF1/adump'' comment=
'''' scope=spfile";
sql clone "alter system set dispatchers =
''(PROTOCOL=TCP) (SERVICE=SBF1XDB)'' comment=
'''' scope=spfile";
sql clone "alter system set db_unique_name =
''SBF1'' comment=
'''' scope=spfile";
sql clone "alter system set db_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment=
'''' scope=spfile";
sql clone "alter system set log_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment=
'''' scope=spfile";
sql clone "alter system set control_files =
''/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_max_processes =
2 comment=
'''' scope=spfile";
sql clone "alter system set fal_client =
''SBF1'' comment=
'''' scope=spfile";
sql clone "alter system set fal_server =
''MASTERF'' comment=
'''' scope=spfile";
sql clone "alter system set standby_file_management =
''AUTO'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_config =
''dg_config=(MASTERF,SBF1)'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_dest_1 =

''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)
db_unique_name=MASTERF'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount ;
}
executing Memory Script
sql statement: alter system set audit_file_dest = ''/opt/oracle/admin/SBF1/adump''
comment= '''' scope=spfile
sql statement: alter system set dispatchers = ''(PROTOCOL=TCP) (SERVICE=SBF1XDB)''
comment= '''' scope=spfile
sql statement: alter system set db_unique_name = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''/opt/oracle/oradata/MASTERF/'',
''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set log_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set control_files =
''/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'' comment= '''' scope=spfile
sql statement: alter system set log_archive_max_processes = 2 comment= '''' scope=spfile
sql statement: alter system set fal_client = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''MASTERF'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''AUTO'' comment= ''''
scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(MASTERF,SBF1)''
comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'' comment= ''''
scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area  200867840 bytes
Fixed Size                  1298864 bytes
Variable Size              71306832 bytes
Database Buffers          125829120 bytes
Redo Buffers                2433024 bytes

contents of Memory Script:


{
backup as copy current controlfile for standby auxiliary format
'/opt/oracle/oradata/SBF1/cntrlSBF1.ctl';
sql clone 'alter database mount standby database';
}
executing Memory Script
Starting backup at 18-MAY-09
channel ch1: starting datafile copy
copying standby control file
output file name=/opt/oracle/product/11g/db_1/dbs/snapcf_MASTERF.f tag=TAG20090518T203905
RECID=2 STAMP=687213560
channel ch1: datafile copy complete, elapsed time: 00:00:25
Finished backup at 18-MAY-09
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for tempfile 1 to
"/opt/oracle/oradata/SBF1/temp01.dbf";

switch clone tempfile all;


set newname for datafile 1 to
"/opt/oracle/oradata/SBF1/system01.dbf";
set newname for datafile 2 to
"/opt/oracle/oradata/SBF1/sysaux01.dbf";
set newname for datafile 3 to
"/opt/oracle/oradata/SBF1/undotbs01.dbf";
set newname for datafile 4 to
"/opt/oracle/oradata/SBF1/users01.dbf";
set newname for datafile 5 to
"/opt/oracle/oradata/SBF1/example01.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/opt/oracle/oradata/SBF1/system01.dbf"
datafile
2 auxiliary format
"/opt/oracle/oradata/SBF1/sysaux01.dbf"
datafile
3 auxiliary format
"/opt/oracle/oradata/SBF1/undotbs01.dbf"
datafile
4 auxiliary format
"/opt/oracle/oradata/SBF1/users01.dbf"
datafile
5 auxiliary format
"/opt/oracle/oradata/SBF1/example01.dbf"
;
sql 'alter system archive log current';

}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /opt/oracle/oradata/SBF1/temp01.dbf in control file

executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

In the following output we see that the datafiles are copied directly onto the other server, with no temporary
staging space required.
Starting backup at 18-MAY-09
channel ch1: starting datafile copy
input datafile file number=00001 name=/opt/oracle/oradata/MASTERF/system01.dbf
channel ch2: starting datafile copy
input datafile file number=00002 name=/opt/oracle/oradata/MASTERF/sysaux01.dbf
output file name=/opt/oracle/oradata/SBF1/sysaux01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:04:27
channel ch2: starting datafile copy
input datafile file number=00005 name=/opt/oracle/oradata/MASTERF/example01.dbf
output file name=/opt/oracle/oradata/SBF1/system01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch1: datafile copy complete, elapsed time: 00:04:43
channel ch1: starting datafile copy
input datafile file number=00003 name=/opt/oracle/oradata/MASTERF/undotbs01.dbf
output file name=/opt/oracle/oradata/SBF1/example01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:00:31
channel ch2: starting datafile copy
input datafile file number=00004 name=/opt/oracle/oradata/MASTERF/users01.dbf
output file name=/opt/oracle/oradata/SBF1/undotbs01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch1: datafile copy complete, elapsed time: 00:00:19
output file name=/opt/oracle/oradata/SBF1/users01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 18-MAY-09

sql statement: alter system archive log current


contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=687201144 file name=/opt/oracle/oradata/SBF1/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=687201144 file name=/opt/oracle/oradata/SBF1/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=687201144 file name=/opt/oracle/oradata/SBF1/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=687201144 file name=/opt/oracle/oradata/SBF1/users01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=687201144 file name=/opt/oracle/oradata/SBF1/example01.dbf
Finished Duplicate Db at 18-MAY-09
released channel: ch1
released channel: ch2
RMAN> **end-of-file**
RMAN>

Starting the standby database in managed recovery
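A minimal sketch of the statement used on the standby to start managed recovery with real-time apply, relying on the standby redo logs created earlier (DISCONNECT lets it run in the background):

SQL> alter database recover managed standby database using current logfile disconnect from session;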

As a consequence of the above statement, the managed recovery process (MRP0) is started on the standby.

The alert.log on both sides gives details about the log transmission and informs us of any errors.

In the alert.log on the standby site the error "ORA-16401: archivelog rejected by RFS" was detected; it turned
out to be nothing to worry about:

ORA-16401: archivelog rejected by RFS


Cause: An attempt was made to re-archive an existing archivelog. This usually happens
because either a multiple primary database or standby database(s) or both are trying to
archive to this standby database.
Action: See alert log and trace file for more details. No action is necessary; this is an
informational statement provided to record the event for diagnostic purposes.
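A convenient way to keep an eye on the standby alert.log while redo is being shipped is to tail it (a sketch; the ADR path below assumes the default diagnostic_dest under /opt/oracle):

tail -f /opt/oracle/diag/rdbms/sbf1/SBF1/trace/alert_SBF1.log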

A very useful view on both sides is V$MANAGED_STANDBY; on ha2 it informs us that the process MRP0 is
doing its work:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY;
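A complementary check of how far the apply has progressed, using the standard v$archived_log view:

SQL> select sequence#, applied from v$archived_log order by sequence#;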

Another way of checking the progress is, of course, to create a user LOLO on the master database together with a
table t_a, populate it, and make sure that it gets propagated to the standby.
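A minimal sketch of such a test (the user and table names are just the ones chosen above; opening the standby read only requires briefly stopping managed recovery):

-- on the master ha1
SQL> create user lolo identified by lolo;
SQL> alter user lolo quota unlimited on users;
SQL> create table lolo.t_a (x number);
SQL> insert into lolo.t_a values (1);
SQL> commit;
SQL> alter system switch logfile;

-- on the standby ha2
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> select count(*) from lolo.t_a;

-- back to managed recovery
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database using current logfile disconnect from session;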

At this point we have a working Data Guard; as the Grid Control is becoming more and more popular, I will
investigate how to manage the Data Guard from the console, as shown in the next pages.


Manage a Data Guard configuration with the Grid Control
For this exercise I will be using the VMware Linux servers grid10g, which hosts the Grid Control management
server, ha1, where the master database of the Data Guard couple is running, and ha2, where the standby database
has been created. Both ha1 and ha2 are configured in the Grid Control, each with a management agent.

Our purpose is to couple the master database MASTERF on ha1 with the standby database SBF1 on ha2

We start setting up the data guard broker configuration using the grid control

We expect that we will be able to choose SBF1, and the screenshot indeed offers this option.

Will we be able to connect to the standby ... even if the database is only mounted and not open?

We were able to connect! The next screen is "Add Standby Database: Configuration".

This was about configuring the Data Guard broker; see the next pages for the installation of the Enterprise
Manager Repository.

Installing Grid Control 10g on Unbreakable Linux 5.3 (VMware)

See libXp.so.6 to resolve the dependency on libXp.so.6.
This is an installation on a VMware Red Hat 5.3 guest system; the purpose of this whole exercise is to use the
Grid Control to manage the 11g Data Guard configuration on two other VMware guest operating systems.
As there is no grid repository available, I will choose to create a database, install the OMS and install the
agent on my virtual node grid10g.
Even if the OMS is local, an agent is needed. I will use version 10.2.0.1.1 to make my life difficult
(sometimes in our profession as DBAs we are faced with complex environments), and then upgrade to 10.2.0.5.

In this case, three ORACLE_HOMEs will be created by the installer: one for the database, one for the
management server and one for the agent.

The database name and the passwords for the privileged users are required.

As shown in the screenshot, the software for the Oracle Enterprise Manager Repository Database is installed
first.

Second comes the software for the grid console, and third the Oracle Management Server

The database is cloned using the files that come with the CDs

While linking and configuring the assistants, the web cache assistant gives the error that libdb.so.2 is missing.

There is no point in continuing; in my case the solution was to install the rpm db1-1.85-8.i386.rpm, which can be
downloaded from here.
[root@grid10g tmp]# rpm -iv db1-1.85-8.i386.rpm
warning: db1-1.85-8.i386.rpm: Header V3 DSA signature: NOKEY, key ID db42a60e
Preparing packages for installation...
db1-1.85-8
[root@grid10g tmp]# ls /usr/lib/libdb*
/usr/lib/libdb1.so.2
/usr/lib/libdb.so.2
/usr/lib/libdb-4.3.so
/usr/lib/libdb_cxx-4.3.so
/usr/lib/libdbus-glib-1.so.2
/usr/lib/libdbus-glib-1.so.2.1.0
[root@grid10g tmp]#

The installation has completed and a summary is offered


This information is also available at:
/u01/app/oracle/OracleHomes/agent10g/sysman/setupinfo.txt
See below for information pertaining to your Enterprise Manager installation:
1. Use the following URL to access the Enterprise Manager Grid Control:
http://grid10g:4889/em
2. The Enterprise Manager installation includes an Oracle Application Server instance,
which is administered using a special "ias_admin" user account. The following information
was defaulted for the embedded Application Server instance:
Instance Name: EnterpriseManager0.grid10g
ias_admin password: This password is the same as the SYSMAN account.
3. The connection details for the Enterprise Manager Repository are:
Host: grid10g
Port: 1521
SID: REP10G
4. The Enterprise Manager Release Notes are located at:
/u01/app/oracle/OracleHomes/agent10g/relnotes/README_EM.htm

If everything has gone well, the console will already be available on http://grid10g:4889/em

To start/stop the Oracle Grid Console


Go to the OMS ORACLE_HOME:
$ pwd
/u01/app/oracle/OracleHomes/oms10g/bin
$ ./emctl stop oms
Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
Oracle Management Server is Down.
$ ./emctl start oms
Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
opmnctl: opmn is already running
Starting HTTP Server ...
Starting Oracle Management Server ...
Checking Oracle Management Server Status ...
Oracle Management Server is Up.

As we deal with three different ORACLE_HOMEs, we often have to switch between the three environments
and redefine the PATH.
It is therefore an excellent idea to define aliases in .profile, making sure that the original PATH is preserved in
BASE_PATH, since PATH is overwritten when switching environments.
export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/OracleHomes/db10g
export DBS_HOME=/u01/app/oracle/OracleHomes/db10g
export OMS_HOME=/u01/app/oracle/OracleHomes/oms10g
export AGENT_HOME=/u01/app/oracle/OracleHomes/agent10g
export ORACLE_SID=REP10G
export BASE_PATH=$PATH
alias 10db='export ORACLE_HOME=$DBS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10gr='export ORACLE_HOME=$OMS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10ag='export ORACLE_HOME=$AGENT_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'

We source the .profile file and check all the components:


$ . ./.profile

$ $AGENT_HOME/bin/emctl status agent


Oracle Enterprise Manager 10g Release 10.2.0.1.0.
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version     : 10.2.0.1.0
OMS Version       : 10.2.0.1.0
Protocol Version  : 10.2.0.0.0
Agent Home        : /u01/app/oracle/OracleHomes/agent10g
Agent binaries    : /u01/app/oracle/OracleHomes/agent10g
Agent Process ID  : 4792
Parent Process ID : 4774
Agent URL         : https://grid10g:3872/emd/main/
Repository URL    : https://grid10g:1159/em/upload
Started at        : 2009-06-06 19:55:56
Started by user   : oracle
Last Reload       : 2009-06-06 19:55:56
Last successful upload                       : (none)
Last attempted upload                        : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload           : 297
Size of XML files pending upload(MB)         : 4.70
Available disk space on upload filesystem    : 20.01%
Last successful heartbeat to OMS             : 2009-06-06 21:21:03
---------------------------------------------------------------
Agent is Running and Ready
$ 10gr
$ $ORACLE_HOME/bin/emctl status iasconsole
Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
Oracle 10g Application Server Control is not running.
$ $ORACLE_HOME/bin/emctl start iasconsole
Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
Starting Oracle 10g Application Server Control .... started.
$ $ORACLE_HOME/bin/emctl start oms

Steps to make sure that the grid components are up


$ alias

10ag='export ORACLE_HOME=$AGENT_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'


10db='export ORACLE_HOME=$DBS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
10om='export ORACLE_HOME=$OMS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'

$ 10db
$ sqlplus / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

$ lsnrctl status
LSNRCTL for Linux: Version 10.1.0.4.0 - Production on 13-JUN-2009 17:41:33
Copyright (c) 1991, 2004, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.1.0.4.0 - Production
Start Date                13-JUN-2009 17:19:58
Uptime                    0 days 0 hr. 21 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/OracleHomes/db10g/network/admin/listener.ora
Listener Log File         /u01/app/oracle/OracleHomes/db10g/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=grid10g)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "REP10G" has 1 instance(s).
Instance "REP10G", status READY, has 1 handler(s) for this service...
The command completed successfully

$ 10om

$ emctl status oms


Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
Oracle Management Server is Down.
$ emctl start oms
Oracle Enterprise Manager 10g Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
opmnctl: opmn is already running
Starting HTTP Server ...
Starting Oracle Management Server ...
Checking Oracle Management Server Status ...
Oracle Management Server is Up.

At this point the console has already come up

Let's check the agent:


$ 10ag
$ emctl status agent
... ... ...
Agent is Running and Ready

This was about configuring the Enterprise Manager Repository; the agent on the Data Guard nodes still has to be
installed, see the installation of the grid agent on a monitored node.
