Having moved the VMware nodes around, I forgot to check the status of the network, and as a result the error
ORA-12157 occurred.
The cause of ORA-12157 was simply that the network was down; it is better for everybody when it is up.
If you decide to run Data Guard in maximum protection or maximum availability mode, the transactions are copied
over to the standby site by the log writer and are written to the standby redo logs; this is why we create standby
logfiles that will be used not locally, but by the standby.
The view V$STANDBY_LOG contains information about the standby redo logs.
When standby redo logs are created, their members are listed in V$LOGFILE, but their groups are listed in
V$STANDBY_LOG.
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo01.log' size 50M;

SQL> SELECT * FROM v$standby_log;

    GROUP# DBID        THREAD#  SEQUENCE#      BYTES  USED ARC STATUS
---------- ---------- -------- ---------- ---------- ----- --- ----------
         4 UNASSIGNED        0          0   52428800   512 YES UNASSIGNED
.....
...
/opt/oracle/oradata/MASTERF/redo03.log
/opt/oracle/oradata/MASTERF/redo02.log
/opt/oracle/oradata/MASTERF/redo01.log
/opt/oracle/oradata/MASTERF/sb_redo01.log
The Oracle documentation recommends creating standby redo log groups in a number equal to the number of
online redo log groups plus one, to avoid delays on the standby site when a log switch occurs.
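That rule of thumb can be sketched as a few lines of shell; the group count and the 50M size below are the values from this setup, but treat the script itself as an illustration, not something the Oracle docs provide:

```shell
#!/bin/sh
# Rule of thumb: standby redo log groups = online redo log groups + 1,
# each the same size as the online logs.
online_groups=3        # e.g. obtained from: SELECT COUNT(*) FROM v$log;
standby_groups=$((online_groups + 1))
echo "create $standby_groups standby redo log groups"
# Emit the ALTER DATABASE statements one would run on the primary:
i=1
while [ "$i" -le "$standby_groups" ]; do
  echo "alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo0${i}.log' size 50M;"
  i=$((i + 1))
done
```

With three online groups this emits the four statements used below.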
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo02.log' size
50M;
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo03.log' size
50M;
SQL> alter database add standby logfile '/opt/oracle/oradata/MASTERF/sb_redo04.log' size
50M;
The instance on the master ha1 will be called MASTERF, and the instance on the standby side ha2 will be called SBF1:
MASTERF =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ha1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = MASTERF)
)
)
SBF1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ha2)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = SBF1)
)
)
The password file has to be copied to the standby site, and a minimal initSBF1.ora will be created, containing
only db_name:
cat initSBF1.ora
db_name=SBF1
Just in case, I created soft links on the standby server ha2 to make sure that /oradata/MASTERF is available.
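The standby-side preparation can be sketched like this. To keep the sketch safe to dry-run, BASE defaults to a scratch directory; on the real ha2 it would be /opt/oracle, and the password file would of course be copied over rather than created:

```shell
#!/bin/sh
# Minimal preparation on the standby host ha2 (paths mirror this setup).
# BASE defaults to a scratch directory for a dry run; use BASE=/opt/oracle
# on the real host.
BASE=${BASE:-$(mktemp -d)}
mkdir -p "$BASE/oradata/SBF1"       # datafile destination for SBF1
mkdir -p "$BASE/admin/SBF1/adump"   # audit_file_dest expected at startup
# Minimal init file: db_name is enough to start the instance nomount
# before RMAN's "duplicate ... spfile" pushes over the real spfile.
printf 'db_name=SBF1\n' > "$BASE/initSBF1.ora"
# The password file itself would be copied from the primary, e.g. with
# scp from $ORACLE_HOME/dbs/orapwMASTERF (not done in this dry run).
cat "$BASE/initSBF1.ora"
```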
The listener has to be started on ha2 for the whole process to work:
ha1-> tnsping SBF1
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ha2)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = SBF1)))
OK (240 msec)
ha1->
set control_files='/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'
set log_archive_max_processes='2'
set fal_client='SBF1'
set fal_server='MASTERF'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(MASTERF,SBF1)'
set log_archive_dest_1='SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'
;
ha1-> rman target /
connected to target database: MASTERF (DBID=3441682046)
RMAN> connect auxiliary sys/syspass@SBF1
connected to auxiliary database: SBF1 (not mounted)
RMAN> @create_physical_standby.txt
RMAN> run {
2> allocate channel ch1 type disk;
3> allocate channel ch2 type disk;
4> allocate auxiliary channel chsb type disk;
5> duplicate target database for standby from active database
6> spfile
7> parameter_value_convert 'MASTERF', 'SBF1'
8> set db_unique_name='SBF1'
9> set db_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
10> set log_file_name_convert='/opt/oracle/oradata/MASTERF/','/opt/oracle/oradata/SBF1/'
11> set control_files='/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'
12> set log_archive_max_processes='2'
13> set fal_client='SBF1'
14> set fal_server='MASTERF'
15> set standby_file_management='AUTO'
16> set log_archive_dest_1='SERVICE=MASTERF ASYNC
17> valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'
18> ;
19> }
using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=124 device type=DISK
allocated channel: ch2
channel ch2: SID=170 device type=DISK
allocated channel: chsb
channel chsb: SID=99 device type=DISK
Starting Duplicate Db at 18-MAY-09
contents of Memory Script:
{
backup as copy reuse
file '/opt/oracle/product/11g/db_1/dbs/orapwMASTERF' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/orapwSBF1'
file
'/opt/oracle/product/11g/db_1/dbs/spfileMASTERF.ora' auxiliary format
'/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora'
;
sql clone "alter system set spfile=
''/opt/oracle/product/11g/db_1/dbs/spfileSBF1.ora''";
}
sql statement: alter system set fal_client = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''MASTERF'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''AUTO'' comment= ''''
scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(MASTERF,SBF1)''
comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'' comment= ''''
scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
released channel: ch1
released channel: ch2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/18/2009 20:31:24
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-01261: Parameter db_recovery_file_dest destination string
cannot be translated
ORA-01262: Stat failed on a file destination directory
Linux Error: 2: No such file or directory
RMAN>
After a while the script stopped with error ORA-01261, which in this case meant that the directory
/opt/oracle/flash_recovery_area had not yet been created on ha2:
SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /opt/oracle/flash_recovery_area
db_recovery_file_dest_size           big integer 2G
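The fix is simply to create the destination before restarting the duplicate. A sketch of the repair step; TARGET defaults to a scratch path here so it can be dry-run, while on ha2 it would be /opt/oracle/flash_recovery_area:

```shell
#!/bin/sh
# ORA-01261 / ORA-01262: db_recovery_file_dest must exist and be writable
# by the oracle user on the standby host before the instance can start.
TARGET=${TARGET:-$(mktemp -d)/flash_recovery_area}
mkdir -p "$TARGET"
# Verify the destination is writable before retrying the RMAN run:
[ -w "$TARGET" ] && echo "recovery destination ready: $TARGET"
```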
We start again
''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE)
db_unique_name=MASTERF'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount ;
}
executing Memory Script
sql statement: alter system set audit_file_dest = ''/opt/oracle/admin/SBF1/adump''
comment= '''' scope=spfile
sql statement: alter system set dispatchers = ''(PROTOCOL=TCP) (SERVICE=SBF1XDB)''
comment= '''' scope=spfile
sql statement: alter system set db_unique_name = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''/opt/oracle/oradata/MASTERF/'',
''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set log_file_name_convert =
''/opt/oracle/oradata/MASTERF/'', ''/opt/oracle/oradata/SBF1/'' comment= '''' scope=spfile
sql statement: alter system set control_files =
''/opt/oracle/oradata/SBF1/cntrlSBF1.ctl'' comment= '''' scope=spfile
sql statement: alter system set log_archive_max_processes = 2 comment= '''' scope=spfile
sql statement: alter system set fal_client = ''SBF1'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''MASTERF'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''AUTO'' comment= ''''
scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(MASTERF,SBF1)''
comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''SERVICE=MASTERF ASYNC
valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=MASTERF'' comment= ''''
scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area  200867840 bytes
Fixed Size                  1298864 bytes
Variable Size              71306832 bytes
Database Buffers          125829120 bytes
Redo Buffers                2433024 bytes
}
executing Memory Script
In the following we see that the datafiles are copied directly onto the other server, with no temporary
staging space required:
Starting backup at 18-MAY-09
channel ch1: starting datafile copy
input datafile file number=00001 name=/opt/oracle/oradata/MASTERF/system01.dbf
channel ch2: starting datafile copy
input datafile file number=00002 name=/opt/oracle/oradata/MASTERF/sysaux01.dbf
output file name=/opt/oracle/oradata/SBF1/sysaux01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:04:27
channel ch2: starting datafile copy
input datafile file number=00005 name=/opt/oracle/oradata/MASTERF/example01.dbf
output file name=/opt/oracle/oradata/SBF1/system01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch1: datafile copy complete, elapsed time: 00:04:43
channel ch1: starting datafile copy
input datafile file number=00003 name=/opt/oracle/oradata/MASTERF/undotbs01.dbf
output file name=/opt/oracle/oradata/SBF1/example01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:00:31
channel ch2: starting datafile copy
input datafile file number=00004 name=/opt/oracle/oradata/MASTERF/users01.dbf
output file name=/opt/oracle/oradata/SBF1/undotbs01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch1: datafile copy complete, elapsed time: 00:00:19
output file name=/opt/oracle/oradata/SBF1/users01.dbf tag=TAG20090518T204012 RECID=0
STAMP=0
channel ch2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 18-MAY-09
...
file name=/opt/oracle/oradata/SBF1/users01.dbf
...
RMAN> **end-of-file**
RMAN>
The alert.log on both sides gives details about the log transmission and informs us of any errors.
In the alert.log on the standby site the error "ORA-16401 archivelog rejected by RFS" was detected; it turned
out to be nothing to worry about.
A very useful view on both sides is V$MANAGED_STANDBY; on ha2 it informs us that the process MRP0 is
doing its work:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
FROM V$MANAGED_STANDBY;
Another way of checking the progress is, of course, to create a user LOLO on the master database along with his
table t_a, populate it, and make sure that it gets propagated to the standby.
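A sketch of that smoke test, written out as a SQL script generated by shell. The user LOLO and table t_a follow the example above; the password, grants, and row count are placeholders of mine, and opening the standby read only to query it is one way to check, not the only one:

```shell
#!/bin/sh
# Assemble a propagation smoke test into a script file. Everything here
# is illustrative: password, grants, and the row count are placeholders.
cat > /tmp/check_propagation.sql <<'EOF'
-- run on the primary (MASTERF):
create user lolo identified by lolo quota unlimited on users;
grant create session, create table to lolo;
create table lolo.t_a as select level id from dual connect by level <= 100;
alter system switch logfile;
-- then on the standby (SBF1), opened read only:
-- select count(*) from lolo.t_a;
EOF
echo "wrote /tmp/check_propagation.sql"
```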
At this point we have a working Data Guard setup; as Grid Control is becoming more and more popular, I will
investigate how to manage the Data Guard configuration from the console, as shown in the next pages.
Our purpose is to couple the master database MASTERF on ha1 with the standby database SBF1 on ha2.
We start by setting up the Data Guard broker configuration using Grid Control.
We expect to be able to choose SBF1, and the screenshot indeed offers this option.
Will we be able to connect to the standby, even though the database is only mounted and not open?
We were able to connect! The next screen is about "Add Standby Database: Configuration".
This was about configuring the Data Guard broker; see the next pages for the installation of the Enterprise
Manager Repository.
In this case, three ORACLE_HOMEs will be created by the installer: one for the database, one for the
management server, and one for the agent.
The database name and the passwords for the privileged users are required.
As shown in the screenshot, the software for the Oracle Enterprise Manager Repository Database is installed
first.
Second comes the software for the grid console, and third the Oracle Management Server.
The database is cloned using the files that come with the CDs.
While linking and configuring the assistants, the web cache assistant fails with the error "libdb.so.2 missing".
There is no point in continuing; in my case the solution was installing the rpm db1-1.85-8.i386.rpm, which can
be found online.
[root@grid10g tmp]# rpm -iv db1-1.85-8.i386.rpm
warning: db1-1.85-8.i386.rpm: Header V3 DSA signature: NOKEY, key ID db42a60e
Preparing packages for installation...
db1-1.85-8
[root@grid10g tmp]# ls /usr/lib/libdb*
/usr/lib/libdb1.so.2
/usr/lib/libdb.so.2
/usr/lib/libdb-4.3.so
/usr/lib/libdbus-glib-1.so.2
/usr/lib/libdb_cxx-4.3.so /usr/lib/libdbus-glib-1.so.2.1.0
[root@grid10g tmp]#
If everything has gone well, the console will already be available at http://grid10g:4889/em
As we deal with three different ORACLE_HOMEs, we often have to switch between the three environments
and redefine the PATH.
It is therefore an excellent idea to define aliases in .profile, making sure that the original PATH is preserved in
BASE_PATH, since PATH will be overwritten when switching environments.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/OracleHomes/db10g
export DBS_HOME=/u01/app/oracle/OracleHomes/db10g
export OMS_HOME=/u01/app/oracle/OracleHomes/oms10g
export AGENT_HOME=/u01/app/oracle/OracleHomes/agent10g
export ORACLE_SID=REP10G
export BASE_PATH=$PATH
alias 10db='export ORACLE_HOME=$DBS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10gr='export ORACLE_HOME=$OMS_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
alias 10ag='export ORACLE_HOME=$AGENT_HOME; export PATH=${ORACLE_HOME}/bin:${BASE_PATH}'
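The point of BASE_PATH can be seen by simulating a few switches: each switch rebuilds PATH from the saved original instead of prepending onto an already-modified PATH, so PATH never grows. A sketch, with the alias bodies rewritten as a function for readability:

```shell
#!/bin/sh
# Why BASE_PATH matters: PATH is rebuilt from the preserved original on
# every switch, so repeated switching never accumulates old entries.
BASE_PATH=$PATH
DBS_HOME=/u01/app/oracle/OracleHomes/db10g
OMS_HOME=/u01/app/oracle/OracleHomes/oms10g
# Same logic as the 10db/10gr aliases above, as a function:
switch() { ORACLE_HOME=$1; PATH=${ORACLE_HOME}/bin:${BASE_PATH}; }
switch "$DBS_HOME"
switch "$OMS_HOME"
switch "$DBS_HOME"
# After three switches, PATH still carries exactly one ORACLE_HOME prefix:
echo "$PATH" | cut -d: -f1   # prints /u01/app/oracle/OracleHomes/db10g/bin
```

Without BASE_PATH, the third switch would leave the oms10g bin directory buried in the middle of PATH.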
$ 10db
$ sqlplus / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
$ lsnrctl status
LSNRCTL for Linux: Version 10.1.0.4.0 - Production on 13-JUN-2009 17:41:33
Copyright (c) 1991, 2004, Oracle.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.1.0.4.0 - Production
Start Date                13-JUN-2009 17:19:58
Uptime                    0 days 0 hr. 21 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/OracleHomes/db10g/network/admin/listener.ora
Listener Log File         /u01/app/oracle/OracleHomes/db10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=grid10g)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "REP10G" has 1 instance(s).
Instance "REP10G", status READY, has 1 handler(s) for this service...
The command completed successfully
$ 10gr
This was about configuring the Enterprise Manager Repository; the agent on the Data Guard nodes still has to
be installed, see the installation of the grid agent on a monitored node.