
g:\prints\core\11g installation.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Installing Oracle 11gR2 on Linux 6

step.1

Hosts File
The "/etc/hosts" file must contain a fully qualified name for the server.
<IP-address> <fully-qualified-machine-name> <machine-name>
For example:
127.0.0.1 localhost.localdomain localhost
192.168.0.181 ol6-112.localdomain ol6-112

step.2

Oracle recommends the following minimum parameter settings.

fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
The current values can be tested using the following command.

/sbin/sysctl -a | grep <param-name>
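For example, to check the current shared memory maximum (the value shown is
illustrative; your system will report its own setting):

/sbin/sysctl -a | grep shmmax
kernel.shmmax = 536870912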


Add or amend the following lines in the "/etc/sysctl.conf" file.

fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576

Run the following command to change the current kernel parameters.


/sbin/sysctl -p

step.3

Add the following lines to the "/etc/security/limits.conf" file.

oracle soft nproc 16384


oracle hard nproc 16384
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft stack 10240

step.4

Install the following packages if they are not already present.

# From Oracle Linux 6 DVD


cd /media/cdrom/Server/Packages
rpm -Uvh binutils-2*x86_64*
rpm -Uvh glibc-2*x86_64* nss-softokn-freebl-3*x86_64*
rpm -Uvh glibc-2*i686* nss-softokn-freebl-3*i686*
rpm -Uvh compat-libstdc++-33*x86_64*
rpm -Uvh glibc-common-2*x86_64*
rpm -Uvh glibc-devel-2*x86_64*
rpm -Uvh glibc-devel-2*i686*
rpm -Uvh glibc-headers-2*x86_64*
rpm -Uvh elfutils-libelf-0*x86_64*
rpm -Uvh elfutils-libelf-devel-0*x86_64*
rpm -Uvh gcc-4*x86_64*
rpm -Uvh gcc-c++-4*x86_64*
rpm -Uvh ksh-*x86_64*
rpm -Uvh libaio-0*x86_64*
rpm -Uvh libaio-devel-0*x86_64*
rpm -Uvh libaio-0*i686*
rpm -Uvh libaio-devel-0*i686*
rpm -Uvh libgcc-4*x86_64*
rpm -Uvh libgcc-4*i686*
rpm -Uvh libstdc++-4*x86_64*
rpm -Uvh libstdc++-4*i686*
rpm -Uvh libstdc++-devel-4*x86_64*
rpm -Uvh make-3.81*x86_64*
rpm -Uvh numactl-devel-2*x86_64*
rpm -Uvh sysstat-9*x86_64*
rpm -Uvh compat-libstdc++-33*i686*
rpm -Uvh compat-libcap*

This will install all the necessary 32-bit packages for 11.2.0.1. From 11.2.0.2
onwards many of these are unnecessary, but having them present does not cause a
problem.

step.5

Create the new groups and users.

groupadd -g 501 oinstall


groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 506 asmdba
groupadd -g 505 asmoper

useradd -u 502 -g oinstall -G dba,asmdba,oper oracle

passwd oracle
We are not going to use the "asm" groups, since this installation will not use ASM.
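You can sanity-check the resulting user and group membership with the id command
(output illustrative):

id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),506(asmdba)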

Additional Setup
Amend the "/etc/security/limits.d/90-nproc.conf" file as described below. See MOS
Note [ID 1487773.1]

# Change this
* soft nproc 1024

# To this
* - nproc 16384
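You can verify the effective limit by switching to the "oracle" user (a quick
sanity check; output illustrative):

su - oracle
ulimit -u
16384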

step.6

Set SELinux to permissive mode by editing the "/etc/selinux/config" file, making
sure the SELINUX flag is set as follows.

SELINUX=permissive
Once the change is complete, restart the server.

If you have the Linux firewall enabled, you will need to disable or configure it.

step.7

Create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01

step.8

Log in as root and issue the following command.


xhost +<machine-name>

step.9

Log in as the oracle user and add the following lines at the end of the
".bash_profile" file.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=ol6-112.localdomain; export ORACLE_HOSTNAME


ORACLE_UNQNAME=DB11G; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=DB11G; export ORACLE_SID

PATH=/usr/sbin:$PATH; export PATH


PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH


CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

step.10

Log in as the oracle user. If you are using X emulation, set the DISPLAY
environment variable.
DISPLAY=<machine-name>:0.0; export DISPLAY
Start the Oracle Universal Installer (OUI) by issuing the following command in the
database directory.

./runInstaller
Proceed with the installation of your choice. The prerequisite checks will fail
for the following version-dependent reasons:

11.2.0.1: The installer shows multiple "missing package" failures because it does
not recognize several of the newer package versions that were installed. These
"missing package" failures can be ignored, as the packages are present. The failure
for the "pdksh" package can be ignored because we installed the "ksh" package in
its place.
11.2.0.2: The installer should only show a single "missing package" failure for the
"pdksh" package. It can be ignored because we installed the "ksh" package in its
place.
11.2.0.3: The installer shows no failures and continues normally.

If you are doing an installation for an Enterprise Manager repository, remember to
do an advanced installation and pick the ALT32UTF8 character set.

Edit the "/etc/oratab" file setting the restart flag for each instance to 'Y'.

DB11G:/u01/app/oracle/product/11.2.0/db_1:Y
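With the restart flag set to 'Y', the standard dbstart and dbshut scripts (run as
the oracle user) will start and stop the instances flagged in "/etc/oratab", for
example:

dbstart $ORACLE_HOME
dbshut $ORACLE_HOME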

g:\prints\core\12c admin.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
12c Database Administration
++++++++++++++++++++++++++++:

In Oracle 12c, you can connect to a PDB using two methods:

- Switch the container using ALTER SESSION SET CONTAINER
- Use the CONNECT command to connect to the PDB using a network alias

The use of SET CONTAINER avoids the need to create a new connection from scratch.
If there is an existing connection to a PDB / CDB$ROOT, the same connection can be
reused to connect to the desired PDB / CDB$ROOT.

- Connect to the CDB

[oracle@em12 ~]$ sqlplus system/oracle@cdb1

CDB$ROOT@CDB1> sho con_name

CON_NAME
------------------------------
CDB$ROOT
- Check the PID of the process created on the operating system

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)


- Change the container to PDB1 using SET CONTAINER
CDB$ROOT@CDB1> alter session set container=pdb1;

sho con_name

CON_NAME
------------------------------
PDB1
- Check that the operating system PID remains the same, as the earlier connection
is reused and a new connection has not been created

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)


- Switch the container back to CDB$ROOT using CONNECT

CDB$ROOT@CDB1> conn system/oracle@cdb1


sho con_name

CON_NAME
------------------------------
CDB$ROOT
- Check that a new operating system PID has been created, as a new connection has
been established

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23409 1 0 10:29 ? 00:00:00 oraclecdb1 (LOCAL=NO)


glogin.sql is not executed when ALTER SESSION SET CONTAINER is used.
To demonstrate this, I have added the following lines to my glogin.sql to display
the CDB/PDB name in the SQL prompt:

define gname=idle
column global_name new_value gname
set heading off
set termout off
col global_name noprint
select upper(sys_context ('userenv', 'con_name') || '@' || sys_context('userenv',
'db_name')) global_name from dual;
set sqlprompt '&gname> '
set heading on
set termout on

- Let's connect to PDB1 using CONNECT and verify that glogin.sql is executed and
the prompt displays the CDB/PDB name
SQL> conn sys/oracle@pdb1 as sysdba
PDB1@CDB1>
- Verify that the prompt displays the current container (PDB1) and the container
database (CDB1)

PDB1@CDB1> sho con_name


PDB1

PDB1@CDB1> sho parameter db_name


db_name string cdb1

- Now let's connect to PDB2 using ALTER SESSION SET CONTAINER and verify that
glogin.sql is not executed and the same prompt as before is displayed
PDB1@CDB1> alter session set container=pdb2;
Session altered.
PDB1@CDB1> sho con_name
CON_NAME
------------------------------
PDB2
-- Let's connect to PDB2 using connect and verify that glogin.sql is executed as
the prompt displays the PDB name PDB2

PDB1@CDB1> connect sys/oracle@pdb2 as sysdba

PDB2@CDB1>
Pending transactions are not committed when ALTER SESSION SET CONTAINER is used
- Let's start a transaction in PDB1

PDB1@CDB1> create table pdb1_tab(x number);


Table created.
PDB1@CDB1> insert into pdb1_tab values (1);
1 row created.
- Switch the container to PDB2

PDB1@CDB1> alter session set container=pdb2;


- Try to start another transaction in PDB2; this is not allowed, as an active
transaction exists in the original container PDB1

PDB1@CDB1> create table pdb2_tab (x number);

create table pdb2_tab (x number)

ERROR at line 1:
ORA-65023: active transaction exists in container PDB1
- In another session, check that the transaction was not committed and no rows are
visible in table pdb1_tab

CDB$ROOT@CDB1> conn system/oracle@pdb1

PDB1@CDB1> select * from pdb1_tab;


no rows selected
ALTER SESSION SET CONTAINER cannot be used by local users

- Try to grant the SET CONTAINER privilege to a local user HR in PDB2; this fails,
as a common privilege cannot be granted to a local user, and hence a local user
cannot use ALTER SESSION SET CONTAINER to connect to another PDB

PDB2@CDB1> connect system/oracle@pdb2


PDB2@CDB1> grant set container to hr container=all;
grant set container to hr container=all
*
ERROR at line 1:
ORA-65030: one may not grant a Common Privilege to a Local User or Role
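By contrast, the SET CONTAINER privilege can be granted across all containers to a
common user. A minimal sketch, assuming a common user C##ADMIN already exists:

PDB2@CDB1> connect system/oracle@cdb1
CDB$ROOT@CDB1> grant set container to c##admin container=all;

Grant succeeded.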

g:\prints\core\12c new features.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
12c New Features
=================:

- Container & Pluggable Databases


- New parameter PGA_AGGREGATE_LIMIT, to limit PGA memory utilization
- A new role, CDB administrator, has been introduced
- Multithreaded database, with the parameter THREADED_EXECUTION
- Multiple LGWR processes for the PDBs, which can share the master container's LGWR
process.

New features in Database 12c

Database 12c brought with it many new features. There are improvements in many
areas, as well as new concepts. Most notable is the concept of container databases
and pluggable databases.

Container Databases (CDB) and Pluggable Databases (PDB) bring a radical change to
the core database architecture. Besides this major change, Database 12c includes
many other improvements. Some of those new features are listed below:

1) The limit of 63 ASM disk groups has been increased.


2) Oracle also allows you now to store the ASM password file in a shared ASM disk
group.
3) The alter diskgroup command has been extended to include the scrub clause. This
clause allows the ASM administrator to check ASM disk groups, individual disks in a
disk group, or even a single ASM file for logical corruption in cases where ASM is
responsible for protecting the data. If logical corruption is detected during the
scrubbing process, ASM can also try to repair it using mirror copies of the extent.

4) In Oracle 12c, every node in the cluster does NOT need to have its own ASM
instance. Oracle Flex ASM, a new set of features, addresses this situation by
removing the strict requirement to have one ASM instance per cluster node. In this
scenario, if an ASM instance to which databases are connected fails, the databases
will dynamically reconnect to another ASM instance in the cluster.

5) The RAC crsctl & srvctl commands have a new option named -eval to evaluate
commands before they are executed.

6) In 12c you can now move a partition online.

7) A new command, ALTER DATABASE MOVE DATAFILE, makes it very simple to move data
and temp files from a file system into ASM while they are in use. Previously it was
not possible to do this online.
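A minimal sketch of the new syntax (the file path and disk group name are
illustrative):

SQL> alter database move datafile '/u01/app/oracle/oradata/db12c/users01.dbf' to '+DATA';

Database altered.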

8) Oracle has removed the Database Console in Oracle 12c. It was introduced with
Oracle 10g and it was not frequently used by DBAs.

9) Increase in the VARCHAR2 limit. Instead of the previous limit of 4000 bytes, it
is now possible to store up to 32 kilobytes in this field. The new behavior is
controlled by the MAX_STRING_SIZE initialization parameter. Note that SQL*Plus
cannot insert that much data interactively, as its inherent limit is 2500
characters for a column, so you will need another tool for that.
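Enabling the 32K limit is a one-way change and, in a non-CDB, is typically done in
UPGRADE mode. A sketch of the usual sequence (verify the exact steps for your
version before running it):

SQL> shutdown immediate;
SQL> startup upgrade;
SQL> alter system set max_string_size=extended;
SQL> @?/rdbms/admin/utl32k.sql
SQL> shutdown immediate;
SQL> startup;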

g:\prints\core\ag advntg.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Active Data Guard
==================:

The Active Data Guard Option is an evolution of Data Guard technology. It is
designed for a specific purpose: to improve production database performance for
critical transactions.


-Active Data Guard enables read-only access to a physical standby database while
Redo Apply is active.
-Queries and reports can be offloaded from the production system to a synchronized
physical standby database
-All queries at the standby database return up-to-date results.
-Unique corruption detection and automatic repair
-Offload read-only workloads to an up-to-date standby database
-Database rolling upgrades and standby-first patching using physical standby
-Zero data loss protection across any distance
-Enable incremental backups on an active standby
-Load balancing and service management across replicated databases
-An Active Data Guard Option license must be purchased in addition to Oracle
Enterprise Edition in order to utilize these new capabilities
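For example, real-time query is typically enabled on a physical standby by opening
it read-only and then restarting Redo Apply (a sketch, assuming an 11g standby that
is currently mounted and applying redo):

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect;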

Benefits:

-Absolutely the best protection for Oracle Database


-Highest performance data recovery protection without compromises
-Zero data loss data recovery protection across any distance without impacting
performance
-Comprehensive protection against planned and unplanned outages

g:\prints\core\archive_solution.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Using ALTERNATE archive destination to handle archive overflow

Scenario: Define a secondary archive location, which will be used when the primary
destination is full.

Solution: We can define an archive destination with the value ALTERNATE, which will
take over if the primary destination is full.
As per the Oracle documentation: an archiving destination can have a maximum of one
alternate destination specified. An alternate destination is used when the
transmission of an online redo log from the primary site to the standby site fails.

This is in the context of Data Guard, but it also applies to a standalone database.

- The FRA is defined as follows


SQL> show parameter recovery
NAME TYPE VALUE
------------------------- ----------- ------
db_recovery_file_dest string /u01/archives

- The primary archive location is defined as


SQL> show parameter log_archive_dest_1
NAME TYPE VALUE
------------------- ----------- ------------------------------
log_archive_dest_1 string location=use_db_recovery_file_dest

SQL> show parameter log_archive_dest_state_1


NAME TYPE VALUE
-------------------------- ----------- -------
log_archive_dest_state_1 string enable

- How do we define the alternate location?


Let's say we want to use log_archive_dest_3 as the alternate location.
SQL> alter system set log_archive_dest_3='location=+testarch' scope=both;
SQL> alter system set log_archive_dest_state_3='ALTERNATE' scope=both;

Now change the primary location to reflect the ALTERNATE setting


SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest
noreopen alternate=log_archive_dest_3' scope=both;

Here we have to add NOREOPEN; otherwise it will not spill over to the ALTERNATE
location.
As per the Oracle documentation: if archiving fails and the REOPEN attribute is
specified with a value of zero (0), or NOREOPEN is specified, the Oracle database
server attempts to archive online redo logs to the alternate destination on the
next archival operation.
When archive logs are written to the primary location:

SQL>select dest_id, dest_name, status from v$archive_dest_status where status <>


'INACTIVE';
DEST_ID DEST_NAME STATUS
---------- --------------------- ---------
1 LOG_ARCHIVE_DEST_1 VALID
3 LOG_ARCHIVE_DEST_3 UNKNOWN

When the primary location is full and the archiver cannot write to it, the first
attempt will throw the following error stack:

alter system archive log current


*
ERROR at line 1:
ORA-16038: log 2 sequence# 194 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1:
'+DG1/primary/onlinelog/group_2.274.789415247'
ORA-00312: online log 2 thread 1:
'+RECODG/primary/onlinelog/group_2.332.789415247'
But the second archiving request will write to the ALTERNATE location. At this
point LOG_ARCHIVE_DEST_1 will be DISABLED.

SQL>select dest_id, dest_name, status from


v$archive_dest_status where status <> 'INACTIVE';

DEST_ID DEST_NAME STATUS


--------- ------------------ ---------
1 LOG_ARCHIVE_DEST_1 DISABLED
3 LOG_ARCHIVE_DEST_3 VALID
Once the space issue is resolved and we are ready to fall back to the primary location:

SQL> alter system set log_archive_dest_state_1=enable;


System altered.
SQL> alter system set log_archive_dest_state_3=alternate;
System altered.

SQL>select dest_id, dest_name, status from


v$archive_dest_status where status <> 'INACTIVE';
DEST_ID DEST_NAME STATUS
---------- --------------------- ---------
1 LOG_ARCHIVE_DEST_1 VALID
3 LOG_ARCHIVE_DEST_3 UNKNOWN

Metalink Documents:
NOTE 270069.1 - How to Automate Archive Log Overflow Using "Alternate"
NOTE 369120.1 - ALTERNATE Attribute of LOG_ARCHIVE_DEST_n Does Not Appear to Work

g:\prints\core\asm int.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ASM

1 What is ASM in Oracle?

Answer
Oracle ASM is Oracle's volume manager, specially designed for Oracle database
data. It has been available since Oracle Database version 10g, and many
improvements were made in versions 11g release 1 and 2 and in 12c.
ASM offers support for Oracle RAC clusters without the requirement to install 3rd
party software, such as cluster aware volume managers or file systems.
ASM is shipped as part of the database server software (Enterprise and Standard
editions) and does not cost extra money to run.
ASM simplifies administration of Oracle related files by allowing the administrator
to reference disk groups
rather than individual disks and files, which are managed by ASM.

The ASM functionality is an extension of the Oracle Managed Files (OMF)
functionality that also includes striping and mirroring to provide balanced and
secure storage. The new ASM functionality can be used in combination with existing
raw and cooked file systems, along with OMF and manually managed files.

Oracle ASM Introduction

2 What is ASM instance in Oracle?


Answer
The ASM functionality is controlled by an ASM instance. This is not a full database
instance, just the memory structures and as such is very small and lightweight.

Characteristics of Oracle ASM instance

1) The ASM instance, which is generally named +ASM, is started with the
INSTANCE_TYPE=ASM init.ora parameter.
2) It does not mount a database, but manages the metadata required to make ASM
files available to DB instances.
3) DB instances access ASM files directly, and contact the ASM instance only for
the layout of ASM files.
4) It requires only the init.ora file for startup.
5) The instance name is +ASM, or +ASMn for RAC.
6) The ASM instance can be started from the same Oracle home as the database or
from a separate home.

3 What are ASM Background Processes in Oracle?

Answer
RBAL - Oracle background process. In an ASM instance, coordinates rebalancing
operations. In a DB instance, opens and mounts disk groups from the local ASM
instance.
ARBx - Oracle background processes. In an ASM instance, slaves for rebalancing
operations.
PSPx - Oracle background processes. In an ASM instance, process spawners.
GMON - Oracle background process. In an ASM instance, the disk group monitor.
ASMB - Oracle background process. In a DB instance, keeps a (bequeath) persistent
DB connection to the local ASM instance. Provides heartbeat and ASM statistics.
During a disk group rebalancing operation, ASM communicates AU changes to the DB
via this connection.
O00x - Oracle background processes. Slaves used to connect from the DB to the ASM
instance for "short operations".

4 What are ASM instance initialization parameters?

Answer
INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is
RDBMS.

DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults
to +ASM but must be altered if you intend to run multiple ASM instances.

ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM instance.
The valid values range from 0 to 11, with 1 being the default. The higher the
limit, the more resources are allocated, resulting in faster rebalancing
operations. This value is also used as the default when the POWER clause is omitted
from a rebalance operation.

ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance
during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM
configuration changes are automatically reflected in this parameter.

ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered
for discovery. Altering the default value may improve the speed of disk group mount
time and the speed of adding a disk to a disk group. Changing the parameter to a
value which prevents the discovery of already mounted disks results in an error.
The default value is NULL, allowing all suitable disks to be considered.
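For example, to limit discovery to ASMLib-managed devices (the path pattern is
illustrative and site-specific):

SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*' scope=both;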

Oracle ASM Parameters


What are the Advantages of ASM in Oracle?

a) Provides automatic load balancing over all the available disks, thus reducing
hot spots in the file system
b) Prevents fragmentation of disks, so you don't need to manually relocate data to
tune I/O performance
c) Adding disks is straightforward; ASM automatically performs online disk
reorganization when you add or remove storage
d) Uses redundancy features available in intelligent storage arrays
e) The storage system can store all types of database files
f) Using disk groups makes configuration easier, as files are placed into disk
groups
g) ASM provides striping and mirroring
h) ASM and non-ASM Oracle files can coexist

5 How Oracle ASM instance works?


Answer The ASM instance manages and communicates the map of where each file
extent resides. It also controls the process of rebalancing the placement of the
extents when the storage allocation is changed, i.e. when a disk is added to or
removed from ASM. As an ASM instance uses only about 64 MB for its system global
area, it requires a relatively small amount of system resources. In a RAC
configuration, an ASM instance on each node in the cluster manages all disk groups
for that node, in coordination with the other nodes in the cluster.

The ASM instance creates an extent map with a pointer to where each 1MB extent of
the data file is located. When a database instance creates or opens a database file
that is managed by ASM, the database instance messages the ASM instance and ASM
returns an extent map for that file. From that point the database instance performs
all I/O directly to the disks, unless the location of that file is being changed.
Three things might cause the extent map for a database instance to be updated: 1)
rebalancing the disk layout following a storage configuration change (adding or
dropping a disk from a disk group), 2) opening a new database file, and 3)
extending an existing database file when a tablespace is enlarged.

6 What are ASM disks?


Answer The physical disks used by ASM are known as ASM disks.

How to prepare the ASM disks

7 What are ASM disk groups?

Answer ASM disk groups each comprise several physical disks that are controlled as
a single unit.

ASM Disk groups

8 What are failure groups?

Answer Failure groups are defined within a disk group to support the required level
of redundancy. For two-way mirroring you would expect a disk group to contain two
failure groups, so individual files are written to two locations.

How ASM Failure Groups and CSS provides high availability

9 Why should we use separate ASM home?

Answer ASM should be installed separately from the database software in its own
ORACLE_HOME directory. This will allow you the flexibility to patch and upgrade ASM
and the database software independently.

10 How many ASM instances should one have?

Answer Several databases can share a single ASM instance. So, although one can
create multiple ASM instances on a single system, normal configurations should have
one and only one ASM instance per system.

For clustered systems, create one ASM instance per node (called +ASM1, +ASM2, etc).

11 How many diskgroups should one have?

Answer Generally speaking, one should have only one disk group for all database
files, and optionally a second for recovery files, i.e. +DATA for datafiles and
+FRA for recovery files.

Here is an example:

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/raw1', '/dev/raw2';


CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/raw3', '/dev/raw4';

Here is an example how you can enable automatic file management with such a setup
in each database served by that ASM instance:

ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=SPFILE;


ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=SPFILE;

You may also decide to introduce additional disk groups, for example if you decide
to put historic data on low-cost disks, or if you want ASM to mirror critical data
across 2 storage cabinets.
Data with different storage characteristics should be stored in different disk
groups. Each disk group can have different redundancy (mirroring) settings (high,
normal and external), different fail-groups, etc. However, it is generally not
necessary to create many disk groups with the same storage characteristics (i.e.
+DATA1, +DATA2, etc. all on the same type of disks).

How to move database to ASM storage

ASM best practice to add disk

12 What are striping and mirroring?

Answer Striping is spreading data across multiple disks so that IO is spread across
multiple disks, with a corresponding increase in throughput. It improves read/write
performance but provides no failover support.
ASM offers two types of striping, with the choice depending on the type of database
file. Coarse striping uses a stripe size of 1MB, and you can use coarse striping
for every file in your database, except for the control files, online redo log
files, and flashback files. Fine striping uses a stripe size of 128KB. You can use
fine striping for control files, online redo log files, and flashback files.

Mirroring means redundancy. It may add a performance benefit for read operations
but adds overhead for write operations. Its basic purpose is to provide failover
support. There are three ASM mirroring options:

High Redundancy - In this configuration, for each primary extent, there are two
mirrored extents. For Oracle Database Appliance this means that during normal
operations there are three extents (one primary and two secondary) containing the
same data, thus providing a "high" level of protection. Since ASM distributes the
partnering extents in a way that prevents all copies of an extent from becoming
unavailable due to a component failure in the IO path, this configuration can
sustain at least two simultaneous disk failures on Oracle Database Appliance (which
should be rare but is possible).

Normal Redundancy - In this configuration, for each primary extent, there is one
mirrored (secondary) extent. This configuration protects against at least one disk
failure. Note that in the event a disk fails in this configuration, although there
is typically no outage or data loss, the system operates in a vulnerable state,
should a second disk fail while the replacement of the first failed disk has not
completed. Many Oracle Database Appliance customers thus prefer the High Redundancy
configuration to mitigate the lack of additional protection during this time.

External Redundancy - In this configuration there are only primary extents and no
mirrored extents. This option is typically used in traditional non-appliance
environments when the storage subsystem may have existing redundancy, such as
hardware mirroring or other types of third-party mirroring, in place. Oracle
Database Appliance does not support External Redundancy.

What is a disk group?
A disk group consists of multiple disks and is the fundamental object that ASM
manages. Each disk group contains the metadata that is required for the management
of space in the disk group. The ASM instance manages the metadata about the files
in a disk group in the same way that a file system manages metadata about its
files. However, the vast majority of I/O operations do not pass through the ASM
instance. In a moment we will look at how file I/O works with respect to the ASM
instance.

13 What is ASM Rebalancing?

Answer The rebalancing speed is controlled by the ASM_POWER_LIMIT initialization
parameter. Setting it to 0 will disable disk rebalancing.

ALTER DISKGROUP DATA REBALANCE POWER 11;

ALTER DISKGROUP DATA REBALANCE POWER 5;


ALTER DISKGROUP DATA REBALANCE POWER 8;

Oracle ASM Rebalance

14 What happens when an Oracle ASM diskgroup is created?

Answer When an ASM disk group is created, a hierarchical filesystem structure is
created.

15 How does this filesystem structure appear?

Answer An Oracle ASM disk group's filesystem structure is similar to the UNIX or
Windows filesystem hierarchy.

16 Where are the Oracle ASM files stored?

Answer Oracle ASM files are stored within the Oracle ASM diskgroup. If we dig into
internals, oracle ASM files are stored within the Oracle ASM filesystem structures.

17 How are the Oracle ASM files stored within the Oracle ASM filesystem structure?
Answer Oracle ASM files are stored within the Oracle ASM filesystem structures as
objects that RDBMS instances/Oracle database instance access. RDBMS/Oracle instance
treats the Oracle ASM files as standard filesystem files.

18 What are the Oracle ASM files that are stored within the Oracle ASM file
hierarchy?

Answer Files stored in Oracle ASM diskgroup/Oracle ASM file structures include:
1) Datafile
2) Controlfiles
3) Server Parameter Files(SPFILE)
4) Redo Log files

19 How can you access a database file in ASM diskgroup under RDBMS?
Answer Once the ASM file is created in ASM diskgroup, a filename is generated. This
file is now visible to the user via the standard RDBMS view V$DATAFILE.

20 What is the syntax of ASM filenames?


Answer The ASM filename syntax is given below:
+diskgroup_name/database_name/database_file_type/tag_name.file_number.incarnation
where,
+diskgroup_name - name of the disk group that contains this file
database_name - name of the database that contains this file
database_file_type - can be one of around 20 different ASM file types
tag_name - corresponds to the tablespace name for datafiles, and the group number
for redo log files
file_number - the file number in the ASM instance, used to correlate filenames in
the database instance
incarnation_number - derived from the timestamp; used to provide uniqueness
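An illustrative (hypothetical) fully qualified filename following this syntax:

+DATA/DB11G/DATAFILE/USERS.259.813609765

Here +DATA is the disk group, DB11G the database, DATAFILE the file type, USERS the
tablespace tag, 259 the file number, and 813609765 the incarnation number.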

21 What is an incarnation number?


Answer An incarnation number is a part of the ASM filename syntax. It is derived
from the timestamp. Once the file is created, its incarnation number doesn't
change.

22 What is the use of an incarnation number in an Oracle ASM filename?
Answer The incarnation number distinguishes between a new file that has been
created using the same file number and an older file with that number that has been
deleted.

23 What is an oracle flex ASM?


Answer Oracle flex ASM is a feature that enables an ASM instance to run on separate
physical servers from the database servers

Oracle Flex Cluster 12c

24 What is the use of asmadmin?


Answer asmadmin is the operating system group that holds users who have the SYSASM
database privilege. This privilege is needed for operations like mounting disk
groups, dismounting disk groups, and storage administration.

25 What is the purpose of asmoper operating system group?


Answer asmoper is the operating system group used for users that have the privilege
to start up and stop the Oracle ASM instance. The database privilege for these
users will be SYSOPER for ASM.

26 What is the difference between asmdba and asmoper?


Answer Users belonging to the asmdba group have the SYSDBA database privilege at
the ASM level. This is the highest administrative privilege needed for Oracle ASM.
In contrast, asmoper is given the SYSOPER privilege, which is lower than that of
asmdba.

27 How to copy files between asm disk groups?


Answer The ASMCMD command cp can be used to copy files between ASM disk groups on
the local instance as well as on remote instances.
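A minimal sketch of copying a file into another disk group (both paths are
illustrative, and the target directory must already exist):

ASMCMD> cp +DATA/DB11G/DATAFILE/USERS.259.813609765 +FRA/backup/users_copy.dbf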

28 What is ASM metadata and where is it present?


Answer ASM metadata is the information that ASM uses to control a disk group. It
is stored within the disk group itself.

ASM Metadata

How to collect ASM metadata

29 What is an ASM metadata composed of?


Answer ASM metadata includes the following:
1) The disks that belong to a disk group
2) Amount of space available within a disk group
3) The filenames of the files within a disk group
4) The location of disk group datafile data extents
5) A redo log that records information about automatically changing data blocks

30 What is oracle ASM filter driver?


Answer Oracle ASM Filter Driver is a new feature introduced in Oracle Database 12c
(12.1.0.2). Abbreviated as Oracle ASMFD, it is a kernel module that sits in the I/O
path of the Oracle ASM disks. The module protects the underlying ASM disks from
unnecessary (typically non-Oracle) writes, which in turn protects the disks from
corruption.

31 What is ASM_POWER_LIMIT?

Answer This is the parameter which controls the number of allocation units the ASM
instance will try to rebalance at any given time, and hence the speed of rebalance
operations. The default value is 1. In ASM versions before 11.2.0.2 the maximum
value is 11; from 11.2.0.2 onwards (with compatible.asm set to 11.2.0.2 or higher)
the maximum was raised to 1024.

32 What is a rolling upgrade?

Answer A patch is considered rolling if it can be applied to the cluster binaries
without having to shut down the database in a RAC environment. All nodes in the
cluster are patched in a rolling manner, one by one, with only the node that is
being patched unavailable while all other instances remain open.

33 How does ASM provides Redundancy?


Answer When you create a disk group, you specify an ASM disk group type based on
one of the following three redundancy levels:
1) Normal, for 2-way mirroring - When ASM allocates an extent for a normal
redundancy file, ASM allocates a primary copy and a secondary copy. ASM chooses the
disk on which to store the secondary copy in a different failure group from that of
the primary copy.
2) High, for 3-way mirroring. In this case the extent is mirrored across 3 disks.
3) External, to not use ASM mirroring. This is used if you are using a third-party
redundancy mechanism such as RAID or storage arrays.

34 Can we change the Redundancy for Diskgroup after its creation?

Answer No, we cannot modify the redundancy of a disk group once it has been
created. To alter it, we need to create a new disk group and move the files to it.
This can also be done by restoring a full backup into the new disk group.

How to move database to ASM storage

35 Does the ASM instance automatically rebalance and take care of hot spots?
Answer No. This is a myth; ASM does not do this. It initiates an automatic
rebalance only when a new disk is added to a disk group or a disk is dropped from
an existing disk group.

36 What is ASMLIB?
Answer ASMLIB is the support library for ASM. ASMLIB gives an Oracle database
using ASM more efficient and capable access to disk groups. The purpose of ASMLIB
is to provide an alternative interface to identify and access block devices.
Additionally, the ASMLIB API enables storage and operating system vendors to supply
extended storage-related features.

37 What is SYSASM role?


Answer Starting from Oracle 11g, the SYSASM role can be used to administer ASM
instances. You can continue using the SYSDBA role to connect to ASM, but it will
generate the following warning message at the time of startup/shutdown, create
diskgroup/add disk, etc.
Alert entry:
WARNING: Deprecated privilege SYSDBA for command 'STARTUP'

38 What is kfed?
Answer kfed is a utility which can be used to view ASM disk header information. The
syntax for using it is:
kfed read devicename
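For example, to dump the disk name and disk group name from a disk header (the
device path is illustrative):

kfed read /dev/oracleasm/disks/DISK1 | grep -E 'dskname|grpname'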

39 ASM and Cluster Synchronization Service

Answer An ASM storage system requires the use of an additional specialized
database instance called ASM, which actually manages the storage for a set of
Oracle databases. In order to use ASM storage for your Oracle databases, you must
first ensure that Oracle's Cluster Synchronization Services (CSS) is running on
your server.

CSS is responsible for synchronizing ASM instances and your database instances, and
it is installed as part of your Oracle software. CSS also synchronizes recovery
from an ASM instance failure. You can find out if the CSS service is running by
using the following command:
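crsctl check css
CRS-4529: Cluster Synchronization Services is online

(This is a commonly used form of the check, assuming the Grid Infrastructure or
Oracle Restart environment is set; the CRS-4529 output is the 11gR2 message, while
older releases report "CSS appears healthy".)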

40 What processes does the rebalancing?


Answer RBAL, ARBn

41 How to find out the databases, which are using the ASM instance?
Answer

ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;

42 What are different types of striping in ASM & their differences?


Answer Fine-grained striping (128KB stripe size) and coarse-grained striping (1MB
stripe size); see question 12 for the differences.

43 What is the best LUN size for ASM ?


Answer There is no single best size. In most cases the storage team will dictate
this to you based on their standardized LUN size. The ASM administrator merely has
to communicate the ASM best practices and application characteristics to the
storage folks:
a) Need LUNs of equal size and performance
b) Minimum of 4 LUNs
c) The capacity requirement
d) The workload characteristics (random r/w, sequential r/w) and any response time
SLA

44 Can my RDBMS and ASM instances run different versions?


Answer Yes. ASM can be at a higher or lower version than its client databases.
There are two components of compatibility:
a) Software compatibility
b) Disk group compatibility attributes:
- compatible.asm
- compatible.rdbms
These are disk-group-level settings, not instance-level settings; no rolling
upgrade here!

45 How do I backup my ASM instance?


Answer There is no backup for the ASM instance, as ASM has no files to back up.
Unlike the database, ASM does not require a controlfile-type structure or any other
external metadata to bootstrap itself. All the data ASM needs to start up is in
on-disk structures (disk headers and other disk group metadata).

46 How does ASM work with multipathing software?


Answer Multipathing software sits at a layer lower than ASM, and thus is
transparent to it. You may need to adjust ASM_DISKSTRING to specify only the path
to the multipathing pseudo devices.
Multipathing tools provide the following benefits:
a) They provide a single block-device interface for a multi-pathed LUN
b) They detect any component failures in the I/O path, e.g. fabric port, channel
adapter, or HBA
c) When a loss of path occurs, they ensure that I/Os are re-routed to the available
paths, with no process disruption
d) They reconfigure the multipaths automatically when events occur
e) They ensure that failed paths get revalidated as soon as possible and provide
autofailback

g:\prints\core\ASM key features and benefits.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ASM key features and benefits
==============================:

-stripes files rather than logical volumes
-enables online disk reconfiguration and dynamic rebalancing
-provides adjustable rebalancing speed
-provides redundancy on a per-file basis
-supports only Oracle files
-is cluster aware
-is automatically installed as part of the base code set
-significantly reduces the time to resynchronize after a transient failure by
tracking changes while the disk is offline
-supports reading from the mirrored copy instead of the primary copy for extended
clusters

g:\prints\core\begin backup mode.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What will happen when we put the database in begin backup mode?

alter database begin backup;

When we put the database in begin backup mode, the headers of all datafiles are
frozen (their checkpoint SCN stops advancing), and during this period the database
generates excessive redo.

1. The tablespace is checkpointed.
2. The checkpoint SCN in the datafile headers is frozen and stops incrementing with
checkpoints.
3. Full images of changed DB blocks are written to the redo logs.
The datafile status changes in begin backup mode.

SQL> alter database begin backup;


Database altered.
SQL> select * from v$backup;
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 ACTIVE 1363203 23-JAN-18
2 ACTIVE 1363203 23-JAN-18
3 ACTIVE 1363203 23-JAN-18
4 ACTIVE 1363203 23-JAN-18
5 ACTIVE 1363203 23-JAN-18
6 ACTIVE 1363203 23-JAN-18

Why does it generate excessive redo?

Suppose you are updating some data in a table and the size of the change vector is
1 KB. The first time a block is modified in backup mode, the entire 8 KB block
image is written to the redo log instead of just the 1 KB change, which results in
excessive redo generation.

g:\prints\core\bgp.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
10g New Background Process

-MMAN
SGA Background Process
The Automatic Shared Memory Management feature uses a new background process named
Memory Manager (MMAN). MMAN serves as the SGA Memory Broker and coordinates the
sizing of the memory components. The SGA Memory Broker keeps track of the sizes of
the components and pending resize operations

-RVWR
Flashback database

This is a new feature introduced in 10g.

Flashing back a database means going back to a previous database state.
The Flashback Database feature provides a way to quickly revert an entire Oracle
database to the state it was in at a past point in time.
This is different from traditional point in time recovery.
A new background process, Recovery Writer (RVWR), was introduced; it is responsible
for writing the flashback logs, which store pre-images of data blocks.
One can use Flashback Database to back out changes that:
Have resulted in logical data corruptions.
Are a result of user error.
This feature is not applicable for recovering the database in case of media
failure.
The time required for flashing back a database to a specific time in the past is
DIRECTLY PROPORTIONAL to the number of changes made, not to the size of the
database.

-Jnnn
These are job queue processes which are spawned as needed by CJQ0 to complete
scheduled jobs. This is not a new process.

-CTWR
This is a new process Change Tracking Writer (CTWR) which works with the new block
changed tracking features in 10g for fast RMAN incremental backups.

-MMNL
The Memory Monitor Light (MMNL) process is a new process in 10g which works with
the Automatic Workload Repository new features (AWR) to write out full statistics
buffers to disk as needed.

-MMON
The Manageability Monitor (MMON) process was introduced in 10g and is associated
with the Automatic Workload Repository new features used for automatic problem
detection and self-tuning. MMON writes out the required statistics for AWR on a
scheduled basis.

-M000
MMON background slave (m000) processes.

-CJQn
This is the Job Queue monitoring process which is initiated with the
job_queue_processes parameter. This is not new.

-RBAL
This is the ASM related process that performs rebalancing of disk resources
controlled by ASM.
-ARBx
These processes are managed by the RBAL process and are used to do the actual
rebalancing of ASM
controlled disk resources. The number of ARBx processes invoked is directly
influenced by the asm_power_limit parameter.

-ASMB
The ASMB process is used to provide information to and from the Cluster
Synchronization Services used by ASM to manage the disk resources. It is also used
to update statistics and provide a heartbeat mechanism.

11g New Background Process

-ACMS
(atomic controlfile to memory service) per-instance process is an agent that
contributes to ensuring a distributed SGA memory update is either globally
committed on success or globally aborted in the event of a failure in an Oracle RAC
environment.

-DBRM
(database resource manager) process is responsible for setting resource plans and
other resource manager related tasks.

-DIA0
(diagnosability process 0) (only 0 is currently being used) is responsible for hang
detection and deadlock resolution.

-DIAG
(diagnosability) process performs diagnostic dumps and executes global oradebug
commands.

-EMNC
(event monitor coordinator) is the background server process used for database
event management and notifications.

-FBDA
(flashback data archiver process) archives the historical rows of tracked tables
into flashback data archives. Tracked tables are tables which are enabled for
flashback archive. When a transaction containing DML on a tracked table commits,
this process stores the pre-image of the rows into the flashback archive. It also
keeps metadata on the current rows.

FBDA is also responsible for automatically managing the flashback data archive for
space, organization, and retention and keeps track of how far the archiving of
tracked transactions has occurred.

-GTX0-j
(global transaction) processes provide transparent support for XA global
transactions in an Oracle RAC environment. The database autotunes the number of
these processes based on the workload of XA global transactions. Global transaction
processes are only seen in an Oracle RAC environment.

-KATE
performs proxy I/O to an ASM metafile when a disk goes offline.

-MARK
marks ASM allocation units as stale following a missed write to an offline disk.
-SMCO
(space management coordinator) process coordinates the execution of various space
management related tasks, such as proactive space allocation and space reclamation.
It dynamically spawns slave processes (Wnnn) to implement the task.

-VKTM
(virtual keeper of time) is responsible for providing a wall-clock time (updated
every second) and reference-time counter (updated every 20 ms and available only
when running at elevated priority).

Some additional Processes not documented in 10G :

-PZ
(PQ slaves used for global Views) are RAC Parallel Server Slave processes, but they
are not normal parallel slave processes, PZnn processes (starting at 99) are used
to query GV$ views which is done using Parallel Execution on all instances, if more
than one PZ process is needed, then PZ98, PZ97,... (in that order) are created
automatically.

O00 (ASM slave processes) A group of slave processes that establish connections to
the ASM instance. Through this connection pool, database processes can send
messages to the ASM instance. For example, opening a file sends the open request to
the ASM instance via a slave. However, slaves are not used for long-running
operations such as creating a file. The use of slave (pool) connections eliminates
the overhead of logging into the ASM instance for short requests.

-x000
Slave used to expel disks after disk group reconfiguration

12c New Background Process

-BWnn
There can be 1 to 100 Database Writer Processes. The names of the first 36 Database
Writer Processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th
Database Writer Processes are BW36-BW99. The database selects an appropriate
default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified
setting based on the number of CPUs and processor groups.

-FENC
(Fence Monitor Process) Processes fence requests for RDBMS instances which are
using Oracle ASM instances

-IPC0
(IPC Service Background Process) Common background server for basic messaging and
RDMA primitives based on IPC (Inter-process communication) methods.

-LDDn
(Global Enqueue Service Daemon Helper Slave) Helps the LMDn processes with various
tasks

-LGnn
(Log Writer Worker) On multiprocessor systems, LGWR creates worker processes to
improve the performance of writing to the redo log. LGWR workers are not used when
there is a SYNC standby destination. Possible processes include LG00-LG99.

-LREG
(Listener Registration Process) Registers the instance with the listeners
-OFSD
(Oracle File Server Background Process) Serves file system requests submitted to an
Oracle instance

-RPOP
(Instant Recovery Repopulation Daemon) Responsible for re-creating and/or
repopulating data files from snapshot files and backup files

-SAnn
(SGA Allocator) Allocates SGA The SAnn process allocates SGA in small chunks. The
process exits upon completion of SGA allocation.

-SCRB
(ASM Disk Scrubbing Master Process) Coordinates Oracle ASM disk scrubbing
operations

-SCRn
(ASM Disk Scrubbing Slave Repair Process) Performs Oracle ASM disk scrubbing repair
operation

-SCVn
(ASM Disk Scrubbing Slave Verify Process) Performs Oracle ASM disk scrubbing verify
operation

Rac Processes:

What is a Pnnn process used for?

A Pnnn process is a background parallel query slave process that is used for SQL
statements executed in parallel. They can be seen in RAC and single-instance
configurations. In addition to being utilized for DML and DDL, parallel execution
servers are also used for transaction recovery, instance crash recovery, and
replication operations. The number of Pnnn processes started in the database is
limited by the value of parallel_max_servers. They start with P000.

What is a PZnn process used for?


A PZ process is a RAC Parallel Server Slave process. Oracle uses PZnn processes
for queries that select from (GV$) global dynamic performance views. GV$ views
store information from all open instances in a RAC environment. PZnn background
processes are numbered backward, starting from PZ99.

In 12c, If the query is a GV$ query, then these background processes are numbered
backward, starting from PPA7

What is the LNSn process?

The LNSn process is a network server process used in a Data Guard (primary)
database.

"During asynchronous redo transmission, the network server (LNSn) process transmits
redo data out of the online redo logfiles on the primary database and no longer
interacts directly with the log writer process. This change in behavior allows the
log writer (LGWR) process to write redo data to the current online redo log file
and continue processing the next request without waiting for inter-process
communication or network I/O to complete."
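To see which of these background processes are actually running in an instance,
you can query V$BGPROCESS (an illustrative check; rows with a non-null process
address correspond to running processes):

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;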
g:\prints\core\bgprocess.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Database 11g introduced 56 new background processes
==========================================================:

Note:
The following post based on the Oracle Database 11g and briefly describes some
important processes.

These processes are mandatory and can be found in all typical database environment.

PMON - Process Monitor


Recovers failed user processes, releases resources, and rolls back uncommitted
transactions.

SMON - System Monitor


Performs instance recovery; cleans up and releases temporary segments.

LGWR - Log Writer


Writes redo entries from the redo log buffer to disk (the redo log files).

CKPT - Checkpoint
Ensures data consistency and easy database recovery in case of a crash by
synchronizing all the datafile headers and control files with recent checkpoint
information.

DBW0-DBWj - DB Writer
Flushes (writes) modified dirty buffers from the database buffer cache to disk
(datafiles). You can configure additional DB writer processes (up to 20): DBW0-DBW9
and DBWa through DBWj.

RECO - Distributed Recovery


The recoverer is responsible for recovering failed distributed transactions in a
distributed database.

MMON & MMNL - Manageability Monitor and Manageability Monitor Lite


MMON performs tasks related to AWR such as taking snapshots, capturing statistics
value for recently modified SQL objects and writes when a metric violates its
threshold value.
MMNL writes statistics from the Active Session History (ASH) buffer in the SGA to
disk. MMNL writes to disk when the ASH buffer is full.

DIAG - Diagnostic Capture


Prior to Oracle Database 11g, DIAG was used in RAC environments. It monitors the
overall health of the instance and performs diagnostic dumps requested by other
processes, as well as dumps triggered by process or instance termination.

The following are optional and used by specific database features.

ARCn - Archiver Process


Responsible for copying online redo logs to archival storage before they are
reused. Runs only when the database is in archivelog mode.

CJQ0 - Job Queue Coordinator Process


The job queue coordinator is responsible for managing scheduled job processes
within the database.
Dnnn - Dispatcher Process
Performs network communication in the shared server architecture.

MMAN - Memory Manager


Responsible for managing instance memory based on the workloads.

PSP0 - Process Spawner Process


Spawns Oracle background processes whenever needed.

QMNC - AQ (Advanced Queuing) Coordinator Process


Facilitates various background activities required by AQ and Oracle Streams.

These were introduced in Oracle Database 11g. The first three are mandatory; the
others may be running depending upon the features being used.

DIA0 - Diagnostic
Responsible for detecting hangs and resolving deadlocks.

GEN0 - General Task Execution Process


Performs required tasks, including SQL and DML.

VKTM - Virtual Keeper of TiMe Process


Responsible for providing a wall-clock time (updated every second) and a
reference-time counter (updated every 20 ms); the latter is available only when
running at elevated priority.

DBRM - DataBase Resource Manager


Performs resource manager tasks such as setting resource plans.

FBDA - Flashback Data Archiver Process


Archives historical rows for tracked tables into flashback data archives and
manages archive space, organization, and retention.

SMCO - Space Management Coordinator


Coordinates the execution of various space management tasks.

Wnnn - Space Management Slaves


Perform various background space management tasks, including proactive space
allocation and space reclamation.

RCBG - Result Cache BackGround


Handles result cache messages.

VKRM - Virtual Scheduler for Resource Manager Process


Schedules Resource Manager activity; serves as the centralized scheduler for
Resource Manager activity. VKRM manages the CPU scheduling for all managed Oracle
processes.

g:\prints\core\data recovery advisor.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
RMAN 11G : Data Recovery Advisor - RMAN command line example

What Is the Data Recovery Advisor?

The Data Recovery Advisor is a tool that helps you to diagnose and repair data
failures and corruptions. The Data Recovery Advisor analyzes failures based on
symptoms and intelligently determines optimal repair strategies. The tool can also
automatically repair diagnosed failures.
The Data Recovery Advisor is available from Enterprise Manager (EM) Database
Control and Grid Control. You can also use it via the RMAN command-line.

In this example you will see the use of the DRA commands via the RMAN command
line.

These DRA commands are available within RMAN:

List Failure # Lists the results of previously executed failure assessments.
Revalidates existing failures and closes them, if possible.
Advise Failure # Presents manual and automatic repair options.
Repair Failure # Automatically fixes failures by running the optimal repair option
suggested by ADVISE FAILURE. Revalidates existing failures when completed.
Change Failure # Enables you to change the status of failures.

Restrictions:

In the current release, Data Recovery Advisor supports single-instance databases.


Oracle Real Application Clusters databases are not supported in 11.1.0.6 ->
11.1.0.8
Data Recovery Advisor cannot use blocks or files transferred from a standby
database to repair failures on a primary database. Also, you cannot use Data
Recovery Advisor to diagnose and repair failures on a standby database. However,
the Data Recovery Advisor does support failover to a standby database as a repair
option (as mentioned above).

Examples:

RMAN> list failure;


RMAN> advise failure;
RMAN> repair failure;
RMAN> change failure 522 closed;

Others Command:

RMAN> list failure low;


RMAN> list failure high;
RMAN> list failure critical;
RMAN> repair failure preview;
RMAN> change failure 522 priority low;

g:\prints\core\Database Architecture.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Database Architecture

Shared pool
============
The size of the shared pool is defined by the parameter shared_pool_size.
The shared pool contains many components, but the important ones are the
library cache, the data dictionary cache, and the control structures.

Library cache
=============
The library cache consists of the shared SQL and PL/SQL areas.
The SQL and PL/SQL areas store the most recently used SQL and PL/SQL statements.
We cannot declare the size of the library cache; it is based entirely on the
shared pool size.
If the library cache is too small, statements are continuously reloaded into
it, which can affect performance.
It is managed through an LRU algorithm.
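
To gauge whether the library cache is adequately sized, you can check its hit
ratios per namespace; a minimal sketch using the standard V$LIBRARYCACHE view:

SQL> select namespace, gets, gethitratio from v$librarycache;

A consistently low GETHITRATIO suggests statements are being reloaded, i.e.
the shared pool may be too small.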

Data Dictionary cache
=====================
The data dictionary cache holds information about database objects such as:
table
index
column
privileges
view
trigger .. etc.

The data dictionary cache is also known as the row cache, because it stores
the information in the form of rows instead of buffers.

If the size of the data dictionary cache is too small, the database has to
query the data dictionary tables repeatedly, which degrades performance.

Control Structure
=================
Locking information is stored in the control structures.

Database buffer cache
=====================
The size of the database buffer cache is defined by the parameter db_cache_size.
The database buffer cache holds the data blocks fetched (read) from the
database files.
The size of each buffer in the database buffer cache is equal to the Oracle
block size.

The database buffer cache consists of two independent sub-caches:

1) DB_KEEP_CACHE_SIZE
==================
It retains in memory the blocks that are likely to be reused.

2) DB_RECYCLE_CACHE_SIZE
====================
It eliminates from memory the blocks that have little chance of being reused.

Buffer modes
+++++++++++
Unused
++++++++
The buffer is ready and available for use, as it has never been used.

Cleaned
+++++++++
The data has been written to the database and the buffer is available for reuse.

Dirty
++++
The data has been modified but not yet written to the disk.

Redo log buffer
===============
The size of the redo log buffer is defined by the parameter log_buffer.
It sequentially records all the changes made to the database.
log_buffer is a static parameter.

JAVA POOL
=========
The size of the Java pool is defined by the parameter java_pool_size.
If you want to execute Java code inside the database, the Java pool is used.
Whenever you run dbca, netca, etc., memory is allocated from the Java pool.

Large pool
==========
The size of the large pool is defined by the parameter large_pool_size.
Whenever an RMAN session is initiated, memory is allocated from the large pool,
and once the session finishes the memory is deallocated.
It does not follow the LRU algorithm.
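
To see the current sizes of these SGA components on a running instance, you
can query the standard V$SGAINFO view (available from 10g onwards); a minimal
sketch:

SQL> select name, bytes/1024/1024 as size_mb, resizeable from v$sgainfo;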

g:\prints\core\dataguard by qadar.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Data Guard
==================:

Oracle Data Guard is Oracle's disaster recovery solution. It protects our
production database from disasters, reduces the workload on it, and lets us
use our resources more effectively.

Simple Example:

Your primary database is running and you want to reduce downtime from unplanned
outages. You create a replica of this primary database (termed the standby
database). You regularly ship the redo generated on the primary database to the
standby database and apply it there. That is our "Data Guard" standby database:
it is in a continuous state of recovery, validating and applying redo to remain
in sync with the primary database.

Oracle Data Guard provides enhancements in different versions:

->ORACLE 8i

-Read-Only Standby Database
-Managed recovery
-Remote archiving of redo log files

->ORACLE 9i

-"Zero Data Loss" integration
-Data Guard Broker and Data Guard Manager GUI
-Switchover and Failover operations
-Automatic synchronization
-Logical Standby Database
-Maximum Protection

->ORACLE 10g

-Real-Time Apply
-Support for Oracle RAC
-Fast-Start Failover
-Asynchronous redo transfer
-Flashback Database

->ORACLE 11g

-Active Standby Database (Active Data Guard)
-Snapshot Standby
-Heterogeneous platform support (production on Linux, standby on Windows)
-Active Data Guard is another extension of the physical standby, which is open
in read-only mode while applying the logs in the background. This keeps the
database up to date and allows real-time queries during the managed recovery
process.
-Zero data loss DR protection across any distance without impacting performance

DATA GUARD 11g SYNCHRONOUS REDO TRANSFER PROCESS ARCHITECTURE (SYNC)-ZERO DATA LOSS

Sync process flow:

1 - The user initiates a transaction. The transaction is written to the redo
buffer. When the user commits the transaction, the LGWR process writes the redo
to the online redo log file.

2 - LNS (Log Writer Network Server) ships the committed redo to RFS (Remote
File Server), which writes it to the standby redo log file. With a physical
standby, the MRP (Managed Recovery Process) applies the redo to the standby
database; with a logical standby, this is done by the LSP (Logical Standby
Process).

3 - RFS sends an acknowledgement to LNS that the data has been processed
successfully. LNS passes this information to LGWR. Finally, the commit
acknowledgement is sent to the user who initiated the transaction.

Zero data loss is ensured by synchronous redo transfer, but there is a
disadvantage. If a network failure occurs between the production (primary)
database and the standby database, or the primary database cannot reach the
standby database, the primary database will hang until the standby responds.
In other words, the primary database cannot serve users. To avoid this
situation, we need to use the "NET_TIMEOUT" attribute. With it you determine
the timeout period: in case of an outage, the primary waits for the timeout
period and continues to serve once it expires. The default value is 180s in
10g and 30s in 11g.
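
As a sketch, NET_TIMEOUT is set as an attribute of the redo transport
destination on the primary; the service name and DB_UNIQUE_NAME below are
hypothetical placeholders:

SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby SYNC NET_TIMEOUT=30
     VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';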

DATA GUARD 11g ASYNCHRONOUS REDO TRANSFER PROCESS ARCHITECTURE (ASYNC)

Asynchronous redo transfer flow:

1 - The user initiates a transaction. The transaction is written to the redo
buffer. When the user commits the transaction, the LGWR process writes the redo
to the online redo log file.

2 - LNS (Log Writer Network Server) ships the committed redo to RFS (Remote
File Server), which writes it to the standby redo log file. With a physical
standby, the MRP (Managed Recovery Process) applies the redo to the standby
database; with a logical standby, this is done by the LSP (Logical Standby
Process).

3 - Once the redo buffer is recycled, LNS automatically reads from the online
redo log files and begins to send redo from the log files.

RFS does not send an acknowledgement to LNS that the data was processed
successfully.

This is the most commonly used architecture. Asynchronous redo transfer does
not guarantee zero data loss, but the system can be recovered with minimal
data loss.

g:\prints\core\difference between migrate and upgrade.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What is the difference between startup Upgrade and Migrate

startup migrate:
---------------
Used to upgrade a database up to and including 9i.

startup upgrade:
---------------
From 10g onwards, we use STARTUP UPGRADE to upgrade the database.

What happens internally when you use startup upgrade/migrate?
------------------------------------------------------------
It automatically adjusts a few database (init) parameters (irrespective of
what you have defined) to certain values in order to run the upgrade scripts
smoothly. In other words, it issues a few ALTER statements to set the
parameters that are required to complete the upgrade scripts without any
issues.
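
A minimal sketch of how STARTUP UPGRADE fits into a manual 11g upgrade,
assuming the instance is already running from the new ORACLE_HOME (note that
catupgrd.sql shuts the instance down when it completes):

SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql    -- recompile invalid objects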

g:\prints\core\flashbackp by qadar.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Flashback Database enables you to wind your entire database backward in time,
reversing the effects of unwanted database changes within a given time window.
Its effects are similar to conventional database point-in-time recovery,
allowing you to return a database to its state at a time in the recent past.
Flashback Database can be used to reverse most unwanted changes to a database,
as long as the datafiles are intact.

Note:

-The flashback log files are never archived - they are reused in a circular
manner.

-Redo log files are used to roll changes forward during recovery, while
flashback log files are used to roll changes backward during a flashback
operation.

-Flashing back a database is possible only when there is no media failure. If
you lose a data file or it becomes corrupted, you'll have to recover using a
restored data file from backups.

We can use Flashback Database in the following situations:

-To retrieve a dropped schema
-When a user error affects the entire database
-When we truncate a table in error
-When a batch job performs only partial changes

-Since we need the current data files in order to apply changes to them, we
can't use the Flashback Database feature in cases where a data file has been
damaged or lost.
-If we have a damaged disk drive, or if there is physical corruption (not
logical corruption due to application or user errors) in our database, we must
still use the traditional methods of restoring backups and applying archived
redo logs to perform the recovery.

Flashback Database provides:

-A very effective way to recover from complex human errors
-Faster database point-in-time recovery
-Simplified management and administration
-Little performance overhead

Flashback Levels:

1) Row level

- Flashback Query: Allows us to view old row data based on a point in time or
an SCN. We can view the older data and, if necessary, retrieve it and undo
erroneous changes.

- Flashback Versions Query: Allows us to view all versions of the same row over
a period of time so that we can undo logical errors. It can also provide an
audit history of changes, effectively allowing us to compare present data
against historical data without performing any DML activity.

- Flashback Transaction Query: Lets us view changes made at the transaction
level. This technique helps in the analysis and auditing of transactions, such
as when a batch job runs twice and we want to determine which objects were
affected. Using this technique, we can undo changes made by an entire
transaction during a specified period.
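
A minimal sketch of a row-level flashback query, assuming an emp table (the
same example table used later in these notes) and sufficient undo retention:

SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
     WHERE empid = 10;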

2) Table level

- Flashback Table: Restores a table to a point in time or to a specified SCN
without restoring data files. This feature uses DML changes to undo the changes
in a table. The Flashback Table feature relies on undo data.

- Flashback Drop: Allows us to reverse the effects of a DROP TABLE statement,
without resorting to a point-in-time recovery. The Flashback Drop feature uses
the Recycle Bin to restore a dropped table.
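
Minimal sketches of both table-level operations (the table name is just an
example, and row movement must be enabled before flashing a table back to an
SCN or timestamp):

SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE emp TO SCN <scn>;
SQL> FLASHBACK TABLE emp TO BEFORE DROP;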

3) Database level

- Flashback Database: Allows you to roll the database back to a time in the
past.

Useful if you have:

1. A dropped user
2. A truncated table
3. A batch job that made only partial changes

Flashback Database can be issued with three different clauses:

1. TO TIMESTAMP
2. TO SCN
3. TO SEQUENCE (log archive sequence)
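
Minimal sketches of each form (the database must be mounted, not open, and the
SCN and sequence values are placeholders):

SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);
SQL> FLASHBACK DATABASE TO SCN <scn>;
RMAN> FLASHBACK DATABASE TO SEQUENCE=<seq> THREAD=1;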

DETERMINE IF FLASHBACK DATABASE IS ALREADY ENABLED:

select flashback_on from v$database;

ENABLING FLASHBACK:
ALTER SYSTEM SET db_recovery_file_dest='/u01/flashy' SCOPE=spfile;
ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=spfile;
ALTER SYSTEM SET db_flashback_retention_target=1440; (default 1 day)

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;

Set db_recovery_file_dest to an appropriate location for the flashback
recovery files.
Set db_recovery_file_dest_size to an appropriate size for the amount and size
of the testing required.
Set db_flashback_retention_target to an appropriate time, in minutes, to
retain the ability to flash back.

CREATING A RESTORE POINT:

CREATE RESTORE POINT <restore point name> GUARANTEE FLASHBACK DATABASE;

HOW TO IDENTIFY AVAILABLE RESTORE POINTS:

SELECT NAME, TIME, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;

ROLLING BACK TO A RESTORE POINT:

SQLPLUS / AS SYSDBA
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT <restore point name>;
ALTER DATABASE OPEN RESETLOGS;

DROPPING A RESTORE POINT:

Restore points can be dropped dynamically, i.e. with the database open.
SQLPLUS / AS SYSDBA
DROP RESTORE POINT <restore point name>;
EXIT

MONITORING FLASHBACK LOGGING:

select estimated_flashback_size/1024/1024/1024 "EST_FLASHBACK_SIZE(GB)"
from v$flashback_database_log;

FINDING THE EARLIEST FLASHBACK POINT:

SQL> alter session set nls_date_format='dd/mm/yy hh24:mi:ss';
SQL> select oldest_flashback_scn, oldest_flashback_time from
v$flashback_database_log;

DISABLING FLASHBACK DATABASE:

Flashback can be disabled with the database open. Any unused Flashback logs will be
automatically removed at this point and a message detailing the file deletion
written to the alert log.
SQLPLUS / AS SYSDBA
ALTER DATABASE FLASHBACK OFF;
EXIT

Flashback Data Dictionary Views:


V$FLASHBACK_DATABASE_LOG, GV$FLASHBACK_DATABASE_LOG
V$FLASHBACK_DATABASE_STAT, GV$FLASHBACK_DATABASE_STAT
V$RECOVERY_FILE_DEST
V$FLASHBACK_DATABASE_LOGFILE, GV$FLASHBACK_DATABASE_LOGFILE
V$FLASH_RECOVERY_AREA_USAGE
V$RESTORE_POINT

g:\prints\core\flback.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
How to enable FLASHBACK in Oracle Database 11G R1 and below versions

1. The database has to be in ARCHIVELOG mode.

2. The Flash Recovery Area has to be configured. To configure, PFB steps:
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0

Currently flashback is disabled. To enable it:

A. Set the db_recovery_file_dest_size and db_recovery_file_dest initialization
parameters.

SQL> alter system set db_recovery_file_dest_size=2g;

System altered.

B. After the db_recovery_file_dest_size parameter has been set, create a
location in the OS where your flashback logs will be stored.

bash-3.2$ mkdir FLASHBACK
bash-3.2$ pwd

C. Now set db_recovery_file_dest initialization parameter.

SQL> alter system set db_recovery_file_dest='/orcl_db/FLASHBACK';
##########For Standalone database##########
System altered.

SQL> alter system set db_recovery_file_dest='/orcl_db/FLASHBACK' sid='*';
##########For RAC database##########
System altered.

SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/flashback
db_recovery_file_dest_size           big integer 10G
SQL>
3. Create an undo tablespace with enough space to keep data for flashback
operations. The more often users update the database, the more space is
required.

4. By default automatic undo management is enabled; if not, enable it. In 10g
Release 2 or later the default value of UNDO_MANAGEMENT is AUTO. If you are
using a lower release then enable it as follows (PFB):

SQL> alter system set undo_management=auto scope=spfile;

System altered.

5. Shut Down your database


SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down

6. Startup your database in MOUNT mode


SQL> startup mount;
ORACLE instance started.

Total System Global Area 1025298432 bytes
Fixed Size                  1341000 bytes
Variable Size             322963896 bytes
Database Buffers          696254464 bytes
Redo Buffers                4739072 bytes
Database mounted.

7. Change the Flashback mode of the database


SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
NO
SQL>alter database flashback ON;
Database altered.
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES
SQL> alter database open;
Database altered.

FLASHBACK mode of the database has been enabled.


How to disable FLASHBACK in Oracle Database 11G R1 and below versions

1. Shut Down your database


SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down

2. Startup your database in MOUNT mode


SQL> startup mount;
ORACLE instance started.
Total System Global Area 1025298432 bytes
Fixed Size 1341000 bytes
Variable Size 322963896 bytes
Database Buffers 696254464 bytes
Redo Buffers 4739072 bytes
Database mounted.

SQL> select flashback_on from v$database;


FLASHBACK_ON
------------------
YES

SQL>alter database flashback OFF;


Database altered.
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> alter database open;


Database altered.

FLASHBACK mode of the database has been disabled.

How to enable/disable FLASHBACK in Oracle Database 11G R2 and above versions.

From 11gR2 we do not have to bounce the database to alter flashback.

1. The database has to be in ARCHIVELOG mode.
To change the archive mode, refer to -- Change ARCHIVE mode of database.
2. The Flash Recovery Area has to be configured. To configure, PFB steps.
3. To enable or disable flashback, we can make the change while the database
is in open mode. PFB

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> alter database flashback on;

Database altered.

SQL> alter database flashback off;

Database altered.

FRA usage:
select * from v$flash_recovery_area_usage;

g:\prints\core\KEY ORACLE BACKGROUND PROCESSES RELATED TO BACKUP AND RECOVERY.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
KEY ORACLE BACKGROUND PROCESSES RELATED TO BACKUP AND RECOVERY
===============================================================:

The main background processes related to the backup & recovery process are:

A) THE CHECKPOINT PROCESS
B) THE LOG WRITER PROCESS
C) THE ARCHIVER PROCESS

Read further for brief description on these important oracle background processes.

A) THE CHECKPOINT PROCESS (ora_ckpt_<SID>)

The checkpoint process is a key database "concept" and it does three important
things:

- It signals the database writer process (DBWn) at each "checkpoint" to write
all modified buffers in the SGA buffer cache (the temporary location of DB
blocks) to the database data files (the permanent and original location of DB
blocks). After this has been done, the online redo log files can be recycled.
- It updates the datafile headers with the checkpoint information (even if the
file had no changed blocks).
- It updates the control files with the checkpoint information.

Set the parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start
and end times in the database alert log.

What causes Checkpoint?

i) The most common reason is a redo log switch.

You can switch logfile manually to check this in the alert log.

SQL> alter system switch logfile;

System altered.

Alert.log entry>>
-------------------
Mon Feb 03 14:24:49 2014
Beginning log switch checkpoint up to RBA [0x8.2.10], SCN: 1006600
Thread 1 advanced to log sequence 8 (LGWR switch)
Current log# 2 seq# 8 mem# 0: /u01/oracle/DB11G/oradata/brij/redo02.log
Mon Feb 03 14:24:49 2014
Archived Log entry 2 added for thread 1 sequence 7 ID 0x4c45a3de dest 1:
-------------------

ii) Checkpoints can also be forced with the ALTER SYSTEM CHECKPOINT; command.
We generally perform a checkpoint before taking backups. At some point in
time, the data that is currently in the buffer cache will be placed on disk;
we can force that to happen right now with a user-invoked checkpoint.

iii) There are incremental checkpoints controlled by parameters such as
FAST_START_MTTR_TARGET and other triggers that cause dirty blocks to be
flushed to disk.

Frequent checkpoints usually mean the redo log file size is small (and they
also mean a slow system). But if you make your redo log files very large, you
also increase the mean time to recover. So a DBA should determine the log file
size on the basis of various factors, such as the database type (DWH/OLTP
etc.), transaction volume, and the database behavior shown in alert log
messages.

CKPT actually took over one of LGWR's earlier responsibilities. LGWR was
responsible for updating the data file headers before database release 8.0,
but with increasing database sizes and numbers of data files this job was
given to the CKPT process.

B) THE LOG WRITER PROCESS (ora_lgwr_<SID>)

LGWR plays the important role of writing the data changes from the redo log
buffer to the online redo log files. Oracle's online redo log files record all
changes made to the database in sequential order (the SCN is the counter).

Why we are multiplexing only redo log files and not the datafiles?

Oracle uses a "write-ahead" protocol, meaning the logs are written to before
the datafiles are. Data changes aren't necessarily written to datafiles when
you commit a transaction, but they are always written to the redo log. Before
DBWn can write any of the changed blocks to disk, LGWR must flush the redo
information related to those blocks. Therefore, it is critical to always
protect the online logs against loss by ensuring they are multiplexed.

Redo log files come into play when a database instance fails or crashes. Upon
restart, the instance will read the redo log files looking for any committed
changes that need to be applied to the datafiles.

The log writer (LGWR) writes to the online redo files under the following
circumstances:

- At each commit
- Every three seconds
- When the redo log buffer is one-third full or contains 1MB of cached redo
log data
- When LGWR is asked to switch log files

Remember that the data will not reside in the redo buffer for very long. For these
reasons, having an enormous (hundreds/thousands of megabytes) redo log buffer is
not practical; Oracle will never be able to use it all since it pretty much
continuously flushes it.

On our 11gR2 database, the size is about 6MB. The minimum size of the default
log buffer is OS-dependent.

SQL> show parameter log_buffer;

NAME        TYPE     VALUE
----------- -------- ----------
log_buffer  integer  6266880

Or you can also see the value when the database is starting:

ORACLE instance started.

Total System Global Area  835104768 bytes
Fixed Size                  2257840 bytes
Variable Size             536874064 bytes
Database Buffers          289406976 bytes
Redo Buffers                6565888 bytes  <<<<< ~6MB
Database mounted.
Database opened.

LGWR does lots of sequential writes (a fast operation) to the redo log. This
is an important distinction and one of the reasons that Oracle has a redo log
and the LGWR process as well as the DBWn process. DBWn does lots of scattered
writes (a slow operation). The fact that DBWn does its slow job in the
background while LGWR does its faster job while the user waits gives us better
overall performance. Oracle could just write database blocks directly to disk
when you commit, but that would entail a lot of scattered I/O of full blocks,
and this would be significantly slower than letting LGWR write the changes out
sequentially.

Also, during a commit, the lengthiest operation is, and always will be, the
activity performed by LGWR, as this is physical disk I/O. For that reason LGWR
is designed so that, as you are processing and generating redo, it is
constantly flushing your buffered redo information to disk in the background.
So when the COMMIT comes, there is not much left to do, and the commit is fast.

REMEMBER that the purpose of the log buffer is to temporarily buffer transaction
changes and get them quickly written to a safe location on disk (online redo log
files), whereas the database buffer tries to keep blocks in memory as long as
possible to increase the performance of processes using frequently accessed blocks.

C) THE ARCHIVER PROCESS (ora_arc<n>_<SID>)

Although an optional process, ARCn should be considered mandatory for all
production databases!

The job of the ARCn process is to copy an online redo log file to another
location when LGWR fills it up, before it can be overwritten by new data.
The archiver background process is used only if you're running your database
in archivelog mode. These archived redo log files can then be used to perform
media recovery. Whereas the online redo log is used to fix the data files in
the event of a power failure (when the instance is terminated), archived redo
logs are used to fix data files in the event of a hard disk failure.

For example, if we lose the disk drive containing the system.dbf data file, we
can go to our old backups, restore that old copy of the file, and ask the
database to apply all of the archived and online redo logs generated since
that backup took place. This will catch that file up with the rest of the data
files in our database, and we can continue processing with no loss of data.
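
A minimal RMAN sketch of that scenario (the database must be mounted, since
the SYSTEM datafile cannot be taken offline while the database is open, and a
valid backup is assumed; file 1 is conventionally the SYSTEM datafile):

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATAFILE 1;
RMAN> RECOVER DATAFILE 1;
RMAN> ALTER DATABASE OPEN;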

ARCH copies the online redo log a bit more intelligently than the operating
system command cp or copy would: if a log switch is forced, only the used
space of the online log is copied, not the entire log file.

CHECK WHETHER THE DATABASE IS IN ARCHIVELOG MODE:

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

CHECK THE NUMBER OF ARCH PROCESSES IN THE DATABASE:

SQL> select * from v$archive_processes where STATUS='ACTIVE';

PROCESS STATUS LOG_SEQUENCE STAT ROLES
------- ------ ------------ ---- ------------
      0 ACTIVE            0 IDLE
      1 ACTIVE            0 IDLE HEART_BEAT
      2 ACTIVE            0 IDLE NO_FAL NO_SRL
      3 ACTIVE            0 IDLE
g:\prints\core\Oracle Database Architecture.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oracle Server:

An Oracle server is a collection of database units, and it provides a
comprehensive, integrated approach to information management.
It consists of an "Instance & Database".

Oracle Instance:

An instance is the means of accessing an Oracle database.
It always opens one and only one database.
It consists of two parts:

-Memory Structures
-Background Processes

Memory Structure:

System Global Area (SGA)
Program Global Area (PGA)

(I) System Global Area: Once the instance is started, memory is allocated to
the SGA. It is the basic component of an Oracle instance, and its size depends
on the available RAM. The Oracle 10g parameters for the SGA and PGA are
sga_target, sga_max_size, and pga_aggregate_target.
It consists of:

-Shared Pool
- Database Buffer Cache
- Redolog Buffer Cache
- Large pool
- Stream pool
- Java pool

Here we can see each component in detail.

(1) Shared Pool:
- Its parameter is shared_pool_size
- It consists of the Library Cache and the Data Dictionary Cache

I) Library Cache:

- It stores information about recently used SQL and PL/SQL statements
- Here it performs some of the following checks:

1) Semantic checking - checks the privileges of the user who issued the command
2) Syntax checking - checks the syntax of the user-issued command
3) Soft parse - an already-executed SQL statement is reused
4) Hard parse - a new SQL statement is parsed from scratch

II) Data Dictionary Cache:

- It stores the collection of the most recently used definitions in the
database, including db files, tables, indexes, columns, etc.
- It holds information about the database and is read-only

(2) Database Buffer Cache:

- It stores copies of data blocks that have been retrieved from the database
datafiles
- Its parameters: db_block_size (8 KB is the default block size) and
db_cache_size (show parameter db_cache_size)

(3) Redo Log Buffer Cache (Recovery Mechanism):

- It maintains records of modifications to database blocks
- Its primary purpose is recovery
show parameter log_buffer

(4) Large Pool:

- Parallel execution allocates buffers out of the large pool only when
sga_target is set
- It works to relieve the burden on the shared pool
show parameter parallel_automatic_tuning

(5) Java Pool:

- Used for the parsing requirements of Java commands
- Required when Java-based products are installed
show parameter java_pool_size

(6) Stream Pool:

- It caches "Oracle Streams" objects
- Oracle Streams allows data sharing between Oracle databases, or between
Oracle and non-Oracle databases. It can be used for replication, message
queuing, loading data into a data warehouse, event notification, and data
protection.

Automatic Shared Memory Management (ASMM) was introduced in Oracle 10g.
It is taken care of by Oracle, which allocates the sizes of the SGA
components. ASMM takes care of:

1)Shared pool
2)Library cache
3)Database buffer cache
4)Large pool
5)Java Pool
6)Stream Pool

(II) Program Global Area

- Memory reserved for each user process connecting to an Oracle database
- Memory is allocated when a process is created
- Memory is deallocated when the process is terminated

Process Structure:

1) USER PROCESS:
- A program that requests interaction with the Oracle server
- It must first establish a connection
- It does not interact directly with the Oracle server

2) SERVER PROCESS:
- It directly interacts with the Oracle server
- It can be a dedicated or shared server
- It always responds to user requests

3) BACKGROUND PROCESSES:

- They enforce the relationship between the memory structures and the database
- To view all background processes:
!ps -ef | grep databasename

Some of the background processes are:

1) DBWR
2) LGWR
3) SMON
4) PMON
5) CKPT

Let us see each of these components:

1) DBWR:

- Timeout
- Tablespace offline
- Tablespace read only
- Tablespace drop or truncate
In the above situations, data is flushed from the database buffer cache into
the data files.

2) LGWR:

- At commit
- Every 3 seconds
- When 1MB of redo has accumulated
- When the redo log buffer is one-third full
- Before DBWR writes
In the above situations, redo is written by LGWR from the redo log buffer to
the online redo log files.

3) SMON:

- The system monitor; it monitors the system
- Performs instance recovery
- Rolls forward changes from the redo logs
- Opens the database for user access
- Rolls back uncommitted transactions

4) PMON:

- The process monitor; it takes care of failed processes
- Cleans up after a failed process
- Rolls back the failed process's work

5) CKPT:

- Updates the control file with checkpoint information
- A checkpoint is the event at which DBWR writes all modified buffers in the
SGA cache into the data files

Alter system checkpoint;

Database:
The database is a collection of data, comprising data files, control files,
and redolog files.

1) Data file:
- It is a portion of an Oracle database; it stores data, which includes user
data and undo data
- Its extension is ".dbf"
- The default location is "$ORACLE_BASE/oradata"
- To view the locations in the database, use this command:
Select name from V$datafile;

2) Control file:

- It is the heart of the database
- It holds the locations of the data files and redo log files, plus backup
information such as start and end times
- Its extension is ".ctl"
Show parameter control_files
- By default Oracle copies a control file into the flash_recovery_area

3) Redo log file:

- It is part of an Oracle database
- Its main purpose is recovery of the database
- Its extension is ".log"
- When a transaction is committed, its details in the redo log buffer are
written to a redo log file
select * from V$log; or Select * from V$logfile;

4) Archive log

- A copy of filled redo log files written to one or more offline destinations,
known collectively as the archived redo log
- Its default location is the Flash_recovery_area
- Archive log mode must be enabled in the database; only then are logs saved
in the archive log folder. Otherwise LGWR overwrites the redo log files from
the log buffer.

The three major pieces of any database are:

1) Storage
2) Memory
3) Process

i) Storage

Datafile:
datafiles --- inside the datafiles you have data; the data for tables and also
for indexes is stored in datafiles.
undo data --- undo information is also stored in datafiles (the undo
tablespace).
temporary data --- whenever Oracle does a sort and can't store all the
information in memory (in the PGA), it is going to write to temporary files.

* In Oracle 7, 8 and 8i you could have 1,022 datafiles; now you can have
65,536 datafiles.
* dba_data_files
* v$datafile
* v$dbfile

Controlfile:
controlfile ----- it contains the structure of your database
* Oracle recommends at least two/three controlfiles in different locations.
* The information in all the controlfiles is the same.
* Inside the controlfile it stores the:
db name
db creation time
entire path of your datafile locations
checkpoint information
v$controlfile
* control_files is a very important parameter

Online Redologs:
online redologs ---
* all the DML and DDL commands are recorded (undo and redo)
* all the changes made to the database are stored in the redolog files
* it is a recorder of the major changes in your database
* Oracle recommends that you multiplex your redologs in groups in different
locations
* mainly used in recovery
* if archivelog mode is enabled, all the information from the redologs is
stored in archivelog files

what is an oracle database?

An Oracle database comprises three types of files:

datafiles (.dbf)
controlfiles (.ctl)
redologs (.rdo)

Related files include:

archivelog files
spfile
init.ora file
oracle password file

ii) Memory

SGA --- shared global area or system global area


database buffer cache

g:\prints\core\rman backup as copy.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
RMAN BACKUP AS COPY
++++++++++++++++++++:

By default the BACKUP command in RMAN creates BackupSet(s) -- each of which is
one or more BackupPiece(s). A datafile may span BackupPieces but may not span
a BackupSet.

However, RMAN does allow another method -- BACKUP AS COPY. This is analogous
to "User Managed Backups" created with OS commands -- except that the ALTER
TABLESPACE | DATABASE BEGIN BACKUP command does not have to be issued.

BACKUP AS COPY creates a byte-for-byte copy of each datafile [except, of
course, for blocks being modified by concurrent writes to the datafile].

If an active datafile is corrupted, the DBA can choose to SWITCH TO COPY
instead of having to restore the datafile from backup. Thus, a switch can be a
fast operation. Obviously, the DBA must plan carefully where he creates such
copies if he intends to SWITCH any time later (he wouldn't keep a datafile
copy on a storage target not protected by RAID or ASM redundancy).
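
A minimal sketch of the copy-and-switch flow (the destination path and
datafile number are illustrative only):

RMAN> BACKUP AS COPY DATAFILE 4 FORMAT '/u02/copies/%U';

# later, after datafile 4 is damaged:
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';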

g:\prints\core\sql_query_execution.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
All SQL statements have to go through the following stages of execution:

1) PARSE:
Every SQL statement has to be parsed, which includes checking the syntax,
validating it, ensuring that all references to objects are correct, and
ensuring that the relevant privileges on those objects exist.

2) BIND:
After parsing, the Oracle server knows the meaning of the statement but still
may not have enough information (values for variables) to execute it. The
process of obtaining these values is called binding.

3) EXECUTE:
After binding, the Oracle server executes the statement.

4) FETCH:
In the fetch stage, rows are selected and ordered, and each successive fetch
retrieves another row of the result until the last row has been fetched. This
stage applies only to certain DML statements, such as SELECT.

How a SQL statement is processed in Oracle:

sqlplus scott/tiger@prod

SQL> select * from emp;
SQL> update emp set salary=30000 where empid=10;
SQL> commit;

So let us understand what happens internally.

1. Once we issue the sqlplus command above, the client (user) process contacts
the SQL*Net listener.
2. The listener confirms that the DB is open for business and creates a server
process.
3. The server process allocates the PGA.
4. A "Connected" message is returned to the user.
5. SQL> select * from emp;
6. The server process checks the SGA to see if the data is already in the
buffer cache.
7. If not, the data is retrieved from disk and copied into the SGA (DB cache).
8. The data is returned to the user via the PGA and server process.
9. The next statement is SQL> update emp set salary=30000 where empid=10;
10. The server process (via the PGA) checks the SGA to see if the data is
already in the buffer cache.
11. In our situation, chances are the data is still in the SGA (DB cache).
12. The data is updated in the DB cache and marked as a "dirty buffer".
13. The update is placed into the redo buffer.
14. A "row updated" message is returned to the user.
15. SQL> commit;
16. The newest SCN is obtained from the control file.
17. The data in the DB cache is marked as "updated and ready for saving".
18. The commit is placed into the redo buffer.
19. LGWR writes the redo buffer contents to the redo log files and removes
them from the redo buffer.
20. The control file is updated with the new SCN.
21. A "commit complete" message is returned to the user.
22. The emp table is updated in the datafile, and the datafile header is
updated with the latest SCN.
23. SQL> exit;
24. Unsaved changes are rolled back.
25. The server process deallocates the PGA.
26. The server process terminates.
27. After some period of time the redo logs are archived by the ARCH process.

Question: What are the internal SQL execution steps? How does Oracle translate a
table name into a read request from a physical datafile?

Answer: Between hitting "enter" and seeing your results, there are many steps in
processing a SQL statement.

All Oracle SQL statements must be processed the first time that they execute
(unless they are cached in the library cache). The SQL execution steps include:

A syntax check - are all keywords present ("select . . . from", etc.)?
A semantic check against the dictionary - are all table names spelled
correctly, etc.?
The creation of the cost-based decision tree of possible plans
The generation of the lowest-cost execution plan
Binding the execution plan - this is where the table --> tablespace -->
datafile translation occurs
Executing the query and fetching the rows
Once the execution plan is created, it is stored in the library cache (part of the
shared_pool_size) to facilitate re-execution. There are two types of parses:

Hard parse - A new SQL statement must be parsed from scratch. (See hard parse
ratio, comparing hard parses to executes).

Soft parse - A reentrant SQL statement where the only unique features are the
host variables. (See soft parse ratio, comparing soft parses to executes.)

Excessive parsing can occur when your shared_pool_size is too small (and
reentrant SQL is paged out), or when you have non-reusable SQL statements
without host variables. See the cursor_sharing parameter for an easy way to
make SQL reentrant, and remember that you should always use host variables in
your SQL so that it can be reentrant.
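
As a small illustration of host (bind) variables in SQL*Plus, reusing the emp
example from above (the table and column names are the ones used in that
example):

SQL> variable v_empid number
SQL> exec :v_empid := 10
SQL> select * from emp where empid = :v_empid;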

g:\prints\core\start and stop container databases.txt


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Start and Shutdown Pluggable Database
++++++++++++++++++++++++++++++++++++++:

1) You can start or shut down directly from inside the pluggable database.

SQL> alter session set container=PDB2;


Session altered.

SQL> shutdown immediate;


Pluggable Database closed.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       MOUNTED

You can start it the same way.

SQL> startup
Pluggable Database opened.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

2) Alternatively, you can use the "alter pluggable database" command from the
root container to start and shut down a pluggable database. You connect to the
container database.

SQL> alter pluggable database PDB1 close;


Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       READ WRITE

You can start the pluggable database the same way.

SQL> alter pluggable database PDB1 open;


Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

You may have many pluggable databases in the container database, and shutting
them down one by one would be tedious. We can shut down all pluggable
databases with one command from the root container.

SQL> alter pluggable database all close;


Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       MOUNTED

To open all PDBs:

SQL> alter pluggable database all open;


Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

You may want to close all pluggable databases except one. You can do this with
the EXCEPT clause, as follows.

SQL> alter pluggable database all except PDB2 close;


Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       READ WRITE

NOTE: To open all pluggable databases in the same way, use the "open" keyword
instead of the "close" keyword.

Or you can specify a list of PDBs on which to perform the operation.

SQL> alter pluggable database pdb1,pdb2 close;


Pluggable database altered.

NOTE: When you shut down the container database, all PDBs shut down too. But
when you start the container database, the PDBs do not start automatically. To
start them we must intervene manually, or we can create a trigger as follows.

CREATE OR REPLACE TRIGGER open_pdbs
AFTER STARTUP ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_pdbs;
/

g:\prints\core\stdby issue.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
STANDBY_FILE_MANAGEMENT parameter is not configured to AUTO

If you have not set standby_file_management to AUTO on the primary and you add
a datafile to the primary, you will see the following in the alert log of the
standby:

File #9 added to control file as 'UNNAMED00009' because
the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
The file should be manually created to continue.
MRP0: Background Media Recovery terminated with error 1274

This will stop your standby database from being in recovery mode until it is
fixed, so if you are using OMF do the following to fix it:

alter database create datafile
'/u01/app/oracloud/product/12.1.0/db_1/dbs/UNNAMED00009' as new;

Also change the following properties using the Data Guard broker:
edit database clouddr set property StandbyFileManagement=AUTO;
edit database cloud set property StandbyFileManagement=AUTO;
then restart the Data Guard broker.
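
To avoid the problem in the first place, the parameter can also be set
directly in SQL*Plus on the primary and standby; a generic sketch:

SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;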
