
Document Created By PAWAN MISHRA ( pawanmishra.dba@gmail.com )

Inside Oracle Clusterware:


Oracle Clusterware consists of two separate stacks:

1. The upper stack, anchored by the Cluster Ready Services daemon (crsd). The Cluster Ready Services
(CRS) stack manages cluster resources based on the configuration information that is stored in OCR for each
resource. This includes start, stop, monitor, and failover operations.

2. The lower stack, anchored by the Oracle High Availability Services daemon (ohasd). The Oracle High
Availability Services stack is responsible for monitoring and maintaining the high availability of Oracle ASM and
of Oracle Clusterware itself.
These two stacks have several processes that facilitate cluster operations.

Level 0: The OS automatically starts Clusterware through the OS init process. The init process spawns only one
init.ohasd, which in turn starts the OHASD process. This is configured in the /etc/inittab file.
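On Linux, the /etc/inittab entry for init.ohasd typically looks like the following sketch; the exact runlevels and path can vary by platform and version:

```
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
```

The respawn action tells init to restart init.ohasd automatically if it dies, which is what keeps the lower stack resilient.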

Level 1: OHASD directly spawns four agent processes:

• cssdmonitor: Cluster Synchronization Service Daemon Monitor

• OHASD orarootagent: Oracle High Availability Services stack Oracle root agent

• OHASD oraagent: Oracle High Availability Services stack Oracle agent



• cssdagent: Cluster Synchronization Services Daemon Agent

Level 2: On this level, OHASD oraagent spawns five processes:

• ASM: resource for monitoring ASM instances

• mDNSD: mDNS daemon process

• GIPCD: Grid Interprocess Communication daemon

• GPnPD: GPnP profile daemon

• EVMD: Event Monitor daemon

Then, OHASD orarootagent spawns the following processes:

• CRSD: CRS daemon

• CTSSD: Cluster Time Synchronization Services daemon

• Diskmon: Disk Monitor daemon (for Exadata Storage Server storage)

• ACFS: ASM Cluster File System drivers

Next, the cssdagent starts the CSSD (CSS daemon) process.

Level 3: The CRSD spawns two CRSD agents: CRSD orarootagent and CRSD oraagent.

Level 4: On this level, the CRSD orarootagent starts various resources:

• Network resource: for the public network

• SCAN VIPs

• Node VIPs: one for each node

• GNS VIP: VIP for GNS, if you use the GNS option

• ACFS Registry

Then, the CRSD oraagent starts the remaining resources, such as database instances, listeners, and services.
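The level hierarchy above can be sketched as a simple parent-child tree. This is only an illustration of the ordering described in the text; the names are labels taken from these notes, not actual Clusterware resource identifiers:

```python
# Hypothetical sketch of the Clusterware startup tree described above.
# Keys are parent processes; values are the children they spawn.
STARTUP_TREE = {
    "init": ["init.ohasd"],
    "init.ohasd": ["ohasd"],
    "ohasd": ["cssdmonitor", "ohasd_orarootagent", "ohasd_oraagent", "cssdagent"],
    "ohasd_oraagent": ["asm", "mdnsd", "gipcd", "gpnpd", "evmd"],
    "ohasd_orarootagent": ["crsd", "ctssd", "diskmon", "acfs_drivers"],
    "cssdagent": ["cssd"],
    "crsd": ["crsd_orarootagent", "crsd_oraagent"],
    "crsd_orarootagent": ["network", "scan_vips", "node_vips", "gns_vip", "acfs_registry"],
}

def startup_order(root="init"):
    """Return processes in the order they are started (breadth-first by level)."""
    order, queue = [], [root]
    while queue:
        proc = queue.pop(0)
        order.append(proc)
        queue.extend(STARTUP_TREE.get(proc, []))
    return order

print(startup_order())
```

Walking the tree breadth-first reproduces the Level 0 through Level 4 ordering: ohasd always precedes crsd, and crsd precedes the two CRSD agents.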

RAC Startup Sequence:

The Oracle High Availability Services daemon (ohasd) is responsible for starting, in the proper order, monitoring, and
restarting the other local Oracle Clusterware daemons, up through the crsd daemon, which in turn manages cluster-wide
resources. OHASD uses and maintains the information in the OLR.

When a cluster node boots, or when Clusterware is started on a running node, the init process starts ohasd. The
ohasd process then initiates the startup of the processes in the lower, or Oracle High Availability Services (OHASD), stack.

• The cssdagent process is started, which in turn monitors, starts, and stops the cssd.

• The cssd process manages and monitors node membership in the cluster and updates node status
information in the voting disk. It discovers the voting disk either in ASM or on shared storage, and then joins the cluster.

• The orarootagent is started. This process is a specialized oraagent process that helps crsd start and
manage resources owned by root, such as the network and the grid virtual IP address.

• The oraagent process is started. It is responsible for starting processes that do not need to run as
root. The oraagent process extends Clusterware to support Oracle-specific requirements and complex
resources. It runs server callout scripts when FAN events occur. This functionality was known as
RACG in Oracle Clusterware 11g Release 1 (11.1).

• The cssdmonitor process is started and works with the cssdagent process to provide I/O fencing, ensuring data
integrity by rebooting the RAC node if there is a problem with the ocssd.bin process, CPU starvation, or
an OS lockup. Both cssdagent and cssdmonitor were introduced in 11gR2 and replace the
Oracle Process Monitor daemon (oprocd) used in 11gR1.

The orarootagent process is responsible for starting the following processes:



• osysmond: The system monitor service (osysmond) is the monitoring and operating system
metric collection service that sends data to the cluster logger service, ologgerd. The cluster
logger service receives information from all the nodes and persists it in the Cluster Health
Monitor (CHM) repository. There is one system monitor service on every node.

• ologgerd: The cluster logger service (ologgerd) runs on only one node in the cluster; another
node is chosen by the cluster logger service to house the standby for the master
cluster logger service. If the master cluster logger service fails, the node where the standby
resides takes over as master and selects a new node for the standby. The master manages the
operating system metric database in the CHM repository and interacts with the standby to
maintain a replica of the master operating system metrics database.
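The master/standby handover just described can be sketched as a small election step. This is an illustrative model only; the node names are invented, and real ologgerd placement uses internal cluster messaging:

```python
# Hypothetical sketch of the ologgerd master/standby failover described above.
def fail_over(nodes, master, standby):
    """When the master node fails, the standby takes over as master and a new
    standby is chosen from the remaining nodes (None if no candidate is left)."""
    survivors = [n for n in nodes if n != master]
    new_master = standby
    candidates = [n for n in survivors if n != new_master]
    new_standby = candidates[0] if candidates else None
    return new_master, new_standby

nodes = ["rac1", "rac2", "rac3"]
print(fail_over(nodes, master="rac1", standby="rac2"))
```

In a two-node cluster the promoted master is left without a standby, which matches the rule that the standby lives on a different node than the master.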

• crsd: The Cluster Ready Services (CRS) process is the primary program for managing high
availability operations in a cluster. The CRS daemon (crsd) manages cluster resources based
on the configuration information stored in OCR for each resource. This includes start, stop,
monitor, and failover operations. The crsd process generates events when the status of a
resource changes. When Oracle RAC is installed, the crsd process monitors the Oracle
database components and automatically restarts them when a failure occurs.

• diskmon: The diskmon process monitors and performs I/O fencing for Oracle Exadata.

• ACFS Drivers: These drivers are loaded in support of ASM Dynamic Volume Manager (ADVM)
and ASM Cluster File System (ACFS).

• ctssd: The Cluster Time Synchronization Service process provides time synchronization for the
cluster in the absence of ntpd. If ntpd is configured, ctssd will run in observer mode.
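The ctssd mode decision can be stated as a one-line rule. This is a sketch of the behavior described above, not ctssd's actual detection logic (which also inspects NTP configuration files):

```python
# Hypothetical sketch: ctssd runs in observer mode when an NTP daemon is
# configured on the node, and in active mode (synchronizing clocks) otherwise.
def ctssd_mode(ntpd_configured: bool) -> str:
    return "observer" if ntpd_configured else "active"

print(ctssd_mode(True))
```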

The oraagent process started by ohasd is responsible for starting the following processes:

• gipcd: The Grid Interprocess Communication (GIPC) daemon is a support process that enables
Redundant Interconnect Usage. Redundant Interconnect Usage enables load-balancing and
high availability across multiple (up to four) private networks (also known as interconnects).

• mdnsd: The Multicast Domain Name Service (mDNS) daemon is used by Grid Plug and Play to
locate profiles in the cluster, as well as by GNS to perform name resolution.

• evmd: The Event Management (EVM) daemon is a background process that publishes the
events that Oracle Clusterware creates.

• ASM: ASM provides disk management for Oracle Clusterware and Oracle Database.

• gpnpd: Grid Plug and Play (GPnPD) provides access to the Grid Plug and Play profile and
coordinates updates to the profile among the nodes of the cluster to ensure that all the
nodes have the most recent profile.

The crsd process starts another orarootagent process and another oraagent process. The new orarootagent process is
responsible for starting the following resources:

• Node vip: The node vip is a node application (nodeapp) responsible for eliminating response
delays (TCP timeouts) to client programs requesting a connection to the database. Each node
vip is assigned an unused IP address. This is usually done via DHCP but can be manually
assigned. There is initially one node vip per cluster node at Clusterware startup. When a
cluster node becomes unreachable, the node vip is failed over to a surviving node and
redirects connection requests made to the unreachable node to a surviving node.

• SCAN vip: SCAN vips or Single Client Access Name vips are part of a connection framework
that eliminates dependencies on static cluster node names. This framework allows nodes to
be added to or removed from the cluster without affecting the ability of clients to connect to
the database. If GNS is used in the cluster, three SCAN vips are started on the member nodes
using the IP addresses assigned by the DHCP server. If GNS is not used, SCAN vip addresses
for the cluster can be defined in the DNS server used by the cluster nodes.

• Network: Network resources required by the cluster are started.

• GNS Vip: If GNS is used to resolve client requests for the cluster, a single GNS vip for the
cluster is started. The IP address is assigned in the GNS server used by the cluster nodes.
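The node VIP behavior described above (relocating a failed node's VIP so clients get a fast error instead of a TCP timeout) can be sketched as a simple reassignment. VIP and node names here are invented for illustration; real placement is managed by the CRSD orarootagent:

```python
# Hypothetical sketch of node VIP failover.
def fail_over_vips(vip_owner, failed_node, surviving_nodes):
    """Move every VIP hosted on the failed node to a surviving node."""
    new_owner = dict(vip_owner)
    for vip, node in vip_owner.items():
        if node == failed_node:
            new_owner[vip] = surviving_nodes[0]
    return new_owner

vips = {"rac1-vip": "rac1", "rac2-vip": "rac2"}
print(fail_over_vips(vips, failed_node="rac1", surviving_nodes=["rac2"]))
```

After failover the surviving node answers on both VIP addresses, so a client connecting to the dead node's VIP is refused immediately rather than waiting out a timeout.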

