
Oracle Database 10g: Real Application Clusters

Volume 1 Student Guide

D17276GC10 Edition 1.0 January 2005 D40139

Authors
James Womack
Jean-Francois Verrier

Technical Contributors and Reviewers
Troy Anthony
Harald van Breederode
Bill Bridge
Michael Cebulla
Carol Colrain
Jonathan Creighton
Joel Goodman
Yunrui Li
Yi Lu
Vijay Lunawat
Paul Manning
John McHugh
Erik Peterson
Javier Seen
Nitin Vengurlekar

Publisher
Joseph Fernandez

Copyright 2005, Oracle. All rights reserved.

This documentation contains proprietary information of Oracle Corporation. It is provided under a license agreement containing restrictions on use and disclosure and is also protected by copyright law. Reverse engineering of the software is prohibited. If this documentation is delivered to a U.S. Government Agency of the Department of Defense, then it is delivered with Restricted Rights and the following legend is applicable:

Restricted Rights Legend

Use, duplication or disclosure by the Government is subject to restrictions for commercial computer software and shall be deemed to be Restricted Rights software under Federal law, as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, Rights in Technical Data and Computer Software (October 1988).

This material or any portion of it may not be copied in any form or by any means without the express prior written permission of Oracle Corporation. Any other copying is a violation of copyright law and may result in civil and/or criminal penalties.

If this documentation is delivered to a U.S. Government Agency not within the Department of Defense, then it is delivered with Restricted Rights, as defined in FAR 52.227-14, Rights in Data-General, including Alternate III (June 1987).

The information in this document is subject to change without notice. If you find any problems in the documentation, please report them in writing to Education Products, Oracle Corporation, 500 Oracle Parkway, Box SB-6, Redwood Shores, CA 94065. Oracle Corporation does not warrant that this document is error-free.

All references to Oracle and Oracle products are trademarks or registered trademarks of Oracle Corporation. All other products or company names are used for identification purposes only, and may be trademarks of their respective owners.

Contents
I Introduction Overview I-2 What Is a Cluster? I-3 What Is Oracle Real Application Clusters? I-4 Why Use RAC? I-5 Clusters and Scalability I-6 Levels of Scalability I-7 Scaleup and Speedup I-8 Speedup/Scaleup and Workloads I-9 A History of Innovation I-10 Course Objectives I-11 Typical Schedule I-12 Architecture and Concepts Objectives 1-2 Complete Integrated Cluster Ware 1-3 RAC Software Principles 1-4 RAC Software Storage Principles 1-5 OCR Architecture 1-6 RAC Database Storage Principles 1-7 RAC and Shared Storage Technologies 1-8 Oracle Cluster File System 1-10 Automatic Storage Management 1-11 Raw or CFS? 1-12 Typical Cluster Stack with RAC 1-13 RAC Certification Matrix 1-14 The Necessity of Global Resources 1-15 Global Resources Coordination 1-16 Global Cache Coordination: Example 1-17 Write to Disk Coordination: Example 1-18 RAC and Instance/Crash Recovery 1-19 Instance Recovery and Database Availability 1-21 Efficient Inter-Node Row-Level Locking 1-22 Additional Memory Requirement for RAC 1-23 Parallel Execution with RAC 1-24 Global Dynamic Performance Views 1-25 RAC and Services 1-26 Virtual IP Addresses and RAC 1-27 Database Control and RAC 1-28 Summary 1-29


RAC Installation and Configuration (Part I) Objectives 2-2 Oracle Database 10g RAC Installation: New Features 2-3 Oracle Database 10g RAC Installation: Outline 2-5 Preinstallation Tasks 2-6 Hardware Requirements 2-7 Network Requirements 2-8 RAC Network Software Requirements 2-9 Package Requirements 2-10 hangcheck-timer Module Configuration 2-11 Required UNIX Groups and Users 2-12 The oracle User Environment 2-13 User Shell Limits 2-14 Configuring User Equivalency 2-15 Required Directories for the Oracle Database Software 2-17 Linux Kernel Parameters 2-19 Cluster Setup Tasks 2-21 Obtaining OCFS 2-22 Installing the OCFS RPM Packages 2-23 Starting ocfstool 2-24 Generating the ocfs.conf File 2-25 Preparing the Disks 2-26 Loading OCFS at Startup 2-27 Mounting OCFS on Startup 2-28 Using Raw Partitions 2-29 Binding the Partitions 2-31 Raw Device Mapping File 2-33 Installing Cluster Ready Services 2-35 Specifying the Inventory Directory 2-36 File Locations and Language Selection 2-37 Cluster Configuration 2-38 Private Interconnect Enforcement 2-39 Oracle Cluster Registry File 2-40 Voting Disk File 2-41 Summary and Install 2-42 Running the root.sh Script on All Nodes 2-43 Verifying the CRS Installation 2-44 Summary 2-46 Practice 2: Overview 2-47


RAC Installation and Configuration (Part II) Objectives 3-2 OUI Database Configuration Options 3-3 Install the Database Software 3-4 Specify File Locations 3-5 Specify Cluster Installation 3-6 Select Installation Type 3-7 Products Prerequisite Check 3-8 Select Database Configuration 3-9 Check Summary 3-10 The root.sh Script 3-11 Launching the VIPCA with root.sh 3-12 VIPCA Network Interface Discovery 3-13 VIP Configuration Data and Summary 3-14 Installation Progress 3-15 End of Installation 3-16 Database Preinstallation Tasks 3-17 Creating the Cluster Database 3-19 Node Selection 3-20 Select Database Type 3-21 Database Identification 3-22 Cluster Database Management Method 3-23 Passwords for Database Schema Owners 3-24 Storage Options for Database Files 3-25 Database File Locations 3-27 Flash Recovery Area 3-28 Database Components 3-29 Database Services 3-30 Initialization Parameters 3-31 Database Storage Options 3-32 Create the Database 3-33 Monitor Progress 3-34 Manage Default Accounts 3-35 Postinstallation Tasks 3-36 Patches and the RAC Environment 3-37 Inventory List Locks 3-38 Summary 3-39 Practice 3: Overview 3-40

RAC Database Instances Administration Objectives 4-2 The EM Cluster Database Home Page 4-3 Cluster Database Instance Home Page 4-5 Cluster Home Page 4-6 The Configuration Section 4-7 Operating System Details Page 4-8 Performance and Targets Pages 4-9 Starting and Stopping RAC Instances 4-10 Starting and Stopping RAC Instances with EM 4-11 Starting and Stopping RAC Instances with SQL*Plus 4-12 Starting and Stopping RAC Instances with SRVCTL 4-13 RAC Initialization Parameter Files 4-14 SPFILE Parameter Values and RAC 4-15 EM and SPFILE Parameter Values 4-16 RAC Initialization Parameters 4-18 Parameters Requiring Identical Settings 4-20 Parameters Requiring Unique Settings 4-21 Adding a Node to a Cluster 4-22 Adding a Node to an Existing Cluster 4-23 Adding the RAC Software to the New Node 4-25 Reconfigure the Listeners 4-27 Add an Instance by Using DBCA 4-28 Deleting Instances from a RAC Database 4-29 Node Addition and Deletion and the SYSAUX Tablespace 4-31 Quiescing RAC Databases 4-32 How SQL*Plus Commands Affect Instances 4-33 Administering Alerts with Enterprise Manager 4-34 Viewing Alerts 4-35 Blackouts and Scheduled Maintenance 4-37 Summary 4-39 Practice 4: Overview Administering Storage in RAC (Part I) Objectives 5-2 What Is Automatic Storage Management? 5-3 ASM: Key Features and Benefits 5-4 ASM: New Concepts 5-5 ASM: General Architecture 5-6 ASM Instance and Crash Recovery in RAC 5-8 ASMLibs 5-9


Oracle Linux ASMLib Installation: Overview 5-10 Oracle Linux ASMLib Installation 5-11 ASM Library Disk Creation 5-13 ASM Administration 5-15 ASM Instance Functionalities 5-16 ASM Instance Creation 5-17 ASM Instance Initialization Parameters 5-18 RAC and ASM Instances Creation 5-19 ASM Instance Initialization Parameters and RAC 5-20 Discovering New ASM Instances with EM 5-21 Accessing an ASM Instance 5-22 Dynamic Performance View Additions 5-23 ASM Home Page 5-24 ASM Performance Page 5-25 ASM Configuration Page 5-26 Starting Up an ASM Instance 5-27 Shutting Down an ASM Instance 5-28 ASM Administration 5-29 ASM Disk Group 5-30 Failure Group 5-31 Disk Group Mirroring 5-32 Disk Group Dynamic Rebalancing 5-33 ASM Administration Page 5-34 Create Disk Group Page 5-35 ASM Disk Groups with EM in RAC 5-36 Disk Group Performance Page and RAC 5-37 Create or Delete Disk Groups 5-38 Adding Disks to Disk Groups 5-39 Miscellaneous Alter Commands 5-40 Monitoring Long-Running Operations Using V$ASM_OPERATION 5-42 ASM Administration 5-43 ASM Files 5-44 ASM File Names 5-45 ASM File Name Syntax 5-46 ASM File Name Mapping 5-48 ASM File Templates 5-49 Template and Alias: Examples 5-50 Retrieving Aliases 5-51 SQL Commands and File Naming 5-52 DBCA and Storage Options 5-53 Database Instance Parameter Changes 5-54

Summary 5-56 Practice 5 Overview 5-57 6 Administering Storage in RAC (Part II) Objectives 6-2 ASM and SRVCTL with RAC 6-3 ASM and SRVCTL with RAC: Examples 6-4 Migrating to ASM: Overview 6-5 Migration with Extra Space: Overview 6-6 Migration with Extra Space: Example 6-7 Tablespace Migration: Example 6-11 Migrate an SPFILE to ASM 6-12 ASM Disk Metadata Requirements 6-13 ASM and Transportable Tablespaces 6-14 ASM and Storage Arrays 6-15 ASM Scalability 6-16 Redo Log Files and RAC 6-17 Automatic Undo Management and RAC 6-18 Summary 6-19 Practice 6 Overview 6-20 Services Objectives 7-2 Traditional Workload Dispatching 7-3 Grid Workload Dispatching 7-4 What Is a Service? 7-5 High Availability of Services in RAC 7-6 Possible Service Configuration with RAC 7-7 Service Attributes 7-8 Service Types 7-9 Creating Services 7-10 Creating Services with DBCA 7-11 Creating Services with SRVCTL 7-13 Preferred and Available Instances 7-14 Everything Switches to Services 7-15 Using Services with Client Applications 7-16 Using Services with Resource Manager 7-17 Services and Resource Manager with EM 7-18 Services and Resource Manager: Example 7-19 Using Services with Scheduler 7-20


Services and Scheduler with EM 7-21 Services and Scheduler: Example 7-23 Using Services with Parallel Operations 7-24 Using Services with Metric Thresholds 7-25 Changing Service Thresholds Using EM 7-26 Services and Metric Thresholds: Example 7-27 Service Aggregation and Tracing 7-28 Cluster Database: Top Services 7-29 Service Aggregation Configuration 7-30 Service Aggregation: Example 7-31 The trcsess Utility 7-32 Service Performance Views 7-33 Managing Services 7-34 Managing Services with EM 7-36 Managing Services: Example 7-38 Summary 7-39 Practice 7 Overview 7-40 8 High Availability of Connections Objectives 8-2 Types of Workload Distribution 8-3 Client Side Connect-Time Load Balancing 8-4 Client Side Connect-Time Failover 8-5 Server Side Connect-Time Load Balancing 8-6 Fast Application Notification: Overview 8-7 Fast Application Notification Benefits 8-9 FAN-Supported Event Types 8-10 FAN Event Status 8-11 FAN Event Reasons 8-12 FAN Event Format 8-13 Server-Side Callouts Implementation 8-14 Server-Side Callout Parse: Example 8-15 Server-Side Callout Filter: Example 8-16 Configuring the Server-Side ONS 8-17 Configuring the Client-Side ONS 8-18 JDBC Fast Connection Failover: Overview 8-19 JDBC Fast Connection Failover Benefits 8-20 Transparent Application Failover: Overview 8-21 TAF Basic Configuration: Example 8-22 TAF Preconnect Configuration: Example 8-23


TAF Verification 8-24 FAN Connection Pools and TAF Considerations 8-25 Restricted Session and Services 8-26 Summary 8-27 Practice 8 Overview 8-28 9 Managing Backup and Recovery in RAC Objectives 9-2 Protecting Against Media Failure 9-3 Configure RAC Recovery Settings with EM 9-4 Configure RAC Backup Settings with EM 9-5 Initiate Archiving 9-6 Archived Log File Configurations 9-7 RAC and the Flash Recovery Area 9-8 Oracle Recovery Manager 9-9 Configuring RMAN 9-10 RMAN Default Autolocation 9-11 User-Managed Backup Methods 9-12 Offline User-Managed Backup 9-13 Online User-Managed Backup 9-14 Channel Connections to Cluster Instances 9-15 Distribution of Backups 9-16 One Local Drive CFS Backup Scheme 9-17 Multiple Drives CFS Backup Scheme 9-18 Non-CFS Backup Scheme 9-19 RAC Backup and Recovery Using EM 9-20 Restoring and Recovering 9-21 Parallel Recovery in Real Application Clusters 9-22 Fast-Start Parallel Rollback in Real Application Clusters 9-24 Managing OCR: Overview 9-25 Recovering the OCR 9-26 Recovering the Voting Disk 9-27 Summary 9-28 Practice 9: Overview 9-29

10 RAC Performance Tuning Objectives 10-2 CPU and Wait Time Tuning Dimensions 10-3 RAC-Specific Tuning 10-4 Analyzing Cache Fusion Impact in RAC 10-5

Typical Latencies for RAC Operations 10-6 Wait Events for RAC 10-7 Wait Event Views 10-8 Global Cache Wait Events: Overview 10-9 2-way Block Request: Example 10-11 3-way Block Request: Example 10-12 2-way Grant: Example 10-13 Considered Lost Blocks: Example 10-14 Global Enqueue Waits: Overview 10-15 Session and System Statistics 10-16 Most Common RAC Tuning Tips 10-17 Index Block Contention Considerations 10-19 Oracle Sequences and Index Contention 10-20 Undo Block Considerations 10-21 High-Water Mark Considerations 10-22 Cluster Database Performance Page 10-23 Cluster Cache Coherency Page 10-25 Database Locks Page 10-26 Automatic Workload Repository: Overview 10-27 AWR Tables 10-28 AWR Snapshots in RAC 10-29 Generating and Viewing AWR Reports 10-30 AWR Reports and RAC: Overview 10-31 Statspack and AWR 10-33 Automatic Database Diagnostic Monitor 10-34 ADDM Problem Classification 10-35 RAC-Specific ADDM Findings 10-36 ADDM Analysis: Results 10-37 ADDM Recommendations 10-38 Summary 10-39 Practice 10: Overview 10-40 11 Design for High Availability Objectives 11-2 Causes of Unplanned Down Time 11-3 Causes of Planned Down Time 11-4 Oracles Solution to Down Time 11-5 RAC and Data Guard Complementarity 11-6 Maximum Availability Architecture 11-7 RAC and Data Guard Topologies 11-8 RAC and Data Guard Architecture 11-9


Data Guard Broker (DGB) and CRS Integration 11-11 Data Guard Broker Configuration Files 11-12 Hardware Assisted Resilient Data 11-13 Rolling Patch Upgrade Using RAC 11-14 Rolling Release Upgrade Using SQL Apply 11-15 Database High Availability Best Practices 11-16 Extended RAC: Overview 11-17 Extended RAC Connectivity 11-18 Extended RAC Disk Mirroring 11-19 Additional Data Guard Benefits 11-20 Using Distributed Transactions with RAC 11-21 Using a Test Environment 11-22 Summary 11-23 Appendix A: Practices Appendix B: Solutions Appendix C: RAC on Windows Installation Appendix D: Add Node Appendix E: Remove Node


Introduction


Overview

This course is designed for anyone interested in implementing a Real Application Clusters (RAC) database. The coverage is general and contains platform-specific information only when it is necessary to explain a concept using an example. Knowledge of and experience with Oracle Database 10g architecture are assumed. Lecture material is supplemented with hands-on practices.

Overview The material in this course is designed to provide basic information that is needed to plan or manage Oracle Database 10g for Real Application Clusters. The lessons and practices are designed to build on your knowledge of Oracle used in a nonclustered environment. The material does not cover basic architecture and database management; these topics are addressed by the Oracle Database 10g administration courses offered by Oracle University. If your background does not include working with a current release of the Oracle database, then you should consider taking such training before attempting this course. The practices provide an opportunity for you to work with the features of the database that are unique to Real Application Clusters.


What Is a Cluster?
Interconnected nodes act as a single server.
Cluster software hides the structure.
Disks are available for read and write by all nodes.
[Slide graphic: interconnected nodes with clusterware on each node, sharing disks over an interconnect]

What Is a Cluster? A cluster consists of two or more independent, but interconnected, servers. Several hardware vendors have provided cluster capability over the years to meet a variety of needs. Some clusters were only intended to provide high availability by allowing work to be transferred to a secondary node if the active node fails. Others were designed to provide scalability by allowing user connections or work to be distributed across the nodes. Another common feature of a cluster is that it should appear to an application as if it were a single server. Similarly, management of several servers should be as similar to the management of a single server as possible. The cluster management software provides this transparency. For the nodes to act as if they were a single server, files must be stored in such a way that they can be found by the specific node that needs them. There are several different cluster topologies that address the data access issue, each dependent on the primary goals of the cluster designer. The interconnect is a physical network used as a means of communication between each node of the cluster. In short, a cluster is a group of independent servers that cooperate as a single system.


What Is Oracle Real Application Clusters?


Multiple instances accessing the same database
Instances spread on each node
Physical or logical access to each database file
Software-controlled data access
[Slide graphic: instances run on each node and access the shared database files over the interconnect]

What Is Oracle Real Application Clusters?
Real Application Clusters is software that enables you to use clustered hardware by running multiple instances against the same database. The database files are stored on disks that are either physically or logically connected to each node, so that every active instance can read from or write to them. The Real Application Clusters software manages data access, so that changes are coordinated between the instances and each instance sees a consistent image of the database. The cluster interconnect enables instances to pass coordination information and data images between each other. This architecture enables users and applications to benefit from the processing power of multiple machines. RAC architecture also provides redundancy: if, for example, a system crashes or becomes unavailable, the application can still access the database on any surviving instances.

Why Use RAC?

High availability: Survive node and instance failures
No scalability limits: Add more nodes as you need them tomorrow
Pay as you grow: Pay for just what you need today
Key grid computing features:
  Grow and shrink on demand
  Single-button addition and removal of servers
  Automatic workload management for services

Why Use RAC?
Oracle Real Application Clusters (RAC) enables high utilization of a cluster of standard, low-cost modular servers such as blades. RAC offers automatic workload management for services. Services are groups or classifications of applications that comprise business components corresponding to application workloads. Services in RAC enable continuous, uninterrupted database operations and provide support for multiple services on multiple instances. You assign services to run on one or more instances, and alternate instances can serve as backup instances. If a primary instance fails, Oracle moves the services from the failed instance to a surviving alternate instance. Oracle also automatically load-balances connections across instances hosting a service.
RAC harnesses the power of multiple low-cost computers to serve as a single large computer for database processing, and provides the only viable alternative to large-scale SMP for all types of applications. RAC, which is based on a shared-disk architecture, can grow and shrink on demand without the need to artificially partition data among the servers of your cluster. RAC also offers single-button addition and removal of servers in a cluster. Thus, you can easily add a server to, or remove one from, the database.

Clusters and Scalability


[Slide graphic: the SMP model, in which CPUs and their caches share one memory and one SGA kept consistent by cache coherency, compared with the RAC model, in which each node has its own SGA and CPUs, the nodes share storage, and the caches are kept consistent by Cache Fusion]

Clusters and Scalability
If your application scales transparently on symmetric multiprocessing (SMP) machines, then it is realistic to expect it to scale well on RAC, without having to make any changes to the application code. RAC eliminates the database instance, and the node itself, as a single point of failure, and ensures database integrity in the case of such failures.
Following are some scalability examples:
  Allow more simultaneous batch processes.
  Allow larger degrees of parallelism and more parallel executions to occur.
  Allow large increases in the number of connected users in online transaction processing (OLTP) systems.

Levels of Scalability

Hardware: Disk input/output (I/O)
Inter-node communication: High bandwidth and low latency
Operating system: Number of CPUs
Database management system: Synchronization
Application: Design

Levels of Scalability
Successful implementation of cluster databases requires optimal scalability on four levels:
  Hardware scalability: Interconnectivity is the key to hardware scalability, which greatly depends on high bandwidth and low latency.
  Operating system scalability: Methods of synchronization in the operating system can determine the scalability of the system. In some cases, potential scalability of the hardware is lost because of the operating system's inability to handle multiple resource requests simultaneously.
  Database management system scalability: A key factor in parallel architectures is whether the parallelism is affected internally or by external processes. The answer to this question affects the synchronization mechanism.
  Application scalability: Applications must be specifically designed to be scalable. A bottleneck occurs in systems in which every session is updating the same data most of the time. Note that this is not RAC specific and is true on single-instance systems too.
It is important to remember that if any of the above areas is not scalable (no matter how scalable the other areas are), then parallel cluster processing may not be successful. A typical cause for the lack of scalability is one common shared resource that must be accessed often. This causes the otherwise parallel operations to serialize on this bottleneck. A high latency in the synchronization increases the cost of synchronization, thereby counteracting the benefits of parallelization. This is a general limitation and not a RAC-specific limitation.

Scaleup and Speedup


[Slide graphic: an original system completes 100% of a task in a given time; with cluster scaleup, added hardware completes up to 200% or 300% of the task in the same time; with cluster speedup, added hardware completes 100% of the task in half the time]

Scaleup and Speedup
Scaleup is the ability to sustain the same performance levels (response time) when both workload and resources increase proportionally:
Scaleup=(volume parallel)/(volume original)time for ipc

For example, if 30 users consume close to 100 percent of the CPU during normal processing, then adding more users would cause the system to slow down due to contention for limited CPU cycles. However, by adding CPUs, you can support extra users without degrading performance. Speedup is the effect of applying an increasing number of resources to a fixed amount of work to achieve a proportional reduction in execution times:
Speedup=(time original)/(time parallel)time for ipc

Speedup results in resource availability for other tasks. For example, if queries usually take ten minutes to process and running in parallel reduces the time to five minutes, then additional queries can run without introducing the contention that might occur were they to run concurrently. Note: ipc is the abbreviation for interprocess communication.
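As a quick illustration of these definitions (the numbers are purely hypothetical and not from the original guide), the ten-minute query above that completes in five minutes when run in parallel gives a speedup of 2, and a cluster that sustains twice the transaction volume at the same response time as the original system gives a scaleup of 2:
Speedup = 10 minutes / 5 minutes = 2
Scaleup = 200 transactions per minute / 100 transactions per minute = 2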


Speedup/Scaleup and Workloads

Workload                     Speedup     Scaleup
OLTP and Internet            No          Yes
DSS with parallel query      Yes         Yes
Batch (mixed)                Possible    Yes

Speedup/Scaleup and Workloads The type of workload determines whether scaleup or speedup capabilities can be achieved using parallel processing. Online transaction processing (OLTP) and Internet application environments are characterized by short transactions that cannot be further broken down, and therefore, no speedup can be achieved. However, by deploying greater amounts of resources, a larger volume of transactions can be supported without compromising the response. Decision support systems (DSS) and parallel query options can attain speedup, as well as scaleup, because they essentially support large tasks without conflicting demands on resources. The parallel query capability within Oracle can also be leveraged to decrease overall processing time of long-running queries and to increase the number of such queries that can be run concurrently. In an environment with a mixed workload of DSS, OLTP, and reporting applications, scaleup can be achieved by running different programs on different hardware. Speedup is possible in a batch environment, but may involve rewriting programs to use the parallel processing capabilities.


A History of Innovation

[Slide graphic: a timeline of innovation, from OPS, nonblocking queries, and Resource Manager, through Data Guard, RAC, Automatic Storage Management, automatic workload management, and low-cost commodity clusters, leading to enterprise grids]

A History of Innovation
Oracle Database 10g and the specific new manageability enhancements provided by Oracle RAC 10g enable RAC for everyone: all types of applications and enterprise grids, the basis for fourth-generation computing. Enterprise grids are built from large configurations of standardized, commodity-priced components: processors, network, and storage. With Oracle RAC's Cache Fusion technology, the Oracle database adds to this the highest levels of availability and scalability. Also, with Oracle RAC 10g, it becomes possible to perform dynamic provisioning of nodes, storage, CPUs, and memory to maintain service levels more easily and efficiently.
Enterprise grids are the data centers of the future and enable business to be adaptive, proactive, and agile for the fourth generation. The next major transition in computing infrastructure is going from the era of big SMPs to the era of grids.

Course Objectives

In this course, you:
  Learn the principal concepts of RAC
  Install the RAC components
  Administer database instances in a RAC environment
  Migrate a RAC database to ASM
  Manage services
  Back up and recover RAC databases
  Monitor and tune performance of a RAC database

Course Objectives
This course is designed to give you the necessary information to successfully administer Real Application Clusters. You install Oracle Database 10g with Oracle Universal Installer (OUI) and create your database with the Database Configuration Assistant (DBCA). This ensures that your RAC environment has the optimal network configuration, database structure, and parameter settings for the environment that you selected.
As a DBA, after installation your tasks are to administer your RAC environment at three levels:
  Instance administration
  Database administration
  Cluster administration
Throughout this course, you use various tools to administer each level of RAC:
  Oracle Enterprise Manager 10g Database Control to perform administrative tasks whenever feasible
  Task-specific GUIs such as the Database Configuration Assistant (DBCA) and the Virtual Internet Protocol Configuration Assistant (VIPCA)
  Command-line tools such as SQL*Plus, Recovery Manager, and Server Control (SRVCTL)

Typical Schedule

Topics                       Lessons   Day
Concepts and installation    1-2       1
                             3-4       2
Storage and services         5-6       3
                             7-8-9     4
Tuning and design            10-11     5

Typical Schedule The lessons in this guide are arranged in the order that you will probably study them in class, and are grouped into the topic areas that are shown in the slide. The individual lessons are ordered so that they lead from more familiar to less familiar areas. The related practices are designed to let you explore increasingly powerful features of a Real Application Clusters database. In some cases, the goals for the lessons and goals for the practices are not completely compatible. Your instructor may, therefore, choose to teach some material in a different order than found in this guide. However, if your instructor teaches the class in the order in which the lessons are printed in this guide, then the class should run approximately as shown in this schedule.


Architecture and Concepts


Objectives

After completing this lesson, you should be able to do the following:
  List the various components of Cluster Ready Services (CRS) and Real Application Clusters (RAC)
  Describe the various types of files used by a RAC database
  Describe the various techniques used to share database files across a cluster
  Describe the purpose of using services with RAC

Complete Integrated Cluster Ware


[Slide graphic: the Oracle9i RAC software stack compared with the 10g RAC stack; both include applications, cluster control, messaging and locking, membership, connectivity, event services, and management APIs above the hardware/OS kernel, but the 10g stack integrates the services framework, cluster control and recovery APIs, and Automatic Storage Management that previously relied on external system management and volume manager/file system components]

Complete Integrated Cluster Ware With Oracle9i, Oracle introduced Real Application Clusters. For the first time, you were able to run online transaction processing (OLTP) and decision support system (DSS) applications against a database cluster without having to make expensive code changes or spend large amounts of valuable administrator time partitioning and repartitioning the database to achieve good performance. Although Oracle9i Real Application Clusters did much to ease the task of allowing applications to work in clusters, there are still support challenges and limitations. Among these cluster challenges are complex software environments, support, inconsistent features across platforms, and awkward management interaction across the software stack. Most clustering solutions today were designed with failover in mind. Failover clustering has additional systems standing by in case of a failure. During normal operations, these failover resources may sit idle. With the release of Oracle Database 10g, Oracle provides you with an integrated software solution that addresses cluster management, event management, application management, connection management, storage management, load balancing, and availability. These capabilities are addressed while hiding the complexity through simple-to-use management tools and automation. Real Application Clusters 10g provides an integrated cluster ware layer that delivers a complete environment for applications.

RAC Software Principles


[Slide graphic: each node runs an instance with LMON, LMD0, LMSx, LCK0, and DIAG processes coordinating global resources across the cluster; each node also runs the Cluster Ready Services processes (CRSD and RACGIMON, EVMD, OCSSD and OPROCD) and node applications (ASM, database, services, OCR, VIP, ONS, EMD, and listener), managed globally through SRVCTL, DBCA, and EM]

RAC Software Principles
You may see more background processes associated with a RAC instance than you would with a single-instance database. These processes are primarily used to maintain database coherency among the instances. They manage what are called global resources:
  LMON: Global Enqueue Service Monitor
  LMD0: Global Enqueue Service Daemon
  LMSx: Global Cache Service Processes, where x can range from 0 to j
  LCK0: Lock process
  DIAG: Diagnosability process
At the cluster level, you find the main processes of the Cluster Ready Services software. They provide a standard cluster interface on all platforms and perform high-availability operations. You find these processes on each node of the cluster:
  CRSD and RACGIMON: Engines for high-availability operations
  OCSSD: Provides access to node membership and group services
  EVMD: Scans the callout directory and invokes callouts in reaction to detected events
  OPROCD: A process monitor for the cluster
There are also several tools that are used to manage the various resources available on the cluster at a global level. These resources are the Automatic Storage Management (ASM) instances, the RAC databases, the services, and the CRS node applications. Some of the tools that you will use throughout this course are Server Control (SRVCTL), DBCA, and Enterprise Manager.
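For reference, one quick way to see which of these background processes are running on each instance is to query the GV$BGPROCESS view from SQL*Plus. This is only a sketch: the exact list of LMSx process names (LMS0, LMS1, and so on) depends on your configuration.
-- Running RAC-related background processes on every instance
SELECT inst_id, name, description
  FROM gv$bgprocess
 WHERE paddr <> '00'
   AND name IN ('LMON', 'LMD0', 'LMS0', 'LCK0', 'DIAG')
 ORDER BY inst_id, name;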

RAC Software Storage Principles


[Slide graphic: two configurations; in the first, the CRS home and Oracle home are installed on the local storage of each node and only the voting file and OCR file reside on shared storage; in the second, the CRS home and Oracle home also reside on shared storage]
Permits online patch upgrades
Software not a single point of failure

RAC Software Storage Principles
The Oracle Database 10g Real Application Clusters installation is a two-phase installation. In the first phase, you install CRS. In the second phase, you install the Oracle database software with RAC components and create a cluster database. The Oracle home that you use for the CRS software must be different from the one that is used for the RAC software. Although it is possible to install the CRS and RAC software on your cluster shared storage when using certain cluster file systems, the software is usually installed on a regular file system that is local to each node. This permits online patch upgrades and eliminates the software as a single point of failure.
In addition, two files must be stored on your shared storage:
  The voting file is essentially used by the Cluster Synchronization Services daemon for node monitoring information across the cluster. Its size is set to around 20 MB.
  The Oracle Cluster Registry (OCR) file is also a key component of CRS. It maintains information about the high-availability components in your cluster, such as the cluster node list, cluster database instance-to-node mapping, and CRS application resource profiles (such as services, Virtual Internet Protocol addresses, and so on). This file is maintained automatically by administrative tools such as SRVCTL. Its size is around 100 MB.
The voting and OCR files cannot be stored in ASM because they must be accessible before any Oracle instance is started. OCR and voting files must be on redundant, reliable storage such as RAID. The recommended best-practice location for these files is raw devices.

OCR Architecture

[Slide graphic: each of the three nodes holds an OCR cache and runs an OCR process; client processes communicate with their local OCR process, and a single OCR process performs the reads and writes against the OCR file on shared storage]

OCR Architecture Cluster configuration information is maintained in Oracle Cluster Registry. OCR relies on a distributed shared-cache architecture for optimizing queries against the cluster repository. Each node in the cluster maintains an in-memory copy of OCR, along with an OCR process that accesses its OCR cache. Only one of the OCR processes actually reads from and writes to the OCR file on shared storage. This process is responsible for refreshing its own local cache, as well as the OCR cache on other nodes in the cluster. For queries against the cluster repository, the OCR clients communicate directly with the local OCR process on the node from which they originate. When clients need to update the OCR, they communicate through their local OCR process to the OCR process that is performing input/output (I/O) for writing to the repository on disk. The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manager (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), NetCA, and Virtual Internet Protocol Configuration Assistant (VIPCA). Furthermore, OCR maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services, and node applications. The name of the configuration file is ocr.loc, and the configuration file variable is ocrconfig_loc. The location for the cluster repository is not restricted to raw devices. You can put OCR on shared storage that is managed by a Cluster File System. Note: OCR also serves as a configuration file in a single instance with the ASM, where there is one OCR per node.

RAC Database Storage Principles

[Slide graphic: each node keeps its archived log files on local storage; the data files, temp files, control files, SPFILE, change tracking file, flash recovery area files, and the undo tablespace and online redo log files of every instance reside on shared storage]

RAC Database Storage Principles
The primary difference between RAC storage and storage for single-instance Oracle databases is that all data files in RAC must reside on shared devices (either raw devices or cluster file systems) in order to be shared by all the instances that access the same database. You must also create at least two redo log groups for each instance, and all the redo log groups must also be stored on shared devices for instance or crash recovery purposes. Each instance's online redo log groups are called that instance's thread of online redo.
In addition, you must create one shared undo tablespace for each instance if you use the recommended automatic undo management feature. Each undo tablespace must be shared by all instances for recovery purposes.
Archive logs cannot be placed on raw devices because their names are automatically generated and different for each one. That is why they must be stored on a file system. If you are using a cluster file system (CFS), it enables you to access these archive files from any node at any time. If you are not using a CFS, you are always forced to make the archives available to the other cluster members at the time of recovery, for example by using a network file system (NFS) across nodes.
If you are using the recommended flash recovery area feature, then it must be stored in a shared directory so that all instances can access it.
Note: A shared directory can be an ASM disk group or a cluster file system.
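As an illustration (not an example from the original guide), you can verify the per-instance redo threads and undo tablespaces of a running RAC database from SQL*Plus with queries such as the following; the output depends entirely on your configuration.
-- One thread of online redo log groups per instance
SELECT thread#, group#, members, status
  FROM v$log
 ORDER BY thread#, group#;
-- One undo tablespace per instance
SELECT inst_id, value AS undo_tablespace
  FROM gv$parameter
 WHERE name = 'undo_tablespace';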

RAC and Shared Storage Technologies

Storage is a critical component of grids:
  Sharing storage is fundamental
  New technology trends
Supported shared storage for Oracle grids:
  Network Attached Storage
  Storage Area Network
Supported file systems for Oracle grids:
  Raw volumes
  Cluster file system
  ASM

RAC and Shared Storage Technologies Storage is a critical component of any grid solution. Traditionally, storage has been directly attached to each individual server (DAS). Over the past few years, more flexible storage, which is accessible over storage area networks or regular Ethernet networks, has become popular. These new storage options enable multiple servers to access the same set of disks, simplifying provisioning of storage in any distributed environment. Storage Area Network (SAN) represents the evolution of data storage technology to this point. Traditionally, on client server systems, data was stored on devices either inside or directly attached to the server. Next in the evolutionary scale came Network Attached Storage (NAS) that took the storage devices away from the server and connected them directly to the network. SANs take the principle a step further by allowing storage devices to exist on their own separate networks and communicate directly with each other over very fast media. Users can gain access to these storage devices through server systems that are connected to both the local area network (LAN) and SAN. As you already saw, the choice of file system is critical for RAC deployment. Traditional file systems do not support simultaneous mounting by more than one system. Therefore, you must store files in either raw volumes without any file system, or on a file system that supports concurrent access by multiple systems.


RAC and Shared Storage Technologies (continued)
Thus, three major approaches exist for providing the shared storage needed by RAC:
  Raw volumes: These are directly attached raw devices that require storage that operates in block mode, such as Fibre Channel or iSCSI.
  Cluster file system: One or more cluster file systems can be used to hold all RAC files. Cluster file systems require block mode storage such as Fibre Channel or iSCSI.
  Automatic Storage Management (ASM): A portable, dedicated, and optimized cluster file system for Oracle database files.
Note: iSCSI is important to SAN technology because it enables a SAN to be deployed in a local area network (LAN), wide area network (WAN), or Metropolitan Area Network (MAN).

Oracle Cluster File System

Is a shared disk cluster file system for Linux and Windows
Improves management of data for RAC by eliminating the need to manage raw devices
Provides an open solution on the operating system side (Linux): free and open source
Can be downloaded from OTN:
http://oss.oracle.com/software

Oracle Cluster File System
Oracle Cluster File System (OCFS) is a shared file system designed specifically for Oracle Real Application Clusters. OCFS eliminates the requirement that Oracle database files be linked to logical drives and enables all nodes to share a single Oracle Home (on Windows 2000 only), instead of requiring each node to have its own local copy. OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements.
Following is a list of files that can be placed on an Oracle Cluster File System:
  Oracle software installation: Currently, this configuration is only supported on Windows 2000. The next major version will provide support for Oracle Home on Linux as well.
  Oracle files (control files, data files, redo logs, bfiles, and so on)
  Shared configuration files (spfile)
  Files created by Oracle during run time
  Voting and OCR files
Oracle Cluster File System is free for developers and customers. The source code is provided under the General Public License (GPL) on Linux. It can be downloaded from the Oracle Technology Network Web site.
Note: Please see the release notes for platform-specific limitations for OCFS.

Automatic Storage Management

Portable and high-performance cluster file system
Manages Oracle database files
Data spread across disks to balance load
Integrated mirroring across disks
Solves many storage management challenges
[Slide graphic: ASM replaces the file system and volume manager layers between the database application and the operating system]

Automatic Storage Management The Automatic Storage Management (ASM) is a new feature in Oracle Database 10g. It provides a vertical integration of the file system and the volume manager that is specifically built for Oracle database files. The ASM can provide management for single SMP machines or across multiple nodes of a cluster for Oracle Real Application Clusters support. The ASM distributes I/O load across all available resources to optimize performance while removing the need for manual I/O tuning. It helps DBAs manage a dynamic database environment by allowing them to increase the database size without having to shut down the database to adjust the storage allocation. The ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied, reliable storage mechanisms. Data management is done by selecting the desired reliability and performance characteristics for classes of data rather than with human interaction on a per-file basis. The ASM capabilities save DBAs time by automating manual storage and thereby increasing their ability to manage larger databases (and more of them) with increased efficiency. Note: ASM is the strategic and stated direction as to where Oracle database files should be stored. However, OCFS will continue to be developed and supported for those who are using it.
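As a preview of the ASM administration lessons later in this course, the following statement, issued in an ASM instance, creates a two-way mirrored disk group. It is only a sketch: the disk group name, failure group names, and raw device paths are hypothetical and depend on your platform and ASM discovery string.
-- Create a normal-redundancy disk group with two failure groups
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/raw/raw3', '/dev/raw/raw4'
  FAILGROUP controller2 DISK '/dev/raw/raw5', '/dev/raw/raw6';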

Raw or CFS?

Using CFS:
  Simpler management
  Use of OMF with RAC
  Single Oracle software installation
  Autoextend
Using raw:
  Performance
  Use when CFS not available
  Cannot be used for archivelog files (on UNIX)

Raw or CFS?
As already explained, you can either use a cluster file system or place files on raw devices. Cluster file systems provide the following advantages:
  Greatly simplify the installation and administration of RAC
  Use of Oracle Managed Files with RAC
  Single Oracle software installation
  Autoextend enabled on Oracle data files
  Uniform accessibility to archive logs in case of physical node failure
Raw devices implications:
  Raw devices are always used when CFS is not available or not supported by Oracle.
  Raw devices offer best performance without any intermediate layer between Oracle and the disk.
  Autoextend fails on raw devices if the space is exhausted.
  ASM, Logical Storage Managers, or Logical Volume Managers can ease the work with raw devices. Also, they can enable you to add space to a raw device online, or you may be able to create raw device names that make the usage of this device clear to the system administrators.

Typical Cluster Stack with RAC


[Slide graphic: typical cluster stacks; servers connected by a high-speed Gigabit Ethernet/UDP interconnect and attached to the database shared storage; with Oracle CRS, RAC runs on Linux, UNIX, and Windows over ASM, OCFS (Linux and Windows), or raw devices; with proprietary OS clusterware, RAC runs on AIX, HP-UX, and Solaris over ASM, raw devices, or a CFS with the OS CVM]

Typical Cluster Stack with RAC Each node in a cluster requires a supported interconnect software protocol to support interinstance communication, and Transmission Control Protocol/Internet Protocol (TCP/IP) to support CRS polling. All UNIX platforms use User Datagram Protocol (UDP) on Gigabit Ethernet as one of the primary protocols and interconnect for RAC inter-instance IPC communication. Other supported vendor-specific interconnect protocols include Remote Shared Memory for SCI and SunFire Link interconnects, and Hyper Messaging Protocol for Hyperfabric interconnects. In any case, your interconnect must be certified by Oracle for your platform. Using Oracle clusterware, you can reduce installation and support complications. However, vendor clusterware may be needed if customers use non-Ethernet interconnect or if you have deployed clusterware-dependent applications on the same cluster where you deploy RAC. Similar to the interconnect, the shared storage solution you choose must be certified by Oracle for your platform. If a cluster file system (CFS) is available on the target platform, then both the database area and flash recovery area can be created on either CFS or ASM. If a CFS is unavailable on the target platform, then the database area can be created either on ASM or on raw devices (with the required volume manager), and the flash recovery area must be created on the ASM.


RAC Certification Matrix

1. Connect and log in to http://metalink.oracle.com.
2. Click the Certify and Availability button on the menu frame.
3. Click the View Certifications by Product link.
4. Select Real Application Clusters.
5. Select the correct platform.

RAC Certification Matrix Real Application Clusters Certification Matrix is designed to address any certification inquiries. Use this matrix to answer any certification questions that are related to RAC. To navigate to Real Application Clusters Certification Matrix, perform the steps shown in the slide above.


The Necessity of Global Resources


[Slide graphic: four steps in which two SGAs read and update the same block (SCN 1008, then 1009) without any cache coordination, ending in lost updates]

The Necessity of Global Resources In single-instance environments, locking coordinates access to a common resource such as a row in a table. Locking prevents two processes from changing the same resource (or row) at the same time. In RAC environments, internode synchronization is critical because it maintains proper coordination between processes on different nodes, preventing them from changing the same resource at the same time. Internode synchronization guarantees that each instance sees the most recent version of a block in its buffer cache. Note: The slide shows you what can happen in the absence of cache coordination.


Global Resources Coordination


[Slide graphic: each instance keeps part of the Global Resource Directory (GRD) in its SGA and runs the Global Cache Service (GCS) and Global Enqueue Service (GES) through the LMON, LMD0, LMSx, LCK0, and DIAG processes, which coordinate global resources over the interconnect]

Global Resources Coordination
Cluster operations require synchronization among all instances to control shared access to resources. RAC uses the Global Resource Directory (GRD) to record information about how resources are used within a cluster database. The Global Cache Service (GCS) and Global Enqueue Service (GES) manage the information in the GRD. Each instance maintains a part of the GRD in its System Global Area (SGA). The GCS and GES nominate one instance to manage all information about a particular resource. This instance is called the resource master. Also, each instance knows which instance masters which resource.
Maintaining cache coherency is an important part of RAC activity. Cache coherency is the technique of keeping multiple copies of a block consistent between different Oracle instances. The GCS implements cache coherency by using what is called the Cache Fusion algorithm.
The GES manages all non-Cache Fusion inter-instance resource operations and tracks the status of all Oracle enqueuing mechanisms. The primary resources that the GES controls are dictionary cache locks and library cache locks. The GES also performs deadlock detection on all deadlock-sensitive enqueues and resources.

Global Cache Coordination: Example


[Slide graphic: a block mastered by instance one; an instance asks the GCS which instance masters the block, and the current version of the block (SCN 1009) is shipped from the holding instance over the interconnect with no disk I/O]

Global Cache Coordination: Example
The scenario described in the slide assumes that the data block has been changed, or dirtied, by the first instance. Furthermore, only one copy of the block exists clusterwide, and the content of the block is represented by its SCN.
1. The second instance attempting to modify the block submits a request to the GCS.
2. The GCS transmits the request to the holder. In this case, the first instance is the holder.
3. The first instance receives the message and sends the block to the second instance. The first instance retains the dirty buffer for recovery purposes. This dirty image of the block is also called a past image of the block. A past image block cannot be modified further.
4. On receipt of the block, the second instance informs the GCS that it holds the block.
Note: The data block is not written to disk before the resource is granted to the second instance.
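Global cache transfers of this kind show up in the Global Cache Service statistics. As a purely illustrative example (statistic names as they appear in Oracle Database 10g), the following query reports how many consistent read and current blocks each instance has received and served over the interconnect:
-- Cache Fusion block transfer statistics per instance
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('gc cr blocks received', 'gc current blocks received',
                'gc cr blocks served',   'gc current blocks served')
 ORDER BY inst_id, name;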


Write to Disk Coordination: Example


[Slide graphic: instance one needs to make room in its cache and asks the GCS who has the current version of the block; the GCS tells instance two, the owner of the current version (SCN 1010), to flush the block to disk; only one disk I/O is performed]

Write to Disk Coordination: Example
The scenario described in the slide illustrates how an instance can perform a checkpoint at any time or replace buffers in the cache due to free buffer requests. Because multiple versions of the same data block with different changes can exist in the caches of instances in the cluster, a write protocol managed by the GCS ensures that only the most current version of the data is written to disk. It must also ensure that all previous versions are purged from the other caches. A write request for a data block can originate in any instance that has the current or past image of the block. In this scenario, assume that the first instance holding a past image buffer requests that Oracle writes the buffer to disk:
1. The first instance sends a write request to the GCS.
2. The GCS forwards the request to the second instance, the holder of the current version of the block.
3. The second instance receives the write request and writes the block to disk.
4. The second instance records the completion of the write operation with the GCS.
5. After receipt of the notification, the GCS orders all past image holders to discard their past images. These past images are no longer needed for recovery.
Note: In this case, only one I/O is performed to write the most current version of the block to disk.

RAC and Instance/Crash Recovery


[Slide graphic: instance recovery timeline; LMON recovers the GRD by remastering enqueue resources (1) and cache resources (2); SMON recovers the database by building the recovery set from the merged redo threads of the failed instances (3), claiming the recovery resources (4), and rolling forward the recovery set (5), using information from the other caches]

RAC and Instance/Crash Recovery
When an instance fails and the failure is detected by another instance, the second instance performs the following recovery steps:
1. During the first phase of recovery, GES remasters the enqueues.
2. Then the GCS remasters its resources. The GCS processes remaster only those resources that lose their masters. During this time, all GCS resource requests and write requests are temporarily suspended. However, transactions can continue to modify data blocks as long as these transactions have already acquired the necessary resources.
3. After enqueues are reconfigured, one of the surviving instances can grab the Instance Recovery enqueue. Therefore, at the same time as GCS resources are remastered, SMON determines the set of blocks that need recovery. This set is called the recovery set. Because, with Cache Fusion, an instance ships the contents of its blocks to the requesting instance without writing the blocks to the disk, the on-disk version of the blocks may not contain the changes that are made by either instance. This implies that SMON needs to merge the content of all the online redo logs of each failed instance to determine the recovery set. This is because one failed thread might contain a hole in the redo that needs to be applied to a particular block. So, redo threads of failed instances cannot be applied serially. Also, redo threads of surviving instances are not needed for recovery because SMON could use past or current images of their corresponding buffer caches.

RAC and Instance/Crash Recovery (continued)
4. Buffer space for recovery is allocated and the resources that were identified in the previous reading of the redo logs are claimed as recovery resources. This is done to prevent other instances from accessing those resources.
5. All resources required for subsequent processing have been acquired and the GRD is now unfrozen. Any data blocks that are not in recovery can now be accessed. Note that the system is already partially available. Then, assuming that there are past images or current images of blocks to be recovered in other caches in the cluster database, the most recent is the starting point of recovery for these particular blocks. If neither the past image buffers nor the current buffer for a data block is in any of the surviving instances' caches, then SMON performs a log merge of the failed instances. SMON recovers and writes each block identified in step 3, releasing the recovery resources immediately after block recovery so that more blocks become available as recovery proceeds. After all blocks have been recovered and the recovery resources have been released, the system is again fully available.
In summary, the recovered database, or the recovered portions of the database, becomes available earlier, before the completion of the entire recovery sequence. This makes the system available sooner and makes recovery more scalable.
Note: The performance overhead of a log merge is proportional to the number of failed instances and to the size of the redo logs for each instance.

Oracle Database 10g: Real Application Clusters 1-20

Instance Recovery and Database Availability


Slide graph: database availability (none, partial, full) plotted against elapsed time, with points A through H marking the recovery steps described below.

1-21

Copyright 2005, Oracle. All rights reserved.

Instance Recovery and Database Availability
The graphic illustrates the degree of database availability during each step of Oracle instance recovery:
A. Real Application Clusters is running on multiple nodes.
B. Node failure is detected.
C. The enqueue part of the GRD is reconfigured; resource management is redistributed to the surviving nodes. This operation occurs relatively quickly.
D. The cache part of the GRD is reconfigured and SMON reads the redo log of the failed instance to identify the database blocks that it needs to recover.
E. SMON issues the GRD requests to obtain all the database blocks it needs for recovery. After the requests are complete, all other blocks are accessible.
F. Oracle performs roll forward recovery. Redo logs of the failed threads are applied to the database, and blocks are available right after their recovery is completed.
G. Oracle performs rollback recovery. Undo blocks are applied to the database for all uncommitted transactions.
H. Instance recovery is complete and all data is accessible.
Note: The dashed line represents the blocks identified in step 2 on the previous slide. Also, the dotted steps represent the ones identified on the previous slide.

Oracle Database 10g: Real Application Clusters 1-21

Efficient Inter-Node Row-Level Locking


Slide diagram: Instance 1 on Node 1 and Instance 2 on Node 2 each issue an UPDATE against rows in the same block. The block is shipped between the instances by the GCS without any block-level lock, and the row-level locks are held until each transaction commits.

1-22

Copyright 2005, Oracle. All rights reserved.

Efficient Inter-Node Row-Level Locking Oracle supports efficient row-level locks. These row-level locks are created when data manipulation language (DML) operations, such as UPDATE, are executed by an application. These locks are held until the application commits or rolls back the transaction. Any other application process will be blocked if it requests a lock on the same row. Cache Fusion block transfers operate independently of these user-visible row-level locks. The transfer of data blocks by the GCS is a low level process that can occur without waiting for row-level locks to be released. Blocks may be transferred from one instance to another while row-level locks are held. GCS provides access to data blocks allowing multiple transactions to proceed in parallel.

Oracle Database 10g: Real Application Clusters 1-22

Additional Memory Requirement for RAC

Heuristics for scalability cases:
- 15% more shared pool
- 10% more buffer cache
Smaller buffer cache per instance in the case of a single-instance workload distributed across multiple instances.
Current values:

SELECT resource_name, current_utilization, max_utilization
FROM   v$resource_limit
WHERE  resource_name like 'g%s_%';

1-23

Copyright 2005, Oracle. All rights reserved.

Additional Memory Requirement for RAC
RAC-specific memory is mostly allocated in the shared pool at SGA creation time. Because blocks may be cached across instances, you must also account for bigger buffer caches. Therefore, when migrating your Oracle database from a single instance to RAC, if you keep the workload requirements per instance the same as in the single-instance case, then about 10% more buffer cache and 15% more shared pool are needed to run on RAC. These values are heuristics, based on RAC sizing experience, and are mostly upper bounds. If you are using the recommended automatic memory management feature as a starting point, then you can reflect these values in your SGA_TARGET initialization parameter. However, consider that memory requirements per instance are reduced when the same user population is distributed over multiple nodes. Actual resource usage can be monitored by querying the CURRENT_UTILIZATION and MAX_UTILIZATION columns for the GCS and GES entries in the V$RESOURCE_LIMIT view of each instance.
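As a rough illustration of these heuristics (the 1,000 MB buffer cache and 400 MB shared pool starting values below are hypothetical, not figures from this course), the per-instance adjustments work out as simple arithmetic:

$ echo $(( 1000 + 1000 * 10 / 100 ))   # buffer cache: 10% more, so 1100 MB per instance
1100
$ echo $(( 400 + 400 * 15 / 100 ))     # shared pool: 15% more, so 460 MB per instance
460

The adjusted totals could then be reflected in the SGA_TARGET value used for each RAC instance, as noted above.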

Oracle Database 10g: Real Application Clusters 1-23

Parallel Execution with RAC

Execution slaves have node affinity with the execution coordinator, but will expand if needed.

Slide diagram: four nodes sharing disks; the execution coordinator runs on one node, and parallel execution servers run on that node and, when needed, on the other nodes.

1-24

Copyright 2005, Oracle. All rights reserved.

Parallel Execution with RAC
Oracle's cost-based optimizer incorporates parallel execution considerations as a fundamental component in arriving at optimal execution plans. In a RAC environment, intelligent decisions are made with regard to intra-node and inter-node parallelism. For example, if a particular query requires six query processes to complete the work and six parallel execution slaves are idle on the local node (the node that the user connected to), then the query is processed by using only local resources. This demonstrates efficient intra-node parallelism and eliminates the query coordination overhead across multiple nodes. However, if there are only two parallel execution servers available on the local node, then those two and four of another node are used to process the query. In this manner, both inter-node and intra-node parallelism are used to speed up query operations. In real-world decision support applications, queries are not perfectly partitioned across the various query servers. Therefore, some parallel execution servers complete their processing and become idle sooner than others. The Oracle parallel execution technology dynamically detects idle processes and assigns work to these idle processes from the queue tables of the overloaded processes. In this way, Oracle efficiently redistributes the query workload across all processes. Real Application Clusters further extends these efficiencies to clusters by enabling the redistribution of work across all the parallel execution slaves of a cluster.

Oracle Database 10g: Real Application Clusters 1-24

Global Dynamic Performance Views


- Store information about all started instances
- One global view for each local view
- Use one parallel slave on each instance
- Make sure that PARALLEL_MAX_SERVERS is big enough
Slide diagram: a GV$INSTANCE query issued on Instance 1 gathers the V$INSTANCE rows from Instance 1 through Instance n across the cluster.

1-25

Copyright 2005, Oracle. All rights reserved.

Global Dynamic Performance Views
Global dynamic performance views store information about all started instances accessing one RAC database. In contrast, standard dynamic performance views store information about the local instance only. With a few exceptions, there is a corresponding GV$ view for each V$ view. In addition to the V$ information, each GV$ view possesses an additional column named INST_ID. The INST_ID column displays the instance number from which the associated V$ view information is obtained. You can query GV$ views from any started instance. In order to query the GV$ views, the value of the PARALLEL_MAX_SERVERS initialization parameter must be set to at least 1 on each instance. This is because GV$ views use a special form of parallel execution. The parallel execution coordinator runs on the instance that the client connects to, and one slave is allocated in each instance to query the underlying V$ view for that instance. If PARALLEL_MAX_SERVERS is set to 0 on a particular node, then you do not get a result from that node. Also, if all the parallel servers are busy on a particular node, then you do not get a result either. In both cases, you do not get a warning or an error message.
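As a quick illustration (a minimal sketch, not part of the original course material), the following SQL*Plus session run from a shell on any node queries one GV$ view and checks the parameter that GV$ queries depend on:

$ sqlplus -S / as sysdba <<'EOF'
-- One row per started instance; INST_ID identifies the originating instance
SELECT inst_id, instance_name, host_name, status FROM gv$instance;
-- Must be at least 1 on every instance for GV$ queries to return its rows
SHOW PARAMETER parallel_max_servers
EOF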

Oracle Database 10g: Real Application Clusters 1-25

RAC and Services


Slide diagram: application servers open ERP and CRM service connections that can be stopped, started, and relocated (run-time load balancing, service location transparency, modifiable service-to-instance mapping). Listeners perform connection load balancing and are aware of service availability. The RAC instances host the ERP and CRM services with attributes such as backup, priority, alerts, and tuning. CRS provides the up-and-down event notification engine and restarts failed components.

1-26

Copyright 2005, Oracle. All rights reserved.

RAC and Services
Services are a logical abstraction for managing workloads. Services divide the universe of work executing in the Oracle database into mutually disjoint classes. Each service represents a workload with common attributes, service-level thresholds, and priorities. Services are built into the Oracle database, providing a single-system image for workloads, prioritization of workloads, performance measures for real transactions, and alerts and actions when performance goals are violated. These attributes are handled by each instance in the cluster by using metrics, alerts, scheduler job classes, and the Resource Manager. With RAC, services facilitate load balancing, allow for end-to-end lights-out recovery, and provide full location transparency. A service can span one or more instances of an Oracle database in a cluster, and a single instance can support multiple services. The number of instances offering the service is transparent to the application. Services enable the automatic recovery of work. Following outages, the service is recovered quickly and automatically on the surviving instances. When instances are later repaired, services that are not running are restored quickly and automatically by CRS. As soon as the service changes state, up or down, a notification is available for applications using the service to trigger immediate recovery and load-balancing actions. Listeners are also aware of service availability and are responsible for distributing the workload on surviving instances when new connections are made. This architecture forms an end-to-end continuous service for applications.
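For illustration only, a service such as the ERP service in the slide could be defined and started with srvctl; the database name RDBA and the instance names RDBA1 and RDBA2 below are hypothetical placeholders, not values used elsewhere in this course:

$ srvctl add service -d RDBA -s ERP -r "RDBA1,RDBA2"
$ srvctl start service -d RDBA -s ERP
$ srvctl status service -d RDBA -s ERP

Service administration is covered in more detail in the workload management material later in this course.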
Oracle Database 10g: Real Application Clusters 1-26

Virtual IP Addresses and RAC


Slide diagram: clients connect to a two-node cluster (clnode-1 and clnode-2) using two alternative tnsnames entries. Without VIPs, the ERP alias lists HOST=clusnode-1 and HOST=clusnode-2, and a node failure leaves the client waiting on a TCP/IP timeout before the next address is tried. With VIPs, the alias lists HOST=clusnode-1vip and HOST=clusnode-2vip; when a node fails, its VIP is brought up on the surviving node, so the client receives an immediate error and moves to the next address in the list.

1-27

Copyright 2005, Oracle. All rights reserved.

Virtual IP Addresses and RAC
Virtual IP addresses (VIPs) are all about availability of applications when an entire node fails. When a node fails, the VIP associated with it automatically fails over to some other node in the cluster. When this occurs:
- The new node indicates to the world the new MAC address for the VIP. For directly connected clients, this usually causes them to see errors on their connections to the old address.
- Subsequent packets sent to the VIP go to the new node, which sends error RST packets back to the clients. This results in the clients getting errors immediately.
This means that when the client issues SQL to the node that is now down (3), or traverses the address list while connecting (1), rather than waiting on a very long TCP/IP timeout (5), which could be as long as ten minutes, the client receives a TCP reset. In the case of SQL, this results in an ORA-3113 error. In the case of connect, the next address in tnsnames is used (6). The slide shows you the connect case with and without VIP. Without VIPs, clients connected to a node that died often wait through a 10-minute TCP timeout period before getting an error. As a result, you do not really have a good high-availability solution without using VIPs.
Note: After you are in the SQL stack and blocked on read/write requests, you need to use Fast Application Notification (FAN) to receive an interrupt. FAN is discussed in more detail in the High Availability of Connections lesson.
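For illustration, a client-side tnsnames.ora entry corresponding to the VIP case on the slide might look like the sketch below; the VIP host names are the ones shown on the slide, while the port number and LOAD_BALANCE setting are assumptions, not values prescribed by this course:

ERP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clusnode-1vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = clusnode-2vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA = (SERVICE_NAME = ERP))
  )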
Oracle Database 10g: Real Application Clusters 1-27

Database Control and RAC


Slide diagram: Database Control spans both nodes. Each node runs an instance, an OC4J container, an agent, and the EM application, with the EM repository stored in the cluster database. The EM pages are divided into Cluster pages (Cluster Home, Cluster Performance, Cluster Targets) and Cluster Database pages (Cluster Database Home, Performance, Administration, and Maintenance).

1-28

Copyright 2005, Oracle. All rights reserved.

Database Control and RAC With Real Application Clusters 10g, Enterprise Manager (EM) is the recommended management tool for the cluster as well as the database. EM delivers a single-system image of RAC databases, providing consolidated screens for managing and monitoring individual cluster components. The integration with the cluster allows EM to report status and events, offer suggestions, and show configuration information for the storage and the operating system. This information is available from the Cluster page in a summary form. The flexibility of EM allows you to drill down easily on any events or information that you want to explore. For example, you can use EM to administer your entire processing environment, not just the RAC database. EM enables you to manage a RAC database with its instance targets, listener targets, host targets, and a cluster target, as well as the ASM targets if you are using ASM storage for your database. EM has two different management frameworks: Grid Control and Database Control. RAC is supported in both modes. Database Control is configured within the same ORACLE_HOME of your database target and can be used to manage only one database at a time. Alternatively, Grid Control can be used to manage multiple databases, iAS, and other target types in your enterprise across different ORACLE_HOME directories. The diagram shows you the main divisions that can be seen from the various EM pages.

Oracle Database 10g: Real Application Clusters 1-28

Summary

In this lesson, you should have learned how to:
- Recognize the various components of CRS and RAC
- Use the various types of files in a RAC database
- Share database files across a cluster
- Use services with RAC

1-29

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 1-29

RAC Installation and Configuration (Part I)

Copyright 2005, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do the following:
- Describe the installation of Oracle Database 10g Real Application Clusters (RAC)
- Perform RAC preinstallation tasks
- Perform cluster setup tasks
- Install Oracle Cluster File System (OCFS)
- Install Oracle Cluster Ready Services

2-2

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 2-2

Oracle Database 10g RAC Installation: New Features


Oracle Database 10g RAC incorporates a twophase installation process:
Phase one installs Cluster Ready Services (CRS). Phase two installs the Oracle Database 10g software with RAC.

New pages and dialogs for the Oracle Universal Installer are introduced. The Virtual Internet Protocol Configuration Assistant (VIPCA) tool is used to configure virtual IPs.

2-3

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g RAC Installation: New Features
The installation of Oracle Database 10g requires that you perform a two-phase process in which you run the Oracle Universal Installer (OUI) twice. The first phase installs Oracle Cluster Ready Services Release 1 (10.1.0.2). Cluster Ready Services (CRS) provides high-availability components, and it can also interact with the vendor clusterware, if present, to coordinate cluster membership information. The second phase installs the Oracle Database 10g software with RAC. The installation also enables you to configure services for your RAC environment. If you have a previous Oracle cluster database version, the OUI activates the Database Upgrade Assistant (DBUA) to automatically upgrade your pre-Oracle 10g cluster database. The Oracle Database 10g installation process provides a single-system image, ease of use, and accuracy for RAC installations and patches. There are new and changed pages and dialogs for the OUI, the Database Configuration Assistant (DBCA), and the DBUA. The VIPCA is a new tool for this release. The enhancements include the following:
- The OUI Cluster Installation Mode page enables you to select whether to perform a cluster Oracle Database 10g installation or to perform a single-instance Oracle Database 10g installation.

Oracle Database 10g: Real Application Clusters 2-3

Oracle Database 10g RAC Installation: New Features (continued)
- The DBCA Services page enables you to configure services for your RAC environment.
- The VIPCA pages enable you to configure virtual Internet protocol addresses for your RAC database.
- The gsdctl command is obsolete. The CRS installation stops any group services daemon (GSD) processes.
- The cluster manager on all platforms in Oracle Database 10g is known as Cluster Synchronization Services (CSS). The Oracle Cluster Synchronization Service Daemon (OCSSD) performs this function.
- The Oracle Database 10g version of the srvConfig.loc file is the ocr.loc file. The Oracle9i version of srvConfig.loc still exists for backward compatibility.

Oracle Database 10g: Real Application Clusters 2-4

Oracle Database 10g RAC Installation: Outline


1. Complete preinstallation tasks:
   - Hardware requirements
   - Software requirements
   - Environment configuration, kernel parameters, and so on
2. Perform CRS installation.
3. Perform Oracle Database 10g software installation.
4. Perform cluster database creation.
5. Complete postinstallation tasks.

2-5

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g RAC Installation: Outline To successfully install Oracle Database 10g RAC, it is important that you have an understanding of the tasks that must be completed and the order in which they must occur. Before the installation can begin in earnest, each node that is going to be part of your RAC installation must meet the hardware and software requirements that are covered in this lesson. You must perform step-by-step tasks for hardware and software verification, as well as for the platform-specific preinstallation procedures. You must install the operating system patches (Red Hat Package Managers [RPMs]) required by the cluster database, and you must verify that the kernel parameters are correct for your needs. CRS must be installed by using the OUI. Make sure that your cluster hardware is functioning normally before you begin this step. Failure to do so results in an aborted or nonoperative installation. After CRS has been successfully installed and tested, again use the OUI to install the Oracle Database 10g software, including software options required for a RAC configuration. Although it is possible to create the database by using the OUI, using the DBCA to create it after the software is installed enables you some extra configuration flexibility. After the database has been created, there are a few postinstallation tasks that must be completed before your RAC database is fully functional. The remainder of this lesson provides you with the necessary knowledge to complete these tasks successfully.
Oracle Database 10g: Real Application Clusters 2-5

Preinstallation Tasks

- Check system requirements
- Check software requirements
- Create groups and users
- Configure kernel parameters
- Perform cluster setup

2-6

Copyright 2005, Oracle. All rights reserved.

Preinstallation Tasks Several tasks must be completed before CRS and Oracle Database 10g software can be installed. Some of these tasks are common to all Oracle database installations and should be familiar to you. Others are specific to Oracle Database 10g RAC. Attention to details here simplifies the rest of the installation process. Failure to complete these tasks can certainly affect your installation and possibly force you to restart the process from the beginning.

Oracle Database 10g: Real Application Clusters 2-6

Hardware Requirements
At least 512 MB of physical memory is needed.
# grep MemTotal /proc/meminfo MemTotal: 763976 kB

A minimum of 1 GB of swap space is required.


# grep SwapTotal /proc/meminfo SwapTotal: 1566328 kB

The /tmp directory should be at least 400 MB.


# df -k /tmp
Filesystem  1K-blocks   Used      Available   Use%
/dev/hdb1   3020140     2432180   434544      85%

The Oracle Database 10g software requires up to 4 GB of disk space.


Copyright 2005, Oracle. All rights reserved.

2-7

Hardware Requirements
The system must meet the following minimum hardware requirements:
At least 512 megabytes of physical memory is needed. To determine the amount of physical memory, enter the following command:
grep MemTotal /proc/meminfo

A minimum of one gigabyte of swap space, or twice the amount of physical memory, is needed. On systems with two gigabytes or more of memory, the swap space can be between one and two times the amount of physical memory. To determine the size of the configured swap space, enter the following command:
grep SwapTotal /proc/meminfo

At least 400 megabytes of disk space must be available in the /tmp directory. To determine the amount of disk space available in the /tmp directory, enter the following command: df -k /tmp Up to four gigabytes of disk space is required for the Oracle Database 10g software, depending on the installation type. The df command can be used to check for the availability of the required disk space.

Oracle Database 10g: Real Application Clusters 2-7

Network Requirements

Each node must have at least two network adapters. Each public network adapter must support TCP/IP. The interconnect adapter must support User Datagram Protocol (UDP). The host name and IP address associated with the public interface must be registered in the domain name service (DNS) or the /etc/hosts file.

2-8

Copyright 2005, Oracle. All rights reserved.

Network Requirements Each node must have at least two network adapters: one for the public network interface and the other for the private network interface or interconnect. In addition, the interface names associated with the network adapters for each network must be the same on all nodes. For the public network, each network adapter must support TCP/IP. For the private network, the interconnect must support UDP using high-speed network adapters and switches that support TCP/IP. Gigabit Ethernet or an equivalent is recommended. Before starting the installation, each node requires an IP address and an associated host name registered in the DNS or the /etc/hosts file for each public network interface. One unused virtual IP address and an associated virtual host name registered in the DNS or the /etc/hosts file that you configure for the primary public network interface is needed for each node. The virtual IP address must be in the same subnet as the associated public interface. After installation, you can configure clients to use the virtual host name or IP address. If a node fails, its virtual IP address fails over to another node. For the private IP address and optional host name for each private interface, Oracle recommends that you use private network IP addresses for these interfaces, for example, 10.*.*.* or 192.168.*.*. You can use the /etc/hosts file on each node to associate private host names with private IP addresses.
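Putting these recommendations together, a two-node /etc/hosts file might look like the following sketch; the host names and addresses reuse examples that appear elsewhere in this lesson, while the -vip and -priv suffixes and the specific VIP addresses are assumptions for illustration only:

# Public addresses
148.2.65.11    stc-raclin01
148.2.65.12    stc-raclin02
# Virtual IP addresses (same subnet as the public interface, unused before installation)
148.2.65.21    stc-raclin01-vip
148.2.65.22    stc-raclin02-vip
# Private interconnect addresses
192.168.1.11   stc-raclin01-priv
192.168.1.12   stc-raclin02-priv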
Oracle Database 10g: Real Application Clusters 2-8

RAC Network Software Requirements

Supported interconnect software protocols are required:


- TCP/IP
- UDP
- Remote Shared Memory
- Hyper Messaging protocol
- Reliable Data Gram

Token Ring is not supported on AIX platforms.

2-9

Copyright 2005, Oracle. All rights reserved.

RAC Network Software Requirements
Each node in a cluster requires a supported interconnect software protocol to support Cache Fusion, and TCP/IP to support CRS polling. In addition to UDP, other supported vendor-specific interconnect protocols include Remote Shared Memory, Hyper Messaging protocol, and Reliable Data Gram. Note that Token Ring is not supported for cluster interconnects on AIX. Your interconnect must be certified by Oracle for your platform. You should also have a Web browser to view online documentation. Oracle's clusterware provides functionality equivalent to that required from vendor clusterware, and using Oracle clusterware reduces installation and support complications. However, vendor clusterware may be needed if you use a non-Ethernet interconnect or if you have deployed clusterware-dependent applications on the same cluster where you deploy RAC.

Oracle Database 10g: Real Application Clusters 2-9

Package Requirements

Required packages and versions for Red Hat 3.0:
gcc-3.2.3-2
compat-db-4.0.14.5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122
openmotif21-2.1.30-8
setarch-1.3-1

2-10

Copyright 2005, Oracle. All rights reserved.

Package Requirements Depending on the products that you intend to install, verify that the packages listed in the slide above are installed on the system. The OUI performs checks on your system to verify that it meets the Linux package requirements of the cluster database and related services. To ensure that these checks succeed, verify the requirements before you start the OUI. To determine whether the required packages are installed, enter a command similar to the following:
# rpm -q package_name
# rpm -qa | grep package_name_segment

For example, to check the gcc compatibility packages, run the following command:
# rpm -qa | grep compat
compat-db-4.0.14.5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122

If a package is not installed, install it from your Linux distribution media as the root user by using the rpm -i command. For example, to install the compat-db package, use the following command:
# rpm -i compat-db-4.0.14.5.i386.rpm

Oracle Database 10g: Real Application Clusters 2-10

hangcheck-timer Module Configuration

The hangcheck-timer module monitors the Linux kernel for hangs. Make sure that the hangcheck-timer module is running on all nodes:

# /sbin/lsmod | grep -i hang
Module            Size   Used by    Not tainted
hangcheck-timer   2648   0          (unused)

Add entry to start the hangcheck-timer module on all nodes, if necessary:


# vi /etc/rc.local
/sbin/insmod hangcheck-timer hangcheck_tick=30 \
  hangcheck_margin=180

2-11

Copyright 2005, Oracle. All rights reserved.

hangcheck-timer Module Configuration
Another component of the required system software for Linux platforms is the hangcheck-timer kernel module. With the introduction of Red Hat 3.0, this module is part of the operating system distribution. The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that can affect the reliability of a RAC node and cause database corruption. If a hang occurs, the module reboots the node. Verify that the hangcheck-timer module is loaded by running the lsmod command as the root user:
/sbin/lsmod | grep -i hang
If the module is not running, you can load it manually by using the insmod command:
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

The hangcheck_tick parameter defines how often, in seconds, the hangcheck-timer module checks the node for hangs. The default value is 60 seconds. The hangcheck_margin parameter defines how long, in seconds, the timer waits for a response from the kernel. The default value is 180 seconds. If the kernel fails to respond within the sum of the hangcheck_tick and hangcheck_margin parameter values, then the hangcheck-timer module reboots the system. Using the default values, the node is rebooted if the kernel fails to respond within 240 seconds. This module must be loaded on each node of your cluster. To ensure that the module is loaded every time the system reboots, verify that the local system startup file contains the command shown in the example, or add the command to the /etc/rc.d/rc.local file.
Oracle Database 10g: Real Application Clusters 2-11

Required UNIX Groups and Users

Create an oracle user, a dba, and an oinstall group on each node:

# groupadd -g 500 oinstall
# groupadd -g 501 dba
# useradd -u 500 -d /home/oracle -g "oinstall" \
  -G "dba" -m -s /bin/bash oracle

Verify the existence of the nobody nonprivileged user.

# grep nobody /etc/passwd
nobody:x:99:99:Nobody:/:/sbin/nologin

2-12

Copyright 2005, Oracle. All rights reserved.

Required UNIX Groups and Users You must create the oinstall group the first time you install the Oracle database software on the system. This group owns the Oracle inventory, which is a catalog of all the Oracle database software installed on the system. You must create the dba group the first time you install the Oracle database software on the system. It identifies the UNIX users that have database administrative privileges. If you want to specify a group name other than the default dba group, you must choose the custom installation type to install the software, or start the OUI as a user that is not a member of this group. In this case, the OUI prompts you to specify the name of this group. It is recommended that the root user be a member of the dba group for CRS considerations. You must create the oracle user the first time you install the Oracle database software on the system. This user owns all the software installed during the installation. The usual name chosen for this user is oracle. This user must have the Oracle Inventory group as its primary group. It must also have the OSDBA (dba) group as the secondary group. You must verify that the unprivileged user named nobody exists on the system. The nobody user must own the external jobs (extjob) executable after the installation.

Oracle Database 10g: Real Application Clusters 2-12

The oracle User Environment

Set umask to 022. Set the DISPLAY environment variable. Set the ORACLE_BASE environment variable. Set the TMP and TMPDIR variables, if needed.

$ cd $ vi .bash_profile umask 022 ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE TMP=/u01/mytmp; export TMP TMPDIR=$TMP; export TMPDIR

2-13

Copyright 2005, Oracle. All rights reserved.

The oracle User Environment You must run the OUI as the oracle user. However, before you start the OUI, you must configure the environment of the oracle user. To configure the environment, you must: Set the default file mode creation mask (umask) to 022 in the shell startup file Set the DISPLAY and ORACLE_BASE environment variables Secure enough temporary disk space for the OUI If the /tmp directory has less than 400 megabytes of free disk space, identify a file system that is large enough and set the TMP and TMPDIR environment variables to specify a temporary directory on this file system. Use the df -k command to identify a suitable file system with sufficient free space. Make sure that the oracle user and the oinstall group can write to the directory.
# df -k
Filesystem  1K-blocks   Used      Available   Use%  Mounted on
/dev/hdb1   3020140     2471980   394744      87%   /
/dev/hdb2   3826584     33020     3599180     1%    /home
/dev/dha1   386008      200000    186008      0%    /dev/shm
/dev/hdb5   11472060    2999244   7890060     28%   /u01
/dev/sda1   8030560     1389664   6640896     18%   /ocfs
# mkdir /u01/mytmp
# chmod 777 /u01/mytmp

Oracle Database 10g: Real Application Clusters 2-13

User Shell Limits

Add the following lines to the /etc/security/limits.conf file:
*   soft   nproc    2047
*   hard   nproc    16384
*   soft   nofile   1024
*   hard   nofile   65536
Add the following line to the /etc/pam.d/login file:
session   required   /lib/security/pam_limits.so

2-14

Copyright 2005, Oracle. All rights reserved.

User Shell Limits To improve the performance of the software, you must increase the following shell limits for the oracle user: nofile: The maximum number of open file descriptors should be 65536. nproc: The maximum number of processes available to a single user must not be less than 16384. The hard values, or upper limits, for these parameters can be set in the /etc/security/limits.conf file as shown in the slide above. The entry configures Pluggable Authentication Modules (PAM) to control session security. PAM is a system of libraries that handle the authentication tasks of applications (services) on the system. The principal feature of the PAM approach is that the nature of the authentication is dynamically configurable.
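In addition to the PAM entry above, a common complementary step (an assumption here, not an explicit step in this course) is to raise the limits automatically when the oracle user logs in, for example by adding lines such as the following to /etc/profile for the bash shell:

if [ "$USER" = "oracle" ]; then
    # Raise the process and open-file limits for the oracle user
    ulimit -u 16384
    ulimit -n 65536
fi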

Oracle Database 10g: Real Application Clusters 2-14

Configuring User Equivalency

1. Edit the /etc/hosts.equiv file. 2. Insert both private and public node names for each node in your cluster.
# vi /etc/hosts.equiv stc-raclin01 stc-raclin02

3. Test the configuration by using rsh


# rsh stc-raclin01 uname -r
# rsh stc-raclin02 uname -r

2-15

Copyright 2005, Oracle. All rights reserved.

Configuring User Equivalency The OUI detects whether the machine on which you are running the OUI is part of the cluster. If it is, you are prompted to select the nodes from the cluster on which you would like the patch set to be installed. For this to work properly, user equivalence must be in effect for the oracle user on each node of the cluster. To enable user equivalence, make sure that the /etc/hosts.equiv file exists on each node with an entry for each trusted host. For example, if the cluster has two nodes, stc-raclin01 and stc-raclin02, the hosts.equiv files should look like this:
[root@stc-raclin01]# cat /etc/hosts.equiv stc-raclin01 stc-raclin02 [root@stc-raclin02]# cat /etc/hosts.equiv stc-raclin01 stc-raclin02

Using ssh The Oracle 10g Universal Installer also supports ssh and scp (OpenSSH) for remote installs. The ssh command is a secure replacement for the rlogin, rsh, and telnet commands. To connect to an OpenSSH server from a client machine, you must have the openssh packages installed on the client machine.

Oracle Database 10g: Real Application Clusters 2-15

Configuring User Equivalency (continued)


$ rpm -qa | grep -i openssh
openssh-clients-3.6.1p2-18
openssh-3.6.1p2-18
openssh-askpass-3.6.1p2-18
openssh-server-3.6.1p2-18
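If you prefer ssh-based user equivalence over rsh, a minimal sketch (an assumption, not a step prescribed by this course) is to generate a key pair as the oracle user on each node and append each public key to ~/.ssh/authorized_keys on every node, for example:

$ ssh-keygen -t rsa                 # run as oracle on each node; accept the defaults
$ ssh stc-raclin02 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
$ ssh stc-raclin02 chmod 600 .ssh/authorized_keys
$ ssh stc-raclin02 date             # should return the date without prompting for a password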

Oracle Database 10g: Real Application Clusters 2-16

Required Directories for the Oracle Database Software


You must identify or create four directories for the Oracle database software: Oracle base directory Oracle inventory directory CRS home directory Oracle home directory

2-17

Copyright 2005, Oracle. All rights reserved.

Required Directories for the Oracle Database Software The Oracle base (ORACLE_BASE) directory acts as a top-level directory for the Oracle database software installations. On UNIX systems, the Optimal Flexible Architecture (OFA) guidelines recommend that you must use a path similar to the following for the Oracle base directory:
/mount_point/app/oracle_sw_owner

where mount_point is the mount-point directory for the file system that contains the Oracle database software and oracle_sw_owner is the UNIX username of the Oracle database software owner, which is usually oracle. You must create the ORACLE_BASE directory before starting the installation. A minimum of four gigabytes of disk space is needed. When installing on Linux, do not create the Oracle base directory on an OCFS file system. The Oracle inventory directory (oraInventory) stores the inventory of all software installed on the system. It is required by, and shared by, all the Oracle database software installations on a single system. The first time you install the Oracle database software on a system, the OUI prompts you to specify the path to this directory. If you are installing the software on a local file system, it is recommended that you choose the following path: ORACLE_BASE/oraInventory The OUI creates the directory that you specify and sets the correct owner, group, and permissions on it.
Oracle Database 10g: Real Application Clusters 2-17

Required Directories for the Oracle Database Software (continued) The CRS home directory is the directory where you choose to install the software for Oracle CRS. You must install CRS in a separate home directory. When you run the OUI, it prompts you to specify the path to this directory, as well as a name that identifies it. The directory that you specify must be a subdirectory of the Oracle base directory. It is recommended that you specify a path similar to the following for the CRS home directory:
ORACLE_BASE/product/10.1.0/crs_1

The Oracle home directory is the directory where you choose to install the software for a particular Oracle product. You must install different Oracle products, or different releases of the same Oracle product, in separate Oracle home directories. When you run the OUI, it prompts you to specify the path to this directory, as well as a name that identifies it. The directory that you specify must be a subdirectory of the Oracle base directory. It is recommended that you specify a path similar to the following for the Oracle home directory:
ORACLE_BASE/product/10.1.0/db_1
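Putting these recommendations together, and assuming /u01/app/oracle as the Oracle base directory (the same example location used elsewhere in this lesson), the directories could be created as the root user like this:

# mkdir -p /u01/app/oracle/product/10.1.0/crs_1
# mkdir -p /u01/app/oracle/product/10.1.0/db_1
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle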

Oracle Database 10g: Real Application Clusters 2-18

Linux Kernel Parameters


Parameter             Value                               File
semmsl                250                                 /proc/sys/kernel/sem
semmns                32000                               /proc/sys/kernel/sem
semopm                100                                 /proc/sys/kernel/sem
semmni                128                                 /proc/sys/kernel/sem
shmall                2097152                             /proc/sys/kernel/shmall
shmmax                Half the size of physical memory    /proc/sys/kernel/shmmax
shmmni                4096                                /proc/sys/kernel/shmmni
file-max              65536                               /proc/sys/fs/file-max
ip_local_port_range   1024 65000                          /proc/sys/net/ipv4/ip_local_port_range
2-19

Copyright 2005, Oracle. All rights reserved.

Linux Kernel Parameters Verify that the kernel parameters shown in the table above are set to values greater than or equal to the recommended value shown. Use the sysctl command to view the default values of the various parameters. For example, to view the semaphore parameters, run the following command:
# sysctl -a|grep sem kernel.sem = 250 32000 32 128

The values shown represent semmsl, semmns, semopm, and semmni in that order. Kernel parameters that can be manually set include: SEMMNS: The number of semaphores in the system SEMMNI: The number of semaphore set identifiers that control the number of semaphore sets that can be created at any one time SEMMSL: Semaphores are grouped into semaphore sets, and SEMMSL controls the array size, or the number of semaphores that are contained per semaphore set. It should be about ten more than the maximum number of the Oracle processes. SEMOPM: The maximum number of operations per semaphore operation call SHMMAX: The maximum size of a single shared-memory segment. This must be slightly larger than the largest anticipated size of the System Global Area (SGA), if possible. SHMMNI: The number of shared memory identifiers
Oracle Database 10g: Real Application Clusters 2-19

Linux Kernel Parameters (continued) You can adjust these semaphore parameters manually by writing the contents of the /proc/sys/kernel/sem file:
# echo SEMMSL_value SEMMNS_value SEMOPM_value \ SEMMNI_value > /proc/sys/kernel/sem

To change these parameter values and make them persistent, edit the /etc/sysctl.conf file as follows:
# vi /etc/sysctl.conf ... kernel.sem = 250 32000 100 128 kernel.shmall = 2097152 kernel.shmmax = 2147483648 kernel.shmmni = 4096 fs.file-max = 65536 net.ipv4.ip_local_port_range = 1024 65000

Note: The kernel parameters shown above are recommended values only. For production database systems, it is recommended that you tune these values to optimize the performance of the system.
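After editing /etc/sysctl.conf, the new values can be loaded without a reboot; this is a standard Linux step shown here as a convenience rather than quoted from this course:

# /sbin/sysctl -p
# /sbin/sysctl -a | grep -E 'sem|shm|file-max|ip_local_port_range'   # verify the new values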

Oracle Database 10g: Real Application Clusters 2-20

Cluster Setup Tasks

1. View the Certifications by Product section at http://metalink.oracle.com/. 2. Verify your high-speed interconnects. 3. Determine the shared storage (disk) option for your system:
- OCFS or other shared file system solution
- Raw devices
- ASM

4. Install the necessary operating system patches.

2-21

Copyright 2005, Oracle. All rights reserved.

Cluster Setup Tasks
Ensure that you have a certified combination of the operating system and the Oracle database software version by referring to the certification information on Oracle MetaLink in the Availability & Certification section. See the Certifications by Product section at:
http://metalink.oracle.com
Verify that your cluster interconnects are functioning properly. If you are using vendor-specific clusterware, follow the vendor's instructions to ensure that it is functioning properly. Determine the storage option for your system, and configure the shared disk. Oracle recommends that you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a cluster file system such as OCFS. If you use ASM or a cluster file system, you can also utilize OMF and other Oracle Database 10g storage features. If the operating system requires specific patches or RPMs in support of your cluster software, apply them before installing any Oracle database software.
Note: For more information about ASM, refer to the lessons titled ASM and Administering Storage in RAC in this course.

Oracle Database 10g: Real Application Clusters 2-21

Obtaining OCFS

To get OCFS for Linux, visit the Web site at http://oss.oracle.com/projects/ocfs/files. Download the following Red Hat Package Manager (RPM) packages:
ocfs-support-1.0-n.i686.rpm ocfs-tools-1.0-n.i686.rpm

Download the following RPM kernel module: ocfs-2.4.21-EL-typeversion.rpm, where typeversion is the Linux version.

2-22

Copyright 2005, Oracle. All rights reserved.

Obtaining OCFS Download OCFS for Linux in a compiled form from the following Web site: http://oss.oracle.com/projects/ocfs/ In addition, you must download the following RPM packages: ocfs-support-1.0-n.i686.rpm ocfs-tools-1.0-n.i686.rpm Also, download the RPM kernel module ocfs-2.4.21-4typeversion.rpm, where the variable typeversion stands for the type and version of the kernel that is used. Use the following command to find out which Red Hat kernel version is installed on your system:
uname -a

The alphanumeric identifier at the end of the kernel name indicates the kernel version that you are running. Download the kernel module that matches your kernel version. For example, if the kernel name that is returned with the uname command ends with -21.EL, download the ocfs-2.4.9-21-EL-1.0.11-1.rpm kernel module. Note: Ensure that you use the SMP or enterprise kernel that is shipped with Red Hat Advanced Server 3.0 without any non-Red Hat supplied patches or customization. If you modify the kernel, Oracle Corporation cannot support it.

Oracle Database 10g: Real Application Clusters 2-22

Installing the OCFS RPM Packages

1. Install the support RPM file: ocfs-support-1.0.-n.i686.rpm


# rpm -i ocfs-support-1.0.10-1.i686.rpm

2. Install the correct kernel module RPM file: ocfs-2.4.21-ELtypeversion.rpm


# rpm -i ocfs-2.4.21-EL-1.0.13-1.i686.rpm

3. Install the tools RPM file: ocfs-tools-1.0-n.i686.rpm


# rpm -i ocfs-tools-1.0-10.i686.rpm

2-23

Copyright 2005, Oracle. All rights reserved.

Installing the OCFS RPM Packages Use the following procedure to prepare the environment to run OCFS. Note that you must perform all the steps as the root user and that each step must be performed on all the nodes of the cluster. Install the support RPM file, ocfs-support-1.0.-n.i686.rpm, and then the correct kernel module RPM file for your system. Next, install the tools RPM file, ocfstools-1.0-n.i686.rpm. Note that n represents the most current release of the support and tools RPM (for example, ocfs-tools-1.0.10-1.i686.rpm). To install the files, enter the following command:
# rpm -i ocfs_rpm_package

where the variable ocfs_rpm_package is the name of the RPM package that you are installing. For example, to install the kernel module RPM file for the 21.EL enterprise kernel, you must enter the following command:
# rpm -i ocfs-2.4.21-EL-1.0.13-1.i686.rpm

Make sure all OCFS rpms are installed by running an rpm query:
# rpm -qa|grep -i ocfs ocfs-2.4.21-EL-1.0.13-1 ocfs-support-1.0.10-1 ocfs-tools-1.0.10-1

Oracle Database 10g: Real Application Clusters 2-23

Starting ocfstool
# /usr/bin/ocfstool&

2-24

Copyright 2005, Oracle. All rights reserved.

Starting ocfstool Use the ocfstool utility to generate the /etc/ocfs.conf file. The ocfstool utility is a graphical application. Therefore, you must be sure that your DISPLAY variable is properly set. Start up ocfstool as shown in the following example:
# DISPLAY=:0.0 # export DISPLAY # /usr/bin/ocfstool&

The OCFS Tool window appears in a new window. Click in the window to make it active, and select the Generate Config option from the Tasks menu. The OCFS Generate Config window is displayed.

Oracle Database 10g: Real Application Clusters 2-24

Generating the ocfs.conf File

Confirm that the values are correct.

View the /etc/ocfs.conf file.

$ cat /etc/ocfs.conf
# Ensure this file exists in /etc
node_name = stc-raclin01
node_number = 1
ip_address = 148.2.65.11
ip_port = 7000
guid = 98C704EBD14F6EBC68660060976E5460
2-25 Copyright 2005, Oracle. All rights reserved.

Generating the ocfs.conf File When the OCFS Generate Config window appears, check the values that are displayed in the window to confirm that they are correct, and then click the OK button. Based on the information that is gathered from your installation, the ocfstool utility generates the necessary /etc/ocfs.conf file. After the generation is completed, open the /etc/ocfs.conf file in a text file tool and verify that the information is correct before continuing. The guid value is generated from the Ethernet adapter hardware address and must not be edited manually. If the adapter is switched or replaced, remove the ocfs.conf file and regenerate it or run the ocfs_uid_gen utility that is located in /sbin.

Oracle Database 10g: Real Application Clusters 2-25

Preparing the Disks

1. Partition the disk for the OCFS file system 2. Create the necessary mount points 3. Load the ocfs module and start ocfstool 4. Format and mount the partitions

2-26

Copyright 2005, Oracle. All rights reserved.

Preparing the Disks
By using the fdisk utility, partition the disk to allocate space for the OCFS (or ASM) file system(s) according to your storage needs. You should partition your system in accordance with Oracle Optimal Flexible Architecture (OFA) standards. In Linux, SCSI disk devices are named by using the following convention:
- sd: SCSI disk
- a-z: Disks 1 through 26
- 1-4: Partitions one through four
After the partitions are created, use the mkdir command to create the mount points for the OCFS file system:
# mkdir /ocfs1 /ocfs2 /ocfs3 (more as needed) # chown oracle:dba /ocfs1 ...

As the root user, load the OCFS module and start the ocfstool utility:
# load_ocfs
/sbin/insmod ocfs node_name=stc-raclin01 ip_address=192.168.1.11 cs=1807 guid=191C46E04CE4C1130B840050BFABD260 comm_voting=1 ip_po0
Using /lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o
Module ocfs loaded

# /sbin/ocfstool&
Go to Tasks on the menu bar and click Format. After supplying the needed information, click OK in the OCFS Format window. When finished, click the Mount button.
Oracle Database 10g: Real Application Clusters 2-26

Loading OCFS at Startup

The /etc/rc5.d/S24ocfs file loads OCFS at startup.


# more S24ocfs ... case "`basename $0`" in *ocfs) MODNAME=ocfs FSNAME=OCFS LOAD_OCFS=/sbin/load_ocfs ;; *ocfs2) MODNAME=ocfs2 FSNAME=OCFS2 LOAD_OCFS=/sbin/load_ocfs2 ;; ... esac...

2-27

Copyright 2005, Oracle. All rights reserved.

Loading OCFS at Startup
To start OCFS, the ocfs.o module must be loaded at system startup, before CRS is started. The startup script /etc/rc5.d/S24ocfs is provided to do this. This script is designed to run before /etc/rc5.d/S25netfs, which is responsible for mounting network file systems such as NFS, SAMBA, and of course OCFS. The CRS startup script, /etc/rc5.d/S96init.crs, runs after the aforementioned scripts and needs a mounted OCFS volume. The exception to this is to place the voting disk and CRS repository on raw devices or ASM volumes. The startup script is capable of handling OCFS versions 1 and 2. It is not advisable to modify this startup script directly.

Oracle Database 10g: Real Application Clusters 2-27

Mounting OCFS on Startup

Edit /etc/fstab, and add lines similar to these:


/dev/sda1  /ocfs1  ocfs  _netdev  uid=500,gid=502
/dev/sda2  /ocfs2  ocfs  _netdev  uid=500,gid=502

- The _netdev option prevents mount attempts before the S24ocfs script runs.
- uid is the user ID of the oracle user as defined in /etc/passwd.
oracle:x:500:501::/home/oracle/:/bin/bash

gid is the group ID of the dba group as defined in /etc/group.


dba:x:502:

2-28

Copyright 2005, Oracle. All rights reserved.

Mounting OCFS on Startup To mount the file system automatically on startup, add lines similar to the following to the /etc/fstab file for each OCFS file system:
/dev/sda1 /ocfs1 ocfs _netdev uid=500,gid=502

Ensure that the OCFS file systems are mounted in sequence, node after node, and wait for each mount to complete before starting the mount on the next node. The OCFS file systems must be mounted after the standard file systems as indicated below:
# cat /etc/fstab
LABEL=/     /       ext3   defaults   1 1
...
LABEL=/tmp  /tmp    ext3   defaults   1 2
LABEL=/usr  /usr    ext3   defaults   1 2
LABEL=/var  /var    ext3   defaults   1 2
/dev/sdb2   swap    swap   defaults   0 0
...
/dev/sda1   /ocfs1  ocfs   _netdev    uid=500,gid=502
/dev/sda2   /ocfs2  ocfs   _netdev    uid=500,gid=502

Note: The OCFS file systems must be mounted after the OCFS module is loaded.
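Before relying on the fstab entries, it can be useful to mount and check one volume by hand; the following is a generic sketch using standard Linux commands rather than steps quoted from this course:

# mount -t ocfs /dev/sda1 /ocfs1        # mount a single OCFS volume manually
# mount | grep ocfs                     # confirm the volume and its mount options
# df -k /ocfs1                          # confirm the expected size and free space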

Oracle Database 10g: Real Application Clusters 2-28

Using Raw Partitions

1. Install shared disks 2. Identify the shared disks to use 3. Partition the device
# fdisk -l
Disk /dev/sda: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
...
# fdisk /dev/sda
...

2-29

Copyright 2005, Oracle. All rights reserved.

Using Raw Partitions Although Red Hat Enterprise Linux 3.0 and SLES 8 provide a Logical Volume Manager (LVM), this LVM is not cluster aware. For this reason, Oracle does not support the use of logical volumes with RAC for either CRS or database files on Linux. The use of logical volumes for raw devices is supported only for single-instance databases. They are not supported for RAC databases. To create the required raw partitions, perform the following steps: 1. If necessary, install the shared disks that you intend to use, and reboot the system. 2. To identify the device name for the disks that you want to use for the database, enter the following command:
# /sbin/fdisk -l
Disk /dev/sda: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 9173 MB, 9173114880 bytes
255 heads, 63 sectors/track, 1115 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
...

Oracle Database 10g: Real Application Clusters 2-29

Using Raw Partitions


Number of Partitions   Partition Size (MB)       Purpose
1                      500                       SYSTEM tablespace
1                      300 + 250 per instance    SYSAUX tablespace
1 per instance         500                       UNDOTBSn tablespace
1                      160                       EXAMPLE tablespace
1                      120                       USERS tablespace
2 per instance         120                       2 online redo logs per instance
2                      110                       First and second control files
1                      250                       TEMP tablespace
1                      5                         Server parameter file (SPFILE)
1                      5                         Password file
1                      100                       Volume for OCR
1                      20                        Oracle CRS voting disk
2-30

Copyright 2005, Oracle. All rights reserved.

Using Raw Partitions (continued) 3. Partition the devices.You can create the required raw partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine whether the device contains unused cylinders. Identify the number and size of the raw files that you need for your installation. Use the chart above as a starting point in determining your storage needs. Use the following guidelines when creating partitions: - Use the p command to list the partition table of the device. - Use the n command to create a new partition. - After you have created all the required partitions on this device, use the w command to write the modified partition table to the device.
# fdisk /dev/sda
Command (m for help): n
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1020, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1020): +500M   # SYSTEM tablespace
Command (m for help): w
The partition table has been altered!

Oracle Database 10g: Real Application Clusters 2-30

Binding the Partitions

1. Identify the devices that are already bound.


# /usr/bin/raw -qa

2. Edit the /etc/sysconfig/rawdevices file.


# /etc/sysconfig/rawdevices file
# raw device bindings
/dev/raw/raw1 /dev/sda1

3. Adjust the ownership and permissions of the OCR file to root:dba and 640, respectively. 4. Adjust the ownership and permissions of all other raw files to oracle:dba and 660, respectively. 5. Execute the rawdevices command.
2-31 Copyright 2005, Oracle. All rights reserved.

Binding the Partitions 1. After you have created the required partitions, you must bind the partitions to raw devices. However, you must first determine which raw devices are already bound to other devices. To determine which raw devices are already bound to other devices, enter the following command:
# /usr/bin/raw -qa

Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device. 2. Open the /etc/sysconfig/rawdevices file in any text editor, and add a line similar to the following for each partition that you created:
/dev/raw/raw1 /dev/sda1

Specify an unused raw device for each partition. 3. For the raw device that you created for the Oracle Cluster Registry (OCR), enter commands similar to the following to set the owner, group, and permissions on the device file:
# chown root:dba /dev/raw/rawn # chmod 640 /dev/raw/rawn

Oracle Database 10g: Real Application Clusters 2-31

Binding the Partitions (continued) 4. For each additional raw device that you specified in the rawdevices file, enter commands similar to the following to set the owner, group, and permissions on the device file:
# chown oracle:oinstall /dev/raw/rawn
# chmod 660 /dev/raw/rawn

5. To bind the partitions to the raw devices, enter the following command:
# /sbin/service rawdevices restart

By editing the rawdevices file, the system binds the partitions to the raw devices when it reboots.
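As an illustration, a fragment of /etc/sysconfig/rawdevices for a few of the partitions in the sizing table might look like the sketch below; the specific partition-to-raw-device mapping is an assumption for this example, not a prescribed layout:

# raw device bindings: <raw device>  <block device partition>
/dev/raw/raw1   /dev/sda1    # SYSTEM tablespace
/dev/raw/raw2   /dev/sda2    # SYSAUX tablespace
/dev/raw/raw3   /dev/sda3    # OCR
/dev/raw/raw4   /dev/sda5    # CRS voting disk

After running the service rawdevices restart command, the bindings can be checked again with /usr/bin/raw -qa.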

Oracle Database 10g: Real Application Clusters 2-32

Raw Device Mapping File


1. Create a database directory, and set proper permissions.
# mkdir -p $ORACLE_BASE/oradata/dbname # chown oracle:oinstall $ORACLE_BASE/oradata # chmod 775 $ORACLE_BASE/oradata

2. Edit the $ORACLE_BASE/oradata/dbname/dbname_raw.conf file.
# cd $ORACLE_BASE/oradata/dbname/ # vi dbname_raw.conf

3. Set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
2-33 Copyright 2005, Oracle. All rights reserved.

Raw Device Mapping File To enable the DBCA to identify the appropriate raw partition for each database file, you must create a raw device mapping file, as follows: 1. Create a database file subdirectory under the Oracle base directory, and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname # chown -R oracle:oinstall $ORACLE_BASE/oradata # chmod -R 775 $ORACLE_BASE/oradata

2. Change directory to the $ORACLE_BASE/oradata/dbname directory, and edit the dbname_raw.conf file in any text editor to create a file similar to the following:
system=/dev/raw/raw1 sysaux=/dev/raw/raw2 example=/dev/raw/raw3 users=/dev/raw/raw4 temp=/dev/raw/raw5 undotbs1=/dev/raw/raw6 undotbs2=/dev/raw/raw7 ...

Oracle Database 10g: Real Application Clusters 2-33

Raw Device Mapping File (continued) Use the following guidelines when creating or editing this file: Each line in the file must have the following format:
database_object_identifier=raw_device_path

For a RAC database, the file must specify one automatic undo tablespace data file (undotbsn) and two redo log files (redon_1, redon_2) for each instance. Specify at least two control files (control1, control2). To use manual undo management instead of automatic undo management, specify a single RBS tablespace data file (rbs) instead of the automatic undo management tablespaces.
3. Save the file, and note the file name that you specified. When you configure the oracle user's environment later in this lesson, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
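For example, a minimal sketch of that environment setting, assuming a bash shell and a hypothetical database name of racdb, might look like this:

# Add to the oracle user's .bash_profile (racdb is a placeholder name)
DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/racdb/racdb_raw.conf
export DBCA_RAW_CONFIG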

Oracle Database 10g: Real Application Clusters 2-34

Installing Cluster Ready Services

$ /cdrom/crs/runInstaller

2-35

Copyright 2005, Oracle. All rights reserved.

Installing Cluster Ready Services Run the OUI by executing the runInstaller command from the /crs subdirectory on the Oracle Cluster Ready Services Release 1 (10.1.0.2) CD-ROM. This is a separate CD that contains the CRS software. When the OUI displays the Welcome page, click Next. If you are performing this installation in an environment in which you have never installed the Oracle database software (that is, the environment does not have an OUI inventory), the OUI displays the Specify Inventory directory and credentials page. If you are performing this installation in an environment where the OUI inventory is already set up, the OUI displays the Specify File Locations page instead of the Specify Inventory directory and credentials page.

Oracle Database 10g: Real Application Clusters 2-35

Specifying the Inventory Directory

# cd /u01/app/oracle/oraInventory
# ./orainstRoot.sh

2-36

Copyright 2005, Oracle. All rights reserved.

Specifying the Inventory Directory On the Specify Inventory directory and credentials page, enter the inventory location. If ORACLE_BASE has been properly set, the OUI suggests a directory location for the inventory in accordance with OFA guidelines. If ORACLE_BASE has not been set, enter the proper inventory location according to your requirements. Enter the UNIX group name (oinstall) in the Specify Operating System group name field, and then click Next. The OUI displays a dialog box requesting that you run the orainstRoot.sh script from the oraInventory directory. Open a terminal window to the host where the OUI is running, change directory to the oraInventory directory, and execute the script as the root user as follows:
# cd /u01/app/oracle/oraInventory
# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing groupname of /u01/app/oracle/oraInventory to oinstall.

After the script is run, click the Continue button to close the dialog box, and then click the Next button to continue.

Oracle Database 10g: Real Application Clusters 2-36

File Locations and Language Selection

2-37

Copyright 2005, Oracle. All rights reserved.

File Locations and Language Selection Next, the OUI displays the Specify File Locations page. The Specify File Locations page contains predetermined information for the source of the installation files and the target destination information. The OUI provides a CRS Home name in the Name field located in the Destination section of the page. You may accept the name or enter a new name at this time. If ORACLE_BASE has been set, an OFA-compliant directory path appears in the Path field located below the Destination section. If not, enter the location in the target destination, and click Next to continue. The OUI displays the Language Selection page next. Select the language that you want to use for your installation in the Available Languages list on the left of the page. Click the right arrow (>>) to move your selection to the Selected Languages list, and then click the Next button to continue.

Oracle Database 10g: Real Application Clusters 2-37

Cluster Configuration

2-38

Copyright 2005, Oracle. All rights reserved.

Cluster Configuration
The Cluster Configuration page displays predefined node information if the OUI detects that your system has vendor clusterware. Otherwise, the OUI displays the Cluster Configuration page without the predefined node information.
Node Names
- Vendor: Use vendor node names.
- Oracle: Use host names as returned by /bin/hostname:
# hostname
raclin01

Private Interconnect Names or IP Addresses
- Names must be resolvable by every node through DNS or /etc/hosts.
- Names must exist and be on the same subnet.
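For example, a minimal /etc/hosts sketch for a two-node cluster might resemble the following; the host names and addresses are hypothetical placeholders and must match your own public and private networks:

139.185.35.113   raclin01        # public
139.185.35.115   raclin02        # public
10.0.0.1         raclin01-priv   # private interconnect
10.0.0.2         raclin02-priv   # private interconnect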

Oracle Database 10g: Real Application Clusters 2-38

Private Interconnect Enforcement

2-39

Copyright 2005, Oracle. All rights reserved.

Private Interconnect Enforcement The Private Interconnect Enforcement page enables you to select the network interfaces on your cluster nodes to use for internode communication. Ensure that the network interfaces that you choose for the interconnect have enough bandwidth to support the cluster- and RAC-related network traffic. A gigabit Ethernet interface is highly recommended for the private interconnect. To configure the interface for private use, click in the Interface Type field for the interface that you have chosen (eth1 in the example in the slide), and select Private from the drop-down list. When you have finished, click the Next button to continue.

Oracle Database 10g: Real Application Clusters 2-39

Oracle Cluster Registry File

2-40

Copyright 2005, Oracle. All rights reserved.

Oracle Cluster Registry File When you click the Next button on the Private Interconnect Enforcement page, the OUI looks for the /etc/oracle/ocr.loc file (on Linux systems). If your environment is HP-UX or Solaris, the OUI looks in the /var/opt/oracle directory. On other UNIX systems, the OUI looks for the ocr.loc file in the /etc directory. If the ocr.loc file exists, and if the file has a valid entry for the OCR location, the Voting Disk Location page appears. Click the Next button to continue.
# cat /etc/oracle/*.loc
ocrconfig_loc=/ocfs/OCR/ocr.dbf
local_only=FALSE

Otherwise, the Oracle Cluster Registry page appears. Enter a fully qualified file name for the raw device or shared file system file for the OCR. Click Next. The Voting Disk page appears.

Oracle Database 10g: Real Application Clusters 2-40

Voting Disk File

2-41

Copyright 2005, Oracle. All rights reserved.

Voting Disk File On the Voting Disk page, enter a complete path and file name for the file in which you want to store the voting disk. This must be a shared raw device or a shared file system file located on a cluster file system, such as OCFS, or a network file system (NFS) mount, such as a NetApp Filer volume. If you are using raw devices, remember that the storage size for the OCR should be at least 100 megabytes. In addition, it is recommended that you use a redundant array of independent disks (RAID) for storing the OCR and the voting disk to ensure continuous availability of the partitions. When you are ready to continue, click the Next button. If the Oracle inventories (oraInventory) on the remote nodes are not set up, the OUI displays a dialog box prompting you to run the orainstRoot.sh script on all the nodes:
[raclin01] # /u01/app/oracle/oraInventory/orainstRoot.sh
[raclin02] # /u01/app/oracle/oraInventory/orainstRoot.sh

When you have run the orainstRoot.sh script on both nodes, click the Continue button to close the dialog box.

Oracle Database 10g: Real Application Clusters 2-41

Summary and Install

2-42

Copyright 2005, Oracle. All rights reserved.

Summary and Install The OUI displays the Summary page. Note that the OUI must install the components shown in the summary window. Click the Install button. The Install page is then displayed, informing you about the progress of the installation. During the installation, the OUI first copies the software to the local node and then copies the software to the remote nodes.

Oracle Database 10g: Real Application Clusters 2-42

Running the root.sh Script on All Nodes

2-43

Copyright 2005, Oracle. All rights reserved.

Running the root.sh Script on All Nodes
Next, the OUI displays a dialog box indicating that you must run the root.sh script on all the nodes that are part of this installation. When you complete the final execution of root.sh, the script runs the following assistants without your intervention:
- Oracle Cluster Registry Configuration Tool (ocrconfig)
- Cluster Configuration Tool (clscfg)
When the root.sh script has been run on all nodes, click the OK button to close the dialog box. Run the olsnodes command from the ORA_CRS_HOME/bin directory to make sure that the software is installed properly. The olsnodes command syntax is:
olsnodes [-n] [-l] [-v] [-g]

where:
-n displays the member number with the member name
-l displays the local node name
-v activates verbose mode
-g activates logging
The output from this command should be a listing of the nodes on which CRS is installed:
$ /u01/app/oracle/crs_1/bin/olsnodes -n
raclin01 1
raclin02 2

Oracle Database 10g: Real Application Clusters 2-43

Verifying the CRS Installation

Check for CRS processes with the ps command. Check the CRS startup entries in the /etc/inittab file.

# cat /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

2-44

Copyright 2005, Oracle. All rights reserved.

Verifying the CRS Installation Before continuing with the installation of the Oracle database software, you must verify your CRS installation and startup mechanism. In Oracle9i RAC environments, the cluster management was provided by multiple oracm processes. With the introduction of Oracle Database 10g RAC, cluster management is controlled by the evmd, ocssd, and crsd processes. Use the ps command to make sure that the processes are running. Run the following command on both nodes:
$ ps -ef | grep d.bin
oracle  1797  1523  0 Jun02 ?  00:00:00 .../evmd.bin
oracle  1809  1808  0 Jun02 ?  00:00:00 .../ocssd.bin
root    1823  1805  0 Jun02 ?  00:00:00 .../crsd.bin
...

Check the startup mechanism for CRS. In Oracle Database 10g RAC, CRS processes are started by entries in the /etc/inittab file, which is processed whenever the run level changes (as it does during system startup and shutdown):
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

Note: The processes are started at run levels 3 and 5 and are started with the respawn flag.
Oracle Database 10g: Real Application Clusters 2-44

Verifying the CRS Installation (continued) This means that if the processes terminate abnormally, they are automatically restarted. If you kill the CRS processes, they either restart automatically or, worse, cause the node to reboot. For this reason, stopping CRS by killing the processes is not recommended. If you want to stop CRS without shutting down the node, update the inittab entries on all nodes to comment out the CRS-specific entries, and then run the init.crs stop command:
# /etc/init.d/init.crs stop

The init.crs stop command stops the CRS daemons in the following order: crsd, cssd, and evmd. If you encounter difficulty with your CRS installation, it is recommended that you check the associated log files. To do this, check the directories under the CRS Home:
- $ORA_CRS_HOME/crs/log: This directory includes traces for CRS resources that are joining, leaving, restarting, and relocating as identified by CRS.
- $ORA_CRS_HOME/crs/init: Any core dumps for the crsd.bin daemon are written here.
- $ORA_CRS_HOME/css/log: The CSS logs indicate all actions, such as reconfigurations, missed check-ins, connects, and disconnects, from the client CSS listener. In some cases, the logger logs messages with the category of auth.crit for the reboots performed by CRS. This can be used for checking the exact time when the reboot occurred.
- $ORA_CRS_HOME/css/init: Core dumps from ocssd, and the PID file for the cssd daemon whose death is treated as fatal, are located here. If there are abnormal restarts for cssd, the core files have the format core.<pid>.
- $ORA_CRS_HOME/evm/log: Log files for the evmd and evmlogger daemons. These are not used as often for debugging as the CRS and CSS directories.
- $ORA_CRS_HOME/evm/init: PID and lock files for evmd are found here. Core files for evmd should also be written here.
- $ORA_CRS_HOME/srvm/log: Log files for OCR are written here.
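As an illustration, the following sketch shows one way to scan these locations while troubleshooting; it assumes that ORA_CRS_HOME is set in your environment and that the directory layout matches the list above:

# List the most recent CRS and CSS trace files
$ ls -lt $ORA_CRS_HOME/crs/log | head
$ ls -lt $ORA_CRS_HOME/css/log | head

# Search the CSS logs for reconfigurations and missed check-ins
$ grep -i reconfiguration $ORA_CRS_HOME/css/log/*
$ grep -i checkin $ORA_CRS_HOME/css/log/*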

When you have determined that your CRS installation is successful and fully functional, you may start the Oracle Database 10g software installation. If you must remove a failed CRS installation, refer to MetaLink Note 239998.1.

Oracle Database 10g: Real Application Clusters 2-45

Summary

In this lesson, you should have learned how to:
- Describe the installation of Oracle Database 10g RAC
- Perform RAC preinstallation tasks
- Perform cluster setup tasks
- Install OCFS
- Install Oracle Cluster Ready Services

2-46

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 2-46

Practice 2: Overview

This practice covers the following topics:
- Configuring the operating system to support a cluster database installation
- Installing and configuring OCFS
- Creating OCFS volumes
- Installing Oracle Cluster Ready Services

2-47

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 2-47

RAC Installation and Configuration (Part II)

Copyright 2005, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do the following:
- Install the Oracle database software
- Configure virtual IPs with the Virtual Internet Protocol Configuration Assistant (VIPCA)
- Perform preinstallation database tasks
- Create a cluster database
- Perform postinstallation database tasks
- Identify best configuration practices for RAC

3-2

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 3-2

OUI Database Configuration Options


Configuration Type: General purpose, OLTP, DW
Description: Installs a starter database, Oracle options, networking services, and utilities. At the end of the installation, the DBCA creates your RAC database.
Advantages: Minimal input required. You can create your database more quickly.

Configuration Type: Advanced
Description: Enables you to customize your database options and storage components.
Advantages: Create tablespaces and data files. Customize your database.

Configuration Type: Do not create a starter database
Description: Installs only the software. Does not configure the listeners or network infrastructure and does not create a database.
Advantages: Best configuration flexibility.

3-3

Copyright 2005, Oracle. All rights reserved.

OUI Database Configuration Options When you run the Oracle Universal Installer (OUI) and choose to create the database, you can select the General Purpose, Transaction Processing, Data Warehouse, or Advanced database configuration types. If you select the Advanced configuration, then you can use the Database Configuration Assistant (DBCA) to create the database. It is recommended that you use the DBCA to create your database. You can also select the Advanced configuration, select a preconfigured template, customize the template, and use the DBCA to create a database by using the template. These templates correspond to the General Purpose, Transaction Processing, and Data Warehouse configuration types. You can also use the DBCA with the Advanced template to create a database. It is recommended that you use one of the preconfigured database options or use the Advanced option and the DBCA to create your database. However, if you want to configure your environment and create your database manually, select the Do not create a starter database configuration option.

Oracle Database 10g: Real Application Clusters 3-3

Install the Database Software

$ id oracle
$ /cdrom/dbs/runInstaller

3-4

Copyright 2005, Oracle. All rights reserved.

Install the Database Software The OUI is used to install the Oracle Database 10g software. The OUI must be run as the oracle user. Start the OUI by executing the runInstaller command from the root directory of the Oracle Database 10g Release 1 (10.1.0.2) CD-ROM or the software staging location. When the OUI displays the Welcome page, click the Next button. The Specify File Locations page is displayed.

Oracle Database 10g: Real Application Clusters 3-4

Specify File Locations

3-5

Copyright 2005, Oracle. All rights reserved.

Specify File Locations The Source field on the Specify File Locations page is prepopulated with the path to the Oracle Database 10g products.xml file. You need not change this location under normal circumstances. In the Destination section of the page, there are fields for the installation name or Oracle Home name and the path for the installed products. Note that the database software cannot share the same location (Oracle Home) as the previously installed Cluster Ready Services (CRS) software. The Name field is populated with a default or suggested installation name. Accept the suggested name or enter your own Oracle Home name. Next, in the Path field, enter the fully qualified path name for the installation, /u01/app/oracle/product/10.1.0/db_1 in the example in the slide. After entering the information, review it for accuracy, and click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-5

Specify Cluster Installation

3-6

Copyright 2005, Oracle. All rights reserved.

Specify Cluster Installation The Specify Hardware Cluster Installation Mode page is displayed next. Because the OUI is node dependent, you must indicate whether you want the installation to be copied to the recognized and selected nodes in your cluster, or whether you want a single, noncluster installation to take place. Most installation scenarios require the Cluster Installation option. To use it, click the Cluster Installation option button and make sure that all nodes have been selected in the Node Name list. Note that the local node is always selected for the installation. Select the check box for each additional node that is to be part of this installation. If you do not see all your nodes listed here, exit the OUI, make sure that CRS is running on all your nodes, and then restart the OUI. Click the Next button when you are ready to proceed with the installation. If the OUI does not display the nodes properly, perform clusterware diagnostics by executing the olsnodes -v command from the ORA_CRS_HOME/bin directory, and analyze its output. Refer to your clusterware documentation if the detailed output indicates that your clusterware is not running.

Oracle Database 10g: Real Application Clusters 3-6

Select Installation Type

3-7

Copyright 2005, Oracle. All rights reserved.

Select Installation Type
The Select Installation Type page is displayed next. Your installation options include:
- Enterprise Edition
- Standard Edition
- Custom
For most installations, Enterprise Edition is the correct choice (but Standard Edition is also supported). Selecting the Custom installation type enables you to install only those Oracle product components that you deem necessary. For this, you must have a good knowledge of the installable Oracle components and of any dependencies or interactions that may exist between them. For this reason, it is recommended that you select the Enterprise Edition installation because it installs all components that comprise the Oracle Database 10g 10.1.0 distribution.

Oracle Database 10g: Real Application Clusters 3-7

Products Prerequisite Check

3-8

Copyright 2005, Oracle. All rights reserved.

Products Prerequisite Check
The Product-specific Prerequisite Checks page verifies the operating system requirements that must be met for the installation to be successful. These requirements include:
- Certified operating system check
- Kernel parameters as required by the database software
- Required operating system packages and correct revisions
- Required glibc and glibc-compat (compatibility) package versions
In addition, the OUI checks whether the ORACLE_BASE user environment variable has been set and, if so, whether the value is acceptable. After each successful check, the Succeeded check box is selected for that test. The test suite results are displayed at the bottom of the page. Any tests that fail are also reported here. The example in the slide shows the results of a completely successful test suite. If you encounter any failures, try opening another terminal window and correct the deficiency. For example, if your glibc version is too low, acquire the correct version of the glibc Red Hat Package Manager (RPM) package, install it from another terminal window (a sketch follows this paragraph), return to the OUI, and click the Retry button to rerun the tests. When all tests have succeeded, click the Next button to continue.
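For example, a failed package check could be corrected from a second terminal along these lines; the RPM file name shown is only a hypothetical example for whatever version your platform actually requires:

# Check the currently installed glibc version
$ rpm -q glibc

# As root, install or upgrade the required package (file name is an example)
# rpm -Uvh glibc-2.3.2-95.30.i686.rpm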

Oracle Database 10g: Real Application Clusters 3-8

Select Database Configuration

3-9

Copyright 2005, Oracle. All rights reserved.

Select Database Configuration
The Select Database Configuration page appears. On this page, you can choose to create a database as part of the database software installation or defer creation until later. If you choose to install a database, you must select one of the preconfigured starter database types:
- General Purpose
- Transaction Processing
- Data Warehouse
- Advanced (user customizable)
If you choose one of these options, you are queried about the specifics of your database (cluster database name, shared storage options, and so on). After the OUI stops, the DBCA is launched to install your database with the information that you provided. You may also choose to defer the database creation by clicking the Do not create a starter database option button. This option enables you to create the database by manually invoking the DBCA at some point after the OUI finishes installing the database software. This choice provides you with more options than the standard preconfigured database models. In the slide above, the default option is Create a starter database (General Purpose). Instead, select the Do not create a starter database option. Click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-9

Check Summary

3-10

Copyright 2005, Oracle. All rights reserved.

Check Summary The Summary page is displayed next. Review the information on this page. Node information and space requirements can be viewed here, as well as selected software components. If you are satisfied with the summary, click the Install button to proceed. If you are not, you can click the Back button to go back and make the appropriate changes. On the Install page, you can monitor the progress of the installation. During the installation, the OUI copies the software first to the local node and then to the remote nodes.

Oracle Database 10g: Real Application Clusters 3-10

The root.sh Script


# cd /u01/app/oracle/product/10.1.0/db_1
# ./root.sh

3-11

Copyright 2005, Oracle. All rights reserved.

The root.sh script At the end of the installation, the OUI displays a dialog box indicating that you must run the root.sh script as the root user on all the nodes where the software is being installed. Execute the root.sh script on one node at a time, and then click the OK button in the dialog box to continue. Note: The root.sh script launches the Virtual Internet Protocol Configuration Assistant (VIPCA) before exiting. Because the VIPCA is a graphical application, make sure that the root.sh script is run from a graphical terminal session, such as X Windows or VNC, and that the DISPLAY environment variable is properly set.

Oracle Database 10g: Real Application Clusters 3-11

Launching the VIPCA with root.sh

3-12

Copyright 2005, Oracle. All rights reserved.

Launching the VIPCA with root.sh The VIPCA is called from the root.sh script, and it configures the virtual IP addresses for each node. In addition, the VIPCA also configures nodeapps, consisting of the group services daemon (GSD), the Enterprise Manager agent, and Oracle Notification Services (ONS) for the cluster. Before running the VIPCA, you must make sure that you have unused public IP addresses available for each node and that they are configured in the /etc/hosts file or resolvable through DNS. The VIPCA Welcome page appears first. Click the Next button to continue.
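For example, hypothetical /etc/hosts entries for the node VIPs might look like the following; the names and addresses are placeholders and must be unused addresses on your public subnet:

139.185.35.201   raclin01-vip
139.185.35.202   raclin02-vip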

Oracle Database 10g: Real Application Clusters 3-12

VIPCA Network Interface Discovery

3-13

Copyright 2005, Oracle. All rights reserved.

VIPCA Network Interface Discovery The Network Interfaces page appears next. All working network adapters should appear in the discovery window. You can choose individual interfaces to be configured by the VIPCA. However, do not select the interface that is acting as your private interconnect. If you have missing interfaces, check whether they are recognized by the operating system by running the ifconfig command:
# ifconfig -a
eth0  Link encap:Ethernet  HWaddr 00:06:5B:A6:3C:70
      inet addr:139.185.35.113  Bcast:139.185.35.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:66641537 errors:0 dropped:0 overruns:1 frame:0
      TX packets:538116 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:3471151110 (3310.3 Mb)  TX bytes:86213945 (82.2 Mb)
      Interrupt:11 Base address:0xdc80
eth1  Link encap:Ethernet  HWaddr 00:06:1B:A4:2B:60
      inet addr:139.185.35.114  Bcast:139.185.35.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...

If not all of your adapters appear in the listing, you must troubleshoot your hardware.
Oracle Database 10g: Real Application Clusters 3-13

VIP Configuration Data and Summary

3-14

Copyright 2005, Oracle. All rights reserved.

VIP Configuration Data and Summary The Virtual IPs for cluster nodes page is displayed next. Enter an unused or unassigned public virtual IP address for each node displayed on this page, and click Next. The Summary page is displayed. Review the information on this page, and click the Finish button.

Oracle Database 10g: Real Application Clusters 3-14

Installation Progress

3-15

Copyright 2005, Oracle. All rights reserved.

Installation Progress When you click the Finish button, a progress dialog box appears while the VIPCA configures the virtual IP addresses with the network interfaces that you selected. The VIPCA then creates and starts the VIPs, GSD, and ONS node applications. When the configuration completes, click OK to view the VIPCA configuration results. Review the information on the Configuration Results page, taking care to review any errors that may have been reported. After reviewing the configuration results, click the Exit button to exit the VIPCA.

Oracle Database 10g: Real Application Clusters 3-15

End of Installation

3-16

Copyright 2005, Oracle. All rights reserved.

End of Installation After running root.sh on all the nodes as described in previous slides, click the OK button in the OUI dialog box to continue the installation. This enables the remaining Oracle configuration assistants to run, so that the assistants can perform postinstallation processing. The Network Configuration Assistant (NETCA) runs next to configure listeners for each node in the cluster. If you have chosen to create the database as described earlier in the Select Database Configuration slide, the DBCA is automatically launched to perform database creation. When the configuration assistants stop running, the End of Installation page appears. You can now click the Exit button to exit the OUI and start the DBCA to create your database.

Oracle Database 10g: Real Application Clusters 3-16

Database Preinstallation Tasks

Make sure that CRS processes are running.


$ ps -ef | grep d.bin
oracle  1797  1523  0 Jun02 ?  00:00:00 .../evmd.bin
oracle  1809  1808  0 Jun02 ?  00:00:00 .../ocssd.bin
root    1823  1805  0 Jun02 ?  00:00:00 .../crsd.bin
...

Ensure that the GSD node application is running.
Set the Oracle database-related environment variables:
ORACLE_BASE, ORACLE_HOME, ORACLE_SID, PATH
Copyright 2005, Oracle. All rights reserved.

3-17

Database Preinstallation Tasks Before starting the DBCA to install the database, you must ensure that CRS processes and Group Services are functional. If any of these processes are not running, the database creation fails. Check whether the CRS background processes are running (crsd.bin, ocssd.bin, and evmd.bin) by using the ps command:
oracle  1804  1797  0 Jun02 ?  00:00:00 /u01/.../crs_1/bin/evmd.bin
oracle  1808  1800  0 Jun02 ?  00:00:00 /u01/.../crs_1/bin/ocssd.bin
oracle  1809  1808  0 Jun02 ?  00:00:00 /u01/.../crs_1/bin/ocssd.bin
root    1823  1805  0 Jun02 ?  00:00:00 /u01/.../crs_1/bin/crsd.bin
root    1827  1805  0 Jun02 ?  00:00:00 /u01/.../crs_1/bin/crsd.bin
...

To check whether Group Services is running, use the crs_stat command from the /u01/app/oracle/product/10.1.0/crs_1/bin directory as follows:
$ cd /u01/app/oracle/product/10.1.0/crs_1/bin
$ ./crs_stat
...
Oracle Database 10g: Real Application Clusters 3-17

Database Preinstallation Tasks (continued)


NAME=ora.raclin01.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on raclin01
...
NAME=ora.raclin02.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on raclin02
...
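As an optional cross-check, you can also query the node applications with SRVCTL; the node names below are examples, and the commands assume that $ORACLE_HOME/bin is in your PATH:

$ srvctl status nodeapps -n raclin01
$ srvctl status nodeapps -n raclin02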

You can now set the Oracle database-related environment variables for the oracle user, so that they are recognized by the DBCA during database creation:
$ cd
$ vi .bash_profile
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORA_CRS_HOME=/u01/app/oracle/product/10.1.0/crs_1; export ORA_CRS_HOME
ORACLE_SID=RACDB1; export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1; export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin; export PATH

Create a directory called /ocfs/oradata where the cluster database data files reside. Make sure that the owner is oracle and the group is dba.
$ mkdir /ocfs/oradata
$ chown oracle:dba /ocfs/oradata
$ chmod 775 /ocfs/oradata

Oracle Database 10g: Real Application Clusters 3-18

Creating the Cluster Database


$ cd /u01/app/oracle/product/10.1.0/db_1/bin
$ ./dbca -datafileDestination /ocfs/oradata

3-19

Copyright 2005, Oracle. All rights reserved.

Creating the Cluster Database This database creation assumes that an Oracle Cluster File System (OCFS) volume is mounted under /ocfs. From a graphical display, start the DBCA with the -datafileDestination flag. This flag lets the DBCA know that the shared disk volume is actually a cluster file system and not a raw device. This prevents the DBCA from looking for a data file-to-raw device map file. The following example shows the usage of the flag:
$ cd /u01/app/oracle/product/10.1.0/db_1/bin
$ ./dbca -datafileDestination /ocfs/oradata

Note the mixed case letters in the flag. This is not an error. You must enter it exactly as shown in the example. Later during the installation, when the DBCA prompts you to confirm the data file location, the directory passed as an argument in the dbca command (/ocfs/oradata) is displayed as the default value, simplifying the creation process. The Welcome page appears first. You must select the type of database that you want to install. Click the Oracle Real Application Clusters database option button, and then click Next. The Operations page appears. For a first-time installation, you have two choices only. The first option enables you to create a database and the other option enables you to manage database creation templates. Click the Create a database option button, and then click Next to continue.

Oracle Database 10g: Real Application Clusters 3-19

Node Selection

3-20

Copyright 2005, Oracle. All rights reserved.

Node Selection The Node Selection page is displayed next. Because you are creating a cluster database, choose all the nodes. Click the Select All button to choose all the nodes of the cluster. Each node must be highlighted before continuing. If all the nodes do not appear, you must stop the installation and troubleshoot your environment. The most common reason for encountering an error here is related to problems with the GSD node application. If no problems are encountered, click the Next button to proceed.

Oracle Database 10g: Real Application Clusters 3-20

Select Database Type

3-21

Copyright 2005, Oracle. All rights reserved.

Select Database Type
The Database Templates page appears next. If you want the OUI to create your database after the Oracle database software is installed, you must choose a template to use for the creation of the database. The templates include:
- Custom Database
- Data Warehouse
- General Purpose
- Transaction Processing
Click the Custom Database option button. This option is chosen because it allows the most flexibility in configuration options. It is also the slowest of the four options because it is the only choice that does not include data files or options specially configured for a particular type of application. All data files that you include in the configuration are created during the database creation process. Click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-21

Database Identification

3-22

Copyright 2005, Oracle. All rights reserved.

Database Identification On the Database Identification page, you must enter the database name in the Global Database Name field. A global database name includes the database name and database domain such as racdb.oracle.com. The name that you enter on this page must be unique among all the global database names used in your environment. The global database name can be up to 30 characters in length and must begin with an alphabetical character. A system identifier (SID) prefix is required, and the DBCA suggests a name based on your global database name. This prefix is used to generate unique SID names for the two instances that make up the cluster database. For example, if your prefix is RACDB, the DBCA creates two instances on node 1 and node 2, called RACDB1 and RACDB2, respectively. This example assumes that you have a two-node cluster. If you do not want to use the system-supplied prefix, enter a prefix of your choice. The SID prefix must begin with an alphabetical character and contain no more than 5 characters on UNIX-based systems or 61 characters on Windows-based systems. Click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-22

Cluster Database Management Method

3-23

Copyright 2005, Oracle. All rights reserved.

Cluster Database Management Method The Management Options page is displayed. For small cluster environments, you may choose to manage your cluster with Enterprise Manager Database Control. To do this, select the Configure the Database with Enterprise Manager check box. If you have Grid Control installed somewhere on your network, you can click the Use Grid Control for Management option button. If you select the Enterprise Manager with the Grid Control option and DBCA discovers agents running on the local node, you can select the preferred agent from a list. Grid Control can simplify database management in large, enterprise deployments. You can also configure Database Control to send e-mail notifications when alerts occur. If you want to configure this, you must supply a Simple Mail Transfer Protocol (SMTP) or outgoing mail server and an e-mail address. You can also enable daily backups here. You must supply a backup start time as well as operating system user credentials for this option. Click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-23

Passwords for Database Schema Owners

3-24

Copyright 2005, Oracle. All rights reserved.

Passwords for Database Schema Owners The Database Credentials page appears next. You must supply passwords for the user accounts created by the DBCA when configuring your database. You can use the same password for all of these privileged accounts by clicking the Use the Same Password for All Accounts option button. Enter your password in the Password field, and then enter it again in the Confirm Password field. Alternatively, you may choose to set different passwords for the privileged users. To do this, click the Use Different Passwords option button, and then enter your password in the Password field, and then enter it again in the Confirm Password field. Repeat this for each user listed in the User Name column. Click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-24

Storage Options for Database Files

3-25

Copyright 2005, Oracle. All rights reserved.

Storage Options for Database Files
On the Storage Options page, you must select the storage medium where your shared database files are stored. Your three choices are:
- Cluster File System
- Automatic Storage Management (ASM)
- Raw Devices
If you click the Cluster File System option button, you can click the Next button to continue. If you click the Automatic Storage Management (ASM) option button, you can either use an existing ASM disk group or specify a new disk group to use. If there is no ASM instance on any of the cluster nodes, the DBCA displays the Create ASM Instance page for you. If an ASM instance exists on the local node, the DBCA displays a dialog box prompting you to enter the password for the SYS user for ASM. To initiate the creation of the required ASM instance, enter the password for the SYS user of the ASM instance. After you enter the required information, click Next to create the ASM instance. After the instance is created, the DBCA proceeds to the ASM Disk Groups page. If you have just created a new ASM instance, there is no disk group from which to select, so you must create a new one by clicking Create New to open the Create Disk Group page. After you are satisfied with the ASM disk groups available to you, select the one that you want to use for your database files, and click Next to proceed to the Database File Locations page.
Oracle Database 10g: Real Application Clusters 3-25

Storage Options for Database Files (continued) If you have configured raw devices, click the corresponding button. You must provide a fully qualified mapping file name if you did not previously set the DBCA_RAW_CONFIG environment variable to point to it. You can enter your response or click the Browse button to locate it. The file should follow the format of the example below:
system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
...
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m

where vg_name is the volume group name (if configured) and rdbname is the database name. Because this example uses OCFS, click the Cluster File System button, and then click Next to continue. Note: For more information about ASM, refer to the lessons titled ASM and Administering Storage in RAC in this course.

Oracle Database 10g: Real Application Clusters 3-26

Database File Locations

3-27

Copyright 2005, Oracle. All rights reserved.

Database File Locations On the Database File Locations page, you must indicate where the database files are created. You can choose to use a standard template for file locations, one common location, or Oracle Managed Files (OMF). This cluster database uses a common location. Therefore, select the Use Common Location for All Database Files option button, and enter the directory in the Database Files Location field. Alternatively, you can use the Browse button to locate the directory where the database files are created. When you have made your choices, click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-27

Flash Recovery Area

3-28

Copyright 2005, Oracle. All rights reserved.

Flash Recovery Area On the Recovery Configuration page, you can select redo log archiving by selecting Enable Archiving. If you are using ASM or cluster file system storage, you can also select the Flash Recovery Area size on the Recovery Configuration page. The size of the area defaults to 2048 megabytes, but you can change this figure if it is not suitable for your requirements. If you are using ASM, the flash recovery area defaults to the ASM Disk Group. If you are using a cluster file system, the flash recovery area defaults to $ORACLE_BASE/flash_recovery_area. You may also define your own variables for the file locations if you plan to use the Database Storage page to define individual file locations. When you have completed your entries, click Next, and the Database Content page is displayed.

Oracle Database 10g: Real Application Clusters 3-28

Database Components

3-29

Copyright 2005, Oracle. All rights reserved.

Database Components The Database Content page has two tabs. On the Database Components page, you can select the components to configure for use in your database. If you choose the Custom Database option, you can select or deselect the database components and their corresponding assigned tablespaces. Select the check box next to each component that you want to install, and select a tablespace from the drop-down list for the product, if you want to install it somewhere other than the default tablespace that is shown. For a seed database, you can select whether to include the sample schemas in your database. The Custom Scripts page enables you to browse and choose scripts to be executed after your database has been created. After selecting components, click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-29

Database Services

3-30

Copyright 2005, Oracle. All rights reserved.

Database Services
On the Database Services page, you can add database services to be configured during database creation. To add a service, click the Add button at the bottom of the Database Services section. Enter a service name in the Add a Service dialog box, and then click OK to add the service and return to the Database Services page. The new service name appears under the global database name. Select the service name. The DBCA displays the service preferences for the service on the right of the DBCA Database Services page. Change the instance preference (Not Used, Preferred, or Available) as needed. Go to the Transparent Application Failover (TAF) policy row at the bottom of the page. Make a selection in this row for your failover and reconnection policy preference as described in the following list:
- None: Do not use TAF.
- Basic: Establish connections at failover time.
- Pre-connect: Establish one connection to a preferred instance and another connection to a backup instance that you have selected to be available.
In the example in the slide, the Pre-connect policy has been chosen. When you have finished adding and configuring services, click the Next button to continue. (A client-side TAF connect descriptor is sketched after the note below.)
Note: For more information about services and TAF, refer to the lessons titled Services and High Availability of Connections in this course.
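To illustrate what a TAF policy looks like on the client side, the following is a minimal sketch of a TAF-enabled connect descriptor using the Basic method; the alias, virtual host names, port, and retry settings are hypothetical examples, and the entry generated for your environment may differ:

RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclin01-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = raclin02-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.oracle.com)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5)
      )
    )
  )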
Oracle Database 10g: Real Application Clusters 3-30

Initialization Parameters

3-31

Copyright 2005, Oracle. All rights reserved.

Initialization Parameters
On the Initialization Parameters page, you can set important database parameters. The parameters are grouped under four tabs:
- Memory
- Sizing
- Character Sets
- Connection Mode
On the Memory page, you can set parameters that deal with memory allocation, including shared pool, buffer cache, Java pool, large pool, and PGA size. On the Sizing page, you can adjust the database block size. Note that the default is eight kilobytes. In addition, you can set the number of processes that can connect simultaneously to the database. By clicking the Character Sets tab, you can change the database character set. You can also select the default language and the date format. On the Connection Mode page, you can choose the connection type that clients use to connect to the database. The default type is Dedicated Server Mode. If you want to use Oracle Shared Server, click the Shared Server Mode button. If you want to review the parameters that are not found in the four tabs, click the All Initialization Parameters button. After setting the parameters, click the Next button to continue.

Oracle Database 10g: Real Application Clusters 3-31

Database Storage Options

Adjust SYSTEM tablespace parameters


3-32 Copyright 2005, Oracle. All rights reserved.

Database Storage Options The Database Storage page provides full control over all aspects of database storage, including tablespaces, data files, and log members. Size, location, and all aspects of extent management are under your control here.

Oracle Database 10g: Real Application Clusters 3-32

Create the Database

3-33

Copyright 2005, Oracle. All rights reserved.

Create the Database The Creation Options page appears next. You can choose to create the database, save your responses as a database template, or save your DBCA session as a database creation script by clicking the corresponding button. Select the Create Database check box, and then click the Finish button. The DBCA displays the Summary page, giving you a last chance to review all options, parameters, and so on that have been chosen for your database creation. Review the summary data carefully to help ensure that the actual creation is trouble free. When you are ready to proceed, close the Summary page by clicking the OK button.

Oracle Database 10g: Real Application Clusters 3-33

Monitor Progress

3-34

Copyright 2005, Oracle. All rights reserved.

Monitor Progress
The Progress Monitor page appears next. In addition to showing the progress of the database creation, it also informs you about the specific tasks being performed by the DBCA in real time. These tasks include:
- Creating the RAC data dictionary views
- Configuring the network for the cluster database
- Starting the listeners and database instances and the high-availability services
When the database creation progress reaches 100 percent, the DBCA displays a dialog box announcing the completion of the creation process. It also directs you to the installation log file location, parameter file location, and the Enterprise Manager URL. By clicking the Password Management button, you can manage the database accounts created by the DBCA.

Oracle Database 10g: Real Application Clusters 3-34

Manage Default Accounts

3-35

Copyright 2005, Oracle. All rights reserved.

Manage Default Accounts On the Password Management page, you can manage all accounts created during the database creation process. By default, all database accounts, except SYSTEM, SYS, DBSNMP, and SYSMAN, are locked. You can unlock these additional accounts if you want or leave them as they are. If you unlock any of these accounts, you must set passwords for them, which can be done on the same page. When you have completed database account management, click the OK button to return to the DBCA. The End of Installation page appears next, informing you about the URLs for Ultra Search and iSQL*Plus. When you have finished reviewing this information, click the Exit button to exit the DBCA.

Oracle Database 10g: Real Application Clusters 3-35

Postinstallation Tasks

Verify the Enterprise Manager configuration.

$ srvctl config database -d racdb
raclin01 racdb1 /u01/app/.../db_1
raclin02 racdb2 /u01/app/.../db_1

Back up the root.sh script.

$ cd $ORACLE_HOME
$ cp root.sh root.sh.bak

Set up additional user accounts.

3-36

Copyright 2005, Oracle. All rights reserved.

Postinstallation Tasks After the cluster database has been successfully created, you must run the following command to verify the Enterprise Manager/Oracle Cluster Registry configuration in your newly installed RAC environment:
$ srvctl config database -d db_name

Server Control (SRVCTL) displays the name of the node and the instance for the node. The following example shows a node named raclin01 running an instance named racdb1. Execute the following command:
$ srvctl config database -d racdb
raclin01 racdb1 /u01/app/.../db_1
raclin02 racdb2 /u01/app/.../db_1

It is also recommended that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle Home directory, the OUI updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, you can recover it from the root.sh file copy. You must also add and set up any additional user accounts that are required. For information about setting up optional user accounts, refer to the Administrator's Guide for UNIX Systems.

Oracle Database 10g: Real Application Clusters 3-36

Patches and the RAC Environment


stc-raclin01: /u01/app/oracle/product/db_1
stc-raclin02: /u01/app/oracle/product/db_1
stc-raclin03: /u01/app/oracle/product/db_1

Apply a patch set to /u01/app/oracle/product/db_1 on all nodes.


3-37 Copyright 2005, Oracle. All rights reserved.

Patches and the RAC Environment
Applying patches to your RAC installation is a simple process with the OUI. The OUI can keep track of multiple ORACLE_HOME deployments, as well as the participating nodes. This intelligence prevents potentially destructive or conflicting patch sets from being applied. In the example in the slide, a patch set is applied to the /u01/app/oracle/product/db_1 Oracle Home on all three nodes of your cluster database. Although you execute the installation on stc-raclin01, you can choose any of the nodes to perform this task. The steps that you must perform to add a patch set through the OUI are essentially the same as those to install a new release. You must change directory to $ORACLE_HOME/bin and start the OUI (see the sketch after this list). After starting the OUI, you must perform the following steps:
1. Select Installation from a stage location, and enter the appropriate patch set source on the Welcome page.
2. Select the nodes on the Node Selection page where you need to add the patch, and ensure that they are all available. In this example, this should be all three of the nodes because /u01/app/oracle/product/db_1 is installed on all of them.
3. Check the Summary page to confirm that space requirements are met for each node.
4. Continue with the installation and monitor the progress as usual. The OUI automatically manages the installation progress, including the copying of files to remote nodes, just as it does with the CRS and database binary installations.
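A minimal sketch of starting that patch set session from the existing Oracle Home follows; the exact staging location of the patch set is site specific and is entered on the Welcome page:

$ cd $ORACLE_HOME/bin
$ ./runInstaller
# On the Welcome page, select "Installation from a stage location" and enter
# the patch set source location for your site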
Oracle Database 10g: Real Application Clusters 3-37

Inventory List Locks

The OUI employs a timed lock on the inventory list stored on a node. The lock prevents an installation from changing a list being used concurrently by another installation. If a conflict is detected, the second installation is suspended and the following message appears:

"Unable to acquire a writer lock on nodes stcraclin02. Restart the install after verifying that there is no OUI session on any of the selected nodes."

3-38

Copyright 2005, Oracle. All rights reserved.

Inventory List Locks One of the improvements in the OUI is that it prevents potentially destructive concurrent installations. The mechanism involves a timed lock on the inventory list stored on a node. When you start multiple concurrent installations, the OUI displays an error message that is similar to the one shown in the slide. You must cancel the installation and wait until the conflicting installation completes before retrying it. Although this mechanism works with all types of installations, consider how it functions if you attempt concurrent patch set installations in the sample cluster. Use the same configuration as in the previous scenario for your starting point. Assume that you start a patch set installation on stc-raclin01 to update ORACLE_HOME2 on nodes stc-raclin01 and stc-raclin03. While this is still running, you start another patch set installation on stc-raclin02 to update ORACLE_HOME3 on that node. Will these installations succeed? As long as there are no other problems, such as a down node or interconnect, these processes have no conflicts with each other and should succeed. However, what if you start your patch set installation on stc-raclin02 to update ORACLE_HOME3 and then start a concurrent patch set installation for ORACLE_HOME2 (using either stc-raclin01 or stc-raclin03) on all nodes where this Oracle Home is installed? In this case, the second installation should fail with the error shown because the inventory on stc-raclin02 is already locked by the patch set installation for ORACLE_HOME3.
Oracle Database 10g: Real Application Clusters 3-38

Summary

In this lesson, you should have learned how to:
- Install the Oracle database software
- Configure virtual IPs with the VIPCA
- Perform preinstallation database tasks
- Create a cluster database
- Perform postinstallation database tasks
- Identify best configuration practices for RAC

3-39

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 3-39

Practice 3: Overview

This practice covers the following topics:
- Installing the Oracle database software by using the OUI
- Confirming that the services needed by the database creation process are running
- Using the DBCA to create a cluster database

3-40

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 3-40

RAC Database Instances Administration

Copyright 2005, Oracle. All rights reserved.

Objectives

After completing this lesson, you should be able to do the following:
- Use the EM Cluster Database home page
- Start and stop RAC databases and instances
- Add a node to a cluster
- Delete instances from a RAC database
- Quiesce RAC databases
- Administer alerts with Enterprise Manager

4-2

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 4-2

The EM Cluster Database Home Page

4-3

Copyright 2005, Oracle. All rights reserved.

The EM Cluster Database Home Page The Cluster Database home page serves as a crossroad for managing and monitoring all aspects of your RAC database. From this page, you can also access the three other main cluster database tabs: Performance, Administration, and Maintenance. On this page, you also find General, High Availability, Space Usage, and Diagnostic Summary sections for information that pertains to your cluster database as a whole. The number of instances is displayed for the RAC database, in addition to the status. A RAC database is considered to be up if at least one instance has the database open. The Cluster home page is accessible by clicking the Cluster link in the General section of the page. Other items of interest include the date of the last RMAN backup, archiving information, space utilization within tablespaces and segments, and an alert summary. By clicking the link next to the Archiving label, you can view and set archive log-related parameters, and adjust the value of the FAST_START_MTTR_TARGET initialization parameter. The Alerts section displays a list of recent cluster database and cluster database instance-related events for RAC with links to alert details.

Oracle Database 10g: Real Application Clusters 4-3

The EM Cluster Database Home Page

4-4

Copyright 2005, Oracle. All rights reserved.

The EM Cluster Database Home Page (continued) At the top of the page, you can see a list of alerts in the Alerts section mentioned on the previous slide. The Related Alerts section lists pertinent cluster-database events, such as host and listener alerts. Between the Alerts and Related Alerts sections, you can get an overview of all alerts for your cluster database. The Job Activity section lists job-related specifics for the cluster database. You can use Enterprise Manager to manage Oracle critical patch advisories. When configured, critical advisory information can be accessed under the Critical Patch Advisories section. To promote critical patch application, Enterprise Manager performs an assessment of vulnerabilities by examining your enterprise configuration to determine which Oracle homes have not applied one or more of these critical patches. Enterprise Manager provides a list of critical patch advisories and the Oracle homes to which the critical patches should be applied. The Related Links area provides links to other areas for managing your RAC database. For example, the Jobs link opens the Job Activity page where you can configure jobs for high availability. Finally, the Instances section lists every instance configured in Oracle Cluster Registry (OCR) to be able to open the database. Status, alert, and performance-related information is summarized for each instance. When you click an instance name, the corresponding Instance home page is displayed.

Cluster Database Instance Home Page


Cluster Database Instance Home Page

The Cluster Database Instance home page can be reached by clicking one of the instance names from the Instances section of the Cluster Database home page. This page has the same four subpages as the Cluster Database home page: Home, Performance, Administration, and Maintenance. The difference is that tasks and monitored activities from these pages apply primarily to a specific instance. For example, clicking the Shutdown button from this page only shuts down this one instance. However, clicking the Shutdown button from the Cluster Database home page gives you the option of shutting down all or specific instances. By scrolling down on this page, you see the Alerts, Related Alerts, Jobs, and Related Links sections. These provide information similar to that found in the corresponding sections of the Cluster Database home page.


Cluster Home Page


Cluster Home Page

The slide above shows the Cluster home page, which is accessible by clicking the Cluster link located in the General section of the Cluster Database home page. The cluster is represented as a composite target composed of nodes and cluster databases. An overall summary of the cluster is provided here. The current status and cluster availability over the past 24 hours are shown. A cluster is deemed to be up if at least one cluster node is up. The cluster is down if all nodes are down. In the Configuration section, you can see the clusterware version, along with the hardware and operating system of the cluster. A list of cluster databases is presented with their status and the number of warning and critical alerts. You can click the value of either the warning or the critical alerts to see a detailed list of the cluster database-related alerts and when they occurred. There is also a separate alerts section for the entire cluster, which centralizes alert reporting on hosts across all the nodes in the cluster. The Related Links section is at the bottom of the page. This section enables you to view alert history, blackout information, and deployments. When a blackout is applied to a cluster, all targets in the cluster are blacked out. The Deployments link takes you to the Deployments page, which shows detailed information about hosts and their operating systems, as well as software installations and patches. At the bottom of the page, the Hosts section lists each node in the cluster with its status, warning and critical alerts, policy violations, and performance-related information. Links for each node take you to more detailed pages for a particular node.

The Configuration Section


The Configuration Section

The Cluster home page is invaluable for locating configuration-specific data. Locate the Configuration section on the Cluster home page. The View drop-down list allows you to inspect hardware and operating system overview information. Click the Hosts link, and then click the Hardware Details link of the host that you want. On the Hardware Details page, you find detailed information regarding your CPU, disk controllers, network adapters, and so on. This information can be very useful when determining the Linux patches for your platform. When you click the Operating System link, the Operating System Details page is displayed. On this page, you can view the current kernel parameter values for each node in your cluster. Shared memory, semaphore, IP, and other kernel parameter classes (whose values affect how Oracle software performs) can be viewed on this page. Although both operating system and kernel parameter values may be viewed by using a terminal window, the Enterprise Manager interface enables you to view this information much more easily.


Operating System Details Page


Operating System Details Page On the Operating System Details page, click the File Systems tab to display the file system information. All mounted file systems are displayed here. These include standard or local file systems, swap, cluster file systems, and NFS/NetApps file systems. You can browse all installed packages by clicking the Packages tab. The information displayed in these tabs may be viewed at the operating system level through a telnet session; the Cluster home page simplifies the retrieval of this data.


Performance and Targets Pages


Performance and Targets Pages From the Cluster home page, performance target information can be viewed. By clicking the Performance tab, you can view CPU utilization, disk I/O activity, and memory utilization, all in real time. The performance data is displayed graphically with information for each node displayed in the same graph. Click the Targets tab to list all Oracle targets in the cluster. The host, ORACLE_HOME, status, and target types are listed for each target displayed in the Targets page. Note: The refresh rate may be adjusted using the View Data drop-down list. You have the choice to refresh data manually, or automatically every 15 seconds.


Starting and Stopping RAC Instances

- Multiple instances can open the same database simultaneously.
- Shutting down one instance does not interfere with other running instances.
- SHUTDOWN TRANSACTIONAL LOCAL does not wait for other instances' transactions to finish.
- RAC instances can be started and stopped using:
  - Enterprise Manager
  - Server Control (SRVCTL) utility
  - SQL*Plus
- Shutting down a RAC database means shutting down all instances accessing the database.

Starting and Stopping RAC Instances

In a RAC environment, multiple instances can have the same RAC database open at the same time. Also, shutting down one instance does not interfere with the operation of other running instances. The procedures for starting up and shutting down RAC instances are identical to the procedures used in single-instance Oracle, with the following exception: The SHUTDOWN TRANSACTIONAL command with the LOCAL option is useful to shut down an instance after all active transactions on the instance have either committed or rolled back. Transactions on other instances do not block this operation. If you omit the LOCAL option, then this operation waits until transactions on all other instances that started before the shutdown was issued have either committed or rolled back. You can start up and shut down instances by using Enterprise Manager, SQL*Plus, or Server Control (SRVCTL). Both Enterprise Manager and SRVCTL provide options to start up and shut down all the instances of a RAC database with a single step. Shutting down a RAC database mounted or opened by multiple instances means that you need to shut down every instance accessing that RAC database. However, having only one instance opening the RAC database is enough to declare the RAC database open.
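For example, a minimal sketch of stopping only the local instance once its own active transactions complete (the SID shown is hypothetical):

$ export ORACLE_SID=RACDB1
$ sqlplus / as sysdba
SQL> SHUTDOWN TRANSACTIONAL LOCAL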


Starting and Stopping RAC Instances with EM


Starting and Stopping RAC Instances with EM

On the Cluster Database home page, the cluster database instances are displayed at the bottom of the page. Click an instance name to access the corresponding Cluster Database Instance home page. On this page, you can start or stop the cluster database instance, as well as see an overview of the cluster database instance activity such as CPU and space usage, active sessions, and so on. To start a cluster database instance, click Startup; to stop it, click Shutdown. To start or shut down a cluster database (that is, all the instances known to Enterprise Manager), select the database and click Startup or Shutdown on the Cluster Database home page.


Starting and Stopping RAC Instances with SQL*Plus


[stc-raclin01] $ echo $ORACLE_SID
RACDB1
[stc-raclin01] $ sqlplus / as sysdba
SQL> startup
SQL> shutdown

[stc-raclin02] $ echo $ORACLE_SID
RACDB2
[stc-raclin02] $ sqlplus / as sysdba
SQL> startup
SQL> shutdown

OR

[stc-raclin01] $ sqlplus / as sysdba
SQL> startup
SQL> shutdown
SQL> connect sys/oracle@RACDB2 as sysdba
SQL> startup
SQL> shutdown


Starting and Stopping RAC Instances with SQL*Plus

If you want to start or stop just one instance, and you are connected to your local node, then you must first ensure that your current environment includes the SID for the local instance. To start or shut down your local instance, initiate a SQL*Plus session connected as SYSDBA or SYSOPER, and then issue the required command (for example, STARTUP). You can start multiple instances from a single SQL*Plus session on one node by way of Oracle Net Services. To achieve this, you must connect to each instance by using a Net Services connection string, typically an instance-specific alias from your tnsnames.ora file. For example, you can use a SQL*Plus session on a local node to shut down two instances on remote nodes by connecting to each using the instance's individual alias name. The above example assumes that the alias name for the second instance is RACDB2. In the above example, there is no need to connect to the first instance using its connect descriptor because the command is issued from the first node with the correct ORACLE_SID.
Note: It is not possible to start up or shut down more than one instance at a time in SQL*Plus, so you cannot start or stop all the instances for a cluster database with a single SQL*Plus command.


Starting and Stopping RAC Instances with SRVCTL


start/stop syntax:

srvctl start|stop instance -d <db_name> -i <inst_name_list>
  [-o <open|mount|nomount|normal|transactional|immediate|abort>]
  [-c <connect_str> | -q]

srvctl start|stop database -d <db_name>
  [-o <open|mount|nomount|normal|transactional|immediate|abort>]
  [-c <connect_str> | -q]

Examples:

$ srvctl start instance -d RACDB -i RACDB1,RACDB2
$ srvctl stop instance -d RACDB -i RACDB1,RACDB2
$ srvctl start database -d RACDB -o open


Starting and Stopping RAC Instances with SRVCTL

The srvctl start database command starts a cluster database and its enabled instances. The srvctl stop database command stops a database, its instances, and its services. The srvctl start instance command starts instances of a cluster database. This command also starts all enabled and nonrunning services that have the listed instances either as preferred or available instances. The srvctl stop instance command stops instances, and all enabled and nonrunning services that have these instances as either preferred or available instances. You must disable an object that you intend to remain stopped after you issue a srvctl stop command; otherwise, CRS can restart it as a result of another planned operation. For the commands that use a connect string, if you do not provide a connect string, then SRVCTL uses / as sysdba to perform the operation. The -q option causes SRVCTL to prompt for a connect string from standard input. SRVCTL does not support concurrent executions of commands on the same object. Therefore, run only one SRVCTL command at a time for each database, service, or other object. In order to use the START or STOP options of the SRVCTL command, your service must be a CRS-enabled, nonrunning service. That is why it is recommended to use the Database Configuration Assistant (DBCA), because it configures both the CRS resources and the Net Service entries for each RAC database.
Note: For more information, refer to the Real Application Clusters Administrator's Guide.
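For example, a minimal sketch of checking what is running and keeping one instance down across planned operations (the database and instance names are hypothetical):

$ srvctl status database -d RACDB
$ srvctl stop instance -d RACDB -i RACDB2 -o immediate
$ srvctl disable instance -d RACDB -i RACDB2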

RAC Initialization Parameter Files

- An SPFILE is created if you use DBCA.
- The SPFILE must be created on a shared volume or shared raw device.
- All instances use the same SPFILE.
- If the database was created manually, then create an SPFILE from a PFILE.

[Diagram: Node1 (instance RAC01) and Node2 (instance RAC02) each have a local initRAC01.ora / initRAC02.ora file containing only an SPFILE= entry that points to the single shared SPFILE.]


Initialization Parameter Files

When you create the database, DBCA creates an SPFILE in the file location that you specify. This location can be an automatic storage management (ASM) disk group, cluster file system file, or a shared raw device. If you manually create your database, then it is recommended to create an SPFILE from a PFILE. All instances in the cluster database use the same SPFILE at startup. Because the SPFILE is a binary file, do not edit it. Instead, change the SPFILE parameter settings by using Enterprise Manager or ALTER SYSTEM SQL statements. RAC uses a traditional PFILE only if an SPFILE does not exist or if you specify PFILE in your STARTUP command. Using an SPFILE simplifies administration, keeps parameter settings consistent, and guarantees that parameter settings persist across database shutdown and startup. In addition, you can configure RMAN to back up your SPFILE. In order for each instance to use the same SPFILE at startup, each instance uses its own PFILE that contains only one parameter called SPFILE. The SPFILE parameter points to the shared SPFILE on your shared storage. This is illustrated in the above graphic. By calling each PFILE init<SID>.ora, and by putting them in the $ORACLE_HOME/dbs directory of each node, a STARTUP command uses the shared SPFILE.
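A minimal sketch of such a pointer file on the first node (the database name and ASM path are hypothetical):

$ cat $ORACLE_HOME/dbs/initRACDB1.ora
SPFILE='+DGROUP1/RACDB/spfileRACDB.ora'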


SPFILE Parameter Values and RAC

- You can change parameter settings using the ALTER SYSTEM SET command from any instance:

  ALTER SYSTEM SET <dpname> SCOPE=MEMORY sid='<sid|*>';

- SPFILE entries such as:
  - *.<pname> apply to all instances
  - <sid>.<pname> apply only to <sid>
  - <sid>.<pname> takes precedence over *.<pname>

- Use current or future *.<dpname> settings for <sid>:

  ALTER SYSTEM RESET <dpname> SCOPE=MEMORY sid='<sid>';

- Remove an entry from your SPFILE:

  ALTER SYSTEM RESET <dpname> SCOPE=SPFILE sid='<sid|*>';



SPFILE Parameter Values and RAC

You can modify the value of your initialization parameters by using the ALTER SYSTEM SET command. This is the same as with a single-instance database, except that you can also specify the SID clause in addition to the SCOPE clause. By using the SID clause, you can specify the SID of the instance where the value takes effect. Specify SID='*' if you want to change the value of the parameter for all instances. Specify SID='sid' if you want to change the value of the parameter only for the instance sid. This setting takes precedence over previous and subsequent ALTER SYSTEM SET statements that specify SID='*'. If the instances are started up with an SPFILE, then SID='*' is the default if you do not specify the SID clause. If you specify an instance other than the current instance, then a message is sent to that instance to change the parameter value in its memory if you are not using the SPFILE scope. The combination of SCOPE=MEMORY and SID='sid' of the ALTER SYSTEM RESET command allows you to override the precedence of a currently used <sid>.<dparam> entry. This allows for the current *.<dparam> entry to be used, or for the next created *.<dparam> entry to be taken into account on that particular sid. Using the last example, you can remove a line from your SPFILE.
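A minimal sketch (the parameter and instance names are hypothetical): give instance RACDB2 its own setting both in memory and in the SPFILE, and later remove the instance-specific entry so that the cluster-wide *. value applies again:

SQL> ALTER SYSTEM SET pga_aggregate_target=300M SCOPE=BOTH SID='RACDB2';
SQL> ALTER SYSTEM RESET pga_aggregate_target SCOPE=SPFILE SID='RACDB2';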


EM and SPFILE Parameter Values

SCOPE=MEMORY


EM and SPFILE Parameter Values

You can access the Initialization Parameters page from the Cluster Database Administration page by clicking the Initialization Parameters link. The Current page shows you the values currently used by the initialization parameters of all the instances accessing the RAC database. You can filter the Initialization Parameters page to show only those parameters that meet the criteria of the filter that you entered in the Filter field. Optionally, you can choose Show All to display on one page all the parameters that are currently used by the running instances. The Instance column shows the instances for which the parameter has the value listed in the table. An asterisk (*) indicates that the parameter has the same value for all remaining instances of the cluster database. Choose a parameter from the Select column and perform one of the following steps:
- Click Add to add the selected parameter to a different instance. Enter a new instance name and value in the newly created row in the table.
- Click Reset to reset the value of the selected parameter. Note that you may only reset parameters that do not have an asterisk in the Instance column. The value of the selected parameter is reset to the value used by the remaining instances.
Note: For both the Add and Reset buttons, the ALTER SYSTEM command uses SCOPE=MEMORY.

EM and SPFILE Parameter Values

SCOPE=SPFILE

SCOPE=BOTH


EM and SPFILE Parameter Values (continued)

The SPFile tab displays the current values stored in your SPFILE. As in the Current tab, you can add or reset parameters. However, if you select the Apply changes in SPFile mode check box, then the ALTER SYSTEM command uses SCOPE=BOTH. If this check box is not selected, SCOPE=SPFILE is used. Click Apply to accept and generate your changes.


RAC Initialization Parameters


RAC Initialization Parameters

- CLUSTER_DATABASE: Enables a database to be started in cluster mode. Set this to TRUE.
- CLUSTER_DATABASE_INSTANCES: Sets the number of instances in your RAC environment. A proper setting for this parameter can improve memory use.
- CLUSTER_INTERCONNECTS: Specifies the cluster interconnect when there is more than one interconnect. Refer to your Oracle platform-specific documentation for the use of this parameter, its syntax, and its behavior. You typically do not need to set the CLUSTER_INTERCONNECTS parameter. For example, do not set this parameter for the following common configurations:
  - If you have only one cluster interconnect
  - If the default cluster interconnect meets the bandwidth requirements of your RAC database, which is typically the case
  - If NIC bonding is being used for the interconnect
- DB_NAME: If you set a value for DB_NAME in instance-specific parameter files, then the setting must be identical for all instances.
- DISPATCHERS: Set the DISPATCHERS parameter to enable a shared-server configuration, that is, a server configured to allow many user processes to share very few server processes. With shared-server configurations, many user processes connect to a dispatcher. The DISPATCHERS parameter may contain many attributes. Oracle recommends that you configure at least the PROTOCOL and LISTENER attributes.

RAC Initialization Parameters (continued)

- PROTOCOL specifies the network protocol for which the dispatcher process generates a listening end point.
- LISTENER specifies an alias name for the Oracle Net Services listeners. Set the alias to a name that is resolved through a naming method such as a tnsnames.ora file.
- MAX_COMMIT_PROPAGATION_DELAY: This is a RAC-specific parameter. Do not alter the default setting for this parameter except under a limited set of circumstances. This parameter specifies the maximum amount of time allowed before the system change number (SCN) held in the SGA of an instance is refreshed by the log writer process (LGWR). It determines whether the local SCN should be refreshed from the SGA when getting the snapshot SCN for a query.
- SPFILE: When you use an SPFILE, all RAC database instances must use the SPFILE and the file must be on shared storage.
- THREAD: If specified, this parameter must have unique values on all instances. The THREAD parameter specifies the number of the redo thread to be used by an instance. You can specify any available redo thread number as long as that thread number is enabled and is not used.
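For example, a shared-server entry in the SPFILE might look like the following sketch (the listener alias is hypothetical):

*.dispatchers='(PROTOCOL=TCP)(LISTENER=LISTENERS_RACDB)'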


Parameters Requiring Identical Settings




ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
CLUSTER_DATABASE
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
MAX_COMMIT_PROPAGATION_DELAY
TRACE_ENABLED
UNDO_MANAGEMENT

Parameters Requiring Identical Settings Certain initialization parameters that are critical at database creation or that affect certain database operations must have the same value for every instance in RAC. Specify these parameter values in the SPFILE, or within each init_dbname.ora file on each instance. The following list contains the parameters that must be identical on every instance:

ACTIVE_INSTANCE_COUNT ARCHIVE_LAG_TARGET CLUSTER_DATABASE CONTROL_FILES DB_BLOCK_SIZE DB_DOMAIN DB_FILES DB_NAME DB_RECOVERY_FILE_DEST DB_RECOVERY_FILE_DEST_SIZE DB_UNIQUE_NAME MAX_COMMIT_PROPAGATION_DELAY TRACE_ENABLED UNDO_MANAGEMENT

Note: The setting for DML_LOCKS must be identical on every instance only if set to zero.


Parameters Requiring Unique Settings

Instance Settings:
- THREAD
- ROLLBACK_SEGMENTS
- INSTANCE_NUMBER
- UNDO_TABLESPACE (when using automatic undo management)

Environment Variables:
- ORACLE_SID


Parameters Requiring Unique Settings If you use the THREAD or ROLLBACK_SEGMENTS parameters, then it is recommended to set unique values for them by using the SID identifier in the SPFILE. However, you must set a unique value for INSTANCE_NUMBER for each instance and you cannot use a default value. Oracle uses the INSTANCE_NUMBER parameter to distinguish among instances at startup. Oracle uses the THREAD number to assign redo log groups to specific instances. To simplify administration, use the same number for both the THREAD and INSTANCE_NUMBER parameters. If you specify UNDO_TABLESPACE with automatic undo management enabled, then set this parameter to a unique undo tablespace name for each instance. Specify the ORACLE_SID environment variable, which comprises the database name and the number of the THREAD assigned to the instance.
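Putting the unique and identical settings together, a fragment of a shared SPFILE might look like this sketch (the database name, instance names, and undo tablespace names are hypothetical):

*.cluster_database=TRUE
*.db_name='RACDB'
*.undo_management='AUTO'
RACDB1.instance_number=1
RACDB2.instance_number=2
RACDB1.thread=1
RACDB2.thread=2
RACDB1.undo_tablespace='UNDOTBS1'
RACDB2.undo_tablespace='UNDOTBS2'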


Adding a Node to a Cluster

1. Configure the OS and hardware for the new node.
2. Add the node to the cluster.
3. Add the RAC software to the new node.
4. Reconfigure listeners for the new node.
5. Add instances via DBCA.

[Diagram: a three-node cluster running instances RACDB1, RACDB2, and the newly added RACDB3.]


Adding a Node to a Cluster The next several slides explain how to add nodes to clusters. You can do this by setting up the new nodes to be part of your cluster at the network level. Then extend the Cluster Ready Services (CRS) home from an existing CRS home to the new nodes, and then extend the Oracle database software with RAC components to the new nodes. Finally, make the new nodes members of the existing cluster database. If the nodes that you are adding to your cluster do not have clusterware or Oracle software, then you must complete the five steps listed in the slide above. The procedures in these steps assume that you already have an operative UNIX-based or Windows-based RAC environment.


Adding a Node to an Existing Cluster

$ cd $ORA_CRS_HOME/oui/bin
$ ./addNode.sh


Adding a Node to an Existing Cluster Run the addNode.sh script from $ORA_CRS_HOME/oui/bin on one of the existing nodes as the oracle user:
$ cd $ORA_CRS_HOME/oui/bin
$ ./addNode.sh

When the Oracle Universal Installer (OUI) Welcome page appears, click Next. On the Specify Cluster Nodes to Add to Installation page, add the public and private node names, and then click Next. When the Cluster Node Addition Summary page appears, click Next. The Cluster Node Addition Progress page appears. You are then prompted to run rootaddnode.sh as the root user. Verify that the CLSCFG information in the rootaddnode.sh script is correct. It must contain the new public and private node names and node numbers. For example:
$ clscfg -add -nn node2,2 -pn node2-private,2 -hn <node2>,2

Run the rootaddnode.sh script on the existing node from where you ran the addNode.sh script.
su root
cd $ORA_CRS_HOME
sh -x rootaddnode.sh

After this is completed, click OK to continue.


Adding a Node to an Existing Cluster (continued) At this point another dialog box appears, which prompts you to run $ORA_CRS_HOME/root.sh on the new cluster node:
su root
cd $ORA_CRS_HOME
sh -x root.sh

After this is completed, click OK in the dialog box to continue. The End of Installation page is displayed. Exit the installer.


Adding the RAC Software to the New Node


Add the RAC software to the new node
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh


Adding the RAC Software to the New Node Run the addNode.sh script from $ORACLE_HOME/oui/bin on one of the existing nodes as the oracle user:
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh

When the OUI Welcome page appears, click Next. On the Specify Cluster Nodes to Add to Installation page, specify the node that you want to add. Click Next. When the Cluster Node Addition Summary page appears, click Next. The Cluster Node Addition Progress page appears. You are then prompted to run root.sh as the root user on the new node:
$ su - root
$ cd $ORACLE_HOME
$ ./root.sh

After this is completed, click OK to continue. The End of Installation page is displayed. Exit the installer. Change directory to the $ORACLE_HOME/bin directory and run the vipca tool with the new node list:
$ su root
$ DISPLAY=ipaddress:0.0; export DISPLAY
$ cd $ORACLE_HOME/bin
$ ./vipca -nodelist <node1>,<node2>


Adding the RAC Software to the New Node (continued)

The Virtual Internet Protocol Configuration Assistant (VIPCA) Welcome page appears. Click Next. Add the new node's virtual IP information, and click Next. The Summary page is displayed. Click Finish. A progress bar appears while the new CRS resources are created and started. After this is completed, click OK, view the configuration results, and click the Exit button. Verify that the interconnect information is correct with the oifcfg command:
$ oifcfg getif

If it is not correct, change it by using oifcfg:


$ oifcfg setif <interfacename>/<subnet>:<cluster_interconnect|public>


Reconfigure the Listeners


Reconfigure the Listeners Run netca on the new node to verify that the listener is configured on the new node:
$ DISPLAY=ipaddress:0.0; export DISPLAY
$ netca

Select Cluster Configuration, and then click Next. After selecting all nodes, click Next. Select Listener configuration, and then click Next. Click Reconfigure, and then click Next. Choose the listener that you want to reconfigure, and then click Next. Choose the correct protocol, and then click Next. Choose the correct port, and then click Next. Choose whether or not to configure another listener. Click Next. You may get an error message saying, "The information provided for this listener is currently in use by another listener." Ignore this message and click Yes to continue. When the Listener Configuration Complete page appears, click Next to continue. Click Finish to exit the Network Configuration Assistant (NETCA). Run the crs_stat command to verify that the listener CRS resource was created. For example:
cd $ORA_CRS_HOME/bin
./crs_stat

The new listener must be offline. Start it by starting the nodeapps on the new node.
$ srvctl start nodeapps -n <newnode>

Use crs_stat to confirm that all VIPs, GSDs, ONSs, and listeners are online.


Add an Instance by Using DBCA


Add an Instance by Using DBCA To add new instances, open DBCA from an existing node:
$ DISPLAY=ipaddress:0.0; export DISPLAY
$ dbca

On the Welcome page, select Oracle Real Application Clusters database, and then click Next. Select Instance Management, and click Next. Select Add an Instance, and click Next. Choose the database that you want to add an instance to, and specify a user with SYSDBA privileges. Click Next. Choose the correct instance name and node, and then click Next. Review the Storage page, and click Next. Review the Summary page, click OK, and wait for the progress bar to start. Allow the progress bar to finish. When asked whether you want to perform another operation, click No to exit the DBCA. To verify success, log in to one of the instances and query GV$INSTANCE. You should now be able to see all nodes:
SQL> SELECT instance_number inst_no, instance_name inst_name, parallel,
  2         status, database_status db_status, active_state state, host_name host
  3  FROM   gv$instance;

INST_NO INST_NAME    PAR STATUS DB_STATUS STATE  HOST
------- ------------ --- ------ --------- ------ --------------
      1 RACDB1       YES OPEN   ACTIVE    NORMAL stc-raclin01
      2 RACDB2       YES OPEN   ACTIVE    NORMAL stc-raclin02
      3 RACDB3       YES OPEN   ACTIVE    NORMAL stc-raclin03


Deleting Instances from a RAC Database


Deleting Instances from a RAC Database

The procedures outlined here explain how to use the DBCA to delete an instance from a RAC database. To delete an instance, start the DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page, select Oracle Real Application Clusters Database. Click Next. The Operations page is displayed. Select Instance Management, and then click Next. The Instance Management page appears. On the Instance Management page, select Delete Instance, and then click Next. On the page that displays the list of cluster databases, select a RAC database from which to delete an instance. If your user ID is not operating-system authenticated, then DBCA also prompts you for a user ID and password for a database user that has SYSDBA privileges. Click Next, and DBCA displays the List of Cluster Database Instances page, which shows the instances associated with the RAC database that you selected and the status of each instance. Select a remote instance to delete, and then click Finish. If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database. Review the information about the instance deletion operation on the Summary page, and then click OK. Click OK in the Confirmation dialog box to proceed with the instance deletion operation. The DBCA displays a progress dialog box showing that the DBCA is performing the instance deletion operation. During this operation, the DBCA removes the instance and the instance's Oracle Net configuration.

Deleting Instances from a RAC Database (continued) When the DBCA completes this operation, it displays a dialog box asking whether you want to perform another operation. Click No and exit the DBCA, or click Yes to perform another operation. If you click Yes, the Operations page is displayed.


Node Addition and Deletion and the SYSAUX Tablespace


The SYSAUX tablespace combines the storage needs for the following tablespaces:
- DRSYS
- CWMLITE
- XDB
- ODM
- TOOLS
- INDEX
- EXAMPLE
- OEM-REPO

Use this formula to size the SYSAUX tablespace: 300M + (250M * number_of_nodes)


The SYSAUX Tablespace

A new auxiliary, system-managed tablespace called SYSAUX contains performance data and combines content that was stored in different tablespaces (some of which are no longer required) in earlier releases of the Oracle database. This is a required tablespace for which you must plan disk space. The SYSAUX tablespace now contains the DRSYS (contains data for Oracle Text), CWMLITE (contains the OLAP schemas), XDB (for XML features), ODM (for Oracle Data Mining), TOOLS (contains Enterprise Manager tables), INDEX, EXAMPLE, and OEM-REPO tablespaces. If you add nodes to your RAC database environment, then you may need to increase the size of the SYSAUX tablespace. Conversely, if you remove nodes from your cluster database, then you may be able to reduce the size of your SYSAUX tablespace and thus save valuable disk space. The following is a formula that you can use to properly size the SYSAUX tablespace:
300 megabytes + (250 megabytes * number_of_nodes)

If you apply this formula to a four-node cluster, then you will find that the SYSAUX tablespace is sized around 1,300 megabytes (300 + (250 * 4) = 1300).
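Before resizing, you can check how the space in SYSAUX is actually being used. A minimal sketch (output depends on your configuration):

SQL> SELECT occupant_name, space_usage_kbytes
  2  FROM   v$sysaux_occupants
  3  ORDER BY space_usage_kbytes DESC;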


Quiescing RAC Databases

Use the ALTER SYSTEM QUIESCE RESTRICTED statement from a single instance.

SQL> ALTER SYSTEM QUIESCE RESTRICTED;

- The database cannot be opened until the ALTER SYSTEM QUIESCE statement finishes execution.
- The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment.
- Cold backups cannot be taken while the database is in a quiesced state.


Quiescing RAC Databases To quiesce a RAC database, use the ALTER SYSTEM QUIESCE RESTRICTED statement from one instance. It is not possible to open the database from any instance while the database is in the process of being quiesced from another instance. After all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED executes and the database is considered to be quiesced. In a RAC environment, this statement affects all instances. To issue the ALTER SYSTEM QUIESCE RESTRICTED statement in a RAC environment, you must have the Database Resource Manager feature activated, and it must have been activated since instance startup for all instances in the cluster database. The following conditions apply to RAC: If you had issued the ALTER SYSTEM QUIESCE RESTRICTED statement, but Oracle has not finished processing it, then you cannot open the database. You cannot open the database if it is already in a quiesced state. The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment, not just the instance that issues the command. Cold backups cannot be taken while the database is in a quiesced state because the Oracle background processes may still perform updates for internal purposes even while the database is in a quiesced state. Also, the file headers of online data files continue to appear as if they are being accessed. They do not look the same as if a clean shutdown were done.
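A minimal sketch of the full cycle, issued from any one instance (the ACTIVE_STATE column of GV$INSTANCE shows QUIESCING, QUIESCED, or NORMAL):

SQL> ALTER SYSTEM QUIESCE RESTRICTED;
SQL> SELECT inst_id, active_state FROM gv$instance;
SQL> ALTER SYSTEM UNQUIESCE;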

How SQL*Plus Commands Affect Instances


SQL*Plus Command          Associated Instance
------------------------  ----------------------------------------------------------------
ARCHIVE LOG               Always affects the current instance
CONNECT                   Affects the default instance if no instance is specified
                          in the CONNECT command
HOST                      Affects the node running the SQL*Plus session
RECOVER                   Does not affect any particular instance, but rather the database
SHOW PARAMETER and        Show the current instance parameter and SGA information
SHOW SGA
STARTUP and SHUTDOWN      Affect the current instance
SHOW INSTANCE             Displays information about the current instance

How SQL*Plus Commands Affect Instances

Most SQL statements affect the current instance. You can use SQL*Plus to start and stop instances in the RAC database. You do not need to run SQL*Plus commands as root on UNIX-based systems or as Administrator on Windows-based systems. You need only the proper database account with the privileges that you normally use for single-instance Oracle database administration. Some examples of how SQL*Plus commands affect instances are:
- ALTER SYSTEM CHECKPOINT LOCAL affects only the instance to which you are currently connected, rather than the default instance or all instances.
- ALTER SYSTEM CHECKPOINT or ALTER SYSTEM CHECKPOINT GLOBAL affects all instances in the cluster database.
- ALTER SYSTEM SWITCH LOGFILE affects only the current instance. To force a global log switch, use the ALTER SYSTEM ARCHIVE LOG CURRENT statement. The INSTANCE option of ALTER SYSTEM ARCHIVE LOG enables you to archive each online redo log file for a specific instance.
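A minimal sketch of the local and global variants from a SQL*Plus session on one instance:

SQL> ALTER SYSTEM CHECKPOINT LOCAL;       -- checkpoints only the instance you are connected to
SQL> ALTER SYSTEM CHECKPOINT GLOBAL;      -- checkpoints all instances of the cluster database
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;    -- forces a log switch and archiving on all instances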


Administering Alerts with Enterprise Manager


View alerts for all instances.


Administering Alerts with Enterprise Manager

You can use Enterprise Manager to administer alerts for RAC environments. You can also configure specialized tests for RAC databases such as global cache converts, consistent read requests, and so on. Enterprise Manager distinguishes between database-level and instance-level alerts in RAC environments. Alert thresholds for instance-level alerts, such as archive log alerts, can be set at the instance target level. This enables you to receive alerts for the specific instance if performance exceeds your threshold. You can also configure alerts at the database level, such as setting alerts for tablespaces. This enables you to avoid receiving duplicate alerts at each instance. Enterprise Manager also responds to metrics from across the entire RAC database and publishes alerts when thresholds are exceeded. Enterprise Manager interprets both predefined and customized metrics. You can also copy customized metrics from one cluster database instance to another, or from one RAC database to another. A recent alert summary can be found on the Database Control home page. Notice that alerts are sorted by relative time and target name.


Viewing Alerts
Choose an alert and drill down.


Viewing Alerts

When an alert that requires a closer look is raised, you can click it for more information. A statistic summary for the metric is displayed to the left of the window. Here you can find information such as high, low, and average values of the metric over the duration of the polling period, how many times the metric exceeded the warning and critical thresholds, and the current values of those thresholds. If you want to adjust the warning or critical threshold values, click the Manage Metrics link at the bottom of the page. In addition to adjusting the threshold values, you can also define a response action that will be used when the threshold is exceeded.


Viewing Alerts


Viewing Alerts (continued) It is also possible to view the metric across the cluster in a comparative or overlay fashion. To view this information, click the Compare Targets link at the bottom of the page. When the Compare Targets page appears, choose the instance targets that you want to compare by selecting them, and then clicking the Move button. If you want to compare the metric data from all targets, then click the Move All button. After making your selections, click the OK button to continue. The Metric summary page appears next. Depending on your needs, you can accept the default timeline of 24 hours or select a more suitable value from the drop-down list. If you want to add a comment regarding the event for future reference, then enter a comment in the Comment for Most Recent Alert field, and then click the Add Comment button.


Blackouts and Scheduled Maintenance


Blackouts and Scheduled Maintenance You can use Enterprise Manager database control to define blackouts for all managed targets of your RAC database to prevent alerts from being recorded. Blackouts are useful when performing scheduled or unscheduled maintenance or other tasks that might trigger extraneous or unwanted events. You can define blackouts for an entire cluster database or for specific cluster database instances. To create a blackout event, click the Maintenance tab located on the Database Control home page. Click the Blackouts link in the Enterprise Manager Administration section. The Setup Blackouts page will appear next. Click the Create button located to the right of the window. The Create Blackout: Properties page appears next. You must enter a name or tag in the Name field. If you want, you can also type in a descriptive comment in the Comments field. This is optional. Enter a reason for the blackout in the Enter a Reason field. In the Targets area of the Properties page, you must choose a Target type from the drop-down list. In the example above, the entire cluster database is chosen. Click the cluster database in the Available Targets list, and then click the Move button to move your choice to the Selected Targets list. Click the Next button to continue.


Blackouts and Scheduled Maintenance


Blackouts and Scheduled Maintenance (continued) The Member Targets page appears next. Expand the Selected Composite Targets tree and ensure that all targets that must be included appear in the list. Click the Next button to continue. The Create Blackout: Schedule page appears next. You must supply the start time and duration, and indicate whether the blackout is to be recurring and if so, the frequency of intervals in days. Finally, you must indicate whether the blackout will occur indefinitely or will end at some point in time. If the blackout must stop in the future, enter the time and date the blackout event will end. After supplying the needed information on this page, click the Next button to proceed. The Review page contains a summary of the blackout information that you previously entered. Review the information for accuracy and correct any errors that you may find. Click the Finish button when you are satisfied with the blackout parameters. If you navigate to the Blackouts page, you can see the blackout you just submitted. You can click the View button to see the properties of your blackout or you can click the Edit button to make changes at any point in the life of the new blackout event.


Summary

In this lesson, you should have learned how to:
- Use the EM Cluster Database home page
- Start and stop RAC databases and instances
- Add a node to a cluster
- Delete instances from a RAC database
- Quiesce RAC databases
- Administer alerts with Enterprise Manager


Practice 4: Overview

This practice covers the following topics:
- Using the srvctl utility to control your cluster database
- Starting and stopping the cluster database using EM Dbconsole


Administering Storage in RAC (Part I)


Objectives

After completing this lesson, you should be able to do the following:
- Describe automatic storage management (ASM)
- Install the ASM software
- Set up initialization parameter files for ASM and database instances
- Start up and shut down ASM instances
- Add ASM instances to the target list of Database Control
- Use Database Control to administer ASM in a RAC environment


What Is Automatic Storage Management?

- Is a purpose-built cluster file system and volume manager
- Manages Oracle database files
- Spreads data across disks to balance load
- Provides integrated mirroring across disks
- Solves many storage management challenges

[Diagram: ASM replaces the traditional file system and logical volume manager layers between the database and the operating system (Application > Database > File system > Logical volume manager > Operating system).]


What Is Automatic Storage Management? Automatic storage management (ASM) is a new feature in Oracle Database 10g. It provides a vertical integration of the file system and the Logical Volume Manager (LVM) that is specifically built for Oracle database files. The ASM can provide management for single SMP machines or across multiple nodes of a cluster for Oracle Real Application Clusters support. The ASM distributes input/output (I/O) load across all available resources to optimize performance while removing the need for manual I/O tuning. The ASM helps DBAs manage a dynamic database environment by allowing them to grow the database size without having to shut down the database to adjust the storage allocation. The ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. Data management is done by selecting the desired reliability and performance characteristics for classes of data rather than with human interaction on a per-file basis. The capabilities of ASM save DBAs time by automating manual storage and thereby increasing their ability to manage larger databases (and more of them) with increased efficiency.


ASM: Key Features and Benefits

- Stripes files rather than logical volumes
- Enables online disk reconfiguration and dynamic rebalancing
- Provides adjustable rebalancing speed
- Provides redundancy on a file basis
- Supports only Oracle files
- Is cluster aware
- Is automatically installed as part of the base code set


ASM: Key Features and Benefits The ASM divides a file into pieces and spreads them evenly across all the disks. The ASM uses an index technique to track the placement of each piece. Traditional striping techniques use mathematical functions to stripe complete logical volumes. When your storage capacity changes, ASM does not restripe all the data, but moves an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks. This is done while the database is active. You can adjust the speed of a rebalance operation to increase its speed or to lower the impact on the I/O subsystem. The ASM includes mirroring protection without the need to purchase a third-party Logical Volume Manager. One unique advantage of ASM is that the mirroring is applied on a file basis, rather than on a volume basis. Therefore, the same disk group can contain a combination of files protected by mirroring, or not protected at all. The ASM supports data files, log files, control files, archive logs, Recovery Manager (RMAN) backup sets, and other Oracle database file types. The ASM supports Real Application Clusters (RAC) and eliminates the need for a cluster Logical Volume Manager or a cluster file system. ASM is shipped with the database and does not show up as a separate option in the custom tree installation. It is available in both the Enterprise Edition and Standard Edition installations.
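For example, after adding a disk you might raise the rebalance speed for a single operation. A minimal sketch (the disk group name is hypothetical; the ASM_POWER_LIMIT parameter controls the default speed):

SQL> ALTER DISKGROUP dgroupA REBALANCE POWER 5;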

ASM: New Concepts


[Diagram: the traditional storage hierarchy (Database > Tablespace > Data file > Segment > Extent > Oracle block) alongside the new ASM hierarchy (ASM disk group > ASM file > ASM disk > Allocation unit > Physical block). A data file can be stored either as a file system file or raw device, or as an ASM file.]


ASM: New Concepts The ASM does not eliminate any existing database functionality. Existing databases are able to operate as they always have. New files may be created as ASM files, while existing ones are administered in the old way or can be migrated to ASM. The diagram depicts the relationships that exist between the various storage components inside an Oracle database. On the left and center parts of the diagram, you can find the relationships that exist in previous releases. The right part of the diagram shows you the new concepts introduced by ASM in Oracle Database 10g. However, these new concepts are only used to describe file storage, and do not replace any existing concepts such as segments and tablespaces. With ASM, database files can now be stored as ASM files. At the top of the new hierarchy, you can find what are called ASM disk groups. Any single ASM file is contained in only one disk group. However, a disk group may contain files belonging to several databases, and a single database may use storage from multiple disk groups. As you can see, one disk group is made up of ASM disks, and each ASM disk belongs to only one disk group. Also, ASM files are always spread across all ASM disks in the disk group. The ASM disks are partitioned in allocation units (AU) of one megabyte each. An AU is the smallest contiguous disk space that ASM allocates. The ASM does not allow physical blocks to be split across AUs. Note: The graphic deals with only one type of ASM file, the data file. However, ASM can be used to store other database file types.

ASM: General Architecture


[Diagram: two cluster nodes. Each node runs one ASM instance (SID=ant on Node1, SID=bee on Node2) and two database instances (SID=sales and SID=test). Each database instance communicates with the local ASM instance through its ASMB foreground connection; RBAL runs in every instance, and ARB0/ARBA processes run in the ASM instances. Group Services records which ASM instance (ant or bee) serves the shared disk groups tom, dick, and harry, each made up of ASM disks.]


ASM: General Architecture To use ASM, you must start a special instance called an ASM instance before you start your database instance. ASM instances do not mount databases, but instead manage the metadata needed to make ASM files available to ordinary database instances. Both ASM instances and database instances have access to a common set of disks called disk groups. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files. An ASM instance contains two new background processes. One coordinates rebalance activity for disk groups. It is called RBAL. The other performs the actual rebalance activity for AU movements. There can be many of these at a time, and they are called ARB0, ARB1, and so on. An ASM instance also has most of the same background processes as a database instance (SMON, PMON, LGWR, and so on). Each database instance using ASM has two new background processes called ASMB and RBAL. RBAL performs global opens of the disks in the disk groups. At database instance startup, ASMB connects as a foreground process into the ASM instance. All communication between the database and ASM instances is performed via this bridge. This includes physical file changes such as data file creation and deletion. Over this connection, periodic messages are exchanged to update statistics and to verify that both instances are healthy.


ASM: General Architecture (continued) Group Services is used to register the connection information needed by the database instances to find ASM instances. When an ASM instance mounts a disk group, it registers the disk group and connect string with Group Services. The database instance knows the name of the disk group, and can therefore use it to look up connect information for the correct ASM instance. Like RAC, ASM instances themselves may be clustered, using the existing Global Cache Services (GCS) infrastructure. There is one ASM instance per node on a cluster. As with existing RAC configurations, ASM requires that the operating system makes the disks globally visible to all ASM instances, irrespective of the node. If there are several database instances for different databases on the same node, then they share the same single ASM instance on that node. If the ASM instance on one node fails, all the database instances connected to it also fail. As with RAC, ASM and database instances on other nodes recover the dead instances and continue operations. Note: A disk group can contain files for many different Oracle databases. Thus, multiple database instances serving different databases can access the same disk group even on a single system without RAC.


ASM Instance and Crash Recovery in RAC


[Diagram: two scenarios. ASM instance recovery: both +ASM1 and +ASM2 mount disk group A; when +ASM1 fails, the surviving instance +ASM2 repairs the disk group. ASM crash recovery: only +ASM1 mounts disk group A; when +ASM1 fails, the disk group is repaired the next time it is mounted.]


ASM Instance and Crash Recovery in RAC Each disk group is self-describing, containing its own file directory, disk directory, and other data such as metadata logging information. ASM automatically protects its metadata by using mirroring techniques even with external redundancy disk groups. With multiple ASM instances mounting the same disk groups, if one ASM instance fails, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. This situation is called ASM instance recovery, and is automatically and immediately detected by the global cache services. With multiple ASM instances mounting different disk groups, or in the case of a single ASM instance configuration, if an ASM instance fails while ASM metadata is open for update, then the disk groups that are not currently mounted by any other ASM instance are not recovered until they are mounted again. When an ASM instance mounts a failed disk group, it reads the disk group log and recovers all transient changes. This situation is called ASM crash recovery. Therefore, when using ASM clustered instances, it is recommended to have all ASM instances always mounting the same set of disk groups. However, it is possible to have a disk group on locally attached disks that are only visible to one node in a cluster, and have that disk group only mounted on the node where the disks are attached. Note: The failure of an Oracle database instance is not significant here because only ASM instances update ASM metadata.

ASMLibs

- An ASMLib is a storage-management interface between the Oracle kernel and disk storage.
- You can load multiple ASMLibs.
- Purpose-built drivers can provide:
  - Device discovery
  - A more efficient I/O interface
  - Increased performance and reliability
- Oracle freely delivers an ASMLib on Linux.
- Several participating storage vendors, such as EMC and HP, are joining this initiative.


ASMLibs ASMLib is a support library for the ASM feature. The objective of ASMLib is to provide a more streamlined and efficient mechanism for identifying and accessing block devices used by ASM disk groups. This API serves as an alternative to the standard operating system interface. The ASMLib kernel driver is released under the GNU General Public License (GPL), and Oracle Corporation freely delivers an ASMLib for Linux platforms. This library is provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API. The main ASMLib functions are grouped into three collections of functions: Device discovery functions must be implemented in any ASMLib. Discover strings usually contain a prefix identifying which ASMLib this discover string is intended for. For the Linux ASMLib provided by Oracle, the prefix is ORCL:. I/O processing functions extend the operating system interface and provide an optimized asynchronous interface for scheduling I/O operations and managing I/O operation completion events. These functions are implemented as a device driver within the operating system kernel. The performance and reliability functions use the I/O processing control structures for passing metadata between the Oracle database and the back-end storage devices. They enable additional intelligence on the part of back-end storage. Note: The database can load multiple ASMLibs, each handling different disks.
Oracle Database 10g: Real Application Clusters 5-9

Oracle Linux ASMLib Installation: Overview


1. Install the ASMLib packages on each node:
   Download from http://otn.oracle.com/tech/linux/asmlib
   Install oracleasm-support, oracleasmlib, and kernel-related packages
2. Configure ASMLib on each node:
   Load the ASM driver and mount the ASM driver file system
   Use the oracleasm script with the configure option
3. Make disks available to ASMLib by marking disks using oracleasm createdisk on one node.
4. Make sure that disks are visible on other nodes using oracleasm scandisks.
5. Use appropriate discovery strings for this ASMLib.
5-10 Copyright 2005, Oracle. All rights reserved.

Oracle Linux ASMLib Installation: Overview
You can download the Oracle ASMLib software from the Oracle Technology Network Web site. There are three packages for each Linux platform. The two essential packages are the oracleasmlib package, which provides the actual ASM library, and the oracleasm-support package, which provides the utilities to configure and enable the ASM driver. The remaining package provides the kernel driver for the ASMLib.

After the ASMLib software is installed, you need to make the ASM driver available by executing the /etc/init.d/oracleasm configure command. This operation creates the /dev/oracleasm mount point used by the ASMLib to communicate with the ASM driver. When using RAC, installation and configuration must be completed on all nodes of the cluster.

In order to place a disk under ASM management, it must first be marked to prevent inadvertent use of incorrect disks by ASM. This is accomplished by using the /etc/init.d/oracleasm createdisk command. With RAC, this operation needs to be performed on only one node because this is a shared-disk architecture. However, the other nodes in the cluster need to ensure that the disk is seen and valid. Therefore, the other nodes in the cluster need to execute the /etc/init.d/oracleasm scandisks command. After the disks are marked, the ASM initialization parameters can be set to appropriate values.

Oracle Database 10g: Real Application Clusters 5-10

Oracle Linux ASMLib Installation

Install the packages as the root user:

# rpm -i oracleasm-support-version.arch.rpm \
        oracleasm-kernel-version.arch.rpm \
        oracleasmlib-version.arch.rpm

Run oracleasm with the configure option:
Provide the oracle UID as the driver owner.
Provide the dba GID as the group of the driver.
Load the driver at system startup.

# /etc/init.d/oracleasm configure

5-11

Copyright 2005, Oracle. All rights reserved.

Oracle Linux ASMLib Installation
If you do not use the ASM library driver, you must bind each disk device that you want to use to a raw device. To install and configure the ASM library driver and utilities, perform the following steps:
1. Enter the following command to determine the kernel version and architecture of the system:
# uname -rm

2. If necessary, download the required ASM library driver packages from the OTN Web site. You must download the following three packages, where version is the version of the ASM library driver, arch is the system architecture, and kernel is the kernel version you are using:
oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm

3. Install the proper packages for your platform. For example, if you are using the Red Hat Enterprise Linux AS 3.0 enterprise kernel, enter a command similar to the following:
# rpm -i oracleasm-support-1.0.0-1.i386.rpm \
        oracleasm-2.4.9-e-enterprise-1.0.0-1.i686.rpm \
        oracleasmlib-1.0.0-1.i386.rpm

Oracle Database 10g: Real Application Clusters 5-11

Oracle Linux ASMLib Installation (continued)
4. Enter the following command to run the oracleasm initialization script with the configure option:
# /etc/init.d/oracleasm configure
You will be prompted for the following:
- The UID of the driver owner. This will be the UID of the oracle user.
- The GID of the driver group. This will be the GID of the dba group.
- Whether the ASMLib driver should be loaded at startup. The correct answer is yes.
The script then completes the following tasks:
- Creates the /etc/sysconfig/oracleasm configuration file
- Creates the /dev/oracleasm mount point
- Loads the oracleasm kernel module
- Mounts the ASM library driver file system
5. Repeat this procedure on all cluster nodes where you want to install RAC.

Oracle Database 10g: Real Application Clusters 5-12

ASM Library Disk Creation

Identify the device name for the disks that you want to use with the fdisk -l command.
Create a single whole-disk partition on the disk device with fdisk.
Enter a command similar to the following to mark the shared disk as an ASM disk:
# /etc/init.d/oracleasm createdisk disk1 /dev/sdbn
To make the disk available on the other nodes, enter the following as root on each node:
# /etc/init.d/oracleasm scandisks
Set the ASM_DISKSTRING parameter.

5-13

Copyright 2005, Oracle. All rights reserved.

ASM Library Disk Creation To configure the disk devices that you want to use in an ASM disk group, complete the following steps: 1. If necessary, install the shared disks that you intend to use for the disk group and restart the system. 2. To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l

3. Using fdisk, create a single whole-disk partition on the device that you want to use. 4. Enter a command similar to the following to mark a disk as an ASM disk:
# /etc/init.d/oracleasm createdisk disk1 /dev/sdb1

In this example, disk1 is the tag or name that you want to assign to the disk. 5. To make the disk available on other cluster nodes, enter the following command as root on each node:
# /etc/init.d/oracleasm scandisks

This command identifies all the shared disks attached to the node that are marked as ASM disks.

Oracle Database 10g: Real Application Clusters 5-13

ASM Library Disk Configuration
Important oracleasm options:
configure: Use this option to reconfigure the ASM library driver, if necessary.
enable/disable: Use the disable and enable options to change the behavior of the ASM library driver when the system starts. The enable option causes the ASM library driver to load when the system starts.
start/stop/restart: Use the start, stop, and restart options to load or unload the ASM library driver without restarting the system.
createdisk: Use this option to mark a disk for use with the ASM library and name it.
deletedisk: Use this option to unmark a named disk device. Do not use this command to unmark disks that are being used by an ASM disk group. You must drop the disk from the ASM disk group before you unmark it.
querydisk: Use this option to determine whether a disk device or disk name is being used by the ASM library driver.
listdisks: Use this option to list the disk names of marked ASM library driver disks.
scandisks: Use this option to enable cluster nodes to identify which shared disks have been marked as ASM library driver disks on another node.
After you have prepared your disks, set the ASM_DISKSTRING initialization parameter to an appropriate value. The oracleasm script marks disks with an ASM header label. You can set the ASM_DISKSTRING parameter to the value ORCL:DISK*. This setting enables ASM to scan and qualify all disks with that header label.
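As an illustration only (the disk name DISK1 and the device /dev/sdb1 are hypothetical), a quick check of the ASMLib configuration from any node might look like this:

# /etc/init.d/oracleasm listdisks            # list all disks marked for ASMLib
# /etc/init.d/oracleasm querydisk DISK1      # check whether a disk name is in use
# /etc/init.d/oracleasm querydisk /dev/sdb1  # check whether a device is marked
# /etc/init.d/oracleasm scandisks            # pick up disks marked on another node

ASM_DISKSTRING can then be set to a value such as ORCL:DISK* so that ASM discovery is limited to the ASMLib-managed disks.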

Oracle Database 10g: Real Application Clusters 5-14

ASM Administration

ASM instance

Disk groups and disks

Files


5-15

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 5-15

ASM Instance Functionalities


CREATE DISKGROUP
ALTER DISKGROUP
DROP DISKGROUP
ALTER SYSTEM ... RESTRICTED SESSION

[Slide diagram: these disk group commands are issued from the ASM instance; database instances connect to the ASM instance to access the disk groups that it manages.]

5-16

Copyright 2005, Oracle. All rights reserved.

ASM Instance Functionalities The main goal of an ASM instance is to manage disk groups and protect their data. ASM instances also communicate file layout to database instances. In this way, database instances can directly access files stored in disk groups. There are several new disk group administrative commands. They all require the SYSDBA privilege and must be issued from an ASM instance. You can add new disk groups. You can also modify existing disk groups to add new disks, remove existing ones, and many other operations. You can remove existing disk groups. Finally, you can prevent database instances from connecting to an ASM instance. When the ALTER SYSTEM ENABLE RESTRICTED SESSION command is issued to an ASM instance, database instances cannot connect to that ASM instance. Conversely, ALTER SYSTEM DISABLE RESTRICTED SESSION enables connections from database instances. This command enables an ASM instance to start up and mount disk groups for the purpose of maintenance without allowing database instances to access the disk groups.
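A minimal sketch of this maintenance pattern, run while connected to the ASM instance (the disk group name dgroupA is only an example):

-- Block new database instance connections while performing disk group maintenance
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER DISKGROUP dgroupA DISMOUNT;
ALTER DISKGROUP dgroupA MOUNT;
-- Allow database instances to connect again
ALTER SYSTEM DISABLE RESTRICTED SESSION;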

Oracle Database 10g: Real Application Clusters 5-16

ASM Instance Creation

5-17

Copyright 2005, Oracle. All rights reserved.

ASM Instance Creation
While creating an ASM-enabled database, the DBCA determines whether an ASM instance already exists on your host. If there is one, it gives you the list of the disk groups it manages. You can then select which of these disk groups are used for ASM-enabled database storage. When the ASM instance discovery returns an empty list, the DBCA creates a new ASM instance.

As part of the ASM instance creation process, the DBCA automatically creates an entry in the oratab file on supported platforms. This entry is used for discovery purposes. On Windows platforms, where a services mechanism is used, the DBCA automatically creates an Oracle service and the appropriate registry entry to facilitate the discovery of ASM instances. The following configuration files are also automatically created by the DBCA at the time of ASM instance creation: the ASM instance parameter file and the ASM instance password file. Before creating the ASM instance, you can specify some initialization parameters for the ASM instance. After the ASM instance is created, the DBCA allows you to create new disk groups that you can use to store your database.
Note: ASM instances are smaller than database instances. A 64 MB SGA should be sufficient for all but the largest ASM installations.

Oracle Database 10g: Real Application Clusters 5-17

ASM Instance Initialization Parameters

INSTANCE_TYPE = ASM
DB_UNIQUE_NAME = +ASM
ASM_POWER_LIMIT = 1
ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
ASM_DISKGROUPS = dgroupA, dgroupB
LARGE_POOL_SIZE = 8MB
PROCESSES = 25 + 15 * <# of DB instances using ASM for their storage>

5-18

Copyright 2005, Oracle. All rights reserved.

ASM Instance Initialization Parameters
INSTANCE_TYPE should be set to ASM for ASM instances.
DB_UNIQUE_NAME specifies the service provider name for which this ASM instance manages disk groups. The default value of +ASM should be valid for you.
ASM_POWER_LIMIT controls the speed of a rebalance operation. Possible values range from 1 through 11, with 11 being the fastest. If omitted, this value defaults to 1. The number of slaves for a rebalance operation is derived from the parallelization level specified in a manual rebalance command (POWER), or by the ASM_POWER_LIMIT parameter.
ASM_DISKSTRING is an operating system-dependent value used by ASM to limit the set of disks considered for discovery. When a new disk is added to a disk group, each ASM instance that has the disk group mounted must be able to discover the new disk by using its ASM_DISKSTRING. If not specified, it is assumed to be NULL, and ASM disk discovery finds all disks to which the ASM instance has read and write access.
ASM_DISKGROUPS is the list of names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT command is used. ASM automatically adds a disk group to this parameter when the disk group is successfully mounted, and automatically removes the disk group when it is dismounted, except for dismounts at instance shutdown.
Note: The internal packages used by ASM instances are executed from the large pool; therefore, you must set the value of the LARGE_POOL_SIZE initialization parameter to at least 8 MB. For other buffer parameters, you can use their default values.
Oracle Database 10g: Real Application Clusters 5-18

RAC and ASM Instances Creation

5-19

Copyright 2005, Oracle. All rights reserved.

RAC and ASM Instances Creation
When using the Database Configuration Assistant (DBCA) to create ASM instances on your cluster, you follow essentially the same steps as for a single-instance environment. The only exceptions are the first and third steps: you must select the Oracle Real Application Clusters database option in the first step, and then select all the nodes of your cluster. The DBCA automatically creates one ASM instance on each selected node. The first instance is called +ASM1, the second +ASM2, and so on.

Oracle Database 10g: Real Application Clusters 5-19

ASM Instance Initialization Parameters and RAC


CLUSTER_DATABASE: This parameter must be set to TRUE.
ASM_DISKGROUPS:
Multiple instances can have different values.
Shared disk groups must be mounted by each ASM instance.

ASM_DISKSTRING:
Multiple instances can have different values.
With shared disk groups, every instance should be able to see the common pool of physical disks.

ASM_POWER_LIMIT: Multiple instances can have different values.


Copyright 2005, Oracle. All rights reserved.

5-20

ASM Instance Initialization Parameters and RAC
In order to enable ASM instances to be clustered together in a RAC environment, each ASM instance initialization parameter file must set its CLUSTER_DATABASE parameter to TRUE. This enables the global cache services to be started on each ASM instance.

Although it is possible for multiple ASM instances to have different values for their ASM_DISKGROUPS parameter, it is recommended that each ASM instance mount the same set of disk groups. This enables disk groups to be shared amongst ASM instances for recovery purposes. In addition, all disk groups used to store one RAC database must be shared by all ASM instances in the cluster.

Consequently, if you are sharing disk groups amongst ASM instances, their ASM_DISKSTRING initialization parameters must point to the same set of physical media. However, this parameter does not need to have the same setting on each node. For example, assume that the physical disks of a disk group are mapped by the OS on node A as /dev/rdsk/c1t1d0s2, and on node B as /dev/rdsk/c2t1d0s2. Although both nodes have different disk string settings, they locate the same devices via the OS mappings. This situation can occur when the hardware configurations of node A and node B are different, for example, when the nodes are using different controllers as in the above example. ASM handles this situation because it inspects the contents of the disk header block to determine the disk group to which a disk belongs, rather than attempting to maintain a fixed list of path names.
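A minimal sketch of the corresponding instance-specific entries in a shared ASM parameter file (the device paths, disk group names, and instance names are illustrative only):

*.instance_type=ASM
*.cluster_database=TRUE
*.asm_diskgroups='DGROUPA','DGROUPB'
+ASM1.asm_diskstring='/dev/rdsk/c1*'
+ASM2.asm_diskstring='/dev/rdsk/c2*'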
Oracle Database 10g: Real Application Clusters 5-20

Discovering New ASM Instances with EM


If new ASM targets are not discovered:
<Target TYPE="osm_instance" NAME="+ASMn" DISPLAY_NAME="+ASMn">
  <Property NAME="SID" VALUE="+ASMn"/>
  <Property NAME="MachineName" VALUE="clusnode1_vip"/>
  <Property NAME="OracleHome" VALUE="/u01/app/oracle/..."/>
  <Property NAME="UserName" VALUE="sys"/>
  <Property NAME="password" VALUE="manager" ENCRYPTED="FALSE"/>
  <Property NAME="Role" VALUE="sysdba"/>
  <Property NAME="Port" VALUE="1521"/>
</Target>

$ emctl config agent addtarget <filename>
$ emctl stop agent
$ emctl start agent
5-21 Copyright 2005, Oracle. All rights reserved.

Discovering New ASM Instances with EM
If an ASM instance is added in an existing RAC environment, it is not discovered automatically by Database Control. You must perform the following steps to discover the ASM target:
1. Create an XML file in the format shown in the slide above. You must use the correct values for the following properties:
- NAME: The ASM target name. Usually hostname_ASMsid.
- DISPLAY_NAME: The ASM target display name. Usually +ASMn.
- SID: The ASM SID. Usually +ASMn.
- MachineName: The host name. In a RAC environment, use the corresponding VIP.
- OracleHome: The ASM Oracle home.
- UserName: The user name (default is SYS) for the ASM instance.
- password: The ASM user's password.
- Role: The ASM user's role (default is SYSDBA).
- Port: The ASM port. By default, it is 1521.
2. Run emctl config agent addtarget <filename>. This command appends the above target to the list of targets in the targets.xml configuration file of Database Control.
3. Restart the agent.
Note: Repeat these steps on each node by using the corresponding ASM instance's name.
Oracle Database 10g: Real Application Clusters 5-21

Accessing an ASM Instance


[Slide diagram: a connection AS SYSDBA to the ASM instance allows all operations on the disk groups in the storage system.]

5-22

Copyright 2005, Oracle. All rights reserved.

Accessing an ASM Instance
ASM instances do not have a data dictionary, so the only way to connect to one is by using OS authentication, that is, connecting as SYSDBA. To connect remotely, a password file must be used. Normally, the SYSDBA privilege is granted through the use of an operating system group. On UNIX, this is typically the dba group. By default, members of the dba group have SYSDBA privileges on all instances on the node, including the ASM instance. Users who connect to the ASM instance with the SYSDBA privilege have complete administrative access to all disk groups in the system.
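For example (the instance name +ASM1 is an assumption; use the ASM SID of the local node), a local connection through OS authentication might look like this:

$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSDBA

A remote connection, such as the one used by Enterprise Manager, instead authenticates AS SYSDBA against the ASM instance's password file.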

Oracle Database 10g: Real Application Clusters 5-22

Dynamic Performance View Additions


V$ASM_TEMPLATE
V$ASM_CLIENT
V$ASM_DISKGROUP
V$ASM_FILE
V$ASM_ALIAS
V$ASM_DISK
V$ASM_OPERATION

[Slide diagram: these views describe the disk groups (Disk group A, Disk group B) and the disks of the storage system.]
5-23 Copyright 2005, Oracle. All rights reserved.

Dynamic Performance View Additions
In an ASM instance, V$ASM_CLIENT contains one row for every database instance using a disk group managed by the ASM instance. In a database instance, it has one row for each disk group, with the database name and the ASM instance name.
In an ASM instance, V$ASM_DISKGROUP contains one row for every disk group discovered by the ASM instance. In a database instance, V$ASM_DISKGROUP has a row for each disk group, whether mounted or dismounted.
In an ASM instance, V$ASM_TEMPLATE contains one row for every template present in every disk group mounted by the ASM instance. In a database instance, it has rows for all templates in mounted disk groups.
In an ASM instance, V$ASM_DISK contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a database instance, it has rows for the disks in the disk groups in use by the database instance.
In an ASM instance, V$ASM_OPERATION contains one row for every active ASM long-running operation executing in the ASM instance. In a database instance, it contains no rows.
In an ASM instance, V$ASM_FILE contains one row for every ASM file in every disk group mounted by the ASM instance. In a database instance, it contains no rows.
In an ASM instance, V$ASM_ALIAS contains one row for every alias present in every disk group mounted by the ASM instance. In a database instance, it contains no rows.
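As a simple illustration, the following query, run on an ASM instance, lists each discovered disk group and any database instances using it (the column names come from the views above; actual output depends on your configuration):

SELECT dg.name, dg.state, dg.type, c.instance_name, c.db_name
FROM   v$asm_diskgroup dg, v$asm_client c
WHERE  dg.group_number = c.group_number(+)
ORDER  BY dg.name;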
Oracle Database 10g: Real Application Clusters 5-23

ASM Home Page

5-24

Copyright 2005, Oracle. All rights reserved.

ASM Home Page
Enterprise Manager provides a user-friendly graphical interface to Oracle database management, administration, and monitoring tasks. Oracle Database 10g extends the existing functionality to transparently support the management, administration, and monitoring of Oracle databases using ASM storage. It also adds support for the new management tasks required for administration of the ASM instance and ASM disk groups. This home page shows the status of the ASM instance along with the metrics and alerts generated by the collection mechanisms. This page also provides the startup and shutdown functionality. When you click the Alerts link, a page providing alert details appears. The Disk Group Usage chart shows the space used by each client database along with the free space.
Note: You can reach the ASM home page from the Database home page. In the General section of the Database home page, click the +ASM link.

Oracle Database 10g: Real Application Clusters 5-24

ASM Performance Page

5-25

Copyright 2005, Oracle. All rights reserved.

ASM Performance Page The Performance tab of the ASM home page shows the I/O response time and throughput for each disk group. You can further drill down to view disk-level performance metrics.

Oracle Database 10g: Real Application Clusters 5-25

ASM Configuration Page

5-26

Copyright 2005, Oracle. All rights reserved.

ASM Configuration Page The Configuration tab of the ASM home page enables you to view or modify the initialization parameters of the ASM instance.

Oracle Database 10g: Real Application Clusters 5-26

Starting Up an ASM Instance

$ sqlplus /nolog
SQL> CONNECT / AS sysdba
Connected to an idle instance.
SQL> STARTUP;
ASM instance started
Total System Global Area  147936196 bytes
Fixed Size                   324548 bytes
Variable Size              96468992 bytes
Database Buffers           50331648 bytes
Redo Buffers                 811008 bytes
ASM diskgroups mounted

5-27

Copyright 2005, Oracle. All rights reserved.

Starting Up an ASM Instance
ASM instances are started similarly to database instances, except that the initialization parameter file contains an entry such as INSTANCE_TYPE=ASM. Setting this parameter to the value ASM signals the Oracle executable that an ASM instance is starting, not a database instance. Furthermore, for ASM instances, the MOUNT option during startup tries to mount the disk groups specified by the ASM_DISKGROUPS initialization parameter. No database is mounted in this case. Other STARTUP clauses for ASM instances are similar to those for database instances. For example, RESTRICT prevents database instances from connecting to this ASM instance. OPEN is invalid for an ASM instance. NOMOUNT starts up an ASM instance without mounting any disk group.
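A brief sketch of the other startup variants mentioned above, run from the ASM instance environment: NOMOUNT starts the instance without mounting disk groups (they can then be mounted explicitly), and RESTRICT mounts the disk groups while keeping database instances out for maintenance.

SQL> STARTUP NOMOUNT
SQL> ALTER DISKGROUP ALL MOUNT;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT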

Oracle Database 10g: Real Application Clusters 5-27

Shutting Down an ASM Instance


[Slide diagram: SHUTDOWN NORMAL on the ASM instance waits for its dependent DB instance to shut down first; SHUTDOWN IMMEDIATE on the ASM instance causes the dependent DB instance to be immediately aborted.]

5-28

Copyright 2005, Oracle. All rights reserved.

Shutting Down an ASM Instance
Because ASM manages the disk groups that hold the database files and their metadata, a shutdown of the ASM instance cannot proceed until all of its client database instances have been stopped as well.

In the case of ASM SHUTDOWN NORMAL, the ASM instance begins shutdown and waits for all sessions to disconnect, just as a typical database instance does. In addition, because ASM has a persistent database instance connection, the database instances must be shut down first in order for ASM to complete its shutdown.

In the case of ASM SHUTDOWN IMMEDIATE, TRANSACTIONAL, or ABORT, ASM immediately terminates its database instance connections and, as a result, all dependent databases immediately abort. However, in the IMMEDIATE and TRANSACTIONAL cases, ASM waits for any in-progress ASM SQL to complete before shutting down the ASM instance.

In a single ASM instance configuration, if the ASM instance fails while disk groups are open for update, then after the ASM instance reinitializes, it reads the disk group logs and recovers all transient changes. With multiple ASM instances sharing disk groups, if one ASM instance should fail, another ASM instance automatically recovers the transient ASM metadata changes caused by the failed instance. The failure of a database instance does not affect ASM instances.

An ASM instance is expected to be always functional on the host. An ASM instance must be brought up automatically whenever the host is restarted, and it is expected to use the auto-startup mechanism supported by the underlying operating system. For example, it should run as a service under Windows.
Note: File system failure usually crashes a node.
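As an illustrative sequence only (the database name RACDB, instance name RACDB1, and node name node1 are assumptions), an orderly shutdown on one node might therefore look like this:

$ srvctl stop instance -d RACDB -i RACDB1   # stop the database instance on this node first
$ srvctl stop asm -n node1                  # then stop the ASM instance on the same node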
Oracle Database 10g: Real Application Clusters 5-28

ASM Administration

ASM instance

Disk groups and disks

Files


5-29

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 5-29

ASM Disk Group

Is a pool of disks managed as a logical unit
Partitions total disk space into uniform-sized units
Spreads each file evenly across all disks
Provides coarse- or fine-grain striping based on file type
Administers disk groups, not files

ASM instance

Disk group

5-30

Copyright 2005, Oracle. All rights reserved.

ASM Disk Group
A disk group is a collection of disks managed as a logical unit. Storage is added to and removed from disk groups in units of ASM disks. Every ASM disk has an ASM disk name, which is a name common to all nodes in a cluster. The ASM disk name abstraction is required because different hosts can use different operating system names to refer to the same disk.

ASM always evenly spreads files in 1 MB allocation-unit chunks across all the disks in a disk group. This is called COARSE striping. In this way, ASM eliminates the need for manual disk tuning. However, disks in a disk group should have similar size and performance characteristics to obtain optimal I/O tuning. For most installations, there is only a small number of disk groups, for example, one disk group for a work area and another for a recovery area.

For files (such as log files) that require low latency, ASM provides fine-grained (128 KB) striping. FINE striping stripes within each allocation unit, breaking up medium-sized I/O operations into multiple, smaller I/O operations that execute in parallel. Even as the number of files and disks increases, you have to manage only a constant number of disk groups. From a database perspective, disk groups can be specified as the default location for files created in the database.
Note: Each disk group is self-describing, containing its own file directory, disk directory, and other directories.
Oracle Database 10g: Real Application Clusters 5-30

Failure Group

[Slide diagram: Disk group A is divided into three failure groups, one for the disks attached to each controller (Controller 1, Controller 2, and Controller 3); allocation units (such as 1, 7, and 13) are mirrored across different failure groups.]

5-31

Copyright 2005, Oracle. All rights reserved.

Failure Group A failure group is a set of disks, inside one particular disk group, sharing a common resource whose failure needs to be tolerated. An example of a failure group is a string of SCSI disks connected to a common SCSI controller. A failure of the controller leads to all of the disks on its SCSI bus becoming unavailable, although each of the individual disks is still functional. What constitutes a failure group is site specific. It is largely based on failure modes that a site is willing to tolerate. By default, ASM assigns each disk to its own failure group. When creating a disk group or adding a disk to a disk group, administrators can specify their own grouping of disks into failure groups. After failure groups are identified, ASM can optimize file layout to reduce the unavailability of data due to the failure of a shared resource.

Oracle Database 10g: Real Application Clusters 5-31

Disk Group Mirroring

Mirror at AU level
Mix primary and mirror AUs on each disk
External redundancy: defers to hardware mirroring
Normal redundancy:
  Two-way mirroring
  At least two failure groups
High redundancy:
  Three-way mirroring
  At least three failure groups

5-32

Copyright 2005, Oracle. All rights reserved.

Disk Group Mirroring
ASM has three disk group types that support different types of mirroring: external redundancy, normal redundancy, and high redundancy. External-redundancy disk groups do not provide mirroring. Use an external-redundancy disk group if you use hardware mirroring or if you can tolerate data loss as the result of a disk failure. Normal-redundancy disk groups support two-way mirroring. High-redundancy disk groups provide triple mirroring.

ASM uses a unique mirroring algorithm. ASM does not mirror disks; rather, it mirrors allocation units (AUs). As a result, you need only spare capacity in your disk group. When a disk fails, ASM automatically reconstructs the contents of the failed disk on the surviving disks in the disk group by reading the mirrored contents from the surviving disks. In this way, the I/O hit from a disk failure is spread across several disks rather than concentrated on a single disk that mirrors the failed drive.

When ASM allocates a primary AU of a file to one disk in a disk group, it allocates a mirror copy of that AU to another disk in the disk group. Primary AUs on a given disk can have their respective mirror AUs on one of several partner disks in the disk group. Each disk in a disk group has the same ratio of primary and mirror AUs. ASM ensures that a primary AU and its mirror copy never reside in the same failure group. If you define failure groups for your disk group, ASM can tolerate the simultaneous failure of multiple disks in a single failure group.
Note: For disk groups with external redundancy, failure groups are not used because disks in an external-redundancy disk group are presumed to be highly available.
Oracle Database 10g: Real Application Clusters 5-32

Disk Group Dynamic Rebalancing

Automatic online rebalancing whenever storage configuration changes
Only move data proportional to storage added
No need for manual I/O tuning
Online migration to new storage

5-33

Copyright 2005, Oracle. All rights reserved.

Disk Group Dynamic Rebalancing With ASM, the rebalance process is very easy and happens without any intervention from the DBA or system administrator. ASM automatically rebalances a disk group whenever disks are added or dropped. By using index techniques to spread AUs on the available disks, ASM does not need to restripe all of the data, but instead only needs to move an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks in a disk group. With I/O balanced whenever files are allocated and whenever the storage configuration changes, the DBA never needs to search for hot spots in a disk group and manually move data to restore a balanced I/O load. It is more efficient to add or drop multiple disks at the same time, so that they are rebalanced as a single operation. This avoids unnecessary movement of data. With this technique, it is easy to achieve online migration of your data. All you need to do is add the new disks in one operation and drop the old ones in one operation.
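A minimal sketch of such an online migration, assuming a disk group named dgroupA whose old disks were named dgroupA_0000 and dgroupA_0001 (all disk names and paths here are illustrative): adding the new disks and dropping the old ones in a single statement triggers only one rebalance.

ALTER DISKGROUP dgroupA
  ADD  DISK '/devices/new1', '/devices/new2'
  DROP DISK dgroupA_0000, dgroupA_0001
  REBALANCE POWER 5;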

Oracle Database 10g: Real Application Clusters 5-33

ASM Administration Page

5-34

Copyright 2005, Oracle. All rights reserved.

ASM Administration Page
The Administration tab of the ASM home page shows the enumeration of disk groups from V$ASM_DISKGROUP. On this page, you can create, edit, or drop a disk group. You can also perform disk group operations such as mount, dismount, and rebalance on a selected disk group. By clicking a particular disk group, you can view all existing disks pertaining to the disk group, and you can add or delete disks as well as check or resize disks. You can also access the Performance page, as well as Templates and Files, from the Disk Group page. You can define your templates and aliases.

Oracle Database 10g: Real Application Clusters 5-34

Create Disk Group Page

5-35

Copyright 2005, Oracle. All rights reserved.

Create Disk Group Page Click the Create button on the Administration page to open this page. You can input disk group name, redundancy mechanism, and the list of disks that you would like to include in the new disk group. The list of disks is obtained from the V$ASM_DISK fixed view. By default, only the disks with header status of CANDIDATE are shown in the list.

Oracle Database 10g: Real Application Clusters 5-35

ASM Disk Groups with EM in RAC

5-36

Copyright 2005, Oracle. All rights reserved.

ASM Disk Groups with EM in RAC When you add a new disk group from an ASM instance, this disk group is not automatically mounted by other ASM instances. If you want to mount the newly added disk group on all ASM instances, for example, by using SQL*Plus, then you need to manually mount the disk group on each ASM instance. However, if you are using Database Control to add a disk group, then the disk group definition includes a check box to indicate whether the disk group is automatically mounted to all the ASM clustered database instances. This is also true when you mount and dismount ASM disk groups by using Database Control where you can use a check box to indicate which instances mount or dismount the ASM disk group.

Oracle Database 10g: Real Application Clusters 5-36

Disk Group Performance Page and RAC

5-37

Copyright 2005, Oracle. All rights reserved.

Disk Group Performance Page and RAC
When you examine the default Disk Group Performance page, you can see instance-level performance details by clicking a performance characteristic such as Write Response Time or I/O Response Time. You can access the Disk Group Performance page from one of the Automatic Storage Management home pages by clicking the Administration tab. On the Administration Disk Groups page, click the appropriate disk group link in the Name column. When the corresponding Disk Group page is displayed, click the Performance tab.

Oracle Database 10g: Real Application Clusters 5-37

Create or Delete Disk Groups

CREATE DISKGROUP dgroupA NORMAL REDUNDANCY
  FAILGROUP controller1 DISK
    '/devices/A1' NAME diskA1 SIZE 120G FORCE,
    '/devices/A2',
    '/devices/A3'
  FAILGROUP controller2 DISK
    '/devices/B1',
    '/devices/B2',
    '/devices/B3';

DROP DISKGROUP dgroupA INCLUDING CONTENTS;

5-38

Copyright 2005, Oracle. All rights reserved.

Create or Delete Disk Groups
Assume that ASM disk discovery identified the following disks in the directory /devices: A1, A2, A3, A4, B1, B2, B3, and B4. Suppose that disks A1, A2, A3, and A4 are on a separate SCSI controller from disks B1, B2, B3, and B4. The first example illustrates how to set up a disk group called DGROUPA with two failure groups: CONTROLLER1 and CONTROLLER2. The example also uses NORMAL REDUNDANCY for the disk group. This is the default redundancy characteristic.

As shown by the example, you can provide an optional disk name. If not supplied, ASM creates a default name of the form <group>_n, where <group> is the disk group name and n is the disk number. Optionally, you can also provide the size for the disk. If not supplied, ASM attempts to determine the size of the disk. If the size cannot be determined, an error is returned. Over-specification of capacity also returns an error; under-specification of capacity limits what ASM uses. FORCE indicates that a specified disk should be added to the specified disk group even though the disk is already formatted as a member of an ASM disk group. Using the FORCE option for a disk that is not formatted as a member of an ASM disk group returns an error.

As shown by the second statement, you can delete a disk group along with all its files. To avoid accidental deletions, the INCLUDING CONTENTS option must be specified if the disk group still contains any files besides internal ASM metadata. The disk group must be mounted. After ensuring that none of the disk group files are open, the disks are removed from the disk group, and the header of each disk is overwritten to eliminate ASM formatting information.
Oracle Database 10g: Real Application Clusters 5-38

Adding Disks to Disk Groups


ALTER DISKGROUP dgroupA ADD DISK
  '/dev/rdsk/c0t4d0s2' NAME A5,
  '/dev/rdsk/c0t5d0s2' NAME A6,
  '/dev/rdsk/c0t6d0s2' NAME A7,
  '/dev/rdsk/c0t7d0s2' NAME A8;

ALTER DISKGROUP dgroupA ADD DISK '/devices/A*';

[Slide diagram: adding a disk triggers disk formatting, followed by disk group rebalancing.]

5-39

Copyright 2005, Oracle. All rights reserved.

Adding Disks to Disk Groups
The example in the slide shows how to add disks to a disk group. You execute an ALTER DISKGROUP ADD DISK command to add the disks. The first statement adds four new disks to the DGROUPA disk group.
The second statement demonstrates the interaction of discovery strings with existing disk group members. Consider the following configuration:
/devices/A1 is a member of disk group DGROUPA.
/devices/A2 is a member of disk group DGROUPA.
/devices/A3 is a member of disk group DGROUPA.
/devices/A4 is a candidate disk.
The second command adds A4 to the DGROUPA disk group. It ignores the other disks, even though they match the discovery string, because they are already part of the DGROUPA disk group. As shown by the diagram, when you add a disk to a disk group, the ASM instance ensures that the disk is addressable and usable. The disk is then formatted and rebalanced. The rebalancing process is time-consuming because it moves AUs from every file onto the new disk.
Note: Rebalancing does not block any database operations. The main impact of a rebalance is the I/O load it puts on the system: the higher the power of the rebalance, the more I/O load it generates, and thus the less I/O bandwidth is available for database I/Os.
Oracle Database 10g: Real Application Clusters 5-39

Miscellaneous Alter Commands


ALTER DISKGROUP dgroupA DROP DISK A5;

ALTER DISKGROUP dgroupA
  DROP DISK A6
  ADD FAILGROUP fred
    DISK '/dev/rdsk/c0t8d0s2' NAME A9;

ALTER DISKGROUP dgroupA UNDROP DISKS;

ALTER DISKGROUP dgroupB REBALANCE POWER 5;

ALTER DISKGROUP dgroupA DISMOUNT;

ALTER DISKGROUP dgroupA CHECK ALL;
5-40

Copyright 2005, Oracle. All rights reserved.

Miscellaneous Alter Commands The first statement shows how to remove one of the disks from disk group DGROUPA. The second statement shows how you can add and drop a disk with a single command. The big advantage in this case is that rebalancing is not started until the command completes. The third statement shows how to cancel the dropping of the disk from a previous example. The UNDROP command operates only on pending drops, not after drop completion. The fourth statement rebalances DGROUPB disk group if necessary. This command is generally not necessary because it is automatically done as disks are added, dropped, or resized. However, it is useful if you want to use the POWER clause to override the default and maximum speed defined by the ASM_POWER_LIMIT initialization parameter. You can change the power level of an ongoing rebalance operation by reentering the command with a new level. A power level of zero causes rebalancing to halt until the command is either implicitly or explicitly reinvoked. The fifth statement dismounts DGROUPA. The MOUNT and DISMOUNT options allow you to make one or more disk groups available or unavailable to the database instances.

Oracle Database 10g: Real Application Clusters 5-40

Miscellaneous Alter Commands (continued) The sixth statement shows how to verify the internal consistency of disk group metadata and to repair any error found. It is also possible to use the NOREPAIR clause if you only want to be alerted about errors. Although the example requests a check across all disks in the disk group, checking can be specified on a file or an individual disk. This command requires that the disk group be mounted. If any error is found, a summary error message is displayed and the details of the detected error are reported in the alert log. Note: Except for the last two statements, the examples trigger a disk group rebalancing.

Oracle Database 10g: Real Application Clusters 5-41

Monitoring Long-Running Operations Using V$ASM_OPERATION


Column        Description
GROUP_NUMBER  Disk group
OPERATION     Type of operation: REBAL
STATE         State of operation: QUEUED or RUNNING
POWER         Power requested for this operation
ACTUAL        Power allocated to this operation
SOFAR         Number of allocation units moved so far
EST_WORK      Estimated number of remaining allocation units
EST_RATE      Estimated number of allocation units moved per minute
EST_MINUTES   Estimated amount of time (in minutes) for operation termination
Copyright 2005, Oracle. All rights reserved.

5-42

Monitoring Long-Running Operations Using V$ASM_OPERATION The ALTER DISKGROUP DROP, RESIZE, and REBALANCE commands return before the operation is completed. To monitor progress of these long-running operations, you can query the V$ASM_OPERATION fixed view. This view is described in the table in the slide above. Note: A power limit can be set to zero, but it does not show up in V$ASM_OPERATION as an outstanding operation.
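For example, a simple monitoring query against this view (it returns rows only while a long-running operation such as a rebalance is queued or running):

SELECT group_number, operation, state, power, actual,
       sofar, est_work, est_rate, est_minutes
FROM   v$asm_operation;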

Oracle Database 10g: Real Application Clusters 5-42

ASM Administration

ASM instance

Disk groups and disks

Files


5-43

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 5-43

ASM Files
CREATE TABLESPACE sample DATAFILE '+dgroupA';
[Slide diagram: the database file created by this statement is automatically managed by ASM and automatically spread across the disks (1, 2, 3, 4) inside dgroupA; RMAN is mandatory for backups of ASM files.]


5-44 Copyright 2005, Oracle. All rights reserved.

ASM Files ASM files are Oracle database files stored in ASM disk groups. When a file is created, certain file attributes are permanently set. Among these are its protection policy and its striping policy. ASM files are Oracle-managed files. Any file that is created by ASM is automatically deleted when it is no longer needed. However, ASM files that are created by specifying a user alias are not considered Oracle-managed files. These files are not automatically deleted. When ASM creates a data file for a permanent tablespace (or a temp file for a temporary tablespace), the data file is set to auto-extensible with an unlimited maximum size and 100 MB default size. An AUTOEXTEND clause may override this default. All circumstances where a database must create a new file allow for the specification of a disk group for automatically generating a unique file name. With ASM, file operations are specified in terms of database objects. Administration of databases never requires knowing the name of a file, though the name of the file is exposed through some data dictionary views or the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command. Because each file in a disk group is physically spread across all disks in the disk group, a backup of a single disk is not useful. Database backups of ASM files must be made with RMAN. Note: ASM does not manage binaries, alert logs, trace files, password files, or Cluster Ready Services files.
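As an illustration only, a database whose files are stored in ASM is backed up with RMAN like any other database; the backup pieces themselves can also be written to a disk group, for example one configured as the flash recovery area:

$ rman target /
RMAN> BACKUP DATABASE;
RMAN> BACKUP ARCHIVELOG ALL;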
Oracle Database 10g: Real Application Clusters 5-44

ASM File Names

[Slide diagram: ASM file name forms — fully qualified, numeric, alias, alias with template, incomplete, and incomplete with template — grouped by the contexts in which they are used: reference, single-file creation, and multiple-file creation.]

5-45

Copyright 2005, Oracle. All rights reserved.

ASM File Names
ASM file names can take several forms:
Fully qualified
Numeric
Alias
Alias with template
Incomplete
Incomplete with template
The correct form to use for a particular situation depends on the context in which the file name is used. There are three such contexts:
When an existing file is being referenced
When a single file is about to be created
When multiple files are about to be created
As shown in the graphic, each context has possible choices for file name form.
Note: ASM files that are created by specifying a user alias are not considered Oracle Managed Files. These files are not automatically deleted.

Oracle Database 10g: Real Application Clusters 5-45

ASM File Name Syntax

1. +<group>/<dbname>/<file_type>/<tag>.<file#>.<incarnation#>
2. +<group>.<file#>.<incarnation#>
3. +<group>/<directory1>/.../<directoryn>/<file_name>
4. +<group>/<directory1>/.../<directoryn>/<file_name>(<template>)
5. +<group>
6. +<group>(<template>)

5-46

Copyright 2005, Oracle. All rights reserved.

ASM File Name Syntax
The examples in the slide give you the syntax that you can use to refer to ASM files:
1. Fully qualified ASM file names are used for referencing existing ASM files. They specify a disk group name, a database name, a file type, a type-specific tag, a file number, and an incarnation number. The fully qualified name is automatically generated for every ASM file when it is created. Even if a file is created via an alias, a fully qualified name is also created. Because ASM assigns the name as part of file creation, fully qualified names cannot be used for file creation. The names can be found in the same hierarchical directory structure as alias names. All the information in the name is automatically derived by ASM. Fully qualified ASM file names are also called system aliases, implying that these aliases are created and maintained by ASM. End users cannot modify them. A fully qualified name has the following form:
+<group>/<dbname>/<file type>/<tag>.<file>.<incarnation>
where:
- <group> is the disk group name.
- <dbname> is the database name to which the file belongs.
- <file type> is the Oracle file type (CONTROLFILE, DATAFILE, and so on).
- <tag> is type-specific information about the file (such as the tablespace name for a data file).
- <file>.<incarnation> is the file/incarnation number pair used to ensure uniqueness.
Oracle Database 10g: Real Application Clusters 5-46

ASM File Name Syntax (continued)
An example of a fully qualified ASM file name is the following:
+dgroupA/db1/controlfile/CF.257.8675309
2. Numeric ASM file names are used for referencing existing ASM files. They specify a disk group name, a file number, and an incarnation number. Because ASM assigns the file and incarnation numbers as part of creation, numeric ASM file names cannot be used for file creation. These names do not appear in the ASM directory hierarchy. They are derived from the fully qualified name. These names are never reported to you by ASM, but they can be used in any interface that needs the name of an existing file. The following is an example of a numeric ASM file name:
+dgroupA.257.8675309
3. Alias ASM file names are used both for referencing existing ASM files and for creating new ASM files. Alias names specify a disk group name, but instead of a file and incarnation number, they include a user-friendly name string. Alias ASM file names are distinguished from fully qualified or numeric names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends with a dotted pair of numbers. Alias file names are provided to allow administrators to reference ASM files with human-understandable names. Alias file names are implemented using a hierarchical directory structure, with the slash (/) separating name components. Name components are in UTF-8 format and may be up to 48 bytes in length, but must not contain a slash. This implies a 48-character limit in a single-byte language but a lower limit in a multibyte language, depending upon how many multibyte characters are present in the string. The total length of the alias file name, including all components and all separators, is limited to 256 bytes. The components of alias file names can have spaces between sets of characters, but a space should not be the first or last character of a component. Alias ASM file names are case-insensitive. Here are possible examples of ASM alias file names:
+dgroupA/myfiles/control_file1
+dgroupA/A rather LoNg and WeiRd name/for a file
Every ASM file is given a fully qualified name during file creation based upon its attributes. An administrator may create an additional alias for each file during file creation, or an alias can be created for an existing file using the ALTER DISKGROUP ADD ALIAS command. An alias ASM file name is normally used in the CONTROL_FILES initialization parameter. An administrator may create directory structures as needed to support whatever naming convention is desired, subject to the 256-byte limit.
4. Alias ASM file names with templates are used only for ASM file creation operations. They specify a disk group name, an alias name, and a file creation template name. (See the following slide in this lesson.) If an alias ASM file name with template is specified, and the alias portion refers to an existing file, then the template specification is ignored. An example of an alias ASM file name with template is the following:
+dgroupA/config1(spfile)
5. Incomplete ASM file names are used only for file creation operations. They consist of a disk group name only. ASM uses a default template for incomplete ASM file names, as defined by their file type. An example of an incomplete ASM file name is the following:
+dgroupA
6. Incomplete ASM file names with templates are used only for file creation operations. They consist of a disk group name followed by a template name. The template name determines the file creation attributes applied to the file. An example of an incomplete ASM file name with template is the following:
+dgroupA(datafile)
Oracle Database 10g: Real Application Clusters 5-47

ASM File Name Mapping


Oracle File Type                        <File Type>   <Tag>                 Default Template
Control files                           controlfile   CF/BCF                CONTROLFILE
Data files                              datafile      <ts_name>_<file#>     DATAFILE
Online logs                             online_log    log_<thread#>         ONLINELOG
Archive logs                            archive_log   parameter             ARCHIVELOG
Temp files                              temp          <ts_name>_<file#>     TEMPFILE
Data file backup pieces                 backupset     Client specified      BACKUPSET
Data file incremental backup pieces     backupset     Client specified      BACKUPSET
Archive log backup pieces               backupset     Client specified      BACKUPSET
Data file copies                        datafile      <ts_name>_<file#>     DATAFILE
Initialization parameters               init          spfile                PARAMETERFILE
Broker configurations                   drc           drc                   DATAGUARDCONFIG
Flashback logs                          rlog          <thread#>_<log#>      FLASHBACK
Change tracking bitmaps                 CTB           BITMAP                CHANGETRACKING
Auto backup                             AutoBackup    Client specified      AUTOBACKUP
Data Pump dump sets                     Dumpset       dump                  DUMPSET
Cross-platform converted data files                                         XTRANSPORT

5-48 Copyright 2005, Oracle. All rights reserved.

ASM File Name Mapping ASM supports most file types required by the database. However, certain classes of file types, such as operating system executables, are not supported by ASM. Each file type is associated with a default template name. This table specifies ASM-supported file types with their corresponding naming conventions. ASM applies attributes to the files that it creates as specified by the corresponding system default template.

Oracle Database 10g: Real Application Clusters 5-48

ASM File Templates


System Template   External       Normal          High            Striped
CONTROLFILE       unprotected    2-way mirror    3-way mirror    fine
DATAFILE          unprotected    2-way mirror    3-way mirror    coarse
ONLINELOG         unprotected    2-way mirror    3-way mirror    fine
ARCHIVELOG        unprotected    2-way mirror    3-way mirror    coarse
TEMPFILE          unprotected    2-way mirror    3-way mirror    coarse
BACKUPSET         unprotected    2-way mirror    3-way mirror    coarse
XTRANSPORT        unprotected    2-way mirror    3-way mirror    coarse
PARAMETERFILE     unprotected    2-way mirror    3-way mirror    coarse
DATAGUARDCONFIG   unprotected    2-way mirror    3-way mirror    coarse
FLASHBACK         unprotected    2-way mirror    3-way mirror    fine
CHANGETRACKING    unprotected    2-way mirror    3-way mirror    coarse
AUTOBACKUP        unprotected    2-way mirror    3-way mirror    coarse
DUMPSET           unprotected    2-way mirror    3-way mirror    coarse

5-49

Copyright 2005, Oracle. All rights reserved.

ASM File Templates ASM file templates are named collections of attributes applied to files during file creation. Templates simplify file creation by mapping complex file-attribute specifications on to a single name. Templates, while applied to files, are associated with a disk group. When a disk group is created, ASM establishes a set of initial system default templates associated with that disk group. These templates contain the default attributes for the various Oracle database file types. Attributes of the default templates can be changed by the administrator. Additionally, administrators may add their own unique templates as required. This enables you to specify the appropriate file creation attributes as a template for less sophisticated administrators to use. System default templates cannot be deleted. If you need to change an ASM file attribute after the file has been created, then the file must be copied via RMAN into a new file with the new attributes. This is the only method of changing file attributes. Depending on the defined disk group redundancy characteristics, the system templates are created with the attributes shown. When defining or altering a template, you can specify whether the files must be mirrored or not. You can also specify if the files created under that template are COARSE or FINE striped. Note: The redundancy and striping attributes used for ASM metadata files are predetermined by ASM and are not changeable by the template mechanism.
Oracle Database 10g: Real Application Clusters 5-49

Template and Alias: Examples


ALTER DISKGROUP dgroupA ADD TEMPLATE reliable ATTRIBUTES (MIRROR);

ALTER DISKGROUP dgroupA DROP TEMPLATE reliable;

ALTER DISKGROUP dgroupA DROP FILE '+dgroupA.268.8675309';

ALTER DISKGROUP dgroupA ADD DIRECTORY '+dgroupA/mydir';

ALTER DISKGROUP dgroupA ADD ALIAS '+dgroupA/mydir/datafile.dbf'
  FOR '+dgroupA.274.38745';

ALTER DISKGROUP dgroupA DROP ALIAS '+dgroupA/mydir/datafile.dbf';

5-50

Copyright 2005, Oracle. All rights reserved.

Template and Alias: Examples
The first statement shows how to add a new template to a disk group. In this example, the RELIABLE template is created in disk group DGROUPA so that files created with it are mirrored. The second statement shows how you can remove the previously defined template. The third statement shows how a file might be removed from a disk group.

The fourth statement creates a user directory called MYDIR. The parent directory must exist before you attempt to create a subdirectory or alias in that directory. Then, the example creates an alias for the +dgroupA.274.38745 file. The same code example shows you how to delete the alias. You can also drop a directory by using the ALTER DISKGROUP DROP DIRECTORY command, and you can rename an alias or a directory by using the ALTER DISKGROUP RENAME command.
Note: Files can be dropped only if they are not in use.
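As a brief, illustrative follow-on (the tablespace name and alias are hypothetical), the new directory and template can then be used together when creating a file, by means of an alias ASM file name with template:

CREATE TABLESPACE hr_data
  DATAFILE '+dgroupA/mydir/hr_data01.dbf(reliable)' SIZE 100M;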

Oracle Database 10g: Real Application Clusters 5-50

Retrieving Aliases

SELECT reference_index INTO :alias_id
  FROM V$ASM_ALIAS
 WHERE name = '+dgroupA';

SELECT reference_index INTO :alias_id
  FROM V$ASM_ALIAS
 WHERE parent_index = :alias_id AND name = 'mydir';

SELECT name
  FROM V$ASM_ALIAS
 WHERE parent_index = :alias_id;

5-51

Copyright 2005, Oracle. All rights reserved.

Retrieving Aliases Assume that you want to retrieve all aliases that are defined inside the previously defined directory +dgroupA/mydir. You can traverse the directory tree, as shown in the example. The REFERENCE_INDEX number can be used only for entries that are directory entries in the alias directory. For nondirectory entries, the reference index is set to zero. The example retrieves REFERENCE_INDEX numbers for each subdirectory and uses the last REFERENCE_INDEX as the PARENT_INDEX of needed aliases.

Oracle Database 10g: Real Application Clusters 5-51

SQL Commands and File Naming

CREATE CONTROLFILE DATABASE sample RESETLOGS ARCHIVELOG
  MAXLOGFILES 5
  MAXLOGHISTORY 100
  MAXDATAFILES 10
  MAXINSTANCES 2
  LOGFILE
    GROUP 1 ('+dgroupA','+dgroupB') SIZE 100M,
    GROUP 2 ('+dgroupA','+dgroupB') SIZE 100M
  DATAFILE
    '+dgroupA.261.12345678' SIZE 100M,
    '+dgroupA.262.12345678' SIZE 100M;

5-52

Copyright 2005, Oracle. All rights reserved.

SQL Commands and File Naming
ASM file names are accepted in SQL commands wherever file names are legal. For most commands, there is an alternate method for identifying the file (for example, a file number) so that the name need not be entered. Because one of the principal design objectives of ASM is to eliminate the need for specifying file names, you are encouraged to avoid specifying ASM file names whenever possible. However, certain commands must take file names as parameters. For example, data files and log files stored in an ASM disk group should be given to the CREATE CONTROLFILE command using the file-reference context form. However, the use of the RESETLOGS option requires the file-creation context form for the specification of the log files.

Oracle Database 10g: Real Application Clusters 5-52

DBCA and Storage Options

5-53

Copyright 2005, Oracle. All rights reserved.

DBCA and Storage Options
In order to support ASM as a storage option, a new page is added to the DBCA. This page allows you to choose among the storage options: file system, ASM, or raw devices.

Oracle Database 10g: Real Application Clusters 5-53

Database Instance Parameter Changes

INSTANCE_TYPE = RDBMS
LOG_ARCHIVE_FORMAT
DB_BLOCK_SIZE
DB_CREATE_ONLINE_LOG_DEST_n
DB_CREATE_FILE_DEST
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST

5-54

Copyright 2005, Oracle. All rights reserved.

Database Instance Parameter Changes
INSTANCE_TYPE defaults to RDBMS and specifies that this instance is an RDBMS instance.
LOG_ARCHIVE_FORMAT is ignored if LOG_ARCHIVE_DEST is set to an incomplete ASM file name, such as +dGroupA. If LOG_ARCHIVE_DEST is set to an ASM directory (for example, +dGroupA/myarchlogdir/), then LOG_ARCHIVE_FORMAT is used and the files are non-OMF. Unique file names for archived logs are automatically created by the Oracle database.
DB_BLOCK_SIZE must be set to one of the standard block sizes (2 KB, 4 KB, 8 KB, 16 KB, or 32 KB). Databases using nonstandard block sizes, such as 6 KB, are not supported.
The following parameters accept the multiple-file creation context form of ASM file names as a destination:
DB_CREATE_ONLINE_LOG_DEST_n
DB_CREATE_FILE_DEST
DB_RECOVERY_FILE_DEST
CONTROL_FILES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST
STANDBY_ARCHIVE_DEST
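A minimal sketch of pointing a database instance at ASM storage through these parameters, assuming an SPFILE is in use and that disk groups named +DATA and +RECO exist (both names are illustrative):

ALTER SYSTEM SET db_create_file_dest        = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size = 10G     SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest      = '+RECO' SCOPE=BOTH;

With these settings, new data files and recovery-related files are created as Oracle-managed files inside the specified disk groups.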

Oracle Database 10g: Real Application Clusters 5-54

Database Instance Parameter Changes


Add at least 600 KB to LARGE_POOL_SIZE.
Add the following to SHARED_POOL_SIZE:
  (DB_SPACE/100 + 2) MB for external redundancy, OR
  (DB_SPACE/50  + 4) MB for normal redundancy, OR
  (DB_SPACE/33  + 6) MB for high redundancy

SELECT d+l+t DB_SPACE
FROM  (SELECT SUM(bytes)/(1024*1024*1024) d FROM v$datafile),
      (SELECT SUM(bytes)/(1024*1024*1024) l FROM v$logfile a, v$log b
       WHERE a.group#=b.group#),
      (SELECT SUM(bytes)/(1024*1024*1024) t FROM v$tempfile
       WHERE status='ONLINE');

Add at least 16 to PROCESSES.

5-55

Copyright 2005, Oracle. All rights reserved.

Database Instance Parameter Changes (continued)
The SGA parameters for a database instance need slight modification to support ASM AU maps and other ASM information. The following are guidelines for SGA sizing on the database instance:
Add at least 600 KB to your large pool, and make sure that its size is at least 8 MB.
Additional memory is required to store AU maps in the shared pool. Use the result of the query in the slide to obtain the current database storage size (DB_SPACE), in gigabytes, that is either already on ASM or will be stored in ASM. Then determine the redundancy type that is used (or will be used), and add one of the following values to the shared pool size:
- For disk groups using external redundancy: every 100 GB of space needs 1 MB of extra shared pool plus a fixed amount of 2 MB of shared pool.
- For disk groups using normal redundancy: every 50 GB of space needs 1 MB of extra shared pool plus a fixed amount of 4 MB of shared pool.
- For disk groups using high redundancy: every 33 GB of space needs 1 MB of extra shared pool plus a fixed amount of 6 MB of shared pool.
Add at least 16 to the value of the PROCESSES initialization parameter.
Note: If the Automatic Memory Management (AMM) feature is being used, then this sizing data can be treated as informational only, or as supplemental data in gauging best values for the SGA. Oracle Corporation highly recommends using the AMM feature.
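For example, under these guidelines, a database with roughly 200 GB of files stored in normal-redundancy disk groups (a purely illustrative figure) would add about 200/50 + 4 = 8 MB to the shared pool, in addition to the 600 KB of extra large pool and the 16 extra processes.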
Oracle Database 10g: Real Application Clusters 5-55

Summary

In this lesson, you should have learned how to:
Use the DBCA to create an ASM instance
Start up and shut down ASM instances
Create and maintain ASM disk groups
Create database files using ASM

5-56

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 5-56

Practice 5 Overview

This practice covers the following topics:
Installing ASMLib
Using the DBCA to create ASM instances
Discovering ASM instances in Database Control
Creating new ASM disk groups using Database Control
Generating automatic disk group rebalancing operations

5-57

Copyright 2005, Oracle. All rights reserved.

Oracle Database 10g: Real Application Clusters 5-57
