
Welcome to:

AU54 - HACMP System Administration I:


Planning and Implementation

© Copyright IBM Corporation 2004


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Course Objectives
After completing this unit, you should be able to:
Explain what High Availability is
Outline the capabilities of HACMP for AIX
Design and plan a highly available cluster
Correctly configure networks and shared disks for a highly
available cluster
Install and configure HACMP in the following modes of operation:
Primary node with an idle standby node
Two applications on two nodes in a Mutual Takeover
configuration
Single application running concurrently on two nodes (optional)
Perform basic system administration of an HACMP cluster
Perform basic customization of an HACMP cluster
Carry out basic problem determination and recovery

© Copyright IBM Corporation 2004


Course Agenda
Day 1

09:00 - 09:30 Course Introduction


09:30 - 10:20 Unit 1 - Introduction to High Availability
10:20 - 10:30 Break
10:30 - 12:00 Unit 2 - Introduction to HACMP for AIX
12:00 - 13:00 Lunch Break
13:00 - 14:00 Unit 3 Topics 1,2 - Shared Storage Concepts,
Technologies
14:00 - 14:10 Break
14:10 - 14:50 Unit 3 Topic 3 - Shared Storage From AIX
14:50 - 16:30 Exercise 1: Cluster Design
Exercise 2: Planning Storage
Exercise 3: Setup Storage

© Copyright IBM Corporation 2004


Course Agenda
Day 2

09:00 - 10:00 Unit 4 Topics 1,2 - Networking Considerations


10:00 - 10:10 Break
10:10 - 11:10 Unit 4 Topics 3,4 - Networking Considerations
11:10 - 11:20 Break
11:20 - 12:00 Unit 5 Topic 1 - HACMP Installation
12:00 - 13:00 Lunch Break
13:00 - 16:30 Exercise 4: Network Planning, Setup and Test
Exercise 5: HACMP Software Installation
Exercise 6: Client Setup

© Copyright IBM Corporation 2004


Course Agenda
Day 3

09:00 - 10:00 Unit 5 Topics 2,3 - HACMP Architecture


10:00 - 10:10 Break
10:10 - 11:00 Unit 6 Topic 1 - Cluster Configuration
11:00 - 11:10 Break
11:10 - 12:00 Unit 6 Topic 2 - Other Configuration Scenarios
12:00 - 13:00 Lunch Break
13:00 - 16:30 Exercise 7: Cluster Configuration
Exercise 8: Application Integration
Exercise 9: Mutual Takeover

© Copyright IBM Corporation 2004


Course Agenda
Day 4

09:00 - 10:00 Unit 7 - Cluster Single Point of Control


10:00 - 10:10 Break
10:10 - 11:00 Unit 8 - Dynamic Reconfiguration
11:00 - 11:10 Break
11:10 - 12:00 Unit 9 - Integrating NFS Into HACMP
12:00 - 13:00 Lunch Break
13:00 - 16:30 Exercise 10: HACMP Extended Features
Exercise 11: Resource Group Options
Exercise 12: Network File System (NFS)

© Copyright IBM Corporation 2004


Course Agenda
Day 5

09:00 - 09:50 Unit 10 - Cluster Customization


09:50 - 10:00 Break
10:00 - 11:00 Unit 11 - Problem Determination and Recovery
11:00 - 11:10 Break
11:10 - 12:00 Unit 12 - Documenting Your System
12:00 - 13:00 Lunch Break
13:00 - 13:45 Exercise 13: Error Notification
13:45 - 16:30 Open Lab

© Copyright IBM Corporation 2004


Lab Exercises
Points to note:

Work as a team and split the workload.


Manuals are available online.
HACMP software has been loaded and may have already been
installed.
TCP/IP and LVM have not been configured.
Each lab must be completed successfully before continuing on to
the next lab, as each lab is a prerequisite for the next one.
Any questions, ask your instructor.

© Copyright IBM Corporation 2004


Course Summary
Having completed this unit, you should understand that:
There is ample time for the lab exercises.
Thorough design, planning and teamwork are essential.
Prior AIX, LVM, Storage Management and TCP/IP experience is
assumed and required.

© Copyright IBM Corporation 2004


Welcome to:
Introduction to High-Availability

© Copyright IBM Corporation 2004


Unit Objectives
After completing this unit, you should be able to:
Understand what high availability is
Understand why you might need high availability
Outline the various options for implementing high availability
Compare and contrast the high availability options
State the benefits of using highly available clusters
Understand the key considerations when designing and
implementing a high availability cluster
Be familiar with the basics of risk analysis

© Copyright IBM Corporation 2004


So, What Is High Availability?
High Availability is...

The masking or elimination of both planned and unplanned downtime.


The elimination of single points of failure (SPOFs).
Fault resilience, but NOT fault tolerance.

[Diagram: a client reaches the cluster over a WAN; on failure, the workload falls over from the production node to the standby node.]

© Copyright IBM Corporation 2004


So Why Is Planned Downtime Important?

Planned downtime:
Hardware upgrades
Repairs
Software updates
Backups
Testing
Development

Unplanned downtime:
Administrator Error
Application failure
Hardware faults
Environmental Disasters

[Chart: planned downtime (85%), other unplanned downtime (14%), hardware failure (1%).]

High availability solutions should reduce both planned and unplanned downtime.
© Copyright IBM Corporation 2004
Continuous Availability Is the Goal
Elimination of Downtime

Continuous Availability =
Continuous Operations (masking or elimination of planned downtime) +
High Availability (masking or elimination of unplanned downtime)

© Copyright IBM Corporation 2004


Eliminating Single Points of Failure
Cluster Object       Eliminated as a single point of failure by...
Node                 Using multiple nodes
Power Source         Using multiple circuits or uninterruptible power supplies
Network adapter      Using redundant network adapters
Network              Using multiple networks to connect nodes
TCP/IP Subsystem     Using serial networks to connect adjoining nodes and clients
Disk adapter         Using redundant disk adapters
Disk                 Using redundant hardware and disk mirroring and/or striping
Application          Assigning a node for application takeover; configuring an application monitor

A fundamental design goal of (successful) cluster design is the elimination of single points of failure (SPOFs).

© Copyright IBM Corporation 2004


Availability - from Simple to Complex

[Diagram: availability options from simple to complex: Stand-alone, Enhanced, High Availability Cluster, Fault Tolerant.]

© Copyright IBM Corporation 2004


The Stand-alone System
The stand-alone system may offer limited availability benefits:

Journaled Filesystem
Dynamic CPU Deallocation
Service Processor
Redundant Power
Redundant Cooling
ECC Memory
Hot Swap Adapters
Dynamic Kernel
Disk mirroring

Example single points of failure:


Disk Adapter/Data Paths
No Hot Swap Storage
Power for Storage Arrays
Cooling for Storage Arrays
Hot Spare Storage
Node/Operating System
Network
Network Adapter
Application
Site Failure (SAN distance)
Site Failure (via mirroring)
© Copyright IBM Corporation 2004
The Enhanced System
The enhanced system may offer increased availability benefits:
Journaled Filesystem
Dynamic CPU Deallocation
Service Processor
Redundant Power
Redundant Cooling
ECC Memory
Hot Swap Adapters
Dynamic Kernel
Disk Mirroring
Redundant Disk adapters/multiple paths
Hot Swap Storage
Redundant Power for Storage Arrays
Redundant Cooling for Storage Arrays
Hot Spare Storage

Example single points of failure:


Node/Operating System
Network Adapter
Network
Application
Site Failure (SAN distance)
Site Failure (via mirroring)
© Copyright IBM Corporation 2004
High-Availability Clusters (HACMP)
Clustering technologies offer high-availability:

Journaled Filesystem
Dynamic CPU Deallocation
Service Processor
Redundant Power
Redundant Cooling
ECC Memory
Hot Swap Adapters
Dynamic Kernel
Redundant Data Paths
Data Mirroring
Hot Swap Storage
Redundant Power for Storage Arrays
Redundant Cooling for Storage Arrays
Hot Spare Storage
Dual Disk Adapters
Redundant nodes (operating system)
Redundant Network Adapters
Redundant Networks
Application Monitoring
Site Failure (SAN distance)

Example single points of failure:


Site Failure (via mirroring)
© Copyright IBM Corporation 2004
Fault-Tolerant Computing
Fault-tolerant solutions should not fail:

Lock Step CPUs


Hardened Operating System
Hot Swap Storage
Continuous Restart

Example single points of failure:

Site Failure (SAN distance)


Site Failure (via mirroring)

© Copyright IBM Corporation 2004


Availability Solutions
Solutions range from simple to complex: stand-alone, enhanced stand-alone, high availability clusters, and fault-tolerant computers.

Stand-alone
Availability benefits: Journaled Filesystem, Dynamic CPU Deallocation, Service Processor, Redundant Power, Redundant Cooling, ECC Memory, Hot Swap Adapters, Dynamic Kernel
Downtime: Couple of days
Data availability: Good as your last full backup
Relative cost*: 1

Enhanced stand-alone
Availability benefits: Redundant Data Paths, Data Mirroring, Hot Swap Storage, Redundant Power for Storage Arrays, Redundant Cooling for Storage Arrays, Hot Spare Storage
Downtime: Couple of hours
Data availability: Last transaction
Relative cost*: 1.5

High Availability Clusters
Availability benefits: Redundant Servers, Redundant Networks, Redundant Network Adapters, Heartbeat Monitoring, Failure Detection, Failure Diagnosis, Automated Fallover, Automated Reintegration
Downtime: Depends, but typically 3 mins
Data availability: Last transaction
Relative cost*: 2-3

Fault-tolerant Computers
Availability benefits: Lock Step CPUs, Hardened Operating System, Redundant Memory, Continuous Restart
Downtime: In theory, none!
Data availability: No loss of data
Relative cost*: 10+

* All other parameters being equal.


© Copyright IBM Corporation 2004
So, What About Site Failure?
Near distance (using SAN): supported by HACMP 5.2
Far distance (requires data mirroring): invest in a geographic clustering solution (for example, HACMP XD*)
Distance unlimited
Data replication across a geography
Application, disk and network independent
Automated site failover and reintegration
A single cluster across two sites

[Diagram: data replication between two sites, Toronto and London.]

*The HACMP XD feature of HACMP contains IBM's HAGEO product and PPRC support.
© Copyright IBM Corporation 2004
Why Might I Need High Availability?
60% of all large companies now operate round the clock (7x24)

Losses on failure:
$330,000 US per hour (industry average)
Peak losses: $130,000 US per minute (telephone network)
Loss of customer loyalty
Loss of customer confidence

And, if there is no disaster recovery:
50% of affected companies will never reopen
90% of affected companies are out of business in less than two years

Note: High Availability is NOT a Disaster Recovery solution.
© Copyright IBM Corporation 2004
Benefits of High-Availability Solutions
High-availability solutions offer the following benefits:

Standard components (no specialized hardware)


Can be built from existing hardware (no need to invest in new kit)
Work with just about any application
Work with wide range of disk and network types
No specialized operating system or microcode
Excellent availability at low cost

[Diagram: standard components combined produce a high availability solution.]


© Copyright IBM Corporation 2004
Other Considerations for High-Availability
High-availability solutions require the following:
Thorough design and detailed planning
Elimination of single points of failure
Selection of appropriate hardware
Correct implementation
Disciplined system administration practices
Documented operational procedures
Comprehensive testing
[Diagram: high availability, continuous availability and continuous operation rest on people, systems management, data, networking, hardware, software and the environment.]
© Copyright IBM Corporation 2004
A Philosophical View of High Availability
The goal of an HA cluster is to make a service highly available.
Users aren't interested in highly available hardware.
Users aren't even interested in highly available software.
Users are interested in the availability of services.
Therefore, use the hardware and the software to make the services
highly available.
Cluster design decisions should be judged on the basis of whether
or not they:
Contribute to availability (for example, eliminate a SPOF)
Detract from availability (for example, gratuitous complexity)
Since it is impractical if not impossible to truly eliminate all SPOFs,
be prepared to use risk analysis techniques to determine which
SPOFs are tolerated and which must be eliminated

© Copyright IBM Corporation 2004


Classic Risk Analysis
1. Identify relevant policies
What existing risk tolerance policies are available?
2. Study the current environment
Understand what strengths (for example, server room is on a properly sized
UPS) and weaknesses (for example, no disk mirroring) exist today
3. Perform requirements analysis
Just how much availability is required?
What is the acceptable likelihood of a long outage?
4. Hypothesize vulnerabilities
What can possibly go wrong?
5. Identify and quantify risks
The statistical probability of something going wrong over the life of the
project (or the likely number of times something will go wrong over the life of
the project) multiplied by the cost of an occurrence
6. Evaluate countermeasures
What does it take to reduce the risk (by reducing the likelihood
or consequences of an occurrence) to an acceptable level?
7. Make decisions, create a budget and plan the cluster
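As a worked example with hypothetical numbers (the same arithmetic as the checkpoint question at the end of this unit): a vulnerability expected to occur twice per year over a two-year planning life, at $10,000 per occurrence, represents an expected loss of 2 x 2 x $10,000 = $40,000. A $25,000 countermeasure costs less than the expected loss, so eliminating the vulnerability is justified; a $50,000 countermeasure would not be, and the risk might instead be tolerated.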
© Copyright IBM Corporation 2004
What Do We Plan to Achieve This Week?
Your mission this week is to build a two-node highly available cluster
using two previously separate pSeries systems, each of which has an
application which needs to be made highly available.

A A

B B

© Copyright IBM Corporation 2004


Checkpoint
1. Which of the following is a characteristic of high availability?
a. High availability always requires specially designed hardware components.
b. High availability solutions always require manual intervention to ensure recovery following
failover.
c. High availability solutions never require customization.
d. High availability solutions offer excellent price performance when compared with Fault
Tolerant solutions.
2. True or False?
High availability solutions never fail.
3. True or False?
A thorough design and detailed planning is required for all high availability solutions.
4. True or False?
The cluster shown on the foil titled "What We Plan to Achieve This Week" has no obvious
single points of failure.
5. A proposed cluster with a two year life (for planning purposes) has a
vulnerability which is likely to occur twice per year at a cost of $10,000 per
occurrence. It costs $25,000 in additional hardware costs to eliminate the
vulnerability. Should the vulnerability be eliminated?
a. yes
b. no
© Copyright IBM Corporation 2004
Checkpoint Answers
1. Which of the following is a characteristic of high availability?
a. High availability always requires specially designed hardware components.
b. High availability solutions always require manual intervention to ensure recovery following
failover.
c. High availability solutions never require customization.
d. High availability solutions offer excellent price performance when compared with Fault
Tolerant solutions.
2. True or False?
High availability solutions never fail.
3. True or False?
A thorough design and detailed planning is required for all high availability solutions.
4. True or False? (the local area network is a SPOF)
The cluster shown on the foil titled "What Will We Achieve This Week" has no obvious
single points of failure.
5. A proposed cluster with a two year life (for planning purposes) has a
vulnerability which is likely to occur twice per year at a cost of $10,000 per
occurrence. It will cost $25,000 in additional hardware costs to eliminate
the vulnerability. Should the vulnerability be eliminated?
a. yes ($25,000 is less than $10,000 times four)
b. no

© Copyright IBM Corporation 2004


Unit Summary
Having completed this unit, you should be able to:
Understand what high availability is
Understand why you might need high availability
Outline the various options for implementing high availability
Compare and contrast the high-availability options
State the benefits of using highly available clusters
Understand the key considerations when designing and
implementing a high-availability cluster
Be familiar with the basics of risk analysis

© Copyright IBM Corporation 2004


Welcome to:
Introduction to HACMP for AIX

© Copyright IBM Corporation 2004


Unit Objectives
After completing this unit, you should be able to:
Outline the features and benefits of HACMP for AIX
Describe the physical and logical components of an HACMP
cluster
Understand how HACMP operates in typical configurations
Describe the evolution of the HACMP 5.2 product family

© Copyright IBM Corporation 2004


HACMP Basics
After completing this topic, you should be able to:
Outline the features and benefits of HACMP for AIX
Describe the HACMP concepts of topology and resources
Give examples of topology components and resources
Provide a brief description of the software and hardware
components of a typical HACMP cluster

© Copyright IBM Corporation 2004


IBM's HA Solution for AIX
High Availability Cluster Multiprocessing
Based on cluster technology
Provides two environments:
High Availability: the process of ensuring an application is
available for use through the use of serially accessible shared
data and duplicated resources
Cluster MultiProcessing: concurrent access to shared data

© Copyright IBM Corporation 2004


A Highly Available Cluster

[Diagram: a resource group containing an application falls over from one node to another.]

Clusters based upon HACMP 5.2 can contain between 2 and 32 nodes.

A cluster comprises physical components (topology) and logical components


(resources).
© Copyright IBM Corporation 2004
HACMP's Topology Components
[Diagram: two nodes in a cluster, connected by an IP network and a non-IP network through communication interfaces and communication devices.]

The topology components consist of a cluster, nodes, and the technology which connects them together (IP networks, non-IP networks, communication interfaces and communication devices).
© Copyright IBM Corporation 2004
HACMP's Resource Components

Resources

[Diagram: examples of resources: an application server (application start and stop scripts), volume groups, filesystems and service IP addresses.]

Resource Group
Node list
Resource Group Policies:
startup
fallover
fallback

© Copyright IBM Corporation 2004
Solution Components

© Copyright IBM Corporation 2004


AIX's Contribution to High Availability

Object Data Manager (ODM)


System Resource Controller (SRC)
Logical Volume Manager (LVM)
Journaled File System (JFS)
Online JFS Backup (splitlvcopy)
Work Load Manager (WLM)
Quality of Service (QoS)
External Boot
Software Installation Management (installp)
Reliable Scalable Cluster Technology (RSCT)

© Copyright IBM Corporation 2004


Hardware Prerequisites
IBM eServer pSeries

[Diagram: examples of supported systems, from entry deskside and entry rack models (such as the pSeries 610, 620, 630, 640, 520, 550 and 570) through midrange models (such as the pSeries 650, 655, 660 and 670) to high-end models (such as the pSeries 680 and 690).]

All pSeries systems work with HACMP in any combination of nodes within a cluster. However, a minimum of four free adapter slots is recommended.
© Copyright IBM Corporation 2004
Supported Storage Environments

[Diagram: examples of supported shared storage attachments: SSA loops, FAStT over a SAN, ESS with Fibre Channel or SCSI attachment, and a twin-tailed SCSI bus (maximum 25m) with shared disk modules.]

Most HACMP clusters require shared storage. Disk technologies which support multihost connections include: SCSI, SSA and FC-AL (with or without RAID).
© Copyright IBM Corporation 2004
Supported Networking Environments
[Diagram: examples of supported networks: Ethernet, Token Ring and FDDI segments carrying server and client traffic, plus non-IP connections between servers.]

Supported IP network interfaces include: Ethernet (10 Mb, 100 Mb, 1 Gb), EtherChannel, Token-Ring, FDDI, ATM, Fibre Channel and the SP switches.
Supported non-IP network devices include: RS232/422, SSA adapters (Target Mode SSA) and Enhanced Concurrent Volume Group disks.

© Copyright IBM Corporation 2004


Some Clusters Do Not Have Shared Disks

Clusters providing firewall services do not usually have shared disks.


Can you think of any other examples?
© Copyright IBM Corporation 2004
So What Is HACMP Really?

[Diagram: the cluster manager (CLSTRMGR) at the core, working with RSCT/RMC, clsmuxpd, clverify, C-SPOC, DARE, rgmove and the recovery programs/event scripts.]

HACMP is an application which:

Monitors cluster components,
Detects status changes,
Diagnoses and recovers from failures, and
Reintegrates previously failed components back into the cluster upon recovery.

HACMP also provides tools to create cluster-wide definitions and to synchronize them across the cluster.
http://www.ibm.com/servers/eserver/pseries/solutions/ha/
© Copyright IBM Corporation 2004
Additional Features of HACMP

[Diagram: examples of supplied utilities: configuration assistant, clstat, planning worksheets, SMIT via the Web, clinfo, SNMP support (clsmuxpd and the HACMP MIB), automated tests, Tivoli integration and application monitoring.]

HACMP is shipped with utilities to simplify configuration, monitoring,


customization and cluster administration.

© Copyright IBM Corporation 2004


Some Assembly Required

[Diagram: an application server (application start and stop scripts) and the HACMP core events, surrounded by customized pre-event and post-event scripts.]

HACMP is not an out of the box solution.


HACMP's flexibility allows for complex customization in order
to meet availability goals.

© Copyright IBM Corporation 2004


Overview of the Implementation Process
Plan for network, storage, and application
Eliminate single points of failure
Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (ip interfaces, /etc/hosts, non-ip networks and devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP ip and non-ip networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP
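A hedged sketch of how the AIX-side prerequisites might be verified before configuring HACMP (the application name, script paths and IP label patterns are illustrative only, not values from this course):

# Shared storage prepared?
lspv                              # shared disks visible, PVIDs consistent across nodes
lsvg -o                           # volume groups varied on where expected

# Networks prepared?
grep -E 'svc|boot' /etc/hosts     # every service and boot IP label resolvable on every node
netstat -in                       # interfaces configured with their boot addresses

# Application server scripts in place?
/usr/local/bin/app_start && /usr/local/bin/app_stop   # must run cleanly, with no operator input

# HACMP filesets installed?
lslpp -l "cluster.*"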
© Copyright IBM Corporation 2004
Hints to Get Started
[Example cluster diagram: an HACMP cluster for the ABC company user community.]

Node A (nodea):
IP labels (netmask 255.255.255.0): service database 192.168.9.3, boot nodeaboot 192.168.9.4, standby nodeastand 192.168.254.3
Resource group dbrg: application database, cascading A-B, priority 1,2, CWOF = yes
Shared volume group dbvg (RAID5, 100GB): hdisk3, hdisk4, hdisk5, hdisk6, hdisk7; major # = 51; JFS log = dblvlog; logical volumes dblv1 and dblv2; mount points /db and /dbdata

Node B (nodeb):
IP labels (netmask 255.255.255.0): service webserv 192.168.9.5, boot nodebboot 192.168.9.6, standby nodebstand 192.168.254.3
Resource group httprg: application http, cascading B-A, priority 2,1, CWOF = yes
Shared volume group httpvg (RAID1, 9.1GB): hdisk2, hdisk8; major # = 50; JFS log = httplvlog; logical volume httplv; mount point /http

Both nodes are connected by a public IP network, a tmssa network (a_tmssa /dev/tmssa1, b_tmssa /dev/tmssa2) and a serial network (a_tty, /dev/tty1 on each node).
Each node boots from its own mirrored (RAID1, 9.1GB) rootvg.

Hints:
Create a cluster diagram.
Use the planning sheets.
Try to reduce Single Points of Failure (SPOFs).
Always include a non-IP network.
Mirror across power and buses.
Document a test plan.
Be methodical.
Execute the test plan prior to placing the cluster into production!

© Copyright IBM Corporation 2004


Checkpoint
1. Which of the following are examples of topology components in
HACMP 5.2 (select all that apply)?
a. Node
b. Network
c. Service IP label
d. Hard disk drive
2. True or False?
All clusters require shared disk for storage of HACMP log files.
3. True or False?
All nodes in an HACMP cluster must have roughly equivalent performance characteristics.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. Which of the following are examples of topology components in
HACMP 5.2 (select all that apply)?
a. Node
b. Network
c. Service IP label*
d. Hard disk drive
2. True or False?
All clusters require shared disk for storage of HACMP log files.
3. True or False?
All nodes in an HACMP cluster must have roughly equivalent performance characteristics.

*Service IP labels were considered topology components in HACMP 4.5.

© Copyright IBM Corporation 2004


What Does HACMP Do?
After completing this topic, you should be able to:
Describe the failures which HACMP detects directly
Describe how HACMP deals with network adapter and node failures
Provide an overview of typical two node cluster configurations
Describe some of the considerations and limits of an HACMP cluster

© Copyright IBM Corporation 2004


Just What Does HACMP Do?

HACMP detects three failures by design and moves resource groups in response; other failures can be dealt with by customization:
A node failure
A network adapter failure
A network failure

HACMP can also monitor applications, processor load and available disk
capacity.

© Copyright IBM Corporation 2004


What Happens When Something Fails?

How the cluster responds to a failure depends on what has failed, what the
resource group's fallover policy is, and if there are any resource group
dependencies
The cluster's configuration is determined by the application's requirements
Typically another equivalent component takes over duties of failed
component (for example, another node takes over from a failed node)

© Copyright IBM Corporation 2004


What Happens When a Problem Is Fixed?

?
How the cluster responds to the recovery of a failed component depends on
what has recovered, what the resource group's fallback policy is, and what
resource group dependencies there are.
The cluster's configuration is determined by the application's requirements.
Cluster administrator may need to indicate/confirm that the fixed component
is approved for use.
© Copyright IBM Corporation 2004
Primary Node With a Standby Node

[Diagram: resource group A normally runs on node Halifax, with node Vancouver as the standby.]

If node Halifax fails, Vancouver takes over resource group A; when Halifax returns, the resource group falls back to Halifax.
If node Vancouver fails, node Halifax takes no action; when Vancouver returns, it simply rejoins as the standby.

Start policy = "Home node"
Fallover policy = "Fallover to next priority node"
Fallback policy = "Fallback to higher priority node"

Multiple layers of backup nodes are possible--fallover policy determines which node .
For example: primary -> secondary -> tertiary -> quaternary -> quinary -> senary -> septenary -> octonary ->
nonary -> denary ...

© Copyright IBM Corporation 2004


Minimizing Downtime

[Diagram: resource group A normally runs on node Halifax, with node Vancouver as the standby.]

If Halifax fails, Vancouver takes over the resources; if Vancouver fails, node Halifax takes no action.
When Halifax rejoins the cluster, the resource group stays on Vancouver (no fallback).

Start policy = "Home node"
Fallover policy = "Fallover to next priority node"
Fallback policy = "Never Fallback"
Downtime is minimized by avoiding fallbacks.
Multiple resource groups tend to gather together on the node which has been
up the longest. © Copyright IBM Corporation 2004
Two-Node Mutual Takeover Scenario

[Diagram: resource group A normally runs on node Halifax and resource group B on node Vancouver.]

If Halifax fails, Vancouver takes over the resources from Halifax; if Vancouver fails, Halifax takes over the resources from Vancouver.
When Halifax rejoins the cluster, Vancouver releases resource group A; when Vancouver rejoins the cluster, Halifax releases resource group B.

Start policy = "Home node"
Fallover policy = "Fallover to next priority node"
Fallback policy = "Fallback to higher priority node"

This is a very common HACMP cluster configuration.


© Copyright IBM Corporation 2004
Multiple Active Nodes
[Diagram: Halifax, Regina and Vancouver are all running application A, each using a separate service IP address.]

If nodes fail, the application remains continuously available as long as there are surviving nodes to run on.
Fixed nodes resume running their copy of the application.

Application must be designed to run simultaneously on


multiple nodes.
This has the potential for essentially zero downtime.
© Copyright IBM Corporation 2004
Fundamental HACMP Concepts
A cluster's topology is the cluster from a networking components perspective
A cluster's resources are the entities which are being made highly available (for
example, volume groups, filesystems, service IP labels, applications)
A resource group is a collection of resources which HACMP strives to keep
available as a single unit according to policies specified by the cluster designer /
implementer
A given resource may only appear in at most one resource group
A startup policy determines which node the resource group is activated on
A fallover is the movement of a resource group to another node in response to a
failure. A fallover policy controls the resource group's target node.
A fallback is the movement of a resource group to a more preferred
node, typically in response to the reintegration of the previously
failed node. A fallback policy determines when fallback occurs.
Customization is the process of augmenting HACMP, typically via implementing
scripts which HACMP invokes at appropriate times

© Copyright IBM Corporation 2004


Points to Ponder
Each resource group must be serviced by at least two nodes.
Each resource group can have different policies.
Resource groups can be migrated (manually or automatically) to
rebalance loads.
Fallover policy (that is, node ranking) can be static or dynamic.
Every cluster must have at least one IP network and one non-IP
network.
HACMP does not require that a cluster have any shared storage.
Any combination of supported nodes may appear in a cluster*.
Cluster may be split across two sites which may or may not require
making a copy of the data (HACMP XD is required for data copy).
HACMP/ES 4.5 and later can be customized to automatically restart
failed applications.
The applications must be capable of being restarted
automatically if high availability is to be achieved.

* Application performance requirements and other operational issues


almost certainly impose practical constraints on the size and complexity
of a given cluster.

© Copyright IBM Corporation 2004


HACMP 5.2 Limits
HACMP 5.2 supports a maximum of:
64 resource groups per cluster
32 nodes per cluster
256 IP addresses known to HACMP (for example, service and
boot IP labels)
16 physical IP networks known to HACMP

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
Resource groups may be manually moved from node to node.
2. True or False?
Resources may be shared between resource groups.
3. Which of the following statements are true (select all that apply):
a. A resource group always returns to the primary node when the primary node recovers from
a failure.
b. All resource groups have a primary node.
c. Except during fallovers and fallbacks, resources are always dedicated to a particular node
at any given time.
d. The priority ordering of nodes within a resource group can be changed dynamically.
4. True or False?
All nodes associated with a resource group must have roughly equivalent performance
characteristics.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Resource groups may be manually moved from node to node.
2. True or False?
Resources may be shared between resource groups.
3. Which of the following statements are true (select all that apply):
a. FALSE A resource group always returns to the primary node when the primary node
recovers from a failure.
b. FALSE All resource groups have a primary node.
c. FALSE Except during fallovers and fallbacks, resources are always dedicated to a
particular node at any given time.
d. TRUE The priority ordering of nodes within a resource group can be changed dynamically.
4. True or False?
All nodes associated with a resource group must have roughly equivalent performance
characteristics.

© Copyright IBM Corporation 2004


HACMP Packaging and Positioning
After completing this topic, you should be able to:
Describe the packaging of the HACMP 4.x and 5.x family of
products.
Give examples of applications for which HACMP is not appropriate.
Describe the evolution of the HACMP product.
Describe where to find information on HACMP.

© Copyright IBM Corporation 2004


HACMP 4.x Product Family
A family of products providing a full spectrum of high availability
solutions:
HACMP "classic" - fault resilience
HACMP ES - fault resilience and 32-way scalability
Concurrent Resource Manager - concurrent clusters
HAGEO - global availability
GeoRM - remote mirroring
HACWS - high availability CWS

© Copyright IBM Corporation 2004


HACMP 4.x to 5.x Repackaging
HACMP "classic" - gone
HACMP ES + Concurrent Resource Manager - combined into HACMP for AIX
HAGEO - replaced by HACMP XD (HAGEO and PPRC)
GeoRM - still available
HACWS - still available

© Copyright IBM Corporation 2004


HACMP 5.x Product Family
Simplified product packaging with an enhanced feature set:

HACMP for AIX - fault resilience, 32-way scalability and concurrent clusters
HACMP XD - global availability
GeoRM - remote mirroring
HACWS - high availability CWS

© Copyright IBM Corporation 2004


Things HACMP Does Not Do

Backup and restoration (for example, with TSM).
Time synchronization.
Application-specific configuration.
System administration tasks unique to each node.
© Copyright IBM Corporation 2004
When Is HACMP Not the Correct Solution?
HACMP can dramatically improve availability, although there
are situations where it may not be the appropriate solution:

Zero downtime required


Maybe a fault tolerant system is the correct choice.
7x24x365: HACMP occasionally needs to be shut down for maintenance.
Life Critical environments
HACMP is designed to handle one failure.
A second failure could be catastrophic.
Security Issues
Too little security, lots of people with the ability to change the environment.
Too much security, C2 and B1 environments may not allow HACMP to function
as designed.
Unstable Environments
HACMP cannot make an unstable and poorly managed environment stable.
HACMP tends to reduce the availability of poorly managed systems:
Lack of change control
Failure to treat the cluster as a single entity
Too many cooks
Unskilled administrators
Lack of documented operational procedures
© Copyright IBM Corporation 2004
HACMP's Evolution
HACMP is a mature product evolving to meet customer's needs.
Some of the key recent feature changes have been:

HACMP version 4.4.x


Integration with Tivoli, application monitoring, cascading without fallback, C-SPOC
enhancements, improved migration support, integration of HANFS functionality, soft copy
documentation (html and pdf).
HACMP version 4.5
Requires AIX 5L, Automated configuration discovery, Support for multiple Service labels on each
Network Adapter through the use of IP aliasing, Persistent IP address support, 64-bit-capable
APIs, Monitoring and recovery from loss of volume group quorum.
HACMP version 5.1
Standard and extended configuration procedures, improved automated
configuration discovery, custom resource groups, heartbeating over disks,
fast disk fallover, elimination of "HACMP classic".
HACMP version 5.2
Only policy-based resource groups, two-node configuration assistant, auto correction with verify,
auto verify, cluster test tool, enhanced online worksheets, cluster wide password change,
application startup monitoring, multiple application monitors, improved security (key distribution),
file collections, dependent resource groups, RMC replaces Event Manager, Web interface to the
HACMP smit menus, lpar support for CUoD (PTF1).

© Copyright IBM Corporation 2004


Sources of HACMP Information
The HACMP manuals that come with the product!
/usr/lpp/doc/release_notes
IBM courses:
HACMP Administration I: Implementation (AU54)
HACMP Administration II: Maintenance and Migration (AU57)
HACMP Administration III: Problem Determination and Recovery (AU59)
HACMP Administration IV: Master Class (AU56)
Implementing High Availability on EServer Cluster 1600 (AU58)
IBM HAGEO Implementation (AU52)
IBM Web Site:
http://www-1.ibm.com/servers/aix/products/ibmsw/high_avail_network/hacmp.html

Non-IBM sources (not endorsed by IBM but probably worth a look):


http://www.matilda.com/hacmp/
http://groups.yahoo.com/group/hacmp/

BEFORE all else fails, read the manuals!


© Copyright IBM Corporation 2004
Checkpoint
1. True or False?
Support for the Concurrent Resource Management feature was dropped in HACMP 5.1.
2. True or False?
HACMP XD is a complete solution for building geographically distributed clusters.
3. Which of the following capabilities does HACMP not provide (select all
that apply):
a. Backup and restoration.
b. Time synchronization.
c. Automatic recovery from node and network adapter failure.
d. System Administration tasks unique to each node.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Support for the Concurrent Resource Management feature was dropped in HACMP 5.x.
2. True or False?
HACMP XD is a complete solution for building geographically distributed clusters.
3. Which of the following capabilities does HACMP not provide (select all
that apply):
a. Backup and restoration.
b. Time synchronization.
c. Automatic recovery from node and network adapter failure.
d. System Administration tasks unique to each node.

© Copyright IBM Corporation 2004


Unit Summary
Having completed this unit, you should be able to:
Outline the features and benefits of HACMP for AIX
Describe the physical and logical components of an HACMP
cluster
Understand how HACMP operates in typical configurations
Explain the HACMP 5.x product family

© Copyright IBM Corporation 2004


Welcome to:
Shared Storage Considerations for
High-Availability

© Copyright IBM Corporation 2004


Unit Objectives
After completing this unit, you should be able to:
Understand the fundamental shared storage concepts as they
apply within an HACMP cluster
Understand the capabilities of various disk technologies as they
relate to HACMP clusters
Understand the shared storage related facilities of AIX and how
to use them in an HACMP cluster

© Copyright IBM Corporation 2004


Fundamental Shared Storage Concepts
After completing this topic, you should be able to:
Understand the distinction between shared storage and private
storage
Understand how shared storage is used within an HACMP cluster
Understand the importance of controlled access to an HACMP
cluster's shared storage
Understand how access to shared storage is controlled in an
HACMP cluster

© Copyright IBM Corporation 2004


What Is Shared Storage?

SCSI

SSA

ESS

© Copyright IBM Corporation 2004


What Is Private Storage?

SCSI

SSA

ESS

© Copyright IBM Corporation 2004


Where Should The Data Go?

Operating system components


always on private storage
Dynamic application data
for example, databases, Web server content
always on shared storage
Static application data
various categories:
configuration files - usually on shared storage
(easier to keep consistent)
license keys - it depends
private storage if node locked
usually shared storage otherwise
truly static data - wherever it is convenient
Application binaries
it depends:
avoid version mismatches by placing on shared storage
often easier to upgrade if they're on private storage

© Copyright IBM Corporation 2004


Shared Storage Questions
Some questions to ask your user or customer:

For each highly available application in the cluster:


How much shared storage is required?
Upon deployment of the cluster?
In six months?
In one year?
In two years?
How is data organized?
Files within file systems versus production database storage
Is the application:
I/O bandwidth intensive?
Random I/O intensive?
What's required to validate application data after a crash?
How important is REALLY fast recovery from failures?
How will it be backed up?
How much private storage is required?
Usually not enough to be a concern

© Copyright IBM Corporation 2004


Access to Shared Data Must Be Controlled
Consider:
Data is placed in shared storage to facilitate access to the data
from whichever node the application is running on
The application is typically running on only one node at a time*
Updating the shared data from another node (that is, not the node
that the application is running on) could result in data corruption
Viewing the shared data from another node could yield an
inconsistent view of the data
Therefore, only the node actually running the application should be
able to access the data.

© Copyright IBM Corporation 2004


Who Owns the Storage?

A B

ODM ODM

C D

varyonvg and varyoffvg are used to control ownership in normal operations

varyonvg/varyoffvg uses one of the following mechanisms:


Reserve or release-based shared storage protection
Used with non- enhanced concurrent volume groups
RSCT-based shared storage protection
Used with Enhanced Concurrent Volume Groups
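A hedged sketch of the manual sequence that HACMP automates during a fallover, assuming a shared volume group app_vg containing a filesystem /app (names are illustrative):

# On the node releasing the volume group
umount /app
varyoffvg app_vg      # with reserve/release protection this also releases the disk reserves

# On the node acquiring the volume group
varyonvg app_vg       # fails if another node still holds the disks
mount /app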

© Copyright IBM Corporation 2004


Reserve or Release-based Protection

A B varyonvg

ODM ODM

varyonvg C D

Reserve or release-based shared storage protection relies on hardware


support for disk reservation.
Disks are physically reserved to a node when varied on.
Disks are released when varied off.
LVM is unable to varyon a volume group whose disks are reserved to
another node.
Not all shared storage systems support disk reservation.
© Copyright IBM Corporation 2004
Reserve or Release Voluntary Disk Takeover

[Diagram: voluntary disk takeover with reserve/release. Initially the left node has dbvg varied on and the right node has httpvg varied on. The right node runs varyoffvg httpvg to release the disks, and the left node then runs varyonvg httpvg to reserve and acquire them.]

© Copyright IBM Corporation 2004


Reserve or Release Involuntary Disk Takeover

[Diagram: involuntary disk takeover with reserve/release. The node that had the volume group varied on fails; the surviving node breaks the failed node's disk reserves and varies the volume group on.]

© Copyright IBM Corporation 2004


Ghost Disks

[Diagram: a node whose shared disks are reserved by the other node configures them again and ends up listing extra "ghost" hdisk entries (hdisk0 through hdisk9).]

© Copyright IBM Corporation 2004


RSCT-based Shared Storage Protection

passive varyon A B active varyon

ODM ODM

active varyon C D passive varyon

Requires Enhanced Concurrent Volume Group


Independent of disk type
© Copyright IBM Corporation 2004
RSCT-based Voluntary Fast Disk Takeover

[Diagram: voluntary fast disk takeover. Initially the left node has dbvg in active varyon and httpvg in passive varyon; the right node has httpvg in active varyon and dbvg in passive varyon.]

1. A decision is made to move httpvg from the right node to the left.
2. The right node releases its active varyon of httpvg.
3. The left node obtains the active varyon of httpvg (the right node retains a passive varyon).

© Copyright IBM Corporation 2004


RSCT-based Involuntary Fast Disk Takeover

[Diagram: involuntary fast disk takeover. Initially the left node has dbvg in active varyon and httpvg in passive varyon; the right node has httpvg in active varyon.]

1. The right node fails.
2. The left node realizes that the right node has failed. (Active varyon state and passive varyon state are concepts which don't apply to failed nodes.)
3. The left node obtains an active mode varyon of httpvg.

© Copyright IBM Corporation 2004


Enabling RSCT-based Fast Disk Takeover
Fast disk takeover is enabled automatically for a Volume
Group if all of the following are true:
The cluster is running AIX 5.2 on all nodes
HACMP 5.x is installed on all nodes
The volume group is an enhanced concurrent mode volume
group*

*Enhanced concurrent mode volume groups will be discussed shortly
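As a hedged example, an enhanced concurrent-capable volume group can be created with the -C flag of mkvg (the bos.clvm.enh fileset must be installed; the names are illustrative):

mkvg -n -C -y app_vg hdisk2 hdisk3    # -C: enhanced concurrent capable, -n: no automatic varyon at boot
lsvg app_vg | grep -i concurrent      # should report the volume group as Enhanced-Capable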


© Copyright IBM Corporation 2004
Fast Disk Takeover Additional Details
Fast disk takeover is faster than reserve or release-based disk
takeover.
Ghost disks do not occur when fast disk takeover is enabled.
Since fast disk takeover is implemented by RSCT, it is independent
of the disk technology supported by HACMP.
The gsclvmd subsystem which uses group services provides the
protection.
The distinction between active varyon and passive varyon is
private to each node (that is, it isn't recorded anywhere on the
shared disks).
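Because the distinction is local, it can only be inspected on the node itself; a hedged example using the httpvg volume group from the figures:

lsvg httpvg | grep -i -E 'permission|concurrent'
# a passively varied-on enhanced concurrent VG typically reports its
# VG PERMISSION as passive-only, while an active varyon reports read/write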

© Copyright IBM Corporation 2004


Checkpoint
1. Which of the following statements is true (select all that apply)?
a. Static application data should always reside on private storage.
b. Dynamic application data should always reside on shared storage.
c. Shared storage must always be simultaneously accessible to all cluster nodes.
d. Regardless of the size of the cluster, all shared storage must always be accessible,
subject to access control, by all cluster nodes.

2. True or False?
Using RSCT-based shared disk protection results in slower fallovers.

3. True or False?
Ghost disks must be checked for and eliminated immediately after every cluster fallover or
fallback.

4. True or False?
The fast disk takeover facility is a risk free performance improvement in HACMP 5.1.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. Which of the following statements is true (select all that apply)?
a. Static application data should always reside on private storage.
b. Dynamic application data should always reside on shared storage.
c. Shared storage must always be simultaneously accessible to all cluster nodes.
d. Regardless of the size of the cluster, all shared storage must always be accessible
(subject to access control) by all cluster nodes.

2. True or False?
Using RSCT-based shared disk protection results in slower fallovers.

3. True or False?
Ghost disks must be checked for and eliminated immediately after every cluster fallover or
fallback.

4. True or False?
The fast disk takeover facility is a risk free performance improvement in HACMP 5.1.

© Copyright IBM Corporation 2004


Shared Disk Technology
After completing this topic, you should be able to:
Understand the capabilities of various disk technologies in an
HACMP environment
Understand the installation considerations of a selected disk
technology when combined with HACMP
Understand the issue of PVID consistency within an HACMP cluster

© Copyright IBM Corporation 2004


SCSI Technology and HACMP
HACMP-related issues with SCSI disk architecture:
SCSI buses require termination at each end
In HACMP environments the terminators have to be external to ensure that
the bus is still terminated properly after a failed system unit has been
removed.
SCSI buses are ID-based. All devices must have a unique ID number.
The default for all SCSI adapters at initial power-on is ID 7.
SCSI adapters on shared SCSI busses must be configured to not use ID 7 in order
to ensure that there isn't an ID conflict when some other SCSI adapter powers on.
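A hedged example of moving a shared SCSI adapter off the default ID of 7 (the adapter name and ID are chosen for illustration; -P defers the change until the next boot):

lsattr -El scsi0 -a id      # show the adapter's current SCSI ID (default 7)
chdev -l scsi0 -a id=5 -P   # change it to 5; takes effect at the next reboot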

[Diagram: a twin-tailed SCSI bus (maximum 25m) with external terminators at each end; the two host adapters use SCSI IDs 5 and 6, and the four shared disk modules use SCSI IDs 1 through 4.]

© Copyright IBM Corporation 2004


SCSI Continued
Different SCSI bus types have different maximum cable lengths for
the buses (maximum is 25 meters for differential SCSI)
Certain SCSI subsystems support hot swappable drives.
SCSI cables are not hot pluggable (power must be turned off on all devices
attached to the SCSI bus before a SCSI cable connection is made or
severed).
Clusters using shared SCSI disks often experience ghost disks.
For additional information see:
IBM course AU20, AIX 5L System Administration IV: Storage
Management.

http://www.ibm.com/redbooks

© Copyright IBM Corporation 2004


SSA Technology and HACMP
SSA is an open standard disk technology and has the following characteristics
which are relevant in HACMP environments:
SSA is based on a loop technology which offers multiple data paths to disk.
There are restrictions on the number of adapters and types of adapters on each
loop. For example:
SSA loops can support eight adapters per loop.
Adapters used in RAID mode are limited to two per loop.
Shared SSA disks never appear as ghost disks.
For additional information see:
IBM course AU20, AIX 5L System Administration IV: Storage Management.
Redbook, Understanding SSA Subsystems in Your Environment,
SG24-5750-00.
http://www.storage.ibm.com/hardsoft/products/ssa/docs/index.html

SSA Adapter

© Copyright IBM Corporation 2004


Configuring SSA for Maximum Availability

[Diagram: two nodes, each with two SSA adapters, cabled to a 7133 disk drawer so that the disks are split across two loops and each loop reaches both nodes; no single adapter, cable or loop is then a single point of failure.]

© Copyright IBM Corporation 2004


SSA Adapters
The capabilities of SSA adapters have improved over time:
Only 6215, 6219, 6225 and 6230 adapters support Target Mode SSA and RAID5.
Only the 6230 adapter with 6235 Fast Write Cache Option feature code supports
enabling the write cache with HACMP.
Compatible adapters:
6214 + 6216 or 6217 + 6218 or 6219 + 6215 + 6225 + 6230
For more information and microcode updates:
http://www.storage.ibm.com/hardsoft/products/ssa/
Features and functionality of otherwise identical adapters and drives can vary
depending upon the level of microcode installed on the devices so be careful!
FC           Adapter Name                                Adapters/loop   RAID5        TMSSA
                                                         (RAID/JBOD)     Cache
6214         SSA 4-port adapter (MCA)                    -/2             N            N
6216         Enhanced SSA 4-port adapter (MCA)           -/8             N            N
6217         SSA 4-port RAID adapter (MCA)               1/1             N            N
6218         SSA 4-port adapter (PCI)                    1/1             N            N
6219         SSA Multi-Initiator/RAID EL Adapter (MCA)   2/8             Not for HA   Y
6215         SSA Multi-Initiator/RAID EL Adapter (PCI)   2/8             Not for HA   Y
6225         Advanced SerialRAID Adapter (PCI)           2/8             Not for HA   Y
6230         Advanced SerialRAID Adapter Plus (PCI)      2/8             Not for HA   Y
6230 + 6235  Advanced SerialRAID Adapter Plus with       2/8             Yes for HA   Y
             Fast Write Cache Option
Note: AIX 5.2 does not support the MCA 6214, 6216, 6217 and 6219 SSA adapters.
© Copyright IBM Corporation 2004
ESS Technology
The ESS is an example of a smart storage device. It provides highly
available storage centrally managed by the storage manager.
The inner workings of the storage device are masked from AIX. Basic
implementations are transparent to HACMP.
The optional HACMP XD add-on can be used to coordinate the fallover of ESS
PPRC based remote data mirrors.
[Diagram: ESS internal architecture: up to 32 host adapter connection ports (with online upgrades), a switch, 8, 16 or 32 GB of cache with nonvolatile (NVS) backup of write data, two dedicated 4-way SMP clusters, 64 internal disk paths, hot-swap disks with redundant spares (and online upgrades), SSA loops, and RAID5 for performance and availability; physical disks are partitioned into logical volumes. Full duplication - no single point of failure.]


© Copyright IBM Corporation 2004
ESS Continued
Advanced features of the Storage unit may be supported by HACMP.
Subsystem Device Driver (SDD) is supported by HACMP with appropriate
PTFs.
For additional information refer to:
IBM course AU20, AIX 5L System Administration IV: Storage
Management.
Implementing the Enterprise Storage Server in Your Environment,
SG24-5420-01.
IBM TotalStorage Enterprise Storage Server Model 800, SG24-6424-00.

http://www.ibm.com/redbooks

© Copyright IBM Corporation 2004


Fibre Channel Technology
Fibre channel is supported by AIX and HACMP:
The gigabit fibre channel adapters (FC6228 and FC6239) are supported
by HACMP.
The IBM fibre channel raid storage server is supported for HACMP
configurations.
The FAStT disk technology is supported with restrictions in AIX and
HACMP.
For more information refer to the following Redbooks:
Planning and Implementing an IBM SAN, SG24-6116-00
Designing an IBM Storage Area Network, SG24-5758-00
Implementing Fibre Channel Attachment on the ESS, SG24-6113-00

http://www.ibm.com/redbooks

© Copyright IBM Corporation 2004


Fibre Channel Continued
An example of a redundant fabric fibre channel implementation:

HACMP node HACMP node


FC-AL
switches

Fibre Channel
Raid
Storage
Server

© Copyright IBM Corporation 2004


Physical Volumes IDs
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk1 00020624ef3fafcc None
hdisk2 00206983880a1580 None
hdisk3 00206983880a1ed7 None
hdisk4 00206983880a31a7 None
hdisk5 0002051036e6bf76 codevg
#

A B

ODM
C D

© Copyright IBM Corporation 2004


hdisk Inconsistency
Node 1:
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk1 00020624ef3fafcc None    A
hdisk2 00206983880a1580 None    B
hdisk3 00206983880a1ed7 None    C
hdisk4 00206983880a31a7 None    D

Node 2:
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk1 000206238beef264 rootvg
hdisk2 00206983880a1ed7 None    C
hdisk3 00206983880a31a7 None    D
hdisk4 00020624ef3fafcc None    A
hdisk5 00206983880a1580 None    B

A B

ODM ODM

C D

Neither HACMP nor AIX are affected by having a physical disk


known by different hdisk numbers on different systems.
Humans are, unfortunately, more easily confused.
© Copyright IBM Corporation 2004
Removing hdisk Inconsistencies
# rmdev -d -l hdisk1 ; rmdev -d -l hdisk2
# rmdev -d -l hdisk3 ; rmdev -d -l hdisk4
# mkdev -c disk -t 160mb -s scsi -p scsi0 -w 6,1 -d
# cfgmgr
# lspv
hdisk0 000206238a9e74d7 rootvg
hdisk2 00020624ef3fafcc None    A
hdisk3 00206983880a1580 None    B
hdisk4 00206983880a1ed7 None    C
hdisk5 00206983880a31a7 None    D

The other node's lspv output:
hdisk0 000206238a9e74d7 rootvg
hdisk1 000206238beef264 rootvg
hdisk2 00020624ef3fafcc None    A
hdisk3 00206983880a1580 None    B
hdisk4 00206983880a1ed7 None    C
hdisk5 00206983880a31a7 None    D

"Fake" hdisk1 will exist in a defined state and will not


appear in lspv output (use lscfg to see hdisk1).

A B

ODM ODM

C D

The two systems will now have consistent hdisk PVIDs.


© Copyright IBM Corporation 2004
Checkpoint
1. Which of the following disk technologies are supported by
HACMP?
a. SCSI.
b. SSA.
c. FC-AL.
d. All of the above.

2. True or False?
SSA disk subsystems can support RAID5 (cache-enabled) with HACMP.

3. True or False?
Compatibility must be checked when using different SSA adapters in the same loop.

4. True or False?
SSA can be configured for no single point of failure.

5. True or False?
hdisk numbers must map to the same PVIDs across an entire HACMP cluster.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. Which of the following disk technologies are supported by
HACMP?
a. SCSI.
b. SSA.
c. FC-AL.
d. All of the above.

2. True or False?
SSA disk subsystems can support RAID5 (cache-enabled) with HACMP (although certain
limitations apply).

3. True or False?
Compatibility must be checked when using different SSA adapters in the same loop.

4. True or False?
SSA can be configured for no single point of failure.

5. True or False?
hdisk numbers must map to the same PVIDs across an entire HACMP cluster.

© Copyright IBM Corporation 2004


Shared Storage from the AIX Perspective
After completing this topic, you should be able to:
Understand how LVM aids cluster availability
Understand the quorum issues associated with HACMP
Set up LVM for maximum availability
Configure a new shared volume group, filesystem, and jfslog

© Copyright IBM Corporation 2004


Logical Volume Manager
LVM is one of the major enhancements that AIX brings to
traditional UNIX disk management. LVM's capabilities are exploited by
HACMP.
LVM is responsible for managing logical disk storage. Physical
volumes are organized into volume groups, are identified by a unique
physical volume ID (PVID), and are mapped to logical hdisks.

[Diagram: physical volumes (hdisk0, hdisk1), each identified by a PVID, are grouped into a volume group; physical partitions on the physical volumes are mapped to the logical partitions of a logical volume.]

© Copyright IBM Corporation 2004


LVM Relationships
LVM manages the components of the disk subsystem. Applications
talk to the disks through LVM.

This example shows an application writing to a filesystem which has its LVs
mirrored in a volume group physically residing on separate hdisks.

[Diagram: an application writes to /filesystem; LVM maps the write through a mirrored logical volume onto physical partitions residing on separate hdisks within the volume group.]

© Copyright IBM Corporation 2004


LVM Mirroring
LVM mirroring has some key advantages over other types of mirroring:

Up to three-way mirroring of all logical volume types, including concurrent


logical volumes, sysdumpdev, paging space, and raw logical volumes.
Disk type and disk bus independence.
Optional parameters for maximizing speed or reliability.
Changes to most LVM parameters can be done while the affected
components are in use.
The splitlvcopy command can be used to perform online backups.
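A hedged sketch of an online backup with splitlvcopy, using the sharedlv example from this unit (the chfs alternative is shown on the "Split Off a Mirror" page later in this topic):

mklvcopy sharedlv 3                   # add a third mirror copy
syncvg -l sharedlv                    # synchronize the new copy
splitlvcopy -y backuplv sharedlv 2    # split one copy off into backuplv, leaving sharedlv with 2 copies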

[Diagram: as on the previous page, an application write to /filesystem passes through a mirrored logical volume onto separate hdisks.]

© Copyright IBM Corporation 2004


LVM Configuration Options (1 of 2)
# smit mklv
Add a Logical Volume

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Logical volume NAME [sharedlv]
* VOLUME GROUP name sharedvg
* Number of LOGICAL PARTITIONS [100] #
PHYSICAL VOLUME names [] +
Logical volume TYPE []
POSITION on physical volume middle +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
to use for allocation
Number of COPIES of each logical 2 +
partition
Mirror Write Consistency? active +

[MORE...12]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


LVM Configuration Options (2 of 2)
# smit mklv
Add a Logical Volume

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[MORE...12] [Entry Fields]


Allocate each logical partition copy yes +
on a SEPARATE physical volume?
RELOCATE the logical volume during yes +
reorganization?
Logical volume LABEL []
MAXIMUM NUMBER of LOGICAL PARTITIONS [512] #
Enable BAD BLOCK relocation? yes +
SCHEDULING POLICY for reading/writing sequential +
logical partition copies
Enable WRITE VERIFY? no +
File containing ALLOCATION MAP []
Stripe Size? [Not Striped] +
[BOTTOM]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Steps to Creating a Mirrored Filesystem
These are the steps to creating a properly mirrored filesystem
for HA environments:

Step  Description                        Options
----  ---------------------------------  -----------------------------------------------------
 1    create shared volume group         Name the VG something meaningful, like shared_vg1
 2    change the auto-varyon flag        chvg -an shared_vg1
 3    create a jfslog lv "sharedlvlog"   type=jfslog, size=1 PP, separate physical volumes=yes,
                                         scheduling=sequential, copies=2
 4    initialize the jfslog              logform /dev/sharedlvlog
 5    create a data lv "sharedlv"        type=jfs, size=??, separate physical volumes=yes,
                                         copies=2, scheduling=sequential, write verify=??
 6    create a filesystem on a           pick the lv = sharedlv to create the file system on,
      previously created lv              automount=no, assign the desired mount point
 7    verify the log file is in use      mount the filesystem; lsvg -l shared_vg1 should show
                                         1 lv of type jfslog with 1 LP, 2 PPs

Note: HACMP supports both JFS and the newer JFS2 filesystems. JFS2 filesystems are not supported for NFS export
from an HACMP cluster unless an external log area is used. It would probably be best to always use an external JFS2 log
logical volume as one never knows which filesystems will need to be NFS exported someday.
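
As a rough sketch (not from the planning worksheets; the volume group, logical volume,
mount point, disk names and sizes are illustrative only), the seven steps above
correspond to commands along these lines:

# mkvg -y shared_vg1 hdisk2 hdisk3                 (step 1)
# chvg -an shared_vg1                              (step 2: no auto-varyon at boot)
# mklv -y sharedlvlog -t jfslog -c 2 shared_vg1 1  (step 3: 2 copies, 1 LP)
# logform /dev/sharedlvlog                         (step 4: answer y)
# mklv -y sharedlv -t jfs -c 2 shared_vg1 100      (step 5)
# crfs -v jfs -d sharedlv -m /sharedfs -A no       (step 6: no automount)
# mount /sharedfs
# lsvg -l shared_vg1                               (step 7: the jfslog LV should be open
                                                    with 1 LP and 2 PPs)

Additional mklv flags (for strict allocation, sequential scheduling, write verify and
so forth) can be added to match the SMIT options shown earlier.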

© Copyright IBM Corporation 2004


Split Off a Mirror
chfs can be used to remove a copy of an LVM mirror and make it
available to be remounted somewhere else for backup.

# mklvcopy sharedlv 3
# syncvg -l sharedlv
# chfs -a splitcopy=backuplv -a copy=3 /sharedlv

(splits off a new logical volume and reduces the original to 2 copies)

© Copyright IBM Corporation 2004


Adding a Shared Volume Group

(Figure: the VGDA on the shared disks and the ODM on each node are kept in step
through four numbered stages.)

On the first node:
   #1  mkvg, chvg, mklv (log), logform, mklv (data), crfs
   #2  unmount, varyoffvg
On the second node:
   #3  cfgmgr, importvg
   #4  chvg, varyoffvg
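
A minimal command-line sketch of the four stages (device and volume group names are
examples; C-SPOC or the HACMP SMIT panels can also be used):

On the first node:
# mkvg -y shared_vg1 hdisk2 hdisk3       (#1: create the VG, LVs, jfslog and filesystem as above)
# umount /sharedfs
# varyoffvg shared_vg1                   (#2: release the VG so the other node can import it)

On the second node:
# cfgmgr                                 (#3: make sure the shared hdisks are configured)
# importvg -y shared_vg1 hdisk2          (import using any disk belonging to the VG)
# chvg -an shared_vg1                    (#4: disable auto-varyon on this node too)
# varyoffvg shared_vg1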

© Copyright IBM Corporation 2004


Quorum Checking
AIX performs quorum checking on volume groups in order to
ensure that the volume group remains consistent
The quorum rules are intended to ensure that structural changes to
the volume group (for example, adding or deleting a logical volume)
are consistent across an arbitrary number of varyon-varyoff cycles
Overriding the quorum rules can be VERY DANGEROUS!

VG status   Quorum checking ENabled         Quorum checking DISabled
            for volume group                for volume group
---------   -----------------------------   -------------------------------------
Running     >50% of VGDAs                   at least 1 VGDA
varyonvg    >50% of VGDAs                   100% of VGDAs, or >50% of VGDAs
                                            if MISSINGPV_VARYON=TRUE
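
To check or change quorum on a shared volume group from the command line, a sketch
(the volume group name is an example):

# lsvg shared_vg1 | grep -i quorum       (shows the current QUORUM setting)
# chvg -Qn shared_vg1                    (disable quorum checking; -Qy re-enables it)
# varyonvg -f shared_vg1                 (force a varyon that would otherwise fail -- see
                                          the warnings on the following pages first)

Depending on the AIX level, a quorum change may not take effect until the volume group
is varied off and on again.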

© Copyright IBM Corporation 2004


Disabling Quorum Checking
Cluster designers or implementers are often tempted to disable
quorum checking. Although often necessary/desirable, there are
risks if quorum is disabled or if a volume group varyon is forced:

It may be possible for each side of a two-node cluster to have


different parts of the same volume group vary'd online.

It is possible that volume group structural changes (for example,


add or delete of a logical volume) made during the last varyon are
unknown during the current varyon.

It is possible that volume group structural changes are made to


one part of the volume group which are inconsistent with a
different set of structural changes which are made to another part
of the volume group.

© Copyright IBM Corporation 2004


Eliminating Quorum Problems
The following points help minimize the quorum problems:

Avoid volume groups with less than three disks.


Generally not an issue with HA clusters.

Distribute hard disks across more than one bus.


Use three adapters per node in SCSI.
Use two adapters per node, per loop in SSA.
Use VPATHs and two adapters with Fibre Channel.

Use different power sources.


Connect each power supply in a drawer to a different power source.

Use RAID arrays or Enterprise Storage Solutions.

© Copyright IBM Corporation 2004


The Quorum Buster
In some conditions, loss of quorum may lead to an unplanned
system downtime. The quorum buster can help eliminate this
possibility.

sharedvg

© Copyright IBM Corporation 2004


HACMP 5.x's Forced Varyon Feature
HACMP 5.x provides a new approach to dealing with quorum issues:
Each resource group has a flag which can be set to cause
HACMP to perform a careful forced varyon of the resource group's
volume groups if necessary
If normal varyonvg fails and this flag is set:
HACMP verifies that at least one complete copy of each logical volume
is available
If verification succeeds, HACMP forces the volume group online
This is not a complete and perfect solution to quorum issues:
If the cluster is partitioned then the rest of the volume group might
still be online on a node in the other partition.

© Copyright IBM Corporation 2004


Recommendations for Forced Varyon
Before enabling HACMP's forced varyon feature for a volume group
or the HACMP_MIRROR_VARYON variable for the entire cluster,
ensure that:
The resource group's volume groups are mirrored across disk
enclosures
The resource group's volume groups are set to super-strict
allocation
There are redundant heartbeat networks between all nodes
Use disk heartbeating with at least one disk per enclosure
Administrative policies are in effect to prevent volume group
structural changes when the cluster is running degraded (that is,
failed over or with disks missing)

© Copyright IBM Corporation 2004


Enhanced Concurrent Volume Groups
Introduced in AIX 5.1
Supported for all HACMP-supported disk technologies
Supports JFS and JFS2 filesystems
May be mounted by at most one node at any given time
Default type when using C-SPOC to create Concurrent Volume
Groups in AIX 5.1
Replaces old style SSA Concurrent Volume Groups
C-SPOC cannot be used to create SSA Concurrent Volume Groups on AIX 5.2
C-SPOC can be used to convert SSA Concurrent Volume Groups to
Enhanced Concurrent Volume Groups
Required in order to use:
Heartbeat over disk for a non-IP network*
Fast disk takeover
* Covered in the network unit
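
Outside of C-SPOC (which remains the recommended route), an enhanced concurrent-capable
volume group can be created from the command line with something like the following
sketch (names are examples):

# mkvg -C -y ecmvg hdisk4 hdisk5         (-C creates a concurrent-capable VG; on AIX 5.x
                                          this is the enhanced concurrent type)
# lsvg ecmvg | grep -i concurrent        (should report Concurrent: Enhanced-Capable)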
© Copyright IBM Corporation 2004
Active Varyon versus Passive Varyon
Active Varyon (varyonvg -a, lsvg -o)
Behaves like normal varyon (listed with lsvg -o)
Allows all of the usual operations like:
Operations on filesystems (for example, mounts and opening, reading or writing
files)
Execution of applications resident within the volume group
Creating, changing and deleting logical volumes
Synchronizing volume groups
RSCT is responsible for ensuring that only one node has the VG actively varied on
Passive Varyon (varyonvg -p, lsvg <vg_name>)
Volume group is available in a very limited read-only mode
Only certain operations allowed:
Reading volume group configuration information (for example, lsvg)
Reading logical volume configuration information (for example, lslv)
Most operations are prohibited:
Any operations on filesystems and logical volumes (for example, mounts, open,
create, modify, delete, and so forth)
Modifying or synchronizing the volume group's configuration
Any operation which changes the contents or hardware state of the disks
HACMP uses the appropriate varyonvg commands with enhanced concurrent volume
groups
© Copyright IBM Corporation 2004
lsvg <vg_name>
ON ACTIVE NODE

halifax # lsvg ecmvg


VOLUME GROUP: ecmvg VG IDENTIFIER: 0009314700004c00000000fe2eaa2d6d
VG STATE: active PP SIZE: 8 MB
VG PERMISSION: read/write TOTAL PPs: 537 (4296 MB)
... ... ...
Concurrent: Enhanced-Capable Auto-Concurrent: Disabled

ON PASSIVE NODE:

toronto # lsvg ecmvg


VOLUME GROUP: ecmvg VG IDENTIFIER: 0009314700004c00000000fe2eaa2d6d
VG STATE: active PP SIZE: 8 MB
VG PERMISSION: passive-only TOTAL PPs: 537 (4296 MB)
... ... ...
Concurrent: Enhanced-Capable Auto-Concurrent: Disabled

© Copyright IBM Corporation 2004


LVM and HACMP Considerations
Following these simple guidelines helps keep the
configuration easier to administer:

All LVM constructs must have unique names in the cluster.


For example, httplv, httploglv, httpfs and httpvg.

Mirror or otherwise provide redundancy for critical logical volumes.


Don't forget the jfslog.
If it isn't worth mirroring then consider deleting it now rather than having to wait to lose the
data when the wrong disk fails someday!
Even data which is truly temporary is worth mirroring as it avoids an application crash when
the wrong disk fails.
RAID-5 and ESS-based storage are alternative ways to provide redundancy.

The VG major device numbers should be the same.


Mandatory for clusters exporting NFS filesystems, but it is a good habit for any cluster.

Shared data on internal disks is a bad idea.

Focus on the elimination of single points of failure.

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
Lazy update keeps VGDA constructs in sync between cluster nodes.

2. Which of the following commands will bring a volume group online?


a. getvtg <vgname>
b. mountvg <vgname>
c. attachvg <vgname>
d. varyonvg <vgname>

3. True or False?
Quorum should always be disabled on shared volume groups.

4. True or False?
filesystem and logical volume attributes cannot be changed while the cluster is operational.

5. True or False?
An enhanced concurrent volume group is required for the heartbeat over disk feature.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Lazy update keeps VGDA constructs in sync between cluster nodes.

2. Which of the following commands will bring a volume group online?


a. getvtg <vgname>
b. mountvg <vgname>
c. attachvg <vgname>
d. varyonvg <vgname>

3. True or False?
Quorum should always be disabled on shared volume groups.

4. True or False?
filesystem and logical volume attributes cannot be changed while the cluster is operational.

5. True or False?
An enhanced concurrent volume group is required for the heartbeat over disk feature.

© Copyright IBM Corporation 2004


Topic Summary
Having completed this topic, you should be able to:
Understand the fundamental shared storage concepts as they
apply within an HACMP cluster
Understand the capabilities of various disk technologies as they
relate to HACMP clusters
Understand the shared storage related facilities of AIX and how
to use them in an HACMP cluster

© Copyright IBM Corporation 2004


Welcome to:
Networking Considerations for
High-Availability

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives
After completing this unit, you should be able to:
Understand how HACMP uses networks
Describe the HACMP networking terminology
Explain and set up IP Address Takeover (IPAT)
Configure an IP network for HACMP
Configure a non-IP network
Explain how client systems are likely to be affected by failure
recovery
Minimize the impact of failure recovery on client systems

© Copyright IBM Corporation 2004


How HACMP Uses Networks
After completing this topic, you should be able to:
Explain how HACMP uses networks to:
Provide client access to the cluster
Detect failures
Diagnose failures
Communicate with other nodes in the cluster
Explain why a non-IP network is an essential part of any HACMP
cluster
Describe what a persistent node IP label is and what it is
typically used for
Provide an overview of IP Address Takeover

© Copyright IBM Corporation 2004


How Does HACMP Use Networks?
HACMP uses networks to:
Provide clients with highly available access to the cluster's
applications
Detect and diagnose node, network and network interface card
(NIC) failures
Communicate with other HACMP daemons on other nodes in the
cluster

© Copyright IBM Corporation 2004


Providing HA Client Access to the Cluster
Providing clients with highly available access to the cluster's
applications requires:
Multiple NICs per network per node
(Possibly) multiple networks per node
Careful network design and implementation all the way out to the
client's systems

© Copyright IBM Corporation 2004


What HACMP Detects and Diagnoses
Remember, HACMP only handles the following failures directly:
Network Interface Card (NIC) failure
Node failure
Network failure

en0 en1 en0 en1

bondar hudson

© Copyright IBM Corporation 2004


Failure Detection Requires Monitoring
HACMP must monitor the cluster's components in order to detect
failures of these components.
Let's look at what monitoring HACMP does . . .

en0 en1 en0 en1

bondar hudson

© Copyright IBM Corporation 2004


Heartbeat Packets
HACMP sends heartbeat packets across networks
Heartbeat packets are sent and received by every NIC
This is sufficient to detect all NIC, node and network failures
Heartbeat packets are not acknowledged

en0 en1 en0 en1

bondar hudson

© Copyright IBM Corporation 2004


Failure Detection versus Failure Diagnosis
Failure Detection is realizing that something is wrong
For example, realizing that packets have stopped flowing between bondar's
en1 and hudson's en1
Failure Diagnosis is figuring out what is wrong
For example, figuring out that bondar's en1 NIC has failed

en0 en1 en0 en1

bondar hudson

© Copyright IBM Corporation 2004


Failure Diagnosis
When a failure is detected, HACMP (RSCT topology services) uses
specially crafted packet transmission patterns to determine (that is,
diagnose) the actual failure by ruling out other alternatives
Example:
1. RSCT on bondar notices that heartbeat packets are no longer
arriving via en1 and notifies hudson (which has also noticed that
heartbeat packets are no longer arriving via its en1)
2. RSCT on both nodes send diagnostic packets between various
combinations of NICs (including out via one NIC and back in via
another NIC on the same node)
3. The nodes soon realize that all packets involving bondar's en1
are vanishing but packets involving hudson's en1 are being
received
4. DIAGNOSIS: bondar's en1 has failed.

© Copyright IBM Corporation 2004


What If All Heartbeat Packets Stop?
A node might notice that heartbeat packets are no longer
arriving on any NIC.
In the configuration below, it's impossible for either node to distinguish
between failure of the network and failure of the other node.
Each node concludes that the other node is down!

en0 en1 en0 en1

bondar hudson

© Copyright IBM Corporation 2004


All Clusters REQUIRE a Non-IP Network!
Distinguishing between the failure of the other node and the failure
of the network requires a second network.
Distinguishing between failure of the other node's IP subsystem and
the total failure of the other node requires a non-IP network.
Therefore, ALL CLUSTERS SHOULD HAVE A NON-IP
NETWORK!!!

en0 en1 en0 en1

non-IP network

bondar hudson

© Copyright IBM Corporation 2004


An Important Implementation Detail
HACMP must ensure that heartbeats are sent out via all NICs and know
which NIC is used.
If a node has multiple NICs on the same logical subnet then AIX
can rotate which NIC is used to send packets to the network.
Therefore, each NIC on each physical IP network on any given
node must have an IP address on a different logical subnet.

en0 en1 en0 en1


192.168.1.1 192.168.2.1 192.168.1.2 192.168.2.2

non-IP network

bondar hudson

© Copyright IBM Corporation 2004


Failure Recovery
HACMP continues to monitor failed components in order to
detect their recovery
Recovered components are reintegrated back into the cluster
Reintegration might trigger significant actions
For example, recovery of primary node will optionally trigger
fallback of resource group to primary node

en0 en1 en0 en1 en0 en1 en0 en1

bondar hudson bondar hudson

© Copyright IBM Corporation 2004


IP Address Takeover (IPAT)
Each highly available application is likely to require its own IP
address (called a service IP address)
This service IP address would usually be placed in the application's
resource group
HACMP would then be responsible for ensuring that the service IP
address was available on the node currently responsible for the
resource group

(Figure: a resource group containing the service IP label, file systems, volume groups,
NFS exports, NFS mounts, and an application server.)

© Copyright IBM Corporation 2004


IPAT After a Node Failure
If the application's current node fails, HACMP moves the
application's resource group to the other node.
If IPAT is configured for the resource group then the application's
service IP address is associated with the application's new node.

192.168.25.12

192.168.25.12

© Copyright IBM Corporation 2004


IPAT After a NIC Failure
If the NIC associated with the application's service IP address fails,
HACMP moves the service IP address to another NIC.
From HACMP's perspective, NIC failures include anything which
prevents the NIC from sending and receiving packets (for example, a
damaged or disconnected cable, a failed switch port, and so forth).

192.168.25.12

192.168.25.12

© Copyright IBM Corporation 2004


Checkpoint
1. How does HACMP use networks (select all which apply)?
a. Provide client systems with highly available access to the cluster's applications
b. Detect failures
c. Diagnose failures
d. Communicate between cluster nodes
e. Monitor network performance
2. True or False?
HACMP detects the loss of volume group quorum.

3. True or False?
Heartbeat packets must be acknowledged or a failure is assumed to have occurred.

4. True or False?
Clusters are required to include a non-IP network.

5. True or False?
Each NIC on each physical IP network on each node is required to have an IP address on a
different logical subnet.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. How does HACMP use networks (select all which apply)?
a. Provide client systems with highly available access to the cluster's applications
b. Detect failures
c. Diagnose failures
d. Communicate between cluster nodes
e. Monitor network performance
2. True or False?*
HACMP detects the loss of volume group quorum.

3. True or False?
Heartbeat packets must be acknowledged or a failure is assumed to have occurred.

4. True or False?
Clusters are required to include a non-IP network.

5. True or False?
Each NIC on each physical IP network on each node is required to have an IP address on a
different logical subnet.

*HACMP responds to the loss of quorum but the loss is detected by the Logical Volume
Manager.
© Copyright IBM Corporation 2004
HACMP Concepts and Configuration Rules
After completing this topic, you should be able to:
List the networking technologies supported by HACMP
Describe the purpose of public and private HACMP networks
Describe the topology components and their naming rules
Define key networking related HACMP terms
Describe the basic HACMP network configuration rules

© Copyright IBM Corporation 2004


HACMP Networking Support
Supported IP networking technologies:
Ethernet
All speeds
Not the IEEE 802.3 frame type which uses et0, et1 ...
FDDI
Token-Ring
ATM and ATM LAN Emulation
Etherchannel
SP Switch
Supported non-IP network technologies:
Heartbeat over Disks (diskhb)
Requires Enhanced Concurrent Volume Group and HACMP 5.x
RS232/RS422 (rs232)
Target Mode SSA (tmssa)

© Copyright IBM Corporation 2004


Network Types
HACMP categorizes all networks:

IP:

ethernet, token ring, fddi, atm, sp switch (hps)

non-IP:

rs232, tmssa, diskhb

© Copyright IBM Corporation 2004


HACMP Topology Components
HACMP uses some unique terminology to describe the type and
function of topology (as in, network) components under its control.
(Figure: topology components for a two-node cluster, nodes bondar and hudson)

  IP label / IP address      for example, vancouver_service / 192.168.5.2
  IP network                 a TCP/IP network (here named Internalnet) joining the nodes
  Communication interface    a network interface card (NIC) on an IP network
  Communication device       one end of a point-to-point non-IP connection
                             (for example, a serial port)
  non-IP networks            rs232 (serial port to serial port), tmssa and diskhb
  node name                  the name by which HACMP knows each node
© Copyright IBM Corporation 2004
On the Naming of Nodes
There are several names a node can be known by, including the AIX
hostname, the HACMP node name and IP label. These concepts
should not be confused.
AIX hostname HACMP node name
# hostname # /usr/es/sbin/cluster/utilities/get_local_nodename
gastown vancouver
# uname -a
AIX gastown 2 5 004FD0CD4C00

IP label
# netstat -i
Name  Mtu   Network    Address           Ipkts  Ierrs  Opkts  Oerrs  Coll
lo0   16896 link#1                        5338      0   5345      0     0
lo0   16896 127        loopback           5338      0   5345      0     0
lo0   16896 ::1                           5338      0   5345      0     0
tr0   1500  link#2     0.4.ac.49.35.58   76884      0  61951      0     0
tr0   1500  192.168.1  vancouver_boot1   76884      0  61951      0     0
tr1   1492  link#3     0.4.ac.48.22.f4     476      0    451     13     0
tr1   1492  192.168.2  vancouver_boot2     476      0    451     13     0

© Copyright IBM Corporation 2004


HACMP 5.x Network Terms (1 of 2)
Communication Interface: A TCP/IP network interface used
by HACMP for communication with clients and/or with other cluster
nodes. Its IP address is defined in the AIX ODM (via smit chinet) and
is assigned to the interface at AIX boot time.
Communication Device: A physical device representing one end of
a point-to-point non-IP network connection, such as /dev/tty1,
/dev/hdisk1 and /dev/tmssa1.
Communication Adapter: An X.25 adapter used to support a Highly
Available Communication Link.

© Copyright IBM Corporation 2004


HACMP 5.x Network Terms (2 of 2)
Service IP Label / Address: Address configured by HACMP to
support client traffic. It is kept highly available by HACMP.
Configured on an interface by either replacement or by alias.
Non-service IP Label / Address: An IP label / address defined to
HACMP for communication interfaces and is not used by HACMP for
client traffic. Two types:
interface (stored in AIX odm)
persistent (see below).
Service Interface: A communications interface configured with a
Service IP Label / Address (either by alias or replacement).
Non-Service Interface: A communications interface not configured
with a Service IP Label / Address. Used as a backup for a Service IP
Label / Address.
Persistent IP label / Address: An IP label / address, defined as
an alias to an interface IP Label / Address which stays on a
single node and is kept available on that node by HACMP.
© Copyright IBM Corporation 2004
IP Network Configuration Rules
Non-service interface IP labels must be on different logical subnets.
Should use the same subnet mask
If heartbeat over alias used then same subnet may be used
There must be at least one common IP subnet made up of
non-service IP labels among the nodes in the resource group
For example, each node must have an interface IP address in the
192.168.5.0 IP subnet.
Suggestion: Minimize the number of different IP subnets that you
use.

© Copyright IBM Corporation 2004


IP Network Configuration Examples

IP addresses on   IP addresses on   Is this configuration valid?
first node        second node       (assume a subnet mask of 255.255.255.0)
----------------  ----------------  -----------------------------------------------------
192.168.5.1       192.168.5.2       No (the IP addresses on the second node are all in
192.168.6.1       192.168.5.3       the same logical IP subnet)

192.168.5.1       192.168.7.1       No (there is no logical IP subnet which is shared by
192.168.6.1       192.168.8.1       all the nodes)

192.168.5.1       192.168.5.2       Sort-of (it might be possible to configure HACMP to
                                    use this configuration but there are not enough NICs
                                    on each node to avoid the NICs being single points
                                    of failure)

192.168.5.1       192.168.5.2       Yes
192.168.6.1       192.168.6.2

192.168.5.1       192.168.5.2       Yes (although fewer logical IP subnets are possible)
192.168.6.1       192.168.7.1

192.168.5.1       192.168.5.2       Yes (each node can have a different number of NICs)
192.168.6.1       192.168.6.2
192.168.7.1       192.168.7.2
192.168.8.1
192.168.9.1

© Copyright IBM Corporation 2004


Non-IP Network Configuration Rules
Non-IP networks are strongly recommended in order to provide an
alternate communication path between cluster nodes in the event of
an IP network failure or IP subsystem failure
With more than two nodes you can configure the non-IP network
topology using one of the following layouts:
Mesh: each node connected to all other nodes
Star: one node is connected to all other nodes
Ring (or Loop): each node connected to its two adjacent neighbors
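
For example, with n nodes a full mesh needs n(n-1)/2 point-to-point connections, a star
needs n-1, and a ring needs n; for a four-node cluster that works out to 6, 3 and 4
non-IP links respectively.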

© Copyright IBM Corporation 2004


Persistent Node IP Labels
An IP label associated with a particular node
Useful for administrative purposes:
Provides always on IP address associated with a particular node
Allows external monitoring tools (for example, Tivoli) and
administrative scripts to reach a particular node
Assigned, via IP aliasing, after node synchronization, to a
communications interface on the node
HACMP will strive to keep the persistent node IP label available on
that node -- never moved to another node.
Maximum of one persistent node IP label per network per node
persistent node IP labels must adhere to subnet rules:
Persistent node IP labels must not be in any non-service interface
subnet

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
Clusters must always be configured with a private IP network for HACMP communication.

2. Which of the following are true statements about communication interfaces


(select all that apply)?
a. Has an IP address assigned to it using the AIX TCP/IP smit screens
b. Might have more than one IP address associated with it
c. Sometimes but not always used to communicate with clients
d. Always used to communicate with clients
e. Similar but not identical to the HACMP 4.x adapter concept
3. True or False?
Persistent node IP labels are not supported for IPAT via IP replacement.

4. True or False?
There are no exceptions to the rule that each NIC on each physical network on each node must
have an IP address in a different subnet.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Clusters must always be configured with a private IP network for HACMP communication.

2. Which of the following are true statements about communication interfaces


(select all that apply)?
a. Has an IP address assigned to it using the AIX TCP/IP smit screens
b. Might have more than one IP address associated with it
c. Sometimes but not always used to communicate with clients
d. Always used to communicate with clients*
e. Similar but not identical to the HACMP 4.x adapter concept
3. True or False?
Persistent node IP labels are not supported for IPAT via IP replacement.

4. True or False?**
There are no exceptions to the rule that each NIC on each physical network on each node must
have an IP address in a different subnet.

*Communication interfaces on private IP networks are not intended to be used by clients.


**The HACMP 5.1 heartbeat over IP aliases feature can be used to "rescind" this rule.
© Copyright IBM Corporation 2004
Implementing IP Address Takeover (IPAT)
After completing this topic, you should be able to:
Describe IPAT via IP aliasing and IPAT via IP replacement including:
How to configure a network to support them
What happens when
There are no failed components
A communication interface fails
A communication interface recovers
A node fails
A node recovers
Know how to select which style of IPAT is appropriate in a given
context
Describe how the AIX boot sequence changes when IPAT is
configured in a cluster
Understand the importance of consistent IP addressing and labeling
conventions

© Copyright IBM Corporation 2004


Two Ways to Implement IPAT
IPAT via IP Aliasing:
HACMP adds the service IP address to an (AIX) interface IP
address using AIX's IP aliasing feature:
ifconfig en0 alias 192.168.1.2

IPAT via IP Replacement:


HACMP replaces an (AIX) interface IP addresses with the service
IP addresses:
ifconfig en0 192.168.1.2
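
A quick way to see which style is in effect on a node is to look at the interfaces
themselves (the interface name and addresses are just the examples used above):

# ifconfig en0
# netstat -in

With IPAT via IP aliasing, the interface shows both its ODM-defined address and the
service address as an alias; with IPAT via IP replacement, only the service address
appears on the interface that currently holds it.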

© Copyright IBM Corporation 2004


Network: IPAT via IP Aliasing
Define an interface IP address in the AIX ODM for each network interface.
Each interface IP address must be in a different logical IP subnet*
Define these addresses in the /etc/hosts file and configure them in HACMP
topology as communication interfaces
Define service addresses in /etc/hosts and in HACMP resources
They must not be in the same logical IP subnet as any of the interface IP
addresses
HACMP will configure them to AIX when needed

Before starting the application resource group

192.168.10.1 (odm) 192.168.11.1 (odm) 192.168.10.2 (odm) 192.168.11.2 (odm)

* Refer to earlier discussion of heartbeating and failure diagnosis for explanation of why
© Copyright IBM Corporation 2004
IPAT via IP Aliasing in Operation
When the resource group comes up on a node, HACMP aliases the
service IP label onto one of the node's available (that is, currently
functional) interfaces (odm).

After starting the application resource group

192.168.5.1 (alias)

192.168.10.1 (odm) 192.168.11.1 (odm) 192.168.10.2 (odm) 192.168.11.2 (odm)

* See earlier discussion of heartbeating and failure diagnosis for explanation of why

© Copyright IBM Corporation 2004


IPAT via IP Aliasing After an Interface Fails
If the communication interface being used for the service IP label
fails, HACMP aliases the service IP label onto one of the node's
remaining available (for example, currently functional) non-service
(odm) interfaces
The eventual recovery of the failed boot adapter makes it available
again for future use

192.168.5.1 (alias)

192.168.10.1 (odm) 192.168.11.1 (odm) 192.168.10.2 (odm) 192.168.11.2 (odm)

* See earlier discussion of heartbeating and failure diagnosis for explanation of why
© Copyright IBM Corporation 2004
IPAT via IP Aliasing After a Node Fails
If the resource group's node fails, HACMP moves the resource group
to a new node and aliases the service IP label onto one of the new
node's available (for example, currently functional) non-service (odm)
communication interfaces

192.168.5.1 (alias)

192.168.10.2 (odm) 192.168.11.2 (odm)

* See earlier discussion of heartbeating and failure diagnosis for explanation of why
© Copyright IBM Corporation 2004
IPAT via IP Aliasing Summary
Configure each node's communication interfaces with IP addresses
(each on a different subnet)
Assign service IP labels to resource groups as appropriate
There is no limit on the number of resource groups with service IP
labels
There is no limit on the number of service IP labels per resource
group
HACMP assigns service IP labels to communication interfaces
(NICs) using IP aliases as appropriate
IPAT via IP aliasing requires that hardware address takeover is not
configured
IPAT via IP aliasing requires gratuitous ARP support

© Copyright IBM Corporation 2004


Network: IPAT via IP Replacement
Define each network interface's IP address in the AIX ODM.
Each interface IP address on a given node must be in a different logical IP
subnet* and there must be a common subnet among the nodes
Define these addresses in the /etc/hosts file and configure them in HACMP
topology
Define service IP addresses in /etc/hosts and HACMP resources
The address must be in the SAME subnet as a common interface subnet
HACMP configures them to AIX as required
Before starting the application resource group

192.168.10.1 (odm) 192.168.11.1 (odm) 192.168.10.2 (odm) 192.168.11.2 (odm)

* See earlier discussion of heartbeating and failure diagnosis for explanation of why
© Copyright IBM Corporation 2004
IPAT via IP Replacement in Operation
When the resource group comes up on a node, HACMP replaces an
interface (odm) IP label with the service IP label
It replaces the interface IP label on the same subnet if the resource
group is on its startup node or if the distribution fallover policy is used.
It replaces an interface IP label on a different subnet otherwise

After starting the application resource group

192.168.10.7 (service) 192.168.11.1 (odm) 192.168.10.2 (odm) 192.168.11.2 (odm)

© Copyright IBM Corporation 2004


IPAT via IP Replacement After an I/F Fails
If the communication interface being used for the service IP label
fails, HACMP swaps the service IP label with an interface (odm) IP
label on one of the node's remaining available (that is, currently
functional) communication interfaces
The IP labels remain swapped when the failed interface recovers

NIC A NIC B
192.168.11.1 (odm) 192.168.10.7 (service) 192.168.10.2 (odm) 192.168.11.2 (odm)

© Copyright IBM Corporation 2004


IPAT via IP Replacement After a Node Fails
If the resource group's node fails, HACMP moves the resource
group to a new node and replaces an interface IP label with the
service IP label
If the resource group is on its startup node or if the fallover policy is
distributed, then replaces the interface (odm) IP label in the same
subnet.
Else it replaces an interface (odm) IP label in a different subnet or fail if
there isn't an available interface

192.168.10.2 (odm) 192.168.10.7 (service)

© Copyright IBM Corporation 2004


IPAT via IP Replacement Summary
Configure each node with up to eight communication interfaces
(each on a different subnet)
Assign service IP labels to resource groups as appropriate
Each node can be the most preferred node for at most one
resource group
No limit on number of service IP labels per resource group but
each service IP label must be on a different physical network
HACMP replaces interface IP labels with service IP labels on the
same subnet as the service IP label when the resource group is
running on its most preferred node or if the fallover policy is
distributed
HACMP replaces interface IP labels with service IP labels on a
different subnet from the service IP label when the resource group is
moved to any other node
IPAT via IP replacement supports hardware address
takeover

© Copyright IBM Corporation 2004


IPAT Is Optional
Although practically all clusters use IPAT, it is an optional
feature
If you don't need/want IPAT, you must still decide between IPAT via
IP aliasing style networking and IPAT via IP replacement style
networking:

If IPAT via IP aliasing style:


Define persistent node IP labels and configure clients to use them

If IPAT via IP replacement:


Configure (using SMIT tcp configuration, that is, smit chinet) service IP
labels and standby IP labels on communication interfaces
Configure clients to use the service IP labels to access cluster nodes or
define persistent node IP labels for the clients to use

© Copyright IBM Corporation 2004


Changes to AIX Start Sequence
The startup sequence of AIX networking is changed when IPAT is enabled.

Standard AIX startup:
  /etc/inittab
    /sbin/rc.boot -> cfgmgr -> /etc/rc.net -> cfgif
    /etc/rc -> mount all
    /etc/rc.tcpip -> daemons start
    /etc/rc.nfs -> daemons start, exportfs

Startup when IPAT is configured:
  /etc/inittab
    /sbin/rc.boot -> cfgmgr -> /etc/rc.net (modified for IPAT) -> exit 0
    /etc/rc -> mount all
    /usr/sbin/cluster/etc/harc.net -> /etc/rc.net -boot -> cfgif
    < HACMP startup > clstrmgr -> event node_up -> node_up_local
        -> get_disk_vg_fs -> acquire_service_addr -> telinit -a
    /etc/rc.tcpip -> daemons start
    /etc/rc.nfs -> daemons start, exportfs

IPAT changes the init sequence.
© Copyright IBM Corporation 2004


Changes to /etc/inittab
init:2:initdefault:
brc::sysinit:/sbin/rc.boot 3 >/dev/console 2>&1 # Phase 3 of system boot
powerfail::powerfail:/etc/rc.powerfail 2>&1 | alog -tboot > /dev/console # Power
failure detection
.
.
.
srcmstr:2:respawn:/usr/sbin/srcmstr # System Resource Controller
harc:2:wait:/usr/es/sbin/cluster/etc/harc.net # HACMP for AIX network startup
rctcpip:a:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
rcnfs:a:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
cron:2:respawn:/usr/sbin/cron
piobe:2:wait:/usr/lib/lpd/pio/etc/pioinit >/dev/null 2>&1 # pb cleanup
cons:0123456789:respawn:/usr/sbin/getty /dev/console
qdaemon:a:wait:/usr/bin/startsrc -sqdaemon
writesrv:a:wait:/usr/bin/startsrc -swritesrv
uprintfd:23456789:respawn:/usr/sbin/uprintfd
.
.
.
ctrmc:2:once:/usr/bin/startsrc -s ctrmc > /dev/console 2>&1
httpdlite:23456789:once:/usr/IMNSearch/httpdlite/httpdlite -r /etc/IMNSearch/http
dlite/httpdlite.conf & >/dev/console 2>&1
clcomdES:2:once:startsrc -s clcomdES >/dev/console 2>&1
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit # HACMP for AIX These must
be the last entries of run level a in inittab!
pst_clinit:a:wait:/bin/echo Created /usr/es/sbin/cluster/.telinit > /dev/console
# HACMP for AIX These must be the last entries of run level a in inittab!

© Copyright IBM Corporation 2004


Talk to Your Network Administrator
Explain how HACMP uses networks
Ask for what you need:
IPAT via IP Aliasing:
Service IP labels in the production network that the cluster will be
attached to
Additional subnets for interface (odm) labels
One per network interface on the node with the most network adapters
These do not need to be routable
IPAT via IP Replacement:
Service label
Interface IP label for each network adapter (one must be in the same
subnet as the service label)
A different subnet for each interface
One per adapter on the node with the most adapters
Only the subnet containing the service label need be routable
Persistent node IP label for each node on at least one network
(very useful but optional)
Ask early (getting subnets assigned may take some time)

© Copyright IBM Corporation 2004


Adopt IP Address Numbering Conventions
HACMP clusters tend to have quite a few IP addresses
associated with them
If at all possible, adopt an IP address numbering convention
Requirements imposed by corporate IT policies or the network
administrators may make it impractical to follow any sort of
convention (do the best you can)

© Copyright IBM Corporation 2004


Adopt Labeling/Naming Conventions
HACMP clusters also tend to have quite a few IP labels and
other names associated with them
Adopt appropriate labeling and naming conventions:
For example:
Node-resident labels should include the node's name
bondar-if1, bondar-if2, hudson-if1, hudson-if2
Service IP labels that move between nodes should describe the
application rather than the node
web1, infodb
Why?
Conventions avoid mistakes
Avoided mistakes improve availability!

© Copyright IBM Corporation 2004
An IPAT via IP Aliasing Convention
Here's one possible IP label numbering convention for IPAT via IP aliasing networks:
IP address is of the form AA.BB.CC.DD
  AA.BB is assigned by the network administrator
  CC indicates which interface or service IP label on each node:
    15, 16 indicate non-service/interface IP labels
    5 chosen for service labels
    etc. (as required)
  DD indicates which node:
    29 indicates an IP address on bondar
    31 indicates an IP address on hudson

        bondar-if1   192.168.15.29
        bondar-if2   192.168.16.29
        hudson-if1   192.168.15.31
        hudson-if2   192.168.16.31
        xweb         192.168.5.92
        yweb         192.168.5.70

Be flexible. For example, this convention uses DD=29 for bondar
and DD=31 for hudson because the network administrator assigned
bondar-if1 to be 192.168.15.29 and hudson-if1 to be 192.168.15.31.
Fortunately, the network administrator could be convinced to use .29
and .31 for the other bondar and hudson interface IP addresses.

© Copyright IBM Corporation 2004


An IPAT via IP Replacement Convention
Here's one possible IP label numbering convention for IPAT via IP replacement networks:
IP address is of the form AA.BB.CC.DD
  AA.BB is assigned by the network administrator
  CC indicates which adapter on each node:
    15, 16 indicate non-service/interface IP labels (defined by the network administrator)
    15 also chosen for service labels
    and so forth (as required)
  DD indicates which node:
    29 indicates bondar
    31 indicates hudson

        bondar-if1   192.168.15.29
        bondar-if2   192.168.16.29
        hudson-if1   192.168.15.31
        hudson-if2   192.168.16.31
        xweb         192.168.15.92
        yweb         192.168.15.70

© Copyright IBM Corporation 2004


The /etc/hosts file
All of the cluster's IP labels must be defined in each cluster
node's /etc/hosts file:

IPAT via IP Aliasing Convention:

  127.0.0.1       loopback localhost

  # cluster explorers
  # netmask 255.255.255.0

  # bondar node
  192.168.15.29   bondar-if1
  192.168.16.29   bondar-if2

  # hudson node
  192.168.15.31   hudson-if1
  192.168.16.31   hudson-if2

  # persistent node IP labels
  192.168.5.29    bondar
  192.168.5.31    hudson

  # Service IP labels
  192.168.5.92    xweb
  192.168.5.70    yweb

  # test client node
  192.168.5.11    test

IPAT via IP Replacement Convention (identical except that the service IP labels and
the test client live in the 192.168.15 subnet):

  # Service IP labels
  192.168.15.92   xweb
  192.168.15.70   yweb

  # test client node
  192.168.15.11   test

© Copyright IBM Corporation 2004


Service IP Address Examples

IP addresses on  IP addresses on  Valid service IP addresses     Valid service IP addresses
first node       second node      for IPAT via IP aliasing       for IPAT via IP replacement
---------------  ---------------  -----------------------------  ------------------------------
192.168.5.1      192.168.5.2      192.168.7.1, 192.168.8.1,      192.168.5.3 and 192.168.5.97
192.168.6.1      192.168.6.2      192.168.183.57, 198.161.22.1   OR
                                                                 192.168.6.3 and 192.168.6.97

192.168.5.1      192.168.5.2      192.168.183.57,                192.168.5.3 and
192.168.6.1      192.168.7.1      198.161.22.1                   192.168.5.97

192.168.5.1      192.168.5.98     192.168.7.1,                   192.168.5.3 and 192.168.5.97
192.168.6.14     192.168.6.171    192.168.183.57,                OR
                                  198.161.22.1                   192.168.6.3 and 192.168.6.97

192.168.5.1      192.168.5.2      192.168.4.1, 192.168.10.1,     192.168.5.3 and 192.168.5.97
192.168.6.1      192.168.6.2      192.168.183.57, 198.161.22.1   OR
192.168.7.1      192.168.7.2                                     192.168.6.3 and 192.168.6.97
192.168.8.1                                                      OR
192.168.9.1                                                      192.168.7.3 and 192.168.7.97

© Copyright IBM Corporation 2004


Common TCP/IP Configuration Problems
Subnet masks are not consistent for all HA network adapters.
Interface IP labels are placed on the same logical network.
Service and interface IP labels are placed in the same logical networks in IPAT
via IP aliasing networks.
Service and interface IP labels are placed in different logical networks in IPAT
via IP replacement networks.
Ethernet frame type is set to 802.3. This includes etherchannel.
Ethernet speed is not set uniformly or is set to autodetect.
The contents of /etc/hosts are different on the cluster nodes.
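
A few quick checks that catch most of these problems (the adapter and interface names
are examples, and the media_speed attribute name varies by adapter type):

# lsattr -El ent0 -a media_speed     (speed should be set explicitly and identically on all nodes)
# ifconfig en0                       (verify the address and netmask on each interface)
# sum /etc/hosts                     (run on every node; the checksums should match)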

© Copyright IBM Corporation 2004


Single IP Adapter Nodes
Single IP Adapter nodes may appear attractive as they appear
to reduce the cost of the cluster
The cost reduction is an illusion:
1. A node with only a single adapter on a network is a node with a
single point of failure - the single adapter.
2. Clusters with unnecessary single points of failure tend to suffer
more outages
3. Unnecessary outages cost (potentially quite serious) money
One of the fundamental cluster design goals is to reduce
unnecessary outages by avoiding single points of failure.
HACMP requires at least two NICs per IP network for failure
diagnosis.
Clusters with less than two NICs per IP network are not supported*.

* Certain Cluster 1600 SP Switch-based clusters are supported with


only one SP Switch adapter per network.
© Copyright IBM Corporation 2004
Checkpoint
1. True or False?
A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.

2. True or False?
All networking technologies supported by HACMP support IPAT via IP aliasing.

3. True or False?
All networking technologies supported by HACMP support IPAT via IP replacement.

4. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1
and the right hand node has NICs with the IP addresses 192.168.20.2 and
192.168.21.2 then which of the following are valid service IP addresses if IPAT via
IP aliasing is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
5. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1
and the right hand node has NICs with the IP addresses 192.168.20.2 and
192.168.21.2 then which of the following are valid service IP addresses if IPAT via
IP replacement is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
© Copyright IBM Corporation 2004
Checkpoint Answers
1. True or False?
A single cluster can use both IPAT via IP aliasing and IPAT via IP replacement.

2. True or False?
All networking technologies supported by HACMP support IPAT via IP aliasing.

3. True or False?
All networking technologies supported by HACMP support IPAT via IP replacement.

4. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1
and the right hand node has NICs with the IP addresses 192.168.20.2 and
192.168.21.2 then which of the following are valid service IP addresses if IPAT via
IP aliasing is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3 and 192.168.20.4 and 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
5. If the left node has NICs with the IP addresses 192.168.20.1 and 192.168.21.1
and the right hand node has NICs with the IP addresses 192.168.20.2 and
192.168.21.2 then which of the following are valid service IP addresses if IPAT via
IP replacement is being used (select all that apply)?
a. (192.168.20.3 and 192.168.20.4) OR (192.168.21.3 and 192.168.21.4)
b. 192.168.20.3, 192.168.20.4, 192.168.21.3 and 192.168.21.4
c. 192.168.22.3 and 192.168.22.4
d. 192.168.23.3 and 192.168.24.3
© Copyright IBM Corporation 2004
The Impact of IPAT on Clients
After completing this topic, you should be able to:
Explain how user systems are affected by IPAT related operations
Describe what the ARP cache issue is
Explain how gratuitous ARP usually deals with the ARP cache issue
Explain three ways to deal with the ARP cache issue if gratuitous
ARP does not provide a satisfactory resolution to the ARP cache
issue:
Configure clinfo on the client systems
Configure clinfo within the cluster
Configure Hardware Address Takeover within the cluster

© Copyright IBM Corporation 2004


How Are Users Affected?
IP address moves/swaps within a node result in a short outage
Long-term connection oriented sessions typically recover
seamlessly (TCP layer deals with packet retransmission)
Resource group fallovers to a new node result in a longer outage
and sever connection-oriented sessions (long-term connections must
be reestablished, short-term connections retried)
In either case:
Short-lived TCP-based services like http and SQL queries
experience short server down outage
UDP-based services must deal with lost packets

© Copyright IBM Corporation 2004


What About the Users' Computers?
An IPAT operation renders ARP cache entries on client
systems obsolete
Client systems must (somehow) update their ARP caches

xweb (192.168.5.1) 00:04:ac:62:72:49 xweb (192.168.5.1) 00:04:ac:62:72:49

xweb (192.168.5.1) 00:04:ac:48:22:f4

xweb 192.168.5.1 (alias) 192.168.5.1 (alias) xweb


192.168.10.1 (odm) 192.168.11.1 (odm)
192.168.10.1 (odm) 192.168.11.1 (odm)
00:04:ac:62:72:49 00:04:ac:48:22:f4
00:04:ac:62:72:49 00:04:ac:48:22:f4
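
On most client platforms the ARP cache can be inspected and, if necessary, corrected
by hand, which is useful when testing; for example, on an AIX client (the host name is
an example):

# arp -a                (list the current ARP cache entries)
# arp -d xweb           (delete the stale entry for the service IP label)

The next reference to xweb then forces a fresh ARP request.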

© Copyright IBM Corporation 2004
Local or Remote Client?
If the client is remotely connected through a router, it is the
router's ARP cache which must be corrected.

ARP: ARP:
router (192.168.8.1) 00:04:ac:42:9c:e2 router (192.168.8.1) 00:04:ac:42:9c:e2

192.168.8.3 192.168.8.3
00:04:ac:27:18:09 00:04:ac:27:18:09

ARP: ARP:
xweb (192.168.5.1) 00:04:ac:62:72:49 192.168.8.1 xweb (192.168.5.1) ??? 192.168.8.1
client (192.168.8.3) 00:04:ac:27:18:09 00:04:ac:42:9c:e2 client (192.168.8.3) 00:04:ac:27:18:09 00:04:ac:42:9c:e2

192.168.5.99 192.168.8.99
00:04:ac:29:31:37 00:04:ac:29:31:37
xweb 192.168.5.1 (alias) 192.168.5.1 (alias) xweb
192.168.10.1 (odm) 192.168.11.1 (odm) 192.168.10.1 (odm) 192.168.11.1 (odm)
00:04:ac:62:72:49 00:04:ac:48:22:f4 00:04:ac:62:72:49 00:04:ac:48:22:f4

© Copyright IBM Corporation 2004


Gratuitous ARP
AIX 5L supports a feature called gratuitous ARP
AIX sends out a gratuitous (that is, unrequested) ARP update
whenever an IP address is set or changed on a NIC
Other systems on the local physical network are expected to update
their ARP caches when they receive the gratuitous ARP packet
Remember: only systems on the cluster's local physical network
must respect the gratuitous ARP packet
So ARP update problems are minimized
Required if using IPAT via aliasing
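
During cluster testing it can be reassuring to watch the gratuitous ARP packets go out
when a service IP label moves; assuming tcpdump is available, something like:

# tcpdump -i en0 arp

run on a system on the same physical network should show the unrequested ARP updates
as HACMP configures the address.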

© Copyright IBM Corporation 2004


Gratuitous ARP Support Issues
Gratuitous ARP is supported by AIX on the following network
technologies:
Ethernet (all types and speeds)
Token-Ring
FDDI
SP Switch 1 and SP Switch 2
Gratuitous ARP is not supported on ATM
Operating systems are not required to support gratuitous ARP
packets
Practically every operating system does support gratuitous ARP
Some systems (for example, certain routers) can be configured to
respect or ignore gratuitous ARP packets

© Copyright IBM Corporation 2004


What if Gratuitous ARP is Not Supported?
If the local network technology doesn't support gratuitous ARP
or there is a client system on the local physical network which
must communicate with the cluster and which does not support
gratuitous ARP packets:

Clinfo can be used on the client to receive updates of changes.


Clinfo can be used on the servers to ping a list of clients,
forcing an update to their ARP caches.
HACMP can be configured to perform Hardware Address
Takeover (HWAT).

Suggestion: Do not get involved with using either clinfo or HWAT to


deal with ARP cache issues until you've verified that there actually
are ARP issues which need to be dealt with.

© Copyright IBM Corporation 2004


Option 1: clinfo on the Client
The cluster information daemon (clinfo) provides a facility to
automatically flush the ARP cache on a client system.
In this option, clinfo must execute on the client platform
clinfo executables are supplied for AIX
clinfo source code is provided with HACMP to facilitate porting clinfo to
other platforms
clinfo uses SNMP for communications with HACMP nodes
/usr/es/sbin/cluster/etc/clhosts on the client system must contain a
list of persistent node IP labels (one for each cluster node)
clinfo.rc is invoked to flush the local arp cache

192.168.5.1 (alias) xweb


192.168.10.1 (boot) 192.168.11.1 (boot)
00:04:ac:62:72:49 00:04:ac:48:22:f4

snmpd
clinfo
clsmuxpd
clinfo.rc
clstrmgr
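
A minimal clhosts file for the two-node cluster used in this unit might simply list the
persistent node IP addresses (or their labels), one per line; the addresses below are
the persistent labels from the earlier /etc/hosts convention:

# cat /usr/es/sbin/cluster/etc/clhosts
192.168.5.29
192.168.5.31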

© Copyright IBM Corporation 2004


Option 2: clinfo From Within the Cluster
Clinfo may also be used on the cluster's nodes to force an ARP
cache update.
In this option, clinfo runs on every cluster node
If clinfo is only run on one cluster node then that node becomes a single
point of failure!
clinfo flushes local ARP cache (on the cluster node) then pings a
defined list of clients listed in /usr/es/sbin/cluster/etc/clinfo.rc
Clients pick up the new IP address to hardware address
relationship as a result of the ping request

192.168.5.1 (alias) xweb


192.168.10.1 (boot) 192.168.11.1 (boot)
00:04:ac:62:72:49 00:04:ac:48:22:f4
ping!
snmpd
clinfo
clsmuxpd

clstrmgr clinfo.rc

© Copyright IBM Corporation 2004


clinfo.rc script (extract)
This script is located under /usr/es/sbin/cluster/etc and is present on an
AIX system if the cluster.client fileset has been installed.
A separate file /etc/cluster/ping_client_list can also contain a list of client machines to
ping.

# Example:
#
# PING_CLIENT_LIST="host_a host_b 1.1.1.3"
#
PING_CLIENT_LIST=""

TOTAL_CLIENT_LIST="${PING_CLIENT_LIST}"

if [[ -s /etc/cluster/ping_client_list ]] ; then
#
# The file "/etc/ping_client_list" should contain only a line
# setting the variable "PING_CLIENT_LIST" in the form given
# in the example above. This allows the client list to be
# kept in a file that is not altered when maintenance is
# applied to clinfo.rc.
#
. /etc/cluster/ping_client_list

TOTAL_CLIENT_LIST="${TOTAL_CLIENT_LIST} ${PING_CLIENT_LIST}"
fi

#
# WARNING!!! For this shell script to work properly, ALL entries in
# the TOTAL_CLIENT_LIST must resolve properly to IP addresses or hostnames
# (must be found in /etc/hosts, DNS, or NIS). This is crucial.
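
For example, rather than editing clinfo.rc itself, the client list can be kept in
/etc/cluster/ping_client_list as the comments above describe (the host names and the
address are illustrative and must resolve on the cluster nodes):

PING_CLIENT_LIST="client1 client2 192.168.8.3"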

© Copyright IBM Corporation 2004


Option 3: Hardware Address Takeover
HACMP can be configured to swap a service IP label's
hardware address between network adapters.
HWAT is incompatible with IPAT via IP aliasing because each
service IP address must have its own hardware address and a NIC
can support only one hardware address at any given time.
Cluster implementer designates a Locally Administered Address
(LAA) which HACMP assigns to the NIC which has the service IP
label
xweb (192.168.5.1) 40:04:ac:62:72:49 xweb (192.168.5.1) 40:04:ac:62:72:49

xweb 192.168.5.1 (service) 192.168.20.1 (odm) 192.168.20.1 (odm) 192.168.5.1 (service) xweb
40:04:ac:62:72:49 00:04:ac:48:22:f4 00:04:ac:62:2e:4c 40:04:ac:62:72:49

© Copyright IBM Corporation 2004


Hardware Address Takeover (1 of 2)
HACMP can move LAAs between nodes in conjunction with an
IPAT fallover.

(Figure: the hardware address is "moved" with the service IP address. The xweb service
IP label 192.168.5.1 carries the locally administered address 40:04:ac:62:72:49;
bondarstandby is 192.168.9.1 / 00:04:ac:48:22:f4 and hudsonboot is
192.168.5.2 / 00:04:ac:48:22:f6.)

© Copyright IBM Corporation 2004


Hardware Address Takeover (2 of 2)
(Figure, upper panel: when a failed node comes back to life, the burned-in ROM address
is used on its service network adapter, while the other node continues to hold the xweb
service IP label 192.168.5.1 with its LAA 40:04:ac:62:72:49.

Figure, lower panel: after HACMP is started, the node reintegrates according to its
resource group parameters; the xweb service IP label, together with its LAA, returns to
node Bondar and Hudson reverts to its own boot and standby addresses.)

© Copyright IBM Corporation 2004


Selecting the LAA
The LAA must be unique on the cluster's physical network
The MAC address based technologies (ethernet, tokenring and
FDDI) use six byte hardware addresses of the form:
xx.xx.xx.xx.xx.xx
The factory-set MAC address of the NIC will start with 0, 1, 2 or 3
A MAC address that starts with 0, 1, 2 or 3 is called a Globally
Administered Address because it is assigned to the NIC's vendor
by a central authority
Incrementing this first digit by 4 transforms the GAA into a LAA
which will be unique worldwide (unless someone has already used
the same GAA to create an LAA which isn't likely since GAAs are
unique worldwide)

© Copyright IBM Corporation 2004


Selecting the LAA Example
Use netstat -in to get the GAAs of the NICs from one of your
cluster nodes:
# netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 0.60.94.9.36.6b 40032 0 22471 0 0
en0 1500 192.168.15 192.168.15.31 40032 0 22471 0 0
en1 1500 link#3 2.7.1.20.a3.73 811733 0 413441 0 0
en1 1500 192.168.16 192.168.16.31 811733 0 413441 0 0

Make sure each number is two digits long by prepending 0s as


necessary:
02.07.01.20.a3.73
Verify that the first digit is 0, 1, 2 or 3 (if it isn't then pick a different
NIC's hardware address and start over)
Add 4 to the first digit:
42.07.01.20.a3.73
Use this as the LAA for one of your service IP addresses
(repeat using a different vendor-assigned GAA for each
service IP address that needs an LAA)

© Copyright IBM Corporation 2004


Hardware Address Takeover Issues
Do not enable the ALTERNATE hardware address field in smit
devices.
Causes the adapter to boot on your chosen LAA rather than the
burned in ROM address.
Causes serious communications problems and puts the cluster into
an unstable state.
The correct method is to enter your chosen LAA into the SMIT HACMP
menus (remove the periods or colons before entering it into the
field).
The Token-Ring documentation states that the LAA must start with
42
The FDDI documentation states that the first octet (digit) of the first
byte of the LAA must be 4, 5, 6 or 7 (which is compatible with the
method for creating LAAs described earlier)
Token-Ring adapters do not release the LAA if AIX crashes.
AIX must be set to reboot automatically after a system
crash (see smit chgsys).
© Copyright IBM Corporation 2004
Checkpoint
1. True or False?
Clients are required to exit and restart their application after a cluster outage.

2. True or False?
All client systems are potentially directly affected by the ARP cache issue.

3. True or False?
clinfo must not be run both on the cluster nodes and on the client systems.

4. Use the LAA generation technique described earlier to generate an LAA for
each of the following GAA addresses (all but one of these are taken from
real ethernet cards):
00.20.ed.76.fb.15 ____________________
0.4.ac.17.19.64 ____________________
0.6.29.ac.46.8 ____________________
12.7.1.71.1.6 ____________________

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Clients are required to exit and restart their application after a cluster outage.

2. True or False?
All client systems are potentially directly affected by the ARP cache issue.

3. True or False?
clinfo must not be run both on the cluster nodes and on the client systems.

4. Use the LAA generation technique described earlier to generate an LAA for
each of the following GAA addresses (all but the last one of these are
taken from real ethernet cards):
00.20.ed.76.fb.15 40.20.ed.76.fb.15
0.4.ac.17.19.64 40.04.ac.17.19.64
0.6.29.ac.46.8 40.06.29.ac.46.08
12.7.1.71.1.6 52.07.01.71.01.06

© Copyright IBM Corporation 2004


Unit Summary
Having completed this unit, you should be able to:
Understand how HACMP uses networks
Describe the HACMP networking terminology
Explain and set up IP Address Takeover (IPAT)
Configure an IP network for HACMP
Configure a non-IP network
Explain how client systems are likely to be affected by failure
recovery
Minimize the impact of failure recovery on client systems

© Copyright IBM Corporation 2004


Welcome to:
HACMP Architecture

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit Objectives
After completing this unit, you should be able to:
Describe the installation process for HACMP 5.2
List and explain the purpose of the major HACMP 5.2
components
Describe the HACMP events concept
Describe the resource group definition process in HACMP 5.2

© Copyright IBM Corporation 2004


Preparing for Cluster Configuration
After completing this topic, you should be able to:
Outline the steps required to implement a cluster
Know how to install HACMP 5.2
Be aware of the prerequisites for HACMP 5.2
Know when you are ready to start actual cluster configuration

© Copyright IBM Corporation 2004


Steps for Successful Implementation
HACMP should not be installed upon a system which is in production.

Step Step Description Comments


1 Plan Use planning worksheets and documentation.
2 Assemble hardware Install adapters, connect shared disk and network.
3 Install AIX Ensure you update to the latest maintenance level.
4 Configure networks Requires detailed planning.
5 Configure shared storage Set up shared volume groups and filesystems.
6 Install HACMP Install on all nodes in the cluster (don't forget to install latest fixes).
7 Reboot each node Required after installing or patching HACMP.
8 Define/discover the cluster topology Review what you end up with to make sure that it is what you expected.
9 Configure application servers You will need to write your start and stop scripts.
10 Configure cluster resources Refer to your planning worksheets.
11 Synchronize the cluster Ensure you "actually" do this.
12 Start HACMP Watch the console for messages.
13 See comment Skip this step if you are superstitious.
14 Test the cluster Document your tests and results.

© Copyright IBM Corporation 2004


Where Are We in the Implementation?
Plan for network, storage, and application
Eliminate single points of failure
Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (ip interfaces, /etc/hosts, non-ip networks and devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP IP and non-IP networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP
© Copyright IBM Corporation 2004
Before All Else Fails . . .

Study the appropriate HACMP manuals:

HACMP for AIX Planning and Installation Guide SC23-4861


Contains Planning Worksheets in Appendix A
Release notes: /usr/lpp/cluster/docs

© Copyright IBM Corporation 2004


What Is In the Box?
HACMP 5.2 (product number 5765-F62)
HACMP 5.1 (product number 5765-F62)
Only one variant which is roughly equivalent to HACMP/ES CRM
in HACMP 4.5 terminology
Marketing ends: 31 Mar 05, Service ends: 1 Sep 06
HACMP 4.5 (product number 5765-E54)
Is available as four distinct variants, each of which comes on its
own CD-ROM:
HACMP classic
HACMP classic CRM
HACMP/ES
HACMP/ES CRM
Make sure that you've ordered the correct variant and that you've
got the correct variant!
Marketing ends: 31 Dec 03, Service ends: 1 Sep 05

© Copyright IBM Corporation 2004


Install the HACMP Filesets (1 of 2)
Here's a listing of the HACMP 5.2 CD:
bos.clvm.enh.5.2.0.31.U
bos.rte.lvm.5.2.0.31.U

cluster.adt.es.5.2.0.0.I
cluster.doc.en_US.es.5.2.0.0.I
cluster.es.5.2.0.0.I
cluster.es.cfs.5.2.0.0.I
cluster.es.clvm.5.2.0.0.I
cluster.es.cspoc.5.2.0.0.I
cluster.es.plugins.5.2.0.0.I
cluster.es.worksheets.5.2.0.0.I
cluster.hativoli.5.2.0.0.I (requires Tivoli)
cluster.haview.4.5.0.0.I (requires Netview)
cluster.license.5.2.0.0.I
cluster.man.en_US.es.5.2.0.0.I
cluster.msg.en_US.cspoc.5.2.0.0.I
cluster.msg.en_US.es.5.2.0.0.I
cluster.msg.en_US.hativoli.5.2.0.0.I (requires Tivoli)
cluster.msg.en_US.haview.4.5.0.0.I (requires Netview)
ALSO cluster.msg.En*, Ja*, ja*

rsct.basic.hacmp.2.3.3.0.U
rsct.basic.hacmp.2.3.3.1.U
rsct.basic.rte.2.3.3.0.U
rsct.basic.rte.2.3.3.1.U
rsct.basic.sp.2.3.3.0.U
rsct.compat.basic.hacmp.2.3.3.0.U
rsct.compat.basic.rte.2.3.3.0.U
rsct.compat.basic.sp.2.3.3.0.U
rsct.compat.clients.hacmp.2.3.3.0.U
rsct.compat.clients.rte.2.3.3.0.U
rsct.compat.clients.sp.2.3.3.0.U
rsct.core.auditrm.2.3.3.0.U
rsct.core.errm.2.3.3.0.U
rsct.core.errm.2.3.3.1.U

Your requirements may vary!

© Copyright IBM Corporation 2004


Install the HACMP Filesets (2 of 2)
cluster.es:
cluster.es.server.rte       5.2.0.0  ES Base Server Runtime
cluster.es.client.lib       5.2.0.0  ES Client Libraries
cluster.es.client.rte       5.2.0.0  ES Client Runtime
cluster.es.client.utils     5.2.0.0  ES Client Utilities
cluster.es.server.testtool  5.2.0.0  ES Cluster Test Tool
cluster.es.server.diag      5.2.0.0  ES Server Diags
cluster.es.server.events    5.2.0.0  ES Server Events
cluster.es.server.utils     5.2.0.0  ES Server Utilities
cluster.es.server.cfgast    5.2.0.0  ES Two-Node Configuration Assistant
cluster.es.server.wsm       5.2.0.0  Web based Smit
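
One hedged example of installing from the product CD with installp (the
exact fileset list depends on your requirements; check the release notes
first):

# installp -acgXd /dev/cd0 cluster.es cluster.es.cspoc cluster.license \
    cluster.doc.en_US.es cluster.man.en_US.es cluster.msg.en_US.es
# lppchk -v                  verify that nothing was left in an inconsistent state

Remember that each node must be rebooted after installing or patching HACMP.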

© Copyright IBM Corporation 2004


Don't Forget the Prerequisites
Install the correct version of AIX:
AIX 5L 5.1 ML5 with RSCT 2.2.1.36 or higher
AIX 5L 5.2 ML3 with RSCT 2.3.3.1 or higher
AIX 5.3
Install the correct level of PSSP (on a PSSP managed cluster)
Install the otherwise optional AIX filesets:
bos.adt.lib bos.adt.libm bos.adt.syscalls
bos.net.tcp.clients bos.net.tcp.server bos.rte.SRC
bos.rte.libc bos.rte.libcfg bos.rte.libcur
bos.rte.libpthreads bos.rte.odm
Other prerequisites may exist depending on your plans:
Clusters with (enhanced) concurrent volume groups require
bos.rte.lvm 5.1.0.25 or higher
Fast disk takeover requires AIX 5.2 and bos.clvm.enh 5.2.0.11 or
higher
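
A quick way to confirm the prerequisites on each node before installing
(a sketch only; add whatever filesets your own plans require):

# oslevel -r                                    confirm the AIX maintenance level (for example 5200-03)
# lslpp -l rsct.basic.rte rsct.core.rmc         confirm the RSCT level
# lslpp -l bos.rte.lvm bos.clvm.enh bos.adt.syscalls bos.net.tcp.server
                                                confirm the optional AIX filesets and LVM levels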

© Copyright IBM Corporation 2004


Verify That You Have the Required APARs
PTF1
The following APARs are required as noted:
APAR IY42782 (AIX 5.1)
APAR IY55542 (snmp v3)
APAR IY55105, IY55648 (interoperability with 4.5, 5.1)
APAR IY55069 (AIX 5.1) or IY55594 (AIX 5.2)
Enhanced concurrent volume group support
APAR IY60340 (AIX 5.3)
The above lists are almost certainly out of date. Check

http://www-1.ibm.com/servers/eserver/support/pseries/fixes/

for the latest APARs.
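
The instfix command reports whether a given APAR is already installed on a
node, for example:

# instfix -ik IY55542
    All filesets for IY55542 were found.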

© Copyright IBM Corporation 2004


Some Final Things to Check
The same versions of AIX and HACMP (including patch levels) are installed on all
nodes.
Each node must be rebooted once HACMP has been installed.
Correct filesets and prerequisites have been installed.
Documentation is installed and accessible through a Web browser or Adobe's Acrobat
Reader (acroread on UNIX systems) without requiring a cluster node to be running.
That the /etc/hosts file is configured on all nodes and contains all IP labels for all nodes.
Name resolution works for all IP labels on all nodes.
Use the host command to test this.
Ensure name to IP address translation is the same for all IP labels on all nodes.
IP and non-IP networks are configured.
The subnet mask must be identical on all interfaces known to the cluster.
All interfaces on each node are configured to be in different subnets
See earlier IPAT discussion for more subnet requirements.
Check that a route exists to all logical networks from all nodes.
Check that nodes can ping between all interfaces on the same logical subnet.
Shared storage is configured and recognized the same from all nodes (some folks disagree).
You have a written plan describing what you will configure and how you will test it!
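
Several of these checks can be run ahead of time from the command line; for
example, using the lab cluster's IP labels from later in this course
(substitute your own):

# host bondar-if1               must resolve to the same address on every node
# host 192.168.15.29            reverse resolution should return the same label everywhere
# ping -c 1 hudson-if1          each node should reach its peers on each logical subnet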

© Copyright IBM Corporation 2004


Checkpoint
1. What is the first step in implementing a cluster?
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap

2. True or False?
HACMP 5.2 is compatible with any version of AIX 5.x.

3. True or False?
Each cluster node must be rebooted after the HACMP software is installed.

4. True or False?
You should take careful notes while you install and configure HACMP so that you know what to
test when you are done.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. What is the first step in implementing a cluster?*
a. Order the hardware
b. Plan the cluster
c. Install AIX and HACMP
d. Install the applications
e. Take a long nap

2. True or False?
HACMP 5.2 is compatible with any version of AIX 5.x.

3. True or False?
Each cluster node must be rebooted after the HACMP software is installed.

4. True or False?
You should take careful notes while you install and configure HACMP so that you know what to
test when you are done.

*There is some dispute about whether the correct answer is b or e although a disconcerting
number of clusters are implemented in the order a, b, c, d, e (how can you possibly order the
hardware if you do not yet know what you are going to build?) or even just a, c, d (cluster
implementers who skip step b rarely have time for long naps).

© Copyright IBM Corporation 2004


HACMP 5.2 Components
After completing this topic, you should be able to:
Describe the major components of HACMP 5.2
Describe the role played by RSCT
Describe how heartbeat rings are organized

© Copyright IBM Corporation 2004


The Layered Look
Here are the layers of software on an HACMP 5.2 cluster node:

Application Layer
Contains the highly available applications that
use HACMP services

HACMP Layer
Provides highly available services to
applications

RSCT, RMC Layer


Provides monitoring, event management and
coordination of subsystems for HACMP clusters

AIX Layer
Provides operating system services

LVM Layer
Manages disk space at the logical level

TCP/IP Layer
Manages communication at the logical level

© Copyright IBM Corporation 2004


HACMP Components and Features
The HACMP software has the following components:
Cluster Manager (+)
Cluster Secure Communication Subsystem (*)
IBM RS/6000 Reliable Scalable Cluster Technology Availability
Services (RSCT)
RMC (%)
Cluster SMUX Peer and SNMP Monitoring Programs (+)
Cluster Information Program (+)
Highly Available NFS Server (+)
Shared External Disk Access (+)
Concurrent Resource Manager (**+)
Cluster Lock Manager removed %

% new in HACMP 5.2


*new in HACMP 5.1
**optional in HACMP 4.x
+common to HACMP/ES 4.5, HACMP "classic" 4.5 and HACMP 5.1

© Copyright IBM Corporation 2004


Cluster Manager
A daemon which runs on each cluster node
Primarily responsible for responding to unplanned events:
Recovering from software and hardware failures
Responding to user-initiated events:
Request to online/offline a node
Request to move/online/offline a resource group
And so forth
A client to RSCT
Keeps Cluster SMUX Peer daemon aware of cluster status
Implemented by the subsystem clstrmgrES

© Copyright IBM Corporation 2004


Cluster Secure Communication Subsystem
Introduced in HACMP 5.1.
Provides a single, common communication infrastructure for all
HACMP related communication between nodes.
HACMP 5.2 provides two authentication security options:
Connection Authentication
Standard
Default security level.
Implemented directly by cluster communication daemon (clcomd).
Uses /usr/es/sbin/cluster/rhosts file to determine legitimate partners.
Kerberos (SP only)
Kerberos used with systems managed by PSSP (that is, SP).
Virtual Private Networks (VPN) using persistent labels.
VPNs are configured within AIX.
HACMP is then configured to use VPNs for all inter-node communication.
Message Authentication and/or Message Encryption
HACMP provides methods for key distribution.
Implemented using a new subsystem -- clcomdES

© Copyright IBM Corporation 2004


Cluster Communication Daemon (clcomd)
Completely replaces the various ad hoc inter-node communication
mechanisms used by earlier versions of HACMP (for example, rsh).
/.rhosts files are no longer required.
Caches coherent copies of other nodes' ODMs.
Maintains long term socket connections (avoids repeated socket
startup/shutdown overheads).
Implements the principle of least privilege:
Provides other nodes with just as much privilege as required.
Nodes no longer have or require root access on each other.
Started via /etc/inittab entry.
Managed by the SRC (startsrc, stopsrc, refresh all work).
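
Because clcomd is under SRC control, it can be checked and refreshed like
any other subsystem, for example:

# lssrc -s clcomdES             confirm the cluster communication daemon is active
# refresh -s clcomdES           pick up changes without stopping the daemon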

© Copyright IBM Corporation 2004


clcomd Standard Connection Authentication
Source IP addresses for each incoming session are checked
against:
/usr/es/sbin/cluster/etc/rhosts
HACMP adapter ODM
HACMP node ODM
General Approach:
Block all communication if /usr/es/sbin/cluster/etc/rhosts is missing
If /usr/es/sbin/cluster/etc/rhosts is empty then assume cluster is
being configured
Accept connection and add entry to the rhosts file
At synchronization use HACMP ODM to build set of entries for the
rhosts file
In general connection authentication is done as follows:
Connect back and ask for the hostname
Connection is considered authentic if the hostname matches else
connection is rejected

© Copyright IBM Corporation 2004


clcomd Authentication at Initial
Configuration
/usr/es/sbin/cluster/etc/rhosts is the key:
Nodes with an empty /usr/es/.../rhosts can be asked to join a cluster
Once the cluster is formed, participating nodes will not have an
empty /usr/es/.../rhosts file
Nodes with a non-empty /usr/es/.../rhosts file will refuse all HACMP
related communication with nodes not listed in their rhosts file
To reconfigure a cluster:
Empty /usr/es/sbin/.../rhosts on all cluster nodes
Not usually necessary unless IP addresses have changed
Initiate configuration from one of the cluster nodes
To improve security prior to initial configuration:
List the IP addresses of future cluster nodes in each node's
/usr/es/sbin/.../rhosts file
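
For example, on each future node of this course's lab cluster the
pre-populated file might simply contain the nodes' interface addresses
(these are the addresses used later in this unit; use your own):

# cat /usr/es/sbin/cluster/etc/rhosts
192.168.15.29
192.168.16.29
192.168.15.31
192.168.16.31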

© Copyright IBM Corporation 2004


RSCT
Originally part of PSSP in the RS/6000 SP environment
Provides:
Scalability to large clusters
Notification of software failures to remote subsystems
Coordinate recovery and synchronization of changes across the
cluster
Key components:
Event Management
Creates events by matching knowledge of cluster's state with
expressions of interest by RSCT clients
Group Services
Coordinates/monitors state changes of an application in the cluster
Topology Services
Uses heartbeats to monitor nodes, networks and network adapters
Diagnoses failures
Coordinates reintegration of recovered components
RMC: Resource Monitoring and Control
HACMP's Cluster Manager is an RSCT client/application
© Copyright IBM Corporation 2004
HACMP from an RSCT Perspective

[Diagram: HACMP 5.2 from an RSCT perspective. Resource monitors (an AIX process monitor, a database resource monitor and a switch resource monitor) feed RMC (ctrmc). RSCT Topology Services gathers processor and LAN heartbeats; Group Services turns them into membership information and coordinates recovery. The HACMP Cluster Manager, acting as recovery driver and RMC client, runs recovery programs which invoke the HACMP event scripts; RSCT commands are also available for querying the subsystems.]
© Copyright IBM Corporation 2004


Heartbeat Rings

[Diagram: a heartbeat ring formed by the interfaces 25.8.60.2, 25.8.60.3, 25.8.60.4, 25.8.60.5 and 25.8.60.6.]

Heartbeat goes to next lower node in order of IP address only.


© Copyright IBM Corporation 2004
HACMP's SNMP Support
HACMP uses SNMP to provide:
Notification of cluster events
Cluster configuration/state information
SNMP support provided by HACMP's SMUX Peer daemon
(clsmuxpd)
clsmuxpd's role:
Kept aware of cluster's configuration and state by Cluster Manager
Generates SNMP traps when notified of events by Cluster
Manager
Responds to SNMP queries for HACMP information
A client (smux peer) of AIX's snmpdv3 in HACMP 5.2
HACMP 5.2 supports snmpd v1 and v3 in AIX 5.2
Implemented as the subsystem clsmuxpdES

© Copyright IBM Corporation 2004


Cluster Information Daemon (clinfo)
An SNMP-aware client to clsmuxpd
Used by clstat
Provides a cluster information API
Focused on providing HACMP cluster information
Easier to work with than the SNMP APIs
See HACMP for AIX Programming Client Applications manual for
more information
Can be run within the cluster
As part of strategy for dealing with ARP cache issues
As API provider to customer-written utility programs
Can be run on non-cluster nodes
As part of strategy for dealing with ARP cache issues
As API provider to customer-written cluster monitoring tools
Source code provided on the HACMP product CD-ROM
Implemented as the clinfoES subsystem
© Copyright IBM Corporation 2004
Highly Available NFS Server
Replaces now withdrawn HANFS stand-alone product
Preserves file locks and dupcache across fallovers
Cluster administrator can:
Specify a network to be used when a node is acting as an NFS
client (typically to another node in the same cluster)
Define NFS exports and mounts at the directory level
Specify export options for NFS-exported directories and
filesystems
NFS server support is limited to two node clusters
May be used with any resource group policy except start on all
nodes

© Copyright IBM Corporation 2004


Shared External Disk Access
Provides two styles of shared disk support:
Serially reusable shared disks:
Varied on by one node at a time
HACMP coordinates handover (during controlled RG moves) and
takeover (during recovery from node failures)
LVM or RSCT responsible for ensuring disks are not shared in real time
Supported with nonconcurrent mode or enhanced concurrent mode
volume groups (running in nonconcurrent mode)
Concurrent access shared disks:
Typically used by concurrent applications writing to raw logical volumes
Two variants:
SSA Concurrent Mode Volume Groups
Enhanced Concurrent Mode Volume Groups running in concurrent mode
Concurrent volume groups require the clvm fileset in HACMP 5.x

© Copyright IBM Corporation 2004


Cluster Startup
[Diagram: cluster startup sequence. The rmc and clcomd subsystems are started from /etc/inittab. Starting the node through SMIT runs the rc.cluster script, which starts Topology Services, Group Services and the event management services; Topology Services builds its heartbeat configuration from the machines.lst file, generated from the HACMP ODM, which lists each node's interface addresses for every network (for example en0 addresses on an ether network such as SPether and css0 addresses on an SP Switch network). The Cluster Manager is then started, joins Group Services, runs its recovery program, and the node is up.]
© Copyright IBM Corporation 2004
Application Monitoring

[Diagram: application monitoring. The HACMP/ES Cluster Manager invokes run_clappmond, which starts clappmond for an application (SAP in this example). Process monitoring uses RSCT Event Management to watch one or many application processes; custom monitoring runs a user-defined monitoring method. A monitor may be a startup monitor or a long-running monitor.]
© Copyright IBM Corporation 2004


Application Availability Analysis Tool
# smit cl_app_AAA.dialog
Application Availability Analysis

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Select an Application [] +
* Begin analysis on YEAR (1970-2038) [] #
* MONTH (01-12) [] #
* DAY (1-31) [] #
* Begin analysis at HOUR (00-23) [] #
* MINUTES (00-59) [] #
* SECONDS (00-59) [] #
* End analysis on YEAR (1970-2038) [2003] #
* MONTH (01-12) [12] #
* DAY (1-31) [31] #
* End analysis at HOUR (00-23) [20] #
* MINUTES (00-59) [54] #
* SECONDS (00-59) [02] #

F1=Help F2=Refresh F3=Cancel F4=List


Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Migrating to HACMP 5.2 (Overview)
Node-by-node migration supported from:
HACMP 4.5 (classic or ES)
HACMP 5.1
Steps to migrate from older versions (HACMP 4.4.0 or later):
1. Take a snapshot, store in non HACMP directory.
2. De-install HACMP.
3. Upgrade AIX (and anything else that needs upgrading).
4. Install HACMP 5.2.
5. Run clconvert_snapshot, apply snapshot.
6. Add custom scripts.
7. Test the cluster.
clconvert_snapshot usage gets complicated if existing cluster is
old enough. It may be easiest to configure cluster using HACMP
smit menus based on (up-to-date) cluster documentation.
HACMP 4.5 is oldest interoperable release with HACMP 5.2
This is an OVERVIEW . . . refer to the HACMP 5.2 manuals and
consider attending AU57 - HACMP Administration II: Maintenance
and Migration
© Copyright IBM Corporation 2004
Checkpoint
1. True or False?
HACMP 5.2 allows multiple application monitors for the same application.

2. Which of the following are true about HACMP 5.2 (select all that apply):
a. RMC used in HACMP only at release 5.2
b. Cluster Lock Manager enhanced
c. Cluster Information Daemon removed
d. Enhanced Concurrent Volume Group supported in nonconcurrent mode
e. Clcomd provides message authentication in HACMP only in release 5.2

3. True or False?
Migrating from older versions of HACMP is a routine task which generally requires little planning.

4. True or False?
The cluster communication daemon (clcomd) eliminates the need for /.rhosts files.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
HACMP 5.2 allows multiple application monitors for the same application.

2. Which of the following are true about HACMP 5.2 (select all that apply):
a. RMC used in HACMP only at release 5.2
b. Cluster Lock Manager enhanced
c. Cluster Information Daemon removed
d. Enhanced Concurrent Volume Group supported in nonconcurrent mode
e. Clcomd provides message authentication in HACMP only in release 5.2

3. True or False?
Migrating from older versions of HACMP is a routine task which generally requires little planning.

4. True or False?
The cluster communication daemon (clcomd) eliminates the need for /.rhosts files.

© Copyright IBM Corporation 2004


HACMP Events
After completing this topic, you should be able to:
Describe what an HACMP event is
Explain what happens when HACMP processes an event
Describe the sequence of events when:
The first node starts in a cluster
A new node joins an existing cluster
A node leaves a cluster voluntarily

© Copyright IBM Corporation 2004


What is an HACMP Event?
An HACMP event is an incident of interest to HACMP:
A node joins the cluster
A node crashes
A NIC fails
A NIC recovers
Cluster administrator requests a resource group move
Cluster administrator requests a configuration change (synchronization)
An HACMP event script is a script invoked by a recovery program to
perform the recovery function required.
node_up
node_down
fail_interface
join_interface
rg_move
reconfig_topology_start

© Copyright IBM Corporation 2004


HACMP Event

[Diagram: how an HACMP event is processed. The HACMP Cluster Manager, informed by RMC (ctrmc), Group Services/ES and Topology Services/ES, looks the event up in the rules.hacmprd file; each event definition (for example TE_JOIN_NODE) names a recovery program such as /usr/sbin/cluster/events/node_up.rp. The recovery program runs recovery commands, which are the HACMP event scripts. The resource variable and instance vector fields of the rules file are used only for event manager events.]
© Copyright IBM Corporation 2004


Event Processing
[Flowchart: processing of an HACMP event. The notify command (if defined) runs, then each pre-event script (1..n), then the HACMP event script itself. If the event script fails, a recovery command can be run and the event retried while the recovery counter is greater than 0; if the counter reaches 0 the event processing fails ("Boom!"). On success (RC=0) the post-event scripts (1..n) run, followed by the notify command again.]

© Copyright IBM Corporation 2004


Types of HACMP Events
Primary events (called by clstrmgrES recovery programs):
site_up, site_up_complete
site_down, site_down_complete
site_merge, site_merge_complete
node_up, node_up_complete
node_down, node_down_complete
network_up, network_up_complete
network_down, network_down_complete
swap_adapter, swap_adapter_complete
swap_address, swap_address_complete
fail_standby
join_standby
fail_interface
join_interface
rg_move, rg_move_complete
rg_online
rg_offline
event_error
config_too_long
reconfig_topology_start
reconfig_topology_complete
reconfig_resource_release
reconfig_resource_acquire
reconfig_resource_complete
reconfig_configuration_dependency_acquire
reconfig_configuration_dependency_complete
reconfig_configuration_dependency_release
node_up_dependency
node_up_dependency_complete
node_down_dependency
node_down_dependency_complete
migrate, migrate_complete

Secondary events (called by other events):
node_up_local, node_up_remote
node_down_local, node_down_remote
node_up_local_complete, node_up_remote_complete
node_down_local_complete, node_down_remote_complete
acquire_aconn_service
acquire_service_addr
acquire_takeover_addr
start_server
stop_server
get_disk_vg_fs
get_aconn_rs
release_service_addr
release_takeover_addr
release_vg_fs
release_aconn_rs
swap_aconn_protocols
releasing
acquiring
rg_up
rg_down
rg_error
rg_temp_error_state
rg_acquiring_secondary
rg_up_secondary
rg_error_secondary
resume_appmon
suspend_appmon
© Copyright IBM Corporation 2004
First Node Starts HACMP

[Diagram: the first node starts HACMP. The Event Manager runs 1) node_up, which calls node_up_local and in turn acquire_service_addr, acquire_takeover_addr and get_disk_vg_fs; then 2) node_up_complete, which calls node_up_local_complete and start_server, which runs the application start script.]
© Copyright IBM Corporation 2004
Another Node Joins the Cluster

[Diagram: a second node joins the cluster; the two cluster managers exchange messages through their Event Managers. 1) node_up on the node already running: node_up_remote calls stop_server (run stop script), release_takeover_address and release_vg_fs for resources belonging to the joining node. 2) node_up on the joining node: node_up_local calls acquire_service_address, acquire_takeover_address and get_disk_vg_fs. 3) node_up_complete on the running node: node_up_remote_complete. 4) node_up_complete on the joining node: node_up_local_complete and start_server, which runs the application start script.]
© Copyright IBM Corporation 2004
Node Leaves the Cluster (Stopped)

[Diagram: a node leaves the cluster when cluster services are stopped on it; in this example the surviving node takes over its resources. On the stopping node, node_down runs node_down_local, which calls stop_server (run stop script), release_takeover_addr, release_vg_fs and release_service_addr; node_down_complete then runs node_down_local_complete, after which the cluster manager exits. On the surviving node, node_down runs node_down_remote, which calls acquire_service_addr, acquire_takeover_addr and get_disk_vg_fs; node_down_complete then runs node_down_remote_complete and start_server.]

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
HACMP 5.x supports a maximum of five pre and five post events per HACMP event.

2. Which of the following are examples of primary HACMP events


(select all that apply)?
a. node_up
b. node_up_local
c. node_up_complete
d. start_server
e. rg_up

3. When a node joins an existing cluster, what is the correct sequence for
these events?
a. node_up on new node, node_up on existing node, node_up_complete on new node,
node_up_complete on existing node
b. node_up on existing node, node_up on new node, node_up_complete on new node,
node_up_complete on existing node
c. node_up on new node, node_up on existing node, node_up_complete on existing node,
node_up_complete on new node
d. node_up on existing node, node_up on new node, node_up_complete on existing node,
node_up_complete on new node

4. True or False?
Checkpoint questions are boring.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
HACMP 5.x supports a maximum of five pre and five post events per HACMP event.

2. Which of the following are examples of primary HACMP events


(select all that apply)?
a. node_up
b. node_up_local
c. node_up_complete
d. start_server
e. rg_up

3. When a node joins an existing cluster, what is the correct sequence for
these events?
a. node_up on new node, node_up on existing node, node_up_complete on new node,
node_up_complete on existing node
b. node_up on existing node, node_up on new node, node_up_complete on new node,
node_up_complete on existing node
c. node_up on new node, node_up on existing node, node_up_complete on existing node,
node_up_complete on new node
d. node_up on existing node, node_up on new node, node_up_complete on existing node,
node_up_complete on new node

4. True or False?
Checkpoint questions are boring.

© Copyright IBM Corporation 2004


Defining Resource Group Behavior
After completing this topic, you should be able to:
Define the Resource Group behavior policies
Define startup, fallover and fallback policies
Define run-time choices
Explain how the policies differ
Describe Dependent behavior
Explain why you might use each type
Describe how HACMP 5.2 policies relate to previous
releases of HACMP

© Copyright IBM Corporation 2004


Resource Group Policies
Startup Policy
Additional run-time settling time policy
Fallover Policy
Choosing a run-time dynamic node policy
Fallback Policy
Additional run-time fallback timer policy

© Copyright IBM Corporation 2004


Startup Policy
Online on Home Node Only
Online on First Available Node
Run time Settling Time may be set
Online Using Distribution Policy
Online on All Available Nodes

© Copyright IBM Corporation 2004


Online On All Available Nodes
Referred to as concurrent mode in previous HACMP releases
Resource group runs simultaneously on as many nodes as are
currently available
Node failure results in loss of node's processing power
Node recovery results in addition of node's processing power
Resource group restrictions:
No JFS or JFS2 filesystems (only raw logical volumes)
No service IP Labels / Addresses (which means no IPAT)
Requires applications which have been explicitly designed to
operate in concurrent access mode (provide own lock management)
Quite uncommon, but such resource groups have the potential to provide
essentially zero downtime, as the likelihood that at least one node is up
at any given time is (presumably) quite high

© Copyright IBM Corporation 2004


Fallover Policy
Fallover To Next Priority Node In The List
Cannot be used with All nodes startup policy
Fallover Using Dynamic Node Priority
Cannot be used with All nodes startup policy
Bring Offline (On Error Node)
Normally used with startup policy Online On All Available Nodes
Cannot be used with Distribution startup policy

© Copyright IBM Corporation 2004


Fallback Policy
Fallback To Higher Priority Node In The List
Can use a run time Delayed Fallback Timer preference
Cannot be used with Distribution startup policy
Never Fallback

© Copyright IBM Corporation 2004


Dependent Resource Groups

[Diagram: a three-level dependency chain: a parent resource group, a middle resource group that is both child and parent, and a child resource group.]

One resource group can be the parent of another resource group


Parent will be brought online before child
Parent will be brought offline after child
Parent and child may be on same or different nodes
Three levels of dependency supported
All resource groups processed serially if dependency used
Cannot manually move parent if child online

© Copyright IBM Corporation 2004


Migrating Resource Groups

Old resource group type                 Startup         Fallover            Fallback

Cascading (defaults)                    On Home         To Next             To Higher
Cascading + CWOF                        On Home         To Next             Never
Cascading + Inactive takeover           On First        To Next             To Higher
Cascading + Inactive takeover + CWOF    On First        To Next             Never
Rotating                                Distribution    To Next             Never
Concurrent                              On All          Offline on error    Never

© Copyright IBM Corporation 2004


Checkpoint
1. What startup, fallover, fallback policy would be the best to use for a 2 node
mutual takeover cluster using IPAT assuming that there are performance
problems if both applications are running on the same node?
a. home, next, never
b. first, next, higher
c. distribution, next, never
d. all, error, never
e. home, next, higher

2. True or False?
HACMP 5.2 does not support choosing Cascading as a resource group type.

3. Custom resource groups were introduced in:


a. HACMP/ES 4.5
b. HACMP "classic" 4.5
c. HACMP 5.1

4. True or False?
Resource groups support IPAT via IP replacement in HACMP 5.2.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. What startup, fallover, fallback policy would be the best to use for a 2 node
mutual takeover cluster using IPAT assuming that there are performance
problems if both applications are running on the same node?
a. home, next, never
b. first, next, higher
c. distribution, next, never
d. all, error, never
e. home, next, higher

2. True or False?
HACMP 5.2 does not support choosing Cascading as a resource group type.

3. Custom resource groups were introduced in:


a. HACMP/ES 4.5
b. HACMP "classic" 4.5
c. HACMP 5.1

4. True or False?
Resource groups support IPAT via IP replacement in HACMP 5.2.

© Copyright IBM Corporation 2004


Unit Summary
Having completed this unit, you should be able to:
Describe the installation process for HACMP 5.2
List and explain the purpose of the major HACMP 5.2
components
Describe the HACMP events concept
Describe the resource group definition process in HACMP 5.2

© Copyright IBM Corporation 2004


Welcome to:
Cluster Installation and Configuration

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit Objectives
After completing this unit, you should be able to:
Configure HACMP 5.2
Use Standard and Extended Configuration paths
Two-Node Cluster Configuration Assistant
Configure HACMP Topology to include:
IP-based networks enabled for address takeover via both alias and
replacement
Non-IP networks (rs232, tmssa, diskhb)
Hardware Address Takeover
Configure HACMP Resources:
Create resource groups using startup, fallover, and fallback policies
Add and remove resource groups and nodes on an existing cluster
Take a snapshot
Remove a cluster
Start and stop the cluster on one or more cluster nodes

© Copyright IBM Corporation 2004


Configuring a Two-Node Cluster
After completing this topic, you should be able to:
Configure a two-node HACMP cluster in a primary/standby
configuration using the Two-Node Configuration Assistant:
A single resource group in primary/standby configuration using
An application server with an enhanced concurrent mode volume group
An IP network using IPAT via aliasing
A non-IP network via heartbeat over disk
Configure a two node HACMP cluster in mutual takeover
configuration
Start and stop HACMP on one or more cluster nodes

© Copyright IBM Corporation 2004


What Are We Going to Achieve?
Our aim is to configure a two-node cluster with two resource
groups in a mutual takeover configuration.
We'll start by creating a two-node standby configuration with one
resource group called xwebserver_group which uses bondar as its
home (primary) node and hudson as its backup node
We're going to use the two-node configuration assistant path for this
configuration.

[Diagram: two-node cluster, bondar and hudson, with shared disk.]

© Copyright IBM Corporation 2004


Where Are We in the Implementation?
Plan for network, storage, and application
Eliminate single points of failure
Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP networks and devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP ip and non-ip networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP
© Copyright IBM Corporation 2004
The Topology Configuration
Here's the key portion of the /etc/hosts file that we'll be using in this unit:
192.168.15.29 bondar-if1 # bondar's first interface IP label
192.168.16.29 bondar-if2 # bondar's second interface IP label
192.168.5.29 bondar-per # persistent node IP label on bondar
192.168.15.31 hudson-if1 # hudson's first interface IP label
192.168.16.31 hudson-if2 # hudson's second interface IP label
192.168.5.31 hudson-per # persistent node IP label on hudson
192.168.5.92 xweb # the IP label for the application
# normally resident on bondar

Hostnames: bondar, hudson


bondar's network configuration (defined via smit chinet) :
en0 - 192.168.15.29
en1 - 192.168.16.29

hudson's network configuration:


en0 - 192.168.15.31
en1 - 192.168.16.31

These network interfaces are all connected to the same physical network.
The subnet mask is 255.255.255.0 on all networks/NICs.
An enhanced concurrent mode volume group "ecmvg" has been created to
support the xweb application and will be used for a disk non-ip heartbeat
network

© Copyright IBM Corporation 2004


Configuration Methods
HACMP provides two menu paths to configure topology and
resources:
Standard Path
Standard configuration
Two-Node Cluster Configuration Assistant (HACMP 5.2)
Extended Path (the traditional path)
Use whichever method suits your needs
The Two-node Assistant may be all you need
The other Standard path options are similar to the Extended path
but easier, with fewer options
The Extended path is more flexible and exposes all the options

© Copyright IBM Corporation 2004


Plan Two-Node Configuration Assistant
Plan configuration path to other node = hudson_if1
This node will be the home (primary) node
Plan Application Server name = xwebserver
Used to name the cluster and the resource group
Ensure Application Server start and stop scripts exist and are put on
Bondar (where 2 node assistant will be run from):

/mydir/xweb_start
/mydir/xweb_stop

Plan service IP Label = xweb


Recommended: an enhanced concurrent vg
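
The assistant only records the script path names; the scripts themselves must
already exist and be executable on bondar. A minimal sketch (the xwebd daemon
and its path are placeholders for whatever actually starts and stops your
application):

# cat /mydir/xweb_start
#!/bin/ksh
# start the xweb application in the background and always exit 0
/usr/local/xweb/bin/xwebd &
exit 0
# cat /mydir/xweb_stop
#!/bin/ksh
# stop the xweb application; tolerate the case where it is not running
kill $(ps -ef | grep '[x]webd' | awk '{print $2}') 2>/dev/null
exit 0
# chmod +x /mydir/xweb_start /mydir/xweb_stop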

© Copyright IBM Corporation 2004


Starting at the Very Beginning . . .

System Management

Move cursor to desired item and press Enter.

Software Installation and Maintenance


Software License Management
Devices
System Storage Management (Physical & Logical Storage)
Security & Users
Communications Applications and Services
Print Spooling
Problem Determination
Performance & Resource Scheduling
System Environments
Processes & Subsystems
Applications
Installation Assistant
Cluster System Management
Using SMIT (information only)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Almost There . . .

Communications Applications and Services

Move cursor to desired item and press Enter.

TCP/IP
NFS
HACMP for AIX

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


The Top-Level HACMP smit Menu
# smit hacmp

HACMP for AIX

Move cursor to desired item and press Enter.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


The Standard Configuration Menu

Initialization and Standard Configuration

Move cursor to desired item and press Enter.

hacmp 5.2 Two-Node Cluster Configuration Assistant


Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
hacmp 5.2 HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Two-Node Cluster Configuration Assistant

Two-Node Cluster Configuration Assistant

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Communication Path to Takeover Node [hudson_if1] +
* Application Server Name [xwebserver]
* Application Server Start Script [/mydir/xweb_start]
* Application Server Stop Script [/mydir/xweb_stop]
* Service IP Label [xweb] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Let's See What We've Done
# /usr/es/sbin/cluster/utilities/cltopinfo
Cluster Name: xwebserver_cluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 2 network(s) defined
NODE bondar:
Network net_ether_01
bondar-if1 192.168.15.29
bondar-if2 192.168.16.29
Network net_diskhb_01
bondar_hdisk5_01 /dev/hdisk5
NODE hudson:
Network net_ether_01
hudson-if1 192.168.15.31
hudson-if2 192.168.16.31
Network net_diskhb_01
hudson_hdisk5_01 /dev/hdisk5
Resource Group xwebserver_group
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Fallback To Higher Priority Node In The List
Participating Nodes bondar hudson

© Copyright IBM Corporation 2004


Where Are We in the Implementation?
Plan for network, storage, and application
Eliminate single points of failure
Define and configure the AIX environment
Storage (adapters, LVM volume group, filesystem)
Networks (IP interfaces, /etc/hosts, non-IP networks and devices)
Application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
Cluster, node names, HACMP ip and non-ip networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP
© Copyright IBM Corporation 2004
Starting HACMP (1 of 4)

HACMP for AIX

Move cursor to desired item and press Enter.

Initialization and Standard Configuration


Extended Configuration
System Management (C-SPOC)
Problem Determination Tools

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Starting HACMP (2 of 4)

System Management (C-SPOC)

Move cursor to desired item and press Enter.


Manage HACMP Services
HACMP Communication Interface Management
HACMP Resource Group and Application Management
HACMP Log Viewing and Management
HACMP Security and Users Management
HACMP Logical Volume Management
HACMP Concurrent Logical Volume Management
HACMP Physical Volume Management

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Starting HACMP (3 of 4)

Manage HACMP Services

Move cursor to desired item and press Enter.

Start Cluster Services


Stop Cluster Services
Show Cluster Services

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Starting HACMP (4 of 4)
# smit clstart

Start Cluster Services

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Start now, on system restart or both now +
Start Cluster Services on these nodes [bondar,hudson] +
BROADCAST message at startup? true +
Startup Cluster Information Daemon? true +
Reacquire resources after forced down ? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
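
Once Enter is pressed, progress can be watched and the result confirmed from
the command line (these are the default HACMP log and SRC names):

# tail -f /tmp/hacmp.out        event script output as the nodes come up
# lssrc -g cluster              clstrmgrES (and clinfoES, if requested) should show active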

© Copyright IBM Corporation 2004


Stopping HACMP
# smit clstop

Stop Cluster Services

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [bondar] +
BROADCAST cluster shutdown? true +
* Shutdown mode graceful +

+--------------------------------------------------------------------------+
¦ Shutdown mode ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ graceful ¦
¦ takeover ¦
¦ forced ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Shutdown Modes Explained
HACMP has three modes for stopping HACMP on one or more nodes.
These are known as Graceful, Takeover and Forced.

Mode          Application on the    Application on another          Takeover of the
              stopped node?         active node in the cluster?     resources?

Graceful      No                    Only if online on all nodes     No
Takeover      No                    Yes, if a node that services    Yes, if a node that services
                                    that resource group remains     that resource group remains
                                    active in the cluster           active in the cluster
Forced        Yes                   Only if online on all nodes     No
AIX Shutdown  No*                   Only if online on all nodes     No

* Technically, HACMP does a Forced so there is no takeover


but then AIX stops so the application is not available

© Copyright IBM Corporation 2004


Are We There Yet?
We've configured a two-node cluster with a single resource
group called xwebserver_group which uses bondar as the primary
node and hudson as the backup
Now let's add a second resource group called adventure, which uses
hudson as the primary node and bondar as the backup.
We're going to use the standard configuration path this time and
move a bit faster*.
[Diagram: bondar and hudson in mutual takeover, each node normally hosting one resource group, with shared disks.]

*The smit menu screens are left out this time.


© Copyright IBM Corporation 2004
Adding the Second Resource Group

Configure HACMP Resource Groups

Move cursor to desired item and press Enter.

Add a Resource Group


Change/Show a Resource Group
Remove a Resource Group
Change/Show Resources for a Resource Group (standard)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Setting Name, Participating Nodes

Add a Resource Group

* Resource Group Name [adventure]


* Participating Nodes (Default Node Priority) [hudson bondar]
Startup Policy Online On Home Node O> +
Fallover Policy Fallover To Next Prio> +
Fallback Policy Fallback To Higher Pr> +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Choose Startup Policy

Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [adventure]
* Participating Node Names (Default Node Priority) [hudson bondar] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
+--------------------------------------------------------------------------+
¦ Startup Policy ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Online On Home Node Only ¦
¦ Online On First Available Node ¦
¦ Online Using Distribution Policy ¦
¦ Online On All Available Nodes ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Choose Fallover Policy

Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [adventure]
* Participating Node Names (Default Node Priority) [hudson bondar] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
+--------------------------------------------------------------------------+
¦ Fallover Policy ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Fallover To Next Priority Node In The List ¦
¦ Fallover Using Dynamic Node Priority ¦
¦ Bring Offline (On Error Node Only) ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Choose Fallback Policy

Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [adventure]
* Participating Node Names (Default Node Priority) [hudson bondar] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
Fallback Policy Fallback To Higher Pr> +
+--------------------------------------------------------------------------+
¦ Fallback Policy ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Fallback To Higher Priority Node In The List ¦
¦ Never Fallback ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Ready to Add the Resource Group

Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [adventure]
* Participating Node Names (Default Node Priority) [hudson bondar] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
Fallback Policy Fallback To Higher Pr> +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Prepare for Adventure RG Resources
Create start, stop scripts for yweb
Create ywebvg, filesystem, add yweb service label to /etc/hosts
Run discovery
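
Before running discovery, the /etc/hosts and volume group preparation might
look roughly like this from the command line (a sketch only: hdisk6 and the
sizes are placeholders, yweb's address is the one used later in this unit, and
in the lab C-SPOC is the preferred way to create the shared volume group so
that both nodes see it consistently):

# echo "192.168.5.70   yweb    # service IP label for the adventure resource group" >> /etc/hosts
# mkvg -n -y ywebvg hdisk6                       -n: do not activate the VG automatically at boot
# crfs -v jfs2 -g ywebvg -m /yweb -a size=512M   filesystem for the yweb application data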

Extended Configuration

Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured Nodes


Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets

Extended Verification and Synchronization


HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Configure the Resources

Configure Resources to Make Highly Available

Move cursor to desired item and press Enter.

Configure Service IP Labels/Addresses


Configure Application Servers
Configure Volume Groups, Logical Volumes and Filesystems
Configure Concurrent Volume Groups and Logical Volumes

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Configuring Second Service IP Label

Configure Service IP Labels/Addresses

Move cursor to desired item and press Enter.

Add a Service IP Label/Address


Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding Adventure Service Label (1 of 3)

Add a Service IP Label/Address (standard)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* IP Label/Address [] +
* Network Name [] +

+--------------------------------------------------------------------------+
¦ IP Label/Address ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ (none) ((none)) ¦
¦ bondar (192.168.5.29) ¦
¦ hudson (192.168.5.31) ¦
¦ yweb (192.168.5.70) ¦
¦ xweb (192.168.5.92) ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Adding Adventure Service Label (2 of 3)

Add a Service IP Label/Address (standard)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* IP Label/Address [yweb] +
* Network Name [] +

+--------------------------------------------------------------------------+
¦ Network Name ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ net_ether_01 (192.168.15.0/24 192.168.16.0/24) ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F5¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Adding Adventure Service Label (3 of 3)

Add a Service IP Label/Address (standard)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* IP Label/Address [yweb] +
* Network Name [net_ether_01] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Add Adventure Application Server (1 of 2)

Configure Application Servers

Move cursor to desired item and press Enter.

Add an Application Server


Change/Show an Application Server
Remove an Application Server

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Add Adventure Application Server (2 of 2)

Add Application Server

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Server Name [ywebserver]
* Start Script [/usr/local/scripts/startyweb]
* Stop Script [/usr/local/scripts/stopyweb]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding Resources
to the Adventure RG (1 of 2)
Configure HACMP Resource Groups

Move cursor to desired item and press Enter.

Add a Resource Group


Change/Show a Resource Group
Remove a Resource Group
Change/Show Resources for a Resource Group (standard)

+--------------------------------------------------------------------------+
¦ Select a Resource Group ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ xwebserver_group
| adventure ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Adding Resources
to the Adventure RG (2 of 2)
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Custom Resource Group Name adventure
Participating Node Names (Default Node Priority) hudson bondar

Startup Behavior Online On First Avail>


Fallover Behavior Fallover To Next Prio>
Fallback Behavior Fallback To Higher Pr>

Service IP Labels/Addresses [yweb] +


Application Servers [ywebserver] +
Volume Groups [ywebvg] +
Use forced varyon of volume groups, if necessary false +
Filesystems (empty is ALL for VGs specified) [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Verify and Synchronize the Changes

Initialization and Standard Configuration

Move cursor to desired item and press Enter.

Two-Node Cluster Configuration Assistant


Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
HACMP Cluster Test Tool
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Extended Configuration Menu

Extended Configuration

Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured Nodes


Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets
HACMP 5.2
Extended Verification and Synchronization
HACMP 5.2 HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Extended Topology Configuration Menu

Extended Topology Configuration

Move cursor to desired item and press Enter.

Configure an HACMP Cluster


Configure HACMP Nodes
Configure HACMP Sites
Configure HACMP Networks
Configure HACMP Communication Interfaces/Devices
Configure HACMP Persistent Node IP Label/Addresses
Configure HACMP Global Networks
Configure HACMP Network Modules
Configure Topology Services and Group Services
Show HACMP Topology

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Communication Interfaces and Devices

Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Defining a Non-IP Network (1 of 3)
Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
¦ Select a category ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Add Discovered Communication Interface and Devices ¦
¦ Add Predefined Communication Interfaces and Devices ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

Don't risk a potentially catastrophic partitioned cluster by using cheap rs232 cables!

© Copyright IBM Corporation 2004


Defining a Non-IP Network (2 of 3)

Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
¦ Select a category ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ # Discovery last performed: (Feb 12 18:20) ¦
¦ Communication Interfaces ¦
¦ Communication Devices ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Defining a Non-IP Network (3 of 3)
Press Enter and HACMP defines a new non-IP network with these
communication devices.
Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
+--------------------------------------------------------------------------+
¦ Select Point-to-Point Pair of Discovered Communication Devices to Add ¦
¦ ¦
¦ Move cursor to desired item and press F7. Use arrow keys to scroll. ¦
¦ ONE OR MORE items can be selected. ¦
¦ Press Enter AFTER making all selections. ¦
¦ ¦
¦ # Node Device Device Path Pvid ¦
¦ bondar hdisk5 /dev/hdisk5 000b4a7cd1...¦
¦ hudson hdisk5 /dev/hdisk5 000b4a7cd1...¦
¦ > bondar tty1 /dev/tty1 ¦
¦ > hudson tty1 /dev/tty1 ¦
¦ bondar tmssa1 /dev/tmssa1 ¦
¦ hudson tmssa2 /dev/tmssa2 ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F7=Select F8=Image F10=Exit ¦
F1¦ Enter=Do /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Defining Persistent Node IP Labels (1 of 3)

Configure HACMP Persistent Node IP Label/Addresses

Move cursor to desired item and press Enter.

Add a Persistent Node IP Label/Address


Change / Show a Persistent Node IP Label/Address
Remove a Persistent Node IP Label/Address

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Defining Persistent Node IP Labels (2 of 3)

Configure HACMP Persistent Node IP Label/Addresses

Move cursor to desired item and press Enter.

Add a Persistent Node IP Label/Address


Change / Show a Persistent Node IP Label/Address
Remove a Persistent Node IP Label/Address

+--------------------------------------------------------------------------+
¦ Select a Node ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ bondar ¦
¦ hudson ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Defining Persistent Node IP Labels (3 of 3)
Press Enter and then repeat for the hudson persistent IP label on the hudson node.

Add a Persistent Node IP Label/Address

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Node Name bondar
* Network Name [net_ether_01] +
* Node IP Label/Address [bondar-per] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Synchronize Your Changes
The extended configuration path provides verification and synchronization
options.
HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?

* Force synchronization if verification fails? [No] +


* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Don't forget to verify that you actually implemented what was planned by executing your test plan.

© Copyright IBM Corporation 2004


Take a Snapshot
Snapshot default directory is /usr/es/sbin/cluster/snapshots

Add a Cluster Snapshot

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Cluster Snapshot Name [] /
Custom Defined Snapshot Methods [] +
Save Cluster Log Files in snapshot No +
* Cluster Snapshot Description []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


We're There!
We've configured a two-node cluster with two resource groups:
Each node is the home (primary) node for one of the resource
groups.
Each resource group falls back to its home node on recovery
This is called a two-node mutual takeover cluster.

[Cluster diagram: nodes bondar and hudson, each able to host the discovery (D) and adventure (A) resource groups.]

Each resource group is also configured to use IPAT via IP aliasing.


This particular style of cluster (mutual takeover with IPAT) is, by far, the most
common style of HACMP cluster.
© Copyright IBM Corporation 2004
Checkpoint
1. True or False?
It is possible to configure a recommended simple two-node cluster environment using just the
standard configuration path.

2. In which of the top level HACMP menu choices is the menu for starting and
stopping cluster nodes?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools
3. An orderly shutdown of AIX while HACMP is running is equivalent to which
of the following:
a. Graceful shutdown of HACMP followed by an orderly shutdown of AIX.
b. Takeover shutdown of HACMP followed by an orderly shutdown of AIX.
c. Forced shutdown of HACMP followed by an orderly shutdown of AIX.
d. None of the above.
4. True or False?
It is possible to configure HACMP faster by having someone help you on the other node.

5. True or False?
You must specify exactly which filesystems you want mounted when you put resources into a
resource group.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?*
It is possible to configure a recommended simple two-node cluster environment using just the
standard configuration path.

2. In which of the top level HACMP menu choices is the menu for starting and
stopping cluster nodes?
a. Initialization and Standard Configuration
b. Extended Configuration
c. System Management (C-SPOC)
d. Problem Determination Tools
3. An orderly shutdown of AIX while HACMP is running is equivalent to which
of the following:
a. Graceful shutdown of HACMP followed by an orderly shutdown of AIX.
b. Takeover shutdown of HACMP followed by an orderly shutdown of AIX.
c. Forced shutdown of HACMP followed by an orderly shutdown of AIX.
d. None of the above.
4. True or False?**
It is possible to configure HACMP faster by having someone help you on the other node.

5. True or False?
You must specify exactly which filesystems you want mounted when you put resources into a
resource group.
*This was False in previous releases, as it was not possible to configure the recommended non-IP network using the standard
path. However, the Two-Node Cluster Configuration Assistant can configure one.
**Whoever synchronizes first causes their changes to take effect; any changes the other person made prior to
that first synchronization are thrown away.
© Copyright IBM Corporation 2004
Break Time!

Please don't paddle too far as we'll be resuming shortly . . .

© Copyright IBM Corporation 2004


Other Configuration Scenarios
After completing this topic, you should be able to:
Configure a third resource group to minimize downtime.
Add a new node to an existing cluster.
Remove a node from an existing cluster.
Remove a resource group from a cluster.
Configure hardware address takeover using cascading resource
groups and IPAT via IP replacement.
Configure a target-mode SSA non-IP heartbeat network.
Configure a non-IP disk heartbeat network

© Copyright IBM Corporation 2004


Yet Another Resource Group
The users have asked that a third application be added to the
cluster.
The application uses very little CPU or memory and there's money in
the budget for more disk drives in the disk enclosure.
Minimizing downtime is particularly important for this application.
The resource group is called ballerina (nobody seems to know why).

[Cluster diagram: bondar and hudson with the discovery (D), adventure (A) and new ballerina (B) resource groups.]

© Copyright IBM Corporation 2004


Adding a Third Resource Group
We'll change the startup policy to "Online On First Available Node" so that
the resource group comes up on the first node to start, even if the other node is down.
Add a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Resource Group Name [ballerina]
* Participating Node Names (Default Node Priority) [bondar hudson]
+

Startup Policy Online On First Avail>


+
Fallover Policy Fallover To Next Prio>
+
Fallback Policy Never Fallback
+

avoid startup delay by starting on first available node

avoid fallback outage by never falling back

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Does the order in which the node names are specified matter?
© Copyright IBM Corporation 2004
Adding a Third Service IP Label (1 of 2)
The extended configuration path screen for adding a service IP label provides
more options. We choose those which mimic the standard path.
Configure HACMP Service IP Labels/Addresses

Move cursor to desired item and press Enter.

Add a Service IP Label/Address


Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

+--------------------------------------------------------------------------+
¦ Select a Service IP Label/Address type ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Configurable on Multiple Nodes ¦
¦ Bound to a Single Node ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Adding a Third Service IP Label (2 of 2)
The Alternate Hardware Address ... field is used for hardware address takeover
(which we'll configure later).
Add a Service IP Label/Address configurable on Multiple Nodes (extended)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* IP Label/Address [zweb] +
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding a Third Application Server
The Add Application Server screen is identical in both configuration paths.

Add Application Server

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Server Name [zwebserver]
* Start Script [/usr/local/scripts/startzweb]
* Stop Script [/usr/local/scripts/stopzweb]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding Resources to the Third RG (1 of 2)
The extended path's smit screen for updating the contents of a resource group is
MUCH more complicated!
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Resource Group Name ballerina
Resource Group Management Policy custom
Inter-site Management Policy ignore
Participating Node Names (Default Node Priority) bondar hudson

Startup Behavior Online On First Avail>


Fallover Behavior Fallover To Next Prio>
Fallback Behavior Never Fallback
Fallback Timer Policy (empty is immediate) [] +

Service IP Labels/Addresses [zweb] +


Application Servers [zwebserver] +

Volume Groups [zwebvg] +


Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +
Filesystems (empty is ALL for VGs specified) [] +
Filesystems Consistency Check fsck +
[MORE...17]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding Resources to the Third RG (2 of 2)
Even more choices!
Fortunately, only a handful tend to be used in any given context.
Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[MORE...17] [Entry Fields]


Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured false +
Filesystems/Directories to Export [] +
+
Filesystems/Directories to NFS Mount [] +
Network For NFS Mount [] +

Tape Resources [] +
Raw Disk PVIDs [] +

Fast Connect Services [] +


Communication Links [] +

Primary Workload Manager Class [] +


Secondary Workload Manager Class [] +

Miscellaneous Data []
[BOTTOM]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Synchronize Your Changes
The extended configuration path provides verification and synchronization options.
HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Don't forget to verify that you actually implemented what was planned by
executing your test plan.
© Copyright IBM Corporation 2004
Expanding the Cluster
The Users "find" money in the budget and decide to "invest" it
to improve the availability of the adventure and discovery
applications.
Nobody seems to be too worried about the ballerina application.

[Cluster diagram: discovery (D) and adventure (A) now span bondar, hudson and jones; ballerina (B) remains on bondar and hudson only.]

© Copyright IBM Corporation 2004


Adding a New Cluster Node
Physically connect the new node to the appropriate networks
and to the shared storage subsystem.
Configure non-IP networks to create a ring encompassing all nodes.
Configure the shared volume groups on the new node.
Add the new node's IP labels to /etc/hosts on one of the existing nodes.
Copy /etc/hosts from the existing node to all other nodes (see the sketch after this list).
Install AIX, HACMP and the application software on the new node:
Install patches required to bring the new node up to the same level as the
existing cluster nodes.
Reboot the new node (always reboot after installing or patching HACMP).
Add the new node to the existing cluster's topology (from one of the existing
nodes) and synchronize your changes.
Start HACMP on the new node.
Add the new node to the appropriate resource groups
and synchronize your changes again.
Run through your (updated) test plan.
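For example, the /etc/hosts distribution and software-level checks in the list above might look like this
(a sketch only; rcp is used here for brevity and assumes remote command access is configured -- substitute
whatever file distribution method your site permits):

bondar# for n in hudson jones; do rcp /etc/hosts ${n}:/etc/hosts; done
jones# lslpp -l "cluster.es.*" (the HACMP filesets and levels should match the existing nodes)
jones# oslevel -r (the AIX maintenance level should match the existing nodes)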

© Copyright IBM Corporation 2004


Add Node -- Standard Path

Configure Nodes to an HACMP Cluster (standard)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Cluster Name [xwebserver_cluster]
New Nodes (via selected communication paths) [jones-if1] +
Currently Configured Node(s) bondar hudson

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Add Node -- Standard Path (In Progress)
Here's the output shortly after pressing Enter:

COMMAND STATUS

Command: OK stdout: yes stderr: no

Before command completion, additional instructions may appear below.

[TOP]
Communication path jones-if1 discovered a new node. Hostname is jones. Addin
g it to the configuration with Nodename jones.

Discovering IP Network Connectivity

Retrieving data from available cluster nodes. This could take a few minutes....

F1=Help F2=Refresh F3=Cancel F6=Command


F8=Image F9=Shell F10=Exit /=Find
n=Find Next

© Copyright IBM Corporation 2004


Add Node -- Extended Path

Add a Node to the HACMP Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Node Name [jones]
Communication Path to Node [jones-if1] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Define the Non-IP rs232 Networks (1 of 2)
We've added (and tested) a fully wired rs232 null modem cable between jones' tty1 and
bondar's tty2 so we define that as a non-IP rs232 network.

Configure HACMP Communication Interfaces/Devices

+--------------------------------------------------------------------------+
¦ Select Point-to-Point Pair of Discovered Communication Devices to Add ¦
¦ ¦
¦ Move cursor to desired item and press F7. Use arrow keys to scroll. ¦
¦ ONE OR MORE items can be selected. ¦
¦ Press Enter AFTER making all selections. ¦
¦ ¦
¦ # Node Device Device Path Pvid ¦
¦ bondar tty0 /dev/tty0 ¦
¦ hudson tty0 /dev/tty0 ¦
¦ jones tty0 /dev/tty0 ¦
¦ bondar tty1 /dev/tty1 ¦
¦ hudson tty1 /dev/tty1 ¦
¦ > jones tty1 /dev/tty1 ¦
¦ > bondar tty2 /dev/tty2 ¦
¦ hudson tty2 /dev/tty2 ¦
¦ jones tty2 /dev/tty2 ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F7=Select F8=Image F10=Exit ¦
F1¦ Enter=Do /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Define the Non-IP rs232 Networks (2 of 2)
We've also added (and tested) a fully wired rs232 null-modem cable between hudson's
tty2 and jones' tty2 so we define that as a non-IP rs232 network.

Configure HACMP Communication Interfaces/Devices

+--------------------------------------------------------------------------+
¦ Select Point-to-Point Pair of Discovered Communication Devices to Add ¦
¦ ¦
¦ Move cursor to desired item and press F7. Use arrow keys to scroll. ¦
¦ ONE OR MORE items can be selected. ¦
¦ Press Enter AFTER making all selections. ¦
¦ ¦
¦ # Node Device Device Path Pvid ¦
¦ bondar tty0 /dev/tty0 ¦
¦ hudson tty0 /dev/tty0 ¦
¦ jones tty0 /dev/tty0 ¦
¦ bondar tty1 /dev/tty1 ¦
¦ hudson tty1 /dev/tty1 ¦
¦ jones tty1 /dev/tty1 ¦
¦ bondar tty2 /dev/tty2 ¦
¦ > hudson tty2 /dev/tty2 ¦
¦ > jones tty2 /dev/tty2 ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F7=Select F8=Image F10=Exit ¦
F1¦ Enter=Do /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Synchronize Your Changes

HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Start HACMP on the New Node
# smit clstart

Start Cluster Services

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Start now, on system restart or both now +
Start Cluster Services on these nodes [jones] +
BROADCAST message at startup? true +
Startup Cluster Lock Services? false +
Startup Cluster Information Daemon? false +
Reacquire resources after forced down ? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
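Once smit reports OK, it is worth confirming on jones that the cluster subsystems really are active, for example:

jones# lssrc -g cluster (the clstrmgrES subsystem should be listed as active)
jones# tail -f /tmp/hacmp.out (watch the node_up events complete)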

© Copyright IBM Corporation 2004


Add the Node to a Resource Group

Change/Show a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name adventure
New Resource Group Name []
Participating Node Names (Default Node Priority) [hudson bondar jones] +

Startup Policy Online On Home Node O> +


Fallover Policy Fallover To Next Prio> +
Fallback Policy Fallback To Higher Pr> +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Repeat for the discovery resource group.

© Copyright IBM Corporation 2004


Synchronize Your Changes
Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Shrinking the Cluster
The Auditors aren't impressed with the latest investment
and force the removal of the jones node from the cluster so that it
can be transferred to a new project (some users suspect that
political considerations may have been involved).

[Cluster diagram: jones (marked with an X) is removed; discovery (D), adventure (A) and ballerina (B) remain on bondar and hudson.]
© Copyright IBM Corporation 2004
Removing a Cluster Node
Using any cluster node, remove the departing node from all resource groups
(ensure that each resource group is left with at least two nodes) and
synchronize your changes.
Stop HACMP on the departing node.
Using one of the other cluster nodes which is not being removed:
Remove the departing node from the cluster's topology (using the
Remove a Node from the HACMP Cluster smit screen in the extended
configuration path) and synchronize your change.
Once the synchronization is completed successfully, the departing node is
no longer a member of the cluster.
Remove the departed node's IP addresses from
/usr/es/sbin/cluster/etc/rhosts on the remaining nodes (prevents the
departed node from interfering with HACMP on the remaining nodes).
Physically disconnect the (correct) rs232 cables.
Disconnect the departing node from the shared storage subsystem (strongly
recommended as it makes it impossible for the departed
node to screw up the cluster's shared storage).
Run through your (updated) test plan.
© Copyright IBM Corporation 2004
Removing an Application
The zwebserver application has been causing problems and a
decision has been made to move it out of the cluster.

[Cluster diagram: bondar and hudson with discovery (D), adventure (A) and ballerina (B); ballerina is to be removed.]

© Copyright IBM Corporation 2004


Removing a Resource Group (1 of 3)
Take the resource group offline
Using any cluster node and either configuration path:
Remove the departing resource group using the Remove a Resource
Group smit screen.
Remove any service IP labels previously used by the departing resource
group using the Remove Service IP Labels/Addresses smit screen.
Synchronize your changes (this will shutdown the resource group's
applications using the application server's stop script and release any
resources previously used by the resource group).
Clean out anything that is no longer needed by the cluster (see the sketch after this list):
Export any shared volume groups previously used by the application.
Consider deleting service IP labels from the /etc/hosts file.
Uninstall the application.
Run through your (updated) test plan.
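For the zwebserver example, the clean-out steps above might amount to something like this (a sketch;
the fileset name used with installp is hypothetical):

# exportvg zwebvg (repeat on each node that had zwebvg imported)
# vi /etc/hosts (optionally remove the zweb service IP label entry)
# installp -u zweb.rte (uninstall the application)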

© Copyright IBM Corporation 2004


Removing a Resource Group (2 of 3)

HACMP Extended Resource Group Configuration

Move cursor to desired item and press Enter.

Add a Resource Group


Change/Show a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Show All Resources by Node or Resource Group

+--------------------------------------------------------------------------+
¦ Select a Resource Group ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ adventure ¦
¦ ballerina ¦
¦ discovery ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Removing a Resource Group (3 of 3)
HACMP Extended Resource Group Configuration

Move cursor to desired item and press Enter.

Add a Resource Group


Change/Show a Resource Group
Change/Show Resources and Attributes for a Resource Group
Remove a Resource Group
Show All Resources by Node or Resource Group

+--------------------------------------------------------------------------+
¦ ARE YOU SURE? ¦
¦ ¦
¦ Continuing may delete information you may want ¦
¦ to keep. This is your last chance to stop ¦
¦ before continuing. ¦
¦ Press Enter to continue. ¦
¦ Press Cancel to return to the application. ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
F1¦ F8=Image F10=Exit Enter=Do ¦
F9+--------------------------------------------------------------------------+

Press enter (if you are sure).

© Copyright IBM Corporation 2004


Synchronize Your Changes
Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Implementing Hardware Address Takeover
Someone just got a great deal on a dozen used FOOL-97x
computers for the summer students to use.
They run some strange proprietary operating system which refuses
to update its ARP cache in response to either ping or gratuitous ARP
packets.

[Cluster diagram: bondar and hudson with the discovery (D) and adventure (A) resource groups.]

© Copyright IBM Corporation 2004


Our Plan for Implementing HWAT
Stop HACMP on both cluster nodes (use graceful shutdown option to
bring down the resource groups and their applications).
Remove the aliased service IP labels (they are in the wrong subnet for IPAT
via IP replacement); they are automatically removed from the resource groups.
Convert the net_ether_01 ethernet network to use IPAT via IP replacement:
Disable IPAT via IP aliasing on the ethernet network.
Update /etc/hosts on both cluster nodes to describe service IP labels and
addresses on the 192.168.15.0 subnet.
Use the procedure described in the networking unit to select the LAA
addresses.
Configure new service IP labels with these LAA addresses in the HACMP
smit screens.
Define resource groups to use the new service IP labels.
Synchronize the changes
Restart HACMP on the two nodes.

© Copyright IBM Corporation 2004


Stopping HACMP
# smit clstop

Stop Cluster Services

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Stop now, on system restart or both now +
Stop Cluster Services on these nodes [bondar,hudson] +
BROADCAST cluster shutdown? true +
* Shutdown mode graceful +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Removing a Service IP Label
Press Enter here and you will be prompted to confirm the removal.

Configure HACMP Service IP Labels/Addresses

Move cursor to desired item and press Enter.

Add a Service IP Label/Address


Change/Show a Service IP Label/Address
Remove Service IP Label(s)/Address(es)

+--------------------------------------------------------------------------+
¦ Select Service IP Label(s)/Address(es) to Remove ¦
¦ ¦
¦ Move cursor to desired item and press F7. ¦
¦ ONE OR MORE items can be selected. ¦
¦ Press Enter AFTER making all selections. ¦
¦ ¦
¦ xweb ¦
¦ yweb ¦
¦ zweb ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F7=Select F8=Image F10=Exit ¦
F1¦ Enter=Do /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

Repeat for both service IP labels.


© Copyright IBM Corporation 2004
Change Network to Disable IPAT via Aliases
Set the "Enable IP Address Takeover via IP Aliases" setting to "No" and press
Enter.
Change/Show an IP-Based Network in the HACMP Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Network Name net_ether_01
New Network Name []
* Network Type [ether] +
* Netmask [255.255.255.0] +
* Enable IP Address Takeover via IP Aliases [No] +
IP Address Offset for Heartbeating over IP Aliases []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


The Updated /etc/hosts
Here's the key portion of the /etc/hosts file with the service IP
labels moved to the 192.168.15.0 subnet:
192.168.5.29 bondar # persistent node IP label on bondar
192.168.15.29 bondar-if1 # bondar's first boot IP label
192.168.16.29 bondar-if2 # bondar's second boot IP label
192.168.5.31 hudson # persistent node IP label on hudson
192.168.15.31 hudson-if1 # hudson's first boot IP label
192.168.16.31 hudson-if2 # hudson's second boot IP label
192.168.15.92 xweb # the IP label for the application normally
# resident on bondar
192.168.15.70 yweb # the IP label for the application normally
# resident on hudson

Note that neither bondar's nor hudson's network configuration (as
defined with the AIX TCP/IP smit screens) needs to be changed.
Note that we are not renaming the interface IP labels to something
like bondar_boot and bondar_standby as changing IP labels in an
HACMP cluster can be quite a bit of work (it is often easier to delete
the cluster definition and start over).

© Copyright IBM Corporation 2004


Selecting LAA Addresses
Here are two Globally Administered Addresses (GAAs) taken
from ethernet adapters in the cluster:
0.4.ac.17.19.64
0.6.29.ac.46.8
First we make sure that each number is two digits long by adding
leading zeros as necessary:
00.04.ac.17.19.64
00.06.29.ac.46.08
Verify that the first digit is 0, 1, 2 or 3:
Yep!
Add 4 to the first digit of each GAA:
40.04.ac.17.19.64
40.06.29.ac.46.08
Done! The two addresses just above are now LAAs.
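If you would rather let the machine do the padding and the arithmetic, the steps above are easily scripted.
This is just a convenience sketch (not an HACMP utility) and it assumes the first hex digit of the GAA is
already 0, 1, 2 or 3:

# gaa="0.4.ac.17.19.64"
# echo "$gaa" | awk -F. -v OFS="" '{
      for (i = 1; i <= NF; i++) if (length($i) == 1) $i = "0" $i   # pad each byte to two digits
      $1 = (substr($1, 1, 1) + 4) substr($1, 2)                    # add 4 to the first digit
      print }'                                                     # periods are already stripped
4004ac171964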

© Copyright IBM Corporation 2004


Redefining the Service IP Labels for HWAT
Redefine the two service IP labels. Note that the periods are stripped out before
the LAA is entered into the HW Address field.
Add a Service IP Label/Address configurable on Multiple Nodes (extended)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* IP Label/Address [xweb] +
* Network Name net_ether_01
Alternate HW Address to accompany IP Label/Address [4004ac171964]

You probably shouldn't use the particular LAAs
shown on these foils in your cluster. Select
your own LAAs using the procedure described
in the networking unit.

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Don't forget to specify the second LAA for the second service IP label.
© Copyright IBM Corporation 2004
Synchronize Your Changes
Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Implementing Target Mode SSA
The serial cable being used to implement the rs232 non-IP network
has been borrowed by someone and nobody noticed.
A decision has been made to implement a target mode SSA (tmssa)
non-IP network as it won't fail unless complete access to the shared
SSA disks is lost by one of the nodes (and someone is likely to
notice that).

[Cluster diagram: bondar and hudson with the discovery (D) and adventure (A) resource groups.]

© Copyright IBM Corporation 2004


Setting the SSA Node Number
The first step is to give each node a unique SSA node number.
We'll set bondar's ssa node number to 1 and hudson's to 2.
Change/Show the SSA Node Number For This System

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
SSA Node Number [1] +#

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Use the "smit ssaa" fastpath to get to AIX's SSA Adapters menu.
© Copyright IBM Corporation 2004
Configuring the tmssa Devices
This is a three-step process for a two-node cluster as each
node needs tmssa devices which refer to the other node:
1. run cfgmgr on one of the nodes (bondar).
bondar is now ready to respond to tmssa queries.
2. run cfgmgr on the other node (hudson).
hudson is now ready to respond to tmssa queries.
hudson also knows that bondar supports tmssa and has created the
tmssa devices (/dev/tmssa1.im and /dev/tmssa1.tm) which refer to
bondar.
3. run cfgmgr again on the first node (bondar).
bondar now also knows that hudson supports tmssa and has created
the tmssa devices (/dev/tmssa2.im and /dev/tmssa2.tm) which refer to
hudson.
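Expressed as commands, the sequence is just three cfgmgr runs in the right order followed by a quick check
(a sketch; run as root on the node shown by each prompt):

bondar# cfgmgr (step 1 -- bondar can now answer tmssa queries)
hudson# cfgmgr (step 2 -- creates /dev/tmssa1.im and /dev/tmssa1.tm on hudson)
bondar# cfgmgr (step 3 -- creates /dev/tmssa2.im and /dev/tmssa2.tm on bondar)
bondar# lsdev -C | grep tmssa (repeat on hudson; the tmssa devices should show as Available)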

© Copyright IBM Corporation 2004


Rediscover the HACMP Information
Next we need to make HACMP aware of the new communication
devices, so we run the auto-discovery procedure again on one of the nodes.
Extended Configuration

Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured Nodes


Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration

Extended Verification and Synchronization

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Defining a Non-IP tmssa Network (1 of 3)
This should look very familiar as it is the same procedure that was used to define
the non-IP rs232 network earlier.
Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
¦ Select a category ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ Add Discovered Communication Interface and Devices ¦
¦ Add Predefined Communication Interfaces and Devices ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Defining a Non-IP tmssa Network (2 of 3)

Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


Change/Show Communication Interfaces/Devices
Remove Communication Interfaces/Devices
Update HACMP Communication Interface with Operating System Settings

+--------------------------------------------------------------------------+
¦ Select a category ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ # Discovery last performed: (Feb 12 18:20) ¦
¦ Communication Interfaces ¦
¦ Communication Devices ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Defining a Non-IP tmssa Network (3 of 3)
Now we define the tmssa network using the same process that was used for the rs232 network: select the pair of tmssa devices and press Enter.

Configure HACMP Communication Interfaces/Devices

Move cursor to desired item and press Enter.

Add Communication Interfaces/Devices


+--------------------------------------------------------------------------+
¦ Select Point-to-Point Pair of Discovered Communication Devices to Add ¦
¦ ¦
¦ Move cursor to desired item and press F7. Use arrow keys to scroll. ¦
¦ ONE OR MORE items can be selected. ¦
¦ Press Enter AFTER making all selections. ¦
¦ ¦
¦ # Node Device Device Path Pvid ¦
¦ > hudson tmssa1 /dev/tmssa1 ¦
¦ > bondar tmssa2 /dev/tmssa2 ¦
¦ bondar tty0 /dev/tty0 ¦
¦ hudson tty0 /dev/tty0 ¦
¦ bondar tty1 /dev/tty1 ¦
¦ hudson tty1 /dev/tty1 ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F7=Select F8=Image F10=Exit ¦
F1¦ Enter=Do /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Synchronize Your Changes
Synchronize the changes and run through the test plan.

HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Removing a Cluster
Use Extended Topology Configuration.

Configure an HACMP Cluster

Move cursor to desired item and press Enter.

Add/Change/Show an HACMP Cluster


Remove an HACMP Cluster
Reset Cluster Tunables

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

Make /usr/es/sbin/cluster/etc/rhosts a null file:


cat "" > /usr/es/sbin/cluster/etc/rhosts

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
It is impossible to add a node while HACMP is running.

2. Which of the following are not supported by HACMP 5.1? (select all that
apply)
a. Cascading resource group with IPAT via IP aliasing.
b. Custom resource group with IPAT via IP replacement.
c. HWAT in a resource group which uses IPAT via IP aliasing.
d. HWAT in a custom resource group.
e. More than three custom resource groups in a two node cluster.
3. Which of the following sequences of steps implement HWAT in a cluster
currently using custom resource groups?
a. Delete custom RGs, define cascading RGs, places resources in new RGs, disable IPAT
via IP aliasing on network, delete old service IP labels, define new service IP labels,
synchronize
b. Delete custom RGs, define cascading RGs, places resources in new RGs, delete old
service IP labels, disable IPAT via IP aliasing on network, define new service IP labels,
synchronize
c. Delete custom RGs, disable IPAT via IP aliasing on network, delete old service IP labels,
define new service IP labels, define cascading RGs, places resources in new RGs,
synchronize
d. Delete custom RGs, delete old service IP labels, disable IPAT via IP aliasing on network,
define new service IP labels, define cascading RGs, places resources in new RGs,
synchronize

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
It is impossible to add a node while HACMP is running.

2. Which of the following are not supported by HACMP 5.1? (select all that
apply)
a. Cascading resource group with IPAT via IP aliasing.
b. Custom resource group with IPAT via IP replacement.
c. HWAT in a resource group which uses IPAT via IP aliasing.
d. HWAT in a custom resource group.
e. More than three custom resource groups in a two node cluster.
3. Which of the following sequences of steps implement HWAT in a cluster
currently using custom resource groups? *
a. Delete custom RGs, define cascading RGs, places resources in new RGs, disable IPAT
via IP aliasing on network, delete old service IP labels, define new service IP labels,
synchronize
b. Delete custom RGs, define cascading RGs, places resources in new RGs, delete old
service IP labels, disable IPAT via IP aliasing on network, define new service IP labels,
synchronize
c. Delete custom RGs, disable IPAT via IP aliasing on network, delete old service IP labels,
define new service IP labels, define cascading RGs, places resources in new RGs,
synchronize
d. Delete custom RGs, delete old service IP labels, disable IPAT via IP aliasing on network,
define new service IP labels, define cascading RGs, places resources in new RGs,
synchronize
*Old service IP labels must be deleted before disabling IPAT via IP aliasing and new service IP labels
must exist before they can be placed into the resource groups.
© Copyright IBM Corporation 2004
Unit Summary
Having completed this unit, you should be able to:
Configure HACMP 5.2
Use Standard and Extended Configuration paths
Two-Node Cluster Configuration Assistant
Configure HACMP Topology to include:
IP-based networks enabled for address takeover via both alias and
replacement
Non-IP networks (rs232, tmssa, diskhb)
Hardware Address Takeover
Configure HACMP Resources:
Create resource groups using startup, fallover, and fallback policies
Add and remove resource groups and nodes on an existing cluster
Take a snapshot
Remove a cluster
Start and stop the cluster on one or more cluster nodes

© Copyright IBM Corporation 2004


Welcome to:
Cluster Single Point of Control

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives

After completing this unit, you should be able to:


Understand the need for change management when using
HACMP
Understand the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
LVM
Users
Disk
Perform RGmove operations

© Copyright IBM Corporation 2004


Administering a High Availability Cluster
Administering an HA cluster is different from administering a
stand-alone server:
Changes made to one node need to be reflected on the other
node
Poorly considered changes can have far reaching implications
Beware the law of unintended consequences
Aspects of the cluster's configuration could be quite subtle and yet
critical
Scheduling downtime to install and test changes can be
challenging
Saying oops while sitting at a cluster console could get you fired!

© Copyright IBM Corporation 2004


Recommendations
Implement and adhere to a change control/management
process
Wherever possible, use HACMP's C-SPOC facility to make changes
to the cluster (details to follow).
Document routine operational procedures in a step-by-step list
fashion (for example, shutdown, startup, increasing size of a
filesystem).
Restrict access to the root password to trained High Availability
cluster administrators.
Always take a snapshot (explained later) of your existing
configuration before making a change.

© Copyright IBM Corporation 2004


Change Control or Management
A real change control or management process requires a serious
commitment on the part of the entire organization:
Every change must be carefully considered
The onus should be on the requester of the change to
demonstrate that it is necessary
Not on the cluster administrators to demonstrate that it is unwise.
Management must support the process
Defend cluster administrators against unreasonable requests or pressure
Not allow politics to affect a change's priority or schedule
Every change, even the minor ones, must follow the process
The cluster administrators must not sneak changes past the process
The notion that a change might be permitted without following the
process must be considered to be absurd

The alternative is that the process rapidly becomes a farce

© Copyright IBM Corporation 2004


Change Considerations
Every change must be carefully considered:
Is the change necessary?
How urgent is the change?
How important is the change? (not the same thing as urgent)
What impact does the change have on other aspects of the
cluster?
What is the impact if the change is not allowed to occur?
Are all of the steps required to implement the change clearly
understood and documented?
How is the change to be tested?
What is the plan for backing out the change if necessary?
Is the appropriate expertise available should problems
develop?
When is the change scheduled?
Have the users been notified?
Does the maintenance period include sufficient time for a full set
of backups prior to the change and sufficient time for a full
restore afterwards should the change fail testing?
© Copyright IBM Corporation 2004
Masking or Eliminating Planned Downtime
[Diagram: Continuous Availability is the combination of Continuous Operations (masking or elimination of planned downtime through change management) and High Availability (masking or elimination of unplanned downtime).]
© Copyright IBM Corporation 2004
Cluster Single Point of Control (C-SPOC)
C-SPOC provides facilities for performing common cluster-wide
administration tasks from any node within the cluster.
HACMP 4.x requires either /.rhosts or kerberos to be configured
on all nodes
HACMP 5.x uses the clcomdES socket based subsystem.
C-SPOC operations fail if any target node is down at the time of
execution or if the selected resource is not available.
Any change to a shared VGDA is synchronized automatically if
C-SPOC is used to change a shared LVM component.
C-SPOC uses a script parser called the Command Execution
Language (CEL).
[Diagram: an initiating node driving C-SPOC operations on the target nodes in the cluster.]

© Copyright IBM Corporation 2004


The Top-Level C-SPOC Menu

System Management (C-SPOC)

Move cursor to desired item and press Enter.

Manage HACMP Services


HACMP Communication Interface Management
HACMP Resource Group and Application Management
HACMP Log Viewing and Management
HACMP 5.2 -- HACMP File Collection Management
HACMP Security and Users Management
HACMP Logical Volume Management
HACMP Concurrent Logical Volume Management
HACMP Physical Volume Management

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding a User to the Cluster (1 of 2)

Add a User to the Cluster

Type or select a value for the entry field.


Press Enter AFTER making all desired changes.

[Entry Fields]
Select nodes by Resource Group [] +
*** No selection means all nodes! ***

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding a User to the Cluster (2 of 2)

Add a User to the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Select nodes by Resource Group adventure
*** No selection means all nodes! ***

* User NAME [danny]


User ID [500] #
ADMINISTRATIVE USER? false +
Primary GROUP [] +
Group SET [] +
ADMINISTRATIVE GROUPS [] +
Another user can SU TO USER? true +
SU GROUPS [ALL] +
HOME directory [/home/danny]
Initial PROGRAM []
[MORE...34]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Removing a User from the Cluster

Remove a User from the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Select nodes by Resource Group
*** No selection means all nodes! ***

* User NAME [paul] +


Remove AUTHENTICATION information? Yes +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Passwords in an HACMP Cluster

Passwords in an HACMP cluster

Move cursor to desired item and press Enter.

Change a User's Password in the Cluster


HACMP 5.2-- Change Current Users Password
HACMP 5.2-- Manage List of Users Allowed to Change Password
HACMP 5.2-- Modify System Password Utility

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding a Physical Disk to a Cluster

Add an SSA Logical Disk

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Name(s) to which disk is attached bondar,hudson +
Device type disk +
Disk Type hdisk
Disk interface ssar
Description SSA Logical Disk Driv>
Parent ssar
* CONNECTION address [] +
Location Label []
ASSIGN physical volume identifier yes +
RESERVE disk on open yes +
Queue depth [] +
Maximum Coalesce [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Managing Shared LVM Components

HACMP Logical Volume Management

Move cursor to desired item and press Enter.

Shared Volume Groups


Shared Logical Volumes
Shared File Systems
Synchronize Shared LVM Mirrors
Synchronize a Shared Volume Group Definition

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Creating a Shared Volume Group

Create a Shared Volume Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Names bondar,hudson
PVID 00055207bbf6edab 0000>
VOLUME GROUP name [bernhardvg]
Physical partition SIZE in megabytes 64 +
Volume group MAJOR NUMBER [207] #
HACMP 5.2-- Enable Cross-Site LVM Mirroring Verification false +

Warning :
Changing the volume group major number may result
in the command being unable to execute
successfully on a node that does not have the
major number currently available. Please check
for a commonly available major number on all nodes
before changing this setting.

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
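As the warning says, choose a major number that is free on every node. The AIX lvlstmajor command lists the
major numbers still available on a node; a quick check might look like this (the rsh loop assumes remote
command access is configured -- otherwise simply run lvlstmajor on each node in turn):

# for n in bondar hudson; do echo "--- $n ---"; rsh $n lvlstmajor; done
Pick a number that appears in every node's list (207 was chosen in this example).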

© Copyright IBM Corporation 2004


Discover, Add VG to a Resource Group

Extended Configuration

Move cursor to desired item and press Enter.

Discover HACMP-related Information from Configured Nodes


Extended Topology Configuration
Extended Resource Configuration
Extended Event Configuration
Extended Performance Tuning Parameters Configuration
Security and Users Configuration
Snapshot Configuration
Export Definition File for Online Planning Worksheets

Extended Verification and Synchronization


HACMP Cluster Test Tool

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do

© Copyright IBM Corporation 2004


Creating a Shared File System (1 of 2)
First create mirrored logical volumes for the filesystem and jfslog.
Do not forget to logform the jfslog logical volume.
Add a Shared Logical Volume

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[TOP] [Entry Fields]


Resource Group Name adventure
VOLUME GROUP name bernhardvg
Reference node
* Number of LOGICAL PARTITIONS [200] #
PHYSICAL VOLUME names
Logical volume NAME [norbertfs]
Logical volume TYPE [jfs]
POSITION on physical volume middle +
RANGE of physical volumes minimum +
MAXIMUM NUMBER of PHYSICAL VOLUMES [] #
to use for allocation
Number of COPIES of each logical 2 +
partition
[MORE...11]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

The volume group must be online somewhere and listed in a resource group or it does
not appear in the pop-up list. © Copyright IBM Corporation 2004
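The logform step mentioned above is a one-time format of the jfslog logical volume and must be done before the
filesystem is first mounted. Assuming the log logical volume created for bernhardvg is called norbertloglv
(a name made up for this example), the command would be:

# logform /dev/norbertloglv (answer y when asked whether to destroy the log volume)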
Creating a Shared File System (2 of 2)
Then create the filesystem in the now "previously defined logical volume".

Add a Standard Journaled File System

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Node Names bondar,hudson
LOGICAL VOLUME name norbertfs
* MOUNT POINT [/norbert]
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 4096 +
Number of bytes per inode 4096 +
Allocation Group Size (MBytes) 8 +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


LVM Change Management
Historically, lack of LVM change management has been a major
cause of cluster failure during fallover. There are several methods
available to ensure LVM changes are correctly synced across the
cluster.
Manual updates to each node to synchronize the ODM records.
Lazy update.
C-SPOC synchronization of ODM records.
C-SPOC LVM operations - cluster enabled equivalents of the
standard SMIT LVM functions.
RSCT for Enhanced Concurrent Volume Groups

VGDA = ODM

© Copyright IBM Corporation 2004


LVM Changes, Manual
To perform manual changes the Volume Group must be varied on
to one of the nodes.
1. Make necessary changes to the volume group or filesystem.
2. Unmount filesystems and varyoff the vg.
On all the other nodes that share the volume group:
1. Export the volume group from the ODM.
2. Import the information from the VGDA.
3. Change the auto vary on flag.
4. Correct the permissions and ownerships on the logical volumes as required.
5. Repeat on all other nodes.

On the node where the volume group is varied on:
#chfs -a size=+8192 /sharedfs
#unmount /sharedfs
#varyoffvg sharedvg

On each of the other nodes that share the volume group:
#importvg -V123 -L sharedvg hdisk3
#chvg -an sharedvg
#varyoffvg sharedvg

© Copyright IBM Corporation 2004


LVM Changes, Lazy Update
At fallover time, lazy update compares the time stamp value in the
VGDA with one stored in /usr/sbin/cluster/etc/vg/<vgname>. If the time
stamps are the same, then the varyonvg proceeds.
If the timestamps do not agree, then HACMP does the export/import cycle
similar to a manual update.
NOTE: HACMP does change the VG auto vary on flag AND it preserves
permissions and ownership of the logical volumes.

[Diagram: two clocks -- the timestamp in the VGDA compared with the timestamp stored in /usr/sbin/cluster/etc/vg/<vgname>.]

© Copyright IBM Corporation 2004


LVM Changes, C-SPOC Synchronization
Manually make your change to the LVM on one node.
Use C-SPOC to propagate the changes to all nodes in the resource
group.
This is only available for volume groups that are inactive
everywhere in the cluster (not VARYed on)
Downtime is experienced for the volume group.

[Diagram: update the VG constructs on one node, then use C-SPOC syncvg -- C-SPOC updates the ODM and the timestamp file on the other nodes.]

© Copyright IBM Corporation 2004


The Best Method: C-SPOC LVM Changes

Journaled File Systems

Move cursor to desired item and press Enter.

Add a Journaled File System


Add a Journaled File System on a Previously Defined Logical Volume
List All Shared File Systems
Change / Show Characteristics of a Shared File System
Remove a Shared File System

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


LVM Changes, Select Your Filesystem

Journaled File Systems

Move cursor to desired item and press Enter.

Add a Journaled File System


Add a Journaled File System on a Previously Defined Logical Volume
List All Shared File Systems
Change / Show Characteristics of a Shared File System
Remove a Shared File System

+--------------------------------------------------------------------------+
¦ File System Name ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ # Resource Group File System ¦
¦ adventure /norbert ¦
¦ discovery /ron ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Update the Size of a Filesystem

Change/Show Characteristics of a Shared File System in the Cluster

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group Name discovery
File system name /ron
NEW mount point [/ron]
SIZE of file system [4000000]
Mount GROUP []
Mount AUTOMATICALLY at system restart? no +
PERMISSIONS read/write +
Mount OPTIONS [] +
Start Disk Accounting? no +
Fragment Size (bytes) 4096
Number of bytes per inode 4096
Compression algorithm no

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


HACMP Resource Group Operations

HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.

Bring a Resource Group Online


Bring a Resource Group Offline
Move a Resource Group to Another Node

Suspend/Resume Application Monitoring


Application Availability Analysis

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do

© Copyright IBM Corporation 2004


Priority Override Location (POL)
Assigned during a resource group move operation.
The destination node for a resource group online, offline or move
request becomes the resource group's POL.
Remains in effect until:
A move to "Restore_Node_Priority_Order" is done
Cluster is restarted (unless option chosen to persist)
HACMP 4.x has the notion of a sticky location which is similar to the
notion of a persistent POL.
Can be viewed with the command:
/usr/es/sbin/cluster/utilities/clRGinfo -p

*This foil describes how priority override locations work for nonconcurrent resource
groups. See the HACMP 5.2 Administration and Troubleshooting Guide
(SC-23-4862-03) for information on how priority override locations work for concurrent
access resource groups

© Copyright IBM Corporation 2004


Moving a Resource Group

Move a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group to be Moved adventure
Destination Node hudson
Persist Across Cluster Reboot? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Selecting Destination

HACMP Resource Group and Application Management

Move cursor to desired item and press Enter.


lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk
x Select a Destination Node x
x x
x Move cursor to desired item and press Enter. Use arrow keys to scroll. x
x x
x # To choose the highest priority available node for the x
x # resource group, and to remove any Priority Override Location x
x # that is set for the resource group, select x
x # "Restore_Node_Priority_Order" below. x
x Restore_Node_Priority_Order x
x x
x # To choose a specific node, select one below. x
x # x
x # Node Site x
x # x
x halifax x
x x
x F1=Help F2=Refresh F3=Cancel x
x Esc+8=Image Esc+0=Exit Enter=Do x
F1x /=Find n=Find Next x
Esmqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj

© Copyright IBM Corporation 2004


Taking a Resource Group Offline

Bring a Resource Group Offline

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group to Bring Offline adventure
Node On Which to Bring Resource Group Offline bondar
Persist Across Cluster Reboot? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Bring a Resource Group Back Online

Bring a Resource Group Online

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Resource Group to Bring Online adventure
Destination Node bondar
Persist Across Cluster Reboot? false +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Log Files Generated by HACMP
/usr/es/adm/cluster.log "High level view" of cluster activity.

/usr/es/sbin/cluster/history/cluster.mmddyyyy Cluster history files generated daily.


/tmp/cspoc.log Generated by C-SPOC commands.
/tmp/dms_loads.out Stores log messages every time HACMP loads
the deadman switch kernel extension.
/var/hacmp/clverify/clverify.log Contains verbose messages from clverify
(cluster verification utility).
/tmp/emuhacmp.out Output of emulated events.
/tmp/hacmp.out /tmp/hacmp.out.<1-7> Output of today's HACMP event scripts.
AIX error log All sorts of stuff!
/var/ha/log/topsvcs Tracks execution of topology services daemon.
/var/ha/log/grpsvcs Tracks execution of group services daemon.
/var/ha/log/grpglsm Tracks execution of grpglsm daemon.
/tmp/clstrmgr.debug Tracks internal execution of the cluster
manager.
/var/hacmp/clcomd/clcomd.log Tracks activity of clcomd.
/var/hacmp/clcomd/clcomddiag.log Tracks more detailed activity of clcomd when
tracing is turned on.
/var/adm/clavan.log Output of application availability analysis tool.
HACMP 5.2 logs in /var/hacmp/log/:
clconfigassist.log - Two-Node Cluster Configuration Assistant
clutils.log - Generated by utilities and file propagation
cl_testtool.log - Generated by the Cluster Test Tool
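
During testing it is usually enough to follow the two busiest of these logs; a minimal sketch (the exact failure strings can vary slightly between HACMP releases):

# On the node running the event, follow the detailed event log
tail -f /tmp/hacmp.out
# In a second window, follow the high-level view
tail -f /usr/es/adm/cluster.log
# Afterwards, scan for failed events
grep -i "EVENT FAILED" /tmp/hacmp.out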

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
Using C-SPOC reduces the likelihood of an outage by reducing the likelihood that you will make
a mistake.

2. True or False?
C-SPOC reduces the need for a change management process.

3. C-SPOC cannot do which of the following administration tasks?


a. Add a user to the cluster.
b. Change the size of a filesystem.
c. Add a physical disks to the cluster.
d. Add a shared volume groups to the cluster.
e. Synchronize existing passwords.
f. None of the above.

4. True or False?
It does not matter which node in the cluster is used to initiate a C-SPOC operation.

5. Which log file provides detailed output on HACMP event script execution?
a. /tmp/clstrmgr.debug
b. /tmp/hacmp.out
c. /var/adm/cluster.log

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Using C-SPOC reduces the likelihood of an outage by reducing the likelihood that you will make
a mistake.

2. True or False?
C-SPOC reduces the need for a change management process.

3. C-SPOC cannot do which of the following administration tasks?


a. Add a user to the cluster.
b. Change the size of a filesystem.
c. Add a physical disk to the cluster.
d. Add a shared volume group to the cluster.
e. Synchronize existing passwords.
f. None of the above (e was the correct answer in previous releases)

4. True or False?
It does not matter which node in the cluster is used to initiate a C-SPOC operation.

5. Which log file provides detailed output on HACMP event script execution?
a. /tmp/clstrmgr.debug
b. /tmp/hacmp.out
c. /var/adm/cluster.log

© Copyright IBM Corporation 2004


Unit Summary

Having completed this unit, you should be able to:


Understand the need for change management when using
HACMP
Understand the benefits and capabilities of C-SPOC
Perform routine administrative changes using C-SPOC
LVM
Users
Disk
Perform RGmove operations

© Copyright IBM Corporation 2004


Welcome to:
Dynamic Reconfiguration

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives
After completing this unit, you should be able to:
Understand Dynamic Automatic Reconfiguration Events (DARE)
Make changes to cluster topology and resources in an active
cluster
Use snapshot to change a cluster configuration

© Copyright IBM Corporation 2004


Dynamic Reconfiguration
HACMP provides a facility that allows changes to cluster
topology and resources to be made while the cluster is active. This
facility is known as DARE or, to give it its full name, Dynamic
Automatic Reconfiguration Event. This requires three copies of the
HACMP ODM.

Default Configuration Directory


DCD which is updated by SMIT/command line
/etc/objrepos

Staging Configuration Directory


SCD which is used during reconfiguration
/usr/es/sbin/cluster/etc/objrepos/staging

Active Configuration Directory


ACD from which the clstrmgr reads the cluster configuration
/usr/es/sbin/cluster/etc/objrepos/active
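
As a quick sanity check, the three locations can be listed from the command line; a minimal sketch (the staging directory is normally empty, or absent, except while a synchronization is in progress):

# List the three copies of the HACMP ODM used by DARE
ls -d /etc/objrepos
ls -d /usr/es/sbin/cluster/etc/objrepos/staging
ls -d /usr/es/sbin/cluster/etc/objrepos/active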

© Copyright IBM Corporation 2004


What Can DARE Do?
DARE allows changes to be made to most cluster topology
and nearly all resource group components without the need to stop
HACMP, take the application offline or reboot a node. All changes
must be synchronized in order to take effect.
Here are some examples of the tasks that DARE can complete for
Topology and Resources without having to bring HACMP down.

Topology Changes
Adding or removing cluster nodes
Adding or removing networks
Adding or removing communication
interfaces or devices
Swapping a communication interface's IP
address
Resource Changes
All resources can be changed

© Copyright IBM Corporation 2004


What Limitations Does DARE Have?
DARE cannot change all cluster topology and resource group
components without the need to stop HACMP, take the application
offline or reboot a node.
Here are some examples that require a stop and restart of HACMP
for the change to be made.

Topology Changes
Change the name of the cluster
Change the cluster ID*
Change the name of a cluster node
Change a communication interface attribute
Changing whether or not a network uses
IPAT via IP aliasing or via IP replacement
Change the name of a network module*
Add a network interface module*
Removing a network interface module*
Resource Changes
Change the name of a resource group
Change the name of an application server
Change the node relationship

DARE cannot run unless all cluster nodes are at the same HACMP level
© Copyright IBM Corporation 2004
So How Does DARE Work?
DARE uses the three separate copies of the ODM in order to
allow changes to be propagated to all nodes whilst the cluster is
active.
1. Change topology or resources in SMIT (the change is written to the DCD).
2. Synchronize topology or resources in SMIT (the configuration is copied to the SCD on each node).
3. A snapshot is taken of the current ACD.
4. The cluster manager reads the new ACD and refreshes.
5. The SCD is deleted.

© Copyright IBM Corporation 2004


Verifying and Synchronizing (Standard)

Initialization and Standard Configuration

Move cursor to desired item and press Enter.

Two-Node Cluster Configuration Assistant


Add Nodes to an HACMP Cluster
Configure Resources to Make Highly Available
Configure HACMP Resource Groups
Verify and Synchronize HACMP Configuration
Display HACMP Configuration

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Verifying and Synchronizing (Extended)

HACMP Verification and Synchronization (Active Cluster on a Local Node)

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

(When NODE DOWN -- HACMP 5.2) [Entry Fields]


* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?
* Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +
(When NODE UP)
* Emulate or Actual [Actual] +
Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Discarding Unwanted Changes

Problem Determination Tools

Move cursor to desired item and press Enter.

HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Cluster Test Tool
HACMP Trace Facility
HACMP Event Emulation
HACMP Error Notification

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Rolling Back from a DARE Operation

Apply a Cluster Snapshot

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Cluster Snapshot Name jami
Cluster Snapshot Description Cuz -- he did the lab>
Un/Configure Cluster Resources? [Yes] +
Force apply if verify fails? [No] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


What If DARE Fails?
If a dynamic reconfiguration should fail due to an unexpected
cluster event, then the staging configuration directory might still exist.
This prevents further changes being made to the cluster.
1. Change topology or resources in SMIT (the change is written to the DCD).
2. Synchronize topology or resources in SMIT (the configuration is copied to the SCD on each node).
3. A snapshot is taken of the current ACD.
4. The cluster manager reads the new ACD and refreshes.
5. The SCD is deleted.

If the reconfiguration fails partway through this sequence, the SCD is left behind
and must be removed before further changes can be synchronized.

© Copyright IBM Corporation 2004


Dynamic Reconfiguration Lock

Problem Determination Tools

Move cursor to desired item and press Enter.

HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Cluster Test Tool
HACMP Trace Facility
HACMP Event Emulation
HACMP Error Notification

Open a SMIT Session on a Node

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
DARE operations can be performed while the cluster is running.

2. Which operations can DARE not perform (select all that apply)?
a. Changing the name of the cluster.
b. Removing a node from the cluster.
c. Changing a resource in a resource group.
d. Change whether a network uses IPAT via IP aliasing or via IP replacement.

3. True or False?
It is possible to roll back from a successful DARE operation using an automatically
generated snapshot.

4. True or False?
Running a DARE operation requires three separate copies of the HACMP ODM.

5. True or False?
Cluster snapshots can be applied while the cluster is running.

6. What is the purpose of the dynamic reconfiguration lock?


a. To prevent unauthorized access to DARE functions.
b. To prevent further changes being made until a DARE operation has completed.
c. To keep a copy of the previous configuration for easy rollback.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
DARE operations can be performed while the cluster is running.

2. Which operations can DARE not perform (select all that apply)?
a. Changing the name of the cluster.
b. Removing a node from the cluster.
c. Changing a resource in a resource group.
d. Change whether a network uses IPAT via IP aliasing or via IP replacement.

3. True or False?
It is possible to roll back from a successful DARE operation using an automatically
generated snapshot.

4. True or False?
Running a DARE operation requires three separate copies of the HACMP ODM.

5. True or False?
Cluster snapshots can be applied while the cluster is running.

6. What is the purpose of the dynamic reconfiguration lock?


a. To prevent unauthorized access to DARE functions.
b. To prevent further changes being made until a DARE operation has completed.
c. To keep a copy of the previous configuration for easy rollback.

© Copyright IBM Corporation 2004


Unit Summary
Having completed this unit, you should be able to:
Understand Dynamic Automatic Reconfiguration Events (DARE)
Make changes to cluster topology and resources in an active
cluster
Use snapshot to change a cluster configuration

© Copyright IBM Corporation 2004


Welcome to:
Integrating NFS into HACMP

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives
After completing this unit, you should be able to:
Explain the concepts of Network File System (NFS)
Configure HACMP to support NFS
Understand why Volume Group major numbers must be unique
when using NFS with HACMP
Outline the NFS configuration parameters for HACMP

© Copyright IBM Corporation 2004


So, What Is NFS?
The Network File System (NFS) is a client/server application that lets
a computer user view and optionally store and update files on a
remote computer as though they were on the user's own computer.

NFS Client

NFS mount
NFS Server
read-write
NFS mount

read-only

JFS mount
read-only

NFS mount
NFS Client and Server

shared_vg
© Copyright IBM Corporation 2004
NFS Background Processes
NFS uses TCP/IP and a number of background processes to allow
clients to access disk resource on a remote server.
Configuration files are used on the client and server to specify export
and mount options.
NFS Server: runs multiple nfsd and mountd daemons; exports are defined in /etc/exports.
NFS Client: runs multiple biod daemons; mounts are defined in /etc/filesystems.
A node acting as both NFS client and server runs both sets of daemons.
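
A few standard AIX commands confirm that the pieces are in place; a minimal sketch (aservice is the service IP label used in the examples that follow):

# Check that the NFS daemons are running on the server
lssrc -g nfs
# Show what this node is currently exporting
exportfs
# From a client, show what the server named aservice is exporting
showmount -e aservice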

© Copyright IBM Corporation 2004


Combining NFS with HACMP
NFS exports can be made highly available by using the HACMP
resource group to specify NFS exports and mounts.

client system
# mount aservice:/fsa /a
The A resource group specifies:
client system sees /fsa as /a
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export

export /fsa aservice

A /fsa

# mount /fsa

Bondar Hudson
© Copyright IBM Corporation 2004
NFS Fallover with HACMP
In this scenario, the resource group moves to the surviving node in the cluster,
which exports /fsa. Clients see NFS server not responding during fallover.

client system

The A resource group specifies:


# mount aservice:/fsa /a
client system "sees" /fsa as /a
aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export

aservice export /fsa

/fsa A

# mount /fsa

Bondar Hudson
© Copyright IBM Corporation 2004
Configuring NFS for High Availability

Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[MORE...10] [Entry Fields]

Volume Groups [aaavg] +


Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +

Filesystems (empty is ALL for VGs specified) [/fsa] +


Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured true +
Filesystems/Directories to Export [/fsa] +
Filesystems/Directories to NFS Mount [] +
Network For NFS Mount [] +

[MORE...10]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Cross-mounting NFS Filesystems (1 of 3)
A filesystem configured in a resource group can be made
available to all the nodes in the resource group:
One node has the resource group and acts as an NFS server
Mounts the filesystem (/fsa)
Exports the filesystem (/fsa)
All nodes act as NFS clients
Mount the NFS filesystem (aservice:/fsa) onto a local mount point (/a)
aservice

/a /fsa /a

acts as an NFS server


(exports /fsa) acts as an NFS client
# mount aservice:/fsa /a
© Copyright IBM Corporation 2004
Cross-mounting NFS Filesystems (2 of 3)
When a fallover occurs, the role of NFS server moves with the
resource group.
All (surviving) nodes continue to be NFS clients.

aservice

/a /fsa /a

acts as an NFS server


(exports /fsa)
acts as an NFS client
# mount aservice:/fsa /a

© Copyright IBM Corporation 2004


Cross-mounting NFS Filesystems (3 of 3)
Here's a more detailed look at what is going on:

client system
# mount aservice:/fsa /a

The A resource group specifies: client system "sees" /fsa as /a


aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
/fsa as a NFS filesystem to mount on /a

aservice
export /fsa

A /fsa

# mount /fsa
# mount aservice:/fsa /a # mount aservice:/fsa /a
Bondar Hudson
© Copyright IBM Corporation 2004
Choosing the Network for Cross-mounts
In a cluster with multiple IP networks, it may be useful to specify
which network should be used by HACMP for cross-mounts.
This is usually done as a performance enhancement.

The A resource group specifies:


aservice as a service IP label resource
/fsa as a filesystem resource
/fsa as a NFS filesystem to export
/fsa as a NFS filesystem to mount on /a
net_ether_01 is the network for NFS mounts
net_ether_01

net_ether_02

aGservice aservice
export /fsa

A /fsa

# mount /fsa
# mount aservice:/fsa /a # mount aservice:/fsa /a
Bondar Hudson
© Copyright IBM Corporation 2004
Configuring HACMP for Cross-mounting

Change/Show All Resources and Attributes for a Resource Group

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[MORE...10] [Entry Fields]

Volume Groups [aaavg] +


Use forced varyon of volume groups, if necessary false +
Automatically Import Volume Groups false +

Filesystems (empty is ALL for VGs specified) [/fsa] +


Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems mounted before IP configured true +
Filesystems/Directories to Export [/fsa] +
Filesystems/Directories to NFS Mount [/a;/fsa] +
Network For NFS Mount [net_ether_01] +

[MORE...10]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Syntax for Specifying Cross-mounts

/a;/fsa

/a - where the filesystem should be mounted (the local mount point)
/fsa - what the filesystem is exported as

On each node in the resource group, HACMP effectively does:
# mount aservice:/fsa /a

© Copyright IBM Corporation 2004


Ensuring the VG Major Number Is Unique
Any Volume Group which contains a filesystem that is
offered for NFS export to clients or other cluster nodes must
use the same VG major number on every node in the cluster.
To display the current VG major numbers, use:
# ls -l /dev/*webvg
crw-rw---- 1 root system 201, 0 Sep 04 23:23 /dev/xwebvg
crw-rw---- 1 root system 203, 0 Sep 05 18:27 /dev/ywebvg
crw-rw---- 1 root system 205, 0 Sep 05 23:31 /dev/zwebvg

The command 'lvlstmajor' will list the available major numbers for each node in the cluster.
For example:
# lvlstmajor
43,45...99,101...

The VG major number may be set at the time of creating the VG using smit mkvg or by
using the -V flag on the importvg command. For example:
# importvg -V100 -y shared_vg_a hdisk2

C-SPOC will "suggest" a VG major number which is unique across the nodes
when it is used to create a shared volume group.
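
If you create the shared volume group from the command line rather than through C-SPOC, the major number can be supplied at creation time; a minimal sketch (the VG name, disk and major number are simply the ones used in the examples above):

# Pick a major number that lvlstmajor reports as free on every node
mkvg -V 100 -y shared_vg_a hdisk2
# Shared volume groups should not be activated automatically at reboot
chvg -a n shared_vg_a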

© Copyright IBM Corporation 2004


NFS with HACMP Considerations
Some points to note...

1 Resource groups which export NFS filesystems MUST implement


IPAT.
2 The Filesystems mounted before IP configured resource group
attribute must be set to true.
3 HACMP does not use /etc/exports and the default is to export
filesystems rw to the world. Specify NFS export options in
/usr/es/sbin/cluster/etc/exports if you want better control (AIX 5.2
provides an option to specify this path)
4 HACMP only preserves NFS locks if the NFS exporting resource
group has no more than two nodes.
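
If the default export options are not acceptable, the cluster exports file uses the same stanza syntax as /etc/exports; a minimal sketch (the client host names are made up for illustration):

# /usr/es/sbin/cluster/etc/exports
# Read by HACMP when it exports /fsa; same syntax as /etc/exports
/fsa -access=client1:client2,root=client1:client2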

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
HACMP supports all NFS export configuration options.

2. Which of the following is a special consideration when using HACMP to


NFS export filesystems? (select all that apply)
a. NFS exports must be read-write.
b. Secure RPC must be used at all times.
c. A cluster may not use NFS Cross-mounts if there are client systems accessing the NFS
exported filesystems.
d. A volume group which contains filesystems which are NFS exported must have the
same major device number on all cluster nodes in the resource group.

3. What does [/abc;/xyz] mean when specifying a directory to cross-mount?


a. /abc is the name of the filesystem which is exported and /xyz is where it should be
mounted at
b. /abc is where the filesystem should be mounted at and /xyz is the name of the filesystem
which is exported

4. True or False?
HACMP's NFS exporting feature only supports clusters of two nodes.

5. True or False?
IPAT is required in resource groups which export NFS filesystems.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?*
HACMP supports all NFS export configuration options.

2. Which of the following is a special consideration when using HACMP to


NFS export filesystems? (select all that apply)
a. NFS exports must be read-write.
b. Secure RPC must be used at all times.
c. A cluster may not use NFS Cross-mounts if there are client systems accessing the NFS
exported filesystems.
d. A volume group which contains filesystems which are NFS exported must have the
same major device number on all cluster nodes in the resource group.

3. What does [/abc;/xyz] mean when specifying a directory to cross-mount?


a. /abc is the name of the filesystem which is exported and /xyz is where it should be
mounted at
b. /abc is where the filesystem should be mounted at locally and /xyz is the name of the
filesystem which is exported

4. True or False?**
HACMP's NFS exporting feature only supports resource groups with two nodes.

5. True or False?
IPAT is required in resource groups which export NFS filesystems.
*/usr/es/sbin/cluster/etc/exports must be used to specify NFS export options if the default of
"read-write to the world" is not acceptable.
**Resource groups larger than two nodes which export NFS filesystems do not provide full NFS
functionality (for example, NFS file locks are not preserved across a fallover).
© Copyright IBM Corporation 2004
Unit Summary
Having completed this unit, you should be able to:
Explain the concepts of Network File System (NFS)
Configure HACMP to support NFS
Understand why Volume Group major numbers must be unique
when using NFS with HACMP
Outline the NFS configuration parameters for HACMP

© Copyright IBM Corporation 2004


Welcome to:
Cluster Customization

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives
After completing this unit, you should be able to:
Understand the requirements for application server start and stop
scripts
Perform basic cluster customizations
Change HACMP tuning parameters
Monitor other devices outside the control of HACMP

© Copyright IBM Corporation 2004


What Customization Is Necessary?
HACMP is not an out of the box solution to availability.
All clusters require some degree of customization. Here are some
examples of the customization you may need to perform:

Create application start and stop scripts.


Create pre- and post-event scripts.
Tune AIX for increased availability.
Extend your snapshot report to document application specific
information.
Extend cluster verification to test things beyond HA configuration.
Configure event notification to monitor devices beyond the control
of HACMP.

© Copyright IBM Corporation 2004


Application Start and Stop Scripts
All application start and stop scripts must meet the following
basic requirements:
Start and stop scripts should have all required environment
variables set.
The scripts must be present on all nodes used by the application
server.
Start scripts must be able to handle abnormal termination.
If startup fails, the scripts should not leave the cluster in an unstable
state.
Start scripts must check for any dependent processes.
Start scripts must be able to start any required dependent processes
Scripts must declare the shell on the first line (for example, #!/bin/ksh)
Notes:
1. Start and Stop scripts do not have to contain the same commands on all
nodes, thus allowing for different application start up and shutdown sequences
on a node by node basis.
2. HACMP 5.2 provides a file collection facility that can be used to keep your
Start and Stop scripts in sync across the nodes in the resource group.
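
A minimal start script skeleton along these lines is shown below; the application name, paths and the check for an already running instance are placeholders that you would replace with your own:

#!/bin/ksh
# Sample HACMP application server start script (sketch only)
export MYAPP_HOME=/usr/local/myapp      # hypothetical application directory

# Handle abnormal termination of a previous run: if an old instance is
# still running, there is nothing to do
if ps -e | grep -w myapp > /dev/null 2>&1
then
    print "myapp is already running"
    exit 0
fi

# Start any dependent processes first (placeholder)
# $MYAPP_HOME/bin/start_db

# Start the application itself; do not leave the cluster unstable on failure
$MYAPP_HOME/bin/myapp
if [ $? -ne 0 ]
then
    print "myapp failed to start" | mail -s "HACMP: myapp start failed" root
    exit 1
fi
exit 0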
© Copyright IBM Corporation 2004
Pre- / Post-Events and Notify Commands
HACMP allows a pre- and post-event script to be defined for each of the HACMP
event scripts. These execute immediately before (pre) and after (post) the
HACMP event.
Processing flow for each HACMP event (the Event Manager runs the event via clcallev):

Notify Command -> Pre-Event Script (1..n) -> HACMP Event -> Post-Event Script (1..n) -> Notify Command

If the HACMP event does not return RC=0 and the Recovery Counter is greater than 0,
the Recovery Command is run and the event is retried; if it still fails, the event is
reported as failed. The pre-, post-, notify and recovery commands are defined in the
HACMP ODM classes.

© Copyright IBM Corporation 2004


Adding/Changing Cluster Events (1 of 3)

Extended Event Configuration

Move cursor to desired item and press Enter.

Configure Pre/Post-Event Commands


Change/Show Pre-Defined HACMP Events
Configure User-Defined Events
Configure Pager Notification Methods
Change/Show Time Until Warning

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding/Changing Cluster Events (2 of 3)

Add a Custom Cluster Event

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Cluster Event Name [stop_printq]
* Cluster Event Description [stop the print queues]
* Cluster Event Script Filename [/usr/local/cluster/events/stop_printq]

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Adding/Changing Cluster Events (3 of 3)

Change/Show Cluster Events

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]

Event Name node_down_local_complete

Description Script run after the >

* Event Command [/usr/es/sbin/cluster/>

Notify Command []
Pre-event Command [] +
Post-event Command [stop_printq] +
Recovery Command []
* Recovery Counter [0] #

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Recovery Commands
Recovery commands can be executed if an event script does not exit 0.

HACMP Event -> RC=0? If not, and the Recovery Counter is greater than 0, the
Recovery Command is run and the event is retried.

© Copyright IBM Corporation 2004


Adding/Changing Recovery Commands

Change/Show Cluster Events

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]

Event Name start_server

Description Script run to start a>

* Event Command [/usr/es/sbin/cluster/>

Notify Command []
Pre-event Command [] +
Post-event Command [] +
Recovery Command [/usr/local/bin/recover]
* Recovery Counter [3] #

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


The HACMP 5.2 Events
Primary events Secondary events
(called by clstmgrES recovery programs) (called by other events)
site_up, site_up_complete
node_up_local
site_down, site_down_complete
site_merge, site_merge_complete node_up_remote
node_down_local
node_up, node_up_complete
node_down_remote
node_down, node_down_complete
network_up, network_up_complete node_up_local_complete
node_up_remote_complete
network_down, network_down_complete
node_down_local_complete
swap_adapter, swap_adapter_complete
node_down_remote_complete
swap_address, swap_address_complete
acquire_aconn_service
fail_standby
join_standby acquire_service_addr
acquire_takeover_addr
fail_interface
start_server
join_interface
stop_server
rg_move, rg_move_complete
get_disk_vg_fs
rg_online
get_aconn_rs
rg_offline
release_service_addr
event_error
release_takeover_addr
config_too_long
release_vg_fs
reconfig_topology_start
release_aconn_rs
reconfig_topology_complete
swap_aconn_protocols
reconfig_resource_release
reconfig_resource_acquire releasing
acquiring
reconfig_resource_complete
rg_up
reconfig_configuration_dependency_acquire
rg_down
reconfig_configuration_dependency_complete
rg_error
reconfig_configuration_dependency_release
node_up_dependency rg_temp_error_state
rg_acquiring_secondary
node_up_dependency_complete
rg_up_secondary
node_down_dependency
node_down_dependency_complete rg_error_secondary
resume_appmon
migrate, migrate_complete
suspend_appmon
© Copyright IBM Corporation 2004
Points to Note
The execute bit must be set on all pre-, post-, notify and
recovery scripts.
Synchronization does not copy pre- and post-event script content
from one node to another.
You need to copy all your pre- and post-event scripts to all nodes.
Your pre- and post-event scripts must handle non-zero exit codes.
All scripts must have a header like:

#!/bin/ksh

Test your changes very carefully as a mistake is likely to cause a


fallover to abort.
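
As an illustration, a post-event script like the stop_printq example used in the SMIT screens earlier might look like the sketch below (only the name and path come from that example; the script body is an assumption):

#!/bin/ksh
# /usr/local/cluster/events/stop_printq  (sketch)
# Post-event script for node_down_local_complete: stop the print queues.
# Keep it short, set the execute bit, copy it to every node, and exit 0
# unless you really want HACMP to treat the event as failed.

stopsrc -s qdaemon > /dev/console 2>&1    # stop the AIX print spooler

exit 0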

© Copyright IBM Corporation 2004


Editing an HACMP Event Script (1 of 2)
It is not recommended that you modify an HACMP event script.
If you do, please note the following:

All HACMP event scripts are written in the Korn Shell.


All scripts are located in /usr/es/sbin/cluster/events.
HACMP event scripts are VERY complex as they must operate in
a wide variety of circumstances.
Be particularly careful about the event emulation mechanism
Do not interfere with it
Make sure your changes emulate it or do it as required
Consider changing the location of the edited event script as this
prevents the modified script from being overwritten by an HACMP
patch
Refer to Change/Show Cluster Event screen a few foils back

© Copyright IBM Corporation 2004


Editing an HACMP Event Script (2 of 2)
When changing an HACMP event script :

1. Copy the source event script to a different directory.


2. Edit the Event Script path in the "Change/Show Cluster Events"
HACMP smit panel.
3. Ideally, put any new code into separate script which is called from
within the HACMP event, rather than edit the HACMP event script
directly.
4. Thoroughly document any changes that you make to the HACMP
event script.
5. Thoroughly test the HACMP event script behavior in all
fallover scenarios.

© Copyright IBM Corporation 2004


Performance Tuning HACMP

Extended Performance Tuning Parameters Configuration

Move cursor to desired item and press Enter.

Change/Show I/O pacing


Change/Show syncd frequency

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Enabling I/O Pacing
The HACMP documentation recommends a high water mark of 33
and a low water mark of 24.
Change/Show I/O pacing

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
HIGH water mark for pending write I/Os per file [33] +#
LOW water mark for pending write I/Os per file [24] +#

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Changing the Frequency of syncd
The HACMP documentation recommends a value of 10.

Change/Show syncd frequency

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
syncd frequency (in seconds) [10] #

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Other HACMP Customizations
Dynamic node priority
Custom disk methods
Custom snapshot methods
Custom verification methods
Application monitoring
Application availability analysis tool
File Collection
Configure Pager Notification Methods
Change/Show Time Until Warning
Error notification

© Copyright IBM Corporation 2004


Extending Protection to Other Devices
HACMP provides smit screens for managing the AIX error
logging facility's error notification mechanism.

Disk adapters Disks

CPU

Other shared devices


Disk subsystems

© Copyright IBM Corporation 2004


Error Notification within smit

HACMP Error Notification

Move cursor to desired item and press Enter.

Configure Automatic Error Notification


Add a Notify Method
Change/Show a Notify Method
Remove a Notify Method
Emulate Error Log Entry

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Configuring Error Notification

HACMP Error Notification

Move cursor to desired item and press Enter.

Configure Automatic Error Notification


Add a Notify Method
Change/Show a Notify Method
Remove a Notify Method
Emulate Error Log Entry

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Configuring Automatic Error Notification

Configure Automatic Error Notification

Move cursor to desired item and press Enter.

List Error Notify Methods for Cluster Resources


Add Error Notify Methods for Cluster Resources
Remove Error Notify Methods for Cluster Resources

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Listing Automatic Error Notification

COMMAND STATUS

Command: OK stdout: yes stderr: no

Before command completion, additional instructions may appear below.

[TOP]
bondar:
bondar: HACMP Resource Error Notify Method
bondar:
bondar: hdisk0 /usr/es/sbin/cluster/diag/cl_failover
bondar: scsi0 /usr/es/sbin/cluster/diag/cl_failover
bondar: hdisk11 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk5 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk9 /usr/es/sbin/cluster/diag/cl_logerror
bondar: hdisk7 /usr/es/sbin/cluster/diag/cl_logerror
bondar: ssa0 /usr/es/sbin/cluster/diag/cl_logerror
hudson:
hudson: HACMP Resource Error Notify Method
[MORE...9]

F1=Help F2=Refresh F3=Cancel F6=Command


F8=Image F9=Shell F10=Exit /=Find
n=Find Next

© Copyright IBM Corporation 2004


Adding Error Notification Methods

Add a Notify Method

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Notification Object Name []
* Persist across system restart? No +
Process ID for use by Notify Method [] +#
Select Error Class None +
Select Error Type None +
Match Alertable errors? None +
Select Error Label [] +
Resource Name [All] +
Resource Class [All] +
Resource Type [All] +
* Notify Method []

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
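
The Notify Method field accepts any command or script; a minimal sketch of one that mails the matching error log entry to root (the path is a placeholder; $1 is the sequence number that AIX error notification passes to the method):

#!/bin/ksh
# /usr/local/cluster/notify/mail_error  (sketch)
# Called by AIX error notification; $1 is the error log sequence number
errpt -a -l $1 | mail -s "Error notification on $(hostname)" root
exit 0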

© Copyright IBM Corporation 2004


Emulating Errors (1 of 2)

HACMP Error Notification

Mo+--------------------------------------------------------------------------+
¦ Error Label to Emulate ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ [TOP] ¦
¦ SSA_DISK_ERR3 SSA_DISK_DET_ER ¦
¦ LVM_SA_QUORCLOSE bernhardvg ¦
¦ LVM_SA_QUORCLOSE xwebvg ¦
¦ LVM_SA_QUORCLOSE rootvg ¦
¦ SERVICE_EVENT diagela_SE ¦
¦ FCP_ARRAY_ERR6 fcparray_err ¦
¦ DISK_ARRAY_ERR2 ha_hdisk0_0 ¦
¦ DISK_ARRAY_ERR3 ha_hdisk0_1 ¦
¦ DISK_ARRAY_ERR5 ha_hdisk0_2 ¦
¦ DISK_ERR2 ha_hdisk0_3 ¦
¦ [MORE...39] ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


Emulating Errors (2 of 2)

Emulate Error Log Entry

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Error Label Name LVM_SA_QUORCLOSE
Notification Object Name xwebvg
Notify Method /usr/es/sbin/cluster/>

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


What Will This Cause?
# errpt -a
---------------------------------------------------------------------------
LABEL: LVM_SA_QUORCLOSE
IDENTIFIER: CAD234BE

Date/Time: Fri Sep 19 13:58:05 MDT


Sequence Number: 469
Machine Id: 000841564C00
Node Id: bondar
Class: H
Type: UNKN
Resource Name: LVDD
Resource Class: NONE
Resource Type: NONE
Location:

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
00C9 0000
QUORUM COUNT
0
ACTIVE COUNT
0
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------

... and a fallover of the discovery resource group to hudson.


© Copyright IBM Corporation 2004
Selective Fallover for Resource Groups
Selective Fallover is an automatically launched function of
HACMP which attempts to selectively move only the resource group
that has been affected by an individual resource failure to another
node in the cluster, rather than moving all resource groups.
Selective Fallover allows you to selectively provide recovery for
individual resource groups that are affected by failures of specific
resources.
Selective fallover handles the following failures:
Service IP labels
Network Interface Failures
Local Network Failures
Applications
Communication Links
Volume groups

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
HACMP event scripts are binary executables and cannot be easily modified.

2. Which of the following runs if an HACMP event script fails? (select all that
apply)
a. Pre-event scripts.
b. Post-event scripts.
c. error notification methods.
d. recovery commands.
e. notify methods.

3. What are the recommended values for I/O pacing high and low water
marks?
a. 33,48
b. 48,33
c. 33,24
d. 24,33

4. True or False?
All clusters must be tuned for high availability.

5. True or False?
Writing error notification methods is a normal part of configuring a cluster.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
HACMP event scripts are binary executables and cannot be easily modified.

2. Which of the following runs if an HACMP event script fails? (select all that
apply)
a. Pre-event scripts.
b. Post-event scripts.
c. error notification methods.
d. recovery commands.
e. notify methods.

3. What are the recommended values for I/O pacing high and low water
marks?
a. 33,48
b. 48,33
c. 33,24
d. 24,33

4. True or False? *
All clusters must be tuned for high availability.

5. True or False?
Writing error notification methods is a normal part of configuring a cluster.

*The HACMP documentation recommends that you tune the I/O pacing and syncd parameters.
You may experience "difficulties" getting support until you do this.
© Copyright IBM Corporation 2004
Unit Summary
Having completed this unit, you should be able to:
Understand the requirements for application server start and stop
scripts
Perform basic cluster customizations
Change HACMP tuning parameters
Monitor other devices outside the control of HACMP

© Copyright IBM Corporation 2004


Welcome to:
Problem Determination
and Recovery

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 3.0.3
Unit Objectives
After completing this unit, you should be able to:
Understand why HACMP can fail
Identify configuration and administration errors
Understand why the Dead Man's Switch invokes
Know when the System Resource Controller kills a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

© Copyright IBM Corporation 2004


Remember This?
User error is the primary cause of unplanned downtime.

Planned downtime: Hardware upgrades, Repairs, Software updates, Backups, Testing, Development
Unplanned downtime: Administrator Error (the #1 cause!), Application failure, Hardware faults, Environmental Disasters

Planned downtime: 85%
Other unplanned downtime: 14%
Hardware failure: 1%

High-availability solutions should reduce both


planned and unplanned downtime.
© Copyright IBM Corporation 2004
Why Do Good Clusters Turn Bad?
Common reasons why HACMP fails:
A poor cluster design and lack of thorough planning.
Basic TCP/IP and LVM configuration problems.
HACMP cluster topology and resource configuration problems.
Absence of change management discipline in a running cluster.
Lack of training for staff administering the cluster.

X X A

Halifax Vancouver
© Copyright IBM Corporation 2004
Test Your Cluster before Going Live!
Careful testing of your production cluster before going live reduces
the risk of problems later.
An example test plan might include:

Test Item Checked

Node Fallover
Network Adapter Swap
IP Network Failure
SSA Adapter Failure
Disk Failure
Clstrmgr Killed
Serial Network Failure
SCSI Adapter for rootvg Failure
Application Failure
Partitioned Cluster

© Copyright IBM Corporation 2004


Tools to Help You Diagnose a Problem
The vast majority of problems that you encounter with
HACMP are related to IP, LVM and cluster configuration errors.
Automatic Cluster Configuration Monitoring %
Automatic Error Correction during Verify %
HACMP Cluster Test Tool %
Emulation Tools
HACMP Administration and Troubleshooting manual
HACMP log files
hacmp.out, cluster.log, clverify.log, clstrmgr.debug
Simple AIX and HACMP commands:
df -k mount lsfs netstat -i
no -a lsdev lsvg [<ecmvg>] lsvg -o
lslv lspv clshowres clfindres
clRGinfo* cltopinfo* ifconfig

% HACMP 5.2 * HACMP 5.x


© Copyright IBM Corporation 2004
Automatic Cluster Configuration Monitoring
HACMP Verification

Move cursor to desired item and press Enter.

Verify HACMP Configuration


Configure Custom Verification Method
Automatic Cluster Configuration Monitoring

Automatic Cluster Configuration Monitoring

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Automatic cluster configuration verification Enabled +
Node name Default +
* HOUR (00 - 23) [00] +#

© Copyright IBM Corporation 2004


Automatic Error Correction During Verify
HACMP Verification and Synchronization

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Verify, Synchronize or Both [Both] +
* Automatically correct errors found during [No] +
verification?
* Force synchronization if verification fails? [No] +
* Verify changes only? [No] +
* Logging [Standard] +

F1=Help F2=Refresh F3=Cancel F4=List


Esc+5=Reset Esc+6=Command Esc+7=Edit

© Copyright IBM Corporation 2004


HACMP Cluster Test Tool
HACMP Cluster Test Tool

Move cursor to desired item and press Enter.

Execute Automated Test Procedure


Execute Custom Test Procedure

F1=Help F2=Refresh F3=Cancel Esc+8=Image


Esc+9=Shell Esc+0=Exit Enter=Do

THESE TESTS ARE DISRUPTIVE

© Copyright IBM Corporation 2004


Event Emulation Tools
HACMP provides tools to emulate common cluster events.
Only certain events are emulated.
Multiple events cannot be emulated.
Each event runs in isolation; results do not impact upon the next
emulated event.
The results are logged in /tmp/emuhacmp.out.
If an event fails when emulated, it's not going to work when it
happens for real.
Failed/Joined Network
Swap Adapter
Failed/Joined Standby

Failed/Joined Node
A

Halifax Vancouver
© Copyright IBM Corporation 2004
Emulating a Network Down Event

Emulate Network Down Event

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Network Name [net_ether_01] +
Node Name [] +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Emulating Common Events

HACMP Event Emulation

Move cursor to desired item and press Enter.

Node Up Event
Node Down Event
Network Up Event
Network Down Event
Fail Standby Event
Join Standby Event
Swap Adapter Event

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Emulating a Node Down Event

Emulate Node Down Event

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* Node Name [hudson] +
* Node Down Mode graceful +

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Checking Cluster Processes (1 of 2)

# lssrc -g cluster
Subsystem Group PID Status
clstrmgrES cluster 21032 active
clsmuxpdES cluster 17196 active
clinfoES cluster 21676 active

clstrmgr
Mandatory

Cluster
clsmuxpd Components
clinfo
Mandatory Optional

© Copyright IBM Corporation 2004


Checking Cluster Processes (2 of 2)
Check rsct, clcomd, rmc subsystems

#
# lssrc -g topsvcs
Subsystem Group PID Status
topsvcs topsvcs 12230 active
# lssrc -g grpsvcs
Subsystem Group PID Status
grpsvcs grpsvcs 11736 active
grpglsm grpsvcs 12742 active
# lssrc -g emsvcs
Subsystem Group PID Status
emsvcs emsvcs 12934 active
emaixos emsvcs 13184 active
# lssrc -s clcomdES
Subsystem Group PID Status
clcomdES clcomdES 13420 active
# lssrc -s ctrmc
Subsystem Group PID Status
ctrmc rsct 2954 active
#

© Copyright IBM Corporation 2004


Testing Your Network Connections
To test your IP network:
Ping from service to service and standby to standby
Check the entries in the routing table on each node
netstat -rn
Use the host command to resolve each IP label to its address
For example, host vancouver_service
Use netstat -i and ifconfig to check addresses and mask

To test your non-IP networks:


RS232
On one node type stty < /dev/tty#
This will hang at the command line
Move to the other node which shares the RS232 cable and type stty < /dev/tty#
This causes the tty settings to be displayed on both nodes
Target mode SSA network:
On one node which shares the TMSSA network type cat < /dev/tmssa#.tm
The value of # is the node ID of the target (or receiving) SSA logical router
On the other node type echo test > /dev/tmssa#.im
The value of # in this case is the node ID of the source (or sending) SSA logical router
Heartbeat over disk:
On one node /usr/sbin/rsct/bin/dhb_read -p hdiskx -r (receive is done first)
On another node /usr/sbin/rsct/bin/dhb_read -p hdiskx -t
Do not perform these tests while HACMP is running.
© Copyright IBM Corporation 2004
Dead Man's Switch (DMS Timeout)
If one of your cluster nodes crashes with an 888 LED code,
then you may have experienced a DMS timeout.
Under what circumstances will a DMS timeout occur?
Excessive I/O traffic caused the clstrmgr to be starved of CPU.

Proving that DMS timeout crashed a node.


Copy the system dump to a file.
Run kdb on the dump file.
Run the stat subcommand and look for 'HACMP dms timeout
halting...'
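
A minimal sketch of that check (how you copy the dump out of the dump device varies by configuration, so that step is only indicated in a comment):

# Confirm that a dump was taken and note where it went
sysdumpdev -L
# Copy the dump to a file (for example with snap), then examine it:
kdb <dump_file> /unix
(0)> stat
# Look for "HACMP dms timeout halting ..." in the output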

© Copyright IBM Corporation 2004


Avoiding Dead Man's Switch Timeouts
To avoid DMS timeout problems in the first place, carry out the
following in order:

1. Isolate the cause of excessive I/O traffic and fix it, and if that does not
work...
2. Turn on I/O pacing, and if that does not work...
3. Increase the frequency of the syncd, and if that does not work...
4. Reduce the failure detection rate for the slowest network, and if that
does not work...
5. Buy a bigger machine

© Copyright IBM Corporation 2004


SRC Halts a Node
If one of your nodes halts for no apparent reason then you
probably need to change your root password.

Under what circumstances does the SRC halt a node?


The cluster manager was killed or has crashed.
Proving that SRC halted a node:
Check the AIX error log
Look for abnormal termination of clstrmgr daemon
To avoid SRC halts in the first place:
The cluster manager is not prone to abnormal termination so the most
likely cause of SRC halts is an error made by a user of the root
account.
Don't give untrained staff access to the root password.

© Copyright IBM Corporation 2004


Partitioned Clusters and Node Isolation
If you get the message Diagnostics Group Shutting Down
Partition then you have suffered either a partitioned cluster or node
isolation.
Sent when a partitioned cluster or node isolation is detected
Occurs when heartbeats are received from a node that was diagnosed
as failed
Also occurs when HACMP ODM configuration is not the same on a
joining node as nodes already active in the cluster
Also occurs when two clusters with the same ID appear in the same
logical network
A surviving node sends the DGSP message to the rogue recovering
node
The rogue recovering or joining node is halted
Proving that DGSP caused a node to halt:
Look in the /tmp/hacmp.out file

© Copyright IBM Corporation 2004


Avoiding Partitioned Clusters
Install and configure a non IP (serial) network in your cluster.
Consider installing a second non-IP network
Partitioned clusters can lead to data divergence
You do NOT want to experience data divergence
Check your non-IP networks before going live
Disable the non-IP network and verify that HACMP notices
Reconnect the non-IP network and verify that HACMP notices
Watch for non-IP network failures in HACMP log files
Do not segment your cluster's IP networks
Avoid multiple switches
Except in carefully designed highly available network configurations
Avoid bridges

© Copyright IBM Corporation 2004


Please check event status Message
Watch for the message
Cluster <clustername> has been running event <eventname>
for # seconds. Please check event status.

It means that an event script has failed, hung or is taking too long.
HACMP stops processing events until you resolve this issue

© Copyright IBM Corporation 2004


How Long is Too Long?
Timeout value for fast events defaults to 3 minutes (180 seconds)
Timeout value for slow events also defaults to 3 minutes (those
which acquire/release resource groups)
The timeout actually used for slow events is the SUM of fast + slow
Use smit cm_time_before_warning to change these defaults

© Copyright IBM Corporation 2004


Changing the Timeouts

Change/Show Time Until Warning

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
Max. Event-only Duration (in seconds) [180] #
Max. Resource Group Processing Time (in seconds) [180] #

Total time to process a Resource Group event 6 minutes and 0 secon>


before a warning is displayed

NOTE: Changes made to this panel must be


propagated to the other nodes by
Verifying and Synchronizing the cluster

F1=Help F2=Refresh F3=Cancel F4=List


F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Recovering From an Event Script Failure
1. Make a note of the time at which the message first appears
in the /usr/es/adm/cluster.log file.
2. Open the /tmp/hacmp.out file and move to the point in time
recorded in step 1.
3. Work backwards through the /tmp/hacmp.out file until you find an
AIX error message.
4. Go a little further back in the /tmp/hacmp.out file as the first
message you encounter might not be the most important one.
5. Manually correct the problem so that you complete the event that
failed.
6. Use cluster recovery aids to "Recover from Script Failure".
7. Verify that the "Cluster <name> has been running ..." message is
no longer appearing.
8. Verify that the cluster is now working properly.
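
Steps 1 to 4 usually come down to a couple of grep commands; a sketch (the exact message strings can differ slightly between releases):

# When did the "Please check event status" warning start?
grep "has been running" /usr/es/adm/cluster.log
# Which event failed, and where in the detailed log?
grep -n "EVENT FAILED" /tmp/hacmp.out
# Then read /tmp/hacmp.out backwards from that point, looking for the
# first AIX error message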

© Copyright IBM Corporation 2004


Recovering From an Event Failure

Problem Determination Tools

Move cursor to desired item and press Enter.

HACMP Verification
View Current State
HACMP Log Viewing and Management
Recover From HACMP Script Failure
Restore HACMP Configuration Database from Active Configuration
Release Locks Set By Dynamic Reconfiguration
Clear SSA Disk Fence Registers
HACMP Trace Facility
+--------------------------------------------------------------------------+
¦ Select a Node ¦
¦ ¦
¦ Move cursor to desired item and press Enter. ¦
¦ ¦
¦ bondar ¦
¦ hudson ¦
¦ ¦
¦ F1=Help F2=Refresh F3=Cancel ¦
¦ F8=Image F10=Exit Enter=Do ¦
F1¦ /=Find n=Find Next ¦
F9+--------------------------------------------------------------------------+

© Copyright IBM Corporation 2004


A Troubleshooting Methodology
Save the log files from every available cluster node while they
are still available
Attempt to duplicate the problem
Approach the problem methodically
Distinguish between what you know and what you assume
Keep an open mind
Isolate the problem
Go from the simple to the complex
Make one change at a time
Stick to a few simple troubleshooting tools
Do not neglect the obvious
Watch for what the cluster is not doing
Keep a record of the tests you have completed

© Copyright IBM Corporation 2004


Contacting IBM for Support
Before contacting IBM about a support issue, collect the
following information:

Item Checked

EXACT error messages that appear in HACMP logs or on the console


Your cluster diagram (updated)
A snapshot of your current cluster configuration (not a photo)
Details of any customization performed to HACMP events
Details of current AIX, HACMP and application software levels
Details of any PTFs applied to HACMP or AIX on the cluster
The adapter microcode levels (especially for SSA adapters)
Cluster planning worksheets, with all components clearly labeled
A network topology diagram for the network as far as the users
Copies of all HACMP log files

© Copyright IBM Corporation 2004


Checkpoint
1. What is the most common cause of cluster failure?
a. Bugs in AIX or HACMP
b. Cluster administrator error
c. Marauding space aliens from another galaxy
d. Cosmic rays
e. Poor/inadequate cluster design

2. True or False?
Event emulation can emulate all cluster events.

3. If the cluster manager process should die, what will happen to the cluster
node?
a. It continues running but without HACMP to monitor and protect it.
b. It continues running AIX but any resource groups will fallover.
c. Nobody knows because this has never happened before.
d. The System Resource Controller sends an e-mail to root and issues a "halt -q".
e. The System Resource Controller sends an e-mail to root and issues a "shutdown -F".

4. True or False?
A non-IP network is strongly recommended. Failure to include a non-IP network can cause the
cluster to fail or malfunction in rather ugly ways.

5. (bonus question) my favorite graphic in the lower right hand corner of a foil
was: ____________________________________

© Copyright IBM Corporation 2004


Checkpoint Answers
1. What is the most common cause of cluster failure?
a. Bugs in AIX or HACMP
b. Cluster administrator error*
c. Marauding space aliens from another galaxy
d. Cosmic rays
e. Poor/inadequate cluster design*

2. True or False?
Event emulation can emulate all cluster events.

3. If the cluster manager process should die, what will happen to the cluster
node?
a. It continues running but without HACMP to monitor and protect it.
b. It continues running AIX but any resource groups will fallover.
c. Nobody knows because this has never happened before.
d. The System Resource Controller sends an e-mail to root and issues a "halt -q".
e. The System Resource Controller sends an e-mail to root and issues a "shutdown -F".

4. True or False?
A non-IP network is strongly recommended. Failure to include a non-IP network can cause the
cluster to fail or malfunction in rather ugly ways.

*The correct answer is almost certainly "cluster administrator error" although "poor/inadequate
cluster design" would be a very close second.
© Copyright IBM Corporation 2004
Unit Summary
Having completed this unit, you should be able to:
Understand why HACMP can fail
Identify configuration and administration errors
Understand why the Dead Man's Switch is invoked
Know when the System Resource Controller will kill a node
Isolate and recover from failed event scripts
Correctly escalate a problem to IBM support

© Copyright IBM Corporation 2004


Welcome to:
Documenting Your Cluster

© Copyright IBM Corporation 2004


3.1
Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit Objectives
After completing this unit, you should be able to:
Explain the importance of cluster planning
Describe the key cluster planning deliverables
Requirements document
Design document
Test plan
Documented operational procedures
Explain how the requirements, design and test plan documents
should be linked together
Use the export to planning worksheets feature of HACMP 5.2

© Copyright IBM Corporation 2004


Cluster Planning
The purpose of cluster planning is to:
Ensure that the cluster implementer understands what the users
want or need the cluster to do
Ensure that the cluster does what the users want/need it to do
Ensure that the cluster does what you intended it to do
The deliverables of the cluster planning process should be:
A requirements document describing what the users want/need the
cluster to do
A design document describing how the cluster is configured to allow it
to do what the users want or need it to do
A test plan describing how to verify that the cluster does what it is intended to do
Operational documentation

© Copyright IBM Corporation 2004


The Cluster Requirements Document
Describes what the users want or need the cluster to do:
Applications and services that the cluster must provide
Uptime requirements
How many 9s
Time-to-recovery requirements
Performance and resource requirements (as appropriate)
Transactions per second
Simultaneous users
Response time
Data volume
Network bandwidth and response time requirements
Requirements today, in six months, in a year, and so forth
Software licensing requirements
Are applications node locked?
Does vendor offer HA cluster discounts?
Budgetary goals, limits, expectations, and so forth

© Copyright IBM Corporation 2004


The Cluster Design Document (1 of 2)
Describes how the cluster does what the users want or need the
cluster to do:
How each highly available application is configured
How the environment is configured
Power, cooling, physical firewalls, physical security
How the hardware is configured
Diagrams
How AIX is configured
How the storage is configured
Diagrams
How the network is configured
Diagrams
How HACMP is configured
How the cluster notifies relevant people when a failure occurs
Who (job titles, not names) should be notified
Even more diagrams

© Copyright IBM Corporation 2004


The Cluster Design Document (2 of 2)
Some additional considerations:
Explain why decisions were made
Explain the cluster from the perspective of potential points of
failure:
Which single points of failure (for example, extended loss of building
power) are not addressed
Why they are not addressed
How, if known, they might be addressed
Which single points of failure are addressed by the cluster design and
how
Explain which parts of the cluster design are rigid (difficult to
change) and which parts are flexible
Relate design decisions back to cluster requirements

© Copyright IBM Corporation 2004


Application Configuration Description
How does the application recover automatically from
crashes?
Issues to watch for include:
Startup passwords (for example, SSL certificates on Web servers)
Applications which require GUI buttons to be clicked
Applications which can not reliably recover from a wide range of
crashes without human intervention
Location of:
Configuration files
Application binaries
Static data
Dynamic data
License keys
If node locked, how does the node locking work?
In sufficient detail to be sure that the license keys won't be an obstacle
after a fallover
Should be written in terms that both the application-support and the
cluster-implementation or support folks can understand
© Copyright IBM Corporation 2004
Hardware Configuration Description
What hardware?
Exact model numbers, features, adapter placement, and so forth.
Where will it go?
Which rack and where in the rack? (Is there room and power?)
What will it be connected to?
Cable distances, conduit routing, cable tray capacity requirements
and availability, and so forth.
Which ports?
What kinds of cable? (length, connectors, model numbers, and so
forth)
Detailed shared storage description
ESS (or other bulk storage box) requirements
Mirroring, RAID
Backup requirements, plans, procedures

© Copyright IBM Corporation 2004


AIX Configuration Description
Version, patch level, and so forth
Required filesets (beyond the basic OS installation)
Daemons/services
Which ones to run
How they are configured
If the default configuration is used, say so
Which ones to turn off
Tuning parameters
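
Most of this section can be captured directly from a running node (a sketch using standard AIX commands; redirect the output into the design document):

    oslevel -r ; instfix -i | grep ML       # AIX version and maintenance level
    lslpp -L                                # filesets installed beyond the base OS
    lssrc -a                                # daemons/subsystems and their current state
    no -a                                   # network tuning parameters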

© Copyright IBM Corporation 2004


Storage Configuration Description
For each volume group:
Statement of purpose (why does it exist?)
Name / size / physical location
Connectivity (which nodes need access to it?)
For each logical volume:
Statement of purpose
Name / size / volume group
Special considerations (for example, high traffic volume)
For each filesystem:
Statement of purpose
Mount point / size / type (JFS or JFS2)
Parameters (for example, blocks per inode for JFS filesystems)
LV name / JFS log LV name
Mount sequencing
Don't forget about each node's rootvg requirements (mirroring,
sizing, traffic volume, and such)
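
The as-built values for this section are easy to capture with standard LVM queries (a sketch; the volume group, logical volume, and filesystem names below are placeholders):

    lsvg -o ; lsvg datavg                   # active volume groups and per-VG details
    lsvg -l datavg                          # LVs in the VG, their types, and mount points
    lslv datalv                             # size, copies, and placement of one LV
    lsfs -q /data                           # filesystem size and attributes
    grep -p "/data:" /etc/filesystems       # LV, JFS log, and mount options for one filesystem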
© Copyright IBM Corporation 2004
Network Configuration Description
Which networks does the cluster connect to?
How many IP addresses and switch ports you'll need
Switch and router configuration
Disable auto-negotiation
Detailed network configuration for each network:
IP addresses and labels
Netmasks
Diagrams
How do the users get to the cluster?
Any firewalls in the way?
Should there be firewalls in the way?
Are the links fast enough?
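
A short sketch of commands that document the current network configuration (the interface and label names are examples only):

    netstat -in                             # interfaces, IP addresses, and MAC addresses
    lsattr -El en0                          # netmask, state, and aliases on one interface
    netstat -rn                             # routing table and default gateway
    host appA-svc                           # confirm each label resolves identically everywhere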

© Copyright IBM Corporation 2004


HACMP Configuration Description
Version, patch level
Filesets to be installed
Cluster topology
In detail
Diagram and words
Cluster resources
Resources:
Service IP labels, filesystems, volume groups, application servers, and
so forth
Resource groups
Statement of purpose
Type
Options
Resources
Cluster parameters
RG processing order, dynamic node priorities,
WLM policies, fallback timer policies, startup
settling time policy, resource group dependencies, and so forth
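Once the cluster is built, much of this section can be generated rather than typed by hand. A sketch, assuming the HACMP 5.x utilities in /usr/es/sbin/cluster/utilities:

    /usr/es/sbin/cluster/utilities/cltopinfo    # cluster, node, network, and interface topology
    /usr/es/sbin/cluster/utilities/cllsif       # all communication interfaces and labels
    /usr/es/sbin/cluster/utilities/clshowres    # resource groups and the resources they contain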
© Copyright IBM Corporation 2004
The Cluster Test Plan
What do you test?
For example, disk mirroring
How do you test it?
For example, remove a physical volume (which one?)
In what context do you perform the test?
For example, while the application is running on node X while doing Y
with an intensity (load) of Z
How do you know if it worked?
Log messages, HACMP falls over or swaps an adapter, and so forth
What does it mean if the test fails?
Is it mission-critical or can cluster run degraded for a while (how long)?
What is likely to be wrong if it fails
Under what conditions the test should be performed
In the ideal world, you would run through the entire test
plan after every cluster configuration change . . .

© Copyright IBM Corporation 2004


Operational Documentation
What to record and who to call when something goes wrong?
Names and phone numbers (kept up-to-date)
Answers to questions that will be asked:
Cluster name
Vendor support contract number
Manager responsible
Escalation procedures
How to recover from whatever is likely to go wrong (and what is
unlikely to go wrong but delicate/critical if it does go wrong)
Detailed commands and options
Put it in a shell script
smit screen paths (how to get there) and field values
Details (for example, exactly how to identify a failed disk drive and
exactly how to replace it)
Backup procedures

© Copyright IBM Corporation 2004


Document Linkages
The requirements, design and test plan documents produced
during the planning process should all be linked together:
The design document should demonstrate completeness by
referring each design feature back to a requirement
The design document is not complete if there are requirements which
aren't reflected in the design document
The requirements document is not complete if there are design points
which aren't backed up by requirements
The test plan document should demonstrate completeness by
referring each test point back to a design point
The test plan is not complete if there are design points without test
points

Requirements Document
1. blah blah blah but not blah blah
2. blah must blah except when blah.
3. blah blah 92.7% blah blay blok bloop snog.
4. blah blah and blah!
5. max 3,126 simultaneous users.

Design Document
1. 17 blah slots with x27 feature.
2. blah in slots cabled around blah pipes.
3. install coffee machine near cluster.
4. custom blah with snaggle option disabled.

Test Plan
1. disconnect power.
2. cut cables (scissors in left hand drawer).
3. repair cables (glue in right hand drawer).
4. buy coffee (cream and sugar).
5. release x27s.

© Copyright IBM Corporation 2004


Document Review
Document review sessions should be scheduled during which
each document is reviewed on a point by point basis.
Representatives of the interested parties MUST participate.
Hardware and software should not be ordered and the cluster
implementation should not begin until documents have been
reviewed and accepted.
Unless, of course, you have lots and lots of time and money
available to reconfigure the cluster and purchase new equipment
as the requirements and the design change!
Properly conducted review meetings are quite intensive.
Don't schedule more than about three hours per day.

© Copyright IBM Corporation 2004


Tools to Help You Plan Your Cluster
pSeries eConfigurator.
Builds hardware and software configuration files.
Only available to IBMers and IBM BPs.
Downloaded from ehone.ibm.com.
This tool does not tell you if the hardware and software
components you have chosen will address the business problem
at hand.
Use the Online Planning Worksheets.
These are Java-based tools which are supported on Windows and
AIX (with an appropriate Java Virtual Machine (JVM)).
These create a cluster snapshot which can be applied to HACMP.
Can also be created from current configuration.
Draw a diagram of your cluster.
Create a pictorial representation of your cluster.
Show how the clients connect (including routers).
Label all cluster components.
Keep the diagram up to date if your cluster changes.
© Copyright IBM Corporation 2004
Reference Manuals
Use the cluster planning information in the appropriate HACMP manual:
HACMP classic version 4.5:
HACMP for AIX Planning Guide SC23-4277-04
HACMP for AIX Installation Guide SC23-4278-04
HACMP/ES version 4.5:
HACMP for AIX Enhanced Scalability Installation and Administration Guide
SC23-4306-03
HACMP version 5.2:
HACMP Concepts and Facilities Guide SC23-4864-03
HACMP for AIX Planning and Installation Guide SC23-4861-03
HACMP for AIX Administration and Troubleshooting Guide SC23-4862-03
/usr/lpp/cluster/doc/release_notes (all releases of HACMP)
www.ibm.com/servers/eserver/pseries/library/hacmp_docs.html

© Copyright IBM Corporation 2004


Export to Planning Worksheets

Export Definition File for Online Planning Worksheets

Type or select values in entry fields.


Press Enter AFTER making all desired changes.

[Entry Fields]
* File Name <hacmp/log/cluster.haw] /
Cluster Notes []

F1=Help F2=Refresh F3=Cancel F8=Image


F9=Shell F10=Exit Enter=Do

© Copyright IBM Corporation 2004


Checkpoint
1. True or False?
Each design element needs a documented test in the test plan document.

2. True or False?
Each requirement in the requirements document needs a design element to explain how it will
be satisfied and a documented test to show how it is verified.

3. True or False?
Each test should test one and only one design feature.

4. Which of the following cluster planning documents is optional?


a. Requirements document
b. Design document
c. Test plan document
d. Operational procedures document
e. None of them

5. True or False?
Proper cluster design documentation is a waste of time because nobody keeps it up-to-date.

6. True or False?
The aspect of cluster design which generally receives the most attention is understanding and
then documenting how the application operates within the cluster.

© Copyright IBM Corporation 2004


Checkpoint Answers
1. True or False?
Each design element needs a documented test in the test plan document.

2. True or False?
Each requirement in the requirements document needs a design element to explain how it will
be satisfied and a documented test to show how it is verified.

3. True or False?
Each test should test one and only one design feature.

4. Which of the following cluster planning documents is optional?


a. Requirements document
b. Design document
c. Test plan document
d. Operational procedures document
e. None of them

5. True or False?*
Proper cluster design documentation is a waste of time because nobody keeps it up-to-date.

6. True or False?
The aspect of cluster design which generally receives the most attention is understanding and
then documenting how the application operates within the cluster.

*Even if it is not kept up-to-date, proper cluster design documentation will be very useful in ensuring that
the cluster is at least initially configured correctly. Failure to keep the cluster documentation up-to-date will
probably eventually result in "accidental" outages.
© Copyright IBM Corporation 2004
Unit Summary
Having completed this unit, you should be able to:
Explain the importance of cluster planning
Describe the key cluster planning deliverables
Requirements document
Design document
Test plan
Documented operational procedures
Explain how the requirements, design and test plan documents
should be linked together
Use the export to planning worksheets feature of HACMP 5.2

© Copyright IBM Corporation 2004


Front cover

HACMP Systems
Administration I: Planning and
Implementation
(Course Code AU54)

Instructor Exercises Guide


ERC 5.0

IBM Certified Course Material



Trademarks
IBM® is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AFS AIX AIX 5L
Cross-Site DB2 DB2 Universal Database
DFS Enterprise Storage Server HACMP
NetView POWERparallel pSeries
Redbooks Requisite RS/6000
SP Tivoli TME
TME 10 Versatile Storage Server WebSphere
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product and service names may be trademarks or service marks of others.

December 2004 Edition

The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 1998, 2004. All rights reserved.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Instructor Exercises Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Exercise Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Exercise 1. Cluster Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1

Exercise 2. Cluster Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1

Exercise 3. LVM Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1

Exercise 4. Network Setup and Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1

Exercise 5. HACMP Software Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1

Exercise 6. Client Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1

Exercise 7. Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1

Exercise 8. Application Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1

Exercise 9. Mutual Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1

Exercise 10. HACMP Extended Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1

Exercise 11. IPAT via Replacement and HWAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1

Exercise 12. Network File System (NFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1

Exercise 13. Error Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-1

Appendix A. Cluster Diagrams. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1


Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM® is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AFS® AIX® AIX 5L™
Cross-Site® DB2® DB2 Universal Database™
DFS™ Enterprise Storage Server® HACMP™
NetView® POWERparallel® pSeries®
Redbooks™ Requisite® RS/6000®
SP™ Tivoli® TME®
TME 10™ Versatile Storage Server™ WebSphere®
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product and service names may be trademarks or service marks of others.


Instructor Exercises Overview


The exercises for this course are based on a case study that is
introduced in Exercise 1. Each student team builds their own cluster
and resource groups. They have the freedom to choose their own
names for resources or follow the guidelines given in the exercises.
The objective is to build a mutual takeover environment.
In general the exercises depend on successfully completing the
previous exercises.


Exercise Description


Exercise instructions - This section contains what it is you are going to
accomplish. See the Lab Setup Guide and the course Lab Guide for
instructions and details pertaining to the labs. You are given the
opportunity to work through each exercise given what you learned in
the unit presentation.


Exercise 1. Cluster Design

What This Exercise Is About


This exercise is a high-level design of a cluster. It is scenario-based.
This reinforces the lecture material.

What You Should Be Able to Do


At the end of the exercise, you should be able to:
• Create a high-level design of a cluster
• Interpret the business requirements into a diagram suitable for
creating further HACMP configuration information
• Describe how HACMP will assist in creating the design

Introduction
The scenario that the exercises are based on is a company which is
amalgamating its computer sites to a single location. It is intended to
consolidate computer sites from two cities into one situated roughly in
the middle of the original two. The case study has been designed
around five randomly chosen countries in the world. These countries
and city configurations have been tested in our environment but we
offer the choice to use your own. On to the scenario.

Required Materials
Your imagination.
Paper or a section of a white board.


Exercise Instructions
Preface
For this example we use the Canada cluster. The original configuration was one computer
located in Halifax and one in Calgary. The systems have been named by their city
designation to keep them straight. The corporate Web server resides on Halifax. Currently
the systems are running on internal disks, on systems too small for the task. As part of the
consolidation new systems are used. These new systems are to be configured in such a
manner as to provide as close to 7x24x365 access to the Web server as possible with
pSeries technology. Corporate marketing is about to launch a major initiative to promote a
new product solely available on the Web. The corporate management has insisted that this
project be successful, and that the new computer center in Regina resolve all of the issues
of reliability that thus far have caused great corporate embarrassment. All eyes are focused
on this project.
A project briefing has been called by the senior executive to get an overview of how the
funds for the equipment are applied.
Your task is to prepare for that meeting to present a solution.

Exercise Steps
__ 1. Draw each of the computer systems as described.
__ 2. Add the applications to the nodes.
__ 3. Add a network connection to each system for access to the outside world.
__ 4. Evaluate the lack of high availability of the initial drawing of the two separate
systems.
__ 5. Combine the services of the existing networks resulting in a single network.
__ 6. Add new SSA disks to your drawing, showing cable connections.
__ 7. Make the disks highly available, RAID/mirror, redundant disks.
__ 8. Define the resources as described in the text.
__ 9. Define the characteristics of the resources.
__ 10. Indicate how the resources fail and recover.
__ 11. Make the diagram simple to understand.

END OF LAB


Exercise 2. Cluster Planning

What This Exercise Is About


This exercise is going to build on the high-level design. You continue
to build upon the cluster. The next step is to document your hardware
to create an inventory of materials to work with. You use a cluster
planning worksheets and a generic cluster diagram to design and
document your cluster topology. The design is based on either the
country scenario provided or the high-level design you created in the
prior exercise.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Create component worksheets and a diagram showing your cluster
topology in detail
• Identify the hardware configuration of the classroom equipment

Introduction
There may be differences in the documentation and the real machines
in the classroom environment. The CPUs, network type, and type of
disk units have been selected to provide a consistent experience but a
variety of equipment may be used. Please ask if you have any
questions.
Note: Throughout this lab the terms shared volume group, shared file
system, node and client refer to components of your HACMP cluster.
The convention of <name> is to be substituted with the appropriate
thing. The example references a generic cluster’s naming of these
components. Some names in your cluster may be different from that
indicated in the notes.
Below is a picture of the generic cluster for this lab. The
communications path may be Ethernet, Token-Ring, FDDI, or any
other network supported by HACMP. There must also be a non-IP
serial network -- either RS232, target mode SSA, or heartbeat over
disk. The minimum requirement is that there are at least four shared
disks (SCSI, Fiber Channel or SSA) connected to a shared bus so that
two volume groups may be created and passed between nodes. If
adequate disks can be provided for the purposes of mirroring and


quorum, then a more realistic configuration can be built. However, this
is not a requirement of the lab exercises.
The systems provided are to be the new systems for the consolidated
computer center. You must prepare these systems to be the
replacements of the production systems. It is time to check out the
equipment to find out what is available to create our highly available
solution on.

Instructor Exercise Overview


Ensure that a team number (or letter) has been assigned to each
cluster team before starting this lab (see step 1). This number must be
a single character. The idea is that everyone can use the lab hints now
-- not just Canada as in the past. For names there will be a convention
like Canada# (where # is the team number). For IP addresses the
format of the third octet is #X, where # is the team number and X is the
subnet number.
Point out that sample entries are shown in the component worksheet
tables.
Note that the diagram below also appears in the appendix. The
appendix has the cheat sheet version as well as two blank templates
so the student can take one blank one home.


LAB Reference Cluster

[Diagram: the two cluster nodes and the user community connected by an IP network (netmask = 255.255.255.0), with a non-IP connection (tty, tmssa, or disk heartbeat) between the nodes. For each node the diagram has blanks to fill in: Home Node Name, Resource Group, Startup/Fallover/Fallback Policies, Service IP Label, Application server, the non-IP network label and device, the shared volume group, and rootvg (4.8 GB).]


Exercise Instructions
Part 1: Examine the Cluster Environment and Complete the Cluster
Component Worksheets with Storage Information
Using the cluster component worksheets (located at the end of this exercise), record the
information as listed in the following steps.

__ 1. Write down your team number here: ____. In these lab exercises you must replace
the symbol # with your team number unless otherwise noted.
__ 2. Log in as root on both of your cluster nodes. The root password will be provided by
your instructor.
__ 3. Identify and record in the cluster components worksheet the device names and
location codes of the disk adapters (a command sketch follows step 9).
__ 4. Identify and record in the cluster components worksheet the device names and
location codes of the external disks (hdisks and pdisk). Note: The external disks
may not have PVIDs on them at this time.
__ 5. Identify and record in the cluster components worksheet the device names and
location codes of the internal disks.
__ 6. The storage needs to be divided into two volume groups. Size of the volume groups
is not important. In a real environment, disks should be mirrored and quorum issues
addressed. Here the emphasis is on the operation of HACMP not how the storage is
organized. You should have four disks so feel free to set up a mirror on one of the
volume groups. Different methods of configuring the disks are going to be used
through out the exercises. Decide on the organization but only create the volume
groups when directed to.

__ 7. Identify and update the cluster planning worksheets with the names of 2 shared
volume groups. Use the following names or choose your own.
»
» shared_vg_a
» shared_vg_b
__ 8. Identify and update the cluster component worksheets with the LV component names
needed to have a shared file system in each of the two volume groups. Select names for the
logical volumes, jfs logs and filesystems. Use the following names or choose your
own.
» data lv’s shared_jfslv_a, shared_jfslv_b
» jfslog lv’s shared_jfslog_a, shared_jfslog_b
» file systems shared_fs_a, shared_fs_b


__ 9. Now add just the storage information to the generic cluster diagram of your
cluster. This diagram can be found in Appendix A (there are two blank ones after the
filled in one. One is for in class and the other is to take home). On the other hand
you may want to just compare the information on your component worksheets to the
filled in worksheet at the beginning of Appendix A.
• Only fill in what you know -- the LVM information-- at the bottom of the diagram.
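
For steps 3 through 5, a command sketch (the adapter and disk numbers that appear in the worksheets are only samples; record what your systems actually report):

    lsdev -Cc adapter | grep ssa            # SSA adapter device names and location codes
    lscfg -vl ssa0                          # detail, including microcode, for one adapter
    lsdev -Cc disk                          # internal and external hdisks with locations
    lsdev -Cc pdisk                         # SSA physical disks
    lspv                                    # hdisk-to-PVID-to-volume-group mapping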

GO NOW TO EXERCISE 3. You return to Part 2 after the lecture for the unit on network
planning.

Part 2: Examine the Cluster Environment and Complete the Cluster


Component Worksheets with Networking Information
__ 10. Identify and record in the cluster components worksheet the device names (entX)
and location codes of the Network Adapters.
__ 11. Identify and record in the cluster components worksheets the IP addresses that will
be used for the cluster communication interfaces using the following guidelines:
• Ensure that the logical subnet rules are complied with. Each communication
interface must be on a different logical subnet.
• Ensure that all of the communication interfaces have the same subnet mask.
• You can use the following names/addresses or select your own. If you choose to
use your own please verify with the instructor that they will not conflict with
another team.
»
» In the following, replace # with your team number.
»
» communication interface halifax#-if1 192.168.#1.1
» communication interface halifax#-if2 192.168.#2.1
» communication interface toronto#-if1 192.168.#1.2
» communication interface toronto#-if2 192.168.#2.2
» netmask is 255.255.255.0
__ 12. Identify and update the cluster components worksheets the names (IP Labels) and
addresses for the service and persistent labels using the following guidelines.
• Ensure that the logical subnet rules are complied with. Assume IPAT via alias
which means that the Service Labels/addresses and persistent addresses may
not be on the same logical subnet as any one of the communication interfaces.
• The following names/addresses may be used or select your own. If you choose
to use your own please verify with the instructor that they will not conflict with
another team.
»
» halifax#-per 192.168.#3.1


» toronto#-per 192.168.#3.2
» appA#-svc 192.168.#3.10
» appB#-svc 192.168.#3.11
__ 13. The IP network name is generated by HACMP.
__ 14. Identify and update the cluster components worksheets the name for your cluster
(any string without spaces, up to 32 characters) using the following or choose your
own.
» cluster name is canada#
__ 15. Identify and update the cluster components worksheet the device names and
location codes of the serial ports.
__ 16. The non-IP network name is generated by HACMP.
__ 17. At this point in time most of the names for the various cluster components should
have been selected and populated on the cluster component worksheets. It is
important to have a clear picture of the various names of these components as you
progress through the exercises.
__ 18. Now add the networking information to the generic cluster diagram of your cluster.
This diagram can be found in Appendix A (there are two blank ones after the filled in
one. One is for in class and the other is to take home). On the other hand you may
want to just compare the information on your component worksheets to the filled in
worksheet at the beginning of Appendix A.
• Only fill in what you know -- cluster name, node names (halifax#, toronto#), and
IP information at the top.


Cluster Component Worksheets


Table 1: Non-shared Components Worksheet: FIRST Node
Non-shared Components Description Value

Node Name ------------------N/A-----------------------

***Network Adapter*** **IBM 10/100 Mbps Ethernet *** entX 10-60


Network Adapter IF1
Network Adapter IF2
Network Adapter IF3
Network Adapter IF4

***Ext. Disk Adapter*** ***SSA 160 SerialRAID Adapter*** ssaX 10-90


Ext. Disk Adapter 1
Ext. Disk Adapter 2

***Serial port*** ***Standard I/O Serial Port*** saX 01-S1


Serial port 1
Serial port 2

***TTY device *** ***Asynchronous Terminal*** ttyX 01-S1-00-00


TTY device 1
TTY device 2

***Internal Disk *** 16 Bit LVD SCSI Disk Drive hdiskX 10-80-00-4,0
Internal Disk 1
Internal Disk 2
Internal Disk 3

Persistent Address ------------------N/A-----------------------

***IF IP Label/address*** myname 192.168.#x.yy


IF1 IP Label/address
IF2 IP Label/address
IF3 IP Label/address

TMSSA device ------------------N/A-----------------------


Table 2: Non-shared Components Worksheet: SECOND Node


Component Description Value

Node Name ------------------N/A-----------------------

***Network Adapter*** **IBM 10/100 Mbps Ethernet *** entX 10-60


Network Adapter IF1
Network Adapter IF2
Network Adapter IF3
Network Adapter IF4

***Ext. Disk Adapter*** ***SSA 160 SerialRAID Adapter*** ssaX 10-90


Ext. Disk Adapter 1
Ext. Disk Adapter 2

***Serial port*** ***Standard I/O Serial Port*** saX 01-S1


Serial port 1
Serial port 2

***TTY device *** ***Asynchronous Terminal*** ttyX 01-S1-00-00


TTY device 1
TTY device 2

***Internal Disk *** 16 Bit LVD SCSI Disk Drive hdiskX 10-80-00-4,0
Internal Disk 1
Internal Disk 2
Internal Disk 3

Persistent Address ------------------N/A-----------------------

***IF IP Label/address*** myname# 192.168.#x.yy


IF1 IP Label/address
IF2 IP Label/address
IF3 IP Label/address

TMSSA device ------------------N/A-----------------------

Table 3: Shared Components Worksheet
Component Description Value

Cluster Name ---------------------N/A-----------------


Cluster ID ---------------------N/A-----------------

Cluster Subnet mask ---------------------N/A-----------------

Network Name ---------------------N/A-----------------


Network Name ---------------------N/A-----------------

***Shared Disk *** *P1.1-I3/Q1-W4AC50A84400D* hdiskX pdiskY


Shared Disk 1
Shared Disk 2
Shared Disk 3
Shared Disk 4

Shared vg 1 ---------------------N/A-----------------
Shared jfs log 1 --------------------N/A------------------
Shared jfs lv 1 --------------------N/A------------------
Shared filesystem 1 --------------------N/A------------------
-mount point --------------------N/A------------------

Shared vg 2 --------------------N/A------------------
Shared jfs log 2 --------------------N/A------------------
Shared jfs lv 2 --------------------N/A------------------
Shared filesystem 2 --------------------N/A------------------
-mount point --------------------N/A------------------

ALIAS: myname# 192.168.#x.yy


Service Label/address
Service Label/address
Service Label/address
Service Label/address
REPLACEMENT node1:
Service Label/address
Hardware Address ---------------------N/A-----------------

REPLACEMENT node2:
Service Label/address
Hardware Address ---------------------N/A-----------------


Exercise 3. LVM Components

What This Exercise Is About


This exercise reinforces the steps involved in creating a shared
volume group with a filesystem to be used as an HACMP resource.

What You Should Be Able to Do


At the end of the exercise, you should be able to:
• Create a Volume Group suitable for use as an HACMP resource
• Create a filesystem suitable for use as an HACMP resource
• Manually perform the function of passing a filesystem between
nodes in a cluster

Introduction
The next phase in our scenario is to provide the storage for the highly
available application. We require a filesystem to store the Web pages
on that can be accessed by each machine when that machine is the
active node.
To support the passing of a filesystem between nodes there must be a
volume group, logical volume, and a logical volume for the jfs log.
There are several methods to accomplish this task. Two are going to
be explored during the exercises. First, a manual creation to
emphasize the necessary steps in the process and second, in a later
exercise, an automated cluster aware method will be explored during
the C-SPOC exercise.

Required Materials
• Cluster Planning Worksheets and cluster diagram from the
previous exercise.
• Shared disk storage connected to both nodes.

Instructor Exercise Overview


This exercise should only configure one filesystem. Save the second one for C-SPOC.


Exercise Instructions
Configure Volume Group
__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in to both nodes as root.
__ 3. Verify that both nodes have the same number of disks.
__ 4. Identify the internal and shared disks from the cluster worksheet. These disks might
or might not have PVIDs on them.
If they match between the two systems, then you can skip to step 10.
__ 5. On both systems delete only the external hdisks.
__ 6. On one system add all of the PVIDs back in.
__ 7. On the other system update the PVIDs.
__ 8. Verify the PVIDs were updated.
__ 9. The hdisks and PVIDs should match on both systems.
__ 10. Find a VG major number not used on either node __________.
__ 11. Go to your halifax# node. Create an Enhanced Concurrent Volume Group called
shared_vg_a. This will be the volume group for appA#’s shared data.
__ 12. Vary on the volume group and create a jfslog logical volume with a name of
shared_jfslog_a. The type is to be jfslog. Only one lp is required.
__ 13. Format the jfslog logical volume.
__ 14. Create a logical volume for data called shared_jfslv_a.
__ 15. Create a filesystem called shared_fs_a using the Add a Journaled File System on a
previously defined logical volume. The mount point should be /shared_fs_a and the
filesystem should not be automatically activated on system restart.
__ 16. Verify the filesystem can be mounted manually.
__ 17. Check the correct log file is active. If you have a loglv00 then you might not have
formatted the jfs log before you created the jfs.
__ 18. Umount the filesystem.
__ 19. Vary off the volume group.
__ 20. On your Toronto# node, import the volume group using the major number, hdisk and
volume group information. The VG name must be the same as the system it was
created on.
__ 21. Set the autovaryon flag to “off” for the volume group.
__ 22. Mount the filesystem on the second node and verify it functions.
__ 23. Check the correct log file is active.


__ 24. Unmount the filesystem.


__ 25. Vary off the volume group.
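
A command sketch of the whole flow, steps 10 through 25 (the names and sizes come from the sample plan, the major number 100 and hdisk2 are assumptions, and the enhanced concurrent volume group assumes the concurrent LVM support is installed; substitute the values from your worksheets):

    # On halifax#:
    lvlstmajor                                      # pick a major number free on BOTH nodes
    mkvg -C -y shared_vg_a -V 100 hdisk2            # enhanced concurrent-capable volume group
    varyonvg shared_vg_a
    mklv -t jfslog -y shared_jfslog_a shared_vg_a 1
    logform /dev/shared_jfslog_a                    # format the log BEFORE creating the jfs
    mklv -t jfs -y shared_jfslv_a shared_vg_a 10
    crfs -v jfs -d shared_jfslv_a -m /shared_fs_a -A no
    mount /shared_fs_a ; lsvg -l shared_vg_a        # the open jfslog should be shared_jfslog_a
    umount /shared_fs_a ; varyoffvg shared_vg_a

    # On toronto#:
    importvg -V 100 -y shared_vg_a hdisk2
    chvg -a n shared_vg_a                           # do not activate automatically at reboot
    mount /shared_fs_a ; umount /shared_fs_a ; varyoffvg shared_vg_a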

END OF LAB


Exercise 4. Network Setup and Test

What This Exercise Is About


This exercise guides you through the set up and testing of the
networks required for HACMP.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Configure TCP/IP networking suitable for HACMP
• Test the TCP/IP configuration
• Configure non IP communications for HACMP
• Test the non IP communications for HACMP
• Configure and test name resolution and authentication

Introduction
This section establishes the communication networks required for
implementing HACMP. Networking is an important component of
HACMP, so all related aspects are configured and tested. The
information used in this exercise is derived from the previous exercise.

Instructor Exercise Overview


diskhb is not included here on purpose. It is automatically configured by the 2node config
assist feature.
This lab should be done after the students complete part 2 of exercise 2.
The client ip information is deliberately left out of the host file in this exercise. It is added as
part of exercise 6.


Figure 4-1. Lab Reference Cluster

[The same generic cluster diagram as in Exercise 2: two nodes and the user community on an IP network (netmask = 255.255.255.0), a non-IP connection (tty, tmssa, or disk heartbeat) between the nodes, and blanks for the node names, resource group policies, service IP labels, application servers, shared volume groups, and rootvg (4.8 GB).]

Required Materials
• Cluster Planning Worksheets and cluster diagram from exercise 2.


Exercise Instructions


Part 1: Configure TCP/IP Interfaces and Name Resolution
__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in as root to both of the cluster nodes.
__ 3. Check the UNIX Hostname (both the host command and the uname -n command
should give you the same desired answer).
__ 4. Using the component worksheets or configuration diagram for values, configure two
network adapters for use as communication interfaces; remember that each
communication interface must use a separate logical subnet.
Note: Do NOT use the minimum config and setup option in smit. It changes the
name of the node. Use smit chinet instead.
__ 5. Recheck the hostname.
__ 6. Verify the netmasks are specifically set in smitty chinet. The default could cause
errors later depending on what your network address was.
__ 7. Check the configuration against the cluster worksheets.
__ 8. Repeat for other node. When both nodes are configured, test the communications
between the nodes. Use the ping command to verify connection between each set
of communication interfaces.
__ 9. Update the /etc/hosts file on both nodes (update one and ftp it to the other node).
__ 10. Verify name resolution and connectivity on BOTH nodes for all IP labels.
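
A sketch of the /etc/hosts entries and the checks for steps 9 and 10, using the sample labels and addresses from Exercise 2 (replace # with your team number, or use your own names if you chose them):

    192.168.#1.1   halifax#-if1
    192.168.#2.1   halifax#-if2
    192.168.#1.2   toronto#-if1
    192.168.#2.2   toronto#-if2
    192.168.#3.1   halifax#-per
    192.168.#3.2   toronto#-per
    192.168.#3.10  appA#-svc
    192.168.#3.11  appB#-svc

    host halifax#-if1 ; host 192.168.#1.1           # resolution works in both directions
    for i in toronto#-if1 toronto#-if2 halifax#-if1 halifax#-if2 ; do ping -c 2 $i ; done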

Part 2: Configure Non IP Interface


__ 11. With your cluster planning sheets available, begin the configuration.
__ 12. Log in as root to both of the cluster nodes.
__ 13. On both nodes check if you can use a tty connection or SSA or both as a non IP
network. You may have to ask your instructor for details.
If not using tty for your non IP network then skip to step __ 16.

Using tty
__ 14. On both nodes check the device configuration of the unused tty device. If the tty
device does not exist, create it. If it does exist, ensure that a getty is not spawned, or
better still, delete it and redefine.
__ 15. Test the non IP communications:
i. On one node execute stty < /dev/tty# where # is your tty number.
ii. The screen appears to hang. This is normal.


iii. On the other node execute “stty </dev/tty#” where # is your tty number.
iv. If the communications line is good, both nodes return their tty settings.

Using SSA
__ 16. If using target-mode SSA for your non IP network, then check if the prerequisites are
there. A unique node number must be set and the device driver must be installed. If
not add it.
__ 17. Test the non IP communication using SSA.
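
A hedged sketch for step 16 (the ssar node_number attribute and the devices.ssa.tm filesets are the standard target-mode SSA prerequisites; ask your instructor if your systems differ):

    lslpp -l "devices.ssa.tm*"              # target-mode SSA support installed?
    lsattr -El ssar                         # node_number must be unique and non-zero
    chdev -l ssar -a node_number=1          # for example, 1 on halifax# and 2 on toronto#
    cfgmgr ; lsdev -C | grep tmssa          # tmssa devices should now appear on both nodes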

END OF LAB


Exercise 5. HACMP Software Installation

What This Exercise Is About


This exercise installs the components of HACMP for AIX to support all
resource group policies.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Verify the node is prepared for the installation of HACMP
• Identify and install the packages to run HACMP


Exercise Instructions
Preface
• This exercise is composed of two parts, system capacity checks and software
installation.

Part 1: System Capacity Checks


__ 1. Log into halifax# as root (this part can be done in parallel by a second person
working on the other node, toronto#).
__ 2. Verify the following disk space requirements:
i. 140 MB free space in /usr, although the installation of the software will
automatically increase the size if required.
ii. 100 MB free in /tmp, /var and /
__ 3. For most lab environments, check to see that the system paging space is set to
twice the size of main memory. This is the default recommendation for small
memory machines.
__ 4. Ensure Part 1 is performed for your other node, toronto#
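
A quick sketch for the checks in steps 2 and 3:

    df -m /usr /tmp /var /                  # free space in MB
    lsattr -El sys0 -a realmem              # real memory in KB
    lsps -a                                 # total paging space versus twice real memory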

Part 2: HACMP Node Installation


__ 5. Log in to halifax# as root (this part can be done in parallel by a second person
working on the other node toronto#).
__ 6. Verify the AIX prerequisites are installed. If any of these are not installed, notify your
instructor. RSCT filesets must be at version 2.2.1 or later (a command sketch follows step 15).
- bos.adt.lib
- bos.adt.libm
- bos.adt.syscalls
- bos.data
- rsct.compat.basic
- rsct.compat.clients
- devices.ssa.tm
- devices.scsi.tm
__ 7. Change directory to the location of the filesets. In most classes they can be found in
a subdirectory of the /usr/sys/inst.images directory. If there are questions, ask the
Instructor.
__ 8. Install preview the following HACMP filesets:
• HACMP
- cluster.adt.es
- cluster.doc.en_US.es
- cluster.es


- cluster.es.clvm
- cluster.es.cspoc
- cluster.license
- cluster.man.en_US.es
- cluster.msg.en_US.cspoc (lower case en)
- cluster.msg.en_US.es
__ 9. If the HACMP packages pass the prerequisite check, set preview to no and install
the HACMP filesets. If there is a prerequisite failure, notify your instructor.
__ 10. Install HACMP maintenance. Check the /usr/sys/inst.images directory for an HA
updates directory (in many classes it will be the subdirectory ./ha52/ptf1). If you
have questions, ask the instructor.
__ 11. Reboot the nodes.
__ 12. Verify the SMIT menus. Check to see if the HACMP screens are available.
__ 13. (Optional) It would be a good idea to set up your /.profile to include paths to the
HACMP commonly used commands so that you don’t have to keep entering full path
names in the later lab exercises.
__ 14. (Very Optional) If the nodes have a tape subsystem attached, now would be a good
time for a mksysb backup.
__ 15. Ensure Part 2 is also performed for your other node toronto#.
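
A sketch of steps 6 through 11 (the install directory is the classroom default mentioned in step 7 and may differ; the fileset list is the one given in step 8):

    lslpp -l bos.adt.lib bos.adt.libm bos.adt.syscalls bos.data
    lslpp -l "rsct.compat.*"                            # must be 2.2.1 or later
    cd /usr/sys/inst.images/<hacmp directory>
    installp -apgXd . cluster.adt.es cluster.doc.en_US.es cluster.es \
        cluster.es.clvm cluster.es.cspoc cluster.license \
        cluster.man.en_US.es cluster.msg.en_US.cspoc cluster.msg.en_US.es   # -p = preview only
    # repeat the same installp command without -p to install, apply the PTFs the same way,
    # then reboot:
    shutdown -Fr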

END OF LAB


Exercise Review/Wrapup
This is a good place to stop for a backup.


Exercise 6. Client Setup

What This Exercise Is About


This exercise sets up the client for access to the HACMP system. It is
used to demonstrate how the outside world views the highly available
system.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Ascertain if the client has been set up to access the HACMP
cluster
• Verify the communication between the client and nodes is
functioning correctly

Introduction
Our scenario has a Web server to be made highly available. We are
required to test the availability traits of the Web server. This exercise
creates a client to test from.

Required Materials
HACMP planning sheets.
AIX bonus pack
Client machine

Instructor Exercise Overview


This exercise assumes httpdlite is installed and running on the client machine.


Exercise Instructions
Preface
• All exercises of this chapter depend on the availability of specific equipment in your
classroom.
• Replace the symbol # with your team number.

Part 1: Setting Up the Client Communications


__ 1. This exercise requires that there is a third AIX node for your team. If you have only a
PC then you can (after the application integration lab) add clstat.cgi to the
/usr/HTTPServer/cgi-bin directory on both cluster nodes and then use the PC
browser to go to the service address of the HTTP resource group -- your instructor
can help you with this.
__ 2. Log in to the client (your third machine) as root. If CDE is used on this machine then
leave CDE for now. CDE comes back again after the reboot later in this exercise.
__ 3. Execute smit mktcpip to set the hostname and IP address of this machine for an enX
interface (one that has a cable in it). The IP address must be on the same subnet as one
of the node interfaces. The suggested hostname is regina# and the suggested
address is 192.168.#1.3. Do not set a default route or DNS (a sketch follows step 7).
__ 4. Create an alias for the interface above to be on the same subnet as the service
labels. The suggested value is 192.168.#3.30 and the subnet mask is
255.255.255.0.
__ 5. Acquire the /etc/host file from halifax# and ensure that the information in this file
agrees with what you did in the previous two steps.
__ 6. Test to ensure that TCP/IP functions correctly.
__ 7. Test name resolution of the client and the nodes.
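
A sketch for steps 3 and 4, using the suggested values (regina#, 192.168.#1.3, and the alias 192.168.#3.30; en0 is only an example interface name):

    smitty mktcpip                          # hostname regina#, address 192.168.#1.3, no route/DNS
    ifconfig en0 alias 192.168.#3.30 netmask 255.255.255.0
    # Note: an ifconfig alias is not permanent; re-add it (or configure it through smit)
    # after the reboot later in this exercise.
    ping -c 2 192.168.#1.1 ; host halifax#-if1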

Part 2: HACMP Client Install and setup


__ 8. Install the HACMP client filesets:
cluster.adt.es
cluster.es (choose only the three client filesets)
cluster.license
cluster.man.en_US.es
cluster.msg.en_US.es (choose only the client fileset)
__ 9. Install the ptf1 updates
__ 10. In order to use clstat.cgi, verify that httpdlite is running and that Netscape is
available on this machine. If not ask your instructor.
__ 11. Verify Netscape starts and can display a URL, like
file:///usr/lpp/bos.sysmgt/mkcd.README.html/


The next three steps prepare you to use clinfoES from the client machine after HACMP is
started in the next exercise.
__ 12. Copy the clstat.cgi script from /usr/es/sbin/cluster to the /var/docsearch/cgi-bin
directory.
__ 13. Verify that the file /var/docsearch/cgi-bin/clstat.cgi is world-executable (755 or
rwxr-xr-x)
__ 14. Test access to clstat.cgi using the URL
http://localhost:49213/cgi-bin/clstat.cgi <-- you should get a window with the
message “Could not initialize clinfo connection”.
__ 15. Put the cluster nodes' IP addresses (that is, halifax#-per and toronto#-per) into the
/usr/es/sbin/cluster/etc/clhosts file. Make sure you can ping these addresses.
__ 16. Reboot and do the ping tests to verify that this client machine functions as expected.
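
A sketch of steps 12 through 15 (the paths are those given in the steps; the addresses are the persistent labels from Exercise 2):

    cp /usr/es/sbin/cluster/clstat.cgi /var/docsearch/cgi-bin/clstat.cgi
    chmod 755 /var/docsearch/cgi-bin/clstat.cgi
    echo "192.168.#3.1" >> /usr/es/sbin/cluster/etc/clhosts     # halifax#-per
    echo "192.168.#3.2" >> /usr/es/sbin/cluster/etc/clhosts     # toronto#-per
    ping -c 2 192.168.#3.1 ; ping -c 2 192.168.#3.2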

END OF LAB


Exercise Review/Wrapup
We have the client set up and ready to go, with communication and name resolution
checked.


Exercise 7. Cluster Configuration

What This Exercise is About


This lab covers the configuration and testing of a one-sided custom
resource group. The cluster planning worksheets continue to be
updated as the capabilities of the cluster grow.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Use the Initialization and Standard Configuration menu to
• Discover nodes and networks
• Discover Volume groups and Filesystems
• Add a custom resource group to the cluster
• Verify the correct operation of a custom resource group
• Perform failover testing on the configured resource group

Introduction
The scenario is expanding: you now create a custom resource group.
This is the beginning of making an application highly available.

Required Materials
Cluster planning worksheets.


Exercise Instructions
Remember this?

Where are We in the Implementation


Plan for network, storage, and application
eliminate single points of failure
Define and configure the AIX environment
storage (adapters, LVM volume group, filesystem)
networks (IP interfaces, /etc/hosts, non-IP networks and devices)
application start and stop scripts
Install the HACMP filesets and reboot
Configure the HACMP environment
Topology
cluster, node names, HACMP ip and non-ip networks
Resources:
Application Server
Service labels
Resource group:
Identify name, nodes, policies
Resources: Application Server, service label, VG, filesystem
Synchronize then start HACMP
We are now ready to configure the HACMP environment.
First we must set up the application environment so that we can do the configuration all at
once using the Two-Node Cluster Configuration Assistant.
Note: These steps are done on only one node. Choose one of your nodes to be the
administration node; we assume it is halifax#.


Part 1: Setting up the application environment


We use a dummy application for now to see how the Two-Node Cluster Configuration
Assistant works.
__ 1. Log in to halifax# as root
__ 2. Execute the commands:
echo 'date +starting:%H%M >> /tmp/appA.log' > /tmp/appA_start
echo 'date +stopping:%H:%M >> /tmp/appA.log' > /tmp/appA_stop
THE FOLLOWING WAS reported by an instructor testing this lab for this step:
I think the application should display starting:time and put it into the log. However, I get the
screen display, but no log entry. So I edited the file with vi and added a tee:
"date +starting:%H%M | tee /dev/pts/0 >> /tmp/appA.log".
__ 3. Execute the commands:
chmod +x /tmp/appA_start
chmod +x /tmp/appA_stop
__ 4. Log in to toronto# as root and execute the command exportvg shared_vg_a (This
is so that you can see that the 2-node assistant automatically imports the vg on the
other node).
__ 5. Return to your halifax# node.

Part 2: Configuring HACMP


With your cluster planning sheets available begin the configuration.
__ 6. Run the Two-Node Cluster Configuration Assistant. You need an IP address (label)
for the second node, an application server name unique to your team, start and stop
script names, and a service label.
__ 7. If you encounter an error, do the cluster remove procedure (see the lecture or
ask your instructor) on both nodes before retrying.
Let's now look at what happened as a result of this command.
__ 8. Look at the smit output to see what the Assistant did. You can also find this output in
the /var/hacmp/log/clconfigassist.log file.
__ 9. Log on to your other node (toronto#) to prove that the cluster was created on
both nodes, then answer the following questions:
• Were the application start and stop scripts copied over? ________________
• Was the volume group imported to the other node? ____________________
Use the command cldisp | more to answer the following questions:
• What is the cluster name? ______________________________________
• What is the resource group name? _______________________________
• What is the startup policy? ______________________________________
• What is the fallback policy?______________________________________


• What is the vg resource name (if any)? _____________________________


• What is the non-IP network name (if any)? ___________________________
• On what enX is halifax#-if1? _____________________________________
• What is the ip network name? ____________________________________
• Were the start/stop scripts copied over? ____________________________
__ 10. So were you impressed? _________________________________
__ 11. You can now add the IP network and non-IP network names that we promised
would be generated by HACMP to your component worksheets and/or the cluster
diagram if you want to.
__ 12. Return to your administrative node (halifax#).
__ 13. Define an additional Non-IP RS232 or a TMSSA network. The lab environment may
help you decide. Note that a network is automatically created when you choose the
pair of devices that form the endpoints of the network.
__ 14. Execute the command cltopinfo and see that the additional non-IP network was
configured. Add this name to the worksheet and/or diagram.
__ 15. Add a persistent node address for each node in the cluster -- select ‘Configure
HACMP Persistent Node IP Label/Addresses’ from the ‘Extended Topology
Configuration’ menu.
__ 16. Synchronize the changes -- using the F3 key, traverse back to the Extended
Configuration smit screen.
Review the output upon completion, looking for any errors or warnings. Errors must
be corrected before continuing; warnings should simply be reviewed and noted.
__ 17. Check to see that your persistent addresses were created. If not then wait until the
cluster is started in Part 3 below and then check again.
__ 18. Take about 10 minutes to review the Startup, Fallover, and Fallback policies using
the F1 key on the Add a Resource Group menu. When you are ready, proceed to
Part 3.

Part 3: Starting HACMP


__ 19. With your cluster planning sheets available as reference documentation, it is time to
start the cluster just on your administrative node (halifax#).
__ 20. Observe the output on one of the logs.
__ 21. Check that all resources were acquired successfully on the halifax# node.
__ 22. Go to your client machine (regina#).
__ 23. Start the clinfoES subsystem and verify that the /usr/es/sbin/cluster/clstat -a
command works.


__ 24. There is another option on the clstat command, the -r # option. This option sets the
refresh rate of the information. For the lab environment, -r 10 may be a more
appropriate value. Restart clstat with the -r 10 option.
__ 25. Now start Netscape and make sure that the URL to clstat.cgi is working properly.
• The URL is http://localhost:49213/cgi-bin/clstat.cgi
• You should now see a window with cluster information displayed. Be patient if
this window shows that the cluster is unstable.
• Take a moment to familiarize yourself with what you are looking at. Click on the
resource group name app#
• You will use this session to monitor the failover testing that comes next (or you
can run clstat on one of your cluster nodes)
__ 26. Now go to your administrative node (halifax#) and stop cluster services there gracefully.
Watch what happens in the clstat browser (be patient -- it may take 2 minutes).
__ 27. Now start HACMP and clinfo on BOTH nodes
__ 28. Use the lsvg command to see that the shared vg is varied on in passive mode on the
other node (toronto#).
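A sketch of the check, assuming the shared volume group is shared_vg_a; on an enhanced
concurrent volume group that is varied on passively, the VG PERMISSION field reported by
lsvg shows passive-only:
    lsvg shared_vg_a | grep -i permission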

Part 4: Failover testing


__ 29. Return to your administrative node (halifax#) with your cluster planning sheets
available for reference.
It is time to test the cluster. Although the failover testing is a function of planning to
eliminate single points of failure, some basic tests should be performed on any cluster.
__ 30. On both nodes verify the IP labels used on each interface. “netstat -i” Notice which
interface has which IP label.
__ 31. On the toronto# node telnet to the appA service address (appA#-svc).
__ 32. Run the tail -f /tmp/hacmp.out. There should not be any scrolling of the log file.
On your halifax# node, fail the adapter (enX) that the appA#-svc address is running on by
executing the command ifconfig enX down (or disconnect the cable from the enX adapter
card).
Watch the reaction on both nodes in the /tmp/hacmp.out file. Also monitor the clstat
window. Notice that the telnet session from the toronto# node was not interrupted
and that the log information scrolled by during the event processing.
Instructor note: ifconfig down is no longer corrected by HACMP, at least for IPAT via aliases.
__ 33. When the swap_adapter event has completed, verify that the appA#-svc service
address is now on another Ethernet adapter.
__ 34. Restore the failed adapter. The interface should now be in an “UP” state.
» ifconfig enX up or
» connect the network cable


__ 35. (Optional) - You may wish to swap the service address (and/or) persistent address
back by using C-SPOC.
__ 36. Using the console rather than a telnet session (because you would lose it), monitor the
hacmp.out file on the halifax# (left) node and disconnect both of its network cables at
the same time.
__ 37. There should be a network_down event executed after a short period of time. What
happens to the resource group on the halifax# (left) node, and why?
__ 38. Check the /tmp/hacmp.out file on the toronto# node, it should also have detected a
network failure.
__ 39. Restore both the network connections for the halifax# node. What event do you
observe happens?
__ 40. Where is the resource group at this time? Verify that the IP labels, volume groups,
file systems, and application are available on that node.
__ 41. You are now going to move resources back from one node to the other. On the
halifax# node monitor the log. On the toronto# node execute smit clstop and stop
the cluster services with the mode of takeover. Leave the default value for the other
fields.
__ 42. The clstat.cgi should change colors from green to yellow (substate unstable,
toronto# leaving) and the state of the toronto# node and interfaces should change to
red (down).
__ 43. All of the components in the resource group should move over to the halifax# node.
Verify the IP labels, volume groups, and file systems on the halifax# node.
__ 44. On the toronto# node restart HACMP. Observe the /tmp/hacmp.out file on the
halifax# node and, of course, the clstat session. The resource group stays put.

END OF LAB


Exercise Review/Wrapup


You have a running cluster. Congratulations; now the fun really begins. Make sure clstat
shows the cluster as stable with the TCP/IP and non-IP networks up.



Exercise 8. Application Integration

What This Exercise Is About


The HACMP cluster is now functional with a highly available filesystem
and IP label. Adding the Web server to the scenario is the next step.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Add the IBM Web server powered by Apache to the nodes
• Adjust the configuration of the Web server to acknowledge the
highly available IP label
• Introduce a minor configuration change to the Web server to use
the shared storage
• Add an application start and stop script to HACMP
• Test the application functionality

Introduction
The intention is not to become Web server programmers but simply to
add an existing application to the HACMP environment. This
demonstrates one way to add an application to the HACMP
environment.

Required Materials
A running cluster
The AIX 5L Expansion pack


Exercise Instructions
Preface
• As part of this exercise, C-SPOC and DARE are used to enable the addition of
filesystems, applications and resource changes to the cluster while it is running. If all
things function as designed, no system reboots or HACMP restarts are required.

Part 1: Install the IBM Web server file system


__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in as root on the halifax# node.
__ 3. Create a new filesystem for the Web documents. Enter smit hacmp.
__ 4. Verify both nodes know about the new file system.
__ 5. Continue on the halifax# node. Check to see that the filesystem is mounted on the
system that currently owns the resource group (should be halifax#).

Part 2: Install the IBM Web server software


__ 6. Check if the http filesets listed below are installed. If not, ask your instructor. On many
class images they may be found in the directory /usr/sys/inst.images/web-appl.
Otherwise you may need the AIX 5L Expansion Pack CD.
» http_server.base
» http_server.admin
» http_server.html

__ 7. On the other node (toronto#), repeat the previous step. Once installed, delete all of
the information in the directory /usr/HTTPServer/htdocs (only on this node!).
__ 8. Go back to the halifax# node. In the directory /usr/HTTPServer/conf/, edit httpd.conf
and change the “ServerName” variable to be the same as the service IP label
(appA#-svc).
Note: The hostname must be resolvable, that is, host hostname should return a
good answer. If the hostname is not resolvable, add the hostname to the 127.0.0.1
address as an alias. If in doubt, ask the instructor. Remember to do this on both
nodes; otherwise takeover does not succeed.
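A sketch of the relevant edits, assuming team number 1 (the service label and hostname
shown are examples; use your own values):
    # in /usr/HTTPServer/conf/httpd.conf
    ServerName appA1-svc
    # /etc/hosts entry if the hostname is not otherwise resolvable
    127.0.0.1   loopback localhost halifax1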
__ 9. Use ftp to put a copy of the /usr/HTTPServer/conf/httpd.conf file on the toronto#
node.

Part 3: Configure HACMP for the Application


__ 10. Add the Application Server to HACMP.
__ 11. Change the appA_group to use the Application Server http_server.


__ 12. While the synchronization takes place, monitor the HACMP logs until you see the
message start server http_server. Check to see that the Apache server started OK.
__ 13. From the client, start a new window in Netscape and connect to the URL
http://appA#-svc. The Web screen Welcome to the IBM HTTP Server window should
pop up.
__ 14. Perform a failover test by halting the halifax# node in your favorite manner (for
example, “halt -q” or “echo bye > /dev/kmem”).
__ 15. Wait for takeover to complete and verify what happens to the Web server. Use the
page reload button on your Web browser to see if the Web server is really there.
__ 16. Bring up the halifax# node again and start HACMP.
__ 17. What has happened to the Resource Group, and why?

END OF LAB


Optional Exercises
For the Web-enabled Candidates
__ 1. Change the Web server pages on the shared disk to prove the location of the data
elements.

END OF LAB


Exercise 9. Mutual Takeover

What This Exercise is About


This lab exercise expands the capabilities of the cluster. The intent is
to completely add the resource group and all of its components while
the cluster is running.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Use the C-SPOC functionality and DARE capabilities of HACMP to
make changes to the cluster while it is running
• Add a new volume group while the cluster is running
• Add all of the components of a shared file system while the system
is running
• Add a resource group to the cluster and activate it while the cluster
is running
• Test a mutual takeover configuration

Introduction
In the scenario there are two resource groups to be made highly
available. The addition of the second resource group is done with the
C-SPOC commands with the cluster running.


Exercise Instructions
Preface
• Add the shared_res_grp_b resource group components according to the scenarios.
This will require a second filesystem.

Part 1: Add a Second Resource Group and Filesystem to the Cluster


__ 1. Ensure HACMP is running on both nodes and that the HTTP application is running
on halifax#
__ 2. Using the lspv command on BOTH nodes, verify that there is a shared disk hdiskX
available with the same PVID. If so skip to step 8.
__ 3. On the halifax# node make sure that the hdiskX has no PVID
__ 4. Create a new PVID for hdiskX
__ 5. On the toronto# node delete the hdisk.
__ 6. Add the disk back in.
__ 7. Verify the hdisk number and PVID agree between the two nodes.
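A sketch of the commands behind steps 3 through 7 (assuming the spare shared disk is
hdisk3 on both nodes; substitute your own hdisk number):
    # On halifax#: clear and then assign a new PVID
    chdev -l hdisk3 -a pv=clear
    chdev -l hdisk3 -a pv=yes
    # On toronto#: remove and rediscover the disk so it picks up the new PVID
    rmdev -dl hdisk3
    cfgmgr
    # On both nodes: confirm that the hdisk number and PVID agree
    lspv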
__ 8. On the administrative node (halifax#) create a shared volume group called
shared_vg_b using C-SPOC.
__ 9. Verify the Volume Group exists on both nodes.
Now that the volume group is created, it must be discovered, a resource group must be
created, and finally the volume group must be added to the resource group before any
further C-SPOC utilities will access it.
__ 10. Discover the volume group using Extended Configuration in smitty hacmp.
__ 11. Create a resource group called appB_group with the toronto# node as the highest
priority and halifax# node as the next priority.
__ 12. Add the volume group to the resource group
__ 13. Synchronize the Cluster.
__ 14. Once synchronized, the volume group is varied online on the owning node
(toronto#). Wait for this to happen. Then, on your administrative node (halifax#), use
C-SPOC to add a shared jfslog logical volume to shared_vg_b. The name
should be shared_jfslog_b, the LV type should be jfslog, and use 1 PP.
__ 15. Format the jfslog so that it can be used by the filesystem that is created in the next
few steps. If the log is not formatted, it is not used.
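A sketch of formatting the log device created in the previous step (answer y when logform
asks whether to destroy the current contents):
    logform /dev/shared_jfslog_b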
__ 16. Back on halifax#, add a second shared logical volume with number of LOGICAL
PARTITIONS=10, NAME= shared_jfslv_b, TYPE=jfs.


__ 17. Add a shared file system called /shared_fs_b on the previously created logical
volume. Using the F3 key, traverse back to Configure Volume Groups, Logical
Volumes and Filesystems. Select Shared File Systems.
__ 18. The filesystem should be available on node toronto# in a few minutes.
The following was observed during additional testing and may or may not be
repeatable: the smit panel reported that shared_fs_b is not a known file system and
posted a failed response. However, /etc/filesystems did contain the entry, a manual
mount from the toronto# node worked, and the resource group could then be moved
from one node to the other and back using the system management (C-SPOC) menu.

Part 2: Create the application and service label resources


__ 19. Log in to halifax# as root.
__ 20. Create the application start script:
echo 'hostname >> /shared_fs_b/appB.log' > /tmp/appB_start
echo 'date +" starting:%r" >> /shared_fs_b/appB.log' >> /tmp/appB_start
__ 21. Create the application stop script:
echo 'hostname >> /shared_fs_b/appB.log' > /tmp/appB_stop
echo 'date +" stopping:%r" >> /shared_fs_b/appB.log' >> /tmp/appB_stop
__ 22. ftp the scripts to the other node.
__ 23. Make the scripts executable on both nodes.
__ 24. On halifax#, create the Service IP label resource.
__ 25. Create the application server resource
__ 26. Add the resources to the resource group
__ 27. Synchronize the Cluster. Using the F3 key, traverse back to Initialization and
Standard Configuration.
__ 28. Test that the toronto# service IP label is available.
__ 29. Test the new resource group on the toronto# node for network adapter swap/failure
and node failure.
__ 30. OPTIONAL -- If you have an extra disk execute the mirrorvg and chfs commands to
test splitting off a copy as presented in the unit 3 lecture. Note that this step could be
done using C-SPOC to create the mirror. Also note that one purpose of this step is to
show how to undo the backup copy.

END OF LAB


Exercise Review/Wrapup
The first part of the exercise looked at using C-SPOC and DARE to add a new resource
group and all of its components to the cluster while it was running.


Exercise 10. HACMP Extended Features

What This Exercise Is About


This lab exercise expands the capabilities provided in the extended
features option.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Create a Cluster ‘Snapshot’
• Use the C-SPOC functions and DARE capabilities of HACMP to
make changes to the cluster while it is running
• Add additional Resource Groups while the cluster is running
• Add additional Service aliases
• Modify Resource Group Behavior Policies
• Configure Settling and Fallback timers

Introduction
To enhance the scenario, create two additional resource groups to be
made highly available. The addition of these resource groups and the
modification of their behavior are done with the C-SPOC commands
while the cluster is running.


Exercise Instructions
Preface
• Add additional service aliases, and create an additional custom resource group for each
node.
• Modify the default start and fallback policies of the new resource groups to examine the
resource behavior during cluster startup and reintegration event processing.
• Create a cluster snapshot before making the changes.

Part 1: Create a Cluster Snapshot


__ 1. In the last exercise we reached the goal of the class, so let's save our environment
before continuing by creating a Cluster Snapshot.
Notice:
There are two snapshot files <snapshot>.odm and <snapshot>.info
The directory for the snapshot is /usr/es/sbin/cluster/snapshots
The clsnapshotinfo command was run on both nodes (output in the “.info” file)
__ 2. Read the mutual_takover.info file. Go on to the next step when you are ready.

Part 2: Add an additional Service alias and Resource Group to each Cluster Node

__ 3. Log in to the halifax# node as root.
__ 4. Add two additional service labels to the /etc/hosts file
__ 5. Discover these new addresses in HACMP using the Extended Configuration menu
from smit hacmp.
__ 6. Configure the two additional HACMP Service IP Labels/Addresses as resources in
HACMP.
We now add two resource groups appC_group and appD_group with different startup
policies. The first (appC_group) behaves like the old inactive takeover, and the second
(appD_group) behaves like the old rotating.
__ 7. Add an additional Resource Group called appC_group with a home node of halifax#
and a startup policy of Online on First Available node.
__ 8. Add another Resource Group called appD_group with a home node of toronto# and
a startup policy of Online Using Distribution Policy.
__ 9. Now add the Service IP Labels configured above to the Resource Groups just
created. Using the F3 key, traverse back to the Extended HACMP Resource Group
Configuration and select Change/Show Resources and Attributes for a Resource
Group.


__ 10. In order to mimic the old rotating behavior, we need to change the distribution policy to
network. This is done using the smit extended runtime menu Configure Distribution
Policy for Resource Groups. The cluster must be stopped on both nodes first.
__ 11. Synchronize the cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen.

Part 3: Test Resource Group Behavior


__ 12. Start HACMP only on your toronto# node.
__ 13. Once the node is stable, check the status of the Resource Groups. Does this look
normal? If not, what is wrong -- should appC_group be online on toronto#?
__ 14. Start HACMP on the halifax# (left) node.
__ 15. Once the node is stable, check the status of the Resource Groups. Does everything
look correct now (check the appC_group)? If so, what changed? Why?
__ 16. OPTIONAL: To understand better the distribution policy, stop the nodes and bring up
halifax# first and see what happens to the appD_group. Then stop halifax# with
takeover. Then restart halifax# and see what happens to the appD_group.

Let's have a look at configuring a settling timer. It modifies the startup behavior of a
resource group whose startup policy is Online On First Available Node so that, combined
with the Fallback To Higher Priority Node In The List fallback policy, there are not
two online operations if you bring up the secondary node first.

Part 4: Add a Settling Timer


__ 17. Ensure that you are on your administration node (halifax#) and configure a Settling
Timer (can only be used if startup policy is ‘Online On First Available Node’ ).
__ 18. Synchronize the Cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen. Notice in the smit output the messages about the settling
timer value.

Part 5: Testing Cluster Behavior using a Settling Timer


__ 19. On your administrative node (halifax#), stop Cluster processing on both nodes.
__ 20. Wait 2 minutes and then start HACMP -- only on the toronto# node.
__ 21. Wait until you can see that the appB_group is online, then verify that appC_group is
still offline on toronto# using the clRGinfo command (note that the clRGinfo
command can be run from either node as long as HACMP is started on any one of
the nodes).
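For example, the resource group states can be displayed with the following (the full path is
shown in case the utilities directory is not in your PATH):
    /usr/es/sbin/cluster/utilities/clRGinfo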
__ 22. Start HACMP on the halifax# node.


__ 23. Verify that the appC_group comes online on halifax# (without first being online on
toronto#). As you can see the purpose of the settling timer is to prevent the
resources from being immediately acquired by the first active node.
__ 24. OPTIONAL -- repeat this part but wait for settling time to expire after starting the
cluster on toronto#. Verify that appC_group comes online on toronto#. Stop the
cluster manager on both nodes, wait 2 minutes, start the cluster manager on both
nodes.

Part 6: Configure Delayed Fallback Timer


__ 25. The cluster should be started on both nodes, and appC_group should be online on
halifax#.
__ 26. On your administrative node (halifax#), create a delayed fallback timer policy for 30
minutes from now (instructor may modify this time)
__ 27. Add the fallback timer policy to the resource group appC_group.
__ 28. Synchronize

Part 7: Testing Cluster Behavior using a Delayed Fallback Timer


__ 29. Verify that appC_group is online on halifax# using the clRGinfo command
__ 30. Stop the cluster manager only on halifax# with takeover
__ 31. Verify that appC_group is now online on toronto# (clRGinfo).
__ 32. Wait 2 minutes (required before a restart)
__ 33. Start the cluster manager on halifax#
__ 34. Monitor the cluster from toronto#. In /tmp/hacmp.out, at the event summary for
check_for_site_up_complete halifax#, there is now a message stating the fallback
time. Make sure appC_group is still on toronto# before the fallback, then tail -f the
hacmp.out file and wait for the fallback to occur.
__ 35. At the time set for the Delayed Fallback Timer, appC_group should move back to
halifax# (you should see activity from tail command)
__ 36. On your administrative node (halifax#), remove the name of the Delayed Fallback
Timer (my_delayfbt) from the resource group appC_group (you can keep the policy
definition if you want).
__ 37. Reset the Settling time to 0 (from the menu ‘Configure Resource Group Run-Time
Policies’).
__ 38. Synchronize.

END OF LAB


Exercise Review/Wrapup


This exercise looked at creating a cluster snapshot and then at using C-SPOC and DARE
to add service aliases and resource groups while the cluster is running, and at modifying
resource group behavior with the distribution policy, settling timer, and delayed fallback timer.



Exercise 11. IPAT via Replacement and HWAT

What This Exercise Is About


This lab explores removing a cluster and creating an IPAT via
replacement environment.
It also examines gratuitous ARP and the use of Hardware
Address Takeover (HWAT) for environments where
gratuitous ARP may not be the best solution.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Describe how to set up IPAT via replacement
• Describe to behavior of Arp updates/refreshes using gratuitous Arp
or Hardware Address Takeover where required.
• Describe how to set up HWAT


Exercise Instructions
Preface
• The first part shows how to remove a cluster
• The second part of this lab looks at setting up IPAT via replacement and using the
standard configuration path to build a cluster.
• The third part of this lab looks at gratuitous ARP.
• The fourth part of this exercise adds hardware address concepts. HWAT (MAC
address takeover) would be used in situations where gratuitous ARP may not be
supported, as with older hardware or non-standard operating systems.

Part 1: Remove and add a new Cluster


__ 1. On your administration node (halifax#), stop cluster services on both nodes.
__ 2. Take a cluster snapshot.
__ 3. Remove the cluster definition.
__ 4. Add a replacement service address to your /etc/hosts file (it must be on the same
subnet as one of the if interfaces).
__ 5. Configure a new cluster on halifax#. Go to the HACMP for AIX smit panel and select
Initialization and Standard Configuration.
__ 6. Use Extended Configuration to set the network to turn off IPAT via aliases.
__ 7. Use Extended Configuration to configure a non-IP network by choosing the pair of
devices that will make up the network. (Using the F3 key, traverse back to the
Extended Topology Configuration smit screen.)
__ 8. Redo the Persistent Addresses from your planning worksheet. (Using the F3 key,
traverse back to the Extended Topology Configuration smit screen).
__ 9. Create the Service IP Label resource. Using the F3 key, traverse back to the
‘Extended Configuration’ smit screen. Select ‘Extended Resource Configuration’.
__ 10. Create a resource group. Using the F3 key, traverse back to the Extended
Resource Configuration smit screen.
__ 11. Add Resources to the Resource Group. Using the F3 key, traverse back to the
HACMP extended Resource Group Configuration smit screen.
__ 12. Synchronize the cluster. Using the F3 key, traverse back to the Extended
Configuration smit screen. Select Extended Verification and Synchronization
__ 13. Start HACMP on the toronto# node.
__ 14. Verify the appR_group did not come online because of the startup policy.
__ 15. Start HACMP on the halifax# node.
__ 16. Verify that the appR_group is online on halifax#


Part 2: Gratuitous ARP


From the AIX 5L Version 5.1 commands reference for ifconfig. “Gratuitous ARP is
supported for ethernet, token-ring, and FDDI interfaces. This means when an IP address is
assigned, the host sends an ARP request for its own address (the new address) to inform
other machines of its address so that they can update their ARP entry immediately. It also
lets hosts detect duplicate IP addresses.”
This will make it a little difficult to create a failure with AIX clients but the tests are valid.
__ 17. Log on the client machine. Verify that clinfo has not been started.
__ 18. Use the ping command to test the service IP label of appR_group (on halifax#).
__ 19. Check the contents of the ARP cache.
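A sketch of the check from the client, assuming team number 1 and that the service label in
this resource group is appR1-repl (use whatever label you actually defined):
    ping -c 2 appR1-repl
    arp -a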
__ 20. On the halifax# node, generate a swap adapter event. Be aware that you need to do
this fairly quickly, before the ARP cache times out.
__ 21. Check the contents of the ARP cache on the client and compare the results with the
previous iteration of the command.
__ 22. The hardware address should have been updated in the ARP cache on the client without
any intervention.
Note: If the entry is not in the ARP cache when the gratuitous ARP is broadcast, it is
ignored.

Part 3: Hardware Address Takeover


In this scenario the router in Regina is a bit of an antique and does not support gratuitous
ARP. It was highlighted as a problem since the ARP cache retention is 15 minutes. This
problem was discovered during the preliminary cluster testing.
__ 23. On the halifax# node log in as root.
__ 24. Identify the interface that is reconfigured with the appR-repl service address and
write the MAC address here. _____________________________________
__ 25. Identify the alternate MAC address. To specify an alternate hardware address for an
Ethernet interface, change the first byte xx to 4x.
_________________________________________
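A sketch of finding the current hardware address (assuming the adapter device is ent0; the
address shown is illustrative only): lscfg shows a Network Address line such as
0004AC627249, and the alternate address would then be 4004AC627249.
    lscfg -vl ent0 | grep -i "network address"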
__ 26. Change the appR-repl service IP label to add an alternate hardware address in the
field.
__ 27. Synchronize the cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen. Notice the following message in the smit log: cldare:
Detected changes to service IP label appR1-repl. Please note that changing
parameters of service IP label via a DARE may result in releasing resource group
appR_group.


__ 28. Bring the appR_group online using the C-SPOC menu. If, on the client, there is no
ARP cache entry for the appR-repl service address, then ping the appR-repl service
address.
__ 29. Verify that the alternate hardware address is now configured on the interface for the
appR#-repl service address.
__ 30. Fail the halifax# node in your favorite manner.
__ 31. Check that the halifax# service address is on the toronto# node and observe the
hardware address associated with that service address.

Part 4: Re-create a Cluster from a Snapshot


__ 32. Ensure the cluster manager is stopped on both cluster nodes.
__ 33. Apply the snapshot that contains all the cluster definitions you made in exercise 10.
__ 34. Start HACMP
__ 35. For each resource group, verify to yourself that you understand how the online node
was chosen.
__ 36. Fail the halifax# node in your favorite manner.
__ 37. Restart the failed node and observe the re-integration. Verify that you understand
how the online node was chosen for each of the resource groups

END OF LAB


Exercise Review/Wrapup


This exercise looked at removing and re-creating a cluster, configuring IPAT via
replacement, and observing gratuitous ARP behavior. It also covered setting up and
testing Hardware Address Takeover and re-creating a cluster from a snapshot.



Exercise 12. Network File System (NFS)

What This Exercise Is About


This lab covers a couple of different methods for configuring network
filesystems with HACMP. It also demonstrates how to set various NFS
options on HACMP-exported filesystems.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Have HACMP export a filesystem as part of a resource group
• Have HACMP import a filesystem as part of a resource group
• Modify the NFS export options for the exported filesystem
• Add an NFS cross-mount
• Modify the NFS cross-mount for performance and flexibility


Exercise Instructions
Preface
• All exercises of this chapter depend on the availability of specific equipment in your
classroom.

Part 1: NFS exports in a resource group


__ 1. Assumptions: You need to start this exercise off with HACMP up on both nodes, and
identify two resource groups -- one whose home node is halifax# (that is,
appA_group) and the other whose home node is toronto# (that is, appB_group).
Each group should have a shared filesystem defined to it (that is, shared_fs_a and
shared_fs_b). On each node, verify that nfs is running (lssrc -g nfs) after HACMP is
started.
__ 2. Modify the resource group appA_group to add /shared_fs_a as a
filesystem/directory to NFS export and set to true the option ‘Filesystems mounted
before IP configured’
__ 3. Modify the resource group appB_group to add /shared_fs_b as a
filesystem/directory to NFS export. Using the F3 key, traverse back to the ’HACMP
Extended Resource Group Configuration’.
__ 4. Synchronize the resources. Using the F3 key, traverse back to ‘Extended
Configuration’ smit screen.
__ 5. When the reconfiguration of resources has completed on each node, check the
directories are exported through NFS.
__ 6. Log in on the client as root.
__ 7. Create a directory /halifax and /toronto.
__ 8. On the client, using the service address for the appA_group, mount the nfs exported
directory /shared_fs_a on the local directory /halifax.
__ 9. On the client, using the service address for the appB_group, mount the nfs exported
directory /shared_fs_b on the local directory /toronto.
__ 10. Verify the nfs directories are mounted where intended.
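A sketch of the client-side commands for steps 7 through 10, assuming team number 1 and
the service labels used earlier (substitute your own labels):
    mkdir /halifax /toronto
    mount appA1-svc:/shared_fs_a /halifax
    mount appB1-svc:/shared_fs_b /toronto
    mount | grep shared_fs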
__ 11. Back on a cluster node -- fail one of the nodes in your favorite manner. Verify that
the nfs directories are still exported on the remaining node and mounted on the
client system.
__ 12. Try to create a file in the /halifax directory. It should not work. Let's see how this can
be addressed.

Part 2: Modifying the NFS Export Options


__ 13. The output of the lsnfsexp command on the nodes shows that only the cluster
nodes can use user root. To change this we create an override file. Its name is
/usr/es/sbin/cluster/etc/exports. HACMP uses this file to update the /etc/xtab file
used by NFS.
__ 14. On the running node, use the lsnfsexp command to copy the current export
definitions to the HACMP file and then modify the HACMP file using the following commands:
- lsnfsexp > /usr/es/sbin/cluster/etc/exports
- Edit /usr/es/sbin/cluster/etc/exports and add the client to the list of hosts
- Save the file
- ftp the file to the other node
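After the edit, an entry in /usr/es/sbin/cluster/etc/exports might look roughly like this
(a sketch assuming team number 1; keep whatever other options lsnfsexp reported):
    /shared_fs_a -root=halifax1:toronto1:regina1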
__ 15. Restart the failed node.
__ 16. From the client, try to create a file in the NFS directory mounted from the node you
have just restarted.

Part 3: NFS Cross-mount within the Cluster


__ 17. On both nodes create a directory /hanfs.
__ 18. Edit the resource group appA_group and add the following to the option
‘Filesystems/Directories to NFS mount’: /hanfs;/shared_fs_a. This will mount the
/shared_fs_a NFS filesystem on the mount point /hanfs on all systems in that
resource group.
__ 19. Synchronize the resources and verify this is true on both nodes.
__ 20. Fail the toronto# node in your favorite manner.
__ 21. Confirm that halifax# node has all the resource groups, and that the NFS mounts are
OK.

END OF LAB


Exercise Review/Wrapup
This exercise looked at various methods of implementing NFS in an HACMP cluster.


Exercise 13. Error Notification

What This Exercise Is About


This lab covers adding error notifications into AIX through the
HACMP smit screens.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Add an error notification for the loss of quorum on a volume group
• Emulate the error condition and test the error notification method
• Optionally add another error notification based on filesystems full


Exercise Instructions
Preface
• This exercise looks at Automatic Error Notification. Before you configure Automatic
Error Notification, you must have a valid HACMP configuration. Using the SMIT options,
you can use the following methods:
- Configure Automatic Error Notification
- List Automatic Error Notification
- Remove Automatic Error Notification.
• Remember that Error Notification is a function of AIX - HACMP just gives you the smit
screens that make it easier to enter error notification methods.

Setting Up the automatic error notifications on the halifax# node.


__ 1. Log in as root on the halifax# node.
__ 2. Stop the Cluster. The cluster must be down to configure Automatic Error Notification.
__ 3. Configure Automatic Error Notification.
When you run automatic error notification, it assigns one of two error methods for all the error
types noted:
cl_failover is assigned if a disk or network interface card is determined to be a
single point of failure whose failure would cause the cluster to fall over. If there
is a failure of one of these devices, this method logs the error in hacmp.out and
shuts the cluster node down. A graceful stop is attempted first; if this is
unsuccessful, cl_exit is called to shut down the node.
cl_logerror is assigned for any other error type. If there is a failure of a device
configured with this method, the error is logged in hacmp.out.
__ 4. List Error Notification Methods. Use the F3 key to traverse back to the ‘Configure
Automatic Error Notification’ smit screen.
__ 5. To see the AIX ODM entries, execute the command odmget errnotify | more. The
HACMP-generated stanzas will be at the bottom.
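For reference, an errnotify stanza in the output looks roughly like the following (an
illustrative sketch only -- the names and values here are invented, and real stanzas contain
additional fields):
    errnotify:
            en_name = "example_notify"
            en_class = "H"
            en_resource = "hdisk2"
            en_method = "/usr/local/bin/my_notify_method $1"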

END OF LAB


Appendix A. Cluster Diagrams


Cluster Planning Diagram


AU54 lab teams client
REPLACE # with team number user
team number= ______ regina#
community
if1 192.168.#1.3
alias 192.168.#3.30

_______ IP Label IP Address HW Address _______ IP Label IP Address HW Address


if1 halifax#-if1 192.168.#1.1 _____________ if1 toronto#-if1 192.168.#1.2 _____________
if2 halifax#-if2 192.168.#2.1 _____________ if2 toronto#-if2 192.168.#2.2 _____________
Persist halifax#-per 192.168.#3.1 Persist toronto#-per 192.168.#3.2

Network = ________________ (netmask = ___.___.___.___)

Home Node Name halifax# Home Node Name toronto#


Resource Group = appA_group Resource Group appB_group
Startup Policy =OHNO Startup Policy =OHNO
Fallover Policy =FONP Fallover Policy =FONP
Fallback Policy =FBNF Fallback Policy =FBHP
Service IP Label =appA#-svc Service IP Label =appB#-svc
192.168.#3.10 192.168.#3.20
Application server =appA Application server =appB
serial
Label =halifax#_hdiskX_01 Label =toronto#_hdiskY_01
Device =/dev/hdiskX Device =/dev/hdiskX

Label =halifax#_tty0_01 serial Label =toronto#_tty0_01


Device =/dev/tty0 Device =/dev/tty0

rootvg rootvg
4.8 GB VG = 4.8 GB

VG =

Resource Group appA_group contains Resource Group appB_group contains


Volume Group= shared_vg_a Volume Group= shared_vg_b
hdisks = ______________ hdisks = ______________
Major # = ______________ Major # = ______________
JFS Log =shared_jfslog_a JFS Log = shared_jfslog_b
Logical Volume =shared_jfslv_a Logical Volume = shared_jfslv_b
FS Mount Point =/shared_fs_a FS Mount Point = /shared_fs_b

Cluster Planning Diagram
client
user
community hostname ___________
if1 _________________
svc alias ____________
_______ IP Label IP Address Hardware Address _______ IP Label IP Address Hardware Address
if1 _______ _________ _______________ if1 _______ _________ _______________
if2 _______ _________ _______________ if2 _______ _________ _______________
Persist _______ _________ Persist _______ _________

Network = _____________netmask=___.___.___.___

Home Node Name = Home Node Name =


Resource Group = Resource Group =
Startup Policy = Startup Policy =
Fallover Policy = Fallover Policy =
Fallback Policy = Fallback Policy =

Service IP Label = Service IP Label =

Application server= Application server =

Label = serial Label =


Device = Device =
serial Label =
Label =
Device = Device =

rootvg rootvg
4.8 GB VG = 4.8 GB

VG =

Resource Group __________ contains Resource Group __________ contains


Volume Group= ______________ Volume Group= ______________
hdisks = ______________ hdisks = ______________
Major # = ______________ Major # = ______________
JFS Log = ______________ JFS Log = ______________
Logical Volume = ______________ Logical Volume = ______________
FS Mount Point = ______________ FS Mount Point = ______________


Cluster Planning Diagram


client
user
community hostname ___________
if1 _________________
svc alias ____________
_______ IP Label IP Address Hardware Address _______ IP Label IP Address Hardware Address
if1 _______ _________ _______________ if1 _______ _________ _______________
if2 _______ _________ _______________ if2 _______ _________ _______________
Persist _______ _________ Persist _______ _________

Network = _____________netmask=___.___.___.___

Home Node Name = Home Node Name =


Resource Group = Resource Group =
Startup Policy = Startup Policy =
Fallover Policy = Fallover Policy =
Fallback Policy = Fallback Policy =

Service IP Label = Service IP Label =

Application server= Application server =

Label = serial Label =


Device = Device =
serial Label =
Label =
Device = Device =

rootvg rootvg
4.8 GB VG = 4.8 GB

VG =

Resource Group __________ contains Resource Group __________ contains


Volume Group= ______________ Volume Group= ______________
hdisks = ______________ hdisks = ______________
Major # = ______________ Major # = ______________
JFS Log = ______________ JFS Log = ______________
Logical Volume = ______________ Logical Volume = ______________
FS Mount Point = ______________ FS Mount Point = ______________

Front cover

HACMP Systems
Administration I: Planning and
Implementation
(Course Code AU54)

Instructor Exercises Guide


with Hints
ERC 5.0

IBM Certified Course Material



December 2004 Edition

The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without
any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer
responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While
each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will
result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 1998, 2004. All rights reserved.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions
set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Instructor Exercises Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Exercise Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Exercise 1. Cluster Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1

Exercise 2. Cluster Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1

Exercise 3. LVM Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1

Exercise 4. Network Setup and Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1

Exercise 5. HACMP Software Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1

Exercise 6. Client Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1

Exercise 7. Cluster Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1

Exercise 8. Application Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1

Exercise 9. Mutual Takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1

Exercise 10. HACMP Extended Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1

Exercise 11. IPAT via Replacement and HWAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1

Exercise 12. Network File System (NFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1

Exercise 13. Error Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-1

Appendix A. Cluster Diagrams. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1



Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM® is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
AFS® AIX® AIX 5L™
Cross-Site® DB2® DB2 Universal Database™
DFS™ Enterprise Storage Server® HACMP™
NetView® POWERparallel® pSeries®
Redbooks™ Requisite® RS/6000®
SP™ Tivoli® TME®
TME 10™ Versatile Storage Server™ WebSphere®
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product and service names may be trademarks or service marks of others.



Instructor Exercises Overview


The exercises for this course are based on a case study that is
introduced in Exercise 1. Each student team builds their own cluster
and resource groups. They have the freedom to choose their own
names for resources or follow the guidelines given in the exercises.
The objective is to build a mutual takeover environment.
In general the exercises depend on successfully completing the
previous exercises.

Exercise Description


Exercise instructions - This section contains what it is you are going to
accomplish. See the Lab Setup Guide and the course Lab Guide for
instructions and details pertaining to the labs. You are given the
opportunity to work through each exercise given what you learned in
the unit presentation.

Exercise 1. Cluster Design


(with Hints)

What This Exercise Is About


This exercise is a high-level design of a cluster. It is scenario-based.
This reinforces the lecture material.

What You Should Be Able to Do


At the end of the exercise, you should be able to:
• Create a high-level design of a cluster
• Interpret the business requirements into a diagram suitable for
creating further HACMP configuration information
• Describe how HACMP will assist in creating the design

Introduction
The scenario that the exercises are based on is a company which is
amalgamating its computer sites to a single location. It is intended to
consolidate computer sites from two cities into one situated roughly in
the middle of the original two. The case study has been designed
around five randomly chosen countries in the world. These countries
and city configurations have been tested in our environment but we
offer the choice to use your own. On to the scenario.

Required Materials
Your imagination.
Paper or a section of a white board.

Exercise Instructions with Hints


Preface
(All hints are marked by a » sign)
For this example we use the Canada cluster. The original configuration was one computer
located in Halifax and one in Calgary. The systems have been named by their city
designation to keep them straight. The corporate Web server resides on Halifax. Currently
the systems are running on internal disks, on systems too small for the task. As part of the
consolidation, new systems are used. These new systems are to be configured in such a
manner as to provide as close to 7x24x365 access to the Web server as possible with
pSeries technology. Corporate marketing is about to launch a major initiative to promote a
new product solely available on the Web. The corporate management has insisted that this
project be successful, and that the new computer center in Regina resolve all of the issues
of reliability that thus far have caused great corporate embarrassment. All eyes are focused
on this project.
A project briefing has been called by the senior executive to get an overview of how the
funds for the equipment are applied.
Your task is to prepare for that meeting to present a solution.

Exercise Steps
__ 1. Draw each of the computer systems as described.
__ 2. Add the applications to the nodes.
__ 3. Add a network connection to each system for access to the outside world.
__ 4. Evaluate the lack of high availability of the initial drawing of the two separate
systems.
__ 5. Combine the services of the existing networks resulting in a single network.
__ 6. Add new SSA disks to your drawing, showing cable connections.
__ 7. Make the disks highly available, RAID/mirror, redundant disks.
__ 8. Define the resources as described in the text.
__ 9. Define the characteristics of the resources.
__ 10. Indicate how the resources fail and recover.
__ 11. Make the diagram simple to understand.

END OF LAB

Exercise 2. Cluster Planning


(with Hints)

What This Exercise Is About


This exercise is going to build on the high-level design. You continue
to build upon the cluster. The next step is to document your hardware
to create an inventory of materials to work with. You use the cluster
planning worksheets and a generic cluster diagram to design and
document your cluster topology. The design is based on either the
country scenario provided or the high-level design you created in the
prior exercise.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Create component worksheets and a diagram showing your cluster
topology in detail
• Identify the hardware configuration of the classroom equipment

Introduction
There may be differences between the documentation and the real machines
in the classroom environment. The CPUs, network type, and type of
disk units have been selected to provide a consistent experience but a
variety of equipment may be used. Please ask if you have any
questions.
Note: Throughout this lab the terms shared volume group, shared file
system, node and client refer to components of your HACMP cluster.
The convention of <name> is to be substituted with the appropriate
thing. The example references a generic cluster’s naming of these
components. Some names in your cluster may be different from that
indicated in the notes.
Below is a picture of the generic cluster for this lab. The
communications path may be Ethernet, Token-Ring, FDDI, or any
other network supported by HACMP. There must also be a non-IP
serial network -- either RS232, target-mode SSA, or heartbeat over disk.
disk. The minimum requirement is that there are at least four shared
disks (SCSI, Fiber Channel or SSA) connected to a shared bus so that

two volume groups may be created and passed between nodes. If
adequate disks can be provided for the purposes of mirroring and
quorum, then a more realistic configuration can be built. However, this
is not a requirement of the lab exercises.
The systems provided are to be the new systems for the consolidated
computer center. You must prepare these systems to be the
replacements of the production systems. It is time to check out the
equipment to find out what is available to create our highly available
solution on.

Instructor Exercise Overview


Ensure that a team number (or letter) has been assigned to each
cluster team before starting this lab (see step 1). This number must be
a single character. The idea is that everyone can use the lab hints now
-- not just Canada as in the past. For names there will be a convention
like Canada# (where # is the team number). For IP addresses the
format of the third octet is #X, where # is the team number and X is the
subnet number (for example, team 2's first subnet is 192.168.21.0).
Point out that sample entries are shown in the component worksheet
tables.
Note that the diagram below also appears in the appendix. The
appendix has the cheat sheet version as well as two blank templates
so the student can take one blank one home.


LAB Reference Cluster

[Generic cluster diagram (AU545.0): a two-node cluster attached to the user community over a single network (netmask = 255.255.255.0). For each node the diagram provides blanks for the home node name; the resource group with its startup, fallover, and fallback policies; the service IP label; the application server; the non-IP network (tty, tmssa, or disk heartbeat) label and device; rootvg (4.8 GB); and the shared volume group. Blank copies of this diagram appear in Appendix A.]

Exercise Instructions with Hints


Preface
All hints are marked by a » sign.

Part 1: Examine the Cluster Environment and Complete the Cluster Component Worksheets with Storage Information

Using the cluster component worksheets (located at the end of this exercise), record the
information as listed in the following steps.

__ 1. Write down your team number here: ____. In these lab exercises you must replace
the symbol # with your team number unless otherwise noted.
__ 2. Log in as root on both of your cluster nodes. The root password will be provided by
your instructor.
__ 3. Identify and record in the cluster components worksheet the device names and
location codes of the disk adapters.
» lsdev -Cc adapter
__ 4. Identify and record in the cluster components worksheet the device names and
location codes of the external disks (hdisks and pdisk). Note: The external disks
may not have PVIDs on them at this time.
» lspv
» lsdev -Cc disk
» smitty ssadlog
__ 5. Identify and record in the cluster components worksheet the device names and
location codes of the internal disks.
» lsdev -Cc disk
» lsdev -Cc pdisk
__ 6. The storage needs to be divided into two volume groups. Size of the volume groups
is not important. In a real environment, disks should be mirrored and quorum issues
addressed. Here the emphasis is on the operation of HACMP not how the storage is
organized. You should have four disks so feel free to set up a mirror on one of the
volume groups. Different methods of configuring the disks are going to be used
throughout the exercises. Decide on the organization but only create the volume
groups when directed to.

__ 7. Identify and update the cluster planning worksheets with the names of 2 shared
volume groups. Use the following names or choose your own.
»

» shared_vg_a
» shared_vg_b
__ 8. Identify and update the cluster component worksheets with the LV component names to
have a shared file system in each of the two volume groups. Select names for the
logical volumes, jfs logs and filesystems. Use the following names or choose your
own.
» data lv’s shared_jfslv_a, shared_jfslv_b
» jfslog lv’s shared_jfslog_a, shared_jfslog_b
» file systems shared_fs_a, shared_fs_b
__ 9. Now add just the storage information to the generic cluster diagram of your
cluster. This diagram can be found in Appendix A (there are two blank ones after the
filled in one. One is for in class and the other is to take home). On the other hand
you may want to just compare the information on your component worksheets to the
filled in worksheet at the beginning of Appendix A.
• Only fill in what you know -- the LVM information-- at the bottom of the diagram.

GO NOW TO EXERCISE 3. You return to Part 2 after the lecture for the unit on network
planning.

Part 2: Examine the Cluster Environment and Complete the Cluster Component Worksheets with Networking Information

__ 10. Identify and record in the cluster components worksheet the device names (entX)
and location codes of the Network Adapters.
» lsdev -Cc adapter
» lsdev -Cc if
__ 11. Identify and record in the cluster components worksheets the IP addresses that will
be used for the cluster communication interfaces using the following guidelines:
• Ensure that the logical subnet rules are complied with. Each communication
interface must be on a different logical subnet.
• Ensure that all of the communication interfaces have the same subnet mask.
• You can use the following names/addresses or select your own. If you choose to
use your own please verify with the instructor that they will not conflict with
another team.
»
» In the following, replace # with your team number.
»
» communication interface halifax#-if1 192.168.#1.1
» communication interface halifax#-if2 192.168.#2.1
» communication interface toronto#-if1 192.168.#1.2

» communication interface toronto#-if2 192.168.#2.2


» netmask is 255.255.255.0
__ 12. Identify and update the cluster components worksheets with the names (IP Labels) and
addresses for the service and persistent labels using the following guidelines.
• Ensure that the logical subnet rules are complied with. Assume IPAT via alias
which means that the Service Labels/addresses and persistent addresses may
not be on the same logical subnet as any one of the communication interfaces.
• The following names/addresses may be used or select your own. If you choose
to use your own please verify with the instructor that they will not conflict with
another team.
»
» halifax#-per 192.168.#3.1
» toronto#-per 192.168.#3.2
» appA#-svc 192.168.#3.10
» appB#-svc 192.168.#3.11
__ 13. The IP network name is generated by HACMP.
__ 14. Identify and update the cluster components worksheets with the name for your cluster
(any string without spaces, up to 32 characters) using the following or choose your
own.
» cluster name is canada#
__ 15. Identify and update the cluster components worksheet with the device names and
location codes of the serial ports.
» lsdev -C | grep -i serial
__ 16. The non-IP network name is generated by HACMP.
__ 17. At this point in time most of the names for the various cluster components should
have been selected and populated on the cluster component worksheets. It is
important to have a clear picture of the various names of these components as you
progress through the exercises.
__ 18. Now add the networking information to the generic cluster diagram of your cluster.
This diagram can be found in Appendix A (there are two blank ones after the filled in
one. One is for in class and the other is to take home). On the other hand you may
want to just compare the information on your component worksheets to the filled in
worksheet at the beginning of Appendix A.
• Only fill in what you know -- cluster name, node names (halifax#, toronto#), and
IP information at the top.

Cluster Component Worksheets


Table 1: Non-shared Components Worksheet: FIRST Node
Non-shared Components Description Value

Node Name ------------------N/A-----------------------

***Network Adapter*** **IBM 10/100 Mbps Ethernet *** entX 10-60


Network Adapter IF1
Network Adapter IF2
Network Adapter IF3
Network Adapter IF4

***Ext. Disk Adapter*** ***SSA 160 SerialRAID Adapter*** ssaX 10-90


Ext. Disk Adapter 1
Ext. Disk Adapter 2

***Serial port*** ***Standard I/O Serial Port*** saX 01-S1


Serial port 1
Serial port 2

***TTY device *** ***Asynchronous Terminal*** ttyX 01-S1-00-00


TTY device 1
TTY device 2

***Internal Disk *** 16 Bit LVD SCSI Disk Drive hdiskX 10-80-00-4,0
Internal Disk 1
Internal Disk 2
Internal Disk 3

Persistent Address ------------------N/A-----------------------

***IF IP Label/address*** myname 192.168.#x.yy


IF1 IP Label/address
IF2 IP Label/address
IF3 IP Label/address

TMSSA device ------------------N/A-----------------------

Table 2: Non-shared Components Worksheet: SECOND Node


Component Description Value

Node Name ------------------N/A-----------------------

***Network Adapter*** **IBM 10/100 Mbps Ethernet *** entX 10-60


Network Adapter IF1
Network Adapter IF2
Network Adapter IF3
Network Adapter IF4

***Ext. Disk Adapter*** ***SSA 160 SerialRAID Adapter*** ssaX 10-90


Ext. Disk Adapter 1
Ext. Disk Adapter 2

***Serial port*** ***Standard I/O Serial Port*** saX 01-S1


Serial port 1
Serial port 2

***TTY device *** ***Asynchronous Terminal*** ttyX 01-S1-00-00


TTY device 1
TTY device 2

***Internal Disk *** 16 Bit LVD SCSI Disk Drive hdiskX 10-80-00-4,0
Internal Disk 1
Internal Disk 2
Internal Disk 3

Persistent Address ------------------N/A-----------------------

***IF IP Label/address*** myname# 192.168.#x.yy


IF1 IP Label/address
IF2 IP Label/address
IF3 IP Label/address

TMSSA device ------------------N/A-----------------------

Table 3: Shared Components Worksheet
Component Description Value

Cluster Name ---------------------N/A-----------------


Cluster ID ---------------------N/A-----------------

Cluster Subnet mask ---------------------N/A-----------------

Network Name ---------------------N/A-----------------


Network Name ---------------------N/A-----------------

***Shared Disk *** *P1.1-I3/Q1-W4AC50A84400D* hdiskX pdiskY


Shared Disk 1
Shared Disk 2
Shared Disk 3
Shared Disk 4

Shared vg 1 ---------------------N/A-----------------
Shared jfs log 1 --------------------N/A------------------
Shared jfs lv 1 --------------------N/A------------------
Shared filesystem 1 --------------------N/A------------------
-mount point --------------------N/A------------------

Shared vg 2 --------------------N/A------------------
Shared jfs log 2 --------------------N/A------------------
Shared jfs lv 2 --------------------N/A------------------
Shared filesystem 2 --------------------N/A------------------
-mount point --------------------N/A------------------

ALIAS: myname# 192.168.#x.yy


Service Label/address
Service Label/address
Service Label/address
Service Label/address
REPLACEMENT node1:
Service Label/address
Hardware Address ---------------------N/A-----------------

REPLACEMENT node2:
Service Label/address
Hardware Address ---------------------N/A-----------------

Exercise 3. LVM Components


(with Hints)

What This Exercise Is About


This exercise reinforces the steps involved in creating a shared
volume group with a filesystem to be used as an HACMP resource.

What You Should Be Able to Do


At the end of the exercise, you should be able to:
• Create a Volume Group suitable for use as an HACMP resource
• Create a filesystem suitable for use as an HACMP resource
• Manually perform the function of passing a filesystem between
nodes in a cluster

Introduction
The next phase in our scenario is to provide the storage for the highly
available application. We require a filesystem to store the Web pages,
one that can be accessed by each machine when that machine is the
active node.
To support the passing of a filesystem between nodes there must be a
volume group, a logical volume for the data, and a logical volume for the jfs log.
There are several methods to accomplish this task. Two are going to
be explored during the exercises. First, a manual creation to
emphasize the necessary steps in the process and second, in a later
exercise, an automated cluster aware method will be explored during
the C-SPOC exercise.

Required Materials
• Cluster Planning Worksheets and cluster diagram from the
previous exercise.
• Shared disk storage connected to both nodes.

Instructor Exercise Overview


This exercise should only configure one filesystem. Save the second one for C-SPOC.

Exercise Instructions with Hints


Preface
• All hints are marked by a » sign.

Configure Volume Group


__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in to both nodes as root.
__ 3. Verify that both nodes have the same number of disks.
» lspv
» lsdev -Cc disk
__ 4. Identify the internal and shared disks from the cluster worksheet. These disks might
or might not have PVIDs on them.
» lscfg | grep hdisk
If the hdisk numbering and PVIDs match between the two systems, then you can skip to step 10.
__ 5. On both systems delete only the external hdisks.
» hint assumes hdisk0 and hdisk1 are internal disks and that hdisk 2-5 are
external shared disks
» rmdev -l hdisk2 -d
» rmdev -l hdisk3 -d
» rmdev -l hdisk4 -d
» rmdev -l hdisk5 -d
» ALTERNATE METHOD:
» for i in $(lspv | grep -v hdisk0 | grep -v hdisk1 | awk '{print $1}'); do
» > rmdev -l $i -d
» > done
__ 6. On one system add all of PVIDs back in.
» hint assumes hdisk0 and hdisk1 are internal disks and that hdisk2-5 are
external shared disks
» cfgmgr -l ssar
» chdev -a pv=yes -l hdisk2
» chdev -a pv=yes -l hdisk3
» chdev -a pv=yes -l hdisk4
» chdev -a pv=yes -l hdisk5
__ 7. On the other system update the PVIDs.
» cfgmgr
__ 8. Verify the PVIDs were updated.
» lspv

__ 9. The hdisks and PVIDs should match on both systems.


__ 10. Find a VG major number not used on either node __________.
» on both nodes execute lvlstmajor (the numbers listed are available)
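As an illustration only (the numbers on your systems will differ):

   on halifax#:   lvlstmajor   ->   43...
   on toronto#:   lvlstmajor   ->   39,43...

Here 43 is free on both nodes and would be a suitable major number to record.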
__ 11. Go to your halifax# node. Create an Enhanced Concurrent Volume Group called
shared_vg_a. This will be the volume group for appA#’s shared data.
» smitty vg (or smitty ->system storage -> logical volume manager ->volume
groups)
» -> add a volume group)
» Fill in VG name
» Set partition size (if default value won’t work) <-- pop-up list is available
» Using F4 select a single physical volume (you may select another volume if
you have three or more shared disks)
» Set activate volume group automatically at system restart to ‘NO’
» Set the VG major number (use previous step)
» Set “Create VG Concurrent Capable?” to enhanced concurrent
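If you prefer the command line, a roughly equivalent command is sketched below (an illustration only; it assumes hdisk2 is one of your shared disks and 43 is the free major number found in the previous step):

   mkvg -y shared_vg_a -V 43 -n -C hdisk2

Here -n keeps the volume group from being activated automatically at system restart and -C requests an enhanced concurrent capable volume group.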
__ 12. Vary on the volume group and create a jfslog logical volume with a name of
shared_jfslog_a. The type is to be jfslog. Only one lp is required.
» varyonvg shared_vg_a
» smitty lv
» Select Logical Volumes -> Add a Logical Volume
» Select VG just created, a list is provided (F4)
» Fill in the NAME=shared_jfslog_a, number of lp = 1,TYPE = jfslog
» Fill in your favorite options
__ 13. Format the jfslog logical volume.
» logform /dev/shared_jfslog_a
» answer yes to delete all the information.
__ 14. Create a logical volume for data called shared_jfslv_a.
» smitty lv
» Add logical volume
» Enter your favorite options (lp=10 should be enough)
__ 15. Create a filesystem called shared_fs_a using the Add a Journaled File System on a
previously defined logical volume. The mount point should be /shared_fs_a and the
filesystem should not be automatically activated on system restart.
» smitty jfs
» Add/Change/Show/Delete File Systems
» Journaled File Systems
» Add a Journaled File System on a previously defined logical volume
» Add a standard Journaled File System
» F4 list, must be used to select the logical volume just created
» Use /shared_fs_a as the mount point

» Set mount automatically at system restart to 'NO'
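A command-line alternative for this step (a sketch only, using the names suggested above) is:

   crfs -v jfs -d shared_jfslv_a -m /shared_fs_a -A no

The -A no option corresponds to setting mount automatically at system restart to 'NO'.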


__ 16. Verify the filesystem can be mounted manually.
» mount /shared_fs_a
__ 17. Check the correct log file is active. If you have a loglv00 then you might not have
formatted the jfs log before you created the jfs.
» lsvg -l shared_vg_a
» ALTERNATE METHOD: mount command with no arguments.
__ 18. Umount the filesystem.
» umount /shared_fs_a
__ 19. Vary off the volume group.
» varyoffvg shared_vg_a
__ 20. On your Toronto# node, import the volume group using the major number, hdisk and
volume group information. The VG name must be the same as the system it was
created on.
» importvg -V <major number from step 10> -y shared_vg_a hdiskX
__ 21. Set the autovaryon flag to “off” for the volume group.
» chvg -an shared_vg_a
__ 22. Mount the filesystem on the second node and verify it functions.
» mount /shared_fs_a
__ 23. Check the correct log file is active.
» lsvg -l shared_vg_a
» ALTERNATE METHOD: mount command with no arguments.
__ 24. Unmount the filesystem.
» umount /shared_fs_a
__ 25. Vary off the volume group.
» varyoffvg shared_vg_a
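To recap the manual takeover you have just performed, the sequence for passing the filesystem between nodes (using the suggested names) is:

   on the node releasing the resources:
      umount /shared_fs_a
      varyoffvg shared_vg_a
   on the node acquiring the resources:
      varyonvg shared_vg_a     (the first time only, preceded by the importvg shown above)
      mount /shared_fs_a

This is exactly the work that HACMP automates when it moves a resource group.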

END OF LAB

Exercise 4. Network Setup and Test


(with Hints)

What This Exercise Is About


This exercise guides you through the set up and testing of the
networks required for HACMP.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Configure TCP/IP networking suitable for HACMP
• Test the TCP/IP configuration
• Configure non IP communications for HACMP
• Test the non IP communications for HACMP
• Configure and test name resolution and authentication

Introduction
This section establishes the communication networks required for
implementing HACMP. Networking is an important component of
HACMP, so all related aspects are configured and tested. The
information used in this exercise is derived from the previous exercise.

Instructor Exercise Overview


diskhb is not included here on purpose. It is automatically configured by the Two-Node
Cluster Configuration Assistant.
This lab should be done after the students complete part 2 of exercise 2.
The client ip information is deliberately left out of the host file in this exercise. It is added as
part of exercise 6.

LAB Reference Cluster

[Figure 4-1. Lab Reference Cluster (AU545.0): the same generic two-node cluster diagram shown in Exercise 2, with blanks for the network (netmask = 255.255.255.0), home node names, resource groups and their startup/fallover/fallback policies, service IP labels, application servers, non-IP network (tty, tmssa, or disk heartbeat) labels and devices, rootvg, and shared volume groups.]

Required Materials
• Cluster Planning Worksheets and cluster diagram from exercise 2.

Exercise Instructions with Hints


Preface
• All hints are marked by a » sign.

Part 1: Configure TCP/IP Interfaces and Name Resolution


__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in as root to both of the cluster nodes.
__ 3. Check the UNIX hostname (both the hostname command and the uname -n command
should give you the same desired answer).
» To display: hostname and uname -n
» To change: smitty hostname and/or uname -S ‘hostname’
__ 4. Using the component worksheets or configuration diagram for values, configure two
network adapters for use as communication interfaces, remember that each
communication interfaces must use a separate logical subnet.
Note: Do NOT use the minimum config and setup option in smit. It changes the
name of the node. Use smit chinet instead.
» smitty chinet - or
» smitty -> communication ->tcpip -> further configuration -> network interfaces
-> network interface selection -> change/show network interface -> edit each
interface (one at a time) with tcp/ip address, netmask and state (up).
__ 5. Recheck the hostname.
» hostname
» uname -n
__ 6. Verify the netmasks are specifically set in smitty chinet. The default could cause
errors later depending on what your network address was.
» smitty chinet -- select each interface, verify netmask settings are the same.
For this class the netmask should be 255.255.255.0
» or ifconfig -a
__ 7. Check the configuration against the cluster worksheets.
» netstat -i
» netstat -in
» ifconfig -a
__ 8. Repeat for other node. When both nodes are configured, test the communications
between the nodes. Use the ping command to verify connection between each set
of communication interfaces.
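For example (an illustration only; replace # with your team number), from halifax# you could check each logical subnet by address, since /etc/hosts is not updated until the next step:

   ping -c 2 192.168.#1.2      (toronto#-if1, first subnet)
   ping -c 2 192.168.#2.2      (toronto#-if2, second subnet)

Then repeat the test in the other direction from toronto#.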
__ 9. Update the /etc/hosts file on both nodes (update one and ftp it to the other node).

» vi /etc/hosts
» 127.0.0.1 loopback localhost halifax# toronto#
Note: Assumes the suggested hostnames are used (client addresses are covered in
Exercise 6)
» 192.168.#1.1 halifax#-if1
» 192.168.#2.1 halifax#-if2
» 192.168.#3.1 halifax#-per
» 192.168.#3.10 appA#-svc
» 192.168.#1.2 toronto#-if1
» 192.168.#2.2 toronto#-if2
» 192.168.#3.2 toronto#-per
» 192.168.#3.20 appB#-svc
»
__ 10. Verify name resolution and connectivity on BOTH nodes for all IP labels.
» host halifax#-if1
» ping halifax#-if1
» host halifax#-if2
» ping halifax#-if2
» host toronto#-if1
» ping toronto#-if1
» host toronto#-if2
» ping toronto#-if2
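If you prefer to test everything in one pass, a small loop such as the following could be run on both nodes (a sketch only; it assumes the suggested IP labels, with # replaced by your team number):

   for h in halifax#-if1 halifax#-if2 toronto#-if1 toronto#-if2
   do
      host $h
      ping -c 1 $h
   done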

Part 2: Configure Non IP Interface


__ 11. With your cluster planning sheets available, begin the configuration.
__ 12. Log in as root to both of the cluster nodes.
__ 13. On both nodes check if you can use a tty connection or SSA or both as a non IP
network. You may have to ask your instructor for details.
If not using tty for your non IP network then skip to step __ 16.

Using tty
__ 14. On both nodes check the device configuration of the unused tty device. If the tty
device does not exist, create it. If it does exist, ensure that a getty is not spawned, or
better still, delete it and redefine.
» smitty tty -> change/show tty or add tty
» Enable login = disable
» baud rate = 9600
» parity= No
» bits = 8
» stop bits = 1

__ 15. Test the non IP communications:


i. On one node execute stty < /dev/tty# where # is your tty number.
ii. The screen appears to hang. This is normal.
iii. On the other node execute “stty </dev/tty#” where # is your tty number.
iv. If the communications line is good, both nodes return their tty settings.

Using SSA
__ 16. If using target-mode SSA for your non IP network, then check if the prerequisites are
there. A unique node number must be set and the device driver must be installed. If
not add it.
» lsdev -C | grep ssa
» lsattr -El ssar
» lscfg -vl ssa0
» lslpp -L devices.ssa.tm.rte (If not installed on both nodes, ask your
instructor if it can be added and where are the needed information or
resources.)
• Install missing software.
• Install usable microcode (if required).
» chdev -l ssar -a node_number=<a unique number> (different on each
node)
» run cfgmgr -- must be run on the first node, then the second node, then the
first node.
» ls -l /dev | grep ssa -- verify the existence of the .tm and .im device files
__ 17. Test the non IP communication using SSA.
» Node A: cat < /dev/tmssa<number>.tm <-- node number of node B
» Node B: cat <filename> > /dev/tmssa<number>.im <-- node number of
node A

END OF LAB

Exercise 5. HACMP Software Installation


(with Hints)

What This Exercise Is About


This exercise installs the components of HACMP for AIX to support all
resource group policies.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Verify the node is prepared for the installation of HACMP
• Identify and install the packages to run HACMP

Exercise Instructions with Hints


Preface
• This exercise is composed of two parts, system capacity checks and software
installation.
• All hints are marked by a » sign.

Part 1: System Capacity Checks


__ 1. Log into halifax# as root (this part can be done in parallel by a second person
working on the other node, toronto#).
__ 2. Verify the following disk space requirements:
i. 140 MB free space in /usr, although the installation of the software will
automatically increase the size if required.
ii. 100 MB free in /tmp, /var and /
» Use df -k to check the filesystem sizes.
__ 3. For most lab environments, check to see that the system paging space is set to
twice the size of main memory. This is the default recommendation for small
memory machines.
» lsps -a to see the paging file space
» lsattr -E -l sys0 -a realmem to see the amount of memory
__ 4. Ensure Part 1 is performed for your other node, toronto#

Part 2: HACMP Node Installation


__ 5. Log in to halifax# as root (this part can be done in parallel by a second person
working on the other node toronto#).
__ 6. Verify the AIX prerequisites are installed. If any of these are not installed notify your
instructor. RSCT filesets must be at version 2.2.1 or later.
- bos.adt.lib
- bos.adt.libm
- bos.adt.syscalls
- bos.data
- rsct.compat.basic
- rsct.compat.clients
- devices.ssa.tm
- devices.scsi.tm
» lslpp -L bos.adt*
» lslpp -L bos.data
» lslpp -L rsct.compat*
» lslpp -L devices.ssa.tm.rte

» lslpp -L devices.scsi.tm.rte
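To check all of the prerequisite filesets in one pass, a loop like this could be used on each node (a sketch only; adjust the fileset list to match the requirements above):

   for f in bos.adt.lib bos.adt.libm bos.adt.syscalls bos.data \
            rsct.compat.basic rsct.compat.clients \
            devices.ssa.tm.rte devices.scsi.tm.rte
   do
      lslpp -L $f || echo "$f appears to be missing"
   done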


__ 7. Change directory to the location of the filesets. In most classes they can be found in
a subdirectory of the /usr/sys/inst.images directory. If there are questions, ask the
Instructor.
» ls -l /usr/sys/inst.images
__ 8. Install preview the following HACMP filesets:
• HACMP
- cluster.adt.es
- cluster.doc.en_US.es
- cluster.es
- cluster.es.clvm
- cluster.es.cspoc
- cluster.license
- cluster.man.en_US.es
- cluster.msg.en_US.cspoc (lower case en)
- cluster.msg.en_US.es
» Use smitty --> Software Installation and Maintenance
--> Install and Update Software
-->Install and Update Software from ALL Available Software
» Enter “.” for the directory
» Press F4 to see the list of filesets.
» After choosing the filesets and returning to the install menu, set “PREVIEW”
to yes and set “ACCEPT new license agreements” to yes. Then execute the
preview install.
__ 9. If the HACMP packages pass the prerequisite check, set preview to no and install
the HACMP filesets. If there is a prerequisite failure, notify your Instructor.
__ 10. Install HACMP maintenance. Check the /usr/sys/inst.images directory for an HA
updates directory (in many classes it will be the subdirectory ./ha52/ptf1). If you
have questions, ask the instructor.
» Change directory to the maintenance directory.
» smitty install -> Install and Update software
--> Update installed software to latest level (Update All)
» Enter “.” for the directory
» Change the commit updates field from ‘yes’ to ‘no’
» Change the save replaced files from ‘no’ to ‘yes’
__ 11. Reboot the nodes.
» shutdown -Fr
__ 12. Verify the SMIT menus. Check to see if the HACMP screens are available.
» smitty hacmp

__ 13. (Optional) It would be a good idea to set up your /.profile to include paths to the
HACMP commonly used commands so that you don’t have to keep entering full path
names in the later lab exercises.
» PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities
» PATH=$PATH:/usr/es/sbin/cluster/etc
» PATH=$PATH:/usr/es/sbin/cluster/diag
__ 14. (Very Optional) If the nodes have a tape subsystem attached, now would be a good
time for a mksysb backup.
__ 15. Ensure Part 2 is also performed for your other node toronto#.

END OF LAB

Exercise Review/Wrapup


This is a good place to stop for a backup.

Exercise 6. Client Setup


(with Hints)

What This Exercise Is About


This exercise sets up the client for access to the HACMP system. It is
used to demonstrate how the outside world views the highly available
system.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Ascertain if the client has been set up to access the HACMP
cluster
• Verify the communication between the client and nodes is
functioning correctly

Introduction
Our scenario has a Web server to be made highly available. We are
required to test the availability traits of the Web server. This exercise
creates a client to test from.

Required Materials
HACMP planning sheets.
AIX bonus pack
Client machine

Instructor Exercise Overview


This exercise assumes httpdlite is installed and running on the client machine.

Exercise Instructions with Hints


Preface
• All exercises of this chapter depend on the availability of specific equipment in your
classroom.
• Replace the symbol # with your team number.
• All hints are marked by a » sign.

Part 1: Setting Up the Client Communications


__ 1. This exercise requires that there is a third AIX node for your team. If you have only a
PC then you can (after the application integration lab) add clstat.cgi to the
/usr/HTTPServer/cgi-bin directory on both cluster nodes and then use the PC
browser to go to the service address of the HTTP resource group -- your instructor
can help you with this.
__ 2. Log in to the client (your third machine) as root. If CDE is used on this machine then
leave CDE for now. CDE comes back again after the reboot later in this exercise.
__ 3. Execute smit mktcpip to set the hostname and ipaddress of this machine for an enX
interface (that has a cable in it). The IP address must be on the same subnet as one
of the node interfaces. The suggested hostname is regina# and the suggested
address is 192.168.#1.3. Do not set default route or DNS.
__ 4. Create an alias for the interface above to be on the same subnet as the service
labels. The suggested value is 192.168.#3.30 and the subnet mask is
255.255.255.0.
» smit inet --> configure Aliases --> Add an IPV4 Network Alias
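An equivalent command-line form (an illustration only; it assumes the interface configured in the previous step is en0 and that # is your team number) would be:

   ifconfig en0 alias 192.168.#3.30 netmask 255.255.255.0

Note that an alias added with ifconfig does not survive a reboot, which is one reason the smit path above is suggested.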
__ 5. Acquire the /etc/hosts file from halifax# and ensure that the information in this file
agrees with what you did in the previous two steps.
» ftp halifax#-if1
» get /etc/hosts
» quit
» vi /etc/hosts
__ 6. Test to ensure that TCP/IP functions correctly.
» ping halifax#-if1
» ping toronto#-if1
__ 7. Test name resolution of the client and the nodes.
» Use the host command to test name resolution.

Part 2: HACMP Client Install and setup


__ 8. Install the HACMP client filesets:

6-2 HACMP Implementation © Copyright IBM Corp. 1998, 2004


Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V3.0
Instructor Exercises Guide with Hints

cluster.adt.es
cluster.es (choose only the three client filesets)
cluster.license
cluster.man.en_US.es
cluster.msg.en_US.es (choose only the client fileset)
» Change to the directory with the filesets
» smitty install_all
__ 9. Install the ptf1 updates
» Change to the directory with the filesets
» smitty update_all
__ 10. In order to use clstat.cgi, verify that httpdlite is running and that Netscape is
available on this machine. If not ask your instructor.
» ps -ef | grep httpdlite
» netscape file:///usr/lpp/bos.sysmgt/mkcd.README.html
__ 11. Verify Netscape starts and can display a URL, like
file:///usr/lpp/bos.sysmgt/mkcd.README.html
The next three steps prepare you to use clinfoES from the client machine after HACMP is
started in the next exercise.
__ 12. Copy the clstat.cgi script from /usr/es/sbin/cluster to the /var/docsearch/cgi-bin
directory.
» cd /var/docsearch/cgi-bin
» cp /usr/es/sbin/cluster/clstat.cgi ./
__ 13. Verify that the file /var/docsearch/cgi-bin/clstat.cgi is world-executable (755 or
rwxr-xr-x)
» chmod +x clstat.cgi
» ls -al clstat.cgi
__ 14. Test access to clstat.cgi using the URL
http://localhost:49213/cgi-bin/clstat.cgi <-- you should get a window with the
message “Could not initialize clinfo connection”.
__ 15. Put the cluster nodes ip address (that is, halifax#-per and toronto#-per) into the
/usr/es/sbin/cluster/etc/clhosts file. Make sure you can ping these addresses.
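As an illustration (using the suggested persistent addresses, with # replaced by your team number), the clhosts file on the client could simply contain:

   192.168.#3.1
   192.168.#3.2

These correspond to halifax#-per and toronto#-per; clinfoES on the client uses the entries in this file to find a cluster node from which to obtain cluster status information.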
__ 16. Reboot and do the ping tests to verify that this client machine functions as expected.

END OF LAB

Exercise Review/Wrapup
We have the client all set and ready to go with communication checked, and name
resolution.

Exercise 7. Cluster Configuration


(with Hints)

What This Exercise is About


This lab covers the configuration and testing of a one sided custom
resource group. The cluster planning worksheets continue to be
updated as the capabilities of the cluster grow.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Use the Initialization and Standard Configuration menu to
• Discover nodes and networks
• Discover Volume groups and Filesystems
• Add a custom resource group to the cluster
• Verify the correct operation of a custom resource group
• Perform failover testing on the configured resource group

Introduction
The scenario is expanding: you now create a custom resource group.
This is the beginning of making an application highly available.

Required Materials
Cluster planning worksheets.

Exercise Instructions with Hints


Remember this?

Where are We in the Implementation


• Plan for network, storage, and application
  - eliminate single points of failure
• Define and configure the AIX environment
  - storage (adapters, LVM volume group, filesystem)
  - networks (IP interfaces, /etc/hosts, non-IP networks and devices)
  - application start and stop scripts
• Install the HACMP filesets and reboot
• Configure the HACMP environment
  - Topology: cluster, node names, HACMP IP and non-IP networks
  - Resources: Application Server, Service labels
  - Resource group: identify name, nodes, policies
    Resources: Application Server, service label, VG, filesystem
• Synchronize then start HACMP
We are now ready to Configure the HACMP environment.
First we must set up the application environment so that we can do the configuration all at
once using the Two-Node Cluster Configuration Assistant.
Note: These steps can only be done on one node. You should choose one of your nodes to
be the administration node. We will assume it is halifax#
• All hints are marked by a » sign.

Part 1: Setting up the application environment


We use a dummy application for now to see how the Two-Node Cluster Configuration
Assistant works.
__ 1. Log in to halifax# as root
__ 2. Execute the commands:
echo 'date +starting:%H:%M >> /tmp/appA.log' > /tmp/appA_start
echo 'date +stopping:%H:%M >> /tmp/appA.log' > /tmp/appA_stop
THE FOLLOWING WAS reported by an instructor testing this lab for this step:
I think the application displays starting:<time> and puts it into the log. However, I got the
screen display but no log entry, so I edited the file and added a tee:
"date +starting:%H:%M | tee /dev/pts/0 >> /tmp/appA.log".
__ 3. Execute the commands:
chmod +x /tmp/appA_start
chmod +x /tmp/appA_stop
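Before continuing, you may want to confirm the dummy application behaves as expected (a quick check using the files just created):

   cat /tmp/appA_start
   /tmp/appA_start
   cat /tmp/appA.log        (a starting:<time> entry should now be present)

The same log file is appended to each time the start or stop script runs.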
__ 4. Log in to toronto# as root and execute the command exportvg shared_vg_a (This
is so that you can see that the 2-node assistant automatically imports the vg on the
other node).
__ 5. Return to your halifax# node.

Part 2: Configuring HACMP


With your cluster planning sheets available begin the configuration.
__ 6. Run the Two-Node Cluster Configuration Assistant. You need an ipaddress(label)
for the second node, an application server name unique to your team, start and stop
script names, and a service label.
» smitty hacmp --> Initialization and Standard Configuration
--> Two-Node Cluster Configuration Assistant
» For the Communication Path to Takeover Node, select (F4) toronto#-if1
» For the Application Server Name, type appA
» For the Application Server Start script, type /tmp/appA_start
» For the Application Server Stop Script, type /tmp/appA_stop
» For the Service IP Label, select (F4) appA#-svc.
__ 7. If you encountered an error then do the cluster remove procedure (see lecture or
ask instructor) on both nodes before retrying.
Let's now look at what happened as a result of this command.
__ 8. Look at the smit output to see what the Assistant did. You can also find this output in
the /var/hacmp/log/clconfigassist.log file.
__ 9. Log on (go) to your other node (toronto#) to prove that the cluster was created on
both nodes. Use the command cldisp | more to answer the following questions:

• Were the application start and stop scripts copied over? ________________
• Was the volume group imported to the other node? ____________________
Use the command cldisp | more to answer the following questions:
• What is the cluster name? ______________________________________
• What is the resource group name? _______________________________
• What is the startup policy? ______________________________________
• What is the fallback policy?______________________________________
• What is the vg resource name (if any)? _____________________________
• What is the non-IP network name (if any)? ___________________________
• On what enX is halifax#-if1? _____________________________________
• What is the ip network name? ____________________________________
• Were the start/stop scripts copied over? ____________________________
__ 10. So were you impressed? _________________________________
__ 11. You can now add the ip network and non-IP network names, that we promised
would be generated by HACMP, to your component work sheets and/or the cluster
diagram if you want to.
__ 12. Return to your administrative node (halifax#).
__ 13. Define an additional Non-IP RS232 or a TMSSA network. The lab environment may
help you decide. Note that a network is automatically created when you choose the
pair of devices that form the endpoints of the network.
» smitty hacmp
» Select, ‘Extended Configuration’
» Select Extended Topology Configuration
» Select ‘Configure HACMP Communication Interfaces/Devices’
» Add Communication Interfaces/Devices
» Select ‘Add Discovered Communication Interface and Devices’
» Select ‘Communication Devices’ from the list
» Select, using F7, the Point-to-Point Pair of Discovered Communication
Devices (either a /dev/tty# pair or a TMSSA# pair).
__ 14. Execute the command cltopinfo and see that the additional non-IP network was
configured. Add this name to the worksheet and/or diagram.
__ 15. Add a persistent node address for each node in the cluster -- select ‘Configure
HACMP Persistent Node IP Label/Addresses’ from the ‘Extended Topology
Configuration’ menu,
» Configure a Persistent Node IP Label/Address
» Select Add a Persistent Node IP Label/Address
» Select a node form the list, press enter
» Select (using F4) the network name and IP Label/Address -- the Network
Name will be the same Network Name that the interfaces and service labels
belong to. The suggested IP labels are names of the form XXX-per.
» Repeat this step for the other node.

__ 16. Synchronize the changes -- Using the F3 key, traverse back to the Extended
Configuration smit screen.
» Select Verify and Synchronize HACMP Configuration
Review the output upon completion looking for any Errors or Warnings. Errors must
be corrected before continuing, warnings should simply be reviewed and noted.
__ 17. Check to see that your persistent addresses were created. If not then wait until the
cluster is started in Part 3 below and then check again.
» netstat -i
__ 18. Take about 10 minutes to review the Startup, Fallover, and Fallback policies using
the F1 key on the Add a Resource Group menu. When you are ready, proceed to
Part 3.
» smitty hacmp --> Initialization and Standard Configuration
-->Configure HACMP Resource Groups -->Add a Resource Group

Part 3: Starting HACMP


__ 19. With your cluster planning sheets available as reference documentation, it is time to
start the cluster just on your administrative node (halifax#).
» On the halifax# node, enter smitty clstart (or smit hacmp
» -> System Management (C-SPOC)
» -> Manage HACMP Services
» -> Start Cluster Services <-- choose halifax# and start clinfo.
__ 20. Observe the output on one of the logs.
» execute “tail -f /usr/es/adm/cluster.log” for overview log
» execute “tail -f /tmp/hacmp.out” for detail log
__ 21. Check that all resources were acquired successfully on the halifax# node.
» lsvg -o
» mount
» df -k
» netstat -i or netstat -in (also verify persistent addresses are there)
» There is a “starting” message in the /tmp/appA.log file.
__ 22. Go to your client machine (regina#).
__ 23. Start the clinfoES subsystem and verify that the /usr/es/sbin/cluster/clstat -a
command works.
» startsrc -s clinfoES
» clstat -a
__ 24. There is another option on the clstat command, the -r # option. This option sets the
refresh rate of the information. For the lab environment, "-r 10" may be a more
appropriate value. Restart clstat with the -r 10 option.

__ 25. Now start Netscape and make sure that the URL to clstat.cgi is working properly.
• The URL is http://localhost:49213/cgi-bin/clstat.cgi
• You should now see a window with cluster information displayed. Be patient if
this window shows that the cluster is unstable.
• Take a moment to familiarize yourself with what you are looking at. Click on the
resource group name app#
• You will use this session to monitor the failover testing that comes next (or you
can run clstat on one of your cluster nodes)
__ 26. Now go to your administrative node (halifax#) and stop cluster services gracefully. Watch what
happens in the clstat browser (be patient -- it may take 2 minutes).
» smitty clstop <-- stop halifax# graceful
__ 27. Now start HACMP and clinfo on BOTH nodes
» smitty clstart <-- start halifax# and toronto# and clinfo
__ 28. Use the lsvg command to see that the shared vg is varied on in passive mode on the
other node (toronto#).
» lsvg shared_vg_a

Part 4: Failover testing


__ 29. Return to your administrative node (halifax#) with your cluster planning sheets
available for reference.
It is time to test the cluster. Although the failover testing is a function of planning to
eliminate single points of failure, some basic tests should be performed on any cluster.
__ 30. On both nodes verify the IP labels used on each interface. “netstat -i” Notice which
interface has which IP label.
__ 31. On the toronto# node telnet to the appA service address (appA#-svc).
__ 32. Run the tail -f /tmp/hacmp.out. There should not be any scrolling of the log file.
On your halifax# node, fail the adapter (enX) that the appA#-svc address is running on by
executing the command ifconfig enX down (or disconnect the cable from the enX adapter
card).
Watch the reaction on both nodes in the /tmp/hacmp.out file. Also monitor the clstat
window. Notice that the telnet session from the toronto# node was not interrupted
and that the log information scrolled by during the event processing.
Instructor note: ifconfig down is no longer corrected by HACMP at least for IPAT via alias.
__ 33. When swap adapter has completed, verify that the location of the appA#-svc service
address is now on another ethernet adapter.
» Use netstat -i
__ 34. Restore the failed adapter. The interface should now be in an “UP” state.

» ifconfig enX up, or
» connect the network cable
__ 35. (Optional) - You may wish to swap the service address (and/or) persistent address
back by using C-SPOC.
» execute smitty hacmp.
» Select Cluster System Management (C-SPOC)
» Select HACMP Communication Interface Management
» Select Swap IP Addresses between Communication Interfaces
» Select appA#-svc from the ‘Available Service/Communication Interfaces’ smit
screen.
» Select halifax#-if1 from the Swap onto Communication Interface smit screen
(This hint assumes that appA#-svc is currently an alias to halifax#-if2).
» confirm the information displayed and press Enter
__ 36. Using the console rather than a telnet session (because you will lose it), monitor the
hacmp.out file on the halifax# (left) node and disconnect both network cables at
the same time.
__ 37. There should be a network down event executed after a short period of time. What
happens to the resource group on the halifax (left) node, and why?
» The resource group should have moved to the toronto# node
» The reason is that selective fallover is invoked when a network down
event is detected on halifax#. HACMP moves the resource group to
maximize its availability
__ 38. Check the /tmp/hacmp.out file on the toronto# node, it should also have detected a
network failure.
__ 39. Restore both the network connections for the halifax# node. What event do you
observe happens?
» network up
__ 40. Where is the resource group at this time? Verify that the IP labels, volume groups,
file systems, and application are available on that node.
» The resource group should still be on the toronto# node because the Fallback
policy is Never Fallback.
» netstat -i
» lsvg -o
» /usr/es/sbin/cluster/utilities/clRGinfo
» cat /tmp/appA.log
__ 41. You are now going to move resources back from one node to the other. On the
halifax# node monitor the log. On the toronto# node execute smit clstop and stop
the cluster services with the mode of takeover. Leave the default value for the other
fields.
» execute “tail -f /usr/es/adm/cluster.log” for overview log


» execute “tail -f /tmp/hacmp.out” for detail log


» smit clstop
__ 42. The clstat.cgi should change colors from green to yellow (substate unstable,
toronto# leaving) and the state of the toronto# node and interfaces should change to
red (down).
__ 43. All of the components in the resource group should move over to the halifax# node.
Verify the IP labels, volume groups, and file systems on the halifax# node.
» netstat -i
» lsvg -o
» /usr/es/sbin/cluster/utilities/clRGinfo
» cat /tmp/appA.log
__ 44. On the toronto# node restart HACMP. Observe the /tmp/hacmp.out file on the
halifax# node and, of course, the clstat session. The resource group stays put.

END OF LAB


Exercise Review/Wrapup


You have a running cluster. Congratulations, now the fun really begins. Make sure clstat
shows the cluster as stable with the tcp/ip and non-ip networks up.


Exercise 8. Application Integration


(with Hints)

What This Exercise Is About


The HACMP cluster is now functional with a highly available filesystem
and IP label. Adding the Web server to the scenario is the next step.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Add the IBM Web server powered by Apache to the nodes
• Adjust the configuration of the Web server to acknowledge the
highly available IP label
• Introduce a minor configuration change to the Web server to use
the shared storage
• Add an application start and stop script to HACMP
• Test the application functionality

Introduction
The intention is not to become Web server programmers but simply to
add an existing application to the HACMP environment. This is to
demonstrate one way to add an application to the HACMP
environment.

Required Materials
A running cluster
The AIX 5L Expansion pack


Exercise Instructions with Hints


Preface
• As part of this exercise, C-SPOC and DARE are used to enable the addition of
filesystems, applications and resource changes to the cluster while it is running. If all
things function as designed, no system reboots or HACMP restarts are required.
• All hints are marked by a » sign.

Part 1: Install the IBM Web server file system


__ 1. With your cluster planning sheets available, begin the configuration.
__ 2. Log in as root on the halifax# node.
__ 3. Create a new filesystem for the Web documents. Enter smit hacmp.
» Select System Management (C-SPOC)
» Select HACMP Logical Volume Management
» Select Shared Logical Volumes
» Select Add a Shared Logical Volume
» Select the resource group.
» Select shared_vg_a
» Select Auto-select.
» Enter the menu information as follows:
» Number of Logical Partitions (LPs) -- 10
» The logical volume name is shared_httplv
» The lv type is jfs.
» Press Enter to create the lv.
» Using F3 key, traverse back to ‘HACMP Logical Volume Management’
» Select Shared File systems
» Select Journaled File Systems
» Select Add a Journaled File System on a Previously Defined Logical Volume
» Add a standard JFS
» Pick the shared_httplv entry from the pop-up menu
» Enter /usr/HTTPServer/htdocs as the mount point. Leave the defaults for the
other values and press enter.
__ 4. Verify both nodes know about the new file system.
» cat /etc/filesystems (on both nodes)
__ 5. Continue on the halifax# node. Check to see that the filesystem is mounted on the
system that currently owns the resource group (should be halifax#).
» df

Part 2: Install the IBM Web server software


__ 6. Check if the http filesets listed below are installed. If not, ask your instructor. On many
class images they may be found in the directory /usr/sys/inst.images/web-appl.
Otherwise you may need the AIX 5L Expansion Pack CD.
» http_server.base
» http_server.admin
» http_server.html

» lslpp -L http_server*
» http_server.man (OPTIONAL man pages)
__ 7. On the other node (toronto#), repeat the previous step. Once installed, delete all of
the information in the directory /usr/HTTPServer/htdocs (only on this node!).
» cd /usr/HTTPServer/htdocs
» rm -r ./*
» This is because, when the resource group is acquired, this directory is
over-mounted by the shared filesystem /usr/HTTPServer/htdocs.
__ 8. Go back to the halifax# node. In the directory /usr/HTTPServer/conf/, edit httpd.conf
and change the “ServerName” variable to be the same as the service IP label
(appA#-svc).
Note: The hostname must be resolvable, that is, host hostname should return a
good answer. If the hostname is not resolvable, add the hostname to the 127.0.0.1
address as an alias. If in doubt, ask the Instructor. Remember to do this on both
nodes otherwise successful takeover does not happen.
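As a sketch only (assuming team number 1, so the service label is appA1-svc and the node hostname is halifax1), the relevant lines might look like:
    In /usr/HTTPServer/conf/httpd.conf:   ServerName appA1-svc
    In /etc/hosts (only if the hostname does not already resolve):   127.0.0.1   loopback localhost halifax1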
__ 9. Use ftp to put a copy of the /usr/HTTPServer/conf/httpd.conf file on the toronto#
node.
» ftp toronto#
» put /usr/HTTPServer/conf/httpd.conf /usr/HTTPServer/conf/httpd.conf

Part 3: Configure HACMP for the Application


__ 10. Add the Application Server to HACMP.
» smitty hacmp
» Select ‘Initialization and Standard Configuration’
» Select ‘Configure Resources to Make Highly Available’
» Select ‘Configure Application Servers’
» Select ‘Add an Application Server’
» Enter http_server as the application name.
» Enter “/usr/HTTPServer/bin/apachectl start” as the application start script.
» Enter “/usr/HTTPServer/bin/apachectl stop” as the application stop script.
__ 11. Change the appA_group to use the Application Server http_server.
» Use the F3 key to go back to the Initialization and Standard Configuration’
screen


» Select Configure HACMP Resource Groups


» Select Change/Show Resources for a resource Group (Standard)
» Select the ‘appA_group’ resource group.
» Change (use F4) the Application Servers field to the value http_server.
» Leave the Filesystems field blank, as the default is All
» Using the F3 key, traverse back to the Initialization and Standard
Configuration smit screen. Select Verify and Synchronize HACMP
Configuration
__ 12. While the synchronizing takes place, monitor the HACMP logs until you see the
message start server http_server. Check to see that the Apache server started ok.
» ps -ef | grep http should show a number of httpd daemons running.
__ 13. From the client, start a new window in Netscape and connect to the URL
http://appA#-svc. The Web screen Welcome to the IBM HTTP Server window should
pop up.
__ 14. Perform a failover test by halting the Halifax# node in your favorite manner (for
example, “halt -q” or “echo bye > /dev/kmem”).
__ 15. Wait for takeover to complete and verify what happens to the Web server. Use the
page reload button on your Web browser to see if the Web server is really there.
__ 16. Bring up the Halifax# node again and start HACMP.
__ 17. What has happened to the Resource Group, and why?
» /usr/es/sbin/cluster/utilities/clRGinfo
» cllsres -g appA_group
» ps -ef | grep http
» lsvg -o
»
» df -k
» netstat -i

END OF LAB


Optional Exercises


For the Web-enabled Candidates
__ 1. Change the Web server pages on the shared disk to prove the location of the data
elements.

END OF LAB


Exercise 9. Mutual Takeover


(with Hints)

What This Exercise is About


This lab exercise expands the capabilities of the cluster. The intent is
to completely add the resource group and all of its components while
the cluster is running.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Use the C-SPOC functionality and DARE capabilities of HACMP to
make changes to the cluster while it is running
• Add a new volume group while the cluster is running
• Add all of the components of a shared file system while the system
is running
• Add a resource group to the cluster and activate it while the cluster
is running
• Test a mutual takeover configuration

Introduction
In the scenario there are two resource groups to be made highly
available. The addition of the second resource group is done with the
C-SPOC commands with the cluster running.


Exercise Instructions with Hints


Preface
• Add the second resource group (appB_group) and its components according to the scenario.
This will require a second filesystem.
• All hints are marked by a » sign.

Part 1: Add a Second Resource Group and Filesystem to the Cluster


__ 1. Ensure HACMP is running on both nodes and that the HTTP application is running
on halifax#
__ 2. Using the lspv command on BOTH nodes, verify that there is a shared disk hdiskX
available with the same PVID. If so skip to step 8.
» lspv
__ 3. On the halifax# node make sure that the hdiskX has no PVID
» lspv
» chdev -a pv=clear -l hdiskX
__ 4. Create a new PVID for hdiskX
» chdev -a pv=yes -l hdiskX
__ 5. On the toronto# node delete the hdisk.
» rmdev -dl hdiskX
__ 6. Add the disk back in.
» cfgmgr
__ 7. Verify the hdisk number and PVID agree between the two nodes.
» lspv on both nodes
__ 8. On the administrative node (halifax#) create a shared volume group called
shared_vg_b using C-SPOC.
» Smitty hacmp
» Select Initialization and Standard Configuration
» Select Configure Resources to make Highly Available.
» Select Configure Concurrent Volume Groups and Logical Volumes.
» Select Concurrent Volume Groups
» Select Create a Concurrent Volume Group
» Select ALL (both) of the Node Names that share the Volume Group
» Select the PVID that you identified in step 2.
» Fill out the volume group menu:
-Name = shared_vg_b
-Using F4, select the single physical volume you identified above


-Check the Physical Partition SIZE and major number (C-SPOC
chooses a valid major number); set Enhanced Concurrent Mode to true
» Create the volume group. (on the development system there were some “ok”
error messages after the successful “has been imported” message)
__ 9. Verify the Volume Group exists on both nodes.
» lspv
» lsvg
Now that the volume group is created, it must be discovered, a resource group must be
created, and finally the volume group must be added to the resource group before any
further C-SPOC utilities will access it.
__ 10. Discover the volume group using Extended Configuration in smitty hacmp.
__ 11. Create a resource group called appB_group with the toronto# node as the highest
priority and halifax# node as the next priority.
» smitty hacmp
» Select Initialization and Standard Configuration
» Select Configure HACMP Resource Groups
» Select Add a Resource Group
» Enter the resource group name appB_group, from the planning worksheets.
The participating node names must also be entered -- enter toronto# first
Take the defaults for the policies.
__ 12. Add the volume group to the resource group
» Return (F3) to the menu Configure HACMP Resource Groups then
» Select Change/Show Resources for a Resource Group (standard)
» Select appB_group
» Enter the volume group name using F4.
__ 13. Synchronize the Cluster.
» smitty hacmp
» Select Initialization and Standard Configuration
» Select Verify and Synchronize HACMP Configuration
__ 14. Once synchronized, the Volume Group is varied online, on the owning node
(toronto#). Wait for this to happen. Then on your administrative node halifax# use
C-SPOC to add a jfs log shared logical volume to the shared_vg_b. The name
should be shared_jfslog_b, the LV type should be jfslog, and use 1 PP.
» smitty hacmp
» Select Initialization and Standard Configuration
» Select Configure Resources to make Highly Available
» Select Configure Volume Groups, Logical Volumes and Filesystems
» Select Shared Logical Volumes
» Select Add a Shared Logical Volume
» From the list provided, choose the entry for appB_group shared_vg_b


» From the list provided, choose Auto-select


» Set LP=1, NAME = shared_jfslog_b, LV TYPE = jfslog
__ 15. Format the jfslog so that it can be used by the filesystem that is created in the next
few steps. If the log is not formatted, it is not used.
» On toronto#, execute logform /dev/shared_jfslog_b (answer yes to the
destroy data question)
__ 16. Back on halifax#, add a second shared logical volume with number of LOGICAL
PARTITIONS=10, NAME= shared_jfslv_b, TYPE=jfs.
» see the hints in step 14 for details
__ 17. Add a shared file system on the previously created Logical volume called
/shared_fs_b. Using the F3 key traverse back to Configure Volume Groups, Logical
Volumes and Filesystems. Select Shared File Systems
» Return (F3) to Configure Volume Groups, Logical Volumes and Filesystems
» Select Shared File Systems
» Select Journaled File Systems
» Select Add a Journaled File System on a Previously Defined Logical Volume
» Select Add a Standard Journaled File System
» Select the Logical Volume created in the previous step, press Enter. Fill in the
File System Mount Point (/shared_fs_b), hit enter. This information should be
in the cluster planning worksheets.
__ 18. The filesystem should be available on node toronto# in a few minutes.
The following was observed during additional testing and may or may not be
repeatable: a message on the smit panel saying that shared_fs_b is not a known file
system, and the failed response was posted. However, when I looked at /etc/filesystems
it was there and a manual mount from the Toronto node worked. I was then able to
move the resource group from one node to another and back using the system
management (C-SPOC) menu.
» tail -f /tmp/hacmp.out or clstat and wait for the cluster to become stable.

Part 2: Create the application and service label resources


__ 19. Log in to halifax# as root
__ 20. Create the application start script:
echo 'hostname>>/shared_fs_b/appB.log' >/tmp/appB_start
echo 'date +" starting:%r" >> /shared_fs_b/appB.log'>>/tmp/appB_start
__ 21. Create the application stop script:
echo 'hostname>>/shared_fs_b/appB.log' >/tmp/appB_stop
echo 'date +" stopping:%r" >> /shared_fs_b/appB.log'>>/tmp/appB_stop
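If the echo commands above work, /tmp/appB_start should contain the two lines shown below (appB_stop is identical except that it says stopping); this is simply what the quoting is expected to produce, shown here for reference:
    hostname>>/shared_fs_b/appB.log
    date +" starting:%r" >> /shared_fs_b/appB.log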
__ 22. ftp the scripts to the other node
» ftp toronto#


» put /tmp/appB_start /tmp/appB_start


» put /tmp/appB_stop /tmp/appB_stop
__ 23. Make the scripts executable on both nodes:
» chmod +x /tmp/appB_start
» chmod +x /tmp/appB_stop
» REPEAT on the other node
__ 24. On halifax#, create the Service IP label resource.
» smitty hacmp --> Initialization and Standard Configuration
» Select Configure Resources to make Highly Available
» Select Configure Service IP Labels/Addresses
» Select Add a Service IP Label/Address
» Enter the Service IP Label appB#-svc (use F4) and the network (use F4)
__ 25. Create the application server resource
» Return (F3) to the menu Configure Resources to make Highly Available
» Select Configure Application Servers
» Select Add an Application Server
» Enter appB for the server name and the full path names of the start and stop
scripts.
__ 26. Add the resources to the resource group
» Return (F3) to the menu Initialization and Standard Configuration
» Select Configure HACMP Resource Groups
» Select Change/Show Resources for a Resource Group (standard)
» Select appB_group
» Enter the service label and application server name using F4.
__ 27. Synchronize the Cluster. Using the F3 key, traverse back to Initialization and
Standard Configuration.
» select Verify and Synchronize HACMP configuration
__ 28. Test that the appB#-svc service IP label is available on toronto#.
» netstat -i
__ 29. Test the new resource group on the toronto# node for network adapter swap/failure
and node failure.
» Fail the toronto# node in your favorite manner.
__ 30. OPTIONAL -- If you have an extra disk execute the mirrorvg and chfs commands to
test splitting off a copy as presented in the unit 3 lecture. Note that this step could be
done using C-SPOC to create the mirror. Also note that one purpose of this step is to
show how to undo the backup copy.
» stop cluster on toronto#
» smit vg (set characteristics, add disk to vg)


» smit vg (mirrorvg)
» ensure /shared_fs_b is mounted
» chfs to split off a backup copy (see the man page example and the sketch after this step)
» lsvg -l shared_vg_b (see new logical volume/filesystem)
» lslv -p hdiskX | grep USED (on both disks -- look for stale)
» umount the new file system
» rmfs
» on the disk with stale partitions, redo the lslv command (see stale removed)
» start cluster on toronto#.
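A minimal sketch of the split-copy sequence, assuming the filesystem is mirrored with two copies and /backup_fs_b is a free mount point (the splitcopy attribute is described on the chfs man page; verify the options on your AIX level before relying on them):
    chfs -a splitcopy=/backup_fs_b -a copy=2 /shared_fs_b   # split copy 2 off as a read-only snapshot
    lsvg -l shared_vg_b                                     # a new backup logical volume/filesystem appears
    umount /backup_fs_b                                     # when finished with the snapshot
    rmfs /backup_fs_b                                       # remove the snapshot filesystem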

END OF LAB


Exercise Review/Wrapup


The first part of the exercise looked at using C-SPOC to add a new resource group to the
cluster.


Exercise 10. HACMP Extended Features


(with Hints)

What This Exercise Is About


This lab exercise expands the capabilities provided in the extended
features option.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Create a Cluster ‘Snapshot’
• Use the C-SPOC functions and DARE capabilities of HACMP to
make changes to the cluster while it is running
• Add additional Resource Groups while the cluster is running
• Add additional Service aliases
• Modify Resource Group Behavior Policies
• Configure Settling and Fallback timers

Introduction
To enhance the scenario create two additional resource groups to be
made highly available. The addition of these resource groups and their
behavior modification is done with the C-SPOC commands with the
cluster running.


Exercise Instructions with Hints


Preface
• Add additional service aliases, and create an additional custom resource group for each
node.
• Modify the default start and fallback policies of the new resource groups to examine the
resource behavior during cluster startup and reintegration event processing.
• Create a Cluster Snapshot before making the changes.
• All hints are marked by a » sign.

Part 1: Create a Cluster Snapshot


__ 1. In the last exercise we reached the goal of the class, so let’s save our environment
before continuing by creating a Cluster Snapshot
»smitty hacmp
»Select, Extended Configuration
»Select, Snapshot Configuration
»Select, Add Cluster Snapshot
»Enter a snapshot name and description. Example: name = mutual_takeover,
description = exercise 10. It is helpful to use a meaningful name; when
you refer back to these it can help identify why/when the snapshot was taken.
This could be helpful should a cluster restore become necessary.
Notice:
There are two snapshot files <snapshot>.odm and <snapshot>.info
The directory for the snapshot is /usr/es/sbin/cluster/snapshots
The clsnapshotinfo command was run on both nodes (output in the “.info” file)
__ 2. Read the mutual_takeover.info file. Go on to the next step when you are ready.

Part 2: Add an additional Service alias and Resource Group to each Cluster Node

__ 3. Log in to the halifax# node as root.
__ 4. Add two additional service labels to the /etc/hosts file
» 192.168.#3.21 appC#-svc
» 192.168.#3.22 appD#-svc
» ftp /etc/hosts file to your other node.
__ 5. Discover these new addresses in HACMP using the Extended Configuration menu
from smit hacmp.
__ 6. Configure the two additional HACMP Service IP Labels/Addresses as resources in
HACMP.
» smitty HACMP
» Select Extended Configuration


» Select Extended Resource Configuration


» Select HACMP Extended Resources Configuration
» Select Configure HACMP Service IP Labels/Addresses
» Select Add a Service IP Label/Address
» Select Configurable on Multiple Nodes
» Select the network name.
» Enter the Service IP Label
Note: An Alterable Hardware Address may not be selected. Remember that
HWAT is not supported for networks using IPAT via IP Aliasing.
We now add two resource groups appC_group and appD_group with different startup
policies. The first (appC_group) behaves like the old inactive takeover, and the second
(appD_group) behaves like the old rotating.
__ 7. Add an additional Resource Group called appC_group with a home node of halifax#
and a startup policy of Online on First Available node.
» Return (F3) to the menu Extended Resource Configuration
» Select HACMP Extended Resource Group Configuration
» Select Add a Resource Group
» Enter the Resource Group Name, nodes (halifax# first), and change the start
policy to be Online on First Available Node
__ 8. Add another Resource Group called appD_group with a home node of toronto# and
a startup policy of Online Using Distribution Policy.
» Return (F3) to the menu HACMP Extended Resource Group Configuration
» Select Add a Resource Group
» Enter the Resource Group Name appD_group, the nodes with toronto# first,
and the startup policy of Online Using Distribution Policy.
__ 9. Now add the Service IP Labels created in step 1 to the Resource Groups just
created. Using the F3 key, traverse back to the Extended HACMP Resource Group
Configuration and select Change/Show Resources and Attributes for a Resource
Group
» Return (F3) to the menu HACMP Extended Resource Group Configuration
» Select, Change/Show Resources and Attributes for a Resource Group
» Select the correct Resource Group from the list (appC_group).
» Add the correct Service IP Label and press Enter (appC#-svc).
» Repeat this step adding the appD#-svc Service IP Label to the appD_group
Resource Group.
__ 10. In order to mimic the old rotating we need to change the distribution policy to
network. This is done using the smit extended runtime menu Configure Distribution
Policy for Resource Groups. The cluster must be stopped on both nodes first.
» Exit smit
» smitty clstop
» Select both nodes


» Exit smit
» Verify cluster stopped (lssrc -g cluster on both nodes)
» smit hacmp
» Select Extended Configuration
» Select Extended Resource Configuration
» Select Configure Resource Group Run-Time Policies
» Select Configure Distribution Policy for Resource Groups
» Change the value to network (notice the deprecated message).
__ 11. Synchronize the cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen.
» Select, Extended Verification and Synchronization
» Notice the menu option Automatically correct errors found during verification.
You only see this option when the cluster is down.

Part 3: Test Resource Group Behavior


__ 12. Start HACMP only on your toronto# node.
» smitty clstart -- or use C-SPOC menu ‘Manage HACMP Services’
» Select toronto# only.
__ 13. Once the node is stable check the status of the Resource Groups. Does this look
normal? If not, what is wrong -- Should appC_group be online on toronto#?
» /usr/es/sbin/cluster/utilities/clRGinfo
» /usr/es/sbin/cluster/utilities/clRGinfo -v appC_group
__ 14. Start HACMP on the halifax# (left) node.
» Refer to the hints in step 12
__ 15. Once the node is stable check the status of the Resource Groups. Does everything
look correct now (check the appC_group)? If so, what changed? Why?
__ 16. OPTIONAL: To understand better the distribution policy, stop the nodes and bring up
halifax# first and see what happens to the appD_group. Then stop halifax# with
takeover. Then restart halifax# and see what happens to the appD_group.

Let’s have a look at configuring a settling timer. It modifies startup behavior for resource
groups that use the Online On First Available Node startup policy, so that there are not
two online operations (online on the secondary followed by a fallback) if you bring up the
secondary node first.

Part 4: Add a Settling Timer


__ 17. Ensure that you are on your administration node (halifax#) and configure a Settling
Timer (can only be used if startup policy is ‘Online On First Available Node’ ).
» smitty hacmp
» Select, Extended Configuration


» Select, Extended Resource Configuration


» Select, Configure Resource Group Run-Time Policies
» Select, Configure Settling Time for Resource Groups
» Set a value (seconds) for the settling time. For this lab use 360.
__ 18. Synchronize the Cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen. Notice in the smit output the messages about the settling
timer value.
» Select Extended Verification and Synchronization

Part 5: Testing Cluster Behavior using a Settling Timer


__ 19. On your administrative node (halifax#), stop Cluster processing on both nodes.
» smitty clstop -- or use C-SPOC menu ‘Manage HACMP Services’ (choose
both nodes)
__ 20. Wait 2 minutes and then start HACMP -- only on the toronto# node.
» smitty clstart -- or use C-SPOC menu ‘Manage HACMP Services’ (choose
only toronto# )
__ 21. Wait until you can see that the appB_group is online then verify that appC_group is
still offline on toronto# using the clRGinfo command (note that the clRGinfo
command can be run from either node as long as HACMP is started on any one of
the nodes)
» /usr/es/sbin/cluster/utilities/clRGinfo appB_group -- wait until you see online
» clRGinfo appC_group -- should be offline
__ 22. Start HACMP on the halifax# node.
__ 23. Verify that the appC_group comes online on halifax# (without first being online on
toronto#). As you can see the purpose of the settling timer is to prevent the
resources from being immediately acquired by the first active node.
» /usr/es/sbin/cluster/utilities/clRGinfo (compare appA_group and appC_group
-- both should come online eventually)
__ 24. OPTIONAL -- repeat this part but wait for settling time to expire after starting the
cluster on toronto#. Verify that appC_group comes online on toronto#. Stop the
cluster manager on both nodes, wait 2 minutes, start the cluster manager on both
nodes.

Part 6: Configure Delayed Fallback Timer


__ 25. Cluster should be started on both nodes and appC_group should be online on
halifax#.
» lssrc -g cluster
» clRGinfo


__ 26. On your administrative node (halifax#), create a delayed fallback timer policy for 30
minutes from now (instructor may modify this time)
» Write down the current time ______________
» Make sure both nodes are using the same time (setclock toronto#-if1).
» smitty hacmp
» Extended Configuration... Extended Resource Configuration
» --> Configure Resource Group Run-Time Policies
» --> --> Configure Delayed Fallback Timer Policies
» --> --> --> Add a Delayed Fallback Timer Policy
» use the following values:
daily, name=my_delayfbt, hour/min=30 min from current time
__ 27. Add the fallback timer policy to the resource group appC_group
» smitty hacmp
» Extended Configuration... Extended Resource Configuration
» ... HACMP Extended Resource Group Configuration
» ... ... Change/Show Resources and Attributes for a Resource Group
» Select appC_group
» Fill in the field below as indicated:
» Fallback Timer Policy [my_delayfbt] (use F4)
__ 28. Synchronize

Part 7: Testing Cluster Behavior using a Delayed Fallback Timer


__ 29. Verify that appC_group is online on halifax# using the clRGinfo command
» clRGinfo appC_group
__ 30. Stop the cluster manager only on halifax# with takeover
» smitty clstop or use the C-SPOC menu -->‘Manage HACMP Services’
__ 31. Verify that appC_group is now online on toronto# (clRGinfo).
__ 32. Wait 2 minutes (required before a restart)
__ 33. Start the cluster manager on halifax#
» smitty clstart -- or use the C-SPOC menu -->‘Manage HACMP Services’
__ 34. Monitor the cluster from toronto#. In /tmp/hacmp.out at the event summary for
check_for_site_up_complete halifax#, there is now a message stating the fallback
time. Make sure appC_group is still on toronto# before the fallback then tail -f the
hacmp.out file and wait for the fallback to occur.
» vi /tmp/hacmp.out -- look for the fallback time message-- should be very near
the bottom.
» - clRGinfo (verify appC_group is online on toronto# -- before the fallback
time)
» - tail -f /tmp/hacmp.out and wait


__ 35. At the time set for the Delayed Fallback Timer, appC_group should move back to
halifax# (you should see activity from tail command)
» Execute clRGinfo to verify that appC_group is online on halifax#.
__ 36. On your administrative node (halifax#), remove the name of the Delayed Fallback
Timer (my_delayfbt) from the resource group appC_group (you can keep the policy
definition if you want).
» smitty hacmp
» Extended Configuration... Extended Resource Configuration
» ... HACMP Extended Resource Group Configuration
» ... ... Change/Show Resources and Attributes for a Resource Group
__ 37. Reset the Settling time to 0 (from the menu ‘Configure Resource Group Run-Time
Policies’)
» smitty hacmp
» Select, Extended Configuration
» Select, Extended Resource Configuration
» Select, Configure Resource Group Run-Time Policies
__ 38. Synchronize.

END OF LAB


Exercise Review/Wrapup
This exercise looked at creating a cluster snapshot, adding service aliases and resource
groups with different startup policies, and configuring settling and delayed fallback timers.


Exercise 11. IPAT via Replacement and HWAT


(with Hints)

What This Exercise Is About


This lab explores the options of removing a cluster and creating an
IPAT via replacement environment.
This lab also examines gratuitous ARP, and the use of Hardware
Address Takeover (HWAT) functionality for environments where
gratuitous ARP may not be the best solution.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Describe how to set up IPAT via replacement
• Describe the behavior of ARP updates/refreshes using gratuitous ARP
or Hardware Address Takeover where required.
• Describe how to set up HWAT


Exercise Instructions with Hints


Preface
• The first part shows how to remove a cluster
• The second part of this lab looks at setting up IPAT via replacement and using the
standard configuration path to build a cluster.
• The third part of this lab looks at Gratuitous arp.
• The fourth part of this exercise adds hardware address concepts. HWAT or MAC
address takeover would be used in situations where gratuitous arp may not be
supported, as in older hardware, or non-standard operating systems.
• All hints are marked by a » sign.

Part 1: Remove and add a new Cluster


__ 1. On your administration node (halifax#), stop both the cluster nodes.
» smitty clstop -- choose both nodes
__ 2. Snapshot
» smitty hacmp
» Select, Extended Configuration
» Select, Snapshot Configuration
» Select, Add Cluster Snapshot
» Enter snapshot name and description. Example: name= exercise 11,
description = added resource groups.
__ 3. Remove the cluster
» smitty hacmp
» ... Select Extended Configuration
» ... Select Extended Topology Configuration
» ... Select Configure an HACMP Cluster
» ... Select Remove an HACMP Cluster
» REPEAT on the other node
» echo "" >/usr/es/sbin/cluster/etc/rhosts (double quotes with no space)
» REPEAT on the other node
__ 4. Add a replacement service address to your /etc/hosts file (it must be on the same
subnet as one of the -if interfaces).
» vi /etc/hosts
» add 192.168.#1.10 appR#-repl (# is your team number - remember?)
» document this address in your component worksheets (exercise 2).
» REPEAT on (or send to) your other node
__ 5. Configure a new cluster on halifax#. Go to the HACMP for AIX smit panel and select
Initialization and Standard Configuration.
» Initialization and Standard Configuration


» Select, Add Nodes to an HACMP Cluster


» Enter a cluster name (for example, team#).
» Using the F4 List option, select the appropriate communications paths for
BOTH the nodes (that is, the two “-if1” interfaces).
__ 6. Use Extended Configuration to set the network to turn off IPAT via aliases.
» From the main menu, select Extended Configuration
» Select, Extended Topology Configuration
» Select, Configure HACMP Networks
» Select, Change/Show a Network in the HACMP Cluster
» Select the IP network, press Enter
» Change the option Enable IP Address Takeover via IP Aliases to No, hit
enter.
__ 7. Use Extended Configuration to configure a non-IP network by choosing the pair of
devices that will make up the network. (Using the F3 key, traverse back to the
Extended Topology Configuration smit screen.)
» Select, Configure HACMP Communications Interfaces/Devices
» Select, Add Communication Interfaces/Devices
» From the list, select, Add Discovered Communication Interface and Devices
» From the list select, Communication Devices
» Select the appropriate pair of devices, hdisk/hdisk (or tty/tty or tmssa/tmssa)
__ 8. Redo the Persistent Addresses from your planning worksheet. (Using the F3 key,
traverse back to the Extended Topology Configuration smit screen).
» Configure HACMP Persistent Node IP Label/Addresses
» Select, Add a Persistent Node IP Label/Address
» Select the appropriate node from the list, press Enter
» The F4 List option is available to select both the Network Name and
Persistent IP Label/Address.
» REPEAT for other node
__ 9. Create the Service IP Label resource. Using the F3 key, traverse back to the
‘Extended Configuration’ smit screen. Select ‘Extended Resource Configuration’.
» Select, HACMP Extended Resources Configuration
» Select, Configure HACMP Service IP Labels/Addresses
» Select, Add a Service IP Label/Address
» Select Configurable on Multiple Nodes, press Enter.
» Select the Network Name.
» Finally, select the Service IP Label (appR#-repl) to be used.
__ 10. Create a resource group. Using the F3 key, traverse back to the Extended
Resource Configuration smit screen.
» Select HACMP Extended Resource Group Configuration
» Select Add a Resource Group


» Enter the Resource Group name (appR_group) and set the participating
nodes (use F4 to choose the nodes). Remember the priority order: the first
node listed is considered the ‘home’ or owner node. Use the default policies.
__ 11. Add Resources to the Resource Group. Using the F3 key, traverse back to the
HACMP extended Resource Group Configuration smit screen.
» Select, Change/Show Resources and Attributes for a Resource Group
» In the appropriate fields, use F4 to choose a Service IP Label and a Volume
Group. For the purposes of this lab, application servers are not required. You
may add them if you wish.
__ 12. Synchronize the cluster. Using the F3 key, traverse back to the Extended
Configuration smit screen. Select Extended Verification and Synchronization
» Review the results for any errors.
__ 13. Start HACMP on the toronto# node.
» smitty clstart (choose only toronto# and start clinfo)
» monitor the /tmp/hacmp.out during startup.
__ 14. Verify the appR_group did not come online because of the startup policy.
» /usr/es/sbin/cluster/utilities/clRGinfo
__ 15. Start HACMP on the halifax# node.
» smitty clstart (choose only halifax# and start clinfo)
__ 16. Verify that the appR_group is online on halifax#
» /usr/es/sbin/cluster/utilities/clRGinfo

Part 2: Gratuitous ARP


From the AIX 5L Version 5.1 commands reference for ifconfig. “Gratuitous ARP is
supported for ethernet, token-ring, and FDDI interfaces. This means when an IP address is
assigned, the host sends an ARP request for its own address (the new address) to inform
other machines of its address so that they can update their ARP entry immediately. It also
lets hosts detect duplicate IP addresses.”
This will make it a little difficult to create a failure with AIX clients but the tests are valid.
__ 17. Log on the client machine. Verify that clinfo has not been started.
» ps -ef | grep -i clinfo | grep -v grep
__ 18. Use the ping command to test the service IP Label of appR_group (on halifax#).
» ping -c 1 appR#-repl
__ 19. Check the contents of the arp cache.
» arp -a
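The output format is roughly as follows (label, IP address, and MAC address are illustrative only, assuming team 1); what matters for the following steps is whether the MAC shown for the service address changes:
    appR1-repl (192.168.11.10) at 0:4:ac:62:72:49 [ethernet]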


__ 20. On the halifax# node generate a swap adapter event. Be aware that you need to do
this fairly quickly before the arp cache times out.
» ifconfig enX down (the interface on which the service label is configured).
__ 21. Check the contents of the arp cache on the client, compare the results with the
previous iteration of the command.
» arp -a
__ 22. The hardware address should have updated in the arp cache on the client without
any intervention.
Note: If the entry is not in the arp cache when the Gratuitous arp is broadcast it is
ignored.

Part 3: Hardware Address Takeover


In this scenario the router in Regina is a bit of an antique and does not support gratuitous
ARP. It was highlighted as a problem since the ARP cache retention is 15 minutes. This
problem was discovered during the preliminary cluster testing.
__ 23. On the halifax# node log in as root.
__ 24. Identify the interface that is reconfigured with the appR#-repl service address and
write the MAC address here._____________________________________
» netstat -i
» The address should have 12 hex digits and no periods. The netstat command
omits leading zeros within each byte; you must put them back in.
__ 25. Identify the alternate MAC address. To specify an alternate hardware address for an
Ethernet interface, change the first byte xx to 4x (see the worked example after this step).
_________________________________________
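A worked example with a made-up address: if netstat -i shows 0.4.ac.62.72.49, pad each byte back to two digits to get 0004ac627249, then change the first byte to 4x, giving 4004ac627249 as the alternate hardware address.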
__ 26. Change the appR#-repl service IP label to add an alternate hardware address in the
field.
» smitty hacmp
» Select Extended Configuration
» Select Extended Resource Configuration
» Select HACMP Extended Resource Configuration
» Select Configure Service IP Labels/Addresses
» Select Change/Show a Service IP Label/Address
» Select appR#-repl.
» Create the Alternate Hardware Address using the answer to the previous
step and the description in the statement of this step or see the example from
the configuration Lecture.
__ 27. Synchronize the cluster. Using the F3 key, traverse back to the ‘Extended
Configuration’ smit screen. Notice the following message in the smit log: cldare:
Detected changes to service IP label appR1-repl. Please note that changing


parameters of service IP label via a DARE may result in releasing resource group
appR_group.
» Select “Extended Verification and Synchronization’
» clRGinfo (shows appR_group offline).
__ 28. Bring the appR_group online using the C-SPOC menu. If, on the client, there is no
arp cache entry for the appR#-repl service address, then ping the appR#-repl service
address.
» Select C-SPOC from the main smit hacmp menu
» Select HACMP Resource Group and Application Management
» Select Bring a Resource Group Online
» Select ‘appR_group offline halifax#’
» BE CAREFUL -- select Restore_Node_Priority_Order
» Accept the next menu
__ 29. Verify that the alternate hardware address is now configured on the interface for the
appR#-repl service address.
» netstat -i
__ 30. Fail the halifax# node in your favorite manner.
__ 31. Check that the appR#-repl service address is now on the toronto# node and observe
the hardware address associated with that service address
» netstat -i

Part 4: Re-create a Cluster from a Snapshot


__ 32. Ensure the cluster manager is stopped on both cluster nodes.
» smitty clstop
__ 33. Apply the snapshot that contains all the cluster definitions you made in exercise 10.
» smitty hacmp
» Select Extended Configuration
» Select Snapshot Configuration
» Select Apply a Cluster Snapshot and choose the snapshot saved in exercise 10
__ 34. Start HACMP
» smitty clstart (choose both nodes)
__ 35. For each resource group, verify to yourself that you understand how the online node
was chosen.
» clRGinfo
__ 36. Fail the halifax# node in your favorite manner.
» smitty clstop, select takeover.
__ 37. Restart the failed node and observe the re-integration. Verify that you understand
how the online node was chosen for each of the resource groups


END OF LAB


Exercise Review/Wrapup
This exercise looked at removing a cluster and re-creating it with IPAT via replacement,
and at observing gratuitous ARP behavior. It also covered setting up and testing Hardware
Address Takeover and re-creating a cluster from a snapshot.


Exercise 12. Network File System (NFS)


(with Hints)

What This Exercise Is About


This lab covers a couple of different methods for configuring network
filesystems with HACMP. It also demonstrates how to set various NFS
options in HACMP exported filesystems.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Have HACMP export a filesystem as part of a resource group
• Have HACMP import a filesystem as part of a resource group
• Modify the NFS export options for the exported filesystem
• Add an NFS cross-mount
• Modify the NFS cross-mount for performance and flexibility


Exercise Instructions with Hints


Preface
• All exercises of this chapter depend on the availability of specific equipment in your
classroom.
• All hints are marked by a » sign.

Part 1: NFS exports in a resource group


__ 1. Assumptions: You need to start this exercise off with HACMP up on both nodes, and
identify two resource groups -- one whose home node is halifax# (that is,
appA_group) and the other whose home node is toronto# (that is, appB_group).
Each group should have a shared filesystem defined to it (that is, shared_fs_a and
shared_fs_b). On each node, verify that nfs is running (lssrc -g nfs) after HACMP is
started.
__ 2. Modify the resource group appA_group to add /shared_fs_a as a
filesystem/directory to NFS export and set to true the option ‘Filesystems mounted
before IP configured’
» smitty hacmp
» Select Extended Configuration
» Select Extended Resource Configuration
» Select HACMP Extended Resource Group Configuration
» Select Change/Show Resources and Attributes for a Resource Group
» Select the resource group from the list, press Enter
» Modify the options Filesystems mounted before IP configured and
Filesystems/Directories to Export.
__ 3. Modify the resource group appB_group to add /shared_fs_b as a
filesystem/directory to NFS export. Using the F3 key, traverse back to the ’HACMP
Extended Resource Group Configuration’.
» See the previous step.
__ 4. Synchronize the resources. Using the F3 key, traverse back to ‘Extended
Configuration’ smit screen.
» Select Extended Verification and Synchronization
__ 5. When the reconfiguration of resources has completed on each node, check the
directories are exported through NFS.
» lsnfsexp -- you see only what is exported from THIS node
» cat /etc/xtab -- should look like the output from lsnfsexp. Note that /etc/exports is
not used by HACMP -- see what happens when you try to cat /etc/exports
__ 6. Log in on the client as root.
__ 7. Create the directories /halifax and /toronto.


» mkdir /halifax


» mkdir /toronto
__ 8. On the client, using the service address for the appA_group, mount the nfs exported
directory /shared_fs_a on the local directory /halifax.
» smitty mknfsmnt
__ 9. On the client, using the service address for the appB_group, mount the nfs exported
directory /shared_fs_b on the local directory /toronto.
» smitty mknfsmnt
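If you prefer the command line to smitty mknfsmnt, a rough equivalent for the two mounts in steps 8 and 9 (assuming the service labels resolve from the client) is:
    mount appA#-svc:/shared_fs_a /halifax
    mount appB#-svc:/shared_fs_b /toronto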
__ 10. Verify the nfs directories are mounted where intended.
» mount
» df -k
__ 11. Back on a cluster node -- fail one of the nodes in your favorite manner. Verify that
the nfs directories are still exported on the remaining node and mounted on the
client system.
» lsnfsexp on the remaining cluster node
» df on the client system
__ 12. Try to create a file in the /halifax directory. It should not work. Let’s see how this can
be addressed.

Part 2: Modifying the NFS Export Options


__ 13. The output of the lsnfsexp command on the nodes shows that only the cluster
nodes are granted root access. To change this we create an override file. Its name is
/usr/es/sbin/cluster/etc/exports. HACMP uses this file to update the /etc/xtab file
used by NFS.
__ 14. On the running node, use the lsnfsexp command to copy the current exports to
the HACMP file and then modify the HACMP file using the following commands:
- lsnfsexp > /usr/es/sbin/cluster/etc/exports
- Edit /usr/es/sbin/cluster/etc/exports and add the client to the list of hosts
- Save the file
- ftp the file to the other node
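As a sketch, after the edit an entry in /usr/es/sbin/cluster/etc/exports might look like the line below (the syntax is the same as /etc/exports; the host names are placeholders for your two cluster nodes and the client):
    /shared_fs_a -root=halifax1:toronto1:regina1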
__ 15. Restart the failed node.
» The exports file should be used when NFS remounts the directory.
__ 16. From the client, try to create a file in the nfs directory exported by the node you
have just restarted.
» Use the touch command


Part 3: NFS Cross-mount within the Cluster


__ 17. On both nodes create a directory /hanfs.
__ 18. Edit the resource group appA_group and add the following to the option
‘Filesystems/Directories to NFS mount’: /hanfs;/shared_fs_a. This will mount the
/shared_fs_a nfs filesystem on the mount point /hanfs on all nodes in that
resource group (see the sketch after this step).
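Once the change is synchronized, df on each node should show an extra NFS line roughly like the following (service label shown for team 1 as an illustration; size columns omitted):
    appA1-svc:/shared_fs_a    ...    /hanfs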
__ 19. Synchronize the resources and verify this is true on both nodes.
__ 20. Fail the toronto# node in your favorite manner.
__ 21. Confirm that halifax# node has all the resource groups, and that the NFS mounts are
OK.
» /usr/es/sbin/cluster/utilities/clRGinfo
» df -k

END OF LAB


Exercise Review/Wrapup


This exercise looked at various methods of implementing NFS in an HACMP cluster.



Exercise 13. Error Notification


(with Hints)

What This Exercise Is About


This lab covers adding error notifications to AIX through the
HACMP smit screens.

What You Should Be Able to Do


At the end of the lab, you should be able to:
• Add an error notification for the loss of quorum on a volume group
• Emulate the error condition and test the error notification method
• Optionally add another error notification based on filesystems full


Exercise Instructions with Hints


Preface
• This exercise looks at Automatic Error Notification. Before you configure Automatic
Error Notification, you must have a valid HACMP configuration. Using the SMIT options,
you can use the following methods:
- Configure Automatic Error Notification
- List Automatic Error Notification
- Remove Automatic Error Notification.
• Remember that Error Notification is a function of AIX - HACMP just gives you the smit
screens that make it easier to enter error notification methods.
• All hints are marked by a » sign.

Setting Up the automatic error notifications on the halifax# node.


__ 1. Log in as root on the halifax# node.
__ 2. Stop the Cluster. The cluster must be down to configure Automatic Error Notification.
__ 3. Configure Automatic Error Notification.
» smitty hacmp
» Select Problem Determination Tools
» Select HACMP Error Notification
» Select Automatic Error Notification
» Select Add Error Notify Methods for Cluster Resources. The error notification
methods are automatically configured on all relevant cluster nodes.
When you run automatic error notification, it assigns two error methods for all the error
types noted:
cl_failover is assigned if a disk or network interface card is determined to be a
single point of failure, and that failure would cause the cluster to fall over. If there
is a failure of one of these devices, this method logs the error in hacmp.out and
shuts the cluster node down. A graceful stop is attempted first; if this is
unsuccessful, cl_exit is called to shut down the node.
cl_logerror is assigned for any other error type. If there is a failure of a device
configured with this method, the error is logged in hacmp.out.
__ 4. List Error Notification Methods. Use the F3 key to traverse back to the ‘Configure
Automatic Error Notification’ smit screen.
__ 5. To see the entries in the AIX ODM, execute the command odmget errnotify | more. The
HACMP-generated stanzas will be at the bottom; a sample is shown below.
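
For reference, each HACMP-generated stanza in the odmget output looks roughly like the
following (the en_name, resource, and method path shown here are illustrative assumptions;
yours will reflect your own devices):

   errnotify:
           en_name = "cl_failover_hdisk2"
           en_class = "H"
           en_type = "UNKN"
           en_resource = "hdisk2"
           en_rclass = "disk"
           en_method = "/usr/es/sbin/cluster/diag/cl_failover"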

END OF LAB
Appendix A. Cluster Diagrams

Cluster Planning Diagram

AU54 lab teams: REPLACE # with your team number.   Team number = ______

Client (user community): regina#
   if1 address = 192.168.#1.3        service alias = 192.168.#3.30

Network = ________________   (netmask = ___.___.___.___)

Node halifax# (Home Node Name: halifax#)
   if1       halifax#-if1    192.168.#1.1    HW Address _____________
   if2       halifax#-if2    192.168.#2.1    HW Address _____________
   Persist   halifax#-per    192.168.#3.1
   rootvg (4.8 GB)           VG = ______________

Node toronto# (Home Node Name: toronto#)
   if1       toronto#-if1    192.168.#1.2    HW Address _____________
   if2       toronto#-if2    192.168.#2.2    HW Address _____________
   Persist   toronto#-per    192.168.#3.2
   rootvg (4.8 GB)           VG = ______________

Serial (non-IP) networks
   disk:   halifax# Label = halifax#_hdiskX_01    Device = /dev/hdiskX
           toronto# Label = toronto#_hdiskY_01    Device = /dev/hdiskY
   rs232:  halifax# Label = halifax#_tty0_01      Device = /dev/tty0
           toronto# Label = toronto#_tty0_01      Device = /dev/tty0

Resource Group appA_group (home node halifax#) contains:
   Startup Policy     = OHNO
   Fallover Policy    = FONP
   Fallback Policy    = FBNF
   Service IP Label   = appA#-svc   192.168.#3.10
   Application server = appA
   Volume Group       = shared_vg_a    hdisks = ______________    Major # = ______________
   JFS Log            = shared_jfslog_a
   Logical Volume     = shared_jfslv_a
   FS Mount Point     = /shared_fs_a

Resource Group appB_group (home node toronto#) contains:
   Startup Policy     = OHNO
   Fallover Policy    = FONP
   Fallback Policy    = FBHP
   Service IP Label   = appB#-svc   192.168.#3.20
   Application server = appB
   Volume Group       = shared_vg_b    hdisks = ______________    Major # = ______________
   JFS Log            = shared_jfslog_b
   Logical Volume     = shared_jfslv_b
   FS Mount Point     = /shared_fs_b
Cluster Planning Diagram

Client (user community): hostname ___________
   if1 address = _________________        service alias = ____________

Network = ________________   (netmask = ___.___.___.___)

Node 1: Home Node Name = ______________
   if1       IP Label _________    IP Address _________    Hardware Address _______________
   if2       IP Label _________    IP Address _________    Hardware Address _______________
   Persist   IP Label _________    IP Address _________
   rootvg (4.8 GB)                 VG = ______________

Node 2: Home Node Name = ______________
   if1       IP Label _________    IP Address _________    Hardware Address _______________
   if2       IP Label _________    IP Address _________    Hardware Address _______________
   Persist   IP Label _________    IP Address _________
   rootvg (4.8 GB)                 VG = ______________

Serial (non-IP) networks
   Network 1:   Node 1 Label = ______________    Device = ______________
                Node 2 Label = ______________    Device = ______________
   Network 2:   Node 1 Label = ______________    Device = ______________
                Node 2 Label = ______________    Device = ______________

Resource Group __________ (home node ______________) contains:
   Startup Policy     = ______________
   Fallover Policy    = ______________
   Fallback Policy    = ______________
   Service IP Label   = ______________
   Application server = ______________
   Volume Group       = ______________    hdisks = ______________    Major # = ______________
   JFS Log            = ______________
   Logical Volume     = ______________
   FS Mount Point     = ______________

Resource Group __________ (home node ______________) contains:
   Startup Policy     = ______________
   Fallover Policy    = ______________
   Fallback Policy    = ______________
   Service IP Label   = ______________
   Application server = ______________
   Volume Group       = ______________    hdisks = ______________    Major # = ______________
   JFS Log            = ______________
   Logical Volume     = ______________
   FS Mount Point     = ______________
Cluster Planning Diagram

Client (user community): hostname ___________
   if1 address = _________________        service alias = ____________

Network = ________________   (netmask = ___.___.___.___)

Node 1: Home Node Name = ______________
   if1       IP Label _________    IP Address _________    Hardware Address _______________
   if2       IP Label _________    IP Address _________    Hardware Address _______________
   Persist   IP Label _________    IP Address _________
   rootvg (4.8 GB)                 VG = ______________

Node 2: Home Node Name = ______________
   if1       IP Label _________    IP Address _________    Hardware Address _______________
   if2       IP Label _________    IP Address _________    Hardware Address _______________
   Persist   IP Label _________    IP Address _________
   rootvg (4.8 GB)                 VG = ______________

Serial (non-IP) networks
   Network 1:   Node 1 Label = ______________    Device = ______________
                Node 2 Label = ______________    Device = ______________
   Network 2:   Node 1 Label = ______________    Device = ______________
                Node 2 Label = ______________    Device = ______________

Resource Group __________ (home node ______________) contains:
   Startup Policy     = ______________
   Fallover Policy    = ______________
   Fallback Policy    = ______________
   Service IP Label   = ______________
   Application server = ______________
   Volume Group       = ______________    hdisks = ______________    Major # = ______________
   JFS Log            = ______________
   Logical Volume     = ______________
   FS Mount Point     = ______________

Resource Group __________ (home node ______________) contains:
   Startup Policy     = ______________
   Fallover Policy    = ______________
   Fallback Policy    = ______________
   Service IP Label   = ______________
   Application server = ______________
   Volume Group       = ______________    hdisks = ______________    Major # = ______________
   JFS Log            = ______________
   Logical Volume     = ______________
   FS Mount Point     = ______________