
We have configured a two-node Red Hat HA cluster on an IBM Power System.

Collect All Information for Implementation:

The following information is required before you configure and implement your Red Hat HA
cluster.

Cluster Node:

Network                                   Node    Host Name              IP Address
Data Communication / Public Network       Node1   cluster1.example.com   192.168.0.125
                                          Node2   cluster2.example.com   192.168.0.126
Cluster Communication / Private Network   Node1   node1.example.com      192.168.70.208
                                          Node2   node2.example.com      192.168.70.209

Fencing device:

Option        Node1                       Node2
Fence Agent   fence_lpar                  fence_lpar
ipaddr        192.168.70.169              192.168.70.169
login         hscroot                     hscroot
passwd        abc123                      abc123
managed       p740-8205-E6B-SN1041F5P     p740-8205-E6B-SN1041F5P
port          linuxtest2                  linuxtest3
Cluster Resources:

Failover Domain:
  Name:      FOD-FTP
  Members:   node1 & node2
  Failback:  No
  Priority:  Yes, as per node ID

Service Name: FTPSRV

  Resource               Details
  IP Address (ip)        192.168.70.80
  File System (fs)       Name: ftppubfs
                         Device: /dev/datavg01/datalv01
                         Type: ext4
                         Mount point: /var/ftp/
  Application (script)   Name: rcftp-script
                         File: /etc/init.d/vsftpd

Note: Here we use only one network for our two-node Red Hat Enterprise Linux 6.4 demo
HA cluster installation.

Hardware Installations:
We created two LPARs on an IBM p720 system and installed Red Hat Enterprise Linux 6.4
on each LPAR. After that, we performed the following pre-configuration of the operating
system on all target systems.

Host Name & IP Address Configuration:

We configured the hostname and the /etc/hosts file on both systems as per the information
given above.
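A minimal /etc/hosts sketch for both nodes, using the addresses collected earlier (the short
aliases are optional):

127.0.0.1        localhost.localdomain   localhost
192.168.0.125    cluster1.example.com    cluster1
192.168.0.126    cluster2.example.com    cluster2
192.168.70.208   node1.example.com       node1
192.168.70.209   node2.example.com       node2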
Stop Unnecessary Startup Services:
We stop all unnecessary services. The loop below will disable and stop them in a single shot.

# for i in auditd bluetooth cups ip6tables iscsi iscsid \
    mdmonitor postfix NetworkManager rpcbind rpcgssd rpcidmapd
do
  chkconfig $i off
  service $i stop
done
Enabling IP Ports:
Before deploying the Red Hat High Availability Add-On, you must enable certain IP ports
on the cluster nodes and on the computer that runs luci (the Conga user interface server):
UDP 5404/5405 for corosync, TCP 21064 for dlm, TCP 11111 for ricci, TCP 16851 for
modclusterd, and TCP 8084 for luci.

We ran the iptables commands below on both nodes:

# iptables -I INPUT -m state --state NEW -m multiport -p udp -s 192.168.70.0/24 \
  -d 192.168.70.0/24 --dports 5404,5405 -j ACCEPT
# iptables -I INPUT -m addrtype --dst-type MULTICAST -m state --state NEW -m multiport \
  -p udp -s 192.168.70.0/24 --dports 5404,5405 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.70.0/24 -d 192.168.70.0/24 \
  --dport 21064 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.70.0/24 -d 192.168.70.0/24 \
  --dport 11111 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.70.0/24 -d 192.168.70.0/24 \
  --dport 16851 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.70.0/24 -d 192.168.70.0/24 \
  --dport 8084 -j ACCEPT
# service iptables save ; service iptables restart
SELinux Configuration:
Edit the /etc/sysconfig/selinux file and set SELINUX=permissive to make the SELinux
setting persistent.

Run the commands below to change the SELinux mode at runtime and to check the current
status, as required.

# setenforce 0
# sestatus
# getenforce
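As an alternative to editing the file by hand, a one-liner for the persistent change
(assuming the default /etc/selinux/config file, which /etc/sysconfig/selinux points to):

# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# grep ^SELINUX= /etc/selinux/config
SELINUX=permissive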
Package Installation:
You can use the following yum install command to install the Red Hat High Availability
Add-On software packages:

# yum install rgmanager lvm2-cluster gfs2-utils

Note: rgmanager will pull in all necessary dependencies to create an HA cluster from the
HighAvailability channel. The lvm2-cluster and gfs2-utils packages are part of the
ResilientStorage channel and may not be needed by your site.

Run the command below to verify that the required packages are installed:

# rpm -q corosync corosynclib openais openaislib clusterlib \
    modcluster ricci fence-agents fence-agents-lpar fence-agents-common cman \
    python-repoze-who-friendlyform cluster-glue-libs resource-agents \
    rgmanager gfs2-utils luci

Red Hat High Availability Cluster Configuration:


Configuring Red Hat High Availability Add-On software consists of using configuration
tools to specify the relationship among the cluster components.

The following cluster configuration tools are available with Red Hat High Availability
Add-On:

Conga — this is a comprehensive user interface for installing, configuring, and managing
Red Hat High Availability Add-On.
ccs — this command configures and manages Red Hat High Availability Add-On.

Command-line tools — this is a set of command-line tools for configuring and managing
Red Hat High Availability Add-On.

Considerations for ricci:


In Red Hat Enterprise Linux 6, ricci replaces ccsd. Therefore, ricci must be running on each
cluster node in order to propagate updated cluster configuration, whether via the
cman_tool version -r command, the ccs command, or the luci user interface server.

You can start ricci by using service ricci start or by enabling it to start at boot time via
chkconfig.

For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the
first time you propagate updated cluster configuration from any particular node. You set the
ricci password as root, after installing ricci on your system, with the passwd ricci command
for the ricci user.
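A typical sequence on each node:

# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
# chkconfig ricci on
# service ricci start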

Configure Red Hat High Availability Cluster Using Conga:

Creating a cluster with luci consists of naming a cluster, adding cluster nodes to the
cluster, entering the ricci passwords for each node, and submitting the request to create a
cluster. If the node information and passwords are correct, Conga automatically installs
software into the cluster nodes (if the appropriate software packages are not currently
installed) and starts the cluster.

To administer Red Hat High Availability Add-On with Conga, install and run luci as
follows:

1. Select a computer to host luci and install the luci software on that computer (for
example, with yum install luci).
2. Start luci using service luci start. For example:

[root@node1 ~]# chkconfig luci on
[root@node1 ~]# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to
`node1.example.com' address, to the configuration of self-managed certificate
`/var/lib/luci/etc/cacert.config' (you can change them by editing
`/var/lib/luci/etc/cacert.config', removing the generated certificate
`/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://node1.example.com:8084 (or equivalent) to access luci

3. At a Web browser, place the URL of the luci server into the URL address box and click
Go (or the equivalent).
https://luci_server_hostname:8084
Step 1: Define a Cluster:
Define the cluster name and enter all the information for the two-node cluster.

Step 2: Define the fence device:
Define the fence device and enter all of its details (login, password, IP address, managed
system, etc.).

Step 3: Assign hosts to fence device ports:
Define the power port for each server (here, the LPAR names linuxtest2 and linuxtest3).

Step 4: Define Failover Domains:
Define the failover domain and set node prioritization as well as the failback policy as per
your requirements.

Step 5: Define Resources for the Clustered FTP Service:
Shared storage (if not in fstab)
IP address
FTP server resource

Step 6: Define the Clustered FTP Service:
Define the service
Add the storage resource (if not in fstab)
Add the IP address resource
Add the script resource
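For reference, a rough ccs command-line equivalent of steps 4 to 6, using the names and
values collected earlier (ccs prompts for the ricci password the first time it contacts a
node), might look like this:

# ccs -h node1.example.com --addfailoverdomain FOD-FTP ordered
# ccs -h node1.example.com --addfailoverdomainnode FOD-FTP node1.example.com 1
# ccs -h node1.example.com --addfailoverdomainnode FOD-FTP node2.example.com 2
# ccs -h node1.example.com --addresource ip address=192.168.70.80
# ccs -h node1.example.com --addresource fs name=ftppubfs device=/dev/datavg01/datalv01 \
      mountpoint=/var/ftp fstype=ext4
# ccs -h node1.example.com --addresource script name=rcftp-script file=/etc/init.d/vsftpd
# ccs -h node1.example.com --addservice FTPSRV domain=FOD-FTP recovery=relocate
# ccs -h node1.example.com --addsubservice FTPSRV ip ref=192.168.70.80
# ccs -h node1.example.com --addsubservice FTPSRV fs ref=ftppubfs
# ccs -h node1.example.com --addsubservice FTPSRV script ref=rcftp-script
# ccs -h node1.example.com --sync --activate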
Verify the Red Hat High Availability Cluster Configuration:
Final /etc/cluster/cluster.conf file:
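The exact file depends on the names chosen in luci; the sketch below shows roughly what it
might look like for the values used in this demo. The cluster name demo-cluster and the
fence device name hmc-fence are hypothetical placeholders, secure="on" is assumed because
fence_lpar talks to the HMC over ssh, and attribute names or ordering may differ slightly
from what Conga generates:

<?xml version="1.0"?>
<cluster config_version="1" name="demo-cluster">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="Method1">
          <device name="hmc-fence" port="linuxtest2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="Method1">
          <device name="hmc-fence" port="linuxtest3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_lpar" name="hmc-fence" ipaddr="192.168.70.169"
                 login="hscroot" passwd="abc123" secure="on"
                 managed="p740-8205-E6B-SN1041F5P"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="FOD-FTP" nofailback="1" ordered="1">
        <failoverdomainnode name="node1.example.com" priority="1"/>
        <failoverdomainnode name="node2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.70.80"/>
      <fs name="ftppubfs" device="/dev/datavg01/datalv01" fstype="ext4"
          mountpoint="/var/ftp/"/>
      <script name="rcftp-script" file="/etc/init.d/vsftpd"/>
    </resources>
    <service name="FTPSRV" domain="FOD-FTP" recovery="relocate">
      <ip ref="192.168.70.80"/>
      <fs ref="ftppubfs"/>
      <script ref="rcftp-script"/>
    </service>
  </rm>
</cluster>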
To check cluster status:
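For example (clustat shows cluster members and service state; add -i 2 to refresh every two
seconds):

# clustat
# clustat -i 2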

Red Hat High Availability Cluster Administration:


To start cluster services:
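To bring the cluster stack up by hand on a node, a typical sequence with the standard
RHEL 6 init scripts is:

# service cman start
# service rgmanager start
# chkconfig cman on
# chkconfig rgmanager on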

To relocate a cluster service to the other node:
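For example, to move the FTPSRV service to node2:

# clusvcadm -r FTPSRV -m node2.example.com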


To disable a cluster resource group:
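For example:

# clusvcadm -d FTPSRV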
To enable a cluster resource group:
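For example (optionally add -m <node> to choose the member it starts on):

# clusvcadm -e FTPSRV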

To freeze a cluster resource group:
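For example (freeze with -Z, unfreeze with -U):

# clusvcadm -Z FTPSRV
# clusvcadm -U FTPSRV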


Red Hat High Availability Cluster Troubleshooting:
To check fencing:
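For example, to list the fence domain members and to test fencing of a node (note that
fence_node really fences, i.e. reboots, the target LPAR, so use it only when that is
acceptable):

# fence_tool ls
# fence_node node2.example.com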
To check cluster multicast communication:

[root@node1 ~]# tcpdump -n -i eth0 multicast
[root@node2 ~]# tcpdump -n -i eth1 dst 224.0.0.251 and udp src port 5404
To check cluster communication port & status:
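For example:

# cman_tool status
# cman_tool nodes
# netstat -tulpn | grep -E '5404|5405|11111|16851|21064'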
Modify the cluster configuration using the ccs command-line tool:
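A few illustrative ccs operations (run from any node; ccs talks to the ricci daemon and
prompts for the ricci password the first time):

# ccs -h node1.example.com --getconf          (print the active cluster.conf)
# ccs -h node1.example.com --lsservices       (list configured services and resources)
# ccs -h node1.example.com --checkconf        (verify all nodes have the same configuration)
# ccs -h node1.example.com --sync --activate  (push and activate a changed configuration)
# ccs -h node1.example.com --stopall          (stop cluster services on all nodes)
# ccs -h node1.example.com --startall         (start cluster services on all nodes)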
