
Setting up yum on RHEL 7 using the RHEL 7 ISO

==========================================

The ISO image rhel-server-7.5-x86_64-dvd.iso is placed under /dump. Create a mount point and loop-mount the ISO:

mkdir /repo

mount -t iso9660 -o loop /dump/rhel-server-7.5-x86_64-dvd.iso /repo
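
If the ISO should be mounted automatically after a reboot, one option (a sketch, assuming the same paths as above) is a loop entry in /etc/fstab:

echo '/dump/rhel-server-7.5-x86_64-dvd.iso  /repo  iso9660  loop,ro  0 0' >> /etc/fstab
mount -a    # re-reads /etc/fstab and mounts anything not yet mounted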

[root@eoffice dump]# cat /etc/yum.repos.d/test.repo


[repo-update]
gpgcheck=0
enabled=1
baseurl=file:///repo
name=repo-update

[repo-ha]
gpgcheck=0
enabled=1
baseurl=file:///repo/addons/HighAvailability
name=repo-ha

[repo-storage]
gpgcheck=0
enabled=1
baseurl=file:///repo/addons/ResilientStorage
name=repo-storage
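
To confirm that yum can actually see the new repositories, a quick check (assuming the repo file above is in place) is:

yum clean all
yum repolist    # repo-update, repo-ha and repo-storage should be listed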

===================================================================

RHEL 7 Cluster Installation


==================================

1) Disable Firewall

systemctl stop firewalld.service


systemctl disable firewalld.service
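
If disabling the firewall completely is not acceptable in your environment, an alternative sketch is to keep firewalld running and open its predefined high-availability service instead:

firewall-cmd --permanent --add-service=high-availability    # opens the ports used by corosync, pacemaker and pcsd
firewall-cmd --reload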

2) Disable SELinux

vi /etc/selinux/config
Set the line SELINUX=enforcing to SELINUX=disabled.
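
The same change can also be made non-interactively; a minimal sketch, to be run on both nodes, is:

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # takes effect after a reboot
setenforce 0    # switches the running system to permissive mode until that reboot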

3) Install the RHEL 7 cluster RPMs on both nodes

yum install pcs pacemaker fence-agents-all psmisc policycoreutils-python

4) Start and enable the pcsd service on both nodes

systemctl start pcsd


systemctl status pcsd
systemctl enable pcsd

5) Set a password for the hacluster user on both nodes


echo Clus@123 | passwd --stdin hacluster

6) Log in to any one of the cluster nodes and authenticate the 'hacluster' user, using the following command.

pcs cluster auth NODE1 NODE2 -u hacluster -p Clus@123 --force
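
NODE1 and NODE2 are placeholders for your actual node hostnames, and both nodes must be able to resolve each other's names. If DNS is not available, a hypothetical /etc/hosts entry on both nodes could look like this (IP addresses are examples only):

192.168.1.11   NODE1
192.168.1.12   NODE2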

7) Create the cluster and populate it with the nodes.

pcs cluster setup --force --name SapCluster NODE1 NODE2

8) Start and enable the cluster on all nodes.

pcs cluster start --all


pcs cluster enable --all

9) Check the cluster status

pcs status

Note: You can also use the 'crm_mon -1' command to check the status of the services running on the cluster.

10) Check and disable STONITH (fencing):
pcs property show stonith-enabled
pcs property set stonith-enabled=false

11) Note: One important point: this Pacemaker deployment is a 2-node configuration. Quorum as a concept makes no sense in this scenario, because you only have quorum when more than half of the nodes are available, so we disable it as well using the following command.

pcs property set no-quorum-policy=ignore

pcs property show no-quorum-policy
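
To verify the corosync view of quorum after changing the policy, the standard corosync tooling can be used:

corosync-quorumtool -s    # shows total/expected votes and whether the cluster is quorate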

Note: If your cluster nodes are virtual machines hosted on VMware, you can use the 'fence_vmware_soap' fencing agent. To configure 'fence_vmware_soap' as the fencing agent, refer to the logical steps below:

1) Verify that your cluster nodes can reach the VMware hypervisor or vCenter:

fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> --ssl -z -v -o list | egrep "(nfs1.example.com|nfs2.example.com)"

or

fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> --ssl -z -o list | egrep "(nfs1.example.com|nfs2.example.com)"

If you can see the VM names in the output then the connection is fine; otherwise check why the cluster nodes are not able to connect to ESXi or vCenter.

2) Define the fencing device using the command below:

pcs stonith create vmware_fence fence_vmware_soap \
    pcmk_host_map="node1:nfs1.example.com;node2:nfs2.example.com" \
    ipaddr=<vCenter_IP_address> ssl=1 login=<user_name> passwd=<password>

3) Check the STONITH status using the command below:

pcs stonith show
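
Once the device appears in the output, you can optionally test it by fencing the passive node. Be aware that this really power-cycles the VM, so only do it in a maintenance window (node2 below is a placeholder for one of your node names):

pcs stonith fence node2    # asks the vmware_fence device to reboot node2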

12) Add the cluster resource.

Add the VIP resource:

pcs resource create SAP_VIP01 ocf:heartbeat:IPaddr2 ip=172.16.1.10 cidr_netmask=32 op monitor interval=30s

Where:

'SAP_VIP01' is the name the resource will be known as.
'ocf:heartbeat:IPaddr2' tells Pacemaker which resource agent (the IPaddr2 script from the heartbeat provider) to use.
'op monitor interval=30s' tells Pacemaker to check the health of this resource every 30 seconds by calling the agent's monitor action.

Check pcs status

pcs status
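
To confirm that the virtual IP is actually up on the active node, a quick check (the interface name will vary per system) is:

pcs status resources    # shows SAP_VIP01 and the node it is started on
ip addr show            # 172.16.1.10 should be plumbed on an interface of the active node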

RHEL 7 Cluster Administration


==================================

1) Mark the node as being in standby mode.

pcs cluster standby <node-undergoing-maintenance>


pcs cluster standby NODE1

2) Once the maintenance is complete, simply take the node out of standby.

pcs cluster unstandby <node-exiting-maintenance>

pcs cluster unstandby NODE1

3) Cluster membership can be monitored using:

pcs status corosync

4) Cluster start/stop on all nodes

pcs cluster start --all

Behind the scenes, the 'pcs cluster start' command triggers the following commands on each cluster node:
systemctl start corosync.service
systemctl start pacemaker.service

systemctl status corosync


systemctl status pacemaker

Check the corosync communication status

corosync-cfgtool -s

pcs status corosync

pcs cluster stop --all

5) TO DESTROY CLUSTER

pcs cluster destroy <cluster>

6) Print the full cluster configuration with:

pcs config

7) MOVE AND MOVE BACK RESOURCES

To move:
pcs resource move <resource>

To move back:
pcs resource clear <resource>
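
'pcs resource move' works by adding a temporary location constraint, which 'pcs resource clear' removes again. The constraints currently in place can be inspected with:

pcs constraint location    # a move typically leaves a cli-prefer-<resource> constraint here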

Cluster configuration file locations:

Redhat Cluster Release       Configuration file               Description
===========================================================================================
Prior to Redhat Cluster 7    /etc/cluster/cluster.conf        Stores all the cluster configuration
Redhat Cluster 7 (RHEL 7)    /etc/corosync/corosync.conf      Membership and quorum configuration
Redhat Cluster 7 (RHEL 7)    /var/lib/heartbeat/crm/cib.xml   Cluster node and resource configuration

8) When the cluster starts, it automatically records the number and details of the nodes in the cluster, as well as which stack is being used and the version of Pacemaker being used. To view the cluster configuration (Cluster Information Base, CIB) in XML format, use the following command.

pcs cluster cib
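
The same command can also dump the CIB to a file, which is handy as a backup before making changes (the filename here is just an example):

pcs cluster cib cib-backup.xml    # writes the current CIB to cib-backup.xml instead of stdout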

9) Disable STONITH (fencing)

pcs property set stonith-enabled=false

To check STONITH:
pcs property show stonith-enabled

10) Add the IP that needs to be highly available (the clustered IP).

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.203.190 cidr_netmask=24 op monitor interval=30s

To list the available resource standards, providers, and agents:

pcs resource standards
pcs resource providers
pcs resource agents ocf:heartbeat
