
CentraSite Failover Basics

Recommendation
It is strongly recommended to involve Software AG Global Support in CentraSite
cluster projects.

Basic Architecture of a Failover System


The basic architecture, in terms of hardware and system configuration, of a high-
availability failover system is shown in the following graphic. It consists of (a minimum
of) two nodes, a disk shared between the two nodes, and a heartbeat connection
linking the participating cluster nodes. The heartbeat connection is often a
dedicated direct hardware connection run by a cluster-server-specific protocol. The
setup is controlled by a cluster server. Each node is identified by its own IP
address/DNS entry (192.168.0.2/censerver1, 192.168.0.3/censerver2). The clients
use a third IP address (192.168.0.1/cenvirt), sometimes called the virtual IP address,
to access the high-availability cluster. This IP address is attached to at most one of
the nodes at any given point in time.

From a client's view the whole cluster looks like a single machine with the IP address
192.168.0.1. The physical node on which the application is actually running, and the
physical network interface to which the virtual IP address is actually assigned are
under control of the cluster server, and therefore hidden from the client.
The graphic below shows the "virtual host" represented by the virtual IP address in
light blue.
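To make this mechanism concrete, the following sketch builds the iproute2 commands a cluster server would typically run on Linux to move the virtual IP between nodes. The interface name eth0 is an assumption; the addresses are the example values above. The commands are only constructed as strings here, since actually executing them requires root privileges.

```shell
# Sketch: commands a cluster server issues to move a virtual IP (Linux, iproute2).
# We only build the command strings; executing them requires root privileges.
attach_vip() { echo "ip addr add $1/24 dev $2"; }  # bind VIP to the active node
detach_vip() { echo "ip addr del $1/24 dev $2"; }  # release VIP on failover

attach_vip 192.168.0.1 eth0   # → ip addr add 192.168.0.1/24 dev eth0
```

On failover, the cluster server detaches the virtual IP from the failed node (if still reachable) and attaches it on the surviving node, so clients keep using 192.168.0.1 without reconfiguration.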

An instance of the cluster server is running on each of the physical nodes (shown in
orange). The cluster server controls an application by means of a plug-in. The plug-in
knows how to control an application (for example, how to start, stop, and monitor it).
On the other hand, it uses the cluster server API to report the status of a controlled
application back to the cluster server. It can be considered glue software, mediating
between the cluster server and the controlled application.
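As an illustration only, the control logic of such a plug-in can be sketched as a small script with start, stop, and monitor entry points. The commands below are placeholders (assumptions), not actual CentraSite commands; the real control commands depend on your installation.

```shell
#!/bin/sh
# Minimal sketch of a cluster server plug-in for a controlled application
# such as the CRR database server (inosrv). The start/stop actions are
# placeholders (assumptions); substitute the real control commands.

crr_plugin() {
  case "$1" in
    start)
      echo "starting inosrv"    # placeholder for the real start command
      ;;
    stop)
      echo "stopping inosrv"    # placeholder for the real stop command
      ;;
    monitor)
      # Report ONLINE (exit 0) if the database server process is running.
      pgrep -x inosrv >/dev/null
      ;;
    *)
      echo "usage: crr_plugin {start|stop|monitor}" >&2
      return 2
      ;;
  esac
}
```

The cluster server invokes such entry points; the exit code of the monitor action tells it whether the resource is online, which is the information it needs to decide on a failover.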

An application client sees just the virtual host as a single target (shown in pale blue)
and is not aware of the details inside.

Since cluster servers differ in their configuration and terminology, the term "plug-in"
stands for the collection of objects required to integrate applications into the cluster
server. These may be scripts, binaries, descriptions, or configuration files,
depending on the APIs of the given cluster server. A Veritas Cluster Server agent is
an example of such a "plug-in".
Failover group
A failover group comprises all the components in a cluster environment that are
supposed to run on the same node; that is, all the components in a group switch
together if one component fails. Within a failover group there is often a hierarchy
of start-up dependencies: some components must be up and running before other
components can be started. A failover group has different names in different cluster
server products; in Veritas Cluster Server (VCS), for example, it is called a
"Service Group".

Failover scenarios
The following sections describe the most common failover scenarios for CentraSite.
Depending on the cluster environment (e.g. the available cluster server or the
operating system used), there may be limitations on setting up a certain scenario.

Basic Failover Scenario: CRR in one failover group

The availability of the CRR (CentraSite Registry/Repository) can be increased by
creating a failover group that contains the CRR database server as one component,
together with all the other components on which it depends or to which it is related.

From the CentraSite point of view a CRR failover group must contain:

• CRR database server (i.e. inosrv)

Other components may also be required:

• All the components making up the shared disk(s) containing the database files.
These are system components such as disk groups, volumes, and mounts.
inosrv depends at least on the shared disk being up and running before it can
be started; there may also be dependencies among the components making
up the shared disk.
• An IP address (i.e. the virtual IP cluster address for the CRR).
• A network interface.

All these components must be online in order to make the CRR accessible for the
CentraSite Application Server Tier (CAST).

The graphic shows the CentraSite components of a CRR failover group. The only
CentraSite component to be controlled by the cluster server is the CRR database
server. The disk and network-related components are not shown in this graphic.
CASTs access the CRR via the virtual IP address 192.168.0.1, which is controlled by
the same failover group as the CRR database server.
Example

The example below shows a CRR failover group as a Service Group in a Veritas
Cluster. It shows the components with their dependencies. A CentraSite Application
Server Tier can only use the CRR after the complete Veritas Service Group is up and
running. The CRR database server is represented by the component "CrrCentraSite".
The lines between the components show the start-up and shut-down dependencies.
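For illustration, such a Service Group might be declared in a VCS main.cf roughly as follows. This is a sketch only: the resource names, node names, device names, and script paths are assumptions chosen for this example, not the actual configuration shipped with CentraSite.

```
group CRR_SG (
    SystemList = { censerver1 = 0, censerver2 = 1 }
    AutoStartList = { censerver1 }
    )

    DiskGroup crr_dg (
        DiskGroup = crrdg
        )

    Mount crr_mnt (
        MountPoint = "/crrdata"
        BlockDevice = "/dev/vx/dsk/crrdg/crrvol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    NIC crr_nic (
        Device = eth0
        )

    IP crr_ip (
        Device = eth0
        Address = "192.168.0.1"
        NetMask = "255.255.255.0"
        )

    Application CrrCentraSite (
        StartProgram = "/opt/cluster/crr_start.sh"
        StopProgram = "/opt/cluster/crr_stop.sh"
        MonitorProgram = "/opt/cluster/crr_monitor.sh"
        )

    crr_mnt requires crr_dg
    crr_ip requires crr_nic
    CrrCentraSite requires crr_mnt
    CrrCentraSite requires crr_ip
```

The "requires" statements encode the start-up hierarchy described above: VCS brings the disk group, mount, NIC, and virtual IP online before starting the CRR database server, and stops them in the reverse order.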
Installation and Configuration

For details on the installation and configuration of a CRR failover group, see the
document "Setting Up a Failover Solution for CentraSite". If only the CRR is put
under cluster control, it is recommended to install CentraSite in a CRR-only
configuration on the cluster nodes.

Remarks

It is recommended to install CASTs with the virtual CRR IP address as the hostname
(to avoid later reconfiguration).

Multiple CASTs

In this scenario multiple CASTs access one CRR failover cluster.

As a variation, the CASTs can be accessed through a common load balancer placed
between the CAST clients and the CASTs. The CASTs can be installed on the
cluster nodes or on other nodes that are not part of the cluster.

The graphic below shows a load-balancer scenario with several CASTs.


Virtual Machines
It is also possible to build CentraSite failover solutions on the basis of virtual
machines. Examples of virtual machine technologies are IBM POWER virtualization,
the zones/containers concept on Sun/Solaris, and VMware virtual machines.

Basic Failover Scenario

For the "Basic Failover Scenario" this means that the CRR is installed in a virtual
machine controlled by a cluster server.
In such cases it is recommended to place the virtual machine in which the CRR is
installed on a shared disk.
There are also at least two alternatives for handling the database files and the file
that holds the ICS (regfile):

• They are just kept within the standard installation in the virtual machine.
• They reside on a filesystem on a second disk. This disk must be mounted
before the CRR database server is started in the virtual machine.
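For the second alternative, the mount-before-start dependency can be sketched as a guard in the start logic inside the virtual machine. This is a minimal sketch assuming a Linux guest; the mount point and the start command are assumptions.

```shell
# Sketch: refuse to start the CRR database server unless the shared data
# filesystem is mounted inside the virtual machine. The mount point is an
# assumption; the start command is a placeholder. Linux-specific: the check
# reads /proc/mounts.
DATA_MOUNT="${DATA_MOUNT:-/crrdata}"

start_crr_if_mounted() {
  if grep -qs " $DATA_MOUNT " /proc/mounts; then
    echo "filesystem mounted: starting inosrv"   # placeholder for real start
  else
    echo "filesystem not mounted: not starting" >&2
    return 1
  fi
}
```

A cluster server would normally enforce this ordering itself through resource dependencies; a guard like this simply makes the start script fail fast if it is invoked out of order.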

The Multiple CASTs scenario can also be achieved based on virtual machine
configuration.

Major steps before setting up a CentraSite failover solution
• Step 1: Make yourself familiar with the documents.

In addition to the current document, there are more detailed documents on
setting up a CentraSite failover solution:

Setting Up a Failover Solution For CentraSite.
Sample Scripts Description.
The sample scripts.

These documents provide the detailed descriptions required to set up a
failover solution for the CRR.

• Step 2: Create a plug-in for the cluster server.

Since CentraSite has to be embedded in a particular cluster server and a
particular cluster environment, a plug-in must be created. Creating a cluster
server plug-in requires in-depth knowledge of both the cluster server and
CentraSite.
There are sample scripts for controlling the CRR, which might serve as a basis.

• Step 3: Decide which scenarios to set up.

It is recommended to decide which scenario will be set up before performing
the CentraSite installation. In some situations a reconfiguration can be
avoided if the cluster configuration is considered at installation time.
