Recommendation
It is strongly recommended to involve Software AG Global Support in CentraSite
cluster projects.
From a client's view the whole cluster looks like a single machine with the IP address
192.168.0.1. The physical node on which the application is actually running and the
physical network interface to which the virtual IP address is actually assigned are
under the control of the cluster server, and are therefore hidden from the client.
The graphic below shows the "virtual host" represented by the virtual IP address in
light blue.
An instance of the cluster server is running on each of the physical nodes (shown in
orange). The cluster server controls an application by means of a plug-in. The plug-in
knows how to control an application (e.g., how to start, stop, and monitor it), and it
uses the cluster server API to report the status of the controlled application back to
the cluster server. It can be considered glue software, mediating between the cluster
server and the controlled application.
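As a minimal sketch, such a plug-in for a generic application could look like the
following shell fragment. All paths and command names here are hypothetical
placeholders, not actual CentraSite or cluster-server interfaces:

```shell
#!/bin/sh
# Minimal sketch of a cluster-server plug-in: glue between the cluster server
# and the controlled application. All paths below are hypothetical.
APP_START="/opt/app/bin/start"   # hypothetical start command
APP_STOP="/opt/app/bin/stop"     # hypothetical stop command
PIDFILE="/var/run/app.pid"       # hypothetical pid file of the application

app_start() { "$APP_START"; }
app_stop()  { "$APP_STOP"; }

# The cluster server polls "monitor" to learn the application status;
# the plug-in reports "online" or "offline" back to it.
app_monitor() {
  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo online
  else
    echo offline
  fi
}

# Dispatch on the action requested by the cluster server.
plugin() {
  case "$1" in
    start)   app_start ;;
    stop)    app_stop ;;
    monitor) app_monitor ;;
    *)       echo "usage: plugin {start|stop|monitor}" >&2; return 2 ;;
  esac
}
```

A real agent (for example, a Veritas Cluster Server agent) wraps exactly these
actions, plus the clean-up logic and configuration metadata required by the
specific cluster server.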
An application client sees just the virtual host as a single target (shown in pale blue)
and is not aware of the details inside.
Since cluster servers differ in their configuration and terminology, the term "plug-in"
stands for the collection of objects required to integrate applications into the cluster
server. These may be are scripts, binaries, descriptions, or configuration files,
depending on the APIs of the given cluster server. A Veritas Cluster Server agent is
an example of such a "plug-in".
Failover group
A failover group comprises all the components in a cluster environment that are
supposed to run on the same node; i.e., all the components in a group switch together
if one component fails. Within a failover group there is often a hierarchy of start-up
dependencies: some components must be up and running before other components
can be started. A failover group has different names in different cluster software
products; in Veritas Cluster Server (VCS), for example, it is called a "Service Group".
Failover scenarios
The following sections describe the most common failover scenarios for CentraSite.
Depending on the cluster environment (e.g., the available cluster server or the
operating system used), there may be limitations on setting up a certain scenario.
From the CentraSite point of view a CRR failover group must contain:
• All the components making up the shared disk(s) containing the database files
These are system components such as disk groups, volumes, and mounts.
The inosrv process depends at least on the shared disk being up and running
before it can be started. There may also be dependencies on the components
making up the shared disk.
• An IP address (i.e. the virtual IP cluster address for CRR).
• A network interface.
All these components must be online in order to make the CRR accessible to the
CentraSite Application Server Tier (CAST).
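As an illustration of this requirement, the following Linux-specific sketch checks
that the shared disk and the virtual IP address are online before the CRR database
server would be started. The mount point is a hypothetical placeholder; the address
is the example address used in this document:

```shell
#!/bin/sh
# Sketch: verify the failover-group resources before starting the CRR
# database server. Linux-specific; the mount point is hypothetical.
CRR_MOUNT="/crrdata"     # hypothetical mount point of the shared disk
CRR_VIP="192.168.0.1"    # virtual cluster IP address used in this document

crr_preconditions_ok() {
  # Shared disk mounted on this node?
  if ! grep -q " $CRR_MOUNT " /proc/mounts; then
    echo "shared disk not mounted"
    return 1
  fi
  # Virtual IP assigned to a local network interface?
  if ! ip -o addr show 2>/dev/null | grep -q "$CRR_VIP"; then
    echo "virtual IP not assigned"
    return 1
  fi
  echo "preconditions ok"
}
```

In a real cluster these checks are performed by the cluster server itself via the
resource dependencies of the failover group; the sketch only makes the ordering
explicit.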
The graphic shows the CentraSite components of a CRR failover group. The only
CentraSite component to be controlled by the cluster server is the CRR database
server. The disk and network-related components are not shown in this graphic.
CASTs access the CRR via the virtual IP address 192.168.0.1, which is controlled by
the same failover group as the CRR database server.
Example
The example below shows a CRR failover group as a Service Group in a Veritas
Cluster. It shows the components with their dependencies. A CentraSite Application
Server Tier can only use the CRR after the complete Veritas Service Group is up and
running. The CRR database server is represented by the component "CrrCentraSite".
The lines between the components show the start-up and shut-down dependencies.
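As an illustration only, such a Service Group could be sketched in VCS main.cf
style roughly as follows. The resource names and attribute values are assumptions
made for this sketch; only the component name "CrrCentraSite" and the virtual
address are taken from this document:

```
group CrrGroup (
    SystemList = { node1 = 0, node2 = 1 }
    )

    DiskGroup crr_dg (
        DiskGroup = crrdg
        )

    Mount crr_mount (
        MountPoint = "/crrdata"
        BlockDevice = "/dev/vx/dsk/crrdg/crrvol"
        FSType = vxfs
        )

    NIC crr_nic (
        Device = eth0
        )

    IP crr_ip (
        Device = eth0
        Address = "192.168.0.1"
        )

    Application CrrCentraSite (
        // start/stop/monitor programs of the CRR database server go here
        )

    // The "requires" lines encode the start-up and shut-down dependencies:
    // a resource is started only after the resources it requires are online.
    crr_mount requires crr_dg
    crr_ip requires crr_nic
    CrrCentraSite requires crr_mount
    CrrCentraSite requires crr_ip
```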
Installation and Configuration
For details on installation and configuration of a CRR failover group, see the
document "Setting Up a Failover Solution for CentraSite". If only the CRR is put
under cluster control, it is recommended to install CentraSite in a CRR-only
configuration on the cluster nodes.
Remarks
It is recommended to install CASTs with the virtual CRR IP address as the hostname
(to avoid later reconfiguration).
Multiple CASTs
As a possible variation the CASTs can be accessed through a common load balancer
in front of the CASTs (between the CAST clients and the CASTs). The CASTs can be
installed on the cluster nodes or on other nodes which are not part of the cluster.
For the "Basic Failover Scenario" this means that the CRR is installed in a virtual
machine controlled by a cluster server.
In such cases it is recommended to have one virtual machine on a shared disk,
in which the CRR is installed.
There are also at least two alternatives for handling the database files and the file
that holds the ICS (regfile):
• They are just kept within the standard installation in the virtual machine.
• They reside on a filesystem on a second disk. This disk must be mounted
before the CRR database server is started in the virtual machine.
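The second alternative can be sketched as follows. The device, mount point, and
start command are hypothetical placeholders for whatever the actual virtual
machine uses:

```shell
#!/bin/sh
# Sketch of the second alternative: mount the data disk inside the virtual
# machine, then start the CRR database server. All values are hypothetical.
DATA_DEV="/dev/sdb1"            # hypothetical second disk
DATA_MOUNT="/crrdata"           # hypothetical mount point for the database files
START_CMD="/opt/crr/bin/start"  # hypothetical start command

start_with_data_disk() {
  # Mount the data filesystem only if it is not already mounted.
  if ! grep -q " $DATA_MOUNT " /proc/mounts; then
    mount "$DATA_DEV" "$DATA_MOUNT" || return 1
  fi
  # Unquoted so that a start command with arguments also works.
  $START_CMD
}
```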
The Multiple CASTs scenario can also be implemented on the basis of a virtual
machine configuration.