
CLUSTER COMPUTING


Introduction:

A computer cluster is a group of linked computers, working together closely so that in many
respects they form a single computer. The components of a cluster are commonly, but not
always, connected to each other through fast local area networks. Clusters are usually deployed
to improve performance and/or availability over that of a single computer, while typically
being much more cost-effective than single computers of comparable speed or availability.

High-performance computing (HPC) allows scientists and engineers to deal with very complex problems using fast computer hardware and specialized software. Since these problems often require hundreds or even thousands of processor hours to complete, an approach based on the use of supercomputers has traditionally been adopted. The recent tremendous increase in the speed of PC-class computers opens a relatively cheap and scalable route to HPC using cluster technologies. Conventional MPP (Massively Parallel Processing) supercomputers are oriented toward the very high end of performance. As a result, they are relatively expensive and require special, and also expensive, maintenance support. Better understanding of applications and algorithms, as well as significant improvements in communication network technologies and processor speed, led to the emergence of a new class of systems, called clusters of SMPs (symmetric multiprocessors) or networks of workstations (NOW), which are able to compete in performance with MPPs and offer excellent price/performance ratios for particular application types.
A cluster is a group of independent computers working together as a single system to ensure that mission-critical applications and resources are as highly available as possible. The group is managed as a single system, shares a common namespace, and is specifically designed to tolerate component failures and to support the addition or removal of components in a way that is transparent to users.

What is cluster computing?

Development of new materials and production processes based on high technologies requires the solution of increasingly complex computational problems. However, even as computer power, data storage, and communication speed continue to improve exponentially, available computational resources often fail to keep up with what users demand of them. Therefore a high-performance computing (HPC) infrastructure becomes a critical resource for research and development as well as for many business applications. Traditionally, HPC applications were oriented toward the use of high-end computer systems, so-called "supercomputers". Before considering the amazing progress in this field, some attention should be paid to the classification of existing computer architectures.

SISD (Single Instruction stream, Single Data stream) computers. These are the conventional systems that contain one central processing unit (CPU) and hence can accommodate one instruction stream that is executed serially. Nowadays many large mainframes may have more than one CPU, but each of these executes instruction streams that are unrelated. Therefore, such systems should still be regarded as a set of SISD machines acting on different data spaces. Examples of SISD machines are most workstations, such as those of DEC, IBM, Hewlett-Packard, and Sun Microsystems, as well as most personal computers.

SIMD (Single Instruction stream, Multiple Data stream) computers. Such systems often have a large number of processing units that all execute the same instruction on different data in lock-step. Thus, a single instruction manipulates many data items in parallel. Examples of SIMD machines are the CPP DAP Gamma II and the Alenia Quadrics.
Vector processors are a subclass of the SIMD systems. Vector processors act on arrays of similar data rather than on single data items, using specially structured CPUs. When data can be manipulated by these vector units, results can be delivered at a rate of one, two and, in special cases, three per clock cycle (a clock cycle being defined as the basic internal unit of time for the system). So vector processors operate on their data in an almost parallel way, but only when executing in vector mode; in this case they are several times faster than when executing in conventional scalar mode. For practical purposes vector processors are therefore mostly regarded as SIMD machines. Examples of such systems are the Cray 1 and the Hitachi S3600.
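As a rough modern illustration of the SIMD idea (not code for any of the machines named above), consider a C loop that applies the same multiply-add to every element of an array; a vectorizing compiler can map such a loop onto SIMD/vector instructions so that one instruction updates several elements per cycle. The function name and array layout below are arbitrary.

#include <stddef.h>

/* The same multiply-add is applied independently to every element.
 * A vectorizing compiler can turn this loop into SIMD/vector
 * instructions, so that a single instruction processes several
 * elements of x and y at once -- the essence of the SIMD model. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}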
MIMD (Multiple Instruction stream, Multiple Data stream) computers. These machines execute several instruction streams in parallel on different data. The difference from the multiprocessor SISD machines mentioned above lies in the fact that the instructions and data are related, because they represent different parts of the same task to be executed. So MIMD systems may run many sub-tasks in parallel in order to shorten the time-to-solution for the main task. There is a large variety of MIMD systems, ranging from a four-processor NEC SX-5 to a thousand-processor SGI/Cray T3E supercomputer. Besides the above classification, another important distinction between classes of computing systems can be made according to the type of memory access:

Shared memory (SM) systems have multiple CPUs all of which share the same address space.
This means that the knowledge of where data is stored is of no concern to the user as there is
only one memory accessed by all CPUs on an equal basis. Shared memory systems can be both
SIMD and MIMD. Single-CPU vector processors can be regarded as an example of the former,
while the multi-CPU models of these machines are examples of the latter.

Distributed memory (DM) systems. In this case each CPU has its own associated memory.
The CPUs are connected by some network and may exchange data between their respective
memories when required. In contrast to shared memory machines the user must be aware of the
location of the data in the local memories and will have to move or distribute these data
explicitly when needed. Again, distributed memory systems may be either SIMD or MIMD.
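To make the shared-memory model concrete, the following minimal sketch uses OpenMP, a common shared-memory programming interface (an assumption for illustration, since no specific API is named above). All threads read and write the same arrays in the single shared address space, so no explicit data movement is needed; contrast this with the message-passing example given later for distributed-memory systems.

/* Minimal shared-memory sketch using OpenMP: every thread works on
 * parts of the same arrays in one shared address space.
 * Compile with, e.g., gcc -fopenmp shared.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];
    double sum = 0.0;

    /* All threads see the same a[] and b[]; loop iterations are divided
     * among them and the partial sums are combined automatically. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; ++i) {
        a[i] = i * 0.5;
        b[i] = i * 2.0;
        sum += a[i] * b[i];
    }

    printf("dot product = %f (threads available: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}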

Shared (left) and distributed (right) memory computer architectures

Supercomputers are defined as the fastest, most powerful computers in terms of CPU power and I/O capabilities. Since computer technology is continually evolving, this is always a moving target: this year's supercomputer may well be next year's entry-level personal computer. In fact, today's commonly available personal computers deliver performance that easily bests the supercomputers that were available on the market in the 1980s. A strong limitation on the further scalability of vector computers was their shared-memory architecture. Therefore, massively parallel processing (MPP) systems using distributed memory were introduced by the end of the 1980s. The main advantage of such systems is the possibility to divide a complex job into several parts, which are executed in parallel by several processors, each having dedicated memory. The communication between the parts of the main job occurs within the framework of the so-called message-passing paradigm, which was standardized in the Message Passing Interface (MPI). The message-passing paradigm is flexible enough to support a variety of applications and is also well adapted to the MPP architecture. In recent years, a tremendous improvement in the performance of standard workstation processors has led to their use in MPP supercomputers, resulting in significantly lowered price/performance ratios.
Traditionally, conventional MPP supercomputers are oriented toward the very high end of performance. As a result, they are relatively expensive and require special, and also expensive, maintenance support. To meet the requirements of the lower and medium market segments, symmetric multiprocessing (SMP) systems were introduced in the early 1990s to address commercial users with applications such as databases, scheduling tasks in the telecommunications industry, data mining, and manufacturing. Better understanding of applications and algorithms, as well as significant improvements in communication network technologies and processor speed, led to the emergence of a new class of systems, called clusters of SMPs or networks of workstations (NOW), which are able to compete in performance with MPPs and offer excellent price/performance ratios for particular application types. In practice, clustering technology can be applied to any arbitrary group of computers, allowing homogeneous or heterogeneous systems to be built. Even higher performance can be achieved by combining groups of clusters into a hyper-cluster or even a Grid-type system.

Extraordinary technological improvements over the past few years in areas such as microprocessors, memory, buses, networks, and software have made it possible to assemble groups of inexpensive personal computers and/or workstations into a cost-effective system that functions in concert and possesses tremendous processing power. Cluster computing is not new, but in company with other technical capabilities, particularly in the area of networking, this class of machines is becoming a high-performance platform for parallel and distributed applications. Scalable computing clusters, ranging from a cluster of (homogeneous or heterogeneous) PCs or workstations to SMPs (symmetric multiprocessors), are rapidly becoming the standard platforms for high-performance and large-scale computing. A cluster is a group of independent computer systems and thus forms a loosely coupled multiprocessor system, as shown in the figure below.

A cluster system built by connecting four SMPs

A network is used to provide inter-processor communication. Applications distributed across the processors of the cluster use either message passing or network shared memory for communication. A cluster computing system is a compromise between a massively parallel processing system and a distributed system. An MPP (massively parallel processing) system node typically cannot serve as a standalone computer; a cluster node usually contains its own disk and is equipped with a complete operating system, and therefore it can also handle interactive jobs. In a distributed system, each node can function only as an individual resource, while a cluster system presents itself as a single system to the user.

Beowulf clusters:
The concept of Beowulf clusters originated at the Center of Excellence in Space Data and Information Sciences (CESDIS), located at the NASA Goddard Space Flight Center in Maryland. The goal of building a Beowulf cluster is to create a cost-effective parallel computing system from commodity components to satisfy specific computational requirements of the earth and space sciences community. The first Beowulf cluster was built from 16 Intel DX4 processors connected by channel-bonded 10 Mbps Ethernet, and it ran the Linux operating system. It was an instant success, demonstrating the concept of using a commodity cluster as an alternative choice for high-performance computing (HPC). After the success of the first Beowulf cluster, several more were built by CESDIS using several generations and families of processors and networks. Beowulf is a concept of clustering commodity computers to form a parallel, virtual supercomputer. It is easy to build a unique Beowulf cluster from components that you consider most appropriate for your applications. Such a system can provide a cost-effective way to gain features and benefits (fast and reliable services) that have historically been found only on more expensive proprietary shared-memory systems. The typical architecture of a cluster is shown in the figure below. As the figure illustrates, numerous design choices exist for building a Beowulf cluster; for example, the bold line indicates our cluster configuration from bottom to top. No Beowulf cluster is general enough to satisfy the needs of everyone.

Architecture of cluster systems

Logical view of clusters:


A Beowulf cluster uses a multi-computer architecture, as depicted in the figure below. It features a parallel computing system that usually consists of one or more master nodes and one or more compute nodes, or cluster nodes, interconnected via widely available network interconnects. All of the nodes in a typical Beowulf cluster are commodity systems (PCs, workstations, or servers) running commodity software such as Linux.

Logical view of cluster

The question may arise why clusters are designed and built when perfectly good commercial supercomputers are available on the market. The answer is that the latter are expensive, while clusters are surprisingly powerful. The supercomputer has come to play a larger role in business applications; in areas from data mining to fault-tolerant performance, clustering technology has become increasingly important. Commercial products have their place, and there are perfectly good reasons to buy a commercially produced supercomputer. If it is within our budget and our applications can keep the machine busy all the time, we will also need a data center to keep it in; then there is the budget to keep up with the maintenance and upgrades that will be required to keep our investment up to par. However, many who need to harness supercomputing power do not buy supercomputers because they cannot afford them, and such machines are also difficult to upgrade. Clusters, on the other hand, are a cheap and easy way to take off-the-shelf components and combine them into a single supercomputer. In some areas of research, clusters are actually faster than commercial supercomputers. Clusters also have the distinct advantage that they are simple to build using components available from hundreds of sources. We do not even have to use new equipment to build a cluster.

Cluster Styles:
There are many kinds of clusters that may be used for different applications.
Homogeneous Clusters
If we have a lot of identical systems, or a lot of money at our disposal, we will be building a homogeneous cluster. This means that we will be putting together a cluster in which every single node is exactly the same. Homogeneous clusters are very easy to work with because, no matter what way we decide to tie them together, all of our nodes are interchangeable and we can be sure that all of our software will work the same way on all of them.
Heterogeneous Clusters
Heterogeneous clusters come in two general forms. The first and most common are heterogeneous clusters made from different kinds of computers. It does not matter what the actual hardware is, except that there are different makes and models. A cluster made from such machines raises several very important practical considerations.

Message passing interface (MPI):


MPI is a message-passing library standard that was published in May 1994. The MPI standard is based on the consensus of the participants in the MPI Forum, which involved over 40 organizations; participants included vendors, researchers, academics, software library developers and users. MPI offers portability, standardization, performance, and functionality. The advantage for the user is that MPI is standardized on many levels. For example, since the syntax is standardized, you can rely on your MPI code to execute under any MPI implementation running on your architecture. Since the functional behavior of MPI calls is also standardized, your MPI calls should behave the same regardless of the implementation. This guarantees the portability of your parallel programs. Performance, however, may vary between different implementations. MPI includes point-to-point message passing and collective (global) operations, all scoped to a user-specified group of processes. MPI provides a substantial set of libraries for the writing, debugging, and performance testing of distributed programs. Our system currently uses LAM/MPI, a portable implementation of the MPI standard developed cooperatively at the University of Notre Dame. LAM (Local Area Multicomputer) is an MPI programming environment and development system and includes a visualization tool that allows a user to examine the state of the machine allocated to their job, as well as a means of studying message flows between nodes.
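The short program below is a minimal sketch of the two MPI facilities mentioned above: point-to-point messages (MPI_Send/MPI_Recv) and a collective operation (MPI_Reduce), both scoped to the process group MPI_COMM_WORLD. It is generic MPI C code, not specific to LAM/MPI, and would typically be launched with the implementation's launcher, for example mpirun -np 4 ./hello (exact options depend on the implementation).

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */

    /* Point-to-point: every non-zero rank sends its id to rank 0. */
    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        for (int src = 1; src < size; ++src) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received a greeting from rank %d\n", msg);
        }
    }

    /* Collective: sum the ranks of all processes onto rank 0. */
    int total = 0;
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of all ranks = %d\n", total);

    MPI_Finalize();
    return 0;
}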

Design Considerations:
Before attempting to build a cluster of any kind, think about the type of problems you are
trying to solve. Different kinds of applications will actually run at different levels of
performance on different kinds of clusters. Beyond the brute force characteristics of memory
speed, I/O bandwidth, disk seek/latency time and bus speed on the individual nodes of your
cluster, the way you connect your cluster together can have a great impact on its efficiency.

Architecture:
A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone computers working together as a single, integrated computing resource. A computer node can be a single or multiprocessor system (PC, workstation, or SMP) with memory, I/O facilities, and an operating system. A cluster generally refers to two or more computers (nodes) connected together. The nodes can exist in a single cabinet or be physically separated and connected via a LAN. An interconnected (LAN-based) cluster of computers can appear as a single system to users and applications. Such a system can provide a cost-effective way to gain features and benefits (fast and reliable services) that have historically been found only on more expensive proprietary shared-memory systems. The typical architecture of a cluster is shown in the figure below.

The following are some prominent components of cluster computers:

• Multiple High Performance Computers (PCs, Workstations, or SMPs)
• State-of-the-art Operating Systems (layered or micro-kernel based)
• High Performance Networks/Switches (such as Gigabit Ethernet and Myrinet)
• Network Interface Cards (NICs)
• Fast Communication Protocols and Services (such as Active and Fast Messages)
• Cluster Middleware (Single System Image (SSI) and System Availability Infrastructure)
  - Hardware (such as Digital (DEC) Memory Channel, hardware DSM, and SMP techniques)
  - Operating System Kernel or Gluing Layer (such as Solaris MC and GLUnix)
• Applications and Subsystems
  - Applications (such as system management tools and electronic forms)
  - Runtime Systems (such as software DSM and parallel file systems)
  - Resource Management and Scheduling software (such as LSF (Load Sharing Facility) and CODINE (Computing in Distributed Networked Environments))
• Parallel Programming Environments and Tools (such as compilers, PVM (Parallel Virtual Machine), and MPI (Message Passing Interface))
• Applications
  - Sequential
  - Parallel or Distributed

The network interface hardware acts as a communication processor and is responsible for transmitting and receiving packets of data between cluster nodes via a network/switch. Communication software offers a means of fast and reliable data communication among cluster nodes and to the outside world. Often, clusters with a special network/switch like Myrinet use communication protocols such as Active Messages for fast communication among their nodes. These protocols potentially bypass the operating system and thus remove the critical communication overheads by providing direct user-level access to the network interface. The cluster nodes can work collectively, as an integrated computing resource, or they can operate as individual computers. The cluster middleware is responsible for offering the illusion of a unified system image (single system image) and availability out of a collection of independent but interconnected computers. Programming environments can offer portable, efficient, and easy-to-use tools for the development of applications; they include message-passing libraries and debuggers. It should not be forgotten that clusters can also be used for the execution of sequential or parallel applications.
Network clustering connects otherwise independent computers to work together in some
coordinated fashion. Because clustering is a term used broadly, the hardware configuration of
clusters varies substantially depending on the networking technologies chosen and the purpose
(the so-called "computational mission") of the system. Clustering hardware comes in three basic flavors: so-called "shared disk," "mirrored disk," and "shared nothing" configurations.
Shared Disk Clusters
One approach to clustering utilizes central I/O devices accessible to all computers ("nodes")
within the cluster. We call these systems shared-disk clusters as the I/O involved is typically
disk storage for normal files and/or databases. Shared-disk cluster technologies include Oracle
Parallel Server (OPS) and IBM's HACMP.
Shared-disk clusters rely on a common I/O bus for disk access but do not require shared memory. Because all nodes may concurrently write to or cache data from the central disks, a synchronization mechanism must be used to preserve coherence of the system. An independent piece of cluster software called the "distributed lock manager" assumes this role.
Shared-disk clusters support higher levels of system availability: if one node fails, other nodes need not be affected. However, higher availability comes at the cost of somewhat reduced performance in these systems, because of the overhead of using a lock manager and the potential bottlenecks of shared hardware generally. Shared-disk clusters make up for this shortcoming with relatively good scaling properties: OPS and HACMP support eight-node systems, for example.
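The lock manager's interface is specific to each cluster product, but the usage pattern it imposes can be sketched. The dlm_lock/dlm_unlock names and lock modes below are invented for illustration (no particular product's API is shown); the point is only that a node must hold the exclusive cluster-wide lock on a shared-disk resource before writing it, so that copies cached on other nodes stay coherent.

#include <stdio.h>

/* Hypothetical distributed-lock-manager sketch: dlm_lock()/dlm_unlock()
 * are invented names; the stubs just print what a real DLM would
 * coordinate cluster-wide. */
typedef enum { LOCK_SHARED, LOCK_EXCLUSIVE } lock_mode_t;

static void dlm_lock(const char *resource, lock_mode_t mode)
{
    /* A real DLM would block here until the cluster-wide lock is granted. */
    printf("lock   %s (%s)\n", resource,
           mode == LOCK_EXCLUSIVE ? "exclusive" : "shared");
}

static void dlm_unlock(const char *resource)
{
    printf("unlock %s\n", resource);
}

int main(void)
{
    /* A node takes the exclusive lock on a shared-disk block before
     * writing it, so concurrently cached copies stay coherent. */
    dlm_lock("customer_table_block_42", LOCK_EXCLUSIVE);
    /* ... write the block to the shared disk here ... */
    dlm_unlock("customer_table_block_42");
    return 0;
}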
Shared Nothing Clusters
A second approach to clustering is dubbed shared-nothing because it does not involve
concurrent disk accesses from multiple nodes. (In other words, these clusters do not require a
distributed lock manager.) Shared-nothing cluster solutions include Microsoft Cluster Server
(MSCS). MSCS is an atypical example of a shared nothing cluster in several ways. MSCS
clusters use a shared SCSI connection between the nodes, which naturally leads some people to
believe this is a shared-disk solution. But only one server (the one that owns the quorum
resource) needs the disks at any given time, so no concurrent data access occurs. MSCS
clusters also typically include only two nodes, whereas shared nothing clusters in general can
scale to hundreds of nodes.
Mirrored Disk Clusters
Mirrored-disk cluster solutions include Legato's Vinca. Mirroring involves replicating all
application data from primary storage to a secondary backup (perhaps at a remote location) for
availability purposes. Replication occurs while the primary system is active, although the
mirrored backup system -- as in the case of Vinca -- typically does not perform any work
outside of its role as a passive standby. If a failure occurs in the primary system, a failover
process transfers control to the secondary system. Failover can take some time, and
applications can lose state information when they are reset, but mirroring enables a fairly fast
recovery scheme requiring little operator intervention. Mirrored-disk clusters typically include
just two nodes.
Database Replication Clusters:
Database replication is a necessary and useful application of clusters. Many databases are read-intensive, with many more read requests made than write requests. By replicating the data across a community of nodes, it is possible to scale the number of read requests that can be handled per second in a linear fashion.

Database Replication Architecture


The figure shows a read-intensive replicating database. This kind of architecture is becoming common in modern web-serving environments. A website has large amounts of content, all stored in a MySQL database. The web servers (which are also probably clustered) make read requests to the replication nodes through a load balancer, while write requests are sent to the master node. Multi-master configurations are also common. In situations with high levels of writes, it is necessary to be creative with the architecture of the database, either allowing replication between masters or partitioning the database such that there are essentially two separate databases for different queries, for example one database for searches and another for user data.
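The read/write split described above can be sketched as a simple routing rule in front of the database: writes always go to the master, reads are spread over the replicas. The host names and the is_write_query heuristic below are hypothetical and for illustration only; in real deployments this logic usually lives in the load balancer or a database proxy.

/* Hypothetical read/write splitting for a replicated database:
 * writes -> master, reads -> replicas in round-robin order.
 * Host names and the query check are invented for illustration. */
#include <stdio.h>
#include <string.h>

static const char *master = "db-master.example";
static const char *replicas[] = { "db-replica1.example",
                                  "db-replica2.example",
                                  "db-replica3.example" };
static const int n_replicas = 3;

/* Crude heuristic: treat anything that is not a SELECT as a write. */
static int is_write_query(const char *sql)
{
    return strncmp(sql, "SELECT", 6) != 0;
}

static const char *route_query(const char *sql)
{
    static int next = 0;                  /* round-robin position */
    if (is_write_query(sql))
        return master;
    return replicas[next++ % n_replicas];
}

int main(void)
{
    const char *queries[] = {
        "SELECT title FROM articles WHERE id = 7",
        "UPDATE users SET last_login = NOW() WHERE id = 3",
        "SELECT body FROM articles WHERE id = 9",
    };
    for (int i = 0; i < 3; ++i)
        printf("%-50s -> %s\n", queries[i], route_query(queries[i]));
    return 0;
}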
Batch Processing
Batch processing systems are key in the banking industry. Good scheduling and rapid response are important if, for example, we are not to be kept waiting at the cash point while our bank checks that we have the money in our account that we just asked for.
Sometimes referred to as a compute farm, the key goal of a batch processing system is maximizing uptime and maintaining performance at peak load while minimizing cost. In this sort of situation it can be wise to save money by reducing capacity during low demand, shutting down some nodes and bringing them back up when demand will be high. To maximize efficiency, an intelligent Workload Management System (WMS) should be implemented.

Render Farms
Render farms are a special form of batch processing cluster, with less of an emphasis on responsiveness, since most of the processing jobs will take more than a minute. Low-cost hardware and the quantity of available processing power are most important. Rendering is used in the visual effects, computer modeling and CGI industries and refers to the process of creating an image from what are essentially mathematical formulae. Rendering engines provide numerous different features, which in combination can produce a scene with the desired effects.

Render Farm Architecture


Illustrated in the figure is a simple render farm. An artist working at their workstation submits an animation job to the farm, which distributes it between the nodes and returns the result to the workstation, or keeps the files on a network share for convenience. Alternatively, a single frame can be submitted and distributed between all the nodes, as shown in the sketch below; this can either make use of message passing or split the frame into smaller chunks and pass each one to a different node.
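The frame-splitting mode can be sketched with MPI: each rank renders its own block of scanlines and rank 0 gathers the pieces into the complete frame. The render_row function is a placeholder for a real rendering engine and the frame size is arbitrary; this is an illustrative decomposition, not the protocol of any particular render-farm product.

/* Sketch of splitting one frame across cluster nodes with MPI: each
 * rank renders a contiguous block of rows, rank 0 gathers them.
 * render_row() stands in for a real rendering engine. */
#include <stdlib.h>
#include <mpi.h>

#define WIDTH  640
#define HEIGHT 480      /* assumed divisible by the number of ranks */

static void render_row(int y, unsigned char *row)
{
    for (int x = 0; x < WIDTH; ++x)
        row[x] = (unsigned char)((x ^ y) & 0xff);   /* dummy "pixels" */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows_per_rank = HEIGHT / size;
    unsigned char *my_rows = malloc((size_t)rows_per_rank * WIDTH);
    unsigned char *frame = (rank == 0) ? malloc((size_t)HEIGHT * WIDTH) : NULL;

    /* Each rank renders only its own block of scanlines. */
    for (int i = 0; i < rows_per_rank; ++i)
        render_row(rank * rows_per_rank + i, my_rows + (size_t)i * WIDTH);

    /* Rank 0 collects all the blocks into the complete frame. */
    MPI_Gather(my_rows, rows_per_rank * WIDTH, MPI_UNSIGNED_CHAR,
               frame,   rows_per_rank * WIDTH, MPI_UNSIGNED_CHAR,
               0, MPI_COMM_WORLD);

    free(my_rows);
    free(frame);
    MPI_Finalize();
    return 0;
}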
Software Development Architecture - Compile Farms
A compile farm takes a very similar approach to a render farm; the difference is in how the code being developed is handled. Compile farms provide the opportunity to increase the speed of compilation of a complete program. Although the individual file a developer is working on may take only a matter of seconds to compile, when the entire program is put together it may take an hour or more! Additionally, it is important to provide the means to develop embedded applications, where compiling on the host may be painfully slow. Ainkaboot believe in making things as simple as possible, so our systems can be integrated into your existing code management system, or we can deploy an entirely new system with the cluster managing your CVS (Concurrent Versions System) and development, verification, pre-production and production environments as well.

Software Development Architecture

MPI Architecture
MPI stands for Message Passing Interface, and numerous implementations exist, all with their own particular advantages. However, an MPI standard has been agreed upon, and Ainkaboot support all the open-source MPI implementations available.
The architecture of an MPI cluster depends on the specific application, and many supercomputing clusters are designed specifically with a couple of applications in mind. However, there are some general points to note.
A key feature of MPI systems is low-latency networks for inter-node communication. As such, the network switching technology is important in determining the eventual performance of the system. Additionally, the application must be designed to take advantage of the system and should also take advantage of the processor architecture in use.

MPI Architecture

Cluster Computing Models:

Workload Consolidation/Common Management Domain Cluster


This chart shows a simple arrangement of heterogeneous server tasks, but all are running on a single physical system (in different partitions, with different granularities of system resources allocated to them). One of the major benefits offered by this model is that of convenient and simple systems management: a single point of control. Additionally, this consolidation model offers the benefit of delivering a high quality of service (resources) in a cost-effective manner.

High Availability Cluster Model


This cluster model expands on the simple load-balancing model shown in the next chart. Not only does it provide for load balancing, it also delivers high availability through redundancy of applications and data. This, of course, requires at least two nodes: a primary and a backup. In this model, the nodes can be active/passive or active/active. In the active/passive scenario, one server is doing most of the work while the second server is spending most of its time on replication work. In the active/active scenario, both servers are doing primary work and both are accomplishing replication tasks, so that each server always "looks" just like the other. In both instances, instant failover is achievable should the primary node (or the primary node for a particular application) experience a system or application outage. As with the load-balancing model, this model easily scales up (through application replication) as the overall volume of users and transactions goes up. The scale-up happens through simple application replication, requiring little or no application modification or alteration.

Load-Balancing Cluster Model


With this clustering model, the number of users (or the number of transactions) can be
allocated (via a load-balancing algorithm) across a number of application instances (here, we're
showing Web application server (WAS) application instances) so as to increase transaction
throughput. This model easily scales up as the overall volume of users and transactions goes
up. The scale-up happens through simple application replication only, requiring little or no
application modification or alteration.
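A minimal sketch of such a load-balancing algorithm is shown below, using a least-connections rule to decide which application instance receives the next request. The instance names and counters are hypothetical; a real load balancer would also handle health checks, weights, and session affinity.

/* Hypothetical least-connections load balancer: each incoming request
 * goes to the application instance currently serving the fewest
 * active connections. */
#include <stdio.h>

#define N_INSTANCES 4

static const char *instance[N_INSTANCES] = { "was-1", "was-2", "was-3", "was-4" };
static int active[N_INSTANCES];           /* open connections per instance */

static int pick_instance(void)
{
    int best = 0;
    for (int i = 1; i < N_INSTANCES; ++i)
        if (active[i] < active[best])
            best = i;
    return best;
}

int main(void)
{
    /* Simulate dispatching ten requests. */
    for (int r = 0; r < 10; ++r) {
        int i = pick_instance();
        active[i]++;                      /* connection opened             */
        printf("request %2d -> %s\n", r, instance[i]);
        if (r % 3 == 0)
            active[i]--;                  /* pretend some requests finish  */
    }
    return 0;
}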

High-performance Parallel Application Cluster — Commercial Model


This clustering model demonstrates the capacity to deliver extreme database scalability within the commercial application arena. In this environment, "shared nothing" or "shared disk" might be the requirement of the day, and either can be accommodated. You would implement this model in commercial parallel database situations, such as DB2 UDB EEE, Informix XPS or Oracle Parallel Server. As with the technical high-performance model shown in the next chart, this high-performance commercial clustering model requires that the application be "decomposed" so that segments of its tasks can safely be run in parallel.

High-performance Parallel Application Cluster — Technical Model


In this clustering model, extreme vertical scalability is achievable for a single large computing
task. The logic shown here is essentially based on the message passing interface (MPI)
standard. This model would best be applied to scientific and technical tasks, such as computing
artificial intelligence data. In this high performance model, the application is actually
"decomposed" so that segments of its tasks can safely be run in parallel.

Difference between Cluster Computing and Grid Computing:

When two or more computers are used together to solve a problem, it is called a computer cluster. There are several ways of implementing a cluster; Beowulf is perhaps the best-known way to do it, but basically it is just cooperation between computers in order to solve a task or a problem. Cluster computing is then simply what you do when you use a computer cluster.
Grid computing is similar to cluster computing: it makes use of several computers, connected in some way, to solve a large problem. There is often some confusion about the difference between grid and cluster computing.
• The big difference is that a cluster is homogeneous while grids are heterogeneous.
• The computers that are part of a grid can run different operating systems and have different hardware, whereas the cluster computers all have the same hardware and OS.
• A grid can make use of spare computing power on a desktop computer, while the machines in a cluster are dedicated to working as a single unit and nothing else.
• A grid is inherently distributed by its nature over a LAN, metropolitan area network, or WAN. On the other hand, the computers in a cluster are normally contained in a single location or complex.
• Another difference lies in the way resources are handled. In the case of a cluster, the whole system (all nodes) behaves like a single system and resources are managed by a centralized resource manager. In the case of a grid, every node is autonomous, i.e. it has its own resource manager and behaves like an independent entity.
Characteristics of Grid Computing:
• Loosely coupled (decentralization)
• Diversity and dynamism
• Distributed job management and scheduling
Characteristics of Cluster Computing:
• Tightly coupled systems
• Single system image
On the Windows operating system, compute clusters are supported by Windows Compute Cluster Server 2003 and grid computing is supported by the Digipede Network™.

Cluster Computing on Windows

Cluster computing on Windows is provided by Windows Compute Cluster Server 2003 (CCS) from Microsoft. CCS is a 64-bit version of the Windows Server 2003 operating system packaged with various software components that greatly ease the management of traditional cluster computing.
With a dramatically simplified cluster deployment and management experience, CCS removes many of the obstacles imposed by other solutions. CCS enables users to integrate with existing Windows infrastructure, including Active Directory and SQL Server.
CCS supports a cluster of servers that includes a single head node and one or more compute nodes. The head node controls and mediates all access to the cluster resources and is the single point of management, deployment, and job scheduling for the compute cluster. All nodes running in the cluster must have a 64-bit CPU.
How Does It Work?
A user submits a job to the head node. The job identifies the application to run on the cluster.
The job scheduler on the head node assigns each task defined by the job to a node and then
starts each application instance on the assigned node.
Results from each of the application instances are returned to the client via files or databases.
Application parallelization is provided by Microsoft MPI (MSMPI), which supports
communication between tasks running in concurrent processes. MSMPI is a “tuned” MPI
implementation, optimized to deliver high performance on the 64-bit Windows Server OS.
MSMPI calls can be placed within an application, and the mpiexec.exe utility is available to
control applications from the command-line. Because MSMPI enables communication between
the concurrently executing application instances, the nodes are often connected by a high-speed interconnect such as Gigabit Ethernet or InfiniBand.
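As a rough illustration of the kind of MSMPI application such a job might run, the sketch below uses standard MPI calls (which MSMPI implements) in a simple coordinator/worker pattern: rank 0 hands out task numbers and the other ranks compute and return results. This is generic MPI code, not CCS's internal scheduling mechanism; on a CCS cluster it would typically be launched through the job scheduler using mpiexec (exact command-line options depend on the installation). The do_task function is a stand-in for real work.

/* Generic MPI coordinator/worker sketch (MSMPI implements these same
 * standard calls): rank 0 assigns one task id to each worker; each
 * worker computes a result and sends it back. */
#include <stdio.h>
#include <mpi.h>

static double do_task(int task_id)
{
    return (double)task_id * task_id;     /* stand-in for real work */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Coordinator: send a task id to each worker, then collect results. */
        for (int w = 1; w < size; ++w)
            MPI_Send(&w, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        for (int w = 1; w < size; ++w) {
            double result;
            MPI_Recv(&result, 1, MPI_DOUBLE, w, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("result from worker %d: %f\n", w, result);
        }
    } else {
        /* Worker: receive a task, compute, send the result back. */
        int task;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double result = do_task(task);
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}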

Advantages of cluster computing:

Reduced Cost: The price of off-the-shelf consumer desktops has plummeted in recent years,
and this drop in price has corresponded with a vast increase in their processing power and
performance. The average desktop PC today is many times more powerful than the first
mainframe computers.

Processing Power: The parallel processing power of a high-performance cluster can, in many
cases, prove more cost effective than a mainframe with similar power. This reduced price per
unit of power enables enterprises to get a greater ROI from their IT budget.
Improved Network Technology: Driving the development of computer clusters has been a
vast improvement in the technology related to networking, along with a reduction in the price.
Computer clusters are typically connected via a single virtual local area network (VLAN), and
the network treats each computer as a separate node. Information can be passed throughout
these networks with very little lag, ensuring that data doesn’t bottleneck between nodes.
Scalability: Perhaps the greatest advantage of computer clusters is the scalability they offer.
While mainframe computers have a fixed processing capacity, computer clusters can be easily
expanded as requirements change by adding additional nodes to the network.
Availability: When a mainframe computer fails, the entire system fails. However, if a node in
a computer cluster fails, its operations can be simply transferred to another node within the
cluster, ensuring that there is no interruption in service.

Representative Cluster Systems:

There are many projects investigating the development of supercomputing-class machines using commodity off-the-shelf components:

• Network of Workstations (NOW) project at the University of California, Berkeley.
• High Performance Virtual Machine (HPVM) project at the University of Illinois at Urbana-Champaign.
• Beowulf Project at the Goddard Space Flight Center, NASA.
• Solaris-MC project at Sun Labs, Sun Microsystems, Inc., Palo Alto, CA.

Applications of cluster computing:

The class of applications that a cluster can typically cope with would be considered grand challenge or supercomputing applications. GCAs (Grand Challenge Applications) are fundamental problems in science and engineering with broad economic and scientific impact; they are generally considered intractable without the use of state-of-the-art parallel computers. GCAs are distinguished by the scale of their resource requirements, such as processing time, memory, and communication needs.

A typical example of a grand challenge problem is the simulation of some phenomena that
cannot be measured through experiments. GCAs include massive crystallographic and
microtomographic structural problems, protein dynamics and biocatalysts, relativistic quantum
chemistry of actinides, virtual materials design and processing, global climate modeling, and
discrete event simulation.

Some examples of GCAs:

• Simulation of phenomena that cannot be measured through experiments
• Crystallographic problems
• Microtomographic structural problems
• Relativistic quantum chemistry of actinides
• Discrete event simulation
• Protein dynamics
• Biocatalysts

Disadvantages:

• Good knowledge of parallel programming is required
• Hardware needs to be adjusted to the specific application (network topology)
• More complex administration

References:

• "Develop Turbocharged Apps for Windows Compute Cluster Server"
  http://msdn.microsoft.com/msdnmag/issues/06/04/ClusterComputing/default.aspx
• "Deploying and Managing Microsoft Windows Compute Cluster Server 2003"
  http://technet2.microsoft.com/WindowsServer/en/Library/9330fdf8-c680-425f-8583-c46ee77306981033.mspx?mfr=true
• "Types of Parallel Computing Jobs"
  http://technet2.microsoft.com/WindowsServer/en/Library/23afa6ab-bdaa-4c8d-9d89-44ac67196d5b1033.mspx?mfr=true
• "Distributed Computing with the Digipede Network™"
  http://www.digipede.net/downloads/Digipede_Network_Whitepaper.pdf
• "The Digipede Framework™ Software Development Kit (SDK)"
  http://www.digipede.net/downloads/Digipede_SDK_Whitepaper.pdf
