
Advance Computing Technology
INDUS Institute of Technology and Engineering
Digitally signed by Vishal Shah (2011.10.15)

Practical 1
Aim: To study and practice on the Beowulf project
Theory:
What makes a cluster a Beowulf?
Cluster is a widely-used term meaning independent computers combined into a unified system through software
and networking. At the most fundamental level, when two or more computers are used together to solve a
problem, it is considered a cluster. Clusters are typically used for High Availability (HA) for greater reliability or
High Performance Computing (HPC) to provide greater computational power than a single computer can provide.
Beowulf Clusters are scalable performance clusters based on commodity hardware, on a private system network,
with open source software (Linux) infrastructure. The designer can improve performance proportionally with
added machines. The commodity hardware can be any of a number of mass-market, stand-alone compute nodes as
simple as two networked computers each running Linux and sharing a file system or as complex as 1024 nodes
with a high-speed, low-latency network.
Class I clusters are built entirely using commodity hardware and software using standard technology such as
SCSI, Ethernet, and IDE. They are typically less expensive than Class II clusters which may use specialized
hardware to achieve higher performance.
Common uses are traditional technical applications such as simulations, biotechnology, and petro-clusters;
financial market modeling, data mining and stream processing; and Internet servers for audio and games.
Beowulf programs are usually written using languages such as C and FORTRAN. They use message passing to
achieve parallel computations. See Beowulf History for more information on the development of the Beowulf
architecture.
One question that is commonly enough asked on the Beowulf list is "How hard is it to build or care for a
beowulf?"
Mind you, it is quite possible to go into beowulfery with no more than a limited understanding of networking, a
handful of machines (or better, a pocketful of money) and a willingness to learn, and over the years I've watched
and sometimes helped as many groups and individuals (including myself) in many places went from a state of
near-total ignorance to a fair degree of expertise on little more than guts and effort.
However, this sort of school is the school of hard (and expensive!) knocks; one ought to be able to do better and
not make the same mistakes and reinvent the same wheels over and over again, and this book is an effort to
smooth the way so that you can.
One place that this question is often asked is in the context of trying to figure out the human costs of beowulf
construction or maintenance, especially if your first cluster will be a big one and has to be right the first time.
After all, building a cluster of more than 16 or so nodes is an increasingly serious proposition. It may well be that
beowulfs are ten times cheaper than a piece of "big iron" of equivalent power (per unit of aggregate compute
power by some measure), but what if it costs ten times as much in human labor to build or run? What if it uses
more power or cooling? What if it needs more expensive physical infrastructure of any sort?
These are all very valid concerns, especially in a shop with limited human resources, little Linux expertise,
or limited space, cooling, or power. Building a cluster with four nodes, eight nodes, perhaps even sixteen nodes can
often be done so cheaply that it seems "free", because the opportunity cost of the resources required is so
minimal and the benefits so much greater than the costs. Building a cluster of 256 nodes without thinking hard
about cost issues, infrastructure, and cost-benefit analysis is very likely to have a very sad outcome, the least of
which is that the person responsible will likely lose their job.
If that person (who will be responsible) is you, then by all means read on. I cannot guarantee that the following
sections will keep you out of the unemployment line, but I'll do my best.
Projects:
Here is a partial list of other sites that are working on Beowulf-related projects:
Grendel (Clemson University): PVFS and system development
Drexel (Drexel University): cyborg cluster
Stone SouperComputer (Oak Ridge National Lab, ORNL): a 126-node cluster at zero dollars per node
Naegling (CalTech): Beowulf Linux cluster
Loki (Los Alamos): Beowulf cluster with an especially cool logo
theHive (Goddard Space Flight Center): one of the large Beowulf clusters at Goddard
AENEAS (University of California, Irvine)


Practical 2
Aim: To study the Berkeley NOW project.
Theory:
The Berkeley Network of Workstations (NOW) project seeks to harness the power of clustered machines
connected via high-speed switched networks. By leveraging commodity workstations and operating systems,
NOW can track industry performance increases. The key enabler for NOW is the advent of the killer switch-based,
high-bandwidth local-area network. This technological evolution allows NOW to support a variety of disparate workloads,
including parallel, sequential, and interactive jobs, as well as scalable web services (including the world's fastest
web search engine) and commercial workloads such as NOW-Sort, the world's fastest disk-to-disk sort. On April
30th, 1997, the NOW team achieved over 10 GFLOPS on the LINPACK benchmark, propelling the NOW into
the top 200 fastest supercomputers in the world. The NOW Project is sponsored
by a number of different contributors.

The Berkeley NOW project is building system support for using a network of workstations (NOW) to act as a
distributed supercomputer on a building-wide scale. Because of the volume production, commercial workstations
today offer much better price/performance than the individual nodes of MPPs. In addition, switch-based networks
such as ATM will provide cheap, high-bandwidth communication. This price/performance advantage is increased
if the NOW can be used both for the tasks traditionally run on workstations and for these large parallel programs.
In conjunction with complementary research efforts in operating systems and communication architecture, we
hope to demonstrate a practical 100 processor system in the next few years that delivers at the same time
(1) better cost-performance for parallel applications than a massively parallel processing architecture (MPP) and
(2) better performance for sequential applications than an individual workstation. This goal requires combining
elements of workstation and MPP technology into a single system. If successful, this project has the
potential to redefine the high end of the computing industry.
To realize this project, we are conducting research and development into network interface hardware, fast
communication protocols, distributed file systems, and distributed scheduling and job control.
The NOW project is being conducted by the Computer Science Division at the University of California at
Berkeley.
The core hardware/software infrastructure for the project will include 100 SUN Ultrasparcs and 40 SUN
Sparcstations running Solaris, 35 Intel PC's running Windows NT or a PC UNIX variant, and between 500-1000
disks, all connected by a Myrinet switched network. Most of this hardware/software has been donated by the
companies involved. In addition, the Computer Science Division has received a donation of more than 300 HP
workstations, which we are also planning to integrate into the NOW project.
Using GLUnix
Taking advantage of NOW functionality is straightforward. Simply ensure that /usr/now/bin is in your shell's
PATH, and /usr/now/man in the MANPATH. To start taking advantage of GLUnix functionality, log into
now.cs.berkeley.edu and start a glush shell. While the composition of the GLUnix partition may change over time,
we make every effort to guarantee that now.cs is always running GLUnix. The glush shell runs most commands
remotely on the lightly loaded nodes in the cluster.
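The setup just described amounts to two path additions before launching the shell; a sketch in csh syntax (glush is tcsh-derived), using the paths given above:

```shell
# Make the GLUnix binaries and man pages visible (csh/tcsh syntax).
setenv PATH ${PATH}:/usr/now/bin
setenv MANPATH ${MANPATH}:/usr/now/man

# Log into now.cs.berkeley.edu, then start the GLUnix shell:
glush
```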


Load-balanced GLUnix shell scripts are available. The syntax is identical to the csh command language. Simply
begin your shell scripts with #!/usr/now/bin/glush. Note that you do not have to be running glush as your
interactive shell in order to run load-balanced shell scripts.

Utility Programs
We have built a number of utility programs for GLUnix. All of these programs are located in /usr/now/bin. Man
pages are available for all of them by running man from a shell. A brief
description of each utility program follows:
glush:

The GLUnix shell is a modified version of tcsh. Most jobs submitted to the shell are load
balanced among GLUnix machines. However, some jobs must be run locally, since GLUnix
does not provide completely transparent TTY support and since IO bandwidth to stdin, stdout,
and stderr is limited by TCP bandwidth. The shell automatically runs a number of these jobs
locally; users may, however, customize this list by adding programs to the glunix_runlocal shell
variable. The variable indicates to glush those programs which should be run locally.

glumake:

A modified version of GNU's make program. A -j argument specifies the degree of parallelism
for the make. The degree of parallelism defaults to the number of nodes available in the cluster.

glurun:

This program runs the specified program on the GLUnix cluster. For example, glurun bigsim
will run bigsim on the least loaded machine in the GLUnix cluster. You can run a parallel
program on the NOW by specifying the parameter -N, where N is a number representing the
degree of parallelism you wish. Thus glurun -5 bigsim will run bigsim on the 5 least-loaded nodes.

glustat:

Prints the status of all machines in the GLUnix cluster.

glups:

Similar to Unix ps but only prints information about GLUnix processes.

glukill:

Sends an arbitrary signal (defaults to SIGTERM) to a specified GLUnix process.

gluptime:

Similar to Unix uptime, reporting on how long the system has been up and the current system
load.

GLUnix Implementation Status


The following functionality is implemented in NOW-1:
Remote Execution: Jobs can be started on any node in the GLUnix cluster. A single job may spawn multiple
worker processes on different nodes in the system.

Load Balancing: GLUnix maintains imprecise information on the load of each machine in the cluster. The
system farms out jobs to the node which it considers least loaded at request time.

Signal Propagation: A signal sent to a process is multiplexed to all worker processes comprising the GLUnix
process.

Coscheduling: Jobs spawned to multiple nodes can be gang scheduled to achieve better performance. The
current coscheduling time quantum is 1 second.
IO Redirection: Output to stdout or stderr is piped back to the startup node. Characters sent to stdin are
multiplexed to all worker processes. Output redirection is limited by network bandwidth.



Practical 3
Aim: A Sample GLUnix Program
Theory:
Each program running under GLUnix has a startup process, which runs in your shell, and a number of child
processes, which run on remote nodes. There must be at least one child process, and there may be up to one for each
node currently running GLUnix. The startup process is responsible for routing signal information (for example, if
you type ^Z or ^C) and input/output to the child processes. The child processes make up the program itself. If
there is more than one child, the program is parallel; otherwise it is sequential.
Here is the code and Makefile for a sample program which runs under GLUnix (use gmake with this Makefile).
This routine provides the code for both the startup and child processes. The distinction between the two kinds of
processes is made using the Glib_AmIStartup() library call.
Program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "glib/types.h"
#include "glib.h"
int
main(int argc, char **argv)
{
int numNodes;
VNN vnn;
if(!Glib_Initialize()) {
fprintf(stderr,"Glib_Initialize failed\n");
exit(-1);
}
if (argc > 1) {
numNodes = atoi(argv[1]);
}
else {
numNodes = 2;
}
if (Glib_AmIStartup()) {
/* Startup process runs here */
printf("Startup is spawning %d children\n", numNodes);
Glib_Spawnef(numNodes, GLIB_SPAWN_OUTPUT_VNN,
argv[0], argv, environ);
/* ... the remainder of the sample (child-process code and barrier) is omitted in the original ... */
}
return 0;
}
The Makefile for this program (if you call it test.c) is:
CC = gcc
CFLAGS = -Wall -g

TARGET = test
SRCS = test.c
LIBS = -lglunix -lam2 -lsocket -lnsl
MANS = test.1
MANHOME = ../../man/man1
BINHOME = ../../bin/sun4-solaris2.4-gamtcp
LIBPATH = /usr/now/lib
INCLUDEPATH = /usr/now/include/
###############################################################
LLIBPATH = $(addprefix -L,$(LIBPATH))
RLIBPATH = $(addprefix -R,$(LIBPATH))
INCPATH = $(addprefix -I,$(INCLUDEPATH))
all: $(TARGET)
$(TARGET): $(SRCS)
$(CC) $(CFLAGS) -o $(TARGET) $(SRCS) $(RLIBPATH) \
$(LLIBPATH) $(INCPATH) $(LIBS)
clean:
rm -f $(TARGET) core *~ *.o
install: $(TARGET) installman
cp $(TARGET) $(BINHOME)
installman:
cp $(MANS) $(MANHOME)
Output from this program should look something like this (though the order of the output lines
may vary):
% ./test
Startup is spawning 2 children
1:***** I am a child process
1:***** VNN: 1
1:***** Degree of program parallelism: 2
1:***** Total Nodes in system: 14
1:***** Doing Barrier
0:***** I am a child process
0:***** VNN: 0
0:***** Degree of program parallelism: 2
0:***** Total Nodes in system: 14
0:***** Child 0 is sleeping
0:***** Doing Barrier
1:***** Done with Barrier
0:***** Done with Barrier
%



Practical 4
Aim: To study and practice Alchemi grid framework.

Theory:
1. Introduction and Concepts

This section gives you an introduction to how Alchemi implements the concept of grid computing and
discusses concepts required for using Alchemi. Some key features of the framework are highlighted along
the way.
1.1. The Network is the Computer

The idea of meta-computing - the use of a network of many independent computers as if they were one
large parallel machine, or virtual supercomputer - is very compelling since it enables supercomputer-scale
processing power to be had at a fraction of the cost of traditional supercomputers.

While traditional virtual machines (e.g. clusters) have been designed for a small number of tightly coupled
homogeneous resources, the exponential growth in Internet connectivity allows this concept to be applied
on a much larger scale. This, coupled with the fact that desktop PCs in corporate and home environments
are heavily underutilized (typically only one-tenth of the processing power is used), has given rise to interest in
harnessing the vast amounts of processing power available in the form of spare CPU cycles on
Internet- or intranet-connected desktops. This new paradigm has been dubbed Grid Computing.

1.2. How Alchemi Works


There are four types of distributed components (nodes) involved in the construction of Alchemi grids and
execution of grid applications: Manager, Executor, User & Cross-Platform Manager.


A grid is created by installing Executors on each machine that is to be part of the grid and linking them to a
central Manager component. The Windows installer that comes with the Alchemi distribution, together with minimal
configuration, makes it very easy to set up a grid.

An Executor can be configured to be dedicated (meaning the Manager initiates thread execution directly) or
non-dedicated (meaning that thread execution is initiated by the Executor.) Non-dedicated Executors can work
through firewalls and NAT servers since there is only one-way communication between the Executor and
Manager. Dedicated Executors are more suited to an intranet environment and non-dedicated Executors are
more suited to the Internet environment.

Users can develop, execute and monitor grid applications using the .NET API and tools which are part of the
Alchemi SDK. Alchemi offers a powerful grid thread programming model which makes it very easy to develop
grid applications and a grid job model for grid-enabling legacy or non-.NET applications.

An optional component (not shown) is the Cross Platform Manager web service which offers interoperability
with custom non-.NET grid middleware.

2. Installation, Configuration and Operation


This section documents the installation, configuration and operation of the various parts of the framework for
setting up Alchemi grids. The various components can be downloaded from:
2.1. Common Requirements


Microsoft .NET Framework 1.1
2.2. Manager
The Manager should be installed on a stable and reasonably capable machine. The Manager requires:
SQL Server 2000 or MSDE 2000
If using SQL Server, ensure that SQL Server authentication is enabled. Otherwise, follow these instructions to
install and prepare MSDE 2000 for Alchemi. Make a note of the system administrator (sa) password in either
case. [Note: SQL Server / MSDE do not necessarily need to be installed on the same machine as the
Manager.]

The Alchemi Manager can be installed in two modes:

As a normal Windows desktop application

As a Windows service (supported only on Windows NT/2000/XP/2003)

To install the Manager as a Windows application, use the Manager Setup installer. For service-mode
installation, use the Manager Service Setup. The configuration steps are the same for both modes. In the case of
service-mode installation, the Alchemi Manager Service is installed and configured to run automatically on Windows
start-up. After installation, the standard Windows service control manager can be used to control the
service. Alternatively, the Alchemi ManagerServiceController program can be used. The Manager service
controller is a graphical interface that is nearly identical to the normal Manager application.

Install the Manager via the Manager installer. Use the sa password noted previously to install the database
during the installation.

Configuration & Operation



The Manager can be run from the desktop or Start -> Programs -> Alchemi -> Manager -> Alchemi Manager.
The database configuration settings used during installation automatically appear when the Manager is first
started.

Click the "Start" button to start the Manager.

When closed, the Manager is minimised to the system tray.

Under service-mode operation, the GUI shown in fig. 3 is used to start / stop the Manager service. The service
will continue to operate even after the service controller application exits.
Manager Logging
The manager logs its output and errors to a log file called alchemi-manager.log. This can be used to debug
the manager / report errors / verify the manager operation. The log file is placed in the dat directory under
the installation directory.


2.3. Role-Based Security
Every program connecting to the Manager must supply a valid username and password. Three default
accounts are created during installation: executor (password: executor), user (password: user) and admin
(password: admin) belonging to the 'Executors', 'Users' and 'Administrators' groups respectively.

Users are administered via the 'Users' tab of the Alchemi Console (located in the Alchemi SDK). Only
Administrators have permissions to manage users; you must therefore initially log in with the default admin
account.

The Console lets you add users, modify their group membership and change passwords.

The Users group (grp_id = 3) is meant for users executing grid applications.

The Executors group (grp_id = 2) is meant for Alchemi Executors. By default, Executors attempting to connect to
the Manager will use the executor account. If you do not wish Executors to connect anonymously, you can change
the password for this account.

You should change the default admin password for production use.
2.4. Cross Platform Manager
The Cross Platform Manager (XPManager) requires:
Internet Information Services (IIS)
ASP.NET
Installation
Install the XPManager web service via the Cross Platform Manager installer.


Configuration
If the XPManager is installed on a different machine than the Manager, or if the default port of the Manager is
changed, the web service's configuration must be modified. The XPManager is configured via the ASP.NET
Web.config file located in the installation directory (wwwroot\Alchemi\CrossPlatformManager by default):

<appSettings>
<add key="ManagerUri" value="tcp://localhost:9000/Alchemi_Node" />
</appSettings>
Operation
The XPManager web service URL is of the format
http://[host_name]/[installation_path]
The default is therefore
http://[host_name]/Alchemi/CrossPlatformManager
The web service interfaces with the Manager. The Manager must therefore be running and started for the web
service to work.
2.5. Executor
Installation
The Alchemi Executor can be installed in two modes:
As a normal Windows desktop application
As a Windows service (supported only on Windows NT/2000/XP/2003)
To install the Executor as a Windows application, use the Executor Setup installer. For service-mode installation,
use the Executor Service Setup. The configuration steps are the same for both modes. In the case of service-mode installation,
the Alchemi Executor Service is installed and configured to run automatically on Windows start-up. After
installation, the standard Windows service control manager can be used to control the service. Alternatively, the
Alchemi ExecutorServiceController program can be used. The Executor service controller is a graphical interface
that looks very similar to the normal Executor application.
Install the Executor via the Executor installer and follow the on-screen instructions.

Configuration & Operation


The Executor can be run from the desktop or Start -> Programs -> Alchemi -> Executor -> Alchemi Executor.
The Executor is configured from the application itself.
You need to configure two aspects of the Executor:

The host and port of the Manager to connect to.



Dedicated / non-dedicated execution. A non-dedicated Executor executes grid threads on a voluntary
basis (it requests threads to execute from the Manager), while a dedicated Executor is always executing
grid threads (it is directly provided grid threads to execute by the Manager). A non-dedicated Executor
works behind firewalls.

Click the "Connect" button to connect the Executor to the Manager.

If the Executor is configured for non-dedicated execution, you can start executing by clicking the "Start
Executing" button in the "Manage Execution" tab.


The Executor only utilises idle CPU cycles on the machine and does not impact the CPU usage of running
programs. When closed, the Executor sits in the system tray. Other options, such as the interval of the executor
heartbeat (i.e. the time between pings to the Manager), can be configured via the options tab.

Under service-mode operation, the GUI shown in fig. 8 is used to start / stop the Executor service. The
service will continue to operate even after the service controller application exits.
Executor Logging
The executor logs its output and errors to a log file called alchemi-executor.log. This can be used to debug the
executor / report errors / verify the executor operation. The log file is placed in the dat directory under the
installation directory.
2.6. Software Development Kit


The SDK can be unzipped to a convenient location. It contains the following:
Alchemi Console
The Console (Alchemi.Console.exe) is a grid administration and monitoring tool. It is located in the bin directory.
The 'Summary' tab shows system statistics and a real-time graph of power availability and usage. The
'Applications' tab lets you monitor running applications. The 'Executors' tab provides information on
Executors. The 'Users' tab lets you manage users.

Alchemi.Core.dll

Alchemi.Core.dll is a class library for creating grid applications to run on Alchemi grids. It is located in the bin
directory. It must be referenced by all your grid applications. (For more on developing grid applications,
please see section 3, Grid Programming.)

3. Grid Programming
This section is a guide to developing Alchemi grid applications.
3.1. Introduction to Grid Software
For the purpose of grid application development, a grid can be viewed as an aggregation of multiple machines
(each with one or more CPUs) abstracted to behave as one "virtual" machine with multiple CPUs. However, grid
implementations differ in the way they implement this abstraction and one of the key differentiating features of
Alchemi is the way it abstracts the grid, with the aim to make the process of developing grid software as easy as
possible.
Due to the nature of the grid environment (loosely coupled, heterogeneous resources connected over an
unreliable, high-latency network), grid applications have the following features:
They can be parallelised into a number of independent computation units
Work units have a high computation time vs. communication time ratio
Alchemi supports two models for parallel application composition.
Coarse-Grained Abstraction: File-Based Jobs
Traditional grid implementations have only offered a high-level abstraction of the virtual machine, where the
smallest unit of parallel execution is a process. The specification of a job to be executed on the grid at the most
basic level consists of input files, output files and an executable (process). In this scenario, writing software to
run on a grid involves dealing with files, an approach that can be complicated and inflexible.
Fine-Grained Abstraction: Grid Threads
On the other hand, the primary programming model supported by Alchemi offers a lower-level (and hence
more powerful) abstraction of the underlying grid by providing a programming model that is object-oriented and
that imitates traditional multi-threaded programming.
The smallest unit of parallel execution in this case is a grid thread (.NET object), where a grid thread is
programmatically analogous to a "normal" thread (without inter-thread communication).
The grid application developer deals only with grid thread and grid application .NET objects, allowing him/her to
concentrate on the application itself without worrying about the "plumbing" details. Furthermore, abstraction at
this level allows the use of an elegant programming model with clean interfacing between remote and local code.
Note: Hereafter, applications and threads can be taken to mean grid applications and grid threads respectively,
unless stated otherwise.


Grid Jobs vs. Grid Threads
Support for execution of grid jobs (programmatically as well as declaratively) is present for the following
reasons:
Grid-enabling legacy or non-.NET applications
Interoperability with grid middleware on other platforms (via a web services interface)
The grid thread model is preferred due to its ease of use, power and flexibility and should be used for new
applications, while the grid job model should be used for grid-enabling legacy/non-.NET applications or by non-.NET middleware interoperating with Alchemi.



Practical 5
Aim: To develop a Pi calculator using Alchemi.
Program:
Manager:
Plouffe_Bellard.cs
using System;
namespace Alchemi.Examples.PiCalculator
{
public class Plouffe_Bellard
{
public Plouffe_Bellard() {}
private static int mul_mod(int a, int b, int m)
{
return (int) (((long) a * (long) b) % m);
}
/* return the inverse of x mod y */
private static int inv_mod(int x, int y)
{
int q,u,v,a,c,t;
u=x;
v=y;
c=1;
a=0;
do
{
q=v/u;
t=c;
c=a-q*c;
a=t;
t=u;
u=v-q*u;
v=t;
} while (u!=0);
a=a%y;
if (a<0)
{
a=y+a;
}
return a;
}
/* return (a^b) mod m */
private static int pow_mod(int a, int b, int m)
{
int r, aa;
r=1;
aa=a;
while (true)
{
if ((b & 1) != 0)
{
r = mul_mod(r, aa, m);
}
b = b >> 1;
if (b == 0)
{
break;
}
aa = mul_mod(aa, aa, m);
}
return r;
}
/* return true if n is prime */
private static bool is_prime(int n)
{
if ((n % 2) == 0)
{
return false;
}
int r = (int) Math.Sqrt(n);
for (int i = 3; i <= r; i += 2)
{
if ((n % i) == 0)
{
return false;
}
}
return true;
}
/* return the prime number immediately after n */
private static int next_prime(int n)
{
do
{
n++;
} while (!is_prime(n));
return n;
}
public String CalculatePiDigits(int n)
{
int av, vmax, num, den, s, t;
int N = (int) ((n + 20) * Math.Log(10) / Math.Log(2));
double sum = 0;
for (int a = 3; a <= (2 * N); a = next_prime(a))
{
vmax = (int) (Math.Log(2 * N) / Math.Log(a));
av = 1;
for (int i = 0; i < vmax; i++)
{
av = av * a;
}
s = 0;
num = 1;
den = 1;
int v = 0;
int kq = 1;
int kq2 = 1;
for (int k = 1; k <= N; k++)
{
t = k;
if (kq >= a)
{
do
{
t = t / a;
v--;
} while ((t % a) == 0);
kq = 0;
}
kq++;
num = mul_mod(num, t, av);
t = 2 * k - 1;
if (kq2 >= a)
{
if (kq2 == a)
{
do
{
t = t / a;
v++;
} while ((t % a) == 0);
}
kq2 -= a;
}
den = mul_mod(den, t, av);
kq2 += 2;
if (v > 0)
{
t = inv_mod(den, av);
t = mul_mod(t, num, av);
t = mul_mod(t, k, av);
for (int i = v; i < vmax; i++)
{
t = mul_mod(t, a, av);
}
s += t;
if (s >= av)
{
s -= av;
}
}
}
t = pow_mod(10, n - 1, av);
s = mul_mod(s, t, av);
sum = (sum + (double) s / (double) av) % 1.0;
}
int Result = (int) (sum * 1e9);
String StringResult = String.Format("{0:D9}", Result);
return StringResult;
}
public int DigitsReturned()
{
return 9;
}
}
}
PiCalcGridThread.cs
using System;
using System.Threading;
using System.Reflection;
using System.Text;
using Alchemi.Core;
using Alchemi.Core.Owner;
namespace Alchemi.Examples.PiCalculator
{
[Serializable]
public class PiCalcGridThread : GThread
{
private int _StartDigitNum;
private int _NumDigits;
private string _Result;
public int StartDigitNum
{
get { return _StartDigitNum ; }
}
public int NumDigits
{
get { return _NumDigits; }
}
public string Result
{
get { return _Result; }
}
public PiCalcGridThread(int startDigitNum, int numDigits)
{
_StartDigitNum = startDigitNum;
_NumDigits = numDigits;
}
public override void Start()
{
StringBuilder temp = new StringBuilder();
Plouffe_Bellard pb = new Plouffe_Bellard();
for (int i = 0; i <= Math.Ceiling((double)_NumDigits / 9); i++)
{
temp.Append(pb.CalculatePiDigits(_StartDigitNum + (i * 9)));
}
_Result = temp.ToString().Substring(0, _NumDigits);
for (int i = 0; i < int.MaxValue; i++);
}
}
}
Executor:
PiCalculatorMain.cs
using System;
using System.Reflection;
using System.Text;
using Alchemi.Core;
using Alchemi.Core.Owner;
using Alchemi.Core.Utility;
using log4net;
// Configure log4net using the .config file
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
namespace Alchemi.Examples.PiCalculator
{
class PiCalculatorMain
{
// Create a logger for use in this class
private static readonly ILog logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
static int NumThreads = 10;
static int DigitsPerThread = 10;
static int NumberOfDigits = NumThreads * DigitsPerThread;
static DateTime StartTime;
static GApplication App;
static int th = 0;
[STAThread]
static void Main()
{
Console.WriteLine("[Pi Calculator Grid Application]\n--------------------------------\n");
Console.WriteLine("Press <enter> to start ...");
Console.ReadLine();
Logger.LogHandler += new LogEventHandler(LogHandler);
try
{
// get the number of digits from the user
bool numberOfDigitsEntered = false;
while (!numberOfDigitsEntered)
{
try
{
NumberOfDigits = Int32.Parse(Utils.ValueFromConsole("Digits to calculate",
"100"));
if (NumberOfDigits > 0)
{
numberOfDigitsEntered = true;
}
}
catch (Exception)
{
Console.WriteLine("Invalid numeric value.");
numberOfDigitsEntered = false;
}
}
// get settings from user
GConnection gc = GConnection.FromConsole("localhost", "9000", "user", "user");
StartTiming();
// create a new grid application
App = new GApplication(gc);
App.ApplicationName = "PI Calculator - Alchemi sample";
// add the module containing PiCalcGridThread to the application manifest
App.Manifest.Add(new ModuleDependency(typeof(PiCalculator.PiCalcGridThread).Module));
NumThreads = (Int32)Math.Floor((double)NumberOfDigits / DigitsPerThread);
if (DigitsPerThread * NumThreads < NumberOfDigits)
{
NumThreads++;
}
// create and add the required number of grid threads
for (int i = 0; i < NumThreads; i++)
{
int StartDigitNum = 1 + (i*DigitsPerThread);
/// the number of digits for each thread
/// Each thread will get DigitsPerThread digits except the last one
/// which might get less
int DigitsForThisThread = Math.Min(DigitsPerThread, NumberOfDigits - i * DigitsPerThread);
Console.WriteLine(
"starting a thread to calculate the digits of pi from {0} to {1}",
StartDigitNum,
StartDigitNum + DigitsForThisThread - 1);
PiCalcGridThread thread = new PiCalcGridThread(
StartDigitNum,
DigitsForThisThread
);
App.Threads.Add(thread);
}
// subscribe to events
App.ThreadFinish += new GThreadFinish(ThreadFinished);
App.ApplicationFinish += new GApplicationFinish(ApplicationFinished);
// start the grid application
App.Start();
logger.Debug("PiCalc started.");
}
catch (Exception e)
{
Console.WriteLine("ERROR: {0}", e.StackTrace);
}
Console.ReadLine();
}
private static void LogHandler(object sender, LogEventArgs e)
{
switch (e.Level)
{
case LogLevel.Debug:
string message = e.Source + ":" + e.Member + " - " +
e.Message;
logger.Debug(message,e.Exception);
break;
case LogLevel.Info:
logger.Info(e.Message);
break;
case LogLevel.Error:
logger.Error(e.Message,e.Exception);
break;
case LogLevel.Warn:
logger.Warn(e.Message);
break;
}
}
static void StartTiming()
{
StartTime = DateTime.Now;
}
static void ThreadFinished(GThread thread)
{
th++;
Console.WriteLine("grid thread # {0} finished executing", thread.Id);
if (th > 1)
{
Console.WriteLine("For testing: aborting threads beyond th=1");
try
{
Console.WriteLine("Aborting thread th=" + th);
thread.Abort();
Console.WriteLine("DONE Aborting thread th=" + th);


}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
}
static void ApplicationFinished()
{
StringBuilder result = new StringBuilder();
for (int i=0; i<App.Threads.Count; i++)
{
PiCalcGridThread pcgt = (PiCalcGridThread) App.Threads[i];
result.Append(pcgt.Result);
}
Console.WriteLine(
"===\nThe value of Pi to {0} digits is:\n3.{1}\n===\nTotal time taken = {2}\n===",
NumberOfDigits,
result,
DateTime.Now - StartTime);
//Console.WriteLine("Thread finished fired: " + th + " times");
Console.WriteLine("Application Finished");
}
}

}
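Main above splits NumberOfDigits across grid threads: floor-divide by DigitsPerThread, add one thread if digits remain, and give the last thread only the leftover digits. The same partitioning as a standalone Python sketch (illustrative only, not Alchemi code):

```python
def partition_digits(number_of_digits, digits_per_thread=10):
    # mirrors Main(): compute the thread count, then a
    # (start_digit, digit_count) pair per thread;
    # the last thread may receive fewer digits
    num_threads = number_of_digits // digits_per_thread
    if digits_per_thread * num_threads < number_of_digits:
        num_threads += 1
    return [
        (1 + i * digits_per_thread,
         min(digits_per_thread, number_of_digits - i * digits_per_thread))
        for i in range(num_threads)
    ]
```

Every digit is assigned exactly once, so the per-thread counts always sum back to the requested total.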
App.config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<!-- Register a section handler for the log4net section -->
<configSections>
<section name="log4net" type="System.Configuration.IgnoreSectionHandler" />
</configSections>
<appSettings>
<!-- To enable internal log4net logging specify the following appSettings key -->
<!-- <add key="log4net.Internal.Debug" value="true"/> -->
</appSettings>
<!-- This section contains the log4net configuration settings -->
<log4net>
<!-- Define some output appenders -->
<appender
name="RollingLogFileAppender"
type="log4net.Appender.RollingFileAppender">
<file value="picalc.log" />
<appendToFile value="true" />
<maxSizeRollBackups value="5" />
<maximumFileSize value="1000000" />
<rollingStyle value="Once" />
<staticLogFileName value="true" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%ndc]
[%mdc] [%F:%M:%L] - %message%newline%newline" />
</layout>
</appender>
<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger [%ndc]
[%mdc] [%F:%M:%L] &lt;%property{auth}&gt; - %message%newline" />
</layout>
</appender>
<!-- Setup the root category, add the appenders and set the default level -->
<root>
<level value="WARN" />
</root>
<!-- Specify the level for some specific categories -->
<logger name="Alchemi.Core">
<level value="ALL" />
<appender-ref ref="RollingLogFileAppender" />
</logger>
<logger name="Alchemi.Examples">
<level value="ALL" />
<appender-ref ref="RollingLogFileAppender" />
</logger>
</log4net>
</configuration>

Practical 6
Aim: Develop application to generate prime number in Alchemi
Program:
PrimeNumberGenerator.cs
using System;
using System.Reflection;
using Alchemi.Core;
using Alchemi.Core.Owner;
using log4net;
namespace Tutorial
{
[Serializable]
class PrimeNumberChecker : GThread
{
public readonly int Candidate;
public int Factors = 0;
public PrimeNumberChecker(int candidate)
{
Candidate = candidate;
}
public override void Start()
{
// count the number of factors of the number from 1 to the number itself
for (int d=1; d<=Candidate; d++)
{
if (Candidate%d == 0) Factors++;
}
}
}
class PrimeNumberGenerator
{
// Create a logger for use in this class
private static readonly ILog logger = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);
public static GApplication App = new GApplication();


static DateTime StartTime;
static int max = 1000000;
static int primesFound = 0;
private static void LogHandler(object sender, LogEventArgs e)
{
switch (e.Level)
{
case LogLevel.Debug:
string message = e.Source + ":" + e.Member + " - " + e.Message;
logger.Debug(message,e.Exception);
break;
case LogLevel.Info:
logger.Info(e.Message);
break;
case LogLevel.Error:
logger.Error(e.Message,e.Exception);
break;
case LogLevel.Warn:
logger.Warn(e.Message);
break;
}
}
[STAThread]
static void Main(string[] args)
{
Logger.LogHandler += new LogEventHandler(LogHandler);
Console.WriteLine("[PrimeNumber Checker Grid Application]\n-------------------------------\n");
Console.Write("Enter a maximum limit for Prime Number checking [default=1000000]: ");
string input = Console.ReadLine();
if (input != null && !input.Equals(""))
{
try
{
max = Int32.Parse(input);
}catch{}
}
App.ApplicationName = "Prime Number Generator - Alchemi sample";
Console.WriteLine("Connecting to Alchemi Grid...");
// initialise application
Init();
// create grid threads to check if some randomly generated large numbers are prime
Random rnd = new Random();
for (int i=0; i<10; i++)
{
int candidate = rnd.Next(max);
Console.WriteLine("Creating a grid thread to check if {0} is prime...",candidate);
App.Threads.Add(new PrimeNumberChecker(candidate));
}

// start the application


App.Start();
Console.WriteLine("Prime Number Generator completed.") ;
Console.ReadLine();
// stop the application
try
{
App.Stop();
}catch {}
}
private static void Init()
{
try
{
// get settings from user
GConnection gc = GConnection.FromConsole("localhost", "9000", "user",
"user");
StartTime = DateTime.Now;
App.Connection = gc;
// add the module containing PrimeNumberChecker to the application manifest
App.Manifest.Add(new ModuleDependency(typeof(PrimeNumberChecker).Module));
// subscribe to ThreadFinish event
App.ThreadFinish += new GThreadFinish(App_ThreadFinish);
App.ApplicationFinish += new GApplicationFinish(App_ApplicationFinish);
}
catch (Exception ex)
{
Console.WriteLine("Error: "+ex.Message);
logger.Error("ERROR: ",ex);
}
}
private static void App_ThreadFinish(GThread thread)
{
// cast the supplied GThread back to PrimeNumberChecker
PrimeNumberChecker pnc = (PrimeNumberChecker) thread;
// check whether the candidate is prime or not
bool prime = false;
if (pnc.Factors == 2) prime = true;
// display results
Console.WriteLine("{0} is prime? {1} ({2} factors)", pnc.Candidate, prime, pnc.Factors);
if (prime)
primesFound++;
}
private static void App_ApplicationFinish()
{
Console.WriteLine("Application finished. \nRandom primes found: {0}. Total time taken: {1}", primesFound, DateTime.Now - StartTime);
}
}
}
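PrimeNumberChecker decides primality by brute force: count every divisor from 1 up to the candidate, and declare the number prime exactly when it has two divisors (1 and itself). The same check in Python (a deliberately naive O(n) sketch matching the grid thread's loop):

```python
def count_factors(candidate):
    # count the divisors of candidate in 1..candidate,
    # exactly as the grid thread's Start() method does
    return sum(1 for d in range(1, candidate + 1) if candidate % d == 0)

def is_prime(candidate):
    # prime iff exactly two divisors: 1 and the number itself
    # (this also correctly rejects 1, which has a single divisor)
    return count_factors(candidate) == 2
```

The O(n) loop is what makes each candidate a worthwhile unit of grid work here; a production primality test would use trial division up to the square root or a probabilistic test instead.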
AssemblyInfo.cs
using System.Reflection;
using System.Runtime.CompilerServices;
//
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
//
[assembly: AssemblyTitle("")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("")]
[assembly: AssemblyCopyright("")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
//
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Revision and Build Numbers
// by using the '*' as shown below:
[assembly: AssemblyVersion("1.0.*")]
//
// In order to sign your assembly you must specify a key to use. Refer to the
// Microsoft .NET Framework documentation for more information on assembly signing.
//
// Use the attributes below to control which key is used for signing.
//
// Notes:
// (*) If no key is specified, the assembly is not signed.
// (*) KeyName refers to a key that has been installed in the Crypto Service
//     Provider (CSP) on your machine. KeyFile refers to a file which contains
//     a key.
// (*) If the KeyFile and the KeyName values are both specified, the
//     following processing occurs:
//     (1) If the KeyName can be found in the CSP, that key is used.
//     (2) If the KeyName does not exist and the KeyFile does exist, the key
//         in the KeyFile is installed into the CSP and used.
// (*) In order to create a KeyFile, you can use the sn.exe (Strong Name) utility.
//     When specifying the KeyFile, the location of the KeyFile should be
//     relative to the project output directory which is
//     %Project Directory%\obj\<configuration>. For example, if your KeyFile is
//     located in the project directory, you would specify the AssemblyKeyFile
//     attribute as [assembly: AssemblyKeyFile("..\\..\\mykey.snk")]
// (*) Delay Signing is an advanced option - see the Microsoft .NET Framework
//     documentation for more information on this.
//
[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("")]
[assembly: AssemblyKeyName("")]

Practical 7
Aim: To study gridsim simulator.
Theory:
GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for Grid
computing
INTRODUCTION
The proliferation of the Internet and the availability of powerful computers and high-speed networks as
low-cost commodity components are changing the way we do large-scale parallel and distributed computing. The
interest in coupling geographically distributed (computational) resources is also growing for solving large-scale
problems, leading to what is popularly called the Grid and peer-to-peer (P2P) computing networks. These enable
sharing, selection and aggregation of suitable computational and data resources for solving large-scale data
intensive problems in science, engineering, and commerce. A generic view of Grid computing environment is
shown in Figure. The Grid consists of four key layers of components: fabric, core middleware, user-level
middleware, and applications [3]. The Grid fabric includes computers (low-end and high-end computers including
clusters), networks, scientific instruments, and their resource management systems. The core Grid middleware
provides services that are essential for securely accessing remote resources uniformly and transparently. The
services they provide include security and access management, remote job submission, storage, and resource
information. The user-level middleware provides higher-level tools such as resource brokers, application
development and adaptive runtime environment. The Grid applications include those constructed using Grid
libraries or legacy applications that can be Grid enabled using user-level middleware tools. The user essentially
interacts with a resource broker that hides the complexities of Grid computing. The broker discovers resources
that the user can access using information services, negotiates for access costs using trading services, maps tasks
to resources (scheduling), stages the application and data for processing (deployment), starts job execution, and
finally gathers the results. It is also responsible for monitoring and tracking application execution progress along
with adapting to the changes in Grid runtime environment conditions and resource failures. The computing
environments comprise heterogeneous resources (PCs, workstations, clusters, and supercomputers), fabric
management systems (single system image OS, queuing systems, etc.) and policies, and applications (scientific,
engineering, and commercial) with varied requirements (CPU, input/output (I/O), memory and/or network
intensive). The users: producers (also called resource owners) and consumers (also called end-users) have
different goals, objectives, strategies, and demand patterns. More importantly both resources and end-users are
geographically distributed with different time zones. In managing such complex Grid environments, traditional
approaches to resource management that attempt to optimize system-wide measures of performance cannot be
employed. This is because traditional approaches use centralized policies that need complete state information and
a common fabric management policy, or decentralized consensus based policy. In large-scale Grid environments,
it is impossible to define an acceptable system-wide performance metric and a common
fabric management policy. Apart from the centralized approach, two other approaches that are used in distributed
resource management are: hierarchical and decentralized scheduling or a combination of them. We note that
similar heterogeneity and decentralization complexities exist in human economies, where market-driven economic
models have been used to manage them successfully.

We investigated the use of economics as a metaphor for management of resources in Grid computing
environments. A Grid resource broker, called Nimrod-G [5], has been developed that performs scheduling of
parameter sweep, task-farming applications on geographically distributed resources. It supports deadline- and
budget-based scheduling driven by market-based economic models. To meet users' quality-of-service
requirements, our broker dynamically leases Grid resources and services at runtime depending on their capability,
cost, and availability. Many scheduling experiments have been conducted on the execution of data-intensive
science applications such as molecular modeling for drug design under a few Grid scenarios (like 2 h deadline and
10 machines for a single user). The ability to experiment with a large number of Grid scenarios was limited by the
number of resources that were available in the WWG (World-Wide Grid) testbed [9]. Also, it was impossible to
create a repeatable and controlled environment for experimentation and evaluation of scheduling strategies. This
is because resources in the Grid span across multiple administrative domains, each with their own policies, users,
and priorities.
The researchers and students, investigating resource management and scheduling for large-scale
distributed computing, need a simple framework for deterministic modeling and simulation of resources and
applications to evaluate scheduling strategies. For most who do not have access to ready-to-use testbed
infrastructures, building them is expensive and time consuming. Also, even for those who have access, the testbed
size is limited to a few resources and domains; testing scheduling algorithms for scalability and adaptability,
and evaluating scheduler performance across varied application and resource scenarios, is therefore hard, and the
results are difficult to trace and reproduce. To overcome these limitations, we provide a Java-based Grid simulation toolkit called GridSim. The Grid
computing researchers and educators also recognized the importance and the need for such a toolkit for modeling
and simulation environments [10]. It should be noted that this paper has a major orientation towards Grid,
however, we believe that our discussion and thoughts apply equally well to P2P systems since resource
management and scheduling issues in both systems are quite similar. The GridSim toolkit supports modeling and
simulation of a wide range of heterogeneous resources, such as single or multiprocessors, shared and distributed
memory machines such as PCs, workstations, SMPs, and clusters with different capabilities and configurations. It
can be used for modeling and simulation of application scheduling on various classes of parallel and distributed
computing systems such as clusters [11], Grids [1], and P2P networks [2]. The resources in clusters are located in
a single administrative domain and managed by a single entity, whereas in Grid and P2P systems, resources are
geographically distributed across multiple administrative domains with their own management policies and goals.
Another key difference between cluster and Grid/P2P systems arises from the way application scheduling is
performed. The schedulers in cluster systems focus on enhancing overall system performance and utility, as they
are responsible for the whole system. In contrast, schedulers in Grid/P2P systems called resource brokers, focus
on enhancing performance of a specific application in such a way that its end-users requirements are met. The
GridSim toolkit provides facilities for the modeling and simulation of resources and network connectivity with
different capabilities, configurations, and domains. It supports primitives for application composition, information
services for resource discovery, and interfaces for assigning application tasks to resources and managing their
execution. These features can be used to simulate resource brokers or Grid schedulers for evaluating performance
of scheduling algorithms or heuristics. We have used the GridSim toolkit to create a resource broker that
simulates Nimrod-G for design and evaluation of deadline and budget constrained scheduling algorithms with
cost and time optimizations. The rest of this paper is organized as follows. Section 2 discusses related work with
highlights on unique features that distinguish our toolkit from other packages. The GridSim architecture and
internal components that make up GridSim simulations are discussed in Section 3. Section 4 discusses how to
build GridSim based scheduling simulations. Sample results from simulating a resource broker similar to Nimrod-G with a deadline and budget constrained cost-optimization scheduling algorithm are discussed in Section 5. The
final section summarizes the paper along with suggestions for future work.
GridSim: GRID MODELING AND SIMULATION TOOLKIT
The GridSim toolkit provides a comprehensive facility for simulation of different classes of heterogeneous
resources, users, applications, resource brokers, and schedulers. It can be used to
simulate application schedulers for single or multiple administrative domain distributed computing systems such
as clusters and Grids. Application schedulers in the Grid environment, called resource brokers, perform resource
discovery, selection, and aggregation of a diverse set of distributed resources for an individual user. This means
that each user has his or her own private resource broker and hence it can be targeted to optimize for the
requirements and objectives of its owner. In contrast, schedulers, managing resources such as clusters in a single
administrative domain, have complete control over the policy used for allocation of resources. This means that all
users need to submit their jobs to the central scheduler, which can be targeted to perform global optimization such
as higher system utilization and overall user satisfaction depending on resource allocation policy or optimize for
high priority users.
Features
Salient features of the GridSim toolkit include the following.
It allows modeling of heterogeneous types of resources.
Resources can be modeled operating under space- or time-shared mode.
Resource capability can be defined (in the form of MIPS (Million Instructions Per Second) as
per SPEC (Standard Performance Evaluation Corporation) benchmark).
Resources can be located in any time zone.
Weekends and holidays can be mapped depending on resources local time to model non-Grid
(local) workload.
Resources can be booked for advance reservation.
Applications with different parallel application models can be simulated.
Application tasks can be heterogeneous and they can be CPU or I/O intensive.
There is no limit on the number of application jobs that can be submitted to a resource.
Multiple user entities can submit tasks for execution simultaneously in the same resource, which may be time-shared or space-shared. This feature helps in building schedulers that can use different market-driven economic
models for selecting services competitively.
Network speed between resources can be specified.
It supports simulation of both static and dynamic schedulers.
Statistics of all or selected operations can be recorded and they can be analyzed using GridSim statistics analysis
methods.
System architecture
We employed a layered and modular architecture for Grid simulation to leverage existing technologies and
manage them as separate components. A multi-layer architecture and abstraction for the development of GridSim
platform and its applications is shown in Figure 2. The first layer is concerned with the scalable Java interface and
the runtime machinery, called JVM (Java Virtual Machine), whose implementation is available for single and
multiprocessor systems including clusters. The second layer is concerned with a basic discrete-event
infrastructure built using the interfaces provided by the first layer. One of the popular discrete-event infrastructure
implementations available in Java is SimJava. Recently, a distributed implementation of SimJava was also made
available. The third layer is concerned with modeling and simulation of core Grid entities such as resources,
information services, and so on; application model, uniform access interface, and primitives; and application modeling
and a framework for creating higher-level entities. The GridSim toolkit focuses on this layer: it simulates system
entities using the discrete-event services offered by the lower-level infrastructure. The fourth layer is concerned
with the simulation of resource aggregators called Grid resource brokers or schedulers. The final layer is focused
on application and resource modeling with different scenarios using the services provided by the two lower-level
layers for evaluating scheduling and resource management policies, heuristics, and algorithms. In this section, we
briefly discuss the SimJava model for discrete events (a second-layer component) and focus mainly on the
GridSim (the third layer) design and implementation. Resource broker simulation and performance evaluation are
highlighted in
the next two sections.

SimJava [14] is a general purpose discrete event simulation package implemented in Java. Simulations in SimJava
contain a number of entities, each of which runs in parallel in its own thread. An entity's behaviour is encoded in
Java using its body() method. Entities have access to a small number of simulation primitives:
sim_schedule() sends event objects to other entities via ports;
sim_hold() holds for some simulation time;
sim_wait() waits for an event object to arrive.
These features help in constructing a network of active entities that communicate by sending and
receiving passive event objects efficiently. The sequential discrete event simulation algorithm, in SimJava, is as
follows. A central object, Sim_system, maintains a timestamp-ordered queue of future events. Initially all entities
are created and their body() methods are put in run state. When an entity calls a simulation function, the
Sim_system object halts that entity's thread and places an event on the future queue to signify processing the function.
When all entities have halted, Sim_system pops the next event off the queue, advances the simulation time
accordingly, and restarts entities as appropriate. This continues until no more events are generated. If the JVM
supports native threads, then all entities starting at exactly the same simulation time may run concurrently.
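The Sim_system loop described above — a timestamp-ordered future-event queue from which the earliest event is repeatedly popped, advancing the simulation clock — can be sketched in a few lines of Python (a toy illustration of the idea, not SimJava itself):

```python
import heapq

class SimSystem:
    """Toy analogue of SimJava's Sim_system: a timestamp-ordered
    future-event queue drives the simulation clock."""

    def __init__(self):
        self.queue = []   # heap of (event_time, seq, handler)
        self.clock = 0.0
        self._seq = 0     # tie-breaker keeps equal-time events FIFO

    def schedule(self, delay, handler):
        # like sim_schedule(): post an event `delay` time units from now
        heapq.heappush(self.queue, (self.clock + delay, self._seq, handler))
        self._seq += 1

    def run(self):
        # pop the earliest event, advance the clock, run its handler;
        # handlers may schedule further events, so the loop continues
        # until no more events are generated
        while self.queue:
            self.clock, _, handler = heapq.heappop(self.queue)
            handler(self)
```

Note that simulated time jumps directly from one event timestamp to the next; nothing happens "between" events, which is what makes discrete-event simulation of slow Grid scenarios fast.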
GridSim entities
GridSim supports entities for simulation of single processor and multiprocessor, heterogeneous
resources that can be configured as time- or space-shared systems. It allows setting of the clock to different time
zones to simulate geographic distribution of resources. It supports entities that simulate networks used for
communication among resources. During simulation, GridSim creates a number of multi-threaded entities, each of
which runs in parallel in its own thread. An entity's behavior needs to be simulated within its body() method, as
dictated by SimJava. A simulation environment needs to abstract all the entities and their time-dependent
interactions in the real system. It needs to support the creation of user-defined time-dependent response functions
for the interacting entities. The response function can be a function of the past, current, or both states of entities.
GridSim based simulations contain entities for the users, brokers, resources, information service, statistics, and
network based I/O, as shown in Figure 3. The design and implementation issues of these GridSim entities are
discussed below.

User. Each instance of the User entity represents a Grid user. Each user may differ from the
rest of the users with respect to the following characteristics:
types of job created, e.g. job execution time, number of parametric replications, etc.;
scheduling optimization strategy, e.g. minimization of cost, time, or both;
activity rate, e.g. how often it creates a new job;
time zone; and
absolute deadline and budget; or
D- and B-factors, deadline and budget relaxation parameters, measured in the range [0, 1], which
express deadline and budget affordability of the user relative to the application processing
requirements and available resources.

Broker.
Each user is connected to an instance of the Broker entity. Every job of a user is
first submitted to its broker and the broker then schedules the parametric tasks according to the user's scheduling
policy. Before scheduling the tasks, the broker dynamically gets a list of available resources from the global
directory entity. Every broker tries to optimize the policy of its user and therefore, brokers are expected to face
extreme competition while gaining access to resources. The scheduling algorithms used by the brokers must be
highly adaptable to the market's supply and demand situation.
Resource:
Each instance of the Resource entity represents a Grid resource. Each resource may differ from the rest of the
resources with respect to the following characteristics:
number of processors;
cost of processing;
speed of processing;
internal process scheduling policy, e.g. time-shared or space-shared;
local load factor; and
time zone.

The resource speed and the job execution time can be defined in terms of the ratings of standard
benchmarks such as MIPS and SPEC. They can also be defined with respect to the standard machine. Upon
obtaining the resource contact details from the Grid information service, brokers can query resources directly for
their static and dynamic properties.
Grid information service. It provides resource registration services and keeps track of a list
of resources available in the Grid. The brokers can query this for resource contact, configuration, and status
information.

Input and output:


The flow of information among the GridSim entities happens via their Input and Output entities. Every networked
GridSim entity has I/O channels or ports, which are used for establishing a link between the entity and its own
Input and Output entities. Note that the GridSim entity and its Input and Output entities are threaded entities, i.e.
they have their own execution thread with a body() method that handles events. The architecture for the entity
communication model in GridSim is illustrated in Figure 4. The use of separate entities for input and output
enables a networked entity to model full duplex and multi-user parallel communications. The support for buffered
input and output channels associated with every GridSim entity provides a simple mechanism for an entity to
communicate with other entities and at the same time enables modeling of the necessary communications delay
transparently.
Application model:
GridSim does not explicitly define any specific application model. It is up to the developers (of schedulers and
resource brokers) to define them. We have experimented with a task-farming application model and we believe
that other parallel application models such as process parallelism, Directed Acyclic Graphs (DAGs), divide and
conquer etc., described in [21], can also be modeled and simulated using GridSim.
In GridSim, each independent task may require varying processing time and input file sizes. Such tasks can be
created and their requirements are defined through Gridlet objects. A Gridlet is a package that contains all the
information related to the job and its execution management details such as job length expressed in MIPS, disk
I/O operations, the size of input and output files, and the job originator. These basic parameters help in
determining execution time, the time required to transport input and output files between users and remote
resources, and returning the processed Gridlets back to the originator along with the results. The GridSim toolkit
supports a wide range of Gridlet management protocols and services that allow schedulers to map a Gridlet to a
resource and manage it throughout the life cycle.
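To make the role of these parameters concrete, the sketch below estimates a Gridlet's turnaround time from its length and file sizes. This is plain Java, not the GridSim API; the class name, field names, and the simple additive formula are illustrative assumptions only.

```java
// Sketch: how a Gridlet's basic parameters translate into an estimated
// turnaround time. Illustrative only -- not part of the GridSim toolkit.
public class GridletEstimate {
    double lengthMI;            // job length in millions of instructions (MI)
    double inputMB, outputMB;   // input and output file sizes in MB

    GridletEstimate(double lengthMI, double inputMB, double outputMB) {
        this.lengthMI = lengthMI;
        this.inputMB = inputMB;
        this.outputMB = outputMB;
    }

    // peRatingMips: speed of the target Processing Element (MIPS)
    // bandwidthMBs: link bandwidth between user and resource (MB/s)
    double turnaroundSeconds(double peRatingMips, double bandwidthMBs) {
        double compute = lengthMI / peRatingMips;               // execution time
        double transfer = (inputMB + outputMB) / bandwidthMBs;  // staging in + results out
        return compute + transfer;
    }

    public static void main(String[] args) {
        // 4200 MI job on a 700 MIPS PE over a 4 MB/s link
        GridletEstimate g = new GridletEstimate(4200, 10, 2);
        System.out.println(g.turnaroundSeconds(700, 4)); // 6 s compute + 3 s transfer
    }
}
```

A real scheduler would also account for queueing delay and resource load, which GridSim models through its time- and space-shared resource entities.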

Interaction protocols model:


The protocols for interaction between GridSim entities are implemented using events. In GridSim, entities use
events for both service request and service delivery. The events can be raised by any entity to be delivered
immediately or with specified delay to other entities or itself. The events that are originated from the same entity
are called internal events and those originated from the external entities are called external events. Entities can
distinguish these events based on the source identification associated with them. The GridSim protocols are used
for defining entity services. Depending on the service protocols, the GridSim events can be further classified into
synchronous and asynchronous events. An event is called synchronous when the event source entity waits until
the event destination entity performs all the actions associated with the event (i.e. the delivery of full service). An
event is called asynchronous when the event source entity raises an event and continues with other activities
without waiting for its completion. When the destination entity receives such events or service requests, it
responds with results by sending one or more events, which the source entity can then use to take appropriate
actions. It should be noted that external events can be synchronous or asynchronous, but internal events need to
be raised as asynchronous events only, to avoid deadlocks.
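The two service protocols can be sketched in plain Java as follows. This is illustrative only: GridSim implements its events on top of the SimJava discrete-event kernel, not thread pools, and all names here are ours.

```java
import java.util.concurrent.*;

// Sketch of the two service protocols: a synchronous event blocks the
// source entity until the destination delivers the full service, while an
// asynchronous event is queued and the source continues immediately.
public class EventSketch {
    static ExecutorService destination = Executors.newSingleThreadExecutor();

    // synchronous: the source waits for the full service to be delivered
    static String raiseSync(Callable<String> service) throws Exception {
        Future<String> f = destination.submit(service);
        return f.get();                     // source blocks here
    }

    // asynchronous: raise the event and continue with other activities
    static Future<String> raiseAsync(Callable<String> service) {
        return destination.submit(service); // source does not wait
    }

    public static void main(String[] args) throws Exception {
        String r = raiseSync(() -> "resource list");        // e.g. a GIS query
        Future<String> pending = raiseAsync(() -> "gridlet done"); // e.g. a dispatch
        System.out.println(r);
        System.out.println(pending.get());  // collect the result later
        destination.shutdown();
    }
}
```

The deadlock risk mentioned above is visible in this shape: if an entity raised a synchronous event to itself, it would block waiting for a handler that can never run on its own thread.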
A complete set of entities in a typical GridSim simulation and the use of events for simulating interaction
between them are shown in Figures 5 and 6. Figure 5 emphasizes the interaction between a resource entity that
simulates time-shared scheduling and other entities. Figure 6 emphasizes the interaction between a resource entity
that simulates a space-shared system and other entities. In this section we briefly discuss the use of events for
simulating Grid activities.
The GridSim entities (user, broker, resource, information service, statistics, shutdown, and report writer)
send events to other entities to signify the request for service, to deliver results, or to raise internal actions. Note
that GridSim implements core entities that simulate resource, information service, statistics, and shutdown
services. These services are used to simulate a user with application, a broker for scheduling, and an optional
report writer for creating statistical reports at the end of a simulation. The event source and destination entities
must agree upon the protocols for service request and delivery. The protocols for interaction between the user-defined and core entities are pre-defined.

When GridSim starts, the resource entities register themselves with the Grid Information Service (GIS)
entity, by sending events. This resource registration process is similar to GRIS (Grid Resource Information
Server) registering with GIIS (Grid Index Information Server) in the Globus system. Depending on the user
entity's request, the broker entity sends an event to the GIS entity to signify a

query for resource discovery. The GIS entity returns a list of registered resources and their contact details. The
broker entity sends events to resources with a request for resource configuration and properties. They respond with
dynamic information such as resource cost, capability, availability, load, and other configuration parameters.
These events involving the GIS entity are synchronous in nature.
Depending on the resource selection and scheduling strategy, the broker entity places asynchronous events
for resource entities in order to dispatch Gridlets for execution; the broker need not wait for a resource to
complete the assigned work. When the Gridlet processing is finished, the resource entity updates the Gridlet status
and processing time and sends it back to the broker by raising an event to signify its completion.
The GridSim resources use internal events to simulate resource behavior and resource allocation. The
entity needs to be modeled in such a way that it is able to receive all events meant for it. However, it is up to the
entity to decide on the associated actions. For example, in time-shared resource simulations (see Figure 5) internal
events are scheduled to signify the completion time of a Gridlet, which has the smallest remaining processing
time requirement. Meanwhile, if an external event arrives, it changes the shared resource availability for each
Gridlet, which means the most recently scheduled event may
not necessarily signify the completion of a Gridlet. The resource entity can discard such internal
events without processing.
Resource model: simulating multitasking and multiprocessing
In the GridSim toolkit, we can create Processing Elements (PEs) with different speeds (measured
in either MIPS or SPEC-like ratings). Then, one or more PEs can be put together to create a machine. Similarly,
one or more machines can be put together to create a Grid resource. Thus, the resulting Grid resource can be a
single processor, shared memory multiprocessors (SMP), or a distributed memory cluster of computers. These
Grid resources can simulate time- or space-shared scheduling depending on the allocation policy. A single PE or
SMP-type Grid resource is typically managed by time-shared operating systems that use a round-robin scheduling
policy for multitasking. The distributed memory multiprocessing systems (such as clusters) are managed by
queuing systems, called space-shared schedulers, that execute a Gridlet by running it on a dedicated PE (see
Figure 12) when allocated. The space-shared systems use resource allocation policies such as first-come-first-served (FCFS), backfilling, shortest-job-first-served (SJFS), and so on. It should also be noted that resource
allocation within high-end SMPs could also be performed using the space-shared schedulers.
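The difference between the two policies can be illustrated with a small back-of-the-envelope calculation. This is plain Java, not GridSim code; the job lengths, the 100 MIPS PE rating, and the idealized equal-sharing assumption for time-sharing are ours.

```java
import java.util.*;

// Sketch: completion times of the same three jobs on a single 100 MIPS PE
// under space-shared FCFS (run to completion, one at a time) versus an
// idealized time-shared policy (the PE is divided equally among active jobs).
public class ScheduleSketch {
    static double[] fcfs(double[] lengthsMI, double mips) {
        double[] done = new double[lengthsMI.length];
        double clock = 0;
        for (int i = 0; i < lengthsMI.length; i++) {
            clock += lengthsMI[i] / mips;   // each job gets the dedicated PE
            done[i] = clock;
        }
        return done;
    }

    static double[] timeShared(double[] lengthsMI, double mips) {
        // with equal sharing, the job with the least remaining work
        // always completes first, so process lengths shortest-first
        double[] sorted = lengthsMI.clone();
        Arrays.sort(sorted);
        double[] done = new double[sorted.length];
        double clock = 0, prev = 0;
        int active = sorted.length;
        for (int i = 0; i < sorted.length; i++) {
            // while 'active' jobs share the PE, each receives mips/active
            clock += (sorted[i] - prev) * active / mips;
            done[i] = clock;
            prev = sorted[i];
            active--;
        }
        return done;
    }

    public static void main(String[] args) {
        double[] jobs = {100, 200, 300};    // job lengths in MI
        System.out.println(Arrays.toString(fcfs(jobs, 100)));       // [1.0, 3.0, 6.0]
        System.out.println(Arrays.toString(timeShared(jobs, 100))); // [3.0, 5.0, 6.0]
    }
}
```

Both policies finish all the work at the same time, but time-sharing delays the short jobs (the 100 MI job finishes at t = 3 instead of t = 1), which is why internal events signifying Gridlet completion must be recomputed whenever the set of active Gridlets changes.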


Practical 8
Aim: To study and practice the Aneka Cloud computing software.
Theory:

Aneka

Manjrasoft is focused on the creation of innovative software technologies for simplifying the development and
deployment of applications on private or public Clouds. Its product Aneka plays the role of an Application Platform
as a Service for Cloud Computing. Aneka supports various programming models, including Task Programming,
Thread Programming and MapReduce Programming, and provides tools for the rapid creation of applications and
their seamless deployment on private or public Clouds.

Aneka technology primarily consists of two key components:


1. SDK (Software Development Kit) containing application programming interfaces (APIs) and tools
essential for rapid development of applications. Aneka APIs supports three popular Cloud programming
models: Task, Thread, and MapReduce; and

2. A Runtime Engine and Platform for managing deployment and execution of applications on private or
public Clouds.
One of the notable characteristics of the Aneka PaaS is its support for provisioning of private cloud resources,
ranging from desktops and clusters to virtual datacenters using VMware and Citrix XenServer, and public cloud
resources such as Windows Azure, Amazon EC2, and GoGrid Cloud Service.
The potential of Aneka as a Platform as a Service has been successfully harnessed by its users and customers in
various sectors including engineering, life sciences, education, and business intelligence.

Highlights of Aneka
Technical Value

Support of multiple programming and application environments


Simultaneous support of multiple run-time environments
Rapid deployment tools and framework
Simplicity in developing applications on Cloud
Dynamic Scalability
Ability to harness multiple virtual and/or physical machines for accelerating application results
Provisioning based on QoS/SLA

Business Value

Improved reliability
Simplicity
Faster time to value


Operational Agility
Definite application performance enhancement
Optimizing the capital expenditure and operational expenditure

APPLICATION
Distributed 3D Rendering

For 3D rendering, Aneka enables you to complete your jobs in a fraction of the usual time using existing
hardware infrastructure without having to do any programming.


Build

Aneka includes a Software Development Kit (SDK) which includes a combination of APIs and
Tools to enable you to express your application. Aneka also allows you to build different run-time
environments and build new applications.
Accelerate

Aneka supports Rapid Development and Deployment of Applications in Multiple Run-Time


environments. Aneka uses physical machines as much as possible to achieve maximum utilization in the
local environment. As demand increases, Aneka provisions VMs via private clouds (Xen or
VMware) or public Clouds (Amazon EC2).
Manage
Aneka Management includes a Graphical User Interface (GUI) and APIs to set-up, monitor, manage
and maintain remote and global Aneka compute clouds. Aneka also has an accounting mechanism
and manages priorities and scalability based on SLA/QoS which enables dynamic provisioning.
Education and Training
Help educate a new generation of students in the latest area of computing. Add Parallel, Distributed
and Cloud Computing into your curriculum. We provide teaching tools, software and examples to
get your program up and running quickly.

Life Sciences
In the life sciences sector Aneka can be used for drug design, medical imaging, molecular & quantum
mechanics, genomic search, etc. Using Aneka, simulations take hours instead of days to complete,
enabling you to improve the quality and precision of your research by carrying out multiple simulations
and to decrease your time to market by running simulations in parallel.


Practical 9
Aim: Demonstrate a Task model application on Aneka
Program:
MyTaskDemo.cs
using System;
using System.Threading;
using System.Collections.Generic;
using Aneka.Entity;
using Aneka.Tasks;
using Aneka.Security;
using Aneka.Data.Entity;
using Aneka.Security.Windows;

namespace Aneka.Examples.TaskDemo
{
/// <summary>
/// Class MyTask. Simple task function wrapping
/// the Gaussian normal distribution. It computes
/// the value of a given point.
/// </summary>
[Serializable]
public class MyTask : ITask
{
/// <summary>
/// value where to calculate the
/// Gaussian normal distribution.
/// </summary>
private double x;
/// <summary>
/// Gets, sets the value where to calculate
/// the Gaussian normal distribution.
/// </summary>
public double X
{ get { return this.x; } set { this.x = value; } }
/// <summary>
/// result of evaluating the Gaussian
/// normal distribution at x.
/// </summary>
private double result;
/// <summary>
/// Gets, sets the result of evaluating the
/// Gaussian normal distribution at x.
/// </summary>
public double Result
{
get { return this.result; }
set { this.result = value; }
}
/// <summary>
/// Creates an instance of MyTask.
/// </summary>
public MyTask() { }
#region ITask Members
/// <summary>
/// Evaluate the Gaussian normal distribution
/// for the given value of x.
/// </summary>
public void Execute()
{
this.result = (1 / (Math.Sqrt(2 * Math.PI))) *
Math.Exp(-(this.x * this.x) / 2);
Console.WriteLine("{0} : {1}", this.X, this.Result);
}
#endregion
}
/// <summary>
/// Class MyTaskDemo. Simple Driver application
/// that shows how to create tasks and submit
/// them to the grid, getting back the results
/// and handle task resubmission along with the
/// proper synchronization.
/// </summary>
class MyTaskDemo
{
/// <summary>
/// failed task counter
/// </summary>
private static int failed;
/// <summary>
/// completed task counter
/// </summary>
private static int completed;
/// <summary>
/// total number of tasks submitted
/// </summary>
private static int total;
/// <summary>
/// Dictionary containing sampled data
/// </summary>
private static Dictionary<double, double> samples;
/// <summary>
/// synchronization object
/// </summary>
private static object synchLock;
/// <summary>
/// semaphore used to wait for application
/// termination
/// </summary>
private static AutoResetEvent semaphore;
/// <summary>
/// grid application instance
/// </summary>
private static AnekaApplication<AnekaTask, TaskManager> app;
/// <summary>
/// boolean flag indicating which task failure
/// management strategy to use. If true the Log Only
/// strategy will be applied, if false the Full Care
/// strategy will be applied.
/// </summary>
private static bool bLogOnly = false;
/// <summary>
/// Program entry point.
/// </summary>
/// <param name="args">program arguments</param>
public static void Main(string[] args)
{
if (args.Length < 1)
{
Console.WriteLine("Usage: TaskDemo [master-url] [username] [password]");
return;
}
Console.WriteLine("Setting Up Grid Application..");
app = Setup(args);
// create task instances and wrap them
// into AnekaTask instances
double step = 1.0;
double min = -2.0;
double max = 2.0;
// initialize trace variables.
total = (int) ((max - min) / step) + 1;
completed = 0;
failed = 0;
samples = new Dictionary<double, double>();
// initialize synchronization data.
synchLock = new object();
semaphore = new AutoResetEvent(false);
// attach events to the grid application
AttachEvents(app);
Console.WriteLine("Submitting {0} tasks...", total);
while (min <= max)
{
// create a task instance
MyTask task = new MyTask();
task.X = min;
samples.Add(task.X, double.NaN);

// wrap the task instance into a AnekaTask


AnekaTask gt = new AnekaTask(task);
// submit the execution
app.ExecuteWorkUnit(gt);
min += step;
}
Console.WriteLine("Waiting for termination...");
semaphore.WaitOne();
Console.WriteLine("Application finished. Press any key to quit.");
Console.ReadLine();
}
#region Helper Methods
/// <summary>
/// AnekaApplication Setup helper method. Creates and
/// configures the AnekaApplication instance.
/// </summary>
/// <param name="args">program arguments</param>
private static AnekaApplication<AnekaTask, TaskManager>
Setup(string[] args)
{
Configuration conf = new Configuration(); // Configuration.GetConfiguration();
string username = args.Length > 1 ? args[1] : null;
string password = args.Length > 2 ? args[2] : string.Empty;
// ensure that SingleSubmission is set to false
// and that ResubmitMode to MANUAL.
conf.SchedulerUri = new Uri(args[0]);
conf.SingleSubmission = false;
conf.ResubmitMode = ResubmitMode.MANUAL;
if (username != null)
{
conf.UserCredential = new UserCredentials(username, password);
}
conf.UseFileTransfer = false;
AnekaApplication<AnekaTask, TaskManager> app =
new AnekaApplication<AnekaTask, TaskManager>
("MyTaskDemo", conf);
// select the task failure management strategy when a
// single argument selecting it is provided
if (args.Length == 1)
{
bLogOnly = (args[0] == "LogOnly");
}
return app;
}
/// <summary>
/// Attaches the events to the given instance
/// of the AnekaApplication class.
/// </summary>
/// <param name="app">grid application</param>
private static void AttachEvents(
AnekaApplication<AnekaTask, TaskManager> app)
{
// registering with the WorkUnitFinished event
app.WorkUnitFinished +=
new EventHandler<WorkUnitEventArgs<AnekaTask>>
(OnWorkUnitFinished);
// registering with the WorkUnitFailed event
app.WorkUnitFailed +=
new EventHandler<WorkUnitEventArgs<AnekaTask>>
(OnWorkUnitFailed);
// registering with the ApplicationFinished event
app.ApplicationFinished +=
new EventHandler<ApplicationEventArgs>(OnApplicationFinished);
}
/// <summary>
/// Dumps the results to the console along with
/// some information about the failed tasks and
/// the strategy used.
/// </summary>
private static void ShowResults()
{
// we no longer need to lock the
// samples dictionary because the
// asynchronous events have finished and
// there is no risk of races.
Console.WriteLine("Results");
foreach (KeyValuePair<double, double> sample in samples)
{
Console.WriteLine("{0}\t{1}", sample.Key,
sample.Value);
}
Console.WriteLine("Tasks Failed: " + failed);
string strategy = bLogOnly ? "Log Only" : "Full Care";
Console.WriteLine("Strategy Used: " + strategy);
}
#endregion
#region Event Handler Methods
/// <summary>
/// Handles the WorkUnitFailed event.
/// </summary>
/// <param name="sender">event source</param>
/// <param name="args">event arguments</param>
public static void OnWorkUnitFailed
(object sender, WorkUnitEventArgs<AnekaTask> args)
{
if (bLogOnly == true)
{
// Log Only strategy: we have to simply
// record the failure and decrease the
// number of total task by one unit.
lock (synchLock)
{
total = total - 1;
// was this the last task?
if (total == completed)
{
app.StopExecution();
}
failed = failed + 1;
}
}
else
{
// Full Care strategy: we have to resubmit
// the task. We can do this only if we have
// enough information to resubmit it otherwise
// we switch to the LogOnly strategy for this
// task.
AnekaTask submitted = args.WorkUnit;
if ((submitted != null) &&
(submitted.UserTask != null))
{
MyTask task = submitted.UserTask as MyTask;
AnekaTask gt = new AnekaTask(task);
app.ExecuteWorkUnit(gt);
}
else
{
// oops we have to use Log Only.
lock (synchLock)
{
total = total - 1;
// was this the last task?
if (total == completed)
{
app.StopExecution();
}
failed = failed + 1;
}
}
}
}
/// <summary>
/// Handles the WorkUnitFinished event.
/// </summary>
/// <param name="sender">event source</param>
/// <param name="args">event arguments</param>
public static void OnWorkUnitFinished
(object sender, WorkUnitEventArgs<AnekaTask> args)
{
// unwrap the task data
MyTask task = args.WorkUnit.UserTask as MyTask;
lock (synchLock)
{
// collect the result
samples[task.X] = task.Result;
// increment the counter
completed = completed + 1;
// was this the last?
Console.WriteLine("Completed so far {0}, Total to complete {1}", completed, total);
if (total == completed)
{
app.StopExecution();
}
}
}
/// <summary>
/// Handles the ApplicationFinished event.
/// </summary>
/// <param name="sender">event source</param>
/// <param name="args">event arguments</param>
public static void
OnApplicationFinished(object sender, ApplicationEventArgs args)
{
// display results
ShowResults();
// release the semaphore
// in this way the main thread can terminate
semaphore.Set();
}
#endregion
}
}


Practical 10
Aim: Demonstrate Thread model application on Aneka
Program:
ThreadDemo.csproj
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProductVersion>8.0.50727</ProductVersion>
<SchemaVersion>2.0</SchemaVersion>
<ProjectGuid>{753ADD9B-FFF4-4EF4-85E0-D4CC2E68EC9A}</ProjectGuid>
<OutputType>Exe</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>Aneka.Samples.ThreadDemo</RootNamespace>
<AssemblyName>warholizer</AssemblyName>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
<DebugType>pdbonly</DebugType>
<Optimize>true</Optimize>
<OutputPath>bin\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>
<ItemGroup>
<Reference Include="System" />
<Reference Include="System.Drawing" />
<Reference Include="System.Xml" />
</ItemGroup>
<ItemGroup>
<Compile Include="Program.cs" />
<Compile Include="Properties\AssemblyInfo.cs" />
<Compile Include="WarholApplication.cs" />
<Compile Include="WarholFilter.cs" />
</ItemGroup>
<ItemGroup>
<None Include="Diagram.cd" />
<None Include="marilyn.jpg">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</None>
</ItemGroup>
<ItemGroup>
<Content Include="conf.xml">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
</ItemGroup>
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<!-- To modify your build process, add your task inside one of the targets below and uncomment it.
Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
<ItemGroup>
<ProjectReference Include="..\..\..\..\src\Core\Base\Aneka\Aneka.csproj">
<Project>{487235CA-2F8A-435E-84A2-B1008894062A}</Project>
<Name>Aneka</Name>
</ProjectReference>
<ProjectReference Include="..\..\..\..\src\Models\Thread Model\Threading\Aneka.Threading.csproj">
<Project>{D74DDE33-9C9D-4559-BF51-9050A6C8302E}</Project>
<Name>Aneka.Threading</Name>
</ProjectReference>
<ProjectReference Include="..\..\..\..\src\Core\Base\Aneka.Util\Aneka.Util.csproj">
<Project>{240A024C-8D08-4BBA-8594-14DFC1724180}</Project>
<Name>Aneka.Util</Name>
</ProjectReference>
</ItemGroup>
</Project>
WarholApplication.cs
#region Namespaces
using System;
using System.Collections.Generic;
using System.Text;

// IList<...> interface.
// StringBuilder class.

using System.IO;
using System.Drawing;

// IOException (IO Errors management)


// Image and Bitmap classes.

using Aneka.Entity;
using Aneka.Threading;
using System.Threading;

// Aneka Common APIs for all models


// Aneka Thread Model
// ThreadStart (AnekaThread initialization)

#endregion
namespace Aneka.Examples.ThreadDemo
{
/// <summary>
/// <para>
/// Class <i><b>WarholApplication</b></i>. This class manages the execution
/// of the <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> on Aneka,
/// thus creating a pop-art style image of a given picture composed of 4 copies
/// of the same image on which the filter is applied with different settings.
/// </para>
/// <para>
/// In order to speed up the execution of the filter the
/// <see cref="T:Aneka.Examples.ThreadDemo.WarholApplication"/>
/// uses the support given by Aneka for execution virtualization and parallelizes
/// the execution of the four filters by using the <i>Grid Thread Programming Model</i>.
/// In particular it uses the following APIs:
/// - <see cref="T:Aneka.Threading.AnekaThread" /> to remotely execute the
/// <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" />.
/// - <see cref="T:Aneka.Entity.AnekaApplication{W,M}"/> for managing the execution of the remote threads.
/// </para>
/// <para>
/// This class constitutes a very simple example on how to configure the
/// <see cref="T:Aneka.Entity.AnekaApplication{W,M}"/> class for the <i>Thread
/// Programming Model</i> and how to use the basic <see cref="T:Aneka.Threading.AnekaThread" />
/// APIs.
/// </para>
/// </summary>
public class WarholApplication
{
#region Properties
/// <summary>
/// Path to the input file.
/// </summary>
protected string inputPath;
/// <summary>
/// Gets or sets the path to the input image
/// file that will be processed by the filter.
/// </summary>
public string InputPath
{
get { return this.inputPath; }
set
{
if ((value == null) || (value == string.Empty))
{
throw new ArgumentException("The InputPath cannot be null or empty!", "InputPath");
}
this.inputPath = value;
}
}
/// <summary>
/// Path to the configuration file.
/// </summary>
protected string configPath;
/// <summary>
/// Gets or sets the path to the file
/// containing the <see cref="T:Aneka.Entity.Configuration" />
/// object used to connect to Aneka.
/// </summary>
/// <remarks>
/// If the property is set to <see langword="null" /> or
/// <see cref="F:System.String.Empty" /> the default values
/// are used:
/// <list type="bullet">
/// <item><i>tcp://localhost:9090/Aneka</i> for the <see cref="P:Aneka.Entity.Configuration.SchedulerUri"
/> property.</item>
/// <item><i>no authentication</i></item>
/// </list>
/// </remarks>
public string ConfigPath
{
get { return this.configPath; }
set { this.configPath = value; }
}
/// <summary>
/// Save path for the filtered image.
/// </summary>
protected string outputPath;
/// <summary>
/// Gets or sets the name of the
/// output file (inclusive of the
/// path) where to save the filtered
/// image.
/// </summary>
/// <remarks>
/// If this value is <see langword="null" />
/// or <see cref="F:System.String.Empty" /> the
/// file is saved into the same directory of
/// <see cref="T:Aneka.Examples.ThreadDemo.WarholApplication.InputPath" />
/// and the name is assigned by appending the ".warhol" suffix to the
/// original name before the extension.
/// </remarks>
public string OutputPath
{
get { return this.outputPath; }
set { this.outputPath = value; }
}
#endregion
#region Implementation Fields
/// <summary>
/// Reference to the <see cref="T:Aneka.Entity.AnekaApplication" /> instance
/// that will be used to submit the execution of the <see cref="T:Aneka.Threading.AnekaThread" />
/// instances used to execute the <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" />.
/// </summary>
protected AnekaApplication<AnekaThread, ThreadManager> application;
/// <summary>
/// List of <see cref="T:Aneka.Threading.AnekaThread" /> instances
/// that are currently running.
/// </summary>
protected IList<AnekaThread> running;
/// <summary>
/// List of the filters that have completed the
/// execution.
/// </summary>
protected IList<WarholFilter> done;
/// <summary>
/// Number of copies of the image that compose
/// one single row of the output image.
/// </summary>
protected int repeatX;
/// <summary>
/// Number of copies of the image that compose
/// one single column of the output image.
/// </summary>
protected int repeatY;
#endregion
#region Public Methods
/// <summary>
/// Creates an empty instance of <see cref="T:Aneka.Examples.ThreadDemo.WarholApplication" />.
/// </summary>
public WarholApplication()
{
}
/// <summary>
/// Performs the distributed filtering by creating
/// four <see cref="T:Aneka.Threading.AnekaThread" />
/// instances and waiting for their termination. Then
/// it composes the images back and saves the output.
/// </summary>
/// <exception cref="T:System.IO.FileNotFoundException"><paramref name="T:Aneka.Samples.ThreadDemo.WarholApplication.InputPath"/> does not exist.</exception>
public void Run()
{
if (File.Exists(this.inputPath) == false)
{
throw new FileNotFoundException("InputPath does not exist.", "InputPath");
}
try
{
// Initializes the AnekaApplication instance.
this.Init();
// read the bitmap
Bitmap source = new Bitmap(this.inputPath);
// create one filter for each of the four slices that will
// compose the final image and starts their execution on
// Aneka by wrapping them into AnekaThread instances...
this.StartExecution(source);
// wait for all threads to complete...
this.WaitForCompletion();
// collect the processed images and compose them
// into one single image.
this.ComposeResult(source);
}
finally
{
// we ensure that the application closes properly
// before leaving the method...
if (this.application != null)
{
if (this.application.Finished == false)
{
this.application.StopExecution();
}
}
}
}
#endregion
#region Helper Methods
/// <summary>
/// Loads the <see cref="T:Aneka.Entity.Configuration" /> and
/// initializes the <see cref="T:Aneka.Entity.AnekaApplication{W,M}" />
/// instance.
/// </summary>
protected void Init()
{
Configuration configuration = null;
if (string.IsNullOrEmpty(this.configPath) == true)
{
// we get the default configuration...
configuration = Configuration.GetConfiguration();
}
else
{
configuration = Configuration.GetConfiguration(this.configPath);
}
this.application = new AnekaApplication<AnekaThread, ThreadManager>(configuration);
}
/// <summary>
/// <para>
/// Starts the execution of the <see cref="T:Aneka.Threading.AnekaThread" />
/// instances.
/// </para>
/// <para>
/// This method creates a set of <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" />
/// instances and configures them with a <see cref="T:Aneka.Threading.AnekaThread" />
/// instance. All the threads are added to a local running queue and then
/// <see cref="T:Aneka.Threading.AnekaThread.Start" /> is invoked.
/// The <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> instances are configured with
/// the <see cref="T:System.Drawing.Bitmap" /> <paramref name="source"/> as input image.
/// </para>
/// </summary>


/// <param name="source"><see cref="T:System.Drawing.Bitmap" /> instance representing the input image of the filters.</param>
protected void StartExecution(Bitmap source)
{
this.running = new List<AnekaThread>();
WarholFilter[] filters = this.CreateFilters(source);
// creates an AnekaThread for each filter
foreach (WarholFilter filter in filters)
{
AnekaThread thread = new AnekaThread(new ThreadStart(filter.Apply), application);
thread.Start();
this.running.Add(thread);
}
}
/// <summary>
/// Collects the single images processed by the filters
/// and composes them into a single image by juxtaposing
/// the filter results.
/// </summary>
/// <param name="source">a <see cref="T:System.Drawing.Bitmap" /> representing the input image to the filter.</param>
protected void ComposeResult(Bitmap source)
{
Bitmap output = new Bitmap(source.Width * this.repeatX, source.Height * this.repeatY,
source.PixelFormat);
Graphics graphics = Graphics.FromImage(output);
int row = 0, col = 0;
foreach (WarholFilter filter in this.done)
{
// NOTE: uncomment the following two lines if you want to have the single
// output of the filters saved to disk.
// string fileName = this.GetNewName(this.inputPath, String.Format("{0}.{1}", row, col));
// filter.Image.Save(fileName);
graphics.DrawImage(filter.Image, row * source.Width, col * source.Height);
row++;
if (row == this.repeatX)
{
row = 0;
col++;
}
}
graphics.Dispose();
if (string.IsNullOrEmpty(this.outputPath) == true)
{
this.outputPath = this.GetNewName(this.inputPath, "warhol");
}
output.Save(this.outputPath);
}


/// <summary>
/// Waits until all the threads submitted to Aneka
/// successfully complete their execution. When this
/// method returns, the list of running threads is empty
/// and the number of completed filters is equal to the
/// number of submitted threads.
/// </summary>
protected void WaitForCompletion()
{
this.done = new List<WarholFilter>();
bool bSomeToGo = true;
while (bSomeToGo == true)
{
foreach (AnekaThread thread in this.running)
{
thread.Join();
}
for (int i = 0; i < this.running.Count; i++)
{
AnekaThread thread = this.running[i];
if (thread.State == WorkUnitState.Completed)
{
this.running.RemoveAt(i);
i--;
WarholFilter filter = (WarholFilter) thread.Target;
this.done.Add(filter);
}
else
{
// it must have failed: resubmit it
thread.Start();
}
}
bSomeToGo = this.running.Count > 0;
}
}
/// <summary>
/// Creates the filters that will be used to
/// produce the images that will compose the
/// output image. This method also sets the
/// number of columns and rows that the final
/// output image will be composed of.
/// </summary>
/// <param name="source">a <see cref="T:System.Drawing.Bitmap" /> representing the input image to the filter.</param>
/// <returns>A <see cref="T:System.Array" /> of <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> instances.</returns>
protected virtual WarholFilter[] CreateFilters(Bitmap source)
{
WarholFilter[] filters = new WarholFilter[4];
WarholFilter one = new WarholFilter();
one.Image = source;
one.Palette = WarholFilter.FuchsiaGreenWhite;
filters[0] = one;

WarholFilter two = new WarholFilter();
two.Image = source;
two.Palette = WarholFilter.YellowGreenNavy;
filters[1] = two;
WarholFilter three = new WarholFilter();
three.Image = source;
three.Palette = WarholFilter.FuchsiaOrangeBlue;
filters[2] = three;
WarholFilter four = new WarholFilter();
four.Image = source;
four.Palette = WarholFilter.GreenOrangeGainsboro;
filters[3] = four;
this.repeatX = 2;
this.repeatY = 2;
return filters;
}
/// <summary>
/// Creates a new name from the given file <paramref name="name"/>
/// by inserting the given <paramref name="suffix"/> before the file
/// extension.
/// </summary>
/// <param name="name">A <see langword="string" /> containing the source file name.</param>
/// <param name="suffix">A <see langword="string" /> containing the suffix to append to the file.</param>
/// <returns>A <see langword="string" /> containing the new name.</returns>
protected string GetNewName(string name, string suffix)
{
string pathTarget = Path.GetDirectoryName(name);
string destName = String.Format("{0}.{1}{2}", Path.GetFileNameWithoutExtension(name), suffix, Path.GetExtension(name));
pathTarget = Path.Combine(pathTarget, destName);
return pathTarget;
}
#endregion
}
}
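The fan-out / wait / resubmit pattern that StartExecution and WaitForCompletion implement with AnekaThread objects can be sketched in plain Python. This is an illustrative stand-in using the standard-library thread pool, not the Aneka API; `apply_filter` and `run_all` are hypothetical names introduced only for this sketch.

```python
# Illustrative sketch of the submit / wait / resubmit pattern from
# StartExecution and WaitForCompletion, using a plain thread pool.
from concurrent.futures import ThreadPoolExecutor, as_completed

def apply_filter(name):
    # stand-in for WarholFilter.Apply: pretend to process one image
    return "filtered-" + name

def run_all(names, max_retries=1):
    done = []
    attempts = {n: 0 for n in names}
    with ThreadPoolExecutor() as pool:
        # fan out: one unit of work per filter, like StartExecution
        pending = {pool.submit(apply_filter, n): n for n in names}
        # wait and collect results, resubmitting failed units,
        # like the thread.Start() retry in WaitForCompletion
        while pending:
            future = next(as_completed(pending))
            name = pending.pop(future)
            try:
                done.append(future.result())
            except Exception:
                if attempts[name] < max_retries:
                    attempts[name] += 1
                    pending[pool.submit(apply_filter, name)] = name
    return done
```

As in the C# version, the caller only sees completed work units in `done`; failed units are retried transparently (here with a bounded retry count to avoid looping forever, which the C# loop does not enforce).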
WarholFilter.cs
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;
namespace Aneka.Examples.ThreadDemo
{
/// <summary>
/// <para>
/// Class <i><b>WarholFilter</b></i>. Applies the Warhol effect on a given <see cref="T:System.Drawing.Bitmap" />.
/// </para>
/// <para>
/// The Warhol effect is an image filter that, when applied to an image, produces a simplified version
/// of the image with a reduced color set. What characterizes this filter is its resemblance to
/// the paintings of Andy Warhol, who invented this technique. Warhol paintings are characterized
/// by repeated copies of a given painting. Each copy is represented with a different color
/// set, and the color space is reduced from the original. All these
/// copies are then placed side by side in the final painting.
/// </para>
/// <para>
/// This class reduces the color space of <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image" />
/// and remaps the colors of the image according to the given <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Palette" />.
/// The resulting image is then stored in the <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image" /> property.
/// </para>
/// </summary>
[Serializable]
public class WarholFilter
{
/// <summary>
/// Common color set 1: Yellow, Dark Green, Navy.
/// </summary>
public static readonly Color[] YellowGreenNavy = new Color[3] { Color.Yellow, Color.DarkGreen,
Color.Navy };
/// <summary>
/// Common color set 2: Fuchsia, Orange, Dark Blue.
/// </summary>
public static readonly Color[] FuchsiaOrangeBlue = new Color[3] { Color.Fuchsia, Color.Orange,
Color.DarkBlue };
/// <summary>
/// Common color set 3: Green, Orange, Gainsboro.
/// </summary>
public static readonly Color[] GreenOrangeGainsboro = new Color[3] { Color.Green, Color.Orange,
Color.Gainsboro };
/// <summary>
/// Common color set 4: Fuchsia, Dark Olive Green, White Smoke.
/// </summary>
public static readonly Color[] FuchsiaGreenWhite = new Color[3] { Color.Fuchsia, Color.DarkOliveGreen,
Color.WhiteSmoke };
/// <summary>
/// <see cref="T:System.Drawing.Bitmap" /> reference that
/// contains the instance that will be filtered.
/// </summary>
protected Bitmap image;
/// <summary>
/// Gets or sets the <see cref="T:System.Drawing.Bitmap" />
/// instance that will be filtered. This property is used
/// both as input and as output of the filtering process.
/// </summary>
public Bitmap Image
{
get { return this.image; }
set { this.image = value; }
}
/// <summary>
/// Color palette used to remap the color space
/// of <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image" />.
/// </summary>
protected Color[] palette;
/// <summary>
/// Gets or sets the palette used to remap the color
/// space of <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image" />.
/// </summary>
public Color[] Palette
{
get { return this.palette; }
set { this.palette = value; }
}
/// <summary>
/// Applies the filter and processes the image instance referenced by
/// <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image" /> by remapping the
/// color values according to <see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Palette" />.
/// </summary>
/// <exception cref="T:System.ArgumentNullException"><see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Image"/> is <see langword="null"/>.</exception>
/// <exception cref="T:System.ArgumentException"><see cref="P:Aneka.Examples.ThreadDemo.WarholFilter.Palette"/> is <see langword="null"/> or empty.</exception>
public void Apply()
{
if (this.image == null)
{
throw new ArgumentNullException("Image", "Cannot apply the filter to a null image.");
}
if ((this.palette == null) || (this.palette.Length == 0))
{
throw new ArgumentException("The selected palette is null or empty.", "Palette");
}
this.image = this.Filter(this.image, this.palette);
}
/// <summary>
/// Applies the Warhol filter to <paramref name="source"/> by using
/// the color space identified by <paramref name="palette"/>.
/// </summary>
/// <param name="source">A <see cref="T:System.Drawing.Bitmap" /> instance that will be filtered.</param>
/// <param name="palette">An <see cref="T:System.Array" /> of <see cref="T:System.Drawing.Color" /> values defining the palette to apply.</param>
/// <returns>A <see cref="T:System.Drawing.Bitmap" /> representing the filtered image.</returns>
private Bitmap Filter(Bitmap source, Color[] palette)
{
// copy the palette, then sort it by increasing brightness..
Color[] luminance = new Color[palette.Length];
for (int i = 0; i < palette.Length; i++)
{
Color palSample = palette[i];
luminance[i] = palSample;
}
for(int i = 1; i < palette.Length; i++)
for (int j = 0; j < i; j++)
{
if (luminance[j].GetBrightness() > luminance[i].GetBrightness())
{
Color swapColor = luminance[j];
luminance[j] = luminance[i];
luminance[i] = swapColor;
}
}
// now we have to pick the colors
// according to their luminosity and
// put them into classes: the point
// is that we want the colors equally
// distributed over the luminance
// palette, so we identify the
// brightness range of the image and
// divide it equally into as many
// classes as there are palette colors

float max = 0.0f;
float min = 1.0f;
float mid = 0.0f;
Bitmap target = new Bitmap(source.Width, source.Height, source.PixelFormat);
for (int x = 0; x < source.Width; x++)
for (int y = 0; y < source.Height; y++)
{
Color sample = source.GetPixel(x, y);
float b = sample.GetBrightness();
if (b < min)
{
min = b;
}
if (b > max)
{
max = b;
}
mid = mid + b;
}
// now we can compute the range
// of colors...
float delta = (max - min) / luminance.Length;
mid = mid / (source.Width * source.Height);
// we want to center the value of mid
// in the scale; to do this we
// compute the threshold values of the
// brightness array around the average
float[] brightness = new float[luminance.Length];
// we fix the top and bottom values
this.Rescale(brightness.Length, 0, mid, min, max, brightness);
brightness[brightness.Length - 1] = max;
for (int x = 0; x < source.Width; x++)
for (int y = 0; y < source.Height; y++)
{
Color sample = source.GetPixel(x, y);
float b = sample.GetBrightness();
for (int i = 0; i < brightness.Length; i++)
{
if (b <= brightness[i])
{
target.SetPixel(x, y, luminance[i]);
break;
}
}
}
return target;
}
/// <summary>
/// <para>
/// Fills the <paramref name="values"/> array with an equally distributed set of values
/// ranging from <paramref name="min"/> to <paramref name="max"/> by using <paramref name="midPoint"/>
/// as a starting point.
/// </para>
/// <para>
/// The method applies recursion on the value of <paramref name="delta"/>, which together with
/// <paramref name="start" /> identifies the subarray to fill at each call. The value of <paramref name="midPoint"/>
/// is used to set the central value of the subarray. Two subarrays are created by using the central position,
/// and on each subarray <see cref="M:Aneka.Examples.ThreadDemo.WarholFilter.Rescale" /> is called by computing
/// the value of <paramref name="delta"/> as (<paramref name="delta"/> / 2) and defining the new value of
/// <paramref name="midPoint"/> as:
/// <list type="bullet">
/// <item><paramref name="min"/> + (<paramref name="midPoint"/> - <paramref name="min"/>) / 2 for the left subarray.</item>
/// <item><paramref name="midPoint"/> + (<paramref name="max"/> - <paramref name="midPoint"/>) / 2 for the right subarray.</item>
/// </list>
/// The values of <paramref name="min"/> and <paramref name="max"/> are set accordingly.
/// </para>
/// <para>The recursion terminates when the value of <paramref name="delta"/> becomes zero.</para>
/// </summary>
/// <param name="delta">Length of the subarray contained in <paramref name="values"/> that will be filled with threshold brightness values.</param>
/// <param name="start">Position of the first element of the subarray in <paramref name="values"/>.</param>
/// <param name="midPoint">Starting (central) value of the brightness that will be used to generate all the other values.</param>
/// <param name="min">Minimum value of the brightness.</param>
/// <param name="max">Maximum value of the brightness.</param>
/// <param name="values"><see cref="T:System.Array"/> that will be filled with brightness values.</param>
protected void Rescale(int delta, int start, float midPoint, float min, float max, float[] values)
{
if (delta > 0)
{
int newDelta = delta / 2;
if (start + newDelta < values.Length)
{
values[start + newDelta] = midPoint;
}
this.Rescale(newDelta, start, min + (midPoint - min) / 2, min, midPoint, values);
int newStart = start + newDelta + 1;
if (newStart < values.Length)
{
this.Rescale(newDelta, newStart, midPoint + (max - midPoint) / 2, midPoint, max, values);
}
}
}
}
}
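The threshold generation performed by Rescale, and the way Filter uses the resulting thresholds to quantize pixel brightness onto the palette, can be traced with a small Python port. The arithmetic below mirrors the C# listing above; the function and variable names (`rescale`, `thresholds`, `quantize`) are mine, introduced only for this sketch.

```python
# Python port of WarholFilter.Rescale: recursively fills `values` with
# brightness thresholds between lo and hi, centered around mid_point.
def rescale(delta, start, mid_point, lo, hi, values):
    if delta > 0:
        new_delta = delta // 2
        if start + new_delta < len(values):
            values[start + new_delta] = mid_point
        # left subarray: centered between lo and mid_point
        rescale(new_delta, start, lo + (mid_point - lo) / 2, lo, mid_point, values)
        new_start = start + new_delta + 1
        if new_start < len(values):
            # right subarray: centered between mid_point and hi
            rescale(new_delta, new_start, mid_point + (hi - mid_point) / 2, mid_point, hi, values)

def thresholds(n, lo, hi, mid):
    # mirrors the call site in Filter: fill, then pin the top slot to max
    values = [0.0] * n
    rescale(n, 0, mid, lo, hi, values)
    values[-1] = hi
    return values

def quantize(brightness, thresh, palette):
    # mirrors the inner loop of Filter: the first threshold the sample
    # does not exceed selects the palette color
    for t, color in zip(thresh, palette):
        if brightness <= t:
            return color
    return palette[-1]
```

For a 4-entry threshold array over the full brightness range with average 0.5, `thresholds(4, 0.0, 1.0, 0.5)` yields `[0.125, 0.25, 0.375, 1.0]`: the slot initially set to the global midpoint (0.5) is overwritten by the left half's right-branch recursion (0.375) before the top value is pinned to the maximum.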
Conf.xml
<?xml version="1.0" encoding="utf-8" ?>
<Aneka>
<UseFileTransfer value="false" />
<Workspace value="." />
<SingleSubmission value="false" />
<ResubmitMode value="AUTO" />
<PollingTime value="1000" />
<LogMessages value="true" />
<SchedulerUri value="tcp://localhost:9090/Aneka" />
<UserCredential type="Aneka.Security.UserCredentials" assembly="Aneka">
<Instance username="foo" password="bar"/>
</UserCredential>
</Aneka>
Program.cs
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;
using System.IO;
using Aneka.Entity;
using Aneka.Threading;
namespace Aneka.Examples.ThreadDemo
{
/// <summary>
/// <para>
/// Class <i><b>Program</b></i>. Virtualizes the execution of the
/// <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" /> by using
/// the Grid Thread Programming Model.
/// </para>
/// <para>
/// The class creates a <see cref="T:Aneka.Entity.AnekaApplication{W,M}"/>
/// instance configured for the <i>Grid Thread Programming Model</i> and
/// provides a virtualization feature to the execution of the <see cref="T:Aneka.Examples.ThreadDemo.WarholFilter" />.
/// The application takes an image as input, creates four copies of it and
/// applies the filter on each image in parallel, then it waits for the results
/// and composes the four images into a single image.
/// </para>
/// <para>
/// The demo shows:
/// <list type="bullet">
/// <item>How to use the <see cref="T:Aneka.Threading.AnekaThread" /> APIs.</item>
/// <item>How to set up the <see cref="T:Aneka.Entity.AnekaApplication{W,M}"/>
/// instance for the Grid Thread Programming Model.</item>
/// <item>How to manage the execution of the <see cref="T:Aneka.Entity.AnekaApplication{W,M}" /> instance.</item>
/// <item>How to handle events and process results (<see cref="E:Aneka.Entity.AnekaApplication{W,M}.ApplicationFinished"/>).</item>
/// </list>
/// </para>
/// </summary>
class Program
{
/// <summary>
/// Main application entry point. Parses the <paramref name="args"/> array and
/// starts the <see cref="T:Aneka.Examples.ThreadDemo.WarholApplication" /> to
/// perform the filtering.
/// </summary>
/// <param name="args">A <see cref="T:System.Array" /> of <see langword="string" />
/// containing the command line parameters of the application.</param>
static void Main(string[] args)
{
if (args.Length >= 1)
{
string inputFile = args[0];
string outputFile = (args.Length > 1 ? args[1] : null);
string confFile = null;
if (File.Exists(inputFile) == false)
{
Console.WriteLine("warholizer: [ERROR] input file [{0}] not found. EXIT", inputFile);
return;
}
else
{
// the input file exists...
// now we check for the configuration file.
if (args.Length == 3)
{
confFile = args[2];
if (File.Exists(confFile) == false)
{
Console.WriteLine("warholizer: [ERROR] configuration file [{0}] not found. EXIT", confFile);
return;
}
}
// now we check for the out file to simply issue
// a warning if the file exists...
if (File.Exists(outputFile) == true)
{
Console.WriteLine("warholizer: [WARNING] output file [{0}] already exists and it will be overwritten.", outputFile);
}
}
// ok, at this point we have the following conditions:
// 1. inputFile exists
// 2. confFile, if given, exists
// we can start the application..
WarholApplication app = new WarholApplication();
app.InputPath = inputFile;
app.OutputPath = outputFile;
app.ConfigPath = confFile;
try
{
app.Run();
}
catch (Exception ex)
{
Console.WriteLine("warholizer: [ERROR] exception:");
Console.WriteLine("\tMessage: " + ex.Message);
Console.WriteLine("\tStacktrace: " + ex.StackTrace);
Console.WriteLine("EXIT");
}
}
else
{
Program.ShowHelp();
}
}
/// <summary>
/// Displays to the console a simple help screen for using
/// the application.
/// </summary>
private static void ShowHelp()
{
Console.WriteLine("warholizer v1.0: Virtualized Warhol Effect powered by Aneka");
Console.WriteLine("Copyright @ 2008 Manjrasoft Pty.");
Console.WriteLine();
Console.WriteLine("usage: warholizer input_image_path output_image_path [conf_path]");
Console.WriteLine("where: ");
Console.WriteLine(" input_image_path : path to the input image.");
Console.WriteLine(" output_image_path : path where to save the output image.");
Console.WriteLine(" conf_path : path to the configuration file for connecting to Aneka.");
Console.WriteLine();
}
}
}
