Grid computing middleware software: manages and executes all the activities
related to identification, allocation, de-allocation and consolidation of
computing resources, transparently to the end-users.
It aims at transparent sharing of all computational resources in a grid of
computing systems so as to meet all the dynamic demands of the end-user
community.
A grid computing environment is a necessity for many end-users who cannot
afford huge computational resources, both hardware and software.
Most large organizations have a network of computing resources that are
under-utilized at some locations and over-utilized at others; connecting
them in a grid and offering transparent, ubiquitous services across
geographical areas allows the under-utilized resources to serve specific,
higher-priority needs.
What is a Metacomputer?
A metacomputer is a collection of networked computers that appears to its
users as a single large virtual computer; that is why the terms grid and
computational grid are used to describe a metacomputer.
Metacomputing encompasses two broad categories:
Seamless access to high-performance computing resources;
Linking of computing resources, instruments and other resources.
(a) Processors and memory: The primary resources of a metacomputer are the
processors and the associated memory units. They form the basic computational
power.
A metacomputer is a single virtual view of several processors and their
associated memory units.
The virtual environment spans the metacomputer and makes the multiple
distributed computers usable as a single system, to both the system
administrator and the individual users; scientific instruments, if any,
and computer systems located thousands of miles away all appear as a
single system.
The public key and the private key are mathematically related so that a message encrypted
with a recipient's public key can be decrypted only with that recipient's private key.
The algorithm developed for this purpose is by Rivest, Shamir and Adleman, shortened to
RSA, in which keys are generated mathematically, in part, by combining prime numbers.
The security of the RSA algorithm is based on the fact that it is very difficult to factor
extremely large numbers, especially those with hundreds of digits.
RSA keys are typically 512 bits (about 154 decimal digits) or longer, and the usage of this
technology has led to integer factorization becoming an active research area.
The factorization challenge provides a test bed for factoring implementations and
one of the largest collections of factorization results from many different
experts worldwide.
Since factorization is a computationally intensive job, parallel factorization algorithms
were developed so that the factoring can be computed in parallel on several processors
across a network of computational resources, i.e. processors, memory and storage.
These algorithms do not require much communication after the initial set-up of the
computations, which makes it possible for many contributors to each provide a small part
of a large factorization project.
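As a minimal sketch of this idea (using naive trial division rather than the actual Number Field Sieve, and a toy composite number chosen for illustration), the Python fragment below splits the search for divisors into independent ranges handled by separate worker processes, with no communication between workers once the ranges are assigned.

    # Embarrassingly parallel factoring sketch: each worker searches an
    # independent range of candidate divisors (toy trial division, not NFS).
    from math import isqrt
    from multiprocessing import Pool

    N = 10_403                                   # toy composite (= 101 * 103)

    def search(bounds):
        """Check one independent range of candidate divisors."""
        lo, hi = bounds
        return [d for d in range(lo, hi) if N % d == 0]

    if __name__ == "__main__":
        limit = isqrt(N) + 1                     # divisors need checking only up to sqrt(N)
        step = 25
        ranges = [(i, min(i + step, limit)) for i in range(2, limit, step)]
        with Pool(4) as pool:                    # 4 worker processes, no cross-talk
            found = [d for part in pool.map(search, ranges) for d in part]
        print(found, [N // d for d in found])    # [101] [103]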
Initially, the factoring code and related information/data were distributed through
e-mail to all the individuals concerned.
Subsequently, the FAFNER project (started by Bellcore Labs, Syracuse University and
Co-Operating Systems) was initiated to factor RSA-130 on the Web, using a new numerical
technique called the Number Field Sieve (NFS) factorization method and web servers for
computation.
A web interface for NFS was produced.
A contributor used a web form to invoke server-side CGI scripts (written in Perl).
Contributors could access, through web pages, a wide range of support services for
the sieving step of the factorization.
The activities performed on the web were: software distribution, project
documentation, user registration, dissemination of sieving tasks, collection of
relations, and real-time sieving status reports.
Cluster management was done by CGI scripts, directing individual sieving
workstations through appropriate day/night sleep cycles to minimize the impact on
the owners of the workstations used in the cluster.
Contributors downloaded and built a sieving software daemon (web client), which
used the HTTP protocol to get the values and post the resulting relations back to
a CGI script on the web server.
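A rough sketch of this contributor-side loop is shown below. The server URL, paths and field names are invented purely for illustration (the real FAFNER client was a compiled sieving daemon talking to Perl CGI scripts), but the get-task / post-relations pattern over HTTP is the same idea.

    # Hedged sketch: fetch a sieving task over HTTP, compute locally,
    # then POST the results back to a server-side CGI script.
    import json
    import urllib.request

    SERVER = "http://factoring.example.org/cgi-bin"      # hypothetical server

    def fetch_task():
        with urllib.request.urlopen(f"{SERVER}/get_task") as resp:
            return json.loads(resp.read())                # e.g. {"start": ..., "end": ...}

    def sieve(task):
        # placeholder for the actual sieving computation of the NFS
        return {"range": [task["start"], task["end"]], "relations": []}

    def post_relations(relations):
        data = json.dumps(relations).encode()
        req = urllib.request.Request(f"{SERVER}/post_relations", data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        task = fetch_task()
        post_relations(sieve(task))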
The approach was successful due to several factors such as:
Even single workstations with small memory (4 MB) could perform useful work,
using a small sieve and small bounds;
Anonymous registration was supported: users could contribute their hardware
resources anonymously;
A consortium of sites was set up to run the CGI script package locally;
Monitoring was done hierarchically by the RSA-130 web servers, round the
clock, with minimal human intervention.
2. Project I-WAY: I-WAY (Information Wide Area Year) was an experimental high-
performance network linking many supercomputers and advanced visualization environments.
I-WAY was developed in 1995 with the objective of integrating existing high-bandwidth
networks with telephone systems.
The servers, datasets and software environments located in 17 different U.S. locations
were integrated by connecting them with 10 networks of different bandwidths and
protocols, using different routing and switching technologies.
The network, based on ATM (Asynchronous Transfer Mode), provided the backbone,
supporting both TCP/IP over ATM and direct ATM-oriented protocols.
To standardize I-WAY software interface management, key sites installed point-of-
presence (I-POP) computer systems to serve as their respective gateways to I-WAY.
These I-POP systems were UNIX workstations configured homogeneously and containing
a standard software environment called I-Soft; I-Soft helped to overcome problems
and issues related to heterogeneity, scalability, security and performance.
The I-POP machines provided uniform authentication, resource reservation and process
creation.
Each I-POP system was accessible from the Internet and operated within its site's
firewall.
It also had an ATM interface, which allowed monitoring and management of the site's
ATM switch.
A resource scheduler, called the Computational Resource Broker (CRB), was used,
consisting of user-to-CRB and CRB-to-local-scheduler protocols.
A central scheduler maintained queues of jobs and tables indicating the state of local
machines, allocated jobs to machines, and so on.
Multiple local schedulers also operated for local scheduling.
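The sketch below illustrates this scheduling pattern in miniature: a central scheduler keeps a queue of pending jobs and a table of machine states, and hands jobs to idle machines. It is only an illustration of the idea, not the actual CRB implementation, and the machine and job names are hypothetical.

    # Minimal central-scheduler sketch in the CRB style (illustration only).
    from collections import deque

    class CentralScheduler:
        def __init__(self, machines):
            self.jobs = deque()                          # queue of pending jobs
            self.state = {m: "idle" for m in machines}   # table of machine states

        def submit(self, job):
            self.jobs.append(job)                        # enqueue a new job

        def dispatch(self):
            """Allocate queued jobs to whichever machines are currently idle."""
            assignments = []
            for machine, status in self.state.items():
                if status == "idle" and self.jobs:
                    job = self.jobs.popleft()
                    self.state[machine] = "busy"
                    assignments.append((job, machine))
            return assignments

        def job_finished(self, machine):
            self.state[machine] = "idle"                 # machine is free again

    scheduler = CentralScheduler(["popA", "popB"])       # hypothetical I-POP hosts
    scheduler.submit("render-frames")
    scheduler.submit("fluid-sim")
    print(scheduler.dispatch())  # [('render-frames', 'popA'), ('fluid-sim', 'popB')]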
The AFS file system was used for file movement and processing functions.
To support user-level tools, a communications library called Nexus was used; it
provided automatic configuration mechanisms so that appropriate settings were chosen
for each environment.
Applications supported were: supercomputing, virtual reality and multi-virtual reality,
in addition to GUI and web video applications. Many features of I-WAY were inputs to Globus.
Grid computing can be used in metacomputing mode for scientific applications.
The goal of grid computing is to provide users with a single view and a single
mechanism that can be utilized to support any number of computing tasks:
The grid leverages its extensive information and computing capabilities to meet the
processing and storage requirements of the task, and all this is done across the globe
with clusters of computer systems, yet the user sees only a single virtual computer
serving his/her individual computational requirements to the fullest satisfaction.
Maximum resource utilization is achieved, providing the fastest and cheapest service
and the greatest end-user satisfaction through quality of service.
Thus, by adopting the grid computing approach, an organization can ensure maximum
utilization of its computational resources, saving on costs while simultaneously
improving the quality of service, and thereby meeting its business objectives.