
Computer and Automation Research Institute Hungarian Academy of Sciences

P-GRADE Visual Parallel Programming Environment


Robert Lovas
Laboratory of Parallel and Distributed Systems MTA SZTAKI Research Institute

rlovas@sztaki.hu www.lpds.sztaki.hu

P-GRADE : Highlights
P-GRADE is a parallel programming environment that supports the whole life-cycle of parallel program development
For non-specialist programmers it provides a complete solution for efficient and easy parallel program development
Fast reengineering of sequential programs for parallel computers
Unified graphical support in program design, debugging and performance analysis
Portability across supercomputers and heterogeneous workstation/PC clusters based on PVM (and MPI)

Tools of P-GRADE
GRAPNEL: hybrid parallel programming language
  graphics to express parallelism
  C/C++ to describe the sequential parts
GRED: graphical editor
GRP2C: pre-compiler to (C/C++) + (PVM/MPI)
DIWIDE: integrated distributed debugger and animation system
GRM: distributed monitoring system
PROVE: integrated visualisation tool
TLC Engine: model checker for temporal logic specifications (ongoing project with LINZ/GUP)

Parallel Program Design

The GRAPNEL program is designed with the GRED editor and saved as a GRP file.

Life-cycle of Parallel Program Development and its support in P-GRADE

Design (GRAPNEL, GRED) produces the GRP file
Pre-compilation (GRP2C) generates the C source code, a cross-reference file and a makefile
Building executables: the C compiler and linker, together with the PVM or MPI library and the GRM library, produce the GRP-PVM or GRP-MPI executables
Mapping: user mapping of processes onto processors
Execution is supported by debugging (DIWIDE), monitoring (GRM, producing a trace file), model checking (TLC Engine) and visualisation (PROVE)

Design Goals of GRAPNEL


Graphical interface
  to define all parallel activities
  strong support for hierarchical design
  visual abstractions to hide the low-level details of message passing
C/C++ (or Fortran) to describe the sequential parts
Strong support for parallelizing sequential applications
Support for programming in the large
No steep learning curve

GRAPNEL = (C/C++) + graphics

GRAPNEL: GRaphical Process NEt Language

Programming paradigm: message-passing
  component processes run in parallel and can interact only by sending and receiving messages

Communication model:
  point-to-point, synchronous/asynchronous
  collective (e.g. multicast, scatter, reduce)

Process model:
  single processes
  process groups
  predefined process communication templates

Three layers of GRAPNEL

GRAPNEL
Hierarchical design levels: graphics used at the application level
  defines the interprocess communication topology
  port protocols
  graphics hides the PVM/MPI function calls
Support for the SPMD programming style
  predefined communication patterns
  automatic scaling of parallel programs

Communication Templates
Pre-defined regular process topologies:
  process farm
  pipeline
  2D mesh
  tree
User defines:
  the representative processes
  the actual size
Automatic scaling

Mesh Template

Tree Template

Process Groups
Hierarchical design (subgraph abstraction)
Collective communication (group ports):
  multicast
  scatter
  gather
  reduce

GRAPNEL
Hierarchical design levels:
  graphics used at the process-internal level
  C/C++ used at the text level
Synchronous/asynchronous communication
Programming in the large:
  any C/C++ library call can be included in text blocks
  graphical support for object-based programming

GRAPNEL
Structuring facility by macro graphs

GRED Editor
Supports the creation of all the elements of GRAPNEL
Drag-and-drop style of drawing
Cut/copy/paste/move on graphical objects
Automatic port positioning with minimal length and crossing of communication channels

GRED Editor
Extremely easy and fast construction of the process graph
Automatic arrangement of the process graph
Automatic resizing of process windows
Cut/copy/paste on graphical objects
Macro graph construction at arbitrarily nested levels
C/C++ code can be edited with any standard text editor

GRP2C Pre-compiler
Automatic generation of PVM and MPI calls based on the GRAPNEL graphics:
  GRAPNEL (C/C++ + graphics) -> GRP2C -> generated code (C/C++ + PVM/MPI)
Automatic code instrumentation for debugging and performance monitoring

Debugging by DIWIDE

Macrostep Debugging (and PROVE)

Performance monitoring and analysis with PROVE

[Screenshot: observing an application running on a workstation cluster connected by a high-speed switch]

Phases of Performance Visualisation

Source code instrumentation (GRAPNEL/GRED): automatic, filtering, selectable program units, on/off facility
Runtime monitoring (GRM): monitoring modes, individual events, statistics
Data analysis and visualisation (PROVE): click-back facility

Source code click-back facility and click-forward facility

Behaviour Window of PROVE


Scrolling visualisation windows forwards and backwards
User-controlled focus on processors, processes and messages
Zooming and event-filtering facilities

PROVE Performance analyser


Various views for displaying performance information
Synchronised multi-window visualisation

PROVE Summary Windows


Various views for displaying summary information
Synchronised multi-window visualisation

PROVE Statistics Windows


Profiling based on counters
Enables the analysis of very long-running programs

The GRM Monitor


Off-line monitoring (GRADE)
stores trace events in a (local or global) storage and makes it available after execution for post-mortem processing.

Semi-on-line monitoring (P-GRADE)
  stores trace events in a storage but makes them available to the visualisation tool at any time during execution, on the user's request
  interactive usage of PROVE
  the user can remove the already inspected part of the trace
  evaluation of long-running programs
  macrostep debugging in P-GRADE with execution visualisation

GRM monitor
  application-level monitor
  tracing + statistics collection
  semi-on-line

Architecture: on every host of the remote cluster a local monitor (LM) collects trace events from the application processes through a shared-memory buffer; the LMs forward the events over sockets to the main monitor (MM, GRM) running on the local host next to the P-GRADE tools (DIWIDE, GRED, PROVE), and the trace file is written on the server host.

Portability Supported Hardware/Software Platforms


Workstation clusters
SGI MIPS / IRIX 5.x/6.x (MTA SZTAKI, Univ. of Vienna)
Sun UltraSPARC / Solaris 2.x (Univ. of Athens)
Intel x86 / Linux (MTA SZTAKI)

Supercomputers
Hitachi SR2201 / HI-UX/MPP (Polish-Japanese School, Warsaw)
Cray T3E / UNICOS (Jülich, Germany)
Sun Enterprise 10000 / Solaris 8 (Budapest, Hungary)

International installations
Current
UK Austria Spain Portugal Poland Germany Slovakia Greece Japan Mexico USA

Further Developments
Family of parallel programming environments
P-GRADE -> VisualMP (ongoing project)
  checkpointing
  dynamic load balancing
  fault tolerance

VisualGrid
  grid resource management
  grid monitoring
  mobile processes

Conclusion

Current applications in physics and weather forecasting
Download version and further information: www.lpds.sztaki.hu

Thank You ...
