
4. Parallel Programming with MPI

4.1. MPI: Basic Concepts and Definitions
4.1.1. The Notion of a Parallel Program
4.1.2. Data Communication Operations
4.1.3. The Notion of Communicators
4.1.4. Data Types
4.1.5. Virtual Topologies
4.2. Introduction to the Development of Parallel Programs with MPI
4.2.1. Fundamentals of MPI
4.2.1.1 Initialization and Termination of MPI Programs
4.2.1.2 Determining the Number and the Rank of Processes
4.2.1.3 Message Sending
4.2.1.4 Message Reception
4.2.1.5 A First Parallel Program with MPI
4.2.2. Measuring the Execution Time of an MPI Program
4.2.3. Introduction to Collective Data Communication Operations
4.2.3.1 Broadcasting Data
4.2.3.2 Data Reduction (Transferring Data from All the Processes to One Process)
4.2.3.3 Synchronization of Computations
4.3. Modes of Data Communication
4.3.1. Message Sending Modes
4.3.2. The Non-Blocking Exchange of Data between Processes
4.3.3. The Simultaneous Execution of Sending and Receiving
4.4. Collective Data Communication Operations
4.4.1. Scattering Data from One Process to All the Processes
4.4.2. Gathering Data from All the Processes on One Process
4.4.3. The All-to-All Data Exchange
4.4.4. Additional Data Reduction Operations
4.4.5. The Summary List of the Collective Data Operations
4.5. Derived Data Types in MPI
4.5.1. The Notion of the Derived Data Type
4.5.2. The Ways of Constructing Derived Data Types
4.5.2.1 The Contiguous Method of Construction
4.5.2.2 The Vector Method of Construction
4.5.2.3 The Indexed Method of Construction
4.5.2.4 The Structural Method of Construction
4.5.3. Declaring and Deleting Derived Types
4.5.4. The Formation of Messages by Means of Packing and Unpacking Data
4.6. Managing Groups of Processes and Communicators
4.6.1. Managing Groups
4.6.2. Managing Communicators
4.7. Virtual Topologies
4.7.1. Cartesian Topologies (Grids)
4.7.2. Graph Topologies
4.8. Additional Information on MPI
4.8.1. The Development of Parallel Programs with MPI in the Algorithmic Language Fortran
4.8.2. An Overview of MPI Program Execution Environments
4.8.3. Additional Possibilities of the MPI-2 Standard
4.9. Summary
4.10. Literature Overview
4.11. Review Questions
4.12. Exercises

This chapter is devoted to the methods of parallel programming for computer systems with distributed memory with the use of MPI.

As was noted earlier (see Section 1.6), in distributed-memory systems each processor has direct access only to its own local memory; the data of other processors can be obtained only by the explicit transmission of messages between the processors. The generally accepted interface for organizing such exchanges is MPI (message passing interface). Two points should be understood about this way of organizing parallel computations:

1. In the most general case, to distribute the computations among the processors one has to analyze the algorithm, single out fragments of the computations that can be executed independently, distribute these fragments among the processors, and organize the necessary information interaction. In MPI a simpler approach is taken: one single program is developed, and this program is executed simultaneously on all the available processors. To make the different processes perform different computations, the program can branch depending on the identifier (rank) of the process on which it is being executed; the data processed by the different processes can differ as well. Such a way of developing parallel programs is usually called the model of "single program multiple processes" (SPMP 1)).

2. The operations of data communication constitute the main means of interaction between the processes, since the processes work in different address spaces. It is precisely the standardized set of such message passing operations that is denoted by the name MPI. The standard defines a large set of operations of different kinds (point-to-point and collective, blocking and non-blocking, etc.), and, as was shown in Section 3, the proper choice of the communication operations used can be decisive for the efficiency of the parallel computations.

It should be noted that message passing as such is not new; descriptions of the approach can be found, e.g., in Nemnyugin and Stesik (2002), Buyya (1999), Andrews (2000). What was lacking for a long time was a generally accepted standard: a parallel program developed with one of the numerous existing message passing libraries could not, as a rule, be executed in other environments. The decisive step toward the creation of a common standard was made at the beginning of the 90s of the last century at the Workshop on Standards for Message Passing in a Distributed Memory Environment (Williamsburg, Virginia, USA, April 1992). As a result of this work the MPI Forum was organized, and in 1994 it adopted version 1.0 of the message passing interface (MPI) standard. The standard continued to develop, and in 1997 version 2.0 (MPI-2) appeared.

Now, having used the name repeatedly, let us explain what is understood by MPI. First, MPI is a standard which the software for organizing message passing must satisfy. Second, MPI frequently denotes the program libraries themselves which implement this standard; the correct, although longer, expression would be "a library implementing the MPI standard" 2).

Among the advantages provided by MPI, the following can be singled out:
- MPI makes it possible to lower considerably the acuteness of the problem of the portability of parallel programs between different computer systems: a parallel program developed in C or Fortran with the use of the MPI library will, as a rule, work on different computing platforms;
- MPI promotes the increase of the efficiency of parallel computations, since at present there exist implementations of MPI specially tuned to many types of computer systems and communication networks;
- MPI lowers, to a certain degree, the complexity of the development of parallel programs, because, on the one hand, a large part of the basic data communication operations considered in Section 3 is provided by the standard and, on the other hand, there exist numerous parallel numerical libraries already developed with the use of MPI.

1) A more exact and widely used name for this way of organizing computations is the model of "single program multiple data" (SPMD). With respect to MPI, however, it is more correct to speak of one program and multiple processes, which explains the use of the term SPMP.

2) The MPI standard defines the interface of the library for the algorithmic languages C and Fortran; the "binding" of MPI to other languages is provided by the particular implementations.

4.1. MPI: Basic Concepts and Definitions

Let us consider a number of concepts and definitions that are fundamental for the MPI standard.

4.1.1. The Notion of a Parallel Program

Within the framework of MPI, a parallel program means a set of simultaneously executed processes. The processes can be executed on different processors; at the same time, several processes can be located on a single processor (in this case they are executed in the time-sharing mode). In the extreme case a parallel program can be executed on a single processor; this variant is used, as a rule, for the initial checking of the correctness of the parallel program.

Each process of a parallel program is generated from a copy of the same program code (the SPMP model). This program code, represented in the form of an executable file, must be available at the moment of the launch of the parallel program on all the processors being used. The source code of the executable program is developed in C or Fortran with the use of some implementation of the MPI library.

The number of processes and the number of processors being used are determined at the moment of the launch of the parallel program by the means of the execution environment of MPI programs; these numbers cannot be changed in the course of the computations (the MPI-2 standard provides for the possibility of the dynamic generation and deletion of processes). All the processes of the program are numbered consecutively from 0 to p-1, where p is the total number of processes. The number of a process is called its rank.

4.1.2. Data Communication Operations

The operations of data communication constitute the basis of MPI. Among the functions provided, point-to-point (point-to-point) operations between two processes and collective (collective) operations intended for the simultaneous interaction of several processes are distinguished.

Different modes of sending can be used for the execution of the point-to-point operations, including the synchronous mode, the blocking mode, etc.; a complete consideration of the possible modes of communication is given in Section 4.3.

As was noted earlier, the MPI standard provides for the necessity of implementing the majority of the basic collective data communication operations; they are considered in Sections 4.2 and 4.4.

4.1.3. The Notion of Communicators

The processes of a parallel program are united into groups. A communicator in MPI means a specially created service object which unites within itself a group of processes and a number of additional parameters (the context) used in the execution of data communication operations.

As a rule, point-to-point data communication operations are executed for processes belonging to the same communicator; collective operations are applied simultaneously to all the processes of a communicator. As a result, the specification of the communicator being used is obligatory for the data communication operations in MPI.

In the course of the computations new groups of processes and communicators can be created and the existing ones can be deleted. One and the same process can belong to different groups and communicators. All the processes of the parallel program are included in the communicator MPI_COMM_WORLD, created by default.

If data have to be transmitted between processes belonging to different groups, an intercommunicator must be created.

A detailed consideration of the possibilities of MPI for operating with groups and communicators is given in Section 4.6.

4.1.4. Data Types

When executing the message passing operations, the data being sent or received have to be specified in the MPI functions by pointing out the type of the data. MPI contains a wide set of base data types largely coinciding with the data types of the algorithmic languages C and Fortran. Besides, MPI provides possibilities for creating new derived data types for a more exact and concise description of the contents of the messages being transmitted.

A detailed consideration of the possibilities of MPI for operating with derived data types is given in Section 4.5.

4.1.5. Virtual Topologies

As was noted earlier, point-to-point data communication operations can be executed between any processes of the same communicator, and all the processes of a communicator take part in a collective operation. In this respect the logical topology of the communication lines between the processes has the structure of a complete graph (regardless of the presence of actual physical communication channels between the processors).

At the same time, as was considered in Section 3, the presentation and the analysis of many parallel algorithms is facilitated by a visual representation of the communication network in the form of particular topologies.

MPI provides the possibility of representing a set of processes in the form of a grid of arbitrary dimension (see Section 1.7); the boundary processes of a grid can be declared neighboring, and thus ring structures (tori) can be defined on the basis of the grids. Moreover, MPI contains the means for forming logical (virtual) topologies of the graph type as well. A detailed consideration of the possibilities of MPI for operating with topologies is given in Section 4.7.

And finally, a few remarks before beginning the consideration of MPI:
- the description of the functions and all the given examples of programs are presented in the algorithmic language C; the peculiarities of using MPI in the algorithmic language Fortran are considered in Section 4.8.1;
- a brief characteristic of the available implementations of the MPI library and a general description of the execution environment of MPI programs are given in Section 4.8.2;
- the main presentation of the possibilities of MPI is oriented toward the standard version 1.2 (MPI-1); the additional properties of the standard version 2.0 are considered in Section 4.8.3.

Beginning the study of MPI, it can be noted that, on the one hand, MPI is rather complicated: the MPI-1 standard provides for the presence of more than 125 functions. On the other hand, the structure of MPI is carefully thought out: the development of parallel programs can be begun already after the consideration of only 6 MPI functions. All the additional possibilities of MPI can be mastered as the complexity of the developed algorithms and programs grows. It is in this style, from the simple to the complex, that the whole educational material on MPI is presented below.

4.2. Introduction to the Development of Parallel Programs with MPI

4.2.1. Fundamentals of MPI

Let us present the minimally necessary set of MPI functions, sufficient for the development of rather simple parallel programs.

4.2.1.1 Initialization and Termination of MPI Programs

The first MPI function called must be the function

int MPI_Init ( int *argc, char ***argv );

which serves for the initialization of the execution environment of the MPI program. The parameters of the function are pointers to the number of the arguments of the command line and to the arguments themselves.

The last MPI function called must be the function

int MPI_Finalize (void);

As a result, it can be noted that the structure of a parallel program developed with the use of MPI should have the following form:

#include "mpi.h"
int main ( int argc, char *argv[] ) {
  <program code without the use of MPI functions>
  MPI_Init ( &argc, &argv );
  <program code with the use of MPI functions>
  MPI_Finalize();
  <program code without the use of MPI functions>
  return 0;
}

It should be noted that:
1. the file mpi.h contains the definitions of the named constants, the prototypes of the functions, and the data types of the MPI library;
2. the functions MPI_Init and MPI_Finalize are obligatory and must be executed (and only once) by every process of the parallel program;
3. before the call of MPI_Init the function MPI_Initialized can be used for determining whether the call of MPI_Init has already been performed.

The examples of functions considered above give an idea of the syntax of the naming of the functions in MPI: the name of a function begins with the prefix MPI_, after which one or several words of the name follow; the first word after the prefix begins with a capital letter, and the words are separated by the underscore character. The names of the MPI functions, as a rule, explain the purpose of the actions executed by them.
4.2.1.2 Determining the Number and the Rank of Processes

The number of the processes in the parallel program being executed is determined by means of the function

int MPI_Comm_size ( MPI_Comm comm, int *size );

The rank of a process is determined by means of the function

int MPI_Comm_rank ( MPI_Comm comm, int *rank );

As a rule, the call of the functions MPI_Comm_size and MPI_Comm_rank is performed immediately after MPI_Init:

#include "mpi.h"
int main ( int argc, char *argv[] ) {
  int ProcNum, ProcRank;
  <program code without the use of MPI functions>
  MPI_Init ( &argc, &argv );
  MPI_Comm_size ( MPI_COMM_WORLD, &ProcNum );
  MPI_Comm_rank ( MPI_COMM_WORLD, &ProcRank );
  <program code with the use of MPI functions>
  MPI_Finalize();
  <program code without the use of MPI functions>
  return 0;
}

It should be noted that:
1. the communicator MPI_COMM_WORLD, as was noted earlier, is created by default and represents all the processes of the parallel program being executed;
2. the rank obtained by means of the function MPI_Comm_rank is the rank of the process which has performed the call of this function, i.e. the variable ProcRank will take different values in different processes.
4.2.1.3 Message Sending

To send a message, the process-sender must execute the function

int MPI_Send(void *buf, int count, MPI_Datatype type, int dest,
             int tag, MPI_Comm comm);

where
- buf is the address of the memory buffer containing the data of the message being sent;
- count is the number of the data elements in the message;
- type is the type of the data elements of the message;
- dest is the rank of the process to which the message is sent;
- tag is the tag value used for the identification of the message;
- comm is the communicator within which the data transfer is executed.

The base (predefined) data types of MPI for the algorithmic language C are given in Table 4.1.
Table 4.1. The base (predefined) data types of MPI for the algorithmic language C

MPI_Datatype          C Datatype
MPI_BYTE              -
MPI_CHAR              signed char
MPI_DOUBLE            double
MPI_FLOAT             float
MPI_INT               int
MPI_LONG              long
MPI_LONG_DOUBLE       long double
MPI_PACKED            -
MPI_SHORT             short
MPI_UNSIGNED_CHAR     unsigned char
MPI_UNSIGNED          unsigned int
MPI_UNSIGNED_LONG     unsigned long
MPI_UNSIGNED_SHORT    unsigned short
The following should be noted:
1. the message being sent is defined by pointing out the memory block (the buffer) containing the message. The triad used for pointing out the buffer, (buf, count, type), is included in the parameters of practically all the data communication functions of MPI;
2. the processes between which the data are transmitted must belong to the communicator specified in the function MPI_Send;
3. the tag parameter is used only when it is necessary to distinguish the messages being transmitted; otherwise an arbitrary integer value can be used as the value of the parameter (see also the description of the function MPI_Recv).

Immediately after the completion of the function MPI_Send the process-sender can begin to use the memory buffer of the message repeatedly. At the same time, it should be understood that the state of the message being sent can be quite different at the moment of the completion of the function: the message can be located in the process-sender, can be in the state of transmission, can be stored in the process-receiver, or can already be received by the process-receiver by means of the function MPI_Recv. Thus, the completion of the function MPI_Send means only that the data transmission operation has begun to be executed; the completion of the message exchange can occur later. A message of this kind can be received with the function MPI_Recv.
4.2.1.4 Message Reception

To receive a message, the process-receiver must execute the function

int MPI_Recv(void *buf, int count, MPI_Datatype type, int source,
             int tag, MPI_Comm comm, MPI_Status *status);

where
- buf, count, type describe the memory buffer for the reception of the message; the purpose of these parameters is the same as in MPI_Send;
- source is the rank of the process from which the message is to be received;
- tag is the tag of the message which is to be received by the given process;
- comm is the communicator within which the data transfer is executed;
- status is a pointer to the data structure containing information about the results of the execution of the data reception operation.

The following should be noted:
1. the memory buffer must be sufficient for the reception of the message, and the types of the elements of the messages being sent and received must coincide; when the buffer is insufficient, a part of the message will be lost, and the return code of the function will contain an overflow error indication;
2. the value MPI_ANY_SOURCE can be given for the parameter source if the message needs to be received from any process-sender;
3. the value MPI_ANY_TAG can be given for the parameter tag if a message with an arbitrary tag is to be received;
4. the status parameter makes it possible to determine a number of characteristics of the received message:
   - status.MPI_SOURCE is the rank of the sender of the received message,
   - status.MPI_TAG is the tag of the received message.

The function

int MPI_Get_count(MPI_Status *status, MPI_Datatype type, int *count);

returns in the variable count the number of the elements of type type in the received message.

The call of the function MPI_Recv does not have to be coordinated in time with the call of the corresponding sending function MPI_Send: the reception of a message can be initiated before, at the moment of, or after the start of the sending of the message.

After the completion of the function MPI_Recv, the received message will be located in the specified memory buffer. The important point is that the function MPI_Recv is a blocking one for the process-receiver, i.e. its execution is suspended until the function completes its work. Thus, if for some reason the expected message is absent, the execution of the parallel program will be blocked.
4.2.1.5 A First Parallel Program with MPI

The set of functions considered above turns out to be sufficient for the development of parallel programs (it should be noted that here and everywhere below the program listings are somewhat simplified: in particular, the checking of the completion codes returned by the MPI functions is omitted). The program given below is the standard initial example for the algorithmic language C, the program "Hello, World".
#include <stdio.h>
#include "mpi.h"
int main(int argc, char* argv[]){
  int ProcNum, ProcRank, RecvRank;
  MPI_Status Status;
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  if ( ProcRank == 0 ){
    // the actions executed only by the process with rank 0
    printf("\n Hello from process %3d", ProcRank);
    for ( int i = 1; i < ProcNum; i++ ) {
      MPI_Recv(&RecvRank, 1, MPI_INT, MPI_ANY_SOURCE,
               MPI_ANY_TAG, MPI_COMM_WORLD, &Status);
      printf("\n Hello from process %3d", RecvRank);
    }
  }
  else // the message sent by all the processes,
       // except the process with rank 0
    MPI_Send(&ProcRank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
  MPI_Finalize();
  return 0;
}
Listing 4.1. The first parallel program developed with the use of MPI

As follows from the text of the program, each process determines its rank, after which the actions of the program are separated. All the processes, except the process with rank 0, send the value of their rank to process 0. Process 0 first prints the value of its own rank and then, one after another, receives and prints the messages of all the other processes. The order of the reception of the messages is not determined beforehand and depends on the conditions of the execution of the parallel program (moreover, this order can change from run to run). Thus, a possible variant of the output of process 0 can be as follows (for a parallel program of four processes):

Hello from process 0
Hello from process 2
Hello from process 1
Hello from process 3

Such a "half-determined" character of the behavior of a parallel program deserves attention: the results obtained can differ between runs with the same initial data, i.e. determinacy, one of the fundamental principles of sequential programming, does not hold here. If such non-reproducibility is not a property of the computational algorithm being implemented, it usually testifies to the presence of errors. In our case, if the half-determinacy of the result is undesirable, the order of the reception of the messages can be made fixed; for this, the rank of the process-sender has to be pointed out in the data reception operation:

MPI_Recv(&RecvRank, 1, MPI_INT, i, MPI_ANY_TAG, MPI_COMM_WORLD, &Status);

Pointing out the rank of the process-sender regulates the order of the reception of the messages; as a result, the lines of the output will appear strictly in the order of the increasing ranks of the processes (let us repeat that such regulation can, in certain situations, lead to the slowing down of the parallel computations).
One more point should be discussed. The developed program, simple as it is, demonstrates the most important property of parallel programming with MPI: it is one single program intended for all the processes at once. As a result, the program of each separate process has to be mentally extracted from the common text; in our example, for instance, the program of process 0 consists of the reception and the printing of the messages, while the program of every other process consists of the single MPI_Send operation directed to process 0, which is matched by an MPI_Recv call of process 0. The presence in one program of fragments of code belonging to different processes complicates considerably the understanding and, in general, the development of MPI programs; the recommendation is, therefore, to set off the fragments of the different processes clearly.

The separation of the fragments is usually achieved by means of the function MPI_Comm_rank, which gives every process its rank; the subsequent branching on the value of the rank separates the fragments of the processes. If, for example, the processes perform three different computations, the scheme of the MPI program can look as follows:

MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
if ( ProcRank == 0 ) DoProcess0();
else if ( ProcRank == 1 ) DoProcess1();
else if ( ProcRank == 2 ) DoProcess2();

In many cases, as in the example considered, the actions differ only for the process with rank 0. The scheme of the MPI program then takes the simpler form:

MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
if ( ProcRank == 0 ) DoManagerProcess();
else DoWorkerProcesses();

In conclusion let us note the important point that practically all the functions of MPI return a completion code as their result. When a function completes successfully, the return code is MPI_SUCCESS. Other values of the return code testify that errors were discovered in the course of the execution of the function. To find out the type of the discovered error, predetermined named constants are used, among which, e.g., the following can be mentioned:
- MPI_ERR_BUFFER - an incorrect buffer pointer;
- MPI_ERR_COMM - an incorrect communicator;
- MPI_ERR_RANK - an incorrect process rank.
The complete list of the constants for the checking of the return code is contained in the file mpi.h.

4.2.2. Measuring the Execution Time of an MPI Program

Practically immediately after the development of the first parallel programs there arises the necessity of determining the execution time of the computations, in order to estimate the achieved speedup. The means of measuring the working time that are usually applied depend, as a rule, on the hardware platform, the operating system, the algorithmic language, etc. The MPI standard includes the definition of special functions for the measurement of time, the application of which makes it possible to eliminate the dependence on the environment of the execution of the parallel programs.

Obtaining the current moment of time is ensured by means of the function

double MPI_Wtime(void);

whose result is the number of seconds which have passed from a certain moment of time in the past. This moment of time in the past, from which the count of the seconds starts, can depend on the environment of the MPI library implementation, and thus the function MPI_Wtime should be used only for determining the duration of the execution of particular fragments of the code of the parallel programs. A possible scheme of the application of the function MPI_Wtime consists in the following:

double t1, t2, dt;
t1 = MPI_Wtime();
<the fragment of code whose execution time is to be measured>
t2 = MPI_Wtime();
dt = t2 - t1;

The accuracy of the measurement of time can also depend on the environment of the execution of the parallel program. The following function can be used for determining the current value of the accuracy:

double MPI_Wtick(void);

which gives the time in seconds between two sequential ticks of the timer of the computer system.

4.2.3. Introduction to Collective Data Communication Operations

The functions MPI_Send and MPI_Recv considered in Section 4.2.1 ensure the execution of the data communication operations between two processes of the parallel program. For the execution of collective communication operations, in which all the processes of a communicator take part, MPI provides a special set of functions. In this subsection three such functions, widely used even in rather simple parallel programs, are considered; the whole set of the collective operations is discussed in Section 4.4.

For the demonstration of the examples of the application of the considered functions of MPI, the problem of summing the elements of a vector x will be used (see Section 2.5):

S = x_1 + x_2 + ... + x_n.

The development of a parallel algorithm for the solution of this problem is not complicated: it is necessary to divide the data into equal blocks, to transmit these blocks to the processes, to perform in the processes the summation of the obtained data, to collect the values of the computed partial sums on one of the processes, and to add the values of the partial sums, obtaining the total result of the problem. In the subsequent development of the demonstration programs this algorithm will be somewhat simplified: the whole vector being summed, and not just the separate blocks of the vector, will be transmitted to the processes of the program.
4.2.3.1 Broadcasting Data

The first problem arising in the execution of the considered parallel algorithm of summation is the necessity of transmitting the values of the vector x to all the processes of the parallel program. Of course, the point-to-point data communication functions considered above can be used for this:

MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
for (i = 1; i < ProcNum; i++)
  MPI_Send(&x, n, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);

but such a solution will obviously be inefficient, since the repetition of the sending operations leads to the summation of the latencies of the communication operations. Besides, as was shown in Section 3, this operation can be executed in log2(p) iterations of data transmission with the simultaneous execution of several sending operations.

The achievement of the efficient broadcasting of data can be ensured by means of the following MPI function:

int MPI_Bcast(void *buf, int count, MPI_Datatype type, int root, MPI_Comm comm);

where
- buf, count, type is the memory buffer containing the message being sent (on the process with rank root) and intended for the reception of the message (on all the other processes);
- root is the rank of the process which performs the broadcasting of the data;
- comm is the communicator within which the broadcasting is executed.

The function MPI_Bcast carries out the sending of the data from the buffer buf, containing count elements of type type, from the process with rank root to all the processes entering the communicator comm (see Fig. 4.1).

Fig. 4.1. The general scheme of the data broadcasting operation: (a) before the operation begins, the message is located only on the process root; (b) after the completion of the operation, the message is located on all the processes 0, 1, ..., p-1

The following should be noted:
1. the function MPI_Bcast defines a collective operation, and thus, when the broadcasting of the data is executed, the call of the function MPI_Bcast must be performed by all the processes of the specified communicator (see the example of a program given further);
2. the buffer pointed out in the function MPI_Bcast has a different purpose in the different processes: on the process with rank root, from which the broadcasting is performed, the buffer must contain the message being sent, while on all the other processes the buffer is intended for the reception of the transmitted data.

Let us give an example of a program for the solution of the problem of summing the elements of a vector with the use of the considered operation.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
int main(int argc, char* argv[]){
  double x[100], TotalSum, ProcSum = 0.0;
  int ProcRank, ProcNum, N = 100;
  MPI_Status Status;
  // initialization
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  // preparation of the data
  if ( ProcRank == 0 ) DataInitialization(x, N);
  // broadcasting of the data to all the processes
  MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
  // computation of the partial sum on each of the processes:
  // each process sums the elements of the vector x from i1 to i2
  int k = N / ProcNum;
  int i1 = k * ProcRank;
  int i2 = k * ( ProcRank + 1 );
  if ( ProcRank == ProcNum-1 ) i2 = N;
  for ( int i = i1; i < i2; i++ )
    ProcSum = ProcSum + x[i];
  // collection of the partial sums on the process with rank 0
  if ( ProcRank == 0 ) {
    TotalSum = ProcSum;
    for ( int i = 1; i < ProcNum; i++ ) {
      MPI_Recv(&ProcSum, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
               MPI_COMM_WORLD, &Status);
      TotalSum = TotalSum + ProcSum;
    }
  }
  else // all the other processes send their partial sums
    MPI_Send(&ProcSum, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  // output of the result
  if ( ProcRank == 0 )
    printf("\nTotal Sum = %10.2f", TotalSum);
  MPI_Finalize();
  return 0;
}
Listing 4.2. A parallel program for summing the elements of a numerical vector

In the given program, the function DataInitialization performs the preparation of the initial data. Depending on the problem being solved, the data can be read from a file or generated by means of a random number generator; the development of such a variant of the function is left to the reader as an exercise.
4.2.3.2 Data Reduction (Transferring Data from All the Processes to One Process)

The procedure of collecting the computed partial sums on one of the processes, used in the program considered above, is an example of the operation of transferring data from all the processes to one process, in which the received data are combined into a single result by means of a certain given operation (the data reduction operation). As before, it is possible to implement such an operation with point-to-point exchanges, but this variant is laborious and not optimal.

The following function of MPI serves for the best execution of the operation of transferring data from all the processes to one process with the processing of the data:

int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype type,
               MPI_Op op, int root, MPI_Comm comm);

where
- sendbuf is the memory buffer with the message being sent;
- recvbuf is the memory buffer for the resulting message (only on the process with rank root);
- count is the number of the elements in the messages;
- type is the type of the elements of the messages;
- op is the operation which is to be executed over the data;
- root is the rank of the process on which the result must be obtained;
- comm is the communicator within which the operation is executed.

The base (predefined) operation types of MPI for the data reduction functions are given in Table 4.2.
Table 4.2. The base (predefined) operation types of MPI for data reduction functions

Operation     Description
MPI_MAX       determination of the maximum value
MPI_MIN       determination of the minimum value
MPI_SUM       determination of the sum of the values
MPI_PROD      determination of the product of the values
MPI_LAND      execution of the logical operation "AND" over the values
MPI_BAND      execution of the bit operation "AND" over the values
MPI_LOR       execution of the logical operation "OR" over the values
MPI_BOR       execution of the bit operation "OR" over the values
MPI_LXOR      execution of the logical operation "exclusive OR" over the values
MPI_BXOR      execution of the bit operation "exclusive OR" over the values
MPI_MAXLOC    determination of the maximum values and of their indices
MPI_MINLOC    determination of the minimum values and of their indices
Besides the given standard operation types, MPI supports the possibility of defining one's own operations; a detailed description of this possibility can be found, e.g., in Nemnyugin and Stesik (2002), Gropp et al. (1994), Pacheco (1996).

The general scheme of the execution of the data reduction operation is shown in Fig. 4.2. The elements y_j of the resulting message on the process root represent the results of the processing of the corresponding elements x_ij of the messages transmitted by the processes, i.e.

y_j = x_0j (+) x_1j (+) ... (+) x_(p-1)j , 0 <= j < n,

where (+) is the operation given when the function MPI_Reduce is called and p is the number of processes (for a more visual explanation see the example in Fig. 4.3).

Fig. 4.2. The general scheme of the data reduction operation: each process i, 0 <= i < p, transmits the message (x_i0, x_i1, ..., x_i,n-1), and the process root receives the resulting message (y_0, y_1, ..., y_n-1)

The following should be noted:
1. the function MPI_Reduce defines a collective operation, and thus its call must be performed by all the processes of the specified communicator; moreover, all the calls must contain the same values of the parameters count, type, op, root, comm;
2. the transmission of the messages must be performed by all the processes, but the result of the operation will be obtained only on the process with rank root;
3. the execution of the reduction operation is applied to the separate elements of the messages being transmitted: if the messages contain several elements of data each, the result of the operation will also contain several values, each of which is obtained by the processing of the corresponding elements of the messages. Thus, for instance, when the operation MPI_SUM is applied, the first element of the result is the sum of the first elements of all the transmitted messages, the second element is the sum of all the second elements, etc.

Fig. 4.3. An example of the execution of the data reduction operation for the summation of the transmitted data (four processes, messages of two elements each)

Applying what has been said to the example considered earlier of summing the elements of a vector, the collection of the partial sums ProcSum on the process with rank 0 can be written by means of one single call of the function MPI_Reduce:

// collection of the partial sums on the process with rank 0
MPI_Reduce(&ProcSum, &TotalSum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
4.2.3.3 Synchronization of Computations

In a number of situations the computations executed independently in the processes have to be synchronized. Thus, for instance, for the measurement of the starting time of the work of a parallel program it is necessary that the computations of all the processes be started simultaneously; similarly, simultaneous points of the computations have to be fixed before the beginning of communication-intensive stages, etc.

The synchronization of the processes, i.e. the simultaneous arrival of the processes at particular points of the computations, is ensured by means of the following MPI function:

int MPI_Barrier(MPI_Comm comm);

The function MPI_Barrier defines a collective operation and thus, when used, must be called by all the processes of the communicator. When the function MPI_Barrier is called, the execution of the process is blocked; the computations of the process will be continued only after the call of the function MPI_Barrier by all the processes of the communicator.
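As a small illustration (a sketch composed for this text, not one of the numbered listings), the function MPI_Barrier is often combined with the function MPI_Wtime from Section 4.2.2 so that the timed region of the code begins and ends simultaneously on all the processes:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
  int ProcRank;
  double start, finish;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  // make sure all the processes enter the timed region together
  MPI_Barrier(MPI_COMM_WORLD);
  start = MPI_Wtime();
  // <the computations and communications being timed>
  MPI_Barrier(MPI_COMM_WORLD); // wait for the slowest process
  finish = MPI_Wtime();
  if ( ProcRank == 0 )
    printf("Elapsed time: %f seconds\n", finish - start);
  MPI_Finalize();
  return 0;
}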

4.3. Modes of Data Communication

The function MPI_Send considered in Section 4.2.1 ensures the so-called standard (Standard) mode of message sending, in which (see Section 4.2.1.3):
- the process-sender is blocked for the whole time of the execution of the function;
- the buffer of the message, after the completion of the function, can be used repeatedly;
- the state of the message being sent at the moment of the completion of the function can be different: the message can be located in the process-sender, can be in the state of transmission, can be stored in the process-receiver, or can already be received by means of the function MPI_Recv.

4.3.1. Message Sending Modes

Besides the standard one, MPI provides the following additional modes of message sending:
- the synchronous (Synchronous) mode, in which the completion of the sending function occurs only after the reception of the message has been initiated by the process-receiver; the message being sent is thus either fully received or is in the state of reception;
- the buffered (Buffered) mode, in which the message being sent is copied into an additional system buffer, after which the sending function completes immediately; the actual transmission to the receiver is then executed by the MPI environment at a suitable subsequent moment of time;
- the ready (Ready) mode, which can be used only if the operation of receiving the message has already been initiated by the receiver.

To name the sending functions for the different modes of execution, the initial letter of the name of the corresponding mode is added to the name of the function MPI_Send, i.e.
- MPI_Ssend is the function of sending a message in the synchronous mode,
- MPI_Bsend is the function of sending a message in the buffered mode,
- MPI_Rsend is the function of sending a message in the ready mode.

The list of the parameters of all the enumerated functions coincides with the list of the parameters of the function MPI_Send.

For the use of the buffered mode of transmission, a memory buffer for the buffering of the messages must be created and attached to MPI; the function used for this has the form:

int MPI_Buffer_attach(void *buf, int size);

where
- buf is the memory buffer for the buffering of the messages;
- size is the size of the buffer.

After the completion of the work with the buffer, it must be disconnected from MPI by means of the function:

int MPI_Buffer_detach(void *buf, int *size);

The following can be noted concerning the practical use of the modes:
1. the ready mode is formally the fastest of all, but it is used rather seldom, since it requires a guarantee that the reception operation has already been initiated (such a property can be ensured, e.g., by the synchronization of the processes);
2. the standard and the buffered modes also complete sufficiently quickly, but they can lead to additional expenditure of resources (of memory for the copying of the messages); the buffer of the buffered mode must, besides, be sufficient for the messages being sent;
3. the synchronous mode is the slowest one, since it requires the confirmation of the start of the reception; on the other hand, it does not need additional memory and allows the state of the transmitted messages to be controlled.

It should also be noted that there is only one mode for the reception of messages: the function MPI_Recv works with messages sent in any of the enumerated modes. An illustration of the buffered mode is given in the sketch below.
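The following fragment is a minimal sketch (composed for this text; the function name BufferedSendExample is ours) of the buffered mode: the attached buffer is sized with MPI_Pack_size plus the mandatory constant MPI_BSEND_OVERHEAD, after which the message is transmitted with MPI_Bsend. At least two processes are assumed.

#include <stdlib.h>
#include "mpi.h"

void BufferedSendExample(int ProcRank) {
  int data = 123;
  if ( ProcRank == 0 ) {
    int packsize, bufsize;
    char *buf;
    // the space one int occupies in the attached buffer
    MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &packsize);
    bufsize = packsize + MPI_BSEND_OVERHEAD;
    buf = (char*)malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);
    // returns as soon as the data have been copied into the buffer
    MPI_Bsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    // blocks until all the buffered messages have been delivered
    MPI_Buffer_detach(&buf, &bufsize);
    free(buf);
  }
  else if ( ProcRank == 1 ) {
    MPI_Status status;
    MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
  }
}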

4.3.2. The Non-Blocking Exchange of Data between Processes

All the functions of sending and receiving messages considered above are blocking ones, i.e. they suspend the execution of the process until the called function completes its work. At the same time, a part of the messages can, in the course of parallel computations, be sent and received before the moment when the transmitted data are actually needed. In such situations it would be desirable to have the possibility of executing the data exchange functions without blocking the processes, so that the transmission of the messages and the computations could be performed simultaneously. Such a non-blocking way of execution is, of course, more complicated to use, but, when applied properly, it can considerably lower the losses of the efficiency of the parallel computations arising because of the slow (in comparison with the speed of the processors) communication operations.

MPI provides the possibility of the non-blocking execution of the data exchange operations between two processes. The names of the non-blocking analogues are formed from the names of the corresponding blocking functions by adding the prefix I (Immediate). The list of the parameters of the non-blocking functions contains the whole set of the parameters of the initial functions and, additionally, the parameter request with the type MPI_Request (in the function MPI_Irecv there is, however, no status parameter):

int MPI_Isend(void *buf, int count, MPI_Datatype type, int dest,
              int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Issend(void *buf, int count, MPI_Datatype type, int dest,
               int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Ibsend(void *buf, int count, MPI_Datatype type, int dest,
               int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Irsend(void *buf, int count, MPI_Datatype type, int dest,
               int tag, MPI_Comm comm, MPI_Request *request);
int MPI_Irecv(void *buf, int count, MPI_Datatype type, int source,
              int tag, MPI_Comm comm, MPI_Request *request);

The call of a non-blocking function leads to the initiation of the requested exchange operation, after which the execution of the function completes and the process can continue its actions. Before its completion, the non-blocking function determines the variable request, which is further used for checking the state of the initiated exchange operation.

The state of an executed non-blocking exchange operation can be checked by means of the function

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status);

where
- request is the operation descriptor defined by the call of the non-blocking function;
- flag is the result of the check (flag is true if the operation has completed);
- status is the result of the execution of the exchange operation (only for a completed operation).

The given function is a non-blocking one, i.e. the process can check the state of the exchange operation and continue its computations if, by the results of the check, the operation has not yet completed. A possible scheme of combining the computations with the non-blocking execution of an exchange operation consists in the following:

MPI_Isend(buf, count, type, dest, tag, comm, &request);
do {
  <computations>
  MPI_Test(&request, &flag, &status);
} while ( !flag );

If, on the other hand, it turns out that the continuation of the computations is impossible without obtaining the transmitted data, the blocking function of waiting for the completion of the operation can be used:

int MPI_Wait(MPI_Request *request, MPI_Status *status);

Besides the enumerated ones, MPI contains a number of additional functions of checking and waiting for non-blocking exchange operations:
- MPI_Testall - checking the completion of all the enumerated exchange operations;
- MPI_Waitall - waiting for the completion of all the enumerated exchange operations;
- MPI_Testany - checking the completion of at least one of the enumerated exchange operations;
- MPI_Waitany - waiting for the completion of any of the enumerated exchange operations;
- MPI_Testsome - checking, without blocking, which of the enumerated exchange operations have completed;
- MPI_Waitsome - waiting for the completion of at least one of the enumerated exchange operations, with an indication of all the operations completed by that moment.

A simple example of the use of the non-blocking functions is given in the sketch below; a realistic example connected with the parallel methods of matrix computations is considered later (see Section 8).
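The following fragment (a sketch written for this section; the function name RingExchangeNonblocking is ours) starts a non-blocking exchange between the neighbors on a logical ring, performs local computations while the transfers proceed, and completes both requests with MPI_Waitall:

#include "mpi.h"

// Each process sends its rank to the next process on the ring and
// receives from the previous one, overlapping the exchange with work.
void RingExchangeNonblocking(void) {
  int ProcNum, ProcRank, SendData, RecvData;
  MPI_Request requests[2];
  MPI_Status statuses[2];
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  SendData = ProcRank;
  int next = (ProcRank + 1) % ProcNum;
  int prev = (ProcRank - 1 + ProcNum) % ProcNum;
  MPI_Irecv(&RecvData, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &requests[0]);
  MPI_Isend(&SendData, 1, MPI_INT, next, 0, MPI_COMM_WORLD, &requests[1]);
  // <computations that do not touch SendData and RecvData>
  MPI_Waitall(2, requests, statuses); // both transfers are now complete
}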

4.3.3. The Simultaneous Execution of Sending and Receiving

A pairwise exchange of data between processes, in which each process must both send a message to its partner and receive a message from it, is one of the communication schemes most widespread in practice. An obvious implementation (a call of MPI_Send followed by MPI_Recv, with the opposite order of the calls on the partner process) requires care: with an unsuitable order of the calls the scheme can turn out to be deadlocked, e.g., when both processes begin by receiving messages which have not yet been sent.

The efficient and guaranteed simultaneous execution of the operations of sending and receiving can be achieved by means of the following function of MPI:

int MPI_Sendrecv(void *sbuf, int scount, MPI_Datatype stype, int dest, int stag,
                 void *rbuf, int rcount, MPI_Datatype rtype, int source, int rtag,
                 MPI_Comm comm, MPI_Status *status);

where
- sbuf, scount, stype, dest, stag are the parameters of the message being sent;
- rbuf, rcount, rtype, source, rtag are the parameters of the message being received;
- comm is the communicator within which the data exchange is executed;
- status is the data structure with the result of the execution of the operation.

As follows from the list of the parameters, the function MPI_Sendrecv sends the message described by the parameters (sbuf, scount, stype, dest, stag) to the process with rank dest and receives into the buffer defined by the parameters (rbuf, rcount, rtype, source, rtag) a message from the process with rank source.

Different buffers are used in the function MPI_Sendrecv for the sending and the receiving of the messages. In the case when the messages are of the same type, MPI provides the possibility of using a single buffer:

int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype type, int dest,
                         int stag, int source, int rtag, MPI_Comm comm,
                         MPI_Status *status);

An example of the use of the functions for the simultaneous execution of sending and receiving is given in the sketch below; see also the parallel programs of Section 8.
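The following fragment (a sketch composed for this text; the function name RingShiftSendrecv is ours) performs a cyclic shift of one value around the ring of processes with a single combined call, without any danger of deadlock:

#include "mpi.h"

void RingShiftSendrecv(void) {
  int ProcNum, ProcRank, SendData, RecvData;
  MPI_Status status;
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  SendData = ProcRank;
  int next = (ProcRank + 1) % ProcNum;
  int prev = (ProcRank - 1 + ProcNum) % ProcNum;
  // send to the next process and receive from the previous one
  // in a single call; the MPI library guarantees the absence of deadlock
  MPI_Sendrecv(&SendData, 1, MPI_INT, next, 0,
               &RecvData, 1, MPI_INT, prev, 0,
               MPI_COMM_WORLD, &status);
}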

4.4. Collective Data Communication Operations

As has already been noted, MPI supports practically all the basic data communication operations between the processes, and this support constitutes one of the strongest sides of the standard (see also the general consideration of the collective operations in Section 3). The collective operations of broadcasting and reduction have already been considered in Section 4.2.3 (the transmission of data from one process to all the processes and the transfer of data from all the processes to one process, respectively). Let us consider the remaining basic collective operations.

4.4.1. Scattering Data from One Process to All the Processes

The difference between the operation of scattering data and the broadcasting operation consists in that here the process sends different data to the different processes (see Fig. 4.4). The execution of the given operation can be ensured by means of the function

int MPI_Scatter(void *sbuf, int scount, MPI_Datatype stype,
                void *rbuf, int rcount, MPI_Datatype rtype,
                int root, MPI_Comm comm);

where
- sbuf, scount, stype are the parameters of the message being sent (scount defines the number of the elements sent to each process);
- rbuf, rcount, rtype are the parameters of the message being received in the processes;
- root is the rank of the process which performs the scattering of the data;
- comm is the communicator within which the scattering is executed.

Fig. 4.4. The general scheme of the operation of scattering data from one process to all the processes: (a) before the operation begins, the blocks 0, 1, 2, ..., p-1 are located on the process root; (b) after the completion of the operation, the process with rank i holds the block i
When the function MPI_Scatter is called, the process with rank root transmits the data to all the processes of the communicator; scount elements are sent to each process. The process with rank 0 receives the block of data from the buffer sbuf consisting of the elements with the indices from 0 to scount-1, the process with rank 1 receives the block consisting of the elements from scount to 2*scount-1, etc. The total size of the message being sent must thus be equal to scount * p elements, where p is the number of the processes in the communicator comm.

It should be noted that, since the function MPI_Scatter defines a collective operation, the call of this function, when the scattering of the data is executed, must be performed by each process of the communicator.

Let us note also that the function MPI_Scatter sends messages of the same size to all the processes. The execution of the more general variant of the scattering operation, in which the sizes of the messages for the different processes can differ, is ensured by means of the function MPI_Scatterv.

An example of the use of the function MPI_Scatter is considered in Section 7 in connection with the parallel algorithms of matrix-vector multiplication; a small sketch is given below as well.
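As an illustration (a sketch composed for this text; the function name ScatterAndSum is ours), the summation program of Section 4.2.3.1 can be reworked so that process 0 distributes only the needed blocks of the vector instead of broadcasting the whole vector; for simplicity, N is assumed to be divisible by the number of processes:

#include <stdlib.h>
#include "mpi.h"

double ScatterAndSum(double *x, int N) {
  int ProcNum, ProcRank, i, k;
  double ProcSum = 0.0;
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  k = N / ProcNum;
  double *block = (double*)malloc(k * sizeof(double));
  // the buffer x needs to hold valid data only on the root process
  MPI_Scatter(x, k, MPI_DOUBLE, block, k, MPI_DOUBLE, 0, MPI_COMM_WORLD);
  for ( i = 0; i < k; i++ )
    ProcSum += block[i];
  free(block);
  return ProcSum; // the partial sum, to be combined with MPI_Reduce
}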
4.4.2. Gathering Data from All the Processes on One Process

The operation of gathering data from all the processes on one process is the reverse of the procedure of scattering data (see Fig. 4.5). The following MPI function ensures the execution of this operation:

int MPI_Gather(void *sbuf, int scount, MPI_Datatype stype,
               void *rbuf, int rcount, MPI_Datatype rtype,
               int root, MPI_Comm comm);

where
- sbuf, scount, stype are the parameters of the message being sent;
- rbuf, rcount, rtype are the parameters of the message being received (rcount defines the number of the elements received from each process);
- root is the rank of the process which performs the gathering of the data;
- comm is the communicator within which the data transfer is executed.

Fig. 4.5. The general scheme of the operation of gathering data from all the processes on one process: (a) before the operation begins, the process with rank i holds the block i; (b) after the completion of the operation, the blocks 0, 1, 2, ..., p-1 are located on the process root

When the function MPI_Gather is executed, each process of the communicator sends the data from the buffer sbuf to the process with rank root. The process with rank root gathers all the messages being received, placing them in the order of the ranks of the process-senders (the data sent by the process with rank i is placed into the buffer rbuf beginning from the position i * rcount). Thus, the size of the receiving buffer rbuf must be no smaller than scount * p elements, where p is the number of the processes in the communicator comm.

The function MPI_Gather also defines a collective operation, and its call, when the gathering of the data is performed, must be ensured in each process of the communicator.

It should be noted that, when the function MPI_Gather is used, the gathering of the data is performed only on one process. To obtain all the gathered data on each of the processes of the communicator, the following function must be used:

int MPI_Allgather(void *sbuf, int scount, MPI_Datatype stype,
                  void *rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm);

The execution of the more general variants of the gathering operation, in which the sizes of the messages transmitted by the processes can differ, is ensured by means of the functions MPI_Gatherv and MPI_Allgatherv.

An example of the use of the function MPI_Gather is considered in Section 7 in connection with the parallel algorithms of matrix-vector multiplication; a sketch for MPI_Allgather follows below.
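In the following fragment (a sketch composed for this text; the function name GatherRanks is ours), every process contributes one value and obtains the values of all the processes; the array RankTable is assumed to be able to hold one int per process:

#include "mpi.h"

void GatherRanks(int *RankTable, int ProcNum) {
  int ProcRank;
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  MPI_Allgather(&ProcRank, 1, MPI_INT, RankTable, 1, MPI_INT,
                MPI_COMM_WORLD);
  // after the call, RankTable[i] == i holds on every process
}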

4.4.3. The All-to-All Data Exchange

The transmission of data from all the processes to all the processes is the most general data communication operation (see Fig. 4.6). The following function ensures its execution:

int MPI_Alltoall(void *sbuf, int scount, MPI_Datatype stype,
                 void *rbuf, int rcount, MPI_Datatype rtype, MPI_Comm comm);

where
- sbuf, scount, stype are the parameters of the messages being sent;
- rbuf, rcount, rtype are the parameters of the messages being received;
- comm is the communicator within which the data transfer is executed.

Fig. 4.6. The general scheme of the all-to-all data exchange operation: (a) before the operation begins, the process with rank i holds the messages (i 0), (i 1), ..., (i p-1); (b) after the completion of the operation, the process with rank j holds the messages (0 j), (1 j), ..., (p-1 j); here (i j) denotes the message sent by the process i to the process j

When the function MPI_Alltoall is executed, each process of the communicator transmits the data of scount elements to every process (the total size of the messages being sent in each process must thus be equal to scount * p elements, where p is the number of the processes in the communicator comm) and receives a message from every process.

The call of the function MPI_Alltoall, when the all-to-all exchange is executed, must be performed by every process of the communicator.

The variant of this operation for the case when the sizes of the transmitted messages can differ is ensured by means of the function MPI_Alltoallv.

An example of the use of the function MPI_Alltoall is considered in Section 7 in connection with the parallel algorithms of matrix computations; a small sketch is given below as well.
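In the following fragment (a sketch composed for this text; the function name AllToAllExample is ours), each process prepares one value for every process and receives one value from each:

#include <stdlib.h>
#include "mpi.h"

void AllToAllExample(void) {
  int ProcNum, ProcRank, i;
  MPI_Comm_size(MPI_COMM_WORLD, &ProcNum);
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  int *sbuf = (int*)malloc(ProcNum * sizeof(int));
  int *rbuf = (int*)malloc(ProcNum * sizeof(int));
  for ( i = 0; i < ProcNum; i++ )
    sbuf[i] = ProcRank * ProcNum + i; // the message for process i
  MPI_Alltoall(sbuf, 1, MPI_INT, rbuf, 1, MPI_INT, MPI_COMM_WORLD);
  // rbuf[i] now holds i * ProcNum + ProcRank, i.e. the value
  // prepared for this process by the process with rank i
  free(sbuf); free(rbuf);
}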

4.4.4. Additional Data Reduction Operations

The function MPI_Reduce considered in Section 4.2.3.2 ensures the obtaining of the results of the data reduction only on one process. To obtain the results of the reduction on each of the processes of the communicator, the following function must be used:

int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype type,
                  MPI_Op op, MPI_Comm comm);

The function MPI_Allreduce executes the distribution of the results of the reduction operation to all the processes. The possibility of controlling the distribution of these data between the processes is provided by yet another variant of the reduction operation, the function MPI_Reduce_scatter.

And one more variant of the operation of gathering and processing data, which ensures the obtaining of all the partial results of the reduction (the prefix reduction), can be achieved by means of the function

int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype type,
             MPI_Op op, MPI_Comm comm);
The general scheme of the execution of the function MPI_Scan is shown in Fig. 4.7. The elements y_ij of the messages being received represent the results of the processing of the corresponding elements x_kj of the messages transmitted by the processes whose ranks do not exceed i, i.e.

y_ij = x_0j (+) x_1j (+) ... (+) x_ij , 0 <= i < p, 0 <= j < n,

where (+) is the operation given when the function MPI_Scan is called. The function MPI_Scan thus differs from MPI_Allreduce in that the process with rank i obtains the result of the reduction of the values of the processes 0, 1, ..., i only.

Fig. 4.7. The general scheme of the prefix reduction operation: from the messages (x_i0, x_i1, ..., x_i,n-1) of the processes 0, 1, ..., p-1 each process i obtains the resulting message (y_i0, y_i1, ..., y_i,n-1)

A sketch of the use of MPI_Scan is given below.
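The following fragment (a sketch composed for this text; the function name RankPrefixSum is ours) computes the prefix sums of the process ranks: after the call, the variable PrefixSum on the process i equals 0 + 1 + ... + i:

#include "mpi.h"

void RankPrefixSum(void) {
  int ProcRank, PrefixSum;
  MPI_Comm_rank(MPI_COMM_WORLD, &ProcRank);
  MPI_Scan(&ProcRank, &PrefixSum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
}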

4.4.5. The Summary List of the Collective Data Operations

For the convenience of use, the correspondence between the basic collective data communication operations considered in Section 3 and the functions of MPI is summarized in Table 4.3.

Table 4.3. The correspondence between the basic collective data communication operations and the functions of MPI

- Broadcasting data, one-to-all (Section 3.2.5): MPI_Bcast - see Section 4.2.3.1
- Data reduction, all-to-one (Sections 3.2.5, 3.2.6): MPI_Reduce - see Section 4.2.3.2
- Reduction with the result on all the processes (Sections 3.2.5, 3.2.6): MPI_Allreduce, MPI_Reduce_scatter - see Section 4.4.4
- Prefix reduction (Sections 3.2.5, 3.2.6): MPI_Scan - see Section 4.4.4
- Scattering, the generalized one-to-all transfer (Section 3.2.7): MPI_Scatter, MPI_Scatterv - see Section 4.4.1
- Gathering, the generalized all-to-one transfer (Section 3.2.7): MPI_Gather, MPI_Gatherv - see Section 4.4.2
- Gathering with the result on all the processes (Section 3.2.7): MPI_Allgather, MPI_Allgatherv - see Section 4.4.2
- The generalized all-to-all transfer (Section 3.2.8): MPI_Alltoall, MPI_Alltoallv - see Section 4.4.3

4.5. Derived Data Types in MPI

In all the examples of the data communication functions considered earlier it was assumed that the messages represent a continuous vector of elements of a type provided in MPI (the list of the base types was given in Table 4.1). It is clear that the data which need to be transmitted are not, in the general case, located one after another in the memory and are not homogeneous. Of course, the desired messages can be organized even in this situation: the non-adjacent data can be transmitted with several separate messages (which is inefficient because of the summation of the latencies of the communication operations), or the data can be copied beforehand into an auxiliary continuous buffer (which leads to additional unproductive copying operations).

For the solution of this problem MPI provides a flexible and powerful mechanism of derived data types, considered in this section, as well as the means of the explicit packing and unpacking of data (Section 4.5.4).

4.5.1. The Notion of the Derived Data Type

In the most general form, a derived data type in MPI is a description of a set of values of a type provided in MPI, the described values not necessarily being located continuously in the memory. The setting of a type in MPI is usually performed by means of a type map (type map) in the form of a sequence of descriptions of the values entering the type; each separate value is described by pointing out its type and its offset (displacement) from a certain base address, i.e.

TypeMap = {(type_0, disp_0), ..., (type_{n-1}, disp_{n-1})}.

The part of the type map containing only the types of the values is called the type signature in MPI:

TypeSignature = {type_0, ..., type_{n-1}}.

The type signature describes which base data types form a derived data type in MPI; the displacements of the type map define where the values of the data are located.

Let us explain the considered notions with an example. Let a message have to be formed from the values of the variables:

double a;  /* address 24 */
double b;  /* address 40 */
int    n;  /* address 48 */

The derived type for the description of such data must have the type map of the form:

{ (MPI_DOUBLE, 0), (MPI_DOUBLE, 16), (MPI_INT, 24) }

The following additional notions are used in MPI for the derived data types:
- the lower boundary of the type:
  lb(TypeMap) = min_j (disp_j);
- the upper boundary of the type:
  ub(TypeMap) = max_j (disp_j + sizeof(type_j)) + delta;
- the extent of the type:
  extent(TypeMap) = ub(TypeMap) - lb(TypeMap).

The boundaries of a type define, respectively, the beginning and the end of the values of the data of the type; the extent gives the size of the memory needed for one instance (one copy) of the data type.

The definition of the upper boundary contains the padding value delta, which reflects the necessity of the alignment of the addresses of the data. The point of the alignment requirements can be explained by an example: in accordance with the definitions of the language C, the address of a structure comprising the variables a, b, and n must satisfy the alignment requirements of its strictest member, the type double; the displacement of the int value, in turn, must be divisible by the size of int.

It is necessary to distinguish the notion of the extent from the notion of the size of a type. The extent is the amount of memory which one instance of the derived type occupies, taking into account the possible gaps between the values; the size of the data type is the number of bytes which the data of one instance of the type actually occupy. For the example considered above, the size of the type equals 20 bytes, while the extent equals 28 or 32 bytes, depending on the alignment rules of the particular platform.

MPI provides the following functions for obtaining the values of the extent and the size of a type:

int MPI_Type_extent ( MPI_Datatype type, MPI_Aint *extent );
int MPI_Type_size ( MPI_Datatype type, int *size );

The determination of the lower and the upper boundaries of a type can be performed by means of the functions:

int MPI_Type_lb ( MPI_Datatype type, MPI_Aint *disp );
int MPI_Type_ub ( MPI_Datatype type, MPI_Aint *disp );

An important function when derived data types are being constructed is the function of obtaining the address of a variable:

int MPI_Address ( void *location, MPI_Aint *address );

(this function is the portable analogue of the address operator & of the language C and of the function LOC of Fortran).

4.5.2. The Ways of Constructing Derived Data Types

The following methods of constructing derived data types are provided in MPI:
- the contiguous method, for data located continuously in the memory;
- the vector method, for data located in the memory at regular intervals (with a constant stride); the variants of the functions with the letter H in the name differ in that the values of the intervals are given in bytes;
- the indexed method, for data located in the memory at irregular intervals;
- the structural method, the most general one, for data consisting of values of different types.

Let us consider the enumerated methods of construction in more detail.
4.5.2.1 The Contiguous Method of Construction

With the contiguous method of constructing a derived data type, MPI provides the function:

int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype *newtype);

As follows from the description, the new type newtype is created as count elements of the initial type oldtype. For instance, if the type map of the initial data type has the form

{ (MPI_INT, 0), (MPI_DOUBLE, 8) },

then the call of the function MPI_Type_contiguous with the parameters

MPI_Type_contiguous(2, oldtype, &newtype);

will lead to the creation of the data type with the type map

{ (MPI_INT, 0), (MPI_DOUBLE, 8), (MPI_INT, 16), (MPI_DOUBLE, 24) }.

In a certain sense the presence of the contiguous type is superfluous, since the use of the count argument in the data communication procedures of MPI is equivalent to the use of a contiguous type of this kind.
4.5.2.2 The Vector Method of Construction

With the vector method of constructing a derived data type, MPI provides the functions

int MPI_Type_vector ( int count, int blocklen, int stride,
                      MPI_Datatype oldtype, MPI_Datatype *newtype );

where
- count is the number of the blocks;
- blocklen is the size of each block;
- stride is the number of the elements located between the beginnings of two neighboring blocks;
- oldtype is the initial data type;
- newtype is the new derived data type;

and

int MPI_Type_hvector ( int count, int blocklen, MPI_Aint stride,
                       MPI_Datatype oldtype, MPI_Datatype *newtype );

The difference of the function MPI_Type_hvector consists only in that the parameter stride, which gives the interval between the blocks, is defined in bytes and not in the elements of the initial data type.

With the vector method, derived data types can be formed which allow the replacement of separate elements by blocks of elements, with different values of the sizes of the blocks and of the intervals between them. Let us give some examples of constructing such data types (the initial type of an element, ElemType, is assumed to be already defined, e.g., as MPI_DOUBLE):

- the derived type for the description of the set of the rows of a square matrix of size n x n, taken with the step of two rows (n/2 blocks of n elements with the stride 2*n):
  MPI_Type_vector ( n/2, n, 2*n, ElemType, &StripRowType );
- the derived type for the description of a column of a matrix of size n x n:
  MPI_Type_vector ( n, 1, n, ElemType, &ColumnType );
- the derived type for the description of the main diagonal of a matrix of size n x n:
  MPI_Type_vector ( n, 1, n+1, ElemType, &DiagonalType );
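Putting one of these declarations to work, the following fragment (a sketch composed for this text; the function name SendMatrixColumn is ours) creates, commits, and uses the column type to send the first column of a matrix as a single message; note that a new type must be committed before its first use (see Section 4.5.3):

#include "mpi.h"

// Send the first column of an n x n matrix of doubles from process 0
// to process 1 (n is assumed to be the same on both processes).
void SendMatrixColumn(double *matrix, int n, int ProcRank) {
  MPI_Datatype ColumnType;
  MPI_Type_vector(n, 1, n, MPI_DOUBLE, &ColumnType);
  MPI_Type_commit(&ColumnType);  // required before use
  if ( ProcRank == 0 )
    MPI_Send(matrix, 1, ColumnType, 1, 0, MPI_COMM_WORLD);
  else if ( ProcRank == 1 ) {
    MPI_Status status;
    MPI_Recv(matrix, 1, ColumnType, 0, 0, MPI_COMM_WORLD, &status);
  }
  MPI_Type_free(&ColumnType);
}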
Let us note also that MPI provides a special method for constructing derived types which describe subarrays of multidimensional arrays (this possibility appeared in the MPI-2 standard):

int MPI_Type_create_subarray ( int ndims, int *sizes, int *subsizes,
                               int *starts, int order,
                               MPI_Datatype oldtype, MPI_Datatype *newtype );

where
- ndims is the dimensionality of the array;
- sizes is the number of the elements in each dimension of the initial array;
- subsizes is the number of the elements in each dimension of the subarray being described;
- starts are the indices of the initial elements of the subarray in each dimension;
- order is the parameter defining the order of the enumeration of the array elements (row-major or column-major);
- oldtype is the data type of the elements of the initial array;
- newtype is the new data type for the description of the subarray.
4.5.2.3 The Indexed Method of Construction

With the indexed method of constructing a derived data type, MPI provides the functions

int MPI_Type_indexed ( int count, int blocklens[], int indices[],
                       MPI_Datatype oldtype, MPI_Datatype *newtype );

where
- count is the number of the blocks;
- blocklens are the sizes of each of the blocks;
- indices are the displacements of each of the blocks from the beginning of the type (in the elements of the initial type);
- oldtype is the initial data type;
- newtype is the new derived data type;

and

int MPI_Type_hindexed ( int count, int blocklens[], MPI_Aint indices[],
                        MPI_Datatype oldtype, MPI_Datatype *newtype );

As follows from the descriptions, with the indexed method the new derived type is set as a collection of blocks of different sizes of the elements of the initial type, the displacements of the blocks being given through the parameter indices.

As an example of the use of this method of construction, let us give the setting of a type for the description of the upper triangular part of a square matrix of size n x n (the initial type of an element, ElemType, is assumed to be already defined):

// constructing the type for the description of an upper triangular matrix
for ( i = 0; i < n; i++ ) {
  blocklens[i] = n - i;
  indices[i]   = i * n + i;
}
MPI_Type_indexed ( n, blocklens, indices, ElemType, &UTMatrixType );

As before, the difference of the function MPI_Type_hindexed consists in that the elements of indices, giving the displacements of the blocks, are defined in bytes and not in the elements of the initial data type.

It should be noted that there also exists the function MPI_Type_create_indexed_block for the construction of derived types with blocks of identical size (this function is provided by the MPI-2 standard).
4.5.2.4 The Structural Method of Construction

The structural method is the most general way of constructing a derived data type; arbitrary type maps can be set explicitly with its use. The given method of construction is ensured by the function:

int MPI_Type_struct ( int count, int blocklens[], MPI_Aint indices[],
                      MPI_Datatype oldtypes[], MPI_Datatype *newtype );

where
- count is the number of the blocks;
- blocklens are the sizes of each of the blocks;
- indices are the displacements of each of the blocks from the beginning of the type (in bytes);
- oldtypes are the initial data types for each of the blocks separately;
- newtype is the new derived data type.

As follows from the description, the structural method, in addition to the possibilities of the indexed method, allows the pointing out of the types of the elements for each block separately. A sketch of the construction of a structural type for the variables a, b, n of Section 4.5.1 follows below.
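The following fragment is a minimal sketch (composed for this text; the function name CreateABNType is ours) of the structural method for the variables a, b, n; MPI_Address is the MPI-1 way, described above, of obtaining portable displacements (in the later versions of the standard it is replaced by MPI_Get_address, and MPI_Type_struct by MPI_Type_create_struct):

#include "mpi.h"

MPI_Datatype CreateABNType(double *a, double *b, int *n) {
  MPI_Datatype newtype;
  MPI_Datatype oldtypes[3] = { MPI_DOUBLE, MPI_DOUBLE, MPI_INT };
  int blocklens[3] = { 1, 1, 1 };
  MPI_Aint indices[3], base;
  MPI_Address(a, &base);
  MPI_Address(a, &indices[0]);
  MPI_Address(b, &indices[1]);
  MPI_Address(n, &indices[2]);
  // displacements are taken relative to the address of a
  indices[0] -= base; indices[1] -= base; indices[2] -= base;
  MPI_Type_struct(3, blocklens, indices, oldtypes, &newtype);
  MPI_Type_commit(&newtype);
  return newtype;
}

A subsequent call MPI_Send(a, 1, newtype, dest, tag, comm) would then transmit all three variables in a single message.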

4.5.3. Declaring and Deleting Derived Types

A derived type created by means of the considered functions of construction must be declared (committed) before its use in data communication operations by means of the function:

int MPI_Type_commit ( MPI_Datatype *type );

When the work with a derived type is completed, the type must be annulled by means of the function:

int MPI_Type_free ( MPI_Datatype *type );

4.5.4. The Formation of Messages by Means of Packing and Unpacking Data

Along with the mechanism of derived types considered in Section 4.5.2, MPI provides an explicit way of collecting non-contiguous data in a single continuous buffer: the packing of data before the sending of a message and the unpacking of data after its reception.

The function of packing data is

int MPI_Pack ( void *data, int count, MPI_Datatype type,
               void *buf, int bufsize, int *bufpos, MPI_Comm comm );

where
- data is the memory buffer with the data to be packed;
- count is the number of the elements in the buffer;
- type is the data type of the elements being packed;
- buf is the memory buffer for the packing;
- bufsize is the size of the buffer in bytes;
- bufpos is the position for the beginning of the writing into the buffer (in bytes from the beginning of the buffer);
- comm is the communicator for the packed message.

The function MPI_Pack packs count elements from the buffer data into the packing buffer buf, beginning from the position bufpos. The general scheme of the packing procedure is shown in Fig. 4.8 (a).

Fig. 4.8. The general scheme of the operations of packing and unpacking data: (a) packing with MPI_Pack, the data are appended to the buffer at the position bufpos; (b) unpacking with MPI_Unpack, the data are extracted from the buffer beginning at the position bufpos
The initial value of the variable bufpos must be set before the beginning of the packing; afterwards it is maintained by the function MPI_Pack itself. The function MPI_Pack is called sequentially for the packing of all the necessary data. Thus, for the earlier considered example with the variables a, b, and n, the packing of the data can be performed as follows:

bufpos = 0;
MPI_Pack(&a, 1, MPI_DOUBLE, buf, bufsize, &bufpos, comm);
MPI_Pack(&b, 1, MPI_DOUBLE, buf, bufsize, &bufpos, comm);
MPI_Pack(&n, 1, MPI_INT, buf, bufsize, &bufpos, comm);

The following function can be useful for determining the necessary size of the packing buffer:

int MPI_Pack_size (int count, MPI_Datatype type, MPI_Comm comm, int *size);

which places into size the amount of memory necessary for the packing of count elements of type type.

After the packing of all the necessary data, the prepared buffer can be used in the data communication functions; the type MPI_PACKED must be specified in this case as the data type.

After the reception of a message with the type MPI_PACKED, the data can be unpacked by means of the function:

int MPI_Unpack (void *buf, int bufsize, int *bufpos,
                void *data, int count, MPI_Datatype type, MPI_Comm comm);

where
- buf is the memory buffer with the packed data;
- bufsize is the size of the buffer in bytes;
- bufpos is the position for the beginning of the reading from the buffer (in bytes from the beginning of the buffer);
- data is the memory buffer into which the unpacked data are to be placed;
- count is the number of the elements in the buffer;
- type is the type of the unpacked data;
- comm is the communicator of the packed message.

The function MPI_Unpack unpacks, beginning from the position bufpos, the next portion of data from the buffer buf and places it into the buffer data. The general scheme of the unpacking procedure is shown in Fig. 4.8 (b).
The initial value of the variable bufpos must be set before the beginning of the unpacking; afterwards it is maintained by the function MPI_Unpack itself. The function MPI_Unpack is called sequentially for the unpacking of all the packed data; the order of the unpacking must correspond to the order of the packing. Thus, for the example of packing considered earlier, the unpacking can be performed as follows:

bufpos = 0;
MPI_Unpack(buf, bufsize, &bufpos, &a, 1, MPI_DOUBLE, comm);
MPI_Unpack(buf, bufsize, &bufpos, &b, 1, MPI_DOUBLE, comm);
MPI_Unpack(buf, bufsize, &bufpos, &n, 1, MPI_INT, comm);

In conclusion, a recommendation on the choice between the two considered ways of forming messages can be given. The use of packing and unpacking leads to additional operations of copying the data; this approach is therefore justified when the sizes of the messages are comparatively small and the layout of the data changes often, while the mechanism of derived types is preferable for repeated exchanges with a fixed structure of the data. A complete sketch of an exchange with packed data follows below.
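The following fragment (a sketch composed for this text; the function name PackedExchange and the buffer size of 64 bytes are ours) packs a double and an int, sends the buffer with the type MPI_PACKED, and unpacks the values on the receiver; at least two processes are assumed:

#include "mpi.h"

void PackedExchange(int ProcRank) {
  char buf[64];
  int bufpos = 0;
  double a = 1.0;
  int n = 10;
  MPI_Status status;
  if ( ProcRank == 0 ) {
    MPI_Pack(&a, 1, MPI_DOUBLE, buf, 64, &bufpos, MPI_COMM_WORLD);
    MPI_Pack(&n, 1, MPI_INT, buf, 64, &bufpos, MPI_COMM_WORLD);
    // bufpos now holds the exact number of bytes to transmit
    MPI_Send(buf, bufpos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
  }
  else if ( ProcRank == 1 ) {
    MPI_Recv(buf, 64, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
    bufpos = 0;
    MPI_Unpack(buf, 64, &bufpos, &a, 1, MPI_DOUBLE, MPI_COMM_WORLD);
    MPI_Unpack(buf, 64, &bufpos, &n, 1, MPI_INT, MPI_COMM_WORLD);
  }
}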

4.6. Managing Groups of Processes and Communicators

The processes of a parallel program are united into groups. All the processes of the program enter the initial group; for the control of separate stages of the computations, however, it can be convenient to use only a part of the available processes. Besides, the presence of several groups allows the separation of the communication flows of the different parts of the planned computations (e.g., of a computational library and of the user program which applies it). It is important to remember that all the data communication operations considered earlier can be executed only inside a particular communicator, i.e. a separate communication context formed on the basis of a certain group of processes.

As was noted earlier, a communicator in MPI is a specially created service object uniting within itself a group of processes and a number of additional parameters (the context) used in the execution of data communication operations. Point-to-point operations are executed, as a rule, for processes belonging to the same communicator; collective operations are applied simultaneously to all the processes of a communicator. The creation of the necessary groups and communicators can thus prove quite useful in the course of organizing parallel computations.

In the course of the computations new groups and communicators can be created and the existing ones can be deleted. One and the same process can belong to different groups and communicators. All the processes of the parallel program enter the communicator MPI_COMM_WORLD, created by default.

If data have to be transmitted between processes belonging to different groups, an intercommunicator must be created. The interaction between processes of different groups turns out to be necessary rather seldom and is not considered here; a detailed description can be found, e.g., in Nemnyugin and Stesik (2002), Gropp et al. (1994), Pacheco (1996).

4.6.1. Managing Groups

Groups of processes can be formed only on the basis of the already existing groups. The group associated with a communicator, e.g. with MPI_COMM_WORLD, can be obtained as the initial group by means of the function:

int MPI_Comm_group ( MPI_Comm comm, MPI_Group *group );

New groups can then be created on the basis of the existing ones:

- the creation of the group newgroup from the group oldgroup, including the n processes whose ranks are enumerated in the array ranks:

int MPI_Group_incl(MPI_Group oldgroup, int n, int *ranks, MPI_Group *newgroup);

- the creation of the group newgroup from the group oldgroup, excluding the n processes whose ranks are enumerated in the array ranks:

int MPI_Group_excl(MPI_Group oldgroup, int n, int *ranks, MPI_Group *newgroup);

New groups can also be obtained by means of the operations over the existing groups:

- the creation of the group newgroup as the union of the groups group1 and group2:

int MPI_Group_union(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup);

- the creation of the group newgroup as the intersection of the groups group1 and group2:

int MPI_Group_intersection ( MPI_Group group1, MPI_Group group2,
                             MPI_Group *newgroup );

- the creation of the group newgroup as the difference of the groups group1 and group2:

int MPI_Group_difference ( MPI_Group group1, MPI_Group group2,
                           MPI_Group *newgroup );

A group created in such a way can turn out to contain no processes at all; such an empty group coincides with the predefined group MPI_GROUP_EMPTY.

MPI provides the following functions for obtaining information about a group of processes:

- obtaining the number of the processes in a group:

int MPI_Group_size ( MPI_Group group, int *size );

- obtaining the rank of the current process in a group:

int MPI_Group_rank ( MPI_Group group, int *rank );

After the completion of its use, a group must be deleted:

int MPI_Group_free ( MPI_Group *group );

(the execution of this operation does not affect the communicators in which the group being deleted is used).

4.6.2. Managing Communicators

Let us note, first of all, that this subsection considers the management of intracommunicators, which are used for data communication operations inside one group of processes. As was noted earlier, the creation of communicators can be useful for reducing the area of the action of the collective operations and for eliminating the mutual influence of the different parts of the parallel program. It should also be remembered that the operations of creating communicators are collective and must, therefore, be executed by all the processes of the initial communicator.

The following two functions of creating communicators can be mentioned:

- the duplication of an existing communicator:

int MPI_Comm_dup ( MPI_Comm oldcomm, MPI_Comm *newcomm );

- the creation of a new communicator from a subset of the processes of an existing communicator:

int MPI_Comm_create (MPI_Comm oldcomm, MPI_Group group, MPI_Comm *newcomm);

The duplication of a communicator can be used, e.g., for eliminating the possibility of the intersection of the message tags in different parts of the parallel program (including in the parts developed as separate program libraries).

As an example of the use of the function of creating a communicator from a group, let us give a program fragment in which a communicator is created comprising all the processes except the process with rank 0 of the communicator MPI_COMM_WORLD (such a communicator can be useful for the "manager - workers" scheme of organizing the computations; see Section 6):

MPI_Group WorldGroup, WorkerGroup;
MPI_Comm Workers;
int ranks[1];
ranks[0] = 0;
// obtaining the group of the processes of MPI_COMM_WORLD
MPI_Comm_group(MPI_COMM_WORLD, &WorldGroup);
// creating the group without the process with rank 0
MPI_Group_excl(WorldGroup, 1, ranks, &WorkerGroup);
// creating the communicator for the group
MPI_Comm_create(MPI_COMM_WORLD, WorkerGroup, &Workers);
...
MPI_Group_free(&WorkerGroup);
MPI_Comm_free(&Workers);

A fast and useful way of the simultaneous creation of several communicators is ensured by the function:

int MPI_Comm_split ( MPI_Comm oldcomm, int split, int key, MPI_Comm *newcomm );

where
- oldcomm is the initial communicator;
- split is the number of the communicator to which the calling process must belong;
- key is the parameter controlling the order of the rank of the process in the communicator being created;
- newcomm is the communicator being created.

The function MPI_Comm_split must be called by every process of the communicator oldcomm. The execution of the function results in the separation of the processes into non-intersecting groups, one for each different value of the parameter split; on the basis of the formed groups, a family of communicators is created. The order of the numeration of the processes inside each created communicator is chosen in correspondence with the values of the parameter key (a process with a larger value of key obtains a larger rank).

As an example, let us consider the problem of representing a set of processes in the form of a two-dimensional grid. Let p = q*q be the total number of the processes; the following program fragment associates a separate communicator with each row of the created topology:

MPI_Comm comm;
int rank, row;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
row = rank/q;
MPI_Comm_split(MPI_COMM_WORLD, row, rank, &comm);

Thus, when, for instance, p = 9, the processes with the ranks (0,1,2) form the first communicator, the processes with the ranks (3,4,5) the second one, etc.

After the completion of its use, a communicator must be deleted; the following function serves for this:

int MPI_Comm_free ( MPI_Comm *comm );

4.7. Virtual Topologies

The topology of a computer system is understood as the structure of the communication channels between its nodes (processors); among the examples of topologies, the complete graph, the ring, the star, the grid, the hypercube, etc. can be mentioned.

The network topology is essential in the following respect: a data communication operation between two processors can be executed directly only if there is a communication channel between them; otherwise the transmission has to be organized through intermediate transit nodes, which increases the duration of the operation. The time characteristics of the communications thus depend substantially on the structure of the network.

The logical topology of the communication lines in a parallel program is, as was noted earlier, a complete graph: independently of the presence of direct physical channels, any two processes can exchange data with one another. At the same time (see Section 3), the presentation and the analysis of many parallel algorithms is considerably facilitated by using particular logical networks of the transmission of information. In order to reduce the difference between the logical structure of the algorithm and the physical structure of the computer system, MPI gives the possibility of representing the set of the processes in the form of one or another virtual topology; moreover, a particular MPI implementation can use the information on the declared virtual topology for the optimal placement of the processes on the processors.

MPI supports virtual topologies of two types: the Cartesian topologies (grids of arbitrary dimension) and the topologies of the graph type of an arbitrary form. It should be emphasized that the virtual topologies are logical ones: their use does not impose any limitations on the data communication operations, which, as before, can be executed between any processes of a communicator.

4.7.1. Cartesian Topologies (Grids)

Cartesian topologies suppose the representation of the set of processes in the form of a rectangular grid (see Section 1.4.1 and Fig. 1.7), possibly with the definition of ring structures along separate dimensions (a torus). Topologies of this kind are used, e.g., in the parallel matrix algorithms (see Sections 7 and 8) and in the grid methods of computations (see Section 12).

The following function is used in MPI for creating a Cartesian topology (grid):

int MPI_Cart_create(MPI_Comm oldcomm, int ndims, int *dims, int *periods,
                    int reorder, MPI_Comm *cartcomm);

where
- oldcomm is the initial communicator;
- ndims is the dimensionality of the Cartesian grid;
- dims is an array of length ndims giving the number of the processes in each dimension of the grid;
- periods is an array of length ndims defining whether each dimension of the grid is periodic (ring-like);
- reorder is the parameter permitting (or forbidding) the change of the ranks of the processes when the topology is created;
- cartcomm is the communicator being created with the Cartesian topology of the processes.

It should be noted that, since the operation of creating the topology is a collective one, its execution must be performed by all the processes of the initial communicator.

For the explanation of the purpose of the parameters of the function MPI_Cart_create, let us give an example of creating a two-dimensional grid 4x4 whose rows and columns have the structure of rings (as a result of which the created topology is in fact a torus):

// creation of a two-dimensional grid 4x4
MPI_Comm GridComm;
int dims[2], periods[2], reorder = 1;
dims[0] = dims[1] = 4;
periods[0] = periods[1] = 1;
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, reorder, &GridComm);
Two functions are useful for the determination of the Cartesian coordinates of a process by its rank and, vice versa, of the rank of a process by its coordinates. The Cartesian coordinates of a process are determined by the function:

int MPI_Cart_coords(MPI_Comm comm, int rank, int ndims, int *coords);

where
- comm is the communicator with the grid topology;
- rank is the rank of the process whose Cartesian coordinates are to be determined;
- ndims is the dimensionality of the grid;
- coords are the Cartesian coordinates of the process returned by the function.

The reverse operation, the determination of the rank of a process by its Cartesian coordinates, is ensured by the function:

int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank);

where
- comm is the communicator with the grid topology;
- coords are the Cartesian coordinates of the process;
- rank is the rank of the process returned by the function.

An extremely useful procedure in many applications is the splitting of a grid into subgrids of a smaller dimension:

int MPI_Cart_sub(MPI_Comm comm, int *subdims, MPI_Comm *newcomm);

where
- comm is the initial communicator with the grid topology;
- subdims is the array defining which dimensions must remain in the subgrids being created;
- newcomm is the communicator being created with the subgrid.

It should be noted that the operation of creating subgrids is also a collective one and must, therefore, be executed by all the processes of the initial communicator. In the course of its execution, the function MPI_Cart_sub creates a separate communicator for each combination of the coordinates of the fixed dimensions of the initial grid.
As an example of the use of the function MPI_Cart_sub, let us consider the creation of communicators for the separate rows and columns of the two-dimensional grid created earlier:

// creation of communicators for each row and column of the grid
MPI_Comm RowComm, ColComm;
int subdims[2];
// creation of the communicators for the rows
subdims[0] = 0;  // the first dimension is fixed
subdims[1] = 1;  // the second dimension is kept in the subgrids
MPI_Cart_sub(GridComm, subdims, &RowComm);
// creation of the communicators for the columns
subdims[0] = 1;
subdims[1] = 0;
MPI_Cart_sub(GridComm, subdims, &ColComm);

In the given example, 8 communicators are created for the grid 4x4: one for each row and each column of the grid. For every separate process, the defined communicators RowComm and ColComm correspond to the row and the column of the processes to which this process belongs.
An additional function of MPI, MPI_Cart_shift, ensures the support of the shift communication procedure along a dimension of the grid (operations of this kind underlie, e.g., a number of the parallel matrix algorithms; see Section 3). Two types of shifts are distinguished:
- the cyclic shift by k elements along a dimension of the grid, in which the data from the process i are transmitted to the process with the coordinate (i+k) mod dim, where dim is the size of the dimension along which the shift is performed;
- the linear shift by k positions, in which the data from the process i are transmitted to the process with the coordinate i+k (if such a process exists, i.e. if the shift does not lead outside the boundary of the dimension).

Whether the shift is cyclic or linear is determined by the periodicity, declared when the grid is created, of the dimension along which the shift is performed. The function MPI_Cart_shift ensures the obtaining of the ranks of the processes which are to exchange data with the current process (the process which has called the function):

int MPI_Cart_shift(MPI_Comm comm, int dir, int disp, int *source, int *dst);

where
- comm is the communicator with the grid topology;
- dir is the number of the dimension along which the shift is performed;
- disp is the magnitude of the shift (values < 0 denote a shift toward the beginning of the dimension);
- source is the rank of the process from which the current process must receive the data;
- dst is the rank of the process to which the current process must send its data.

It should be noted that the function MPI_Cart_shift only determines the ranks of the processes which are to exchange data in the course of the shift operation. The actual transmission of the data can be performed, e.g., by means of the function MPI_Sendrecv, as shown in the sketch below.
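The following fragment (a sketch composed for this text; the function name ShiftAlongRow is ours) performs a cyclic shift of one value along the ring formed by a row of the periodic grid GridComm created above:

#include "mpi.h"

void ShiftAlongRow(MPI_Comm GridComm) {
  int value, source, dst;
  MPI_Status status;
  MPI_Comm_rank(GridComm, &value); // use the rank as the payload
  // ranks of the neighbors for a shift by +1 along dimension 1
  MPI_Cart_shift(GridComm, 1, 1, &source, &dst);
  // perform the actual data movement in a single buffer
  MPI_Sendrecv_replace(&value, 1, MPI_INT, dst, 0, source, 0,
                       GridComm, &status);
}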

4.7.2. Graph Topologies

The information on the functions of MPI for operating with virtual topologies of the graph type will be presented more briefly; a more detailed description can be found, e.g., in Nemnyugin and Stesik (2002), Gropp et al. (1994), Pacheco (1996).

The following function serves for the creation of a communicator with the topology of a graph:

int MPI_Graph_create(MPI_Comm oldcomm, int nnodes, int *index, int *edges,
                     int reorder, MPI_Comm *graphcomm);

where
- oldcomm is the initial communicator;
- nnodes is the number of the vertices of the graph;
- index is an array in which the element with the number i contains the total number of the edges outgoing from the vertices 0, 1, ..., i (i.e. the accumulated sums of the degrees of the vertices);
- edges is the array enumerating, one after another, the destination vertices of the edges of all the vertices;
- reorder is the parameter permitting (or forbidding) the change of the ranks of the processes;
- graphcomm is the communicator being created with the topology of the graph type.

The operation of creating the topology is a collective one and must, therefore, be executed by all the processes of the initial communicator.

For an example, let us create the topology of a graph whose structure is shown in Fig. 4.9: a star, in which the central process is connected with all the peripheral ones.

Fig. 4.9. An example of a graph topology: a star of five processes

In the given example the number of the processes equals 5, the degrees of the vertices (the numbers of their neighbors) are (4,1,1,1,1), and the connections between the vertices are the following:

Process   Neighbors
0         1, 2, 3, 4
1         0
2         0
3         0
4         0

The array index must accordingly contain the accumulated sums of the degrees, i.e. (4,5,6,7,8). The creation of the topology can thus be performed as follows:

// creation of the star topology
int index[] = { 4, 5, 6, 7, 8 }; // the accumulated numbers of the edges
int edges[] = { 1, 2, 3, 4, 0, 0, 0, 0 };
MPI_Comm StarComm;
MPI_Graph_create(MPI_COMM_WORLD, 5, index, edges, 1, &StarComm);
Let us mention two more useful functions for operating with graph topologies. The number of the neighboring processes, into which the outgoing edges from the current process lead, can be obtained by the function:

int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors);

The obtaining of the ranks of the neighboring vertices is ensured by the function:

int MPI_Graph_neighbors(MPI_Comm comm, int rank, int mneighbors, int *neighbors);

where mneighbors is the size of the array neighbors. A sketch of the use of these functions follows below.
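The following fragment (a sketch composed for this text; the function name PrintNeighbors is ours) prints the neighbors of the calling process in the star topology StarComm created above:

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

void PrintNeighbors(MPI_Comm StarComm) {
  int rank, nneighbors, i;
  MPI_Comm_rank(StarComm, &rank);
  MPI_Graph_neighbors_count(StarComm, rank, &nneighbors);
  int *neighbors = (int*)malloc(nneighbors * sizeof(int));
  MPI_Graph_neighbors(StarComm, rank, nneighbors, neighbors);
  for ( i = 0; i < nneighbors; i++ )
    printf("process %d is connected to %d\n", rank, neighbors[i]);
  free(neighbors);
}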

4.8. Additional Information on MPI

4.8.1. The Development of Parallel Programs with MPI in the Algorithmic Language Fortran

The development of parallel programs with MPI in the algorithmic language Fortran does not contain any essential peculiarities in comparison with what has been described above for C:

1. the subprograms of the MPI library are procedures and are thus called by means of the procedure call statement CALL;
2. the completion codes are obtained through an additional parameter of the integer type, located last in the list of the parameters of the procedures;
3. the variable status is an array of the integer type consisting of MPI_STATUS_SIZE elements;
4. the types MPI_Comm and MPI_Datatype are represented by the type INTEGER.

As an example, let us give a variant of the program of Section 4.2.1.5 in Fortran.
. 4.2.1.5 Fortran.
PROGRAM MAIN
include 'mpi.h'
INTEGER PROCNUM, PROCRANK, RECVRANK, IERR
INTEGER STATUS(MPI_STATUS_SIZE)
CALL MPI_Init(IERR)
CALL MPI_Comm_size(MPI_COMM_WORLD, PROCNUM, IERR)
CALL MPI_Comm_rank(MPI_COMM_WORLD, PROCRANK IERR)
IF ( PROCRANK.EQ.0 )THEN
! , 0
PRINT *,"Hello from process ", PROCRANK
DO i = 1, PROCNUM-1
CALL MPI_RECV(RECVRANK, 1, MPI_INT, MPI_ANY_SOURCE,
MPI_ANY_TAG, MPI_COMM_WORLD, STATUS, IERR)
PRINT *,"Hello from process ", RECVRANK
END DO
ELSE ! , ,
! 0
CALL MPI_SEND(PROCRANK,1,MPI_INT,0,0,MPI_COMM_WORLD,IERR)
END IF
31

MPI_FINALIZE(IERR);
STOP
END

4.8.2. An Overview of MPI Program Execution Environments

The environment for the execution of MPI programs is formed, in the general case, by an implementation of the MPI standard together with the means of compiling, launching, and controlling parallel programs. Such an environment usually has to be installed separately: the MPI implementations are not, as a rule, part of the standard program development tools (such as, e.g., Microsoft Visual Studio).

The main distinguishing features of the different MPI environments are the set of the supported hardware and communication platforms, the completeness of the implementation of the standard, and the available means of the development, debugging, and execution of the programs. Freely distributed, widely used implementations of MPI exist for practically all platforms; the best-known of them is the MPICH library, which is used for the examples of this section.

The launching of an MPI program for execution is performed, in most environments, by means of the utility mpirun (mpiexec). Among the possible parameters of this utility the following can be mentioned:
- the mode of execution: local or multi-computer; the local mode (e.g., the key localonly in MPICH for Windows) is usually used for debugging, when all the processes of the parallel program are located on the computer from which the program is launched;
- the number of the processes which must be created when the parallel program is launched;
- the list of the computers (the network names of the nodes) to be used;
- the executable file of the parallel program;
- the command line of the launched program.
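As an illustration, with MPICH a compiled program might be started on four processes as follows (the exact option names vary between the implementations and their versions, so the line should be checked against the documentation of the installed MPI library):

mpirun -np 4 myprog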
The utility mpirun distributes the launched processes among the enumerated computers, transfers the data of the parallel program to them, and initiates the start of the execution. A substantial number of additional parameters serves for the fine configuration of the launch.

Additional information on the organization and the administration of cluster systems can be obtained, e.g., in Sterling (2001, 2002). It should also be noted that the freely distributed MPICH implementation realizes the MPI-1 standard and, partially, the possibilities of MPI-2.
4.8.3. Additional Possibilities of the MPI-2 Standard

As was noted earlier, the MPI-2 standard was adopted in 1997. In spite of this, it is, as yet, used comparatively seldom. Among the possible reasons, the conservatism of the developers of the already existing software and the comparative complexity of the implementation of the new standard can be mentioned; the important point is also that the possibilities of MPI-1 are sufficient for the implementation of many parallel algorithms, while the orientation of MPI-2 is, to a considerable degree, toward the further development of the field.

The full information on the MPI-2 standard can be obtained on the site http://www.mpiforum.org and in Gropp et al. (1999b). Here we shall briefly enumerate the additional possibilities of the MPI-2 standard:
- the dynamic generation and deletion of processes;
- the one-sided communications and the means of organizing distributed shared memory;
- parallel input/output;
- the extension of the possibilities of the collective operations (among which, e.g., the variants of the operations for intercommunicators can be mentioned);
- the support of multithreading;
- the interface for the algorithmic language C++.

4.9. Summary

This section has been devoted to the consideration of the methods of parallel programming for the computer systems with distributed memory with the use of MPI.

At the beginning of the section the necessity of a unified means for the organization of message passing is noted, and the name MPI is explained: MPI - the message passing interface - is both the standard of the interface and the program libraries which implement this standard. MPI lowers considerably the acuteness of the problem of the portability of parallel programs between different computer systems, and the availability of carefully tuned implementations promotes the efficiency of the parallel computations. The introduction also characterizes the advantages of MPI and gives a brief history of its creation.

In subsection 4.1 a number of notions and definitions basic for the MPI standard is considered: a parallel program, the operations of data communication, the communicators, the data types, and the virtual topologies. It is noted that the development of parallel programs can be begun after mastering only six functions of MPI (for comparison, the complete MPI-1 standard contains more than 125 functions).

In subsection 4.2 an introduction to the development of parallel programs with MPI is presented: the functions of initialization and termination, of the determination of the number and the rank of the processes, and of the sending and the receiving of messages are considered; a first parallel program is developed; the means of measuring the execution time and the first collective operations (broadcasting, reduction, synchronization) are introduced.

In subsection 4.3 the modes of the execution of the operations of data sending between two processes are considered. The standard, the synchronous, the buffered, and the ready modes are presented in MPI by separate functions; the non-blocking variants of the exchange operations and the simultaneous execution of sending and receiving are considered as well.

In subsection 4.4 the collective data communication operations are presented. The given set of the functions of MPI covers practically all the basic collective operations considered in Section 3.

In subsection 4.5 the questions connected with the formation of complex messages are considered; for this purpose MPI provides the mechanism of derived data types and the explicit operations of packing and unpacking data. All the basic methods of constructing derived types are described: the contiguous, the vector, the indexed, and the structural ones.

In subsection 4.6 the management of groups of processes and communicators is considered. The presented possibilities can be useful for reducing the area of the action of the collective operations and for eliminating the mutual influence of the different parts of parallel programs.

In subsection 4.7 the questions of the use of virtual topologies in MPI are presented. MPI supports topologies of two types: the Cartesian topologies (grids of arbitrary dimension) and the topologies of the graph type of an arbitrary form.

Finally, subsection 4.8 gives additional information on MPI: the peculiarities of the development of MPI programs in the algorithmic language Fortran, a brief overview of the execution environments of MPI programs, and the additional possibilities of the MPI-2 standard.

4.10. Literature Overview

There is a large quantity of diverse study materials on MPI. First of all, the standard itself should be mentioned, which is freely available at the site of the MPI Forum: http://www.mpiforum.org. Freely available is also one of the most widely used implementations of MPI, the library MPICH: http://www-unix.mcs.anl.gov/mpi/mpich (the implementation MPICH2, realizing the MPI-2 standard, is located at http://www-unix.mcs.anl.gov/mpi/mpich2). Materials on MPI in Russian can be found at http://www.parallel.ru.

As study literature, the books Gropp et al. (1994), Pacheco (1996), Snir et al. (1996), and Gropp et al. (1999a) can be recommended; the MPI-2 standard is described in Gropp et al. (1999b). Among the works in Russian, Voevodin and Voevodin (2002), Nemnyugin and Stesik (2002), and Korneev (2003) can be mentioned.

The book by Quinn (2003) can also be useful: it presents a series of typical problems of parallel programming, and MPI is used there consistently as the means of developing the parallel programs.

4.11. Review Questions

1. What minimal set of operations is sufficient for the organization of parallel computations in distributed-memory systems?
2. What is understood by a parallel program within the framework of MPI?
3. How are the processes of a parallel program distinguished from one another?
4. How are the number of the processes and the rank of a process determined?
5. Which minimal set of MPI functions allows one to begin the development of parallel programs?
6. How is the message being sent or received described?
7. What is understood by the blocking mode of the execution of communication operations?
8. Which data types are provided in MPI, and why is the pointing out of the type necessary?
9. In what situations can the execution of a parallel program become deadlocked?
10. How is the execution time of an MPI program measured?
11. What is a collective data communication operation?
12. How is the broadcasting of data performed, and which function of MPI serves for this?
13. What is understood by the data reduction operation, and which functions of MPI implement it?
14. How is the synchronization of the computations ensured in MPI?
15. Which modes of the execution of the data sending operation exist, and how is the sending function chosen for each separate mode?
16. What is the non-blocking way of the execution of the exchange operations in MPI, and for what is it useful?
17. How are the sending and the receiving of data executed simultaneously in MPI?
18. Which collective data communication operations are provided in MPI?
19. What is a derived data type, and by which characteristics is it described (the type map, the signature, the extent, the size)?
20. Which methods of constructing derived data types exist in MPI?
21. For what are the operations of packing and unpacking data intended?
22. What are groups of processes and communicators, and how are they created in MPI?
23. In what situations can the creation of new communicators be useful, and which functions of MPI serve for this?
24. What is understood by a virtual topology of processes?
25. What are the peculiarities of the development of parallel MPI programs in the algorithmic language Fortran?
26. Which additional possibilities are provided by the MPI-2 standard?

4.12. Exercises

For Section 4.2:
1. Develop a program for finding the minimal (maximal) value among the elements of a vector.
2. Develop a program for computing the scalar product of two vectors.
3. Develop a program in which two processes repeatedly exchange messages of length n bytes. Carry out experiments and estimate the dependence of the execution time of the operation on the length of the messages. Compare the results with the theoretical estimates of the communication model.

For Section 4.3:
4. Prepare variants of the earlier developed programs with the different modes of the execution of the data sending operations. Compare the execution times of the communication operations in the different modes.
5. Prepare variants of the earlier developed programs with the use of the non-blocking way of the execution of the exchange operations. Estimate the amount of the computational operations necessary for completely overlapping the communications and the computations. Develop a program in which the delays of the computations caused by waiting for the transmitted data would be completely absent.
6. Carry out exercise 3 with the use of the operation of the simultaneous sending and receiving of data. Compare the results of the computational experiments.

For Section 4.4:
7. Develop an example program for each collective operation available in MPI.
8. Develop implementations of the collective operations by means of point-to-point exchanges between the processes. Carry out computational experiments and compare the execution times of the developed programs and of the MPI functions for the collective operations.
9. Develop a program, carry out experiments, and compare the results for the different algorithms of the gathering, the processing, and the scattering of data (the function MPI_Allreduce).

For Section 4.5:
10. Develop an example program for each available method of constructing derived data types in MPI.
11. Develop an example program with the use of the functions of packing and unpacking data. Carry out experiments and compare the results with those obtained with derived data types.
12. Develop derived data types for the rows, the columns, and the diagonals of matrices.

For Section 4.6:
13. Develop an example program for each of the considered functions of managing the groups of processes and the communicators.
14. Develop a program for the representation of a set of processes in the form of a rectangular grid. Create communicators for each row and each column of the grid. Execute a collective operation for all the processes and for one of the created communicators. Compare the execution times of the operation.
15. Develop an example program demonstrating the work with the created groups and communicators.

For Section 4.7:
16. Develop an example program for the Cartesian topology.
17. Develop an example program for the topology of the graph type.
18. Develop subprograms for the creation of a set of additional virtual topologies (the star, the tree, etc.).

Voevodin, V.V., Voevodin, Vl.V. (2002). Parallel Computing. - St. Petersburg: BHV-Petersburg (in Russian).
Gergel, V.P., Strongin, R.G. (2001). Fundamentals of Parallel Computing for Multiprocessor Computer Systems. - Nizhny Novgorod: NNSU Press (2nd edn., 2003) (in Russian).
Korneev, V.D. (2003). Parallel Programming in MPI. - Moscow-Izhevsk: Institute of Computer Science (in Russian).
Nemnyugin, S.A., Stesik, O.L. (2002). Parallel Programming for Multiprocessor Computer Systems. - St. Petersburg: BHV-Petersburg (in Russian).
Pacheco, P. (1996). Parallel Programming with MPI. - Morgan Kaufmann.
Gropp, W., Lusk, E., Skjellum, A. (1999a). Using MPI - 2nd Edition: Portable Parallel Programming with the Message Passing Interface (Scientific and Engineering Computation). - MIT Press.
Gropp, W., Lusk, E., Thakur, R. (1999b). Using MPI-2: Advanced Features of the Message Passing Interface (Scientific and Engineering Computation). - MIT Press.
Snir, M., Otto, S., Huss-Lederman, S., Walker, D., Dongarra, J. (1996). MPI: The Complete Reference. - Boston: MIT Press.
