
Multiple choice (only one answer is correct in each question) [2 pts each]
1) What functions have to be included in any MPI program?
A. MPI_Init and MPI_Abort
B. MPI_Init and MPI_Finalize
C. MPI_Send and MPI_Recv
D. MPI_Comm_size and MPI_Comm_rank
2) To run an MPI program on a linear array of 4 nodes, what
allocation of processes (viz., the list of the ranks of the processes
allocated on each node of the linear array) is possible?
A. 0 1 2 3
B. 3 2 1 0
C. 1 2 3 4
D. A and B
3) Suppose that node A is sending an n-packet message to node B in
a distributed-memory system with a static network. Also suppose
that the message must be forwarded through k intermediate nodes.
The startup time is s and the time for transmitting one packet to a
nearby node is c. What is the most appropriate formula for calculating the time of the above communication?
A. s + k c + n - 1
B. s + k n c
C. k n (s + c)
D. s + (k + n) c
4) Which one of the following is NOT a collective communication function?
A. MPI_Send
B. MPI_Reduce
C. MPI_Bcast
D. MPI_Allgather
5) What is the primary reason for using parallel computing?
A. Parallel computing is a natural way of programming
B. Parallel computing is another programming paradigm
C. Hardware technology makes building supercomputers feasible
D. We cannot rely on increasing the speed of CPU to meet the needs
for more computational power
6) According to Flynn's taxonomy, the classical von Neumann
architecture is a
A. SISD system
B. SIMD system
C. MISD system
D. MIMD system

7) The connectivity of a cluster of computers connected by an Ethernet hub is a
A. linear array
B. one-dimensional mesh
C. Bus
D. one-dimensional hypercube
8) Which one of the following doesn't look like a Grand Challenge problem?
A Weather forecasting
B Automatic exam grading system
C VLSI design
D Image processing
9) Which one of the following calls to the MPI_Reduce function is most likely to be correct?
A MPI_Reduce(&operand, &result, 1, MPI_INT, MPI_MAX, 0,
MPI_COMM_WORLD);
B MPI_Reduce(operand, result, 1, MPI_INT, MPI_MAX, 0,
MPI_COMM_WORLD);
C MPI_Reduce(&operand, &result, 1, int, MPI_MAX, 0,
MPI_COMM_WORLD);
D MPI_Reduce(&operand, &operand, 1, MPI_INT, MPI_MAX, 0,
MPI_COMM_WORLD);
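For reference, below is a minimal, self-contained sketch of a reduction in C that follows the MPI_Reduce signature given in the appendix; the variable names operand and result and the use of MPI_MAX are assumptions made for the illustration, not part of the question.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int myrank, operand, result;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    operand = myrank;   /* each process contributes its own rank */

    /* Buffers are passed by address; the datatype is an MPI constant, not a C type. */
    MPI_Reduce(&operand, &result, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

    if (myrank == 0)
        printf("Maximum contributed value: %d\n", result);

    MPI_Finalize();
    return 0;
}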
10) A hypercube is more scalable than a mesh.
A True
B False
11) Operations in SIMD machines are synchronized automatically.
A True
B False
12) MPI uses a directive to the operating system to create multiple
processes when running an SPMD program.
A True
B False
13) The necessity for employing parallel computing can be removed
by raising CPU speed.
A True
B False
14) A call to an MPI collective communication function at one endpoint of the communication can be paired with a call to a different function at the other endpoints.
A True
B False

15) A call to a synchronous MPI send routine can be paired with a call to an asynchronous MPI routine.
A True
B False
16) Message-Passing is criticized as an assembly language for
communication.
A True
B False
17) When a parallel algorithm has super-linear speedup, the speedup is greater than the number of processors used to run the program.
A True
B False
18) The orthogonal recursive bisection method for solving the N-body problem ensures that the partitioning of the space results in a balanced tree.
A True
B False
19) The parallel bucket sort algorithm is embarrassingly parallel since little communication among processes is needed.
A True
B False
20) MIMD stands for
A Multiple Instructions Multiple Data
B Multiple Instruction Streams Multiple Data Streams
C Multiple Instruction Strings Multiple Data Strings
D Mobile Instruction Streams Mobile Data Streams
21) The upper limit of speedup gained by using an SIMD machine
composed of n computational units is
A n/2
B n
C n
D n^2
22) Suppose the time used for sending a message from one node in a q × q 2-dimensional mesh to an adjacent node is 1 unit; broadcasting the message from a node at one of the four corners to all other nodes takes at least _____ time units.
A q-1
B q
C 2(q-1)
D 2q

23) A tree in which each node (except leaves) has 4 children is called
A binary tree
B quadtree
C octree
D square tree
24) The height of a balanced binary tree of n nodes is
A log(n)
B n
C n log(n)
D 2n
25) A communication pattern that involves all processes in the
communicator is called
A point-to-point communication
B collective communication
C group communication
D broadcast communication
26) The load balancing approach that involves the master node maintaining a work pool and allowing the slave nodes to request work whenever they finish the work assigned to them is called
A static load balancing
B master-slave load balancing
C work pool load balancing
D dynamic load balancing
27) To increase the size of a 3 × 4 mesh, the minimum number of nodes to be added to the network is
A 3
B 4
C 7
D 12
28) A communication function is in ____ mode if both endpoints
involved in the communication must be ready before the data
exchange can occur.
A standard
B buffered
C synchronous
D ready
29) In the Monte Carlo method for computing π, if each process is assigned the same number of random numbers to work on at the beginning of the program execution, we are using the _____ load balancing approach.

A static
B master-slave
C work pool
D dynamic
30) If a parallel program is developed in such a way that a single source program is written and each processor executes its own copy of this program, although independently and not in synchronism, the program is in _____ structure.
A SIMD
B MIMD
C SPMD
D MPMD
31) If a parallel program is developed within the MIMD classification and each processor has its own program to execute, the program is in _____ structure.
A SIMD
B MIMD
C SPMD
D MPMD
32) MPI uses
A static process creation
B dynamic process creation
C a routine spawn() to create processes
D both B and C
33) PVM uses
A static process creation
B dynamic process creation
C a routine spawn() to create processes
D both B and C
34) In the following general style of an MPI SPMD program, assume process 0 is the master process, and master() and slave() are to be executed by the master process and the slave processes, respectively. What should be the missing expression in the if statement?
MPI_Init(&argc, &argv);
.
.
MPI_Comm_rank(MPI_COMM_WORLD, &myrank); /* find process rank */
if (_________) // missing expression
    master();
else
    slave();
.
.
MPI_Finalize();
A myrank == 0
B myrank != 0
C myrank == 1
D myrank != 1
35) MPI is a
A programming language
B set of preprocessor directives
C message passing software
D standard for message passing interface
36) Assume process 0 (the master process) wants to distribute an array across a set of processes, i.e., partition the array into sub-arrays and send each sub-array to a process. If we want to use a single MPI routine to do the partitioning and distribution, which MPI routine is capable of doing this?
A MPI_Reduce
B MPI_Scatter
C MPI_Gather
D MPI_Alltoall
37) Assume process 0 (the master process) wants to gather results (single values) from all the processes and combine them into a single final result. If we want to use a single MPI routine to do the data gathering and combination, which MPI routine is capable of doing this?
A MPI_Reduce
B MPI_Scatter
C MPI_Gather
D MPI_Alltoall
38) Assume each process in a group of processes holds a row of the matrix in process rank order, i.e., row i of the matrix is held by process i. If we want to use a single MPI routine to transpose the matrix across the processes, i.e., the elements of each row are distributed across the processes with element i sent to process i, and after the data exchange each process holds one column of the matrix, with column i residing on process i, which MPI routine is capable of doing this?
A MPI_Reduce
B MPI_Scatter
A MPI_Reduce
B MPI_Scatter

C MPI_Gather
D MPI_Alltoall
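As an illustration of the data-distribution routines asked about in questions 36-38, here is a minimal sketch in C that scatters an array from the root and combines partial sums with a reduction; the array size N and the variable names are assumptions made for the example, and N is assumed to be divisible by the number of processes.

#include <mpi.h>
#include <stdio.h>

#define N 100   /* assumed total array size, divisible by the number of processes */

int main(int argc, char *argv[]) {
    int myrank, p, i;
    int data[N];                 /* the full array; significant only on the root */
    int local[N];                /* local buffer; only the first N/p entries are used */
    int part_sum = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int local_n = N / p;

    if (myrank == 0)
        for (i = 0; i < N; i++) data[i] = i;   /* root fills the whole array */

    /* Root partitions data into p blocks of local_n elements; block i goes to process i. */
    MPI_Scatter(data, local_n, MPI_INT, local, local_n, MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < local_n; i++)
        part_sum += local[i];

    /* Combine the partial sums on the root; MPI_Gather would instead collect them unreduced. */
    MPI_Reduce(&part_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myrank == 0)
        printf("Total: %d\n", total);

    MPI_Finalize();
    return 0;
}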
39) The following MPI program computes the sum of elements in
array x and lets process 0 print out the result. What code in the
heading of the for loop should complete the program, without
changing any other parts of the program (assume all variables are
properly defined)?
int x[100];
MPI_Init(&argc, &argv);
.
.
MPI_Comm_rank(MPI_COMM_WORLD, &myrank); /* find process rank */
MPI_Comm_size(MPI_COMM_WORLD, &p); /* find total number of processes */
MPI_Bcast(x, 100, MPI_INT, 0, MPI_COMM_WORLD);
// compute the partial sum on each process
part_sum = 0;
for (___________________) // missing code here
{
    part_sum += x[i];
}
MPI_Reduce(&part_sum, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (myrank == 0) printf("The final sum is: %d\n", result);
.
.
MPI_Finalize();
A i = myrank; i < n; i += p
B i = myrank+1; i < =n; i += p
C i = myrank*100/p-1; i < (myrank+1)*100/p-1; i++
D i = myrank*100/p+1; i < (myrank+1)*100/p+1; i++
40) If we use tree structured communication to distribute an array of
16 elements evenly across a set of 8 processes, how many elements
will be held by each process?
A 1
B 2
C 3
D 4

41) If we use tree structured communication to distribute an array of 16 elements evenly across a set of 8 processes, how many steps are needed to complete the data distribution?
A 1
B 2
C 3
D 4
42) If we use tree structured communication to distribute an array of 16 elements evenly across a set of 8 processes, each process computes a partial sum, and then we use the same tree structure to combine the partial sums at the root process, how many steps are needed in the communication, including data distribution and partial sum combination?
A 2
B 4
C 6
D 8
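Questions 40-42 refer to tree-structured data distribution. The sketch below shows one common way to realize it with point-to-point routines (recursive halving), under the assumptions that the number of processes p is a power of two and that the array size N is divisible by p; it is an illustration, not code from the original exam.

#include <mpi.h>
#include <stdio.h>

#define N 16   /* assumed: N divisible by p, and p a power of two */

int main(int argc, char *argv[]) {
    int myrank, p, i;
    int buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int active = (myrank == 0);             /* only the root starts with data */
    int seg_size = (myrank == 0) ? N : 0;   /* number of elements currently held */

    if (myrank == 0)
        for (i = 0; i < N; i++) buf[i] = i; /* root fills the whole array */

    /* log2(p) steps: at each step every current holder passes the upper half of its segment on. */
    for (int half = p / 2; half >= 1; half /= 2) {
        if (active && myrank % (2 * half) == 0) {
            MPI_Send(buf + seg_size / 2, seg_size / 2, MPI_INT,
                     myrank + half, 0, MPI_COMM_WORLD);
            seg_size /= 2;
        } else if (!active && myrank % half == 0) {
            seg_size = (N / p) * half;      /* size of the segment received at this level */
            MPI_Recv(buf, seg_size, MPI_INT, myrank - half, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            active = 1;
        }
    }

    /* every process now holds N/p elements; the same tree, run in reverse with partial sums,
       combines the results back at the root */
    printf("Process %d holds %d elements\n", myrank, seg_size);

    MPI_Finalize();
    return 0;
}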
43) A multiprocessor consists of 100 processors. If 10% of the code
is sequential and 90% is parallelizable, what is the sequential
fraction of the program?
A 10%
B 90%
C 80%
D 100%
44) A multiprocessor consists of 100 processors. If 10% of the code
is sequential and 90% is parallelizable, what will be the maximum
speedup when running this program on this multiprocessor?
A ~9.17
B ~10.00
C ~90.00
D ~100.00
45) A multiprocessor consists of 100 processors. If 10% of the code is sequential and 90% is parallelizable, and the time to run the sequential part of the program is considered time for communication and the time to run the parallelizable part of the program is considered time for computation, what will be the computation-communication ratio of the parallel program?
A 1:9
B 9:1
C 0.9:1
D 0.09:1

46) The law stating that the maximum speedup of a parallel program is limited by the sequential fraction of the initial sequential program is called
A Amdahl's Law
B Flynn's Law
C Moore's Law
D von Neumann's Law
47) If the sequential fraction of a program is f and the number of processors used to
run a parallelized version of the program is allowed to increase indefinitely, the
theoretical limitation on the speedup is:
A f
B 1/f
C 1-f
D 1/(1-f)
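For reference on questions 43-47, the relationship these questions rely on is Amdahl's law, written below in the exam's own plain notation with f the sequential fraction and p the number of processors; the numeric case (f = 0.1, p = 100) is worked out only as an illustration.

speedup S(p) = 1 / (f + (1 - f)/p)
limit as p grows without bound: S -> 1/f

For example, with f = 0.1 and p = 100:
S(100) = 1 / (0.1 + 0.9/100) = 1 / 0.109 ≈ 9.17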
48) Super-linear speedup is most likely to happen in which of the
following applications?
A weather forecasting
B modeling motion of astronomical bodies
C search for an item in a search space
D Monte Carlo method for computing definite integral
49) When sending a message, if the routine returns after the local
actions complete, though the message transfer may not have been
completed, the routine is
A a synchronous routine
B an asynchronous routine
C a blocking routine
D a non-blocking routine
50) The mechanism used to differentiate between different types of
messages being sent in a point-to-point MPI communication routine
is
A the count parameter
B the data type
C the message tag
D the communicator
Appendix: The syntax of some MPI functions that may be used to
answer the questions:
int MPI_Init(int* argc_ptr /* in/out */,
char** argv_ptr[] /* in/out */)
int MPI_Finalize(void)
int MPI_Comm_rank(MPI_Comm comm /* in */,
int* rank /* out */)
int MPI_Comm_size(MPI_Comm comm /* in */,
int* size /* out */)
int MPI_Send(void* buffer /* in */,
int count /* in */,
MPI_Datatype datatype /* in */,
int destination /* in */,
int tag /* in */,
MPI_Comm communicator /* in */)
int MPI_Recv(void* buffer /* out */,
int count /* in */,
MPI_Datatype datatype /* in */,
int source /* in */,
int tag /* in */,
MPI_Comm communicator /* in */,
MPI_Status* status /* out */)
int MPI_Bcast(
void* message /* in/out */,
int count /* in */,
MPI_Datatype datatype /* in */,
int root /* in */,
MPI_Comm comm /* in */)
int MPI_Reduce(
void* operand /* in */,
void* result /* out */,
int count /* in */,
MPI_Datatype datatype /* in */,
MPI_Op operator /* in */,
int root /* in */,
MPI_Comm comm /* in */)
int MPI_Gather(
void* send_data /* in */,
int send_count /* in */,
MPI_Datatype send_type /* in */,
void* recv_data /* out */,
int recv_count /* in */,
MPI_Datatype recv_type /* in */,
int root /* in */,
MPI_Comm comm /* in */)
int MPI_Scatter(
void* send_data /* in */,
int send_count /* in */,
MPI_Datatype send_type /* in */,
void* recv_data /* out */,
int recv_count /* in */,
MPI_Datatype recv_type /* in */,
int root /* in */,
MPI_Comm comm /* in */)
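The following is a minimal, self-contained C program sketched purely to show how the routines above fit together in an SPMD program; the broadcast value and the message contents are assumptions made for the illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int myrank, p, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    /* Broadcast a value from the root (process 0) to every process. */
    value = (myrank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (myrank != 0) {
        /* Each worker sends its rank back to the master with a point-to-point send. */
        MPI_Send(&myrank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int i, r;
        for (i = 1; i < p; i++) {
            MPI_Recv(&r, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
            printf("Master received rank %d (broadcast value was %d)\n", r, value);
        }
    }

    MPI_Finalize();
    return 0;
}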
