Introduction to MPI Scalable Computing Support Center Duke University http://wiki.duke.edu/display/SCSC scsc at duke.edu John Pormann, Ph.D. jbp1 at duke.edu
Outline Overview of Parallel Computing Architectures: “Shared Memory” versus “Distributed Memory” Intro to MPI: parallel programming with only 6 function calls Better Performance: async communication (Latency Hiding)
MPI MPI is short for the Message Passing Interface, an industry-standard library for sending and receiving arbitrary messages between the tasks of a user’s program There are two major versions: v.1.2 -- Basic sends and receives v.2.x -- “One-sided Communication” and process-spawning It is fairly low-level You must explicitly send data-items from one machine to another You must explicitly receive data-items from other machines Very easy to forget a message and have the program lock-up The library can be called from C/C++, Fortran (77/90/95), Java
MPI in 6 Function Calls MPI_Init(…) Start-up the messaging system (and remote processes) MPI_Comm_size(…) How many parallel tasks are there? MPI_Comm_rank(…) Which task number am I? MPI_Send(…) Send a message to another task MPI_Recv(…) Receive a message from another task MPI_Finalize() Shut down the messaging system
“Hello World”

#include "mpi.h"
int main( int argc, char** argv ) {
   int SelfTID, NumTasks, t, data;
   MPI_Status mpistat;
   MPI_Init( &argc, &argv );
   MPI_Comm_size( MPI_COMM_WORLD, &NumTasks );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );
   printf("Hello World from %i of %i",SelfTID,NumTasks);
   if( SelfTID == 0 ) {
      for(t=1;t<NumTasks;t++) {
         MPI_Send(&data,1,MPI_INT,t,55,MPI_COMM_WORLD);
      }
   } else {
      MPI_Recv(&data,1,MPI_INT,0,55,MPI_COMM_WORLD,&mpistat);
      printf("TID%i: received data=%i",SelfTID,data);
   }
   MPI_Finalize();
   return( 0 );
}
What happens in “HelloWorld”? TID#0 MPI_Init(…)
What happens in “HelloWorld”? TID#0 TID#1 MPI_Init(…) MPI_Init(…) TID#3 TID#2 MPI_Init(…) MPI_Init(…)
What happens in “HelloWorld”? TID#0 TID#1 MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) TID#3 TID#2 MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) MPI_Comm_size(…) MPI_Comm_rank(…) printf(…)
What happens in “HelloWorld”? TID#0 TID#1 if( SelfTID == 0 ) {    for(…)       MPI_Send(…) } else {    MPI_Recv(…) TID#3 TID#2 } else {    MPI_Recv(…) } else {    MPI_Recv(…)
MPI Initialization and Shutdown MPI_Init( &argc, &argv) The odd argument list is done to allow MPI to pass the command-line arguments to all remote processes Technically, only the first MPI-task gets (argc,argv) from the operating system ... it has to send them out to the other tasks Don’t use (argc,argv) until AFTER MPI_Init is called MPI_Finalize() Closes network connections and shuts down any other processes or threads that MPI may have created to assist with communication
Where am I?  Who am I? MPI_Comm_size returns the size of the “Communicator” (group of machines/tasks) that the current task is involved with MPI_COMM_WORLD means “All Machines/All Tasks” You can create your own Communicators if needed, but this can complicate matters as Task-5 in MPI_COMM_WORLD may be Task-0 in NewComm MPI_Comm_rank returns the “rank” of the current task inside the given Communicator ... integer ID from 0 to SIZE-1 These two integers are the only information you get to separate out what task should do what work within your program MPI can do pipeline parallelism, domain decomposition, client-server, interacting peers, etc., it all depends on how you program it
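Below is a minimal sketch (not from the original slides) of how rank and size typically drive a simple domain decomposition: each task claims a contiguous chunk of a global array. The problem size N and the per-element work are hypothetical placeholders.

#include "mpi.h"
int main( int argc, char** argv ) {
   int SelfTID, NumTasks, i, start, stop;
   int N = 1000;                          /* hypothetical global problem size */
   MPI_Init( &argc, &argv );
   MPI_Comm_size( MPI_COMM_WORLD, &NumTasks );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );
   start = SelfTID*(N/NumTasks);          /* this task's first index */
   stop  = (SelfTID+1)*(N/NumTasks);      /* one past this task's last index */
   if( SelfTID == NumTasks-1 ) stop = N;  /* last task picks up any remainder */
   for(i=start;i<stop;i++) {
      /* ... work on element i ... */
   }
   MPI_Finalize();
   return( 0 );
}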
MPI_Send Arguments The MPI_Send function requires a lot of arguments to identify what data is being sent and where it is going (data-pointer,num-items,data-type) specifies the data MPI defines a number of data-types: MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_LONG_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_COMPLEX, MPI_DOUBLE_COMPLEX, MPI_UNSIGNED, MPI_UNSIGNED_LONG You can define your own data-types as well (user-defined structures) You can often kludge it with (ptr,sizeof(struct),MPI_CHAR) “Destination” identifier (integer) specifies the remote process Technically, the destination ID is relative to a “Communicator” (group of machines), but MPI_COMM_WORLD is “all machines” MPI_Send(dataptr,numitems,datatype,dest,tag,MPI_COMM_WORLD);
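As a concrete sketch of the (ptr,sizeof(struct),MPI_CHAR) kludge mentioned above -- this is not from the original slides, and the Particle struct, its fields, and the tag value are hypothetical. It assumes SelfTID has been set up as in the earlier examples, and that sender and receiver have identical struct layouts, since the struct is shipped as raw bytes.

/* hypothetical struct, sent as raw bytes (both sides must agree on the layout) */
typedef struct {
   double x, y, z;
   int id;
} Particle;

Particle p;
MPI_Status mpistat;
if( SelfTID == 0 ) {
   /* ... fill in p ... */
   MPI_Send( &p, sizeof(Particle), MPI_CHAR, 1, 77, MPI_COMM_WORLD );
} else if( SelfTID == 1 ) {
   MPI_Recv( &p, sizeof(Particle), MPI_CHAR, 0, 77, MPI_COMM_WORLD, &mpistat );
}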
MPI Message Tags All messages also must be assigned a “Message Tag” A programmer-defined integer value  It is a way to give some “meaning” to the message  E.g. Tag=44 could mean a 10-number, integer vector, while Tag=55 is a 10-number, float vector E.g. Tag=100 could be from the iterative solver while Tag=200 is used for the time-integrator You use it to match messages between the sender and receiver A message with 3 floats which are coordinates ... Tag=99 A message with 3 floats which are method parameters ... Tag=88 Extremely useful in debugging!!  (always use different tag numbers for every Send/Recv combination in your program)
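A small sketch (not from the original slides) of the tag-matching idea above: the same pair of tasks exchanges two logically different 3-float messages, and the tags (99 for coordinates, 88 for method parameters, following the slide's example) keep them from being confused. SelfTID is assumed to be set up as in the earlier examples.

float coords[3], params[3];
MPI_Status mpistat;
if( SelfTID == 0 ) {
   /* ... fill in coords and params ... */
   MPI_Send( coords, 3, MPI_FLOAT, 1, 99, MPI_COMM_WORLD );   /* tag 99 = coordinates */
   MPI_Send( params, 3, MPI_FLOAT, 1, 88, MPI_COMM_WORLD );   /* tag 88 = method parameters */
} else if( SelfTID == 1 ) {
   /* the tags route each message into the right buffer */
   MPI_Recv( coords, 3, MPI_FLOAT, 0, 99, MPI_COMM_WORLD, &mpistat );
   MPI_Recv( params, 3, MPI_FLOAT, 0, 88, MPI_COMM_WORLD, &mpistat );
}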
MPI_Recv Arguments MPI_Recv has many of the same arguments (data-pointer,num-items,data-type) specifies the data “Source” identifier (integer) specifies the remote process But can be MPI_ANY_SOURCE A message tag can be specified Or you can use MPI_ANY_TAG You can force MPI_Recv to wait for an exact match ... or use the MPI_ANY_* options to catch the first available message For higher performance:  take any message and process it accordingly, don’t wait for specific messages MPI_Recv(dataptr,numitems,datatype,src,tag,MPI_COMM_WORLD,&mpistat);
MPI_Recv, cont’d MPI_Recv returns an “MPI_Status” object which allows you to determine: Any errors that may have occurred (stat.MPI_ERROR) What remote machine the message came from (stat.MPI_SOURCE) I.e. if you used MPI_ANY_SOURCE ... you can figure out who it was What tag-number was assigned to the message (stat.MPI_TAG) I.e. if you used MPI_ANY_TAG You can also use “MPI_STATUS_IGNORE” to skip the Status object Obviously, this is a bit dangerous to do
Message “Matching” MPI requires that the (size,datatype,tag,other-task,communicator) match between sender and receiver Except for MPI_ANY_SOURCE and MPI_ANY_TAG Except that receive buffer can be bigger than ‘size’ sent Task-2 sending (100,float,44) to Task-5 While Task-5 waits to receive ... (100,float,44) from Task-1 – NO MATCH (100,int,44) from Task-2 – NO MATCH (100,float,45) from Task-2 – NO MATCH (99,float,44) from Task-2 – NO MATCH (101,float,44) from Task-2 – MATCH!
MPI_Recv with Variable Message Length As long as your receive-buffer is BIGGER than the incoming message, you’ll be ok You can then find out how much was actually received with: Returns the actual count (number) of items received YOU (Programmer) are responsible for allocating big-enough buffers and dealing with the actual size that was received if( SelfTID == 0 ) { n = compute_message_size( ... ); MPI_Send( send_buf, n, MPI_INT, ... ); } else { int recv_buf[1024]; MPI_Recv( recv_buf, 1024, MPI_INT, ... ); } MPI_Get_count( &status, MPI_INT, &count );
MPI Messages are “Non-Overtaking” If a sender sends two messages in succession to the same destination, and both match the same receive, then a Recv operation cannot receive the second message if the first one is still pending. If a receiver posts two receives in succession, and both match the same message, then the second receive operation cannot be satisfied by this message, if the first one is still pending. Quoted from: MPI: A Message-Passing Interface Standard  (c) 1993, 1994, 1995 University of Tennessee, Knoxville, Tennessee.
Error Handling In C, all MPI functions return an integer error code:

char buffer[MPI_MAX_ERROR_STRING];
int err,len;
err = MPI_Send( ... );
if( err != 0 ) {
   MPI_Error_string(err,buffer,&len);
   printf("Error %i [%s]",err,buffer);
}

Even fancier approach (__FILE__ and __LINE__ each have 2 underscores on both sides):

err = MPI_Send( ... );
if( err != 0 ) {
   MPI_Error_string(err,buffer,&len);
   printf("Error %i [%s] at %s:%i",err,buffer,__FILE__,__LINE__);
}
Simple Example, Re-visited

#include "mpi.h"
int main( int argc, char** argv ) {
   int SelfTID, NumTasks, t, data, err;
   MPI_Status mpistat;
   err = MPI_Init( &argc, &argv );
   if( err ) {
      printf("Error=%i in MPI_Init",err);
   }
   MPI_Comm_size( MPI_COMM_WORLD, &NumTasks );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );
   printf("Hello World from %i of %i",SelfTID,NumTasks);
   if( SelfTID == 0 ) {
      for(t=1;t<NumTasks;t++) {
         err = MPI_Send(&data,1,MPI_INT,t,100+t,MPI_COMM_WORLD);
         if( err ) {
            printf("Error=%i in MPI_Send to %i",err,t);
         }
      }
   } else {
      MPI_Recv(&data,1,MPI_INT,MPI_ANY_SOURCE,MPI_ANY_TAG,MPI_COMM_WORLD,&mpistat);
      printf("TID%i: received data=%i from src=%i/tag=%i",SelfTID,data,
         mpistat.MPI_SOURCE,mpistat.MPI_TAG);
   }
   MPI_Finalize();
   return( 0 );
}
Compiling MPI Programs The MPI Standard “encourages” library-writers to provide compiler wrappers to make the build process easier: Usually something like ‘mpicc’ and ‘mpif77’ But some vendors may have their own names This wrapper should include the necessary header-path to compile, and the necessary library(-ies) to link against, even if they are in non-standard locations Compiler/Linker arguments are usually passed through to the underlying compiler/linker -- so you can select ‘-O’ optimization

% mpicc -o mycode.o -c mycode.c
% mpif77 -O3 -funroll -o mysubs.o -c mysubs.f77
% mpicc -o myexe mycode.o mysubs.o
Running MPI Programs The MPI standard also “encourages” library-writers to include a program to start-up a parallel program Usually called ‘mpirun’ But many vendors and queuing systems use something different You explicitly define the number of tasks you want to have available to the parallel job You CAN allocate more tasks than you have machines (or CPUs) Each task is a separate Unix process, so if 2 or more tasks end up on the same machine, they simply time-share that machine’s resources This can make debugging a little difficult since the timing of sends and recv’s will be significantly changed on a time-shared system % mpirun -np 4 myexe
“Blocking” Communication MPI_Send and MPI_Recv are “blocking” Your MPI task (program) waits until the message is sent or received before it proceeds to the next line of code Note that for Sends, this only guarantees that the message has been put onto the network, not that the receiver is ready to process it For large messages, it *MAY* wait for the receiver to be ready This makes buffer management easier When a send is complete, the buffer is ready to be re-used It also means “waiting” ... bad for performance What if you have other work you could be doing? What if you have extra networking hardware that allows “off-loading” of the network transfer?
Blocking Communication, Example

#include "mpi.h"
int main( int argc, char** argv ) {
   int SelfTID, NumTasks, t, data1, data2;
   MPI_Status mpistat;
   MPI_Init( &argc, &argv );
   MPI_Comm_size( MPI_COMM_WORLD, &NumTasks );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );
   printf("Hello World from %i of %i",SelfTID,NumTasks);
   if( SelfTID == 0 ) {
      MPI_Send(&data1,1,MPI_INT,1,55,MPI_COMM_WORLD);
      MPI_Send(&data2,1,MPI_INT,1,66,MPI_COMM_WORLD);
   } else if( SelfTID == 1 ) {
      MPI_Recv(&data2,1,MPI_INT,0,66,MPI_COMM_WORLD,&mpistat);
      MPI_Recv(&data1,1,MPI_INT,0,55,MPI_COMM_WORLD,&mpistat);
      printf("TID%i: received data=%i %i",SelfTID,data1,data2);
   }
   MPI_Finalize();
   return( 0 );
}

Any problems?
Blocking Communication, Example, cont’d TID#0 TID#1 if( SelfTID == 0 ) {    MPI_Send( tag=55 )    /* TID-0 blocks */ } else {    MPI_Recv( tag=66 )    /* TID-1 blocks */ Messages do not match! So both tasks could block ... and the program never completes
MPI_Isend ... “Fire and Forget” Messages MPI_Isend can be used to send data without the program waiting for the send to complete This can make the program logic easier  If you need to send 10 messages, you don’t want to coordinate them with the timing of the recv’s on remote machines But it provides less synchronization, so the programmer has to ensure that any other needed sync is provided some other way Be careful with your buffer management -- make sure you don’t re-use the buffer until you know the data has actually been sent/recv’d! MPI_Isend returns a “request-ID” that allows you to check if it has actually completed yet
MPI_Isend Arguments MPI_Isend( send_buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD, &mpireq ); (data-pointer,count,data-type) ... as with MPI_Send destination, tag, communicator ... as with MPI_Send MPI_Request data-type “Opaque” data-type, you can’t (shouldn’t) print it or do comparisons on it It is returned by the function, and can be sent to other MPI functions which can then use the request-ID MPI_Wait, MPI_Test
MPI_Isend, cont’d MPI_Request mpireq; MPI_Status mpistat; /* initial data for solver */ for(i=0;i<n;i++) {    send_buf[i] = 0.0f; } MPI_Isend( send_buf, n, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &mpireq ); . . . /* do some “real” work */ . . . /* make sure last send completed */ MPI_Wait( &mpireq, &mpistat ); /* post the new data for the next iteration */ MPI_Isend( send_buf, n, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &mpireq );
MPI_Irecv ... “Ready to Receive” MPI_Irecv is the non-blocking version of MPI_Recv (much like Send and Isend) Your program is indicating that it wants to eventually receive certain data, but it doesn’t need it right now MPI_Irecv returns a “request-ID” that can be used to check if the data has been received yet NOTE: the recv buffer is always “visible” to your program, but the data is not valid until the recv actually completes Very useful if many data items are to be received and you want to process the first one that arrives If posted early enough, an Irecv could help the MPI library avoid some extra memory-copies (i.e. better performance)
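A minimal sketch (not from the original slides) of the "post early, wait late" pattern: the receive is posted before the work, and MPI_Wait is called only when the data is actually needed. The buffer size and the use of the MPI_ANY_* wildcards here are illustrative choices.

float recv_buf[1024];
MPI_Request mpireq;
MPI_Status mpistat;

/* post the receive early -- the MPI library can now place the data
   directly into recv_buf as it arrives */
MPI_Irecv( recv_buf, 1024, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG,
           MPI_COMM_WORLD, &mpireq );

/* ... do other work that does not touch recv_buf ... */

/* block only when the data is really needed;
   recv_buf is not valid until MPI_Wait returns */
MPI_Wait( &mpireq, &mpistat );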
MPI_Wait and MPI_Test To check on the progress of MPI_Isend or MPI_Irecv, you use the wait or test functions: MPI_Wait will block until the specified send/recv request-ID is complete Returns an MPI_Status object just like MPI_Recv would MPI_Test will return Yes/No if the specified send/recv request-ID is complete This is a non-blocking test of the non-blocking communication Useful if you have lots of messages flying around and want to process the first one that arrives Note that MPI_Isend can be paired with MPI_Recv, and MPI_Send with MPI_Irecv Allows different degrees of synchronization as needed by different parts of your program
MPI_Wait and MPI_Test, cont’d err = MPI_Wait( &mpireq, &mpistat ); /* message ‘mpireq’ is now complete */ int flag; err = MPI_Test( &mpireq, &flag, &mpistat ); if( flag ) {    /* message ‘mpireq’ is now complete */ } else {    /* message is still pending */    /* do other work? */ }
MPI_Request_free The pool of Isend/Irecv “request-IDs” is large, but limited ... it is possible to run out of them, so you need to either Wait/Test for them or Free them Example use-case: You know (based on your program’s logic) that when a certain Recv is complete, all of your previous Isends must have also completed So there is no need to Wait for the Isend request-IDs Make sure to use MPI_Request_free to free up the known-complete request-IDs No status information is returned err = MPI_Request_free( &mpireq );
MPI_Isend and MPI_Recv

/* launch ALL the messages at once */
for(t=0;t<NumTasks;t++) {
   if( t != SelfTID ) {
      MPI_Isend( &my_psum,1,MPI_FLOAT, t, tag, MPI_COMM_WORLD, &mpireq );
      MPI_Request_free( &mpireq );
   }
}
. . . /* do other work */ . . .
tsum = 0.0;
for(t=0;t<NumTasks;t++) {
   if( t != SelfTID ) {
      MPI_Recv( &recv_psum,1,MPI_FLOAT, t, tag, MPI_COMM_WORLD, &mpistat );
      tsum += recv_psum;
   }
}
MPI_Waitany, MPI_Waitall, MPI_Testany, MPI_Testall There are “any” and “all” versions of MPI_Wait and MPI_Test for groups of Isend/Irecv request-IDs (stored as a contiguous array) MPI_Waitany - waits until any one request-ID is complete, returns the array-index of the one that completed MPI_Testany - tests if any of the request-ID are complete, returns the array-index of the first one that it finds is complete MPI_Waitall - waits for all of the request-IDs to be complete MPI_Testall - returns Yes/No if all of the request-IDs are complete This is “heavy-duty” synchronization ... if 9 out of 10 request-IDs are complete, it will still wait for the 10th one to finish Be careful of your performance if you use the *all functions
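A hedged sketch (not from the original slides) of MPI_Waitany: post one MPI_Irecv per remote task, then process each message in whatever order the messages actually arrive. NumTasks, SelfTID and tag are assumed to be set up as in the earlier examples, and the fixed array size of 64 is an illustrative limit.

MPI_Request reqs[64];          /* assumes at most 64 remote tasks */
MPI_Status mpistat;
float bufs[64];
int t, nreq = 0, idx;

for(t=0;t<NumTasks;t++) {
   if( t != SelfTID ) {
      MPI_Irecv( &bufs[nreq], 1, MPI_FLOAT, t, tag, MPI_COMM_WORLD, &reqs[nreq] );
      nreq++;
   }
}
for(t=0;t<nreq;t++) {
   /* returns as soon as ANY one of the pending receives completes */
   MPI_Waitany( nreq, reqs, &idx, &mpistat );
   /* ... process bufs[idx]; mpistat.MPI_SOURCE says who sent it ... */
}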
Why use MPI_Isend/Irecv? Sometimes you just need a certain piece of data before the task can continue on to do other work … and so you just pay the cost (waiting) that comes with MPI_Recv But often we can find other things to do instead of just WAITING for data-item-X to be sent/received Maybe we can process data-item-Y instead  Maybe we can do another inner iteration on a linear solver Maybe we can restructure our code so that we separate our sends and receives as much as possible Then we can maximize the network’s ability to ship the data around and spend less time waiting
Latency Hiding One of the keys to avoiding Amdahl’s Law is to hide as much of the network latency (time to transfer messages) as possible Compare the following … Assume communication is to the TID-1/TID+1 neighbors MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; for(i=1;i<(ar_size-2);i++) {    x[i] = alpha*(x[i-1]+x[i+1]); } x[ar_size-1] += alpha*NewRight; MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); for(i=1;i<(ar_size-2);i++) {    x[i] = alpha*(x[i-1]+x[i+1]); } MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; x[ar_size-1] += alpha*NewRight; What happens to the network in each program?
Without Latency Hiding MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } x[ar_size-1] += alpha*NewRight;
With Latency Hiding MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; x[ar_size-1] += alpha*NewRight;
Synchronization The basic MPI_Recv call is “Blocking” … the calling task waits until the message is received So there is already some synchronization of tasks happening Another major synchronization construct is a “Barrier” Every task must “check-in” to the barrier before any task can leave Thus every task will be at the same point in the code when all of them leave the barrier together Every task will complete ‘DoWork_X()’ before any one of them starts ‘DoWork_Y()’ DoWork_X(…); MPI_Barrier( MPI_COMM_WORLD ); DoWork_Y(…);
Barriers and Performance Barriers imply that, at some point in time, 9 out of 10 tasks are waiting -- sitting idle, doing nothing -- until the last task catches up Clearly, “WAIT” == bad for performance! You never NEED a Barrier for correctness But it is a quick and easy way to make lots of parallel bugs go away! Larger barriers == more waiting If Task#1 and Task#2 need to synchronize, then try to make only those two synchronize together (not all 10 tasks) MPI_Recv() implies a certain level of synchronization, maybe that is all you really need? ... or MPI_Ssend on the sending side
Synchronization with MPI_Ssend MPI also has a “synchronous” Send call This forces the calling task to WAIT for the receiver to post a receive operation (MPI_Recv or MPI_Irecv) Only after the Recv is posted will the Sender be released to the next line of code The sender thus has some idea of where, in the program, the receiver currently is ... it must be at a matching recv With some not-so-clever use of message-tags, you can have the MPI_Ssend operation provide fairly precise synchronization There is also an MPI_Issend MPI_Ssend( buf, count, datatype, dest, tag, comm )
MPI_Ssend Example TID#0 Be careful with your message tags or use of wildcards ... Here, the Ssend can match either of two locations on the receiver MPI_Ssend( ... tag=44 ... ) TID#1 MPI_Recv( ... tag=44 ... ) . . . MPI_Recv( ... tag=ANY ... )
MPI_Ssend Example TID#0 MPI_Ssend( ... tag=44 ... ) TID#1 Now, the sender knows exactly where the receiver is MPI_Recv( ... tag=44 ... ) . . . MPI_Recv( ... tag=45 ... )
Collective Operations MPI specifies a whole range of group-collective operations (MPI_Op’s) with MPI_Reduce MPI_MIN, MPI_MAX, MPI_PROD, MPI_SUM MPI_LAND (logical), MPI_BAND (bitwise) Min-location, Max-location E.g. all tasks provide a partial-sum, then MPI_Reduce computes the global sum, and places the result on 1 task (called the ‘root’) There are mechanisms to provide your own reduction operation MPI_Reduce can take an array argument and do, e.g., MPI_SUM across each entry from each task
MPI_Reduce Example

int send_buf[4], recv_buf[4];
/* each task fills in send_buf */
for(i=0;i<4;i++) {
   send_buf[i] = ...;
}
err = MPI_Reduce( send_buf, recv_buf, 4,
         MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD );
if( SelfTID == 0 ) {
   /* recv_buf has the max values */
   /* e.g. recv_buf[0] has the max of
      send_buf[0] on all tasks */
} else {
   /* for all other tasks,
      recv_buf has invalid data */
}

send_buf:  TID#0: 1 2 3 4   TID#1: 3 8 8 1   TID#2: 3 4 9 1   TID#3: 7 5 4 3
recv_buf (element-wise max, valid on TID#0 only):  7 8 9 4
MPI_Allreduce MPI_Reduce sends the final data to the receive buffer of the “root” task only MPI_Allreduce sends the final data to ALL of the receive buffers (on all of the tasks) Can be useful for computing and distributing the global sum of a calculation (MPI_Allreduce with MPI_SUM) ... computing and detecting convergence of a solver (MPI_MAX) ... computing and detecting “eureka” events (MPI_MINLOC) err = MPI_Allreduce( send_buf, recv_buf, count,  MPI_INT, MPI_MIN, MPI_COMM_WORLD ); MPI_MINLOC/MAXLOC return the rank (task) that has the min/max value
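A hedged sketch (not from the original slides) of the convergence-detection idea above: each task computes a residual over its own chunk of the data, and MPI_Allreduce with MPI_MAX gives every task the same global maximum, so every task makes the same stop/continue decision. The local residual computation and the tolerance value are hypothetical.

float local_resid, global_resid;
/* ... each task computes local_resid over its own portion of the data ... */
MPI_Allreduce( &local_resid, &global_resid, 1, MPI_FLOAT,
               MPI_MAX, MPI_COMM_WORLD );
if( global_resid < 1.0e-6f ) {
   /* every task sees the same global_resid, so every task
      takes the same branch -- no extra synchronization needed */
   /* ... converged, stop iterating ... */
}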
Reduction Operations and Performance Note that all reductions are barriers Just as with MPI_Barrier, using too many of them could impact performance greatly You can always “do-it-yourself” with Sends/Recvs And by splitting out the Sends/Recvs, you may be able to find ways to hide some of the latency of the network But, in theory, the MPI implementation will know how to optimize the reduction for the given network/machine architecture MPI_Op_create( MPI_User_function *function, int commute, MPI_Op *op ) typedef void MPI_User_function( void *invec, void *inoutvec, int *len, MPI_Datatype *datatype)
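A hedged sketch (not from the original slides) of a user-defined reduction: it simply re-implements an element-wise float maximum to show the MPI_Op_create mechanics; a real use would substitute your own combining function. send_buf and recv_buf are assumed to be float arrays of length 4, in the spirit of the earlier MPI_Reduce example.

/* user function: combine invec into inoutvec, element by element */
void my_fmax( void *invec, void *inoutvec, int *len, MPI_Datatype *datatype ) {
   float *in    = (float*)invec;
   float *inout = (float*)inoutvec;
   int i;
   for(i=0;i<(*len);i++) {
      if( in[i] > inout[i] ) inout[i] = in[i];
   }
}

/* ... later, inside the program: ... */
MPI_Op my_op;
MPI_Op_create( my_fmax, 1, &my_op );   /* 1 = the operation is commutative */
MPI_Reduce( send_buf, recv_buf, 4, MPI_FLOAT, my_op, 0, MPI_COMM_WORLD );
MPI_Op_free( &my_op );                 /* release the op when no longer needed */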
Collective Operations and Round-Off In general, be careful with collective operations on floats and doubles (especially DIY collective operations) as they can produce different results on different processor configs Running with 10 tasks splits the data into certain chunks which may accumulate round-off errors differently than the same data split onto 20 tasks Your results will NOT be bit-wise identical between a 10-task run and a 20-task run If you did it yourself, be sure to use a fixed ordering for your final {sum/min/max/etc.} ... floating point math is not associative! (a+b) != (b+a)
Round-off, cont’d E.g. assume 3 digits of accuracy in the following sums across an array of size 100:
1 Task:   1.00 + .001 + .001 + ... + .001 ===== 1.00 !!  (each .001 is rounded away against the 1.00 running total)
2 Tasks:  TID#0: 1.00 + .001 + ... + .001 ===== 1.00    TID#1: .001 + .001 + .001 + ... + .001 ===== .050    combined: 1.00 + .050 ===== 1.05
What about on 4 tasks?
Broadcast Messages MPI_Bcast( buffer, count, datatype, root, MPI_COMM_WORLD ) Some vendors or networks may allow for efficient broadcast messages to all tasks in a Communicator At least more efficient than N send/recv’s Fan-out/tree of messages MPI_Bcast is blocking So performance can be an issue You can always DIY ... possibly with latency hiding
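A minimal sketch (not from the original slides) of MPI_Bcast: rank 0 (the “root”) fills in a parameter array and every task, including the root, ends up with the same values. The params array and its size are hypothetical.

float params[8];
if( SelfTID == 0 ) {
   /* ... rank 0 fills in params, e.g. by reading an input file ... */
}
/* every task calls MPI_Bcast; when it returns, all tasks hold rank 0's values */
MPI_Bcast( params, 8, MPI_FLOAT, 0, MPI_COMM_WORLD );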
MPI Communicators A Communicator is a group of MPI-tasks The Communicators must match between Send/Recv ... there is no MPI_ANY_COMM flag  So messages CANNOT cross Communicators This is useful for parallel libraries -- your library can create its own Communicator and isolate its messages from other user-code

MPI_Comm NewComm;
MPI_Group NewGroup, OrigGroup;
int n = 4;
int array[4] = { 0, 1, 4, 5 };   /* TIDs to include */
MPI_Comm_group( MPI_COMM_WORLD, &OrigGroup );
MPI_Group_incl( OrigGroup, n, array, &NewGroup );
MPI_Comm_create( MPI_COMM_WORLD, NewGroup, &NewComm );
MPI Communicators, cont’d Once a new Communicator is created, messages within that Communicator will have their source/destination IDs renumbered In the previous example, World-Task-4 becomes Task-2 in NewComm, so it could be addressed by either:

MPI_Send(&data,1,MPI_INT,4,100,MPI_COMM_WORLD);
MPI_Send(&data,1,MPI_INT,2,100,NewComm);

Using NewComm, you CANNOT send to World-Task-2 For library writers, there is a short-cut to duplicate an entire Communicator (to isolate your messages from others):

MPI_Comm_dup( MPI_COMM_WORLD, &NewComm );

Then these messages will not conflict:

MPI_Send(&data,1,MPI_INT,4,100,MPI_COMM_WORLD);
MPI_Send(&data,1,MPI_INT,4,100,NewComm);
Parallel Debugging Parallel programming has all the usual bugs that sequential programming has ... Bad code logic Improper arguments Array overruns But it also has several new kinds of bugs that can creep in Deadlock Race Conditions Limited MPI resources (MPI_Request)
Deadlock A “Deadlock” condition occurs when all Tasks are stopped at synchronization points and none of them are able to make progress Sometimes this can be something simple:   Task-1 is waiting for a message from Task-2 Task-2 is waiting for a message from Task-1 Sometimes it can be more complex or cyclic: Task-1 is waiting for Task-2 Task-2 is waiting for Task-3 Task-3 is waiting for ...
Deadlock, cont’d Symptoms: Program hangs  (!) Debugging deadlock is often straightforward … unless there is also a race condition involved Simple approach:  put print statement in front of all Recv/Barrier/Wait functions
Starvation Somewhat related to deadlock is “starvation” One Task needs a “lock signal” or “token” that is never sent by another Task TID#2 is about to send the token to TID#3, but realizes it needs it again ... so TID#3 has to continue waiting Symptom is usually extremely bad performance Simple way to detect starvation is often to put print statements in to watch the “token” move Solution often involves rethinking why the “token” is needed Multiple tokens?  Better “fairness” algorithm?
Race Conditions A “Race” is a situation where multiple Tasks are making progress toward some shared or common goal where timing is critical Generally, this means you’ve ASSUMED something is synchronized, but you haven’t FORCED it to be synchronized E.g. all Tasks are trying to find a “hit” in a database When a hit is detected, the Task sends a message Normally, hits are rare events and so sending the message is “not a big deal” But if two hits occur simultaneously, then we have a race condition Task-1 and Task-2 both send a message ... Task-3 receives Task-1’s message first (and thinks Task-1 is the hit-owner) ... Task-4 receives Task-2’s message first (and thinks Task-2 is the hit-owner)
Race Conditions, cont’d Some symptoms: Running the code with 4 Tasks works fine, running with 5 causes erroneous output Running the code with 4 Tasks works fine, running it with 4 Tasks again produces different results Sometimes the code works fine, sometimes it crashes after 4 hours, sometimes it crashes after 10 hours, … Program sometimes runs fine, sometimes hits an infinite loop Program output is sometimes jumbled (the usual line-3 appears before line-1) Any kind of indeterminacy in the running of the code, or seemingly random changes to its output, could indicate a race condition
Race Conditions, cont’d Race Conditions can be EXTREMELY hard to debug Often, race conditions don’t cause crashes or even logic-errors E.g. a race condition leads two Tasks to both think they are in charge of data-item-X … nothing crashes, they just keep writing and overwriting X (maybe even doing duplicate computations) Often, race conditions don’t cause crashes at the time they actually occur … the crash occurs much later in the execution and for a totally unrelated reason E.g. a race condition leads one Task to think that the solver converged but the other Task thinks we need another iteration … crash occurs because one Task tried to compute the global residual Sometimes race conditions can lead to deadlock Again, this often shows up as seemingly random deadlock when the same code is run on the same input
Parallel Debugging, cont’d Parallel debugging is still in its infancy Lots of people still use print statements to debug their code! To make matters worse, print statements alter the timing of your program ... so they can create new race conditions or alter the one you were trying to debug “Professional” debuggers do exist that can simplify parallel programming They try to work like a sequential debugger ... when you print a variable, it prints on all tasks; when you pause the program, it pauses on all remote machines; etc. Totalview is the biggest commercial version; Allinea is a recent contender Sun’s Prism may still be out there http://totalviewtech.com  ... free student edition!
Parallel Tracing It is often useful to “see” what is going on in your parallel program When did a given message get sent?  recv’d? How long did a given barrier take? You can “Trace” the MPI calls with a number of different tools Take time-stamps at each MPI call, then visualize the messages as arrows between different tasks Technically, all the “work” of MPI is done in PMPI_* functions, the MPI_* functions are wrappers So you can build your own tracing system We will soon have the Intel Trace Visualizer installed on the DSCR
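A hedged sketch (not from the original slides) of the PMPI idea above: because MPI_Send is implemented on top of PMPI_Send, you can provide your own MPI_Send that time-stamps the call and then forwards to PMPI_Send, and link it ahead of the MPI library. The exact prototype must match your mpi.h (newer MPI versions declare the buffer as const void*), and the TRACE output format is just an illustration.

#include <stdio.h>
#include "mpi.h"

/* our own MPI_Send: log the call, then hand off to the real implementation */
int MPI_Send( void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm ) {
   double t0 = MPI_Wtime();
   int err = PMPI_Send( buf, count, datatype, dest, tag, comm );
   double t1 = MPI_Wtime();
   printf( "TRACE: MPI_Send to=%i tag=%i count=%i time=%g\n",
           dest, tag, count, t1-t0 );
   return( err );
}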
Parallel Tracing, cont’d
Parallel Programming Caveats You already know to check the return values of ALL function calls You really Really REALLY should check the return values of all MPI function calls Don’t do parallel file-write operations!!  (at least not on NFS) It won’t always work as you expect Even flock() won’t help you in all cases Push all output through a single Task  (performance?) Or use MPI-IO or MPI-2 Random numbers may not be so random, especially if you are running 1000 Tasks which need 1M random numbers each Use the SPRNG package
One more time ... Latency Hiding Latency hiding is really the key to 200x, 500x, 1000x speed-ups Amdahl’s Law places a pretty harsh upper-bound on performance If your code is 1% serial ... 100x speed-up is the best you can do ... EVER! Network overheads quickly add up, especially on COTS networks Try to think ahead on where data is needed and where that data was first computed Send the data as soon as possible
Better-than-Expected Speed-Up While Amdahl’s Law always applies, scientists are often interested in HUGE problems Larger problems often have smaller %-serial At some point, it’s not an issue of “how much faster than 1 CPU?” but “how much faster than my last cluster?” This is a comment about practicality... your professor/boss may have other opinions! Cache memory works in your favor It is possible to get super-linear speed-up due to cache effects Cache memory can be 10-100x faster than main memory E.g. 1 CPU has 16MB cache, 100 CPUs have ~1.6GB cache There’s no such thing as a single CPU with 1.6GB of cache, so you can’t make a fair comparison to your 100-CPU results
Where to go from here? http://wiki.duke.edu/display/scsc/  . . . scsc at duke.edu Get on our mailing lists
Contenu connexe

Tendances

Tendances (20)

The Message Passing Interface (MPI) in Layman's Terms
The Message Passing Interface (MPI) in Layman's TermsThe Message Passing Interface (MPI) in Layman's Terms
The Message Passing Interface (MPI) in Layman's Terms
 
MPI Tutorial
MPI TutorialMPI Tutorial
MPI Tutorial
 
What is [Open] MPI?
What is [Open] MPI?What is [Open] MPI?
What is [Open] MPI?
 
Mpi Java
Mpi JavaMpi Java
Mpi Java
 
Parallel programming using MPI
Parallel programming using MPIParallel programming using MPI
Parallel programming using MPI
 
MPI History
MPI HistoryMPI History
MPI History
 
My ppt hpc u4
My ppt hpc u4My ppt hpc u4
My ppt hpc u4
 
Point-to-Point Communicationsin MPI
Point-to-Point Communicationsin MPIPoint-to-Point Communicationsin MPI
Point-to-Point Communicationsin MPI
 
Message Passing Interface (MPI)-A means of machine communication
Message Passing Interface (MPI)-A means of machine communicationMessage Passing Interface (MPI)-A means of machine communication
Message Passing Interface (MPI)-A means of machine communication
 
Chapter 6 pc
Chapter 6 pcChapter 6 pc
Chapter 6 pc
 
Using MPI
Using MPIUsing MPI
Using MPI
 
Porting an MPI application to hybrid MPI+OpenMP with Reveal tool on Shaheen II
Porting an MPI application to hybrid MPI+OpenMP with Reveal tool on Shaheen IIPorting an MPI application to hybrid MPI+OpenMP with Reveal tool on Shaheen II
Porting an MPI application to hybrid MPI+OpenMP with Reveal tool on Shaheen II
 
Performance measures
Performance measuresPerformance measures
Performance measures
 
Parallel computing(2)
Parallel computing(2)Parallel computing(2)
Parallel computing(2)
 
Collective Communications in MPI
 Collective Communications in MPI Collective Communications in MPI
Collective Communications in MPI
 
Mpi Test Suite Multi Threaded
Mpi Test Suite Multi ThreadedMpi Test Suite Multi Threaded
Mpi Test Suite Multi Threaded
 
Nug2004 yhe
Nug2004 yheNug2004 yhe
Nug2004 yhe
 
2018867974 sulaim (2)
2018867974 sulaim (2)2018867974 sulaim (2)
2018867974 sulaim (2)
 
Python introduction
Python introductionPython introduction
Python introduction
 
GNU Compiler Collection - August 2005
GNU Compiler Collection - August 2005GNU Compiler Collection - August 2005
GNU Compiler Collection - August 2005
 

Similaire à Intro to MPI

cs556-2nd-tutorial.pdf
cs556-2nd-tutorial.pdfcs556-2nd-tutorial.pdf
cs556-2nd-tutorial.pdfssuserada6a9
 
Parallel and Distributed Computing Chapter 10
Parallel and Distributed Computing Chapter 10Parallel and Distributed Computing Chapter 10
Parallel and Distributed Computing Chapter 10AbdullahMunir32
 
High Performance Computing using MPI
High Performance Computing using MPIHigh Performance Computing using MPI
High Performance Computing using MPIAnkit Mahato
 
Introduction to MPI
Introduction to MPIIntroduction to MPI
Introduction to MPIyaman dua
 
Rgk cluster computing project
Rgk cluster computing projectRgk cluster computing project
Rgk cluster computing projectOstopD
 
Message passing Programing and MPI.
Message passing Programing and MPI.Message passing Programing and MPI.
Message passing Programing and MPI.Munawar Hussain
 
Programming using MPI and OpenMP
Programming using MPI and OpenMPProgramming using MPI and OpenMP
Programming using MPI and OpenMPDivya Tiwari
 
Tutorial on Parallel Computing and Message Passing Model - C3
Tutorial on Parallel Computing and Message Passing Model - C3Tutorial on Parallel Computing and Message Passing Model - C3
Tutorial on Parallel Computing and Message Passing Model - C3Marcirio Chaves
 
parellel computing
parellel computingparellel computing
parellel computingkatakdound
 
Tutorial on Parallel Computing and Message Passing Model - C2
Tutorial on Parallel Computing and Message Passing Model - C2Tutorial on Parallel Computing and Message Passing Model - C2
Tutorial on Parallel Computing and Message Passing Model - C2Marcirio Chaves
 
Lisandro dalcin-mpi4py
Lisandro dalcin-mpi4pyLisandro dalcin-mpi4py
Lisandro dalcin-mpi4pyA Jorge Garcia
 

Similaire à Intro to MPI (20)

cs556-2nd-tutorial.pdf
cs556-2nd-tutorial.pdfcs556-2nd-tutorial.pdf
cs556-2nd-tutorial.pdf
 
Introduction to MPI
Introduction to MPIIntroduction to MPI
Introduction to MPI
 
Parallel and Distributed Computing Chapter 10
Parallel and Distributed Computing Chapter 10Parallel and Distributed Computing Chapter 10
Parallel and Distributed Computing Chapter 10
 
High Performance Computing using MPI
High Performance Computing using MPIHigh Performance Computing using MPI
High Performance Computing using MPI
 
25-MPI-OpenMP.pptx
25-MPI-OpenMP.pptx25-MPI-OpenMP.pptx
25-MPI-OpenMP.pptx
 
MPI
MPIMPI
MPI
 
Introduction to MPI
Introduction to MPIIntroduction to MPI
Introduction to MPI
 
Rgk cluster computing project
Rgk cluster computing projectRgk cluster computing project
Rgk cluster computing project
 
More mpi4py
More mpi4pyMore mpi4py
More mpi4py
 
Message passing Programing and MPI.
Message passing Programing and MPI.Message passing Programing and MPI.
Message passing Programing and MPI.
 
Programming using MPI and OpenMP
Programming using MPI and OpenMPProgramming using MPI and OpenMP
Programming using MPI and OpenMP
 
Open MPI 2
Open MPI 2Open MPI 2
Open MPI 2
 
mpi4py.pdf
mpi4py.pdfmpi4py.pdf
mpi4py.pdf
 
Tutorial on Parallel Computing and Message Passing Model - C3
Tutorial on Parallel Computing and Message Passing Model - C3Tutorial on Parallel Computing and Message Passing Model - C3
Tutorial on Parallel Computing and Message Passing Model - C3
 
Lecture9
Lecture9Lecture9
Lecture9
 
parellel computing
parellel computingparellel computing
parellel computing
 
Tutorial on Parallel Computing and Message Passing Model - C2
Tutorial on Parallel Computing and Message Passing Model - C2Tutorial on Parallel Computing and Message Passing Model - C2
Tutorial on Parallel Computing and Message Passing Model - C2
 
Lisandro dalcin-mpi4py
Lisandro dalcin-mpi4pyLisandro dalcin-mpi4py
Lisandro dalcin-mpi4py
 
Introduction to Procedural Programming in C++
Introduction to Procedural Programming in C++Introduction to Procedural Programming in C++
Introduction to Procedural Programming in C++
 
Distributed System
Distributed System Distributed System
Distributed System
 

Intro to MPI

  • 1. Introduction toMPI Scalable Computing Support Center Duke University http://wiki.duke.edu/display/SCSC scsc at duke.edu John Pormann, Ph.D. jbp1 at duke.edu
  • 2. Outline Overview of Parallel Computing Architectures “Shared Memory” versus “Distributed Memory” Intro to MPI Parallel programming with only 6 function calls Better Performance Async communication (Latency Hiding)
  • 3. MPI MPI is short for the Message Passing Interface, an industry standard library for sending and receiving arbitrary messages into a user’s program There are two major versions: v.1.2 -- Basic sends and receives v.2.x -- “One-sided Communication” and process-spawning It is fairly low-level You must explicitly send data-items from one machine to another You must explicitly receive data-items from other machines Very easy to forget a message and have the program lock-up The library can be called from C/C++, Fortran (77/90/95), Java
  • 4. MPI in 6 Function Calls MPI_Init(…) Start-up the messaging system (and remote processes) MPI_Comm_size(…) How many parallel tasks are there? MPI_Comm_rank(…) Which task number am I? MPI_Send(…) MPI_Recv(…) MPI_Finalize()
  • 5. “Hello World” #include “mpi.h” int main( intargc, char** argv ) { intSelfTID, NumTasks, t, data; MPI_Statusmpistat; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &NumTasks ); MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID ); printf(“Hello World from %i of %i”,SelfTID,NumTasks); if( SelfTID == 0 ) { for(t=1;t<NumTasks;t++) { MPI_Send(&data,1,MPI_INT,t,55,MPI_COMM_WORLD); } } else { MPI_Recv(&data,1,MPI_INT,0,55,MPI_COMM_WORLD,&mpistat); printf(“TID%i: received data=%i”,SelfTID,data); } MPI_Finalize(); return( 0 ); }
  • 6. What happens in “HelloWorld”? TID#0 MPI_Init(…)
  • 7. What happens in “HelloWorld”? TID#0 TID#1 MPI_Init(…) MPI_Init(…) TID#3 TID#2 MPI_Init(…) MPI_Init(…)
  • 8. What happens in “HelloWorld”? TID#0 TID#1 MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) TID#3 TID#2 MPI_Comm_size(…) MPI_Comm_rank(…) printf(…) MPI_Comm_size(…) MPI_Comm_rank(…) printf(…)
  • 9. What happens in “HelloWorld”? TID#0 TID#1 if( SelfTID == 0 ) { for(…) MPI_Send(…) } else { MPI_Recv(…) TID#3 TID#2 } else { MPI_Recv(…) } else { MPI_Recv(…)
  • 10. MPI Initialization and Shutdown MPI_Init( &argc, &argv) The odd argument list is done to allow MPI to pass the comand-line arguments to all remote processes Technically, only the first MPI-task gets (argc,argv) from the operating system ... it has to send them out to the other tasks Don’t use (argc,argv) until AFTER MPI_Init is called MPI_Finalize() Closes network connections and shuts down any other processes or threads that MPI may have created to assist with communication
  • 11. Where am I? Who am I? MPI_Comm_size returns the size of the “Communicator” (group of machines/tasks) that the current task is involved with MPI_COMM_WORLD means “All Machines/All Tasks” You can create your own Communicators if needed, but this can complicate matters as Task-5 in MPI_COMM_WORLD may be Task-0 inNewComm MPI_Comm_rank returns the “rank” of the current task inside the given Communicator ... integer ID from 0 to SIZE-1 These two integers are the only information you get to separate out what task should do what work within your program MPI can do pipeline parallelism, domain decomposition, client-server, interacting peers, etc., it all depends on how you program it
  • 12. MPI_Send Arguments The MPI_Send function requires a lot of arguments to identify what data is being sent and where it is going (data-pointer,num-items,data-type) specifies the data MPI defines a number of data-types: You can define your own data-types as well (user-defined structures) You can often kludge it with (ptr,sizeof(struct),MPI_CHAR) “Destination” identifier (integer) specifies the remote process Technically, the destination ID is relative to a “Communicator” (group of machines), but MPI_COMM_WORLD is “all machines” MPI_Send(dataptr,numitems,datatype,dest,tag,MPI_COMM_WORLD); MPI_CHARMPI_SHORTMPI_INTMPI_LONGMPI_LONG_LONG MPI_FLOATMPI_DOUBLEMPI_COMPLEXMPI_DOUBLE_COMPLEX MPI_UNSIGNED_INTMPI_UNSIGNED_LONG
  • 13. MPI Message Tags All messages also must be assigned a “Message Tag” A programmer-defined integer value It is a way to give some “meaning” to the message E.g. Tag=44 could mean a 10-number, integer vector, while Tag=55 is a 10-number, float vector E.g. Tag=100 could be from the iterative solver while Tag=200 is used for the time-integrator You use it to match messages between the sender and receiver A message with 3 floats which are coordinates ... Tag=99 A message with 3 floats which are method parameters ... Tag=88 Extremely useful in debugging!! (always use different tag numbers for every Send/Recv combination in your program)
  • 14. MPI_Recv Arguments MPI_Recv has many of the same arguments (data-pointer,num-items,data-type) specifies the data “Source” identifier (integer) specifies the remote process But can be MPI_ANY_SOURCE A message tag can be specified Or you can use MPI_ANY_TAG You can force MPI_Recv to wait for an exact match ... or use the MPI_ANY_* options to catch the first available message For higher performance: take any message and process it accordingly, don’t wait for specific messages MPI_Recv(dataptr,numitems,datatype,src,tag,MPI_COMM_WORLD,&mpistat);
  • 15. MPI_Recv, cont’d MPI_Recvreturns an “MPI_Status” object which allows you to determine: Any errors that may have occurred (stat.MPI_ERROR) What remote machine the message came from (stat.MPI_SOURCE) I.e. if you used MPI_ANY_SOURCE ... you can figure out who it was What tag-number was assigned to the message (stat.MPI_TAG) I.e. if you used MPI_ANY_TAG You can also use “MPI_STATUS_IGNORE” to skip the Status object Obviously, this is a bit dangerous to do
  • 16. Message “Matching” MPI requires that the (size,datatype,tag,other-task,communicator) match between sender and receiver Except for MPI_ANY_SOURCE and MPI_ANY_TAG Except that receive buffer can be bigger than ‘size’ sent Task-2 sending (100,float,44) to Task-5 While Task-5 waits to receive ... (100,float,44) from Task-1 – NO MATCH (100,int,44) from Task-2 – NO MATCH (100,float,45) from Task-2 – NO MATCH (99,float,44) from Task-2 – NO MATCH (101,float,44) from Task-2 – MATCH!
  • 17. MPI_Recv with Variable Message Length As long as your receive-buffer is BIGGER than the incoming message, you’ll be ok You can then find out how much was actually received with: Returns the actual count (number) of items received YOU (Programmer) are responsible for allocating big-enough buffers and dealing with the actual size that was received if( SelfTID == 0 ) { n = compute_message_size( ... ); MPI_Send( send_buf, n, MPI_INT, ... ); } else { intrecv_buf[1024]; MPI_Recv( recv_buf, 1024, MPI_INT, ... ); } MPI_Get_count( &status, MPI_INT, &count );
  • 18. MPI Messages are “Non-Overtaking” If a sender sends two messages in succession to the same destination, and both match the same receive, then a Recv operation cannot receive the second message if the first one is still pending. If a receiver posts two receives in succession, and both match the same message, then the second receive operation cannot be satisfied by this message, if the first one is still pending. Quoted from: MPI: A Message-Passing Interface Standard (c) 1993, 1994, 1995 University of Tennessee, Knoxville, Tennessee.
  • 19. Error Handling In C, all MPI functions return an integer error code Even fancier approach: char buffer[MPI_MAX_ERROR_STRING]; int err,len; err = MPI_Send( ... ); if( err != 0 ) { MPI_Error_string(err,buffer,&len); printf(“Error %i [%s]”,i,buffer); } 2 underscores err = MPI_Send( ... ); if( err != 0 ) { MPI_Error_string(err,buffer,&len); printf(“Error %i [%s] at %s:%i”,i,buffer,__FILE__,__LINE__); }
  • 20. Simple Example, Re-visited #include “mpi.h” int main( intargc, char** argv ) { intSelfTID, NumTasks, t, data, err; MPI_Statusmpistat; err = MPI_Init( &argc, &argv ); if( err ) { printf(“Error=%i in MPI_Init”,err); } MPI_Comm_size( MPI_COMM_WORLD, &NumTasks ); MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID ); printf(“Hello World from %i of %i”,SelfTID,NumTasks); if( SelfTID == 0 ) { for(t=1;t<NumTasks;t++) { err = MPI_Send(&data,1,MPI_INT,t,100+t,MPI_COMM_WORLD); if( err ) { printf(“Error=%i in MPI_Send to %i”,err,t); } } } else { MPI_Recv(&data,1,MPI_INT,MPI_ANY_SOURCE,MPI_ANY_TAG,MPI_COMM_WORLD,&mpistat); printf(“TID%i: received data=%i from src=%i/tag=%i”,SelfTID,data, mpistat.MPI_SOURCE,mpistat.MPI_TAG); } MPI_Finalize(); return( 0 ); }
  • 21. % mpicc -omycode.o -cmycode.c % mpif77 -O3 -funroll -omysubs.o -cmysubs.f77 % mpicc -omyexemycode.omysubs.o Compiling MPI Programs The MPI Standard “encourages” library-writers to provide compiler wrappers to make the build process easier: Usually something like ‘mpicc’ and ‘mpif77’ But some vendors may have their own names This wrapper should include the necessary header-path to compile, and the necessary library(-ies) to link against, even if they are in non-standard locations Compiler/Linker arguments are usually passed through to the underlying compiler/linker -- so you can select ‘-O’ optimization
  • 22. Running MPI Programs The MPI standard also “encourages” library-writers to include a program to start-up a parallel program Usually called ‘mpirun’ But many vendors and queuing systems use something different You explicitly define the number of tasks you want to have available to the parallel job You CAN allocate more tasks than you have machines (or CPUs) Each task is a separate Unix process, so if 2 or more tasks end up on the same machine, they simply time-share that machine’s resources This can make debugging a little difficult since the timing of sends and recv’s will be significantly changed on a time-shared system % mpirun -np 4 myexe
  • 23. “Blocking” Communication MPI_Send and MPI_Recv are “blocking” Your MPI task (program) waits until the message is sent or received before it proceeds to the next line of code Note that for Sends, this only guarantees that the message has been put onto the network, not that the receiver is ready to process it For large messages, it *MAY* wait for the receiver to be ready This makes buffer management easier When a send is complete, the buffer is ready to be re-used It also means “waiting” ... bad for performance What if you have other work you could be doing? What if you have extra networking hardware that allows “off-loading” of the network transfer?
  • 24. Blocking Communication, Example #include “mpi.h” int main( int argc, char** argv ) { int SelfTID, NumTasks, t, data1, data2; MPI_Status mpistat; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &NumTasks ); MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID ); printf(“Hello World from %i of %i”,SelfTID,NumTasks); if( SelfTID == 0 ) { MPI_Send(&data1,1,MPI_INT,1,55,MPI_COMM_WORLD); MPI_Send(&data2,1,MPI_INT,1,66,MPI_COMM_WORLD); } else if( SelfTID == 1 ) { MPI_Recv(&data2,1,MPI_INT,0,66,MPI_COMM_WORLD,&mpistat); MPI_Recv(&data1,1,MPI_INT,0,55,MPI_COMM_WORLD,&mpistat); printf(“TID%i: received data=%i %i”,SelfTID,data1,data2); } MPI_Finalize(); return( 0 ); } Any problems?
  • 25. Blocking Communication, Example, cont’d TID#0 TID#1 if( SelfTID == 0 ) { MPI_Send( tag=55 ) /* TID-0 blocks */ } else { MPI_Recv( tag=66 ) /* TID-1 blocks */ Messages do not match! So both tasks could block ... and the program never completes
  • 26. MPI_Isend ... “Fire and Forget” Messages MPI_Isend can be used to send data without the program waiting for the send to complete This can make the program logic easier If you need to send 10 messages, you don’t want to coordinate them with the timing of the recv’s on remote machines But it provides less synchronization, so the programmer has to ensure that any other needed sync is provided some other way Be careful with your buffer management -- make sure you don’t re-use the buffer until you know the data has actually been sent/recv’d! MPI_Isend returns a “request-ID” that allows you to check if it has actually completed yet
  • 27. MPI_Isend Arguments MPI_Isend( send_buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD, &mpireq ); (data-pointer,count,data-type) ... as with MPI_Send destination, tag, communicator ... as with MPI_Send MPI_Request data-type “Opaque” data-type, you can’t (shouldn’t) print it or do comparisons on it It is returned by the function, and can be sent to other MPI functions which can then use the request-ID MPI_Wait, MPI_Test
  • 28. MPI_Isend, cont’d MPI_Request mpireq; MPI_Status mpistat; /* initial data for solver */ for(i=0;i<n;i++) { send_buf[i] = 0.0f; } MPI_Isend( send_buf, n, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &mpireq ); . . . /* do some “real” work */ . . . /* make sure last send completed */ MPI_Wait( &mpireq, &mpistat ); /* post the new data for the next iteration */ MPI_Isend( send_buf, n, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &mpireq );
  • 29. MPI_Irecv ... “Ready to Receive” MPI_Irecv is the non-blocking version of MPI_Recv (much like Send and Isend) Your program is indicating that it wants to eventually receive certain data, but it doesn’t need it right now MPI_Irecv returns a “request-ID” that can be used to check if the data has been received yet NOTE: the recv buffer is always “visible” to your program, but the data is not valid until the recv actually completes Very useful if many data items are to be received and you want to process the first one that arrives If posted early enough, an Irecv could help the MPI library avoid some extra memory-copies (i.e. better performance)
  • 30. MPI_Wait and MPI_Test To check on the progress of MPI_Isend or MPI_Irecv, you use the wait or test functions: MPI_Wait will block until the specified send/recv request-ID is complete Returns an MPI_Status object just like MPI_Recv would MPI_Test will return Yes/No if the specified send/recv request-ID is complete This is a non-blocking test of the non-blocking communication Useful if you have lots of messages flying around and want to process the first one that arrives Note that MPI_Isend can be paired with MPI_Recv, and MPI_Send with MPI_Irecv Allows different degrees of synchronization as needed by different parts of your program
  • 31. MPI_Wait and MPI_Test, cont’d err = MPI_Wait( &mpireq, &mpistat ); /* message ‘mpireq’ is now complete */ int flag; err = MPI_Test( &mpireq, &flag, &mpistat ); if( flag ) { /* message ‘mpireq’ is now complete */ } else { /* message is still pending */ /* do other work? */ }
  • 32. MPI_Request_free The pool of Isend/Irecv “request-IDs” is large, but limited ... it is possible to run out of them, so you need to either Wait/Test for them or Free them Example use-case: You know (based on your program’s logic) that when a certain Recv is complete, all of your previous Isends must have also completed So there is no need to Wait for the Isend request-IDs Make sure to use MPI_Request_free to free up the known-complete request-IDs No status information is returned err = MPI_Request_free( &mpireq );
  • 33. MPI_Isend and MPI_Recv /* launch ALL the messages at once */ for(t=0;t<NumTasks;t++) { if( t != SelfTID ) { MPI_Isend( &my_psum,1,MPI_FLOAT, t, tag, MPI_COMM_WORLD, &mpireq ); MPI_Request_free( &mpireq ); } } . . . /* do other work */ . . . tsum= 0.0; for(t=0;t<NumTasks;t++) { if( t != SelfTID ) { MPI_Recv( recv_psum,1,MPI_FLOAT, t, tag, MPI_COMM_WORLD, &mpistat ); tsum += recv_psum; } }
  • 34. MPI_Waitany, MPI_Waitall, MPI_Testany, MPI_Testall There are “any” and “all” versions of MPI_Wait and MPI_Test for groups of Isend/Irecv request-IDs (stored as a contiguous array) MPI_Waitany - waits until any one request-ID is complete, returns the array-index of the one that completed MPI_Testany - tests if any of the request-ID are complete, returns the array-index of the first one that it finds is complete MPI_Waitall - waits for all of the request-IDs to be complete MPI_Testall - returns Yes/No if all of the request-IDs are complete This is “heavy-duty” synchronization ... if 9 out of 10 request-IDs are complete, it will still wait for the 10th one to finish Be careful of your performance if you use the *all functions
  • 35. Why use MPI_Isend/Irecv? Sometimes you just need a certain piece of data before the task can continue on to do other work … and so you just pay the cost (waiting) that comes with MPI_Recv But often we can find other things to do instead of just WAITING for data-item-X to be sent/received Maybe we can process data-item-Y instead Maybe we can do another inner iteration on a linear solver Maybe we can restructure our code so that we separate our sends and receives as much as possible Then we can maximize the network’s ability to ship the data around and spend less time waiting
  • 36. Latency Hiding One of the keys to avoiding Amdahl’s Law is to hide as much of the network latency (time to transfer messages) as possible Compare the following … Assume communication is to the TID-1/TID+1 neighbors MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } x[ar_size-1] += alpha*NewRight; MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; x[ar_size-1] += alpha*NewRight; What happens to the network in each program?
  • 37. Without Latency Hiding MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } x[ar_size-1] += alpha*NewRight;
  • 38. With Latency Hiding MPI_Isend( ... TID-1 ... ); MPI_Isend( ... TID+1 ... ); for(i=1;i<(ar_size-2);i++) { x[i] = alpha*(x[i-1]+x[i+1]); } MPI_Recv( ... TID-1 ... ); MPI_Recv( ... TID+1 ... ); x[0] += alpha*NewLeft; x[ar_size-1] += alpha*NewRight;
  • 39. Synchronization The basic MPI_Recv call is “Blocking” … the calling task waits until the message is received So there is already some synchronization of tasks happening Another major synchronization construct is a “Barrier” Every task must “check-in” to the barrier before any task can leave Thus every task will be at the same point in the code when all of them leave the barrier together Every task will complete ‘DoWork_X()’ before any one of them starts ‘DoWork_Y()’ DoWork_X(…); MPI_Barrier( MPI_COMM_WORLD ); DoWork_Y(…);
  • 40. Barriers and Performance Barriers imply that, at some point in time, 9 out of 10 tasks are waiting -- sitting idle, doing nothing -- until the last task catches up Clearly, “WAIT” == bad for performance! You never NEED a Barrier for correctness But it is a quick and easy way to make lots of parallel bugs go away! Larger barriers == more waiting If Task#1 and Task#2 need to synchronize, then try to make only those two synchronize together (not all 10 tasks) MPI_Recv() implies a certain level of synchronization, maybe that is all you really need? ... or MPI_Ssend on the sending side
  • 41. Synchronization with MPI_Ssend MPI also has a “synchronous” Send call This forces the calling task to WAIT for the receiver to post a receive operation (MPI_Recv or MPI_Irecv) Only after the Recv is posted will the Sender be released to the next line of code The sender thus has some idea of where, in the program, the receiver currently is ... it must be at a matching recv With some not-so-clever use of message-tags, you can have the MPI_Ssend operation provide fairly precise synchronization There is also an MPI_Issend MPI_Ssend( buf, count, datatype, dest, tag, comm )
  • 42. MPI_Ssend Example TID#0: MPI_Ssend( ... tag=44 ... ) TID#1: MPI_Recv( ... tag=44 ... ) . . . MPI_Recv( ... tag=ANY ... ) Be careful with your message tags or use of wildcards ... here, the Ssend can match either of the two Recv locations on the receiver
  • 43. MPI_Ssend Example TID#0: MPI_Ssend( ... tag=44 ... ) TID#1: MPI_Recv( ... tag=44 ... ) . . . MPI_Recv( ... tag=45 ... ) Now, the sender knows exactly where the receiver is
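A minimal sketch of using MPI_Ssend as a two-task synchronization point, assuming a dedicated tag (44, following the slides) that appears at exactly one place in the receiver's code; the "phase" comments and token variable are illustrative:

#include <mpi.h>
#include <stdio.h>

/* Sketch: TID#0 uses MPI_Ssend with a dedicated tag (44) so it only proceeds
 * once TID#1 has reached the matching MPI_Recv.  Assumes at least 2 tasks;
 * any other tasks simply fall through. */
int main( int argc, char** argv ) {
   int SelfTID, token = 0;
   MPI_Status mpistat;

   MPI_Init( &argc, &argv );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );

   if( SelfTID == 0 ) {
      /* ... phase-1 work ... */
      MPI_Ssend( &token, 1, MPI_INT, 1, 44, MPI_COMM_WORLD );
      /* returns only after TID#1 has posted the tag-44 receive,
         so both tasks are now past their phase-1 work */
      printf( "TID0: TID1 has reached the tag-44 recv\n" );
   } else if( SelfTID == 1 ) {
      /* ... phase-1 work ... */
      MPI_Recv( &token, 1, MPI_INT, 0, 44, MPI_COMM_WORLD, &mpistat );
   }

   MPI_Finalize();
   return( 0 );
}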
  • 44. Collective Operations MPI specifies a whole range of group-collective operations (MPI_Op’s) with MPI_Reduce MPI_MIN, MPI_MAX, MPI_PROD, MPI_SUM MPI_LAND (logical), MPI_BAND (bitwise) Min-location, Max-location (MPI_MINLOC, MPI_MAXLOC) E.g. all tasks provide a partial-sum, then MPI_Reduce computes the global sum, and places the result on 1 task (called the ‘root’) There are mechanisms to provide your own reduction operation MPI_Reduce can take an array argument and do, e.g., MPI_SUM across each entry from each task
  • 45. MPI_Reduce Example int send_buf[4], recv_buf[4]; /* each task fills in send_buf */ for(i=0;i<4;i++) { send_buf[i] = ...; } err = MPI_Reduce( send_buf, recv_buf, 4, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD ); if( SelfTID == 0 ) { /* recv_buf has the max values */ /* e.g. recv_buf[0] has the max of send_buf[0] on all tasks */ } else { /* for all other tasks, recv_buf has invalid data */ } send_buf: TID#0 = { 1 2 3 4 }, TID#1 = { 3 8 8 1 }, TID#2 = { 3 4 9 1 }, TID#3 = { 7 5 4 3 } recv_buf on TID#0 = { 7 8 9 4 } (the element-wise max)
  • 46. MPI_Allreduce MPI_Reduce sends the final data to the receive buffer of the “root” task only MPI_Allreduce sends the final data to ALL of the receive buffers (on all of the tasks) Can be useful for computing and distributing the global sum of a calculation (MPI_Allreduce with MPI_SUM) ... computing and detecting convergence of a solver (MPI_MAX) ... computing and detecting “eureka” events (MPI_MINLOC) err = MPI_Allreduce( send_buf, recv_buf, count, MPI_INT, MPI_MIN, MPI_COMM_WORLD ); MPI_MINLOC/MAXLOC return the rank (task) that has the min/max value
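A sketch of the convergence-detection use mentioned above, with a placeholder residual calculation standing in for a real solver iteration (the tolerance, iteration count, and residual formula are all illustrative assumptions):

#include <mpi.h>

/* Sketch: each task computes its local residual (placeholder here),
 * MPI_Allreduce with MPI_MAX puts the global maximum on every task, and
 * everyone makes the same stop/continue decision -- no extra broadcast needed. */
int main( int argc, char** argv ) {
   int SelfTID, iter;
   double local_resid, global_resid, tol = 1.0e-6;

   MPI_Init( &argc, &argv );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );

   for( iter=0; iter<1000; iter++ ) {
      /* ... one solver iteration on the local piece of the domain ... */
      local_resid = 1.0/( (double)(iter+1)*(SelfTID+1) );   /* placeholder value */

      MPI_Allreduce( &local_resid, &global_resid, 1, MPI_DOUBLE,
                     MPI_MAX, MPI_COMM_WORLD );
      if( global_resid < tol ) {
         break;   /* every task sees the same global_resid, so all break together */
      }
   }

   MPI_Finalize();
   return( 0 );
}

Since every task receives the same global_resid, they all take the same branch, so no separate broadcast of the stop/continue decision is needed.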
  • 47. Reduction Operations and Performance Note that all reductions are barriers Just as with MPI_Barrier, using too many of them could impact performance greatly You can always “do-it-yourself” with Sends/Recvs And by splitting out the Sends/Recvs, you may be able to find ways to hide some of the latency of the network But, in theory, the MPI implementation will know how to optimize the reduction for the given network/machine architecture MPI_Op_create( MPI_User_function *function, int commute, MPI_Op *op ) typedef void MPI_User_function( void *invec, void *inoutvec, int *len, MPI_Datatype *datatype )
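A sketch of a user-defined reduction built with MPI_Op_create; the operation itself (element-wise maximum of absolute values) is a made-up example, but the callback follows the MPI_User_function typedef shown above:

#include <mpi.h>
#include <math.h>
#include <stdio.h>

/* User-defined reduction: keep the element-wise maximum of the ABSOLUTE values.
 * The operation is illustrative; the MPI_Op_create interface is standard. */
void absmax_fn( void *invec, void *inoutvec, int *len, MPI_Datatype *datatype ) {
   double *in    = (double*)invec;
   double *inout = (double*)inoutvec;
   int i;
   for( i=0; i<(*len); i++ ) {
      if( fabs(in[i]) > fabs(inout[i]) ) {
         inout[i] = in[i];
      }
   }
}

int main( int argc, char** argv ) {
   int SelfTID;
   double sbuf[4], rbuf[4];
   MPI_Op absmax_op;

   MPI_Init( &argc, &argv );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );

   /* each task fills in its send buffer somehow */
   sbuf[0] = -1.0*SelfTID;  sbuf[1] = SelfTID;  sbuf[2] = 0.5;  sbuf[3] = -2.0;

   /* commute=1 : the operation is commutative, MPI may reorder the combine */
   MPI_Op_create( absmax_fn, 1, &absmax_op );
   MPI_Reduce( sbuf, rbuf, 4, MPI_DOUBLE, absmax_op, 0, MPI_COMM_WORLD );
   if( SelfTID == 0 ) {
      printf( "abs-max: %g %g %g %g\n", rbuf[0], rbuf[1], rbuf[2], rbuf[3] );
   }
   MPI_Op_free( &absmax_op );

   MPI_Finalize();
   return( 0 );
}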
  • 48. Collective Operations and Round-Off In general, be careful with collective operations on floats and doubles (especially DIY collective operations) as they can produce different results on different processor configs Running with 10 tasks splits the data into certain chunks which may accumulate round-off errors differently than the same data split onto 20 tasks Your results will NOT be bit-wise identical between a 10-task run and a 20-task run If you did it yourself, be sure to use a fixed ordering for your final {sum/min/max/etc.} ... floating point math is not associative! (a+b)+c != a+(b+c)
  • 49. Round-off, cont’d E.g. assume 3 digits of accuracy in the following sums across an array of size 100, where the data is { 1.00, .001, .001, ..., .001 }: 1 Task: 1.00 + .001 + ... + .001 = 1.00 !! (each .001 rounds away against 1.00) 2 Tasks: TID#0: 1.00 + .001 + ... + .001 = 1.00 TID#1: .001 + .001 + ... + .001 = .050 combined: 1.00 + .050 = 1.05 What about on 4 tasks?
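A tiny single-CPU demonstration of the same effect (no MPI required), using 32-bit floats instead of "3 digits of accuracy"; the data values are illustrative and the exact printed numbers depend on the compiler/hardware:

#include <stdio.h>

/* Summing the same 100 numbers in two different orders with 32-bit floats:
 * the one-chunk sum loses every tiny term against the large first element,
 * while the two-chunk version accumulates the tiny terms separately first. */
int main( void ) {
   float a[100], sum1 = 0.0f, p0 = 0.0f, p1 = 0.0f, sum2;
   int i;

   a[0] = 1.0f;
   for( i=1; i<100; i++ ) { a[i] = 1.0e-8f; }

   /* "1 task": accumulate left to right */
   for( i=0; i<100; i++ ) { sum1 += a[i]; }

   /* "2 tasks": each half summed separately, partial sums combined at the end */
   for( i=0;  i<50;  i++ ) { p0 += a[i]; }
   for( i=50; i<100; i++ ) { p1 += a[i]; }
   sum2 = p0 + p1;

   printf( "one-chunk sum = %.9g, two-chunk sum = %.9g\n", sum1, sum2 );
   return( 0 );
}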
  • 50. Broadcast Messages MPI_Bcast( buffer, count, datatype, root, MPI_COMM_WORLD ) Some vendors or networks may allow for efficient broadcast messages to all tasks in a Communicator At least more efficient than N send/recv’s Fan-out/tree of messages MPI_Bcast is blocking So performance can be an issue You can always DIY ... possibly with latency hiding
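A minimal sketch of MPI_Bcast distributing a run-time parameter from the root task to everyone; the parameter name and value are illustrative:

#include <mpi.h>
#include <stdio.h>

/* Sketch: the root task (0) decides a run-time parameter and MPI_Bcast
 * delivers it to every task in the Communicator. */
int main( int argc, char** argv ) {
   int SelfTID, nsteps = 0;

   MPI_Init( &argc, &argv );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );

   if( SelfTID == 0 ) {
      nsteps = 500;   /* e.g. read from a config file or the command line */
   }
   /* everyone (root included) calls MPI_Bcast; afterwards all tasks have nsteps */
   MPI_Bcast( &nsteps, 1, MPI_INT, 0, MPI_COMM_WORLD );
   printf( "TID%i: nsteps = %i\n", SelfTID, nsteps );

   MPI_Finalize();
   return( 0 );
}

Note that every task makes the same MPI_Bcast call; the "root" argument tells MPI whose buffer is the source rather than splitting the code into send and receive halves.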
  • 51. MPI Communicators A Communicator is a group of MPI-tasks The Communicators must match between Send/Recv ... there is no MPI_ANY_COMM flag So messages CANNOT cross Communicators This is useful for parallel libraries -- your library can create its own Communicator and isolate its messages from other user-code MPI_Comm NewComm; MPI_Group NewGroup, OrigGroup; int n = 4; int array[4] = { 0, 1, 4, 5 }; /* TIDs to include */ MPI_Comm_group( MPI_COMM_WORLD, &OrigGroup ); MPI_Group_incl( OrigGroup, n, array, &NewGroup ); MPI_Comm_create( MPI_COMM_WORLD, NewGroup, &NewComm );
  • 52. MPI Communicators, cont’d Once a new Communicator is created, messages within that Communicator will have their source/destination IDs renumbered In the previous example, World-Task-4 could be addressed by either of: MPI_Send(&data,1,MPI_INT,4,100,MPI_COMM_WORLD); MPI_Send(&data,1,MPI_INT,2,100,NewComm); /* World-Task-4 is rank 2 in NewComm */ Using NewComm, you CANNOT send to World-Task-2 (it is not in the group) For library writers, there is a short-cut to duplicate an entire Communicator (to isolate your messages from others): MPI_Comm_dup( MPI_COMM_WORLD, &NewComm ); Then these messages will not conflict: MPI_Send(&data,1,MPI_INT,4,100,MPI_COMM_WORLD); MPI_Send(&data,1,MPI_INT,4,100,NewComm);
  • 53. Parallel Debugging Parallel programming has all the usual bugs that sequential programming has ... Bad code logic Improper arguments Array overruns But it also has several new kinds of bugs that can creep in Deadlock Race Conditions Limited MPI resources (MPI_Request)
  • 54. Deadlock A “Deadlock” condition occurs when all Tasks are stopped at synchronization points and none of them are able to make progress Sometimes this can be something simple: Task-1 is waiting for a message from Task-2 Task-2 is waiting for a message from Task-1 Sometimes it can be more complex or cyclic: Task-1 is waiting for Task-2 Task-2 is waiting for Task-3 Task-3 is waiting for ...
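An illustrative sketch of the simple two-task case (not from the slides): both tasks issue a synchronous send before their receive, so each is stuck waiting for the other's receive to be posted:

#include <mpi.h>
#include <stdio.h>

/* Classic two-task deadlock: TID#0 and TID#1 each issue a synchronous send
 * before their receive, so each is waiting for a receive the other never posts.
 * (A plain MPI_Send may or may not hang, depending on internal buffering;
 * MPI_Ssend makes the hang deterministic.) */
int main( int argc, char** argv ) {
   int SelfTID, other, sendval, recvval;
   MPI_Status mpistat;

   MPI_Init( &argc, &argv );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );

   if( SelfTID < 2 ) {
      other   = 1 - SelfTID;
      sendval = SelfTID;
      MPI_Ssend( &sendval, 1, MPI_INT, other, 7, MPI_COMM_WORLD );   /* both hang here */
      MPI_Recv( &recvval, 1, MPI_INT, other, 7, MPI_COMM_WORLD, &mpistat );
      printf( "TID%i: never reached\n", SelfTID );
   }

   MPI_Finalize();
   return( 0 );
}

Reordering the calls on one of the two tasks (or using MPI_Sendrecv or non-blocking calls) removes the deadlock.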
  • 55. Deadlock, cont’d Symptoms: Program hangs (!) Debugging deadlock is often straightforward … unless there is also a race condition involved Simple approach: put print statement in front of all Recv/Barrier/Wait functions
  • 56. Starvation Somewhat related to deadlock is “starvation” One Task needs a “lock signal” or “token” that is never sent by another Task TID#2 is about to send the token to TID#3, but realizes it needs it again ... so TID#3 has to continue waiting Symptom is usually extremely bad performance Simple way to detect starvation is often to put print statements in to watch the “token” move Solution often involves rethinking why the “token” is needed Multiple tokens? Better “fairness” algorithm?
  • 57. Race Conditions A “Race” is a situation where multiple Tasks are making progress toward some shared or common goal where timing is critical Generally, this means you’ve ASSUMED something is synchronized, but you haven’t FORCED it to be synchronized E.g. all Tasks are trying to find a “hit” in a database When a hit is detected, the Task sends a message Normally, hits are rare events and so sending the message is “not a big deal” But if two hits occur simultaneously, then we have a race condition Task-1 and Task-2 both send a message ... Task-3 receives Task-1’s message first (and thinks Task-1 is the hit-owner) ... Task-4 receives Task-2’s message first (and thinks Task-2 is the hit-owner)
  • 58. Race Conditions, cont’d Some symptoms: Running the code with 4 Tasks works fine, running with 5 causes erroneous output Running the code with 4 Tasks works fine, running it with 4 Tasks again produces different results Sometimes the code works fine, sometimes it crashes after 4 hours, sometimes it crashes after 10 hours, … Program sometimes runs fine, sometimes hits an infinite loop Program output is sometimes jumbled (the usual line-3 appears before line-1) Any kind of indeterminacy in the running of the code, or seemingly random changes to its output, could indicate a race condition
  • 59. Race Conditions, cont’d Race Conditions can be EXTREMELY hard to debug Often, race conditions don’t cause crashes or even logic-errors E.g. a race condition leads two Tasks to both think they are in charge of data-item-X … nothing crashes, they just keep writing and overwriting X (maybe even doing duplicate computations) Often, race conditions don’t cause crashes at the time they actually occur … the crash occurs much later in the execution and for a totally unrelated reason E.g. a race condition leads one Task to think that the solver converged but the other Task thinks we need another iteration … crash occurs because one Task tried to compute the global residual Sometimes race conditions can lead to deadlock Again, this often shows up as seemingly random deadlock when the same code is run on the same input
  • 60. Parallel Debugging, cont’d Parallel debugging is still in its infancy Lots of people still use print statements to debug their code! To make matters worse, print statements alter the timing of your program ... so they can create new race conditions or alter the one you were trying to debug “Professional” debuggers do exist that can simplify parallel debugging They try to work like a sequential debugger ... when you print a variable, it prints on all tasks; when you pause the program, it pauses on all remote machines; etc. TotalView is the biggest commercial version; Allinea is a recent contender; Sun’s Prism may still be out there http://totalviewtech.com ... free student edition!
  • 61. Parallel Tracing It is often useful to “see” what is going on in your parallel program When did a given message get sent? recv’d? How long did a given barrier take? You can “Trace” the MPI calls with a number of different tools Take time-stamps at each MPI call, then visualize the messages as arrows between different tasks Technically, all the “work” of MPI is done in the PMPI_* functions; the MPI_* functions are just wrappers So you can build your own tracing system We will soon have the Intel Trace Visualizer installed on the DSCR
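As a rough sketch of that PMPI mechanism (how the wrapper gets linked in front of the MPI library is implementation-specific, and older MPI-1/2 headers declare the send buffer without const, so treat this as illustrative only), a home-grown tracing wrapper might look like:

#include <mpi.h>
#include <stdio.h>

/* Our own MPI_Send intercepts the call, records time-stamps, and forwards all
 * arguments to PMPI_Send (the "real" send).  The prototype matches current
 * MPI-3 headers; adjust to "void *buf" for older MPI versions. */
int MPI_Send( const void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm ) {
   int rank, rc;
   double t0, t1;

   MPI_Comm_rank( MPI_COMM_WORLD, &rank );
   t0 = MPI_Wtime();
   rc = PMPI_Send( buf, count, datatype, dest, tag, comm );
   t1 = MPI_Wtime();

   fprintf( stderr, "TRACE: TID%i sent %i item(s) to %i (tag %i), %g sec\n",
            rank, count, dest, tag, t1 - t0 );
   return rc;
}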
  • 63. Parallel Programming Caveats You already know to check the return values of ALL function calls You really Really REALLY should check the return values of all MPI function calls Don’t do parallel file-write operations!! (at least not on NFS) It won’t always work as you expect Even flock() won’t help you in all cases Push all output through a single Task (performance?) Or use MPI-IO or MPI-2 Random numbers may not be so random, especially if you are running 1000 Tasks which need 1M random numbers each Use the SPRNG package
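A sketch of what checking MPI return values can look like; note that the default error handler aborts the job on any error, so it is switched to MPI_ERRORS_RETURN first (the invalid destination rank is only there to provoke an error):

#include <mpi.h>
#include <stdio.h>

/* Sketch: switch the Communicator's error handler from the default
 * MPI_ERRORS_ARE_FATAL to MPI_ERRORS_RETURN, then check the return value of
 * an MPI call and turn any error code into readable text. */
int main( int argc, char** argv ) {
   int SelfTID, NumTasks, data = 42, err, msglen;
   char msg[MPI_MAX_ERROR_STRING];

   MPI_Init( &argc, &argv );
   MPI_Comm_size( MPI_COMM_WORLD, &NumTasks );
   MPI_Comm_rank( MPI_COMM_WORLD, &SelfTID );
   MPI_Comm_set_errhandler( MPI_COMM_WORLD, MPI_ERRORS_RETURN );

   err = MPI_Send( &data, 1, MPI_INT, NumTasks /* invalid rank */, 99, MPI_COMM_WORLD );
   if( err != MPI_SUCCESS ) {
      MPI_Error_string( err, msg, &msglen );
      fprintf( stderr, "TID%i: MPI_Send failed: %s\n", SelfTID, msg );
   }

   MPI_Finalize();
   return( 0 );
}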
  • 64. One more time ... Latency Hiding Latency hiding is really the key to 200x, 500x, 1000x speed-ups Amdahl’s Law places a pretty harsh upper-bound on performance If your code is 1% serial ... 100x speed-up is the best you can do ... EVER! Network overheads quickly add up, especially on COTS networks Try to think ahead on where data is needed and where that data was first computed Send the data as soon as possible
  • 65. Better-than-Expected Speed-Up While Amdahl’s Law always applies, scientists are often interested in HUGE problems Larger problems often have smaller %-serial At some point, it’s not an issue of “how much faster than 1 CPU?” but “how much faster than my last cluster?” This is a comment about practicality... your professor/boss may have other opinions! Cache memory works in your favor It is possible to get super-linear speed-up due to cache effects Cache memory can be 10-100x faster than main memory E.g. 1 CPU has 16MB cache, 100 CPUs have ~1.6GB cache There’s no such thing as a single CPU with 1.6GB of cache, so you can’t make a fair comparison to your 100-CPU results
  • 68. Performance tuning methods (VTune, parallel tracing)