How to use MPI


Message Passing Interface (MPI) is a standardized and portable message-passing system designed to improve computational performance on a wide variety of parallel computers: a compiled program runs in parallel across many processes rather than serially on a single CPU. This technology has greatly enhanced the capability of large-scale computers.

how to use MPI

step 1

Call a routine to initialize the MPI environment:

MPI_Init(NULL, NULL)
or MPI_Init(&argc, &argv)
(which form to use depends on the arguments passed to your main function; a complete sketch combining steps 1-3 is given after step 3)

step 2

Determine the number of processes associated with a communicator:

MPI_Comm_size(comm, &size)
(If N processes in total participate in the program run, then size is N. N is set when the program is launched, e.g. with mpirun -np N, not at compile time.)

Determine the rank of the current process within a communicator; ranks range from 0 to N-1:

MPI_Comm_rank(comm, &rank)

After step 2, each process knows the communicator size and its own rank, and the code in the main function runs in parallel on every process.

step 3

After the MPI portion of the program completes, terminate the MPI execution environment:

MPI_Finalize()
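
Putting steps 1-3 together, a minimal "hello world" sketch might look like this (MPI_COMM_WORLD is the predefined communicator containing all processes; the printed message is only an illustration):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int size, rank;

    /* step 1: initialize the MPI environment */
    MPI_Init(&argc, &argv);

    /* step 2: query the process count and this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* from here on, all "size" processes execute this code in parallel */
    printf("Hello from process %d of %d\n", rank, size);

    /* step 3: terminate the MPI environment */
    MPI_Finalize();
    return 0;
}

Compiled with an MPI wrapper such as mpicc and launched with, e.g., mpirun -np 4 ./hello, each of the 4 processes prints its own rank.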

useful functions

The following functions provide the main means of data communication between processes (which typically run on different CPUs).

MPI_Send(void *buf, int count, MPI_Datatype dType, int dest, int tag, MPI_Comm comm) 
MPI_Recv(void *buf, int count, MPI_Datatype dType, int source, int tag, MPI_Comm comm, MPI_Status *status) 
MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
MPI_Barrier(MPI_Comm comm)
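
As an illustration of point-to-point communication with MPI_Send and MPI_Recv, here is a minimal sketch, assuming the program is launched with at least two processes (the payload value 42 and tag 0 are arbitrary choices):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, number;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        number = 42;
        /* send one int to process 1 with message tag 0 */
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive one int from process 0 with matching tag 0 */
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d from process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}

Note that MPI_Recv is a blocking call: it does not return until a message with a matching source and tag has arrived.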

The link at the bottom gives a detailed introduction to these functions as well as simple examples.
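
In addition to the linked material, here is a sketch combining the three collective calls above: process 0 broadcasts a value to every process, MPI_Reduce sums each process's rank onto process 0, and MPI_Barrier synchronizes all processes (the broadcast value 100 is an arbitrary choice):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 100;  /* arbitrary value chosen on the root process */

    /* every process receives the root's value */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* sum the ranks of all processes onto process 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* wait until every process has reached this point */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0)
        printf("value = %d, sum of ranks = %d\n", value, sum);

    MPI_Finalize();
    return 0;
}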

appendix

A useful introduction to MPI is available here