Using MPI to parallelize a function
I want to use MPI to parallelize a function that is called multiple times in my code. What I wanted to know is: if I call MPI_Init
inside the function, will the processes be spawned every time the function is called, or will the spawning take place only once? Is there a known design pattern for doing this in a systematic way?
The MPI_Init()
call just initialises the MPI environment; it doesn't do any parallelisation itself. The parallelism comes from how you write the program.
In a parallel "Hello, World", for example, the printf()
does different things depending on which rank (process) it's running on. The number of processes is determined by how you execute the program (e.g. it is set via the -n parameter to mpiexec or mpirun):
#include <stdio.h>   /* printf, BUFSIZ */
#include <mpi.h>

int main(int argc, char *argv[]) {
    char name[BUFSIZ];
    int length = BUFSIZ;
    int rank;
    int numprocesses;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocesses);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(name, &length);
    printf("%s, Rank %d of %d: hello world\n", name, rank, numprocesses);
    MPI_Finalize();
    return 0;
}
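To build and launch it (assuming an MPI implementation such as Open MPI or MPICH is installed, which provides the mpicc compiler wrapper and the mpiexec launcher; the source file name hello.c is just an example):

```shell
# Compile with the MPI compiler wrapper
mpicc hello.c -o hello

# Launch 4 processes; each runs the same executable with a different rank
mpiexec -n 4 ./hello
```

All four processes start at main() together; the spawning happens here, at launch time, not inside the program.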
That's not really the way MPI (or distributed-memory programming) works; you can't just parallelize a single function the way you can with something like OpenMP. With MPI, processes aren't spawned at the time of MPI_Init()
, but at the time the executable is launched (e.g. with mpiexec; this is true even with MPI_Comm_spawn()
). Part of the reason for that is that in distributed-memory computing, launching processes on a potentially large number of shared-nothing nodes is a very expensive operation.
You could cobble something together by having the function you're calling be in a separate executable, but I'm not sure that's what you want.
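The usual pattern, sketched below under those constraints, is to call MPI_Init() exactly once at the start of main() and have the function itself be MPI-aware: every rank calls it, each works on its own slice of the data, and the results are combined with a collective such as MPI_Reduce(). The function name partial_sum and the round-robin data split here are purely illustrative, not part of any MPI API.

```c
#include <stdio.h>
#include <mpi.h>

/* Hypothetical work function: each rank sums its share of 0..n-1.
 * Every rank calls this; note that MPI_Init() is NOT called here. */
double partial_sum(int n, int rank, int numprocesses) {
    double sum = 0.0;
    for (int i = rank; i < n; i += numprocesses)  /* round-robin split of the indices */
        sum += (double)i;
    return sum;
}

int main(int argc, char *argv[]) {
    int rank, numprocesses;

    MPI_Init(&argc, &argv);   /* the processes already exist; this just sets up MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocesses);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The function can be called as many times as needed; no re-spawning occurs. */
    for (int call = 0; call < 3; call++) {
        double local = partial_sum(1000, rank, numprocesses);
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("call %d: total = %.0f\n", call, total);  /* 0+1+...+999 = 499500 */
    }

    MPI_Finalize();
    return 0;
}
```

The key point is that the number of processes is fixed for the lifetime of the run, so the "parallel function" is really every rank executing the same function on different data.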