MPI double rings, max, min and average
We have only been working with MPI for about one day in my computer programming class, and now I have to write a program for it. I am to write a program that organizes processes into two rings.
The first ring begins with process 0: each process sends a message to the next even process, and the last process sends its message back to process 0. For example, 0 --> 2 --> 4 --> 6 --> 8 --> 0 (but it goes all the way up to 32 instead of 8). The second ring is the same, but it begins with process 1 and each process sends to the previous odd process, ending back at 1. For example, 1 --> 9 --> 7 --> 5 --> 3 --> 1.
Also, I am supposed to find the max, min, and average of a very large array of integers. I will have to scatter pieces of the array to each process, have each process compute a partial answer, and then reduce the partial answers back together on process 0 after everyone is done.
Finally, I am to scatter an array of characters across the processes, and each process will have to count how many times each letter appears in its section. That part really makes no sense to me. We have just learned the very basics, so no fancy stuff please! Here's what I have so far; I have commented out some things just to remind myself of stuff, so please ignore those if necessary.
#include <iostream>
#include "mpi.h"
using namespace std;
// compile: mpicxx program.cpp
// run: mpirun -np 4 ./a.out
int main(int argc, char *argv[])
{
    int rank; // unique number associated with each core
    int size; // total number of cores
    char message[80];
    char recvd[80];
    int prev_node, next_node;
    int tag;
    MPI_Status status;
    // start MPI interface
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sprintf(message, "Heeeelp! from %d", rank);
    MPI_Barrier(MPI_COMM_WORLD);
    next_node = (rank + 2) % size;
    prev_node = (size + rank - 2) % size;
    tag = 0;
    if (rank % 2) {
        MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
        MPI_Recv(&recvd, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
    } else {
        MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
        MPI_Recv(&recvd, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
    }
    cout << "* Rank " << rank << ": " << recvd << endl;
    //max
    int large_array[100];
    rank == 0;
    int max = 0;
    MPI_Scatter(&large_array, 1, MPI_INT, large_array, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Reduce(&message, max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
I have a small suggestion about this:
dest = rank + 2;
if (rank == size - 1)
    dest = 0;
source = rank - 2;
if (rank == 0)
    source = size - 1;
I think dest and source, as names, are going to be confusing (as both are destinations of messages, depending on the value of rank). Using the % operator might help improve clarity:
next_node = (rank + 2) % size;
prev_node = (size + rank - 2) % size;
You can select whether to send to and receive from next_node or prev_node based on the value of rank % 2:
if (rank % 2) {
    MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
    MPI_Recv(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
} else {
    MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
    MPI_Recv(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
}
Doing this once or twice is fine, but if you find your code littered with these sorts of switches, it'd make sense to place these ring routines in a function and pass in the next and previous nodes as parameters.
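For example, something along these lines (just a sketch; the name ring_exchange is made up here, and I'm keeping your 80-character buffers):

void ring_exchange(char *message, char *recvd, int dest, int src,
                   int tag, MPI_Comm comm)
{
    MPI_Status status;
    // send our message one step around the ring,
    // then wait for the message coming to us from the other side
    MPI_Send(message, 80, MPI_CHAR, dest, tag, comm);
    MPI_Recv(recvd, 80, MPI_CHAR, src, tag, comm, &status);
}

and the switch in main() collapses to:

if (rank % 2)
    ring_exchange(message, recvd, prev_node, next_node, tag, MPI_COMM_WORLD);
else
    ring_exchange(message, recvd, next_node, prev_node, tag, MPI_COMM_WORLD);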
When it comes time to distribute your arrays of numbers and arrays of characters, keep in mind that n / size will leave a remainder of n % size elements at the end of your array that also need to be handled. (Probably on the master node, just for simplicity.)
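For instance, with the 100-element array from your code, something like this (chunk, leftover, and local_piece are names I made up; nothing fancy, just plain MPI_Scatter):

int chunk = 100 / size;      // elements each rank receives from MPI_Scatter
int leftover = 100 % size;   // elements MPI_Scatter leaves untouched at the end
int local_piece[100];        // big enough to hold any rank's chunk

MPI_Scatter(large_array, chunk, MPI_INT,
            local_piece, chunk, MPI_INT, 0, MPI_COMM_WORLD);

// every rank works on local_piece[0 .. chunk-1];
// rank 0 can also handle large_array[100 - leftover .. 99] itself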
I added a few more output statements (and a place to store the message from the other nodes) and the simple rings program works as expected:
$ mpirun -np 16 ./a.out | sort -k3n
* Rank 0: Heeeelp! from 14
* Rank 1: Heeeelp! from 3
* Rank 2: Heeeelp! from 0
* Rank 3: Heeeelp! from 5
* Rank 4: Heeeelp! from 2
* Rank 5: Heeeelp! from 7
* Rank 6: Heeeelp! from 4
* Rank 7: Heeeelp! from 9
* Rank 8: Heeeelp! from 6
* Rank 9: Heeeelp! from 11
* Rank 10: Heeeelp! from 8
* Rank 11: Heeeelp! from 13
* Rank 12: Heeeelp! from 10
* Rank 13: Heeeelp! from 15
* Rank 14: Heeeelp! from 12
* Rank 15: Heeeelp! from 1
You can see the two rings there, each in their own direction:
#include <iostream>
#include "mpi.h"
using namespace std;
// compile: mpicxx program.cpp
// run: mpirun -np 4 ./a.out
int main(int argc, char *argv[])
{
    int rank; // unique number associated with each core
    int size; // total number of cores
    char message[80];
    char recvd[80];
    int prev_node, next_node;
    int tag;
    MPI_Status status;
    // start MPI interface
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sprintf(message, "Heeeelp! from %d", rank);
    // cout << "Rank " << rank << ": " << message << endl;
    MPI_Barrier(MPI_COMM_WORLD);
    next_node = (rank + 2) % size;
    prev_node = (size + rank - 2) % size;
    tag = 0;
    if (rank % 2) {
        MPI_Send(&message, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD);
        MPI_Recv(&recvd, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD, &status);
    } else {
        MPI_Send(&message, 80, MPI_CHAR, next_node, tag, MPI_COMM_WORLD);
        MPI_Recv(&recvd, 80, MPI_CHAR, prev_node, tag, MPI_COMM_WORLD, &status);
    }
    cout << "* Rank " << rank << ": " << recvd << endl;
    //cout << "After - Rank " << rank << ": " << message << endl;
    // end MPI interface
    MPI_Finalize();
    return 0;
}
When it comes time to write the larger programs (array min, max, avg, and letter counts), you'll need to change things slightly: only rank == 0 will be sending messages at the start; it will send all the other processes their pieces of the puzzle. All the other processes will receive, do the work, then send back their results. rank == 0 will then need to integrate the results from all of them into a coherent single answer.
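One way to do that integration is with MPI_Reduce, which your code already reaches for. A very rough sketch of the min/max/average step, picking up the chunk and local_piece names from the scatter sketch above (local_min and friends are placeholders, and you'd need <climits> for INT_MAX/INT_MIN):

int local_min = INT_MAX, local_max = INT_MIN;
long local_sum = 0;
for (int i = 0; i < chunk; i++) {
    // partial answer over this rank's piece only
    if (local_piece[i] < local_min) local_min = local_piece[i];
    if (local_piece[i] > local_max) local_max = local_piece[i];
    local_sum += local_piece[i];
}

int global_min, global_max;
long global_sum;
// combine the partial answers on rank 0
MPI_Reduce(&local_min, &global_min, 1, MPI_INT,  MPI_MIN, 0, MPI_COMM_WORLD);
MPI_Reduce(&local_max, &global_max, 1, MPI_INT,  MPI_MAX, 0, MPI_COMM_WORLD);
MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

if (rank == 0)
    cout << "min " << global_min << ", max " << global_max
         << ", avg " << (double)global_sum / 100 << endl;

The letter counting has the same shape: each rank counts its piece of the character array into an int counts[26], and you MPI_Reduce that whole 26-element array with MPI_SUM onto rank 0.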