MPI (Asynchronous) Loop Iteration

I have a program similar to the one below. In the code below, all processes know the current iteration step of all other processes. However, I am curious whether there is a way to do this without the collective synchronization calls (the MPI_Win_fence pairs around each MPI_Put), especially in a case where each process iterates at a different rate.

#include <errno.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>


/* OUTPUT
 * $ mpirun -np 4 so00.exe
 * @[0]: p |       0       1       2       3
 * @[0]: p |       4       5       6       7
 * @[0]: p |       8       9       10      11
 * @[0]: p |       12      13      14      15
 * @[0]: p |       16      17      18      19
 * @[1]: p |       0       1       2       3
 * @[1]: p |       4       5       6       7
 * @[1]: p |       8       9       10      11
 * @[1]: p |       12      13      14      15
 * @[1]: p |       16      17      18      19
 * @[2]: p |       0       1       2       3
 * @[2]: p |       4       5       6       7
 * @[2]: p |       8       9       10      11
 * @[2]: p |       12      13      14      15
 * @[2]: p |       16      17      18      19
 * @[3]: p |       0       1       2       3
 * @[3]: p |       4       5       6       7
 * @[3]: p |       8       9       10      11
 * @[3]: p |       12      13      14      15
 * @[3]: p |       16      17      18      19
 */
int main(int argc, char *argv[])
{
    int i, n, rank, np;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    ////////////////////////////////////////////////////////////////////////////

    int *pos;
    MPI_Win win;
    MPI_Alloc_mem(sizeof(int)*np, MPI_INFO_NULL, &pos);
    /* note: the size argument is in bytes, so it must be np*sizeof(int) */
    MPI_Win_create(pos, np*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    for (i=rank; i<(np*5); i+=np)
    {
        MPI_Win_fence(MPI_MODE_NOPRECEDE, win);
        for (n = 0; n < np; n++)
        {
            MPI_Put(&i, 1, MPI_INT, n, rank, 1, MPI_INT, win);
        }
        MPI_Win_fence((MPI_MODE_NOSTORE | MPI_MODE_NOSUCCEED), win);
        printf("@[%d]: p | ", rank);
        for (n = 0; n < np; n++) printf("\t%d", pos[n]);
        printf("\n");
    }

    ////////////////////////////////////////////////////////////////////////////

    MPI_Win_free(&win);
    MPI_Free_mem(pos);

    MPI_Finalize();

    return EXIT_SUCCESS;
}


You are not required to use MPI_Win_fence to synchronize RMA calls in MPI. It's also possible to use MPI_Win_lock (and its variants) in what is called "passive target" mode. This mode involves much less synchronization, because it doesn't require the collective calls that "active target" mode does: each origin process locks the target's window, issues its RMA operations, and unlocks, without the target having to participate.

It's far too complex to give a full treatment here, but you can find many papers on the subject along with the documentation on the web. The full MPI-3 standard is available online; see Chapter 11 for one-sided communication.
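As a rough illustration, here is a minimal sketch of your loop rewritten with passive-target synchronization. This is an assumption about what you want (each rank publishes its own step and reads a snapshot at its own pace), not a drop-in replacement: without fences, the values you read for other ranks are whatever they last published, so slower ranks will show stale steps.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int i, n, rank, np;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    int *pos;
    MPI_Win win;
    MPI_Alloc_mem(sizeof(int)*np, MPI_INFO_NULL, &pos);
    for (n = 0; n < np; n++) pos[n] = -1;   /* sentinel: rank not yet reported */
    MPI_Win_create(pos, np*sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Barrier(MPI_COMM_WORLD);            /* all windows exist before any Put */

    for (i = rank; i < (np*5); i += np)
    {
        /* publish this rank's step into every rank's window, no fences */
        for (n = 0; n < np; n++)
        {
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, n, 0, win);
            MPI_Put(&i, 1, MPI_INT, n, rank, 1, MPI_INT, win);
            MPI_Win_unlock(n, win);         /* completes the Put at the target */
        }

        /* read a local snapshot; other ranks may be at any step (or -1) */
        MPI_Win_lock(MPI_LOCK_SHARED, rank, 0, win);
        printf("@[%d]: p |", rank);
        for (n = 0; n < np; n++) printf("\t%d", pos[n]);
        printf("\n");
        MPI_Win_unlock(rank, win);
    }

    MPI_Barrier(MPI_COMM_WORLD);            /* drain in-flight Puts before freeing */
    MPI_Win_free(&win);
    MPI_Free_mem(pos);

    MPI_Finalize();
    return EXIT_SUCCESS;
}
```

Note that locking your own window before reading `pos` is what the standard requires for accessing a non-shared-memory window; with `MPI_LOCK_SHARED`, concurrent Puts from other ranks can still land.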
