
Difference between MPI_Allgather and MPI_Allgatherv

What is the difference between MPI_Allgather() and MPI_Allgatherv()?


From the MPI standard

MPI_GATHERV extends the functionality of MPI_GATHER by allowing a varying count of data from each process, since recvcounts is now an array. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs.

MPI_ALLGATHERV is then the extension of this in which all processes, not just the root, receive the gathered result.

The signatures for the two functions are

int MPI_Allgather(void * sendbuf, int sendcount, MPI_Datatype sendtype,
                  void * recvbuf, int recvcount, MPI_Datatype recvtype,
                  MPI_Comm comm)
int MPI_Allgatherv(void * sendbuf, int sendcount, MPI_Datatype sendtype,
                   void * recvbuf, int * recvcounts, int * displs,
                   MPI_Datatype recvtype, MPI_Comm comm)

With the v variant you can specify both a size (recvcounts) and a destination offset in the receive buffer (displs) for each process's data.
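For illustration, here is a minimal sketch (not part of the original answer; the buffer sizes and the assumption of at most 16 ranks are ours) in which every rank contributes rank + 1 integers, so the counts differ across ranks and MPI_Allgatherv is needed. With equal contributions a plain MPI_Allgather with a single sendcount/recvcount would suffice.

/* Minimal sketch: rank i contributes i+1 integers, so counts vary.
 * Fixed-size buffers assume at most 16 ranks just to keep it short. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendcount = rank + 1;                 /* varying contribution per rank */
    int sendbuf[16], recvcounts[16], displs[16], recvbuf[256];

    for (int i = 0; i < sendcount; i++)
        sendbuf[i] = rank;

    /* recvcounts[i] and displs[i] describe where rank i's block lands in recvbuf */
    for (int i = 0, offset = 0; i < size; i++) {
        recvcounts[i] = i + 1;
        displs[i] = offset;
        offset += recvcounts[i];
    }

    MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
                   recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < displs[size - 1] + recvcounts[size - 1]; i++)
            printf("%d ", recvbuf[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

On every process, rank i's block ends up at recvbuf + displs[i] and is recvcounts[i] elements long.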


Just to augment the answer already given by @Scott Wales:

In general MPI provides three types of collective calls:

  • simple ones, where the same number of data elements of the same data type is sent to/received from every destination rank. Typical examples are MPI_Scatter, MPI_Gather, MPI_Alltoall, etc. You provide only one argument for the block size (in data elements) and one argument for the data type;

  • vector variants, where it is possible to send/receive a different number of elements to/from each destination rank, but the data type is still the same for all sends/receives. These variants have the suffix "v": MPI_Scatterv, MPI_Gatherv, MPI_Alltoallv, etc. They have almost the same signature as the simple ones, except that the single block-size argument is replaced by two integer vector arguments (hence the name): one for the number of elements in each block and one for the offset (in elements) of each block from the beginning of the data buffer, always in that order (a concrete sketch of this pattern follows the list);

  • the most general type, where it is also possible to send elements of varying data types to each process in the communicator. These variants have the suffix "w". Not all collectives have such variants; MPI_Alltoallw is the only one in version 2.2 of the MPI standard (the latest published one), with more to come in version 3.0.
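To make the vector variants concrete, a common pattern when the per-rank counts are not known in advance (sketched below; the helper name gather_varying and the int payload are illustrative assumptions, not anything mandated by the standard) is to first exchange the counts with the simple MPI_Allgather and then build displs as an exclusive prefix sum before calling MPI_Allgatherv:

/* Sketch of the usual count-exchange + prefix-sum pattern; variable and
 * function names are illustrative, not prescribed by the MPI standard. */
#include <mpi.h>
#include <stdlib.h>

void gather_varying(int *local_data, int local_count, MPI_Comm comm,
                    int **all_data, int *total_count)
{
    int size;
    MPI_Comm_size(comm, &size);

    int *recvcounts = malloc(size * sizeof(int));
    int *displs     = malloc(size * sizeof(int));

    /* Step 1: every rank learns every other rank's count (simple variant). */
    MPI_Allgather(&local_count, 1, MPI_INT, recvcounts, 1, MPI_INT, comm);

    /* Step 2: displacements are the exclusive prefix sum of the counts. */
    int offset = 0;
    for (int i = 0; i < size; i++) {
        displs[i] = offset;
        offset += recvcounts[i];
    }
    *total_count = offset;
    *all_data = malloc(offset * sizeof(int));

    /* Step 3: the vector variant places each rank's block at its displacement. */
    MPI_Allgatherv(local_data, local_count, MPI_INT,
                   *all_data, recvcounts, displs, MPI_INT, comm);

    free(recvcounts);
    free(displs);
}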

Since MPI is a standard and all MPI implementations are required to comply with it (and in fact most do), you can simply search for the MPI function of interest with your favourite search engine and read the first manual page that comes up.
