MPI atomic read/modify/write

Is there an easy way to implement atomic integer operations (one-sided) in MPI? The last time I looked, three years ago, the example in the MPI book was fairly complex to implement.


MPI one-sided communication is fairly complex, with about three (more like two-and-a-half) different mechanisms.

The first two modes are "active target synchronization", where the target (the process being targeted; the process making the one-sided call is called the origin) explicitly declares an epoch during which its window (the "shared" area) is exposed. You then have a distinction between declaring this epoch collectively (MPI_Win_fence) and declaring it locally within a group (the MPI_Win_start/post/wait/complete calls).
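As a concrete illustration, here is a minimal sketch of the fence flavor, assuming at least two ranks; the rank numbers, window contents, and variable names are illustrative only:

    /* Active target synchronization with MPI_Win_fence: all ranks
     * collectively open and close the exposure epoch; rank 0 reads
     * rank 1's integer. Error checking omitted for brevity. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value, result = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = 100 * rank;                /* each rank exposes one int */

        MPI_Win_create(&value, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);             /* collectively open the epoch */
        if (rank == 0)
            MPI_Get(&result, 1, MPI_INT, 1 /* target */, 0 /* disp */,
                    1, MPI_INT, win);
        MPI_Win_fence(0, win);             /* close; transfers now complete */

        if (rank == 0)
            printf("rank 0 read %d from rank 1\n", result);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }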

Something closer to true one-sided communication is done with the MPI_Win_lock/unlock calls, where the origin locks the "shared" area on the target to get exclusive access to it. This is called "passive target synchronization" because the target is completely unaware of anything happening to its shared area; in practice it requires some agent, such as a progress thread or daemon, to be running on the target.
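A minimal sketch of the lock/unlock flavor, under the same assumptions (at least two ranks, illustrative names):

    /* Passive target synchronization: rank 1 writes into rank 0's
     * window without rank 0 making any matching RMA call. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 1) {
            int data = 42;
            /* exclusive lock gives rank 1 sole access to rank 0's window */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            MPI_Put(&data, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
            MPI_Win_unlock(0, win);   /* the put is complete on return */
        }

        MPI_Barrier(MPI_COMM_WORLD);  /* order rank 0's read after the put */
        if (rank == 0) {
            /* lock the local window so the private copy is synchronized */
            MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
            printf("rank 0 now holds %d\n", buf);
            MPI_Win_unlock(0, win);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }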

That was the state of MPI-2. Unfortunately, you could only read or write, but not both, in a single lock/unlock epoch, so atomic fetch-and-modify operations were not possible in a straightforward way. This was solved in MPI-3, which added the MPI_Fetch_and_op call.

For instance, if you use MPI_REPLACE you get a readout of the area in "shared" memory and atomically overwrite it with a value you specify. That is enough to implement atomic operations.
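As an illustration, here is a minimal sketch of an atomic fetch-and-add counter built on MPI_Fetch_and_op, using MPI_SUM rather than MPI_REPLACE; placing the counter on rank 0 and the variable names are assumptions for the example:

    /* Every rank atomically increments a counter on rank 0 and gets
     * back the value it saw. Error checking omitted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, counter = 0, one = 1, fetched = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* every rank creates the window; only rank 0's memory is targeted */
        MPI_Win_create(&counter, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        /* atomic: fetched = old counter on rank 0, then counter += one */
        MPI_Fetch_and_op(&one, &fetched, MPI_INT, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);

        printf("rank %d saw counter value %d\n", rank, fetched);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

A shared lock suffices here because accumulate-style operations are guaranteed atomic with respect to each other.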


MPI 3.0 added atomics. See https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node272.htm for details.

  • MPI_Accumulate performs an atomic update on window data.
  • MPI_Get_accumulate fetches the value and performs an update.
  • MPI_Fetch_and_op is similar to MPI_Get_accumulate but is a shorthand function for the common case of a single element.
  • MPI_Compare_and_swap does what the name suggests.

See https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node290.htm for details on the semantic guarantees of these functions.
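For instance, here is a minimal sketch of a one-shot "claim the flag" race built on MPI_Compare_and_swap; the flag's placement on rank 0 and all variable names are assumptions for the example:

    /* Every rank tries to atomically set rank 0's flag from 0 to 1;
     * exactly one rank wins the race. Error checking omitted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, flag = 0;
        int newval = 1, expected = 0, old = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Win_create(&flag, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        /* atomically: old = flag@rank0; if (old == expected) flag@rank0 = newval */
        MPI_Compare_and_swap(&newval, &expected, &old, MPI_INT, 0, 0, win);
        MPI_Win_unlock(0, win);

        if (old == expected)
            printf("rank %d won the race and set the flag\n", rank);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }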


There is no way to implement a general-case "atomic" one-sided read/modify/write operation using MPI.

For operations between nodes, there is no way using common interconnects to get anywhere near an "atomic" operation. The TCP/IP layer cannot perform any atomic operations. An InfiniBand verbs (IBV) fabric involves layers of libraries and a kernel module down to the local HCA, some path through one or more switches, and another HCA with a kernel module and more layers of libraries on the other side.

For operations between ranks on the same node, if you need a guarantee of atomicity for single-integer operations, then shared memory is the appropriate tool to use, not MPI.
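As a hedged sketch of that advice, assuming all ranks run on a single node and a C11 compiler with lock-free atomics is available: MPI_Win_allocate_shared can hand out node-local shared memory, and the atomic operation itself is then an ordinary C11 atomic with no MPI call in the hot path. All names here are illustrative:

    /* Allocate one shared atomic_int on the node and increment it
     * from every rank with a hardware atomic. Error checking omitted. */
    #include <mpi.h>
    #include <stdatomic.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, node_rank;
        MPI_Comm node;
        MPI_Win win;
        atomic_int *counter;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* communicator containing only ranks that can share memory */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        MPI_Comm_rank(node, &node_rank);

        /* rank 0 of the node owns the int; the others allocate 0 bytes
         * and query rank 0's base address */
        MPI_Win_allocate_shared(node_rank == 0 ? sizeof(atomic_int) : 0,
                                sizeof(atomic_int), MPI_INFO_NULL, node,
                                &counter, &win);
        if (node_rank != 0) {
            MPI_Aint size;
            int disp;
            MPI_Win_shared_query(win, 0, &size, &disp, &counter);
        }

        if (node_rank == 0)
            atomic_store(counter, 0);
        MPI_Barrier(node);

        /* the increment is a hardware atomic, not an MPI call */
        int seen = atomic_fetch_add(counter, 1);
        printf("rank %d saw %d\n", rank, seen);

        MPI_Barrier(node);
        MPI_Win_free(&win);
        MPI_Comm_free(&node);
        MPI_Finalize();
        return 0;
    }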
