Implied synchronization with MPI_BCAST for both sender and receivers?

When calling MPI_BCAST, is there any implied synchronization? For example, if the sender (root) process reaches MPI_BCAST before the others, can it perform the broadcast and then continue without any acknowledgement from the receivers? A recent test with code like:

program test
include 'mpif.h'

integer ierr, tid, tmp

call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, tid, ierr)

tmp = tid

! Only rank 0 calls MPI_BCAST; the other ranks deliberately skip it.
! (The root argument should be the root's rank, 0 -- not MPI_ROOT,
! which is only meaningful for intercommunicators.)
if(tid.eq.0) then
  call MPI_BCAST(tmp, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
endif

write(*,*) tid, 'done'
call MPI_FINALIZE(ierr)

end

shows that with two processes both reach completion, even though only the sender calls MPI_BCAST.

Output:

1 done           0
0 done           0

Could this be a problem with the MPI installation I'm working with (MPICH), or is this standard behavior for MPI?


MPI_BCAST is a blocking collective communication call: it does not return until the calling process's part of the broadcast is complete. Note, however, that the MPI standard does not require a collective call to synchronize the processes; the root is permitted to return as soon as its send buffer can be reused, possibly before any receiver has entered the call.

Your code is too simplified for debugging purposes. Can you post a working minimal example that demonstrates the problem?
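For reference, a minimal sketch of what the corrected test would look like, with every rank making the matching collective call and rank 0 as the root (same names as the question's program; compile with an MPI Fortran wrapper such as mpif90 and run under mpirun):

```fortran
      program test_bcast
      include 'mpif.h'

      integer ierr, tid, tmp

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, tid, ierr)

      tmp = tid

!     Collective call: ALL ranks call MPI_BCAST, naming rank 0 as root.
      call MPI_BCAST(tmp, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

!     After the call, every rank holds the root's value (0) in tmp.
      write(*,*) tid, 'done', tmp
      call MPI_FINALIZE(ierr)
      end
```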


I can attest that MPI_Bcast does NOT necessarily block, at least not for the root (sending) process. If you want to be certain that all processes have reached a given point, call MPI_Barrier immediately afterward. I learned this the hard way: I accidentally called MPI_Bcast from the root process only (instead of collectively), and execution continued as normal until a much later, unrelated call to MPI_Bcast, where the stale broadcast was received into different buffers. The mismatch in buffer data type and length produced garbage data, and it took me a while to find that bug.
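A sketch of the pattern described above: if you need a guaranteed synchronization point after a broadcast, follow it with an explicit MPI_BARRIER, which the standard does require to block until every rank in the communicator has entered it (same setup as the question's program, run under mpirun):

```fortran
      program bcast_then_barrier
      include 'mpif.h'

      integer ierr, tid, tmp

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, tid, ierr)

      tmp = tid
!     The broadcast itself may let the root run ahead of the receivers.
      call MPI_BCAST(tmp, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

!     MPI_BARRIER blocks until every rank has entered it, so no rank
!     proceeds past this point early.
      call MPI_BARRIER(MPI_COMM_WORLD, ierr)

      write(*,*) tid, 'past barrier, tmp =', tmp
      call MPI_FINALIZE(ierr)
      end
```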
