
How to use MPI to organize asynchronous communication?

I plan to use MPI to build a solver that supports asynchronous communication. The basic idea is as follows.

Assume there are two parallel processes. Process 1 wants to periodically send good solutions it finds to process 2, and to ask process 2 for good solutions when it needs diversification.

  1. At some point, process 1 uses MPI_Send to send a solution to process 2. How can I guarantee there is an MPI_Recv matching this MPI_Send, since the send is triggered dynamically?

  2. When process 1 needs a solution, how can it send a request to process 2 so that process 2 notices the request in time?


There are three ways to achieve what you want, although none of them is truly asynchronous communication.

1) Use non-blocking sends/receives. Replace your send/recv calls with MPI_Isend/MPI_Irecv plus a wait. The sender can issue an MPI_Isend and continue working on the next problem; at some point it will have to call MPI_Wait to make sure the previous send has completed. Your process 2 can post its receive ahead of time with MPI_Irecv and continue doing its work; again, at some point it calls MPI_Wait to make sure the receive has completed. This may be a bit cumbersome, if I understand your requirement correctly.

2) An elegant way would be to use one-sided communication: MPI_Put and MPI_Get.

3) Restructure your algorithm so that at certain intervals of time, processes 1 and 2 exchange information and state.


Depending on the nature of the MPI_* function you call, the send may block until a matching receive has been posted by another process, so you need to make sure that's going to happen in your code. There are also non-blocking calls, e.g. MPI_Isend, which give you a request handle you can check on later to see whether the send has been matched by a receive.

Regarding your issue, you could post a non-blocking receive (MPI_Irecv being the most basic) and check on its status every n seconds, depending on your application. The status is set to complete when a message has been received and is ready to be read.

If it's time-sensitive, use a blocking call while waiting for a message. Note that the blocking mechanism (in Open MPI at least) uses a spinning poll, so the waiting process will be eating 100% of a CPU.

