Proper use of MPI_THREAD_SERIALIZED with pthreads

After reading the MPI specs, I'm led to understand that, when MPI is initialized with MPI_THREAD_SERIALIZED, the program must ensure that MPI_Send/MPI_Recv calls occurring in separate threads never overlap. In other words, you need a mutex to protect the MPI calls.
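For reference, the thread level is requested via MPI_Init_thread, and the granted level should be checked, since an implementation may provide less than what was requested. A minimal sketch (the abort-on-failure handling here is just illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;

    /* Ask for SERIALIZED; the implementation reports what it can give. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    if (provided < MPI_THREAD_SERIALIZED) {
        fprintf(stderr, "MPI_THREAD_SERIALIZED unavailable (provided=%d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... create threads and do work here ... */

    MPI_Finalize();
    return 0;
}
```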

Consider this situation:

Mutex mpi_lock = MUTEX_INITIALIZER;

void thread1_function(){
    while(true){
        /* things happen */

        lock(mpi_lock);
        MPI_Send(/* some message */);
        unlock(mpi_lock);

        /* eventually break out of loop */
    }
}

void thread2_function(){
    while(true){
        /* things happen */

        char *buffer = CREATE_BUFFER();
        lock(mpi_lock);
        MPI_Recv(buffer /* some message stored in buffer */);
        unlock(mpi_lock);

        /* eventually break out of loop */
    }
}

int main(){
    create_thread(thread1_function);
    create_thread(thread2_function);

    /* wait for both threads before exiting */
    join_threads();

    return 0;
}

Here's my question: Is this the correct method and/or is it necessary? In my situation I have to assume that there may be large time gaps between messages being received in thread2_function(). Is there a way to prevent thread1_function() from having to wait for thread2_function() to complete a receive before being able to perform the send?

I'm already aware of MPI_THREAD_MULTIPLE but system constraints mean this is unavailable to me.

I'm open to suggestions for restructuring the code but my goal is to have a "main" thread that constantly does work and MPI_Send's results without being interrupted, while another thread manages receiving and appending to a queue for the main thread.
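The pthreads side of that structure only needs a small mutex-protected queue that the receiving thread appends to and the main thread drains. A minimal sketch, with hypothetical names (this is not the MPI-locking mutex, just a separate lock for the queue itself):

```c
#include <pthread.h>
#include <stdlib.h>

/* A singly linked FIFO for handing received buffers to the main thread. */
typedef struct node {
    void *msg;
    struct node *next;
} node_t;

typedef struct {
    node_t *head, *tail;
    pthread_mutex_t lock;
} queue_t;

void queue_init(queue_t *q) {
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
}

/* Called by the receiving thread after a receive completes. */
void queue_push(queue_t *q, void *msg) {
    node_t *n = malloc(sizeof *n);
    n->msg = msg;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_mutex_unlock(&q->lock);
}

/* Called by the main thread between work items; returns NULL if empty,
 * so the main thread never blocks waiting for a message. */
void *queue_pop(queue_t *q) {
    pthread_mutex_lock(&q->lock);
    node_t *n = q->head;
    if (n) {
        q->head = n->next;
        if (!q->head) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    if (!n) return NULL;
    void *msg = n->msg;
    free(n);
    return msg;
}
```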

Thanks in advance.


External locking (or some other equivalent synchronization scheme) is absolutely necessary when running at MPI_THREAD_SERIALIZED. The only way around it is MPI_THREAD_MULTIPLE, which is unavailable to you. Also, don't try to substitute MPI_THREAD_SINGLE or MPI_THREAD_FUNNELED for SERIALIZED in this situation: on some platforms and implementations, parts of MPI will quietly break.

The code you posted could deadlock if you are sending/receiving messages between threads within a process. If thread2 launches, acquires the lock, and enters MPI_Recv before thread1 is able to acquire the lock and post the matching MPI_Send, then the program deadlocks: thread1 can never acquire the lock while thread2 is blocked inside MPI_Recv. A similar deadlock can occur whenever a cycle of messages (viewing message transmission as a directed graph between processes/threads) interacts badly with lock acquisition.

Your best bet for avoiding this sort of deadlock is to avoid making blocking MPI calls while holding the lock, and instead use nonblocking calls like MPI_Irecv and MPI_Test.
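For example, the receiving thread can post the receive with MPI_Irecv and then poll its completion with MPI_Test, holding the lock only for the duration of each short MPI call. The lock is free between polls, so the sending thread is never starved. A sketch, assuming the mpi_lock mutex from the question (the tag, source, and back-off interval are illustrative):

```c
#include <mpi.h>
#include <pthread.h>
#include <unistd.h>

extern pthread_mutex_t mpi_lock;   /* the mutex guarding all MPI calls */

void receive_one(char *buffer, int count) {
    MPI_Request req;
    int done = 0;

    /* Post the receive under the lock, then release the lock immediately. */
    pthread_mutex_lock(&mpi_lock);
    MPI_Irecv(buffer, count, MPI_CHAR, MPI_ANY_SOURCE, 0,
              MPI_COMM_WORLD, &req);
    pthread_mutex_unlock(&mpi_lock);

    /* Poll for completion; each MPI_Test is a short critical section. */
    while (!done) {
        pthread_mutex_lock(&mpi_lock);
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        pthread_mutex_unlock(&mpi_lock);
        if (!done)
            usleep(1000);   /* back off so the sending thread can run */
    }
}
```

The same trick applies to the sender if its MPI_Send can block for long periods: use MPI_Isend plus an MPI_Test loop so no thread ever sits inside a blocking MPI call while holding the lock.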
