How it works...

If we try to run this program (it would make sense to execute it with only two processes, since only ranks 1 and 5 actually communicate), we note that neither of the two communicating processes can proceed:

C:\>mpiexec -n 9 python deadLockProblems.py

my rank is : 8
my rank is : 6
my rank is : 7
my rank is : 2
my rank is : 4
my rank is : 3
my rank is : 0
my rank is : 1
my rank is : 5
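The deadlock can also be reproduced without MPI at all. The following sketch is an analogy, not the recipe's code: it uses Python threads and rendezvous-style queues in place of MPI processes, but it shows the same failure mode, namely two peers that both try to receive before they send:

```python
import threading
import queue

# Two channels, one per direction. A get() blocks until the peer's
# put() arrives, mimicking the blocking comm.recv().
chan_1_to_5 = queue.Queue()
chan_5_to_1 = queue.Queue()

def peer_1():
    data = chan_5_to_1.get()   # receive first: blocks forever...
    chan_1_to_5.put("a")       # ...so this send is never reached

def peer_5():
    data = chan_1_to_5.get()   # receive first: blocks forever as well
    chan_5_to_1.put("b")

t1 = threading.Thread(target=peer_1, daemon=True)
t5 = threading.Thread(target=peer_5, daemon=True)
t1.start(); t5.start()

t1.join(timeout=1.0)
t5.join(timeout=1.0)
# Both threads are still blocked inside get(): a deadlock.
print(t1.is_alive(), t5.is_alive())   # True True
```

Each peer is stuck waiting for a message that the other peer will only send after its own receive completes, which is exactly the circular wait described next.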

Both processes prepare to receive a message from the other and get stuck there. This happens because the comm.recv() and comm.send() MPI functions are blocking: the calling process waits for their completion. A comm.send() completes when the data has been sent and its buffer may be overwritten without modifying the message.

A comm.recv(), instead, completes when the data has been received and can be used. To solve this problem, the first idea is to invert comm.recv() with comm.send(), as follows:

if rank==1:
    data_send= "a"
    destination_process = 5
    source_process = 5
    comm.send(data_send,dest=destination_process)
    data_received=comm.recv(source=source_process)
    print ("sending data %s " %data_send +
           "to process %d" %destination_process)
    print ("data received is = %s" %data_received)

if rank==5:
    data_send= "b"
    destination_process = 1
    source_process = 1
    comm.send(data_send,dest=destination_process)
    data_received=comm.recv(source=source_process)
    print ("sending data %s :" %data_send +
           "to process %d" %destination_process)
    print ("data received is = %s" %data_received)

This solution, even if formally correct, does not guarantee that we will avoid deadlock. Communication is, in fact, performed through a buffer with comm.send(): MPI copies the data to be sent into it. This mode works without problems, but only if the buffer is able to hold all the data. If it is not, a deadlock occurs: the sender cannot finish sending its data because the buffer is full, and the receiver cannot receive data because it is blocked on its own comm.send() call, which has not yet completed.
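This buffer-dependent behavior can be mimicked with bounded queues standing in for MPI's send buffers (again a thread-based analogy, not mpi4py code): a put() on a queue with free capacity returns immediately, like a buffered comm.send(), while a put() on a full queue blocks, like a send whose data does not fit the buffer:

```python
import threading
import queue

# Each direction gets a "send buffer" with room for one message.
buf_1_to_5 = queue.Queue(maxsize=1)
buf_5_to_1 = queue.Queue(maxsize=1)

received = {}

def peer(name, out_buf, in_buf, payload):
    # Send first: returns immediately while the buffer has room,
    # exactly like a buffered comm.send() with a small message.
    out_buf.put(payload)
    # Then receive the peer's message.
    received[name] = in_buf.get()

t1 = threading.Thread(target=peer, args=(1, buf_1_to_5, buf_5_to_1, "a"))
t5 = threading.Thread(target=peer, args=(5, buf_5_to_1, buf_1_to_5, "b"))
t1.start(); t5.start()
t1.join(); t5.join()
print(received[1], received[5])   # b a

# Had each peer tried to push more items than the 1-slot buffer holds
# before receiving, both put() calls would block and the symmetric
# send-first version would deadlock, just as described above.
```

The exchange completes only because each message fits its buffer; the correctness of the symmetric send-first version therefore depends on buffer capacity, which MPI does not guarantee.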

At this point, the solution that allows us to avoid deadlock is to swap the sending and receiving functions so as to make them asymmetrical:

if rank==1:
    data_send= "a"
    destination_process = 5
    source_process = 5
    comm.send(data_send,dest=destination_process)
    data_received=comm.recv(source=source_process)

if rank==5:
    data_send= "b"
    destination_process = 1
    source_process = 1
    data_received=comm.recv(source=source_process)
    comm.send(data_send,dest=destination_process)

Finally, we get the correct output:

C:\>mpiexec -n 9 python deadLockProblems.py
 
my rank is : 4
my rank is : 0
my rank is : 3
my rank is : 8
my rank is : 6
my rank is : 7
my rank is : 2
my rank is : 1
sending data a to process 5
data received is = b
my rank is : 5
sending data b :to process 1
data received is = a