Collective communication using a broadcast

During the development of parallel code, we often need to share the runtime value of a variable among multiple processes, or to apply an operation to variables that each process contributes (presumably with different values).

To resolve these types of situations, communication trees can be used (for example, process 0 sends data to processes 1 and 2, which in turn send it on to processes 3, 4, 5, and 6, and so on).

Instead, MPI libraries provide functions designed for exchanging information among multiple processes, optimized for the machine on which they run:

Broadcasting data from process 0 to processes 1, 2, 3, and 4

A communication method that involves all the processes belonging to a communicator is called a collective communication; consequently, a collective communication generally involves more than two processes. The simplest form of collective communication is the broadcast, in which a single process sends the same data to every other process.
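As a quick illustration, the following is a minimal sketch of a broadcast using Python's mpi4py library (the script name bcast_example.py is chosen here only for illustration): process 0 defines a value, and comm.bcast distributes it to every process in the communicator.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Only the root process defines the data to be shared
    variable_to_share = 100
else:
    variable_to_share = None

# Every process, including the root, receives the root's data
variable_to_share = comm.bcast(variable_to_share, root=0)
print("process = %d, variable shared = %d" % (rank, variable_to_share))

Running it with, for example, mpiexec -n 5 python bcast_example.py should print the same shared value (100) from each of the five processes.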
