Mapping

In the mapping stage of the parallel algorithm design process, we specify where each task is to be executed, with the goal of minimizing the total execution time. Trade-offs are often unavoidable here, because the two main placement strategies conflict with each other (a toy cost model contrasting them is sketched after this list):

  • Tasks that communicate frequently should be placed on the same processor to increase locality.
  • Tasks that can execute concurrently should be placed on different processors to enhance concurrency.
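As a rough illustration of the conflict, the sketch below scores the two strategies for a single pair of communicating tasks under an assumed cost model (time ≈ largest per-processor compute load plus cross-processor communication volume); the function name and all numbers are invented for this example, not part of any standard mapping algorithm.

```python
# Toy cost model for a single pair of communicating tasks (illustrative only).
# Assumption: total time ~ max per-processor compute load + cross-processor
# communication volume. All numbers below are made up for the example.

def mapping_cost(task_cost, comm_volume, same_processor):
    """Estimate execution time for two communicating tasks of equal cost."""
    if same_processor:
        # Tasks share a processor: no communication, but they run serially.
        return 2 * task_cost
    # Tasks on different processors: they overlap, but must exchange data.
    return task_cost + comm_volume

task_cost = 4
for comm_volume in (2, 6):
    colocate = mapping_cost(task_cost, comm_volume, same_processor=True)
    spread = mapping_cost(task_cost, comm_volume, same_processor=False)
    print(f"comm={comm_volume}: co-locate={colocate}, spread={spread}")
    # comm=2: co-locate=8, spread=6  -> spreading (concurrency) wins
    # comm=6: co-locate=8, spread=10 -> co-locating (locality) wins
```

Which strategy wins depends on the ratio of communication to computation, which is exactly why mapping decisions require trade-offs.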

This is known as the mapping problem, and it is NP-complete; no polynomial-time algorithm for the general case is known. For tasks of equal size with easily identified communication patterns, the mapping is straightforward (we can also perform agglomeration here, combining tasks that map to the same processor). However, if the tasks have communication patterns that are hard to predict, or the amount of work varies per task, then it is difficult to design an efficient agglomeration and mapping scheme.
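For the straightforward case, a minimal sketch of one common approach, assuming a one-dimensional set of equal-sized tasks with nearest-neighbor communication: agglomerate contiguous blocks of tasks and map one block per processor, so that most neighbor communication stays within a processor. The function and example sizes below are illustrative.

```python
# Minimal block-mapping sketch: n equal-sized tasks with nearest-neighbor
# communication, mapped onto p processors as contiguous blocks so that most
# communication stays local to a processor. Names and sizes are illustrative.

def block_mapping(n_tasks, n_procs):
    """Return a list where entry i is the processor assigned to task i."""
    mapping = []
    base, extra = divmod(n_tasks, n_procs)
    for proc in range(n_procs):
        # Spread any remainder over the first `extra` processors.
        block = base + (1 if proc < extra else 0)
        mapping.extend([proc] * block)
    return mapping

mapping = block_mapping(n_tasks=10, n_procs=3)
print(mapping)  # [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]

# Only neighbor pairs that straddle a block boundary communicate between
# processors; here that is tasks (3, 4) and (6, 7).
cross = [(i, i + 1) for i in range(len(mapping) - 1)
         if mapping[i] != mapping[i + 1]]
print(cross)    # [(3, 4), (6, 7)]
```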

For these types of problems, load-balancing algorithms can be used to identify agglomeration and mapping strategies at run time. The hardest problems are those in which the amount of communication or the number of tasks changes during the execution of the program. For these kinds of problems, dynamic load-balancing algorithms can be used, which run periodically during the execution.
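As one illustration of the dynamic case, the sketch below uses a simple centralized scheme that assigns each task, as it appears, to the currently least-loaded worker. This is only one of many dynamic load-balancing strategies, and the task stream and cost numbers are invented for the example.

```python
import heapq

# Simple centralized dynamic load-balancing sketch: tasks appear over time
# with unpredictable costs, and each is assigned to the least-loaded worker.

def assign_dynamically(task_stream, n_workers):
    """Greedily map each arriving (name, cost) task to the least-loaded worker."""
    # Heap of (accumulated load, worker id), so the lightest worker pops first.
    loads = [(0, w) for w in range(n_workers)]
    heapq.heapify(loads)
    assignment = {}
    for name, cost in task_stream:
        load, worker = heapq.heappop(loads)
        assignment[name] = worker
        heapq.heappush(loads, (load + cost, worker))
    return assignment, sorted(loads, key=lambda x: x[1])

tasks = [("t0", 5), ("t1", 1), ("t2", 7), ("t3", 2), ("t4", 3)]
assignment, final_loads = assign_dynamically(tasks, n_workers=2)
print(assignment)   # {'t0': 0, 't1': 1, 't2': 1, 't3': 0, 't4': 0}
print(final_loads)  # final (load, worker) pairs: [(10, 0), (8, 1)]
```

In a real system the manager would run this decision repeatedly as tasks are created and completed, which is what distinguishes dynamic load balancing from a one-time static mapping.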
