
Book Description

Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis, and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and discuss transformations that expose parallelism for shared memory multicore and vector processors. We then discuss some of the problems that arise when parallelizing programs for execution on distributed memory machines. Finally, we conclude with an overview of solving Diophantine equations and suggestions for further reading on the topics of this book, to enable the interested reader to delve deeper into the field.
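The flavor of the analyses the book develops can be illustrated with a small example. The sketch below, our own illustration in C with OpenMP rather than code from the book, contrasts a loop whose iterations are independent with one that carries a dependence from each iteration to the next; dependence analysis is what lets a compiler tell these two cases apart.

    #include <stdio.h>

    #define N 8

    int main(void) {
        double a[N + 1], b[N + 1];
        for (int i = 0; i <= N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* No loop-carried dependence: iteration i writes only a[i] and
         * reads only b[i], so no iteration touches data written by
         * another. A parallelizing compiler (or this explicit OpenMP
         * pragma) can safely run the iterations in parallel. */
        #pragma omp parallel for
        for (int i = 1; i <= N; i++)
            a[i] = b[i] + 1.0;

        /* Loop-carried flow dependence: iteration i reads a[i - 1],
         * which iteration i - 1 wrote. Dependence analysis detects
         * this, and the loop cannot be parallelized as written. */
        for (int i = 1; i <= N; i++)
            a[i] = a[i - 1] + b[i];

        printf("a[N] = %g\n", a[N]);
        return 0;
    }

(Compile with, e.g., gcc -fopenmp; without OpenMP support the pragma is simply ignored and the program runs sequentially.)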

Table of Contents

  1. Cover
  2. Half title
  3. Copyright
  4. Title
  5. Contents
  6. Preface
  7. 1 Introduction and overview
    1. 1.1 Parallelism and independence
    2. 1.2 Parallel execution
      1. 1.2.1 Shared and distributed memory parallelism
      2. 1.2.2 Structures of parallel computation
    3. 1.3 Compiler fundamentals
      1. 1.3.1 Compiler phases
      2. 1.3.2 Parsing
      3. 1.3.3 Intermediate representations
    4. 1.4 Compiler support for parallel machines
  8. 2 Dependence analysis, dependence graphs and alias analysis
    1. 2.1 Dataflow analysis
      1. 2.1.1 Constant propagation
      2. 2.1.2 Alias analysis
    2. 2.2 Abstract interpretation
    3. 2.3 Data dependence analysis
      1. 2.3.1 Determining references to test for dependence
      2. 2.3.2 Testing for dependence
      3. 2.3.3 Arrays of arrays and dependence analysis
    4. 2.4 Control dependence
    5. 2.5 Use-def chains and dependence
    6. 2.6 Dependence analysis in parallel programs
  9. 3 Program parallelization
    1. 3.1 Simple loop parallelization
    2. 3.2 Parallelizing loops with acyclic and cyclic dependence graphs
    3. 3.3 Targeting vector hardware
    4. 3.4 Parallelizing loops using producer/consumer synchronization
      1. 3.4.1 Producer and consumer synchronization
      2. 3.4.2 Optimizing producer/consumer synchronization
    5. 3.5 Parallelizing recursive constructs
    6. 3.6 Parallelization of while loops
      1. 3.6.1 Determining iterations to be executed by a thread
      2. 3.6.2 Dealing with the effects of speculation
    7. 3.7 Software pipelining for instruction level parallelism
  10. 4 Transformations to modify and eliminate dependences
    1. 4.1 Loop peeling and splitting
    2. 4.2 Loop skewing
    3. 4.3 Induction variable substitution
    4. 4.4 Forward substitution
    5. 4.5 Scalar expansion and privatization
    6. 4.6 Array privatization
    7. 4.7 Node splitting
    8. 4.8 Reduction recognition
    9. 4.9 Which transformations are most important?
  11. 5 Transformation of iterative and recursive constructs
    1. 5.1 Loop blocking or strip mining
    2. 5.2 Loop unrolling
    3. 5.3 Loop fusion and fission
    4. 5.4 Loop reversal
    5. 5.5 Loop interchange
    6. 5.6 Tiling
    7. 5.7 Unimodular transformations
  12. 6 Compiling for distributed memory machines
    1. 6.1 Data distribution
      1. 6.1.1 Replicated distribution
      2. 6.1.2 Block distribution
      3. 6.1.3 Cyclic distribution
      4. 6.1.4 Block-cyclic distribution
    2. 6.2 Computing the reference set
    3. 6.3 Computation partitioning
      1. 6.3.1 Bounds of replicated array dimensions
      2. 6.3.2 Bounds of block distributed array dimensions
      3. 6.3.3 Bounds of cyclically distributed array dimensions
      4. 6.3.4 Bounds of block-cyclically distributed array dimensions
      5. 6.3.5 Subscripts with multiple loop indices
      6. 6.3.6 Generating loop bounds for multi-dimensional arrays
      7. 6.3.7 Generating loop bounds with multiple references
    4. 6.4 Generating communication
      1. 6.4.1 The shift communication pattern
      2. 6.4.2 The broadcast communication pattern
    5. 6.5 Distributed memory programming languages
      1. 6.5.1 High Performance Fortran (HPF)
      2. 6.5.2 Co-Array Fortran
      3. 6.5.3 Unified Parallel C (UPC)
  13. 7 Solving Diophantine equations
    1. 7.1 Solving single Diophantine equations
    2. 7.2 Solving multiple Diophantine equations
    3. 7.3 Extreme values of integer functions
  14. 8 A guide to further reading
    1. 8.1 Compiler fundamentals
    2. 8.2 Dependence analysis, dependence graphs and alias analysis
    3. 8.3 Program parallelization
    4. 8.4 Transformations to modify and eliminate dependences
    5. 8.5 Reduction recognition
    6. 8.6 Transformation of iterative and recursive constructs
    7. 8.7 Tiling
    8. 8.8 Compiling for distributed memory machines
    9. 8.9 Current and future directions in parallelizing compilers
  15. Bibliography
  16. Author’s Biography