11.12 SUMMARY OF WORK DONE IN THIS CHAPTER
At this stage, we were able to completely specify the reduced computation domain associated with the matrix–matrix multiplication algorithm. This domain could represent the concurrent threads required for a software implementation or the processing elements (PEs) required for a systolic array hardware implementation. Below we summarize what we have done and why:
1. We started by expressing the matrix multiplication as an iterative equation (Eq. 11.1).
2. The indices of the iterative equation defined the multidimensional computation domain. The facets and vertices of this domain were studied in Sections 11.3 and 11.4.
3. We identified the dependence matrix A associated with each variable of the algorithm in Section 11.5. From this matrix, we obtained its nullvectors, which span the broadcast subdomain B of the variable. We were also able to identify the intersection points of B with the computation domain; these intersection points help in supplying input variables or extracting output results. At this stage, we can decide whether to broadcast or to pipeline each variable.
4. Scheduling of data was discussed in Section 11.8 using affine scheduling functions.
5. The projection of the computation domain onto a lower-dimensional domain was discussed in Section 11.9. Three different projection operations were presented, but only one was studied in detail.
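As a concrete illustration of step 2 above, the computation domain can be enumerated directly. The following sketch uses assumed, illustrative matrix dimensions I, J, and K; it lists the domain points, the corner vertices, and one facet of the box-shaped domain:

```python
from itertools import product

# Computation domain of C = A * B, with A of size I x K and B of size
# K x J:  D = { (i, j, k) : 0 <= i < I, 0 <= j < J, 0 <= k < K }.
I, J, K = 4, 3, 2  # illustrative sizes, not from the text

# One domain point per multiply-accumulate operation.
domain = [(i, j, k) for i in range(I) for j in range(J) for k in range(K)]

# For this box-shaped domain the vertices are the eight corner points,
# and each facet is the set of points with one index at an extreme.
vertices = list(product((0, I - 1), (0, J - 1), (0, K - 1)))
facet_k0 = [p for p in domain if p[2] == 0]  # facet on which k = 0
```

Facets such as `facet_k0` are where input variables are fed in or output results are extracted, which is why steps 2 and 3 of the analysis focus on them.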
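The nullvectors of step 3 can also be computed numerically. In the sketch below (assuming NumPy is available), the dependence matrix of each variable of c(i, j) = sum over k of a(i, k) * b(k, j) simply selects the indices that variable uses, and the null space of that matrix gives the broadcast direction:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis for the null space of M (columns of result)."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

# Index vector p = (i, j, k)^T.  Each variable's dependence matrix
# selects the indices that the variable actually uses.
A_a = np.array([[1, 0, 0],   # a(i, k) uses i ...
                [0, 0, 1]])  # ... and k; it is independent of j
A_b = np.array([[0, 0, 1],   # b(k, j) uses k ...
                [0, 1, 0]])  # ... and j; it is independent of i
A_c = np.array([[1, 0, 0],   # c(i, j) uses i ...
                [0, 1, 0]])  # ... and j; it is independent of k

# The nullvectors span the broadcast subdomain B of each variable:
# moving along a nullvector does not change the variable instance.
e_a = null_space(A_a)  # ~ (0, 1, 0): a can be broadcast along j
e_b = null_space(A_b)  # ~ (1, 0, 0): b can be broadcast along i
e_c = null_space(A_c)  # ~ (0, 0, 1): c accumulates along k
```

Whether each of these directions is realized as a broadcast or as a pipeline is exactly the design decision mentioned at the end of step 3.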
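For the affine scheduling of step 4, a schedule t(p) = s . p + c is valid only if it is causal: the scheduled time must strictly increase along every direction in which data is pipelined. A minimal sketch, assuming the common choice s = (1, 1, 1) for matrix multiplication (the chapter's actual scheduling vector may differ):

```python
# Affine scheduling function t(p) = s . p + c over p = (i, j, k).
s = (1, 1, 1)  # assumed scheduling vector, chosen for illustration

def time_of(p, s=s, c=0):
    """Time step assigned to domain point p by the affine schedule."""
    return sum(si * pi for si, pi in zip(s, p)) + c

# Causality check: every pipelining direction d must satisfy
# s . d > 0, so a value is produced strictly before it is consumed.
pipelining_dirs = [(0, 0, 1)]  # e.g., partial sums of c flow along k
assert all(time_of(d) > 0 for d in pipelining_dirs)

# All points on the same hyperplane i + j + k = const share a time
# step and therefore execute concurrently.
```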
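The projection of step 5 can be sketched as follows; the projection direction d = (0, 0, 1) and the resulting 2-D PE array are illustrative assumptions, not the only valid choice:

```python
# Projection of the 3-D domain onto a 2-D PE array along direction
# d = (0, 0, 1).  The projection matrix P must satisfy P d = 0, so
# all points differing only in k map to the same processing element.
P = [(1, 0, 0),
     (0, 1, 0)]

def project(p):
    """Map a 3-D domain point to its 2-D processing element."""
    return tuple(sum(r * x for r, x in zip(row, p)) for row in P)

I, J, K = 4, 3, 2  # illustrative sizes, not from the text
pes = {project((i, j, k)) for i in range(I)
                          for j in range(J)
                          for k in range(K)}
# The 24-point domain collapses onto a 4 x 3 array of PEs; each PE
# performs the K multiply-accumulate steps for one element c(i, j).
```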
11.13 PROBLEMS
To get a feel for the formal computational geometry analysis technique, it is helpful to apply it to simple 2-D or 3-D algorithms. Some of the following problems are intended for that purpose. To analyze a problem, the following steps are required:
1. Determine the computation domain and its facets and vertices.
2. Obtain the dependence matrix of each variable, then determine the basis vectors and nullvectors of the matrix.
3. Obtain the feeding or extraction points of the variables, which lie on some of the facets of the computation domain.
4. Determine the projection matrix.
5. Determine the scheduling function.
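As a worked illustration of the five steps only (not a full solution to any of the problems below), consider the matrix–vector multiplication of Problem 11.5; all specific choices in this sketch, such as the projection direction and the scheduling vector, are assumptions:

```python
# Sketch of the five analysis steps for y = A x, assuming the
# iteration y(i) += a(i, j) * x(j) over 0 <= i < I, 0 <= j < J.
I, J = 4, 3  # illustrative sizes

# Step 1: the domain is the I x J rectangle with four corner vertices.
domain = [(i, j) for i in range(I) for j in range(J)]
vertices = [(0, 0), (0, J - 1), (I - 1, 0), (I - 1, J - 1)]

# Step 2: x(j) ignores i -> nullvector (1, 0); y(i) ignores j ->
# nullvector (0, 1); a(i, j) uses both indices, so it has no
# broadcast nullvector and must be supplied pointwise.

# Step 3: x is fed on the facet i = 0 and travels along i;
# y accumulates along j and is extracted on the facet j = J - 1.

# Step 4: projecting along (0, 1) assigns each point (i, j) to
# PE(i), giving a linear array of I processing elements.
pe_of = {p: p[0] for p in domain}

# Step 5: t(i, j) = i + j is a causal affine schedule: it strictly
# increases along both pipelining directions (1, 0) and (0, 1).
t = {p: p[0] + p[1] for p in domain}
```

The remaining problems follow the same pattern; only the shape of the domain and the dependence matrices change from one algorithm to the next.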
11.1. Apply the computational geometry technique to the 1-D finite impulse response (FIR) digital filter algorithm.
11.2. Apply the computational geometry technique to the 2-D FIR digital filter algorithm.
11.3. Apply the computational geometry technique to the 1-D infinite impulse response (IIR) digital filter algorithm.
11.4. Apply the computational geometry technique to the 2-D IIR digital filter algorithm.
11.5. Apply the computational geometry technique to the matrix–vector multiplication algorithm.
11.6. Apply the computational geometry technique to the 1-D convolution algorithm.
11.7. Apply the computational geometry technique to the 2-D convolution algorithm.
11.8. Apply the computational geometry technique to the 1-D autocorrelation algorithm.
11.9. Apply the computational geometry technique to the 1-D cross-correlation algorithm.
11.10. Apply the computational geometry technique to the 2-D autocorrelation algorithm.
11.11. Apply the computational geometry technique to the 2-D cross-correlation algorithm.
11.12. Apply the computational geometry technique to the 3-D autocorrelation algorithm.
11.13. Apply the computational geometry technique to the 3-D cross-correlation algorithm.