Chapter 1, Why GPU Programming?

  1. The first two for loops iterate over every pixel; since each pixel's output is independent of every other pixel's, we can parallelize over these two loops. The third for loop computes the final value of a particular pixel, and each of its iterations depends on the result of the previous one, so it is intrinsically recursive and cannot be parallelized.
  2. Amdahl's Law doesn't account for the time it takes to transfer data between GPU memory and the host.
  3. 512 x 512 amounts to 262,144 pixels. This means that the first GPU can only calculate the outputs of half of the pixels at once, while the second GPU can calculate all of the pixels at once; this means the second GPU will be about twice as fast as the first here. The third GPU has more than sufficient cores to calculate all pixels at once, but as we saw in problem 1, the extra cores will be of no use to us here. So the second and third GPUs will be equally fast for this problem.
  4. One issue with generically designating a certain segment of code as parallelizable with regard to Amdahl's Law is that this assumes the computation time for that segment will approach 0 as the number of processors, N, becomes very large. As we can see from the last problem, this is not the case: once there are more cores than there is available parallelism, the extra cores contribute nothing.
  5. First, using time consistently can be cumbersome, and it might not zero in on the bottlenecks of your program. Second, a profiler can tell you the exact computation time of all of your code from the perspective of Python, so you can tell whether some library function or background activity of your operating system is at fault rather than your code.
