Chapter 4, Kernels, Threads, Blocks, and Grids

  1. Try it.
  2. Not all of the threads operate on the GPU simultaneously. Much like a CPU switching between tasks in an OS, the individual cores of the GPU switch between the different threads of a kernel, so a kernel can be launched with far more threads than there are cores (a minimal sketch demonstrating this follows the list).
  3. O((n / 640) log n), which is O(n log n), since the constant factor 1/640 is absorbed into the big-O.
  4. Try it.
  5. There is actually no internal grid-level synchronization in CUDA, only block-level synchronization (with __syncthreads). We have to synchronize anything above a single block from the host; a sketch of this pattern follows the list.
  6. Naive: 129 addition operations. Work-efficient: 62 addition operations. (The worked counts follow the list.)
  7. Again, we can't use __syncthreads if we need to synchronize over a large grid of blocks. If we synchronize on the host instead, we can also launch fewer threads on each iteration, freeing up more resources for other operations.
  8. In the case of a naive parallel sum, we will likely be working with only a small number of data points, equal to or less than the total number of GPU cores, which can fit within the maximum size of a single block (1,024 threads); since a single block can be synchronized internally, we should do so. We should use the work-efficient algorithm only if the number of data points is far greater than the number of cores available on the GPU.
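
A quick way to see the scheduling behavior from answer 2 is to launch far more threads than any GPU has cores and confirm that all of them run. This is only a minimal sketch (the kernel name mark and the launch configuration are illustrative assumptions, not code from the chapter):

// oversubscribe.cu: launch millions of threads on a GPU with only
// thousands of cores; the hardware scheduler time-slices the blocks.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void mark(int *flags, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        flags[i] = 1;  // each logical thread records that it ran
}

int main()
{
    const int n = 1 << 22;  // about 4 million threads
    int *flags;
    cudaMallocManaged(&flags, n * sizeof(int));
    cudaMemset(flags, 0, n * sizeof(int));  // clear so only threads that ran count

    mark<<<(n + 255) / 256, 256>>>(flags, n);
    cudaDeviceSynchronize();  // wait until every block has been scheduled and run

    long total = 0;
    for (int i = 0; i < n; i++)
        total += flags[i];
    printf("%ld of %d threads ran\n", total, n);

    cudaFree(flags);
    return 0;
}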
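
Answers 5 and 7 describe synchronizing a whole grid from the host. Here is a minimal sketch of that pattern for the naive parallel sum (the kernel name reduce_step and the launch sizes are assumptions for illustration): each launch performs one iteration, consecutive launches on the default stream run in order, and a final cudaDeviceSynchronize hands the result back to the host. Note that each iteration also launches half as many threads as the previous one, which is the resource saving mentioned in answer 7:

// host_sync_sum.cu: grid-wide synchronization by returning to the host
// between kernel launches instead of using __syncthreads.
#include <cstdio>
#include <cuda_runtime.h>

// One reduction step: x[i] += x[i + half] for all i < half.
__global__ void reduce_step(float *x, int half)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < half)
        x[i] += x[i + half];
}

int main()
{
    const int n = 1 << 20;  // a power of two, so repeated halving is exact
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; i++)
        x[i] = 1.0f;

    // Kernels launched on the default stream execute in order, so each
    // step sees the completed results of the one before it; no cross-block
    // __syncthreads is needed.
    for (int half = n / 2; half > 0; half /= 2)
        reduce_step<<<(half + 255) / 256, 256>>>(x, half);

    cudaDeviceSynchronize();  // the grid-level synchronization happens here
    printf("sum = %.0f (expected %d)\n", x[0], n);

    cudaFree(x);
    return 0;
}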
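
The counts in answer 6 match the standard formulas for a scan over n = 32 elements (taking n = 32 is an assumption, but it is the value consistent with both numbers):

Naive (Hillis-Steele), which performs n - 2^{d-1} additions at each step d:

    \sum_{d=1}^{\log_2 n} \left( n - 2^{d-1} \right) = n \log_2 n - (n - 1) = 32 \cdot 5 - 31 = 129

Work-efficient (Blelloch), which performs n - 1 additions in the up-sweep and n - 1 in the down-sweep:

    2(n - 1) = 2 \cdot 31 = 62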