Questions

  1. In the launch parameters for the kernel in the first example, our kernels were each launched over 64 threads. If we increase the number of threads to and beyond the number of cores in our GPU, how does this affect the performance of both the original version and the stream version?
  2. Consider the CUDA C example given at the very beginning of this chapter, which illustrated the use of cudaDeviceSynchronize. Do you think it is possible to get some level of concurrency among multiple kernels without using streams, using only cudaDeviceSynchronize?
  3. If you are a Linux user, modify the last example that was given to operate over processes rather than threads. (A minimal sketch of the process-based approach appears after this list.)
  4. Consider the multi-kernel_events.py program; we said it was good that there was a low standard deviation of the kernel execution durations. Why would a high standard deviation be bad?
  5. We only used 10 host-side threads in the last example. Name two reasons why we have to use a relatively small number of threads or processes for launching concurrent GPU operations on the host.
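The following is a minimal sketch of the process-based variant suggested in question 3; it is not the book's solution. It assumes PyCUDA and Python's multiprocessing module with the 'spawn' start method, uses device 0, and uses a simple stand-in kernel (called mult_ker here) over an arbitrary array of 64 * 1024 floats. The point it illustrates is that, unlike host threads, each process must initialize the CUDA driver and create its own context, since contexts cannot be shared across process boundaries.

import numpy as np
from multiprocessing import Process, set_start_method

# Stand-in kernel: repeatedly multiplies and divides each element by 2,
# so the output should equal the input.
kernel_code = """
__global__ void mult_ker(float *array, int array_len)
{
    int thd = blockIdx.x * blockDim.x + threadIdx.x;
    int num_iters = array_len / blockDim.x / gridDim.x;
    for (int j = 0; j < num_iters; j++)
    {
        int i = j * blockDim.x * gridDim.x + thd;
        for (int k = 0; k < 50; k++)
        {
            array[i] *= 2.0;
            array[i] /= 2.0;
        }
    }
}
"""

def worker(proc_id):
    # Import and initialize CUDA inside the child process so that each
    # process owns its own context.
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule
    drv.init()
    ctx = drv.Device(0).make_context()
    try:
        mod = SourceModule(kernel_code)
        mult_ker = mod.get_function("mult_ker")
        data = np.float32(np.random.randn(64 * 1024))
        data_gpu = drv.mem_alloc(data.nbytes)
        drv.memcpy_htod(data_gpu, data)
        mult_ker(data_gpu, np.int32(data.size),
                 block=(64, 1, 1), grid=(1, 1, 1))
        out = np.empty_like(data)
        drv.memcpy_dtoh(out, data_gpu)
        assert np.allclose(out, data)
        print("process %d: kernel output matches input" % proc_id)
    finally:
        ctx.pop()

if __name__ == '__main__':
    # 'spawn' avoids forking a process that has already touched the
    # CUDA driver, which can leave the child with an unusable context.
    set_start_method('spawn')
    procs = [Process(target=worker, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Because there is no shared interpreter state between processes, each worker simply verifies its own result and prints a message rather than writing into a shared output list, as the threaded version could.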