Summary

We started with an implementation of Conway's Game of Life, which gave us an idea of how the many threads of a CUDA kernel are organized into a two-level block-grid structure. We then delved into block-level synchronization by way of the CUDA function __syncthreads(), as well as block-level thread intercommunication using shared memory. We also saw that a single block can only hold a limited number of threads, so we'll have to be careful in using these features when we create kernels that use more than one block across a larger grid.
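The barrier semantics of __syncthreads() can be modeled in plain Python. The sketch below is not real CUDA code; it is a hypothetical host-side analogy in which threading.Barrier plays the role of __syncthreads(), a plain list plays the role of shared memory, and BLOCK_SIZE is an illustrative block size chosen for the example.

```python
import threading

BLOCK_SIZE = 8                            # illustrative "block" size
shared = [0] * BLOCK_SIZE                 # stand-in for __shared__ memory
barrier = threading.Barrier(BLOCK_SIZE)   # stand-in for __syncthreads()
result = []

def kernel(tid, x):
    shared[tid] = x[tid] * 2              # each "thread" writes its own slot
    barrier.wait()                        # no thread proceeds until all have written
    if tid == 0:                          # thread 0 can now safely read every slot
        result.append(sum(shared))

x = list(range(BLOCK_SIZE))
threads = [threading.Thread(target=kernel, args=(t, x)) for t in range(BLOCK_SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result[0])  # 2 * (0 + 1 + ... + 7) = 56
```

As in a real kernel, the barrier is what makes it safe for one thread to read slots of shared memory that other threads have written.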

We gave an overview of the theory of parallel prefix algorithms. We ended by implementing a naive parallel prefix algorithm as a single kernel, synchronized with __syncthreads(), that performs both the for and parfor loops of the algorithm internally and is therefore limited to arrays of up to 1,024 elements. We then implemented a work-efficient parallel prefix algorithm across two kernels and three Python functions that can operate on arrays of arbitrary size: the kernels act as the inner parfor loops of the algorithm, while the Python functions effectively act as the outer for loops and synchronize the kernel launches.
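The structure of both algorithms can be sketched serially in pure Python. This is only a sketch of the loop structure, not the book's GPU implementation: each body marked "parfor" is what a kernel would execute in parallel, while the enclosing while loop is what the host (or __syncthreads() in the single-kernel case) serializes. Function names are illustrative, and the work-efficient version assumes the array length is a power of two.

```python
def naive_scan(x):
    """Naive (Hillis-Steele style) inclusive prefix sum, serial sketch."""
    x = list(x)
    n = len(x)
    d = 1
    while d < n:                          # outer for loop (serial)
        # parfor: one thread per element, reading the previous iteration's values
        x = [x[i] + x[i - d] if i >= d else x[i] for i in range(n)]
        d *= 2
    return x

def work_efficient_scan(x):
    """Work-efficient (Blelloch style) exclusive prefix sum, serial sketch.

    Assumes len(x) is a power of two.
    """
    x = list(x)
    n = len(x)
    # up-sweep (reduce) phase: the first kernel's parfor, driven by a host-side loop
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):      # parfor in the real kernel
            x[i + 2 * d - 1] += x[i + d - 1]
        d *= 2
    # down-sweep phase: the second kernel's parfor
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):      # parfor in the real kernel
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] += t
        d //= 2
    return x

print(naive_scan([1, 2, 3, 4]))           # [1, 3, 6, 10] (inclusive)
print(work_efficient_scan([1, 2, 3, 4]))  # [0, 1, 3, 6] (exclusive)
```

Note that the naive version produces an inclusive scan while the work-efficient version produces an exclusive one; the two differ only by a shift and the final total.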
