CUDA - GPU-Accelerated Training

This chapter looks at the hardware side of deep learning. First, we will examine how CPUs and GPUs serve our computational needs for building Deep Neural Networks (DNNs), how they differ, and where each excels. The performance improvements offered by GPUs have been central to the success of deep learning.

We will learn how to get Gorgonia working with our GPU and how to accelerate our Gorgonia models using CUDA, NVIDIA's parallel computing platform and API, which makes it straightforward to build and run GPU-accelerated deep learning models. We will also learn how to build a model that uses GPU-accelerated operations in Gorgonia (a minimal sketch follows the topic list below), and then benchmark these models against their CPU counterparts to determine which is the better option for different tasks.

In this chapter, the following topics will be covered:

  • CPUs versus GPUs
  • Understanding Gorgonia and CUDA
  • Building a model in Gorgonia with CUDA
  • Performance benchmarking of CPU versus GPU models for training and inference
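As a preview of the workflow covered in this chapter, the following is a minimal sketch of a Gorgonia graph whose operations can be dispatched to the GPU. It assumes Gorgonia's CUDA support is enabled by compiling with the `cuda` build tag and that the `UseCudaFor` virtual machine option is available; treat the exact option usage here as illustrative rather than definitive, as later sections walk through the full setup:

```go
package main

import (
	"fmt"
	"log"

	"gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

func main() {
	g := gorgonia.NewGraph()

	// Two 2x2 float32 matrices as graph inputs.
	x := gorgonia.NewMatrix(g, tensor.Float32, gorgonia.WithShape(2, 2), gorgonia.WithName("x"))
	y := gorgonia.NewMatrix(g, tensor.Float32, gorgonia.WithShape(2, 2), gorgonia.WithName("y"))

	// A single matrix multiplication: the op we would like to run on the GPU.
	z, err := gorgonia.Mul(x, y)
	if err != nil {
		log.Fatal(err)
	}

	// Bind concrete values to the inputs.
	xT := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float32{1, 2, 3, 4}))
	yT := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float32{5, 6, 7, 8}))
	gorgonia.Let(x, xT)
	gorgonia.Let(y, yT)

	// When the program is built with `go build -tags='cuda'`, UseCudaFor asks
	// the tape machine to dispatch the named ops (here, "mul") to the GPU;
	// without the build tag, execution falls back to the CPU.
	m := gorgonia.NewTapeMachine(g, gorgonia.UseCudaFor("mul"))
	defer m.Close()

	if err := m.RunAll(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(z.Value())
}
```

The important point is that the graph-building code is identical for CPU and GPU execution; only the build tag and the virtual machine options change, which is what makes the CPU-versus-GPU benchmarks later in the chapter a like-for-like comparison.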