Basic Linear Algebra Subprograms

As we've seen in the networks we've built so far, tensor operations are fundamental to machine learning. GPUs are designed for exactly these kinds of vector and matrix operations, but our software also needs to be written to take advantage of that hardware. Enter BLAS!

BLAS provides the building blocks for linear algebra operations, and is commonly used in graphics programming as well as machine learning. BLAS libraries are low level, originally written in Fortran, and group the functionality they offer into three levels, defined by the types of operations covered, as follows (see the sketch after the list):

  • Level 1: Vector operations on strided arrays: dot products, vector norms, and generalized vector addition (AXPY)
  • Level 2: Matrix-vector operations, including generalized matrix-vector multiplication (GEMV) and solvers for linear equations involving triangular matrices
  • Level 3: Matrix-matrix operations, including General Matrix Multiplication (GEMM)
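
To make the first two levels concrete, here is a minimal sketch in Go, assuming a recent version of Gonum's blas64 package (which exposes the classic BLAS routines over float64 slices); the vectors and matrix are just small hand-built examples:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/blas"
	"gonum.org/v1/gonum/blas/blas64"
)

func main() {
	// Level 1: vector-vector operations on strided arrays.
	x := blas64.Vector{N: 3, Inc: 1, Data: []float64{1, 2, 3}}
	y := blas64.Vector{N: 3, Inc: 1, Data: []float64{4, 5, 6}}
	dot := blas64.Dot(x, y) // dot product of x and y
	nrm := blas64.Nrm2(x)   // Euclidean norm of x
	blas64.Axpy(2.0, x, y)  // generalized vector addition: y = 2*x + y
	fmt.Println(dot, nrm, y.Data)

	// Level 2: matrix-vector operations.
	a := blas64.General{ // 2x3 row-major matrix
		Rows: 2, Cols: 3, Stride: 3,
		Data: []float64{1, 0, 0, 0, 1, 0},
	}
	out := blas64.Vector{N: 2, Inc: 1, Data: make([]float64, 2)}
	// Generalized matrix-vector multiplication: out = 1*a*x + 0*out.
	blas64.Gemv(blas.NoTrans, 1.0, a, x, 0.0, out)
	fmt.Println(out.Data)
}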

Level 3 operations are what we're really interested in for deep learning; Gorgonia's CUDA-fied convolution operation, for example, ultimately relies on exactly these matrix-matrix routines.
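
As an illustration of what such a call looks like (a sketch using Gonum's blas64 rather than Gorgonia's CUDA code, with made-up input, weights, and shapes), here is a tiny fully connected layer expressed as a single GEMM:

package main

import (
	"fmt"

	"gonum.org/v1/gonum/blas"
	"gonum.org/v1/gonum/blas/blas64"
)

func main() {
	// Level 3: out = input * weights, where input is (batch x features)
	// and weights is (features x units), both stored row-major.
	input := blas64.General{
		Rows: 2, Cols: 3, Stride: 3,
		Data: []float64{1, 2, 3, 4, 5, 6},
	}
	weights := blas64.General{
		Rows: 3, Cols: 2, Stride: 2,
		Data: []float64{0.1, 0.2, 0.3, 0.4, 0.5, 0.6},
	}
	out := blas64.General{
		Rows: 2, Cols: 2, Stride: 2,
		Data: make([]float64, 4),
	}
	// GEMM computes out = alpha*input*weights + beta*out.
	blas64.Gemm(blas.NoTrans, blas.NoTrans, 1.0, input, weights, 0.0, out)
	fmt.Println(out.Data) // roughly [2.2 2.8 4.9 6.4]
}

Convolutions are typically lowered to calls like this one by first unrolling input patches into a matrix (the im2col trick), which is why GEMM performance dominates deep learning workloads.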
