Chapter 11. Mathematical and Parallel Techniques for Data Analysis

The concurrent execution of a program can result in significant performance improvements. In this chapter, we will address the various techniques that can be used in data science applications. These can range from low-level mathematical calculations to higher-level API-specific options.

Always keep in mind that performance enhancement starts with ensuring that the correct set of application functionality is implemented. If the application does not do what a user expects, then the enhancements are for naught. The architecture of the application and the algorithms used are also more important than code enhancements. Always use the most efficient algorithm available, and only then consider code enhancements. We will not address these higher-level optimization issues in this chapter; instead, we will focus on code enhancements.

Many data science applications and supporting APIs use matrix operations to accomplish their tasks. Often these operations are buried within an API, but there are times when we may need to use these directly. Regardless, it can be beneficial to understand how these operations are supported. To this end, we will explain how matrix multiplication is handled using several different approaches.

Concurrent processing can be implemented using Java threads. A developer can use threads and thread pools to improve an application's response time. Many APIs will use threads when multiple CPUs or GPUs are not available, as is the case with Aparapi. We will not illustrate the use of threads here. However, the reader is assumed to have a basic knowledge of threads and thread pools.

The map-reduce algorithm is used extensively for data science applications. We will present a technique for achieving this type of parallel processing using Apache's Hadoop. Hadoop is a framework supporting the manipulation of large datasets, and can greatly decrease the required processing time for large data science projects. We will demonstrate a technique for calculating an average value for a sample set of data.

There are several well-known APIs that support multiple processors, including CUDA and OpenCL. CUDA is supported using Java bindings for CUDA (JCuda) (http://jcuda.org/). We will not demonstrate this technique directly here. However, many of the APIs we will use do support CUDA if it is available, such as DL4J. We will briefly discuss OpenCL and how it is supported in Java. It is worth noting that the Aparapi API provides higher-level support, which may use either multiple CPUs or GPUs. A demonstration of Aparapi in support of matrix multiplication will be illustrated.

In this chapter, we will examine how multiple CPUs and GPUs can be harnessed to speed up data mining tasks. Many of the APIs we have used already take advantage of multiple processors or at least provide a means to enable GPU usage. We will introduce a number of these options in this chapter.

Concurrent processing is also supported extensively in the cloud. Many of the techniques discussed here are used in the cloud. As a result, we will not explicitly address how to conduct parallel processing in the cloud.

Implementing basic matrix operations

There are several different types of matrix operations, including simple addition, subtraction, scalar multiplication, and various forms of multiplication. To illustrate the matrix operations, we will focus on what is known as the matrix product. This is a common operation that involves the multiplication of two matrices to produce a third matrix.

Consider two matrices, A and B, where matrix A has n rows and m columns and matrix B has m rows and p columns. The product of A and B, written as AB, is an n row by p column matrix. The m entries of each row of A are multiplied by the m entries of the corresponding column of matrix B and summed. This is more explicitly shown here, where:

A = [a_ij] is an n × m matrix, B = [b_ij] is an m × p matrix, and C = AB = [c_ij] is an n × p matrix

Where the product is defined as follows:

c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_im b_mj = Σ (k = 1, ..., m) a_ik b_kj

We start with the declaration and initialization of the matrices. The variables n, m, p represent the dimensions of the matrices. The A matrix is n by m, the B matrix is m by p, and the C matrix representing the product is n by p:

int n = 4; 
int m = 2; 
int p = 3; 
 
double A[][] = { 
    {0.1950, 0.0311}, 
    {0.3588, 0.2203}, 
    {0.1716, 0.5931}, 
    {0.2105, 0.3242}}; 
double B[][] = { 
    {0.0502, 0.9823, 0.9472}, 
    {0.5732, 0.2694, 0.916}}; 
double C[][] = new double[n][p]; 

The following code sequence illustrates the multiplication operation using nested for loops:

for (int i = 0; i < n; i++) { 
    for (int k = 0; k < m; k++) { 
        for (int j = 0; j < p; j++) { 
            C[i][j] += A[i][k] * B[k][j]; 
        } 
    } 
} 

The following code sequence formats the output to display our matrix:

out.println("
Result"); 
for (int i = 0; i < n; i++) { 
    for (int j = 0; j < p; j++) { 
        out.printf("%.4f  ", C[i][j]); 
    } 
    out.println(); 
} 

The result appears as follows:

Result
0.0276  0.1999  0.2132  
0.1443  0.4118  0.5417  
0.3486  0.3283  0.7058  
0.1964  0.2941  0.4964

Later, we will demonstrate several alternative techniques for performing the same operation. Next, we will discuss how to tailor support for multiple processors using DL4J.

Using GPUs with DeepLearning4j

DeepLearning4j works with GPUs such as those provided by NVIDIA. There are options available that enable the use of GPUs, specify how many GPUs should be used, and control the use of GPU memory. In this section, we will show how you can use these options. This type of control is often available with other high-level APIs.

DL4J uses n-dimensional arrays for Java (ND4J) (http://nd4j.org/) to perform numerical computations. This is a library that supports n-dimensional array objects and other numerical computations, such as linear algebra and signal processing. It includes support for GPUs and is also integrated with Hadoop and Spark.

A vector is a one-dimensional array of numbers and is used extensively with neural networks. A vector is a type of mathematical structure called a tensor. A tensor is essentially a multidimensional array, and the number of dimensions it possesses is called its rank. A vector is a tensor of rank one, a matrix is a tensor of rank two, and higher-rank tensors have three or more dimensions.

There is often a need to map a multidimensional set of numbers to a one-dimensional array. This is done by flattening the array using a defined order. For example, with a two-dimensional array, many systems will allocate the members of the array in row-column order. This means the first row is copied to the vector, followed by the second row, and then the third, and so forth. We will use this approach in the Using the ND4J API section.
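As a simple illustration of row-column flattening, here is a hand-written sketch, not the code ND4J itself uses, that copies a two-dimensional array into a one-dimensional array one row at a time:

double matrix[][] = { 
    {1.0, 2.0, 3.0}, 
    {4.0, 5.0, 6.0}}; 
int rows = matrix.length; 
int cols = matrix[0].length; 
double vector[] = new double[rows * cols]; 
 
for (int i = 0; i < rows; i++) { 
    for (int j = 0; j < cols; j++) { 
        // Row i occupies positions i*cols through i*cols + (cols - 1) of the vector 
        vector[i * cols + j] = matrix[i][j]; 
    } 
} 

The 2 by 3 matrix above flattens to the vector {1.0, 2.0, 3.0, 4.0, 5.0, 6.0}.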

To enable GPU use, the project's POM file needs to be modified. In the properties section of the POM file, the nd4j.backend tag needs to be added or modified, as shown here:

<nd4j.backend>nd4j-cuda-7.5-platform</nd4j.backend> 

Models can be trained in parallel using the ParallelWrapper class. The training task is automatically distributed among available CPUs/GPUs. The model is used as an argument to the ParallelWrapper class' Builder constructor, as shown here:

ParallelWrapper parallelWrapper =  
    new ParallelWrapper.Builder(aModel) 
        // Builder methods... 
        .build(); 

When executed, a copy of the model is used on each GPU. After the number of iterations specified by the averagingFrequency method, the models are averaged and the training process then continues.

There are various methods that can be used to configure the class, as summarized in the following table:

Method                                            Purpose
prefetchBuffer                                    Specifies the size of the buffer used to pre-fetch data
workers                                           Specifies the number of workers to be used
averageUpdaters, averagingFrequency,              Various methods that control how averaging is performed
reportScoreAfterAveraging, useLegacyAveraging

The number of workers should be equal to or greater than the number of GPUs available.
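As a rough sketch of how these methods can be combined, the following example configures a wrapper using illustrative values. It assumes an already configured model named aModel and a DataSetIterator named trainingData, neither of which is defined here:

import org.deeplearning4j.parallelism.ParallelWrapper; 
 
ParallelWrapper parallelWrapper =  
    new ParallelWrapper.Builder(aModel) 
        // Number of batches pre-fetched for each worker (illustrative value) 
        .prefetchBuffer(24) 
        // Number of workers; equal to or greater than the number of GPUs 
        .workers(4) 
        // Average the model parameters every 3 iterations 
        .averagingFrequency(3) 
        // Report the model's score after each averaging step 
        .reportScoreAfterAveraging(true) 
        .build(); 
 
// The wrapper's fit method distributes training across the available devices 
parallelWrapper.fit(trainingData); 

Averaging very frequently adds synchronization overhead, while averaging too infrequently can slow convergence, so the averagingFrequency value is typically tuned for the model and hardware at hand.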

As with most computations, using lower-precision values will speed up processing. This can be controlled using the setDTypeForContext method, as shown next. In this case, half precision is specified:

DataTypeUtil.setDTypeForContext(DataBuffer.Type.HALF); 

More details regarding this support and other optimization techniques can be found at http://deeplearning4j.org/gpu.
