Decomposition

There are three decompositions provided by numpy.linalg, and in this section we will cover the two that are most commonly used: singular value decomposition (svd) and QR factorization. We will begin by computing eigenvalues and eigenvectors; if you are not familiar with them, you may review the basics at https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors. Let's start:

In [41]: x = np.random.randint(0, 10, 9).reshape(3,3) 
In [42]: x 
Out[42]: 
array([[1, 5, 0], 
       [7, 4, 0], 
       [2, 9, 8]]) 
In [43]: w, v = np.linalg.eig(x) 
In [44]: w 
Out[44]: array([ 8.    ,  8.6033, -3.6033]) 
In [45]: v 
Out[45]: 
array([[ 0.    ,  0.0384,  0.6834], 
       [ 0.    ,  0.0583, -0.6292], 
       [ 1.    ,  0.9976,  0.3702]]) 

In the previous example, we first created a 3 x 3 ndarray using numpy.random.randint() and then computed the eigenvalues and eigenvectors using np.linalg.eig(). The function returns a tuple of two arrays: the first contains the eigenvalues, each repeated according to its multiplicity, and the second contains the normalized eigenvectors, in which column v[:, i] is the eigenvector corresponding to the eigenvalue w[i]. In this example, we unpacked the tuple into w and v.
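
We can quickly verify that correspondence ourselves: multiplying x by each eigenvector column should give the same result as scaling that column by its eigenvalue. Here is a minimal sketch of the check (it is not part of the recorded session):

for i in range(3):
    # x @ v[:, i] should equal w[i] * v[:, i] for each eigenpair
    print(np.allclose(x.dot(v[:, i]), w[i] * v[:, i]))   # expected: True each time

If the input ndarray is complex-valued, the computed eigenvectors will be of the complex type too, as you can see in the following example: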

In [46]: y = np.array([[1, 2j],[-3j, 4]]) 
In [47]: np.linalg.eig(y) 
Out[47]: 
(array([-0.3723+0.j    ,  5.3723+0.j    ]), 
 array([[ 0.8246+0.j    ,  0.0000+0.416j ], 
        [-0.0000+0.5658j,  0.9094+0.j    ]])) 

But if the input ndarray is real and all of its eigenvalues turn out to be real, the returned arrays will be of the real type as well; in that case we should be careful about floating-point rounding errors, as you can see in the following example:

In [48]: z = np.array([[1 + 1e-10, -1e-10],[1e-10, 1 - 1e-10]]) 
In [49]: np.linalg.eig(z) 
Out[49]: 
(array([ 1.,  1.]), 
 array([[ 0.70710678,  0.707106  ], 
        [ 0.70710678,  0.70710757]])) 

The ndarray z is of the real type (numpy.float64), so the eigenvalues are computed and rounded in double precision. The entries of z differ from the identity matrix only at the 1e-10 level, which is near the limit of what double precision can resolve relative to 1; as you can see from the output of np.linalg.eig(), both eigenvalues simply print as 1, and the tiny perturbation survives only in the last digits of the eigenvectors.
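
If you want to inspect how much of that tiny structure survives, you can raise the print precision before looking at the results. A minimal sketch (not part of the recorded session):

w, v = np.linalg.eig(z)
np.set_printoptions(precision=17)   # show close to full double precision
print(w)                            # inspect the last digits of the eigenvalues
print(v)                            # and of the eigenvectors
np.set_printoptions(precision=8)    # restore the default before moving on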

svd can be thought of as an extension of the eigenvalue decomposition to arbitrary, even non-square, matrices. We can use numpy.linalg.svd() to decompose an M x N array, so let's start with a simple example:

In [51]: np.set_printoptions(precision = 4) 
In [52]: A = np.array([3,1,4,1,5,9,2,6,5]).reshape(3,3) 
In [53]: u, sigma, vh = np.linalg.svd(A) 
In [54]: u 
Out[54]: 
array([[-0.3246,  0.799 ,  0.5062], 
       [-0.7531,  0.1055, -0.6494], 
       [-0.5723, -0.592 ,  0.5675]]) 
In [55]: vh 
Out[55]: 
array([[-0.2114, -0.5539, -0.8053], 
       [ 0.4633, -0.7822,  0.4164], 
       [ 0.8606,  0.2851, -0.422 ]]) 
In [56]: sigma 
Out[56]: array([ 13.5824,   2.8455,   2.3287]) 

In this example, numpy.linalg.svd() returned a tuple of three arrays, and we unpacked it into three variables: u, sigma, and vh, in which u holds the left singular vectors of A (the eigenvectors of AA^T), vh holds the transposed right singular vectors of A (its rows are the eigenvectors of A^T A), and sigma holds the singular values of A (the square roots of the eigenvalues of both AA^T and A^T A). In this example there are three singular values, and they were returned in descending order. You might be suspicious about the result, so let's do some math to verify it:

In [57]: diag_sigma = np.diag(sigma) 
In [58]: diag_sigma 
Out[58]: 
array([[ 13.5824,   0.    ,   0.    ], 
       [  0.    ,   2.8455,   0.    ], 
       [  0.    ,   0.    ,   2.3287]]) 
In [59]: Av = u.dot(diag_sigma).dot(vh) 
In [60]: Av 
Out[60]: 
array([[ 3.,  1.,  4.], 
       [ 1.,  5.,  9.], 
       [ 2.,  6.,  5.]]) 
In [61]: np.allclose(A, Av) 
Out[61]: True 

In svd, the input array A is factored as U Σ V*, where Σ is a diagonal matrix holding the singular values. However, the sigma returned by NumPy is a 1-D array of those values, so we need to restore it to a full matrix, which in this example has the shape (3, 3). We first use numpy.diag() to turn sigma into a diagonal matrix called diag_sigma. Then we simply perform the matrix multiplication of u, diag_sigma, and vh, and check that the calculated result (Av) is identical to the original input A, meaning we have verified the svd result.
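
We can also verify the relationship between the singular values and the eigenvalues stated earlier. The following minimal sketch (not part of the recorded session) compares sigma with the square roots of the eigenvalues of A^T A; numpy.linalg.eigvalsh() is used because A^T A is symmetric:

evals = np.linalg.eigvalsh(A.T.dot(A))   # eigenvalues of A^T A, ascending order
print(np.sqrt(evals)[::-1])              # descending order: should match sigma
print(sigma)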

QR decomposition (also called QR factorization) works for any M x N array and decomposes it into an orthogonal matrix (Q) and an upper triangular matrix (R). Let's try to use it to solve the previous Ax = b problem:

In [62]: b = np.array([1,2,3]).reshape(3,1) 
In [63]: q, r = np.linalg.qr(A) 
In [64]: x = np.dot(np.linalg.inv(r), np.dot(q.T, b)) 
In [65]: x 
Out[65]: 
array([[ 0.2667], 
       [ 0.4667], 
       [-0.0667]]) 

We decomposed A using numpy.linalg.qr() to obtain q and r, so the original equation is translated into (q * r)x = b. We can then obtain x through matrix multiplication (the dot product) of the inverse of r, the inverse of q, and b; since q is orthogonal, we used its transpose instead of its inverse. As you can see, the result x is the same as the one we obtained from the matrix class and numpy.linalg.solve(); it's just another way to solve the linear system.
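
Before moving on, we can sanity-check the factors and the solution; a minimal sketch (not part of the recorded session):

print(np.allclose(q.dot(r), A))               # Q @ R reconstructs A
print(np.allclose(q.T.dot(q), np.eye(3)))     # Q is orthogonal: Q^T Q = I
print(np.allclose(x, np.linalg.solve(A, b)))  # same answer as the direct solver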

Note

In general, inverting a triangular matrix is much more efficient than inverting a general matrix; you can create a large dataset and compare the performance of the different solutions.
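
For instance, the following rough timing sketch (illustrative only; the size n and the use of time.perf_counter() are our own choices, not from the original text) compares the direct solver against the QR-based approach on a larger random system:

import time
import numpy as np

n = 500
A_big = np.random.rand(n, n)   # a larger random system
b_big = np.random.rand(n, 1)

start = time.perf_counter()
x_direct = np.linalg.solve(A_big, b_big)
print('solve:   ', time.perf_counter() - start)

start = time.perf_counter()
q_big, r_big = np.linalg.qr(A_big)
x_qr = np.dot(np.linalg.inv(r_big), np.dot(q_big.T, b_big))
print('qr + inv:', time.perf_counter() - start)

print(np.allclose(x_direct, x_qr))   # both approaches should agree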
