Matrix

A matrix is a two-dimensional array of numbers. Each element can be indexed by its row and column position. Thus, a 3 x 2 matrix A:

A = [ A_{1,1}  A_{1,2} ]
    [ A_{2,1}  A_{2,2} ]
    [ A_{3,1}  A_{3,2} ]

Transpose of a matrix

Swapping columns for rows in a matrix produces the transpose. Thus, the transpose of A is a 2 x 3 matrix:

A^T = [ A_{1,1}  A_{2,1}  A_{3,1} ]
      [ A_{1,2}  A_{2,2}  A_{3,2} ]
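To make these definitions concrete, here is a minimal sketch in NumPy (the library and the example values are assumptions for illustration, not part of the original text): it builds a 3 x 2 matrix, indexes an element by row and column, and takes its transpose.

import numpy as np

# A 3 x 2 matrix: 3 rows, 2 columns
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

print(A.shape)     # (3, 2)
print(A[2, 1])     # 6: row 3, column 2 (NumPy indices are 0-based)

# The transpose swaps rows and columns, producing a 2 x 3 matrix
print(A.T)
print(A.T.shape)   # (2, 3)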

Matrix addition

Matrix addition is defined as element-wise summation of two matrices with the same shape. Let A and B be two m x n matrices. Their sum C can be written as follows:

C_{i,j} = A_{i,j} + B_{i,j}
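As a hedged illustration (NumPy and the numbers are assumed), element-wise addition of two matrices of the same shape:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

# C[i, j] = A[i, j] + B[i, j]
C = A + B
print(C)   # [[11 22]
           #  [33 44]]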

Scalar multiplication

Multiplication with a scalar produces a matrix where each element is scaled by the scalar value. Here A is multiplied by the scalar value d:

dA = [ d A_{1,1}  d A_{1,2} ]
     [ d A_{2,1}  d A_{2,2} ]
     [ d A_{3,1}  d A_{3,2} ]
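A brief NumPy sketch (example values assumed) of scaling every element of a matrix by a scalar:

import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
d = 3

# Every element of A is multiplied by the scalar d
print(d * A)   # [[ 3  6]
               #  [ 9 12]
               #  [15 18]]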

Matrix multiplication

Two matrices A and B can be multiplied if the number of columns of A equals the number of rows of B. If A has dimensions m x n and B has dimensions n x p, then the product AB has dimensions m x p:

(AB)_{i,j} = Σ_{k=1}^{n} A_{i,k} B_{k,j}
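The dimension rule can be illustrated with a small NumPy sketch (the matrices here are assumed examples): a 2 x 3 matrix times a 3 x 2 matrix gives a 2 x 2 product.

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])       # 3 x 2

# Columns of A (3) match rows of B (3), so AB is defined and is 2 x 2
C = A @ B
print(C)          # [[ 4  5]
                  #  [10 11]]
print(C.shape)    # (2, 2)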

Properties of matrix product

Distributivity over addition: A(B + C) = AB + AC

Associativity: A(BC) = (AB)C

Non-commutativity: in general, AB ≠ BA

Vector dot-product is commutative: x^T y = y^T x

Transpose of a product is the product of the transposes in reverse order: (AB)^T = B^T A^T (all of these properties are checked numerically in the sketch below)
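The following NumPy sketch with random matrices is illustrative only, not a proof; the matrices and seed are assumed for the example.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

print(np.allclose(A @ (B + C), A @ B + A @ C))   # distributivity: True
print(np.allclose(A @ (B @ C), (A @ B) @ C))     # associativity: True
print(np.allclose(A @ B, B @ A))                 # commutativity fails here: False
print(np.isclose(x @ y, y @ x))                  # dot product commutes: True
print(np.allclose((A @ B).T, B.T @ A.T))         # (AB)^T = B^T A^T: True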

Linear transformation

The product of a matrix and a vector has special importance in linear algebra. Consider the product of a 3 x 2 matrix A and a 2 x 1 vector x, producing a 3 x 1 vector y:

y = Ax

y = x_1 a_1 + x_2 a_2,  where a_1 and a_2 are the columns of A        (C)

y_i = A_{i,1} x_1 + A_{i,2} x_2,  for i = 1, 2, 3        (R)

It is useful to consider two views of the preceding matrix-vector product, namely, the column picture (C) and the row picture (R). In the column picture, the product is a linear combination of the column vectors of the matrix, whereas in the row picture each component of y is the dot product of the corresponding row of the matrix with the vector x.
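The two pictures can be compared directly in a short NumPy sketch (the matrix and vector values are assumed examples); both reproduce the direct product y = Ax.

import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])      # 3 x 2
x = np.array([7, 8])        # the 2 x 1 vector x

y = A @ x                   # direct matrix-vector product

# Column picture: linear combination of the columns of A
y_col = x[0] * A[:, 0] + x[1] * A[:, 1]

# Row picture: dot product of each row of A with x
y_row = np.array([A[i, :] @ x for i in range(3)])

print(y, y_col, y_row)      # all three equal [23 53 83]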

Matrix inverse

The product of a matrix with its inverse is the identity matrix. Thus:

A A^{-1} = A^{-1} A = I

The matrix inverse, if it exists, can be used to solve a system of simultaneous equations expressed as a matrix-vector product. Consider a system of equations:

x_1 + 2x_2 = 3

3x_1 + 9x_2 = 21

This can be expressed as an equation involving the matrix-vector product:

[ 1  2 ] [ x_1 ]   [  3 ]
[ 3  9 ] [ x_2 ] = [ 21 ]

We can solve for the variables x_1 and x_2 by multiplying both sides by the matrix inverse:

A^{-1}Ax = A^{-1}b

x = (x_1, x_2)^T = (-5, 4)^T

The matrix inverse can be calculated by different methods. The reader is advised to view Prof. Strang's MIT lecture: bit.ly/10vmKcL.
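The worked example above can be verified with NumPy (an assumed choice of library); np.linalg.solve is generally preferred to forming the inverse explicitly, but both routes are shown.

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 9.0]])
b = np.array([3.0, 21.0])

A_inv = np.linalg.inv(A)
print(A_inv @ A)                 # the identity matrix, up to rounding
print(A_inv @ b)                 # [-5.  4.]

# Numerically preferable: solve without forming the inverse
print(np.linalg.solve(A, b))     # [-5.  4.]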

Eigendecomposition

Matrices can be decomposed into factors that give us valuable insight into the transformation the matrix represents. Eigenvalues and eigenvectors are obtained as the result of eigendecomposition. For a given square matrix A, an eigenvector is a non-zero vector that is transformed into a scaled version of itself when multiplied by the matrix. The scalar multiplier is the eigenvalue. All scalar multiples of an eigenvector are also eigenvectors:

A v = λ v

In the preceding equation, v is an eigenvector and λ is the eigenvalue.

The eigenvalue equation of matrix A is given by:

(A − λI)v = 0

A non-zero solution for v exists only when the determinant of (A − λI) vanishes, so the eigenvalues are the roots of the characteristic polynomial of degree n given by the determinant:

det(A − λI) = 0

The eigenvectors can then be found by solving for v in Av = λ v.

Some matrices, called diagonalizable matrices, can be built entirely from their eigenvectors and eigenvalues. If Λ is the diagonal matrix with the eigenvalues of matrix A on its principal diagonal, and Q is the matrix whose columns are the eigenvectors of A:

Λ = diag(λ_1, λ_2, ..., λ_n),    Q = [ v_1  v_2  ...  v_n ]

Then A = QΛQ^{-1}.
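A minimal NumPy sketch of eigendecomposition (the matrix is an assumed example) verifying Av = λv and the reconstruction A = QΛQ^{-1}:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, Q = np.linalg.eig(A)    # columns of Q are the eigenvectors
Lam = np.diag(eigvals)           # eigenvalues on the principal diagonal

# An eigenvector is only scaled by the matrix: A v = lambda v
v, lam = Q[:, 0], eigvals[0]
print(np.allclose(A @ v, lam * v))                    # True

# Rebuild A from its eigenvectors and eigenvalues
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))     # True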

Positive definite matrix

If a symmetric matrix has only positive eigenvalues, it is called a positive definite matrix. If the eigenvalues are positive or zero, the matrix is called a positive semi-definite matrix. For a positive definite matrix A and any non-zero vector x, it is true that:

x^T A x > 0
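A hedged NumPy sketch (example matrix assumed) that checks positive definiteness both through the eigenvalues and through the quadratic form x^T A x:

import numpy as np

# A symmetric matrix with positive eigenvalues
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.linalg.eigvalsh(A))     # [1. 3.]: all eigenvalues positive

# x^T A x > 0 for any non-zero x
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    print(x @ A @ x > 0)         # True for each sampled x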

Singular value decomposition (SVD)

SVD is a decomposition of any rectangular matrix A of dimensions n x p and is written as the product of three matrices:

A = USV^T

U is defined to be n x n, S is a diagonal n x p matrix, and V is p x p. U and V are orthogonal matrices; that is:

U^T U = I_n  and  V^T V = I_p

The diagonal values of S are called the singular values of A. Columns of U are called the left singular vectors of A and those of V are called the right singular vectors of A. The left singular vectors are orthonormal eigenvectors of AA^T and the right singular vectors are orthonormal eigenvectors of A^T A.

The SVD representation expands the original data into a coordinate system such that the covariance matrix is a diagonal matrix.
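To close, a short NumPy sketch of SVD (the library and example matrix are assumptions) that checks the factorization, the orthogonality of U and V, and the link between the singular values and the eigenvalues of A^T A:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))            # an n x p rectangular matrix

U, s, Vt = np.linalg.svd(A)                # s holds the singular values
S = np.zeros_like(A)
S[:3, :3] = np.diag(s)                     # embed them in an n x p diagonal matrix

print(np.allclose(A, U @ S @ Vt))          # True: A = U S V^T
print(np.allclose(U.T @ U, np.eye(4)))     # True: U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(3)))   # True: V is orthogonal

# The squared singular values are the eigenvalues of A^T A
print(np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A)))   # True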
