CHAPTER 5


Vectors and Matrices

5.1 Vectors and Matrices

We have already seen how vectors and matrices are represented in MATLAB in the chapter dedicated to variables; nevertheless, we briefly recall the notation here.

Consider the matrix

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [ ...  ...       ... ]
        [ am1  am2  ...  amn ]

You can enter this in MATLAB in the following ways:

  • A=[a11,a12,...,a1n ; a21,a22,...,a2n ; ... ; am1,am2,...,amn]
  • A=[a11 a12 ... a1n ; a21 a22 ... a2n ; ... ; am1 am2 ... amn]
  • A=maple(‘array([[a11,..,a1n],[a21,..,a2n],..,[am1,..,amn]])’)
  • A=maple(‘matrix(m,n,[a11,..,a1n,a21,..,a2n,..,am1,..,amn])’)
  • A=maple(‘matrix([[a11,..,a1n],[a21,..,a2n],..,[am1,..,amn]])’)

On the other hand, a vector V = (v1, v2,..., vn) is entered as a special case of a matrix with a single row (i.e. a matrix of dimension 1×n) in the following ways:

  • V = [v1, v2,..., vn]
  • V = [v1 v2 ... vn]
  • V = maple(‘vector([v1, v2,..., vn])’)
  • V = maple(‘vector(n,[v1, v2,..., vn])’)
  • V = maple(‘array([v1, v2, ..., vn])’)
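
For instance, a concrete numeric matrix and vector can be entered as follows (a minimal sketch; the maple variants above require the Maple-based Symbolic Math Toolbox):

>> A = [1 2 3; 4 5 6];        % a 2×3 matrix, rows separated by semicolons
>> A = [1, 2, 3; 4, 5, 6];    % the same matrix, entered with commas
>> V = [7 8 9 10];            % a row vector (a 1×4 matrix)
>> W = [7; 8; 9; 10];         % a column vector (the transpose of V, W = V')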

5.2 Operations with Numeric Matrices

MATLAB supports the most common matrix algebra operations (sum, difference, matrix product, product by a scalar), provided the corresponding dimension requirements are met.

The common MATLAB matrix commands are summarized below.

  • A + B sum of matrices A and B
  • A - B difference of the matrices A and B (A minus B)
  • c * M product of the scalar c and the matrix M
  • A * B product of the matrices A and B
  • A ^ p matrix A raised to the power of the scalar p
  • p ^ A scalar p raised to the power of the matrix A
  • expm1 (A) e^A calculated via Padé approximants
  • expm2 (A) e^A calculated via Taylor series
  • expm3 (A) e^A calculated via eigenvalues and eigenvectors
  • logm(A) natural (Napierian) logarithm of the matrix A
  • sqrtm (A) square root of the matrix A
  • funm (A, ‘function’) applies the function to the matrix A
  • transpose (A) or A' transpose of the matrix A
  • inv (A) inverse of the square matrix A  ( A- 1)
  • det (A) determinant of the square matrix A
  • rank (A) rank of the matrix A
  • trace (A) sum of the elements of the diagonal of A
  • svd (A) gives the vector V of singular values of A. The singular values of A are the square roots of the eigenvalues of the symmetric matrix A' A.
  • [U, S, V] = svd (A) gives the diagonal matrix S  of singular values of  A (ordered from largest to smallest), and the matrices U and V such that A= U * S * V'.
  • cond (A) gives the condition number of the matrix A (the ratio between the largest and the smallest singular values of A)
  • rcond (A) the reciprocal condition number of the matrix A
  • norm (A) the standard or 2-norm of A (the largest singular value of A)
  • norm(A,1) the 1-norm of A (the maximum column magnitude, where the column magnitude of a column is the sum of the absolute values of its elements)
  • norm(A,inf) the infinity norm of A (the maximum row magnitude, where the row magnitude of a row is the sum of the absolute values of its elements)
  • norm(A,'fro') the Frobenius norm of A, defined by sqrt(sum(diag(A'*A)))
  • Z = null (A) gives an orthonormal basis for the null space of A obtained from the singular value decomposition, i.e. A*Z has negligible elements, size(Z,2) is the nullity of A, and Z'*Z = I.
  • Q = orth (A) returns an orthonormal basis for the range of A, with Q'*Q = I. The columns of Q are vectors which span the range of A. The number of columns of Q is equal to the rank of A.
  • subspace (A, B) finds the angle between two subspaces specified by the columns of A and B. If A and B are column vectors of unit length, this is the same as acos(abs(A'*B)).
  • rref(A) produces the reduced row echelon form of A using Gauss-Jordan elimination with partial pivoting. The number of non-zero rows of rref (A) is the rank of A.

Here are some examples:

We consider the matrix M = [1/3,1/4,1/5; 1/4,1/5,1/6; 1/5,1/6,1/7], and find its transpose, its inverse, its determinant, its rank, its trace, its singular values, its condition number, its norm, M^3, e^M, logm(M) and sqrtm(M):

>> M = [1/3,1/4,1/5; 1/4,1/5,1/6; 1/5,1/6,1/7]

M =

    0.3333 0.2500 0.2000
    0.2500 0.2000 0.1667
    0.2000 0.1667 0.1429

>> transpose = M'

transpose =

    0.3333 0.2500 0.2000
    0.2500 0.2000 0.1667
    0.2000 0.1667 0.1429

>> inverse = inv(M)

inverse =

  1.0e+003 *

    0.3000  -0.9000  0.6300
   -0.9000   2.8800 -2.1000
    0.6300  -2.1000  1.5750

To verify that the inverse has been calculated, we multiply it by M and check that the result is the identity matrix of order 3:

>> M * inv(M)

ans =

    1.0000 0.0000 0.0000
    0.0000 1.0000 0.0000
    0.0000 0.0000 1.0000

>> determinantM = det(M)

determinantM =

  2.6455e-006

>> rankM=rank(M)

rankM =

     3

>> traceM=trace(M)

traceM =

    0.6762

>> vsingular = svd(M)

vsingular =

    0.6571
    0.0189
    0.0002

>> condition = cond(M)

condition =

  3.0886e+003

For the calculation of the norm, we find the standard norm, the 1-norm, the infinity norm and the Frobenius norm:

>> norm(M)

ans =

    0.6571

>> norm(M,1)

ans =

    0.7833

>> norm(M,inf)

ans =

    0.7833

>> norm(M,'fro')

ans =

    0.6573

>> M^3

ans =

    0.1403    0.1096    0.0901
    0.1096    0.0856    0.0704
    0.0901    0.0704    0.0578

>> logm(M)

ans =

   -2.4766    2.2200    0.5021
    2.2200   -5.6421    2.8954
    0.5021    2.8954   -4.7240

>> sqrtm(M)

ans =

    0.4631 0.2832 0.1966
    0.2832 0.2654 0.2221
    0.1966 0.2221 0.2342

We now calculate e^M using expm and the variants based on Padé approximants, Taylor series and eigenvalues:

>> expm(M)

ans =

    1.4679    0.3550    0.2863
    0.3550    1.2821    0.2342
    0.2863    0.2342    1.1984

>> expm1(M)

ans =

    1.4679    0.3550    0.2863
    0.3550    1.2821    0.2342
    0.2863    0.2342    1.1984

>> expm2(M)

ans =

    1.4679    0.3550    0.2863
    0.3550    1.2821    0.2342
    0.2863    0.2342    1.1984

>> expm3(M)

As we can see, the matrix exponential e^M coincides for all of the methods.

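The commands null, orth, rref and funm from the list above were not used in this example; here is a minimal sketch applying them, assuming the hypothetical rank-deficient matrix N below (the funm call reuses the matrix M defined above):

>> N = [1 2 3; 2 4 6; 1 1 1];   % rank(N) = 2, so the null space has dimension 1
>> Z = null(N);                 % orthonormal basis of the null space, size(Z,2) = 1
>> Q = orth(N);                 % orthonormal basis of the range, size(Q,2) = rank(N) = 2
>> rref(N)                      % reduced row echelon form: two non-zero rows

ans =

     1     0    -1
     0     1     2
     0     0     0

>> S = funm(M,'sin');           % applies the sine function to the matrix M
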
EXERCISE 5-1

Given the three matrices

    A = [ 1  1  0 ]    B = [ i  1-i  2+i ]    C = [ 1      1          1      ]
        [ 0  1  1 ]        [ 0  -1   3-i ]        [ 0  sqrt(2)i   -sqrt(2)i  ]
        [ 0  0  1 ]        [ 0   0   -i  ]        [ 1     -1         -1      ]

calculate AB - BA, A^2 + B^2 + C^2, ABC, sqrtm(A) + sqrtm(B) - sqrtm(C), e^A(e^B + e^C) and find the rank, inverse, trace, determinant, condition number and singular values of A, B and C.

>> A=[1 1 0;0 1 1;0 0 1]; B=[i 1-i 2+i;0 -1 3-i;0 0 -i];
   C=[1 1 1; 0 sqrt(2)*i -sqrt(2)*i;1 -1 -1];

>> M1=A*B-B*A

M1 =

        0            -1.0000 - 1.0000i   2.0000
        0                  0             1.0000 - 1.0000i
        0                  0                  0

>> M2=A^2+B^2+C^2

M2 =

   2.0000             2.0000 + 3.4142i   3.0000 - 5.4142i
        0 - 1.4142i   0.0000 + 1.4142i   0.0000 - 0.5858i
        0             2.0000 - 1.4142i   2.0000 + 1.4142i

>> M3=A*B*C

M3 =

   5.0000 + 1.0000i  -3.5858 + 1.0000i  -6.4142 + 1.0000i
   3.0000 - 2.0000i  -3.0000 + 0.5858i  -3.0000 + 3.4142i
        0 - 1.0000i        0 + 1.0000i        0 + 1.0000i

>> M4=sqrtm(A)+sqrtm(B)-sqrtm(C)

M4 =

   0.6356 + 0.8361i  -0.3250 - 0.8204i   3.0734 + 1.2896i
   0.1582 - 0.1521i   0.0896 + 0.5702i   3.3029 - 1.8025i
  -0.3740 - 0.2654i   0.7472 + 0.3370i   1.2255 + 0.1048i

>> M5=expm(A)*(expm(B)+expm(C))

M5 =

  14.1906 - 0.0822i   5.4400 + 4.2724i  17.9169 - 9.5842i
   4.5854 - 1.4972i   0.6830 + 2.1575i   8.5597 - 7.6573i
   3.5528 + 0.3560i   0.1008 - 0.7488i   3.2433 - 1.8406i

>> ranks=[rank(A) rank(B) rank(C)]

ranks =

     3     3     3

>> vsingular=[svd(A),svd(B),svd(C)]

vsingular =

    1.8019    4.2130    2.0000
    1.2470    1.4917    2.0000
    0.4450    0.1591    1.4142

>> traces=[trace(A) trace(B) trace(C)]

traces =

   3.0000            -1.0000                  0 + 1.4142i

>> inv(A)

ans =

     1    -1     1
     0     1    -1
     0     0     1

>> inv(B)

ans =

        0 - 1.0000i  -1.0000 - 1.0000i  -4.0000 + 3.0000i
        0            -1.0000             1.0000 + 3.0000i
        0                  0                  0 + 1.0000i

>> inv(C)

ans =

   0.5000    0               0.5000
   0.2500    0 - 0.3536i    -0.2500
   0.2500    0 + 0.3536i    -0.2500

>> determinants = [det(A) det (B) det (C)]

determinants =

   1.0000            -1.0000                  0 - 5.6569i

>> conditions = [cond(A) cond (B) cond(C)]

conditions =

    4.0489 26.4765 1.4142

EXERCISE 5-2

Consider the following matrix:

    M = [ 1/3  1/4  1/5 ]
        [ 1/4  1/5  1/6 ]
        [ 1/5  1/6  1/7 ]

Find its transpose, its inverse, its determinant, its rank, its trace, its singular values, its condition number, its norm and M^3, regarded as a symbolic matrix.

>> M = sym('[1/3,1/4,1/5; 1/4,1/5,1/6; 1/5,1/6,1/7]')

M =

[1/3,1/4,1/5]
[1/4,1/5,1/6]
[1/5,1/6,1/7]

>> Mtranspose = transpose(M)

Mtranspose =

[1/3, 1/4, 1/5]
[1/4, 1/5, 1/6]
[1/5, 1/6, 1/7]

>> Minverse = inv(M)

Minverse =

[ 300,  -900,   630]
[-900,  2880, -2100]
[ 630, -2100,  1575]

>> Mdeterminant=det(M)

Mdeterminant =

1/378000

>> Mrank=rank(M)

Mrank =

3

>> Mtrace=trace(M)

Mtrace =

71/105

>> numeric(svd(M))

ans =

   0.6571
   0.0002 - 0.0000i
   0.0189 + 0.0000i

>> norm = maple('norm([[1/3,1/4,1/5],[1/4,1/5,1/6],[1/5,1/6,1/7]])')

norm =

47/60

>> sympow(M,3)

ans =

[10603/75600,     1227/11200,  26477/294000]
[1227/11200,    10783/126000, 74461/1058400]
[26477/294000, 74461/1058400,   8927/154350]

Now we find the norms and condition number of M as a numeric matrix:

>> [norm(numeric(M)),norm(numeric(M),1),cond(numeric(M),inf), cond(numeric(M),'fro'),normest(numeric(M))]

ans =

  1.0e+003 *

0.0008    4.6060    3.0900    0.0007    0.8

>> [cond(numeric(M),1),cond(numeric(M),2),cond(numeric(M),'fro'), condest(numeric(M))]

ans =

  1.0e+003 *

    4.6060    3.0886    3.0900    4.6060

EXERCISE 5-3

Define a square matrix A of dimension 5 whose elements are given by A(i,j) = i^3 - j^2. Extract the submatrix of A formed by rows 2 to 4 and columns 3 to 4. Delete rows 2 to 4 of the matrix A, as well as column 5. Exchange the first and last rows of the matrix A. Exchange the first and last columns of the matrix A. Insert a column of 1s to the right of the matrix A. Insert a column of 1s to the left of the matrix A. Insert two rows of 1s at the top of the matrix A. Perform the same operation at the bottom.

First, we generate the matrix A as follows:

>> A=sym(maple('matrix(5,5,(i,j)-> i^3-j^2)'))

A =

[   0,  -3,  -8, -15, -24]
[   7,   4,  -1,  -8, -17]
[  26,  23,  18,  11,   2]
[  63,  60,  55,  48,  39]
[ 124, 121, 116, 109, 100]

>> maple('A:=matrix(5,5,(i,j)-> i^3-j^2)');
>> sym(maple('submatrix(A,2..4,3..4)'))

ans =

[ -1, -8]
[ 18, 11]
[ 55, 48]

>> sym(maple('delrows(A,2..4)'))

ans =

[   0,  -3,  -8, -15, -24]
[ 124, 121, 116, 109, 100]

>> sym(maple('delcols(A,5..5)'))

ans =

[   0,  -3,  -8, -15]
[   7,   4,  -1,  -8]
[  26,  23,  18,  11]
[  63,  60,  55,  48]
[ 124, 121, 116, 109]

>> pretty(sym(maple('swapcol(A,1,5),swaprow(A,1,5)')))

     [-24     -3     -8    -15      0]  [124    121    116    109    100]
     [                               ]  [                               ]
     [-17      4     -1     -8      7]  [  7      4     -1     -8    -17]
     [                               ]  [                               ]
     [  2     23     18     11     26], [ 26     23     18     11      2]
     [                               ]  [                               ]
     [ 39     60     55     48     63]  [ 63     60     55     48     39]
     [                               ]  [                               ]
     [100    121    116    109    124]  [  0     -3     -8    -15    -24]

>> maple('B:=array([1,1,1,1,1])');
>> pretty(sym(maple('augment(A,B),augment(B,A)')));

[  0     -3     -8    -15    -24    1]  [1      0     -3     -8    -15    -24]
[                                    ]  [                                    ]
[  7      4     -1     -8    -17    1]  [1      7      4     -1     -8    -17]
[                                    ]  [                                    ]
[ 26     23     18     11      2    1], [1     26     23     18     11      2]
[                                    ]  [                                    ]
[ 63     60     55     48     39    1]  [1     63     60     55     48     39]
[                                    ]  [                                    ]
[124    121    116    109    100    1]  [1    124    121    116    109    100]


>> maple('C:=array([[1,1,1,1,1],[1,1,1,1,1]])');
>> pretty(sym(maple('stack(C,A),stack(A,C)')));

     [  1      1      1      1      1]  [  0     -3     -8    -15    -24]
     [                               ]  [                               ]
     [  1      1      1      1      1]  [  7      4     -1     -8    -17]
     [                               ]  [                               ]
     [  0     -3     -8    -15    -24]  [ 26     23     18     11      2]
     [                               ]  [                               ]
     [  7      4     -1     -8    -17], [ 63     60     55     48     39]
     [                               ]  [                               ]
     [ 26     23     18     11      2]  [124    121    116    109    100]
     [                               ]  [                               ]
     [ 63     60     55     48     39]  [  1      1      1      1      1]
     [                               ]  [                               ]
     [124    121    116    109    100]  [  1      1      1      1      1]

5.3 Eigenvalues and Eigenvectors

MATLAB provides commands for working with the eigenvalues and eigenvectors of a square matrix. For numeric matrices, we have the following:

  • eig(A) Finds the eigenvalues of the square matrix A.
  • [V, D] = eig(A) Returns the diagonal matrix D of eigenvalues of A, and a matrix V whose columns are the corresponding eigenvectors, so that A * V = V * D.
  • eig(A,B) Returns a vector with the generalized eigenvalues of the square matrices A and B. The generalized eigenvalues of A and B are the roots of the polynomial in λ:  det ( λ * B - A).
  • [V, D] = eig(A, B) returns the diagonal matrix D of generalized eigenvalues of A and B and a matrix V whose columns are the corresponding eigenvectors, so that A * V = B * V * D.
  • [AA, BB, Q, Z, V] = qz(A, B) Calculates the upper triangular matrices AA and BB and the matrices Q and Z such that Q * A * Z = AA and Q * B * Z = BB, and gives the matrix V of generalized eigenvectors of A and B, so that A * V * diag (BB) = B * V * diag (AA).
  • [T, B] = balance(A) Returns a similarity transformation T such that B = T\A*T, and B has, as closely as possible, approximately equal row and column norms. The matrix B is called the balanced matrix of A.
  • balance(A) Computes the balanced matrix B of A. This is used to approximate the eigenvalues of A when they are difficult to estimate. We have  eig (A) = eig (balance (A)).
  • [V, D] = cdf2rdf (V, D) If the eigensystem [V,D]= eig(X) has complex eigenvalues appearing in complex-conjugate pairs, cdf2rdf transforms the system so D is in real diagonal form, with 2×2 real blocks along the diagonal replacing the original complex pairs. The eigenvectors are transformed so that X = V*D/V continues to hold.
  • [U, T] = schur (A) Returns a matrix T and a unitary matrix U such that A = U * T * U' and U'* U = eye (size (U)). If A is complex, T is an upper triangular matrix with the eigenvalues of A on its diagonal. If A is real, T has the real eigenvalues of A on its diagonal, and any complex eigenvalues appear in 2 × 2 blocks on the diagonal of T.
  • schur(A) Returns only the matrix T of the above decomposition.
  • [U, T] = rsf2csf (U, T) Converts the real Schur form to the complex form.
  • [H, P] = hess(A) Returns the unitary matrix P and Hessenberg matrix H such that A = P * H * P' and P'* P = eye (size (P)).
  • hess(A) Returns the Hessenberg matrix of A.
  • poly(A) Returns the characteristic polynomial of the matrix A.
  • poly(V) Returns a vector whose components are the coefficients of the polynomial whose roots are the elements of the vector V.
  • vander(C) Returns the Vandermonde matrix A such that its j-th column is A(:,j) = C.^(n-j).

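As a brief illustration of the generalized eigenvalue commands in the list above (a sketch with hypothetical matrices A and B chosen for simplicity):

>> A = [2 0; 0 3]; B = [1 0; 0 2];
>> sort(eig(A, B))        % roots of det(lambda*B - A) = 0, here 3/2 and 2

ans =

    1.5000
    2.0000

>> [V, D] = eig(A, B);    % the columns of V satisfy A*V = B*V*D
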
EXERCISE 5-4

Consider the matrix:

    M = [ 1    -1      3    ]
        [-1     i    -1-2i  ]
        [ i     1     i-2   ]

Compute its eigenvalues and eigenvectors, the balanced matrix with its eigenvalues, and its characteristic polynomial.

>> M=[1,-1,3;-1,i,-1-2i;i,1,i-2];
>> [V,D] = eig(M)

V =

   0.9129             0.1826 + 0.5477i  -0.1826 + 0.3651i
  -0.2739 - 0.0913i   0.5477 - 0.1826i   0.3651 - 0.7303i
  -0.0913 + 0.2739i  -0.1826 - 0.5477i   0.1826 - 0.3651i

D =

   1.0000 + 1.0000i  0                0
   0                -2.0000 + 1.0000i 0
   0                 0                0

We see that the eigenvalues of M are 1 + i, -2 + i and 0, and the eigenvectors are the columns of the matrix V. We now calculate the balanced matrix of M and verify that its eigenvalues coincide with those of M:

>> balance(M)

ans =

   1.0000            -1.0000             1.5000
  -1.0000             0 + 1.0000i       -0.5000 - 1.0000i
   0 + 2.0000i        2.0000            -2.0000 + 1.0000i

>> eig(balance(M))

ans =

 1.0000 + 1.0000i
-2.0000 + 1.0000i
 0

We now calculate the characteristic polynomial of M:

>> p=poly(M)

p =

   1.0000    1.0000 - 2.0000i  -3.0000 - 1.0000i    0

>> vpa(poly2sym(p))

ans =

x^3+x^2-2.*i*x^2-3.*x-1.*i*x

Thus, the characteristic polynomial is x^3 + x^2 - 2ix^2 - 3x - ix.

EXERCISE 5-5

Consider the square matrix A of order 5 whose (i,j)th element is given by 1/(i+j-1/2). Compute the eigenvalues, eigenvectors, characteristic polynomial, minimal polynomial, characteristic matrix and singular values of A. Also find the vector of condition numbers of the eigenvalues and analyze whether A is positive definite, negative definite or positive or negative semidefinite.

MATLAB enables you to define this type of symbolic matrix in the general form:

>> A=sym(maple('matrix(5,5,(i,j)-> 1/(i+j-1/2))'))

A =

[ 2/3,  2/5,  2/7,  2/9, 2/11]
[ 2/5,  2/7,  2/9, 2/11, 2/13]
[ 2/7,  2/9, 2/11, 2/13, 2/15]
[ 2/9, 2/11, 2/13, 2/15, 2/17]
[2/11, 2/13, 2/15, 2/17, 2/19]

>> [V, E] = eig (A)

V =

[ -.1612e-1, -.6740e-2,     .3578,     2.482,    -288.7]
[     .2084,     .1400,    -2.513,    -15.01,     2298.]
[    -.7456,    -.6391,     3.482,     20.13,    -3755.]
[         1,         1,         1,         1,         1]
[    -.4499,    -.5011,    -2.476,    -8.914,     1903.]

E =

[  2/55*.4005e-4,              0,              0,              0,              0]
[              0, 2/55* .3991e-2,              0,              0,              0]
[              0,              0,    2/55* .1629,              0,              0]
[              0,              0,              0,    2/55* 3.420,              0]
[              0,              0,              0,              0,    2/55* 34.16]

As is well known, the eigenvectors are the columns of the matrix V and the eigenvalues are the elements of the diagonal of the matrix E.

>> pretty(simple(poly(A)))


 5   10042  4   362807509088   3    268537284608    2
x  - ----- x  + ------------- x  - --------------- x
     7315       2228304933855      285965799844725

          22809860374528            4359738368
     + --------------------- x - ------------------------
       169975437532179654375     177624332221127738821875

We can approximate the above output as follows:

>> pretty(simple(vpa(poly(A))))

       5        4        3           2         -6           -12
      x -1.373 x +.1628 x -.0009391 x +.1342*10  x -.1934*10

The singular values are calculated in the following way:

>> pretty(simple(svd(A)))
                                 [        -5]
                                 [.1456*10  ]
                                 [          ]
                                 [ .0001451 ]
                                 [          ]
                                 [ .005923  ]
                                 [          ]
                                 [  .1244   ]
                                 [          ]
                                 [  1.242   ]

The minimal polynomial and the characteristic matrix are calculated in the following way:

>> pretty(simple(sym(maple('minpoly(matrix(5,5,(i,j)-> 1/(i+j-1/2)),x)'))))

        34359738368             22809860374528          268537284608    2
- ------------------------ + --------------------- x - --------------- x
  177624332221127738821875   169975437532179654375     285965799844725


       362807509088   3   10042  4    5
     + ------------- x  - ----- x  + x
       2228304933855      7315

>> pretty(simple(sym(vpa(maple('minpoly(matrix(5,5,(i,j)->1/(i+j-1/2)),x)')))))

              -12           -6               2          3          4    5
     -.1934 10    + .1342 10   x - .0009391 x  + .1628 x  - 1.373 x  + x

>> pretty(simple(sym(maple('charmat(matrix(5,5,(i,j)-> 1/(i+j-1/2)),x)'))))


           [                                                 -2   ]
           [x - 2/3     -2/5        -2/7        -2/9         --   ]
           [                                                 11   ]
           [                                                      ]
           [                                     -2          -2   ]
           [ -2/5      x - 2/7      -2/9         --          --   ]
           [                                     11          13   ]
           [                                                      ]
           [                                     -2          -2   ]
           [ -2/7       -2/9      x - 2/11       --          --   ]
           [                                     13          15   ]
           [                                                      ]
           [             -2          -2                      -2   ]
           [ -2/9        --          --       x - 2/15       --   ]
           [             11          13                      17   ]
           [                                                      ]
           [  -2         -2          -2          -2               ]
           [  --         --          --          --       x - 2/19]
           [  11         13          15          17               ]

The vector of condition numbers of the eigenvalues is calculated as follows:

>> condeig(numeric(A))

ans =

    1.0000
    1.0000
    1.0000
    1.0000
    1.0000

In a more complete way, we can calculate the matrix V whose columns are the eigenvectors of A, the diagonal matrix D whose diagonal elements are the eigenvalues of A, and the vector S of condition numbers of the eigenvalues of A, by using the command:

>> [V,D,s] = condeig(numeric(A))

V =

    0.0102    0.0697    0.2756   -0.6523    0.7026
   -0.1430   -0.4815   -0.7052    0.1593    0.4744
    0.5396    0.6251   -0.2064    0.3790    0.3629
   -0.7526    0.2922    0.2523    0.4442    0.2954
    0.3490   -0.5359    0.5661    0.4563    0.2496

D =

    0.0000         0         0         0         0
         0    0.0001         0         0         0
         0         0    0.0059         0         0
         0         0         0    0.1244         0
         0         0         0         0    1.2423

s =
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000

Using the command definite, we find that the matrix A is positive definite:

>> maple('definite(matrix(5,5,(i,j)-> 1/(i+j-1/2)),positive_def)')

ans =

true

5.4 Matrix Decomposition

MATLAB provides commands that allow you to decompose a matrix as a product of orthogonal, diagonal and triangular matrices.

We have already seen how the command [U, S, V] = svd (A) returns a diagonal matrix S of singular values of A (in decreasing order of magnitude), and orthogonal matrices U and V such that A = U * S * V'.

We have also seen that you can obtain the Jordan decomposition of a square matrix A via the command [V, J] = jordan (A), which returns the Jordan canonical matrix J of A with the eigenvalues of A on its diagonal and the similarity transform V whose columns are the eigenvectors of A, so that V^(-1) * A * V = J.

On the other hand, we have also seen that you can obtain a decomposition of a square matrix A via the command schur, [U, T] = schur(A), which returns a matrix T and an orthogonal matrix U such that A = U * T * U' and U'* U = eye (size (U)). If A is complex, T is an upper triangular matrix with the eigenvalues of A on its diagonal. For real A, the matrix T has the real eigenvalues of A on its diagonal and any complex eigenvalues in 2×2 diagonal blocks of T.

We can also find the Hessenberg decomposition of the matrix A via the command [H, P] = hess (A), which gives the orthogonal matrix P and Hessenberg matrix H such that A= P * H * P' and P'* P = eye (size (P)).

In addition, MATLAB has a number of other commands for the numeric and symbolic decomposition of a matrix. They include the following:

  • [L, U] = lu (A) Decomposes the matrix A as the product A = L * U (an LU decomposition), where U is an upper triangular matrix and L is a permutation of a lower triangular matrix.
  • [L, U, P] = lu(A) Returns the lower triangular matrix L, the upper triangular matrix U and the permutation matrix P such that P *A = L * U.
  • R = chol(A) Returns the upper triangular matrix R such that R'* R = A (a Cholesky decomposition), where A is positive definite. If A is not positive definite, an error is returned.
  • [Q, R] =qr (A) Returns the upper triangular matrix R of the same dimension as A, and the orthogonal matrix Q such that A = Q * R (a QR decomposition). This decomposition can be applied to non-square matrices.
  • [Q, R, E] = qr(A) Returns the upper triangular matrix R of the same dimension as A, the matrix permutation E and the orthogonal matrix Q such that A * E = Q * R.
  • X = pinv(A) Returns the matrix X (the pseudo-inverse of A), of the same dimension as A' such that A * X * A = A and X * A * X = X, where A * X and X * A are hermitian.

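Since Exercise 5-6 below illustrates chol failing on a matrix that is not positive definite, here is a minimal sketch of a successful Cholesky factorization, using a hypothetical positive definite matrix S:

>> S = [4 2; 2 3];            % symmetric positive definite
>> R = chol(S)                % upper triangular factor such that R'*R = S

R =

    2.0000    1.0000
         0    1.4142

>> err = norm(R'*R - S);      % zero up to round-off error
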
In addition, the commands listed below allow the decomposition of both numeric and symbolic matrices. All of these commands must be preceded by the command maple.

  • LUdecomp(A,P=‘p’,L=‘l’,U=‘u’,U1=‘u1’,R=‘r’) decomposes the matrix A into the product A = evalm(P&*L&*U)  (LU decomposition), where U is an upper triangular matrix, L is a lower triangular matrix and P is a pivot factor. In addition, U = evalm(U1&*R)  with  U1  upper triangular and R a row reduced factor, so that A = evalm(P&*L&*U1&*R).
  • cholesky(A) returns the lower triangular matrix R such that A = evalm(R&*R')  (Cholesky decomposition of A). A must be positive definite.
  • QRdecomp(A,Q=‘q’) returns the upper triangular matrix R of the same dimension as A, and the orthonormal matrix Q such that  A = evalm(Q&*R)  (QR decomposition  of  A).
  • companion(poly,var) gives the companion matrix C associated with the given monic polynomial in the specified variable. If poly = a0 + a1x + ... + x^n, then C(i,n) = -coeff(poly,var,i-1) for i=1...n, C(i,i-1) = 1 for i=2...n, and C(i,j) = 0 for the rest of the elements of the matrix.
  • frobenius(A) or ratform(A) returns the canonical Frobenius form F of the matrix A. F is a block diagonal matrix (F = diag(C1,C2,...,Ck)), where the Ci are the companion matrices associated to polynomials p1, p2,..., pk such that pi divides pi-1, i = 2...k.
  • frobenius(A,‘P’) assigns to P the transformation matrix corresponding to the Frobenius form of the matrix A, so that evalm(P^(-1) &* A &* P) = F.
  • smith(A,var) computes the Smith normal form of a matrix with univariate polynomial entries in var over the integers.
  • smith(A,var,U,V) in addition returns the matrices U and V such that S = evalm(U&*A&*V).
  • ismith(A,var) gives the diagonal matrix corresponding to the Smith normal form S of the square matrix A of polynomials in the variable  var.
  • ismith(A,var,U,V) in addition returns the matrices U and V such that S =evalm(U&*A&*V).
  • hermite(A,var) computes the Hermite normal form (reduced row echelon form) of a matrix A of univariate polynomials in var.
  • hermite(A,var,U) in addition returns the matrix U such that H = evalm(U&*A).
  • ihermite(A,var) computes the Hermite normal form (reduced row echelon form) of a matrix A of univariate polynomials in var over the integers.
  • ihermite(A,var,U) in addition returns the matrix U such that H = evalm(U&*A).
  • gaussjord (A) returns the reduced row echelon (Gauss-Jordan) form of the matrix A, obtained using Gauss-Jordan elimination with partial pivoting. This is used to facilitate the solution of systems of linear equations whose coefficient matrix is the matrix A.
  • gaussjord (A, j) returns the j-th column of the above matrix.
  • gaussjord(A,r,d) gives the row reduced echelon form of the matrix A, assigns to the variable r the rank of A and to the variable d the determinant of submatrix(A,1..r,1..r). This subarray is used for solving systems of linear equations whose coefficient matrix is A.
  • gausselim(A) performs Gaussian elimination with row pivoting on A, returning the reduced matrix. This is used to facilitate the solution of systems of linear equations whose coefficient matrix is the matrix A.
  • gausselim(A, j) returns the j-th column of the row reduced matrix of A.
  • gausselim(A,r,d) returns the row reduced matrix of A, assigns the variable r to the rank of A, and the variable d to the determinant of  submatrix(A, 1..r,1..r) . This subarray is used for solving systems of linear equations whose coefficient matrix is A.
  • backsub(A) returns the vector x such that A * x = V, where V is the last column of the matrix A. If A is the result of applying forward Gaussian elimination to the augmented matrix of a system of linear equations (via gausselim or gaussjord, for example), backsub completes the solution by back substitution.
  • backsub(A, V) returns the vector x such that A * x = V.
  • backsub(A,V,t) returns the vector x such that A * x = V, where the parameter t is used for a possible family of parametric solutions of the system.
  • forwardsub(A,V) returns the vector x such that A * x = V. If A is the result of applying Gaussian elimination to the matrix of a system of linear equations (via LUdecomp, for example), forwardsub completes the solution by forward substitution.
  • forwardsub(A,V,t) returns the matrix X such that A * X = V, where the parameter t is used for a possible family of parametric solutions of the system.
  • forwardsub (A) returns the vector x such that A * x = V, where V is the last column of A.
  • forwardsub(A,B) returns the matrix X such that A * X = B.
  • geneqns(A,[x1,...,xn]) generates a system of linear equations in the given variables, equating each to zero, where the coefficients are determined by the matrix A.
  • geneqns(A,[x1,...,xn],V) generates a system of linear equations in the given variables, where the right-hand sides of the equations are determined by the vector V and the coefficients are determined by the matrix A.
  • genmatrix([equation1,...,equationm],[x1,...,xn]) generates the matrix corresponding to the given linear equations with respect to the specified variables.
  • genmatrix([equation1,...,equationm],[x1,...,xn],flag) generates the matrix corresponding to the given linear equations with respect to the specified variables, including as the last column of the matrix the right-hand sides of the equations.
  • genmatrix([equation1,...,equationm],[x1,..,xn],name) generates the matrix corresponding to the given linear equations with respect to the specified variables, and assigns a name to the vector that contains the right-hand sides of the equations.

EXERCISE 5-6

Consider the 3×3 matrix A whose rows are given by the vectors (1,5,-2), (-7,3,1) and (2,2,-2). Find the Schur, LU, QR, Cholesky, Hessenberg and singular value decompositions of A. Verify the results. Also find the pseudoinverse of A.

First, we find the Schur decomposition, checking that the result is correct:

>> A = [1,5,-2; -7,3,1; 2,2,-2];
>> [U, T] = schur (A)

U =

   -0.0530   -0.8892   -0.4544
   -0.9910   -0.0093    0.1337
    0.1231   -0.4573    0.8807

T =

    2.4475   -5.7952   -4.6361
    5.7628    0.3689    2.4332
         0         0   -0.8163

Now, we check that U * T * U'= A and that U * U'= eye (3):

>> [U * T * U', U * U']

ans =
    1.0000    5.0000   -2.0000               1.0000    0.0000    0.0000
   -7.0000    3.0000    1.0000               0.0000    1.0000    0.0000
    2.0000    2.0000   -2.0000               0.0000    0.0000    1.0000

Now, we find the LU, QR, Cholesky, Hessenberg and singular value decompositions, checking the results in each case:

>> [L, U, P] = lu (A)

L =

    1.0000         0         0
   -0.1429    1.0000         0      Lower triangular matrix
   -0.2857    0.5263    1.0000

U =

   -7.0000    3.0000    1.0000
         0    5.4286   -1.8571     Upper triangular matrix
         0         0   -0.7368
P =

     0     1     0
     1     0     0
     0     0     1

>> [P * A, L * U]

ans =

    -7     3     1    -7     3     1
     1     5    -2     1     5    -2     we have that P*A=L*U
     2     2    -2     2     2    -2

>> [Q, R, E] = qr (A)

Q =

   -0.1361   -0.8785   -0.4579
    0.9526   -0.2430    0.1831
   -0.2722   -0.4112    0.8700

R =

   -7.3485 1.6330 1.7691
    0     -5.9442 2.3366  Upper triangular matrix
    0      0     -0.6410

E =
     1 0 0
     0 1 0
     0 0 1

>> [A * E, Q * R]

ans =

    1.0000 5.0000 -2.0000  1.0000 5.0000 -2.0000
   -7.0000 3.0000  1.0000 -7.0000 3.0000  1.0000
    2.0000 2.0000 -2.0000  2.0000 2.0000 -2.0000

Then, A * E = Q * R.

>> R = chol(A)

??? Error using ==> chol
Matrix must be positive definite.

We obtain an error message because the matrix is not positive definite.

>> [P,H] = hess(A)

P =

    1.0000  0      0
    0      -0.9615 0.2747
    0       0.2747 0.9615

H =

    1.0000 -5.3571 -0.5494
    7.2801  1.8302 -2.0943
    0      -3.0943 -0.8302

>> [P*H*P', P'*P]

ans =

    1.0000 5.0000 -2.0000 1.0000 0      0
   -7.0000 3.0000  1.0000 0      1.0000 0
    2.0000 2.0000 -2.0000 0      0      1.0000

Then, PHP'= A and P'P = I.

>> [U, S, V] = svd (A)

U =

   -0.1034 -0.8623   0.4957
   -0.9808  0.0056  -0.1949
    0.1653 -0.5064  -0.8463

S =

    7.8306 0      0
    0      6.2735 0        diagonal matrix
    0      0      0.5700

V =

    0.9058 -0.3051 0.2940
   -0.3996 -0.8460 0.3530
   -0.1411  0.4372 0.8882

>> U * S * V'

ans =

    1.0000 5.0000 -2.0000
   -7.0000 3.0000  1.0000 therefore USV'= A
    2.0000 2.0000 -2.0000

Now, we calculate the pseudoinverse of A:

>> X = pinv (A)

X =

    0.2857 -0.2143 -0.3929
    0.4286 -0.0714 -0.4643
    0.7143 -0.2857 -1.3571

>> [A * X * A, X * A * X]

ans =

    1.0000 5.0000 -2.0000 0.2857 -0.2143 -0.3929
   -7.0000 3.0000  1.0000 0.4286 -0.0714 -0.4643
    2.0000 2.0000 -2.0000 0.7143 -0.2857 -1.3571

Thus, we have AXA = A and XAX = X.

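As a usage note (a sketch with a hypothetical overdetermined system), the pseudoinverse also gives the least-squares solution of a rectangular system A * x = b:

>> A = [1 1; 1 2; 1 3];       % 3 equations, 2 unknowns
>> b = [1; 2; 2];
>> x = pinv(A) * b            % least-squares solution, here equal to A\b

x =

    0.6667
    0.5000
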
EXERCISE 5-7

Consider the square matrix of order 5 whose (i,j)th element is defined by Aij = i+j-1/2. Calculate its Jordan form (and check the result). Find its LU, QR, Frobenius, Smith and Hermite decompositions, calculating the matrices involved and verifying that they do indeed yield the original matrix.

>> A=sym(maple('matrix(5,5,(i,j)-> i+j-1/2)'))

A =

[3/2, 5/2, 7/2, 9/2, 11/2]
[5/2, 7/2, 9/2, 11/2, 13/2]
[7/2, 9/2, 11/2, 13/2, 15/2]
[9/2, 11/2, 13/2, 15/2, 17/2]
[11/2, 13/2, 15/2, 17/2, 19/2]

>> [V, J] = jordan (A);
>> pretty(sym(V))

         [             1/2                1/2       22   19]
         [8/9, 9/170 17 + 3/10, 9/170 3/17 +  3/10, --,  --]
         [                                          45   45]
         [                                                 ]
         [-71            1/2           1/2    -7        ]
         [---, - 2/85 17 + 1/5, 2/85 17 + 1/5,---, - 2/9]
         [90                                  18        ]
         [                                                 ]
         [-67          1/2               1/2      -49   -14]
         [---, 1/170 17 + 1/10, -1/170 17 + 1/10, ----, ---]
         [90                                       90   45 ]
         [                                                 ]
         [             1/2         1/2                     ]
         [3/10, 3/85 17,  - 3/85 17,   3/10,  - 2/5]
         [                                                           ]
          [31      11   1/2               11   1/2            -13   -23]
         [-- ,   --- 17    - 1/10 ,   - --- 17    - 1/10 ,   -- ,  --]
         [90     170                    170                  90    45]

>> pretty(sym(J))

            [0            0                    0            0    0]
            [                                                     ]
            [                 1/2                                 ]
            [0 55/4 + 15/4 17                  0            0    0]
            [                                                     ]
            [                                   1/2               ]
            [0            0         55/4-15/4 17            0    0]
            [                                                     ]
            [0            0                    0            0    0]
            [                                                     ]
            [0            0                    0            0    0]

>> pretty(simple(sym(symmul(symmul(V,J),inv(V)))))

                    [3/2  5/2   7/2   9/2    11/2]
                    [5/2  7/2   9/2   11/2   13/2]
                    [7/2  9/2   11/2  13/2   15/2]
                    [9/2  11/2  13/2  15/2   17/2]
                     [11/2 13/2  15/2  17/2   19/2]

We have calculated the transformation matrix V and the diagonal matrix (the Jordan form) J of A. We have also verified that V * J * V^(-1) = A. We now calculate the LU decomposition of A and the matrices involved, checking the result. Since symbolic matrices are involved, we will use the maple command.

>> maple('A:=matrix(5,5,(i,j)-> i+j-1/2)');
>> pretty (sym (maple ('LUdecomp(A,P=p,L=l,U=u,U1=u1,R=r)')))

                     [3/2    5/2     7/2     9/2    11/2]
                     [                                  ]
                     [0    - 2/3   - 4/3    - 2    - 8/3]
                     [                                  ]
                     [ 0      0       0       0      0  ]
                     [                                  ]
                     [ 0      0       0       0      0  ]
                     [                                  ]
                     [ 0      0       0       0      0  ]

>> pretty(sym(maple('print(p,l)')))

              [1 0 0 0 0]  [1    0    0    0    0]
              [         ]  [                     ]
              [0 1 0 0 0]  [5/3  1    0    0    0]
              [         ]  [                     ]
              [0 0 1 0 0], [7/3  2    1    0    0]
              [         ]  [                     ]
              [0 0 0 1 0]  [3    3    0    1    0]
              [         ]  [                     ]
              [0 0 0 0 1]  [11/3 4    0    0    1]

>> pretty(sym(maple('print(u1,r)')))

           [3/2    5/2     0    0    0]  [1    0    -1    -2    -3]
           [                          ]  [                        ]
           [0    - 2/3     0    0    0]  [0    1     2     3     4]
           [                          ]  [                        ]
           [ 0      0      1    0    0], [0    0     0     0     0]
           [                          ]  [                        ]
           [ 0      0      0    1    0]  [0    0     0     0     0]
           [                          ]  [                        ]
           [ 0      0      0    0    1]  [0    0     0     0     0]

>> pretty (sym (maple ('evalm(p&*l&*u1&*r), evalm(p&*l&*u)')))

[3/2  5/2  7/2  9/2  11/2]  [3/2 5/2 7/2 9/2   11/2]
[                        ]  [                      ]
[5/2  7/2  9/2 11/2  13/2]  [5/2 7/2 9/2 11/2  13/2]
[                        ]  [                      ]
[7/2  9/2 11/2 13/2  15/2], [7/2 9/2 11/2 13/2 15/2]
[                        ]  [                      ]
[9/2 11/2 13/2 15/2  17/2]  [9/2 11/2 13/2 15/2 17/2]
[                        ]  [                       ]
[11/2 13/2 15/2 17/2 19/2]  [11/2 13/2 15/2 17/2 19/2]

We see that p * l * u1 * r = A and that p * l * u = A. We will now calculate the QR decomposition of A and the matrices involved, checking the result.

>> pretty(sym(maple('print(R)')))

        [       1/2 71    1/2      85    1/2      33    1/2 113    1/2]
        [1/2 285,  --- 285,       --- 285,       --- 285,   --- 285   ]
        [          114            114             38        114       ]
        [                                                             ]
        [                 1/2            1/2            1/2        1/2]
        [0,       2/57 570,      4/57 570,      2/19 570,   8/57 570  ]
        [                                                             ]
        [0 ,            0 ,            0 ,            0 ,            0]
        [                                                             ]
        [0 ,            0 ,            0 ,            0 ,            0]
        [                                                             ]
        [0 ,            0 ,            0 ,            0 ,            0]

>> pretty(sym(maple('print(q)')))

    [        1/2         1/2        1/2                             ]
    [1/95 285,   3/95 570,    1/5/10,         0,                   0]

    [        1/2   11    1/2        1/2        1/2                  ]
    [1/57 285,    --- 570,  - 1/5 10,   1/10 30,                   0]
    [             570                                               ]

    [         1/2        1/2        1/2        1/2              1/2 ]
    [7/285 285, 2/285 570,    - 1/10, - 2/15 30,           1/6 6    ]

    [        1/2         1/2                   1/2             1/2  ]
    [3/95 285, - 1/190 570,        0, - 1/30 30,        - 1/3 6     ]

    [ 11     1/2         1/2        1/2        1/2               1/2]
    [--- 285 , - 1/57 570,   1/10/10,   1/15 30,            1/6 6   ]
    [285                                                            ]

>> pretty(sym(maple('evalm(q&*R)')))

                    [3/2  5/2  7/2  9/2  11/2]
                    [                        ]
                    [5/2  7/2  9/2 11/2  13/2]
                    [                        ]
                    [7/2  9/2 11/2 13/2  15/2]
                    [                        ]
                     [9/2 11/2 13/2 15/2  17/2]
                     [                        ]
                     [11/2 13/2 15/2 17/2 19/2]

We see that q * R = A. Next we find the Smith decomposition of the matrix A and the matrices involved, checking the result.

>> pretty(sym(maple('smith(A,X,U,V)')))

                            [1 0 0 0 0]
                            [         ]
                            [0 1 0 0 0]
                            [         ]
                            [0 0 0 0 0]
                            [         ]
                            [0 0 0 0 0]
                            [         ]
                            [0 0 0 0 0]

>> pretty(sym(maple('print(U,V)')))


                                          [     -13                  ]
         [0     0     0     0      2/11]  [1    ---     1     2     3]
         [                             ]  [      11                  ]
         [0     0     0    11/2   - 9/2]  [                          ]
         [                             ]  [0     1     -2    -3    -4]
         [-1    2    -1     0       0  ], [                          ]
         [                             ]  [0     0      1     0     0]
         [ 0    1    -2     1       0  ]  [                          ]
         [                             ]  [0     0      0     1     0]
         [ 0    0     1     -2      1  ]  [                          ]
                                          [0     0      0     0     1]

>> pretty(sym(maple('evalm(U&*A&*V)')))

                            [1 0 0 0 0]
                            [         ]
                            [0 1 0 0 0]
                            [         ]
                            [0 0 0 0 0]
                            [         ]
                            [0 0 0 0 0]
                            [         ]
                            [0 0 0 0 0]

We see that U * A * V = Smith matrix. Next we calculate the Hermite decomposition of the matrix A and find the matrices involved.

>> pretty(sym(maple('H:=hermite(A,x,V); V:=evalm(V)')))
>> pretty(sym(maple('print(H,V)')))

           [1    0    -1    -2    -3]  [-7/2    5/2     0    0    0]
           [                        ]  [                           ]
           [0    1     2     3     4]  [5/2   - 3/2     0    0    0]
           [                        ]  [                           ]
           [0    0     0     0     0], [ 2       -4     2    0    0]
           [                        ]  [                           ]
           [0    0     0     0     0]  [ 4       -6     0    2    0]
           [                        ]  [                           ]
           [0    0     0     0     0]  [ 6       -8     0    0    2]

>> pretty(sym(maple('evalm(V&*A)')))

                          [1    0    -1    -2    -3]
                          [                        ]
                          [0    1     2     3     4]
                          [                        ]
                          [0    0     0     0     0]
                          [                        ]
                          [0    0     0     0     0]
                          [                        ]
                          [0    0     0     0     0]

We see that V*A = H. Finally, we calculate the Frobenius decomposition of A and find the matrices involved, checking the result.

>> pretty(sym(maple('F:=frobenius(A,P); P:=evalm(P)')))
>> pretty(sym(maple('print(F,P)')))

                                  [ 67                       22      19 ]
                                  [ --     3/2     285/4     --      -- ]
                                  [ 45                       45      45 ]
                                  [                                     ]
      [0    0     0      0    0]  [ -7                       -7         ]
      [                        ]  [ --     5/2     355/4     --     -2/9]
      [1    0    50      0    0]  [ 18                       18         ]
      [                        ]  [                                     ]
      [0    1    55/2    0    0], [-49                      -49     -14 ]
      [                        ]  [---     7/2     425/4    ---     --- ]
      [0    0     0      0    0]  [90                       90      45  ]
      [                        ]  [                                     ]
      [0    0     0      0    0]  [3/10    9/2     495/4    3/10    -2/5]
                                  [                                     ]
                                  [ 13                       13      23 ]
                                  [ --     11/2    565/4     --      -- ]
                                  [ 90                       90      45 ]

>> pretty(sym(maple('evalm(P^(-1)&*A&*P)')))

                          [0    0     0      0    0]
                          [                        ]
                          [1    0    50      0    0]
                          [                        ]
                           [0    1    55/2    0    0]
                          [                        ]
                          [0    0     0      0    0]
                          [                        ]
                          [0    0     0      0    0]

We have shown that P^(-1) * A * P = F.

EXERCISE 5-8

Consider the 3 × 3 matrix A whose rows are given by the vectors (1,5,-2), (-7,3,1) and (2,2,-2). If V is the vector of ones, solve the system L * x = V based on the LU decomposition of A. Solve the system G * x = V, where G is obtained from A via Gaussian elimination. Solve the system J * x = V, where J is the Gauss-Jordan (reduced row echelon) form of A. Represent the matrix systems in the form of equations, and find the Hermite and Smith forms of A.

First, we define the matrix A and the vector V using the maple command as follows:

>> maple('A:=matrix(3,3,[1,5,-2,-7,3,1,2,2,-2]); V:=array([1,1,1])');

Then we find the LU decomposition of A, solving the system L*x = V using the command backsub.

>> pretty(sym(maple('L:=LUdecomp(A)')))
>> pretty(sym(maple('backsub(L,V)')))

                              [253    -233    -19]
                              [---    ----    ---]
                              [532     532     14]

We have solved the system L * x = V, which can be expressed in the form of equations with the command geneqns as follows:

>> pretty(sym(maple('geneqns(L,[x1,x2,x3],V)')))


             {x1 + 5 x2 - 2 x3 = 1, 38 x2 - 13 x3 = 1, - 14/19 x3 = 1}

Now we solve the system G * x = V where G is obtained from A by Gaussian elimination.

>> pretty(sym(maple('G:=gausselim(A)')))
>> pretty(sym(maple('backsub(G,V)')))

                              [79   - 11        ]
                              [--    ---    -2/7]
                              [56    56        ]

The system of equations is found as follows:

>> pretty(sym(maple('geneqns(G,[x1,x2,x3],V)')))

      {x1 + 5 x2 - 2 x3 = 1, 8 x2 + 2 x3 = 1, - 7/2 x3 = 1}

Now, we solve the system J * x = V where J is the Gauss-Jordan (reduced row echelon) form of A. We use the command forwardsub.

>> pretty(sym(maple('J:=gaussjord(A)')))
>> pretty(sym(maple('forwardsub(J,V)')))

                                 [1 1 1]

Finally, we find the Smith and Hermite matrices associated with A.

>> pretty(sym(maple('ihermite(A,x)')))

                                [1 1 6]
                                [0 2 3]
                                [0 0 14]

>> pretty(sym(maple('ismith(A)')))

                                [1 0 0]
                                [0 1 0]
                                [0 0 28]

5.5 Similar Matrices and Diagonalization

Two matrices A and B of dimension m×n are equivalent if there exist two invertible matrices U and V such that A = UBV. The MATLAB command [U, S, V] = svd (A) calculates a diagonal matrix S which is equivalent to A.

Two square matrices A and B of order n are said to be congruent if there is an invertible matrix P such that A = P' * B * P.

The MATLAB command [U, T] = schur (A) calculates a matrix T which is congruent with A.

Congruence implies equivalence, and two congruent matrices must always have the same rank.

Two square matrices of order n, A and B, are similar if there is an invertible matrix P such that A = P * B * P^(-1).

Two similar matrices are equivalent.

A matrix A is diagonalizable if it is similar to a diagonal matrix D, that is, if there is an invertible matrix P such that A = P * D * P^(-1).

The process of calculating the diagonal matrix D and the matrix P is called diagonalization of A.

Given a square matrix of real numbers A of order n, if all the eigenvalues of A are real and distinct, then A is diagonalizable. The matrix D will have the eigenvalues of A as the diagonal elements. The matrix P has as columns the eigenvectors of A corresponding to these eigenvalues.

If the matrix A has an eigenvalue λ of multiplicity r greater than 1, then A is diagonalizable if and only if the kernel of the matrix A - λ * In has dimension equal to r, the multiplicity of the eigenvalue.

The MATLAB command [V, J] = jordan (A) diagonalizes the matrix A (when A is diagonalizable) by calculating the diagonal matrix J and the matrix V such that A = V * J * V^(-1).
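
For numeric matrices, the command eig can be used in the same way; a minimal sketch with a hypothetical matrix whose eigenvalues are distinct:

>> A = [2 1; 0 3];            % distinct eigenvalues 2 and 3, so A is diagonalizable
>> [P, D] = eig(A);           % D is diagonal with the eigenvalues, P holds the eigenvectors
>> err = norm(A - P*D/P);     % P*D*inv(P) reproduces A, so err is zero up to round-off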

EXERCISE 5-9

Diagonalize the symmetric matrix whose rows are the vectors:

(3, -1, 0), (-1, 2, -1), (0, -1, 3).

Check the result and confirm that the eigenvalues of the initial matrix are the elements of the diagonal matrix obtained.

We calculate the diagonal matrix J similar to A, which will have the eigenvalues of A on its diagonal, and the transformation matrix V. To do this, we use the command [V, J] = jordan (A):

>> A = [3, -1, 0; -1, 2, -1; 0, -1, 3]

A =

     3    -1     0
    -1     2    -1
     0    -1     3

>> [V, J] = jordan (A)

V =

[1/6,  1/2, 1/3]
[1/3,  0,  -1/3]
[1/6, -1/2, 1/3]

J =

[1, 0, 0]
[0, 3, 0]
[0, 0, 4]

We now confirm that the diagonal matrix J has the eigenvalues of A on its diagonal:

>> eigensys (A)

ans =

[1]
[3]
[4]

The matrices A and J are similar because there is a matrix V satisfying the equation V^(-1) * A * V = J:

>> symmul(symmul(inv(V),A),V)

ans =

[1, 0, 0]
[0, 3, 0]
[0, 0, 4]

5.6 Sparse Matrices

A matrix is called sparse if it has a high proportion of zero elements, which can be taken advantage of. Sparse matrix algorithms do not store the null elements in memory, so working with sparse matrices saves both memory and computation time. There are specialized commands that can be used to deal with sparse matrices. Some of these commands are listed below.

  • S = sparse (i, j, s, m, n, nzmax), where i, j and s are vectors. Creates a sparse matrix S of dimension m×n with space for nzmax non-zero elements. The vector i contains the row indices of the non-zero elements, the vector j the corresponding column indices, and the vector s their values.
  • S=sparse(i,j,s,m,n) creates the sparse matrix S using nzmax=length(s).
  • S = sparse(i,j,s) creates a sparse matrix S with m = max (i) and n = max (j).
  • S = sparse (A) converts the matrix A into sparse form.
  • A = full (S) converts the sparse matrix S into full matrix form A.
  • S = spconvert (D) converts an external ASCII file read with name D into a sparse matrix S.
  • [i, j] = find (A) returns the row and column indices of the non-zero entries of the matrix A.
  • B = spdiags (A, d) builds a sparse matrix by extracting the diagonal elements of A specified by the vector d.
  • S = speye (m, n) creates the sparse m×n matrix with ones on the main diagonal.
  • S = speye (n) creates the sparse square identity matrix of order n.
  • R = sprandn (S) generates a random sparse matrix with non-zero values normally distributed (mean 0, variance 1) with the same structure as the sparse matrix S.
  • R = sprandsym (S) generates a sparse random symmetric matrix with non-zero entries normally distributed (mean 0, variance 1) whose lower triangle has the same structure as S.
  • r = sprank (S) gives the structural rank of the sparse matrix S.
  • n = nnz (S) gives the number of non-zero elements in the sparse matrix S.
  • k = nzmax (S) returns the amount of storage occupied by the non-zero elements in the sparse matrix S. If S is a full matrix then nzmax (S) = prod (size (S)).
  • s=spalloc(m,n,nzmax) creates space in memory for a sparse matrix of dimension m×n.
  • R = spones(S) replaces the zero entries of the sparse matrix S with ones.
  • n = condest(S) computes a lower bound for the 1-norm condition number of a square matrix S.
  • m = normest(S) returns an estimate of the 2-norm of the matrix S.
  • issparse(A) returns 1 if the matrix A is sparse, and 0 otherwise.

Here are some examples:

>> sparse([1,1,2,2,3,4],[4,2,3,1,2,3],[-7,12,25,1,-6,8],4,4,10)

ans =

   (2,1) 1
   (1,2) 12
   (3,2) -6
   (2,3) 25
   (4,3) 8
   (1,4) -7

Now we convert this sparse matrix into complete form:

>> full(ans)

ans =

     0    12     0    -7
     1     0    25     0
     0    -6     0     0
     0     0     8     0

Now we define a sparse matrix whose full form is a diagonal matrix:

>> sparse(1:5,1:5,-6)

ans =

   (1,1)       -6
   (2,2)       -6
   (3,3)       -6
   (4,4)       -6
   (5,5)       -6

>> full(ans)

ans =

    -6     0     0     0     0
     0    -6     0     0     0
     0     0    -6     0     0
     0     0     0    -6     0
     0     0     0     0    -6
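
Finally, a brief sketch of some of the other sparse commands listed above (speye, nnz, sprank, issparse and spones), using a hypothetical sparse identity matrix:

>> S = speye(3);                        % sparse 3×3 identity matrix
>> [nnz(S), sprank(S), issparse(S)]     % non-zero count, structural rank, sparsity flag

ans =

     3     3     1

>> full(spones(sparse([0 2; 3 0])))     % non-zero entries replaced by ones

ans =

     0     1
     1     0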

5.7 Special Matrices

MATLAB provides commands to define certain special types of matrices. These include the following:

  • H = hadamard(n): Returns the Hadamard matrix of order n, a matrix with values 1 or –1 such that H'* H = n * eye(n).
  • hankel(V): Returns the square Hankel matrix whose first column is the vector V and whose elements are zero below the first anti-diagonal. The matrix hankel(C,R) has first column vector C and last row vector R.
  • hilb(n): Returns the Hilbert matrix of order n, a matrix whose ij-th element is 1 /(i+j-1).
  • invhilb(n): Returns the inverse of the Hilbert matrix of order n.
  • magic(n): Returns a magic square of order n. Its elements are the integers from 1 to n^2, arranged so that all rows and columns have the same sum.
  • pascal(n): Returns the Pascal matrix of order n (symmetric, positive definite with integer entries taken from Pascal’s triangle).
  • rosser: Returns the Rosser matrix, an 8 × 8 matrix with a double eigenvalue, three nearly equal eigenvalues, dominant eigenvalues of the opposite sign, a zero eigenvalue and a small non-zero eigenvalue.
  • toeplitz(C,R): Returns a Toeplitz matrix (not symmetric, with the vector C in the first column and R as the first row vector).
  • vander(C): Returns a Vandermonde matrix whose penultimate column is the vector C. In addition, A(:,j) = C.^(n-j).
  • wilkinson(n): Returns the Wilkinson matrix of order n (symmetric tridiagonal with pairs of eigenvalues close but not the same).
  • compan(P): Returns the corresponding companion matrix whose first row is -P(2:n)/P(1), where  P  is a vector of polynomial coefficients.
  • maple(‘hadamard (n)’): Returns the Hadamard matrix of order n, a matrix with values 1 or - 1 such that H'* H = n * eye(n).
  • maple (‘hilbert (n)’): Returns the Hilbert matrix of order n, a matrix whose ij-th element is 1 /(i+j-1).
  • maple (‘hilbert(n,exp)’): Returns the matrix of order n with ij-th entry equal to 1 /(i+j-exp).
  • maple(‘bezout(poly1,poly2,x)’): Constructs the Bézout matrix of the given polynomials in x, with dimension max(m,n), where m = degree (poly1) and n = degree (poly2). The determinant of this matrix is the resultant of the two polynomials (resultant(poly1,poly2,x)).
  • maple(‘sylvester(p1,p2,x)’): Constructs the Sylvester matrix of the given polynomials in x, with dimension n+m, where m = degree(p1) and n =degree(p2). The determinant of this matrix is the resultant of the two polynomials.
  • maple (‘fibonacci (n)’): Returns the nth Fibonacci matrix F(n) whose size is the sum of the dimensions of F (n-1) and F (n-2).
  • maple(‘toeplitz([ex1,...,exn])’): Returns the symmetric Toeplitz matrix whose elements are the specified expressions.
  • maple(‘vandermonde([expr1,..., exprn])’): Returns the Vandermonde matrix whose (i,j)th element is expri^(j-1).
  • maple (‘wronskian(V,x)’): Returns the Wronskian matrix of the vector V =(f1,...,fn) with respect to the variable x. The ij-th element is diff (fj, x$(i-1)).
  • maple (‘jacobian([expr1,...,exprm],[x1,..., xn])’): Returns the m×n Jacobian matrix with ij-th element diff(expri,xj).
  • maple(‘hessian(exp,[x1,...,xn])’): Returns the n×n Hessian matrix with (i,j)th element diff(exp, xi, xj).

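A couple of quick numeric illustrations of the commands above (a brief sketch; the values shown are the standard ones for these matrices):

>> pascal(3)                  % symmetric Pascal matrix of order 3

ans =

     1     1     1
     1     2     3
     1     3     6

>> compan([1 -6 11 -6])       % companion matrix of x^3 - 6x^2 + 11x - 6

ans =

     6   -11     6
     1     0     0
     0     1     0

>> r = eig(compan([1 -6 11 -6]));   % its eigenvalues are the roots 1, 2 and 3 of the polynomial
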
EXERCISE 5-10

Find the eigenvalues of the Wilkinson matrix of order 8, a magic square of order 8 and the Rosser matrix.

>> [eig(wilkinson(8)), eig(rosser), eig(magic(8))]

ans =

  1.0e+003 *

   0.0042  1.0000  0.2600
   0.0043  1.0000  0.0518
   0.0028  1.0200 -0.0518
   0.0026  1.0200  0.0000
   0.0017  1.0199  0.0000 + 0.0000i
   0.0011  0.0001  0.0000 - 0.0000i
   0.0002  0.0000  0.0000 + 0.0000i
  -0.0010 -1.0200  0.0000 - 0.0000i

Observe that the Wilkinson matrix has pairs of eigenvalues which are close, but not equal. The Rosser matrix has a double eigenvalue, three nearly equal eigenvalues, dominant eigenvalues of the opposite sign, a zero eigenvalue and a small non-zero eigenvalue.

EXERCISE 5-11

Find the Smith and Hermite forms of the inverse of the Hilbert matrix of order 2 in the variable x. Also find the corresponding transformation matrices.

>> maple('with(linalg):H:= inverse(hilbert(2,x))');
>> pretty(simple(sym(maple('H'))))

          [            2                                           ]
          [   -(-3 + x)  (-2 + x)        (-3 + x) (-2 + x) (-4 + x)]
          [                                                        ]
          [                                          2             ]
          [(-3 + x) (-2 + x) (-4 + x)       -(-3 + x)  (-4 + x)    ]

>> maple('B:=smith(H,x,U,V); U:=eval(U); V:=eval(V)');
>> pretty(simple(sym(maple('B'))))

                     [-3 + x               0              ]
                     [                                    ]
                     [                        2           ]
                     [0           (- 2 + x) (x - 7 x + 12)]

>> pretty(simple(sym(maple('U'))))

                    [       -1                  -1        ]
                    [                                     ]
                    [               2                   2 ]
                    [10 - 13/2 x + x    - 13/2 x + 9 + x  ]

>> pretty(simple(sym(maple('V'))))

                             [-7/2 + x      - 4 + x]
                             [                     ]
                             [-3/2 + x      - 2 + x]

>> maple('HM:=hermite(H,x,Q);Q:=evalm(Q)');
>> pretty(simple(sym(maple('HM'))))

                        [ 2                           ]
                        [x  - 5 x + 6          0      ]
                        [                             ]
                        [                 2           ]
                        [     0          x  - 7 x + 12]

>> pretty(simple(sym(maple('Q'))))

                              [- x + 3    - x + 2]
                              [                  ]
                              [- x + 4    - x + 3]

EXERCISE 5-12

Verify that the functions x, x2 and x3 are linearly independent.

>> maple('v:=[x,x^2,x^3]:w:=wronskian(v,x)');
>> pretty(simple(sym(maple('w'))))

                              [      2       3  ]
                              [x    x       x   ]
                              [                 ]
                              [                2]
                              [1   2 x      3 x ]
                              [                 ]
                              [0     2      6 x ]

>> pretty(simple(sym(maple('det(w)'))))

                                        3
                                     2 x

Since the determinant of the Wronskian is non-zero, the functions are linearly independent.
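
The same check can be written with the standard Symbolic Math Toolbox, without the maple bridge (a sketch, assuming that toolbox is available; the Wronskian is built by hand from the derivatives):

>> syms x
>> v = [x, x^2, x^3];
>> W = [v; diff(v, x); diff(v, x, 2)];    % rows: the functions, their first and second derivatives
>> det(W)                                 % returns 2*x^3, non-zero, so the functions are independent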

EXERCISE 5-13

Find the Jacobian matrix and the Jacobian determinant of the transformation:

x = e^u sin(v),  y = e^u cos(v).

>> pretty(sym(maple('jacobian(vector([exp(u) * sin(v), exp(u) * cos(v)]), [u, v])')))

                       [exp (u) sin (v)   exp (u) cos (v)]
                       [                                 ]
                       [exp (u) cos (v) - exp (u) sin (v)]

>> pretty(simple(sym(maple('det(")'))))

                                       2
                                   -exp (u)

EXERCISE 5-14

Find the Bézout and Sylvester matrices B and T for the functions p = a + bx + cx^2 and q = d + ex + fx^2. Verify that the determinants of B and T coincide with the resultant of p and q.

>> maple('p:=a+b*x+c*x^2; q:= d+e*x+f*x^2; B:=bezout(p, q, x); T:=sylvester(p, q, x)')
>> pretty(sym(maple('B')))

                           [dc - af   db - ae]
                           [                 ]
                           [ec - bf   dc - af]

>> pretty(sym(maple('T')))

                              [c b a 0]
                              [0 c b a]
                              [f e d 0]
                              [0 f e d]


>> pretty(sym(maple('det(B)'))),pretty(sym(maple('det(T)'))),
pretty(sym(maple('resultant(p,q,x)')))


All three commands return the same expression, confirming that det(B) = det(T) = resultant(p,q,x):

     d^2 c^2 - 2 d c a f + a^2 f^2 - d b e c + d b^2 f + a e^2 c - a e b f
