We can deduce that the volume of the parallelepiped formed by the vectors defined by the columns (or rows, since the determinant of the transpose is the same) is zero. This is equivalent to saying that the columns (or rows) are not linearly independent. Note that the sum of the first and third rows is twice the second row, which implies linear dependence.
5.3.1 Computing Inverses
Determinants give us a tool to compute the inverse of a matrix. It is a very inefficient method for large matrices, but often in graphics our matrices are small. A key to developing this method is that the determinant of a matrix with two identical rows is zero. This should be clear because the volume of the n-dimensional parallelepiped is zero if two of its sides are the same. Suppose we have a 4 × 4 matrix A and we wish to find its inverse A^{-1}. The inverse is
$$
A^{-1} = \frac{1}{|A|}
\begin{bmatrix}
a^{c}_{11} & a^{c}_{21} & a^{c}_{31} & a^{c}_{41} \\
a^{c}_{12} & a^{c}_{22} & a^{c}_{32} & a^{c}_{42} \\
a^{c}_{13} & a^{c}_{23} & a^{c}_{33} & a^{c}_{43} \\
a^{c}_{14} & a^{c}_{24} & a^{c}_{34} & a^{c}_{44}
\end{bmatrix}.
$$
Note that this is just the transpose of the matrix where the elements of A are replaced by their respective cofactors multiplied by the leading constant (1 or −1). This matrix is called the adjoint of A. The adjoint is the transpose of the cofactor matrix of A. We can see why this is an inverse by looking at the product AA^{-1}, which we expect to be the identity. If we multiply the first row of A by the first column of the adjoint matrix, we need to get |A| (remember that the leading constant above divides by |A|):
$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot
\end{bmatrix}
\begin{bmatrix}
a^{c}_{11} & \cdot & \cdot & \cdot \\
a^{c}_{12} & \cdot & \cdot & \cdot \\
a^{c}_{13} & \cdot & \cdot & \cdot \\
a^{c}_{14} & \cdot & \cdot & \cdot
\end{bmatrix}
=
\begin{bmatrix}
|A| & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot
\end{bmatrix}.
$$
This is true because the elements in the first row of A are multiplied exactly by their cofactors in the first column of the adjoint matrix, which is exactly the determinant. The other values along the diagonal of the resulting matrix are |A| for analogous reasons. The zeros follow a similar logic:
$$
\begin{bmatrix}
\cdot & \cdot & \cdot & \cdot \\
a_{21} & a_{22} & a_{23} & a_{24} \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot
\end{bmatrix}
\begin{bmatrix}
a^{c}_{11} & \cdot & \cdot & \cdot \\
a^{c}_{12} & \cdot & \cdot & \cdot \\
a^{c}_{13} & \cdot & \cdot & \cdot \\
a^{c}_{14} & \cdot & \cdot & \cdot
\end{bmatrix}
=
\begin{bmatrix}
\cdot & \cdot & \cdot & \cdot \\
0 & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot
\end{bmatrix}.
$$
Note that this product is a determinant of some matrix:
$$
a_{21}a^{c}_{11} + a_{22}a^{c}_{12} + a_{23}a^{c}_{13} + a_{24}a^{c}_{14}.
$$
The matrix in fact is
$$
\begin{bmatrix}
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}.
$$
Because the first two rows are identical, the matrix is singular, and thus its determinant is zero.
The argument above does not apply just to four-by-four matrices; using that size just simplifies typography. For any matrix, the inverse is the adjoint matrix divided by the determinant of the matrix being inverted. The adjoint is the transpose of the cofactor matrix, which is just the matrix whose elements have been replaced by their cofactors.
Example. The inverse of one particular three-by-three matrix whose determinant is 6 is
$$
\begin{bmatrix}
1 & 1 & 2 \\
1 & 3 & 4 \\
0 & 2 & 5
\end{bmatrix}^{-1}
=
\frac{1}{6}
\begin{bmatrix}
\begin{vmatrix} 3 & 4 \\ 2 & 5 \end{vmatrix} &
-\begin{vmatrix} 1 & 2 \\ 2 & 5 \end{vmatrix} &
\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} \\[6pt]
-\begin{vmatrix} 1 & 4 \\ 0 & 5 \end{vmatrix} &
\begin{vmatrix} 1 & 2 \\ 0 & 5 \end{vmatrix} &
-\begin{vmatrix} 1 & 2 \\ 1 & 4 \end{vmatrix} \\[6pt]
\begin{vmatrix} 1 & 3 \\ 0 & 2 \end{vmatrix} &
-\begin{vmatrix} 1 & 1 \\ 0 & 2 \end{vmatrix} &
\begin{vmatrix} 1 & 1 \\ 1 & 3 \end{vmatrix}
\end{bmatrix}
=
\frac{1}{6}
\begin{bmatrix}
7 & -1 & -2 \\
-5 & 5 & -2 \\
2 & -2 & 2
\end{bmatrix}.
$$
You can check this yourself by multiplying the matrices and making sure you get
the identity.
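To make the procedure concrete, here is a small Python sketch (our own, not from the text) of the adjoint/cofactor method for a 3 × 3 matrix, applied to the example above; the function names are illustrative, and the method is only sensible for such tiny matrices:

def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
            - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
            + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))

def inverse3(m):
    # Inverse as the adjoint (transpose of the cofactor matrix) divided by |m|.
    d = det3(m)
    if d == 0:
        raise ValueError("singular matrix: no inverse")
    cof = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            # Minor: delete row i and column j, then take the 2x2 determinant.
            rows = [r for k, r in enumerate(m) if k != i]
            sub = [[v for l, v in enumerate(r) if l != j] for r in rows]
            cof[i][j] = (-1) ** (i + j) * det2(sub[0][0], sub[0][1],
                                               sub[1][0], sub[1][1])
    # Transposing the cofactor matrix gives the adjoint.
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

A = [[1, 1, 2], [1, 3, 4], [0, 2, 5]]
print(inverse3(A))   # [[7/6, -1/6, -1/3], [-5/6, 5/6, -1/3], [1/3, -1/3, 1/3]]

Multiplying A by this result gives the identity up to floating-point error.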
5.3.2 Linear Systems
We often encounter linear systems in graphics with n equations and n unknowns, usually for n = 2 or n = 3. For example,
$$
\begin{aligned}
3x + 7y + 2z &= 4, \\
2x - 4y - 3z &= -1, \\
5x + 2y + z &= 1.
\end{aligned}
$$
Here x, y, and z are the “unknowns” for which we wish to solve. We can write this in matrix form:
$$
\begin{bmatrix}
3 & 7 & 2 \\
2 & -4 & -3 \\
5 & 2 & 1
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} 4 \\ -1 \\ 1 \end{bmatrix}.
$$
A common shorthand for such systems is Ax = b, where it is assumed that A is a square matrix with known constants, x is an unknown column vector (with elements x, y, and z in our example), and b is a column matrix of known constants.
There are many ways to solve such systems, and the appropriate method depends on the properties and dimensions of the matrix A. Because in graphics we so frequently work with systems of size n ≤ 4, we’ll discuss here a method appropriate for these systems, known as Cramer’s rule, which we saw earlier, from a 2D geometric viewpoint, in the example on page 92. Here, we show this algebraically. The solution to the above equation is
$$
x = \frac{\begin{vmatrix} 4 & 7 & 2 \\ -1 & -4 & -3 \\ 1 & 2 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}}; \quad
y = \frac{\begin{vmatrix} 3 & 4 & 2 \\ 2 & -1 & -3 \\ 5 & 1 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}}; \quad
z = \frac{\begin{vmatrix} 3 & 7 & 4 \\ 2 & -4 & -1 \\ 5 & 2 & 1 \end{vmatrix}}
         {\begin{vmatrix} 3 & 7 & 2 \\ 2 & -4 & -3 \\ 5 & 2 & 1 \end{vmatrix}}.
$$
The rule here is to take a ratio of determinants, where the denominator is |A| and the numerator is the determinant of a matrix created by replacing a column of A with the column vector b. The column replaced corresponds to the position of the unknown in the vector x. For example, y is the second unknown, so the second column is replaced. Note that if |A| = 0, the division is undefined and there is no solution. This is just another version of the rule that if A is singular (zero determinant), then there is no unique solution to the equations.
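As a quick illustration, the following Python sketch (ours, not the text’s) applies Cramer’s rule to the 3 × 3 system above; the helper names are our own:

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    # Solve Ax = b for a 3x3 A: replace one column of A with b at a time and
    # take the ratio of determinants.
    dA = det3(A)
    if dA == 0:
        raise ValueError("|A| = 0: no unique solution")
    solution = []
    for col in range(3):
        Ai = [[b[row] if c == col else A[row][c] for c in range(3)]
              for row in range(3)]
        solution.append(det3(Ai) / dA)
    return solution

A = [[3, 7, 2], [2, -4, -3], [5, 2, 1]]
b = [4, -1, 1]
x, y, z = cramer3(A, b)
print(x, y, z)   # substituting these back into the three equations recovers b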
5.4 Eigenvalues and Matrix Diagonalization
Square matrices have eigenvalues and eigenvectors associated with them. The
eigenvectors are those non-zero vectors whose directions do not change when
multiplied by the matrix. For example, suppose for a matrix A and vector a, we have
Aa = λa. (5.9)
This means we have stretched or compressed a, but its direction has not changed.
The scale factor λ is called the eigenvalue associated with eigenvector a. Knowing
the eigenvalues and eigenvectors of matrices is helpful in a variety of practical
applications. We will describe them to gain insight into geometric transformation
matrices and as a step toward singular values and vectors described in the next
section.
If we assume a matrix has at least one eigenvector, then we can do a standard manipulation to find it. First, we write both sides as the product of a square matrix with the vector a:
Aa = λIa, (5.10)
where I is an identity matrix. This can be rewritten
Aa − λIa = 0. (5.11)
Because matrix multiplication is distributive, we can group the matrices:
(A − λI) a = 0. (5.12)
This equation can only be true if the matrix (A − λI) is singular, and thus its determinant is zero. The elements in this matrix are the numbers in A except along the diagonal, where λ is subtracted from each element. For example, for a 2 × 2 matrix the eigenvalues obey
$$
\begin{vmatrix}
a_{11} - \lambda & a_{12} \\
a_{21} & a_{22} - \lambda
\end{vmatrix}
= \lambda^2 - (a_{11} + a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21}) = 0. \qquad (5.13)
$$
Because this is a quadratic equation, we know there are exactly two solutions for λ. These solutions may or may not be unique or real. A similar manipulation for an n × n matrix will yield an nth-degree polynomial in λ. Because it is not possible, in general, to find exact explicit solutions of polynomial equations of degree greater than four, we can only compute eigenvalues of matrices 4 × 4 or smaller by analytic methods. For larger matrices, numerical methods are the only option.
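For the 2 × 2 case, Equation (5.13) is just a quadratic in λ and can be solved directly with the quadratic formula; the short Python sketch below (our own, not the text’s) does exactly that:

import cmath   # complex square root, since the roots of (5.13) may be complex

def eigenvalues_2x2(a11, a12, a21, a22):
    # Roots of lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0.
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(trace * trace - 4.0 * det)
    return (trace + disc) / 2.0, (trace - disc) / 2.0

print(eigenvalues_2x2(2, 1, 1, 1))   # roughly (2.618+0j) and (0.382+0j)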
An important special case where eigenvalues and eigenvectors are particularly simple is symmetric matrices (where A = A^T). The eigenvalues of real symmetric matrices are always real numbers, and if they are also distinct, their eigenvectors are mutually orthogonal. Such matrices can be put into diagonal form:
A = QDQ^T, (5.14)
where Q is an orthogonal matrix and D is a diagonal matrix. (Recall that an orthogonal matrix has orthonormal rows and orthonormal columns.) The columns of Q are the eigenvectors of A and the diagonal elements of D are the eigenvalues of
A. Putting A in this form is also called the eigenvalue decomposition, because it
decomposes A into a product of simpler matrices that reveal its eigenvectors and
eigenvalues.
Example. Given the matrix
$$
A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix},
$$
the eigenvalues of A are the solutions to λ² − 3λ + 1 = 0. We approximate the exact values for compactness of notation:
$$
\lambda = \frac{3 \pm \sqrt{5}}{2} \approx
\begin{Bmatrix} 2.618 \\ 0.382 \end{Bmatrix}.
$$
Now we can find the associated eigenvectors. The first is the nontrivial (not x = y = 0) solution to the homogeneous equation,
$$
\begin{bmatrix} 2 - 2.618 & 1 \\ 1 & 1 - 2.618 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \end{bmatrix}.
$$
This is approximately (x, y) = (0.8507, 0.5257). Note that there are infinitely many solutions parallel to that 2D vector, and we just picked the one of unit length. Similarly, the eigenvector associated with λ₂ is (x, y) = (−0.5257, 0.8507). This means the diagonal form of A is (within some precision due to our numeric approximation):
$$
\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 0.8507 & -0.5257 \\ 0.5257 & 0.8507 \end{bmatrix}
\begin{bmatrix} 2.618 & 0 \\ 0 & 0.382 \end{bmatrix}
\begin{bmatrix} 0.8507 & 0.5257 \\ -0.5257 & 0.8507 \end{bmatrix}.
$$
We will revisit the geometry of this matrix as a transform in the next chapter.
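As a quick numerical check of this example (our own sketch using NumPy, not part of the text), we can rebuild A from the approximate factors and also ask NumPy for the factorization directly:

import numpy as np

# The symmetric matrix from the example and its approximate diagonalization.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
Q = np.array([[0.8507, -0.5257], [0.5257, 0.8507]])
D = np.diag([2.618, 0.382])

# Q D Q^T reproduces A up to the rounding of the printed digits.
print(Q @ D @ Q.T)

# eigh handles symmetric matrices and returns eigenvalues in ascending order,
# so here 0.382 comes before 2.618; eigenvector signs may also be flipped.
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues)    # approximately [0.382, 2.618]
print(eigenvectors)   # columns are unit-length eigenvectors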
5.4.1 Singular Value Decomposition
We saw in the last section that any symmetric matrix can be diagonalized, or decomposed into a convenient product of orthogonal and diagonal matrices. However, most matrices we encounter in graphics are not symmetric, and the eigenvalue decomposition for non-symmetric matrices is not nearly so convenient or illuminating; in general it involves complex-valued eigenvalues and eigenvectors even for real-valued inputs.
(We would recommend learning in this order: symmetric eigenvalues/vectors, singular values/vectors, and then unsymmetric eigenvalues, which are much trickier.)
There is another generalization of the symmetric eigenvalue decomposition to non-symmetric (and even non-square) matrices; it is the singular value decomposition (SVD). The main difference between the eigenvalue decomposition of a symmetric matrix and the SVD of a non-symmetric matrix is that the orthogonal matrices on the left and right sides are not required to be the same in the SVD:
A = USV^T.
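As a small illustration (ours, not the text’s), NumPy computes this factorization for an arbitrary matrix; note that numpy.linalg.svd returns the singular values as a vector and the right-hand factor already transposed:

import numpy as np

# SVD of a small non-symmetric matrix: A = U S V^T, with U and V orthogonal
# and S diagonal with non-negative entries (the singular values).
A = np.array([[2.0, 1.0], [0.0, 1.0]])

U, s, Vt = np.linalg.svd(A)   # s holds the singular values; Vt is V^T
S = np.diag(s)

print(U @ S @ Vt)    # reconstructs A up to floating-point error
print(s)             # singular values, sorted from largest to smallest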