Fundamental Materials and Tools 17
and then solving (3 − c)(1 − c) − 2 × 4 = c^2 − 4c − 5 = 0. The eigenvalues of A are
found to be “−1” and “5.” The eigenvectors associated with the eigenvalue “−1” are
computed by further solving
[A − (−1)I]v = \begin{pmatrix} 4 & 4 \\ 2 & 2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
and, in turn, 4v_1 + 4v_2 = 0 and 2v_1 + 2v_2 = 0. A nontrivial solution to these two
equations is v_2 = −v_1. The eigenvectors of the eigenvalue “−1” are then given by
v_1 (1, −1)^t with v_1 being arbitrary. For instance, by setting v_1 = 1, (1, −1)^t becomes
an eigenvector associated with the eigenvalue “−1.” Similarly, (2, 1)^t is an eigenvector
associated with the eigenvalue “5,” obtained by setting v_2 = 1.
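The hand computation above can be checked numerically. A minimal sketch with NumPy, assuming the matrix behind the characteristic polynomial c^2 − 4c − 5 is A = [[3, 4], [2, 1]]:

```python
import numpy as np

# The 2x2 matrix whose characteristic polynomial is (3-c)(1-c) - 2*4 = c^2 - 4c - 5
A = np.array([[3.0, 4.0],
              [2.0, 1.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns)
eigenvalues, eigenvectors = np.linalg.eig(A)
print(sorted(eigenvalues))                   # approximately -1 and 5

# Verify the hand-computed eigenvectors directly from the definition A v = c v
v_neg1 = np.array([1.0, -1.0])   # eigenvector for the eigenvalue -1
v_pos5 = np.array([2.0, 1.0])    # eigenvector for the eigenvalue 5
print(np.allclose(A @ v_neg1, -1 * v_neg1))  # True
print(np.allclose(A @ v_pos5, 5 * v_pos5))   # True
```

Note that `np.linalg.eig` normalizes its eigenvectors to unit length, so they differ from (1, −1)^t and (2, 1)^t only by a scalar factor, as the text's arbitrary v_1 suggests.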
If c is an eigenvalue of A, then αc is an eigenvalue of αA for any scalar α.
If c is an eigenvalue of a nonsingular matrix A, then 1/c is an eigenvalue of A^{−1}.
The eigenvalues of an upper or lower triangular matrix are the elements on the
main diagonal.
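These three properties are easy to confirm numerically. A sketch, reusing the matrix A = [[3, 4], [2, 1]] (eigenvalues −1 and 5) assumed in the example above:

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [2.0, 1.0]])      # eigenvalues: -1 and 5
alpha = 2.0

# Property 1: the eigenvalues of alpha*A are alpha*c
print(sorted(np.linalg.eigvals(alpha * A)))         # approximately -2 and 10

# Property 2: the eigenvalues of A^{-1} are 1/c
print(sorted(np.linalg.eigvals(np.linalg.inv(A))))  # approximately -1 and 1/5

# Property 3: the eigenvalues of a triangular matrix are its diagonal elements
T = np.array([[7.0, 3.0, 1.0],
              [0.0, 4.0, 2.0],
              [0.0, 0.0, 9.0]])
print(sorted(np.linalg.eigvals(T)))                 # approximately 4, 7, 9
```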
If the eigenvalues of a nonsingular matrix A are all distinct, the associated
eigenvectors are all linearly independent, as implied in the following theorem.
Theorem 1.6
If A is an n × n nonsingular matrix with n distinct eigenvalues, there exists an n × n
nonsingular matrix B such that B^{−1}AB is a diagonal matrix. A is said to be
diagonalizable.
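Theorem 1.6 can also be illustrated numerically. A sketch with NumPy, again assuming the 2 × 2 matrix A = [[3, 4], [2, 1]] from the running example, whose eigenvalues −1 and 5 are distinct:

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [2.0, 1.0]])

# The eigenvectors returned by np.linalg.eig form the columns of B
eigenvalues, B = np.linalg.eig(A)

# With distinct eigenvalues, B is nonsingular and B^{-1} A B is diagonal,
# with the eigenvalues on the main diagonal
D = np.linalg.inv(B) @ A @ B
print(np.round(D, 10))

# The off-diagonal entries vanish (up to floating-point round-off)
print(np.allclose(D, np.diag(eigenvalues)))  # True
```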
Proof Let A be an n × n matrix with eigenvectors {b_1, b_2, ..., b_n} associated with
n distinct eigenvalues {c_1, c_2, ..., c_n}, respectively. Assume that these eigenvectors
are linearly dependent. Hence, there exists an integer k ∈ [1, n − 1] such that
{b_1, b_2, ..., b_k} are linearly independent but {b_1, b_2, ..., b_{k+1}} are linearly
dependent. This gives α_1 b_1 + α_2 b_2 + ··· + α_{k+1} b_{k+1} = 0 for some scalars
{α_1, α_2, ..., α_{k+1}} not all equal to 0. Multiplying by A on both sides results in
α_1 Ab_1 + α_2 Ab_2 + ··· + α_{k+1} Ab_{k+1} = 0. By definition, Ab_i = c_i b_i for
all i ∈ [1, n]. So, the equation can be further written as
α_1 c_1 b_1 + α_2 c_2 b_2 + ··· + α_{k+1} c_{k+1} b_{k+1} = 0. Multiplying the original
equation by c_{k+1} on both sides and then subtracting it from this equation results in
α_1 (c_1 − c_{k+1}) b_1 + α_2 (c_2 − c_{k+1}) b_2 + ··· + α_k (c_k − c_{k+1}) b_k = 0,
the (k + 1)-th term having canceled.
Because {b_1, b_2, ..., b_k} are assumed linearly independent, it can be concluded that
α_i (c_i − c_{k+1}) = 0 for all i ∈ [1, k]. As these eigenvalues are all distinct,
c_i − c_{k+1} ≠ 0 and α_i must be zero for all i ∈ [1, k]. The dependence relation then
reduces to α_{k+1} b_{k+1} = 0, and since b_{k+1} ≠ 0, α_{k+1} = 0 as well. Thus
α_i = 0 for all i ∈ [1, k + 1], which contradicts the assumption that the scalars are
not all zero. So, the n eigenvectors associated with the n distinct eigenvalues must be
linearly independent.
Assume that these n linearly independent eigenvectors form the columns of an n × n
matrix B. Since the columns of B are linearly independent, B is nonsingular and
B^{−1} exists. If Ab_i = c_i b_i for i ∈ [1, n] are expressed in