The solution vectors of an n×n homogeneous linear system x′=Ax (1) can be used to construct a square matrix X=Φ(t) associated with Eq. (1). Suppose that x1(t), x2(t),…, xn(t) are n linearly independent solutions of Eq. (1). Then the n×n matrix Φ(t)=[x1(t) x2(t) ⋯ xn(t)], having these solution vectors as its column vectors, is called a fundamental matrix for the system in (1).
Because each column vector x=xj(t) of Φ(t) satisfies the differential equation x′=Ax, the matrix Φ(t) itself satisfies the matrix differential equation Φ′=AΦ. In terms of the fundamental matrix Φ(t), the general solution x(t)=c1x1(t)+c2x2(t)+⋯+cnxn(t) (3) of the system x′=Ax can be written in the form x(t)=Φ(t)c, (4) where c=[c1 c2 ⋯ cn]T is an arbitrary constant vector. Any two fundamental matrices Φ(t) and Ψ(t) for the same system are related by Ψ(t)=Φ(t)C (4′) for some n×n nonsingular constant matrix C.
In order that the solution x(t) in (3) satisfy a given initial condition x(0)=x0, it suffices that the coefficient vector c in (4) be such that Φ(0)c=x0; that is, that c=Φ(0)−1x0. (6) When we substitute (6) in Eq. (4), we get the conclusion of the following theorem.
Section 7.3 tells us how to find a fundamental matrix for the system x′=Ax with constant n×n coefficient matrix A, at least in the case in which A has a complete set of n linearly independent eigenvectors v1, v2, …, vn associated with the (not necessarily distinct) eigenvalues λ1, λ2, …, λn, respectively. In this event the corresponding solution vectors are given by xi(t)=vi eλit for i=1, 2, …, n, and the n×n matrix Φ(t) having the solutions x1, x2, …, xn as its column vectors is a fundamental matrix for the system.
In order to apply Eq. (8), we must be able to compute the inverse matrix Φ(0)−1. The inverse of the nonsingular 2×2 matrix A=[a b; c d] is A−1=(1/Δ)[d −b; −c a], (11) where Δ=det(A)=ad−bc≠0. The inverse of a nonsingular 3×3 matrix A is given by the cofactor (adjoint) formula, where Δ=det(A)≠0.
Find a fundamental matrix for the system
and then use it to find the solution of (13) that satisfies the initial conditions x(0)=1
The linearly independent solutions
found in Example 1 of Section 7.3 yield the fundamental matrix
Then
and the formula in (11) gives the inverse matrix
Hence the formula in (8) gives the solution
and so
Thus the solution of the original initial value problem is given by
An advantage of the fundamental matrix approach is this: Once we know the fundamental matrix Φ(t) and the inverse matrix Φ(0)−1, we can quickly calculate, by direct matrix multiplication, the solutions corresponding to different initial conditions.
We now discuss the possibility of constructing a fundamental matrix for the constant-coefficient linear system x′=Ax directly from the coefficient matrix A, that is, without first applying the methods of earlier sections to find a linearly independent set of solution vectors. We have seen that exponential functions play a central role in the solution of linear differential equations and systems, ranging from the scalar equation x′=kx with solution x(t)=x0ekt to the vector solution x(t)=veλt of the linear system x′=Ax whose coefficient matrix A has eigenvalue λ with associated eigenvector v.
We now define exponential matrices in such a way that X(t)=eAt is a matrix solution of the matrix differential equation X′=AX with n×n coefficient matrix A, in analogy with the fact that the ordinary exponential function x(t)=eat is a scalar solution of the first-order differential equation x′=ax. The exponential ez of the complex number z is defined by the exponential series ez=1+z+z2/2!+z3/3!+⋯+zn/n!+⋯. Similarly, if A is an n×n matrix, then the exponential matrix eA is the n×n matrix defined by the series eA=I+A+A2/2!+A3/3!+⋯+An/n!+⋯, (17) where I is the identity matrix. The meaning of the infinite series on the right in (17) is given by the limit of the partial sums, eA=limk→∞ (A0+A+A2/2!+⋯+Ak/k!), (18)
where A0=I, A2=AA, A3=AA2, and so on; inductively, An+1=AAn if n≧0. It can be shown that the limit in (18) exists for every n×n square matrix A. That is, the exponential matrix eA is defined (by Eq. (17)) for every square matrix A.
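The convergence of the limit in (18) is easy to check numerically. The sketch below (assuming NumPy and SciPy are available; the 2×2 matrix is an arbitrary sample chosen for illustration) builds the partial sums of the series in (17) and compares them with a library routine for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.5, -1.0]])  # sample 2x2 matrix (illustrative only)

# Partial sums A^0 + A + A^2/2! + ... + A^k/k!, as in Eq. (18)
S = np.eye(2)
term = np.eye(2)
for n in range(1, 25):
    term = term @ A / n   # builds A^n / n! incrementally
    S = S + term

print(np.allclose(S, expm(A)))  # partial sums agree with e^A
```

For a matrix of moderate norm, two dozen terms already reach machine precision; production routines such as `scipy.linalg.expm` use more robust algorithms than direct summation.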
Consider the 2×2 diagonal matrix A=[a 0; 0 b]. Then it is apparent that An=[an 0; 0 bn] for each integer n≧1. It therefore follows that eA=I+A+A2/2!+⋯=[1+a+a2/2!+⋯ 0; 0 1+b+b2/2!+⋯]. Thus eA=[ea 0; 0 eb], so the exponential of the diagonal 2×2 matrix A is obtained simply by exponentiating each diagonal element of A.
The n×n analog of the 2×2 result in Example 2 is established in the same way. The exponential of the n×n diagonal matrix D=diag(a1, a2, …, an) is the n×n diagonal matrix eD=diag(ea1, ea2, …, ean) obtained by exponentiating each diagonal element of D.
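This diagonal rule can be confirmed numerically in one line (a sketch assuming SciPy; the diagonal entries are arbitrary sample values):

```python
import numpy as np
from scipy.linalg import expm

D = np.diag([2.0, -1.0, 0.5])  # sample diagonal matrix
# e^D should equal the diagonal matrix of elementwise exponentials
print(np.allclose(expm(D), np.diag(np.exp(np.diag(D)))))
```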
The exponential matrix eA satisfies most of the exponential relations that are familiar in the case of scalar exponents. For instance, if 0 is the n×n zero matrix, then Eq. (17) yields e0=I, the n×n identity matrix. In Problem 31 we ask you to show that a useful law of exponents holds for n×n matrices that commute: if AB=BA, then eA+B=eAeB. (20)
In Problem 32 we ask you to conclude that (eA)−1=e−A.
In particular, the matrix eA is nonsingular for every n×n matrix A (reminiscent of the fact that ez≠0 for all z). It follows from elementary linear algebra that the column vectors of eA are always linearly independent.
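Both properties are easy to spot-check numerically. A minimal sketch (the matrix is an arbitrary sample; A and −A always commute, so the law of exponents applies to them):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0], [2.0, -2.0]])  # sample matrix (illustrative only)
I = np.eye(2)

# e^A e^(-A) = e^(A + (-A)) = e^0 = I, since A and -A commute
print(np.allclose(expm(A) @ expm(-A), I))

# hence e^A is nonsingular, with determinant bounded away from zero
print(abs(np.linalg.det(expm(A))) > 0)
```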
If t is a scalar variable, then substitution of At for A in Eq. (17) gives the series eAt=I+At+A2(t2/2!)+⋯+An(tn/n!)+⋯.
(Of course, At is obtained simply by multiplying each element of A by t.)
If,
then
so An=0 for n≧3. It therefore follows from Eq. (24) that
that is,
If An=0 for some positive integer n, then the exponential series in (24) terminates after a finite number of terms, so the exponential matrix eA (or eAt) is readily calculated as in Example 3. Such a matrix—with a vanishing power—is said to be nilpotent.
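Because the displayed matrices of Example 3 are not reproduced here, the sketch below uses its own sample matrix: any strictly upper triangular 3×3 matrix N satisfies N3=0, so its exponential series terminates and eNt=I+Nt+N2t2/2 exactly.

```python
import numpy as np
from scipy.linalg import expm

# Sample nilpotent matrix: strictly upper triangular, so N^3 = 0
N = np.array([[0.0, 3.0, 4.0],
              [0.0, 0.0, 6.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

t = 0.7  # arbitrary value of t
series = np.eye(3) + N*t + (N @ N)*t**2/2  # terminating exponential series
print(np.allclose(series, expm(N*t)))
```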
If
then
where D=2I is a diagonal matrix and B is the nilpotent matrix of Example 3. Therefore, (20) and (22) give
thus
It happens that term-by-term differentiation of the series in (24) is valid, with the result (d/dt)(eAt)=A+A2t+A3(t2/2!)+⋯=A(I+At+A2(t2/2!)+⋯); that is, (d/dt)(eAt)=AeAt, in analogy to the formula Dt(ekt)=kekt from elementary calculus. Thus the matrix-valued function X(t)=eAt satisfies the matrix differential equation X′=AX.
Because the matrix eAt is nonsingular, it follows that the matrix exponential eAt is a fundamental matrix for the linear system x′=Ax. In particular, it is the fundamental matrix X(t) such that X(0)=I. Therefore, Theorem 1 implies the following result.
Thus the solution of homogeneous linear systems reduces to the task of computing exponential matrices. Conversely, if we already know a fundamental matrix Φ(t) for the linear system x′=Ax, then the facts that eAt=Φ(t)C (by Eq. (4′)) and eA·0=e0=I (the identity matrix) yield C=Φ(0)−1, so eAt=Φ(t)Φ(0)−1. (28)
So we can find the matrix exponential eAt by solving the linear system x′=Ax.
In Example 1 we found that the system x′=Ax with
has fundamental matrix
Hence Eq. (28) gives
Use an exponential matrix to solve the initial value problem
The coefficient matrix A in (29) evidently has characteristic equation (2−λ)3=0 and thus the triple eigenvalue λ=2, 2, 2. It is easy to see that the eigenvector equation
has (to within a constant multiple) the single solution v=[100]T. Thus there is only a single eigenvector associated with the eigenvalue λ=2, and so we do not yet have the three linearly independent solutions needed for a fundamental matrix. But we note that A is the same matrix whose matrix exponential
was calculated in Example 4. Hence, using Theorem 2, the solution of the initial value problem in (29) is given by
The same particular solution x(t) as in Example 6 could be found using the generalized eigenvector method of Section 7.6. One would start by finding the chain of generalized eigenvectors
corresponding to the triple eigenvalue λ=2 of the matrix A. Then one would—using Eqs. (27) in Section 7.6—assemble the linearly independent solutions
of the differential equation x′=Ax in (29). The final step would be to determine values of the coefficients c1, c2, c3 so that the particular solution x(t)=c1x1(t)+c2x2(t)+c3x3(t) satisfies the initial condition in (29). At this point it should be apparent that—especially if the matrix exponential eAt is readily available (for instance, from a computer algebra system)—the method illustrated in Example 6 can well be more “computationally routine” than the generalized eigenvector method.
The relatively simple calculation of eAt carried out in Example 4 (and used in Example 6) was based on the observation that if
then A−2I is nilpotent:
A similar result holds for any 3×3 matrix A having a triple eigenvalue r, in which case its characteristic equation reduces to (λ−r)3=0. For such a matrix, an explicit computation similar to that in Eq. (30) will show that (A−rI)3=0. (31) (This particular result is a special case of the Cayley-Hamilton theorem of Section 6.3, according to which every matrix satisfies its own characteristic equation.) Thus the matrix A−rI is nilpotent, and it follows that eAt=e(A−rI)t+rIt=e(A−rI)t·erIt=ert[I+(A−rI)t+(A−rI)2(t2/2)], (32)
the exponential series here terminating because of Eq. (31). In this way, we can rather easily calculate the matrix exponential eAt for any square matrix having only a single eigenvalue.
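The formula in (32) is easy to test on a sample matrix. In the sketch below, the upper triangular matrix (chosen for illustration, with r=2 repeated on its diagonal) stands in for any 3×3 matrix with a triple eigenvalue:

```python
import numpy as np
from scipy.linalg import expm

r = 2.0
A = np.array([[2.0, 3.0, 4.0],
              [0.0, 2.0, 6.0],
              [0.0, 0.0, 2.0]])  # sample matrix with triple eigenvalue r = 2
N = A - r*np.eye(3)              # nilpotent by Cayley-Hamilton: N^3 = 0
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

t = 0.3
formula = np.exp(r*t) * (np.eye(3) + N*t + (N @ N)*t**2/2)  # Eq. (32)
print(np.allclose(formula, expm(A*t)))
```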
The calculation in Eq. (32) motivates a method of calculating eAt for any n×n matrix A whatsoever. As we saw in Section 7.6, A has n linearly independent generalized eigenvectors u1, u2, …, un. Each generalized eigenvector u is associated with an eigenvalue λ of A and has a rank r≧1 such that (A−λI)ru=0 but (A−λI)r−1u≠0. (33)
(If r=1, then u is an ordinary eigenvector such that Au=λu.)
Even if we do not yet know eAt explicitly, we can consider the function x(t)=eAtu, which is a linear combination of the column vectors of eAt and is therefore a solution of the linear system x′=Ax with x(0)=u. Indeed, we can calculate x explicitly in terms of A, u, λ, and r: x(t)=eAtu=e(A−λI)t+λItu=e(A−λI)teλItu, so x(t)=eλt[u+(A−λI)ut+(A−λI)2u(t2/2!)+⋯+(A−λI)r−1u(tr−1/(r−1)!)], (34) using (33) and the fact that eλIt=eλtI.
If the linearly independent solutions x1(t), x2(t), …, xn(t) of x′=Ax are calculated using (34) with the linearly independent generalized eigenvectors u1, u2, …, un, then the n×n matrix
is a fundamental matrix for the system x′=Ax. Finally, the specific fundamental matrix X(t)=Φ(t)Φ(0)−1 satisfies the initial condition X(0)=I, and thus is the desired matrix exponential eAt. We have therefore outlined a proof of the following theorem.
Find eAt if
Theorem 3 would apply even if the matrix A were not upper triangular. But because A is upper triangular, we can see quickly that its characteristic equation is
Thus A has the distinct eigenvalue λ1=5 and the repeated eigenvalue λ2=3.
Case 1: λ1=5. The eigenvector equation (A−λI)u=0 for u=[abc]T is
The last two scalar equations 4c=0 and −2c=0 give c=0. Then the first equation −2a+4b=0 is satisfied by a=2 and b=1. Thus the eigenvalue λ1=5 has the (ordinary) eigenvector u1=[210]T. The corresponding solution of the system x′=Ax is
Case 2: λ2=3. The eigenvector equation (A−λI)u=0 for u=[abc]T is
The first two equations 4b+5c=0 and 2b+4c=0 imply that b=c=0, but leave a arbitrary. Thus the eigenvalue λ2=3 has the single (ordinary) eigenvector u2=[100]T. The corresponding solution of the system x′=Ax is
To look for a generalized eigenvector of rank r=2 in Eq. (33), we consider the equation
The first two equations 8b+16c=0 and 4b+8c=0 are satisfied by b=2 and c=−1, but leave a arbitrary. With a=0 we get the generalized eigenvector u3=[02−1]T of rank r=2 associated with the eigenvalue λ=3. Because (A−3I)2u=0, Eq. (34) yields the third solution
With the solutions listed in Eqs. (39) and (40), the fundamental matrix
defined by Eq. (35) is
Hence Theorem 3 finally yields
As in Example 7, Theorem 3 suffices for the computation of eAt provided that a basis consisting of generalized eigenvectors of A can be found. Alternatively, a computer algebra system can be used as indicated in the project material for this section.
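As one concrete illustration, SymPy (one such computer algebra system) can produce eAt in closed form. The sketch below uses a sample 2×2 matrix with the repeated eigenvalue 3 (not one of this section's examples) and checks the symbolic result against the terminating-series form:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, 4], [0, 3]])  # sample matrix with repeated eigenvalue 3
eAt = (A * t).exp()              # symbolic matrix exponential e^(At)

# Since A - 3I is nilpotent, e^(At) = e^(3t) (I + (A - 3I)t)
expected = sp.exp(3*t) * sp.Matrix([[1, 4*t], [0, 1]])
Z = sp.simplify(eAt - expected)
print(Z)  # zero matrix
```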
Find a fundamental matrix of each of the systems in Problems 1 through 8, then apply Eq. (8) to find a solution satisfying the given initial conditions.
x′=[2 1; 1 2]x,  x(0)=[3; −2]
x′=[2 −1; −4 2]x,  x(0)=[2; −1]
x′=[2 −5; 4 −2]x,  x(0)=[0; 1]
x′=[3 −1; 1 1]x,  x(0)=[1; 0]
x′=[−3 −2; 9 3]x,  x(0)=[1; −1]
x′=[7 −5; 4 3]x,  x(0)=[2; 0]
x′=[5 0 −6; 2 −1 −2; 4 −2 −4]x,  x(0)=[2; 1; 0]
x′=[3 2 2; −5 −4 −2; 5 5 3]x,  x(0)=[1; 0; −1]
Compute the matrix exponential eAt for each system x′=Ax given in Problems 9 through 20.
x′1=5x1−4x2, x′2=2x1−x2
x′1=6x1−6x2, x′2=4x1−4x2
x′1=5x1−3x2, x′2=2x1
x′1=5x1−4x2, x′2=3x1−2x2
x′1=9x1−8x2, x′2=6x1−5x2
x′1=10x1−6x2, x′2=12x1−7x2
x′1=6x1−10x2, x′2=2x1−3x2
x′1=11x1−15x2, x′2=6x1−8x2
x′1=3x1+x2, x′2=x1+3x2
x′1=4x1+2x2, x′2=2x1+4x2
x′1=9x1+2x2, x′2=2x1+6x2
x′1=13x1+4x2, x′2=4x1+7x2
In Problems 21 through 24, show that the matrix A is nilpotent and then use this fact to find (as in Example 3) the matrix exponential eAt.
A=[1 −1; 1 −1]
A=[6 4; −9 −6]
A=[1 −1 −1; 1 −1 1; 0 0 0]
A=[3 0 −3; 5 0 7; 3 0 −3]
Each coefficient matrix A in Problems 25 through 30 is the sum of a nilpotent matrix and a multiple of the identity matrix. Use this fact (as in Example 6) to solve the given initial value problem.
x′=[2 5; 0 2]x,  x(0)=[4; 7]
x′=[7 0; 11 7]x,  x(0)=[5; −10]
x′=[1 2 3; 0 1 2; 0 0 1]x,  x(0)=[4; 5; 6]
x′=[5 0 0; 10 5 0; 20 30 5]x,  x(0)=[40; 50; 60]
x′=[1 2 3 4; 0 1 6 3; 0 0 1 2; 0 0 0 1]x,  x(0)=[1; 1; 1; 1]
x′=[3 0 0 0; 6 3 0 0; 9 6 3 0; 12 9 6 3]x,  x(0)=[1; 1; 1; 1]
Suppose that the n×n matrices A and B commute; that is, that AB=BA. Prove that eA+B=eAeB. (Suggestion: Group the terms in the product of the two series on the right-hand side to obtain the series on the left.)
Deduce from the result of Problem 31 that, for every square matrix A, the matrix eA is nonsingular with (eA)−1=e−A.
Suppose that
Show that A2n=I and that A2n+1=A if n is a positive integer. Conclude that
and apply this fact to find a general solution of x′=Ax. Verify that it is equivalent to the general solution found by the eigenvalue method.
Suppose that
Show that eAt=I cos 2t+(1/2)A sin 2t. Apply this fact to find a general solution of x′=Ax, and verify that it is equivalent to the solution found by the eigenvalue method.
Apply Theorem 3 to calculate the matrix exponential eAt for each of the matrices in Problems 35 through 40.
A=[3 4; 0 3]
A=[1 2 3; 0 1 4; 0 0 1]
A=[2 3 4; 0 1 3; 0 0 1]
A=[5 20 30; 0 10 20; 0 0 5]
A=[1 3 3 3; 0 1 3 3; 0 0 2 3; 0 0 0 2]
A=[2 4 4 4; 0 2 4 4; 0 0 2 4; 0 0 0 3]
If A is an n×n matrix, then a computer algebra system can be used first to calculate the fundamental matrix eAt for the system
then to calculate the matrix product x(t)=eAtx0 to obtain a solution satisfying the initial condition x(0)=x0. For instance, suppose that we want to solve the initial value problem
After the matrices
have been entered, either the Maple command
with(linalg): exponential(A*t)
the Mathematica command
MatrixExp[A t]
or the Matlab command
syms t, expm(A*t)
yields the matrix exponential
Then either the Maple product multiply(expAt, x0), the Mathematica product expAt.x0, or the Matlab product expAt*x0
gives the solution vector
Obviously this, finally, is the way to do it!
Matrix exponentials also allow for convenient interactive exploration of the system (1). For example, a version of the Mathematica commands
A = {{13, 4}, {4, 7}};
field = VectorPlot[A.{x, y}, {x, -25, 25},
  {y, -25, 25}];
Manipulate[
  curves = ParametricPlot[
    MatrixExp[A t, #] & /@ pt, {t, -1, 1},
    PlotRange -> 25];
  Show[curves, field],
  {{pt, {{11, 23}, {20, -10}, {-20, -10},
    {-20, 10}}}, Locator}]
was used to generate Fig. 8.1.1, which shows four solution curves of the system (1) with the matrix A chosen as in Eq. (2). The initial conditions for each solution curve (one of which initially passes through the point (11, 23) of our initial value problem, while the other three pass through the points (20, −10), (−20, −10), and (−20, 10)) are specified by a “locator point” that can be freely dragged to any desired position in the phase plane, with the corresponding solution curve being instantly redrawn.
Experimenting with such interactive displays can shed considerable light on the behavior of linear systems. For example, notice the straight line solution in Fig. 8.1.1; if you could drag the corresponding locator point around the phase plane, what other straight line solution could you find? How could you have predicted this by examining the matrix A?
For a three-dimensional example, solve the initial value problem
And here’s a four-dimensional problem:
If at this point you’re having too much fun with matrix exponentials to stop, make up some problems of your own. For instance, choose any homogeneous linear system appearing in this chapter and experiment with different initial conditions. The exotic 5×5 matrix A of the Section 7.6 application may suggest some interesting possibilities.