7.2 Matrices and Linear Systems

A system of differential equations often can be simplified by expressing it as a single differential equation involving a matrix-valued function. A matrix-valued function, or simply matrix function, is a matrix such as

x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} (1)

or

A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{m1}(t) & a_{m2}(t) & \cdots & a_{mn}(t) \end{bmatrix}, (2)

in which each entry is a function of t. We say that the matrix function A(t) is continuous (or differentiable) at a point (or on an interval) if each of its elements has the same property. The derivative of a differentiable matrix function is defined by elementwise differentiation; that is,

A'(t) = \frac{dA}{dt} = \left[ \frac{da_{ij}}{dt} \right]. (3)

Example 1

If

x(t) = \begin{bmatrix} t \\ t^2 \\ e^{-t} \end{bmatrix} \quad\text{and}\quad A(t) = \begin{bmatrix} \sin t & 1 \\ t & \cos t \end{bmatrix},

then

\frac{dx}{dt} = \begin{bmatrix} 1 \\ 2t \\ -e^{-t} \end{bmatrix} \quad\text{and}\quad A'(t) = \begin{bmatrix} \cos t & 0 \\ 1 & -\sin t \end{bmatrix}.
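Elementwise differentiation is easy to confirm with a computer algebra system. The following sketch (assuming the SymPy library is available; the code itself is illustrative and not part of the original example) reproduces these derivatives:

```python
import sympy as sp

t = sp.symbols('t')

# The matrix functions of Example 1
x = sp.Matrix([t, t**2, sp.exp(-t)])
A = sp.Matrix([[sp.sin(t), 1],
               [t, sp.cos(t)]])

# Differentiation acts on each entry separately, as in Eq. (3)
print(x.diff(t))   # Matrix([[1], [2*t], [-exp(-t)]])
print(A.diff(t))   # Matrix([[cos(t), 0], [1, -sin(t)]])
```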

The differentiation rules

\frac{d}{dt}(A + B) = \frac{dA}{dt} + \frac{dB}{dt} (4)

and

\frac{d}{dt}(AB) = A \frac{dB}{dt} + \frac{dA}{dt} B (5)

follow readily by elementwise application of the analogous differentiation rules of elementary calculus for real-valued functions. If c is a (constant) real number and C is a constant matrix, then

\frac{d}{dt}(cA) = c \frac{dA}{dt}, \quad \frac{d}{dt}(CA) = C \frac{dA}{dt}, \quad\text{and}\quad \frac{d}{dt}(AC) = \frac{dA}{dt} C. (6)

Because of the noncommutativity of matrix multiplication, it is important not to reverse the order of the factors in Eqs. (5) and (6).
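Because matrix multiplication is not commutative, the order of the factors in Eq. (5) really does matter. The following sketch (assuming SymPy; the particular matrices A and B are invented for illustration) checks that the correctly ordered product rule holds identically while the reversed ordering does not:

```python
import sympy as sp

t = sp.symbols('t')

# Two arbitrary (invented) differentiable matrix functions
A = sp.Matrix([[t, 1], [t**2, 2*t]])
B = sp.Matrix([[sp.sin(t), 0], [1, t]])

lhs = (A * B).diff(t)
correct = A * B.diff(t) + A.diff(t) * B     # Eq. (5): order preserved
reversed_order = B.diff(t) * A + B * A.diff(t)

print(sp.simplify(lhs - correct))           # the zero matrix
print(sp.simplify(lhs - reversed_order))    # nonzero: the order cannot be swapped
```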

First-Order Linear Systems

The notation and terminology of matrices and vectors may seem rather elaborate when first encountered, but it is readily assimilated with practice. Our main use for matrix notation will be the simplification of computations with systems of differential equations, especially those computations that would be burdensome in scalar notation.

We discuss here the general system of n first-order linear equations

x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + f_1(t),
x_2' = p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + f_2(t),
    \vdots
x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + f_n(t). (7)

If we introduce the coefficient matrix

P(t) = [p_{ij}(t)]

and the column vectors

x = [x_i] \quad\text{and}\quad f(t) = [f_i(t)],

then—as illustrated in Example 2 below—the system in (7) takes the form of a single matrix equation

\frac{dx}{dt} = P(t)x + f(t). (8)

We will see that the general theory of the linear system in (7) closely parallels that of a single nth-order equation. The matrix notation used in Eq. (8) not only emphasizes this analogy, but also saves a great deal of space.

A solution of Eq. (8) on the open interval I is a column vector function x(t)=[xi(t)] such that the component functions of x satisfy the system in (7) identically on I. If the functions pij(t) and fi(t) are all continuous on I, then Theorem 1 of Section 7.1 guarantees the existence on I of a unique solution x(t) satisfying preassigned initial conditions x(a)=b.

Example 2

The first-order system

x_1' = 4x_1 - 3x_2, \qquad x_2' = 6x_1 - 7x_2

can be recast as a matrix equation by writing

x' = \begin{bmatrix} x_1' \\ x_2' \end{bmatrix} = \begin{bmatrix} 4x_1 - 3x_2 \\ 6x_1 - 7x_2 \end{bmatrix} = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} x.

Thus we obtain a matrix equation of the form in (8),

\frac{dx}{dt} = P(t)x + f(t) \quad\text{with}\quad P(t) = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} \quad\text{and}\quad f(t) = \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0.

To verify that the vector functions

x_1(t) = \begin{bmatrix} 3e^{2t} \\ 2e^{2t} \end{bmatrix} \quad\text{and}\quad x_2(t) = \begin{bmatrix} e^{-5t} \\ 3e^{-5t} \end{bmatrix}

are both solutions of the matrix differential equation with coefficient matrix P, we need only calculate

P x_1 = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} \begin{bmatrix} 3e^{2t} \\ 2e^{2t} \end{bmatrix} = \begin{bmatrix} 6e^{2t} \\ 4e^{2t} \end{bmatrix} = x_1'

and

P x_2 = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} \begin{bmatrix} e^{-5t} \\ 3e^{-5t} \end{bmatrix} = \begin{bmatrix} -5e^{-5t} \\ -15e^{-5t} \end{bmatrix} = x_2'.
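Such verifications are purely mechanical, so they can be delegated to a computer algebra system. A short sketch (assuming SymPy; not part of the text) checks both solutions of Example 2 at once by confirming that x' - Px simplifies to the zero vector:

```python
import sympy as sp

t = sp.symbols('t')

P = sp.Matrix([[4, -3], [6, -7]])
x1 = sp.Matrix([3*sp.exp(2*t), 2*sp.exp(2*t)])
x2 = sp.Matrix([sp.exp(-5*t), 3*sp.exp(-5*t)])

# Each residual x' - P x should simplify to the zero vector
print(sp.simplify(x1.diff(t) - P * x1))   # Matrix([[0], [0]])
print(sp.simplify(x2.diff(t) - P * x2))   # Matrix([[0], [0]])
```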

To investigate the general nature of the solutions of Eq. (8), we consider first the associated homogeneous equation

\frac{dx}{dt} = P(t)x, (9)

which has the form shown in Eq. (8), but with f(t) ≡ 0. We expect it to have n solutions x1, x2, …, xn that are independent in some appropriate sense, and such that every solution of Eq. (9) is a linear combination of these n particular solutions. Given n solutions x1, x2, …, xn of Eq. (9), let us write

x_j(t) = \begin{bmatrix} x_{1j}(t) \\ \vdots \\ x_{ij}(t) \\ \vdots \\ x_{nj}(t) \end{bmatrix}. (10)

Thus xij(t) denotes the ith component of the vector xj(t), so the second subscript refers to the vector function xj(t), whereas the first subscript refers to a component of this function. Theorem 1 is analogous to Theorem 1 of Section 5.2; it is the principle of superposition: if x1, x2, …, xn are solutions of the homogeneous equation in (9), then any linear combination x = c1x1 + c2x2 + ⋯ + cnxn is also a solution of Eq. (9).

Proof: We know that x_i' = P(t)x_i for each i (1 ≤ i ≤ n), so it follows immediately that

x' = c_1x_1' + c_2x_2' + \cdots + c_nx_n' = c_1P(t)x_1 + c_2P(t)x_2 + \cdots + c_nP(t)x_n = P(t)(c_1x_1 + c_2x_2 + \cdots + c_nx_n).

That is, x' = P(t)x, as desired. The remarkable simplicity of this proof demonstrates clearly one advantage of matrix notation.

Example 2

(Continued) If x1 and x2 are the two solutions of

\frac{dx}{dt} = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} x

discussed in Example 2, then the linear combination

x(t) = c_1x_1(t) + c_2x_2(t) = c_1 \begin{bmatrix} 3e^{2t} \\ 2e^{2t} \end{bmatrix} + c_2 \begin{bmatrix} e^{-5t} \\ 3e^{-5t} \end{bmatrix}

is also a solution. In scalar form with x=[x1x2]T, this gives the solution

x_1(t) = 3c_1e^{2t} + c_2e^{-5t},
x_2(t) = 2c_1e^{2t} + 3c_2e^{-5t}.

Independence and General Solutions

Linear independence is defined in the same way for vector-valued functions as for real-valued functions (Section 5.2). The vector-valued functions x1, x2, …, xn are linearly dependent on the interval I provided that there exist constants c1, c2, …, cn, not all zero, such that

c_1x_1(t) + c_2x_2(t) + \cdots + c_nx_n(t) = 0 (12)

for all t in I. Otherwise, they are linearly independent. Equivalently, they are linearly independent provided that no one of them is a linear combination of the others. For instance, the two solutions x1 and x2 of Example 2 are linearly independent because, clearly, neither is a scalar multiple of the other.

Just as in the case of a single nth-order equation, there is a Wronskian determinant that tells us whether or not n given solutions of the homogeneous equation in (9) are linearly dependent. If x1, x2, …, xn are such solutions, then their Wronskian is the n × n determinant

W(t) = \begin{vmatrix} x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\ x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\ \vdots & \vdots & & \vdots \\ x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t) \end{vmatrix}, (13)

using the notation in (10) for the components of the solutions. We may write either W(t) or W(x1, x2, …, xn). Note that W is the determinant of the matrix that has as its column vectors the solutions x1, x2, …, xn. Theorem 2 is analogous to Theorem 3 of Section 5.2: if the n solutions are linearly dependent on I, then W ≡ 0 on I, whereas if they are linearly independent, then W ≠ 0 at each point of I. Moreover, its proof is essentially the same, with the definition of W(x1, x2, …, xn) in Eq. (13) substituted for the definition of the Wronskian of n solutions of a single nth-order equation. (See Problems 34 through 36.)

Example 3

It is readily verified (as in Example 2) that

x_1(t) = \begin{bmatrix} 2e^{t} \\ 2e^{t} \\ e^{t} \end{bmatrix}, \quad x_2(t) = \begin{bmatrix} 2e^{3t} \\ 0 \\ -e^{3t} \end{bmatrix}, \quad\text{and}\quad x_3(t) = \begin{bmatrix} 2e^{5t} \\ -2e^{5t} \\ e^{5t} \end{bmatrix}

are solutions of the equation

\frac{dx}{dt} = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix} x. (14)

The Wronskian of these solutions is

W = \begin{vmatrix} 2e^{t} & 2e^{3t} & 2e^{5t} \\ 2e^{t} & 0 & -2e^{5t} \\ e^{t} & -e^{3t} & e^{5t} \end{vmatrix} = e^{9t} \begin{vmatrix} 2 & 2 & 2 \\ 2 & 0 & -2 \\ 1 & -1 & 1 \end{vmatrix} = -16e^{9t},

which is never zero. Hence Theorem 2 implies that the solutions x1, x2, and x3 are linearly independent (on any open interval).
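The Wronskian of Example 3 can also be computed symbolically. In the sketch below (assuming SymPy; illustrative only), the three solution vectors are placed as the columns of a matrix whose determinant is the Wronskian of Eq. (13):

```python
import sympy as sp

t = sp.symbols('t')

# Fundamental matrix whose columns are the Example 3 solutions
X = sp.Matrix([[2*sp.exp(t), 2*sp.exp(3*t), 2*sp.exp(5*t)],
               [2*sp.exp(t), 0, -2*sp.exp(5*t)],
               [sp.exp(t), -sp.exp(3*t), sp.exp(5*t)]])

W = sp.simplify(X.det())
print(W)   # -16*exp(9*t), which is never zero
```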

Theorem 3 is analogous to Theorem 4 of Section 5.2. It says that the general solution of the homogeneous n × n system x' = P(t)x is a linear combination

x = c_1x_1 + c_2x_2 + \cdots + c_nx_n (15)

of any n given linearly independent solutions x1, x2, …, xn.

Proof

Let a be a fixed point of I. We first show that there exist numbers c1, c2, …, cn such that the solution

y(t) = c_1x_1(t) + c_2x_2(t) + \cdots + c_nx_n(t) (16)

has the same initial values at t=a as does the given solution x(t); that is, such that

c_1x_1(a) + c_2x_2(a) + \cdots + c_nx_n(a) = x(a). (17)

Let X(t) be the n × n matrix with column vectors x1, x2, …, xn, and let c be the column vector with components c1, c2, …, cn. Then Eq. (17) may be written in the form

X(a)c=x(a). (18)

The Wronskian determinant W(a) = |X(a)| is nonzero because the solutions x1, x2, …, xn are linearly independent. Hence the matrix X(a) has an inverse matrix X(a)^{-1}. Therefore the vector c = X(a)^{-1}x(a) satisfies Eq. (18), as desired.

Finally, note that the given solution x(t) and the solution y(t) of Eq. (16), with the values of ci determined by c = X(a)^{-1}x(a), have the same initial values (at t = a). It follows from the existence-uniqueness theorem of Section 7.1 that x(t) = y(t) for all t in I. This establishes Eq. (15).

Remark: Every n × n system x' = P(t)x with continuous coefficient matrix does have a set of n linearly independent solutions x1, x2, …, xn as in the hypotheses of Theorem 3. It suffices to choose for xj(t) the unique solution such that

x_j(a) = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \leftarrow \text{position } j

that is, the column vector with all elements zero except for a 1 in row j. (In other words, xj(a) is merely the jth column of the identity matrix.) Then

W(x_1, x_2, \ldots, x_n) \big|_{t=a} = |I| = 1 \neq 0,

so the solutions x1, x2, …, xn are linearly independent by Theorem 2. How to actually find these solutions explicitly is another matter, one that we address in Section 7.3 (for the case of constant-coefficient matrices).
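The construction in the Remark can also be carried out numerically when explicit solutions are not available. The sketch below (illustrative only; the helper rk4_solve is a hypothetical name, and a library ODE integrator could be used instead) integrates x' = Px for the Example 3 matrix from the initial vectors e1, e2, e3 and checks that the resulting fundamental matrix at t = 1 has nonzero determinant. (By Abel's formula the Wronskian of this fundamental set is e^{9t}, since the trace of P is 9.)

```python
import numpy as np

# Coefficient matrix from Example 3
P = np.array([[3.0, -2.0, 0.0],
              [-1.0, 3.0, -2.0],
              [0.0, -1.0, 3.0]])

def rk4_solve(x0, t_end=1.0, n_steps=2000):
    """Integrate x' = P x from t = 0 with classical Runge-Kutta steps."""
    h = t_end / n_steps
    x = x0.astype(float)
    for _ in range(n_steps):
        k1 = P @ x
        k2 = P @ (x + 0.5 * h * k1)
        k3 = P @ (x + 0.5 * h * k2)
        k4 = P @ (x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Take x_j(0) = e_j, the jth column of the identity matrix
X1 = np.column_stack([rk4_solve(np.eye(3)[:, j]) for j in range(3)])
print(np.linalg.det(X1))   # nonzero (close to e^9), so independence persists
```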

Initial Value Problems and Elementary Row Operations

The general solution in Eq. (15) of the homogeneous linear system x' = P(t)x can be written in the form

x(t)=X(t)c, (19)

where

X(t) = \begin{bmatrix} x_1(t) & x_2(t) & \cdots & x_n(t) \end{bmatrix} (20)

is the n × n matrix whose column vectors are the linearly independent solutions x1, x2, …, xn, and where c = [c_1 \; c_2 \; \cdots \; c_n]^T is the vector of coefficients in the linear combination

x(t) = c_1x_1(t) + c_2x_2(t) + \cdots + c_nx_n(t). (15)

Suppose now that we wish to solve the initial value problem

\frac{dx}{dt} = P(t)x, \qquad x(a) = b, (21)

where the initial vector b = [b_1 \; b_2 \; \cdots \; b_n]^T is given.

X(a)c=b (22)

to find the coefficients c1, c2, , cn in Eq. (15). For instance, we can use the row-reduction techniques of Sections 3.2 and 3.3.

Example 4

Use the solution vectors given in Example 3 to solve the initial value problem

\frac{dx}{dt} = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix} x, \qquad x(0) = \begin{bmatrix} 0 \\ 2 \\ 6 \end{bmatrix}. (23)

Solution

It follows from Theorem 3 that the linear combination

x(t) = c_1x_1(t) + c_2x_2(t) + c_3x_3(t) = c_1 \begin{bmatrix} 2e^{t} \\ 2e^{t} \\ e^{t} \end{bmatrix} + c_2 \begin{bmatrix} 2e^{3t} \\ 0 \\ -e^{3t} \end{bmatrix} + c_3 \begin{bmatrix} 2e^{5t} \\ -2e^{5t} \\ e^{5t} \end{bmatrix}

is a general solution of the 3×3 linear system in (23). In scalar form, this gives the general solution

x_1(t) = 2c_1e^{t} + 2c_2e^{3t} + 2c_3e^{5t},
x_2(t) = 2c_1e^{t} - 2c_3e^{5t},
x_3(t) = c_1e^{t} - c_2e^{3t} + c_3e^{5t}.

We seek the particular solution satisfying the initial conditions

x1(0)=0,x2(0)=2,x3(0)=6.

When we substitute these values in the preceding three scalar equations, we get the algebraic linear system

2c_1 + 2c_2 + 2c_3 = 0, \qquad 2c_1 - 2c_3 = 2, \qquad c_1 - c_2 + c_3 = 6

having the augmented coefficient matrix

\left[ \begin{array}{ccc|c} 2 & 2 & 2 & 0 \\ 2 & 0 & -2 & 2 \\ 1 & -1 & 1 & 6 \end{array} \right].

Multiplication of each of the first two rows by 1/2 gives

\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 1 & 0 & -1 & 1 \\ 1 & -1 & 1 & 6 \end{array} \right];

then subtraction of the first row both from the second row and from the third row gives the matrix

\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & -1 & -2 & 1 \\ 0 & -2 & 0 & 6 \end{array} \right].

The first column of this matrix now has the desired form.

Now we multiply the second row by -1 and then add twice the result to the third row. Thereby we get the upper triangular augmented coefficient matrix

\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 0 \\ 0 & 1 & 2 & -1 \\ 0 & 0 & 4 & 4 \end{array} \right]

that corresponds to the transformed system

c_1 + c_2 + c_3 = 0, \qquad c_2 + 2c_3 = -1, \qquad 4c_3 = 4.

We finally solve in turn for c3 = 1, c2 = -3, and c1 = 2. Thus the desired particular solution is given by

x(t) = 2x_1(t) - 3x_2(t) + x_3(t) = \begin{bmatrix} 4e^{t} - 6e^{3t} + 2e^{5t} \\ 4e^{t} - 2e^{5t} \\ 2e^{t} + 3e^{3t} + e^{5t} \end{bmatrix}.
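Alternatively, the linear system X(0)c = b of Eq. (22) can be handed to any linear-algebra routine instead of being row-reduced by hand. A minimal sketch (assuming NumPy; not part of the example) recovers the same coefficients c1 = 2, c2 = -3, c3 = 1:

```python
import numpy as np

# Columns of X(0): the Example 3 solutions evaluated at t = 0
X0 = np.array([[2.0, 2.0, 2.0],
               [2.0, 0.0, -2.0],
               [1.0, -1.0, 1.0]])
b = np.array([0.0, 2.0, 6.0])

# Solve X(0) c = b
c = np.linalg.solve(X0, b)
print(c)   # approximately [ 2. -3.  1.]
```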

Nonhomogeneous Solutions

We finally turn our attention to a nonhomogeneous linear system of the form

\frac{dx}{dt} = P(t)x + f(t). (24)

Theorem 4 is analogous to Theorem 5 of Section 5.2 and is proved in precisely the same way, substituting the preceding theorems in this section for the analogous theorems of Section 5.2. In brief, Theorem 4 means that the general solution of Eq. (24) has the form

x(t)=xc(t)+xp(t), (25)

where xp(t) is a single particular solution of Eq. (24) and the complementary function xc(t) is a general solution of the associated homogeneous equation x' = P(t)x.

Thus, finding a general solution of a nonhomogeneous linear system involves two separate steps:

  1. Finding the general solution xc(t) of the associated homogeneous system;

  2. Finding a single particular solution xp(t) of the nonhomogeneous system.

The sum x(t)=xc(t)+xp(t) will then be a general solution of the nonhomogeneous system.

Example 5

The nonhomogeneous linear system

x_1' = 3x_1 - 2x_2 - 9t + 13,
x_2' = -x_1 + 3x_2 - 2x_3 + 7t - 15,
x_3' = -x_2 + 3x_3 - 6t + 7

can be recast as a matrix equation by writing

\frac{dx}{dt} = \begin{bmatrix} x_1' \\ x_2' \\ x_3' \end{bmatrix} = \begin{bmatrix} 3x_1 - 2x_2 - 9t + 13 \\ -x_1 + 3x_2 - 2x_3 + 7t - 15 \\ -x_2 + 3x_3 - 6t + 7 \end{bmatrix} = \begin{bmatrix} 3x_1 - 2x_2 \\ -x_1 + 3x_2 - 2x_3 \\ -x_2 + 3x_3 \end{bmatrix} + \begin{bmatrix} -9t + 13 \\ 7t - 15 \\ -6t + 7 \end{bmatrix} = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} -9t + 13 \\ 7t - 15 \\ -6t + 7 \end{bmatrix}.

Thus we get the form in (24) with

P(t) = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix}, \qquad f(t) = \begin{bmatrix} -9t + 13 \\ 7t - 15 \\ -6t + 7 \end{bmatrix}.

In Example 3 we saw that a general solution of the associated homogeneous linear system

\frac{dx}{dt} = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix} x

is given by

x_c(t) = \begin{bmatrix} 2c_1e^{t} + 2c_2e^{3t} + 2c_3e^{5t} \\ 2c_1e^{t} - 2c_3e^{5t} \\ c_1e^{t} - c_2e^{3t} + c_3e^{5t} \end{bmatrix},

and we can verify, by substitution, that the function

x_p(t) = \begin{bmatrix} 3t \\ 5 \\ 2t \end{bmatrix}

(found by using a computer algebra system, or perhaps by a human being using a method discussed in Section 8.2) is a particular solution of the original nonhomogeneous system. Consequently, Theorem 4 implies that a general solution of the nonhomogeneous system is given by

x(t)=xc(t)+xp(t)

—that is, by

x_1(t) = 2c_1e^{t} + 2c_2e^{3t} + 2c_3e^{5t} + 3t,
x_2(t) = 2c_1e^{t} - 2c_3e^{5t} + 5,
x_3(t) = c_1e^{t} - c_2e^{3t} + c_3e^{5t} + 2t.
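As the text notes, the particular solution xp(t) can be verified by direct substitution into Eq. (24). A short sketch (assuming SymPy; illustrative only) checks that the residual xp' - (P xp + f) vanishes identically:

```python
import sympy as sp

t = sp.symbols('t')

P = sp.Matrix([[3, -2, 0], [-1, 3, -2], [0, -1, 3]])
f = sp.Matrix([-9*t + 13, 7*t - 15, -6*t + 7])
xp = sp.Matrix([3*t, 5, 2*t])

# Substitute xp into x' = P x + f; the residual should be the zero vector
residual = sp.simplify(xp.diff(t) - (P * xp + f))
print(residual)   # Matrix([[0], [0], [0]])
```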

7.2 Problems

In Problems 1 and 2, verify the product law for differentiation, (AB)' = A'B + AB'.

  1. A(t)=[t2t1t31t] and B(t)=[1t1+t3t24t3].

     

  2. A(t)=[ettt2t028t1t3] and B(t)=[32et3t].

In Problems 3 through 12, write the given system in the form x' = P(t)x + f(t).

  3. x' = 3y, y' = 3x

     

  4. x' = 3x - 2y, y' = 2x + y

     

  5. x' = 2x + 4y + 3e^t, y' = 5x - y - t^2

     

  6. x' = tx - e^t y + cos t, y' = e^t x + t^2 y - sin t

     

  7. x' = y + z, y' = z + x, z' = x + y

     

  8. x' = 2x - 3y, y' = x + y + 2z, z' = 5y - 7z

     

  9. x' = 3x - 4y + z + t, y' = x - 3z + t^2, z' = 6y - 7z + t^3

     

  10. x' = tx - y + e^t z, y' = 2x + t^2 y - z, z' = e^t x + 3ty + t^3 z

     

  11. x_1' = x_2, x_2' = 2x_3, x_3' = 3x_4, x_4' = 4x_1

     

  12. x_1' = x_2 + x_3 + 1, x_2' = x_3 + x_4 + t, x_3' = x_1 + x_4 + t^2, x_4' = x_1 + x_2 + t^3

In Problems 13 through 22, first verify that the given vectors are solutions of the given system. Then use the Wronskian to show that they are linearly independent. Finally, write the general solution of the system.

  13. x' = \begin{bmatrix} 4 & 2 \\ -3 & -1 \end{bmatrix} x; x_1 = \begin{bmatrix} 2e^{t} \\ -3e^{t} \end{bmatrix}, x_2 = \begin{bmatrix} e^{2t} \\ -e^{2t} \end{bmatrix}

     

  14. x' = \begin{bmatrix} -3 & 2 \\ -3 & 4 \end{bmatrix} x; x_1 = \begin{bmatrix} e^{3t} \\ 3e^{3t} \end{bmatrix}, x_2 = \begin{bmatrix} 2e^{-2t} \\ e^{-2t} \end{bmatrix}

     

  15. x' = \begin{bmatrix} 3 & -1 \\ 5 & -3 \end{bmatrix} x; x_1 = e^{2t} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, x_2 = e^{-2t} \begin{bmatrix} 1 \\ 5 \end{bmatrix}

     

  16. x' = \begin{bmatrix} 4 & -1 \\ 2 & 1 \end{bmatrix} x; x_1 = e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix}, x_2 = e^{2t} \begin{bmatrix} 1 \\ 2 \end{bmatrix}

     

  17. x' = \begin{bmatrix} 4 & -3 \\ 6 & -7 \end{bmatrix} x; x_1 = \begin{bmatrix} 3e^{2t} \\ 2e^{2t} \end{bmatrix}, x_2 = \begin{bmatrix} e^{-5t} \\ 3e^{-5t} \end{bmatrix}

     

  18. x' = \begin{bmatrix} 3 & -2 & 0 \\ -1 & 3 & -2 \\ 0 & -1 & 3 \end{bmatrix} x; x_1 = e^{t} \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}, x_2 = e^{3t} \begin{bmatrix} 2 \\ 0 \\ -1 \end{bmatrix}, x_3 = e^{5t} \begin{bmatrix} 2 \\ -2 \\ 1 \end{bmatrix}

     

  19. x' = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} x; x_1 = e^{2t} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, x_2 = e^{-t} \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, x_3 = e^{-t} \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}

     

  20. x' = \begin{bmatrix} 1 & 2 & 1 \\ 6 & -1 & 0 \\ -1 & -2 & -1 \end{bmatrix} x; x_1 = \begin{bmatrix} 1 \\ 6 \\ -13 \end{bmatrix}, x_2 = e^{3t} \begin{bmatrix} 2 \\ 3 \\ -2 \end{bmatrix}, x_3 = e^{-4t} \begin{bmatrix} 1 \\ -2 \\ -1 \end{bmatrix}

     

  21. x' = \begin{bmatrix} -8 & -11 & -2 \\ 6 & 9 & 2 \\ -6 & -6 & 1 \end{bmatrix} x; x_1 = e^{-2t} \begin{bmatrix} 3 \\ -2 \\ 2 \end{bmatrix}, x_2 = e^{t} \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, x_3 = e^{3t} \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}

     

  22. x' = [14020100612160401]x; x_1 = e^t[1001], x_2 = e^t[0010], x_3 = e^t[0102], x_4 = e^t[1030]

In Problems 23 through 32, find a particular solution of the indicated linear system that satisfies the given initial conditions.

  23. The system of Problem 14: x1(0)=0, x2(0)=5

     

  24. The system of Problem 15: x1(0)=5, x2(0)=3

     

  25. The system of Problem 16: x1(0)=11, x2(0)=7

     

  26. The system of Problem 17: x1(0)=8, x2(0)=0

     

  27. The system of Problem 18: x1(0)=0, x2(0)=0, x3(0)=4

     

  28. The system of Problem 19: x1(0)=10, x2(0)=12, x3(0)=1

     

  29. The system of Problem 21: x1(0)=1, x2(0)=2, x3(0)=3

     

  30. The system of Problem 21: x1(0)=5, x2(0)=7, x3(0)=11

     

  31. The system of Problem 22: x1(0)=x2(0)=x3(0)=x4(0)=1

     

  32. The system of Problem 22: x1(0)=1, x2(0)=3, x3(0)=4, x4(0)=7

     

  33. (a) Show that the vector functions

      x_1(t) = \begin{bmatrix} t \\ t^2 \end{bmatrix} \quad\text{and}\quad x_2(t) = \begin{bmatrix} t^2 \\ t^3 \end{bmatrix}

      are linearly independent on the real line.

      (b) Why does it follow from Theorem 2 that there is no continuous matrix P(t) such that x1 and x2 are both solutions of x' = P(t)x?

  34. Suppose that one of the vector functions

      x_1(t) = \begin{bmatrix} x_{11}(t) \\ x_{21}(t) \end{bmatrix} \quad\text{and}\quad x_2(t) = \begin{bmatrix} x_{12}(t) \\ x_{22}(t) \end{bmatrix}

      is a constant multiple of the other on the open interval I. Show that their Wronskian W(t) = |[x_{ij}(t)]| must vanish identically on I. This proves part (a) of Theorem 2 in the case n = 2.

     

  35. Suppose that the vectors x1(t) and x2(t) of Problem 34 are solutions of the equation x' = P(t)x, where the 2 × 2 matrix P(t) is continuous on the open interval I. Show that if there exists a point a of I at which their Wronskian W(a) is zero, then there exist numbers c1 and c2 not both zero such that c1x1(a) + c2x2(a) = 0. Then conclude from the uniqueness of solutions of the equation x' = P(t)x that

      c_1x_1(t) + c_2x_2(t) = 0

      for all t in I; that is, that x1 and x2 are linearly dependent. This proves part (b) of Theorem 2 in the case n = 2.

  36. Generalize Problems 34 and 35 to prove Theorem 2 for n an arbitrary positive integer.

  37. Let x1(t), x2(t), …, xn(t) be vector functions whose ith components (for some fixed i) xi1(t), xi2(t), …, xin(t) are linearly independent real-valued functions. Conclude that the vector functions are themselves linearly independent.
